# A Theory-Based Evaluation of Nearest Neighbor Models Put Into Practice

In the k-nearest neighborhood model (k-NN), we are given a set of points P, and we shall answer queries q by returning the k nearest neighbors of q in P according to some metric. This concept is crucial in many areas of data analysis and data processing, e.g., computer vision, document retrieval and machine learning. Many k-NN algorithms have been published and implemented, but often the relation between parameters and accuracy of the computed k-NN is not explicit. We study property testing of k-NN graphs in theory and evaluate it empirically: given a point set P ⊂ ℝ^δ and a directed graph G = (P, E), is G a k-NN graph, i.e., does every point p ∈ P have outgoing edges to its k nearest neighbors, or is it ϵ-far from being a k-NN graph? Here, ϵ-far means that one has to change more than an ϵ-fraction of the edges in order to make G a k-NN graph. We develop a randomized algorithm with one-sided error that decides this question, i.e., a property tester for the k-NN property, with complexity O(√n · k²/ϵ²) measured in terms of the number of vertices and edges it inspects, and we prove a lower bound of Ω(√(n/(ϵk))). We evaluate our tester empirically on the k-NN models computed by various algorithms and show that it can be used to detect k-NN models with bad accuracy in significantly less time than the building time of the k-NN model.
## 1 Introduction

The k-nearest neighborhood (k-NN) of a point with respect to some set of points is one of the most fundamental concepts used in data analysis tasks such as classification, regression and machine learning. In the past decades, many algorithms have been proposed in theory as well as in practice to efficiently answer k-NN queries [see, e.g., FriAlg75, FukBra75, CalDec95, ConFas10, IndApp98, CheFas09, MujFas09, PedSci11, Nms13, MaiKNN17, ZhaEff18, AlgKgr18]. For example, one can construct a k-NN graph of a point set P, i.e., a directed graph with nk edges that contains an edge (p, q) for every k-nearest neighbor q of p, for every p ∈ P, in near-linear time for constant dimension [CalDec95]. Due to restrictions on computational resources, approximations and heuristics are often used instead (see, e.g., [CheFas09, ConFas10] and the discussion therein for details). Given the output graph G of such a randomized approximation algorithm or heuristic, one might want to check whether G resembles a k-NN graph before using it, e.g., in a data processing pipeline. However, the time required for exact verification might cancel out the advantages gained by using an approximation algorithm or a heuristic. On the other hand, testing whether G is at least close to a k-NN graph will suffice for many purposes.
Property testing is a framework for the theoretical analysis of decision and verification problems that are relaxed in favor of sublinear complexity. One motivation of property testing is to fathom the theoretical foundations of efficiently assessing the outputs of approximation and heuristic algorithms. Property testing [RubRob96], and in particular property testing of graphs [GolPro98], has been studied quite extensively since its founding. A one-sided error ϵ-tester for a property Π of graphs with average degree bounded by d has to accept every graph in Π, and it has to reject every graph that is ϵ-far from Π with probability at least 2/3 (i.e., if graphs that are ϵ-far are considered relevant, it has precision 1 and recall 2/3). A graph with n vertices and average degree d is ϵ-far from some property Π if more than ϵdn edges have to be added or removed to transform it into a graph that is in Π. A two-sided error ϵ-tester may also err with probability less than 1/3 if the graph has the property. The computational complexity of a property tester is the number of adjacency list entries it reads, denoted its queries. Many works in graph property testing focus on testing plain graphs that contain only the pure combinatorial information. However, most graphs that model real data contain some additional information that may, for example, indicate the type of an atom, the bandwidth of a data link or spatial information of an object that is represented by a vertex or an edge, respectively. In this work, we consider geometric graphs with bounded average degree. In particular, the graphs are embedded into ℝ^δ, i.e., every vertex v has a coordinate in ℝ^δ. The coordinate of a vertex may be obtained by a query.

### Main Results

Our first result is a property tester with one-sided error for the property that a given geometric graph with bounded average degree is a k-nearest neighborhood graph of its underlying point set (i.e., it has precision 1 and recall 2/3 when taking ϵ-far graphs as relevant).

###### Theorem 1.
Given an input graph G of size n with bounded average degree d, there exists a one-sided error ϵ-tester that tests whether G is a k-nearest neighbourhood graph. It has query complexity O(K_δ · √n · k²/ϵ²), where K_δ is the δ-dimensional kissing number and the O-notation hides a universal constant. We emphasize that it is not necessary to compute the ground truth (i.e., the k-NN of P) in order to run the property tester. Furthermore, the tester can be easily adapted to graphs G = (Q ∪ P, E) such that we only require that, for every q ∈ Q, E contains an edge to every k-nearest neighbor of q in P. This is more natural when we think of P as a training set and Q as a test set or query domain. To complement this result, we prove a lower bound that holds even for two-sided error testers.

###### Theorem 2.

Testing whether a given input graph of size n is a k-nearest neighbourhood graph with one-sided or two-sided error requires Ω(√(n/(ϵk))) queries.

Finally, we provide an experimental evaluation of our property tester on approximate nearest neighbor (ANN) indices computed by various ANN algorithms. Our results indicate that the tester requires significantly less time than the ANN algorithm needs to build the ANN index, most times just a small fraction of it. Therefore, it can often detect badly chosen parameters of the ANN algorithm at almost no additional cost and before the ANN index is fed into the remaining data processing pipeline.

### Related Work

We give an overview of sublinear algorithms for geometric graphs, which is the topic of research that is most relevant to our work. As mentioned above, the research on k-NN algorithms is very broad and diverse; see, e.g., [DasNea91, ShaNea05] for surveys. Testing whether a geometric graph that is embedded into the plane is a Euclidean minimum spanning tree has been studied by [ben2007lower] and [czumaj2008testing]. In [ben2007lower], the authors prove lower bounds on the number of queries that non-adaptive and adaptive testers have to make. In [czumaj2008testing], a one-sided error tester with sublinear query complexity is given.
In a fashion similar to property testing, [CzuApp05] estimate the weight of Euclidean minimum spanning trees in sublinear time, and [CzuEst09] approximate the weight of metric minimum spanning trees in sublinear time for constant dimension, respectively. [hellweg2010testing] develop a tester for Euclidean spanners. Property testers for many other geometric problems can, for example, be found in [Czumaj2000, ParTes01].

## 2 Preliminaries

Let k, δ ∈ ℕ, ϵ ∈ (0, 1) and d be fixed parameters. In this paper, we consider property testing on directed geometric graphs with bounded average degree d.

###### Definition 1 (geometric graph).

A graph G = (V, E) with an associated function coord: V → ℝ^δ is a geometric graph, where each vertex v ∈ V is assigned a coordinate coord(v). Given v ∈ V, we denote its degree by deg(v) and the set of adjacent vertices by N(v).

The Euclidean distance between two points x, y ∈ ℝ^δ is denoted by ‖x − y‖. For the sake of simplicity, we write ‖u − v‖ for two vertices u, v ∈ V instead of ‖coord(u) − coord(v)‖. When there is no ambiguity, we also refer to coord(v) by simply writing v. We denote the size of the graph at hand by n = |V|.

###### Definition 2 (k-nearest neighborhood graph).

A geometric graph G = (V, E) is a k-nearest neighbourhood (k-NN) graph if for every v ∈ V, the k points that lie nearest to v according to ‖·‖ are neighbors of v in G, i.e., u ∈ N(v) for every k-nearest neighbor u of v (breaking ties arbitrarily).

Let G be a geometric graph. We say that G is ϵ-far from a geometric graph property Π if at least ϵdn edges of G have to be modified in order to convert it into a graph that satisfies the property Π. We assume that the graph is represented by a function f: V × ℕ → V ∪ {⊥}, where f(v, i) denotes the i-th neighbor of v if v has at least i neighbors (otherwise, f(v, i) = ⊥), a degree function deg(·) that outputs the degree of a vertex and a coordinate function coord(·) that outputs the coordinates of a vertex.

###### Definition 3 (ϵ-tester).

A one-sided (error) ϵ-tester for a property Π with query complexity q is a randomized algorithm that makes at most q queries to f, deg and coord for a graph G. The algorithm accepts if G has the property Π. If G is ϵ-far from Π, then it rejects with probability at least 2/3.
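For concreteness, the exact (and expensive) verification that the tester is designed to avoid can be sketched in a few lines of Python. The function names and the adjacency-dict representation are our own illustration, not the paper's notation, and for simplicity the sketch breaks distance ties by sort order rather than arbitrarily:

```python
import numpy as np

def k_nearest(points, v, k):
    """Indices of the k nearest neighbors of vertex v (ties broken by argsort)."""
    d = np.linalg.norm(points - points[v], axis=1)
    d[v] = np.inf  # a point is not its own neighbor
    return set(np.argsort(d)[:k].tolist())

def is_knn_graph(points, adj, k):
    """Exact verification of Definition 2: every vertex must have out-edges
    to its k nearest neighbors.  Costs Theta(n) distance computations per
    vertex, i.e. Theta(n^2) overall -- exactly what a sublinear tester avoids."""
    return all(k_nearest(points, v, k) <= set(adj[v])
               for v in range(len(points)))
```

Removing a single required edge makes the check fail, which is the kind of defect the sublinear tester of Section 3 tries to catch by sampling instead of exhaustive inspection.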
The motivation to consider query complexity is that accessing the graph, e.g., through an ANN index, is costly, and this cost cannot be influenced. Therefore, one should minimize access to the graph.

###### Definition 4 (witness).

For u, v ∈ V, let rank_v(u) denote the number of vertices that lie nearer to v than u. Further, let Γ_k(v) denote the set of v's k-nearest neighbors. Let W(v) = Γ_k(v) \ N(v) define the subset of Γ_k(v) that is not adjacent to v. If deg(v) < k or W(v) ≠ ∅, we call v incomplete, and we call the elements of W(v) the witnesses of v.

If G is ϵ-far from being a k-nearest neighborhood graph, a fraction (depending on ϵ, d and k) of its vertices are incomplete.

###### Lemma 5.

If G is ϵ-far from being a k-nearest neighborhood graph, at least ϵdn/(2k) vertices are incomplete.

###### Proof.

Assume the contrary. For every incomplete vertex v, delete edges such that the distance to the property does not increase and insert the missing edges from v to its k nearest neighbors. By the assumption, the total number of inserted or deleted edges is less than 2k · ϵdn/(2k) = ϵdn. Therefore, G is ϵ-close to being a k-nearest neighborhood graph. ∎

The main challenge for the property tester will be to find matching witnesses for a fixed set of incomplete vertices. The following result from coding theory for Euclidean codes bounds the maximum number of points that can have the same fixed point as nearest neighbor.

###### Lemma 6 ([333884]).

Given a point set P ⊂ ℝ^δ and p ∈ P, the maximum number of points that can have p as nearest neighbour is bounded by the δ-dimensional kissing number K_δ, where K_δ ≥ 2^{0.2075δ(1+o(1))} [wyner1965capabilities] and K_δ ≤ 2^{0.401δ(1+o(1))} [kabatiansky1978bounds] (asymptotic notation with respect to δ).

## 3 Upper Bound

The idea of the tester is as follows (see Algorithm 1). Two samples are drawn uniformly at random: S, which shall contain many incomplete vertices if G is ϵ-far from being a k-nearest neighborhood graph, and T, which shall contain at least one witness of an incomplete vertex in S.
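The sample-and-check idea just described can be sketched as follows. This is not the paper's Algorithm 1: the sample sizes and constants are simplified illustrations, the low-degree pruning is omitted, and all names are ours. The witness test uses a strict inequality, so a true k-NN graph is never rejected (one-sided error):

```python
import random
import numpy as np

def knn_property_tester(points, adj, k, eps, c=20, seed=None):
    """One-sided sketch of the sample-and-check idea: draw vertex samples S
    and T; reject iff some sampled v is visibly incomplete, i.e. deg(v) < k
    or some t in T is a non-neighbor lying strictly nearer to v than v's
    furthest listed neighbor.  A true k-NN graph is never rejected."""
    rng = random.Random(seed)
    n = len(points)
    S = rng.sample(range(n), min(n, int(c * np.sqrt(n) / eps)))
    T = [rng.randrange(n) for _ in range(min(n, int(c * k * np.sqrt(n))))]
    dist = lambda a, b: float(np.linalg.norm(points[a] - points[b]))
    for v in S:
        nbrs = set(adj[v])
        if len(nbrs) < k:
            return False                      # reject: fewer than k out-edges
        r = max(dist(v, u) for u in nbrs)     # radius of v's listed neighborhood
        if any(t != v and t not in nbrs and dist(v, t) < r for t in T):
            return False                      # reject: found a witness for v
    return True                               # accept
```

On small inputs the sample sizes cover the whole graph, so the sketch degenerates to exact verification; the sublinear behavior only shows for large n.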
For every v ∈ S, the algorithm queries its degree, its coordinate as well as every adjacent vertex and its coordinates, and calculates the distance to them. If deg(v) < k or if one of the vertices in T is a witness of v, the algorithm has found an incomplete vertex, and hence rejects. Otherwise, it accepts. However, we have to deal with the case that some vertices in S have non-constant degree, such that querying all their adjacent vertices would require too many queries. To this end, we prove that one can prune these vertices to obtain a subset of low degree vertices that still contains many incomplete vertices with sufficient probability.

### Proof of Theorem 1

We prove that Algorithm 1 is an ϵ-tester as claimed by Theorem 1. Since Algorithm 1 never rejects a k-nearest neighbourhood graph, assume without loss of generality that G is ϵ-far from being a k-nearest neighborhood graph. Algorithm 1 only queries the neighbors of the sampled vertices, and therefore its query complexity is as claimed. It remains to prove the correctness. In the following, let L denote the set of all vertices in G that have low degree, let I denote the set of incomplete vertices in G, and let I_L denote the set of incomplete vertices in L. By an averaging argument, L contains all but a small fraction of the vertices. It follows from Lemma 5 that L contains at least Ω(ϵdn/k) incomplete vertices, and therefore we focus on finding incomplete vertices that have low degree.

The following random variable identifies witnesses of incomplete vertices in S.

###### Definition 7.

Given t ∈ T, let Y_t be a random variable that is 1 if t is a witness of an incomplete vertex in S and 0 otherwise.

The proof of Theorem 1 follows from the following three claims. First, note that S is a uniform sample without replacement from L whose size is random. However, S is sufficiently large with constant probability.

###### Claim 8.

With probability at least 9/10, |S| ≥ 20√n/ϵ.

###### Proof.

The expected cardinality of S is sufficiently large.
Therefore, the probability that |S| is less than 20√n/ϵ is at most 1/10 by Markov's inequality. ∎

In the subsequent sections, we prove the following two claims. Given that S is sufficiently large, it will contain at least √n incomplete vertices with constant probability.

###### Claim 9 (Lemma 11).

If |S| ≥ 20√n/ϵ, it holds with probability at least 9/10 that S contains at least √n incomplete vertices.

Finally, we show that if S contains at least √n incomplete vertices, then T will contain at least one witness of such an incomplete vertex with constant probability.

###### Claim 10 (Lemma 14).

If S contains at least √n incomplete vertices, then with probability at least 9/10, T contains a witness of an incomplete vertex in S.

The correctness follows by a union bound over these three bad events.

### Analysis of the Sample S: Proof of Claim 9

We bound the cardinality of S such that S contains at least √n incomplete vertices.

###### Lemma 11.

If |S| ≥ 20√n/ϵ, then S contains at least √n incomplete vertices with probability at least 9/10.

###### Proof.

Since S was sampled without replacement, the number of incomplete vertices in S follows the hypergeometric distribution. Let X be a random variable that denotes the number of draws that are needed to obtain √n incomplete vertices, which therefore follows the negative hypergeometric distribution. By Lemma 5, the population contains at least ϵdn/(2k) incomplete vertices. By the definition of X and the negative hypergeometric distribution, we have E[X] ≤ √n(n+1)/(ϵdn/(2k)+1). We apply Markov's inequality to obtain Pr[X ≥ |S|] ≤ E[X]/|S|. It follows that |S| ≥ 20√n/ϵ ensures X ≤ |S| with sufficient probability:

$$|S| \ge \frac{20\sqrt{n}}{\epsilon} \;\Leftrightarrow\; |S| \ge \frac{20\sqrt{n}\left(\frac{\epsilon dn}{2k}+1\right)}{\epsilon\left(\frac{\epsilon dn}{2k}+1\right)} \;\Rightarrow\; |S| \ge \frac{10\sqrt{n}\,n+10\sqrt{n}}{\frac{\epsilon dn}{2k}+1} \;\Leftrightarrow\; \frac{1}{10} \ge \frac{\sqrt{n}(n+1)}{\left(\frac{\epsilon dn}{2k}+1\right)|S|} \;\Rightarrow\; \Pr[X\ge|S|] \le \frac{1}{10}. \qquad\blacksquare$$

### Analysis of the Sample T: Proof of Claim 10

We first prove the following bound on the number of vertices that can share a witness, obtained by k-reducing the problem to the case k = 1 (Lemma 6).

###### Proposition 12.

Given a point set P ⊂ ℝ^δ, a point p ∈ P and k, the maximum number of points that can have p as a k-nearest neighbor is bounded by k · K_δ.

We note that this bound is tight.

###### Definition 13 (k-reducing).

Let p be an arbitrary point and let Q be the set of points that have p among their k nearest neighbors. Repeat the following steps until every point in Q lies nearer to p than to any other point of Q:

• Pick a point q ∈ Q that lies furthest from p and let R = {r ∈ Q \ {q} : ‖q − r‖ < ‖q − p‖}.

• Set Q = Q \ R.

###### Proof of Proposition 12.
We apply the k-reducing process of Definition 13 to p and prove that the size of Q at the beginning of the process is at most k · K_δ, which proves the claim. First, we show that every point that is picked during k-reducing stays in Q: Let a, b be arbitrary points that are picked during the process, with b being picked in an earlier iteration than a. The latter implies ‖b − p‖ ≥ ‖a − p‖. Assume that at the time a is selected, b lies nearer to a than p, and therefore b is removed from Q. Since b is deleted by a, it holds that ‖a − b‖ < ‖a − p‖ ≤ ‖b − p‖, so a would already have been deleted in the earlier iteration that picked b, which is a contradiction as b has been selected before a. We continue to bound the maximum number of points that share their k-nearest neighbor: Because p is the nearest point for every remaining point of Q, we apply Lemma 6 and conclude that at most K_δ points remain in Q after k-reducing. Since every iteration removed at most k − 1 points from Q (fewer than k points lie nearer to the picked point than p, as p is among its k nearest neighbors), the cardinality of Q at the beginning of the process was at most k · K_δ. ∎

Since at most k · K_δ vertices can share a witness by Proposition 12, there are at least √n/(k · K_δ) distinct witnesses of vertices in S. We employ this bound to calculate the size of the sample T such that it contains at least one witness of an incomplete vertex in S with constant probability.

###### Lemma 14.

If S contains at least √n incomplete vertices and |T| is sufficiently large, then T contains a witness of an incomplete vertex in S with probability at least 9/10.

###### Proof.

Since every vertex of T is sampled uniformly at random with replacement, the event that one sampled vertex is a witness is a Bernoulli trial with success probability at least √n/(k · K_δ · n). Therefore, the expected number of witnesses in T is at least |T| · √n/(k · K_δ · n). We have
# Bug: Purge command deletes user arrows that are in use Hi, just discovered this little bug, using Rhino 6 SR26 on Windows. When using the Purge command and setting it to get rid of unused Block definitions, it also deletes blocks that are being used as user-defined arrowheads (as, as far as I know, defining them as blocks is the only way to add custom arrows (?)). These arrowheads then instantly disappear from the dimensions or leaders. Not sure if this is something that is easily solved or not; the obvious workaround is to have the block instance inserted somewhere in the scene, if one wants to use the command to delete other block definitions. So not a huge problem, just something one has to have in mind if those specific circumstances arise. Hi Simon - I can reproduce that here and put it on the list as RH-58901. Thanks for reporting, -wim
# Tag Info 8 The problem that I have is that I always have a big spike (10-15 dB) directly on the center frequency (no matter what frequency I set). I am relatively new to all this so I would appreciate any pointers on how to get rid of the spike. That spike is probably nothing surprising – just the LO leakage/DC offset, a very common artifact in direct conversion ... 5 Ok I did some signal forensics on the data capture and believe the modulation is a form of FSK. The FSK modulation was +/- 20 KHz with a data rate of 38 KHz. UPDATE: The OP discovery that this is "io-homecontrol" and the datasheet from ADI that he found has confirmed that this is indeed FSK with a deviation of 20KHz and 38.4 Kbps data rate. Further ... 5 A square operation creates an unmodulated tone for a BPSK signal at 2x the carrier frequency (a pure tone for the case that the signal was unfiltered or rectangular pulses with perfect phase and amplitude balance in the BPSK modulation, and typically a stronger carrier with weaker sidebands in the more common filtered or pulse-shaped cases). For QPSK signals ... 5 That is above the Nyquist rate, so why is the signal degraded? It's not degraded in any way form or shape. The perceived degradation is purely cosmetic but not functional. See for example: How is sampling affecting this sine wave? But the time domain plot still shows something unexpected No it doesn't. It looks exactly as it should. If that's unexpected, ... 4 I also read this from a response to a USRP user's question about RSSI measurements: [The] Received Signal Strength [Indicator is] always relative to some signal model, incorporating considered bandwidth, assumptions on the modulation scheme, duration of transmission, generally: It's a estimation of received signal strength based on some property of ... 4 The channel frequency controls the local oscillator on your SDR which is the frequency about which it covers. So you are receiving signals from 16KHz below 107.5MHz to 16KHz above. 
The concept you want to look up is called heterodyning. When you modulate (multiply) a signal by a sine wave you end up with two copies shifted in frequency space by the ... 4 So, the important takeaway from your introduction is that you have an application which needs to get chunks of items out of the flow graph repeatedly. Which means you're in the streaming case. (for future readers: you'd just use a Vector Sink instead if you only wanted all the data at once after the flow graph has finished running) So, multiple approaches ... 4 Yes the OP is correct in that you can implement pulse shaping in less than 2 samples per symbol for exactly the reasons that was outlined. However importantly we must also keep in mind having excess bandwidth to simplify subsequent filtering required (such as after the DAC on the transmitter side). The Nyquist criteria is the sampling rate must be twice the ... 4 Unlike the Gardner Loop, the M&M synchronizer should be performed after the RRC filter in the receiver for best performance. With cases of high RRC alpha, the M&M won't work as expected without the complete Raised-Cosine filtering (RRC in transmitter followed by RRC in receiver) as the slope of the error term will reverse, with high self-noise, as I ... 3 There's a lot of different domains of knowledge coming together here, so I'll split my answer into multiple sections, each answering an implicit question that you raise in your explicit question. Hope that helps! Can your RTL-Dongle actually receive at 1.72 GHz? So, first the bitter pill: There's a lot of sellers out there that offer RTL dongles and claim ... 3 See pages 5 and 6 and the plots on the following pages specific to number of samples per symbol in this very helpful reference by Ken Gentile on designing RRC pulse shape filters: http://www.analog.com/media/en/technical-documentation/application-notes/AN-922.pdf An example I have previously done shows the consideration of filter length (how many symbols ... 
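The heterodyning statement above (multiplying by a sine wave yields two frequency-shifted copies) is easy to check numerically; the frequencies below are arbitrary illustration values chosen to fall exactly on FFT bins:

```python
import numpy as np

fs, f_sig, f_lo, N = 8000, 440.0, 1000.0, 8000   # illustrative values
t = np.arange(N) / fs
mixed = np.sin(2 * np.pi * f_sig * t) * np.sin(2 * np.pi * f_lo * t)

# sin(a)sin(b) = [cos(a-b) - cos(a+b)]/2: expect lines at f_lo -/+ f_sig
spec = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(N, 1 / fs)
print(freqs[spec > N / 8])   # the two mixing products: 560 Hz and 1440 Hz
```

The single 440 Hz tone has disappeared entirely; only the difference (1000 − 440 = 560 Hz) and sum (1000 + 440 = 1440 Hz) products remain, which is exactly the frequency-shifting behavior described above.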
3 Before I address your questions, you should understand: a. the integral branch of the loop filter maintains an average phase increment in units of radians/sample. It is not a frequency value, though in the right context it can be converted to a frequency value, using your sample rate as a conversion factor. b. the total output of the loop filter is an ... 3 Use a notch filter such as the one shown in the figure below, where $\omega_n = 2\pi \cdot 60/f_s$, where $f_s$ is your sampling rate, and $\alpha$ is chosen based on the bandwidth of the notch and how long you can allow for settling in the time domain; the tighter the filter bandwidth, the longer it will take to settle; you can use a first order approximation of 10%... 3 What you are seeing is the transitions from one constellation point to another. In order to reduce the signal bandwidth, the baseband signal is low-pass filtered. This causes the transitions to not be instantaneous (i.e. the I and Q are not square waves), so they take some time. You are simply seeing those transitions. The low-pass filtering also causes ... 3 Got me at that one! The "OFDM symbol acquisition" block is in fact not from gr-digital (where your other OFDM blocks come from), but from gr-dtv, where it is used to capture DVB-T signals, if I remember correctly. It might be very DVB-specific! Let us have a look at the dvbt_rx_8k.grc example from gr-dtv (or, at least, the top half): So your understanding ... 3 Packet Encoder and Decoder are broken; they drop data. That's why they are in the deprecated category (for years now!). We've removed them because, as a project, GNU Radio has not been able to fix them (and also, they were terrible from an architecture point of view). So there's exactly one solution: don't use Packet Encoder / Decoder. 3 Consider the formula for the DFT (which the FFT efficiently computes as an algorithm): $$X(k) = \sum_{n=0}^{N-1}x(n)e^{-j2\pi nk/N}$$ Notice that it is a summation over $N$ samples total.
Also note, using Euler's formula, that a cosine function can be expressed as two exponential terms (that are visible in the DFT result) as: $$x[n] = \cos(2\pi f n + \theta) = \tfrac{1}{2}e^{j(2\pi f n + \theta)} + \tfrac{1}{2}e^{-j(2\pi f n + \theta)}$$ 3 I am not revealing any big secrets here on jamming and anti-jamming techniques, nor would I condone creating any such interference. What I am about to say is quite simplistic and well known, but knowing more details of how jamming can take place and being more educated on it in general can help good actors in minimizing vulnerabilities in future designs. Yes ... 2 Further inspection indicates that the serializer block only removes non-data carriers. It probably just so happens that anything that is non-data is super noisy, and data carriers are not noisy at all, but I still wonder how this is possible. The magic that happens here is in the actual equalizer used in the frame equalizer block. If you'd scroll ... 2 For an FIR filter there are 3 main components that determine the filter length for equiripple designs: Passband ripple Stopband attenuation level Transition width (width from the edge of the passband to the edge of the stopband) For other filter designs the filter order may be related to flatness of the passband and the rate of fall-off in the stopband (... 2 The stock FFT (in GNU Radio?) is a complex-to-complex transform. Thus any positive frequency peak you see represents a complex signal (phasor) that can include both real and imaginary components. Since your cosine (or sine) waveform is strictly real (has zero or no imaginary component), the complex FFT result also includes a negative complex conjugated ... 2 I believe that what you are looking for is Bandpass Sampling. What the Nyquist theorem says is that your sampling frequency must be at least twice the bandwidth of your signal - not its carrier frequency. Hence for FM modulated signals you don't need to take the carrier frequency into consideration.
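The Euler-formula and complex-FFT points above can be verified directly: the DFT of a strictly real cosine is conjugate-symmetric, showing the two exponential components as lines at +f and its negative (conjugate) image. The sizes below are illustrative, with f placed exactly on a bin:

```python
import numpy as np

N, f = 256, 10                      # illustrative: f lies exactly on bin 10
n = np.arange(N)
x = np.cos(2 * np.pi * f * n / N)   # strictly real cosine
X = np.fft.fft(x)

# DFT of a real signal is conjugate-symmetric: X[N-k] == conj(X[k])
assert np.allclose(X[N - f], np.conj(X[f]))

# The single real cosine shows up as TWO lines, at bin f and at its
# negative-frequency image N - f, each of magnitude N/2
peaks = np.flatnonzero(np.abs(X) > N / 4)
print(peaks)   # bins 10 and 246
```

This is the mechanism behind the "negative complex conjugated" peak mentioned above: the two exponentials of the cosine each contribute one line, split evenly in amplitude.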
2 I can only answer your second question: "How can the loop bandwidth in GNU Radio synchronization be configured as a percentage of the symbol rate?" The tracking loop in the symbol synchronizer block operates at the symbol rate, estimating timing error and making a correction once per symbol. So the sample rate of the error signal from the TED is at ... 2 The high-frequency (RF) section of an SDR is all analog. Typically, the analog receiver downconverts the RF signal to an intermediate frequency that is within the Nyquist range of the ADC. As Stanley points out, you can also do bandpass sampling, though that is less common, in my experience. 2 A few (hopefully useful) comments and ideas: The HackRF One is not a USRP. If you're receiving with an NI 2921 USRP, you should be using UHD to interface with it, not osmocom. Use a standalone spectrum analyzer, first to double-check that nobody else is transmitting at that frequency, and then to study your transmitted signal. Transmit a single sine wave and ... 2 Let's say you want to filter a signal x[n] through a Gaussian filter with impulse response h_g[n] and a Moving Average (the "sqwave") filter with impulse response h_s[n]. Then the resulting operation is $$y[n] = h_s[n] * (h_g[n] * x[n]) = (h_s[n] * h_g[n]) * x[n]$$ So the a priori convolution of the two filter tap sets into one filter tap set, just ... 2 You can create a flowgraph in C++ like gqrx does, and I think the GNU Radio Manual and C++ API Reference documents will help you. This is the GNU Radio C++ documentation for top_block; if you cannot use it, let me know and I will write a sample for you. These documents are for GNU Radio version 3.7; in GNU Radio 3.8 there is C++ code generation, and the GNU Radio blocks are being ported to C++ ... 2 The Symbol Synchronizer block is a PLL-based synchronizer that is trying to estimate the symbol clock period and symbol clock phase (aka timing offset) based on the samples coming in that represent the data symbols.
Being a PLL configured with static parameters, there is a fundamental trade-off between acquisition speed and tracking stability of the symbol ... 2 Sounds like you have a very workable approach: Write a GNU Radio block (Embedded Python, out-of-tree Python or C++, doesn't matter) which: is a general block (not a sync_block); has a member property triggered or similar, which is initialized to False in the constructor; has a member property threshold or similar, which is initialized to the value passed as ... 2 The GNU Radio Constellation Modulator enforces a minimum of 2 samples per symbol mostly due to simplicity and some practical reasons. Theoretically, if you're just transmitting PSK without any pulse shaping, you could do 1 sample per symbol (note these are complex symbols). But typically we like filtered PSK, and the block does this with the RRC filter. ...
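The filter-cascade identity quoted a few answers back, y[n] = h_s[n] * (h_g[n] * x[n]) = (h_s[n] * h_g[n]) * x[n], is just associativity of convolution and can be confirmed numerically (the taps below are illustrative, not from any specific flowgraph):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200)                           # input signal
h_g = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)   # Gaussian taps
h_g /= h_g.sum()
h_s = np.ones(5) / 5                                   # moving-average ("sqwave") taps

cascade = np.convolve(h_s, np.convolve(h_g, x))    # filter twice, in sequence
combined = np.convolve(np.convolve(h_s, h_g), x)   # pre-combine the tap sets

assert np.allclose(cascade, combined)   # convolution is associative
```

Pre-combining the two tap sets into one filter is usually the cheaper option at runtime, since a single FIR of length len(h_g) + len(h_s) − 1 replaces two sequential filter passes.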
# How to implement this single-qubit unitary? I was reading this paper on qubit state preparation, and encountered an interesting type of single-qubit gate: \begin{align} U_\theta = \left(\begin{matrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{matrix}\right) = \cos\theta\, \sigma_z + \sin\theta\, \sigma_x \end{align} and more generally, \begin{align} U = \frac{1}{\sqrt{|a|^2+|b|^2}}\left(\begin{matrix} a & b \\ b^* & -a^*\end{matrix}\right) \end{align} I would like to try and decompose these gates as compositions of standard rotations, i.e. \begin{align} R_x(\theta) = \exp(-i\theta \sigma_x/2)\ \ ,\ \ \sigma_x = \left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right) \\ R_y(\theta) = \exp(-i\theta \sigma_y/2)\ \ ,\ \ \sigma_y = \left(\begin{matrix} 0 & -i \\ i & 0 \end{matrix}\right) \\ R_z(\theta) = \exp(-i\theta \sigma_z/2)\ \ ,\ \ \sigma_z = \left(\begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix}\right) \end{align} However, I'm not really sure how to go about it, mostly due to the minus sign in the (2,2) matrix element. I've tried solving the simultaneous equations, e.g. $R_X(\theta)R_Z(\phi) = U$, but I end up getting no solutions.

A general rotation can be written as $$R_{\mathbf{n}}(\theta)=e^{i \theta (\mathbf{n}\cdot\mathbf{\sigma})} = \begin{pmatrix} \cos \theta + i n_{z} \sin \theta & i n_{x} \sin \theta + n_{y} \sin \theta \\ in_{x} \sin \theta -n_{y} \sin \theta & \cos \theta - i n_{z} \sin \theta \end{pmatrix},$$ which we have obtained using $\exp(i \theta (\mathbf{n}\cdot\mathbf{\sigma})) = \cos \theta\, \mathbb{1} + i\, (\mathbf{n}\cdot\mathbf{\sigma}) \sin \theta$. To keep this rotation unitary, we should also set the norm of $\mathbf{n}$ to one, which is equivalent to dividing the matrix by the square root of its determinant.
Putting everything together, we end up with a general form of $$R_{\mathbf{n}}(\theta) = \frac{1}{ \sqrt{|a|^{2} + |b|^{2}}} \begin{pmatrix} a & b \\ -b^{*} & a^{*} \end{pmatrix}.$$ As you may have noticed, the minus sign is not on the right component, which implies that we should act with one more $\sigma_{z}$ on this rotation. Finally, $$\sigma_{z} R_{\mathbf{n}}(\theta) = \frac{1}{ \sqrt{|a|^{2} + |b|^{2}}} \begin{pmatrix} a & b \\ b^{*} & -a^{*} \end{pmatrix}.$$ You can also view $\sigma_{z}$ as $-iR_{z}(\pi/2)$ in this convention.

The most general form of a unitary matrix which can be decomposed into standard rotations is $$U=\exp(-i\,\vec{\sigma}\cdot\hat{n}\,\phi/2),$$ where $\hat{n}$ is the unit vector corresponding to the axis of rotation and $\phi$ is the angle of rotation. The determinant of $U$ is 1, but the given matrix has determinant $-1$. So, I think, it cannot be decomposed into rotations alone.
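Both answers can be sanity-checked numerically: the matrix U_θ from the question is unitary with determinant -1 (so it is a reflection, not a pure rotation), while σ_z·U_θ has determinant +1, confirming the "σ_z times a rotation" decomposition. A quick sketch:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

theta = 0.7  # arbitrary test angle
U = np.cos(theta) * sz + np.sin(theta) * sx   # the U_theta from the question

# U is unitary, but det(U) = -1: a reflection, so no pure-rotation decomposition
assert np.allclose(U @ U.conj().T, np.eye(2))
assert np.isclose(np.linalg.det(U), -1)

# sigma_z * U has det +1 (a rotation R), and U factors as sigma_z @ R
R = sz @ U
assert np.isclose(np.linalg.det(R), 1)
assert np.allclose(sz @ R, U)
```

This is consistent with both answers: the rotation part can be written in the R_n(θ) form, and the leftover σ_z (determinant −1) is exactly what the second answer's determinant argument predicts cannot be absorbed into a rotation.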
# Ampere's Law, Interface conditions for magnetic field

I'm failing to understand the derivation of the interface conditions for the tangential components of the magnetic field given here (based on D. J. Griffiths). Ampere's law in integral form is given as $$\oint_C \mathbf{H}\cdot d\mathbf{l}=\int_S(\mathbf{j_f}+\frac{\partial \mathbf{D}}{\partial t})\cdot d\mathbf{a}$$ where $$\mathbf{j}_f$$ is the current-density of free charge carriers. Now considering the Amperian rectangle below, in the limit $$h\rightarrow 0$$ this gives $$H^\parallel_1 l-H^\parallel_2 l=I_f$$, where $$I_f$$ represents the enclosed current of free charges $$I_f=\int_S \mathbf{j}_f\cdot d\mathbf{a}$$ fine. Then the author says ...The free surface current is the product of a surface current density $$\mathbf{K}_f$$ and the width of the loop;...$$H^\parallel_1 l-H^\parallel_2 l=K_f l$$ This is what confuses me: Where is $$\mathbf{K}_f$$ coming from? We already have a free surface current density, and it was called $$\mathbf{j}_f$$. What's the difference between these two? And how can a surface current density $$\mathbf{K}_f$$ have the same units as a magnetic field (A/m)? This doesn't look like a density to me. • $J_f$ is volume current, i.e. the free charges, if they were to stop for an instant, would have nonzero volume density. $K_f$ is surface current, i.e. the corresponding moving free charges would have zero volume density but not zero surface density. – hyportnex Feb 12 at 16:51 • Are you sure? S.I. units of D are C/m^2, so the time-derivative of D has units of A/m^2, and J_f has to match these units.. isn't that a surface-current-density? – OD IUM Feb 12 at 16:58 • $J_f$ represents current (charges per second) passing through a unit surface, but if the charges that are moving at speed $v$ stopped for an instant ($\delta t$ seconds) then they would represent a volume of charges $v\delta t dA$ passing through that unit surface $dA$.
Similarly for $K_f$ but now charges confined to a surface passing through a unit length of a line – hyportnex Feb 12 at 17:10 • ok,that helps me – OD IUM Feb 12 at 17:13 • Note: because of the skin effect this concept of surface current is especially useful for high frequency currents in metals. – hyportnex Feb 12 at 17:18
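The distinction between $\mathbf{j}_f$ (A/m²) and $\mathbf{K}_f$ (A/m) can be made concrete with a tiny numerical sketch. All the numbers below are made up for illustration: $K_f$ is the $t \to 0$ limit of a volume current $J$ confined to a sheet of thickness $t$ with $J\,t$ held fixed, and it is $K_f$ that sets the jump in the tangential $H$.

```python
# Surface vs volume current density: K_f (A/m) is the t -> 0 limit of a
# volume current J (A/m^2) confined to a sheet of thickness t, with J*t fixed.
# All numbers below are made up for illustration.

def surface_current_density(J, t):
    """K_f = J * t for a uniform sheet of thickness t carrying volume current J."""
    return J * t

K_target = 10.0  # A/m, the fixed sheet current
for t in (1e-3, 1e-6, 1e-9):
    J = K_target / t                     # A/m^2: diverges as the sheet thins...
    assert abs(surface_current_density(J, t) - K_target) < 1e-9  # ...K_f stays finite

# Jump condition from the Amperian loop: H1_par - H2_par = K_f
H2_par = 2.5                             # A/m, tangential H on side 2
H1_par = H2_par + K_target               # A/m, tangential H on side 1
assert abs((H1_par - H2_par) - K_target) < 1e-12
```

The loop in the sketch shows why a separate symbol is needed: $J$ grows without bound as the sheet thins, while $K_f = J\,t$ stays finite.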
# Most stable conformational isomer of 3-methoxycyclohexan-1-ol Among the following, which is the most stable isomer? I am aware of the fact that equatorial substituents are more stable than axial substituents, but couldn't proceed to apply it here. However, the answer key gives the answer as (d), in which both substituents are in the axial position. • Note that there is no space between the substituent prefixes (3-methoxy) and the parent compound (cyclohexan-1-ol). Furthermore, (b) and (c) are diastereomers, not different conformers, of (a) and (d). Anyway, the answer in three words is "intramolecular hydrogen bonding". – orthocresol Dec 29 '16 at 6:41 • @orthocresol There would be a lot of steric hindrance too if both substituents are on the same side. – Pink Dec 29 '16 at 6:51 • Not steric hindrance, but steric repulsions. The word "hindrance" means that something is being blocked, such as an incoming nucleophile in an SN2 reaction. Nothing is attacking anything here, and consequently nothing is being blocked here. So the word "hindrance" doesn't apply. Anyway, the question really boils down to whether the destabilising steric repulsions or the stabilising intramolecular H-bond are bigger. The answer is you probably need a computer to work that out, but in this case the H-bond is more important. – orthocresol Dec 29 '16 at 6:59 • On top of that, (b) and (c) are the same isomer/conformer...? – orthocresol Dec 29 '16 at 7:55 • @orthocresol Yes, (b) and (c) are the same conformers; it must have been misprinted in the book. Sorry for that. – Pink Dec 29 '16 at 7:59 I would say that the answer to the question depends strongly on the solvent used. In case anybody still doesn't see it: in orientation (d), the compound can form an intramolecular hydrogen bond from the hydroxy group to the methoxy group. This is especially favourable in solvents that cannot participate in hydrogen bonding, e.g. dichloromethane.
Dissolving the same molecule in methanol, however, could change the entire story. The intramolecular hydrogen bond is favoured in the absence of other hydrogen bond donors or acceptors, but in a hydrogen-bonding solvent there is absolutely no shortage of donors and acceptors, and it can be assumed that all sites that can participate in hydrogen bonding in any way are saturated with hydrogen bonds. At this point, the steric interaction probably becomes more important, and I would assume that the molecule preferentially assumes a diequatorial conformation. Sometimes, the intuitive rule of thumb that favours conformations with $\ce{OH}$ substituents in equatorial positions fails miserably. For 1, a cyclization product of geranyl acetate, one might assume that the favoured conformation in solution and in the crystal is 1eq. However, this is not the case! Both NMR spectroscopy and X-ray crystallography indicate that 1ax is the actual conformation. In the case of 1, the reason most likely isn't a (stabilizing) intramolecular hydrogen bond that would form a seven-membered ring, but the steric interaction of the methyl groups. The unfavourable interaction of the 1,3-diaxial methyl groups in 1eq is avoided in the conformer with the axial $\ce{OH}$ group. • So why is this the case? Has it something to do with a 1,3-diaxial hydrogen bonding of the hydroxyl and ester carbonyl group? – logical x 2 Dec 29 '16 at 11:17
# Macros ## A 2-post collection Ever tried googling "recursion"? There's something quite peculiar about recursion. Every developer and their dog has heard of it at some point, and most developers seem to have quite a strong opinion about it. Sometimes, they were taught about it in college. Some old professor with a gray beard and funny words (the hell's a cons cell? why are you asking if I want to have s-expr with you?) made them write Lisp or Caml for a semester, growling at the slightest sign of loops or mutability from the poor student whose only experience with programming so far was Java-like OOP. Months spent writing factorials, linked lists, Fibonacci sequences, depth-first searches, and other algorithms with no real-world use whatsoever. Other times, it was by misfortune. While writing code in any of their usual C-family enterprise-grade languages, they accidentally made a function call itself, and got greeted by a cryptic error message about something flowing over a stack. They looked it up on Google (or Yahoo? AltaVista? comp.lang.java?) and quickly learned that they had just stumbled upon some sort of arcane magic that, in addition to being a simply inefficient way of doing things, was way too complicated for any

Rust macros are powerful, that's a fact. I mean, they allow running any code at compile-time, of course they're powerful. C macros, which are at the end of the day nothing more than glorified text substitution rules, allow you to implement new, innovative, modern language constructs, such as: or even: But these are just silly examples written for fun. Nobody would ever commit such macro abuse in real-world, production code. Nobody...
# Probability spinner examples

**Probability notation.** The probability of an outcome is written in symbols as P(outcome), where the P stands for probability. A probability can be written as a fraction, a decimal, or a percentage: for example, the probability of tossing tails on a coin can be written as 1/2, 0.5 or 50%. There is only one head on a coin and there are two possible outcomes, either heads or tails, so P(heads) = 1/2 as well.

**Experiment 1: a single 6-sided die is rolled.** (The word dice is the plural form of die.) When we roll the die, we expect an odd number half of the time and an even number the other half: 3/6 of the numbers on the die are odd and the remaining 3/6 are even, and 3/6 and 3/6 add to make 6/6, which is all 6 faces. Further practice: find P(5), P(even number) and P(7) when rolling the die one time.

**Addition rule 1.** When two events A and B are mutually exclusive, the probability that A or B will occur is the sum of the probability of each event: P(A or B) = P(A) + P(B). Example: if a die is thrown twice, find the probability of getting two 5's.

**Letter spinner.** Two sets of cards with the letters T, E, S, K, A are placed into separate bags, and Sara randomly picked one card from each bag. Equivalently, a spinner is divided into 5 equal sections labelled with these letters.
a) What is the probability of spinning an A?
b) What is the probability of spinning a vowel?
c) What is the probability of spinning a Q?
Write each answer as a fraction, a ratio, and a percent. Then conduct a probability experiment by spinning the spinner many times, recording expected and observed frequencies for each spinner in a table in your book, and compare theoretical and experimental probability. Theoretical probability is the way we found the chance of winning an MP3 player in the scenario above; for instance, according to theoretical probability, how many times can we expect to land on each color of a spinner in 16 spins? When theoretical probability models are difficult to develop, a simulation model can be used to collect data and estimate probabilities for a real situation that is complex and where the theoretical probabilities are not obvious.

**Biased spinner.** This spinner is biased. The probability that the spinner will land on each of the numbers 1 to 4 is given in the table below; the spinner is spun once.

| Number | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| Probability | 0.35 | 0.1 | 0.25 | 0.15 |

**Color spinner.** a) Write the sample space for a spinner with red, yellow, blue and green sections. b) Is each outcome equally likely? c) Find the probability that the spinner lands on red. Use the clues provided by your teacher to draw each spinner. The spinners, divided into halves, thirds, fourths, sixths, and eighths, need to be printed and cut out for the activities; there are lower, middle and higher spinner activities in this resource, worksheets for both higher and lower abilities (set for homework), and a probability clip-art set with 8 color and 8 black-and-white patterned spinners. If there are a total of six numbers on a spinner, for instance, the probability of spinning a 1-4 is 2 in 3; in general the answer depends on what other numbers exist on the spinner. The probability of getting a star in Spinner A is greater than in Spinner B, so we should choose Spinner A for a better chance of winning. Extension: select spinners so that the probability of all three spinners landing in the shaded sector is the smallest (or largest); how would the answer change if you could create any number of spinners such that the total number of sectors is 10 (e.g. 1 spinner with 10 equal sectors or 5 spinners each with 2 equal sectors)? An adjustable spinner applet (Grades: PreK to 2nd, 3rd to 5th, 6th to 8th, High School) lets you change the number of sectors and increase or decrease their size to create any type of spinner.

**Horse race (Connected Mathematics Project, "What Do You Expect?", Grade 7, Mrs. Vigliotta).** Two dice are rolled; for example, if a four and a one are rolled, horse number five will move one spot closer to the finish line. Horse number one will never leave the stall because there is no way to roll a one with two dice, while horse number seven has the highest probability of winning because there are six ways to roll the dice to equal seven. Students exchange ideas about why their horse didn't win, and look at how they find the experimental probability for each spinner, including the spinner where red is certain. (Section 5.1, Historical Connections, discusses the historical connections of the track meet problem and further looks at the students' data from a statistical perspective.) Also give some examples of rain probability to find out whether your student would bring an umbrella or raincoat.

**Continuous spinners and density functions.** The spinner in the picture is labeled in 100 increments of 0.01 each; when we spin, the probability that the needle lands closest to the 0.5 tick mark is 0.01. If the spinner were labeled in 1000 increments of 0.001, that probability would be 0.001, and with four decimal places of precision it would be 0.0001. (The probability of the infinitely precise needle landing on a specific value like 0.3, that is 0.300000000…, is 0, so it doesn't really matter what we do with the endpoints of the intervals.) To describe probabilities about X, a density function denoted by f(x) is defined. Not any function f will work: f must be nonnegative, i.e. f(x) ≥ 0 for all x. The support of X for the spinner example is the interval (0, 100).

**Expectation example.** When throwing a normal die, let X be the random variable defined by X = the square of the score shown on the die. The possible values of X are 1², 2², 3², 4², 5² and 6², i.e. 1, 4, 9, 16, 25 and 36. What is the expectation of X?

**High-risk drinking (binomial example).** Probability of finding high-risk drinkers when examining 1000 persons: the random variable X is the number of "successes", that is, the number of students who are high-risk drinkers. We can use the binomial probability distribution (i.e., the binomial model) to describe this variable.

**Further resources.** *Probability and Bayesian Modeling* is an introduction to probability and Bayesian thinking for undergraduate students with a calculus background; the first part of the book provides a broad view of probability including foundations, conditional probability, discrete and continuous distributions, and joint distributions, and both independent and conditional probability are covered. A video gives more examples and solutions of word problems that involve the probability of independent events, and an assortment of printable probability worksheets covers basic probability (more likely, less likely, equally likely, certain and impossible events), identifying suitable events, and simple spinner problems for students in grade 4, grade 5, and grade 6.
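The letter-spinner questions can be checked with a short simulation. This is a hedged sketch assuming five equally likely sections labelled T, E, S, K, A; it computes the theoretical probabilities exactly with fractions and compares the vowel probability against an experimental estimate from many simulated spins.

```python
import random
from fractions import Fraction

# Spinner with 5 equal sections labelled T, E, S, K, A (assumed equally likely)
sections = ["T", "E", "S", "K", "A"]

# Theoretical probabilities
p_A = Fraction(1, 5)                                      # a) P(A) = 1/5
p_vowel = Fraction(sum(s in "AEIOU" for s in sections), len(sections))
assert p_vowel == Fraction(2, 5)                          # b) P(vowel) = 2/5
assert Fraction(sum(s == "Q" for s in sections), 5) == 0  # c) P(Q) = 0

# Addition rule for mutually exclusive outcomes: P(A or E) = P(A) + P(E)
assert p_A + Fraction(1, 5) == Fraction(2, 5)

# Experimental probability: spin many times and compare with theory
random.seed(0)
spins = 100_000
hits = sum(random.choice(sections) in "AEIOU" for _ in range(spins))
experimental = hits / spins
assert abs(experimental - float(p_vowel)) < 0.01          # close to 0.4
```

With more spins, the experimental estimate drifts closer to the theoretical value, which is exactly the theoretical-versus-experimental comparison the worksheet asks for.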
Example: When throwing a normal die, let X be the random variable defined by X = the square of the score shown on the die. The probability that the outcome of a spin is 1 can be written as P(spin results in a 1), or simply as P(1); when you see P( ), this means to find the probability of whatever is indicated inside the parentheses. In the high-risk drinking example, the random variable X is the number of "successes", that is, the number of students who are the high-risk drinkers. For the biased spinner: (i) work out the probability, k, that the spinner will land on 5; (ii) write down the number on which the spinner … The AND and OR rules (HIGHER TIER): in the example above, the probability of picking a red first is 1/3 and a yellow second is 1/2. A video shows examples of using probability trees to work out the overall probability of a series of events. Finally, the probability that the Uniform(0, 1) spinner lands in the range (0.3, 0.6] is 0.3, so the spinner resulting from this mapping would return a value of 3 with probability 0.3.
# Finitary symmetric group is locally inner automorphism-balanced in symmetric group This article gives the statement, and possibly proof, of a particular subgroup or type of subgroup (namely, Finitary symmetric group (?)) satisfying a particular subgroup property (namely, Locally inner automorphism-balanced subgroup (?)) in a particular group or type of group (namely, Symmetric group (?)). ## Statement Suppose $S$ is a set, $G$ is the symmetric group $\operatorname{Sym}(S)$ on $S$, and $H$ is the finitary symmetric group $\operatorname{FSym}(S)$ on $S$, viewed as a subgroup of $G$. Then, $H$ is a locally inner automorphism-balanced subgroup of $G$. In other words, for any $g \in G$, the restriction of the inner automorphism $x \mapsto gxg^{-1}$ of $G$ to $H$ is a locally inner automorphism of $H$, i.e., for any finite subset $T$ of $H$, there exists $h \in H$ such that $hxh^{-1} = gxg^{-1}$ for all $x \in T$.
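A small computational sketch of the statement (an illustration on a finite window, not a proof), taking $S$ to be the natural numbers: `g` is a non-finitary permutation (it swaps $2k$ and $2k+1$ for every $k$), `T` is a finite set of finitary permutations stored as dicts of moved points, and a finitary `h` agreeing with `g` on the union $F$ of the supports realizes the same conjugation on every element of `T`.

```python
# S = the natural numbers. g swaps 2k and 2k+1 for every k, so g is a
# permutation of N with infinite support (hence not finitary); it is also an
# involution, which keeps the bookkeeping below simple.
def g(n):
    return n + 1 if n % 2 == 0 else n - 1

def apply(p, n):
    """Apply a finitary permutation stored as a dict (moved points only)."""
    return p.get(n, n)

def conjugate_by_g(x, n):
    """(g x g^{-1})(n); here g^{-1} = g because g is an involution."""
    return g(apply(x, g(n)))

# T: a finite set of finitary permutations (a 3-cycle and a transposition)
T = [{0: 4, 4: 7, 7: 0}, {2: 5, 5: 2}]
F = set().union(*(x.keys() for x in T))    # union of the supports

# h: a finitary permutation agreeing with g on F. Since g is an involution,
# pairing each n in F with g(n) already yields a finitary bijection; in
# general one extends the restriction of g to F to any finitary bijection.
h = {**{n: g(n) for n in F}, **{g(n): n for n in F}}
h_inv = {v: k for k, v in h.items()}

def conjugate_by_h(x, n):
    return apply(h, apply(x, apply(h_inv, n)))

# The two conjugations agree on every x in T (checked on an initial segment)
for x in T:
    for n in range(50):
        assert conjugate_by_h(x, n) == conjugate_by_g(x, n)
```

This mirrors the proof idea: conjugation by $g$ only "sees" the finitely many points moved by the elements of $T$, so a finitary $h$ matching $g$ there induces the same automorphism on that finite subset.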
# If the union of two sigma-algebras is an algebra, then it is a sigma-algebra The question is to prove that if the union of two $\sigma$-algebras is an algebra, then it must be a $\sigma$-algebra. My approach is this: if $A_1,A_2\in M$, then it must be true that $A_1\cup A_2\in M_1$ and also $A_1 \cup A_2\in M_2$, because if not, consider an $A_k\in M_1$ with $A_k\not\in M_2$; then $A_1\cup A_k\not\in M_1$ and $A_1\cup A_k\not\in M_2$, which implies $A_1\cup A_k\not\in M$. But since we know that $M$ is an algebra, this has to be true. I am looking to extend this argument via induction to prove that $M$ is also a $\sigma$-algebra, but I haven't been able to formalise my thoughts properly; please help. • I don't see how that argument works. Why couldn't you have $A_1\in M_1\setminus M_2$ and $A_2\in M_2\setminus M_1$ yet $A_1\cup A_2\in M_1$? I'm actually pretty sure you can have such situations: take the usual measurable sets on $[0,1]$ and extend them by adding $\mathbb R\setminus [0,1]$ to each and taking the closure, and do the same for $[2,3]$; then take any two non-trivial elements which contain the outsides. Their union is everything, yet neither is in both. (Note the union of these is not an algebra, though.) – DRF Mar 21 '17 at 10:53 • @DRF Yes, I see your point, but I couldn't come up with a case where the union of $M_1$ and $M_2$ forms an algebra. I have predicated my argument on the assumption that it will always work if the union is an algebra. – Noob101 Mar 21 '17 at 11:15 • @DRF But as I myself point out, this is an unverified assumption made on my part, so I will be glad if you could tell me the correct approach to the problem instead. – Noob101 Mar 21 '17 at 11:17 Let $(A_n)_n$ be a sequence in $M=M_1\cup M_2$.
To prove that $A=\bigcup\limits_n A_n\in M$, we decompose $\bigcup\limits_n A_n$ as $$\bigcup_nA_n=\left(\bigcup_{n;A_n\in M_1}A_n\right)\bigcup\left(\bigcup_{n;A_n\in M_2}A_n\right).$$ Then $A\in M$: each of the two countable unions belongs to $M_1$ or to $M_2$ respectively (as $M_1$ and $M_2$ are $\sigma$-algebras), hence to $M$, and since $M$ is an algebra, their (finite) union is again in $M$.
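DRF's point in the comments, that the union of two $\sigma$-algebras need not even be an algebra, can be illustrated with a tiny finite example (a sketch; the universe and generating sets are chosen purely for illustration):

```python
# Universe X = {1, 2, 3}; M1 and M2 are the sigma-algebras generated by {1}
# and {2} respectively.
X = frozenset({1, 2, 3})

def generated_sigma_algebra(atom):
    """Sigma-algebra on X generated by a single subset (finite case)."""
    a = frozenset(atom)
    return {frozenset(), a, X - a, X}

M1 = generated_sigma_algebra({1})   # {emptyset, {1}, {2,3}, X}
M2 = generated_sigma_algebra({2})   # {emptyset, {2}, {1,3}, X}
M = M1 | M2

def is_algebra(family):
    """On a finite X: closed under complement and pairwise union."""
    return all(X - A in family for A in family) and \
           all(A | B in family for A in family for B in family)

assert is_algebra(M1) and is_algebra(M2)
assert not is_algebra(M)            # {1} | {2} = {1, 2} is missing from M
```

So the hypothesis that $M$ is an algebra is genuinely restrictive, and the decomposition above shows it is exactly what is needed to upgrade $M$ to a $\sigma$-algebra.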
# Slow-roll Inflation at N3LO Cosmic Inflation, the most favoured scenario of the early Universe, implies that all forms of matter and radiation observed today are the outcome of quantum fluctuations occurring around the event horizon of an exponentially fast accelerating space-time. Clearing the ground for the upcoming space- and ground-based cosmological observations, Pierre and Christophe have derived, at an unprecedented level of precision, the shape of the expected power spectra of both the quantum-generated gravitational waves and the curvature perturbations. Cosmic Inflation is a hypothetical early phase of accelerated expansion that occurred before the first billionth of a second of existence of our Universe. It provides a natural mechanism to explain the observed flatness of our Universe today and naturally solves the so-called horizon problem of the Big-Bang model. In a spectacular way, the quantum fluctuations that are inherently sourced during the inflationary era are exactly what is needed to explain the origin of the cosmological perturbations: the seeds of today's galaxies. These quantum fluctuations are deeply rooted in gravity and appear as both primordial gravitational waves $$h_{ij}$$ and curvature perturbations $$\zeta$$, with very peculiar correlation functions. In Ref. [1], we have pushed the calculation of these correlation functions to third order. They are completely determined by the Hubble parameter during inflation $$H(N)$$ and its successive logarithmic derivatives, the so-called Hubble flow functions $$\epsilon_1(N) \equiv -\frac{\mathrm{d}\ln H}{\mathrm{d} N}, \qquad \epsilon_{i+1}(N) \equiv \frac{\mathrm{d}\ln |\epsilon_i|}{\mathrm{d} N}.$$ Here $$N=\ln a$$ is the logarithm of the scale factor $$a$$, i.e. the number of e-folds.
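As a sanity check on the Hubble flow functions, here is a short numerical sketch assuming the standard conventions $\epsilon_1 = -\mathrm{d}\ln H/\mathrm{d}N$ and $\epsilon_{i+1} = \mathrm{d}\ln|\epsilon_i|/\mathrm{d}N$: for the toy choice $H(N) \propto e^{-\epsilon N}$ (power-law inflation), one expects a constant $\epsilon_1 = \epsilon$ and $\epsilon_2 = 0$.

```python
import numpy as np

# Toy model: H(N) = H0 * exp(-eps * N), i.e. constant eps1 = eps and eps2 = 0.
# Definitions assumed: eps1 = -dlnH/dN, eps_{i+1} = dln|eps_i|/dN.
eps = 0.01
N = np.linspace(0.0, 60.0, 60001)            # e-folds N = ln(a)
lnH = np.log(1e-5) - eps * N                 # ln H(N); the prefactor is arbitrary

eps1 = -np.gradient(lnH, N)                  # first Hubble flow function
eps2 = np.gradient(np.log(np.abs(eps1)), N)  # second Hubble flow function

assert np.allclose(eps1, eps, atol=1e-6)     # eps1 is constant and equals eps
assert np.allclose(eps2, 0.0, atol=1e-4)     # eps2 vanishes for this H(N)
```

The same finite-difference recipe applies to any numerically tabulated $H(N)$, which is how one would feed a concrete inflationary background into the slow-roll expansion.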
Slow-roll inflation predicts the correlation functions to be given by power spectra (displayed explicitly in Ref. [1]) that are expanded around an observable pivot wavenumber $$k_*=0.05\,\mathrm{Mpc}^{-1}$$ and are readily testable with the incoming cosmological observations from the Euclid and LiteBIRD space telescopes, but also from the ground-based CMB-S4 telescopes and the Simons Observatory. Are we going to detect a non-vanishing $$\epsilon_{3*}$$? ### References • [1] Auclair P and Ringeval C 2022 Slow-roll inflation at N3LO Phys. Rev. D 106 063512, arXiv:2205.12608, doi:10.1103/PhysRevD.106.063512
# zbMATH — the first resource for mathematics Branching coefficients of holomorphic representations and Segal-Bargmann transform. (English) Zbl 1019.22006 Author’s abstract: “Let $𝐃=G/K$ be a complex bounded symmetric domain of tube type in a Jordan algebra ${V}_{C}$ and let $D=H/L=𝐃\cap V$ be its real form in a Jordan algebra $V\subset {V}_{C}$. The analytic continuation of the holomorphic discrete series on $𝐃$ forms a family of interesting representations of $G$.
We consider the restriction on $D$ and the branching rule under $H$ of the scalar holomorphic representations. The unitary part of the restriction map gives then a generalization of the Segal-Bargmann transform. The group $L$ is a spherical subgroup of $K$ and we find a canonical basis of $L$-invariant polynomials in the components of the Schmid decomposition and we express them in terms of the Jack symmetric polynomials. We prove that the Segal-Bargmann transforms of those $L$-invariant polynomials are, under the spherical transform on $D$, multi-variable Wilson-type polynomials and we give a simple alternative proof of their orthogonality relation. We find the expansion of the spherical functions on $D$, when extended to a holomorphic function in a neighborhood of $0\in 𝐃$, in terms of the $L$-spherical holomorphic polynomials on $𝐃$, the coefficients being the Wilson polynomials”.

##### MSC:
22E30 Analysis on real and complex Lie groups
# Mixed Reasoning Questions for Upcoming Exams – Set 208

Directions: In each question, some statements are given, followed by two conclusions I and II. You have to take the statements to be true, even if they seem to be at variance with commonly known facts, and decide which of the given conclusions, if any, follow from the given statements. Answer options for every question: (A) only conclusion I follows; (B) only conclusion II follows; (C) either conclusion I or II follows; (D) neither conclusion I nor II follows; (E) both conclusions I and II follow.

1. Statements: All medals are gold. All rewards are medals.
Conclusions: I. All rewards are gold. II. All gold are medals.
Answer: Option A

2. Statements: All bowls are glasses. No cup is a glass.
Conclusions: I. No bowl is a cup. II. At least some glasses are bowls.
Answer: Option E

3. Statements: All windows are doors. All entrances are windows. No gate is a door.
Conclusions: I. At least some windows are gates. II. No gate is an entrance.
Answer: Option B

4. Statements: All lions are tigers. Some tigers are horses.
Conclusions: I. Some tigers are lions. II. All horses are lions.
Answer: Option A

5. Statements: Some rectangles are circles. No circle is a triangle. No line is a rectangle.
Conclusions: I. All rectangles can never be triangles. II. Some lines are circles.
Answer: Option A

Directions: Read the following information carefully and answer the questions given beside.
6. Statement: M ≥ Q < G, V > T = M
Conclusions: I. V > Q  II. T ≥ G
Answer: Option A

7. Statement: Z ≥ Q ≥ L ≥ T = E ≥ D
Conclusions: I. D ≤ Q  II. Z ≥ E
Answer: Option E

8. Statement: A < C = D ≤ U ≤ Y < Z
Conclusions: I. A < U  II. Z ≤ C
Answer: Option A

9. Statement: R < O ≤ L ≤ E; G = E ≥ S; Z ≤ S
Conclusions: I. R > Z  II. Z ≤ E
Answer: Option B

10. Statement: M ≥ U ≥ L ≥ T = E ≥ D
Conclusions: I. T < U  II. T = U
Answer: Option C
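The inequality questions above can be checked mechanically: a conclusion follows exactly when it holds in every assignment of values that satisfies the statement. A minimal brute-force sketch in Python (the function names here are illustrative, not from any exam source):

```python
from itertools import product

REL = {"<": lambda a, b: a < b, ">": lambda a, b: a > b,
       "=": lambda a, b: a == b, "<=": lambda a, b: a <= b,
       ">=": lambda a, b: a >= b}

def follows(statements, conclusion, variables):
    """A conclusion follows iff it is true in every model of the statements.
    With n variables, n distinct values suffice to find any countermodel."""
    domain = range(len(variables))
    for values in product(domain, repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(REL[r](env[x], env[y]) for x, r, y in statements):
            x, r, y = conclusion
            if not REL[r](env[x], env[y]):
                return False
    return True

# Q7: Z >= Q >= L >= T = E >= D
stmts = [("Z", ">=", "Q"), ("Q", ">=", "L"), ("L", ">=", "T"),
         ("T", "=", "E"), ("E", ">=", "D")]
vs = ["Z", "Q", "L", "T", "E", "D"]
print(follows(stmts, ("D", "<=", "Q"), vs))  # conclusion I: True
print(follows(stmts, ("Z", ">=", "E"), vs))  # conclusion II: True -> Option E
```

The same `follows` call reproduces the other answers, e.g. for Q6 it confirms conclusion I (V > Q) and rejects conclusion II (T ≥ G).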
## Differential and Integral Equations

### Harnack estimates for some non-linear parabolic equation

Masashi Mizuno

#### Abstract

We consider the following nonlinear parabolic equation
\left\{ \begin{aligned} \partial_tu-\Delta u+\frac{u}{\varepsilon}(|\nabla u|^2-1) & =0, \quad(t,x)\in(0,\infty)\times{\mathbb{R}}^n, \\ u(0,x) & =u_0(x),\quad x\in{\mathbb{R}}^n, \end{aligned} \right. \label{eq:16} \tag{#}
which was derived by Goto, K. Ishii and Ogawa [6] to show the convergence of certain numerical algorithms for motion by mean curvature. They assumed that the solution of (#) is sufficiently regular. In this paper, we study the regularity of solutions of (#) via the Harnack estimate. We show the explicit dependence of the constant in the Harnack inequality using the De Giorgi-Nash-Moser method. We employ the Cole-Hopf transform to treat the nonlinear term.

#### Article information

Source: Differential Integral Equations, Volume 21, Number 7-8 (2008), 693-716.

Dates: First available in Project Euclid: 20 December 2012
## Introduction

Ras proteins belong to the superfamily of small GTPases. They function as tightly regulated GDP/GTP binary switches that control intracellular signaling networks governing cytoskeleton integrity, cell proliferation, cell differentiation, cell adhesion, apoptosis, and cell migration. Three distinct Ras variants are expressed in mammalian cells: H-ras, N-ras and K-ras. These three proteins share a high degree of similarity but differ in their differently post-translationally lipidated hypervariable domains at the C-terminus. This domain is essential for the association of Ras with the plasma membrane (PM) and therefore for its biological function. Upstream regulators of Ras proteins include several receptor tyrosine kinases such as EGFR, VEGFR or FGFR. After binding their appropriate ligands, these receptors are autophosphorylated and recruit various proteins that serve as guanine nucleotide exchange factors (GEFs) or GTPase-activating proteins (GAPs) for Ras. The recruitment of GEFs and GAPs to the PM in direct proximity to Ras is essential for the tight regulation of the Ras transition between the active (GTP-bound) and inactive (GDP-bound) state1. Ras proteins have been at the center of attention in cancer research for decades. Mutations in the Ras GTP-binding domain that lock Ras in a constitutively active state were among the first mutations associated with human cancer initiation and progression2. Moreover, Ras proteins represent the most frequently mutated oncogene family in human cancers: mutations in at least one of the three isoforms were found in 9–30% of cancers3,4. During cancer progression the cell morphology changes via a transforming growth factor β (TGFβ) induced process named epithelial-to-mesenchymal transition (EMT). EMT changes the shape of cells from epithelial to mesenchymal, which leads to increased motility and invasiveness of tumour cells and thus facilitates the formation of metastases5.
These alterations also occur at the nanoscale, as shown by the decrease in the number of cellular protrusions observed by atomic force microscopy6. The canonical TGFβ signaling pathway involves Smad transcription factors; however, TGFβ also triggers several non-Smad signaling pathways, which include, among others, Ras proteins7. In the course of EMT the Ras oncoprotein is activated, and cooperative signaling between Ras and TGFβ, in which Ras plays a prominent role in the switch from tumor-suppressive to tumor-promoting signaling of TGFβ, is important for the maintenance of complete EMT and for gaining the full invasive potential of cancerous cells8,9,10,11. Given the critical importance of Ras membrane localization for its biological functions, precise knowledge of the mechanisms that influence the interaction of Ras with the PM is key to understanding Ras action and its regulation. It has previously been demonstrated that membrane partitioning of lipidated proteins, including N-ras, is sensitive to membrane curvature in vitro12,13. More recently, evidence was provided that Ras proteins directly sense membrane curvature also in vivo14. In this study we investigated the effect of TGFβ treatment on the subcellular localization of H-ras and K-ras. We concentrated on these two isoforms for several reasons. Of the three Ras isoforms, H-ras was the first activated Ras gene to be detected and characterized, and it has thus been used in most studies, while K-ras is the most abundant and the most frequently mutated isoform2,3,4,15,16. In addition, H-ras and K-ras form spatially non-overlapping nanoclusters, and they show different sensitivity to membrane curvature, with H-ras favoring more curved membranes than K-ras14,17,18. These findings raise the possibility that the subcellular localization of H-ras and K-ras in response to TGFβ treatment may differ. Here we present evidence that TGFβ treatment leads to elevated PM localization of H-ras and K-ras.
Further investigation revealed that TGFβ induces an increase in positive membrane curvature, which is subsequently sensed by activated H-ras. Given the importance of the interplay between Ras- and TGFβ-triggered signaling in cancer progression, our findings suggest a possible mechanism by which these two pathways can influence each other via triggering and subsequent sensing of changes in membrane curvature. It is important to highlight that the pathological significance of the observed effect of TGFβ-1 treatment on the PM localization of different Ras isoforms in cancer progression remains to be explored.

## Results

### TGFβ-1 induces an increase in Ras protein PM localization

To follow the subcellular localization of H-ras and K-ras during TGFβ treatment, we used the breast cancer cell line MCF7 and transfected it with GFP-tagged constitutively active GTP-bound H-ras G12V and K-ras G12V (Fig. 1a). The transfected cells were treated with TGFβ-1 for 2 days. Equal expression levels of the fusion proteins in samples with and without TGFβ-1 treatment were carefully checked and confirmed (Fig. S1a). The subcellular localization of the fusion proteins was analyzed both qualitatively, by visual inspection, and quantitatively, by calculating the membrane-to-cytoplasm ratio (M-C ratio), where a higher M-C ratio indicates a higher accumulation of the fusion protein on the PM. In the case of H-ras, a large proportion of cells showed cytoplasmic localization at steady state, and this proportion significantly decreased after TGFβ-1 treatment, resulting in more than 91% of cells with H-ras on the PM (Fig. 1b). In contrast, K-ras, which at steady state showed a very similar proportion of cells with PM and cytoplasmic localization as H-ras, did not change this proportion even after TGFβ-1 addition. However, similar to H-ras, the total amount of K-ras bound to the PM increased after TGFβ-1 addition (Fig. 1c, d, S2).
To study the mechanism of the increased membrane localization of H-ras in more detail, we tested its truncated version lacking the catalytic domain (G-domain). It has been documented that the minimal membrane anchor of H-ras (tH) requires the presence of the adjacent hypervariable linker region to be laterally segregated, like H-ras G12V and K-ras G12V, into cholesterol-independent microdomains where signaling occurs17,19. Therefore, we used the CTH construct, composed of both the membrane anchor and the hypervariable linker region, tagged with CFP (Fig. 1a). Indeed, CTH followed the trend set by H-ras, indicating that this membrane anchor part is sufficient for the protein's response to TGFβ-1 treatment (Fig. 1b–d, S2). Our results thus show that TGFβ-1 triggers relocalization of H-ras from the cytoplasm to the PM and causes increased accumulation of K-ras at the PM.

### TGFβ-1 treatment triggers a rise in positive membrane curvature

In a previous study, using tN-ras, a minimal membrane anchor of the N-ras isoform, we showed that Ras senses positive membrane curvature in in vitro reconstituted systems and that this membrane partitioning is essential for its enrichment in raft-like liquid-ordered membrane phases13. In addition, H-ras, and to a lesser extent also K-ras, was shown to preferentially localize to positively curved membranes in vivo14. We therefore investigated whether the changes in the partitioning of H-ras G12V, CTH and K-ras G12V during TGFβ-1 treatment are accompanied by the acquisition of positive membrane curvature. We transfected MCF7 cells with YFP-tagged Nadrin N-BAR, a sensor of positive membrane curvature20, and followed the changes in Nadrin N-BAR localization triggered by TGFβ-1 (Fig. S1b). At steady state, Nadrin N-BAR localized almost entirely in the cytoplasm, whereas several puncta of Nadrin N-BAR accumulated on the PM after treatment with TGFβ-1, indicating that the cells acquired increased positive membrane curvature (Fig. 2).
Apart from sensing positive membrane curvature, N-BAR domains have been documented to bind negatively charged lipids. The N-BAR domain of another N-BAR-containing protein, amphiphysin, was shown to interact equally well with two lipids, PI(4,5)P2 and phosphatidylserine (PS)21. To investigate the possible role of these protein-lipid interactions in the observed responses of Nadrin N-BAR to TGFβ-1, we transfected MCF7 cells with the GFP-tagged PH domain of PLCdelta, a sensor of PI(4,5)P222, and the C2 domain of Lactadherin, which specifically binds PS23 (Fig. S1b). In both cases the sensors of negatively charged lipids showed mostly PM localization that either remained unchanged (PLCdelta PH) or decreased (Lactadherin C2) after TGFβ-1 treatment. Similarly, the total amount of protein on the PM after TGFβ-1 treatment showed no significant difference in the case of PLCdelta PH but decreased significantly in the case of Lactadherin C2 (Fig. 2). The effect observed for Lactadherin C2 is likely caused by redistribution of PS away from the PM, because a proteomic study performed in MDCK cells did not detect any significant drop in the total PS amount after EMT induction24. It is also consistent with the observation that PM localization of PS decreases with increasing positive membrane curvature, whereas the localization of PI(4,5)P2 is not significantly affected14. Since the response of Nadrin N-BAR to TGFβ-1 treatment does not follow the trend observed for either PLCdelta PH or Lactadherin C2, we conclude that the TGFβ-1-induced changes in Nadrin N-BAR PM localization are driven solely by changes in membrane curvature and indicate a rise in positive membrane curvature.
### Increased level of positive membrane curvature leads to elevated H-ras PM localization

To validate our hypothesis that the increased accumulation of Ras proteins at the PM during TGFβ-1 treatment is driven by the rise in positive membrane curvature, we performed a set of further experiments using the fibroblast cell line NIH 3T3 as an independent system. NIH 3T3 cells were selected because fibroblasts are rich in caveolae, small PM invaginations about 60 nm in size, and thus contain a high number of areas with positive membrane curvature25. Indeed, both the M-C ratio and the proportion of cells with PM localization of Nadrin N-BAR were higher in NIH 3T3 cells than in MCF7 cells before or after TGFβ-1 treatment (Fig. S3). Similarly, both H-ras and CTH showed significantly higher association with the PM in NIH 3T3 cells than in MCF7 cells, supporting the notion that H-ras and CTH also recognize positive membrane curvature (Fig. S3). In contrast, recruitment of K-ras to the PM was lower in NIH 3T3 than in MCF7 cells (Fig. S3c). This result is consistent with a previous observation in BHK cells, where an increase in positive membrane curvature also led to PM depletion of K-ras14. A possible explanation could lie in the different ways in which H-ras and K-ras associate with the PM. PM localization of H-ras occurs through two lipid anchors, farnesyl and palmitoyl, whereas PM localization of K-ras, apart from its single lipid anchor (a C-terminal farnesyl), largely depends on the binding of negatively charged lipids, especially phosphatidylserine (PS), via its polybasic domain (PBD)26,27. Similar to K-ras, Lactadherin C2 also showed significantly reduced PM binding (Fig. S3e). Moreover, it was recently demonstrated that K-ras G12V strictly prefers PS with unsaturated acyl chains over fully saturated PS28.
Since caveolae are known to be composed mostly of lipids with saturated acyl chains, the pool of unsaturated PS available for binding may actually be significantly smaller in caveolae-rich fibroblasts than in epithelial cells and, in combination with an overall lower PS level, could represent the main limiting factor for K-ras G12V PM recruitment in NIH 3T3 cells. To test whether Ras proteins indeed preferentially localize to areas of high positive membrane curvature, we co-transfected H-ras G12V with Nadrin N-BAR in NIH 3T3 cells and observed a significant correlation in the PM localization of the two proteins (Fig. 3).

### Disruption of membrane curvature causes a drop in Ras protein PM localization

The results of our experiments strongly suggest that recognition of positive membrane curvature is a likely driving mechanism behind the increased PM targeting of Ras after TGFβ-1 treatment. Therefore, experiments reducing the positive membrane curvature of cell membranes should cause release of Ras from the PM. One way to reduce membrane curvature is to subject cells to hyposmotic shock. The cells start to swell, as indicated by increased FM1-43 staining (Figs. 4a, b, S4)29. Moreover, osmotic swelling leads to a rapid disappearance of caveolae, which is more prominent the lower the osmolarity gets. For our experiments we therefore used the hyposmotic level shown to be required to reduce caveolae by ~30%30. A drop in Nadrin N-BAR PM localization indeed confirmed that hyposmotic shock reduces positive membrane curvature in NIH 3T3 cells. H-ras and CTH were both rapidly released from the PM following hyposmotic shock induction, with the response of CTH being more pronounced than that of H-ras. K-ras also showed a fast and stable release from the plasma membrane (Figs. 4b, S4). Interestingly, the hyposmotic shock also decreased the level of Lactadherin C2 bound to the PM, which is in contradiction with the observation that PM localization of PS decreases with increasing positive membrane curvature (Fig. 2, and14).
One possible explanation could be that PS in the PM responds differently to changing membrane curvature under different conditions. An alternative explanation could be that, in addition to stretching the membrane and causing the disappearance of caveolae, strong hyposmotic stress may disrupt the global organization of the membrane bilayer, which may then contribute to the loss of interactions of some proteins with the PM14. Both of these possibilities could explain the release of Lactadherin C2 from the PM following hyposmotic shock, and they may also contribute to the release of K-ras, given the dependence of both of these proteins on PS binding (Figs. 4b, S4). The drop in K-ras PM localization was much larger than in the case of Lactadherin C2, suggesting that the reduction in PM PS level is likely not the only reason for K-ras release from the PM and that the reduction in positive membrane curvature also plays a role. However, the different levels of response of K-ras and Lactadherin C2 to the hyposmotic shock may have one more explanation. It was previously shown that distinct PS species prefer membranes of different curvature depending on their acyl chains: fully saturated and mono-unsaturated PS species favor highly curved membranes, while mixed-chain PS species prefer less curved membranes14. The distinct PS pools in the PM may respond to changes in membrane curvature in opposing ways, yielding a more subtle response of the global PS localization at the PM (sensed by Lactadherin C2) in comparison with K-ras G12V, which was shown to prefer PS with unsaturated acyl chains over fully saturated PS28. In contrast to Nadrin N-BAR, which showed a constant and gradual decrease in PM localization, the release of the Ras isoforms from the plasma membrane sooner or later reached a plateau, possibly reflecting the different modes of interaction with the PM and of membrane curvature recognition by these proteins (concave shape of N-BAR vs. lipid anchors of Ras) (Fig. S4).
Besides hyposmotic shock, which disturbs membrane curvature in general, positive membrane curvature can be specifically reduced by targeted lowering of the caveolae number. Expression of a dominant-negative caveolin (CavDGV) was shown to significantly reduce the number of caveolae (a reduction by 62% in BHK cells)31. We co-expressed GFP-CavDGV with RFP-H-ras G12V in NIH 3T3 cells, followed by quantitative analysis of H-ras PM localization. Indeed, we observed a decrease in H-ras PM localization when single- (RFP-H-ras G12V) and double- (GFP-CavDGV with RFP-H-ras G12V) transfected cells were compared (Figs. 5, S5). The reduction was partial because, consistent with previous observations, H-ras was detectable on the PM even at high expression levels of CavDGV (Figs. 5, S5)31. However, the effect of CavDGV expression on the PM localization of H-ras was even more evident from the negative correlation between the amount of H-ras associated with the PM and the level of CavDGV expression (Figs. 5, S5).

## Discussion

Ras proteins are small GTPases that serve as important regulators of cell pathways responsible for proliferation, differentiation and cell survival. Mutations at key conserved sites within Ras proteins lead to elevated GTP binding, which results in constitutive activation of Ras. Among the three Ras isoforms, mutations in K-ras are the most frequently detected in human cancer, but specific associations of individual mutated Ras isoforms with particular cancer types have been detected3,4. Like Ras proteins, TGFβ-1 is frequently overexpressed in human tumours32 and its expression is generally associated with poor prognosis33,34. One of the functions of TGFβ-1 is to induce EMT, an important biological process critical during embryogenesis but also exploited by cancer cells during tumor progression. The TGFβ-1 signal alone was shown to be insufficient for the acquisition of invasive potential by cancerous cells.
To gain full invasive potential, activated Ras (H-ras), which alters the TGFβ-1 response, is needed10. Moreover, activated Ras (H-ras) or its downstream effectors are important for promoting EMT through autocrine production of TGFβ-1 and continuous TGFβ-1 signaling8,9. PM localization of Ras is essential for its proper signaling. In this study we showed that TGFβ-1 treatment of MCF7 breast cancer cells is followed by elevated PM localization of activated H-ras and K-ras, suggesting that TGFβ-1 itself can promote Ras PM residence. The driving mechanism appears to be triggered by TGFβ-1 and manifested as alterations in membrane curvature resulting in an increased level of positive membrane curvature. The recognition of membrane curvature by activated Ras is likely important for proper signaling, because Ras signaling activity is known to be localized mostly at the periphery of a cell or at the leading edge of a migrating cell, where membrane ruffling is prominent in both cases35,36. Our results provide evidence supporting the hypothesis that Ras proteins are indeed able to react to changes in membrane curvature in vivo, in agreement with recent observations by others14. The membrane anchor part of H-ras appears to be central to, and sufficient for, the H-ras recognition of positive membrane curvature. H-ras, containing two lipid anchors, also seems to be a more potent sensor of positive membrane curvature than K-ras, which contains only one lipid anchor; this is consistent with our previous finding that the presence of more lipid anchors leads to higher sensitivity to positive membrane curvature12. Indeed, active K-ras prefers less curved membranes than active H-ras even in vivo, and an increase in positive curvature rather leads to its disappearance from the PM14.
Although PM localization of K-ras largely depends on the presence of PS, its membrane partitioning driven by TGFβ-1 treatment seems to be at least partially PS-independent, as the amount of K-ras on the PM increases during TGFβ-1 treatment, whereas the amount of the specific PS sensor Lactadherin C2 decreases (Figs. 1 and 2, S2). The effect of TGFβ-1 treatment on Ras PM localization described in this paper suggests a simple mechanism for a possible positive feedback loop within the previously described TGFβ-1/Ras cooperation during cancer progression. In this hypothesis, an increased level of PM membrane curvature during TGFβ-1-induced EMT would attract more Ras molecules to the PM, resulting in its facilitated activation and subsequent further promotion of EMT and acquisition of invasive potential (Fig. 6). However, further validation of this theory is needed. In particular, a detailed insight into the molecular mechanism behind the observed phenotypes would bring a broader understanding of the suggested link between membrane curvature sensing by Ras proteins and TGFβ-1-induced EMT, and could be achieved, for example, by studying the process after inhibition of TGFβ-1-induced signaling pathways37,38. It is noteworthy that our experiments with CTH suggest that the hyperactivating G12V mutation is not necessary for the curvature coupling of H-ras. The novel TGFβ-EMT feedback loop we propose here, which leads to H-ras membrane localization and activation, is mediated by the membrane-curvature-sensing biophysical properties of the CTH anchor. The mechanism of membrane-curvature-based H-ras activation is thus, in principle, not predicated on the existence of mutations. However, this mechanism will be particularly sensitive to stimuli that modify plasma membrane morphology during both development and tumor formation39,40.
Last but not least, it is important to highlight that the pathological relevance of the observed effect of TGFβ-1 treatment on the PM localization of different Ras isoforms for cancer progression is currently unclear and should be the subject of further studies.

## Methods

### Cell lines and plasmids

MCF7 cells, provided by Prof. Moshe Oren (Weizmann Institute of Science), and NIH 3T3 cells (ATCC CRL-1658) were maintained in culturing medium composed of DMEM (GIBCO, Thermo Fisher Scientific) supplemented with 10% FBS (GIBCO, Thermo Fisher Scientific) at 37 °C and 5% CO2. The CFP-CTH (lipid anchor of H-ras, AA 166-189)19 and GFP-H-ras G12V plasmids were supplied by Prof. Daniel Abankwa (University of Luxembourg)17, the GFP-K-ras G12V plasmid was a kind gift of Prof. John F. Hancock (University of Texas)41, and the RFP-H-ras G12V and GFP-CavDGV plasmids were provided by Prof. Robert G. Parton (Institute for Molecular Bioscience, University of Queensland)31. The Nadrin-YFP plasmid (a YFP fusion of the BAR domain of the nadrin 2 protein, AA 1-244 + 10 AA linker) was provided by Dr. Milos Galic (Institute of Medical Physics and Biophysics, University of Münster)20. The PLCδ-GFP (GFP fusion of the PH domain of the rat PLCδ protein) and Lact-GFP (GFP fusion of the C2 domain of the bovine protein Lactadherin) plasmids were a kind gift from Dr. Carsten Schultz (EMBL Heidelberg, Germany).

### Plating of cells and transfection

Cells were grown in 6-well plates on round cover slips (Ø 18 mm, VWR) that were thoroughly cleaned using 2% Hellmanex III (Hellma® Analytics) and MeOH prior to use. For MCF7 cells, 6 × 10^4 cells were plated per well; for NIH 3T3 cells, 5 × 10^4 cells per well. For the co-transfection experiment of GFP-CavDGV with RFP-H-ras G12V, 10 × 10^4 NIH 3T3 cells were plated per well. Cells were grown for 24 h in culturing medium before transfection. For the co-transfection experiment of the RFP-H-ras G12V and Nadrin-YFP N-BAR plasmids, 5 × 10^4 cells were plated and 1 μg of each plasmid was used.
The transfection was performed using Lipofectamine Plus (Invitrogen) according to the manufacturer's instructions.

### TGFβ-1 treatment

24 h after transfection, the transfected cells were placed in fresh culturing medium supplemented with 2 ng/ml TGFβ-1 (R&D Systems) and incubated in this medium for two days.

### Live cell imaging

For imaging, the cover slips were mounted in custom-made microscopy chambers with a total volume of 90 μL. Cells were imaged in imaging medium (DMEM without phenol red with 10% FBS) using a Leica TCS SP5 inverted confocal microscope with a water-immersion objective HCX PL APO CS ×63 (NA 1.2). GFP signal was detected at 495–590 nm (exc. 488 nm); YFP at 520–538 nm (exc. 514 nm); CFP at 468–590 nm (exc. 458 nm); RFP at 565–699 nm (exc. 543 nm); FM1-43 at 560–610 nm (exc. 471 nm). In the case of co-expression of proteins with two different fluorescent tags, sequential imaging was used to avoid cross-excitation. Images had a resolution of 2048 × 2048 pixels, with a pixel size of 120 nm and 16-bit depth. All acquired images were then processed with the open-source software Fiji.

### Hyposmotic shock

We imposed hyposmotic shock following the protocol described in20. Cover slips with transfected NIH 3T3 cells were mounted into microscopy chambers and imaged (sequential imaging, one image taken every 30 s). After one minute of imaging, the full chamber volume (90 μL) of imaging medium was replaced with imaging medium diluted with milliQ water in a ratio of 1:6 (from 300 to 50 mM DMEM), and the sample was imaged for another 5 min. The control sample was treated in the same way except that the imaging medium was replaced with 90 μL of fresh undiluted imaging medium. Membrane swelling after application of the diluted medium was checked visually and also by plasma membrane staining with FM1-43 dye (Thermo Fisher Scientific). The staining was performed according to the manufacturer's instructions.
### Quantitative image analysis

Quantitative image analysis was performed in IgorPro (version 6.37, Wavemetrics, USA). Only healthy cells in focus and with clear borders (identified by eye) were used for the analysis. First, to better visualize the cell membrane and to reduce noise, images were convolved with a Gaussian blur filter (5-pixel size). Then a line ROI (5 pixels wide) was drawn manually on top of the plasma membrane. This line ROI allowed us to isolate the pixels in the image belonging to the plasma membrane and to the cytoplasm. The line ROIs were always drawn as close to the cell interior as possible; in this way we minimized the potential impact of fluorescent signal from adjacent cells on the plasma membrane intensity value of the cell of interest in cases where two or more cells were in close contact. Finally, based on the line ROI, statistics of the fluorescence intensity in the plasma membrane and cytoplasm pixels were calculated from the raw image. The membrane-to-cytoplasm intensity ratio (M-C ratio) was assessed by dividing the average membrane intensity by the average cytoplasm intensity. To determine whether the difference in M-C ratio between two samples was statistically significant, we performed a two-tailed t-test (significance level = 0.05, with equivalent degrees of freedom accounting for possibly unequal variances). For the analysis of data from the hyposmotic shock experiment, the integrated intensity of the membrane was used to construct intensity-versus-time traces. To be able to compare the osmotic shock effect between different samples, each trace was normalized to the average intensity of its first three points. Next, the average intensity of the first 3 points (I0) and of the points from point 4 until the end of the trace (I1) was calculated and used for the percentage intensity change ($$\Delta I$$) calculation.
Then $$\Delta I = \left( \frac{I_1 - I_0}{I_0} \right) \times 100$$ was calculated for each individual cell, and the average and SD were taken. To analyze the co-expression experiment of GFP-CavDGV with RFP-H-ras G12V in NIH 3T3 cells, a linear correlation test was performed, and the linear correlation coefficient and its standard error were calculated to estimate the degree of correlation between the H-ras M-C ratio and the total CavDGV expression. The significance level was set to 0.05, which gave 95% confidence intervals for the correlation coefficient.

### Qualitative image analysis

Qualitative image analysis was based on visual inspection of the acquired images. According to the localization of the fluorescent signal coming from the expressed fusion proteins, cells were manually classified as showing cytoplasmic localization (i.e., fluorescent signal visible only in the cytoplasm with no plasma membrane localization) or plasma membrane localization (i.e., fluorescent signal localized entirely or at least partially on the plasma membrane) of the expressed protein. Only healthy cells in focus and with clear borders were assessed in the analysis.
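The quantitative analysis above was carried out in IgorPro, but the two core quantities are simple to state. The following Python sketch (function and variable names are our own, purely illustrative) reproduces the M-C ratio and the ΔI computation on a per-cell intensity trace:

```python
import numpy as np

def m_c_ratio(membrane_pixels, cytoplasm_pixels):
    """Membrane-to-cytoplasm ratio: average membrane intensity
    divided by average cytoplasm intensity."""
    return float(np.mean(membrane_pixels) / np.mean(cytoplasm_pixels))

def percent_intensity_change(trace, n_baseline=3):
    """Normalize an intensity-versus-time trace to the mean of its first
    n_baseline points, then return dI = (I1 - I0) / I0 * 100, where I0 is
    the mean of the baseline points and I1 the mean of the remainder."""
    t = np.asarray(trace, dtype=float)
    t = t / t[:n_baseline].mean()          # normalize to baseline
    i0 = t[:n_baseline].mean()             # equals 1.0 after normalization
    i1 = t[n_baseline:].mean()
    return (i1 - i0) / i0 * 100.0

# A membrane twice as bright as the cytoplasm gives an M-C ratio of 2:
print(m_c_ratio([200, 210, 190], [100, 105, 95]))    # 2.0
# A trace that halves after the baseline gives dI = -50%:
print(percent_intensity_change([2, 2, 2, 1, 1, 1]))  # -50.0
```

Per-cell ΔI values computed this way would then be averaged across cells, as described above.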
# Installing Openccg OpenCCG is a Java library which can handle both parsing and generation. I’ve mostly used it for surface realization, converting fairly syntactic meaning representations into a natural language text, but you can use it for parsing or for generation from higher-level semantic representations if you’d like. 1. Install the current version of OpenCCG ## Installing OpenCCG We’re partly following the OpenCCG README in these guidelines, and you should defer to the official documentation if anything in this tutorial conflicts with that, unless otherwise noted. ### Prerequisites • Java 1.6 or later • Python 2.4 or later (check Py3 compatibility) ### Getting the code If you want to keep your version up to date with the official releases, the easiest way to do that is to clone the repository from GitHub: git clone https://github.com/OpenCCG/openccg.git This will create a directory openccg/ in your current working directory and fill it with the code from the project. At the time of writing, the SourceForge page provides version 0.9.5 while GitHub has version 0.9.6. The differences between these two, however, appear to be purely cosmetic (i.e. the version number was updated when files were prepared for release on GitHub). ### Getting required libraries for the Git Repo The one downside to using git is that it is not intended to track binary files, so if you chose to clone the git repo above, you will need to download the archive from SourceForge anyway in order to get the following libraries: • ant-contrib.jar • ant.jar • ant-junit4.jar • ant-junit.jar • ant-launcher.jar • javacc.jar • jdom.jar • jgrapht-jdk1.6.jar • jline.jar • jopt-simple.jar • junit-4.10.jar • openccg.jar • serializer.jar • trove.jar • xalan.jar • xercesImpl.jar • xml-apis.jar • xsltc.jar The libraries are located in the lib/ directory of openccg-0.9.5.tgz. If you installed via git, copy these files to the lib/ directory of your OpenCCG git repository. 
If you’re installing from SourceForge, they’re already in the right place, so you can skip this step.

### Building and installing OpenCCG

In order to use OpenCCG, we first have to compile the code. This requires setting a few ‘environment variables’ on your computer so it knows where to find (1) the Java Development Kit (JDK) and (2) OpenCCG. Then we can run the ccg-build script to build the project and make it executable.

#### Setting Environment Variables

We have to set at least two environment variables for OpenCCG to work: JAVA_HOME, which is where our JDK is installed, and OPENCCG_HOME for the location of our openccg/ directory. In bash, you can find out where Java is installed by running which java. The following examples show the result on my computer:

$ which java
/usr/bin/java

This is often a symbolic link (a shortcut), though, so it is helpful to get the full listing and keep following the links until we reach a real file and not a shortcut:

$ ls -l /usr/bin/java
/usr/bin/java -> /etc/alternatives/java
$ ls -l /etc/alternatives/java
/etc/alternatives/java -> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-1.b12.fc25.x86_64/jre/bin/java

We see here that this is the location of java, but several of the OpenCCG scripts call javac (the Java compiler) by appending bin/javac to the JAVA_HOME environment variable, so we want to set our JAVA_HOME to /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-1.b12.fc25.x86_64 instead. If you run into errors, double-check that you have javac installed in addition to java. Plain Java is often distributed as the Java Runtime Environment (JRE), which runs compiled Java code from, e.g., .jar files, while javac is part of the JDK. On Linux and Mac you can use the bash shell in a terminal window to set JAVA_HOME:

$ export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-1.b12.fc25.x86_64

(Note that the $ is not a part of the command. It just represents the command prompt.)
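As a quick check of the JDK-vs-JRE distinction above, a small shell function can verify that a candidate JAVA_HOME actually contains bin/javac before you export it (a sketch; the function name is mine, not part of OpenCCG):

```shell
# Verify that a candidate JAVA_HOME contains a Java compiler (bin/javac),
# i.e. that it points at a JDK rather than a plain JRE.
check_java_home() {
  if [ -x "$1/bin/javac" ]; then
    echo "OK: $1 looks like a JDK"
  else
    echo "ERROR: no executable $1/bin/javac (JRE only, or wrong path?)"
  fi
}

# Example: check_java_home /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-1.b12.fc25.x86_64
```

If the check fails, install the JDK package for your distribution before proceeding.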
If you are using something other than bash on Mac or Linux, try googling ‘setting environment variables in OPERATING SYSTEM OR TERMINAL’ for your operating system or terminal/shell. The OpenCCG README also includes some details for configuring these variables in Windows. We should similarly set the environment variable OPENCCG_HOME to point to the openccg/ directory. If it is installed in your home directory, this would be

$ export OPENCCG_HOME=/home/user/openccg

on most (all?) Linux distributions.

#### Setting persistent environment variables

In Linux, you can modify the .bashrc file located in your user’s home directory to include the export statements mentioned above. Otherwise you will have to re-set them every time you want to run OpenCCG in a new session.

#### Building OpenCCG

In a terminal, navigate to your openccg/ directory and execute the following:

$ ./bin/ccg-build

The OpenCCG project uses Apache’s “Ant” build system to handle all the compiling (and then some). If this worked, you should see a bunch of output followed by “BUILD SUCCESSFUL” and an estimate of how long the build took. And that’s it! Now you should be able to use OpenCCG. If you’re not sure what to do next, keep an eye out for additional OpenCCG tutorials in the near future.

###### Research Fellow in Natural Language Generation

Dave Howcroft is a computational linguist working at Edinburgh Napier University.
# Generating XKCD passwords

Obviously, everyone should use a password manager to generate and store long unique random strings of characters for the vast majority of their passwords. However, one still needs to memorize a master password for their password vault! I figured that I would write my own little script to generate such a password, XKCD style:

```clojure
(require '(clojure [string :as str])
         '(clojure.java [io :as io]))

(import (java.security SecureRandom)
        (java.util ArrayList Collections))

(defn secure-shuffle [coll]
  (let [al (ArrayList. coll)]
    (Collections/shuffle al (SecureRandom.))
    (vec al)))

(defn lines [x]
  (with-open [rdr (io/reader x)]
    (doall (line-seq rdr))))

(defn password [n words]
  (str/join \space (take n (secure-shuffle words))))

(def filename "words.txt")
(def url "https://github.com/dwyl/english-words/raw/master/words.txt")

(if-not (.exists (io/file filename))
  (spit filename (slurp url)))

(println (password 4 (lines filename)))
```

Any comments on code style are very much appreciated, but I am more interested in the security of this script. To be specific, if I run this using lein-exec 0.3.6 on an Ubuntu 15.10 machine with Java 8u74 and Leiningen 2.6.1 installed, can I be confident that the resulting password has $4 \log_2 354986 \approx 74$ bits of entropy and is safe to use as my master password?

## 1 Answer

Obligatory link to the relevant Security question: XKCD #936: Short complex password, or long dictionary passphrase?

can I be confident that the resulting password has $4 \log_2 354986 \approx 74$ bits of entropy and is safe to use as my master password?

That's two questions which are only partially related to each other. Let's start with the easy question first. Is it safe to use as your master password? A good password is:

• Hard to guess.
• Easy to remember.
• Hard to brute-force.

You can enforce the first by making sure there's no personal data (birthday, name of your dog, name of the site, parts of your username, etc.) mentioned in the password.
You got that covered. The second issue is only partially tackled: not all generated output makes sense or is easy to remember. However, it's likely that given enough tries a good candidate will pop up. Keep in mind that words from the categories mentioned in the previous paragraph should be avoided. The third appears to be enforced for now, assuming you reach 74 bits of entropy as stated. I don't feel qualified to verify that, but entropy is about the selection process and not about the selected password itself. If the process has an entropy of 74 bits, it will always have 74 bits. Computers change, brute-force methods change, and what was a safe password 20 years ago may not be safe to use today. The generated passwords appear to be safe, since they would take a long time to crack.

• Thanks for the link; I have indeed read the answers to that question (and several others) and have taken a course in cryptography, and I feel that I pretty thoroughly understand the security of the XKCD scheme. So just to clarify your answer: you're unable to comment on whether my code generates such a password as it's supposed to do? – Sam Estep Mar 21 '16 at 18:18
• I can verify your code generates something along those lines and the passwords appear to be quite secure. However, I can't verify the exact amount of entropy (algorithms and theoretical computations are not my strong suit). There's more than just length going on here: the selection process, length of the words, chance of certain words being selected, total length of the password; it's all part of the entropy. – Mast Mar 21 '16 at 18:38
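The entropy figure in the question is easy to sanity-check. Since the script shuffles the list and takes 4 distinct words in order, the exact outcome count is $N(N-1)(N-2)(N-3)$, which is almost identical to the $N^4$ approximation. A quick sketch (the word count is the one cited in the question):

```python
import math

n_words = 354986   # size of the dwyl english-words list cited in the question
n_picked = 4

# with replacement (the usual approximation): n * log2(N)
approx = n_picked * math.log2(n_words)

# the script takes 4 *distinct* words in order, so count N*(N-1)*(N-2)*(N-3)
exact = sum(math.log2(n_words - k) for k in range(n_picked))

print(f"approx: {approx:.2f} bits, exact: {exact:.2f} bits")
```

Both come out just under 74 bits, so the "about 74 bits" claim is arithmetically sound, provided the shuffle is unbiased.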
# fa.functional-analysis – Taylor series on a Riemannian manifold

I need some help with the following problem. Let $$M$$ be a Riemannian manifold and $$f$$ a smooth function, and consider the integral $$\int_M \Gamma(x,y)(f(y)-f(x))\,dV_y$$ where $$dV_y$$ is the volume measure on the manifold and $$\Gamma$$ is a positive function. Now, my question is how to do a Taylor expansion of this integral. For example, in the case $$M=\mathbb{R}$$, with $$y=x-\epsilon z$$ we have $$\int_{\mathbb{R}}\Gamma(x,x-\epsilon z)(f(x-\epsilon z)-f(x))\,dy=\int_{\mathbb{R}}\Gamma(x,x-\epsilon z)\left(-\epsilon z f'(x)+\frac{(\epsilon z)^2}{2}f''(x)+\dots\right)(-\epsilon\, dz)$$ But if I am on a Riemannian manifold, how can I do something like that? I don't know any Taylor series of that form. In any case, for the manifold setting I tried to use the exponential map and polar coordinates. Any idea will be appreciated, thanks!
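For reference (a standard fact, not part of the original post): on a manifold, the role of the Euclidean Taylor expansion is played by the expansion along geodesics through the exponential map. For $$v \in T_xM$$ and small $$\epsilon$$,

```latex
f(\exp_x(\epsilon v)) = f(x) + \epsilon\, \mathrm{d}f_x(v)
  + \frac{\epsilon^2}{2}\, \operatorname{Hess} f_x(v,v) + O(\epsilon^3)
```

so writing $$y=\exp_x(\epsilon v)$$ and integrating in geodesic polar coordinates gives the manifold analogue of the one-dimensional computation above.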
# On weaker forms of the abc conjecture from the theory of Hölder and logarithmic means

In this post (the content of this post is now cross-posted from Mathematics Stack Exchange; see below) we denote the radical of an integer $$n>1$$ as the product of the distinct primes dividing it, $$\operatorname{rad}(n)=\prod_{\substack{p\mid n\\p\text{ prime}}}p,$$ with the definition $$\operatorname{rad}(1)=1$$. The abc conjecture is an important problem in mathematics, as one can see from the Wikipedia article abc conjecture. In this post I mean the formulation ABC conjecture II stated in the previous link. I was inspired by the theory of the generalized mean or Hölder mean (see [1]) to state the following claim (Mathematics Stack Exchange 3648776, with title A weak form of the abc conjecture involving the definition of Hölder mean, asked Apr 28 '20).

Claim. On the assumption of the abc conjecture, $$\forall \varepsilon>0$$ there exists a constant $$\mu(\varepsilon)>0$$ such that for triples of positive integers $$a,b,c\geq 1$$ satisfying $$\gcd(a,b)=\gcd(a,c)=\gcd(b,c)=1$$ and $$a+b=c$$, one has for real numbers $$q>0$$ that the following inequality holds: $$c<\mu(\varepsilon)\left(\frac{\operatorname{rad}(a)^q+\operatorname{rad}(b)^q+\operatorname{rad}(c)^q}{3}\right)^{3(1+\varepsilon)/q}.\tag{1}$$

Remark 1. Thus as $$q\to 0$$, from the theory of the Hölder mean we recover the abc conjecture. In a similar way I was inspired by the definition of the logarithmic mean and its relationship to the arithmetic mean to pose the following conjecture (Mathematics Stack Exchange 3580506, with title Weaker than abc conjecture invoking the inequality between the arithmetic and logarithmic means, asked Mar 14 '20).

Conjecture.
For every real number $$\varepsilon>0$$, there exists a positive constant $$\mu(\varepsilon)$$ such that for all pairs $$(a,b)$$ of coprime positive integers $$1\leq a<b$$, the following inequality holds: $$2\,\frac{b-a}{\log\left(\frac{b}{a}\right)}\leq \mu(\varepsilon)\operatorname{rad}(ab(a+b))^{1+\varepsilon}.\tag{2}$$

Remark 2. I think that the previous conjecture is weaker than the abc conjecture by virtue of the relation between the arithmetic and logarithmic means.

Question. I wonder what work can be done to prove/discuss unconditionally (I mean on the assumption of the cited requirements/conditions, but without invoking any formulation of the abc conjecture) the veracity of the previous Claim for the smallest $$q>0$$ close to* $$0$$ that you are able to prove. Similarly**, is it possible to prove the Conjecture? Many thanks.

*I'm curious to know what is the smallest $$q>0$$ close to $$0$$ such that the inequality in the Claim is true. I think the right discussion is for $$q>0$$, but if you want to discuss $$|q|$$ very close to $$0$$ because you think that it makes sense, feel free to study the inequality for real numbers $$|q|$$ very close to $$0$$.

$$^{**}$$On the other hand, I think it should be possible to prove the Conjecture, since I think that this statement is much weaker than the abc conjecture. I was inspired by the Wikipedia articles for Generalized mean and Logarithmic mean. I add references to the bibliography. I know the statement of the formulation ABC conjecture II, for example, from [3].

## References:

[1] P. S. Bullen, Handbook of Means and Their Inequalities, Dordrecht, Netherlands: Kluwer (2003).

[2] B. C. Carlson, Some inequalities for hypergeometric functions, Proc. Amer. Math. Soc., 17, p. 36 (1966).

[3] Andrew Granville and Thomas J. Tucker, It’s As Easy As abc, Notices of the AMS, Volume 49, Number 10 (November 2002).

• (1/2) I'm curious about whether the questions in my Question have good mathematical content.
I bring this post here to ask whether the Question is interesting in the context of the abc conjecture. I've asked on this site, MathOverflow, On variants of the abc conjecture in terms of Lehmer means, with identifier 350998 (asked on Jan 23 '20). If from your experience and knowledge you can explain whether this kind of inequality can be potentially interesting, please feel free to add your comments or explain it in your answer to the Question in this post. It would be appreciated. May 8, 2020 at 10:21

• (2/2) I hope that my inequalities are interesting; feel free to refer to them if you know a colleague (a professor) who has studied the abc conjecture. On the other hand, I am trying to read other questions posted on MathOverflow about weak forms of the abc conjecture, and I know that the literature also contains articles written by professors about relaxations of the abc conjecture. May 8, 2020 at 10:21

• abc implies $b-a$ can't be much smaller than $a+b$, and we have $\log(b-a) \sim \log(a+b)$ – joro May 8, 2020 at 11:23

• Many thanks for your contribution, and many thanks to the upvoter. As an aside/unrelated comment, the logarithmic mean, as the Wikipedia article Stolarsky mean notes, is a particular case of the Stolarsky mean; I mention it in case it is inspiring for some user. If I understand well, Lehmer was a number theorist, and Stolarsky studies in particular Diophantine approximation. Hölder gave us, in particular, his important inequality, as the Wikipedia article Hölder's inequality notes (I mention these facts for all users, since I like to study certain mathematical details when I try to evoke certain relationships). May 8, 2020 at 12:53

• Many thanks to the upvoter, and to all those who read the post, for their attention. May 29, 2020 at 16:56

abc implies your conjecture with $$b-a$$.

Case 1. Let $$a,b,c=a+b$$ be a bad abc triple, i.e.
$$c < \operatorname{rad}(ab(a+b))$$. We have $$\operatorname{rad}(ab(a+b)) > c > b - a$$.

Case 2. Let $$a,b,c=a+b$$ be a good abc triple, i.e. $$c>\operatorname{rad}(ab(a+b))$$. Then $$T : (b-a)^2,4ab,(a+b)^2$$ is a good abc triple too. The radical is a divisor of $$ab(a+b)(b-a)$$, and we have $$(a+b)^2 > (a+b)(b-a)$$. If $$\log(b-a) < (1-C) \log(b+a)$$, this will give infinitely many good abc triples with quality $$2/(2-C)$$, which contradicts abc. In summary, abc implies there are only finitely many good abc triples satisfying $$\log(b-a) < (1-\epsilon) \log(b+a)$$.

• Many thanks for your excellent and interesting answer. I was disconnected (yesterday) and I don't know whether I lost the opportunity to award your answer the bounty that was expiring (now I don't know if my words make sense, since I don't know how a bounty works on its last day: the bounty expired just 7 hours ago and your answer was edited 15 hours ago). May 31, 2020 at 6:39

• I've flagged my (question) post asking, if possible, to award your answer the bounty; I was disconnected (yesterday) and I don't know whether I lost the opportunity to award the available answer while the bounty was expiring (I don't know if it is still possible). No response is required; good day. May 31, 2020 at 7:18

• The moderators told me about my flag that they don't have a method to restore bounties. Thus I'm sorry that, as a consequence of being disconnected, I cannot award the bounty. Jun 6, 2020 at 14:24
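Inequality (2) is also easy to probe numerically for small coprime pairs. A sketch (taking $$\varepsilon=0$$ and $$\mu=1$$ just to get a feel for the data; this proves nothing about the conjecture, which allows an arbitrary constant $$\mu(\varepsilon)$$):

```python
from math import gcd, log

def rad(n):
    # radical: product of the distinct primes dividing n (rad(1) = 1)
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * n if n > 1 else r

def log_mean(a, b):
    # logarithmic mean L(a, b); by convention L(a, a) = a
    return a if a == b else (b - a) / log(b / a)

# pairs where 2*L(a,b) exceeds rad(a*b*(a+b)), for coprime a < b < 50
violations = [(a, b) for a in range(1, 50) for b in range(a + 1, 50)
              if gcd(a, b) == 1 and 2 * log_mean(a, b) > rad(a * b * (a + b))]
print(violations)
```

The pair (1, 8), for instance, shows up: 2·L(1, 8) ≈ 6.73 while rad(1·8·9) = 6, which is exactly why the constant $$\mu(\varepsilon)$$ is needed in the statement.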
# Batch Normalization Backpropagation

2018-05-12

Batch normalization is a technique for making neural networks easier to train. Although these days any deep learning framework will implement batch norm and its derivative for you, it is useful to see how to derive the gradient of batch norm. It seems to be often left as "an exercise for the reader" in deep learning courses. I had some trouble getting the correct derivation of the gradient on the first try, so I've outlined the derivation here.

Notation:

• $z_{ij}$: Values after the affine transformation (matrix multiplication by parameter $\mathbf{W}$).
• $\hat{z}_{ij}$: Values after normalization.
• $\tilde{z}_{ij}$: Values after scaling by parameters $\gamma_i$ and $\beta_i$.
• $f$: Scalar cost function.

where $i = 1...n_{\mathrm{out}}$ (number of layer outputs) and $j = 1...m$ (number of examples in the batch).

Equations for batch normalization:

\begin{equation*} \mu_i = \frac{1}{m} \sum_{j=1}^m z_{ij} \end{equation*} \begin{equation*} \sigma_i^2 = \frac{1}{m} \sum_{j=1}^m (z_{ij} - \mu_i)^2 \end{equation*} \begin{equation*} \hat{z}_{ij} = \frac{z_{ij} - \mu_i}{\sqrt{\sigma_i^2 + \epsilon}} \end{equation*} \begin{equation*} \tilde{z}_{ij} = \gamma_i \hat{z}_{ij} + \beta_i \end{equation*}

## Goal

• Given: $\partial f / \partial \tilde{z}_{ij}$, the array of derivatives of the scalar loss $f$ with respect to the output $\tilde{z}_{ij}$.
• Derive: $\partial f / \partial \gamma_i$ and $\partial f / \partial \beta_i$, the vectors of derivatives with respect to our parameters, and $\partial f / \partial z_{ij}$, the array of derivatives with respect to the layer inputs.

We will start with the last equation and derive the gradient with respect to the two parameters $\gamma_i$ and $\beta_i$.

## Derivation: $\partial f / \partial \gamma_i$

We'll use the derivation of $\partial f / \partial \gamma_i$ to demonstrate the general method of using the chain rule.
Using the chain rule, the partial derivative we're after can be written in terms of the partial derivative we are given and one we will derive from the above equations: \begin{equation*} \frac{\partial f}{\partial \gamma_i} = \sum_{i'j'} \frac{\partial f}{\partial \tilde{z}_{i'j'}} \frac{\partial \tilde{z}_{i'j'}}{\partial \gamma_i} \end{equation*} Note that, in general, we must always sum over $i'$ and $j'$ in this manner, as $\gamma_i$ can affect $f$ through any entry in $\tilde{z}_{i'j'}$. This is the key point: even though $\tilde{z}$ and $\gamma$ both have the same size in the first dimension (indexed by $i$), any entry in $\tilde{z}$ might depend on any entry in $\gamma$: $\tilde{z}_{11}$ might depend on $\gamma_1$, $\gamma_2$, $\gamma_3$, etc., and all these partial derivatives must be summed. In this particular case, it happens to be simpler. We can see from the equation for $\tilde{z}_{ij}$ that changing $\gamma_i$ has no effect on $\tilde{z}_{i'j'}$ for $i' \ne i$. Or in other words, $\partial \tilde{z}_{i'j'} / \partial \gamma_i = 0$ for $i' \ne i$. So, only terms with $i' = i$ actually contribute to the sum $\sum_{i'j'}$, and we can take $i'$ out of the sum and replace $i'$ with $i$ everywhere: \begin{align*} \frac{\partial f}{\partial \gamma_i} &= \sum_{j'} \frac{\partial f}{\partial \tilde{z}_{ij'}} \frac{\partial \tilde{z}_{ij'}}{\partial \gamma_i} \\ &= \boxed{ \sum_{j'} \frac{\partial f}{\partial \tilde{z}_{ij'}} \hat{z}_{ij'} } \end{align*} Or in Python:

dgamma = np.sum(dZtilde * Zhat, axis=1, keepdims=True)

## Derivation: $\partial f / \partial \beta_i$

This one is easy.
Following the same logic as above, \begin{align*} \frac{\partial f}{\partial \beta_i} &= \sum_{j'} \frac{\partial f}{\partial \tilde{z}_{ij'}} \frac{\partial \tilde{z}_{ij'}}{\partial \beta_i} \\ &= \boxed{ \sum_{j'} \frac{\partial f}{\partial \tilde{z}_{ij'}} } \end{align*} In Python:

dbeta = np.sum(dZtilde, axis=1, keepdims=True)

## Derivation: $\partial f / \partial z_{ij}$

First, get the derivative with respect to $\hat{z}_{ij}$: \begin{align*} \frac{\partial f}{\partial \hat{z}_{ij}} &= \sum_{i'j'} \frac{\partial f}{\partial \tilde{z}_{i'j'}} \frac{\partial \tilde{z}_{i'j'}}{\partial \hat{z}_{ij}} \\ &= \sum_{i'j'} \frac{\partial f}{\partial \tilde{z}_{i'j'}} \delta_{ii'} \delta_{jj'} \gamma_{i'} \\ &= \boxed{ \frac{\partial f}{\partial \tilde{z}_{ij}} \gamma_i } \end{align*} In Python:

dZhat = dZtilde * gamma

Now, the final and most tedious part: given $\partial f / \partial \hat{z}_{ij}$, go the rest of the way. \begin{equation*} \frac{\partial f}{\partial z_{ij}} = \sum_{i'j'} \frac{\partial f}{\partial \hat{z}_{i'j'}} \frac{\partial \hat{z}_{i'j'}}{\partial z_{ij}} \end{equation*} Changing $z_{ij}$ has no effect on $\hat{z}_{i'j'}$ for $i' \ne i$. Or in other words, $\frac{\partial \hat{z}_{i'j'}}{\partial z_{ij}} = 0$ for $i' \ne i$.
So, only terms with $i' = i$ actually contribute to the sum $\sum_{i'j'}$, and we can take $i'$ out of the sum and replace $i'$ with $i$ everywhere: \begin{equation*} \frac{\partial f}{\partial z_{ij}} = \sum_{j'} \frac{\partial f}{\partial \hat{z}_{ij'}} \frac{\partial \hat{z}_{ij'}}{\partial z_{ij}} \end{equation*} Substitute in the equation for $\hat{z}_{ij}$: \begin{equation*} \frac{\partial f}{\partial z_{ij}} = \sum_{j'=1}^m \frac{\partial f}{\partial \hat{z}_{ij'}} \frac{\partial}{\partial z_{ij}} \left( (z_{ij'} - \mu_i)(\sigma_i^2 + \epsilon)^{-1/2} \right) \end{equation*} Expand the partial: \begin{equation*} \frac{\partial f}{\partial z_{ij}} = \sum_{j'=1}^m \frac{\partial f}{\partial \hat{z}_{ij'}} \left( \frac{\partial z_{ij'}}{\partial z_{ij}} (\sigma_i^2 + \epsilon)^{-1/2} - \frac{\partial \mu_i}{\partial z_{ij}} (\sigma_i^2 + \epsilon)^{-1/2} - \frac{1}{2} (z_{ij'} - \mu_i)(\sigma_i^2 + \epsilon)^{-3/2} \frac{\partial \sigma_i^2}{\partial z_{ij}} \right) \end{equation*} For the first term, we realize that $\partial z_{ij'} / \partial z_{ij}$ is 1 if $j' = j$ and otherwise 0, so we can replace it with $\delta_{j,j'}$: \begin{equation*} \frac{\partial z_{ij'}}{\partial z_{ij}} = \delta_{j, j'} \end{equation*} For the second and third terms, we will need $\partial \mu_i / \partial z_{ij}$ and $\partial \sigma_i^2 / \partial z_{ij}$.
Substituting in the equations for $\mu_i$ and $\sigma_i^2$, \begin{align*} \frac{\partial \mu_i}{\partial z_{ij}} &= \frac{1}{m} \sum_{j'=1}^m \frac{\partial z_{ij'}}{\partial z_{ij}} = \frac{1}{m} \\ \frac{\partial \sigma_i^2}{\partial z_{ij}} &= \frac{2}{m} \sum_{j'=1}^m (z_{ij'} - \mu_i)\left(\frac{\partial z_{ij'}}{\partial z_{ij}} - \frac{\partial \mu_i}{\partial z_{ij}} \right) \\ &= \frac{2}{m} \sum_{j'=1}^m (z_{ij'} - \mu_i) \delta_{j,j'} - \frac{2}{m} \sum_{j'=1}^m (z_{ij'} - \mu_i) \frac{1}{m} \\ &= \frac{2}{m} (z_{ij} - \mu_i) - \frac{2}{m^2} \Big( \sum_{j'=1}^m z_{ij'} - \sum_{j'=1}^m \mu_i \Big) \\ &= \frac{2}{m} (z_{ij} - \mu_i) - \frac{2}{m^2} (m \mu_i - m \mu_i) \\ &= \frac{2}{m} (z_{ij} - \mu_i) \end{align*} Plug these intermediate partial derivatives back into our main equation and then simplify: \begin{align*} \frac{\partial f}{\partial z_{ij}} &= \sum_{j'=1}^m \frac{\partial f}{\partial \hat{z}_{ij'}} \left( \delta_{j,j'} (\sigma_i^2 + \epsilon)^{-1/2} - \frac{1}{m} (\sigma_i^2 + \epsilon)^{-1/2} - \frac{1}{2} (z_{ij'} - \mu_i)(\sigma_i^2 + \epsilon)^{-3/2} \left(\frac{2}{m}\right)(z_{ij} - \mu_i) \right) \\ &= \frac{\partial f}{\partial \hat{z}_{ij}} (\sigma_i^2 + \epsilon)^{-1/2} - \frac{1}{m} \sum_{j'=1}^m \frac{\partial f}{\partial \hat{z}_{ij'}} (\sigma_i^2 + \epsilon)^{-1/2} - \frac{1}{m} \sum_{j'=1}^m \frac{\partial f}{\partial \hat{z}_{ij'}} (z_{ij'} - \mu_i)(\sigma_i^2 + \epsilon)^{-3/2} (z_{ij} - \mu_i) \end{align*} Realizing that some expressions in the last term can be replaced by $\hat{z}_{ij}$ and $\hat{z}_{ij'}$, we finally get \begin{equation*} \boxed{ \frac{\partial f}{\partial z_{ij}} = \frac{1}{m \sqrt{\sigma_i^2 + \epsilon}} \left( m \frac{\partial f}{\partial \hat{z}_{ij}} - \sum_{j'=1}^m \frac{\partial f}{\partial \hat{z}_{ij'}} - \hat{z}_{ij} \sum_{j'=1}^m \frac{\partial f}{\partial \hat{z}_{ij'}} \hat{z}_{ij'} \right) } \end{equation*} In Python:

mu = np.mean(Z, axis=1, keepdims=True)
sigma2 = np.mean((Z - mu)**2, axis=1, keepdims=True)
dZ = (1. / (m * np.sqrt(sigma2 + epsilon))
      * (m * dZhat
         - np.sum(dZhat, axis=1, keepdims=True)
         - Zhat * np.sum(dZhat * Zhat, axis=1, keepdims=True)))
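Putting the three boxed results together, here is a self-contained NumPy version of the forward and backward pass (a sketch following the post's notation; the function names are mine). It can be checked against finite differences:

```python
import numpy as np

def batchnorm_forward(Z, gamma, beta, epsilon=1e-5):
    # Z: (n_out, m) layer inputs; gamma, beta: (n_out, 1) scale and shift
    mu = np.mean(Z, axis=1, keepdims=True)
    sigma2 = np.mean((Z - mu) ** 2, axis=1, keepdims=True)
    Zhat = (Z - mu) / np.sqrt(sigma2 + epsilon)
    Ztilde = gamma * Zhat + beta
    return Ztilde, Zhat, sigma2

def batchnorm_backward(dZtilde, Zhat, sigma2, gamma, epsilon=1e-5):
    # implements the three boxed results from the derivation above
    m = dZtilde.shape[1]
    dgamma = np.sum(dZtilde * Zhat, axis=1, keepdims=True)
    dbeta = np.sum(dZtilde, axis=1, keepdims=True)
    dZhat = dZtilde * gamma
    dZ = (1. / (m * np.sqrt(sigma2 + epsilon))
          * (m * dZhat
             - np.sum(dZhat, axis=1, keepdims=True)
             - Zhat * np.sum(dZhat * Zhat, axis=1, keepdims=True)))
    return dZ, dgamma, dbeta
```

A numerical gradient check (perturb each entry of Z by ±h and difference the loss) is the quickest way to convince yourself the algebra above is right.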
# how to calculate emf of a cell

(b) Weak Electrolytes: Electrolytes which are not completely dissociated into ions in solution are called weak electrolytes. The two ends of the U-tube are then plugged with cotton wool to minimise diffusion. Calculate EMF using the formula ε = V + Ir, where V is the voltage of the cell, I is the current in the circuit and r is the internal resistance of the cell. A voltaic cell utilizes the following reaction: 2Fe^3+ + H2 --> 2Fe^2+ + 2H+. What is the emf for this cell when [Fe^3+] = 2.00 M, the pressure of H2 is 0.55 atm, [Fe^2+] = 1.2*10^-2 M and the pH of both compartments is 4.80? The Daniell cell was invented by a British chemist, John Frederic Daniell. EMF of the cell = potential of the half cell on the right hand side (cathode) - potential of the half cell on the left hand side (anode); this potential difference is called the electrode potential. Redox reactions with a positive E0cell value are galvanic. The two half cells are separated by a vertical line or semicolon. The cell potential, or EMF, of the electrochemical cell can be calculated from the electrode potentials of the two half-cells: one of the half-reactions must be reversed to yield an oxidation, and you reverse the half-reaction that will yield the highest (positive) net emf for the cell. The Nernst equation is named after the German physical chemist Walther Nernst. What reactions are happening, are the cells compartmentalized, and what exactly are the values given in brackets in the question?
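The formula quoted above, ε = V + Ir, can be turned into a one-line helper (the numbers in the example are illustrative, not from any of the quoted problems):

```python
def emf(V, I, r):
    # EMF = terminal voltage plus the voltage dropped across
    # the internal resistance r while current I is flowing
    return V + I * r

# e.g. a cell delivering 0.5 A at a terminal voltage of 1.35 V with r = 0.3 ohm
print(emf(1.35, 0.5, 0.3))
```

With I = 0 (no current drawn), the terminal voltage equals the EMF, which is why EMF is measured on open circuit.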
We would normally expect an AA cell to have an EMF of about 1.5 V and an internal resistance of about 1 Ω. The inert electrolyte is neither involved in any chemical change, nor does it react with the solutions in the two half cells. It is also called a voltaic cell, after the Italian physicist Alessandro Volta. To find the EMF and internal resistance of a cell, a circuit with a voltmeter and a variable resistor is set up. What is the relation between the degree of ionisation and the dilution of weak electrolytes? How do you calculate electrochemical cell potential? So since my setup looks right, having the wrong units is the only thing I can think of. Note: I have converted ln into log. EMF = 1.415 V, internal resistance = 2.10 Ω. (b) Predict the products of electrolysis of a solution of H2SO4 with platinum electrodes. The zinc ions pass into the solution. What is an electrochemical cell that generates electrical energy? Hence, I got $E_{\text{cell}}=\pu{0.357 V}$. The anode is written on the left hand side and the cathode on the right hand side, e.g. Mg(s) | Mg2+ (0.1 M) || Cu2+ (1 × 10^-3 M) | Cu(s). The combination of chemicals and the makeup of the terminals in a battery determine its emf. A contradiction regarding the reaction coefficient expression in the Nernst equation; determination of solubility equilibrium using galvanic cell reactions. On cooling, the solution sets in the form of a gel inside the U-tube and thus prevents the intermixing of the fluids. Step 3: Add the two E0 values together to find the total cell EMF: E0cell = E0reduction + E0oxidation = 0.0000 V + 2.372 V = +2.372 V. Step 4: Determine if the reaction is galvanic.
The electrode potential at standard conditions (25 °C temperature, 1 atm pressure, 1 M concentration of electrolyte) is called the standard electrode potential. A galvanic cell is an important electrochemical cell. Equilibrium constant of an electrochemical cell reaction. hmm, mind sharing with me how you did it? The difference between the EMF (ε) and the terminal voltage (V) of a cell/battery can be calculated as ε − V = Ir, where I is the total current being drawn from the cell/battery and r is the internal resistance of the cell/battery. so n = 2, your RT/nF should be RT/2F (currently I see you have 1), try that; if it is still wrong, I'll help check your ln Q expression, but I'm pretty sure that is the main problem.
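The Nernst-equation discussion above can be made concrete for the Mg/Cu cell quoted earlier (Mg(s) | Mg2+ (0.1 M) || Cu2+ (1 × 10^-3 M) | Cu(s)). This is a sketch: the standard potentials used (−2.37 V for Mg2+/Mg, +0.34 V for Cu2+/Cu) are textbook values I am supplying, not taken from this page:

```python
from math import log

R, T, F = 8.314, 298.15, 96485.0  # gas constant, 25 C in kelvin, Faraday constant

def nernst(E0, n, Q):
    # E = E0 - (RT/nF) ln Q  (equivalent to the 0.0592/n * log10 Q form at 25 C)
    return E0 - (R * T / (n * F)) * log(Q)

E0_cell = 0.34 - (-2.37)   # cathode minus anode standard potentials
Q = 0.1 / 1e-3             # [Mg2+]/[Cu2+] for Mg + Cu2+ -> Mg2+ + Cu
E = nernst(E0_cell, n=2, Q=Q)
print(round(E, 3))         # about 2.65 V
```

Note that n = 2 here (two electrons transferred), which is exactly the point made in the comment above about RT/nF versus RT/2F.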
# How to find the pivot element

That is the column you will pivot on. Setting Up the Initial Simplex Tableau and Finding the Pivot Element. Find Pivot Positions and Pivot Columns. Here you get a Python quick sort program and algorithm. In a rotated sorted array, the pivot is the index after which all elements are in ascending order. How do you find an element in a sorted rotated array without finding the pivot? Use modified binary search to find the element. If you can find a book that mentions pivoting, it will usually tell you that you must pivot on a one. The reason is that, after refreshing, I want to delete all of the excess rows after the Pivot Table so that after saving, it resets the used range to the correct area. Once we have the indexes of the largest and smallest elements, we use a similar meet-in-the-middle algorithm (as discussed here in method 1) to find if there is a pair. Rule 2: For each row i where there is a strictly positive entering-variable coefficient, compute the ratio of the right-hand side to the entering-variable coefficient. T = an initial simplex tableau; // How: add surplus variables to obtain a basic solution; find a pivot element p in T that improves the objective (discussed next). My understanding of quick sort is: choose a pivot value (in this case, the value of the middle element); initialize left and right pointers at the extremes; starting at the left pointer and moving to the right, find the first element which is greater than or equal to the pivot value. Then enter all the entries you find in the matrix as you see them.
#### Choosing a good pivot in quicksort

Suppose the (n/2)th element is taken as the pivot. In quickselect we recurse on only one of the two pieces of the array when we do not immediately find the element we want, so a good pivot should split the array so that each piece is some constant fraction of the size of the whole. The partition step proceeds by scanning from the left: if we find an element less than the pivot, we swap it with the leftmost element of the right partition (at index i+1), so that the left-side boundary extends to i+1.

The same word is used in numerical linear algebra. In Gaussian elimination, choosing which row to eliminate with at each step is called pivoting, and the criterion for deciding which row to choose is called a pivoting strategy; a pivot entry is usually required to be at least distinct from zero, and preferably far from it.

A related problem is finding the kth smallest (or largest) element. One approach uses a heap: from the first k numbers, build a max-heap so that the top element is the largest among those first k, then let each remaining element that is smaller than the top replace it.
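The partition step described above can be sketched in Python. This is a minimal Lomuto-style sketch with the last element as pivot; the helper names `partition` and `quicksort` are my own, not taken from any particular source quoted here.

```python
def partition(arr, lo, hi):
    """Partition arr[lo..hi] in place around the pivot arr[hi].

    Elements less than the pivot end up to its left; the pivot's
    final index is returned.
    """
    pivot = arr[hi]
    i = lo - 1                     # index of last element known to be < pivot
    for j in range(lo, hi):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]   # extend the left partition
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1] # drop pivot into its slot
    return i + 1


def quicksort(arr, lo=0, hi=None):
    """Sort arr in place and return it, recursing on both partitions."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quicksort(arr, lo, p - 1)  # pivot is already in final position
        quicksort(arr, p + 1, hi)
    return arr
```

After each `partition` call the pivot sits exactly where it would in the fully sorted array, which is why the recursive calls can exclude it.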
#### The pivot element in the simplex method

In a simplex tableau, the pivot element (also called the key element) is the number that lies at the intersection of the key column and the key row of a given table. The procedure:

1. Find the pivot column: look in the bottom row for the most negative indicator.
2. Find the pivot row: for each row with a strictly positive entry in the pivot column, compute the ratio of the right-hand side to that entry, and take the row with the smallest non-negative quotient. For example, with quotients 16/2 = 8 and 12/3 = 4, select the row giving 4, the smaller quotient.
3. The pivot element is the entry at their intersection. A tie for most negative indicator can be broken arbitrarily.

#### Pivot choices in quicksort

Like merge sort, quicksort is a divide-and-conquer algorithm: it picks an element as pivot, partitions the given array around the picked pivot, and then recursively finds a pivot for each sub-list until every list contains only one element. Common pivot choices are:

1. The first element.
2. The last element.
3. The median (or an approximation to it).

A closely related task is to find the minimum element in a sorted and rotated array: the array was sorted, then rotated at some unknown point, and the minimum sits exactly at the rotation point.
#### Quickselect: using the pivot to find the kth element

Quickselect uses the partition step to find the kth element without fully sorting:

- If the pivot lands at the kth position in the array, exit the process: the pivot is the kth element.
- If the pivot's position is greater than k, continue the process with the left subarray; otherwise, recurse on the right subarray.

In a sorted rotated array, the pivot element divides the array into two monotonically increasing subarrays. Be aware that two conventions are in use: some sources call the largest element the pivot (the last element of the first increasing run), while others use the smallest (the first element of the second run); either one marks the rotation point.
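The quickselect recursion just described can be written iteratively. This is a sketch assuming distinct elements and a randomized pivot; the function name `quickselect` and the 1-indexed rank convention are my choices, not fixed by the text above.

```python
import random

def quickselect(arr, k):
    """Return the kth smallest element (1-indexed) of arr.

    Randomized pivot, Lomuto partition; average O(n) time.
    Assumes 1 <= k <= len(arr) and distinct elements.
    """
    arr = list(arr)              # work on a copy
    lo, hi = 0, len(arr) - 1
    target = k - 1               # 0-based rank we are looking for
    while True:
        pivot_index = random.randint(lo, hi)
        arr[pivot_index], arr[hi] = arr[hi], arr[pivot_index]
        pivot = arr[hi]
        i = lo - 1
        for j in range(lo, hi):  # partition around the pivot
            if arr[j] < pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[hi] = arr[hi], arr[i + 1]
        p = i + 1                # pivot's final position
        if p == target:
            return arr[p]        # pivot landed at the kth position
        elif p > target:
            hi = p - 1           # continue with the left subarray
        else:
            lo = p + 1           # continue with the right subarray
```

For example, the 2nd largest element of [7, 4, 6, 3, 9, 1] is the 5th smallest, so `quickselect([7, 4, 6, 3, 9, 1], 5)` returns 7.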
#### Partitioning with two scans

Another way to partition scans from both ends: find the first element to the left that is greater than the pivot, then find the first element to the right of the pivot that is smaller than the pivot, and swap the two elements found; repeat until the scans cross. As it turns out, there are many different ways to choose a pivot element, and the choice has a real effect on performance.

A related two-pointer technique finds a pair with a given sum x in a sorted array: point left at the smallest element and right at the largest, find the sum of the elements pointed to by both pointers, and move left up if the sum is too small or right down if it is too large, stopping if the sum is equal to x.
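The two-pointer pair search can be sketched briefly. The function name `pair_with_sum` is my own; the technique itself is the one described above.

```python
def pair_with_sum(sorted_arr, x):
    """Return a pair from sorted_arr summing to x, or None.

    Two-pointer scan: left starts at the smallest element, right at
    the largest; each step moves one pointer inward, so O(n) total.
    """
    left, right = 0, len(sorted_arr) - 1
    while left < right:
        s = sorted_arr[left] + sorted_arr[right]
        if s == x:
            return sorted_arr[left], sorted_arr[right]
        elif s < x:
            left += 1    # need a larger sum
        else:
            right -= 1   # need a smaller sum
    return None
```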
In code, the two scanning loops look like this: one loop keeps checking numbers from the left side of the pivot point, advancing i until it finds an element that is greater than or equal to the pivot, while a second loop does the same from the right side, decrementing j until it finds an element smaller than the pivot; the two elements are then swapped. Like quicksort, quickselect is efficient in practice and has good average-case performance, but poor worst-case performance.

For a sorted array rotated some number of times, the pivot element is the only element for which the previous element is greater than itself. Ex: in the array {78, 82, 99, 10, 23, 35, 49, 51, 60} the pivot index is 3 (the element 10). Most computer linear algebra programs also have a built-in routine for converting a matrix to reduced row-echelon form, which performs pivoting internally.
#### Pivoting in Gauss-Jordan elimination

The Gauss-Jordan pivot operation works on a chosen entry matrix(i, j): divide the pivot row by the pivot element so the pivot becomes a 1, then use row operations to make every other entry in the pivot column a 0. Our first task, in any of these algorithms, is to find a pivot element; in quicksort that often means choosing the highest index value as pivot and letting the partition function find the pivot's proper position by rearranging the array. A median-finding algorithm can find the ith smallest element of a list in O(n) time; one version uses an approximate median as the pivot element and partitions all elements around it.
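The pivot operation can be sketched with exact fractions so no rounding creeps in. This is a minimal sketch; the function name `pivot_on` and the list-of-lists matrix representation are my assumptions, not an existing library API.

```python
from fractions import Fraction

def pivot_on(matrix, r, c):
    """Pivot a matrix (list of row lists) about the element in row r, column c.

    The pivot row is scaled so the pivot becomes 1, then every other
    entry in the pivot column is cleared by elementary row operations.
    """
    m = [[Fraction(x) for x in row] for row in matrix]
    p = m[r][c]
    if p == 0:
        raise ValueError("cannot pivot on a zero element")
    m[r] = [x / p for x in m[r]]              # scale pivot row: pivot -> 1
    for i in range(len(m)):
        if i != r and m[i][c] != 0:
            factor = m[i][c]
            m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
    return m
```

For instance, pivoting [[2, 4], [1, 3]] about row 0, column 0 scales the first row to [1, 2] and clears the entry below it.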
#### Searching a rotated sorted array

The number of times a sorted array was rotated equals the index of its minimum element, so finding the pivot also tells you how many times the array was rotated. Ex: in the array {78, 82, 99, 10, 23, 35, 49, 51, 60}, the pivot index is 3. A common two-step search for a target in such an array:

Step 1: Find the index of the pivot element (the minimum element).
Step 2: Apply binary search on the appropriate subarray, chosen by comparing the target with the subarray's endpoints.

(An Excel aside: when you insert a pivot table, a PivotTable Field List pops out in the right section of the worksheet; it can hide the data to its right, and you can show or hide it as needed.)
To locate the pivot point with binary search, check the middle element A[mid]:

- If the middle element is smaller than both of the numbers next to it, it is the pivot.
- Otherwise, compare the middle element with the endpoints to decide which half must contain the rotation point, and recurse there.

The quickselect recursion itself is simple: if the number of elements in A is 0 or 1, just return the array as your answer; otherwise, partition around a pivot and run the algorithm recursively only on the part of the array that contains the answer. Before the search, initialize two index variables, left = 0 and right = arr.length - 1.
#### Why the pivot choice matters

If you always use the first element as the pivot, quicksort runs in quadratic time on already-sorted input, and for a sorting algorithm, quadratic is bad. If you want the fastest quicksort, one well-researched option is to choose more than one pivot element: two pivots (dual-pivot quicksort) or three pivots (3-pivot quicksort). This has been understood and well researched since 2009, when dual-pivot quicksort was incorporated into Java.

After partitioning, the element at the partition position will not change any more, so the recursive call can exclude it: quicksort(arr, beginning, partition) can be changed to quicksort(arr, beginning, partition - 1).

(Excel aside: there is no automatic way to restyle a pivot table. The secret is duplicating the built-in style that is closest to what you want, applying that copy to your pivot table, and then modifying all of the individual pieces.)
If the left and right pointers meet at the same position, the element at that position is at its final sorted position. In quickselect terms: once we place the pivot at the (k-1)th index, it is the kth smallest number. The algorithm can be stated crisply: given an array A of size n and an integer k <= n, pick a pivot element p (for example, at random from A), split A into subarrays LESS and GREATER by comparing each element to p, and recurse into the side that contains the kth element.
#### A worked pivot example

A typical exercise: pivot the system about the element in row 2, column 1, where the matrix is

    -5   5  -7
    10   4   4

The pivot element is the 10. Divide row 2 by 10 to get (1, 2/5, 2/5), then add 5 times the new row 2 to row 1 to clear the column, giving (0, 7, -5). In the simplex method, this is exactly how we obtain larger and larger values of the objective p: circle the pivot entry at the intersection of the pivot column and the pivot row, identify the entering and exiting variables, pivot, and then look at the new basic solution.
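Finding the pivot (minimum) of a sorted rotated array with binary search can be sketched as follows. This assumes distinct elements; the function name `find_pivot` is mine.

```python
def find_pivot(arr):
    """Return the index of the minimum element of a sorted, rotated array.

    Binary search: the half whose endpoints are out of order must
    contain the rotation point. Distinct elements; O(log n) time.
    """
    lo, hi = 0, len(arr) - 1
    while lo < hi:
        mid = lo + (hi - lo) // 2
        if arr[mid] > arr[hi]:
            lo = mid + 1   # rotation point lies to the right of mid
        else:
            hi = mid       # mid could itself be the minimum
    return lo
```

On the running example {78, 82, 99, 10, 23, 35, 49, 51, 60} this returns 3, which is also the number of rotations.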
#### Median-of-three pivot selection

Ideally the pivot would be the median, the element in the middle of the sorted sequence, since that would divide the input into two almost equal partitions. Unfortunately, it is hard to calculate the median quickly without sorting first, so implementations settle for an approximate median: take the pivot to be the median of the left-most, right-most and center elements of the array. A simpler default is just the middle element of the list.

After a simplex iteration, the pivot column should become a unit vector, with a 1 where the pivot element was.

(Excel aside: if pivot items are unexpectedly missing, select one of the pivot items in the outermost pivot field, then on the Ribbon click the Analyze tab (the Options tab in Excel 2010) and click the Expand Field command; the hidden items will appear.)
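The median-of-three rule can be sketched in a few lines. The function name `median_of_three` is my own; it returns the index rather than the value so a quicksort can swap the chosen pivot into place.

```python
def median_of_three(arr, lo, hi):
    """Return the index of the median of arr[lo], arr[mid], arr[hi].

    An approximate-median pivot choice for quicksort.
    """
    mid = lo + (hi - lo) // 2
    a, b, c = arr[lo], arr[mid], arr[hi]
    if (a <= b <= c) or (c <= b <= a):
        return mid
    if (b <= a <= c) or (c <= a <= b):
        return lo
    return hi
```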
Continuing the rotated-array case analysis: if the middle element is larger than both the first and last element, the pivot number is in the right half of the array. (Recall that the kth smallest element is the element at index k if the array were sorted.) The same partition-around-the-pivot idea solves that selection problem too: arrange the numbers so that those smaller than the pivot are kept in the first sequence and those larger than the pivot in the second, then recurse only into the sequence containing position k. This is also why the median is the optimal pivot choice for quicksort: it splits the work evenly.
#### The ratio test, in coordinates

To find the pivot row, divide each coordinate i of the right-hand-side vector Xb by the corresponding coordinate yik of the pivot column, and take the row with the smallest non-negative ratio. In the median-finding algorithm, if the list has an even number of elements, take the floor of the length of the list divided by 2 to find the index of the median.

It does not matter whether you are pivoting in Gauss-Jordan elimination to solve a system of linear equations, pivoting to find the inverse of a matrix, or pivoting toward a solution of a linear programming problem; the idea is always the same: make the pivot element a one, and then make all other entries in its column zeros.
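The two simplex selection rules (most negative indicator, then smallest non-negative ratio) can be sketched together. This assumes a dense tableau stored as a list of row lists, with the objective indicators in the last row and the right-hand sides in the last column; those layout conventions, and the name `choose_pivot`, are my assumptions rather than a standard API.

```python
def choose_pivot(tableau):
    """Pick the simplex pivot (row, col), or None if already optimal.

    Assumes: last row = objective indicators, last column = RHS.
    Column: most negative indicator. Row: smallest non-negative
    ratio RHS / entry, taken over strictly positive entries only.
    """
    obj = tableau[-1][:-1]
    col = min(range(len(obj)), key=lambda j: obj[j])
    if obj[col] >= 0:
        return None                      # no negative indicator: optimal
    best_row, best_ratio = None, None
    for i in range(len(tableau) - 1):    # skip the objective row
        entry = tableau[i][col]
        if entry > 0:                    # ratio test needs a positive entry
            ratio = tableau[i][-1] / entry
            if best_ratio is None or ratio < best_ratio:
                best_row, best_ratio = i, ratio
    return (best_row, col) if best_row is not None else None
```

With constraint rows giving quotients 16/2 = 8 and 12/3 = 4, the second row wins, matching the worked example above.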
If the entry directly below the pivot element is already 0, nothing needs to be done for that row; only the remaining nonzero entries in the pivot column need clearing. A subtler implementation pitfall arises in binary search and quicksort: if the boundary indices of the subarray being sorted are sufficiently large, the naive expression for the middle index, (lo + hi)/2, will cause overflow and provide an invalid pivot index; the safe form is lo + (hi - lo)/2. In the simplest partition scheme, we select the first element as the pivot, start from the leftmost element, and keep track of the index of the last element smaller than the pivot as mid; whenever we find an element smaller than the pivot, we swap the current element with arr[mid+1].
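The midpoint overflow can be demonstrated even from Python (whose integers never overflow) by simulating 32-bit signed wraparound. Both helper names are mine; `mid_unsafe_32` models what a C or Java `int` would compute.

```python
def mid_unsafe_32(lo, hi):
    """(lo + hi) // 2 as a 32-bit signed machine integer would compute it."""
    s = (lo + hi) & 0xFFFFFFFF
    if s >= 0x80000000:          # reinterpret the wrapped bits as signed
        s -= 0x100000000
    return s // 2

def mid_safe(lo, hi):
    """Overflow-free midpoint: hi - lo always fits in the index range."""
    return lo + (hi - lo) // 2
```

For indices around two billion, the naive form wraps to a negative number while the safe form gives the correct midpoint; for small indices the two agree.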
A useful way to picture the single-scan partition is as a leader and a follower: if the leader element is smaller than or equal to the pivot element, it needs to be sent further to the left, so it is swapped with a larger item (tracked by the follower) further to the right; the current number then slides into the slot created next to the pivot. Linear-time median finding is built on the same move: it is a recursive algorithm that can find any order statistic, and the element at the chosen index is called the pivot. In the simplex method, you first set up the matrices, then find the pivot column as the column with the most negative number in the bottom row; if it helps, you can divide a row by a constant (say, divide the second row by 4) to force the pivot element to be a convenient value.
At the most basic level, a basic Pivot Table provides some basic (but powerful) calculation functionality to determine the displayed values. Power Pivot is an Excel add-in created by Microsoft to help users analyze data and create data models. Citations (0) References (5) This research hasn't been cited in any other publications. This article explains how to read a DataTable and return an inverted or pivot table depending on the column names provided. Lately i have been taking an course at brilliant. It works fine when I set the pivot as the left most element in the array but it doesn't sort the elements correctly when the pivot is any other element. If the rank of the pivot is equal to k after partitioning, then the pivot itself is the kth element so we return. The smallest element will be adjacent to it. This article explains how to read a DataTable and return an inverted or pivot table depending on the column names provided. The cacheField element is used for two purposes: it defines the data type and formatting of the field, and it is used as a cache for shared strings. The pivot element is the entry where the pivot column and pivot row intersect. What is a pivot column? 2. For the pivot in the quicksort algorithm, see quicksort. then QUIT (Also, to be safe store your matrix in [B] as well ) next push PRGM and then select PIVOT. Pivot Shop Pivot. comhttps://activities. Solving system of linear It is necessary to find row k, where and k > p, and then interchange row p and row k so that a nonzero pivot element is obtained. pdf. In the case of matrix algorithms, a pivot entry is usually required to be at least distinct from zero, and often distant from it; in this case finding this Element Cycles carries BMC Bikes, Norco, find your bike. Setting an element to True indicates you want a subtotal for that element; setting an element to False means you don't want a subtotal for that element. 5. 
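The partition-and-swap scheme described above, with the last element as pivot, looks like this (a minimal sketch using the Lomuto partition):

```python
def partition(a, lo, hi):
    """Lomuto partition: pivot = a[hi]; returns the pivot's final index."""
    pivot = a[hi]
    i = lo                      # next open slot for an element <= pivot
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]   # swap pivot into the gap between the halves
    return i

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)    # a[p] is now in its final position
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)
```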
Once the pivot (rotation point) of a sorted, rotated array is known, searching for a value num is straightforward: if num lies between the first element and the element at position pivot-1, binary-search array[start..pivot-1]; otherwise binary-search the other half. The pivot itself can be characterized as the only element of the array that is smaller than its predecessor (e.g., 0 1 2 4 5 6 7 might become 4 5 6 7 0 1 2, where 0 is the pivot). For quicksort, several pivot-selection strategies are in common use: always pick the first element, always pick the last element, pick a random element, pick the middle element, or pick the median. There are also multi-pivot variants: dual-pivot quicksort uses two pivots and three-pivot quicksort uses three, and one way to choose three pivots is to pick seven evenly spaced elements, sort them, and use the 2nd, 4th and 6th. The same partitioning idea gives quickselect, a selection algorithm that finds the kth smallest (or largest) element of an unordered list without fully sorting it; for example, given [3, 2, 1, 5, 6, 4] and k = 2, the kth largest element is 5 (the kth largest in sorted order, not the kth distinct element). Back in linear algebra, Gaussian elimination requires the pivot elements to be nonzero, and a stricter definition of pivoting also makes the pivot element equal to 1 and then zeroes out all other entries in the pivot column; for any matrix A there is a unique row-equivalent matrix Arref in reduced row-echelon form.
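A quickselect sketch for the kth smallest element (illustrative code; the random pivot, moved to the end so a Lomuto-style partition applies, avoids the worst case on already-sorted inputs):

```python
import random

def quickselect(a, k):
    """kth smallest element of a, 1-indexed (k=1 is the minimum).
    Average O(n); works on a copy, the caller's list is untouched."""
    a = list(a)
    lo, hi = 0, len(a) - 1
    target = k - 1                      # 0-based rank we are looking for
    while True:
        r = random.randint(lo, hi)      # random pivot, swapped to the end
        a[r], a[hi] = a[hi], a[r]
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]       # pivot now sits at its final rank i
        if i == target:                 # pivot's rank is k: done
            return a[i]
        elif i < target:
            lo = i + 1                  # recurse into the right part only
        else:
            hi = i - 1                  # recurse into the left part only
```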
To find the minimum element of a sorted, rotated array, locate the pivot: the largest element of the array sits immediately before it, and the minimum is the pivot itself. Once the pivot is found, the array splits into two sorted halves to which ordinary binary search applies. A common in-place partition scheme uses two pointers, a follower and a leader: the leader scans the array, and whenever it finds an element smaller than or equal to the pivot, that element is swapped with the follower, sending small elements left and large ones right. In the simplex method, a full pivot step proceeds as follows. Select the pivot column, the one with the most negative indicator in the bottom row; if there is no negative indicator, either the tableau is a final tableau or the problem has no solution. Select the pivot row by the smallest-quotient rule, dividing each right-hand-side entry by the corresponding positive element of the pivot column. Divide the pivot row by the pivot element so that the pivot becomes 1; the pivot row does not change in any other way. Finally, add multiples of the pivot row to the other rows to make every other entry in the pivot column 0. If negative indicators still exist in the bottom row, repeat the whole step.
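The pivot step just described can be sketched on a dense maximization tableau, stored as a list of rows with the indicator row last and the right-hand side in the last column. This is illustrative code under those layout assumptions, not a robust LP solver:

```python
def simplex_pivot_step(T):
    """Perform one pivot of the simplex method on tableau T in place.
    Returns True if a pivot was performed, False if T is already final
    (no negative indicator in the bottom row)."""
    bottom = T[-1][:-1]
    col = min(range(len(bottom)), key=lambda j: bottom[j])
    if bottom[col] >= 0:
        return False                      # final tableau
    # pivot row: smallest quotient RHS / pivot-column entry (positive only)
    ratios = [(T[i][-1] / T[i][col], i)
              for i in range(len(T) - 1) if T[i][col] > 0]
    if not ratios:
        raise ValueError("unbounded problem: no admissible pivot row")
    _, row = min(ratios)
    p = T[row][col]
    T[row] = [x / p for x in T[row]]      # make the pivot element 1
    for i in range(len(T)):
        if i != row and T[i][col] != 0:   # zero out the rest of the column
            f = T[i][col]
            T[i] = [a - f * b for a, b in zip(T[i], T[row])]
    return True
```

For the tableau of maximize 3x + 2y subject to x + y ≤ 4 and x + 3y ≤ 6, a single step suffices and the optimal objective value 12 appears in the bottom-right entry.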
"Pivot" carries still more meanings that should not be confused with the algorithmic and matrix senses. In technical analysis of markets, Camarilla pivot points are support and resistance levels computed by fixed rules from the previous day's prices. In 3D animation packages such as 3ds Max, the pivot point is the point in 3D space around which the move, rotate and scale transforms operate. In Excel, the pivot table field list lets you rearrange which fields appear as row and column labels. Within linear algebra, a pivot column is a column of the matrix that contains a pivot position. For quicksort itself, the choice of pivot matters: choosing the median splits the array into two nearly equal halves and brings the expected complexity down to O(n log n), and the median-of-medians technique approximates the median deterministically by grouping the elements into fives, taking the median of each group, and recursively taking the median of those medians. Finding the pivot of a sorted, rotated array, by contrast, is a divide-and-conquer search similar to peak finding and runs in O(log N).
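The grouping-into-fives rule reads as follows in code. Note that this simplified sketch recurses on the list of group medians directly, so it returns a pseudo-median (good enough as a quicksort pivot) rather than running the full median-of-medians selection algorithm:

```python
def median_of_medians(a):
    """Approximate-median pivot choice: the median of the medians of
    groups of five. Guarantees a constant-fraction split for quicksort."""
    if len(a) <= 5:
        return sorted(a)[len(a) // 2]
    # median of each group of (up to) five consecutive elements
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    return median_of_medians(medians)
```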
A few final points. A pivot position in a matrix A is a location in A that corresponds to a leading 1 in the reduced echelon form of A. When a determinant is computed by row reduction, each row you divide by its pivot element scales the determinant by that pivot, so the pivots must be multiplied back in at the end. In SQL, PIVOT rotates rows into column names, while the remaining columns take part in grouping or aggregation. Finally, there is the array problem usually called "Find Pivot Index": an element of an array is a pivot element if the sum of all the elements to its left equals the sum of all the elements to its right, and the task is to return the index of such an element (each element nums[i] an integer in the range [-1000, 1000]).
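A linear-time sketch of the "Find Pivot Index" problem as just stated (the function name is mine):

```python
def pivot_index(nums):
    """Leftmost index where the sum of elements before it equals the
    sum of elements after it; -1 if no such index exists. O(n) time."""
    total = sum(nums)
    left = 0
    for i, x in enumerate(nums):
        if left == total - left - x:   # right sum = total - left - nums[i]
            return i
        left += x
    return -1
```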
# directed acyclic graph (DAG)

Definition (fi, translated): A cycle-free graph consisting of nodes and of directed arcs connecting the nodes. Unification-based feature grammars are built on DAGs.

Definition (en): a directed graph which has no cycles

Explanation (en): A graph consisting of nodes and directed arcs in which no path (i.e. a single arc or a sequence of arcs) leads back to the node where the path starts. DAGs are sometimes drawn with small circles for nodes and arrows for arcs; one can also represent DAGs as sets of path equations.

## References

Source citation for this page: Tieteen termipankki 19.10.2019: Language Technology:directed-acyclic-graph. (Permanent address: https://tieteentermipankki.fi/wiki/Language Technology:directed-acyclic-graph.)
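The defining property, that no path leads back to the node where it starts, can be checked mechanically. A minimal sketch (illustrative code, not part of the term-bank entry) that repeatedly removes nodes with no incoming arcs; the graph is a DAG exactly when every node can be removed this way:

```python
def is_dag(nodes, arcs):
    """True iff the directed graph (nodes, arcs) has no cycle.
    arcs is a set of (source, target) pairs."""
    indegree = {n: 0 for n in nodes}
    for _, v in arcs:
        indegree[v] += 1
    ready = [n for n in nodes if indegree[n] == 0]   # nodes with no incoming arcs
    removed = 0
    while ready:
        u = ready.pop()
        removed += 1
        for s, t in arcs:
            if s == u:
                indegree[t] -= 1
                if indegree[t] == 0:
                    ready.append(t)
    return removed == len(nodes)    # a cycle leaves some nodes unremovable
```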
# Combining Gravity Turn and Orbit Models

I have a mathematical model for the motion of an orbiting spacecraft about Earth:

G = 6.672*10^-11; (*Gravitational Constant*)
M = 5.97219*10^24; (*Mass of Earth*)
R = 6.378*10^6; (*Radius of Earth*)
r = R + 150000; (*Orbital radius*)
tmax = 5500; (*Simulation time*)
v = Sqrt[(G M)/R]; (*Circular orbital velocity*)

orbit = NDSolve[{
   x''[t] == -((G M x[t])/(x[t]^2 + y[t]^2)^(3/2)),
   y''[t] == -((G M y[t])/(x[t]^2 + y[t]^2)^(3/2)),
   x[0] == 0, y[0] == r, x'[0] == v, y'[0] == 0},
  {x[t], y[t]}, {t, 0, tmax},
  MaxSteps -> 1000000, Method -> "StiffnessSwitching"]

ParametricPlot[Evaluate[{x[t], y[t]} /. orbit], {t, 0, tmax},
 AxesLabel -> {x, y}, PlotStyle -> Automatic, PlotRange -> Full,
 ImageSize -> Large]

As well as a model for a spacecraft's surface launch using a gravity turn:

Remove["Global`*"]
Unprotect[D]; (*Using symbol D for drag*)
G = 6.672*10^-11; (*Gravitational Constant*)
M = 5.97219*10^24; (*Mass of Earth*)
R = 6.378*10^6; (*Radius of Earth*)
g0 = 9.81; (*Sea level gravitational acceleration*)
g = g0/(1 + h[t]/R)^2; (*Gravitational acceleration w.r.t. height*)
d = 5; (*Diameter*)
A = (π d^2)/4; (*Area*)
Subscript[C, D] = 0.5; (*Drag coefficient*)
Subscript[ρ, 0] = 1.225; (*Sea level air density*)
Subscript[h, 0] = 7500; (*Height scale*)
ρ = Subscript[ρ, 0] Exp[-h[t]/Subscript[h, 0]]; (*Atmospheric air density*)
D = 1/2 ρ v[t]^2 A Subscript[C, D]; (*Drag*)
tburn = 260; (*Engine burn time*)
T = If[t <= tburn, 800000, 0]; (*Thrust*)
m0 = 68000; (*Initial mass*)
mdot = If[t <= tburn, 244.1, 0]; (*Engine mass flow rate*)
m = m0 - mdot*t; (*Mass of rocket w.r.t.
time*)
tmax = 260; (*Simulation running time*)

traj = NDSolve[{
   v'[t] == T/m - D/m - (g - v[t]^2/(R + h[t])) Sin[γ[t]],
   γ'[t] == -(1/v[t]) (g - v[t]^2/(R + h[t])) Cos[γ[t]],
   h'[t] == v[t] Sin[γ[t]],
   x'[t] == v[t] Cos[γ[t]],
   v[0] == 1, γ[0] == 90 Degree, x[0] == 0, h[0] == 0,
   WhenEvent[h[t] == 1000, γ[t] -> 89.6 Degree]},
  {v[t], γ[t], x[t], h[t]}, {t, 0, tmax}]

ParametricPlot[{x[t], h[t]} /. traj, {t, 0, tmax}, AxesLabel -> {x, y}]

What I'm trying to do, though, is somehow combine the two models so that I can simulate a gravity turn launch and a subsequent orbit about Earth at the same time. As it stands, it seems like the gravity turn code unfortunately only considers a flat Earth surface. Does anyone know how I might go about combining the two and allowing for a curved surface in the gravity turn code?

EDIT: I've edited the above gravity turn code to take into account a curved Earth surface, but once the engine stops its burn something strange happens: the spacecraft carries on gaining speed as if its thrust were still active.

Remove["Global`*"]
Unprotect[D]; (*Using symbol D for drag*)
G = 6.672*10^-11; (*Gravitational Constant*)
M = 5.97219*10^24; (*Mass of Earth*)
R = 6.378*10^6; (*Radius of Earth*)
d = 5; (*Diameter*)
A = (π d^2)/4; (*Area*)
Subscript[C, D] = 0.5; (*Drag coefficient*)
Subscript[ρ, 0] = 1.225; (*Sea level air density*)
Subscript[y, 0] = 7500; (*Height scale*)
ρ = Subscript[ρ, 0] Exp[-y[t]/Subscript[y, 0]]; (*Atmospheric air density*)
D = 1/2 ρ v[t]^2 A Subscript[C, D]; (*Drag*)
tburn = 260; (*Engine burn time*)
T = If[t <= tburn, 800000, 0]; (*Thrust*)
m0 = 68000; (*Initial mass*)
mdot = If[t <= tburn, 244.1, 0]; (*Engine mass flow rate*)
m[t] = m0 - mdot*t; (*Mass of rocket w.r.t.
time*)
tmax = tburn; (*Simulation running time*)

traj = NDSolve[{
   v'[t] == T/m[t] - D/m[t] - ((G M)/(Sqrt[x[t]^2 + y[t]^2])^2 - v[t]^2/Sqrt[x[t]^2 + y[t]^2]) Sin[γ[t]],
   γ'[t] == -(1/v[t]) ((G M)/(Sqrt[x[t]^2 + y[t]^2])^2 - v[t]^2/Sqrt[x[t]^2 + y[t]^2]) Cos[γ[t]],
   y'[t] == v[t] Sin[γ[t]],
   x'[t] == v[t] Cos[γ[t]],
   v[0] == 1, γ[0] == 90 Degree, x[0] == 0, y[0] == R,
   WhenEvent[y[t] == R + 1000, γ[t] -> 88.85 Degree]},
  {v[t], γ[t], x[t], y[t]}, {t, 0, tmax}]

ParametricPlot[{x[t], y[t] - R} /. traj, {t, 0, tmax}, AxesLabel -> {x, y}]
Plot[{v[t]} /. traj, {t, 0, tmax}, AxesLabel -> {t, v}]
Plot[{γ[t]} /. traj, {t, 0, tmax}, AxesLabel -> {t, γ}]

This is the trajectory after the 260 second engine burn:

And this is the trajectory after 5000 seconds:

As can be seen, something is definitely awry.

• Hi! Are you sure this is not a physics question? I might be still sleepy, but I think you are having trouble with the underlying physics/mathematics. – Sektor Nov 6 '14 at 11:05
• The Drexel University Mathematica Forum might make a better place to post this question. As I remember, the forum would entertain (even celebrate) any question either about (1.) how to do things in Mathematica or (2.) any problem that someone uses Mathematica to solve. This forum has a narrower focus, but I think it would benefit from expanding it to include questions like this one. – Jagra Nov 6 '14 at 13:08
• Hi Sektor, although this is a physics based question, I put it in the Mathematica section because of the amount of code I posted. I'm hoping Mathematica users with a background in physics/mathematics will be able to help. – RedRover Nov 7 '14 at 14:14
• Hi Jagra, what is the focus of this Mathematica forum? Is it only used for language specific coding issues? – RedRover Nov 7 '14 at 14:15
• Unprotect[D]; (*Using symbol D for drag*) <-- this is a really really bad idea ...
it might also stop some people from reading the code because there's always the suspicion that it's causing something to go wrong (and it's so easy to fix). – Szabolcs Nov 7 '14 at 14:39 This question really belongs on physics.stackexchange.com. You can post your propagation equations there using TeX. TeXForm[] will do the conversion for you if you don't know TeX. For a 2-D case around a non-rotating spherical planet as you appear to be attempting, I use these propagation equations with NDSolve[]: {v'[t]==-Sin[γ[t]] μ/r[t]^2-D/m, γ'[t]==Cos[γ[t]] 1/r[t] (v[t]-μ/(v[t]r[t]))+L/(m v[t]), r'[t]==Sin[γ[t]] v[t], φ'[t]==Cos[γ[t]] 1/r[t] v[t], v[0]==v0, γ[0]==γ0, r[0]==r0, φ[0]==φ0} $${dv\over dt}=-\frac{\mu}{r^2}\sin\gamma-\frac{D}{m}$$ $${d\gamma\over dt}={1\over r}{\left(v-\frac{\mu }{r v}\right)\cos\gamma}+\frac{L}{m v}$$ $${dr\over dt}=v\sin\gamma$$ $${d\phi\over dt}=\frac{v}{r}\cos\gamma$$ $\mu$ is $G M$. $r$ is the radius from the center of the body and $\phi$ is the central angle around the body. $x$ and $y$ can be readily computed from $r$ and $\phi$. I use these for entries as opposed to launches ($D$ is drag and $L$ is lift), but you can replace $D$ and $L$ with whatever you like, including thrust along the velocity direction ($-D$) and orthogonal to the velocity direction ($\pm L$). Note that when using these I never set $D$ (or $L$) — instead I use /. D->.... If $D$ and $L$ are replaced with zeros, you get simple Kepler propagation, resulting in ellipses or hyperbolas about the body. So you can continue to use the same equations to propagate after the launch. • Hi Mark, thanks very much. I had a go using these equations but had limited success. They worked great for simulating a rocket already in orbit, but I couldn't figure out the correct initial conditions for a successful gravity turn surface launch. 
Am I correct in assuming that with an initial radius = 6.378*10^6 and an initial central angle phi = 90 degrees, we'd have the rocket sitting on the "north pole". Furthermore, do you have any links showing the derivation of the above equations, as I'd be very interested in learning more and haven't ever seen equations of motion in this form before. – RedRover Nov 7 '14 at 22:58 • The initial $\phi$ doesn't matter. You should set it to zero so you can easily see how much central angle is traversed. There is no "pole" here. All points on the surface of a non-rotating sphere are equivalent. – Mark Adler Nov 7 '14 at 23:50 • The initial conditions would be $r$ at the surface and $v$ and $\phi$ at zero. Since $v$ is zero, the initial $\gamma$ doesn't matter. However if you will be applying thrust in the $\gamma$ direction, then you should set it to $\pi/2$ for straight up. Assuming you want to go straight up. – Mark Adler Nov 7 '14 at 23:52 • Note that your thrust to weight ratio had better be more than one. – Mark Adler Nov 7 '14 at 23:53 • To get the gravity turn, you will need to give it an initial kick angle at some point after launch to get it off vertical. – Mark Adler Nov 8 '14 at 0:01
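For readers who want to prototype the propagation equations from the answer outside Mathematica, here is a minimal numerical sketch (my own Python translation; the constants, the RK4 integrator, and the zero drag/lift assumption are all choices made for illustration). With D = L = 0 the equations reduce to Kepler propagation, so starting from circular-orbit conditions (γ = 0, v = √(μ/r)) the radius and speed should stay constant while the central angle φ advances:

```python
import math

MU = 3.986004418e14          # G*M for Earth, m^3/s^2

def derivs(state):
    """Right-hand side of the (v, gamma, r, phi) equations with D = L = 0."""
    v, gamma, r, phi = state
    dv = -MU / r**2 * math.sin(gamma)
    dgamma = (v - MU / (v * r)) * math.cos(gamma) / r
    dr = v * math.sin(gamma)
    dphi = v * math.cos(gamma) / r
    return (dv, dgamma, dr, dphi)

def rk4_step(state, dt):
    """One classical Runge-Kutta step of size dt."""
    k1 = derivs(state)
    k2 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = derivs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# circular orbit 400 km above a 6378 km Earth
r0 = 6.378e6 + 400e3
v0 = math.sqrt(MU / r0)
state = (v0, 0.0, r0, 0.0)
for _ in range(1000):                 # propagate 1000 s in 1 s steps
    state = rk4_step(state, 1.0)
```

Replacing the zero drag and lift with thrust and drag terms, as the answer suggests, turns the same loop into a launch propagator.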
# Parallel Computing and Computer Clusters/Theory

## Typical real-world applications

What do people do with clusters of computers?

• The web
  • Search engine indexes
  • Databases
• Scientific and engineering work
  • Weather modeling and prediction
  • Molecular modeling
  • Product design and simulation

## Parallel Programming Models

There are generally two ways to organize a parallel architecture, and a given system can use either one alone or any combination of the two. In the shared memory model, all processors share memory and a single address space: every processor can access all of the attached main memory. In the message passing model, information is packed into messages, discrete units of data that can be passed around among the CPUs of the architecture. Both models have their advantages and disadvantages.

### Shared memory

In shared memory models every CPU can address the machine's main memory, generally through a bus, so memory addresses are meaningful to all CPUs and can be passed between them. Having all CPUs access memory through a bus introduces problems of its own, such as bus bandwidth: if the bus is narrow and there are many CPUs, keeping the CPUs busy is difficult to achieve. Bus traffic is therefore usually reduced with local caches (the processors' L1 and L2 caches), which in turn raises the challenge of cache coherence: ensuring that the data in main memory and in the caches stay consistent enough that every CPU sees the appropriate values. Several strategies exist for correct and optimized operation here.

### Message passing

Message passing systems have memory that is either not directly connected to the CPU or possibly even spread across various geographic locations.
This results in a system of sends and receives, where the data is packed into a message and sent across a network to various other machines.

## History of distributed computing

### The Eighties and Nineties: PVM and MPI

Hadoop at its core is a combination of two open source frameworks, MapReduce and HDFS, both based on white papers published by Google. MapReduce is a processing framework modeled on a Google framework of the same name: it takes the data stored in HDFS and processes it on each node in the cluster. A MapReduce job consists of two procedures defined by the programmer, the Map and the Reduce. Mappers consume input records in the form of keys and values, and also emit keys and values; a Reducer then receives, for each key, the vector of all the values that the mappers emitted for that key. HDFS, the Hadoop filesystem, is an implementation of Google's distributed filesystem: it distributes the storage of files across multiple nodes in a cluster, with a default replication factor of three to ensure with a high degree of certainty that no data is lost.

## Mathematics of parallel processing

Parallel processing is the simultaneous execution of the same task (split up and specially adapted) on multiple processors in order to obtain faster results. The parallelism can come from a single machine with multiple processors or from multiple machines connected together to form a cluster.

### Amdahl's law

Amdahl's law is a demonstration of the law of diminishing returns: while one could speed up part of a computer a hundred-fold or more, if the improvement only affects 12% of the overall task, the best the speedup could possibly be is ${\displaystyle {\frac {1}{1-0.12}}=1.136}$  times faster.
More technically, the law is concerned with the speedup achievable from an improvement to a computation that affects a proportion P of that computation, where the improvement has a speedup of S. (For example, if an improvement can speed up 30% of the computation, P will be 0.3; if the improvement makes the portion affected twice as fast, S will be 2.) Amdahl's law states that the overall speedup of applying the improvement will be ${\displaystyle {\frac {1}{(1-P)+{\frac {P}{S}}}}}$. To see how this formula was derived, assume that the running time of the old computation was 1, for some unit of time. The running time of the new computation will be the length of time the unimproved fraction takes (which is 1 − P) plus the length of time the improved fraction takes. The length of time for the improved part of the computation is the improved part's former running time divided by the speedup, making the length of time of the improved part P/S. The final speedup is computed by dividing the old running time by the new running time, which is what the above formula does.

#### Parallelization

In the special case of parallelization, Amdahl's law states that if F is the fraction of a calculation that is sequential (i.e. cannot benefit from parallelisation), and (1 − F) is the fraction that can be parallelised, then the maximum speedup that can be achieved by using N processors is ${\displaystyle {\frac {1}{F+(1-F)/N}}}$. In the limit, as N tends to infinity, the maximum speedup tends to 1/F. In practice, the performance/price ratio falls rapidly as N is increased once (1 − F)/N is small compared to F. As an example, if F is only 10%, the problem can be sped up by a maximum factor of 10, no matter how large the value of N used. For this reason, parallel computing is only useful for either small numbers of processors, or problems with very low values of F: so-called embarrassingly parallel problems.
A great part of the craft of parallel programming consists of attempting to reduce F to the smallest possible value.
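Both forms of the law are easy to sketch as code; the helper names below are ours, not part of any standard library:

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the work is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

def parallel_speedup(f, n):
    """Speedup on n processors when a fraction f of the work is sequential."""
    return 1.0 / (f + (1.0 - f) / n)

# The 12% example from the text: even an infinitely fast improvement to
# 12% of the task yields at most 1/(1 - 0.12) ≈ 1.136x overall.
print(amdahl_speedup(0.12, float("inf")))

# With F = 10% sequential work, the speedup saturates near 1/F = 10,
# no matter how many processors are added.
for n in (10, 100, 1000):
    print(n, parallel_speedup(0.10, n))
```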
# Investigating the Perceptual Validity of Evaluation Metrics for Automatic Piano Music Transcription

## Abstract

Automatic Music Transcription (AMT) is usually evaluated using low-level criteria, typically by counting the number of errors, with equal weighting. Yet, some errors (e.g. out-of-key notes) are more salient than others. In this study, we design an online listening test to gather judgements about AMT quality. These judgements take the form of pairwise comparisons of transcriptions of the same music by pairs of different AMT systems. We investigate how these judgements correlate with benchmark metrics, and find that although they match in many cases, agreement drops when comparing pairs with similar scores, or pairs of poor transcriptions. We show that onset-only notewise F-measure is the benchmark metric that correlates best with human judgement, all the more so with higher onset tolerance thresholds. We define a set of features related to various musical attributes, and use them to design a new metric that correlates significantly better with listeners’ quality judgements. We examine which musical aspects were important to raters by conducting an ablation study on the defined metric, highlighting the importance of the rhythmic dimension (tempo, meter). We make the collected data entirely available for further study, in particular to evaluate the perceptual relevance of new AMT metrics.

How to Cite: Ycart, A., Liu, L., Benetos, E. and Pearce, M.T., 2020. Investigating the Perceptual Validity of Evaluation Metrics for Automatic Piano Music Transcription. Transactions of the International Society for Music Information Retrieval, 3(1), pp.68–81. DOI: http://doi.org/10.5334/tismir.57

Published on 12 Jun 2020. Accepted on 20 Apr 2020. Submitted on 01 Mar 2020.

## 1 Introduction

Automatic Music Transcription (AMT) is a widely discussed problem in Music Information Retrieval (MIR) (Benetos et al., 2019).
Its ultimate goal is to convert an audio signal into some form of music notation, such as sheet music, which we refer to as Complete Music Transcription (CMT). A common intermediate step is to obtain a MIDI-like representation, describing notes by their pitch, onset and offset times in seconds, leaving aside problems such as stream separation, rhythm transcription, or pitch spelling. We refer to this as AMT. It has applications in various fields, in particular in music education, music production and creation, musicology, and as pre-processing for other MIR tasks, such as cover song detection or structural segmentation. The performance of AMT systems is commonly assessed using simple, low-level criteria, such as by counting the number of mistakes in a transcription (Bay et al., 2009). In particular, deciding whether a note is a mistake is typically a binary decision, and all errors have the same weight in the final metric. Yet, not all mistakes are equally salient to human listeners: for instance, an out-of-key false positive will be much more noticeable than an extra note in a big chord, all the more so if it fits with the harmony. In this study, we aim to investigate to what extent the current evaluation metrics correlate with human perception of the quality of an automatic transcription. We reframe the problem of AMT evaluation as a symbolic music similarity problem: we try to assess how similar to the target the output transcription sounds, rather than simply counting the number of incorrectly detected notes. We gather judgements of similarity by conducting a listening test, and use these answers to examine how human perception of AMT quality correlates with the evaluation metrics commonly used. We investigate what musical features are most important to raters, and use them to define a new metric that correlates significantly better with human ratings than benchmark metrics. Gathering similarity ratings in a meaningful way is not straightforward.
In particular, inter-rater agreement is infamously low for music similarity tasks (Flexer and Grill, 2016). One of the reasons, besides intrinsic disagreement between raters, is that it is a difficult and ill-defined task. Our main concern is thus to make the test as easy as possible. As argued by Allan et al. (2007), the difficulty of rating the absolute similarity between two excerpts, be it on a continuous or Likert scale (Likert, 1932), leads to low inter-rater agreement, as different raters might use different scales, and these scales might evolve throughout the experiment. To avoid that problem, we choose to give raters a binary choice: given one reference excerpt, and two possible transcriptions of that excerpt, participants have to answer the question, “Which transcription sounds most similar to the reference?” Another reason that makes rating difficult is having to remember long excerpts for subsequent comparison. In order to make the task easier, such that participants can rely mostly on their working memory, we use short audio excerpts, which prevents us from drawing any conclusions on the similarity of longer excerpts. Since we are mostly interested in notes rather than timbre or sound quality, we can afford to run this study in more loosely controlled acoustic conditions. We thus run this study online, in order to gather as much data as possible. A major concern is to make the test easily accessible; in particular, it is designed so participants can answer as many or as few questions as they want. We choose to focus our study on Western classical piano music, as it is by far the most discussed sub-domain of AMT, mostly due to the availability of big datasets for that instrument and style (Emiya et al., 2010; Hawthorne et al., 2019). The validity of the present study is thus limited to this instrument and style, and should not be generalised e.g. to singing voice, or jazz music. 
Our main contributions include: • Gathering a dataset of more than four thousand individual perceptual ratings of transcription quality; • Investigating the correlation between these ratings and traditional AMT metrics, depending on various factors; • Proposing a set of musically-relevant features that can be computed on pairs of target and AMT output; • Proposing a new evaluation metric in the form of a simple logistic regression model trained to approximate listener ratings; • Investigating which musical parameters are most important to raters through an ablation study of the classifier. In particular, we make the stimuli, gathered data, website code, pre-trained metric and feature implementation all available for further study (See Section 7). In what follows, we present the benchmark evaluation metrics used for AMT and other works on transcription system evaluation in Section 2, and describe the design of the listening tests in Section 3. In Section 4, we analyse the results of the listening tests, and in particular the agreement between ratings and benchmark evaluation metrics. We then define a new metric based on musical features and analyse which features were most important to users in Section 5. Finally, we discuss our results in Section 6. ## 2 Related work ### 2.1 Benchmark evaluation metrics In this section we describe the most commonly-used evaluation metrics for AMT of a single instrument. Some other metrics exist (see Bay et al. (2009) for a complete description); we only briefly describe here those that are most often used to compare systems. #### 2.1.1 Framewise metrics These metrics are computed on pairs of piano rolls. A piano roll is a binary matrix M, such that M[p,t] = 1 if and only if pitch p is active at frame t, where a frame is a temporal segment of constant duration. We use a timestep of 10 ms, as in the MIREX multiple-F0 estimation task (Bay et al., 2009). 
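As an illustration, such a piano roll can be built from a list of notes as follows. This is a minimal sketch under our own note format; toolkits such as pretty_midi provide similar functionality with more options.

```python
import numpy as np

def piano_roll(notes, timestep=0.01, n_pitches=128):
    """Binary piano roll M[p, t] = 1 iff pitch p sounds during frame t.

    `notes` is a list of (start, end, pitch) tuples, times in seconds;
    the default timestep of 10 ms matches the one used in the text.
    """
    if not notes:
        return np.zeros((n_pitches, 0), dtype=np.int8)
    n_frames = int(np.ceil(max(end for _, end, _ in notes) / timestep))
    roll = np.zeros((n_pitches, n_frames), dtype=np.int8)
    for start, end, pitch in notes:
        t0 = int(round(start / timestep))
        t1 = max(t0 + 1, int(round(end / timestep)))  # at least one frame
        roll[pitch, t0:t1] = 1
    return roll

# A C major chord (MIDI pitches 60, 64, 67) held for half a second:
roll = piano_roll([(0.0, 0.5, 60), (0.0, 0.5, 64), (0.0, 0.5, 67)])
```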
When comparing an estimated piano roll $\hat{M}$ to a target piano roll M, a true positive is counted whenever $\hat{M}[p,t] = 1$ and M[p,t] = 1. False positives and false negatives are counted analogously. We use TP, FP and FN to refer to the total number of true positives, false positives and false negatives, respectively, summed across frames. The framewise Precision (Pf), Recall (Rf) and F-Measure (Ff) are then computed as follows (the subscript f represents the fact that metrics are computed framewise):

(1) $P_\text{f} = \frac{TP}{TP + FP} \qquad R_\text{f} = \frac{TP}{TP + FN} \qquad F_\text{f} = \frac{2 \cdot P_\text{f} \cdot R_\text{f}}{P_\text{f} + R_\text{f}}$

#### 2.1.2 Notewise metrics

Notewise metrics are computed on lists of notes, where each note is a tuple (s,e,p), where s and e are the start and end times, and p is the MIDI pitch of the note. For onset-only notewise metrics, an estimated note $(\hat{s}, \hat{e}, \hat{p})$ is considered a true positive if and only if there is a ground-truth note (s,e,p) such that $p = \hat{p}$ and $|s - \hat{s}| \leq 50$ ms. In addition, ground-truth notes can be matched to at most one estimated note. Precision, Recall and F-Measure (respectively Pn,On, Rn,On and Fn,On) are then computed as in Section 2.1.1, with the difference that TP, FP and FN are counted in numbers of notes, instead of time-pitch bins. The subscript n represents the fact that metrics are computed notewise. Recently, as Fn,On performance for AMT systems has improved, onset-offset notewise metrics have been increasingly used. Onset-offset metrics add the extra constraint that, for an estimated note to be considered a true positive, $\hat{e}$ must be within 20% of the duration of the ground-truth note or within ±50 ms of the ground-truth offset, whichever is greater.
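The framewise and onset-only notewise definitions can be sketched as follows. This is a minimal illustration, not the mir_eval implementation used later in the paper, and it assumes the common ±50 ms default onset tolerance:

```python
import numpy as np

def framewise_prf(est, ref):
    """Framewise P, R, F from two binary piano rolls of equal shape."""
    tp = np.logical_and(est == 1, ref == 1).sum()
    fp = np.logical_and(est == 1, ref == 0).sum()
    fn = np.logical_and(est == 0, ref == 1).sum()
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def notewise_onset_prf(est_notes, ref_notes, onset_tol=0.05):
    """Onset-only notewise P, R, F; notes are (start, end, pitch) tuples.

    Each ground-truth note may be matched to at most one estimated note.
    """
    unmatched = list(ref_notes)
    tp = 0
    for s_hat, _, p_hat in est_notes:
        for note in unmatched:
            s, _, p = note
            if p == p_hat and abs(s - s_hat) <= onset_tol:
                unmatched.remove(note)  # consume the matched reference note
                tp += 1
                break
    fp = len(est_notes) - tp
    fn = len(ref_notes) - tp
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```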
Again, Precision, Recall and F-Measure (respectively Pn,OnOff, Rn,OnOff and Fn,OnOff) are computed as in Section 2.1.1. In all cases, metrics are computed for each test piece, and then averaged over the whole dataset. In particular, we do not weight each piece according to its duration. ### 2.2 Efforts for better evaluation metrics Recently, various evaluation methods were proposed for CMT (Cogliati and Duan, 2017; McLeod and Steedman, 2018), but they focus mostly on typesetting problems, and do not address the problem of perceptually-relevant pitch assessment. Some efforts were also made for singing voice transcription and melody estimation (Molina et al., 2014; Bittner and Bosch, 2019), but still consider pitches as being either correct or incorrect. Another method was proposed for automatic solfège assessment by Schramm et al. (2016), using a classifier trained on experts’ ratings to classify each note as correct or incorrect, but again, this decision is mostly binary, and focuses on small deviations in pitch (less than a semitone) rather than the correctness of a pitch in a tonal context. An older study was conducted on AMT by Daniel et al. (2008). The study assessed the perceptual discomfort created by some specific types of mistakes (e.g. note insertions, deletions, replacement, onset displacement) by comparing pairs of artificially-modified music excerpts. This data was then used to define new evaluation metrics. However, the types of mistakes considered were relatively limited (for instance, for note insertions, the study only compared octave insertions, fifth insertions and random insertions), and did not take into account musical concepts such as tonality, melody, harmony, or meter. Moreover, the modified MIDI files only contained one type of mistake, and did not consider the potential interactions between several kinds of mistakes. 
By contrast, we choose to use real AMT system outputs, in order to maintain ecological validity, and study a wider range of features. The evaluation of AMT systems is related to symbolic music similarity, as the end goal is to assess how similar the output and the target sound. Symbolic melodic similarity is a widely-discussed problem (see Velardo et al. (2016) for a survey). Here, we are focusing on polyphonic music similarity, which is much less common. A method is described by Allali et al. (2009), relying on sequence-to-sequence alignment, and an edit distance adapted from Mongeau and Sankoff (1990). However, this method was designed for quantised note durations only, which makes it potentially suitable for CMT, but not for AMT. Moreover, we aim here to use a bottom-up approach, to investigate what factors are important to listeners and to use them to define a new metric.

## 3 Study design

### 3.1 Stimulus design

We obtain automatic transcriptions using several benchmark AMT systems. Using the best currently available systems would have led to very similar transcription mistakes, as they are all based on the same underlying methods. Instead, we aim to use a diverse sample of commonly used AMT methodologies. We thus use:

• OAF: The current state of the art based on neural networks (Hawthorne et al., 2019), trained to jointly detect note onsets and pitches.
• CNN: A simple framewise convolutional neural network (Kelz et al., 2016).
• NMF: A piano-specific system, based on non-negative matrix factorisation (Cheng et al., 2016).
• STF: A system based on handcrafted spectral and temporal features (Su and Yang, 2015).

CNN is a framewise system: at each timestep, it outputs a list of active pitches. This is equivalent to a piano roll, but requires post-processing to obtain a list of note events.
To get note events, we consider any silence followed by a note as an onset (and vice versa for offsets), and apply gap-filling and short-note-pruning, both with a threshold of 80 ms, corresponding to two processing frames in this system. We use the pieces present in the MAPS dataset (Emiya et al., 2010) of MIDI-aligned piano recordings, as it remains the most common benchmark dataset for AMT. We use only the full music pieces in MAPS, with the two recording conditions that correspond to real piano recordings, namely ENSTDkCl (close-field recordings) and ENSTDkAm (ambient recordings), the two most commonly-used evaluation subsets. To preserve musical validity, we manually segment the pieces into musical phrases, so that each excerpt lasts between five and ten seconds and roughly corresponds to a coherent, self-contained musical unit. We try as much as possible to keep an integer number of bars, using the A-MAPS (Ycart and Benetos, 2018) bar and beat annotations. When material within a piece is repeated without transposition, we only keep the first repetition. The start and end times of each segment are made available for future study (see Section 7). We keep duplicate pieces, recorded under the two different recording conditions. In the end, we obtain 1552 reference examples. To be as consistent as possible in terms of timbre between the reference and the transcriptions, all example MIDI files were rendered using the Yamaha Disklavier Pro Grand Piano soundfont.1 Some systems could not transcribe note velocities, so for uniformity, we used a default MIDI velocity of 100 for every note of the output transcriptions. We kept the original velocities when rendering references to be able to use them later on in the analysis, as most of the time they are available in the ground-truth files.

### 3.2 User data

Before answering questions, users read an information sheet and gave their consent to participate.
We collected their age, gender, and whether they had a hearing disability. They then had to answer questions from the Gold-MSI test (Müllensiefen et al., 2014) corresponding to the Perceptual Abilities and Musical Training subscales. Each user also had the option to give comments on the strategies they used and the aspects that were most important to them when choosing between transcriptions. All data were anonymised, and the procedure was approved by Queen Mary University of London’s ethics committee (reference QMREC2066). ### 3.3 Setup The test was conducted online, as the main focus of this study was not sound quality, but rather the note content of the transcriptions. Participants were advised to do the test using good headphones, in a quiet environment. In what follows, we call a set {reference,transcription1,transcription2} a question, where transcription1 and transcription2 are two transcriptions of the reference, made by two different systems. There are six questions per reference, one for each unordered pair of AMT systems. For each question, participants were presented with one “reference” audio player, two “transcription” audio players, and were asked to answer the question “Which transcription sounds most similar to the reference?”, as a two-alternative forced choice (see Figure 1 for a screenshot of the interface). To strike a balance between comparison robustness and number of answered questions, each question was rated by four participants, taking care to balance the order (transcription1, transcription2) and (transcription2, transcription1) in which the two transcription players are presented in the interface. Participants were allowed to listen to each example as many times as they wanted; however, to encourage them to rely on perception rather than analytical thinking, we advised participants to listen to each example as few times as possible. A five-minute time limit was also included. 
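The six questions per reference correspond exactly to the unordered pairs of the four systems:

```python
from itertools import combinations

systems = ["OAF", "CNN", "NMF", "STF"]
pairs = list(combinations(systems, 2))  # unordered pairs of distinct systems
print(len(pairs))  # 6 questions per reference
```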
For each question, participants could report if they knew the reference by ticking an additional "I know this piece" box.

Figure 1 Screenshot of the listening test website.

While designing the test, it became apparent that in some instances, making a choice was very difficult, for instance when the two transcriptions were nearly identical, or different but equally poor. We did not want to include a third alternative (such as "I don't know", or "both transcriptions are equally similar to the target"), as this would have made it much more difficult to produce a meaningful analysis of the difficult cases. Instead, we added an extra question: "How difficult was it to answer the question?", on a five-point Likert scale (Likert, 1932) from "Very easy" to "Impossible". Guidelines were given to answer this question in terms of the number of listenings required for each file, the difficulty of making a choice, and confidence in that choice.

Getting participants to spend 30 minutes or more on a listening test without compensation can be difficult. To allow more flexibility, we designed the test so that each participant could rate as many examples as they wanted. If we had randomly picked questions, given the large number of examples, it would have been very difficult to ensure that several people answered each question. Instead, questions were presented to participants using the following rules:

1. Each participant cannot hear a reference more than once.
2. Each question cannot be rated more than four times.
3. Each new question is chosen among the remaining candidates using the following steps:
   (a) Choose a reference among those that have already been seen by other participants and have not been fully rated (i.e. at least one of the six questions using that reference has fewer than four answers).
   (b) If no such reference is available, choose a random new question.
   (c) Otherwise, choose a question using that reference that has already been answered by other participants.
   (d) If no such question is available, choose a new question using the same reference.

When choosing a reference among those that have been seen by other participants (step 3.(a)), we skewed the random choice towards references that had more answers, in order to maximise the number of fully-rated references (i.e. references for which all system pairs were rated by four participants). Thanks to this procedure, the size of the pool of examples adapted dynamically to the number of gathered answers.

### 3.4 Participants

In total, 186 people participated in our study (excluding the 40 people who registered but did not answer any questions): 126 males, 58 females and 2 non-binary, with a median age of 28. We did not perform any selection on participants. Many of them were trained musicians, as the median Gold-MSI score is 5.06 on a scale from 1 to 7 (compared to 4.81 in the general population for the subscales considered (Müllensiefen et al., 2011)). The median number of answered questions was 20, with 22 participants answering 50 questions or more (up to several hundred). Overall, we gathered 4501 answers, 1080 questions with four ratings, and 153 examples for which all pairs of systems have four ratings. Four participants reported a hearing disability, for a total of 53 answers. We decided to keep their answers anyway, as they amount to a small proportion of the data, and we are not interested in fine judgement about sound quality.

## 4 Results

In what follows, we analyse the results of the participants' ratings. We only keep questions for which four answers have been gathered. We keep all such questions, even when the corresponding example has not been rated for all pairs of systems. When comparing proportions (e.g. user preference, or agreement between raters and benchmark metrics), error bars are obtained by bootstrap analysis (Efron, 1992), resampling with replacement at the same dataset size 100 times. The standard deviation of the bootstrapped results is displayed.
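The bootstrap procedure used for the error bars can be sketched as follows (function and variable names are ours):

```python
import random

def bootstrap_std(binary_outcomes, n_resamples=100, seed=0):
    """Standard deviation of a proportion over bootstrap resamples.

    Each resample is drawn with replacement at the original dataset
    size; the std of the resampled proportions gives the error bar.
    """
    rng = random.Random(seed)
    n = len(binary_outcomes)
    proportions = []
    for _ in range(n_resamples):
        sample = [rng.choice(binary_outcomes) for _ in range(n)]
        proportions.append(sum(sample) / n)
    mean = sum(proportions) / n_resamples
    variance = sum((p - mean) ** 2 for p in proportions) / n_resamples
    return variance ** 0.5
```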
### 4.1 Benchmark system performance

First, we run the chosen systems on all the test files. We evaluate them using the benchmark metrics described in Section 2.1. Results are presented in Table 1. Notewise metrics are computed using the mir_eval Python library (Raffel et al., 2014).

Table 1: Benchmark evaluation metrics for all systems, evaluated on the MAPS subsets ENSTDkCl and ENSTDkAm, with best values in bold.

| System | Pf | Rf | Ff | Pn,On | Rn,On | Fn,On | Pn,OnOff | Rn,OnOff | Fn,OnOff |
|--------|------|------|------|-------|-------|-------|----------|----------|----------|
| STF | 67.2 | 60.0 | 62.7 | 49.8 | 32.0 | 38.3 | 16.5 | 11.3 | 13.2 |
| CNN | 80.2 | 58.2 | 66.1 | 77.0 | 54.9 | 63.2 | 33.5 | 24.6 | 28.0 |
| NMF | 71.3 | 63.3 | 66.4 | 79.6 | 57.0 | 65.7 | 35.7 | 26.4 | 30.0 |
| OAF | **89.0** | **79.5** | **83.8** | **85.9** | **84.1** | **84.9** | **66.9** | **65.5** | **66.2** |

As expected, OAF is by far the best of all systems, for all metrics. The second best is NMF, which can be explained by the fact that it was trained on that specific instrument model, while this piano model is new to the other systems. CNN comes in third position, and STF comes last. It has to be noted that these results vary quite a lot between the two subsets ENSTDkCl and ENSTDkAm: results are usually worse on ENSTDkAm, since it corresponds to ambient piano recordings, which are usually noisier. In particular, for NMF, which was trained on isolated notes played on ENSTDkCl, Fn,On drops from 76.1 to 55.6 on ENSTDkAm. For CNN and STF, Fn,On drops by around 5%. Interestingly, OAF performs similarly on both subsets. This can be explained by the fact that it was trained on the MAESTRO dataset (Hawthorne et al., 2019), a dataset containing mostly concert piano recordings, in conditions arguably closer to ENSTDkAm. It also appears that although the performance in Ff lies within a relatively small range of values, there are much bigger differences in performance in terms of Fn,On and Fn,OnOff.

### 4.2 Perceptual ranking of systems

Using the ratings, we evaluate the systems from a perceptual point of view (pairwise results shown in Figure 2).
The ratings are generally in accordance with the benchmark metrics: a system is preferred when its Fn,On is better (we focus on Fn,On as this metric correlates best with ratings, as discussed in Section 4.3). The relative ranking of the systems is also the same: OAF beats all other systems, NMF beats CNN and STF, and CNN beats STF. There seems to be a relation between the difference in benchmark metrics and the magnitude of the majority: for instance, OAF has a bigger majority when compared to STF than to NMF. But that is not strictly the case: although CNN is much better than STF in terms of Fn,On and Fn,OnOff, it is only preferred about 65% of the time. Figure 2 Vote proportion in pairwise comparisons of the systems. Blue bars represent the proportion of times the system on the left was chosen over the one on the right. For each pair, the percentage in parentheses is the average Fn,On computed on the specific examples included in the comparison. ### 4.3 Agreement between ratings and benchmark metrics In this section, we assess the extent to which ratings agree with Ff, Fn,On and Fn,OnOff. We also investigate what factors influence the agreement between raters and benchmark metrics. We define the agreement with a given metric as follows. For each given answer, we check whether the choice made by the participant corresponds to the ordering of the two transcriptions according to this metric. If the participant chose the transcription for which the metric is highest, we consider that the participant and the metric agree. We then compute the proportion of ratings that agree with this metric. We do this for Ff, Fn,On and Fn,OnOff. For Ff, we investigate various frame sizes: 10, 50, 75, 100, and 150 ms. For notewise metrics, we investigate how this agreement varies depending on the onset and offset tolerance thresholds: for onsets, we use 25, 50, 75, 100, 125, and 150 ms, and for offsets, we use 10, 20, 30, 40, and 50% of the note duration. 
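The agreement computation described above amounts to the following sketch (our own formulation; the scores shown are made up for illustration):

```python
def metric_agreement(answers):
    """Proportion of ratings that agree with a given metric.

    `answers` is a list of (chosen_score, other_score) pairs: the metric
    value of the transcription the rater picked, and of the one they
    rejected. Agreement means the rater picked the higher-scoring one.
    Ties count as disagreement in this simplified sketch.
    """
    agree = sum(1 for chosen, other in answers if chosen > other)
    return agree / len(answers)

# Three hypothetical ratings: two agree with the metric, one does not.
votes = [(0.84, 0.63), (0.38, 0.65), (0.66, 0.63)]
print(metric_agreement(votes))
```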
Results on the agreement between ratings and benchmark metrics are shown in Figures 3 and 4. In terms of frame size for Ff, there is no clear tendency. It does appear nonetheless that using a 100 ms frame size improves the agreement with ratings slightly but significantly compared to a 10 ms frame size (p < 0.001 with a Welch t-test). When examining the influence of the onset tolerance threshold for Fn,On, we can see in Figure 3 that the agreement of Fn,On with ratings is highest for onset thresholds between 75 and 150 ms. For Fn,OnOff, we can see in Figure 4 that the agreement is highest for an onset threshold of 100 ms and an offset tolerance of 50%, although it is still lower than Fn,On with an onset threshold above 50 ms. Agreement might be even higher for higher offset tolerance thresholds, as Fn,OnOff becomes more and more similar to Fn,On (Fn,On can be seen as Fn,OnOff with an infinite offset tolerance).

Figure 3 Proportion of agreement, across all examples, between raters and various evaluation metrics (Ff with various frame sizes, and Fn,On with various tolerance thresholds).

Figure 4 Proportion of agreement, across all examples, between raters and Fn,OnOff, with various onset and offset tolerance thresholds.

To investigate further what factors might influence agreement, we perform a linear fixed effects analysis (Allison, 2009), using as the dependent variable for each question whether the rater agrees with Fn,On (1 if they do, 0 otherwise). We use as fixed effects the best Fn,On of the pair (Fbest), the difference in Fn,On between the two transcriptions (ΔF), the Gold-MSI score of the rater (Gold-MSI), whether the piece was recognised (Known), and the reported difficulty (Difficulty). The resulting coefficients and associated p-values are given in Table 2.

Table 2: Coefficients and p-values for the linear fixed effects model using agreement with Fn,On as dependent variable and features as fixed effects.
| Feature | Coefficient | P-value |
|---------|-------------|---------|
| ΔF | 0.539 | <0.001 |
| Fbest | 0.330 | <0.001 |
| Gold-MSI | –0.007 | 0.232 |
| Known | 0.014 | 0.391 |
| Difficulty | –0.044 | <0.001 |

It appears that ΔF and Fbest have a strong and significant effect on agreement. When the difference in performance between the two systems is high, people tend to agree more with the F-measure, as the choice is clearer. However, for a given ΔF, when both systems produce outputs of poor quality, the agreement is lower. When looking at other features, Difficulty is negatively correlated with agreement: when people report the choice as being more difficult, they tend to disagree more with the F-measure. To investigate this further, we compute the proportion of agreement between ratings and F-measure for each reported difficulty level (Figure 5). For high levels of difficulty, agreement is very poor, close to chance (50% for a two-alternative forced choice question), which is consistent with the guidelines given to raters for reporting difficulty. Still, even for low levels of reported difficulty, there is a fair amount of disagreement between ratings and Fn,On (10 to 20%), which shows that disagreement with Fn,On does not exclusively result from random choices in the difficult cases. Musical training (Gold-MSI) and familiarity (Known) have no significant effect on agreement with Fn,On.

Figure 5 Agreement between ratings and Fn,On for each reported difficulty level.

### 4.4 Reported difficulty

In this section, we examine the reported level of difficulty for each answer, and investigate the factors that influenced it. In Figure 6, we display the proportion of ratings for each difficulty level. When comparing this figure to the results in Table 1, it appears that, as a general trend, the higher the difference in Fn,On, the more confident raters are. Moreover, difficulty is highest when comparing the two worst performing systems according to benchmark metrics, which suggests that difficulty is higher when both transcriptions are poor.
Figure 6 Distribution of difficulty ratings (lightest = 1, darkest = 5) for each pair of systems.

To get a better understanding of how the difficulty varies depending on various parameters, we perform another linear fixed effects analysis, this time using difficulty as the dependent variable. We use as fixed effects the best Fn,On of the pair (Fbest), the difference in Fn,On between the two transcriptions (ΔF), the Gold-MSI score of the rater (Gold-MSI), whether the piece was recognised (Known), and whether the rater agreed with Fn,On (Agree). The resulting coefficients and associated p-values are given in Table 3.

Table 3: Coefficients and p-values for the linear fixed effects model using difficulty as dependent variable and features as fixed effects.

| Feature | Coefficient | P-value |
|---------|-------------|---------|
| ΔF | –1.564 | <0.001 |
| Fbest | –0.608 | <0.001 |
| Gold-MSI | –0.227 | <0.001 |
| Known | –0.153 | 0.002 |
| Agree | –0.423 | <0.001 |

All of these factors are significant predictors of reported difficulty. From this, we can draw the following conclusions. First, musicians found the task easier than non-musicians. This could be explained either in terms of better auditory skills, or because musicians tend to be more confident in their judgements. People also find it easier to make a choice when they know the reference. One user commented: "Songs that I knew already felt easier to judge as I could remember the original much better"; in other words, they only had to listen to and remember two excerpts instead of three. This highlights a difficulty of investigating musical similarity perception due to effects of memory, as we mentioned in Section 1. It also appears that the more confident people are in their choices, the more they agree with the F-measure, which is coherent with the results presented in Section 4.3. Finally, when investigating the effect of ΔF and Fbest, we can see that the larger the difference between the two systems, the easier the decision, and all the more so when both systems perform well.
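As a simplified, self-contained illustration of fitting such a linear model, one can use ordinary least squares on synthetic data; the paper's exact model specification and software are not stated here, so this is only a sketch:

```python
import numpy as np

def linear_fit(X, y):
    """Least-squares coefficients for y ≈ [1, X] @ beta (intercept first)."""
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Synthetic check: difficulty = 3 - 1.5*dF - 0.6*Fbest, with no noise,
# so the fit should recover the coefficients exactly.
rng = np.random.default_rng(0)
dF = rng.uniform(0, 1, 200)
fbest = rng.uniform(0, 1, 200)
y = 3.0 - 1.5 * dF - 0.6 * fbest
beta = linear_fit(np.column_stack([dF, fbest]), y)
```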
### 4.5 Analysis of confident answers

When discussing the agreement between ratings and Fn,On, it is not straightforward to distinguish cases where participants chose randomly from cases where they actually disagreed with Fn,On, in particular when the two options have similar Fn,On, or when both options are poor. To exclude cases of random choice, we analyse the subset of answers that are confident (Difficulty = 1 or 2, which represents 2856 answers), and investigate whether different factors influence the agreement between ratings and Fn,On in this case. We perform the same linear fixed-effects analysis as in Section 4.3 on that subset. The results are shown in Table 4 and are quite similar to those of the full analysis, except that there is now a significant negative correlation between Gold-MSI and agreement. For confident answers, it appears that musicians tend to disagree more with Fn,On than non-musicians. This could indicate that musicians focus more on certain high-level aspects of the music (e.g. melody, harmony, meter) that are not taken into account by Fn,On: even if it contains more mistakes, a transcription might be preferred by a musician as long as it gets these aspects right.

Table 4 Coefficients and p-values for the linear fixed effects model using agreement with Fn,On as dependent variable and features as fixed effects, on confident answers only.

Feature     Coefficient  P-value
ΔF          0.584        <0.001
Fbest       0.349        <0.001
Gold-MSI    –0.014       0.011
Known       0.002        0.912
Difficulty  –0.036       <0.001

When investigating the effect of the difference in Fn,On on agreement, we see once again the same trend: the smaller the difference between the two transcriptions, the greater the disagreement, as shown in Figure 7. When the difference in Fn,On is above 50%, people always agree with Fn,On. However, below this threshold, agreement declines, especially when the difference is below 20%.
Figure 7 Proportion of agreement depending on the difference in Fn,On between the two options, computed on confident answers only.

### 4.6 Inter-rater agreement

We have seen that there is a fair amount of disagreement between the F-measure and ratings. To get an idea of how consistent the ratings are, we investigate the level of inter-rater agreement, and the factors that influence it. We begin by computing Fleiss’s Kappa coefficient (Fleiss, 1971), which represents inter-rater agreement for an arbitrary number of raters. When computed over the whole dataset, we obtain a Kappa coefficient of 0.59, which can be interpreted as borderline between moderate and substantial agreement. When computing the same coefficient on the confident answers only (keeping only questions for which four confident answers were given, 315 questions in total), we obtain a Kappa coefficient of 0.90, which can be interpreted as near-perfect agreement. This is a very conservative estimate, as we keep only the questions that were unanimously considered easy to answer. Moreover, inter-rater agreement is high partly because, most of the time, raters tend to agree with the F-measure.

We run a linear fixed-effects analysis using the amount of agreement between raters as the dependent variable (2 if all four raters agree, 1 if one rater disagrees with the other three, and 0 in the case of a draw), only on the subset of confident answers, and keeping only the questions with four confident answers. We use as fixed effects the difference in Fn,On between the two systems (ΔF), the best Fn,On of the pair (Fbest), the average and standard deviation of the Gold-MSI scores of the four raters for each question (Gold-MSIavg and Gold-MSIstd, respectively), and the average reported difficulty (Difficultyavg). The resulting coefficients and associated p-values are given in Table 5.
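Fleiss’s Kappa can be computed directly from a question-by-category count matrix. A minimal sketch (the function name is ours), assuming every question received the same number of ratings:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss's Kappa (Fleiss, 1971). counts[i, j] is the number of raters
    who assigned item i to category j; each row must sum to the same
    number of raters."""
    counts = np.asarray(counts, dtype=float)
    n = counts[0].sum()                            # raters per item
    p_j = counts.sum(axis=0) / counts.sum()        # overall category shares
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()  # observed vs chance agreement
    return (P_bar - P_e) / (1 - P_e)

# four raters, two alternatives per question, as in this experiment
kappa = fleiss_kappa([[4, 0], [0, 4], [3, 1]])
```

Perfect agreement on every question yields a Kappa of 1, and agreement at chance level yields a Kappa around 0.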
Table 5 Coefficients and p-values for the linear fixed effects model using agreement among raters as dependent variable and features as fixed effects.

Feature        Coefficient  P-value
ΔF             0.496        <0.001
Fbest          –0.092       0.423
Gold-MSIavg    –0.071       0.004
Gold-MSIstd    –0.016       0.778
Difficultyavg  –0.176       0.003

Once again, we observe that the bigger the difference in Fn,On, the higher the agreement among raters. However, this time, the Fn,On of the best solution does not seem to have a significant effect (noting that we also have many fewer data points). Raters also tend to disagree more with each other when the reported difficulty is higher on average. It also appears that when raters have a high average Gold-MSI, they tend to disagree more with each other. This could be because trained musicians favour different aspects of music (rhythm rather than melody, for instance) when making a choice. Disparity in Gold-MSI among raters has no significant effect on whether they agree.

### 4.7 Discussion

It appears that the best correlation with ratings is achieved for much higher tolerance thresholds than what is usually used for transcription system evaluation, both for Fn,On and Fn,OnOff. This suggests that people are generally relatively forgiving with respect to onset precision, and probably rely on aspects of the music other than onset and offset precision to make their choices. Moreover, the OnOff-Note metric, presented as the most perceptually relevant evaluation metric by Hawthorne et al. (2018), is actually not the best metric in terms of agreement with human ratings, at least in the case of piano music. On-Note metrics should be favoured, though this may relate to the focus on piano, which generally has very salient onsets but less clear offsets, especially for long notes.
OnOff-Note metrics are still useful from an engineering perspective, as they represent a meaningful objective that is difficult to achieve, but they are not the most representative indicator of the perceptual quality of a transcription system. Figure 7 also shows that when the difference in Fn,On is smaller than 10%, raters confidently disagree with Fn,On as to which transcription is best nearly 40% of the time. This means that in these cases, Fn,On should not be considered a good descriptor of the quality of a transcription, at least from a perceptual point of view. This is particularly worrying, as differences between systems are very often of the order of a few percentage points. On the other hand, we compare short segments, which means that a few errors can influence Fn,On greatly, while AMT systems are often compared over hours-long datasets. Also, in these difficult cases, raters tend to disagree more with each other, so personal judgement also comes into play. In summary, the majority of the previous analysis seems to indicate that Fn,On is a good enough metric in clear-cut cases where the differences in performance are large, but it should probably be treated with caution for small differences between AMT systems.

## 5 Defining a new metric

Given the relatively low agreement between ratings and current evaluation metrics, in particular in borderline cases, we propose to define a new evaluation metric based on the ratings. The general idea is to compute a set of musical features on pairs (AMT output, target), and then train a classifier to output a value between 0 and 1 for each pair based on these features, using the ratings as training data. We first consider feedback from participants. Out of all participants, twelve left comments related to their decision-making strategies.
The melody was mentioned as important in nine comments, making it the most important aspect according to comments, followed by rhythmic aspects (beat/meter/tempo, eight mentions) and harmony (four mentions). Some comments also mentioned higher-level, less clearly defined aspects of music: three comments mentioned that the “overall impression” was most important, and two comments mentioned the presence of major artefacts or out-of-key notes. Overall, three comments mentioned explicitly that the presence of errors was not important as long as other aspects of the music were preserved, and most comments mentioned combinations of the above factors.

### 5.2 Feature description

From the previous comments, we define several features to capture various aspects of music, as well as typical AMT mistakes. In the following, we provide high-level definitions for each of these features. Full definitions can be found in the technical report accompanying this paper (Ycart et al., 2020).

#### 5.2.1 Mistakes in highest and lowest notes

We use the highest and lowest notes at any time, defined with a skyline approach, as a proxy for the melody and the bass line, respectively. We define these metrics both framewise and notewise. For the highest-note metric, we define true positives and false negatives as notes among the highest notes of the target that have been correctly detected or missed, respectively. We count as a false positive any extra note that is above the highest note in the target. From these values, we compute P, R, and F as described in Section 2.1. The lowest-note metric is defined similarly. To better capture the score rather than the audio signal, we define the highest and lowest notes on targets without taking the pedal into account, while the pedal is used in the computation of Ff, Fn,On and Fn,OnOff.

#### 5.2.2 Loudness of false negatives

We assume that missing a note that was loud in the original piece is more salient than missing a quiet one.
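The skyline idea behind the highest-note metric of Section 5.2.1 can be illustrated as follows. This is a deliberately simplified framewise sketch operating on boolean (pitch × frame) piano rolls; the exact framewise and notewise definitions are those of the accompanying technical report.

```python
import numpy as np

def highest_note_f1(target_roll, output_roll):
    """Simplified framewise sketch of the highest-note (skyline) metric:
    per frame, compare the highest active pitch of the output with that of
    the target, and count extra output notes above the target skyline as
    false positives."""
    def skyline(roll):
        sky = np.full(roll.shape[1], -1)       # -1 marks silent frames
        for t in range(roll.shape[1]):
            active = np.flatnonzero(roll[:, t])
            if active.size:
                sky[t] = active.max()
        return sky

    tgt, out = skyline(target_roll), skyline(output_roll)
    has_tgt = tgt >= 0                          # frames where the target sounds
    tp = int(np.sum(has_tgt & (out == tgt)))    # skyline correctly detected
    fn = int(np.sum(has_tgt & (out != tgt)))    # target skyline missed
    fp = int(np.sum((out >= 0) & (out > tgt)))  # extra notes above the target
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```

A perfect transcription scores 1.0; a spurious note above the melody lowers both precision and recall in the frames it occupies.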
We define two corresponding metrics:

• Average false negative loudness: the average MIDI velocity of false negatives. Each MIDI velocity is normalised by the average velocity in the ground truth in a two-second window centred on the false negative onset.
• False negative loudness ratio: the average ratio between the loudness of false negatives and the maximum loudness of active notes at the time of the false negative onset. We take into account the decay of long notes when computing the maximum loudness at the time of the onset.

#### 5.2.3 Out-of-key false positives

We assume that out-of-key extra notes are much more noticeable than in-key ones. Instead of relying on key annotations, we define the key of a piece as the set of pitch classes that are active more than 10% of the time. The threshold of 10% was chosen heuristically. This definition shows its limits when there are key modulations. We also define a non-binary key-disagreement as the proportion of the time that a pitch class is inactive. We then define two sets of metrics:

• Binary out-of-key: we count the number of false positives whose pitch is out of key. We then compute the proportion of out-of-key false positives among all false positives, and among all notes in the output.
• Non-binary out-of-key: we compute the average key-disagreement of false positives, and the ratio between the sum of key-disagreements of false positives and the sum of key-disagreements of all detected notes.

#### 5.2.4 Repeated and merged notes

A common type of mistake in AMT is to output repeated (i.e. fragmented) notes, or incorrectly merged notes. We count as a repeated note any false positive that overlaps with a ground-truth note of the same pitch for at least 80% of its duration, and is preceded by at least one note of the same pitch that overlaps with the same ground-truth note.
Conversely, we count as a merged note any false negative that overlaps for at least 80% of its duration with a detected note of the same pitch and is preceded by at least one note of the same pitch that overlaps with the same detected note. In both cases, we compute the proportion of mistakes among all false positives, and among all detected notes.

#### 5.2.5 Specific pitch mistakes

It is also fairly common to have false positives at specific pitch intervals relative to ground-truth notes: semitone errors (neighbouring notes), octave errors (first partial), and 19-semitone errors (second partial). For these types of mistakes, we define both framewise and notewise metrics, for a given number of semitones ns (here ns ∈ {1, 12, 19}). For framewise metrics, we count a specific pitch false positive for any false positive such that there is a ground-truth note ns semitones above or below. For notewise metrics, we count a specific pitch false positive for any false positive that overlaps for at least 80% of its duration with a ground-truth note ns semitones above or below. For ns = 19, we only consider ground-truth notes 19 semitones below, as second-partial mistakes usually only happen 19 semitones above the ground truth. In both cases, we compute the proportion of mistakes among all false positives, and among all detected notes.

#### 5.2.6 Polyphony level difference

We assume that a mistake is more salient when it is the only note being played, and that it will also be noticeable if only a few notes of a big chord are transcribed. To account for this, we compute the absolute difference in polyphony level between the target and the output at each timestep. We then use the mean, standard deviation, minimum and maximum values of this time series as features.

#### 5.2.7 Rhythm histogram flatness

Rhythm is another important aspect of music according to raters. We thus define a metric to account for rhythmic imprecision as follows.
We first compute the inter-onset interval (IOI) sequence of the output and the target. We keep simultaneous onsets, resulting in an IOI of 0. We then compute a histogram of the IOI values, with a bin size of 10 ms for IOIs below 100 ms, and of 100 ms from 100 ms to 2 s (we drop IOIs above that value). This histogram should be more peaked for quantised MIDI files than for outputs with rhythmic imprecision. To describe this quantitatively, we compute the log-flatness, as defined for spectra (Johnston, 1988), of both histograms (output and target). We use as features the flatness of the output histogram, and the difference in flatness between the output and target histograms.

#### 5.2.8 Rhythm dispersion

We also propose another approach to characterising rhythm quality, based on K-means clustering (Murphy, 2012) of the IOI set. The general idea is to first run K-means clustering on the target IOIs, and then run K-means clustering on the output IOIs using the cluster centres of the target as initial values. We then compute the distance between cluster centres for the target and the output, as well as the relative difference in standard deviation within each cluster. We use as features the mean, maximum and minimum values across clusters. Choosing the number of clusters is necessarily heuristic. We determine the number of clusters by computing an IOI histogram as described in Section 5.2.7, but with wider bins, and choosing the peaks of that histogram as initial values for target IOI clustering.

### 5.3 Model fitting

Eventually, we aim to obtain a model that, given a set of features for a pair (AMT output, target), will output a scalar between 0 and 1. The main difficulty is that our dataset does not contain such absolute ratings; we only have pairwise comparison ratings. To achieve our goal, we draw inspiration from the contrastive loss approach (Hadsell et al., 2006).
The original contrastive loss is defined as follows: given two inputs x1 and x2, a model f, and a variable y such that y = 1 if x1 and x2 are considered similar and y = 0 otherwise:

(2) $L=y\,{\left(f\left({x}_{1}\right)-f\left({x}_{2}\right)\right)}^{2}+\left(1-y\right)\text{max}{\left(\alpha -\left|f\left({x}_{1}\right)-f\left({x}_{2}\right)\right|,0\right)}^{2}$

In other words, if x1 and x2 are similar, the loss tries to bring their outputs together, and if they are dissimilar, it tries to push them apart. The α parameter is called the margin: if the distance between f(x1) and f(x2) is already greater than α, they are not moved further apart. Given a target T and two transcriptions O1 and O2 of that target, we have, in place of x1 and x2, g(T, O1), the set of features computed on T and O1, and g(T, O2), the set of features computed on T and O2. In our ratings, all transcriptions are considered dissimilar, so y is always equal to 0. Moreover, we do not only want f(g(T, O1)) and f(g(T, O2)) to be different; we also care about their order. We thus introduce a new variable z such that z = 0 if O1 was chosen by the rater, and z = 1 if O2 was chosen. We want f(g(T, O1)) > f(g(T, O2)) if z = 0, and the other way around if z = 1. We thus define our loss function as:

(3) $L=\text{max}{\left(\alpha -z×\left(f\left({x}_{2}\right)-f\left({x}_{1}\right)\right)-\left(1-z\right)\left(f\left({x}_{1}\right)-f\left({x}_{2}\right)\right),0\right)}^{2}$

We incorporate the difficulty ratings in the margin: when ratings are confident, we use a higher margin. In practice, we use α = 0.5 when Difficulty = 1, and decrease it by 0.1 for each difficulty level, down to α = 0.1 when Difficulty = 5. We choose to use a simple model, allowing for interpretability of its parameters. Indeed, we want our metric to fit perceptual ratings, but also to serve as a diagnostic tool, allowing us to easily investigate the contribution of each feature to the end result. For that reason, we use logistic regression, taking as input all the above-defined features in addition to the benchmark metrics.
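The loss of Eq. (3) and the difficulty-dependent margin translate directly into code; a minimal numpy sketch (the function names are ours):

```python
import numpy as np

def margin(difficulty):
    """alpha = 0.5 at Difficulty = 1, decreasing by 0.1 per level to 0.1."""
    return 0.6 - 0.1 * difficulty

def ranking_loss(f_x1, f_x2, z, alpha):
    """Eq. (3): z = 0 if the rater chose O1 (we want f(x1) > f(x2)),
    z = 1 if O2 was chosen. The loss is zero once the chosen transcription
    scores at least `alpha` higher than the other."""
    chosen_margin = z * (f_x2 - f_x1) + (1 - z) * (f_x1 - f_x2)
    return np.maximum(alpha - chosen_margin, 0.0) ** 2

# correctly ordered pair with ample margin: no loss
l_ok = ranking_loss(0.9, 0.2, z=0, alpha=0.5)
# wrongly ordered pair: quadratic penalty
l_bad = ranking_loss(0.2, 0.9, z=0, alpha=0.5)
```

Here f_x1 and f_x2 stand for the model outputs f(g(T, O1)) and f(g(T, O2)); during training, the gradient of this loss with respect to the logistic regression weights is what pushes the chosen transcription's score above the other's.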
### 5.4 Experiments

#### 5.4.1 Setup

We use as input data to the logistic regression model the above features, along with the benchmark metrics defined in Section 2.1. We split our dataset into training, validation and test sets using a 90%-5%-5% partition, and use 20-fold cross-validation. The splits are made so that there is no overlap in targets between the three subsets. There can be some overlap in terms of raters, which means that the model could learn the preferences of some specific participants. Our main concern is that the model should generalise to unseen input, so we keep these ratings nonetheless. In each fold, the data is z-normalised (mean = 0 and variance = 1). The weights of the logistic regression are all initialised to 0. The model is then trained using the Adam optimiser (Kingma and Ba, 2015) with a learning rate of 0.01 for a total of 3000 batches with a batch size of 100, which in practice is enough to ensure convergence. The parameters that achieve the lowest loss on the validation set are then used for testing. In each fold, we train 100 versions of the model (training a model takes about 15 s), to account for potential variation in performance due to the randomness of the training process. We test whether our model agrees with ratings significantly better than Fn,On by running an independent-samples T-test on each fold, and then testing whether the resulting T-values are significantly different from 0. We use 20 folds to have more data points when running the second test, and thus better statistical power in our results. We focus the evaluation of our models on confident ratings: we compute the proportion of agreement between the output of our model and the confident ratings only, i.e. those with Difficulty = 1 or 2 (notated Aconf).

#### 5.4.2 Results and ablation study

All results, averaged across folds, are shown in Figure 8. The dotted line corresponds to Aconf for Fn,On.
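One plausible reading of this two-level testing procedure can be sketched with numpy. The exact samples entering each fold's independent-samples test are not fully specified here; treating per-run agreement values of the model and a baseline as the two samples is our assumption, and all numbers below are synthetic, for illustration only.

```python
import numpy as np

def welch_t(a, b):
    """Independent-samples (Welch) t statistic."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(0)
t_values = []
for _ in range(20):                              # one T-value per fold
    model_runs = rng.normal(0.89, 0.01, 100)     # agreement of 100 model runs
    baseline_runs = rng.normal(0.88, 0.01, 100)  # stand-in baseline agreement
    t_values.append(welch_t(model_runs, baseline_runs))
t_values = np.array(t_values)

# second level: one-sample t statistic of the 20 fold T-values against 0
t_second = t_values.mean() / (t_values.std(ddof=1) / np.sqrt(len(t_values)))
```

The point of the second level is that a consistent (even small) per-fold advantage yields fold T-values that are all on the same side of 0, which the one-sample test detects with the statistical power of 20 data points.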
Figure 8 Aconf measure for each tested configuration, averaged across folds. The dotted line represents Aconf for Fn,On. Descriptions of each configuration are given in Table 6. Colours represent the p-value when testing whether each metric is different from the “All” configuration. Asterisks represent results significantly different from All (*: p < 0.1, **: p < 0.05, ***: p < 0.01).

First, we train our model using all metrics. We manage to improve the agreement with the ratings slightly (1%) but significantly (p < 10⁻⁶), which is encouraging. It has to be noted that the model we use is very simple, and that more sophisticated models should be able to improve even further, though it may not be easy to achieve this without deteriorating interpretability. In what follows, we investigate feature importance. One approach would be to inspect the weights of the trained logistic regression. However, it can happen that one feature has a high weight in a given model, yet when it is removed, its absence is compensated by combinations of other features without decreasing performance. We thus favour an ablation approach to study how essential features are for modelling the ratings, removing groups of features from the feature set and re-training our model as in Section 5.4.1. Table 6 summarises the configurations we investigate.

Table 6 Description of each tested feature configuration.
Configuration  Removed features
All            None
NoBench        Benchmark metrics
NoFeatures     All features, except benchmark metrics
NoHighLow      Mistakes in highest and lowest notes
NoLoud         Loudness of false negatives
NoOutKey       Out-of-key false positives
NoRepeat       Repeated and merged notes
NoSpecific     Specific pitch mistakes
NoPoly         Polyphony level difference
NoRhythm       Rhythm histogram flatness and rhythm dispersion
NoFramewise    Framewise benchmark metrics, framewise highest and lowest note mistakes, framewise specific pitch errors, polyphony level difference, consonance measures
NoSpecOut      Specific pitch mistakes and out-of-key false positives

Three configurations perform significantly worse than All: NoFeatures, NoFramewise, and NoRhythm. Moreover, NoFeatures is the only configuration that does not perform significantly better than Fn,On (p = 0.33), which shows the usefulness of the feature set we have proposed. The low performance of NoRhythm compared to All shows the importance of the rhythm descriptors we used. This somewhat contradicts the results from Section 4.3: there, we found that high tolerance thresholds for onsets and offsets gave better agreement, which seemed to indicate that temporal aspects are not important to raters. We suggest that our rhythm descriptors better capture higher-level aspects of rhythm reported as important by raters, such as the presence of a steady pulse and meter, rather than the onset precision of individual notes. The fact that NoFramewise performs significantly worse than All shows that while Ff is indeed less correlated with ratings than Fn,On, some framewise metrics are useful and complementary to notewise metrics in modelling the ratings. On the other hand, it appears that NoHighLow is not significantly worse than All. Yet, melody was the musical aspect most mentioned in user comments.
We hypothesise that the reason this is not reflected in feature importance is that, for the vast majority of examples in our dataset, the highest-voice notewise F-measure, which best describes how well the melody was transcribed, is equal to 1. The model probably learns to give low importance to that feature, as it is often constant. Another hypothesis is that our skyline approach to defining the melody and the bass line might not correspond to perception. In the future, we might have to rely, for instance, on automatic melody estimation methods for symbolic music to better represent the melody. Interestingly, it appears that some of the metrics we designed, in particular the out-of-key false positives and specific pitch errors, are actually counter-productive: removing them appears to increase Aconf, though not significantly (p = 0.40 and p = 0.76, respectively). We hypothesise that this is due to the definition of these metrics. For instance, if there are no specific pitch mistakes, this could either mean that there were no false positives (which is good), or that there were a lot of false positives, none of which corresponded to a specific pitch (which is bad). This could lead to an interaction between specific pitch mistakes and benchmark precision metrics (e.g. penalise low specific pitch and low precision, but not low specific pitch and high precision). The same can be said of out-of-key false positives. However, such interactions cannot be represented by our model (a simple logistic regression without interaction terms). As a result, out-of-key and specific pitch mistakes end up distracting the model more than they help. When removing both of these metrics (the NoSpecOut configuration), our model reaches an Aconf of 89.1%. Removing other features that have either no impact or a negative impact on Aconf also seems to slightly decrease Aconf compared to NoSpecOut, but again, not significantly.
We make a pre-trained version of our metric available for future use (NoSpecOut configuration). We train it using all the data, without holding out a validation or test set. Experiments show that in practice, the model does not overfit the training set: the training and validation losses are similar. We thus choose as final parameters those that minimise the loss over the whole training set. Given that we do not keep a held-out test set, we cannot report test performance of this specific released model.

## 6 Discussion

In this study, we presented a listening test in which pairs of AMT system outputs were rated. We compared the perceptual ratings to the results given by benchmark evaluation metrics. We have seen that most of the time, ratings agree with benchmark evaluation metrics, but in some cases (when both transcriptions have low Fn,On, and when the difference in Fn,On between the two transcriptions is low), the agreement greatly decreases. We have proposed new quantitative measures describing musical features, and used them to define a new metric that agrees with ratings significantly better than Fn,On. We also provide greater insight into which features were important to raters through an ablation study, illustrating in particular the importance of rhythm-related aspects.

Various aspects of this study could be improved. One of the most important would be to try more sophisticated models (e.g. artificial neural networks) to define a new metric. Indeed, the current approach only brings a marginal improvement in Aconf compared to Fn,On; more involved approaches could further improve agreement with ratings. In particular, it would be theoretically possible to define a metric without using handcrafted features, by feeding the target and output directly into the system, but this approach would require more ratings to be trained robustly, and would lack interpretability.
Still, some of the features might not have a linear influence on the quality of the transcription, and some may interact. Incorporating such factors into a model may improve performance. We chose a simple but interpretable logistic regression, which allowed us to verify the contribution of each metric to the final score easily. Moreover, although we believe that absolute similarity rating between two excerpts is a difficult and ill-defined task (Allan et al., 2007; Flexer and Grill, 2016), it could be interesting to develop a listening test based on absolute similarity ratings between a reference and a single transcription. Provided inter-rater agreement is high enough, it would be interesting to train a regression model to approximate these ratings, and compare the results to those obtained with the current ranking paradigm. Deeper investigation of the reasons for disagreement between ratings and Fn,On would also be useful to motivate the creation of new metrics. One way to investigate this would be to reproduce the above ablation study, but with a model trained and tested exclusively on ratings that disagree with Fn,On, although the lack of data could make it difficult to achieve significant results, requiring collection of further ratings. The generalisability of the metric we have designed should also be investigated. First, this metric was only designed for Western classical piano music. It would be interesting to investigate the extent to which it could be applied to other genres (e.g. jazz, non-Western music) and other instruments (e.g. guitar, multi-instrument ensembles). The protocol presented above could be applied with different stimuli to design metrics for other contexts, and potentially define a unified metric that works in every situation. But even in the context of Western classical piano music, some further experiments would have to be run to test the generalisability of our metric. 
In particular, this metric was trained only on short segments; it remains to be seen whether it scales properly to longer pieces. One way to test our metric would be to run another similar listening test, once again using pairwise comparisons, but choosing specific, potentially artificial stimuli to investigate specific points of disagreement: for instance, pairs of examples where our metric and Fn,On disagree as to which is best. By choosing representative examples with the specific aim of comparing these two metrics, much less data would be needed to validate which metric correlates most closely with human perception. Finally, this metric was designed to reflect perceptual similarity between the AMT output and the target. Such an evaluation criterion might not be relevant for every application. It is important when the overall musical quality of the transcription matters more than the precise transcription of every note, for instance in the context of music creation and production (e.g. quick dictation of musical ideas) or tasks such as automatic accompaniment or cover detection. However, it might not be relevant in cases such as music education, where exact transcription of every note is paramount to properly assess the mistakes made by a student. In this case, reaching an Fn,OnOff of 1 should be the main objective, regardless of how the transcription sounds. In that regard, our metric complements the usual benchmark metrics in reflecting the perceptual quality of AMT outputs, but does not replace them.

## 7 Reproducibility

To allow further study of the data collected, we make it fully available, along with the stimuli and the locations in seconds of the manually selected cut points: https://zenodo.org/record/3746863. We also provide the code of the website: https://github.com/adrienycart/AMT_perception_website. A Python implementation of the features used and the pre-trained metric can be found here: https://github.com/adrienycart/PEAMT.
## Acknowledgements

The authors would like to thank Li Su and Tian Cheng for sharing their system implementations, and Rémi de Fleurian, Peter Harrison, Daniel Müllensiefen, Patrick E. Savage, Tillman Weyde, and Daniel Wolff for their useful suggestions on the design of this study. AY is supported by a QMUL EECS Research Studentship. LL is a research student at the UKRI Centre for Doctoral Training in Artificial Intelligence and Music and is supported by a China Scholarship Council and Queen Mary University of London joint PhD scholarship. EB is supported by UK RAEng Research Fellowship RF/128.

## Competing Interests

The authors have no competing interests to declare.

## References

1. Allali, J., Ferraro, P., Hanna, P., & Robine, M. (2009). Polyphonic alignment algorithms for symbolic music retrieval. In 6th International Symposium on Auditory Display, CMMR/ICAD, pages 466–482. DOI: https://doi.org/10.1007/978-3-642-12439-6_24
2. Allan, H., Müllensiefen, D., & Wiggins, G. A. (2007). Methodological considerations in studies of musical similarity. In Proceedings of the 8th International Conference on Music Information Retrieval, ISMIR, pages 473–478.
3. Allison, P. D. (2009). Fixed Effects Regression Models, volume 160. Sage Publications. DOI: https://doi.org/10.4135/9781412993869
4. Bay, M., Ehmann, A. F., & Downie, J. S. (2009). Evaluation of multiple-f0 estimation and tracking systems. In Proceedings of the 10th International Society for Music Information Retrieval Conference, ISMIR, pages 315–320.
5. Benetos, E., Dixon, S., Duan, Z., & Ewert, S. (2019). Automatic music transcription: An overview. IEEE Signal Processing Magazine, 36(1), 20–30. DOI: https://doi.org/10.1109/MSP.2018.2869928
6. Bittner, R. M., & Bosch, J. J. (2019). Generalized metrics for single-f0 estimation evaluation. In Proceedings of the 20th International Society for Music Information Retrieval Conference, ISMIR, pages 738–745.
7. Cheng, T., Mauch, M., Benetos, E., & Dixon, S. (2016). An attack/decay model for piano transcription. In Proceedings of the 17th International Society for Music Information Retrieval Conference, ISMIR, pages 584–590.
8. Cogliati, A., & Duan, Z. (2017). A metric for music notation transcription accuracy. In Proceedings of the 18th International Society for Music Information Retrieval Conference, ISMIR, pages 407–413.
9. Daniel, A., Emiya, V., & David, B. (2008). Perceptually-based evaluation of the errors usually made when automatically transcribing music. In Proceedings of the 9th International Conference on Music Information Retrieval, ISMIR, pages 550–556.
10. Efron, B. (1992). Bootstrap methods: Another look at the jackknife. In Breakthroughs in Statistics, pages 569–593. Springer. DOI: https://doi.org/10.1007/978-1-4612-4380-9_41
11. Emiya, V., Badeau, R., & David, B. (2010). Multipitch estimation of piano sounds using a new probabilistic spectral smoothness principle. IEEE Transactions on Audio, Speech and Language Processing, TASLP, 18(6), 1643–1654. DOI: https://doi.org/10.1109/TASL.2009.2038819
12. Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378. DOI: https://doi.org/10.1037/h0031619
13. Flexer, A., & Grill, T. (2016). The problem of limited inter-rater agreement in modelling music similarity. Journal of New Music Research, 45(3), 239–251. DOI: https://doi.org/10.1080/09298215.2016.1200631
14. Hadsell, R., Chopra, S., & LeCun, Y. (2006). Dimensionality reduction by learning an invariant mapping. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR, pages 1735–1742. DOI: https://doi.org/10.1109/CVPR.2006.100
15. Hawthorne, C., Elsen, E., Song, J., Roberts, A., Simon, I., Raffel, C., Engel, J. H., Oore, S., & Eck, D. (2018). Onsets and frames: Dual-objective piano transcription. In Proceedings of the 19th International Society for Music Information Retrieval Conference, ISMIR, pages 50–57.
16. Hawthorne, C., Stasyuk, A., Roberts, A., Simon, I., Huang, C. A., Dieleman, S., Elsen, E., Engel, J. H., & Eck, D. (2019). Enabling factorized piano music modeling and generation with the MAESTRO dataset. In 7th International Conference on Learning Representations, ICLR.
17. Johnston, J. D. (1988). Transform coding of audio signals using perceptual noise criteria. IEEE Journal on Selected Areas in Communications, 6(2), 314–323. DOI: https://doi.org/10.1109/49.608
18. Kelz, R., Dorfer, M., Korzeniowski, F., Böck, S., Arzt, A., & Widmer, G. (2016). On the potential of simple framewise approaches to piano transcription. In Proceedings of the 17th International Society for Music Information Retrieval Conference, ISMIR, pages 475–481.
19. Kingma, D. P., & Ba, J. (2015). Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR.
20. Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology.
21. McLeod, A., & Steedman, M. (2018). Evaluating automatic polyphonic music transcription. In Proceedings of the 19th International Society for Music Information Retrieval Conference, ISMIR, pages 42–49.
22. Molina, E., Barbancho, A. M., Tardón, L. J., & Barbancho, I. (2014). Evaluation framework for automatic singing transcription. In Proceedings of the 15th International Society for Music Information Retrieval Conference, ISMIR, pages 567–572.
23. Mongeau, M., & Sankoff, D. (1990). Comparison of musical sequences. Computers and the Humanities, 24(3), 161–175. DOI: https://doi.org/10.1007/BF00117340
24. Müllensiefen, D., Gingras, B., Musil, J., & Stewart, L. (2014). The musicality of non-musicians: An index for assessing musical sophistication in the general population. PLoS One, 9(2). DOI: https://doi.org/10.1371/journal.pone.0089642
25. Müllensiefen, D., Gingras, B., Stewart, L., & Musil, J. (2011). The Goldsmiths Musical Sophistication Index (Gold-MSI): Technical report and documentation v1.0. Technical report, Goldsmiths, University of London.
26. Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. MIT Press.
27. Raffel, C., McFee, B., Humphrey, E. J., Salamon, J., Nieto, O., Liang, D., & Ellis, D. P. W. (2014). mir_eval: A transparent implementation of common MIR metrics. In Proceedings of the 15th International Society for Music Information Retrieval Conference, ISMIR, pages 367–372.
28. Schramm, R., Nunes, H. D. S., & Jung, C. R. (2016). Audiovisual tool for solfège assessment. ACM Transactions on Multimedia Computing, Communications, and Applications, 13(1). DOI: https://doi.org/10.1145/3007194
29. Su, L., & Yang, Y.-H. (2015). Combining spectral and temporal representations for multipitch estimation of polyphonic music. IEEE/ACM Transactions on Audio, Speech and Language Processing, TASLP, 23(10), 1600–1612. DOI: https://doi.org/10.1109/TASLP.2015.2442411
30. Velardo, V., Vallati, M., & Jan, S. (2016). Symbolic melodic similarity: State of the art and future challenges. Computer Music Journal, 40(2), 70–83. DOI: https://doi.org/10.1162/COMJ_a_00359
31. Ycart, A., & Benetos, E. (2018). A-MAPS: Augmented MAPS dataset with rhythm and key annotations. In 19th International Society for Music Information Retrieval Conference, ISMIR, Late Breaking and Demos Papers.
32. Ycart, A., Liu, L., Benetos, E., & Pearce, M. T. (2020). Musical features for automatic music transcription evaluation. Technical report, Queen Mary University of London, UK.
# A toy is in the form of a cone of base radius 3.5 cm mounted on a hemisphere of base diameter 7 cm. If the total height of the toy is 15.5 cm, find the total surface area of the toy (Use π = 22/7) - Mathematics

A toy is in the form of a cone of base radius 3.5 cm mounted on a hemisphere of base diameter 7 cm. If the total height of the toy is 15.5 cm, find the total surface area of the toy (Use π = 22/7)

#### Solution

Let r and h be the radius and height of the cone mounted on the hemisphere, respectively, and let R be the radius of the hemisphere. Since the hemisphere's base diameter is 7 cm, r = R = 3.5 cm = 7/2 cm.

Height of the cone + radius of the hemisphere = total height of the toy:

h + 3.5 cm = 15.5 cm, so h = 15.5 − 3.5 = 12 cm

Let l be the slant height of the cone. Then

l² = r² + h² = (7/2)² + 12² = 49/4 + 144 = 625/4, so l = 25/2 cm

Total surface area of the toy = curved surface area of the cone + curved surface area of the hemisphere

= πrl + 2πr²
= πr(l + 2r)
= 22/7 × 7/2 × (25/2 + 2 × 7/2)
= 22/7 × 7/2 × 39/2
= 214.5 cm²

Concept: Surface Area of a Combination of Solids
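The hand computation above can be checked with exact rational arithmetic; this short script (variable names are our own, chosen for illustration) recomputes the slant height and total surface area using the problem's π = 22/7:

```python
from fractions import Fraction
from math import isqrt

# Given data, kept as exact fractions to mirror the hand computation
pi = Fraction(22, 7)      # the problem's approximation of pi
r = Fraction(7, 2)        # cone radius = hemisphere radius (cm)
h = Fraction(31, 2) - r   # total height 15.5 cm minus hemisphere radius -> 12 cm

# Slant height: l^2 = r^2 + h^2 = 625/4, an exact square, so l = 25/2
l_sq = r**2 + h**2
l = Fraction(isqrt(l_sq.numerator), isqrt(l_sq.denominator))
assert l * l == l_sq  # confirm 625/4 really is a perfect square

# Total surface area = pi*r*l + 2*pi*r^2 = pi*r*(l + 2r)
area = pi * r * (l + 2 * r)
print(float(l), float(area))  # 12.5 214.5
```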
# American Institute of Mathematical Sciences

May 2011, 10(3): 873-884. doi: 10.3934/cpaa.2011.10.873

## On $SL(2, R)$ valued cocycles of Hölder class with zero exponent over Kronecker flows

1 Dipartimento di Sistemi e Informatica, Università di Firenze, 50139 Firenze

2 Department of Mathematics, Rutgers University, Camden NJ 08102, United States

Received October 2008; Revised March 2009; Published December 2010

We show that a generic $SL(2,R)$ valued cocycle in the class of $C^r$ ($0 < r < 1$) cocycles based on a rotation flow on the $d$-torus is either uniformly hyperbolic or has zero Lyapunov exponents, provided that the components of the winding vector $\bar \gamma = (\gamma^1,\cdots,\gamma^d)$ of the rotation flow are rationally independent and satisfy the following super-Liouvillian condition:

$|\gamma^i - \frac{p^i_n}{q_n}| \leq Ce^{-q^{1+\delta}_n}, \quad 1\leq i\leq d, \ n\in \mathbb{N},$

where $C > 0$ and $\delta > 0$ are some constants and $p^i_n, q_n$ are some sequences of integers with $q_n\to \infty$.

Citation: Russell Johnson, Mahesh G. Nerurkar. On $SL(2, R)$ valued cocycles of Hölder class with zero exponent over Kronecker flows. Communications on Pure & Applied Analysis, 2011, 10 (3) : 873-884. doi: 10.3934/cpaa.2011.10.873
## 2. Coordinates Computations

### a. Forward Computation

A forward computation uses a starting coordinate pair along with a distance and direction to determine another coordinate pair. In Figure F-5, starting with coordinates at P, compute the coordinates at Q.

Figure F-5 Forward Computation

Using Equations D-1 and D-2, the latitude and departure of the line are:

Lat_PQ = L_PQ × cos(Dir_PQ)
Dep_PQ = L_PQ × sin(Dir_PQ)

where L is the line length and Dir is the line direction.

To compute X and Y coordinates (Equations F-1 and F-2):

Y_Q = Y_P + Lat_PQ
X_Q = X_P + Dep_PQ

To compute N and E coordinates (Equations F-3 and F-4):

N_Q = N_P + Lat_PQ
E_Q = E_P + Dep_PQ

For a complete traverse, Figure F-6:

Figure F-6 Coordinates Around a Loop Traverse

Starting with known coordinates at T (N_T, E_T) and applying Equations F-3 and F-4 successively around the traverse, compute the coordinates of Q, then R, then S, and finally back into T.

Computing back into T gives a math check: the end coordinates should be the same as the start coordinates. In order for the math check to be met, adjusted lats and deps must be used.

Where do the start coordinates come from? They can be assumed, or they can come from a formal coordinate system. We'll discuss formal coordinate systems in a later topic.
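The forward computation is simple to script. A minimal sketch (function and variable names are our own, not from the text); the direction is measured in degrees clockwise from north, so latitude uses the cosine and departure uses the sine, as in Equations D-1, D-2, F-3 and F-4:

```python
import math

def forward(n, e, length, direction_deg):
    """Forward computation: coordinates of the next point from a start
    point (n, e), a line length, and a direction in degrees clockwise
    from north, per Equations F-3 and F-4."""
    d = math.radians(direction_deg)
    lat = length * math.cos(d)   # latitude  (Equation D-1)
    dep = length * math.sin(d)   # departure (Equation D-2)
    return n + lat, e + dep

# Walk a small square loop traverse and check closure back at the start.
point = (1000.0, 5000.0)
for length, direction in [(100, 0), (100, 90), (100, 180), (100, 270)]:
    point = forward(*point, length, direction)

print(point)  # should close back on (1000.0, 5000.0) within rounding
```

Because this toy traverse is already consistent, the math check closes; with real measurements, adjusted lats and deps would be needed for the closure to hold exactly.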
# Permutations/Combinations

**srini123** (Senior Manager):

In how many ways can the letters of the word "COMPUTER" be arranged if

1. M must always occur at the third place?
2. Vowels occupy the even positions?

Ans 1) [Reveal] Spoiler: 5040 (= 7!, since fixing M in the third place leaves 7 letters for the 7 remaining places)

Ans 2) [Reveal] Spoiler: 4 * 720 = 2880

Someone please explain 2, I understand 1.

**jallenmorris** (SVP):

#2 is confusingly worded. It says "vowels have to occupy the even positions", but there are not enough vowels to occupy all the even positions. This is where the 4 comes in. First, if you assume that the vowels occupy slots 2, 4 and 6, that leaves slot 8 open for a consonant. The consonants can be arranged in 5! ways, and the vowels in 3! ways. Together, that would be 120 * 6 = 720. But that assumes the vowels occupy only slots 2, 4 and 6. The three vowels can actually be placed in any 3 of the 4 available even slots. Choosing 3 ordered slots out of 4 gives 4 possible slot patterns for any fixed vowel order (remember that in selecting 3, you're selecting 1 that is left out). This means that for any ordering of the vowels, you multiply by 4 to get the total.

Example: the vowels are O, U, E. One placement is 2=O, 4=U, 6=E, 8={non-vowel}. With the same order OUE, you could instead have 2={non-vowel}, 4=O, 6=U, 8=E: the same order, but a different placement. So 4 * 720 accounts for the fact that the vowels can be placed in any 3 of the 4 open "even" spots. Hope this helps.

**srini123:** Thanks jallen, that makes sense. +1 Kudos

**atish** (Manager):

I know this has already been answered, but this seemed like an easier solution. First find the number of ways the 3 vowels (E, O, U) can occupy the even positions 2, 4, 6, 8. This is picking 3 positions out of the 4 where order is important, hence 4P3 = 24. The other 5 consonants can occupy any of the remaining 5 positions, hence 5!. The answer is 4P3 * 5! = 24 * 120 = 4 * 720 = 2880.

**jallenmorris:** My solution is longer only because I went to great lengths (as I usually do) to make sure we understand WHY we must answer a problem in a certain way. That way, when someone gets a similar but not identical problem, they have the confidence to answer it too. Guess it's just the teacher in me.

**srini123:** Thank you both, that's a lot of help.
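Both counts are small enough to verify by brute force. This quick check (our own addition, not from the thread) enumerates all 8! arrangements of the distinct letters of COMPUTER:

```python
from itertools import permutations

letters = "COMPUTER"   # 8 distinct letters
vowels = set("OUE")

m_third = 0      # arrangements with M in the third place
vowels_even = 0  # arrangements with every vowel in an even (1-based) position

for p in permutations(letters):
    if p[2] == "M":                 # third place = index 2 (0-based)
        m_third += 1
    # 1-based even positions 2, 4, 6, 8 are 0-based indices 1, 3, 5, 7,
    # so "vowels in even positions" means no vowel at indices 0, 2, 4, 6
    if all(p[i] not in vowels for i in (0, 2, 4, 6)):
        vowels_even += 1

print(m_third, vowels_even)  # 5040 2880
```

The counts agree with the thread: 7! = 5040 for the first question and 4P3 × 5! = 2880 for the second.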
# Search by Topic

#### Resources tagged with Mathematical reasoning & proof similar to Function Pyramids

### There are 184 results

Broad Topics > Using, Applying and Reasoning about Mathematics > Mathematical reasoning & proof

### Problem Solving, Using and Applying and Functional Mathematics
##### Stage: 1, 2, 3, 4 and 5 Challenge Level:
Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information.

### Plus or Minus
##### Stage: 5 Challenge Level:
Make and prove a conjecture about the value of the product of the Fibonacci numbers $F_{n+1}F_{n-1}$.

### Sprouts Explained
##### Stage: 2, 3, 4 and 5
This article invites you to get familiar with a strategic game called "sprouts". The game is simple enough for younger children to understand, and has also provided experienced mathematicians with. . . .

### To Prove or Not to Prove
##### Stage: 4 and 5
A serious but easily readable discussion of proof in mathematics with some amusing stories and some interesting examples.

### The Triangle Game
##### Stage: 3 and 4 Challenge Level:
Can you discover whether this is a fair game?

### Ordered Sums
##### Stage: 4 Challenge Level:
Let a(n) be the number of ways of expressing the integer n as an ordered sum of 1's and 2's. Let b(n) be the number of ways of expressing n as an ordered sum of integers greater than 1. (i) Calculate. . . .

### Leonardo's Problem
##### Stage: 4 and 5 Challenge Level:
A, B & C own a half, a third and a sixth of a coin collection. Each grab some coins, return some, then share equally what they had put back, finishing with their own share. How rich are they?

### Cube Net
##### Stage: 5 Challenge Level:
How many tours visit each vertex of a cube once and only once? How many return to the starting point?
### Sperner's Lemma
##### Stage: 5
An article about the strategy for playing The Triangle Game which appears on the NRICH site. It contains a simple lemma about labelling a grid of equilateral triangles within a triangular frame.

##### Stage: 5 Challenge Level:
Find all positive integers a and b for which the two equations: x^2-ax+b = 0 and x^2-bx+a = 0 both have positive integer solutions.

### The Great Weights Puzzle
##### Stage: 4 Challenge Level:
You have twelve weights, one of which is different from the rest. Using just 3 weighings, can you identify which weight is the odd one out, and whether it is heavier or lighter than the rest?

### Iffy Logic
##### Stage: 4 Short Challenge Level:
Can you rearrange the cards to make a series of correct mathematical statements?

### Archimedes and Numerical Roots
##### Stage: 4 Challenge Level:
The problem is: how did Archimedes calculate the lengths of the sides of the polygons, which required him to be able to calculate square roots?

### Symmetric Tangles
##### Stage: 4
The tangles created by the twists and turns of the Conway rope trick are surprisingly symmetrical. Here's why!

### Golden Eggs
##### Stage: 5 Challenge Level:
Find a connection between the shape of a special ellipse and an infinite string of nested square roots.

### AMGM
##### Stage: 4 Challenge Level:
Choose any two numbers. Call them a and b. Work out the arithmetic mean and the geometric mean. Which is bigger? Repeat for other pairs of numbers. What do you notice?

### Pair Squares
##### Stage: 5 Challenge Level:
The sum of any two of the numbers 2, 34 and 47 is a perfect square. Choose three square numbers and find sets of three integers with this property. Generalise to four integers.

### Proofs with Pictures
##### Stage: 5
Some diagrammatic 'proofs' of algebraic identities and inequalities.
##### Stage: 4 Challenge Level:
Four jewellers possessing respectively eight rubies, ten sapphires, a hundred pearls and five diamonds, presented, each from his own stock, one apiece to the rest in token of regard; and they. . . .

### Stonehenge
##### Stage: 5 Challenge Level:
Explain why, when moving heavy objects on rollers, the object moves twice as fast as the rollers. Try a similar experiment yourself.

### N000ughty Thoughts
##### Stage: 4 Challenge Level:
Factorial one hundred (written 100!) has 24 noughts when written in full, and 1000! has 249 noughts. Convince yourself that the above is true. Perhaps your methodology will help you find the. . . .

### Square Pair Circles
##### Stage: 5 Challenge Level:
Investigate the number of points with integer coordinates on circles with centres at the origin for which the square of the radius is a power of 5.

### Number Rules - OK
##### Stage: 4 Challenge Level:
Can you convince me of each of the following: If a square number is multiplied by a square number the product is ALWAYS a square number...

### Natural Sum
##### Stage: 4 Challenge Level:
The picture illustrates the sum 1 + 2 + 3 + 4 = (4 x 5)/2. Prove the general formula for the sum of the first n natural numbers and the formula for the sum of the cubes of the first n natural. . . .

### Exhaustion
##### Stage: 5 Challenge Level:
Find the positive integer solutions of the equation (1+1/a)(1+1/b)(1+1/c) = 2

### Three Frogs
##### Stage: 4 Challenge Level:
Three frogs hopped onto the table: a red frog on the left, a green in the middle and a blue frog on the right. Then the frogs started jumping randomly over any adjacent frog. Is it possible for them to. . . .

### Rotating Triangle
##### Stage: 3 and 4 Challenge Level:
What happens to the perimeter of triangle ABC as the two smaller circles change size and roll around inside the bigger circle?
### The Root Cause
##### Stage: 5 Challenge Level:
Prove that if a is a natural number and the square root of a is rational, then it is a square number (an integer n^2 for some integer n).

### Perfectly Square
##### Stage: 4 Challenge Level:
The sum of the squares of three related numbers is also a perfect square - can you explain why?

### Picture Story
##### Stage: 4 Challenge Level:
Can you see how this picture illustrates the formula for the sum of the first six cube numbers?

### Magic W Wrap Up
##### Stage: 5 Challenge Level:
Prove that you cannot form a Magic W with a total of 12 or less or with a total of 18 or more.

### Air Nets
##### Stage: 2, 3, 4 and 5 Challenge Level:
Can you visualise whether these nets fold up into 3D shapes? Watch the videos each time to see if you were correct.

### Interpolating Polynomials
##### Stage: 5 Challenge Level:
Given a set of points (x,y) with distinct x values, find a polynomial that goes through all of them, then prove some results about the existence and uniqueness of these polynomials.

### Whole Number Dynamics II
##### Stage: 4 and 5
This article extends the discussions in "Whole number dynamics I". Continuing the proof that, for all starting points, the Happy Number sequence goes into a loop or homes in on a fixed point.

### Dodgy Proofs
##### Stage: 5 Challenge Level:
These proofs are wrong. Can you see why?

### Advent Calendar 2011 - Secondary
##### Stage: 3, 4 and 5 Challenge Level:
Advent Calendar 2011 - a mathematical activity for each day during the run-up to Christmas.

##### Stage: 3 and 4 Challenge Level:
Powers of numbers behave in surprising ways. Take a look at some of these and try to explain why they are true.

### Diverging
##### Stage: 5 Challenge Level:
Show that for natural numbers x and y, if x/y > 1 then x/y > (x+1)/(y+1) > 1. Hence prove that the product for i=1 to n of [(2i)/(2i-1)] tends to infinity as n tends to infinity.
### Whole Number Dynamics III
##### Stage: 4 and 5
In this third of five articles we prove that whatever whole number we start with for the Happy Number sequence we will always end up with some set of numbers being repeated over and over again.

### Yih or Luk Tsut K'i or Three Men's Morris
##### Stage: 3, 4 and 5 Challenge Level:
Some puzzles requiring no knowledge of knot theory, just a careful inspection of the patterns. A glimpse of the classification of knots and a little about prime knots, crossing numbers and. . . .

### Russian Cubes
##### Stage: 4 Challenge Level:
How many different cubes can be painted with three blue faces and three red faces? A boy (using blue) and a girl (using red) paint the faces of a cube in turn so that the six faces are painted. . . .

### Doodles
##### Stage: 4 Challenge Level:
A 'doodle' is a closed intersecting curve drawn without taking pencil from paper. Only two lines cross at each intersection or vertex (never 3), that is, the vertex points must be 'double points' not. . . .

### Proximity
##### Stage: 4 Challenge Level:
We are given a regular icosahedron having three red vertices. Show that it has a vertex that has at least two red neighbours.

### How Many Solutions?
##### Stage: 5 Challenge Level:
Find all the solutions to this equation.

### Rational Roots
##### Stage: 5 Challenge Level:
Given that a, b and c are natural numbers show that if sqrt a + sqrt b is rational then it is a natural number. Extend this to 3 variables.

### Knight Defeated
##### Stage: 4 Challenge Level:
The knight's move on a chess board is 2 steps in one direction and one step in the other direction. Prove that a knight cannot visit every square on the board once and only once (a tour) on a 2 by n board. . . .

### A Long Time at the Till
##### Stage: 4 and 5 Challenge Level:
Try to solve this very difficult problem and then study our two suggested solutions. How would you use your knowledge to try to solve variants on the original problem?
### Transitivity
##### Stage: 5
Suppose A always beats B and B always beats C; would you then expect A to beat C? Not always! What seems obvious is not always true. Results always need to be proved in mathematics.

### Middle Man
##### Stage: 5 Challenge Level:
Mark a point P inside a closed curve. Is it always possible to find two points that lie on the curve, such that P is the mid point of the line joining these two points?

### Rolling Coins
##### Stage: 4 Challenge Level:
A blue coin rolls round two yellow coins which touch. The coins are the same size. How many revolutions does the blue coin make when it rolls all the way round the yellow coins? Investigate for a. . . .
Torsion freeness and injectivity of morphisms of some coherent sheaves

Hi, everybody. My problem is to give a relative version of the well-known fact that a generic isomorphism between torsion-free coherent sheaves on a complex space (or in a more general setting, under some reasonable assumptions) is necessarily injective.

The claim: Let $X$ and $S$ be complex spaces of finite dimension (perhaps reduced) (or locally noetherian excellent schemes), and let $\pi:X\rightarrow S$ be an open and surjective map with constant fiber dimension. Let $A$ and $B$ be two coherent sheaves on $X$ which satisfy the following properties:

1) There is an open dense subset $U$ of $X$ on which the restrictions of $A$ and $B$ are canonically isomorphic.

2) For every open set $V$ of $X$ such that $V\cap \pi^{-1}(s)$ is dense in $\pi^{-1}(s)$, the natural restriction morphism $\Gamma(X, A)\rightarrow \Gamma(V, A)$ is injective (and the same is true for $B$).

Then any morphism $f:A\rightarrow B$ which is an isomorphism on $U$ is injective.

P.S.: Of course, the problem is local, and we can translate this claim in terms of analytic algebras or modules.

Thank you.
Edit: For the purposes of the bounty, it would suffice to answer my third question below for the statistic $\hat{P}_g$. I have two groups $g \in \{1, 2\}$ and customers $i$ within those groups who generate revenue $R_{ig} \in \mathbb{R}^+$ by purchasing a quantity $Q_{ig} \in \mathbb{R}^+$. There is price discrimination, so the effective price per unit of the good for each customer, $P_{ig}$, is different. An "average" price can be estimated for each group in at least two ways: • as $\hat{P}_g = \dfrac{\sum_iR_{ig}}{\sum_iQ_{ig}}$. By dividing by the sample size, $N_g$, this can be seen to be equivalent to the ratio estimator $\hat{P}_g = \dfrac{\overline{R}_{g}}{\overline{Q}_{g}}$, or • as $\tilde{P}_g = \dfrac{1}{N_g}\sum_i{\dfrac{R_{ig}}{Q_{ig}}}$. I have several questions: • Is there a difference in interpretation between the two estimators $\hat{P}_g$ and $\tilde{P}_g$ -- are they estimating the same underlying population quantity? I came across this, but there does not appear to be a version of this online. • From a statistical point of view, is one of the estimators better than the other (for the common or for their respective population quantities), especially in terms of their bias properties? • Lastly, for either one of the estimators, what is the right way to compare them across groups to test the hypothesis that $P_1 = P_2$? Is there a variance estimator for either of the two estimators? I want to be able to say that the price is, on average, the same across the two groups. I believe this is related to the Fieller-Creasy class of problems, but I am not familiar with the problem family. • The two formulas effectively weight the points differently; each can be written as a weighted version of the other formula. – Glen_b Sep 30 '16 at 10:29 • @Glen_b Thanks Glen. I have seen the other questions which raise this very point and there are excellent explanations of the interpretations of the two statistics, for example, this excellent response by Bill here. 
However, the main question I have is how to estimate the variance, and how to compare the statistic across groups. – tchakravarty Sep 30 '16 at 10:31 • @Glen_b Assuming that I use the $\hat{P}_g$ statistic. – tchakravarty Sep 30 '16 at 10:34 • @Jim Hi Jim, thanks for the response. Happy to hear your interpretation of why $\tilde{P}_g$ is more natural -- of course, it is easier to get a confidence interval for it. – tchakravarty Oct 3 '16 at 16:31 • Another alternative is to estimate $R = \beta_0 + \beta_1 Q + \epsilon$ by regression and to include a dummy for the groups. Then check whether the dummy is significant (you can use the dummy only for the intercept, or also include the interaction effect). The estimate for $\beta_1$ in each group is then an estimate for the price in each group. – user83346 Oct 4 '16 at 14:16 I feel like the concern should be with the underlying data-generating process and where you suspect 'error' or noise in your data is coming from. The whole point of taking averages is to be able to invoke a law-of-large-numbers argument that noise 'cancels out'. 1) For example, let's say we have measurement error in the amount of revenue generated but not in quantity (in reality this might be due to rounding, which is a very specific and ugly type of error), i.e. a demon decides to add iid epsilon noise to our observations: $R_{ig}=R^*_{ig}+\epsilon_{ig}$, where $R^*$ is the true revenue generated and what we observe is $R$. Then taking the average over all observed $R$ will minimize the relative influence of the noise, and dividing through by the average of $Q$ should be the most efficient way to proceed. This is $\hat{P}$.
2) However, let's say that instead of observing $R$ and $Q$, we actually observe $Q$ and a noisy measure of price $P=P^*+\epsilon$ (although this begs the question of why we even bother with $Q$ and $R$ in the first place, since we already have what we want, $P$, directly), and we use a spreadsheet to find $R$ by multiplying $Q$ and $P$. Then it makes a lot more sense to calculate $R/Q$ and take averages of the ratios instead, as in $\tilde{P}$. Roughly speaking, how you want the noise to 'cancel' will determine what averages you take, but you cannot know how the noise cancels unless you first specify where it's coming into play. What if instead of additive noise you had multiplicative noise (multiply $R$ by +/- a few percent; this is actually very similar to part 2)? Then you'd want to take logs, add up the logs (since multiplicative noise is additive in logs), then re-exponentiate. Etc. Edit: I'd argue that the above answers your 2nd point, since bias properties are a function of the error structure, of which I gave 2 examples, but I'll answer the 3rd question in particular. If we assume no noise in our observations of $Q$ but additive iid $\epsilon \sim N(0,\sigma^2)$ noise in $R$, then our modeling assumption is $$R_i=R^*_i+\epsilon_i=Q_i P_i+\epsilon_i$$ Solving for $P_i$ is just an OLS of $R_i$ on $Q_i$ with a forced intercept through zero and two subgroups $g$, which means we can run a Chow test for equality of $P$ in the two subgroups.
Using the example in Wikipedia, you would just have $y_t=b_1 x_{1t}+\epsilon$ and $y_t=b_2 x_{2t} + \epsilon$ for your two groups, with $y=R$, $b=P$, $x=Q$. If you insist on using the quotient estimators directly (which you shouldn't if you have enough assumptions to use an OLS-based method), then still assuming additive errors gives $$\sum_i R_i \sim N\left(\sum_i R_i^*,\; N\sigma^2\right)\\ \sum_i R_i \Big/ \sum_i Q_i \sim N\left(P,\; \sigma^2 \frac{N}{(\sum_i Q_i)^2}\right)$$ But then you have to estimate $\sigma^2$, which is usually done by taking residuals after performing an OLS fit anyway. As before, any discussion of variance properties hinges on the properties of the underlying DGP and where the noise is: if we assumed $Q$ was measured with error, we couldn't even analytically derive the variance, since the variance of a quotient of 2 random variables is generally a mess. • This does not answer the question. This is a (long) comment, and should be moved to the comments section. – tchakravarty Oct 4 '16 at 4:08 • $P_i$ is stochastic in itself (because of price discrimination or whatever other (complex) mechanism) -- splitting it out into additive noise is an unnecessary & overly simplistic modeling assumption. I appreciate the attempt, but this is not addressing the question. – tchakravarty Oct 4 '16 at 6:40 • @tchakravarty It's fine that you didn't find my answer helpful, but I do believe you need to state what exact conditions are involved in the initial question. Without knowledge of how $P_i$ is stochastic (is it at the individual level, or, as in the tone of your question, is it the same for every group member? is the noise additive/multiplicative etc.), it is impossible to discuss variances and hence tests.
– Ray Oct 4 '16 at 6:45 • Ray, ideally I would like the test to be non-parametric in nature, without assuming a parametric distribution for the underlying variates, but if it helps your solution approach, given that they are positive quantities, you can assume that $Q_{ig}$ and $R_{ig}$ are log-normal or gamma variates with group specific parameters. – tchakravarty Oct 4 '16 at 7:13 • Even for nonparametric tests, however, you usually just relax some assumptions about the distribution of the error term, such as assuming it is symmetric about zero. Knowing the marginals of Q and R is, in some ways, irrelevant. To form a test, you state a null and see if your sample observations are 'consistent' with the null, where consistency requires you to know how much noise is present and how it will cause your sample to vary from the null. – Ray Oct 4 '16 at 15:05
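For the statistic $\hat{P}_g$ named in the bounty, one standard answer (not taken from the thread) is the linearization (delta-method) variance used for ratio estimators in survey sampling. The sketch below assumes i.i.d. $(R_i, Q_i)$ pairs within each group and independent groups; the toy data and the true price of 2.0 are made up for illustration.

```python
import math
import random

def ratio_estimate(R, Q):
    """Ratio estimator p_hat = sum(R)/sum(Q) with a linearization
    (delta-method) variance, treating the (R_i, Q_i) pairs as i.i.d."""
    n = len(R)
    p_hat = sum(R) / sum(Q)
    q_bar = sum(Q) / n
    # residuals e_i = R_i - p_hat * Q_i; Var(p_hat) ~= s_e^2 / (n * q_bar^2)
    e = [r - p_hat * q for r, q in zip(R, Q)]
    s2 = sum(x * x for x in e) / (n - 1)
    return p_hat, s2 / (n * q_bar ** 2)

def z_stat(R1, Q1, R2, Q2):
    """z statistic for H0: P_1 = P_2, treating the groups as independent."""
    p1, v1 = ratio_estimate(R1, Q1)
    p2, v2 = ratio_estimate(R2, Q2)
    return (p1 - p2) / math.sqrt(v1 + v2)

# toy data: both groups have the same true unit price of 2.0
random.seed(0)
Q1 = [random.uniform(1, 10) for _ in range(500)]
R1 = [2.0 * q + random.gauss(0, 0.5) for q in Q1]
Q2 = [random.uniform(1, 10) for _ in range(500)]
R2 = [2.0 * q + random.gauss(0, 0.5) for q in Q2]
print(z_stat(R1, Q1, R2, Q2))
```

Under the null the statistic is approximately standard normal, so |z| > 1.96 rejects at the 5% level; a nonparametric alternative is to bootstrap $(R_i, Q_i)$ pairs within each group and compare the resampled $\hat{P}_g$.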
# Basic limit question

1. Feb 25, 2014

### bobby2k

1. The problem statement, all variables and given/known data

Find the limit: $\lim_{x \to \infty}[n(x^{\frac{1}{n}}-1)-\ln(x)]$, for any n.

2. Relevant equations

L'Hopital's rule.

3. The attempt at a solution

I know that the ratio of the first expression over the last goes to zero, by L'Hopital, but unfortunately I now have a difference and not a quotient. Is it possible to transform it in some way to use L'Hopital?

2. Feb 25, 2014

### Staff: Mentor

a-b = (ab-b^2)/b = (a/b - 1)/(1/b)

Not sure if one of those helps.

3. Feb 25, 2014

### scurty

$\lim_{x \to \infty}[n(x^{\frac{1}{n}}-1)-\ln(x)]$, for any n.

$y = \displaystyle \lim_{x \to \infty}(\ln[e^{n(x^{\frac{1}{n}}-1)}]-\ln(x))$

$y = \displaystyle \lim_{x \to \infty} \ln\left[\frac{e^{n(x^{\frac{1}{n}}-1)}}{x}\right]$

$e^y = \displaystyle \lim_{x \to \infty}\left[\frac{e^{n(x^{\frac{1}{n}}-1)}}{x}\right]$

That gives it in fraction form. I'm not sure if it makes it easier to evaluate or not, I haven't tried the problem myself.

4. Feb 27, 2014

### Staff: Mentor

If you apply l'Hopital's rule on that fraction, you end up with the same fraction again (plus some irrelevant part you can ignore).

5. Feb 27, 2014

### scurty

You're right. I believe factoring out ln(x) so you have $\displaystyle\lim_{x\to\infty}\ln{(x)} \cdot \left(\frac{n(x^{1/n}-1)}{\ln{(x)}} - 1\right)$ might work. You can use l'Hospital's on the inside fraction, which works better than what I suggested above.
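Scurty's factored form can be sanity-checked numerically: if the inner ratio $n(x^{1/n}-1)/\ln x$ grows without bound for fixed positive n, the whole expression diverges to $+\infty$. A quick sketch (the value n = 3 is an arbitrary choice):

```python
import math

def inner_ratio(x, n):
    # the ratio inside ln(x) * (n*(x**(1/n) - 1)/ln(x) - 1)
    return n * (x ** (1.0 / n) - 1.0) / math.log(x)

def expr(x, n):
    # the original expression n*(x**(1/n) - 1) - ln(x)
    return n * (x ** (1.0 / n) - 1.0) - math.log(x)

for x in (1e3, 1e6, 1e12):
    print(x, inner_ratio(x, 3), expr(x, 3))
```

Both columns increase steadily, consistent with applying l'Hopital to the inner ratio: its limit is $\lim_{x\to\infty} x^{1/n} = \infty$, so the bracket tends to infinity and the expression diverges.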
## Running App Engine From Behind A Proxy

So, I am running Python 2.6.1 on my computer, and just installed Google App Engine. As soon as I click on the App Engine Launcher, it gives an error dialog box and asks me to check the GoogleAppEngineLauncher.exe file for more details. I have appended the log file below:

File "GoogleAppEngineLauncher.py", line 42, in <module>
File "wx\_core.pyc", line 7913, in __init__
File "wx\_core.pyc", line 7487, in _BootstrapApp
File "launcher\app.pyc", line 58, in OnInit
File "launcher\app.pyc", line 152, in _VersionCheck
File "urllib.pyc", line 82, in urlopen
File "urllib.pyc", line 190, in open
File "urllib.pyc", line 338, in open_http
File "urllib.pyc", line 351, in http_error
File "urllib.pyc", line 702, in http_error_407
File "urllib.pyc", line 714, in retry_proxy_http_basic_auth
File "urllib.pyc", line 773, in get_user_passwd
File "urllib.pyc", line 782, in prompt_user_passwd
EOFError: EOF when reading a line

From the log, I get the feeling that it's about giving the right proxy details. It seems I need to make edits to some particular .py file so that I can enter my proxy settings and username/password. I have also come across a few suggestions telling me to set the proxy in environment variables at the command prompt - but that hasn't worked. Any suggestions, please?

asked 06 Jun '12, 13:52 Dharav Solanki 4392420 accept rate: 42%

## 2 Answers:

What I did when I ran it under a proxy was, I added the environment variable "http_proxy" and set it to the proxy I was using (172.30.1.1) in my list of environment variables (go to My Computer -> Right Click -> Properties -> Advanced System Settings -> Advanced tab -> Environment variables). I use Windows 7.

answered 06 Jun '12, 21:03 Ashwin Menon 1.1k17

Thanks Ashwin. I have done that. Before doing that: App Engine launches, but terminates as soon as it does with a dialog box.
Could run locally using command prompt. * Eclipse would run - but would neither run locally nor deploy. After doing the variables thing: App Engine runs and can run the web app locally. Deploying is still a trouble. * Command Prompt can STILL run locally but cannot deploy. The problem, as reported on Google Code Issue 4849, seems to be with a urllib2 file: but I cannot figure out a way to use that patch!

answered 06 Jun '12, 21:40 Dharav Solanki 4392420

Are you sure you are using the latest version of GAE? (06 Jun '12, 21:52)

1.6.6 - that's the one I found on Google's website. I understand the whole problem with init.py should have been resolved in an update. If you are behind a proxy that requires authentication - and you are able to deploy - it probably means that you have an updated init.py file. Could you send it over? You could host it in a Dropbox public folder or something. (06 Jun '12, 22:24)

1 If possible, you should upgrade to Python 2.7 (06 Jun '12, 22:55)

Do you mean init.py? All my init.py files seem to be empty except for comments. Oh, and I am not behind a proxy right now, unfortunately. (06 Jun '12, 23:11)

I have upgraded, Stephan. And I managed to make it work, finally! (06 Jun '12, 23:42)
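For reference, the environment-variable fix works because urllib builds its proxy settings from `http_proxy` at call time. A minimal check in modern Python 3 naming (the thread used Python 2, where the module was plain `urllib`); the proxy address and credentials below are placeholders, not values from the thread:

```python
import os
import urllib.request

# placeholder proxy and credentials -- substitute your own
os.environ["http_proxy"] = "http://user:secret@172.30.1.1:8080"

# urllib reads proxy settings from the environment at call time,
# which is why setting the variable (via the GUI or the shell) helps
proxies = urllib.request.getproxies()
print(proxies["http"])
```

On Windows the same effect comes from `set http_proxy=http://user:pass@host:port` in the Command Prompt before launching the tool.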
# Math Help - Indefinite Integral

1. ## Indefinite Integral

What is ∫(sin(√(x))/x)dx?

2. Originally Posted by th%$&873
What is ∫(sin(√(x))/x)dx?

This integral is not expressible in finite (elementary) form.

3. Originally Posted by th%$&873
What is ∫(sin(√(x))/x)dx?

The best we can do is to let $u = \sqrt{x} \implies 2u~du = dx$, so

$\int \frac{\sin(\sqrt{x})}{x}~dx = \int \frac{\sin(u)}{u^2} \cdot 2u~du = 2 \int \frac{\sin(u)}{u}~du$

This integral (at least) sometimes goes by the name of "Si." (So the antiderivative here, as a function of x, is $2\,\text{Si}(\sqrt{x})$.) There is no closed-form solution to this integral in elementary functions.

-Dan

4. Originally Posted by topsquark
This integral (at least) sometimes goes by the name of "Si."

The $\text{Si}$ function is defined as follows:

${\text{Si}}\,(x) = \int_0^x {\frac{{\sin u}}{u}\,du}.$

As a definite integral, it may make more sense.

5. Thanks, everyone.
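The substitution can be verified numerically on a definite interval: with $u=\sqrt{x}$, $\int_1^4 \frac{\sin\sqrt{x}}{x}\,dx$ should equal $2\int_1^2 \frac{\sin u}{u}\,du = 2[\text{Si}(2)-\text{Si}(1)]$. A self-contained check with composite Simpson's rule (the interval [1, 4] is an arbitrary choice):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# left: integral of sin(sqrt(x))/x over [1, 4]
lhs = simpson(lambda x: math.sin(math.sqrt(x)) / x, 1.0, 4.0)
# right: 2 * integral of sin(u)/u over [1, 2], i.e. 2*(Si(2) - Si(1))
rhs = 2 * simpson(lambda u: math.sin(u) / u, 1.0, 2.0)
print(lhs, rhs)
```

The two values agree to many decimal places, as the substitution predicts.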
Original Article 53(1); 11-18

# Mathematical model for accurate measurement of head movements in simulators with frontal field visual display

Resident, Aerospace Medicine, IAM, IAF, Bangalore-560 017

## Abstract

Modern simulators in the aeromedical environment incorporate large frontal visual field displays. This requires the monitoring camera within the simulator cockpit to be placed eccentrically. The resulting "off-axis" view causes problems in accurately determining the degree of head movement of subjects seated within the simulator. In this paper the author proposes a mathematical model relating the angle of eccentricity of the camera, the angle of head movement actually performed in the roll and pitch planes, and the angle of head movement seen on the screen at the control panel. Use of this model would enable precise assessment of head movements within the simulator for training as well as research purposes.

## Introduction

In the aerospace environment, from a simple Barany's chair to the latest 6-degree-of-freedom advanced Disorientation Simulator and the High Performance Human Centrifuge (HPHC), many simulators are mounted on rotating platforms.
Head movements in a rotating environment give rise to bizarre vestibular responses, the intensity of which is dictated by a number of factors including the angle through which the head is tilted and the rate at which it is tilted. Due to the presence of a large screen covering the visual field, any monitoring camera placed within the simulator cockpit is placed away from the mid-sagittal plane of the subject, as shown in Figure 1. This provides the operator observing on the Visual Display Unit (VDU) at the control panel with an "off-axis" view of the subject. An experienced operator may estimate the degree of head tilt, but it still remains a subjective estimate. This skewed view is caused solely by the angle of eccentricity of the camera placement, and hence it is possible to derive a mathematical relationship between:

1. Angle of eccentricity of the camera
2. Angle of actual head movement
3. Angle of head movement as seen on the VDU

This paper proposes a mathematical model for calculation of this relationship for a subject seated in a simulator cabin with an eccentrically placed camera.

## Simulated rotatory environments in Aerospace medicine

Simulators utilise a rotatory environment for generating many physiological effects of vestibular origin. Motion sickness desensitisation programs play an important role in any air force in mitigating persistent airsickness in up to 14.6% of ab initio pilots [1]. Graded training of the vestibular system involves use of Coriolis cross-coupled sensations generated by head movements inside simulators mounted on rotating platforms, and this has been studied extensively [2,3,4]. In these studies the standardization and accuracy in measurement of head movements is a subjective assessment by the operator and thus is liable to be contaminated with inter-subject and inter-operator variations.
The human centrifuge simulating accelerative forces and the short arm centrifuge simulating artificial gravity in space present us with a rotatory environment. An area of focus in space research lies in mitigating or minimising the nauseogenic vestibular response associated with a short arm centrifuge. Except for a few simulators with magnetic head tracker units [5,6], most do not have a system for accurate measurement of head movements being performed while in the simulator cabin.

## Relevance of measurement of head movements

Head movements in a rotating environment lead to stimulation of the vestibular system by two non-coplanar angular motions and produce what is known as the Cross-Coupled Stimulus (CCS) [4]. The physiological interpretation of the CCS leads to the perceived angular velocity being a sine function of the angular velocity of the rotating platform ω (Omega) and the degree of head turn θ (Theta), as seen in the equation given by Benson [7]:

$\omega_{yz} = \omega \sin\theta$

This equation shows that there are two components that determine the resultant angular velocity perceived by the higher centres: the angular velocity of the rotating platform, denoted by ω, and the angle through which the head moves, denoted by θ. The angular velocity of the platform is precisely determined by the equipment itself, but the angle of head movement is subject-dependent, and there exist practical problems in the assessment of this angle. Hence it is essential to provide a generalised solution for measurement of accurate head movements of the subjects within the simulator cabin. In order to achieve this, a mathematical model was developed by the author, which is described below.
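As a worked illustration of why the head angle matters, the sine dependence described in the text can be evaluated directly. This sketch assumes the form $\omega_{yz}=\omega\sin\theta$; the exact expression should be checked against Benson [7]:

```python
import math

def perceived_angular_velocity(omega_deg_s, head_tilt_deg):
    # assumed sine dependence: omega_yz = omega * sin(theta)
    return omega_deg_s * math.sin(math.radians(head_tilt_deg))

# platform at 30 deg/s: the cross-coupled stimulus grows with head tilt
for theta in (0, 15, 30, 60, 90):
    print(theta, round(perceived_angular_velocity(30.0, theta), 2))
```

An error of a few degrees in the measured head tilt therefore translates directly into an error in the delivered vestibular stimulus, which is the motivation for objective measurement.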
## Mathematical Model and derivation

For a mathematical analysis of head movements in a three-dimensional space it is essential to define a reference coordinate system to specify head position and movement and their mathematical relations. In this model, Cartesian coordinate space has been used, where the Yaw plane is represented by the x, y plane, the Roll plane by the x, z plane and the Pitch plane by the y, z plane, as depicted in Figure 2. The Point of Origin (0,0) for the calculations has been fixed at the bottom right corner of the VDU at the control panel. In the three-dimensional space of a simulator cockpit, a "head-on" view of the subject would be produced if the plane of head movement and the plane of the camera were parallel to each other, as depicted in Figure 3. But an eccentric position of the camera causes the plane of the head movement and the plane of the camera to be non-parallel. In this situation not only does the angle subtended by each point on the subject's face change with head movement, but the rate of change is also different for each part of the face, giving rise to a 'dynamic skew' where each point changes logarithmically. This makes the apparent angle of change differ markedly from the actual one, as shown in Figure 4. This deviation is solely caused by the angle of eccentricity of the camera, and hence it is possible to derive a mathematical relationship between the incident point on the head movement plane, the emergent point on the camera plane and the angle between these two planes. A diagrammatic representation of the head movements in the Roll plane is illustrated in Figures 5 and 6. The mathematical relationship for a head movement in the roll plane is:

Angle on left = 180° − (actual head movement + eccentricity of camera)
Angle on right = eccentricity of camera − actual head movement

A diagrammatic representation of movements of the head within the simulator in the Pitch plane is depicted in Figures 7 and 8.
The mathematical relationship for movements of the head in the pitch plane is:

Angle forward = 180° − (actual head movement + eccentricity of camera)
Angle backward = eccentricity of camera − actual head movement

The mathematical derivation for both equations, along with diagrams, is given in the Appendix. Since the platform rotates on its Z axis, a movement in the yaw plane would be coplanar with the platform rotation. Such a movement is unlikely to have great significance in vestibular-related experiments. For a given system the degree of eccentricity of the camera is fixed. The degree of actual head movement required can be substituted into the equations, and the result is the apparent angle through which the head moves on the screen. This mathematical model, in the form of linear first-order equations, is relatively simple to incorporate into any software. A simple program demonstrating automatic angle correction has been developed using Microsoft Excel, and a screenshot of it can be seen in Figure 9. This would enable the operator to verbally guide a subject to perform precise degrees of head movements within the simulator.

## Discussion

Alternate approaches to applying this mathematical model would be to use face recognition software and to provide a weighted mean of the points identified on the face as the midline of the subject's head, on which real-time analysis of head movement can be performed while correcting for the eccentricity of the camera using the given equations. Another approach would be to use face recognition software to prepare a 3D mask of the subject's face and correct for the eccentricity of the camera using the equations, so that the final image presented to the operator at the VDU is a head-on view. These methods would also allow for real-time calculation of the degree as well as the angular velocity of head movements.
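The roll-plane relations transcribe directly into code; the pitch-plane relations are identical in form. Angles are in degrees, and the example values (a 45° camera eccentricity, a 30° head tilt) are illustrative only:

```python
def apparent_angles_roll(actual_movement, camera_eccentricity):
    """Apparent angles on the VDU for a roll-plane head movement,
    per the paper's linear relations (all angles in degrees)."""
    angle_left = 180.0 - (actual_movement + camera_eccentricity)
    angle_right = camera_eccentricity - actual_movement
    return angle_left, angle_right

print(apparent_angles_roll(30.0, 45.0))  # -> (105.0, 15.0)
```

Inverting the same relations lets the operator read an apparent on-screen angle and recover the actual head movement, which is what the Excel demonstration automates.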
For existing simulators this can be implemented without modifying the core system by taking a parallel video output from the simulator camera and subjecting the image to the above-mentioned software manipulations on a separate PC-based platform. The simulators mounted on rotating platforms are used for training as well as research purposes. During training, the demonstration of vestibular effects by head tilts requires the degree of head movement to be measured only to the level of being supra-threshold. But for research purposes the degree as well as the angular velocity of head movement needs to be objectively measured and reproduced to a fairly high degree of fidelity, without inter-operator and inter-subject variability [4,5,6]. Without such objective and accurate means [3,8], the results and conclusions of any research stand suspect. The importance of accurate head movement measurement is also underscored by the fact that current research in vestibular physiology shows the severity of the nauseogenic effect of cross-coupled rotation to be directly proportional to gyroscopic angular acceleration [4]. Airsickness desensitisation programs around the world follow a combination of sequential and incremental training, where various components of the conflicting stimuli are addressed separately and incremented individually [1,9,10]. Currently only the component of tangential acceleration provided by the rotatory platform is incremental in nature. A system capable of accurate measurement of head movement opens avenues for incremental adaptation in the gyroscopic angular acceleration, i.e., the degree of head movement and the velocity of head movement.

## Conclusion

Anything in the physical or biological world, whether natural or involving technology and human intervention, is subject to analysis by mathematical models if it can be described in terms of mathematical expressions [11].
This simple mathematical model could be employed for measurement of head movement angles in simulators wherein the large visual displays force the monitoring camera to be placed eccentrically. Software manipulation using the mathematical model equations, performed on the analog or digital feeds from the monitoring camera, would provide an immediate solution without intervening in the operating system of the simulator.

## References

1. Air sickness in trainee aircrew of Indian Air Force: our experience with desensitisation. Ind J Aerospace Med. 2005;49(2):33-40.
2. The desensitisation of chronically motion sick aircrew in the Royal Air Force. Aviat Space Environ Med. 1985;56:1144-51.
3. Effect of direction of head movement on motion sickness caused by Coriolis stimulation. Aviat Space Environ Med. 1997;68:93-8.
4. The severity of nauseogenic effects of cross-coupled rotation is proportional to gyroscopic angular acceleration. Aviat Space Environ Med. 1996;67:325-32.
5. Aviation spatial orientation in relationship to head position and attitude interpretation. Aviat Space Environ Med. 1997;68:463-71.
6. Aviation spatial orientation in relationship to head position, attitude interpretation, and control. Aviat Space Environ Med. 1997;68:472-8.
7. Coriolis cross-coupling effects: disorienting and nauseogenic or not? Naval Aerospace Medical Research Laboratory, Naval Air Station Pensacola.
8. Physiological responses to the Coriolis illusion: effects of head position and vision. Aviat Space Environ Med. 2007;78:985-9.
9. Incremental exposure facilitates adaptation to sensory rearrangement. Aviat Space Environ Med. 1978;49(2):362-64.
10. Progressive adaptation to Coriolis accelerations associated with 1-rpm increments in the velocity of the slow rotation room. Aerospace Med. 1970;41(1):73-9.
## Efficient production of n = 2 Positronium in S states

We routinely excite Positronium (Ps) into its first excited state (n = 2) via 1-photon resonant excitation [NJP. 17 043059], and even though most of the time this is an intermediate step for subsequent excitation to Rydberg (high n) states [PRL. 114, 173001], there is plenty of interesting physics to be explored in n = 2 alone, as we discussed in one of our recent studies [PRL. 115, 183401 and PRA. 93, 012506]. In this study we showed that the polarisation of the excitation laser, as well as the electric field that the atoms are subjected to, have a drastic effect on the effective lifetime of the excited states and on when Ps annihilates. Above you can see the data for two laser polarisations, showing the signal parameter S(%) as a function of electric field. This is essentially a measure of how likely Ps is to annihilate compared to ground-state (n = 1) Ps; that is to say, if S(%) is positive then n = 2 Ps in such a configuration annihilates with shorter lifetimes than n = 1 Ps (142 ns), whereas if S(%) is negative then n = 2 Ps will annihilate with longer lifetimes than 142 ns. These longer lifetimes are present in the parallel polarisation (panel a). Using this polarisation, and applying a large negative or positive electric field (around 3 kV/cm), provides such long lifetimes because the excited state contains a significant amount of triplet S character (2S), a substate of n = 2 with spin = 1 and $\ell$ = 0. If the Ps atoms are then allowed to travel (adiabatically) to a region of zero nominal electric field (our experimental set-up [RSI. 86, 103101] guarantees such transport), then they will be made up almost entirely of this long-lived triplet S character, and will thus annihilate at much later times than the background n = 1 atoms. These delayed annihilations can be easily detected by simply looking at the gamma-ray spectrum recorded by our LYSO detectors [NIMA.
828, 163] when the laser is on resonance ("Signal"), and subtracting it from the spectrum when the laser is off resonance ("Background"). The figure above shows such spectra taken with the parallel laser polarisation, at a field where there should be minimal 2S production (a), and a field where triplet S character is maximised (b). It is obvious that in the second case there are far more annihilations at later times, indicated by the positive values of the data at times up to 800 ns. This is clear evidence that we have efficiently produced n = 2 triplet S states of Ps using single-photon excitation. Previous studies of 2S Ps produced such states either by collisional methods [PRL. 34, 1541], which are much more inefficient than single-photon excitation, or by two-photon excitation, which is also more inefficient, requires much more laser power and is limited by photo-ionisation [PRL. 52, 1689]. This observation is the initial step before we begin a new set of experiments in which we will attempt to measure the n = 2 hyperfine structure of Ps using microwaves!

## P.A.M. Dirac

Yesterday marked the 114th anniversary of the birth of Paul Adrien Maurice Dirac, one of the world's greatest ever theoretical physicists. Born on the 8th of August 1902 in Bristol (UK), Dirac studied for his PhD at St John's College, Cambridge, where he would subsequently discover the equation that now bears his name, iγ·∂ψ = mψ. The Dirac equation is a solution to the problem of describing an electron in a way that is consistent with both quantum mechanics and Einstein's theory of relativity. His solution was unique in its natural inclusion of the electron "spin", which otherwise had to be invoked to account for fine structure in atomic spectra. His brilliant contemporary, Wolfgang Pauli, described Dirac's thinking as acrobatic. And several of Dirac's theories are regarded as among the most beautiful and elegant of modern physics.
An important prediction of the Dirac equation is the existence of the anti-electron (also known as the positron). This particle is equal in mass to the more familiar electron, but has the opposite electric charge. Dirac published his theory of the anti-electron in 1931 – two years before "the positive electron" was discovered by Carl Anderson. Dirac accurately mused that the anti-proton might also exist, and most physicists now believe that all particles possess an antimatter counterpart. But antimatter is apparently – and as yet inexplicably – much scarcer than matter. In 1933 Dirac shared the Nobel prize in physics with Erwin Schrödinger "for the discovery of new productive forms of atomic theory". Dirac died aged 82 in 1984. He's commemorated in Westminster Abbey by an inscription in the Nave, not far from Newton's monument. Separated in life by more than two centuries, Paul Dirac and Sir Isaac Newton are arguably the fathers of antimatter and gravity. The Strangest Man by Graham Farmelo is a fascinating account of Dirac's life and work.

## A guide to positronium

Positronium (Ps) is a hybrid of matter and antimatter. Made of just two particles – an electron and a positron – Ps has an atomic structure similar to hydrogen's. The ultimate aim of our experiments at UCL is to observe deflection of a Ps beam due to gravity, as nobody knows if antimatter falls up or down. In this post, we outline how we recently managed to guide positronium using a quadrupole. Because the Ps atom doesn't have a heavy nucleus, it's extremely light and will typically move very, very quickly (~100 km/s). A refinement of the guiding techniques we used can, in principle, be applied to decelerate Ps atoms to speeds that are more suitable for studying gravity. Before guiding positronium we have to create some. Positrons emitted from a radioisotope of sodium are trapped in a combination of electric and magnetic fields.
They are ejected from the trap and implanted into a thin film of mesoporous silica, where they bind to electrons to form Ps atoms; the network of tiny pores provides a way for these to get out and into vacuum. The entire Ps distribution is emitted from the film in a time window of just a few billionths of a second. This is well matched to our pulsed lasers, which we use to optically excite the atoms to Rydberg levels (high principal quantum number, n). If we didn’t excite the Ps then the electron–positron pairs would annihilate into gamma-ray photons in much less than a millionth of a second, and each would be unlikely to travel more than a few cm. However, in the excited states self-annihilation is almost completely suppressed and they can, therefore, travel much further.

Each Rydberg level contains many sublevels that have almost the same internal energy. This means that for a given n its sublevels can all be populated using a narrow range of laser wavelengths. But if an electric field is applied the sublevels are shifted. This so-called “Stark shift” comes from the electric dipole moment, i.e., the distribution of electric charge within the atom. The dipole is different for each sublevel, and it can be either aligned or anti-aligned with the electric field. This results in a range of both positive and negative energy shifts, broadening the overall spectral line. Tuning the laser wavelength can therefore be used to select a particular sublevel – or rather, to select a Rydberg–Stark state with a particular electric dipole moment. Stark broadening is demonstrated in the plot below. [For higher electric fields the individual Stark states can be resolved.]

The Stark effect provides a way to manipulate the motion of neutral atoms using electric fields. As an atom moves between regions of different electric field strength its internal energy will shift according to its electric dipole moment.
However, because the total energy must be conserved, the kinetic energy will also change. Depending on whether the atom experiences a positive or negative Stark shift, increasing fields will either slow it down or speed it up. The Rydberg–Stark states can, therefore, be broadly grouped as either low-field-seeking (LFS) or high-field-seeking (HFS). The force exerted by the electric field is much smaller than would be experienced by a charged particle. Nevertheless, this effect has been demonstrated as a useful tool for deflecting, guiding, decelerating, and trapping Rydberg atoms and polar molecules.

A quadrupole is a device made from a square array of parallel rods. Positive voltage is applied to one diagonal pair and negative to the other. This creates an electric field that is zero along the centre but very large directly between neighbouring rods. The effect this has on atoms in LFS states is that when they drift away from the middle into the high fields they slow down, and eventually turn around and head back towards the centre, i.e., they are guided. On the other hand, atoms in HFS states are steered away from the low-field region and out to the side of the quadrupole.

Using gamma-ray detectors at either end of a 40 cm long quadrupole, we measured how many Rydberg Ps atoms entered and how many were transported through it. With the guide switched off, some atoms from all states were transmitted. However, with the voltages switched on there was a five-fold increase in the number of low-field-seeking atoms getting through, whereas the high-field-seeking atoms could no longer pass at all.

A large part of why we chose to use positronium for our gravity studies is that it’s electrically neutral. As the electromagnetic force is so much stronger than gravity, we thereby avoid otherwise overwhelming effects from stray electric fields. However, by exciting Ps to Rydberg–Stark states with large electric dipole moments we reintroduce the same problem.
Nonetheless, it should be possible to exploit the LFS states to decelerate the atoms to low speeds, and then we can use microwaves to drive them to states with zero dipole moment. This will give us a cold Rydberg Ps distribution that is insensitive to electric fields and which can be used for gravitational deflection measurements.

Our article “Electrostatically guided Rydberg positronium” has been published in Physical Review Letters.

## 14th International Workshop on Slow Positron Beam Techniques & Applications

Members of the UCL positronium laser spectroscopy group recently attended the 14th International Workshop on Slow Positron Beam Techniques & Applications (SLOPOS14) in Matsue, Japan. The conference took place from the 22nd to the 27th of May 2016. During this time we heard many great talks from groups working with positrons and positronium (Ps) from all over the world. We also presented some of our work, including Rydberg–Stark states of Ps (PRL 115, 173001), laser-enhanced time-of-flight spectroscopy (NJP 17, 043059), Ps production in cryogenic environments (PRB 93, 125305), controlling annihilation of excited-state Ps (PRL 115, 183401 & PRA 93, 012506), and improved SSPALS measurements with LYSO scintillators (NIM A 828, 163).

The talk “Controlling Annihilation Dynamics of n = 2 Positronium with Electric Fields”, given by Alberto M. Alonso (PhD student), was awarded a prize for making an outstanding contribution to the conference!

SLOPOS14 was a great opportunity to meet fellow physicists working in our field, to learn of their progress and to share our own. These meetings are important for discussing new results and new ideas, and for building collaborations for future work. We are extremely grateful to the organisers for their hard work in hosting the event.
We look forward to the next SLOPOS, which will be held in Romania in 2019.

## Antimatter annihilation, gamma rays, and Lutetium-yttrium oxyorthosilicate

Doing experiments with antimatter presents a number of challenges. Not least of these is that when a particle meets its antiparticle the two will quickly annihilate. As far as we know we live in a universe that is dominated by matter. We are certainly made of matter and we run experiments in matter-based labs. How then can we confine positrons (anti-electrons) when they disappear on contact with any of our equipment?

Paul Dirac – the theoretical physicist who predicted the existence of antiparticles almost 90 years ago – proposed the solution even before there was evidence that antimatter was any more than a theoretical curiosity. In 1931 Dirac wrote, “if [positrons] could be produced experimentally in high vacuum they would be quite stable and amenable to observation.” P. A. M. Dirac (1931)

Our positron beamline makes use of vacuum chambers and pumps to achieve pressures as much as 12 orders of magnitude below atmospheric pressure. Inside our buffer-gas trap, where the vacuum is deliberately not so vacuous, the positrons can still survive for several seconds without meeting an electron. And as positrons are electrically charged, they can easily be prevented from touching the chamber walls using a combination of electric and magnetic fields. (For neutral forms of antimatter the task is more difficult. Nevertheless, the ALPHA experiment was able to trap antihydrogen for 1000 s using a magnetic bottle.)

An antiparticle can be thought of as a mirror image of a particle, with a number of equal but opposite properties, such as electric charge. When the two meet and annihilate, these properties sum to zero and nothing remains. Well, almost nothing.
Electrons and positrons have the same mass ($m = 9.10938356 \times 10^{-31}$ kg), and when the two annihilate this is converted to energy in accordance with Einstein’s well-known formula $E = mc^2$, where c is the speed of light (299792458 m/s). For this reason antimatter has long fascinated science-fiction writers: there is a potentially vast amount of energy available – e.g., for propelling spaceships or destroying the Vatican – when only a small amount of antimatter annihilates with matter. However, the difficulty in accumulating even minuscule amounts means that applications in weaponry and propulsion are a very long way from viable.

When an electron and positron annihilate, the energy takes the form of gamma-ray photons – usually two, each with 511 keV of energy. Although annihilation raises some difficulties, the distinct signature it produces can be very useful for detection purposes.

Gamma rays are hundreds of thousands of times more energetic than visible photons. To detect them we use scintillation materials that absorb the gamma-ray energy and then emit visible light. Photo-multiplier tubes are then used to convert the visible photons into an electric current, which can be recorded with an oscilloscope. Many materials are known to scintillate when exposed to gamma rays, although their characteristics differ widely. The properties that are most relevant to our work are the density (which must be high to absorb the gamma rays), the length of time that a scintillation signal takes to decay (this can vary from a few ns to a few μs), and the number of visible photons emitted, i.e., the light output.

Encased sodium iodide crystal

Sodium iodide (NaI) is a popular choice for antimatter research because the light output is very high, so individual annihilation events can easily be detected. However, for some applications the decay time is too long (~1 μs).
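As a quick sanity check (not from the original post), the 511 keV photon energy follows directly from the mass and speed-of-light values quoted above; the only extra ingredient is the elementary charge, used to convert joules to electron-volts:

```python
m_e = 9.10938356e-31   # electron (= positron) mass in kg, as quoted above
c = 299792458.0        # speed of light in m/s, as quoted above
eV = 1.602176634e-19   # joules per electron-volt (elementary charge)

E_joules = m_e * c**2           # rest-mass energy of one particle, E = mc^2
E_keV = E_joules / eV / 1e3     # convert J -> keV

print(round(E_keV, 1))  # 511.0 -> one 511 keV photon per annihilating particle
```

Each particle contributes its rest-mass energy, which is why two-photon annihilation yields two gamma rays of 511 keV each.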
PMT output for individual gamma-ray detection with NaI

The material we normally use to perform single-shot positron annihilation lifetime spectroscopy (SSPALS) is lead tungstate (PbWO4) – the same type of crystal is used in the CMS electromagnetic calorimeter. This material has a fast decay time of around 10 ns, which allows us to resolve the 142 ns lifetime of ground-state positronium (Ps). However, the amount of visible light emitted from PbWO4 is relatively low (~1% of NaI).

Recently we began experimenting with Lutetium-yttrium oxyorthosilicate (LYSO) for SSPALS measurements, even though its decay time of ~40 ns is considerably slower than that of PbWO4. So, why LYSO? The main reason is that it has a much higher light output (~75% of NaI), so we can more efficiently detect the gamma rays in a given lifetime spectrum, and this significantly improves the overall statistics of our analysis.

An array of LYSO crystals

The compromise with using LYSO is that the longer decay time distorts the lifetime spectra and reduces our ability to resolve fast components. However, most of our experiments involve using lasers to alter the lifetime of Ps (reducing it via magnetic quenching or photoionisation, or extending it by exciting the atoms to Rydberg levels), and we generally care more about seeing how much the 142 ns component changes than about what happens on shorter timescales. The decay time of LYSO is just about fast enough for this, and the improvement in contrast between signal and background measurements – which comes with the improved statistics – outweighs the loss in timing resolution.

SSPALS with LYSO and PbWO4

This post is based on our recent article: Single-shot positron annihilation lifetime spectroscopy with LYSO scintillators, A. M. Alonso, B. S. Cooper, A. Deller, and D. B. Cassidy, Nucl. Instrum. Methods A 828, 163 (2016). DOI: 10.1016/j.nima.2016.05.049.

## How long does Rydberg positronium live?
Time-of-flight (TOF) is a simple but powerful technique that consists of accurately measuring the time it takes a particle/atom/ion/molecule/neutrino/etc. to travel a known distance. This valuable tool has been used to characterise the kinetic energy distributions of an exhaustive range of sources, including positronium (Ps) [e.g., Howell et al., 1987], and is exploited widely in ion mass spectrometry.

Last year we published an article in which we described TOF measurements of ground-state (n=1) Ps atoms that were produced by implanting a short (5 ns) pulse of positrons into a porous silica film. Using pulsed lasers to photoionise (tear apart) the atoms at a range of well-defined positions, we were able to estimate the Ps velocity distribution, finding mean speeds on the order of 100 km/s. Extrapolating the measured flight paths back to the film’s surface indicated that the Ps took on average between 1 and 10 ns to escape the pores, depending on the depth to which the positrons were initially implanted.

When in the ground state and isolated in vacuum, the electron and positron that make up a positronium atom will tend to annihilate each other in around 140 ns. Even with a speed of 100 km/s this means that Ps is unlikely to travel further than a couple of cm during its brief existence. Consequently, the photoionisation/TOF measurements mentioned above were made within 6 mm of the silica film.

However, instead of ionising the atoms, our lasers can be reconfigured to excite Ps to high-n Rydberg levels, and these typically live for a great deal longer. The increase in lifetime allows us to measure TOF spectra over much longer timescales (~10 µs) and distances (1.2 m).

The image above depicts the layout of our TOF apparatus. Positrons from a Surko trap are guided by magnets to the silica film, wherein they bind to electrons and are re-emitted as Ps. Immediately after, ultraviolet and infra-red pulsed lasers drive the atoms to n=2 and then to Rydberg states.
Unlike the positively charged positrons, the neutral Ps atoms are not deflected by the curved magnetic fields and are able to travel straight along the 1.2 m flight tube, eventually crashing into the end of the vacuum chamber. The annihilation gamma rays are detected there using an NaI scintillator and photomultiplier tube (PMT), and the time delay between Ps production and gamma-ray detection is digitally recorded.

The plots above show two different views of time-of-flight spectra accumulated with the infra-red laser tuned to address Rydberg levels in the range of n=10 to 20. The data show that more Ps are detected at later times for the higher-n states than for the lower-n states. This is easily explained by fluorescence, i.e., the decay of an excited-state atom via spontaneous emission of a photon. As the fluorescence lifetime increases with n, the lower-n states are more likely to decay to the ground state and then annihilate before reaching the end of the chamber, reducing the number of gamma rays seen by the NaI detector at later times. We estimate from this data that Ps atoms in n=10 fluoresce in about 3 µs, compared to roughly 30 µs for n=20.

This work brings us an important step closer to performing a positronium free-fall measurement. A flight path of at least ten meters will probably be required to observe gravitational deflection, so we still have some way to go.

This post is based on work discussed in our article: Measurement of Rydberg positronium fluorescence lifetimes, A. Deller, A. M. Alonso, B. S. Cooper, S. D. Hogan, and D. B. Cassidy, Phys. Rev. A 93, 062513 (2016). DOI: 10.1103/PhysRevA.93.062513.

## UCL positronium spectroscopy beamline (the first two years)

The UCL Ps spectroscopy positron beamline began producing low-energy positrons almost two years ago, and it has since become slightly longer and somewhat more sophisticated.
Though it’s not the most complex scientific machine in the world (compared to, e.g., the LHC), we still find regular use for a 3D depiction of it. Our model is essentially a cartoon. Typically we use it to create (fairly) accurate schematics that help us to convey the configuration of our equipment at conferences or in publications.

The snapshot above shows the three main components of the beamline, namely the positron source (left), Surko trap (centre, cross-section), and Ps laser-spectroscopy region (right). The 3D model is built from simplified forms of the various vacuum chambers and pumps, magnetic coils, and detectors, and it shows where these all are in relation to one another. The 45° angled line is being used right now for Rydberg Ps time-of-flight measurements. The source and trap are based on the design developed by Rod Greaves and Jeremy Moxom of First Point Scientific Inc. (unfortunately now defunct). You can read about the details of their design in this article.

And here’s how the lab looks in real life:

## Photoemission of Ps from single-crystal p-Ge semiconductors

The production of positronium in a low-temperature (cryogenic) environment is in general only possible using materials that operate via non-thermal processes. In previous experiments we showed that porous silica films can be used in this way at temperatures as low as 10 K, but that Ps formation at these temperatures can be inhibited by condensation of residual gas, or by laser irradiation.

It has been known for several years now that some semiconductors can produce Ps via an exciton-like surface state [1, 2]. Si and Ge are the only semiconductors that have been studied so far, but it is likely that others will work in a similar way.
The electronic surface state(s) underlying the Ps production can be populated thermally, resulting in temperature-dependent Ps formation that is very similar to what is observed in metals (for which the Ps is actually generated via thermal desorption of positrons in surface states). Since laser irradiation can also populate electronic surface states, and is known to result in Ps emission from Si at room temperature, the possibility exists that this process can be used at cryogenic temperatures. We have studied this possibility using p-type Ge(100) crystals.

Initial sample preparation involves immersion in acid (HCl), and this process leaves the sample with chlorine-terminated dangling bonds which can be thermally desorbed. We attached the samples to a cold head with a high-temperature interface that can be heated to 700 K and cooled to 12 K. The heating is necessary to remove Cl from the crystal surface, which otherwise inhibits Ps formation. FIG. 1 shows the initial heating cycle that prepares the sample for use. The figure shows the delayed annihilation fraction (which is proportional to the amount of positronium) as a function of temperature.

FIG. 1: Delayed fraction as a function of sample temperature after initial installation into the vacuum system.

After the surface Cl has been thermally desorbed, the amount of Ps emitted at room temperature is substantially increased. As has been previously observed [2], using visible laser light at 532 nm can increase the Ps yield. This occurs because the electrons necessary for Ps formation can be excited to surface states by the laser. However, these states have a finite lifetime, and as both the laser and positron pulses are typically around 5 ns wide, these have to be synchronized in order to optimise the photoemission effect. This is shown in FIG. 2. These data indicate that the electronic surface states are fairly short lived, with lifetimes of less than 10 ns or so.
Longer-lived surface states were observed in similar measurements using Si.

FIG. 2: Delayed fraction as a function of the arrival time of the laser relative to the incident positron pulse. These data are recorded at room temperature. The laser fluence was ~15 mJ/cm$^2$.

When Ge is cooled, the Ps fraction drops significantly. This is not related to surface contamination, but is due to the lack of thermally generated surface electrons. However, surface contamination does further reduce the Ps fraction (much more quickly than is the case for silica). This effect is shown in FIG. 3. If a photoemission laser is applied to a cold, contaminated Ge sample two things happen: (1) the laser desorbs some of the surface material, and (2) photoemission occurs. This means that Ge can be used to produce Ps with a high efficiency at any temperature, and we don’t even have to worry about the vacuum conditions (within some limits).

FIG. 3: Delayed fraction as a function of the time that the target was exposed, showing the effect that different laser fluences have on the photoemission process. During irradiation, the positronium fraction is noticeably increased.

There are many possible applications for cryogenic Ps production within the field of antimatter physics, including the formation of antihydrogen via Ps collisions with antiprotons [3], Ps laser cooling and Bose–Einstein condensation [4], as well as precision spectroscopy.

## ANTIMATTER: who ordered that?

The existence of antimatter became known following Dirac’s formulation of relativistic quantum mechanics, but this incredible development was not anticipated. These days conjuring up a new particle or field (or perhaps even new dimensions) to explain unknown observations is pretty much standard operating procedure, but it was not always so. The famous “who ordered that” statement of I. I.
Rabi was made in reference to the discovery of the muon, a heavy electron whose existence seemed a bit unnecessary at the time; in fact it was the harbinger of a subatomic zoo.

The story of Dirac’s relativistic reformulation of the Schrödinger wave equation, and the subsequent prediction of antiparticles, is particularly appealing; the story is nicely explained in a recent biography of Dirac (Farmelo 2009). As with Einstein’s theory of relativity, Dirac’s relativistic quantum mechanics seemed to spring into existence without any experimental imperative. That is to say, nobody ordered it! The reality, of course, is a good deal more complicated and nuanced, but it would not be inaccurate to suggest that Dirac was driven more by mathematical aesthetics than experimental anomalies when he developed his theory.

The motivation for any modification of the Schrödinger equation is that it does not describe the energy of a free particle in a way that is consistent with the special theory of relativity. At first sight it might seem like a trivial matter to simply re-write the equation to include the energy in the necessary form, but things are not so simple. In order to illustrate why this is so, it is instructive to briefly consider the Dirac equation, and how it was developed. For explicit mathematical details of the formulation and solution of the Dirac equation see, for example, Griffiths 2008.

The basic form of the Schrödinger wave equation (SWE) is

$(-\frac{\hbar^2}{2m}\nabla^2+V)\psi = i\hbar \frac{\partial}{\partial t}\psi.$                                                    (1)

The fundamental departure from classical physics embodied in eq (1) is the quantity $\psi$, which represents not a particle but a wavefunction. That is, the SWE describes how this wavefunction (whatever it may be) will behave. This is not the same thing at all as describing, for example, the trajectory of a particle. Exactly what a wavefunction is remains to this day rather mysterious.
For many years it was thought that the wavefunction was simply a handy mathematical tool that could be used to describe atoms and molecules even in the absence of a fully complete theory (e.g., Bohm 1952). This idea, originally suggested by de Broglie in his “pilot wave” description, has been disproved by numerous ingenious experiments (e.g., Aspect et al., 1982). It now seems unavoidable to conclude that wavefunctions represent actual descriptions of reality, and that the “weirdness” of the quantum world is in fact an intrinsic part of that reality, with the concept of “particle” being only an approximation to that reality, appropriate only to a coarse-grained view of the world.

Nevertheless, by following the rules that have been developed regarding the application of the SWE, and quantum physics in general, it is possible to describe experimental observations with great accuracy. This is the primary reason why many physicists have, for over 80 years, eschewed the philosophical difficulties associated with wavefunctions and the like, and embraced the sheer predictive power of the theory. We will not discuss quantum mechanics in any detail here; there are many excellent books on the subject at all levels (e.g., Dirac 1934, Shankar 1994, Schiff 1968).

In classical terms the total energy of a particle E can be described simply as the sum of the kinetic energy (KE) and the potential energy (PE) as

$KE+PE=\frac{p^2}{2m}+V=E$                                                 (2)

where p = mv represents the momentum of a particle of mass m and velocity v. In quantum theory such quantities are described not by simple formulae, but rather by operators that act on the wavefunction. We describe momentum via the operator $-i \hbar\nabla$ and energy by $i\hbar \partial / \partial t$, and so on. The first term of eq (1) represents the total energy of the system, and is also known as the Hamiltonian, H.
Thus, the SWE may be written as

$H\psi=i\hbar\frac{\partial\psi}{\partial t}=E\psi$                                                              (3)

The reason why eq (3) is non-relativistic is that the energy–momentum relation in the Hamiltonian is described in the well-known non-relativistic form. As we know from Einstein, however, the total energy of a free particle does not reside only in its kinetic energy; there is also the rest-mass energy, embodied in what may be the most famous equation in all of physics:

$E=mc^2.$                                                                    (4)

This equation tells us that a particle of mass m has an equivalent energy E, with $c^2$ being a rather large number, illustrating that even a small amount of mass (m) can, in principle, be converted into a very large amount of energy (E). Despite being so famous as to qualify as a cultural icon, the equation $E = mc^2$ is, at best, incomplete. In fact the total energy of a free particle (i.e., V = 0) as prescribed by the theory of relativity is given by

$E^2=m^2c^4 +p^2c^2.$                                                        (5)

Clearly this will reduce to $E = mc^2$ for a particle at rest (i.e., p = 0): or will it? Actually, we shall have $E = \pm mc^2$, and in some sense one might say that the negative solutions to this energy equation represent antimatter, although, as we shall see, the situation is not so clear cut.

In order to make the SWE relativistic, then, one need only replace the classical kinetic energy $E = p^2/2m$ with the relativistic energy $E = \sqrt{m^2c^4+p^2c^2}$. This sounds simple enough, but the square root sign leads to quite a lot of trouble! This is largely because when we make the “quantum substitution” $p \rightarrow -i\hbar\nabla$ we find we have to deal with the square root of an operator, which, as it turns out, requires some mathematical sophistication.
Moreover, in quantum physics we must deal with operators that act upon complex wavefunctions, so that negative square roots may in fact correspond to a physically meaningful aspect of the system, and cannot simply be discarded as might be the case in a classical system. To avoid these problems we can instead start with eq (5) interpreted via the operators for momentum and energy, so that eq (3) becomes

$(- \frac{1}{c^2}\frac{\partial^2}{\partial t^2} + \nabla^2)\psi=\frac{m^2 c^2}{\hbar^2}\psi.$                                                (6)

This equation is known as the Klein–Gordon equation (KGE), although it was first obtained by Schrödinger in his original development of the SWE. He abandoned it, however, when he found that it did not properly describe the energy levels of the hydrogen atom. It subsequently became clear that when applied to electrons this equation also implied two things that were considered to be unacceptable: negative energy solutions and, even worse, negative probabilities. We now know that the KGE is not appropriate for electrons, but does describe some massive spin-zero particles when interpreted in the framework of quantum field theory (QFT); neither mesons nor QFT were known when the KGE was formulated.

Some of the problems with the KGE arise from the second-order time derivative, which is itself a direct result of squaring everything to avoid the intractable mathematical form of the square root of an operator. The fundamental connection between time and space at the heart of relativity leads to a similar connection between energy and momentum, a connection that is overlooked in the KGE. Dirac was thus motivated by the principles of relativity to keep a first-order time derivative, which meant that he had to confront the difficulties associated with using the relativistic energy head on.
We will not discuss the details of its derivation but will simply consider the form of the resulting Dirac equation:

$(c \alpha \cdot \mathrm{P}+\beta mc^2)\psi=i\hbar \frac{\partial\psi}{\partial t}.$                                                     (7)

This equation has the general form of the SWE, but with some significant differences. Perhaps the most important of these is that the Hamiltonian now includes both the kinetic energy and the electron rest mass, but the coefficients $\alpha_i$ and $\beta$ have to be 4 × 4 matrices to satisfy the equation. That is, the Dirac equation is really a matrix equation, and the wavefunction it describes must be a four-component wavefunction.

Although there are no problems with negative probabilities, the negative energy solutions seen in the KGE remain. These initially seemed to be a fatal flaw in Dirac’s work, but were overlooked because in every other aspect the equation was spectacularly successful. It reproduced the hydrogen atomic spectra perfectly (at least, as perfectly as it was known at the time) and even included small relativistic effects, as a proper relativistic wave equation should. For example, when the electromagnetic interaction is included, the Dirac equation predicts an electron magnetic moment

$\mu_e = \frac{\hbar e}{2m} = \mu_B$                                                                   (8)

where $\mu_B$ is known as the Bohr magneton. This expression is also in agreement with experiment, almost: it was later discovered that the magnetic moment of the electron differs from the value predicted by eq (8) by about 0.1% (Kusch and Foley 1948). The fact that Dirac’s theory was able to predict these quantities was considered to be a triumph, despite the troublesome negative energy solutions.

Another intriguing aspect of the Dirac equation was noticed by Schrödinger in 1930.
He realised that interference between positive and negative energy terms would lead to oscillations of the wavepacket of an electron (or positron) about some central point at the speed of light. This fast motion was given the name zitterbewegung (German for “trembling motion”). The underlying physical mechanism that gives rise to the zitterbewegung effect may be interpreted in several different ways, but one way to look at it is as an interaction of the electron with the zero-point energy of the (quantised) electromagnetic field. Such electronic oscillations have not been directly observed as they occur at a very high frequency (~$10^{21}$ Hz), but since zitterbewegung also applies to electrons bound to atoms, this motion can affect atomic energy levels in an observable way.

In a hydrogen atom the zitterbewegung acts to “smear out” the electron charge over a larger area, lowering the strength of its interaction with the proton charge. Since S states have a non-zero probability density at the origin, the effect is larger for these than it is for P states. The splitting between the hydrogen $2S_{1/2}$ and $2P_{1/2}$ states, which are degenerate in the Dirac theory, is known as the Lamb shift (Lamb, 1947). This shift, which amounts to ~1 GHz, was observed in an experiment by Willis Lamb and his student Robert Retherford (not to be confused with Ernest Rutherford!). The need to explain this shift, which requires a proper description of the electron interacting with the electromagnetic field, gave birth to the theory of quantum electrodynamics, pioneered by Bethe, Tomonaga, Schwinger and Feynman.

The solutions to the SWE for free particles (i.e., neglecting the potential V) are of the form

$\psi = A \mathrm{exp}(-iEt / \hbar).$                                                       (9)

Here A is some function that depends only on the spatial properties of the wavefunction (i.e., not on t).
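As a concrete check (not part of the original text), one can verify numerically that the form of eq (9) satisfies eq (1): taking A as a plane wave $e^{ikx}$ with $E = \hbar^2 k^2 / 2m$ and comparing both sides of the free SWE by finite differences. The numerical values of $\hbar$, m and k below are arbitrary illustrative choices:

```python
import cmath

hbar, m, k = 1.0, 1.0, 2.0
E = hbar**2 * k**2 / (2 * m)   # free-particle energy for this plane wave

def psi(x, t):
    # eq (9) with A = exp(ikx): a free-particle plane wave
    return cmath.exp(1j * k * x) * cmath.exp(-1j * E * t / hbar)

h = 1e-4       # finite-difference step
x0, t0 = 0.3, 0.7

# second spatial derivative and first time derivative, numerically
d2psi_dx2 = (psi(x0 + h, t0) - 2 * psi(x0, t0) + psi(x0 - h, t0)) / h**2
dpsi_dt = (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h)

lhs = -hbar**2 / (2 * m) * d2psi_dx2   # kinetic term of eq (1), with V = 0
rhs = 1j * hbar * dpsi_dt              # energy operator side of eq (1)

print(abs(lhs - rhs) < 1e-5)  # True: eq (9) solves the free SWE
```

Both sides evaluate to $E\psi$, as they must for a stationary state of energy E.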
Note that this wavefunction represents two electron states, corresponding to the two separate spin states. The corresponding solutions to the Dirac equation may be represented as $\psi_1 = A_1 \mathrm{exp}(-iEt / \hbar),$ $\psi_2 = A_2 \mathrm{exp}(+iEt / \hbar).$          (10) Here $\psi_2$ represents the negative energy solutions that have caused so much trouble. The existence of these states is central to the theory; they cannot simply be labelled as “unphysical” and discarded. The complete set of solutions is required in quantum mechanics, in which everything is somewhat “unphysical”. More properly, since the wavefunction is essentially a complex probability amplitude that yields a real probability density when its absolute value is squared, the negative energy solutions are no less physical than the positive energy solutions; it is in fact simply a matter of convention as to which states are positive and which are negative. However you set things up, you will always have some “wrong” energy states that you can’t get rid of. Thus, Dirac was able to eliminate the negative probabilities and produce a wave equation that was consistent with special relativity, but the negative energy states turned out to be a fundamental part of the theory and, despite many attempts, could not be eliminated. After his first paper in 1928 (The Quantum Theory of the Electron) Dirac had established that his equation was a viable relativistic wave equation, but the negative energy aspects remained controversial. He worried about this for some time, and tried to develop a “hole” theory to explain their seemingly undeniable existence. A serious problem with negative energy solutions is that one would expect all electrons to decay into the lowest energy state available, which would be a negative energy state. Since this would not be consistent with observations there must, so Dirac reasoned, be some mechanism to prevent it. 
He suggested that the states were already filled with an infinite “sea” of electrons, and therefore the Pauli exclusion principle would prevent such decay, just as it prevents more than two electrons from occupying the lowest energy level in an atom. (Note that this scheme does not work for bosons, which do not obey the exclusion principle.) Such an infinite electron sea would have no observable properties, as long as the underlying vacuum has a positive “bare” charge to cancel out the negative electron charge. Since only changes in the energy density of this sea would be apparent, we would not normally notice its presence. Moreover, Dirac suggested that if a particle were missing from the sea the resulting hole would be indistinguishable from a positively charged particle, which he speculated was a proton, protons being the only positively charged subatomic particles known at the time. This idea was presented in a paper in 1930 (A Theory of Electrons and Protons, Dirac 1930). The theory was less than successful, however, and its deficiencies served only to undermine confidence in the entire Dirac theory. Attempts to identify holes as protons only made matters worse; it was shown independently by Heisenberg, Oppenheimer and Pauli that the holes must have the electron mass, but of course protons are almost 2000 times heavier. Moreover, the instability of electron-hole pairs against annihilation completely ruled out stable atomic states made from these entities (bad news for hydrogen, and all other atoms). Eventually Dirac was forced to conclude that the negative energy solutions must correspond to real particles with the same mass as the electron and a positive charge. He called these anti-electrons (Quantised Singularities in the Electromagnetic Field, Dirac 1931). 
This almost reluctant conclusion was not based on a full understanding of what the negative energy states were, but rather on the fact that the entire theory, which was so beautiful in other ways that it was hard to resist, depended on them. It turns out that properly understanding the negative energy solutions requires the formalism of quantum field theory (QFT). In this description particles (and antiparticles) can be created or destroyed, so it is no longer necessarily appropriate to consider the particles themselves to be the fundamental elements of the theory. If the total number of particles in a system is not conserved, then one might prefer to describe that system in terms of the entities that give rise to the particles rather than the particles themselves. These are the quantum fields, and the standard model of particle physics is at its heart a QFT. By describing particles as oscillations in a quantum field we not only gain an immediate mechanism by which they may be created or destroyed, but the problem of negative energies is also removed, as these simply become a different kind of variation in the underlying quantum field. Dirac did not explicitly know this at the time, although it would be fair to say that he essentially invented QFT when he produced a quantum theory that included quantized electromagnetic fields (Dirac, 1927, The Quantum Theory of the Emission and Absorption of Radiation). This led, eventually, to what would become known as quantum electrodynamics. Dirac would undoubtedly have been able to make much more use of his creation had he not been so appalled by the notion of renormalization. Unfortunately this procedure, which in some ways can be thought of as subtracting infinite quantities from each other to leave a finite quantity, was incompatible with his sense of mathematical aesthetics. 
So, despite initially struggling with the interpretation of his theory, there can be no question that Dirac did indeed explicitly predict the existence of the positron before it was experimentally observed. The observation came almost immediately, in cloud chamber experiments conducted by Carl Anderson in California (Anderson 1932). Curiously, however, Anderson was not aware of the prediction, and the timing of the observation was apparently coincidental. We will discuss this remarkable observation in a later post.

*This post is adapted from an as-yet unpublished book chapter by D. B. Cassidy and A. P. Mills, Jr.

References:

Griffiths, D. (2008). Introduction to Elementary Particles, 2nd edition. Wiley-VCH.

Farmelo, G. (2011). The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom. Basic Books, New York.

Dirac, P. A. M. (1927). The Quantum Theory of the Emission and Absorption of Radiation. Proc. R. Soc. London Ser. A 114, 243.

Dirac, P. A. M. (1928). The Quantum Theory of the Electron. Proc. R. Soc. London Ser. A 117, 610.

Dirac, P. A. M. (1930). A Theory of Electrons and Protons. Proc. R. Soc. London Ser. A 126, 360.

Dirac, P. A. M. (1931). Quantised Singularities in the Electromagnetic Field. Proc. R. Soc. London Ser. A 133, 60.

Anderson, C. D. (1932). The Apparent Existence of Easily Deflectable Positives. Science 76, 238.

Lamb, W. E. and Retherford, R. C. (1947). Fine Structure of the Hydrogen Atom by a Microwave Method. Phys. Rev. 72, 241.

Aspect, A., Dalibard, J. and Roger, G. (1982). Experimental Test of Bell's Inequalities Using Time-Varying Analyzers. Phys. Rev. Lett. 49, 1804.

Kusch, P. and Foley, H. M. (1948). The Magnetic Moment of the Electron. Phys. Rev. 74, 250.
# Philip Kremer

Areas of Specialization

##### On the complexity of propositional quantification in intuitionistic logic
Journal of Symbolic Logic 62 (2): 529-544. 1997.
We define a propositionally quantified intuitionistic logic Hπ+ by a natural extension of Kripke's semantics for propositional intuitionistic logic. We then show that Hπ+ is recursively isomorphic to full second order classical logic. Hπ+ is the intuitionistic analogue of the modal systems S5π+, S4π+, S4.2π+, K4π+, Tπ+, Kπ+ and Bπ+, studied by Fine.

##### Dynamic topological S5
Annals of Pure and Applied Logic 160 (1): 96-116. 2009.
The topological semantics for modal logic interprets a standard modal propositional language in topological spaces rather than Kripke frames: the most general logic of topological spaces becomes S4. But other modal logics can be given a topological semantics by restricting attention to subclasses of topological spaces: in particular, S5 is the logic of the class of almost discrete topological spaces, and also of trivial topological spaces. Dynamic Topological Logic interprets a modal language enrich…

##### The modal logic of continuous functions on Cantor space
Archive for Mathematical Logic 45 (8): 1021-1032. 2006.
Let $\mathcal{L}$ be a propositional language with standard Boolean connectives plus two modalities: an S4-ish topological modality $\square$ and a temporal modality $\bigcirc$, understood as ‘next’. We extend the topological semantics for S4 to a semantics for the language $\mathcal{L}$ by interpreting $\mathcal{L}$ in dynamic topological systems, i.e. ordered pairs $\langle X, f\rangle$, where X is a topological space and f is a continuous function on X. Artemov, Davoren and Nerode have axiom…

##### Dunn's relevant predication, real properties and identity
Erkenntnis 47 (1): 37-65. 1997.
We critically investigate and refine Dunn's relevant predication, his formalisation of the notion of a real property. We argue that Dunn's original dialectical moves presuppose some interpretation of relevant identity, though none is given. We then re-motivate the proposal in a broader context, considering the prospects for a classical formalisation of real properties, particularly of Geach's implicit distinction between real and ''Cambridge'' properties. After arguing against these prospects, w…
f(f(x)) = exp(x) + x tommy1729 Ultimate Fellow Posts: 1,372 Threads: 336 Joined: Feb 2009 12/12/2009, 12:54 AM let f(x) be Coo. f(f(x)) = exp(x) + x what is f(x) ? does the (only) fixpoint at - oo help ? can f(x) be entire ? is there a solution f(x) such that f(x) has no fixpoint apart from - oo ? can f(x) be expressed in terms of tetration ? is there a solution f(f(x)) = exp(x) + x with f(x) E Coo , f(x) mapping all reals to reals and f(x) having no fixpoint apart from - oo ? does " a solution f(f(x)) = exp(x) + x with f(x) E Coo , f(x) mapping all reals to reals and f(x) having no fixpoint apart from - oo " imply that all derivatives are strictly positive reals ? can f(x) be expressed in terms of pentation ? does the substitution y = 1/x help ? ( trying to 'move the fixpoint' but problems occur e.g. exp(1/x) has a singularity at 0 ! ... on the other hand perhaps considering a certain angle towards the singularity it might work to give a real-analytic solution ? ) does the strategy lim n-> oo f(f(x)) = exp(x) + x + 1/n work ? how about the carleman matrix method ? this seems like a difficult problem ... regards tommy1729 bo198214 Administrator Posts: 1,389 Threads: 90 Joined: Aug 2007 12/12/2009, 12:25 PM (This post was last modified: 12/12/2009, 12:39 PM by bo198214.) Indeed $F(x)=e^x+x$ (which I will call here added exponential) is a very interesting function as it has no complex fixpoints. If there was some complex fixpoint $z$ then $e^z+z=z$, means $e^z=0$ which is never true. So one could think that one can not apply regular iteration. However as Tommy already suggested there is a fixed point at (complex) infinity. Well, the function is not analytic there, but for regular iteration it suffices that the function has an asymptotic powerseries development (approaching the fixed point in some sector) or even merely that the function is asymptotically real differentiable. 
This condition is met: the asymptotic derivatives for $z\to -\infty$ of $F$ are $F'(-\infty)=\lim_{z\to -\infty} e^z+1=1$ and all higher derivatives are $F^{(n)}(-\infty) = 0$. I.e. $F$ has asymptotically the same derivatives as the identity function. So we can apply the limit formula for regular iteration with multiplier 1 (derived from Lévy's formula for the Abel function): (*) $F^{[t]}(z) = F^{[n]}( t F^{[-n+1]}(z) + (1-t) F^{[-n]}(z) )$ This formula converges only very slowly, but it suffices to get 2 or 3 digits to plot a graph of $f=F^{[0.5]}$ (blue). Another way is to consider the analytic conjugate $M = \exp \circ F \circ \log$. $M(x)=\exp(x+\log(x))=e^x x$ is the (what I call) multiplied exponential, which moves the fixed point at $-\infty$ of $F$ to the fixed point 0 of $M$. This function is well regularly iterable at 0 (with power series and limit formula) and then $F^{[t]}=\log\circ M^{[t]} \circ\exp$. The conjugation formula shows also how to compute the inverse of $F(x)=x+e^x$ which is needed e.g. in the limit formula. The inverse is $F^{[-1]} = \log\circ M^{[-1]}\circ \exp$, i.e. $F^{[-1]}(x) = \log(W(e^x))$, where $W$ is the Lambert W function. mike3 Long Time Fellow Posts: 368 Threads: 44 Joined: Sep 2009 12/12/2009, 11:35 PM (This post was last modified: 12/12/2009, 11:39 PM by mike3.) I'm curious: where are the branch points for the positive-real-count fractional iterations such as $F^{1/2}(z)$ on the complex plane? I tried a graph on the complex plane for z in the square from -5-5i to +5+5i and I noticed two cutlines (at $t \pm i \pi$, it appears, where t is real), but they seem to go across the entire plane. So are these "real", i.e. are there branch points on them (or outside the graphing square), or are they just an artifact of the algorithm and the fractional iterate is actually entire (i.e. we could analytically continue it out of the region near 0 into the whole plane)? 
tommy1729 Ultimate Fellow Posts: 1,372 Threads: 336 Joined: Feb 2009 12/13/2009, 06:26 PM (12/12/2009, 12:25 PM)bo198214 Wrote: Indeed $F(x)=e^x+x$ (which I will call here added exponential) is a very interesting function as it has no complex fixpoints. If there was some complex fixpoint $z$ then $e^z+z=z$, means $e^z=0$ which is never true. So one could think that one can not apply regular iteration. However as Tommy already suggested there is a fixed point at (complex) infinity. Well, the function is not analytic there, but for regular iteration it suffices that the function has an asymptotic powerseries development (approaching the fixed point in some sector) or even merely that the function is asymptotically real differentiable. (((snip part of quote))) Another way is to consider the analytic conjugate $M = \exp \circ F \circ \log$. $M(x)=\exp(x+\log(x))=e^x x$ is the (what I call) multiplied exponential, which moves the fixed point at $-\infty$ of $F$ to the fixed point 0 of $M$. This function is well regularly iterable at 0 (with powerseries and limit formula) at then $F^{[t]}=\log\circ M^{[t]} \circ\exp$. The conjugation formula shows also how to compute the inverse of $F(x)=x+e^x$ which is needed e.g. in the limit formula. The inverse is $F^{[-1]} = \log\circ M^{[-1]}\circ \exp$, i.e. $F^{[-1]}(x) = \log(W(e^x))$, where $W$ is the Lambert function. that is intresting , though i dont know why you call it 'analytic conjugate' ? funny thing is i considered the above things in reverse order :p back to business , i wanted to point out that perhaps the above approach might also work for real-iterates of exp(z) ?!? basicly bo replaced the fixpoint from - oo to 0 , and i consider the possibility of doing the same with the 2 complex fixpoints of exp(z) towards 0. 
( if succesfull i bet we arrive at kouznetsov's solution but with more proven properties ) so we look for strictly increasing functions T(x) resp f(x) with T(x) = f(x) ° exp(x) ° f°-1(x) with a fixpoint at 0. ( i didnt check but ) f(x) might just be simple like f(x) = (x - fp1)^a (x - fp2)^a realpoly(x) note fp1 and fp2 are the solutions of exp(z) = z and are eachother conjugate ( which is very important here ) of course also the correct branches of f°-1(x) resp f(x) are important. regards tommy1729 mike3 Long Time Fellow Posts: 368 Threads: 44 Joined: Sep 2009 12/14/2009, 04:42 AM Most likely such a thing will turn out as equivalent to the usual regular iteration of exp from the repelling fixed point. This does not give real values for tetration at the real axis and does not have singularities (i.e. cannot satisfy $F(-1) = 0$) so it doesn't really seem to make sense for a generalization of tetration of base e to real or complex towers. BenStandeven Junior Fellow Posts: 27 Threads: 3 Joined: Apr 2009 12/14/2009, 07:18 AM Using the simplest conjugation function produces something of a mess; to simplify, I'll consider the conjugate F of exp(pi/2 x) instead; the fixed points are +/-I, so we can conjugate with w=z^2 + 1. The inverse function is z=sqrt(w-1); then I get: F(x) = exp(pi/2 * sqrt(x-1))^2 + 1 = exp(pi * sqrt(x-1)) + 1. Note that this function has a branch point at 1, but the conjugation function moves [0, oo] to [1, oo], so we must have two different functions near zero, one for each choice of branch cut. They will presumably yield different regular iteration functions, which correspond to the two entire regular iteration functions for exp. bo198214 Administrator Posts: 1,389 Threads: 90 Joined: Aug 2007 12/14/2009, 09:52 AM (12/13/2009, 06:26 PM)tommy1729 Wrote: basicly bo replaced the fixpoint from - oo to 0 , and i consider the possibility of doing the same with the 2 complex fixpoints of exp(z) towards 0. 
( if succesfull i bet we arrive at kouznetsov's solution but with more proven properties ) so we look for strictly increasing functions T(x) resp f(x) with T(x) = f(x) ° exp(x) ° f°-1(x) with a fixpoint at 0. ( i didnt check but ) f(x) might just be simple like f(x) = (x - fp1)^a (x - fp2)^a realpoly(x) Yes, I was also playing with this idea. However when you map a simply connected region with two fixed points on its boundary biholomorphically and real-analytically such that both fixed points go to 0, then you probably have a branch cut on the real axis (consider a sickle with the two fixed points at its ends; if you bend the fixed points to 0, there will be a slit or overlapping on the real axis up to the point where one part of the boundary intersects the real axis). (12/14/2009, 07:18 AM)BenStandeven Wrote: Using the simplest conjugation function produces something of a mess; to simplify, I'll consider the conjugate F of exp(pi/2 x) instead; the fixed points are +/-i, so we can conjugate with w=z^2 + 1. The inverse function is z=sqrt(w-1); then I get: F(x) = exp(pi/2 * sqrt(x-1))^2 + 1 = exp(pi * sqrt(x-1)) + 1. Very good example, it also illustrates my point above. The function F is non-real on (0,1); it has a cut from 1 to -oo (as you described) tommy1729 Ultimate Fellow Posts: 1,372 Threads: 336 Joined: Feb 2009 12/14/2009, 09:47 PM (12/14/2009, 09:52 AM)bo198214 Wrote: (12/13/2009, 06:26 PM)tommy1729 Wrote: basicly bo replaced the fixpoint from - oo to 0 , and i consider the possibility of doing the same with the 2 complex fixpoints of exp(z) towards 0. 
However when you map a simply connected region with two fixed points on its boundary biholomorphically and real-analytically such that both fixed points go to 0, then you probably have a branch cut on the real axis (consider a sickle with the two fixed points at its ends; if you bend the fixed points to 0, there will be a slit or overlapping on the real axis up to the point where one part of the boundary intersects the real axis). (12/14/2009, 07:18 AM)BenStandeven Wrote: Using the simplest conjugation function produces something of a mess; to simplify, I'll consider the conjugate F of exp(pi/2 x) instead; the fixed points are +/-i, so we can conjugate with w=z^2 + 1. The inverse function is z=sqrt(w-1); then I get: F(x) = exp(pi/2 * sqrt(x-1))^2 + 1 = exp(pi * sqrt(x-1)) + 1. Very good example, it also illustrates my point above. The function F is non-real on (0,1); it has a cut from 1 to -oo (as you described) still i believe in the basic idea ... didnt say it would be easy regards tommy1729
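The limit formula (*) quoted in the thread is easy to try numerically. A minimal Python sketch of my own (it inverts $F(x)=x+e^x$ by Newton's method rather than via the Lambert W route mentioned above; the iteration count `n` and tolerances are arbitrary choices):

```python
import math

def F(x):
    """The 'added exponential' F(x) = x + e^x."""
    return x + math.exp(x)

def F_inv(x):
    """Invert y + e^y = x by Newton's method (F is strictly increasing)."""
    y = x if x < 1.0 else math.log(x)    # rough starting guess
    for _ in range(60):
        e = math.exp(y)
        step = (y + e - x) / (1.0 + e)
        y -= step
        if abs(step) < 1e-14:
            break
    return y

def frac_iter(z, t, n=200):
    """Approximate F^[t](z) by the limit formula (*):
    F^[t](z) ~ F^[n]( t*F^[-n+1](z) + (1-t)*F^[-n](z) )."""
    a = z
    for _ in range(n - 1):
        a = F_inv(a)          # a = F^[-n+1](z)
    b = F_inv(a)              # b = F^[-n](z)
    y = t * a + (1.0 - t) * b
    for _ in range(n):
        y = F(y)
    return y

half = frac_iter(0.0, 0.5)    # the half-iterate at 0
print(half, F(0.0))           # half(half(0)) should be close to F(0) = 1
```

As the thread says, convergence is slow (a few digits only), but iterating the computed half-iterate twice does land close to $F$ itself.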
## AutoTrader survey finds that saving money is dominant motivation for UK consumers considering more fuel efficient vehicles ##### 11 May 2011 Research by AutoTrader.co.uk has found that money, not the environment, is the main driver of interest in environmentally friendly cars. The majority of UK motorists (73%) would consider going green to save money on fuel expenditure, compared to 41% of drivers motivated by environmental concerns. The average price for unleaded gasoline in the UK as of 10 May was 137.21p/liter (US$8.56/gallon); the average price of diesel was 142.51p/liter (US$8.89/gallon). However, reducing emissions is still an important objective for motorists as 57% consider the impact of their driving habits on the environment at least once a month, with 16% of these thinking about their carbon footprint every time they step into the car. Only 23% claim that the environment never crosses their mind when on the road. Auto Trader also discovered that 49% of UK motorists would not consider buying an electric car in the near future. The major reason for this, with 45% of motorists in agreement, is confusion over where to fuel these types of vehicles. Most electric cars can be charged through a conventional power outlet, making it possible to charge either at the owner’s home or office, but this would require approximately 8 – 10 hours charging for a full battery. In addition, the network of higher power electric charging stations in the UK currently stands at approximately only 200 nationwide. Other factors that dissuade motorists from buying electric cars include the initial outlay costs (38%) and the car’s look and feel (26%). While manufacturers are in a position to help consumers with both of these issues, there is also an opportunity for the government to subsidize the price of electric cars to achieve its greenhouse gas emissions targets, with 49% of motorists claiming that a government grant would be enough to tempt them into green motoring. 
With the continued rise of fuel prices it’s not surprising that motorists are turning to new alternatives to reduce their maintenance costs. In this difficult financial environment consumers are simply being practical in their approach to motoring. It is encouraging to see that such a high percentage of motorists are concerned about the environment and it’s clearly more of a question about getting the infrastructure in place to support green motoring, rather than consumer apathy on this important issue. —Matt Thompson, Group Marketing Director, Auto Trader
# Taylor's Theorem for Vector-Valued Functions (Real Analysis)

1. Dec 23, 2013

### Antiderivative

1. The problem statement, all variables and given/known data "Formulate and prove an inequality which follows from Taylor's theorem and which remains valid for vector-valued functions."

2. Relevant equations I know that Taylor's theorem generally states that if $f$ is a real function on $[a,b]$, $n$ is a positive integer, $f^{(n-1)}$ is continuous on $[a,b]$, $f^{(n)}(t)$ exists for every $t \in (a,b)$, and $\alpha$ and $\beta$ are distinct points of $[a,b]$, then there exists a point $x$ between $\alpha$ and $\beta$ such that $f(\beta) = P(\beta) + \frac{f^{(n)}(x)}{n!}(\beta - \alpha)^{n}$, where $P(t)$ is the function given by $P(t) = \sum\limits_{k=0}^{n-1} \frac{f^{(k)}(\alpha)}{k!}(t - \alpha)^{k}$.

3. The attempt at a solution I'm not sure how to set up an equation from here. I know that vector-valued functions exist in several variables and often on the complex plane, but I'm not entirely sure how one proceeds in creating an inequality. Does it stem from the definition of a vector-valued function? I know that part of it is similar to a mean value theorem form... Any help would be appreciated. Thanks!

2. Dec 24, 2013

### HallsofIvy

You are misleading yourself. Yes, "vector valued functions" can depend on multiple variables or complex variables but that is NOT what is intended here. The given function here depends upon a single real variable and the generalization you are asked to make is not to the variable but to the value of the function. You are correct that you cannot have an inequality with vectors since vector spaces do not have a linear order. Instead use the "norm" or length of the vector.

3. Dec 24, 2013

### Antiderivative

I see. So we're trying to make a relationship for $f(t)$, not just the single-variable $t$ itself. As for the length, what justifies using that as the manner in which one orders these vectors? I mean I can do the math (it's just $|z| = \sqrt{z\overline{z}}$), but why are we allowed to? 
Or is it implicitly just asking for that since that's what'll arise from Taylor's theorem anyways? 4. Dec 24, 2013 ### Antiderivative I can derive the first-order version of the inequality, which is $\left|\mathbf{f}(b) - \mathbf{f}(a)\right| \leq (b - a)\left|\mathbf{f}'(x)\right|$ As I said this is a "mean value theorem" for vector-valued functions. How would I extend this to an $n^{th}$ order inequality so as to "follow from Taylor's theorem" directly? I can see by looking at this that it should be quite possible yet for some odd reason I cannot seem to fashion a proof that does so. Can somebody help me out? I used the norm to derive this inequality, as HallsofIvy suggested. Edit: Update—I figured something out. I think that one cannot determine an exact expression for the "remainder" in Taylor's theorem because this is a vector-valued function, so instead we can use an upper bound based on an upper bound on $\mathbf{f}^{(n)}$. So doing that would generate an inequality, as the actual value would have to be $\leq$ the value with the upper bound. I think the easiest way is to find the upper bound on $f^{(n)}$ for real-valued functions $f$ and then applying it for each component function if this were a vector-valued function instead. But my question now is if that's a valid approach, as it seems to be committing a faux pas in some form. Can somebody let me know if this is a plausible approach? Thanks, I appreciate it. Last edited: Dec 24, 2013
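To see the $n$-th order bound in action, here is a small numeric check (my own example, not from the thread) using $\mathbf{f}(t) = (\cos t, \sin t)$. Every derivative of this function has Euclidean norm 1, so the Taylor inequality reduces to $|\mathbf{f}(b) - P(b)| \le (b-a)^n/n!$:

```python
import math

def f(t):
    # f(t) = (cos t, sin t); |f^(k)(t)| = 1 for every k
    return (math.cos(t), math.sin(t))

def deriv(k, t):
    # k-th derivative of (cos, sin): shift the phase by k*pi/2
    return (math.cos(t + k * math.pi / 2), math.sin(t + k * math.pi / 2))

def taylor_poly(n, a, b):
    # P(b) = sum_{k=0}^{n-1} f^(k)(a) (b-a)^k / k!, computed componentwise
    px = py = 0.0
    for k in range(n):
        c = (b - a) ** k / math.factorial(k)
        dx, dy = deriv(k, a)
        px += c * dx
        py += c * dy
    return (px, py)

a, b, n = 0.0, 1.0, 5
fb, pb = f(b), taylor_poly(n, a, b)
remainder = math.hypot(fb[0] - pb[0], fb[1] - pb[1])
bound = (b - a) ** n / math.factorial(n)   # sup |f^(n)| = 1 here
print(remainder, "<=", bound)
```

The componentwise construction mirrors the approach suggested in the last post: bound $|\mathbf{f}^{(n)}|$ by its supremum, and the scalar remainder estimate carries over to the norm of the vector remainder.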
Last Digits Of Large Numbers

Suppose we want to find the last digit of some large number. For a relatively small power like $2^{12}$, this is rather easy to compute. However, for a large power such as $1323^{422293}$, this is a rather difficult task.

# Last Digits as Remainders

Finding the last digit of a number is equivalent to finding the remainder of that number when divided by 10. That is, for an integer n such that:

(1) $$n = d_k 10^k + d_{k-1}10^{k-1} + ... + d_1 10^1 + d_0 10^0$$

it follows that:

(2) \begin{align} d_k 10^k + d_{k-1}10^{k-1} + ... + d_1 10^1 + d_0 10^0 \equiv d_0 \pmod {10} \end{align}

## Lemma 1: For every integer $n = d_k 10^k + d_{k-1}10^{k-1} + \ldots + d_1 10 + d_0$, the last digit $d_0$ satisfies $d_0 \equiv n \pmod{10}$.

• Proof: Let n be an integer represented in the fashion:

(3) $$n = d_k 10^k + d_{k-1}10^{k-1} + ... + d_1 10^1 + d_0 10^0$$

• Thus it follows that:

(4) \begin{align} n \equiv d_k 10^k + d_{k-1}10^{k-1} + ... + d_1 10^1 + d_0 10^0 \pmod {10} \end{align}

• Since 10 ≡ 0 (mod 10), every term carrying a factor of 10 vanishes, and thus:

(5) \begin{align} n \equiv d_k (0)^k + d_{k-1}(0)^{k-1} + ... + d_1 (0)^1 + d_0 (0)^0 \pmod {10} \\ n \equiv d_0 \pmod {10} \end{align}

• Thus the proof is complete. When n is divided by 10, the remainder is equal to the last digit of that number.

### Example 1

Determine the last digit of $3^{347}$. We first look for a small power of 3 that is congruent to 1 or -1 (mod 10). Note that 9 ≡ -1 (mod 10), since 9 + 1 = 10 ≡ 0 (mod 10). Since $3^2 = 9$, we have:

(6) \begin{align} (3^2)^2 = 3^4 = 81 \equiv 1 \pmod {10} \end{align}

We can now apply the division algorithm to 347 and 4 (our exponent) to obtain 347 = 4(86) + 3. Hence we can write:

(7) \begin{align} 3^{347} = 3^{4(86) + 3} = (3^4)^{86}(3^3) \equiv (1)^{86}(3^3) \equiv 27 \equiv 7 \pmod {10} \end{align}

So the last digit of $3^{347}$ is 7, since 27 ≡ 7 (mod 10).
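The same computation can be checked in one line of Python, since the built-in three-argument `pow` performs modular exponentiation by repeated squaring without ever forming the huge number:

```python
# Last digit of 3**347 via modular exponentiation
last = pow(3, 347, 10)
print(last)  # 7

# sanity check against the direct (wasteful) computation
assert last == (3 ** 347) % 10
```

This mirrors the hand computation above: `pow` works mod 10 at every step, just as the congruences in Example 1 do.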
# Find the limiting distribution of Bernoulli random variables

If $X_1, X_2,\ldots$ are i.i.d. Bernoulli random variables with mean $\frac{1}{2}$, define $T_n = \sqrt{n}\left(\frac{4\sum_{i=1}^{n} X_i - 2n}{\sum_{i=1}^{n} X_i^2}\right)$. Find the pmf or pdf of the limiting distribution of the sequence $T_1, T_2,\ldots$

My thought: We consider the function $g(x) = 4x$. Since this function and its derivative are continuous, and both are nonzero at $\mu_X = \frac{1}{2}$, we could apply the theorem saying $\sqrt{n}\left[\frac{g(\overline{X}_n)- g(\frac{1}{2})}{|g'(\frac{1}{2})|\sigma_{X}}\right]\rightarrow N(0,1)$ in distribution, where $\overline{X}_n = \frac{X_1+\ldots + X_n}{n}$ and $\sigma_{X}^2 = \frac{1}{4}$ (since $X$ is Bernoulli). Now, my plan is to figure out the convergence in probability of $\sum_{i=1}^{n} X_i^2$, but I am stuck here. Could anyone please help with this last piece of the puzzle?

## 1 Answer

You can use the law of large numbers to talk about what happens to $\frac{1}{n} \sum_{i=1}^n X_i^2$. Note that you divided the numerator of $T_n$ by $n$, so you should also do that for the denominator. • I'm so dumb:( We only need to show $E(X_i^2) < \infty$ which is obvious since it equals to $\frac{3}{4}$, and $Var(X_i^2) < \infty$. Since $X_i^2$ are i.i.d because $X_i$ are i.i.d, $\frac{1}{n} \sum_{i=1}^{n} X_i^2\rightarrow \frac{3}{4}$ in probability. Thus the final result is convergence in distribution to $\frac{3}{4}N(0,1)$, so the limiting distribution has pdf $\frac{3}{4}\frac{1}{\sqrt{2\pi}}e^{-x^2}$. Is this correct? – user177196 Mar 2 '17 at 7:07 • $E[X_1^2]=1/2$. Also, the pdf you have written does not integrate to $1$. [More specifically, if $X$ has pdf $p_X(x)$, then the pdf of $cX$ is not $cp_X(x)$.] – angryavian Mar 2 '17 at 7:12 • Thank you for your patience. Is the pdf equal to $N(0,\frac{3}{4})$? – user177196 Mar 2 '17 at 7:22 • @user177196 If $Z$ is standard normal, then $cZ \sim N(0,c^2)$. – angryavian Mar 2 '17 at 7:24 • @user177196 Confirm: this is true. 
– NCh Mar 2 '17 at 10:49
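Putting the pieces together (the delta method for the numerator, the law of large numbers for the denominator, with $E[X_i^2] = 1/2$ as corrected in the comments) suggests $T_n \to 4Z$ with $Z$ standard normal, i.e. $N(0,16)$. A quick Monte Carlo sanity check of my own (the sample sizes are arbitrary choices):

```python
import math
import random

random.seed(0)

def T(n):
    # For Bernoulli X_i we have X_i^2 = X_i, so one sum serves both roles.
    s = sum(random.random() < 0.5 for _ in range(n))
    return math.sqrt(n) * (4 * s - 2 * n) / s

samples = [T(5000) for _ in range(1000)]
mean = sum(samples) / len(samples)
var = sum((t - mean) ** 2 for t in samples) / (len(samples) - 1)
print(round(mean, 2), round(var, 1))  # mean near 0, variance near 16
```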
Operator Theory, Complex Analysis and Applications Seminar Past sessions Invitation to weighted shifts on directed trees Weighted shifts on directed trees form an important class of operators introduced recently in [6]. This class is a natural and substantial generalization of the class of classical (unilateral or bilateral) weighted shifts on $\ell^2$ spaces. It is also related to a class of composition operators in $L^2$-spaces. Weighted shifts on directed trees have proven to have very interesting features (see [2, 3, 5, 6, 7]). The underlying, relatively simple graph structure gives rise to a subtle and complex structure of the operators, which turn out to have properties not known before in other classes of operators, and this makes them ideal for testing hypotheses and constructing examples. We will outline recent results concerning these operators, with the main emphasis on subnormality and reflexivity. The talk is based on joint work with Z. J. Jabłoński, I. B. Jung and J. Stochel, and with M. Ptak. References 1. P. Budzyński, P. Dymek, Z. J. Jabłoński, J. Stochel, Subnormal weighted shifts on directed trees and composition operators in $L^2$-spaces with non-densely defined powers, Abstract Appl. Anal. (2014), Article ID 791817, 6 pages. 2. P. Budzyński, Z. J. Jabłoński, I. B. Jung, J. Stochel, Subnormality of unbounded weighted shifts on directed trees, J. Math. Anal. Appl. 394 (2012), 819-834. 3. P. Budzyński, Z. J. Jabłoński, I. B. Jung, J. Stochel, Subnormality of unbounded weighted shifts on directed trees. II, J. Math. Anal. Appl. 398 (2013), 600-608. 4. P. Budzyński, Z. J. Jabłoński, I. B. Jung, J. Stochel, Unbounded subnormal composition operators in $L^2$-spaces, preprint, http://arxiv.org/abs/1310.3542. 5. P. Budzyński, Z. J. Jabłoński, I. B. Jung, J. Stochel, Subnormal weighted shifts on directed trees whose nth powers have trivial domain, preprint, http://arxiv.org/abs/1409.8022. 6. Z. J. Jabłoński, I. B. Jung, J. 
Stochel, Weighted shifts on directed trees, Mem. Amer. Math. Soc. 216 (2012), no. 1017.
7. Z. J. Jabłoński, I. B. Jung, J. Stochel, A non-hyponormal operator generating Stieltjes moment sequences, J. Funct. Anal. 262 (2012), 3946-3980.

Poisson integrals on Riemannian Symmetric Spaces

In this talk we shall give characterizations of the $L^{p}$-range of the Poisson transform $P_{\lambda}$ associated to Riemannian symmetric spaces. We will focus on the rank one symmetric space case, and show that for $\lambda$ real, the Poisson transform is a bijection from the space of $L^{2}$ functions on the boundary (respectively $L^{p}$) onto a subspace of eigenfunctions of the Laplacian satisfying certain $L^{2}$-type norms (respectively Hardy-type norms).

Spectral analysis of Jacobi operators generated by Markov birth and death processes

Some particular examples of Jacobi operators (tridiagonal matrices) with growing entries, related to Markov processes, will be considered. Using a Levinson-type theorem approach, we plan to determine the spectral structure of the corresponding operators. No preliminary knowledge of Jacobi matrices or orthogonal polynomials is required.

Characterising Higher-Rank Graph C*-Algebras

There is an elegant theory for graph C*-algebras that allows one to determine structural properties of the C*-algebra from the underlying directed graph. By coupling this with C*-algebra classification results, one can characterise many graph C*-algebras as falling into various known classes of nuclear classifiable C*-algebras. Whereas much of the structural theory carries over, the C*-algebras associated to higher-rank analogues of directed graphs are much less well understood. I will recall the standard tools that are available to study higher-rank graph C*-algebras and discuss how recent developments in Elliott's classification programme could be used to help characterise higher-rank graph C*-algebras.
Sampling, interpolation and Riesz bases in the small Fock spaces

We give a complete description of Riesz bases and characterize interpolation and sampling in terms of densities. This is joint work with A. Baranov, A. Dumont and A. Hartmann.

Integral operators and elliptic equations in variable exponent Lebesgue spaces

We study mapping properties of variable order Riesz and Wolff potentials within the framework of variable exponent Lebesgue spaces. As an application, optimal integrability results for solutions to the $p(\cdot)$-Laplace equation are given in the scale of (weak) Lebesgue spaces.

Pseudospectra in non-Hermitian quantum mechanics

We propose giving the mathematical concept of the pseudospectrum a central role in quantum mechanics with non-self-adjoint operators. We relate pseudospectral properties to quasi-self-adjointness, similarity to self-adjoint operators and basis properties of eigenfunctions. Applying microlocal techniques for the location of the pseudospectrum of semiclassical operators to models familiar from the recent physical literature, unexpected wild properties of the operators are revealed. This is joint work with Petr Siegl, Milos Tater and Joe Viola.

Riemann-Hilbert problems, Toeplitz operators and $Q$-classes

We generalize the notion of $Q$-classes $C_{Q_{1},Q_{2}}$, which was introduced in the context of Wiener-Hopf factorization, by considering very general $2\times 2$ matrix functions $Q_{1}$, $Q_{2}$. This allows us to use a mainly algebraic approach to obtain several equivalent representations for each class, to study the intersections of $Q$-classes and to explore their close connection with certain non-linear scalar equations. The results are applied to various factorization problems and to the study of Toeplitz operators with symbol in a $Q$-class.

Optimal bounds for analytic projections

We discuss some recent advances related to size estimates of analytic projections and the possible uses for such estimates in applications.
The spaces considered include Hardy, Bergman, Bloch, Besov and Segal-Bargmann spaces. We study in detail the case of the Bergman projection onto the maximal and minimal Möbius invariant spaces.

The Hua operators on homogeneous line bundles over bounded symmetric domains of tube type

Let $\mathcal{D}=G/K$ be a bounded symmetric domain of tube type. We show that the image of the Poisson transform on the degenerate principal series representation of $G$ attached to the Shilov boundary of $\mathcal{D}$ is characterized by a $K$-covariant differential operator on a homogeneous line bundle over $\mathcal{D}$. As a consequence of our result we get the eigenvalues of the Casimir operator for Poisson transforms on homogeneous line bundles over $G/K$. This extends a result of Imemura et al. on symmetric domains of classical type to all symmetric domains. We also compute a class of Hua-type integrals, generalizing an earlier result of Faraut and Korányi.

Spectral analysis of Jacobi matrices and asymptotic properties of orthogonal polynomials

We review basic features of the spectral theory of Hermitian Jacobi operators. The analysis is based on asymptotic properties of the related orthogonal polynomials at infinity for fixed spectral parameter. We discuss various examples of bounded and unbounded Jacobi matrices. This talk is meant to give an introduction to the theory of Jacobi matrices and orthogonal polynomials.

Rota's Universal Operators and Invariant Subspaces in Hilbert Spaces

Rota showed, in 1960, that there are operators $T$ that provide models for every bounded linear operator on a separable, infinite-dimensional Hilbert space, in the sense that given an operator $A$ on such a Hilbert space, there is $\lambda \ne 0$ and an invariant subspace $M$ for $T$ such that the restriction of $T$ to $M$ is similar to $\lambda A$. In 1969, Caradus provided a practical condition for identifying such universal operators.
In this talk, we will use the Caradus theorem to exhibit a new example of a universal operator and show how it can be used to provide information about invariant subspaces for Hilbert space operators. Of course, Toeplitz operators and composition operators on the Hardy space $H^{2}(\mathbb{D})$ will play a role! This talk describes work in collaboration with Eva Gallardo-Gutiérrez, Universidad Complutense de Madrid, done there this year during the speaker's sabbatical.

The Brownian traveller on manifolds

We study the influence of the intrinsic curvature on the large-time behaviour of the heat equation in a tubular neighbourhood of an unbounded geodesic in a two-dimensional Riemannian manifold. Since we consider killing boundary conditions, there is always an exponential-type decay for the heat semigroup. We show that this exponential-type decay is slower for positively curved manifolds compared with the flat case. As the main result, we establish a sharp extra polynomial-type decay for the heat semigroup on negatively curved manifolds compared with the flat case. The proof employs the existence of Hardy-type inequalities for the Dirichlet Laplacian in the tubular neighbourhoods on negatively curved manifolds, and the method of self-similar variables and weighted Sobolev spaces for the heat equation.

1. Martin Kolb and David Krejcirik: The Brownian traveller on manifolds, J. Spectr. Theory, to appear; preprint on arXiv:1108.3191 [math.AP].
Berezin Calculus over Weighted Bergman Spaces of Polyanalytic type

Starting from the Poincaré metric $ds^{2}=\frac{1}{2\pi i}\left(1-|z|^{2}\right)^{-2}\,d\bar{z}\,dz$ on the unit disk $\mathbb{D}$, we will study the range of the Berezin transforms generated from the normalized kernel function $K_{\zeta}^{n}(z)=K^{n}(z,\zeta)\,K^{n}(\zeta,\zeta)^{-\frac{1}{2}}$ with respect to the weighted polyanalytic Bergman spaces $A_{n}^{\alpha}(\mathbb{D})$ of order $n$. Special emphasis will be given to the invariance of the range of the Berezin transformation under the action of the Möbius transformations $\varphi_{\zeta}(z)=\frac{z-\zeta}{1-\bar{\zeta}z}$. The connection between the Berezin calculus over weighted Bergman spaces of polyanalytic type on the disk $\mathbb{D}$ and on the upper half-plane $\mathbb{C}^{+}$ will also be discussed along the talk.

Noncommutative summands of the $C^{*}$-algebra $C_{r}^{*}\mathrm{SL}_{2}(\mathbb{F}_{2}((\varpi)))$

Let $\mathbb{F}_{2}((\varpi))$ denote the Laurent series in the indeterminate $\varpi$ with coefficients over the finite field $\mathbb{F}_{2}$ with two elements. This is a local nonarchimedean field of characteristic $2$. We show that the structure of the reduced group $C^{*}$-algebra of $\mathrm{SL}_{2}(\mathbb{F}_{2}((\varpi)))$ is determined by the arithmetic of the ground field. Specifically, the algebra $C_{r}^{*}\mathrm{SL}_{2}(\mathbb{F}_{2}((\varpi)))$ has countably many noncommutative summands, induced by the Artin-Schreier symbol. Each noncommutative summand has a rather simple description: it is the crossed product of a commutative $C^{*}$-algebra by a finite group. The talk will be elementary, starting from scratch with the definition of $C_{r}^{*}\mathrm{SL}_{2}$.
A light introduction to supersymmetry

We give a brief introduction to supersymmetric quantum mechanics.

A Riemann-Hilbert approach to Toeplitz operators and the corona theorem

Together with differential operators, Toeplitz operators (TOs) constitute one of the most important classes of non-self-adjoint operators, and they appear in connection with various problems in physics and engineering. The main topic of my presentation will be the interplay between TOs and Riemann-Hilbert problems (RHPs), and the relations of both with the corona theorem. It has been shown that the existence of a solution to a RHP with $2\times 2$ coefficient $G$, satisfying some corona-type condition, implies (and in some cases is equivalent to) Fredholmness of the TO with symbol $G$. Moreover, explicit formulas for an appropriate factorization of $G$ were obtained, allowing one to solve different RHPs with coefficient $G$, and to determine the inverse, or a generalized inverse, of the TO with symbol $G$. However, those formulas depend on the solutions to two meromorphic corona problems. These solutions being unknown, or rather complicated in general, the question of whether the factorization of $G$ can be obtained without the corona solutions is a pertinent one. In some cases it already has a positive answer; how to solve this question in general is open, and all the more so in the case of $n\times n$ matrix functions $G$, for which the results regarding the $2\times 2$ case have recently been generalized.

Generalized invertibility in rings: some recent results

The theory of generalized inverses has its roots both in semigroup theory and in matrix and operator theory. In this seminar we will focus on the study of the generalized inverses of von Neumann, group, Drazin and Moore-Penrose type in a purely algebraic setting. We will present some recent results dealing with the generalized inverses of certain types of matrices over rings, emphasizing the proof techniques used.
Spectral analysis of some non-self-adjoint operators

We give an introduction to the study of one particular class of non-self-adjoint operators, namely $\mathcal{PT}$-symmetric ones. We explain briefly the physical motivation and describe the classes of operators that are considered. We explain relations between the operator classes, namely their non-equivalence, and mention open problems. In the second part, we focus on the similarity to self-adjoint operators. On the positive side, we present results on one-dimensional Schrödinger-type operators in a bounded interval with non-self-adjoint Robin-type boundary conditions. Using functional calculus, closed formulas for the similarity transformation and the similar self-adjoint operator are derived in particular cases. On the other hand, we analyse the imaginary cubic oscillator which, although being $\mathcal{PT}$-symmetric and possessing real spectrum, is not similar to any self-adjoint operator. The argument is based on known semiclassical results.

1. P. Siegl: The non-equivalence of pseudo-Hermiticity and presence of antilinear symmetry, Pramana - Journal of Physics, Vol. 73, No. 2, 279-287.
2. D. Krejcirík, P. Siegl and J. Zelezný: On the similarity of Sturm-Liouville operators with non-Hermitian boundary conditions to self-adjoint and normal operators, Complex Analysis and Operator Theory, to appear.
3. P. Siegl and D. Krejcirík: On the metric operator for the imaginary cubic oscillator, Physical Review D, to appear.

Corona conditions and symbols with a gap around zero

Convolution equations on a finite interval (which we can assume to be $[0,1]$) lead to the problem of factorizing matrix functions $G=\begin{bmatrix} e_{-1} & 0 \\ g & e_{1} \end{bmatrix}$, where $e_{\theta}(\xi)=e^{i\theta\xi}$, $\theta\in\mathbb{R}$, and $g\in L_{\infty}(\mathbb{R})$.
Here we consider $g$ of the form $g=a_{+}e_{\mu}+a_{-}e_{-\sigma}$ with $a_{\pm}\in H_{\infty}(\mathbb{C}^{\pm})$ and $\mu,\sigma>0$. Imposing some corona-type conditions on $a_{\pm}$, we show that solutions to the Riemann-Hilbert problem $Gh_{+}=h_{-}$, with $h_{\pm}\in (H_{\infty}(\mathbb{C}^{\pm}))^{2}$, can be determined explicitly, and conditions for invertibility of the Toeplitz operator with symbol $G$ in $(H_{p}^{+})^{2}$ can be derived from them.

Seminar organized in the context of the project PTDC/MAT/121837/2010.
# Integrate the cumulative normal distribution calculator?

Hello everyone, I am trying to calculate the price of an option, but I'm struggling with computing the cumulative normal distribution. Here is what I have so far:

S = 20
K = 25
r = 0.05
d = 0.03
s = .24
T = 3/12
C = S*Exp[-d*T] - K*Exp[-r*T]
d1 = (Log[S/K] + (r - d + 1/2*s^2)*T)/(s*Sqrt[T])
d2 = d1 - s*Sqrt[T]
Resolve[d1]
Resolve[d2]

Do you know how to do it? Thank you!
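In the Wolfram Language the missing piece is the normal CDF, `CDF[NormalDistribution[0, 1], d1]`, rather than `Resolve`; note also that the posted `C` line omits the two CDF factors of the Black-Scholes call formula, $C = S e^{-dT}\Phi(d_1) - K e^{-rT}\Phi(d_2)$. As a cross-check, here is a sketch in Python of the standard Black-Scholes call with the poster's numbers (the helper names `norm_cdf` and `bs_call` are introduced here for illustration):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function: Phi(x) = (1 + erf(x/sqrt(2)))/2.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, d, s, T):
    # Black-Scholes price of a European call with continuous dividend yield d
    # and volatility s; returns the price together with d1 and d2.
    d1 = (log(S / K) + (r - d + 0.5 * s**2) * T) / (s * sqrt(T))
    d2 = d1 - s * sqrt(T)
    price = S * exp(-d * T) * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    return price, d1, d2

price, d1, d2 = bs_call(S=20, K=25, r=0.05, d=0.03, s=0.24, T=3 / 12)
print(d1, d2, price)   # d1 ≈ -1.758, d2 ≈ -1.878, price ≈ 0.04
```

The small price is expected: the call is far out of the money ($S=20$ against $K=25$) with only three months to expiry.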
# spectrum of Carleman matrix

bo198214 (Administrator), 02/22/2009, 01:18 PM (last modified 02/22/2009, 01:23 PM):

Hey Gottfried, did you ever think about the spectrum of the Carleman matrix of $\exp_b$? For finite matrices the spectrum is just the set of eigenvalues. However, for an infinite matrix, or more generally for a linear operator $A$ on a Banach space, the spectrum is defined as the set of all values $\lambda$ such that $A-\lambda I$ is not invertible. We saw that the eigenvalues of the truncated Carleman matrices of $\exp$ somehow diverge, so it would be very interesting to know the spectrum of the infinite matrix, as this also has consequences for taking non-integer powers of those matrices. Or is the spectrum just all of $\mathbb{C}$ because $A$ is not invertible itself?
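The divergence of the truncated eigenvalues mentioned in the post is easy to observe numerically. A sketch, assuming the convention that the Carleman matrix of $f$ has entries $M_{m,n} = [x^n]\, f(x)^m$ (the transpose convention is also in use), which for $f=\exp$ gives $M_{m,n} = m^n/n!$:

```python
import numpy as np
from math import factorial

def carleman_exp(N):
    # N x N truncation of the Carleman matrix of exp: M[m, n] = m^n / n!,
    # since exp(x)^m = e^{m x} has Taylor coefficients m^n / n!.
    return np.array([[float(m) ** n / factorial(n) for n in range(N)]
                     for m in range(N)])

# The spectral radius of the truncations grows rapidly with N -- the
# truncated eigenvalues do not settle down, as noted in the post.
radii = {N: max(abs(np.linalg.eigvals(carleman_exp(N)))) for N in (8, 16)}
print(radii)
```

The rapid growth is already visible in the trace, $\sum_m m^m/m! \sim \sum_m e^m/\sqrt{2\pi m}$, which forces at least one eigenvalue of the truncation to grow without bound.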
Search results for MSC category 47L25 (Operator spaces (= matricially normed spaces) [See also 46L07]). Results 1 - 2 of 2.

1. CJM 2013 (vol 65 pp. 1005)
Forrest, Brian; Miao, Tianxuan
Uniformly Continuous Functionals and M-Weakly Amenable Groups
Let $G$ be a locally compact group. Let $A_{M}(G)$ (resp. $A_{0}(G)$) denote the closure of $A(G)$, the Fourier algebra of $G$, in the space of bounded (resp. completely bounded) multipliers of $A(G)$. We call a locally compact group M-weakly amenable if $A_M(G)$ has a bounded approximate identity. We will show that when $G$ is M-weakly amenable, the algebras $A_{M}(G)$ and $A_{0}(G)$ have properties that are characteristic of the Fourier algebra of an amenable group. Along the way we show that the sets of topologically invariant means associated with these algebras have the same cardinality as those of the Fourier algebra.
Keywords: Fourier algebra, multipliers, weakly amenable, uniformly continuous functionals. Categories: 43A07, 43A22, 46J10, 47L25.

2. CJM 2007 (vol 59 pp. 966)
Forrest, Brian E.; Runde, Volker; Spronk, Nico
Operator Amenability of the Fourier Algebra in the cb-Multiplier Norm
Let $G$ be a locally compact group, and let $A_{\mathrm{cb}}(G)$ denote the closure of $A(G)$, the Fourier algebra of $G$, in the space of completely bounded multipliers of $A(G)$. If $G$ is a weakly amenable, discrete group such that $C^{*}(G)$ is residually finite-dimensional, we show that $A_{\mathrm{cb}}(G)$ is operator amenable. In particular, $A_{\mathrm{cb}}(\mathbb{F}_2)$ is operator amenable even though $\mathbb{F}_2$, the free group on two generators, is not an amenable group. Moreover, we show that if $G$ is a discrete group such that $A_{\mathrm{cb}}(G)$ is operator amenable, a closed ideal of $A(G)$ is weakly completely complemented in $A(G)$ if and only if it has an approximate identity bounded in the cb-multiplier norm.
Keywords: cb-multiplier norm, Fourier algebra, operator amenability, weak amenability. Categories: 43A22, 43A30, 46H25, 46J10, 46J40, 46L07, 47L25.
Planck's Postulate

Baby Collar, Dong people. China, Yunnan province, 20th century, 39 x 17 cm. From the collection of Tan Tim Qing, Kunming. Photograph by D Dunlop.

In 1900 Planck asserted that the energy of a particle is directly proportional to its frequency, in a fixed ratio called Planck's constant. Here is a plausibility argument for the postulate, based on understanding some particle P as a repetitive chain of events

$\Psi ^{\sf{P}} = \left( \sf{\Omega}_{1} , \sf{\Omega}_{2} , \sf{\Omega}_{3} \ \ldots \ \right)$

Let P be characterized by its angular frequency $\ \omega$ and mechanical energy $E$. Then we can specify a number called the action of P as

\begin{align} X \equiv \frac{ 2 \pi E }{ \omega } \end{align}

Also let each orbital bundle $\sf{\Omega}$ be composed of $N$ quarks as $\sf{\Omega} = \left\{ \sf{q}_{1}, \sf{q}_{2} \ \ldots \ \sf{q}_{\it{N}} \right\}$. Then the action associated with a typical quark is

\begin{align} \widetilde{X} \equiv \frac{ X }{ N } = \frac{ 2 \pi E }{ N \omega} \end{align}

Sensory interpretation: The angular frequency $\omega$ is proportional to the number of sensory bundles observed per day. So if the mechanical energy of P is equally shared between these bundles, then $X$ is like the energy in a typical bundle, and $\widetilde{X}$ is like the energy of a typical quark in a typical bundle.

Recall that the generic frequency $\nu$ of any particle is given by

\begin{align} \nu \equiv N \omega \, / \, 2 \pi \end{align}

So the action for some average quark in P can be written in terms of the frequency as

\begin{align} \widetilde{X} = \frac{E}{\nu} \end{align}

For terrestrial particles made of many quarks, statistical averaging guarantees that $\ \widetilde{X}$ has a definite value determined by the distribution of quarks on Earth. Moreover, this value is presumably constant, because the quark distribution is at least as stable as rock formations that change on geological time scales.
This constant is called Planck's constant, and is denoted by $h$. We can write

$\widetilde{X} ^{\, \sf{P}} \cong \ \widetilde{X} ^{\, \sf{Earth}} \equiv \ \it{h}$

Then if $N$ is large enough, $\widetilde{X} = E / \nu$ implies that

$E = h \nu$

This is the conventional statement of Planck's postulate. It is an experimental fact that the 'constant' is known to about one part in a billion, and it has apparently been unchanged over the last century. So we make vigorous use of Planck's postulate for particles that contain many quarks.

Sensory interpretation: If we assume that Planck's postulate applies to some particle P, then the mechanical energy of P is proportional to the daily flux of Anaxagorean sensations associated with P. And $\delta E / E$ is related to the signal-to-noise ratio in that stream of sensory consciousness.

Next step: time.

page revision: 310, last edited: 16 Nov 2018 15:55
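As a quick numerical illustration of the relation $E = h \nu$ above (an added sketch; the value of $h$ used is the exact one fixed by the 2019 SI redefinition):

```python
# Planck's constant in J*s (fixed exactly by the 2019 SI redefinition).
h = 6.62607015e-34

def photon_energy(nu):
    # Planck's postulate: energy is frequency times Planck's constant.
    return h * nu

# Green light near 5.45e14 Hz carries about 3.6e-19 J per quantum (~2.3 eV).
E = photon_energy(5.45e14)
print(E)
```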
# zbMATH — the first resource for mathematics Anisotropy and shape optimal design of shells by the polar-isogeometric approach. (English) Zbl 1439.49031 This paper is concerned with the optimal design of composite structures, in particular, of shells. The objective is to maximise the shell stiffness by optimising over both the shape, i.e., a domain $$\Omega \subset \mathbb{R}^3$$, and the elastic anisotropic properties that may vary pointwise on $$\Omega$$, i.e., an elastic tensor field $$\mathbb{E}:\Omega \to \bigotimes^4\mathbb{R}^3$$. The model adopted in this paper is Naghdi’s shell model, and the (static) state equation is given via a variational formulation, namely the “virtual work principle”. This principle states that the virtual work of applied loads is equal to the strain energy. An optimisation problem is formulated to find the argmin of an objective function constructed based on the strain energy, subject to various constraints arising from the geometry of the problem. One of the main features of this paper, apart from the aforementioned formulation of the optimal control problem, is the combination of the isogeometric approach for determining basis functions and the polar formalism for representing the components of elastic tensors. Several interesting numerical examples are studied thoroughly, and various open questions are discussed towards the end of the paper. 
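Schematically, the kind of problem described in the review can be written as follows. This is a generic compliance-type sketch for orientation only; the paper's exact objective, constraints and admissible sets differ:

```latex
% Generic sketch: minimise a strain-energy-based objective over the shape
% and the pointwise elastic tensor, subject to the variational state equation.
\begin{aligned}
&\min_{\Omega,\;\mathbb{E}} \; J(\Omega,\mathbb{E})
    \;=\; \int_{\Omega} \varepsilon(u)\,{:}\,\mathbb{E}\,{:}\,\varepsilon(u)\;\mathrm{d}x\\
&\text{subject to}\quad a_{\Omega,\mathbb{E}}(u,v)=\ell(v)\ \ \forall v\in V
    \quad\text{(virtual work principle)},\\
&\phantom{\text{subject to}\quad}\ \mathbb{E}(x)\in\mathcal{E}_{\mathrm{ad}},
    \qquad \Omega\in\mathcal{O}_{\mathrm{ad}}.
\end{aligned}
```

Here $u$ solves the state equation for the current design, $\mathcal{E}_{\mathrm{ad}}$ encodes the pointwise admissibility of the elastic tensor (in the paper, via polar invariants), and $\mathcal{O}_{\mathrm{ad}}$ the geometric constraints on the shape.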
Reviewer: Siran Li (Houston)

##### MSC:
49J53 Set-valued and variational analysis
49K99 Optimality conditions
74K25 Shells
49S05 Variational principles of physics (should also be assigned at least one other classification number in Section 49-XX)

Software: BIANCA
## anonymous one year ago

A shooting star forms a right triangle with the Earth and the Sun, as shown below: A scientist measures the angle x and the distance y between the Earth and the Sun. Using complete sentences, explain how the scientist can use only these two measurements to calculate the distance between the Earth and the shooting star.

1. anonymous
2. Nnesha: trig ratios?
3. anonymous: I was thinking to use tangent?
4. Nnesha: $\sin\theta = \frac{opposite}{hypotenuse} \quad \cos\theta = \frac{adjacent}{hypotenuse} \quad \tan\theta = \frac{opposite}{adjacent}$ — that's what you need to find the distance
5. anonymous: you cannot find the distance. you don't know the values of x and y.
6. Nnesha: to find the length of a missing side you should know one angle and one side
7. Nnesha: the question is how the scientist *can* use only these two measurements, so we don't have to know the values of x and y
8. Nnesha: so which ratio would you use to find the distance between the Earth and the shooting star?
9. anonymous: sinθ
10. Nnesha: ..
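One plausible reading of the (missing) figure is that the right angle sits at the Earth, so the Earth-to-star leg is opposite the angle x measured at the Sun, giving tan(x) = distance / y. A minimal Python sketch under that assumption (the function name and the geometry are my own illustration, not from the thread):

```python
import math

def distance_to_star(x_degrees, y):
    """Distance from the Earth to the shooting star, ASSUMING the right
    angle of the triangle is at the Earth and x is measured at the Sun,
    so tan(x) = (Earth-to-star distance) / (Earth-to-Sun distance y)."""
    return y * math.tan(math.radians(x_degrees))

# With x = 45 degrees the star is exactly as far from the Earth as the Sun is.
print(distance_to_star(45, 1.0))
```

If the figure places the right angle elsewhere, a different ratio (sine or cosine) applies; the point of the exercise is that one angle plus one side determines the missing side.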
I'm a beginner with OpenGL so please excuse my ignorance. I've managed to create a skybox (with a texture-mapped cube) and would like to rotate the view (eye) around the center of the box, so I can admire the view :). By changing the X/Y coordinates of eye/camera in GLM.LookAt() I'm able to see most of the faces of the skybox, but I'm not able to see the top and bottom faces, though I'm sure that they are rendered. In any case, what's the best way to rotate around the skybox? What I want to do is exactly shown in this video. Thanks! EDIT: Sorry for the lack of relevant information.

1. I'm using OpenGL version 3.3.
2. Not using immediate mode. Everything is via shaders and VBO/VAO.
3. I have already set up the projection, view and model matrices, which I pass to the vertex shader, and obviously have the texture loading also working, as I can render the skybox. At the moment I've set the projection and model matrices to identity for simplicity.
4. I'm deriving the view matrix from GLM.LookAt(), so initially I thought that to rotate around the view box, I'd just have to change the viewMatrix.
5. I'm working in C# using SharpGL.
6. To draw the skybox, I've set up a VAO to draw a cube with vertices in the range (-1,-1,-1) and (1,1,1). Then I load 6 textures, one for each cube face, to 6 texture units. During the rendering I draw the cube and update the texture for each face using a sampler variable. All of this seems to be working OK. I verified this by scaling down the size of the cube and checking if all 6 faces were being rendered properly by rotating the ModelMatrix.

I suspect one problem is that I'm not positioning the camera in the center of the cube. At the moment it seems to be viewing the cube from outside. I'm still trying to grasp the usage of GLM.LookAt(). This is how I'm rendering the skybox at the moment. And I can see the front face of the cube, but how do I go about placing the camera in the center of the box and rotating it?
viewMatrix = mat4.identity();
projectionMatrix = mat4.identity();
modelMatrix = mat4.identity();
viewMatrix = glm.lookAt(new vec3(0f, 0f, 1f), new vec3(0f, 0f, 0f), new vec3(0.0f, 1.0f, 0.0f));
skyBox.renderSkybox(GL);

EDIT #2: I managed to figure out my error. I was trying to look at the top and bottom faces by doing

viewMatrix = glm.lookAt(new vec3(0f, -1f, 0f), new vec3(0f, 0f, 0f), new vec3(0.0f, 1f, 0f));

and

viewMatrix = glm.lookAt(new vec3(0f, 1f, 0f), new vec3(0f, 0f, 0f), new vec3(0.0f, 1f, 0f));

which was giving me a blank screen. The reason being that (0,1,0) is collinear with my up-vector, and LookAt() couldn't generate the viewMatrix in this case. Changing the up-vector makes it OK!

• There are many ways to do that; we're unable to know what's the best for you because you provide so little information. – Alexandre Vaillancourt Apr 10 '15 at 17:23
• For a start you could include what version of OpenGL you are using, on what framework, and how you currently draw your skybox. – akaltar Apr 10 '15 at 17:25
• Sorry for the lack of information. I've edited the post for more details. Could you have a look at it? Please let me know if you need more information. Thanks a lot! – Sredni Apr 11 '15 at 4:50
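The failure mode in EDIT #2 can be reproduced without any graphics library. lookAt-style functions (gluLookAt, GLM's lookAt) build the camera's right vector as the cross product of the viewing direction and the up vector; when the two are collinear that cross product is the zero vector and cannot be normalized, so no valid view matrix exists. A small Python sketch of just that step (illustrative only, not SharpGL/GLM code):

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def right_vector(eye, center, up):
    """The 'right' basis vector a lookAt implementation would normalize."""
    forward = sub(center, eye)
    return cross(forward, up)

# Camera directly above the origin, up = +Y: forward and up are collinear,
# so the cross product degenerates to the zero vector -> no view matrix.
print(right_vector((0, 1, 0), (0, 0, 0), (0, 1, 0)))   # (0, 0, 0)

# Swapping the up vector to +Z fixes it: the right vector is well defined.
print(right_vector((0, 1, 0), (0, 0, 0), (0, 0, 1)))   # (-1, 0, 0)
```

This is why changing the up-vector in the edited question makes the top and bottom views work.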
# Finding the parameters of a (possibly) Rayleigh distributed data set

An object is tracked in an experiment I ran. The object's velocities over 100 time steps are recorded. A model for this object says the velocities should be Rayleigh distributed.

Question 1: For the given data, how can one estimate the parameters of the Rayleigh distribution?

Question 2: After estimating the parameters, how can we calculate the maximum percentage difference between the observed data and the estimated model?

The data:

4.1278 3 4.3681 3.4993 3.5099 3.4569 3.7245 3.1744 2.7477 4.1866 3.1257 4.6310 4.4171 3.8297 3.9558 4.4083 4.2123 4.4593 4.3426 3.7140 3.2105 2.5694 2.0113 1.9838 2.7958 2.4362 2.4558 1.0497 0.5789 0.4487 1.4811 1.5465 1.2266 1.628 1.9165 1.3497 2.4598 1.706 2.3716 2.3808 3.9231 2.2745 1.5667 1.7846 2.5792 1.2635 1.7394 1.9063 2.1777 2.8 1.8492 3.7035 2.7328 3.2436 2.7707 2.4044 2.5395 2.7072 2.0354 3.6457 2.8088 2.5456 2.4235 1.6 2.4023 2.9129 4.6793 3.7958 2.8100 1.9567 1.649 1.3688 1.1426 1.2340 0.37162 0.81565 1.5374 1.1495 1.6362 2.0497 1.6581 0.32883 0.4272 0.89617 1.4784 1.2486 1.783 3.52 4.486 3.1934 3.2815 1.2387 1.5428 1.7167 1.4661 1.6412 3.4185 2.3761 3.2665 3.6833

A hack that I tried was to find the mean and standard deviation assuming our data was normal. Knowing that the scale is not the same thing as the standard deviation squared, I nevertheless used the standard deviation squared as the "scale". Then, I ran the K-S test with two samples: (1) the observed data, and (2) the expected values of a Rayleigh distribution with that mean and scale (incorrectly taken as the standard deviation squared) to find the D-max. However, while the D-max is acceptable, the p-value is low. So, I hope that you all can help me find a statistically robust method to find the scale. Now, it is likely that my data will result in a good fit. However, even if it's just out of academic interest, can we try and solve this problem anyway?
Not critical, but if any of you have an approach to answer the aforementioned questions in Python, that would be of interest to me as well. P.S.: Just to be sure I wasn't violating any rule, I went through the overview of the site , and it seems like my question is within the rules of the site. However, if there is something wrong, please assist me by making appropriate suggestions. • For Question 2, is the QQ-plot a valid approach? Jun 30 '18 at 10:02 • 1. The question begins in a rather text-book-exercise-like fashion. Is this for a class? An assignment, perhaps? 2. If you want to add to your question, edit your question, but you should explain both what kind of QQ plot you're referring to and how you plan to use it to answer the question. 3. Note that there are many possible ways to estimate parameters. Is there some kind of estimator you have in mind? 4. The Rayleigh distribution probably won't be a very good fit to these data. Jul 1 '18 at 8:09 • @Glen_b : Thanks for your reply. A2: For data that's not normally distributed, there aren't different kinds of QQ-plots, right ( ref1 , ref2 ) ? So, if we want to examine how close our data comes to a Rayleigh distribution we would simply use the QQ-plots, correct? Jul 1 '18 at 11:03 • 1. It wasn't clear that you were plotting against Rayleigh scores; for all I know you could have been plotting against normal scores (we've had questions where people have done things like that), or you might have been doing something else, like a QQ plot of a pair of samples; it's best to make sure exactly what you did and how. 2. Note that the tour doesn't cover what's on topic, for example. 3. Note that the KS test isn't suitable if you have estimated parameters. 4. Would you please address the remaining questions above? Jul 1 '18 at 11:40 • @Glen_b , thank you for your input about the KS test and the site. I should have addressed everything you requested for, but if something remains, please let me know. 
To clarify: the data comes from an experiment I conducted where I track a moving object. Jul 1 '18 at 12:13

You can use maximum likelihood estimation to estimate the scale parameter $\sigma$ of the Rayleigh distribution. The maximum likelihood estimator of $\sigma$ is: $$\hat{\sigma}=\sqrt{\frac{1}{2n}\sum_{i=1}^{n}x_{i}^2}$$ This estimator is biased, however (see Siddiqui 1964). An unbiased estimator is given by: $$\hat{\sigma}=\sqrt{\frac{1}{2n}\sum_{i=1}^{n}x_{i}^2}\cdot\frac{\Gamma(n)\sqrt{n}}{\Gamma(n + 1/2)}$$ where $\Gamma$ denotes the gamma function.

Here is how to estimate the scale parameter in R using maximum likelihood:

library(fitdistrplus)
library(extraDistr) # implements the Rayleigh distribution

# The data
x <- c(4.1278, 3, 4.3681, 3.4994, 3.5099, 3.4568, 3.7245, 3.1743, 2.7477, 4.1865, 3.1257, 4.6311, 4.4171, 3.8296, 3.9558, 4.4084, 4.2123, 4.4592, 4.3426, 3.7141, 3.2105, 2.5693, 2.0113, 1.9839, 2.7958, 2.4361, 2.4558, 1.0498, 0.57897, 0.4486, 1.4811, 1.5466, 1.2266, 1.627, 1.9165, 1.3498, 2.4598, 1.706, 2.3715, 2.3808, 3.9232, 2.2745, 1.5666, 1.7846, 2.5793, 1.2635, 1.7393, 1.9063, 2.1778, 2.8, 1.8491, 3.7035, 2.7329, 3.2436, 2.7708, 2.4044, 2.5396, 2.7072, 2.0353, 3.6457, 2.8089, 2.5456, 2.4234, 1.6, 2.4024, 2.9129, 4.6792, 3.7958, 2.8101, 1.9567, 1.649, 1.3687, 1.1426, 1.2341, 0.37162, 0.81565, 1.5374, 1.1496, 1.6362, 2.0498, 1.6581, 0.32883, 0.4273, 0.89617, 1.4783, 1.2486, 1.783, 3.52, 4.486, 3.1935, 3.2815, 1.2388, 1.5428, 1.7168, 1.4661, 1.6411, 3.4185, 2.3762, 3.2665, 3.6832)

fit <- fitdist(x, "rayleigh", start = list(sigma = 1))
summary(fit) # print estimated parameters

Fitting of the distribution ' rayleigh ' by maximum likelihood
Parameters :
      estimate Std. Error
sigma 1.919764 0.09598814
Loglikelihood: -152.7544 AIC: 307.5087 BIC: 310.1139

plot(fit) # inspect fit

The estimated scale parameter is $\hat{\sigma}=1.918$.
Inspecting the Q-Q plot of the fit, I'd say that the Rayleigh distribution is probably a suboptimal fit to these data (there are deviations at the upper and lower end of the Q-Q plot). The unbiased maximum likelihood estimator is 1.922. Siddiqui, M. M. (1964) "Statistical inference for Rayleigh distributions", The Journal of Research of the National Bureau of Standards, Sec. D: Radio Science, Vol. 68D, No. 9, p. 1007 • @COOLSerdash thanks a ton for your help. Providing a reference is particularly useful. I need time to digest and understand your proposed approach, after which I will appropriately vote and provide comments if necessary. Jul 1 '18 at 12:53 • Feel free to delete if this comment ought to be in the Meta site. I wasn't able to find resources to my problem that I could understand. Even the provided Siddiqui reference is not easy to understand as I don't have a background in stats. There are plenty of straightforward resources for normally distributed data, but I wasn't able to find much for Rayleigh. Further, my "hack" approach was clearly wrong. I believe this is a question that could benefit not just me. Lastly, framing a question such that it clearly expresses the objectives and satisfies the not-homework criteria, isn't trivial. Jul 1 '18 at 13:21 • @COOLSerdash you have both $N$ and $n$ in your formula for the unbiased version of the estimator. You probably want to make them all $n$'s. Jul 1 '18 at 13:26
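Since the question also asks for a Python approach, here is a small sketch of the same two estimators using only the standard library (scipy.stats.rayleigh.fit(data, floc=0) should give essentially the same maximum likelihood estimate, with plotting tools on top):

```python
import math

def rayleigh_scale_mle(x):
    """Maximum likelihood estimate of the Rayleigh scale parameter
    (biased): sigma_hat = sqrt(sum(x_i^2) / (2n))."""
    n = len(x)
    return math.sqrt(sum(v * v for v in x) / (2 * n))

def rayleigh_scale_unbiased(x):
    """Bias-corrected estimate (Siddiqui 1964). lgamma is used instead of
    gamma so the Gamma(n)/Gamma(n + 1/2) ratio does not overflow for large n."""
    n = len(x)
    correction = math.exp(math.lgamma(n) - math.lgamma(n + 0.5)) * math.sqrt(n)
    return rayleigh_scale_mle(x) * correction

# Small worked example: sum of squares = 18, n = 4, so the MLE is
# sqrt(18 / 8) = 1.5; the bias-corrected estimate is slightly larger.
sample = [1.0, 2.0, 2.0, 3.0]
print(rayleigh_scale_mle(sample))       # 1.5
print(rayleigh_scale_unbiased(sample))
```

Running the first function on the full data set above reproduces the R estimate to within rounding.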
# Understanding and Using Common Similarity Measures for Text Analysis

This lesson introduces three common measures for determining how similar texts are to one another: city block distance, Euclidean distance, and cosine distance. You will learn the general principles behind similarity, the different advantages of these measures, and how to calculate each of them using the SciPy Python library.

### edited by

• Brandon Walsh

### reviewed by

• Taylor Arnold
• Sarah Connell

2020-05-05

#### difficulty

Medium

https://doi.org/10.46430/phen0089

# Overview

The first question many researchers want to ask after collecting data is how similar one data sample—a text, a person, an event—is to another. It’s a very common question for humanists and critics of all kinds: given what you know about two things, how alike or how different are they? Non-computational assessments of similarity and difference form the basis of a lot of critical activity. The genre of a text, for example, can be determined by assessing that text’s similarity to other texts already known to be part of the genre. And conversely, knowing that a certain text is very different from others in an established genre might open up productive new avenues for criticism. An object of study’s uniqueness or sameness relative to another object or to a group can be a crucial factor in the scholarly practices of categorization and critical analysis. Statistical measures of similarity allow scholars to think computationally about how alike or different their objects of study may be, and these measures are the building blocks of many other clustering and classification techniques.
In text analysis, the similarity of two texts can be assessed in its most basic form by representing each text as a series of word counts and calculating distance using those word counts as features. This tutorial will focus on measuring distance among texts by describing the advantages and disadvantages of three of the most common distance measures: city block or “Manhattan” distance, Euclidean distance, and cosine distance. In this lesson, you will learn when to use one measure over another and how to calculate these distances using the SciPy library in Python. # Preparation ## Suggested Prior Skills Though this lesson is primarily geared toward understanding the underlying principles of these calculations, it does assume some familiarity with the Python programming language. Code for this tutorial is written in Python3.6 and uses the Pandas (v0.25.3) and SciPy (v1.3.3) libraries to calculate distances, though it’s possible to calculate these same distances using other libraries and other programming languages. For the text processing tasks, you will also use scikit-learn (v0.21.2). I recommend you work through the Programming Historian’s introductory Python lessons if you are not already familiar with Python. ## Installation and Setup You will need to install Python3 as well as the SciPy, Pandas, and scikit-learn libraries, which are all available through the Anaconda Distribution. For more information about installing Anaconda, see their full documentation. ## Lesson Dataset You can run our three common distance measures on almost any data set that uses numerical features to describe specific data samples (more on that in a moment). For the purposes of this tutorial, you will use a selection of 142 texts, all published in 1666, from the EarlyPrint project. This project (of which I am a collaborator) has linguistically-annotated and corrected EEBO-TCP texts. Begin by downloading the zipped set of text files. 
These texts were created from the XML files provided by the EarlyPrint project, and they’ve been converted to plaintext since that is the format readers of this lesson are most likely to be working with. If you’d like to know more about how the XML documents were transformed into plaintext, you can consult this tutorial on the EarlyPrint site, which explains the EarlyPrint XML schema and introduces how to work with those files in Python. # What is Similarity or Distance? Similarity is a large umbrella term that covers a wide range of scores and measures for assessing the differences among various kinds of data. In fact, similarity refers to much more than one could cover in a single tutorial. For this lesson, you’ll learn one general type of similarity score that is particularly relevant to DH researchers in text analysis. The class of similarity covered in this lesson takes the word-based features of a set of documents and measures the similarity among documents based on their distance from one another in Cartesian space. Specifically, this method determines differences between texts from their word counts. ## Samples and Features Measuring distance or similarity first requires understanding your objects of study as samples and the parts of those objects you are measuring as features. For text analysis, samples are usually texts, but these are abstract categories. Samples and features can be anything. A sample could be a bird species, for example, and a measured feature of that sample could be average wingspan. Though you can have as many samples and as many features as you want, you’d eventually come up against limits in computing power. The mathematical principles will work regardless of the number of features and samples you are dealing with. We’ll begin with an example. Let’s say you have two texts, the first sentences of Jane Austen’s Pride and Prejudice and Edith Wharton’s Ethan Frome, respectively. You can label your two texts austen and wharton. 
In Python, they would look like the following:

austen = "It is a truth universally acknowledged, that a single man in possession of a good fortune must be in want of a wife."
wharton = "I had the story, bit by bit, from various people, and, as generally happens in such cases, each time it was a different story."

In this example, austen and wharton are your two data samples, the units of information about which you’d like to know more. These two samples have lots of features, attributes of the data samples that we can measure and represent numerically: for example the number of words in each sentence, the number of characters, the number of nouns in each sentence, or the frequency of certain vowel sounds. The features you choose will depend on the nature of your research question. For this example, you will use individual word counts as features. Consider the frequencies of the word “a” and the word “in” in your two samples. The following table illustrates the frequency of these words:

|         | a | in |
|---------|---|----|
| austen  | 4 | 2  |
| wharton | 1 | 1  |

Later in this lesson, you’ll count the words in the EarlyPrint texts to create a new data set. Like this very small sample data set, the new data will include columns (features) that are individual words and rows (samples) for specific texts. The main difference is that there will be columns for 1000 words instead of 2. As you’re about to see, despite this difference, distance measures are available via the same calculations.

## The Cartesian Coordinate System

Once you’ve chosen samples and measured some features of those samples, you can represent that data in a wide variety of ways. One of the oldest and most common is the Cartesian coordinate system, which you may have learned about in introductory algebra and geometry. This system allows you to represent numerical features as coordinates, typically in 2-dimensional space.
The Austen-Wharton data table could be represented as the following graph: On this graph, the austen and wharton samples are each represented as data points along two axes, or dimensions. The horizontal x-axis represents the values for the word “in” and the vertical y-axis represents the values for the word “a.” Though it may look simple, this representation allows us to imagine a spatial relationship between data points based on their features, and this spatial relationship, what we’re calling similarity or distance, can tell you something about which samples are alike. Here’s where it gets cool. You can represent two features as two dimensions and visualize your samples using the Cartesian coordinate system. Naturally you could also visualize our samples in three dimensions if you had three features. If you had four or more features, you couldn’t visualize the samples anymore: for how could you create a four-dimensional graph? But it doesn’t matter, because no matter how many features or dimensions you have, you can still calculate distance in the same way. If you’re working with word frequencies, as we are here, you can have as many features/dimensions as you do words in a text. For the rest of this lesson, the examples of distance measures will use two dimensions, but when you calculate distance with Python later in this tutorial, you’ll calculate over thousands of dimensions using the same equations. ## Distance and Similarity Now you’ve taken your samples and rendered them as points in space. As a way of understanding how these two points are related to each other, you might ask: How far apart or close together are these two points? The answer to “How far apart are these points?” is their distance and the answer to “How close together are these points?” is their similarity. 
In addition to this distinction, similarity, as I mentioned previously, can refer to a larger category of similarity measures, whereas distance usually refers to a more narrow category that measures difference in Cartesian space. It may seem redundant or confusing to use both terms, but in text analysis these concepts are usually reciprocally related (i.e., distance is merely the opposite of similarity and vice versa). I bring them both up for a simple reason: out in the world you are likely to encounter both terms, sometimes used more or less interchangeably. When you are measuring by distance, the most closely related points will have the lowest distance, but when you are measuring by similarity, the most closely related points will have the highest similarity. For the most part you will encounter distance rather than similarity, but this explanation may come in handy if you encounter a program or algorithm that outputs similarity instead. We will revisit this distinction in the Cosine Similarity and Cosine Distance section. You might think that calculating distance is as simple as drawing a line between these two points and calculating its length. And it can be! But in fact there are many ways to calculate the distance between two points in Cartesian space, and different distance measures are useful for different purposes. For instance, the SciPy pdist function that you’ll use later on lists 22 distinct measures for distance. In this tutorial, you’ll learn about three of the most common distance measures: city block distance, Euclidean distance, and cosine distance. # Three Types of Distance/Similarity ## City Block (Manhattan) Distance The simplest way of calculating the distance between two points is, perhaps surprisingly, not to go in a straight line, but to go horizontally and then vertically until you get from one point to the other. This is simpler because it only requires you to subtract rather than do more complicated calculations. 
For example, your wharton sample is at point (1,1): its x-coordinate is 1 (its value for “in”), and its y-coordinate is 1 (its value for “a”). Your austen sample is at point (2,4): its x-coordinate is 2, and its y-coordinate is 4. We want to calculate distance by looking at the differences between the x- and y-coordinates. The dotted line in the following graph shows what you’re measuring: You can see here why it’s called city block distance, or “Manhattan distance” if you prefer a more New York-centric pun. “Block” refers to the grid-like layout of North American city streets, especially those found in New York City. The graphs of city block distance, like the previous one, resemble those grid layouts. On this graph it’s easy to tell that the length of the horizontal line is 1 and the length of the vertical line is 3, which means the city block distance is 4. But how would you abstract this measure? As I alluded to above, city block distance is the sum of the differences between the x- and y-coordinates. So for two points with any values (let’s call them $(x_1, y_1)$ and $(x_2, y_2)$), the city block distance is calculated using the following expression: $|x_2 - x_1| + |y_2 - y_1|$ (The vertical bars you see are for absolute value; they ensure that even if $x_1$ is greater than $x_2$, your values are still positive.) Try it out with your two points (1,1) and (2,4): $|2 - 1| + |4 - 1| = |1| + |3| = 1 + 3 = 4$ And that’s it! You could add a third coordinate, call it “z,” or as many additional dimensions as you like for each point, and still calculate city block distance fairly easily. Because city block distance is easy to understand and calculate, it’s a good one to start with as you learn the general principles. But it’s less useful for text analysis than the other two distance measures we’re covering. And in most cases, you’re likely to get better results using the next distance measure, Euclidean distance. 
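The city block formula translates directly into code. Here is a minimal version in plain Python (SciPy provides the same measure as scipy.spatial.distance.cityblock):

```python
def city_block(p, q):
    # Sum the absolute differences along every dimension.
    return sum(abs(a - b) for a, b in zip(p, q))

print(city_block((1, 1), (2, 4)))        # 4, matching the hand calculation
print(city_block((1, 1, 5), (2, 4, 7)))  # works unchanged in three dimensions: 6
```

Because the function simply loops over coordinate pairs, it handles any number of dimensions, which is exactly the property you will rely on when each text has 1000 word-count features.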
## Euclidean Distance At this point I can imagine what you’re thinking: Why should you care about “going around the block”? The shortest distance between two points is a straight line, after all. Euclidean distance, named for the geometric system attributed to the Greek mathematician Euclid, will allow you to measure the straight line. Look at the graph again, but this time with a line directly between the two points: You’ll notice I left in the city block lines. If we want to measure the distance of the line (“c”) between our two points, you can think about that line as the hypotenuse of a right triangle, where the other two sides (“a” and “b”) are the city block lines from our last distance measurement. You calculate the length of the line “c” in terms of “a” and “b” using the Pythagorean theorem: $a^2 + b^2 = c^2$ or: $c = \sqrt[]{a^2 + b^2}$ We know that the values of a and b are the differences between x- and y-coordinates, so the full formula for Euclidean distance can be written as the following expression: $\sqrt[]{(x_2 - x_1)^2 + (y_2 - y_1)^2}$ If you put the austen and wharton points into this formula, you get: $\sqrt[]{(2 - 1)^2 + (4 - 1)^2} = \sqrt[]{1^2 + 3^2} = \sqrt[]{1 + 9} = \sqrt[]{10} = 3.16$1 The Euclidean distance result is, as you might expect, a little less than the city block distance. Each measure tells you something about how the two points are related, but each also tells you something different about that relationship because what “distance” means for each measure is different. One isn’t inherently better than the other, but it’s important to know that distance isn’t a set fact: the distance between two points can be quite different depending on how you define distance in the first place. ## Cosine Similarity and Cosine Distance To emphasize this point, the final similarity/distance measure in this lesson, cosine similarity, is very different from the other two. 
This measure is more concerned with the orientation of the two points in space than it is with their exact distance from one another. If you draw a line from the origin—the point on the graph at the coordinates (0, 0)—to each point, you can identify an angle, $\theta$, between the two points, as in the following graph: The cosine similarity between the two points is simply the cosine of this angle. Cosine is a trigonometric function that, in this case, helps you describe the orientation of two points. If two points were 90 degrees apart, that is if they were on the x-axis and y-axis of this graph as far away from each other as they can be in this graph quadrant, their cosine similarity would be zero, because $cos(90) = 0$. If two points were 0 degrees apart, that is if they existed along the same line, their cosine similarity would be 1, because $cos(0) = 1$. Cosine provides you with a ready-made scale for similarity. Points that have the same orientation have a similarity of 1, the highest possible. Points that have 90 degree orientations have a similarity of 0, the lowest possible.2 Any other value will be somewhere in between. You needn’t worry very much about how to calculate cosine similarity algebraically. Any programming environment will calculate it for you. But it’s possible to determine the cosine similarity by beginning only with the coordinates of two points, $(x_1, y_1)$ and $(x_2, y_2)$: $cos(\theta) = (x_1x_2 + y_1y_2)/(\sqrt[]{x_1^2 + y_1^2}\sqrt[]{x_2^2 + y_2^2})$ If you enter in your two austen and wharton coordinates, you get: $(1\times2 + 1\times4)/(\sqrt[]{1^2 + 1^2}\sqrt[]{2^2 + 4^2}) = 6/(\sqrt[]{2}\sqrt[]{20}) = 6/6.32 = 0.95$3 The cosine similarity of our austen sample to our wharton sample is quite high, almost 1. The result is borne out by looking at the graph, on which we can see that the angle $\theta$ is fairly small. Because the two points are closely oriented, their cosine similarity is high.
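The hand calculation above can be checked in a few lines of Python. SciPy exposes the distance form as scipy.spatial.distance.cosine, but the formula is short enough to write directly:

```python
import math

def cosine_similarity(p, q):
    # Dot product of the two vectors divided by the product of their lengths.
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    return dot / (norm_p * norm_q)

def cosine_distance(p, q):
    # Most libraries report distance = 1 - similarity.
    return 1 - cosine_similarity(p, q)

wharton = (1, 1)  # ("in", "a") counts
austen = (2, 4)
print(round(cosine_similarity(wharton, austen), 2))  # 0.95
print(round(cosine_distance(wharton, austen), 2))    # 0.05
```

The 0.95 similarity matches the worked example, and the 0.05 distance anticipates the conversion discussed next.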
To put it another way: according to the measures you’ve seen so far, these two texts are pretty similar to one another. But note that you’re dealing with similarity here and not distance. The highest value, 1, is reserved for the two points that are most close together, while the lowest value, 0, is reserved for the two points that are the least close together. This is the exact opposite of Euclidean distance, in which the lowest values describe the points closest together. To remedy this confusion, most programming environments calculate cosine distance by simply subtracting the cosine similarity from one. So cosine distance is simply $1 - cos(\theta)$. In your example, the cosine distance would be: $1 - 0.95 = 0.05$ This low cosine distance is more easily comparable to the Euclidean distance you calculated previously, but it tells you the same thing as the cosine similarity result: that the austen and wharton samples, when represented only by the number of times they each use the words “a” and “in,” are fairly similar to one another. # How To Know Which Distance Measure To Use These measures aren’t at all the same thing, and they yield quite different results. Yet they’re all types of distance, ways of describing the relationship between two data samples. This distinction illustrates the fact that, even at a very basic level, the decisions you make as an investigator can have an outsized effect on your results. In this case, the question you must ask is: How do I measure the relationship between two points? The answer to that question depends on the nature of the data you start with and what you’re trying to find out. As you saw in the previous section, city block distance and Euclidean distance are similar because they are both concerned with the lengths of lines between two points. This fact makes them more interchangeable. 
In most cases, Euclidean distance will be preferable over city block because it is more direct in its measurement of a straight line between two points. Cosine distance is another story. The choice between Euclidean and cosine distance is an important one, especially when working with data derived from texts. I’ve already illustrated that cosine distance is only concerned with the orientation of two points and not with their exact placement. This means that cosine distance is much less affected by magnitude, or how large your numbers are. To illustrate this, say for example that your points are (1,2) and (2,4) (instead of the (1,1) and (2,4) you used in the last section). The internal relationship within the two sets of coordinates is the same: a ratio of 1:2. But the points aren’t identical: the second set of coordinates has twice the magnitude of the first. The Euclidean distance between these two points is: $\sqrt[]{(2 - 1)^2 + (4 - 2)^2} = \sqrt[]{1^2 + 2^2} = \sqrt[]{1 + 4} = \sqrt[]{5} = 2.24$ But their cosine similarity is: $(1\times2 + 2\times4)/(\sqrt[]{1^2 + 2^2}\sqrt[]{2^2 + 4^2}) = 10/(\sqrt[]{5}\sqrt[]{20}) = 10/\sqrt[]{100} = 10/10 = 1$ So their cosine distance is: $1 - 1 = 0$ Where Euclidean distance is concerned, these points are only a little distant from one another. In terms of cosine distance, however, these two points are not at all distant. This is because Euclidean distance accounts for magnitude while cosine distance does not. Another way of putting this is that cosine distance measures whether the relationship among your various features is the same, regardless of how much of any one thing is present. This fact would be true if one of your points was (1,2) and the other was (300,600) as well. Cosine distance is sometimes very good for text-related data. Often texts are of very different lengths.
If words have vastly different counts but exist in the text in roughly the same proportion, cosine distance won’t worry about the raw counts, only their proportional relationships to each other. Otherwise, with Euclidean distance you might wind up saying something like, “All the long texts are similar, and all the short texts are similar.” With text, it’s often better to use the distance measure that disregards differences in magnitude and focuses on the proportions of features. However, if you know your sample texts are all roughly the same size (or if you have subdivided all your texts into equally sized “chunks,” a common pre-processing step), you might prefer to account for relatively small differences in magnitude by using Euclidean distance. For non-text data, where the size of the sample is unlikely to affect the features, Euclidean distance is sometimes preferred.

There’s no one answer for which distance measure to choose. As you’ve learned, it’s highly dependent on your data and your research question. That’s why it’s important to know your data well before you start out. If you’re stacking other methods—like clustering or a machine learning algorithm—on top of distance measures, you’ll certainly want to understand the distinction between distance measures and how choosing one over the other may affect your results down the line.

# Calculating Distance in Python

Now that you understand city block, Euclidean, and cosine distance, you’re ready to calculate these measures using Python. For your example data, you’ll use the plain text files of EarlyPrint texts published in 1666 and the metadata for those files that you downloaded earlier. First, unzip the text files and place the 1666_texts/ directory inside your working folder (i.e. the 1666_texts/ directory will need to be in the same folder as similarity.py for this to work).
## Counting Words

To begin, you’ll need to import the libraries (Pandas, SciPy, and scikit-learn) that you installed in the Setup and Installation section, as well as a built-in library called glob. Create a new blank file in your text editor of choice and name it similarity.py. (You can also download my complete version of this script.) At the top of the file, type:

```python
import glob
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from scipy.spatial.distance import pdist, squareform
```

The scikit-learn and SciPy libraries are both very large, so the from _____ import _____ syntax allows you to import only the functions you need. From this point, scikit-learn’s CountVectorizer class will handle a lot of the work for you, including opening and reading the text files and counting all the words in each text. You’ll first create an instance of the CountVectorizer class with all of the parameters you choose, and then run that model on your texts. Scikit-learn gives you many parameters to work with, but you’ll need three:

1. Set input to "filename" to tell CountVectorizer to accept a list of filenames to open and read.
2. Set max_features to 1000 to capture only the 1000 most frequent words. Otherwise, you’ll wind up with hundreds of thousands of features that will make your calculations slower without adding very much additional accuracy.
3. Set max_df to 0.7. DF stands for document frequency. This parameter tells CountVectorizer that you’d like to eliminate words that appear in more than 70% of the documents in the corpus. This setting will eliminate the most common words (articles, pronouns, prepositions, etc.) without the need for a stop words list.

You can use the glob library you imported to create the list of file names that CountVectorizer needs.
To set the three scikit-learn parameters and run CountVectorizer, type:

```python
# Use the glob library to create a list of file names
filenames = glob.glob("1666_texts/*.txt")
# Parse those filenames to create a list of file keys (ID numbers)
# You'll use these later on.
filekeys = [f.split('/')[-1].split('.')[0] for f in filenames]

# Create a CountVectorizer instance with the parameters you need
vectorizer = CountVectorizer(input="filename", max_features=1000, max_df=0.7)
# Run the vectorizer on your list of filenames to create your wordcounts
# Use the toarray() function so that SciPy will accept the results
wordcounts = vectorizer.fit_transform(filenames).toarray()
```

And that’s it! You’ve now counted every word in all 142 texts in the test corpus. To interpret the results, you’ll also need to open the metadata file as a Pandas DataFrame. Add the following to the next line of your file:

```python
metadata = pd.read_csv("1666_metadata.csv", index_col="TCP ID")
```

Adding the index_col="TCP ID" setting will ensure that the index labels for your metadata table are the same as the file keys you saved above. Now you’re ready to begin calculating distances.

## Calculating Distance using SciPy

Calculating distance in SciPy comprises two steps: first you calculate the distances, and then you must expand the results into a squareform matrix so that they’re easier to read and process. It’s called squareform because the columns and rows are the same, so the matrix is symmetrical, or square.4 The distance function in SciPy is called pdist, and the squareform function is called squareform. Euclidean distance is the default output of pdist, so you’ll use that one first. To calculate distances, call the pdist function on your wordcounts array by typing pdist(wordcounts). To get the squareform results, you can wrap that entire call in the squareform function: squareform(pdist(wordcounts)). To make this more readable, you’ll want to put it all into a Pandas DataFrame.
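If the output of pdist looks opaque at first, a three-point toy example (coordinates invented for illustration) shows what pdist returns and what squareform does with it:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])  # a 3-4-5 triangle

condensed = pdist(pts)  # the three pairwise distances: [3, 4, 5]
matrix = squareform(condensed)  # 3x3 symmetric matrix with a zero diagonal

print(condensed)
print(matrix)
```

The condensed form lists each pair once; squareform expands it into the full symmetric table, with zeroes on the diagonal where each point meets itself.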
On the next line of your file, type:

```python
euclidean_distances = pd.DataFrame(squareform(pdist(wordcounts)), index=filekeys, columns=filekeys)
print(euclidean_distances)
```

You need to declare that the index variable for the rows and the column variable will both refer back to the filekeys you saved when you originally read the files. Stop now, save this file, and run it from the command line by navigating to the appropriate directory in your Terminal application and typing python3 similarity.py. The script will print a matrix of the Euclidean distances between every text in the dataset!

In this “matrix,” which is really just a table of numbers, the rows and columns are the same. Each row represents a single EarlyPrint document, and the columns represent the exact same documents. The value in every cell is the distance between the text from that row and the text from that column. This configuration creates a diagonal line of zeroes through the center of your matrix: where every text is compared to itself, the distance value is zero.

EarlyPrint documents are corrected and annotated versions of documents from the Early English Books Online–Text Creation Partnership, which includes a document for almost every book printed in England between 1473 and 1700. This sample dataset includes all the texts published in 1666—the ones that are currently publicly available (the rest will be available after January 2021). What your matrix is showing you, then, is the relationships among books printed in England in 1666. This includes texts from a variety of different genres on all sorts of topics: religious texts, political treatises, and literary works, to name a few. One thing a researcher might want to know right away with a text corpus as thematically diverse as this one is: is there a computational way to determine the kinds of similarity that a reader cares about? When you calculate the distances among such a wide variety of texts, will the results “make sense” to an expert reader?
You’ll try to answer these questions in the exercise that follows. There’s a lot you could do with this table of distances beyond the kind of sorting illustrated in this example. You could use it as an input for an unsupervised clustering of the texts into groups, or you could employ the same measures to drive a machine learning model. If you wanted to understand these results better, you could create a heatmap of this table itself, either in Python or by exporting this table as a CSV and visualizing it elsewhere.

As an example, let’s take a look at the five texts that are the most similar to Robert Boyle’s Hydrostatical paradoxes made out by new experiments, which is part of this dataset under the ID number A28989. The book is a scientific treatise and one of two works Boyle published in 1666. By comparing distances, you could potentially find books that are either thematically or structurally similar to Boyle’s: either scientific texts (rather than religious works, for instance) or texts that have similar prose sections (rather than poetry collections or plays, for instance). Let’s see what texts are similar to Boyle’s book according to their Euclidean distance. You can do this using Pandas’s nsmallest function. In your working file, remove the line that says print(euclidean_distances), and in its place type:

```python
top5_euclidean = euclidean_distances.nsmallest(6, 'A28989')['A28989'][1:]
print(top5_euclidean)
```

Why six instead of five? Because this is a symmetrical or square matrix, one of the possible results is always the same text. Since we know that any text’s distance to itself is zero, it will certainly come up in our results. We need five more in addition to that one, so six total. But you can use the slicing notation [1:] to remove that first redundant text.
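The nsmallest-plus-slicing idiom can be previewed on a toy 3×3 distance matrix (the labels and values below are invented for illustration):

```python
import pandas as pd

labels = ["a", "b", "c"]  # invented document IDs
dist = pd.DataFrame([[0.0, 2.0, 5.0],
                     [2.0, 0.0, 3.0],
                     [5.0, 3.0, 0.0]], index=labels, columns=labels)

# The two smallest values in column "a" are a's self-distance (0.0) and b (2.0);
# slicing with [1:] drops the redundant self-match.
nearest = dist.nsmallest(2, "a")["a"][1:]
print(nearest)  # a Series containing only b
```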
The results you get should look like the following:

```
A62436     988.557029
A43020     988.622274
A29017    1000.024000
A56390    1005.630151
A44061    1012.873141
```

Your results will contain only the Text Creation Partnership ID numbers, but you can use the metadata DataFrame you created earlier to get more information about the texts. To do so, you’ll use the .loc method in Pandas to select the rows and columns of the metadata that you need. On the next line of your file, type:

```python
print(metadata.loc[top5_euclidean.index, ['Author','Title','Keywords']])
```

In this step, you’re telling Pandas to limit the rows to the file keys in your Euclidean distance results and limit the columns to author, title, and subject keywords, as in the following table:5

There’s some initial success on this list, suggesting that our features are successfully finding texts that a human would recognize as similar. The first two texts, George Thomson’s work on plague and Gideon Harvey’s on tuberculosis, are both recognizably scientific and clearly related to Boyle’s. But the next one is the other text written by Boyle, which you might expect to come up before the other two. The next question to ask is: what different results might you get with cosine distance?

You can calculate cosine distance in exactly the way you calculated Euclidean distance, but with a parameter that specifies the type of distance you want to use. On the next lines of your file, type:

```python
cosine_distances = pd.DataFrame(squareform(pdist(wordcounts, metric='cosine')), index=filekeys, columns=filekeys)

top5_cosine = cosine_distances.nsmallest(6, 'A28989')['A28989'][1:]
print(top5_cosine)
```

Running the script will now output the top five texts for both Euclidean distance and cosine distance. (You could calculate city block distance by using metric='cityblock', but the results are unlikely to be substantially different from those for Euclidean distance.)
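To see for yourself how closely city block and Euclidean distance track each other, here is a quick check on the (1,1) and (2,4) points from earlier in the lesson (a sketch; the numbers are the example values, not corpus data):

```python
import numpy as np
from scipy.spatial.distance import pdist

pts = np.array([[1.0, 1.0], [2.0, 4.0]])  # the example points from earlier

print(pdist(pts, metric="cityblock"))  # |2-1| + |4-1| = 4
print(pdist(pts, metric="euclidean"))  # sqrt(1 + 9), about 3.16
```

Both numbers grow and shrink together as the points move apart, which is why swapping one metric for the other rarely reorders results dramatically.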
The results for cosine distance should look like the following:

```
A29017    0.432181
A43020    0.616269
A62436    0.629395
A57484    0.633845
A60482    0.663113
```

Right away you’ll notice a big difference. Because cosine distances are scaled from 0 to 1 (see the Cosine Similarity and Cosine Distance section for an explanation of why this is the case), we can tell not only what the closest samples are, but how close they are.6 Only one of the closest five texts has a cosine distance less than 0.5, which means most of them aren’t that close to Boyle’s text. This observation is helpful to know and puts some of the previous results into context. We’re dealing with an artificially limited corpus of texts published in just a single year; if we had a larger set, it’s likely we’d find texts more similar to Boyle’s. You can now print the metadata for these results in the same way as in the previous example:

```python
print(metadata.loc[top5_cosine.index, ['Author','Title','Keywords']])
```

The following table shows the metadata for the texts that cosine distance identified:

The first three texts in the list are the same as before, but their order has reversed. Boyle’s other text, as we might expect, is now at the top of the rankings. And as we saw in the numerical results, its cosine distance suggests it’s more similar than the next text down in this list, Harvey’s. The order in this example suggests that perhaps Euclidean distance was picking up on a similarity between Thomson and Boyle that had more to do with magnitude (i.e. the texts were similar lengths) than it did with their contents (i.e. words used in similar proportions). The final two texts in this list, though it is hard to tell from their titles, are also fairly relevant to Boyle’s. Both of them deal with topics that were part of early modern scientific thought: natural history and aging, respectively.
As you might expect, because cosine distance is more focused on comparing the proportions of features within individual samples, its results were slightly better for this text corpus. But Euclidean distance was on the right track, even if it didn’t capture all the similarity you were looking for. If as a next step you expanded these lists out to ten texts, you’d likely see even more differences between the results for the two distance measures.

It’s crucial to note that this exploratory investigation into text similarity didn’t give you a lot of definitive answers. Instead it raises many interesting questions: Which words (features) caused these specific books (samples) to manifest as similar to one another? What does it mean to say that two texts are “similar” according to raw word counts rather than some other feature set? What else can we learn about the texts that appeared in proximity to Boyle’s? Like many computational methods, distance measures provide you with a way to ask new and interesting questions of your data, and initial results like these can lead you down new research paths.

# Next Steps

I hope this tutorial gave you a more concrete understanding of basic distance measures as well as a handle on when to choose one over the other. As a next step, and for better results in assessing similarity among texts by their words, you might consider using TF-IDF (Term Frequency–Inverse Document Frequency) instead of raw word counts. TF-IDF is a weighting system that assigns a value to every word in a text based on the relationship between the number of times a word appears in that text (its term frequency) and the number of texts it appears in across the whole corpus (its document frequency). This method is often used as an initial heuristic for a word’s distinctiveness and can give the researcher more information than a simple word count. To understand exactly what TF-IDF is and what calculating it entails, see Matthew J. Lavin’s Analyzing Documents with TF-IDF.
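As a sketch of that swap: scikit-learn’s TfidfVectorizer is a drop-in replacement for CountVectorizer, so the earlier pipeline needs only one line changed. The toy strings below are invented just to show the shape of the output:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# In the lesson's pipeline you would mirror the earlier CountVectorizer call:
# vectorizer = TfidfVectorizer(input="filename", max_features=1000, max_df=0.7)
# weights = vectorizer.fit_transform(filenames).toarray()

# A toy demonstration on in-memory strings:
vec = TfidfVectorizer()
weights = vec.fit_transform(["the cat sat", "the cat sat sat"]).toarray()
print(weights)  # tf-idf weights instead of raw counts, one row per document
```

The resulting array feeds into pdist() exactly as wordcounts did.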
You could take TF-IDF results made using Lavin’s procedure and substitute them for the matrix of word counts in this lesson. In the future you may use distance measures to look at the most similar samples in a large data set, as you did in this lesson. But it’s even more likely that you’ll encounter distance measures as a near-invisible part of a larger data mining or text analysis approach. For example, k-means clustering uses Euclidean distance by default to determine groups or clusters in a large dataset. Understanding the pros and cons of distance measures could help you to better understand and use a method like k-means clustering. Perhaps more importantly, a good foundation in understanding distance measures might help you to assess and evaluate someone else’s digital work more accurately. Distance measures are a good first step to investigating your data, but a choice between the three different metrics described in this lesson—or the many other available distance measures—is never neutral. Understanding the advantages and trade-offs of each can make you a more insightful researcher and help you better understand your data.

1. I rounded this result to the nearest hundredth place to make it more readable.
2. A similarity lower than 0 is indeed possible. If you move to another quadrant of the graph, two points could have a 180 degree orientation, and then their cosine similarity would be -1. But because you can’t have negative word counts (our basis for this entire exercise), you’ll never have a point outside this quadrant.
3. Once again, I’ve done some rounding in the final two steps to make this operation more readable.
4. SciPy’s pdist function outputs a “condensed” distance matrix (a flat list of the pairwise distances) to save space and processing power. This output is fine if you’re using it as part of a pipeline for another purpose, but we want the “squareform” matrix so that we can see all the results.
5. I made these results a little easier to read by running identical code in a Jupyter Notebook. If you run the code on the command line, the results will be the same, but they will be formatted a little differently.
6. It’s certainly possible to scale the results of Euclidean or city block distance as well, but it’s not done by default.
# What does curvature of spacetime really mean?

1. Nov 6, 2007

### Yianni

I don't really get GR. Why should curved space and time be a model for gravity? To me, curved space means observers no longer measure distances as sqrt(x^2+y^2+z^2); rather, given an x-ordinate, y-ordinate and z-ordinate, the length of the shortest path to that coordinate must be calculated by a different formula. Surely it means that two spatially separated objects which are moving parallel to each other, if they enter a bit of space that is not uniformly curved, will no longer be traveling parallel to each other. In this sense I get why light, for example, is bent by gravity in general relativity. But why do things in a gravitational field actually accelerate in the direction of the most negative gradient? (I don't know if this is part of the maths, it's just what I've understood from looking at those embedding diagrams, or whatever they are called.) Is the previous statement even correct, or does SPACE accelerate in the direction of the most negative gradient, while objects with mass in that space remain at rest in that space, which is moving? And would it be fair to say that mathematically, the curvature of spacetime needn't cause an acceleration of any sort, except that in general relativity one is always accompanied by the other - you always have an acceleration of space in a gravitational field, and you always have a distortion of spacetime, thus in GR they are one and the same? Or have I completely missed the meaning of curved spacetime?

2. Nov 6, 2007

### Chris Hillman

Why Geometrize Gravitation?

You didn't mention your math/physics background or the level of understanding (comprehension of basic principles? mastery of computing with gtr?) which you seek, but I'll guess you are an undergraduate student with at least high school math and some calculus.
In the end, gtr is not terribly difficult or complicated, but many of its principal assumptions and conclusions are sufficiently subtle that it requires substantial effort to learn and appreciate the elements of gtr properly. On the other hand, while there are many beautiful things in sci/math, gtr has been blessed with an unusual number of superb textbooks and even some fine popular books. Among the latter I particularly recommend Wald, Space, Time, and Gravity and Geroch, General Relativity from A to B.

The short answer is that this is a most wondrous and elegant way of incorporating the special feature of "gravitational force" (viz. the Lorentz force in EM) into the beautiful geometric interpretation of special relativity which was discovered by Minkowski (a leading mathematician who had been one of Einstein's professors in college). See either of the two books for much more about the motivation for this "geometrization" of gravity. A keyword for the "special feature" is "equivalence principle".

That's the idea. But note we now need to deal with coordinate charts in curved spacetimes, and these can be hard to interpret geometrically, so mathematicians developed various tools to help physicists keep from getting confused by features of the coordinate representation, by computing "geometric features" which do not depend upon the coordinates chosen to describe the situation. BTW, this is not part of gtr but part of the theory of Riemannian or Lorentzian manifolds. Gtr uses Lorentzian manifolds both as the geometrical setting for non-gravitational physics and as its model of all gravitational phenomena.

Right, this goes by "divergence" or "convergence" of initially parallel geodesics. And again, this is not part of gtr but part of the theory of Riemannian or Lorentzian manifolds.

Right, but better to say "bending in the large" because in gtr, light rays are in a sense always "straight in the small" (see below).
This also explains the "gravitational red shift" of signals sent from the surface of a massive isolated object to a distant observer; these are two of the four classical solar system tests of gtr (or if you prefer, of the geometrization of gravitation; other "metric theories of gravitation" predict the same gravitational red shift as gtr).

The books I mentioned explain that while Newton's theory of gravitation (in its classical field theory form) does posit a gravitational potential whose gradient gives the direction and magnitude of the acceleration of small objects (independent of their mass, unlike the Lorentz force in EM, which depends on the small object's electrical charge), gtr treats gravitation quite differently, as the curvature of spacetime.

I assume you know that in str, "space" and "time" no longer exist by themselves. Rather, the kinematic (motion) histories of all the objects participating in the drama of physics are woven into a geometric picture called "spacetime", in which the "life" of each object is represented by a curve, called its "world line", in a four-dimensional Lorentzian manifold called Minkowski spacetime, which has a distance formula that looks very similar to the Pythagorean theorem but has some critical and easily overlooked differences. In str, nonaccelerating objects, i.e. those in inertial motion, have world lines which are straight. OTOH, if a small object is subjected to acceleration, its world line is bent near the appropriate "events" (points in spacetime), with the amount of bending (path curvature) giving the direction and magnitude of the acceleration. Gtr incorporates this without change, except that we use the mathematical machinery of the theory of differentiable curves in Lorentzian manifolds. Then one can say that a free-falling small object near an isolated massive object will have a world line which is a "timelike geodesic" in the spacetime used to model all gravitational phenomena in this scenario.
This geodesic will in general be subject to "bending" much like light bending, and it turns out that, according to gtr, our small object will travel in a quasi-Keplerian orbit if it has nonzero "orbital angular momentum" wrt the massive object, or else fall straight in. Much as in Newtonian gravitation, but the geometric representation of physics was quite novel in 1915.

No, no, that's nonsense. I don't know why so many PF posters have mentioned the false notion that "space itself" can "accelerate" or "expand" recently. In any Riemannian or Lorentzian manifold, curvature of the manifold (nonzero Riemann tensor) causes geodesics to converge or diverge and leads to a kind of "bending in the large" phenomenon. Also, "bending in the small" means a curve is nongeodesic, and then the amount of bending is measured by path curvature. This is quite independent of gtr, but is incorporated into gtr. In gtr, a gravitational field is more or less identified with the Riemann curvature tensor. This is the mathematical object which in various ways represents, near a given point, how distorted a Lorentzian manifold is at that point from a flat manifold.

3. Nov 6, 2007

### Yianni

Thanks for that, it clarified a lot. The reason I mention the idea that space itself is moving is that in Brian Greene's book The Fabric of the Cosmos he suggests that if the universe is expanding uniformly on the large scale, then no time dilation should occur between people on distant galaxies, assuming their velocities relative to one another are zero other than the velocity caused by the expansion of the universe; i.e. they look at each other and their watches remain in sync. I suppose I mentioned it because I thought the idea behind 'spacetime curvature' was that if everything in a region is accelerating uniformly, then instead of describing those objects as accelerating, you described the space as such. I.e.
the space is permeated by theoretical clocks and rulers at every real number along each of the three spatial axes, which are accelerated by gravity, and this is called space. I suppose that idea was just plain wrong though! I'm only just finishing high school but my maths is pretty decent (I don't mean I actually know that much; I live in Australia and the standard of maths in schools here is quite appalling, but I can pick up on ideas pretty easily). Could you recommend any textbooks for learning both SR and then GTR? Or one for learning SR and then one for at least some sort of light introduction to the maths of GTR? I'm learning some of the ideas in SR from the Feynman lectures, but I think I really need a textbook to get into the nitty-gritty of it. Also, to learn GTR, what maths would be necessary before even opening up the textbook? Obviously some level of calculus and vector maths, but what else? And would the textbook itself explain the "mathematical machinery of the theory of differentiable curves in Lorentzian manifolds," or would this need to be learned before beginning? Again, thanks for your time... now I better stop wasting my own time and start studying for my exams again :yuck:

4. Nov 7, 2007

### pmb_phy

Your assumptions are correct so far. The formula is related to the metric tensor. The metric you're speaking of is the Euclidean metric. "The interval", which in this case is the spatial distance, is

$$ds^2 = (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2$$

This formula is chosen such that the spatial distance between two points remains unchanged upon a change in coordinate system. In relativity the spacetime interval is

$$ds^2 = (c\Delta t)^2 - (\Delta x)^2 - (\Delta y)^2 - (\Delta z)^2$$

That is correct. This phenomenon is called geodesic deviation.

The correct term is curved, not bent. And curvature is a sufficient reason for the bending in the spatial path of light. But it is not necessary.
For example, the first derivation of light being deflected in a gravitational field was done in a flat spacetime in an accelerating frame of reference.

Sometimes it's just easier to quote a text, since the author, in this case, says it quite well. From Introduction to the Theory of Relativity, by Francis W. Sears and Robert W. Brehme, page 193.

Pete

Last edited by a moderator: Nov 9, 2007

5. Nov 7, 2007

### Chris Hillman

You're welcome! Always good to hear!

Right, well, I think of that scenario as a picture, but in words I'd put it like this: the world lines of the dust particles form a congruence (family) of timelike geodesics which fill up the spacetime model (an FRW dust solution of the Einstein field equation). Furthermore, there is a unique family of "spatial hyperslices" for these observers: Riemannian three-manifolds which are everywhere orthogonal to the world lines of the dust. (In general, if the dust were "swirling" it would not be possible to find such a family of spatial hyperslices.) For concreteness, here is the "line element" of the FRW dust with E^3 hyperslices:

$$ds^2 = -dT^2 + T^{4/3} \, \left( dx^2 + dy^2 + dz^2 \right), \; 0 < T < \infty, \, -\infty < x, \, y, \, z < \infty$$

Visualizing this requires a four-dimensional figure, but since all spatial directions are equivalent in this model (we say it is "isotropic"), let's just set $z=0$. Then you can visualize the "map" (as in a representation of a curved space like the surface of the Earth) --- or, to use the correct term, the "coordinate chart" --- defined by this line element as the upper half space $0 < T < \infty, \, -\infty < x, \, y <\infty$, with the plane $T=0$ representing the Big Bang. Then the world lines of the dust have the simple form $x=x_0, y=y_0$, i.e. "vertical half-lines", and the orthogonal slices have the form $T=T_0$, i.e. "horizontal planes".
To study radio signals exchanged by the dust particles, we recall that the world line of a "photon" is a null geodesic, so we can set $ds=0$ in the line element, which defines the "distance" in a small piece of our coordinate chart. Here comes an example of why first year college calculus is fun and useful! From the line element we obtain

$$dT = T^{2/3} \, \sqrt{dx^2 + dy^2} = T^{2/3} \, dr$$

or $dr/dT = T^{-2/3}$, so that the "world sheets" formed by taking all the signals from some event on the world line of some dust particle have the form of a surface somewhat like a paraboloid, whose sides get steeper as T grows. In flat spacetime the analogous surface would simply be a half-cone with constant slope unity, so the shape of these distorted light cones (as drawn in our coordinate chart) is a consequence of the fact that on the hyperslices for a later time $T_1 > T_0$, a coordinate difference $x_1-x_0$, where $y = y_0, \, T=T_1$, corresponds to a larger distance, $\Delta s = T_1^{2/3} \, \Delta x > T_0^{2/3} \, \Delta x$. This is the Hubble expansion: the dust particles move away from one another, but at a slower and slower rate, as time increases. Note that even though in our chart the world lines appear to maintain the same "horizontal distance", appearances are misleading, just as appearances can be misleading in a map of the Earth, which typically misrepresents distances.

There is a very nice picture of the "distorted light cones" described above in the excellent popular book The First Three Minutes by Steven Weinberg, BTW. It is quite possible to understand the geometry even before studying differential calculus if one properly interprets such a picture!
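Filling in the calculus step the post gestures at (my integration, not part of the original post): integrating $dr/dT = T^{-2/3}$ for a signal emitted at the event $T=T_0$, $r=0$ gives

$$r(T) = \int_{T_0}^{T} \tilde{T}^{-2/3} \, d\tilde{T} = 3 \left( T^{1/3} - T_0^{1/3} \right)$$

so the light "cone" drawn in this chart is a surface of revolution of a cube-root curve: its coordinate slope $dT/dr = T^{2/3}$ grows without bound, which is exactly the "sides get steeper as T grows" behavior described above.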
The point is that this coordinate chart is said to be "comoving with the dust particles" in that the coordinate planes $T=T_0$ correspond to the spatial hyperslices orthogonal to the world lines of the dust particles, and also differences $\Delta T$ taken along the world line of a dust particle correspond to intervals of proper time as measured by an ideal clock carried by that dust particle. Indeed, we can imagine that observers riding on a small group of nearby dust particles synchronize their clocks by exchanging light signals, and although this gets a bit tricky, it is clear enough that a successful synchronization would yield values of the coordinate T. (Abstractly, a coordinate is just a monotonic function on some manifold, i.e. one which has nonzero gradient. A choice of such functions on some "neighborhood", such that their gradients are nowhere parallel, gives a local coordinate chart defined on that neighborhood.)

If we imagine "time running backwards", then as $T \rightarrow 0$ "from above" we see that the distance between any pair of dust particles (that is, the "horizontal distance" between their world lines) must decrease to zero, and if we compute the density of dust from the Einstein tensor in this solution, we find

$$\rho = \frac{1}{6 \, \pi \, T^2}$$

showing that indeed the density blows up as $T \rightarrow 0$. So we can even say that T is a kind of "universal time" for the dust particles, in which "time zero" corresponds to the "Big Bang". The hyperslices $T=T_0$ in this model have the line element (put dT=0 in the spacetime line element)

$$ds^2 = T_0^{4/3} \, \left( dx^2 + dy^2 + dz^2 \right), \; -\infty < x, \, y, \, z < \infty$$

which is the line element of euclidean three-space, so we can say that the hyperslices in this FRW model are "locally flat", meaning locally isometric to euclidean three-space, $E^3$.
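As a brief aside, the quoted density can be checked against the Friedmann equation (my own consistency check, not part of the original post): reading the scale factor $a(T) = T^{2/3}$ off the line element above and working in geometric units ($G = c = 1$) for a spatially flat dust universe,

$$H = \frac{\dot a}{a} = \frac{2}{3T}, \qquad \rho = \frac{3 H^2}{8 \pi} = \frac{3}{8 \pi} \cdot \frac{4}{9 T^2} = \frac{1}{6 \pi T^2}$$

which reproduces the expression obtained from the Einstein tensor.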
In fact, they are globally isometric to euclidean three-space in the model I have described, but with a seemingly simple change (identify the "vertical plane" $x=1$ with $x=0$) we obtain a model with spatial hyperslices which are locally flat but globally isometric to "cylinders". We could also have hyperslices which are locally flat but isometric to "flat tori". If this intrigues you, see Jeffrey Weeks, The Shape of Space. This cosmological model is highly idealized: the dust particles roughly correspond to galaxies, but of course real galaxies are clumped into "clusters", there are large "voids" free of galaxies, and so on, and the galaxies are not really "pointlike" but have complex and interesting substructures (such as our Solar system). Nonetheless this simple model successfully reproduces the basic features of the observed Hubble expansion (it does even better at very large distances if we add a small "Lambda term" to the right-hand side of the Einstein field equation, in addition to the term representing the dust). It is worth mentioning that it is possible to construct more sophisticated cosmological models which are "perturbations" of our FRW dust model but which attempt to model inhomogeneities in the distribution of galaxies or to otherwise improve on the FRW dust models. Well, there might be something correct lurking in there, but I hope the above helps in appreciating how specialists in gravitation physics tend to think of this kind of scenario. Can't be worse than (generic) American schools! (But that's a topic for general discussion.) I have the impression you can probably get the right "picture" from the right books. Linear algebra would be very, very useful. Gtr uses the language of tensors, which are slight generalizations of the "linear operators" of linear algebra.
Also, linear operators can be represented by matrices, but this representation depends upon a choice of "basis", which sets up the idea of tensor fields as something you can write down in terms of some choice of coordinates; likewise, the notion of "orthonormal basis" sets up the idea of "frame fields". I think the textbook by D'Inverno, Introducing Einstein's Relativity, might be ideal for you; coupled with Feynman's Lectures it just might give you just enough str to get started on gtr. (Maybe not, since I just remembered that Feynman's discussion of Minkowski geometry is a bit murky.) Last I knew, this was available in paperback new for perhaps U.S. $40.00. Alternatively, you could get Edwin F. Taylor and John Archibald Wheeler, Spacetime Physics, Freeman, 1992 (make sure to make a table of the trigonometry appropriate to Minkowski geometry, called "hyperbolic trig", and to compare it with standard high school trig, or "elliptical trig"), and L.P. Hughston and K.P. Tod, An Introduction to General Relativity, Cambridge University Press, 1990, which cost about U.S. $25.00 new, last I checked. All these books will no doubt be available more cheaply used from Internet booksellers. Both the gtr textbooks I mentioned aim to be "self-contained" relative to a standard undergraduate curriculum. I would expect that even a bright high school student with some calculus and some linear algebra will find some topics too challenging at first, but you could still learn a very great deal, I think! (Feynman's Lectures are a great way to learn physics, BTW! I also really like Blandford and Thorne, Applications of Classical Physics, which is available free at www.pma.caltech.edu/Courses/ph136/yr2004/index.html The authors are leaders in physics; Thorne and Wheeler are also coauthors with Misner of the classic graduate level textbook Gravitation, which you can save for later.
And I am very glad to see a high school student who appreciates how much better standard textbooks are for learning topic T than arbitrary websites, which tend to be full of misinformation--- the Cal Tech website I just mentioned being a notable exception!) Last edited: Nov 7, 2007 6. Nov 7, 2007 7. Nov 7, 2007 ### pmb_phy Even in a flat spacetime one can change from a frame of reference in which there is no gravitational field to one in which a gravitational field is present. Things will fall in this field, light will be bent in such a field, etc. However, there will be no spacetime curvature in such a field. An example of such a field is a uniform gravitational field, which is spoken of in Einstein's Equivalence Principle, which states: A uniformly accelerating frame of reference is identical to (i.e. has the same metric as) a uniform gravitational field. Pete 8. Nov 7, 2007 ### pervect Staff Emeritus It's probably worth pointing out (this has been mentioned once or twice in past threads) that space-time curvature has a simple physical interpretation as the presence of tidal forces. The most direct connection is that the tidal forces experienced by an inertial observer are identical to certain components of the Riemann curvature tensor, the abstract mathematical entity which represents curvature in GR (and in Riemannian geometry). 9. Nov 7, 2007 ### pmb_phy Yep. That's what I explained in my new web page - http://www.geocities.com/physics_world/gr/gravity_vs_curvature.htm I quoted Kip Thorne, who said "spacetime curvature and tidal gravity must be precisely the same thing, expressed in different languages." Pete 10.
Nov 8, 2007 ### Chris Hillman Pedantic elaboration of what pervect said Right, but having said this, to avoid possible misunderstanding, for the benefit of "intermediate students" of gtr, someone should perhaps stress that not all of the components can be identified with the "electric part" and thus with the relativistic analog of the tidal tensor from Newtonian gravitation. There is also a "magnetic" part, which has no Newtonian analog and which represents (tiny!) spin-spin accelerations on spinning test particles moving in an ambient gravitational field with nonzero "magnetogravitic tensor". Having mentioned "gravity" and "magnetism" in the same paragraph, someone should probably say that what I just said refers to "strong field" (fully nonlinear) machinery analogous to the well-known weak-field machinery known as "gravitomagnetism", or more properly "gravitoelectromagnetism" (GEM); this gets a bit tricky so I won't try to say more. For some old Wikipedia articles see • tidal tensor in Newtonian gravitation; the gtr analog is the electrogravitic tensor, • Bel decomposition, which is observer dependent and decomposes the Riemann tensor into three three-dimensional spatial tensors (the electrogravitic, magnetogravitic, and topogravitic tensors, which have various interesting geometrical and physical interpretations), and which is analogous to the familiar decomposition of the EM field tensor, with respect to some timelike congruence, into two three-dimensional spatial vector fields, the electric and magnetic vector fields, • Ricci decomposition, which is observer independent and decomposes the Riemann tensor into the Weyl tensor (completely traceless part) plus a piece built from the Ricci tensor (once-detraced part) plus a piece built from the Ricci scalar (scalar part). I stress that the Bel and Ricci decompositions are purely mathematical and make sense outside the context of physics. However, the physical interpretation of the tensors in question does make use of the EFE.
(I cited specific versions of three articles which I read, in fact wrote, before I left WP, and thus which I consider to be reasonably reliable (if too sketchy to be truly useful!); more recent versions at any given moment could be much worse or possibly better than the ones I cited! That's just in the nature of the Wikipedia beast; don't assume that WP generally is a reliable source of information; it is not.) Last edited: Nov 8, 2007 11. Nov 8, 2007 ### Chris Hillman Frame fields, anyone? Hi, Yianni, Just noticed something I previously overlooked: with a few changes, this sounds very much like you might be about to independently rediscover the idea of a frame field. That would be cool! Last edited: Nov 8, 2007 12. Nov 9, 2007 ### A.T. Actually, in the simple spacetime diagram there is a slider "gravity" that creates a uniform gravitational field. Even when you set the initial speed to zero, the object will "accelerate" in space, although it has a geodesic path in a flat spacetime. The key is that the cone-like spacetime has an inhomogeneous metric (causing gravity), but can still be unrolled in 2D without distortion (so it has no intrinsic curvature). 13. Nov 9, 2007 ### Chris Hillman Some Instructive Exercises on Charts and Frames "Uniform gravitational field" is a bit tricky in gtr! Inspired by the EP, many authors take this to mean the Rindler congruence with the following frame field on a Rindler wedge region in Minkowski vacuum: $$\vec{f}_1 = \frac{x}{\sqrt{x^2-t^2}} \, \partial_t + \frac{t}{\sqrt{x^2-t^2}} \, \partial_x, \; \vec{f}_2 = \frac{t}{\sqrt{x^2-t^2}} \, \partial_t + \frac{x}{\sqrt{x^2-t^2}} \, \partial_x,$$ $$\vec{f}_3 = \partial_y, \vec{f}_4 = \partial_z,$$ $$|t| < x < \infty, \; -\infty < t, \, y, \, z < \infty$$ Here, the first vector field is a timelike unit vector field with path curvature (physically speaking, acceleration vector) $\frac{1}{\sqrt{x^2-t^2}} \vec{f}_2$. Note that this is "uniform" wrt t,y,z but not x.
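A quick numerical aside (a sketch of my own, not from the thread; names invented): the combination $X = \sqrt{x^2-t^2}$, $T = \operatorname{arctanh}(t/x)$ implicit in this frame field is inverted by $t = X \sinh T$, $x = X \cosh T$ on the wedge $|t| < x$, and the magnitude of the acceleration vector is $1/X$:

```python
import math

# Illustration only: check that the Rindler coordinates invert correctly
# on the wedge |t| < x, and evaluate the acceleration magnitude 1/X.

def to_rindler(t, x):
    """(t, x) -> (T, X) with T = arctanh(t/x), X = sqrt(x^2 - t^2)."""
    return math.atanh(t / x), math.sqrt(x * x - t * t)

def from_rindler(T, X):
    """(T, X) -> (t, x) with t = X sinh T, x = X cosh T."""
    return X * math.sinh(T), X * math.cosh(T)

def accel_magnitude(t, x):
    """Path curvature of f_1, i.e. 1/sqrt(x^2 - t^2) = 1/X."""
    return 1.0 / math.sqrt(x * x - t * t)
```

Round-tripping any point of the wedge through these maps reproduces it to machine precision, and the acceleration 1/X depends only on X: uniform with respect to t, y, z but not x, exactly as noted above.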
To understand this better, it is convenient to carry out the coordinate transformation $$T = \operatorname{arctanh}(t/x), \; X = \sqrt{x^2-t^2}, \; Y = y, \; Z = z$$ which gives the Rindler chart $$ds^2 = -X^2 \, dT^2 + dX^2 + dY^2 + dZ^2, \; 0 < X < \infty, \; -\infty < T, Y, Z < \infty$$ The Rindler frame field becomes $$\vec{f}_1 = \frac{1}{X} \; \partial_T, \; \vec{f}_2 = \partial_X, \; \vec{f}_3 = \partial_Y, \; \vec{f}_4 = \partial_Z$$ and the acceleration vector becomes simply $\frac{1}{X} \vec{f}_2$. Same spacetime, same metric tensor, same frame field, same acceleration vector, just written in a new local coordinate chart! Exercise: work out the geodesic equations, solve for null geodesics, and find the simple geometric characterization of their appearance in the Rindler chart. If you are familiar with the notion of the Fermat metric, compute the Fermat metric for null geodesics in the Rindler chart and recognize them as the geodesics of a famous model of a certain noneuclidean geometry. Exercise: draw a diagram showing the world lines of two Rindler observers in the Rindler chart, with coordinates $X=X_1, Y=Y_0, Z=Z_0$ and $X=X_2, Y=Y_0, Z=Z_0$ where $X_2 > X_1 > 0$, and draw some null geodesics representing the world lines of radar pips sent out by one and reflected from the other. What is the "radar distance" between these observers? Is this notion of "distance in the large" symmetric between the two observers? How does the value of radar distance depend upon $h=X_2-X_1$? How does it compare with "ruler distance"? Can you use the preceding exercise to concoct a third notion of "distance in the large"? Note that this exercise shows that even in flat spacetime, there are multiple operationally significant notions of "distance in the large" and thus of "velocity in the large". Exercise: The Rindler congruence has acceleration which is constant for each observer in the family, but which varies with X between different observers in the family.
Show that the expansion tensor of this congruence vanishes identically, so that the Rindler observers exhibit rigid motion. Can you find a congruence of observers who are also accelerating in the $\partial_x$ direction but which has constant acceleration over the entire family? (This is the Bell congruence.) What is the expansion tensor of this congruence? What can you conclude about a string stretched between two Bell observers with aligned accelerations? Show that both the Rindler and Bell congruences are vorticity free and thus define orthogonal hyperslices. Show these are distinct slicings but that both are locally flat. Can you find a chart which is comoving with the Bell observers in the same way that the Rindler chart is comoving with the Rindler observers? What can you say about the properties of your chart? Exercise: Read about Weyl's family of all static axisymmetric vacuum solutions of the EFE, e.g., in this review paper. Note that the master functions are static axisymmetric harmonic functions when we write the solution in terms of the Weyl canonical chart $$ds^2 = -\exp(-2 \, u(z,r)) \, dt^2 + \exp(2\, u(z,r)) \; \left( \exp(- 2\, v(z,r)) \left( dz^2 + dr^2 \right) + r^2 \, d\phi^2 \right),$$ $$u_{zz} + u_{rr} + \frac{u_r}{r} = 0, \; v_z = 2 \, r \, u_z \, u_r, \; v_r = r \left( u_r^2 - u_z^2 \right)$$ Find a transformation from the Rindler chart to the following example of Weyl canonical chart $$ds^2 = -\left( z + \sqrt{z^2+r^2} \right) \, d\overline{t}^2 + \frac{dz^2+dr^2}{2 \,\sqrt{z^2+r^2}} + \frac{r^2 \, d\phi^2}{z + \sqrt{z^2+r^2}},$$ $$-\infty < \overline{t}, \, z < \infty, \; 0 < r < \infty, \; -\pi < \phi < \pi$$ and express the Rindler frame in this chart. What region of Minkowski spacetime is covered by this chart? What locus in the Rindler chart corresponds to the axis r=0 of this Weyl canonical chart? Can you think of an alternative Weyl canonical chart which also represents a piece of Minkowski spacetime?
Can you think of a third alternative? Exercise: in Newtonian physics, the potential $u(z,r) = a \, z$ is static axisymmetric harmonic and thus defines a Weyl vacuum solution. Find the line element written in the Weyl canonical chart and compute the curvature. Note that this is not locally flat, but is it approximately so? Explain. What do you notice about the behavior of the acceleration vector of static test particles in this chart, as you vary their z coordinate? Does this behavior agree with Newtonian expectation? Exercise: Recall that in gtr, the "gravitational field" is identified with the Riemann curvature tensor. In a static spacetime, this comes down to saying that the "gravitational field" is identified with the electrogravitic tensor (from the Bel decomposition, evaluated wrt some timelike congruence, of the Riemann tensor), the relativistic analog of the Newtonian tidal tensor. Compute the tidal tensor for a Newtonian uniform gravitational field. Can you find an exact vacuum solution of the EFE such that there exists a family of inertial observers whose electrogravitic tensor has the same geometric properties? What if you add a Lambda term to the RHS of the EFE? Is your solution static? Are the hyperslices orthogonal to your (vorticity-free) timelike geodesic congruence locally flat? Last edited: Nov 9, 2007 14. Nov 9, 2007 ### pmb_phy I never heard of the term "slider gravity". Can you please define it or quote a reference. Thank you. The rest I agree with. Best regards
# Subtle order-of-evaluation issues when pattern-matching with attributes Related question concerning unrolling the tests that are shown below: Big and Little surprises when unrolling tests of pattern-matching and attributes Questions researched before posting this one: Orderless pattern matching Combinations of multiple matching patterns What has changed in pattern matching functions with the Orderless attribute? Transformation rule for arbitrary number of argument expressions Pattern does not match with Orderless head In an attempt to understand pattern-matching better, I tried some exhaustive testing of a certain substitution rule under all seven combinations of attributes, leaving out the null case of no attributes: In[1]:= allAtts = Flatten[Table[ Union[Sort /@ Permutations[{Flat, Orderless, OneIdentity}, {i}]], {i, 3}], 1] Out[1]= {{Flat}, {OneIdentity}, {Orderless}, {Flat, OneIdentity}, {Flat, Orderless}, {OneIdentity, Orderless}, {Flat, OneIdentity, Orderless}} The substitution rule attempts to match the pattern eqv[x_, y_] against the input eqv[p, q, r] to see what gets bound to x and y for each combination of attributes. In the first test, I define the substitution before setting the attributes. After some manual prettification of the output: In[2]:= Table[Module[{e = (eqv[p, q, r] /. {eqv[x_, y_] :> {x, y}})}, ClearAll[eqv]; SetAttributes[eqv, j]; {j, First@e, Rest@e}], {j, allAtts}] Out[2]= {{{Flat}, p, eqv[q, r]}, {{OneIdentity}, eqv[p], {eqv[q, r]}}, {{Orderless}, p, eqv[q, r]}, {{Flat, OneIdentity}, p, eqv[q, r]}, {{Flat, Orderless}, p, {eqv[q, r]}}, {{OneIdentity, Orderless}, q, {eqv[p, r]}}, {{Flat, OneIdentity, Orderless}, p, eqv[q, r]}} The results are reasonable, plausible, interpretable. In a second test (not-unrolled; see cited question above), I set the attributes before defining the substitution rule. The results are subtly different. In[3]:= Table[Module[{e}, ClearAll[eqv]; SetAttributes[eqv, j]; e = (eqv[p, q, r] /. 
{eqv[x_, y_] :> {x, y}}); {j, First@e, Rest@e}], {j, allAtts}] Out[3]= {{{Flat}, eqv[p], {eqv[q, r]}}, (* difft *) {{OneIdentity}, p, eqv[q, r]}, (* difft *) {{Orderless}, p, eqv[q, r]}, {{Flat, OneIdentity}, p, {eqv[q, r]}}, (* difft *) {{Flat, Orderless}, q, {eqv[p, r]}}, (* difft *) {{OneIdentity, Orderless}, p, eqv[q, r]}, (* difft *) {{Flat, OneIdentity, Orderless}, p, {eqv[q, r]}}} (* difft *) My questions are: 1. Why, exactly, are there such differences? I understand that order-of-evaluation matters quite a bit, in general, in Mathematica, but it's hard for me to understand these particular differences. Details follow in the rest of my questions: 2. In the first test, with Flat alone, why do I get (a) p wrapped in eqv, i.e., eqv[p], and (b) eqv[q, r] wrapped in List, when I set attributes before defining the substitution rule? 3. In the second test, for OneIdentity alone, when attributes are set before the substitution rule is defined, do I get the same results as for Flat alone, when the substitution rule is defined before the attributes are set? In other words, for the two orders of attribute-setting versus rule-defining, are the results for Flat alone and OneIdentity alone swapped? 4. In the fourth test, for {Flat, OneIdentity}, why do I get no wrapping with List for the second substitution (for y) when the substitution rule is defined before the attributes are set, and wrapping with List when the attributes are set before the substitution rule is defined? 5. In the fifth test, for {Flat, Orderless}, why do I get p for x when the substitution rule is defined before attributes are set, and q for x when the attributes are set before the substitution rule is defined? 6. In the sixth test, for {OneIdentity, Orderless}, there are two differences between the two conditions (substitution rule defined before attributes set, and attributes set before substitution rule defined).
The first difference (a) is that I get q for x in the first condition and p for x in the second condition. The second difference (b) is that I get List wrapping in the first condition and no List wrapping in the second condition. EDIT: I missed the last difference in the original, and finding it gave me an opportunity to ask the sharpest questions: 1. In the seventh test, for all three attributes, I get an extra List wrapping in the output in the second condition, attributes-set before substitution-defined. Why is that? Should I have been able to predict it knowing just the conditions? By what reasoning? I apologize for the length and complexity of this question, but I made it as short and as simple as I know how to. This question reveals that I don't know nearly as much as I thought I did about pattern-matching and attributes. Perhaps after I learn more from you-all, a much simpler form of the essential question --- hiding in here somewhere I hope --- will emerge. • First, it makes no sense to set attributes after doing the replacement. Second, the only time the replacement rule will fire is when Flat is one of the attributes. If you include e in your output, you will see that the replacement only did something when Flat was one of the attributes. – Carl Woll Oct 8 '18 at 18:59 Carl Woll's comment turned the lights on for me and cleared the fog. There are no matches when there is no Flat. That observation also clears up the related question Big and Little surprises when unrolling tests of pattern-matching and attributes. Here are the only tests that make sense out of the above, now unrolled. In[16]:= Module[{}, ClearAll[eqv]; SetAttributes[eqv, {Flat}]; eqv[p, q, r] /. {eqv[x_, y_] :> {x, y}}] Out[16]= {eqv[p], eqv[q, r]} In[17]:= Module[{}, ClearAll[eqv]; SetAttributes[eqv, {Flat, OneIdentity}]; eqv[p, q, r] /. {eqv[x_, y_] :> {x, y}}] Out[17]= {p, eqv[q, r]} In[18]:= Module[{}, ClearAll[eqv]; SetAttributes[eqv, {Flat, Orderless}]; eqv[p, q, r] /. 
{eqv[x_, y_] :> {x, y}}] Out[18]= {q, eqv[p, r]} In[19]:= Module[{}, ClearAll[eqv]; SetAttributes[eqv, {Flat, OneIdentity, Orderless}]; eqv[p, q, r] /. {eqv[x_, y_] :> {x, y}}] Out[19]= {p, eqv[q, r]} The answer for {Flat, Orderless} seems puzzling. Usually, Mathematica sorts symbolic constants alphabetically and minimizes nesting, leading us to expect {p, eqv[q, r]}. However, there are an unbounded number of correct answers, all equivalent under these attributes, any of which would be acceptable. Here are just a few: In[54]:= ClearAll[eqv]; SetAttributes[eqv, {Flat, Orderless}]; eqv[eqv[p], eqv[r, q]] === eqv[eqv[q], eqv[p, r]] === eqv[eqv[q], eqv[r, p]] === eqv[eqv[q], eqv[r, p]] === eqv[eqv[r], eqv[p, q]] === eqv[eqv[r], eqv[q, p]] === eqv[p, eqv[r, q]] === eqv[q, eqv[p, r]] === eqv[eqv[eqv[p]], eqv[q, r]] === eqv[eqv[p], q, r] Out[55]= True Indeed, even the result for Flat alone has many correct answers: In[58]:= ClearAll[eqv]; SetAttributes[eqv, {Flat}]; eqv[eqv[p], eqv[q, r]] === eqv[p, eqv[q, r]] === eqv[eqv[eqv[p]], eqv[q, r]] === eqv[eqv[p], q, r] Out[59]= True • You can use ReplaceList to show all possible replacements. It is an open question as to why the {Flat, Orderless} case is not giving the first element from the ReplaceList result. – Daniel Lichtblau Oct 9 '18 at 0:51
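For readers who find the attribute semantics easier to grasp operationally, here is a deliberately crude toy matcher in Python (entirely my own construction, not Mathematica's actual algorithm; the function name is invented). It models only the one idea the thread turns on: without Flat, eqv[x_, y_] cannot match a three-argument eqv at all; with Flat, each pattern variable may absorb a contiguous run of arguments, rewrapped in eqv unless OneIdentity exempts singletons; Orderless adds reorderings.

```python
from itertools import permutations

def matches(args, flat=False, orderless=False, one_identity=False):
    """Enumerate candidate (x, y) bindings when a pattern like
    eqv[x_, y_] is matched against eqv[*args].  A toy imitation of the
    attribute semantics, not Mathematica's actual matcher."""
    def wrap(run):
        # OneIdentity lets a one-element run bind without the eqv wrapper.
        if len(run) == 1 and one_identity:
            return run[0]
        return ("eqv",) + tuple(run)
    orders = sorted(set(permutations(args))) if orderless else [tuple(args)]
    results = []
    for order in orders:
        if flat:
            # Flat: each variable may absorb a contiguous nonempty run.
            for cut in range(1, len(order)):
                results.append((wrap(order[:cut]), wrap(order[cut:])))
        elif len(order) == 2:
            # Without Flat, only a two-argument eqv can match eqv[x_, y_].
            results.append((order[0], order[1]))
    return results
```

Running it on three arguments reproduces Carl Woll's observation (no Flat, no match) and shows why ReplaceList-style multiplicity arises: Flat alone yields wrapped bindings like (eqv[p], eqv[q, r]), adding OneIdentity unwraps singletons, and adding Orderless multiplies the candidates by reorderings.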
# Matrix example symmetric positive definite

## Positive-definite matrix

In statistics and its various applications, we often calculate the covariance matrix, which is positive definite (in the cases considered) and symmetric, for various ...

Definite matrices, 1.1 basic definitions: an n×n symmetric matrix A is positive definite iff for any v ≠ 0, vᵀAv > 0. For example, the matrix A = 1 3 ...

### 4.3 Positive-definite Matrices

Linear Algebra and Its Applications, chapter 6, positive definite: the matrix A is positive definite ... (when A is symmetric, A is positive definite).

Examples: the identity matrix I is positive definite (and as such also positive semi-definite). It is a real symmetric matrix, and, for any non-zero column vector ...

(For example, it follows from 2.4.) Suppose S is similar to a positive definite matrix P; a product of three positive definite real symmetric ...

### Positive definite real symmetric matrices

When computing the covariance matrix of a sample, is one then guaranteed to get a symmetric and positive-definite matrix? Currently my problem has a sample of 4600 ...

Inverses of symmetric, diagonally dominant positive matrices: let n ≥ 3; for any symmetric diagonally dominant matrix ... is the zero matrix (see Corollary 4.5, Example 1.4).

nearestSPD works on any matrix: "please send me an example case that has this", which will be converted to the nearest symmetric positive definite matrix.

Symmetric positive definite tridiagonal matrices: it should be clear from these two examples that a symmetric matrix is symmetric positive definite if ...

• examples • the Cholesky factorization • inverse of a positive definite matrix • A is positive semidefinite if A is symmetric and ...

7.2 Positive definite matrices and the SVD. Tests on S: three ways to recognize when a symmetric matrix S is positive. Example 1: are these matrices positive ...

The problem for a symmetric Toeplitz matrix is to solve the eigenvalue problem for a matrix which is symmetric, positive definite, and ...

A positive-definite function of a real variable x is a complex ... must be positive definite to ensure the covariance matrix A to be ...

For example, consider the matrix A with a quadratic form; B need not be symmetric. Forms and definite matrices, 2.3: factoring positive definite ...

In linear algebra, a symmetric n×n real matrix is said to be positive definite if the scalar vᵀAv is positive for every non-zero column vector v of real numbers.

One important example of applying a function to a matrix: a symmetric matrix for which all eigenvalues ... symmetric, positive semi-definite matrices of the ...

Test for positive and negative definiteness: we want a computationally simple test for a symmetric matrix to induce a positive definite quadratic form.

Symmetric positive matrices: this simple example suggests the following definitions. We say that a real symmetric n×n matrix is (i) positive definite provided ...

Positive and negative definite matrices and optimization: consider the matrix A = 1 1 ...; we now consider a general 2×2 symmetric matrix A = a b ...
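The definition these fragments keep circling, that a symmetric A is positive definite iff vᵀAv > 0 for all v ≠ 0, has a standard computational test: attempt a Cholesky factorization A = LLᵀ and see whether every pivot is positive, which succeeds exactly when A is symmetric positive definite. A minimal pure-Python sketch (the function name is mine):

```python
import math

def is_positive_definite(A, eps=1e-12):
    """Test a symmetric matrix (list of row lists) for positive
    definiteness by attempting a Cholesky factorization A = L L^T.
    Returns False as soon as a pivot fails to be positive."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= eps:
                    return False          # a pivot failed to be positive
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return True
```

For instance, [[2, 1], [1, 2]] passes (its quadratic form is 2v₁² + 2v₁v₂ + 2v₂² > 0), while [[1, 3], [3, 1]] fails, since v = (1, -1) gives vᵀAv = -4.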
# Talk:Calibri ## Award in 2005? If this typeface was not created until 2006 (as indicated) how did it win an award in 2005? 69.197.169.65 03:00, 3 May 2006 (UTC) If it did not "exist" before 31 Jan 2007, then how could it possibly have won an award in 2005, as quoted by Wikipedia: "Calibri won the TDC2 2005 award from the Type Directors Club under the Type System category." You have anything to say regarding this? --Saqib (talk) 20:10, 11 July 2017 (UTC) @Saqib: - sure, here's my understanding. De Groot finished the design of Calibri in 2004 as part of the development of what became Office 2007, and you could download it as a beta soon after that. It didn't get "publicly released" with Office until 2007 because the rest of Office was a bit late ;) He's joking about it on Twitter at the moment. Blythwood (talk) 21:07, 11 July 2017 (UTC) The public beta release was 6 June 2006, according to Newsweek. —CrazyDreamer (talk) 18:39, 14 July 2017 (UTC) Wait. The Newsweek story quotes the Dawn story, and the Dawn story quotes Wikipedia... it says "The first public beta version, according to a Wikipedia entry, was released on June 6, 2006". --Saqib (talk) 19:23, 14 July 2017 (UTC) Good catch, @Saqib: Going into that article and following the references further eventually yields a trail of broken links that left me digging through the Wayback Machine. The key here is that it was first publicly released with the first public beta of Windows Vista, whose date can be found here with various other outlets picking up the news over the following 24–48 hours (but not repeating the date). It would have been available to a more limited beta pool for some time before that, but I cannot verify exactly when it actually made it into the Longhorn/Vista private test versions. —CrazyDreamer (talk) 05:40, 18 July 2017 (UTC) ## One Page? Should we make one page for all six "Vista C" fonts? That makes more sense to me than having six typography stubs floating around. ModusOperandi 04:59, 10 May 2006 (UTC) ## Lowercase g?
What's up with the lower case "g" characters being different in the two example images? One is regular and one is italicized...but I can't think of a font that changes its g's so much between regular and italic. --Hergio 17:22, 27 May 2006 (UTC) It's not uncommon; it's simply got proper italic forms, rather than the oblique forms you might be expecting. Look at the "e" and "f" and so on as well as the "a" and "g". Compare serif fonts like ITC Bookman. — Haeleth Talk 14:49, 1 June 2006 (UTC) Perpetua, designed by Eric Gill, has a particularly striking difference between the roman and the italic "g". —— Shakescene (talk) 04:59, 27 July 2009 (UTC) ## Re: Lowercase g? It also changes the "a". Gill Sans changes the "a" when italicized as well. ## First iconic use My Project:Shark might be the first group to use Calibri as its own font, like Johnston is to LUL. Wikipedia:WikiProject Shark/Userbox5 ## Forced compatibility? Should we mention the worries over Microsoft essentially making their default documents incompatible with all other word processors since the default font is now a proprietary one? -Fuzzy 20:36, 9 November 2006 (UTC) 1. yes! --59.96.191.84 (talk) 06:57, 21 September 2009 (UTC) 2. no, no reason to promote absurd things. Microsoft has always used proprietary fonts. Some people made open source fonts that were metrics (widths) compatible with the previous generation of Microsoft fonts. People have since made open source fonts that are width-compatible with Calibri and other newer-generation Microsoft fonts. It's a non-issue. Thomas Phinney (talk) 20:33, 11 July 2017 (UTC) ## Excel The article suggests that Arial has been the default font for Excel, but at least on the Mac version, the default font is Verdana. Theshibboleth 05:04, 7 January 2007 (UTC) ## Humanist? Why is this a humanist typeface? Should we remove that phrase? --Walter Görlitz 23:05, 10 January 2007 (UTC) "Humanist" is an adjective describing typefaces that resemble Humanist.
This could be confusing to people who aren't familiar with the jargon and think it has something to do with humanism, though. 194.151.6.70 12:58, 16 January 2007 (UTC) I think it is an opportunity to learn another meaning through contextual use. I vote to keep it as it is an extremely common typographic term. CApitol3 13:49, 16 January 2007 (UTC) Yes, but as Mr. Görlitz has shown, it's quite possible to miss the contextual meaning. If you didn't know Humanist, how are you supposed to know that a "humanist typeface" isn't a typeface designed by humanists? I'm not sure this can really be solved, though. Maybe an article on Humanist that we could link to would help (presently humanist is a disambig page that does explain the typesetting jargon, but only as an item on a list). 194.151.6.70 14:52, 16 January 2007 (UTC) A link to the dab page is better than leaving the term completely unexplained. I'm adding it. —Angr 11:19, 2 February 2007 (UTC) At first glance, I also thought it had to do with humanism. And the use in a typeface context isn't explained in the link in the article. 74.140.225.97 (talk) 21:18, 13 February 2008 (UTC) I know a little bit about type and lettering, picked up here and there including a couple of summer classes at RISD, but the term "humanist" was completely misleading/bewildering to me, because (naturally enough for me), humanist suggests Humanist bookhand [1] [2], which is something very different, although Eric Gill was obviously paying it tribute when naming this font. The words here have to be less gnomic and more explicit. There's certainly space. —— Shakescene (talk) 03:58, 2 September 2009 (UTC) ## Can I use it without buying Vista? Can I legally download and use Calibri on my Win2k or WinXP from somewhere, or do I have to actually purchase Vista or Office Vista to legally use the new fonts?--Sonjaaa 15:06, 14 February 2007 (UTC)--Sonjaaa 15:01, 14 February 2007 (UTC) The latter.
The fonts are copyrighted, so they can only be copied with permission from Microsoft. Unlike the core fonts for the Web, the new Vista fonts are not freely redistributable. That said, the fonts are available for download if you know where to look, but it's almost certainly not legal. As long as you don't redistribute them yourself, though, I doubt Microsoft's lawyers will be banging down your door. 194.151.6.70 13:18, 16 February 2007 (UTC) I've just installed the MS Office Compatibility Pack for office 2003 (allows opening 2007 docs) and the Font seems to be included there as well. At least I have the font installed now, and I don't think it was installed before (Win XP) --20:22, 6 September 2007 (UTC) —Preceding unsigned comment added by 77.177.253.246 (talk) The Calibri font is copyrighted by Microsoft and they only allow use on Microsoft Windows. I'm not sure about OSX or Linux. 202.78.240.7 (talk) 03:30, 26 January 2010 (UTC) There is the OpenXMLConverter software by Microsoft for Mac. Its license says: 2. FONT COMPONENTS. While the Office for Mac software is running, you may use its fonts to display and print content. You may only • embed fonts in content as permitted by the embedding restrictions in the fonts; and • temporarily download them to a printer or other output device to help print content. — Microsoft Software License Terms. Microsoft Open XML file format converter for Mac, Open XML file format converter 1.1.4 for Mac Sergioller (talk) 10:20, 1 July 2013 (UTC) Adding, when someone sends me a mail using Calibri (the MS default now... sigh) my mail reader asks if I want to download it. Clicking yes, the mail appears in Calibri but the font is not installed on the system. Using Mail in Snow Leopard. --59.96.191.84 (talk) 07:00, 21 September 2009 (UTC) ## Wikipedia So what font does WP use? I think it's Times New Roman. When's WP gonna switch to Calibri? If we aren't going to, why not? 
Gatherton 02:21, 16 April 2007 (UTC) This is not the appropriate place to ask this question. Try the Village Pump. Basically, I think WP will not switch to Calibri for two reasons: • Times New Roman is a solid baseline font that ensures WP looks professional and consistent across a wide range of systems (all of them have something resembling Times). Calibri is Vista-specific and harder to get consistent (since appropriate replacements will have to be used for non-Vista platforms). • Calibri is sans serif. 'Nuff said. WP uses a serif font for its main text and sans serif for headers, which is a common approach. Using a sans serif font like Calibri for the main text would be a big change in style. 194.151.6.70 14:51, 16 April 2007 (UTC) Really? I'm FAIRLY certain that the default skin for Wikipedia uses sans-serif fonts for everything... Kmenzel 14:17, 2 May 2007 (UTC) Never mind, you're right. I'm an idiot. 194.151.6.70 08:34, 3 May 2007 (UTC) ## Criticism I can only read about 2 seconds' worth of Calibri before my eyes hurt. I have also read that people reading this font get headaches. From what I can see, the font is too bold for screen reading, and the words and sentences tend to blend together.-ps --203.52.154.133 (talk) 01:23, 2 May 2008 (UTC) Find references to back these claims up, and they can be included in the article. TalkIslander 01:43, 2 May 2008 (UTC) I've mentioned it here because I think there should be research done on it. As for references, I refer to blog comments, http://msmvps.com/blogs/bill/archive/2006/10/05/more-on-Calibri-font-_2E002E00_.aspx and http://www.oooninja.com/2008/01/calibri-linux-vista-fonts-download.html I also think the spaces between the words are too small (for example, compare Calibri 11pt with Verdana 10pt in Outlook); for me, I can see words, but it takes a lot longer to comprehend what has been written. I was surprised to find that this font was intended to improve screen readability.
I can post screenshots if needed -ps --203.52.154.133 (talk) 01:56, 2 May 2008 (UTC) That link to the blog comment on oooninja.com was very useful, but it works just fine if you access it directly instead of needlessly going through lifehacker.com. I fixed it. Bostoner (talk) 22:26, 9 May 2014 (UTC) Blog comments are not acceptable as sources on Wikipedia. 02:15, 2 May 2008 (UTC) in reply to "keep fanboy opinion out" & "At some point it will dawn on OpenOffice fans that the popularity of the Vista fonts will be a problem for them." This is hilarious... you should look up the term "Projection" in the subject of Psychology. --59.96.191.84 (talk) 07:40, 21 September 2009 (UTC) ## Removed "most popular" claim I removed the following: In a survey conducted by researchers at Wichita State University, Calibri was the most popular font for e-mail, instant messaging, and PowerPoint presentations. It also ranked highly for use in website text.[1] because upon reading the source, such did not seem to be the case. For example, Times New Roman was rated 2% higher than Calibri for willingness to be used on the Web. "Popular" was also the wrong word choice, as it implies it is used more than any alternative; rather, the study's participants were remarking on appropriateness for use in certain contexts, and the font list was small and finite. --AlanH (talk) 02:51, 29 October 2008 (UTC) ## Distributed with Windows XP This article states that Calibri is distributed with Windows XP; however, I have never experienced this. In fact, I've never seen it on a pre-Vista system with anything lower than Office 2007 installed. -- Nik Rolls (talk) 07:01, 17 October 2009 (UTC) ## "Calibri was designed by Lucas de Groot for Microsoft to take advantage of Microsoft's ClearType rendering technology." (I'm new at this, so please put me right if I go about things wrongly. - Thanks) The above sentence is not clear to me.
Does it mean that Calibri works well on a computer where ClearType is available but not well where simpler rendering is done? In that case Calibri might be best avoided where maximum compatibility is important. Rjarvi1 (talk) 13:32, 25 January 2010 (UTC) Take a look at this link http://www.microsoft.com/typography/ClearTypeInfo.mspx; it explains exactly how ClearType rendering works.--Jesus.coronas (talk) 09:46, 23 October 2010 (UTC) ## "Unique features, which are common nowadays"? Do you find the cognitive dissonance jarring too, or is it just me? —Preceding unsigned comment added by 128.32.44.91 (talk) 23:47, 26 February 2010 (UTC) ## Pronunciation of "Calibri" I'm trying to find the correct pronunciation of this font's name. The word itself is Italian, and means "gauges," but I do not speak Italian, so I am unsure of the pronunciation. I've found three different pronunciations so far: • ka-LEE-bree : This is what my instincts say the pronunciation would be in Italian, and Google Translate's speaking bot pronounces it this way. • KAL-ih-bree : This is how the voiceover actor on Microsoft's e-learning course for Office Excel 2010 (course # 10296, http://helpstudent.microsoftelearning.com/ ) pronounces it, but I'm not convinced he's pronouncing it correctly. (Calibri is the new default font for Excel.) • ka-LIHB-ree : This is how the bot at http://www.howjsay.com/index.php?word=calibri pronounces it. Neither the English nor the Italian Wiktionary is any help here. Can any Italian speakers help? kevyn (talk) 21:57, 27 October 2010 (UTC) Lucas de Groot, the designer of Calibri, pronounces it ka-LEE-bree in his native language of Dutch (Netherlands). — Preceding unsigned comment added by 12.4.141.8 (talk) 15:33, 9 April 2013 (UTC) That would be an excellent authoritative source to base our information on. Can you link to where you found that info?
—♦♦ AMBER(ЯʘCK) 16:30, 9 April 2013 (UTC) ## Smaller than normal fonts The ClearType family of fonts is smaller than normal fonts, which means they often break webpages or documents if a font substitute is applied (e.g., Arial, etc.). This complaint can be found on numerous blogs and such, but I've not found a formal article on the subject. If anyone finds one, can you link? —Preceding unsigned comment added by 155.105.7.44 (talk) 10:02, 19 April 2011 (UTC) ## Swapped capital and small phi Hello, during work with Excel 2010 I've found a problem with the Greek alphabet. I described the problem on the Calibri wiki page (this page) and someone responsible for the correctness of this page refused my change with a "source needed" explanation. What source could I provide if this is a bug, and as a bug it is not documented? As I'm the first one to encounter this problem, there is no other source I can use as a "source". What is the correct way to add this information to Wikipedia? My original change: In the version present in Windows 7 (possibly others), the positions of the letter phi and the small letter phi are swapped. Both the Normal and Bold types are affected; the Italic and Bold Italic types are not affected. This behavior was confirmed on the Czech version of Windows. — Preceding unsigned comment added by Slwat (talkcontribs) 11:31, 22 September 2012 (UTC) Regarding the phi letter in the Calibri font on Windows 7, I'd say that the issue is currently well explained in Phi#Computing, which says: In ordinary Greek text, the character U+03C6 φ is used exclusively, although this character has considerable glyphic variation, sometimes represented with a glyph more like the representative glyph shown for U+03C6 (φ, the “loopy” form) and less often with a glyph more like the representative glyph shown for U+03D5 (ϕ, the “straight” form). Because Unicode represents a character in an abstract way, the choice between glyphs is purely a matter of font design.
While some Greek typefaces, most notably "Porson" typefaces (used widely in editions of classical Greek texts), have a "stroked" glyph in this position, most other typefaces have "loopy" glyphs. This goes for the "Didot" (or "apla") typefaces employed in most Greek book printing, as well as for the "Neohellenic" typeface often used for ancient texts. It is necessary to have the stroked glyph available for some mathematical uses, and U+03D5 GREEK PHI SYMBOL is designed for this function. Prior to Unicode version 3.0 (1998), the glyph assignments in the Unicode code charts were the reverse, and thus older fonts may still show a loopy form φ at U+03D5. I'd say that the creators of Calibri used Unicode tables prior to Unicode v. 3.0 and so swapped the glyphs for GREEK SMALL LETTER PHI and GREEK PHI SYMBOL: they put the ϕ glyph (the “straight” form) at U+03C6 (GREEK SMALL LETTER PHI) and the φ glyph (the “loopy” form) at U+03D5 (GREEK PHI SYMBOL). On the other hand, the glyph Φ is used for U+03A6 (GREEK CAPITAL LETTER PHI), which is OK. --Rprpr (talk) 10:15, 16 April 2014 (UTC) ## Homoglyph? I strongly disagree that 'I' and 'l' are homoglyphs in Calibri. They are very different in height (1300 vs. 1393 units on 2048/em) and visibly different in stem thickness (172 vs. 165 units) and glyph width (516 vs. 470 units). Older sans serifs often have identical ascender and cap heights, for instance Gill Sans, Helvetica (and its impostor Arial) and Univers (the latter also have identical glyph widths for 'I' and 'l'). In these cases 'l' and 'I' could be considered homoglyphs (though the stem widths are not identical, of course), but not in Calibri, because of the height difference. Paragraph removed. 84.208.236.180 (talk) 22:17, 9 September 2013 (UTC) Homoglyph "is one of two or more characters, or glyphs, with shapes that either appear identical or cannot be differentiated by quick visual inspection."
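(Editor's aside, not part of the original discussion: both the swapped-phi bug and the homoglyph question above come down to distinct Unicode codepoints whose glyphs a font may render confusably. A minimal Python sketch, using only the standard library's `unicodedata` module, lists the codepoints involved:)

```python
import unicodedata

# The phi characters discussed above: three distinct codepoints.
# The reported Calibri bug on Windows 7 swaps the *glyphs* for the
# first two; the codepoints themselves are unambiguous.
for ch in ("\u03c6", "\u03d5", "\u03a6"):
    print(f"U+{ord(ch):04X}  {ch}  {unicodedata.name(ch)}")

# 'I' and 'l' are likewise distinct codepoints; whether they are
# homoglyphs depends entirely on the font's glyph design.
for ch in ("I", "l"):
    print(f"U+{ord(ch):04X}  {ch}  {unicodedata.name(ch)}")
```

(Running this shows GREEK SMALL LETTER PHI at U+03C6, GREEK PHI SYMBOL at U+03D5, and GREEK CAPITAL LETTER PHI at U+03A6 — i.e., the swap described above is a glyph-assignment error in the font, not an encoding issue in the text.)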
You can quickly distinguish l from I in Calibri only if you can compare it to a nearby uppercase letter in the same row of text - see this example where the first row reads abclmino (all lowercase letters) and the second row reads abclMINO (4 lowercase and 4 uppercase letters). This homoglyph can be easily exploited in phishing e-mails when writing URLs, e.g. www.paypal.com, www.google.com. --Rprpr (talk) 15:13, 28 March 2014 (UTC) ## Who actually designed Calibri? 109.200.228.69 (talk) 18:27, 25 January 2014 (UTC) The article currently states that Calibri was designed by Rupert Westrup, while the infobox says Lucas de Groot. Who actually designed Calibri? ## Bug: Combining half-rings ("more/less rounded" IPA diacritics) I have noticed that Calibri -- not Calibri Light, just Calibri -- contains a duplicate glyph for the combining half-rings (a̜ a̹). The "Combining Left Half Ring Below" (U+031C) seems to replace its "Combining Right" (U+0339) counterpart, meaning they are encoded as two different glyphs but they display as one and the same: the left half-ring. I don't know if this is worth adding to the article, but it might give someone a heads-up if they're about to use the typeface to write in the International Phonetic Alphabet: those diacritics are used to express respectively "more" and "less" lip-rounding on vowels. --86.150.215.183 (talk) 12:30, 7 July 2014 (UTC) ## Protected edit request on 11 July 2017 Thank you -a Ali.muslim (talk) 10:29, 11 July 2017 (UTC) Not done: please establish a consensus for this alteration before using the {{edit protected}} template. Callanecc (talkcontribslogs) 12:26, 11 July 2017 (UTC) Not done: it's not clear what changes you want to be made. Please mention the specific changes in a "change X to Y" format.
(Guessing this is already fixed, and that the previous responder forgot to remove this request template) (tJosve05a (c) 02:51, 12 July 2017 (UTC) ## Editing Dispute @Chrosby and Saqib: I do not understand what the dispute is, tbh. I learned that this typeface is part of a judicial case currently in Pakistan, but what are the reasons for including or not including it, from both of you? Emphrase - 💬 | 📝 11:10, 11 July 2017 (UTC) @Emphrase: The investigation team that was probing the corruption case found that the docs submitted to them by the guilty party are fake, on the grounds that if they were signed in 2006, how can the Calibri fonts (released for usage in 2007) be used on them in 2006? Hope you get it? Honestly, we still need to find out when the fonts were actually released for commercial usage. The sources that are currently being used were published today and could be unauthentic. --Saqib (talk) 12:02, 11 July 2017 (UTC) The Dispute: Claim A: Calibri was never available to the general public before January 2007. Therefore any image scan of an original document dated January 2006 but set in the Calibri font is a forgery. Claim B: Calibri was distributed as part of the Office 2007 Beta since late 2005. Thus it's not impossible to have a document scanned with the font Calibri and dated before January 2007. Objective: Win Wikipedia credibility by pushing one's favored claim to be available on Wikipedia. Solution to Dispute: If there exists any URL reference of Calibri being included in the Office 2007 Beta back in 2005, let it be mentioned; otherwise keep the page from being edited. Binmahmood (talk) 14:56, 11 July 2017 (UTC) Content that needs correction with consensus 1. Change "In 2017, the font was used as evidence in a Pakistani government corruption case Panama Papers case (Pakistan)." to "In 2017, JIT report raises doubts about the use of 'Calibri' font in evidence papers submitted to them in Panama Papers case (Pakistan)." 2. Remove "Date released 2007" info.
This is not correct. It was released before 2007 but, yes, it had been made the default font of Microsoft Office 2007. 3. Add under header Availability: "This font has been set as a default in Microsoft Office 2007. However, it was already available in Microsoft Windows Vista before Microsoft Office 2007." 4. Microsoft Windows Vista was released in 2007. — Preceding unsigned comment added by 39.42.226.104 (talk) 17:07, 12 July 2017 (UTC) Ali.muslim (talk) 15:44, 11 July 2017 (UTC) ## Protected edit request on 11 July 2017 Change "In 2017, the font was used as evidence in a Pakistani government corruption case Panama Papers case (Pakistan)." to "In 2017, JIT report raises doubts about the use of 'Calibri' font in evidence papers submitted to them in Panama Papers case (Pakistan)." Remove "Date released 2007" info. This is not correct. Add under availability: "This font has been set as a default in Microsoft Office 2007. However, it was available in Windows Vista before Microsoft Office 2007."[1][2] I would welcome further discussion if required. And would request you to please run some research on your end as well. Thank you -ali Ali.muslim (talk) 14:45, 11 July 2017 (UTC) Not done: the first because this is the first time JIT is mentioned so some background or at least a link is required for the reader.  Done the second, as no one has opposed. — Martin (MSGJ · talk) 10:08, 14 July 2017 (UTC) @MSGJ: you removed the release date, which I oppose. Please see Talk:Calibri#Consensus_version. --Saqib (talk) 10:14, 14 July 2017 (UTC) You did not comment for 3 days so I assumed this was uncontroversial. Is there consensus that the release date should remain? — Martin (MSGJ · talk) 10:19, 14 July 2017 (UTC) I didn't respond here because the page was locked, and so I thought it didn't make sense to respond to requests being made until we reach a consensus.
The version below says "released to the general public on January 30, 2007", so I think we should go with it and should state in the infobox as well that the font was released in 2007. I am open to hearing what Fvasconcellos suggests. --Saqib (talk) 10:48, 14 July 2017 (UTC) Okay, I have reverted for now. But I don't understand why you felt that it didn't make sense to respond. — Martin (MSGJ · talk) 10:59, 14 July 2017 (UTC) ## Pakistan Muslim League Social Media Cell Vandalism The edits on the Calibri Wikipedia page came within hours of the JIT Report being released. I've done some digging around, and all the IP addresses seem to point towards the Pakistan Muslim League (N) social media team trying to manipulate facts by making unfounded edits. The first edit was made at 06:24 GMT on 11 July 2017 by IP address 119.153.59.151, where several mentions of the font being released in 2007 were wrongfully edited to 2004. The IP address has been tracked down to Karachi, Pakistan at 24°54′20″N 67°04′56″E / 24.9056°N 67.0822°E.[1] Wikipedia editors stepped in and made the proper corrections. The font was created in 2004 but released in 2007. The second edit was made at 08:11 GMT on 11 July 2017 by IP address 203.99.59.46, where several mentions of the font being released in 2007 were wrongfully edited to 2004 again. The IP address has been tracked down to Islamabad, Pakistan at 33°41′45″N 73°00′41″E / 33.6957°N 73.0113°E, which narrows it down to Street 54 in Sector F10 of Islamabad.[2] The latter two locations are quite interesting, since the PML social media team is suspected of being based in Sector F10. I'm going to dig around some more and try to update everyone as soon as I get more information. If this is true, Maryam Nawaz is directly tampering with Wikipedia articles and it should be exposed to the media.
GO NAWAZ GO — Preceding unsigned comment added by 87.109.155.159 (talk) 01:46, 12 July 2017 (UTC) --PAKHIGHWAY (talk) 16:13, 11 July 2017 (UTC) ## Protected edit request on 11 July 2017 Please do not let people edit this font as people are trying to save a corrupt political party on corruption charges by changing this entry. https://thenextweb.com/world/2017/07/11/microsofts-default-font-is-at-the-center-of-a-government-corruption-case/#.tnw_S2KdZxvB 175.156.124.221 (talk) 17:15, 11 July 2017 (UTC) Already done Already protected until July 18. (tJosve05a (c) 02:53, 12 July 2017 (UTC) ## Protected edit request on 11 July 2017 188.248.38.166 (talk) 17:47, 11 July 2017 (UTC) It was available in 2004 in Microsoft, so edit the release date. Not done: please provide reliable sources that support the change you want to be made. (tJosve05a (c) 02:53, 12 July 2017 (UTC) Hi, I try to look after a lot of the font articles. This exact topic has come up before - people have forged documents with modern software and backdated them. Here is the best of my knowledge on this topic. The 2007 "general release" date is correct. You can read expert testimony on this from Thomas Phinney, one of the world's leading experts on font digitisation, who has testified in court on the topic in a similar case, and commentary from its designer, who was commissioned in 2002 and finished work in 2004.[1][2] (I added these references but they were reverted; perhaps they can be added back in.) Now, Vista and Office 2007 were in development for a long time, and betas were issued. Microsoft had publicised how it commissioned new fonts to take advantage of its new ClearType technology, so this Wikipedia article was actually created in 2005, and other articles reviewing Calibri appeared on other websites in 2005 as well. But this forgery case is not our concern until we have reliable sources on the topic.
(I will say, though, that it's very unlikely that a professional document would be set using beta software.) Blythwood (talk) 18:38, 11 July 2017 (UTC) References 1. ^ Phinney, Thomas. "Calibri reached the general public on January 30, 2007, with the release of Microsoft Office 2007 and Windows Vista on that date.". Quora. Retrieved 11 July 2017. 2. ^ Berry, John D.; De Groot, Lucas. "Case Study: Microsoft ClearType". Lucasfonts. Retrieved 11 July 2017. I have been on an extended vacation from Wikipedia, because edit wars are tiresome, and because I sometimes do original research, which is (understandably) not welcome from Wikipedia authors. My main concern with the current locked version of the article is that the "design date" is certainly not the single year of 2004, and in fact goes back to late 2002 (and perhaps later than 2004 for final tweaks, I would want to check that with Luc de Groot). See “Now Read This: The Microsoft ClearType Font Collection” p. 48, where he discusses creating the typeface. Secondarily, I am not entirely comfortable with the label of “humanist” for this typeface in the opening sentence of the article. Although de Groot does make passing reference to humanism in the body of the piece I reference above, it is comparatively with Arial, not as an absolute. The opening sentence of the Calibri section of that official Microsoft publication calls Calibri a “modern” sans serif, which seems a much more apt label. The cap proportions are modern, not old style, so despite the open apertures of some shapes such as Ccea, I classify it as grotesque rather than humanist. Thomas Phinney (talk) 20:56, 11 July 2017 (UTC) Thomas, thank you so much for taking the time to comment. I don't watch this Talk page and had not come back to the article since protecting it. Given your concerns and Blythwood's note above, I believe it would be a good idea to revert the article to this version. It doesn't address the humanist vs. 
modern problem (which I think can be tackled once the dispute has blown over), but as the protecting administrator, I'd rather not become involved in content issues. Also: I believe Calibri first became available for use in word processing (not "to the general public" per se) in late May 2006 with the release of the Office 2007 beta. Is that correct? Might find a reference for that from Microsoft... Fvasconcellos (t·c) 12:28, 12 July 2017 (UTC) DAWN has published a decent piece on the availability of the font. We can use it to fix this page perhaps. --Saqib (talk) 13:45, 12 July 2017 (UTC) Sure, that looks solid, and is (unsurprisingly) consistent with my research. — Thomas Phinney (talk) 18:22, 12 July 2017 (UTC) Great. So who is going to update the page? Perhaps @Hoary:? --Saqib (talk) 08:04, 13 July 2017 (UTC) Reading Thomas Phinney, I guess I remembered correctly in #Freeware? Grabware?. As I remember, there was a note about the font and how it used the ClearType technology, and I had to run some additional program to make it work. Jeblad (talk) 15:28, 13 July 2017 (UTC) The font doesn't "use" ClearType, but rather its design and hinting are optimized for ClearType. Nobody has to install or run anything special to make Calibri work on older computers, as long as their computer can handle data-fork TrueType fonts (TTF), which is just about anything issued in the past 17 years or more, and all Windows computers since Windows 3.1. — Preceding unsigned comment added by Tphinney (talkcontribs) 22:14, 16 July 2017 (UTC) ## A Proud Moment for Wikipedia! Thank You Admins & Editors It's rare that a Wiki article gets directly embroiled in an international-level political scandal. The speed and efficiency with which this article was protected and its integrity preserved by the admins are an example and proof that the Wiki model works.
One of the admins' protection summaries even made the news: "[protected] from editing until July 18, 2017, or until editing disputes have been resolved" https://www.dawn.com/news/1344654. Thank you all! Cheerz Code16 (talk) 11:39, 12 July 2017 (UTC) Yes, this one received a lot of press coverage. Kudos to . --Saqib (talk) 11:44, 12 July 2017 (UTC) Saqib, are you a part of the PMLN social media team? If true, you should reveal your conflict of interest. Egopearl (talk) 12:13, 12 July 2017 (UTC) ## Revert Calibri Article to July 4, 2017 Edit I hereby ask you to please revert Calibri back to the July 4, 2017 edit. It is from July 10 onward that people, mainly from Pakistan, have tried to edit that article and add false information to lend credence to the findings of the JIT report. Before that, this article had no mention of a "release date", and many have tried to change its creation date. This JIT report was submitted before the Supreme Court on July 10, 2017, and people have tried to support the claims in that JIT report by changing the Wiki article. The current version has locked in false information and has no citations for the "release date". So this article should be reverted back to where it is not controversial.
--Awaisraad (talk) 17:30, 12 July 2017 (UTC) ## Protected edit request on 12 July 2017 175.107.200.178 (talk) 13:29, 12 July 2017 (UTC) ## Protected edit request on 12 July 2017 Hello. For this (Calibri) page, a correction is needed for the following sentence: In 2017, the font was used as evidence in a Pakistani government corruption case Panama Papers case (Pakistan).[3] It should be corrected by inserting the following sentence: In 2017, the font was used as evidence in (at the time Prime Minister of Pakistan) Nawaz Shareef and family corruption case Panama Papers case (Pakistan).[3] Reason: It would be helpful for the reader in the future and our next generations, in case the reader accesses this page after several years, and it also makes it more clear who exactly did this corruption in the Panama Papers case (Pakistan). Thank you very much! Khan A Pakistani Citizen 194.25.158.132 (talk) 15:03, 12 July 2017 (UTC) • Not needed. It's clear enough. Code16 (talk) 19:28, 12 July 2017 (UTC) • Not needed. The link provides any detail wanted. However, the wording that you thought needed improvement ("a Pakistani government corruption case Panama Papers case (Pakistan)") struck me too as needing (stylistic) improvement, and therefore I have just now done this (to "the Pakistani government–related "Panama Papers" case"). -- Hoary (talk) 03:41, 13 July 2017 (UTC) ## Protected edit request on 12 July 2017 Mention it completely that it was used by Mariam Nawaz Sharif, daughter of the Prime Minister of Pakistan, to justify her property bought in 2006, even before the release date. The documents were dated 2006 and were presented in the Supreme Court of Pakistan. 39.50.39.176 (talk) 19:40, 12 July 2017 (UTC) • That doesn't directly relate to this article. You can add stuff like that to the Panama Case page. Code16 (talk) 20:11, 12 July 2017 (UTC) • No. This is an article about a typeface, and not about a legal case.
-- Hoary (talk) 03:43, 13 July 2017 (UTC) ## this font was started in 2004 this font was started in 2004 167.98.11.50 (talk) 22:16, 12 July 2017 (UTC) • No. If you say precisely what change you want made, then your suggestion will be judged on its merits. Vague nudges in this or that direction will be ignored. -- Hoary (talk) 03:46, 13 July 2017 (UTC) ## Protected edit request on 12 July 2017 - GO NAWAZ GO 86.98.74.245 (talk) 02:32, 13 July 2017 (UTC) GO NAWAZ GO • No. You seem to make no suggestion for the article. -- Hoary (talk) 03:48, 13 July 2017 (UTC) ## Unavailability as "a freeware" We're told For use in other operating systems, such as GNU/Linux, cross-platform use and web use, it is not available as a freeware. And we're told exactly the same thing about Candara and Cambria (typeface). (I didn't look at the others.) I hadn't known that "freeware" was countable. I'd start by removing the "a", resulting in "not available as freeware". But that's still bizarre. There seems to be an implication here that, aside from "cross-platform use" (whatever that might be) and for "web use", and for Windows and OS X, it is available as freeware. Thus, using a Windows computer to write the CSS for a web page, I mustn't use .pushingmyluck {font-family:candara, sans-serif;} because if I did, then Microsoft could/would demand money (from me? from viewers of the web page?). Uhhh.... The following is a wild guess: The typeface is neither free software nor freeware, and is not legally available for other operating systems (such as GNU/Linux and Android). It is therefore unsuitable for web pages and for documents that should be accessed via these operating systems. Comments? -- Hoary (talk) 06:51, 13 July 2017 (UTC) • Hi, I would just delete this section completely as a statement of the bleedin' obvious. It's a copyrighted Microsoft product, and they don't give it away except as part of their software products. 
(You actually can buy a license to it to use on a computer system that doesn't have it - I guess in particular if you wanted to create an iPhone app using it or something - but I can't imagine many people doing this.) Also, I think it might be a good idea to put back in those references I mentioned above and change createddate to 2002-4. 07:13, 13 July 2017 (UTC) On further reflection, Blythwood, I think there is something to this, and that it is indeed about freeware: the writer wants to describe each of these typefaces as unlike Microsoft's earlier "Core fonts for the Web". How about: The typeface is neither legally available for tinkering nor freeware; and unlike Microsoft's earlier Core fonts for the Web, its use for other operating systems (such as GNU/Linux and Android) requires additional payment. It is therefore unsuitable for web pages and for documents that should display via these operating systems with this typeface. However, this may be a bit lumpy. I'm purposely avoiding the matter of when the typeface was first issued. That's a separate matter. These references that you mention: are they directly relevant to this particular matter? -- Hoary (talk) 01:01, 16 July 2017 (UTC) typo fixed Hoary (talk) 03:59, 17 July 2017 (UTC) ## Protected edit request on 13 July 2017 202.142.166.162 (talk) 11:59, 13 July 2017 (UTC) (Non-admin response) Not done, as no change has been requested. 17:07, 13 July 2017 (UTC) ## Protected edit request on 13 July 2017 The paragraph about the Panama Papers case could mention that the case hinged on a purported 2006 document using the font, when the first public release of the font was in 2007. That should be sufficient to explain why the font itself was important in the case. Thanks, Luc "Somethingorother" French 14:39, 13 July 2017 (UTC) Not done for now: Please supply the exact wording you are proposing, and please obtain consensus for the addition. — Martin (MSGJ · talk) 10:26, 14 July 2017 (UTC) ## Freeware? Grabware?
I am pretty sure I downloaded this for use on Ubuntu before 2007, probably in 2005 or early winter 2006. (I had an office in downtown Oslo at the time; that's why I remember when I downloaded the files.) I am not so sure whether I ran a conversion program on it to be able to use it. I am definitely not sure whether it was officially freeware or simply openware, or even "grabware". A copy on an external disk reset the dates, so it can't be used as any kind of proof of when the file was grabbed. Jeblad (talk) 15:00, 13 July 2017 (UTC) ## Consensus version With Tphinney's contributions here and on Quora, and now that several articles with comments from Luc de Groot himself have been published in reliable sources, I think we can move forward on this. If there is no reasonable objection, I would be happy to update the article lead to read as follows (adapted from [3]): Calibri (/kəˈliːbri/) is a sans-serif typeface family designed by Lucas de Groot in 2002–2004 and released to the general public on January 30, 2007, with Microsoft Office 2007 and Windows Vista.[1][2] In Office 2007, it replaced Times New Roman as the default typeface in Word[3] and replaced Arial as the default in PowerPoint, Excel, Outlook, and WordPad. It has remained the default font in Microsoft Office 2010, 2013 and 2016, and is now the default font in Office for Mac 2016. Creator de Groot described its subtly rounded design as having "a warm and soft character". Calibri is part of the ClearType Font Collection, a suite of fonts from various designers released with Windows Vista. All start with the letter C to reflect that they were designed to work well with Microsoft's ClearType text rendering system, a text rendering engine designed to make text clearer to read on LCD monitors.
The other fonts in the same group are Cambria, Candara, Consolas, Constantia and Corbel.[4] In 2017, the font was used as evidence in the Pakistani government–related "Panama Papers" case.[5][6]

• Sounds good, but delete the penultimate sentence in the first paragraph as obvious: it became the default in 2007 in all versions of Office and it's kept that role ever since. The Panama Papers thing, while funny, is ephemera and likely to not matter in a few years, so I wouldn't keep it in the intro; and it's worth noting that it's not the first case of this. With a few more sources added in now, I would just put in the "availability" section something like: Several cases have been reported in which documents could be shown to be forged because they claimed to date from before Calibri was included in Office.[1][2] In 2017, the font came to public attention as evidence in the Pakistani government–related "Panama Papers" case, in which a document supposedly signed in 2006 was typed up using it.[3][4] De Groot said that there was "absolutely zero chance" that the document was not a forgery.[5] Also add (from that Dutch reference) somewhere that de Groot is currently working on a Hebrew version. Otherwise sounds all OK. Blythwood (talk) 00:40, 14 July 2017 (UTC)

• I am fine with the changes, but I suggest we shouldn't add primary sources such as Quora.com and lucasfonts.com. --Saqib (talk) 08:22, 14 July 2017 (UTC)

Per policy, primary sources are fine for providing descriptive/factual/historical information without subsequent analysis or interpretation, which is the case here. A good secondary source to support the 2007 release date is Typography, Referenced by Jason Tselentis. But there is also valuable, uncontroversial information in Now Read This, for instance, which is also a primary source; I don't think citing it would be a problem. @MSGJ and Hoary: would you care to weigh in?
I would like to see support/objection from more users and admins before rolling this out. Fvasconcellos (t·c) 15:42, 14 July 2017 (UTC)

By the way, there are also plenty of secondary sources supporting the May 2006 release date of Office 2007 Beta 2 as its first "public" (i.e., not general/retail) release, which we can mention in "Availability". Blythwood had noted the beta in Talk:Calibri#Comments above; if we can mention it with reliable sources, so much the better. Fvasconcellos (t·c) 15:51, 14 July 2017 (UTC)

This Dawn story dated 12 July quoted Wikipedia, in fact. It says "The first public beta version, according to a Wikipedia entry, was released on June 6, 2006". And instead of citing Quora.com, I suggest you cite this interview of Thomas Phinney in Pakistan Today perhaps. --Saqib (talk) 19:25, 14 July 2017 (UTC)

I would not cite Dawn to support the beta claim, specifically because its source was Wikipedia itself. There are better secondary sources from the time. Thomas's interview is a great alternative to Quora. Fvasconcellos (t·c) 19:40, 14 July 2017 (UTC)

Similarly, this Newsweek story reads "The first public beta version was released on June 6, 2006", but it was quoting the Dawn story, and you know whom the Dawn story was quoting. --Saqib (talk) 19:47, 14 July 2017 (UTC)

Responding to the ping, I am uninvolved with this article and strictly neutral. So please don't ask me for an opinion :) — Martin (MSGJ · talk) 21:59, 14 July 2017 (UTC)

MSGJ, I'm almost totally uninvolved with this article myself, and assume that this was one reason why I was invited to give an opinion. Looking at the suggested rewrite that I think comes from User:Blythwood, its first sentence, quote Several cases have been reported in which documents could be shown to be forged because they claimed to date from before Calibri was included in Office. unquote, sounds a bit awkward to me, and hard to understand.
How about quote Several allegations of forgery have been made about documents using Calibri but dated from before Calibri was included in Office. unquote? (Not having read the details, I'm not sure if the claim that Calibri was used is itself uncontroversial in most/all of these cases. Imaginably, some of the people accused of having produced spurious Calibri-printed documents have claimed that no, they used some lesser-known but very similar font. NB I'm not saying that this has happened; just wondering if it has happened.) -- Hoary (talk) 00:44, 16 July 2017 (UTC)

It is quite safe to say "many" rather than "several." Just the cases I have been asked about hit the upper end of "several." Perhaps a more-clear sentence would be "Many cases have been reported in which documents were shown to be forged, thanks to a purported creation date before Calibri was available." As for Hoary's other concern, of course I can't speak for cases I don't know about, but none of the cases I have been involved in or aware of had any dispute on whether it was Calibri. Nobody has tried to claim they used something else. Thomas Phinney (talk) 18:19, 16 July 2017 (UTC)

Many cases have been reported in which documents were shown to be forged, thanks to a purported creation date before Calibri was available.: I like almost all of this by Thomas Phinney (preferring it to my own), except for qualms about the last word, "available". It's my (mis?)understanding that the typeface was kind-of-available during the period when a lot of these documents are dated, but [and it's a huge but] was unavailable to most people and very little known or used. -- Hoary (talk) 03:58, 17 July 2017 (UTC)

How about before Calibri had been made available to the public? We have established definitively that Calibri was not available to anyone in the "general public" before Office 2007 Beta 2 and the corresponding Vista beta came out in May/June 2006. (That is borne out by contemporary sources.)
Fvasconcellos (t·c) 20:14, 17 July 2017 (UTC)

Yes, that's good. -- Hoary (talk) 08:37, 18 July 2017 (UTC)

• Support the change requested by Fvasconcellos. After the creator of the font has weighed in himself, and media has covered it, this merits the change. Code16 (talk) 00:32, 17 July 2017 (UTC)

• Support, as proposed by . Tomorrow the page is being unlocked, therefore I suggest changes be made as soon as possible as per consensus. --Saqib (talk) 16:44, 17 July 2017 (UTC)

Implemented: I have rolled out a version which I believe includes all of the edits supported by consensus. The article itself needs plenty of work, content-wise. The lock will be lifted automatically in a few hours. If disruptive editing recurs, please let me know and I will reinstate semi or full protection. Fvasconcellos (t·c) 06:38, 18 July 2017 (UTC)

Thanks. Since the page views have decreased significantly, I am not expecting much vandalism. --Saqib (talk) 06:46, 18 July 2017 (UTC)

## Protected edit request on 14 July 2017

dear sir, there are some crooks and corrupt politicians from pakistan, who are trying to edit this page to cover their lies and half truths to camouflage the crimes they committed against their own nation. I would like to appeal that this article on Calibri must be kept protected against any alterations for a definite period and the contents must be verified with the font creator before any public editing. 175.140.175.255 (talk) 06:52, 14 July 2017 (UTC)

It's already been taken care of by Wiki editors. Pakistan Muslim League's Social Media Cell will not be allowed to make un-sourced edits to this page. This is Wikipedia, not Twitter. They can get away with making fake profiles and harassing people on Twitter, they won't get far doing that on Wikipedia, that's for sure. --PAKHIGHWAY (talk) 01:19, 15 July 2017 (UTC)
# (p)_j notation in the paper Castaño-Martínez and López-Blázquez (2005)

I am trying to implement a few equations for the distribution of chi square from the paper Castaño-Martínez and López-Blázquez (2005). In equation (5.1), I don't understand what $(p)_j$ means. Could you please explain this notation to me? Thanks

• Subscription access required. That doesn't rule out answers, but answers are much more likely if you gave a paragraph or so so that the notation could be seen in context. – Nick Cox Oct 31 '17 at 7:25
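For what it's worth, in series expansions of this kind $(p)_j$ usually denotes the Pochhammer symbol (rising factorial), $(p)_j = p(p+1)\cdots(p+j-1)$ with $(p)_0 = 1$; whether that is the intended reading here would need checking against equation (5.1) in the paper itself. A small sketch of that reading:

```python
def pochhammer(p, j):
    """Rising factorial (p)_j = p * (p + 1) * ... * (p + j - 1); (p)_0 = 1."""
    result = 1.0
    for k in range(j):
        result *= p + k
    return result

# Example: (3)_4 = 3 * 4 * 5 * 6
print(pochhammer(3, 4))
```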
# Direct evidence of ferromagnetism in a quantum anomalous Hall system

## Abstract

Quantum anomalous Hall (QAH) systems are of great fundamental interest and potential application because of their dissipationless conduction without the need for an external magnetic field [1–9]. The QAH effect has been realized in magnetically doped topological insulator thin films [10–14]. However, full quantization required extremely low temperature (T < 50 mK) in the earliest works, although it has been significantly improved by modulation doping or co-doping of magnetic elements [15,16]. Improved ferromagnetism has been shown in these thin films, yet direct evidence of long-range ferromagnetic order is lacking. Herein, we present direct visualization of long-range ferromagnetic order in thin films of Cr and V co-doped (Bi,Sb)2Te3 using low-temperature magnetic force microscopy with in situ transport. The magnetization reversal process reveals typical ferromagnetic domain behaviour, that is, domain nucleation and possibly domain wall propagation, in contrast to much weaker magnetic signals observed in the endmembers, possibly due to superparamagnetic behaviour [17–19]. The observed long-range ferromagnetic order resolves one of the major challenges in QAH systems, and paves the way towards high-temperature dissipationless conduction by exploring magnetic topological insulators.

## References

1. Haldane, F. D. M. Model for a quantum Hall effect without Landau levels: condensed-matter realization of the 'parity anomaly'. Phys. Rev.
Lett. 61, 2015–2018 (1988).
2. Onoda, M. & Nagaosa, N. Quantized anomalous Hall effect in two-dimensional ferromagnets: Quantum Hall effect in metals. Phys. Rev. Lett. 90, 206601 (2003).
3. Liu, C. X., Qi, X. L., Dai, X., Fang, Z. & Zhang, S. C. Quantum anomalous Hall effect in Hg1−yMn1−yTe quantum wells. Phys. Rev. Lett. 101, 146802 (2008).
4. Qi, X. L., Hughes, T. L. & Zhang, S. C. Topological field theory of time-reversal invariant insulators. Phys. Rev. B 78, 195424 (2008).
5. Yu, R. et al. Quantized anomalous Hall effect in magnetic topological insulators. Science 329, 61–64 (2010).
6. Qiao, Z. H. et al. Quantum anomalous Hall effect in graphene from Rashba and exchange effects. Phys. Rev. B 82, 161414 (2010).
7. Nomura, K. & Nagaosa, N. Surface-quantized anomalous Hall current and the magnetoelectric effect in magnetically disordered topological insulators. Phys. Rev. Lett. 106, 166802 (2011).
8. Zhang, H., Lazo, C., Bluegel, S., Heinze, S. & Mokrousov, Y. Electrically tunable quantum anomalous Hall effect in graphene decorated by 5d transition-metal adatoms. Phys. Rev. Lett. 108, 056802 (2012).
9. Ezawa, M. Valley-polarized metals and quantum anomalous Hall effect in silicene. Phys. Rev. Lett. 109, 055502 (2012).
10. Chang, C.-Z. et al. Experimental observation of the quantum anomalous Hall effect in a magnetic topological insulator. Science 340, 167–170 (2013).
11. Checkelsky, J. G. et al. Trajectory of the anomalous Hall effect towards the quantized state in a ferromagnetic topological insulator. Nat. Phys. 10, 731–736 (2014).
12. Kou, X. et al. Scale-invariant quantum anomalous Hall effect in magnetic topological insulators beyond the two-dimensional limit. Phys. Rev. Lett. 113, 137201 (2014).
13. Kou, X. et al. Metal-to-insulator switching in quantum anomalous Hall states. Nat. Commun. 6, 8474 (2015).
14. Feng, Y. et al. Observation of the zero Hall plateau in a quantum anomalous Hall insulator. Phys. Rev.
Lett. 115, 126801 (2015).
15. Mogi, M. et al. Magnetic modulation doping in topological insulators toward higher-temperature quantum anomalous Hall effect. Appl. Phys. Lett. 107, 182401 (2015).
16. Ou, Y. et al. Enhancing the quantum anomalous Hall effect by magnetic codoping in a topological insulator. Adv. Mater. 30, 1703062 (2018).
17. Lachman, E. O. et al. Visualization of superparamagnetic dynamics in magnetic topological insulators. Sci. Adv. 1, e1500740 (2015).
18. Grauer, S. et al. Coincidence of superparamagnetism and perfect quantization in the quantum anomalous Hall state. Phys. Rev. B 92, 201304 (2015).
19. Lee, I. et al. Imaging Dirac-mass disorder from magnetic dopant atoms in the ferromagnetic topological insulator Crx(Bi0.1Sb0.9)2−xTe3. Proc. Natl Acad. Sci. USA 112, 1316–1321 (2015).
20. Bednorz, J. G. & Muller, K. A. Possible high Tc superconductivity in the Ba–La–Cu–O system. Z. Phys. B 64, 189–193 (1986).
21. Wu, M. K. et al. Superconductivity at 93 K in a new mixed-phase Y–Ba–Cu–O compound system at ambient pressure. Phys. Rev. Lett. 58, 908–910 (1987).
22. Maeda, H., Tanaka, Y., Fukutomi, M. & Asano, T. A new high-Tc oxide superconductor without a rare earth element. Jpn J. Appl. Phys. 27, L209–L210 (1988).
23. Schilling, A., Cantoni, M., Guo, J. D. & Ott, H. R. Superconductivity above 130 K in the Hg–Ba–Ca–Cu–O system. Nature 363, 56–58 (1993).
24. Chang, C. Z. et al. High-precision realization of robust quantum anomalous Hall state in a hard ferromagnetic topological insulator. Nat. Mater. 14, 473–477 (2015).
25. Grauer, S. et al. Scaling of the quantum anomalous Hall effect as an indicator of axion electrodynamics. Phys. Rev. Lett. 118, 246801 (2017).
26. Chang, C.-Z. et al. Chemical-potential-dependent gap opening at the Dirac surface states of Bi2Se3 induced by aggregated substitutional Cr atoms. Phys. Rev. Lett. 112, 056801 (2014).
27. Li, W. et al.
Origin of the low critical observing temperature of the quantum anomalous Hall effect in V-doped (Bi,Sb)2Te3 film. Sci. Rep. 6, 32732 (2016).
28. Anderson, P. W. Absence of diffusion in certain random lattices. Phys. Rev. 109, 1492–1505 (1958).
29. Andriotis, A. N. & Menon, M. Defect-induced magnetism: Codoping and a prescription for enhanced magnetism. Phys. Rev. B 87, 155309 (2013).
30. Qi, S. F. et al. High-temperature quantum anomalous Hall effect in n–p codoped topological insulators. Phys. Rev. Lett. 117, 056804 (2016).
31. Nagaosa, N., Sinova, J., Onoda, S., MacDonald, A. H. & Ong, N. P. Anomalous Hall effect. Rev. Mod. Phys. 82, 1539–1592 (2010).
32. Ruderman, M. A. & Kittel, C. Indirect exchange coupling of nuclear magnetic moments by conduction electrons. Phys. Rev. 96, 99–102 (1954).
33. Kou, X. F. et al. Interplay between different magnetisms in Cr-doped topological insulators. ACS Nano 7, 9205–9212 (2013).
34. Wang, W., Chang, C.-Z., Moodera, J. S. & Wu, W. Visualizing ferromagnetic domain behavior of magnetic topological insulator thin films. npj Quant. Mater. 1, 16023 (2016).
35. Li, M. et al. Experimental verification of the Van Vleck nature of long-range ferromagnetic order in the vanadium-doped three-dimensional topological insulator Sb2Te3. Phys. Rev. Lett. 114, 146802 (2015).
36. Li, H. et al. Carriers dependence of the magnetic properties in magnetic topological insulator Sb1.95−xBixCr0.05Te3. Appl. Phys. Lett. 101, 072406 (2012).
37. Checkelsky, J. G., Ye, J., Onose, Y., Iwasa, Y. & Tokura, Y. Dirac-fermion-mediated ferromagnetism in a topological insulator. Nat. Phys. 8, 729–733 (2012).
38. Sessi, P. et al. Signatures of Dirac fermion-mediated magnetic order. Nat. Commun. 5, 5349 (2014).
39. Chang, C.-Z. et al. Zero-field dissipationless chiral edge transport and the nature of dissipation in the quantum anomalous Hall state. Phys. Rev. Lett. 115, 057206 (2015).
40. Wang, W. et al.
Visualizing weak ferromagnetic domains in multiferroic hexagonal ferrite thin film. Phys. Rev. B 95, 134443 (2017).
41. Wang, W. et al. Visualizing ferromagnetic domains in magnetic topological insulators. APL Mater. 3, 083301 (2015).
42. Rugar, D. et al. Magnetic force microscopy: General principles and application to longitudinal recording media. J. Appl. Phys. 68, 1169–1183 (1990).

## Acknowledgements

We thank C. Chang for helpful discussions and P. Sass for proofreading the manuscript. This work at Rutgers is supported by the Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, US Department of Energy under Award numbers DE-SC0008147 and DE-SC0018153. The work at Tsinghua University is supported by the National Natural Science Foundation of China and the Ministry of Science and Technology of China.

## Author information

Contributions: W.Wu, K.H. and Y.W. conceived the project. W.Wu and W.Wa designed the MFM experiment. W.Wa performed MFM experiments with in situ transport measurements, and analysed the data. Y.O. synthesized the MBE films under the supervision of K.H. and Q.X. C.L. and Y.W. carried out transport characterization of the films. W.Wu and W.Wa wrote the manuscript with inputs from all authors.

Corresponding author: Weida Wu.

## Ethics declarations

Competing interests: The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Supplementary information

Supplementary figures 1–9, Reference

Citation: Wang, W., Ou, Y., Liu, C. et al. Direct evidence of ferromagnetism in a quantum anomalous Hall system. Nature Phys 14, 791–795 (2018).
https://doi.org/10.1038/s41567-018-0149-1
# BHOPAL GAS TRAGEDY, India: Case study

The incident investigation should deal with the chemical accident presented below. The case study should include the following components:

• A brief introduction of the scene and the accident
• Safety and health impacts of the accident
• A statement on the legal aspects of the incident
• Analysis of the key points from a project management perspective about the incident and accident
• Summary of the article's conclusions and your own opinions

Conduct a case study on this disaster identifying safety management, safety audits, accident investigation, and industrial hygiene issues. Your case study should be a minimum of 500–600 words and contain the components listed above. Below I have attached all of the references that can be utilized to conduct this case...
# How can I make this use less memory? [Sieve of Eratosthenes]

This topic is 4778 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

```cpp
#include <iostream>
#include <iomanip>
#include <set>

void Sieve(std::set<unsigned int>& s, unsigned int n)
{
    unsigned int m, i;
    s.erase(s.begin(), s.end());
    std::cout << "Filling set: " << 1 << " to " << n << std::endl << std::endl;
    s.insert(1);
    s.insert(2);
    for (m = 3; m <= n; m += 2)
        s.insert(m);
    std::cout << "Set Filled, " /*<< s.size() << " Items." << std::endl*/;
    std::cout << "Removing non-primes:" << std::endl << std::endl;
    for (m = 3; m * m <= n; m++)
    {
        if (s.find(m) != s.end())
        {
            i = 2 * m;
            while (i <= n)
            {
                s.erase(i);
                i += m;
            }
        }
    }
    std::cout << "Non-primes removed: " << s.size() << " items remaining" << std::endl;
}

int main()
{
    unsigned int count = 0;
    std::set<unsigned int> primes;
    std::set<unsigned int>::iterator itr;
    Sieve(primes, 35000000);
    std::cin >> count;
    itr = primes.begin();
    if (!count)
    {
        while (itr != primes.end())
        {
            count++;
            std::cout << " " << *itr << " ";
            if (count % 10 == 0)
                std::cout << std::endl;
            itr++;
        }
        std::cout << std::endl;
        std::cin >> count;
    }
    return 0;
}
```

The Sieve of Eratosthenes is used to calculate prime numbers in a given set of integers. The above program works great except for one thing. The problem is that I can only calculate about 35 million before it uses up my entire memory, about 1GB, at which point it starts using the swap file and the program slows to a snail's pace while my hard drive thrashes around. With 35 million it takes about a minute to complete, but with 40 million I let it run several hours before I killed it. I wanted to calculate all primes in the unsigned int range 0 to 0xFFFFFFFF, which is 4 billion ints. So I need to make this use a lot less memory.
##### Share on other sites

How about just using one bit per number in a static array? I'm not sure if that is actually smaller, as you have to store 1 bit per non-prime as well. You can for sure save some memory by using a mark and sweep approach on a static array of unsigned ints.

##### Share on other sites

Instead of std::set, I'd use an array of bits, one for each odd number bigger than 1. First set all the bits, then go through the array, and for each bit set, you clear all the bits further in the array, kind of like this:

```cpp
for (int i = 2; i < N; i++)
    if (bitset[i])
        for (int k = i + i; k < N; k += i)
            bitset[k] = 0;
```

If you use one bit per odd integer, you'd need 2^28 bytes = 256 Megs to find all primes less than 2^32.

AP: It would take less memory, as he's initializing his std::set with all the odd integers, taking 2^31 * 4 bytes = 8 Gigs of raw data for all up to 2^32, plus the overhead from std::set, which is in some proportion to the amount of elements in it.

##### Share on other sites

Quote: Original post by Anonymous Poster
How about just using one bit per number in a static array? I'm not sure if that is actually smaller, as you have to store 1 bit per non-prime as well. You can for sure save some memory by using a mark and sweep approach on a static array of unsigned ints.

That should use ~512MiB (i.e. 1/8th of 4GiB) to cover 0..0xFFFFFFFF, so that would indeed come out to a smaller amount.

##### Share on other sites

Firstly, do not populate the set first. If a number isn't prime, the sieve doesn't need to remember it. Instead, generate your set at runtime using a for loop and test each number as it comes but do not store non-primes in the memory for more than one iteration. Also, on a mathematical point, the higher up in the set of integers you go, it becomes exponentially less likely that each number is prime. If I understand your code correctly, you're using less memory as time goes on, since you populate the set at the start and then remove the non-prime elements.
The problem is that primes become incredibly sparse when you enter the millions, so getting from 35 million primes to 40 million is going to take a very long time.

##### Share on other sites

How do I make an array of bits? There is no bit type as far as I know.

##### Share on other sites

Quote: Original post by Grain
How do I make an array of bits? There is no bit type as far as I know.

There's bound to be a bitvector class somewhere (probably in boost), but if you can't find one, here's the basics:

```cpp
class mybits
{
    std::vector<unsigned char> vec;
public:
    mybits(int bits) { vec.resize(bits / 8); }
    void set(int bit, bool on = true)
    {
        int b = bit % 8;
        int p = bit / 8;
        int mask = 1 << b;
        vec[p] = (vec[p] & ~mask) | (on ? mask : 0);
    }
    void clear(int bit) { set(bit, false); }
};
```

I won't guarantee that that'll work (since I just typed it from the top of my head), but the idea should be clear enough.

##### Share on other sites

You'll have to do it manually, probably by creating a custom container class. Check out the bitwise logic and shift operators. You basically want to have a bunch of 32-bit integers, shift the required bit into place and "and" it with 1. Here is an interesting concept which you could use: Ranges (from here). One bit for each possible odd number of a 32-bit integer is 256 megabytes of data. It should fit nicely [smile].

##### Share on other sites

Just for fun, here's a class. This "should" work [wink]. It will automatically return "not-set" for even numbers. Uses a puny 256 megabytes [smile]. You should probably test it before use, though.
```cpp
class oddbits
{
private:
    // enough space for every odd unsigned int in 32 bits
    static const size_t SIZE = 0x4000000;
    uint32 d[SIZE];

public:
    oddbits()
    {
        for (size_t i = 0; i < SIZE; i++)
            d[i] = 0;
    }

    bool get(unsigned int i)
    {
        return ((i & 1) == 1) && (((d[i >> 6] >> ((i >> 1) & 0x1F)) & 1) == 1);
    }

    void set(unsigned int i, bool v)
    {
        if ((i & 1) == 0)
            return;
        uint32 x = 1u << ((i >> 1) & 0x1F);
        d[i >> 6] = (d[i >> 6] & ~x) | (v ? x : 0);
    }
};
```

##### Share on other sites

Quote: Original post by Motz
Firstly, do not populate the set first. If a number isn't prime, the sieve doesn't need to remember it. Instead, generate your set at runtime using a for loop and test each number as it comes but do not store non-primes in the memory for more than one iteration. Also, on a mathematical point, the higher up in the set of integers you go, it becomes exponentially less likely that each number is prime. If I understand your code correctly, you're using less memory as time goes on, since you populate the set at the start and then remove the non-prime elements. The problem is that primes become incredibly sparse when you enter the millions, so getting from 35 million primes to 40 million is going to take a very long time.

Quoted for value. Instead of storing all the numbers and then removing the composite values, you can consider each number for prime-ness as you get to it, and insert it. Since you will then not have to remove values from the middle, you can use a more space-efficient container as well: std::vector.
```cpp
#include <iostream>
#include <iomanip>
#include <vector>

void Sieve(std::vector<unsigned int>& s, unsigned int n)
{
    unsigned int m;
    s.erase(s.begin(), s.end());
    s.push_back(1);
    s.push_back(2);
    for (m = 3; m <= n; m += 2)
    {
        int count = s.size();
        bool prime = true;
        for (int i = 1; i < count; ++i) // start at 1 to skip the leading 1
        {
            if (!(m % s[i]))
            {
                // current value is divisible by a former prime; discard it
                prime = false;
                break;
                // In case you're wondering, solving this with a goto is just as tricky;
                // you'd need to put it at the *end* of the outer loop, with no
                // action following it, which is rather bizarre.
            }
        }
        if (prime)
            s.push_back(m);
    }
}
```

Of course, this isn't the Sieve of Eratosthenes any more (pedantically speaking anyway), but it's how it's normally done :s
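The bit-per-odd-number idea sketched in the replies above can be prototyped in a few lines. Here is a Python version (the thread's code is C++, but the arithmetic is identical): bit i stands for the odd number 2*i + 1, so covering every 32-bit integer takes 2^31 bits = 2^28 bytes = 256 MiB, matching the estimate in the replies.

```python
def odd_bit_sieve(n):
    """Primes <= n, storing one bit per odd number (bit i <-> 2*i + 1)."""
    if n < 2:
        return []
    nbits = n // 2 + 1                            # slots for 1, 3, 5, ...
    buf = bytearray(b"\xff" * ((nbits + 7) // 8)) # all candidates start "prime"
    buf[0] &= 0xFE                                # 1 is not prime
    m = 3
    while m * m <= n:
        if (buf[(m // 2) >> 3] >> ((m // 2) & 7)) & 1:
            # clear the odd multiples m*m, m*m + 2m, ...
            for c in range(m * m, n + 1, 2 * m):
                j = c // 2
                buf[j >> 3] &= ~(1 << (j & 7))
        m += 2
    primes = [2]
    for i in range(1, nbits):
        if 2 * i + 1 <= n and (buf[i >> 3] >> (i & 7)) & 1:
            primes.append(2 * i + 1)
    return primes
```

For n near 2^32 the buffer stays at 256 MiB, versus gigabytes for a std::set of node-based entries, which is the memory blow-up the original poster ran into.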
# RSA given q, p and e? [closed]

I am given the q, p, and e values for an RSA key, along with an encrypted message. Here are those values:

p = 1090660992520643446103273789680343
q = 1162435056374824133712043309728653
e = 65537
sample ciphertext = 299604539773691895576847697095098784338054746292313044353582078965

I tried calculating d with the Extended Euclidean algorithm, but it came out as 1.9404359e+59, which I am almost certain is incorrect. How should I calculate d?

• Being only almost certain that this is incorrect, I suggest that you should study (extended) Euclid (once again). – DrLecter Oct 2 '14 at 21:01
• Since people are not being terribly helpful, I will say that you need to make sure you are using an arbitrary precision integer calculator when you do this kind of math. Scientific notation will not cut it; you need all the digits in order for it to work. Having said that, your decryption exponent is still not right. Remember that $e\cdot d = 1 \mod (p-1)(q-1)$, so it's easy to check if your answer is correct. – Travis Mayberry Oct 2 '14 at 22:13
• I suggest using a bigint library to do the computation. Or try using Python, Pari/GP, Maple, Sage, ... – ddddavidee Oct 3 '14 at 5:39
• Sounds like you're using doubles instead of big integers. – CodesInChaos Oct 3 '14 at 20:28
• I'm voting to close this question as off-topic because this is asking for an example on how to implement the Extended Euclidean algorithm. Code requests are off topic on Crypto.SE. – Maarten Bodewes Apr 12 '19 at 15:19

I used the following Python code to compute the private exponent and perform decryption.
It uses the extended Euclidean algorithm:

```python
def egcd(a, b):
    x, y, u, v = 0, 1, 1, 0
    while a != 0:
        q, r = b // a, b % a
        m, n = x - u * q, y - v * q
        b, a, x, y, u, v = a, r, u, v, m, n
    gcd = b
    return gcd, x, y

def main():
    p = 1090660992520643446103273789680343
    q = 1162435056374824133712043309728653
    e = 65537
    ct = 299604539773691895576847697095098784338054746292313044353582078965

    # Compute n
    n = p * q

    # Compute phi(n)
    phi = (p - 1) * (q - 1)

    # Compute modular inverse of e
    gcd, a, b = egcd(e, phi)
    d = a
    print("d: " + str(d))

    # Decrypt ciphertext
    pt = pow(ct, d, n)
    print("pt: " + str(pt))

if __name__ == "__main__":
    main()
```

The private exponent is: $$522550976146069021499058157764354003336248628589338241039193114657$$

The plaintext is: $$83678269879577658472958479799572658268$$

which works out to a 128-bit value, so I'm assuming it's correct.

• Additional hint: watch for a striking regularity in the decimal expression of the plaintext. – fgrieu Oct 8 '14 at 8:03
• This code might work for these input values, but it crashes with some other input values. – Atte Juvonen Apr 26 '17 at 21:26
• ValueError: pow() 2nd argument cannot be negative when 3rd argument specified – Aaron Esau Mar 17 '18 at 6:27

Here is a slightly modified version of @user13741's answer.

```python
import math

def getModInverse(a, m):
    if math.gcd(a, m) != 1:
        return None
    u1, u2, u3 = 1, 0, a
    v1, v2, v3 = 0, 1, m
    while v3 != 0:
        q = u3 // v3
        v1, v2, v3, u1, u2, u3 = (u1 - q * v1), (u2 - q * v2), (u3 - q * v3), v1, v2, v3
    return u1 % m

def main():
    p = 1090660992520643446103273789680343
    q = 1162435056374824133712043309728653
    ct = 299604539773691895576847697095098784338054746292313044353582078965
    e = 65537

    # Compute n
    n = p * q

    # Compute phi(n)
    phi = (p - 1) * (q - 1)

    # Compute modular inverse of e
    d = getModInverse(e, phi)
    print("d: " + str(d))

    # Decrypt ciphertext
    pt = pow(ct, d, n)
    print("pt: " + str(pt))

if __name__ == "__main__":
    main()
```

• This is just a code dump without any explanation why you even altered the code.
– Maarten Bodewes Apr 12 '19 at 15:16 • As previously stated, the version of @user13741 answer crashes on some valid user inputs. As far as my tests went this version handles all valid inputs properly. – bananabr Apr 14 '19 at 21:56 • It is previously stated in a comment somewhere below another question. You need to explain this in your answer, provide a link to the previous answer (or even comment, right click on the time behind the comment to obtain the link) and of course indicate how your code solves the issue. This could be a very helpful contribution if well applied. – Maarten Bodewes Apr 15 '19 at 11:54 • I would recommend using a robust lib like gmpy2 for the modular inverse. phi = (p - 1) * (q - 1) d = gmpy2.invert(e, phi) m = pow(c, d, n) Then print out m as hex and convert the hex bytes to ascii characters. – Albert Veli Mar 14 at 7:33
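On Python 3.8 and later, the hand-rolled inverse in the answers above can be replaced by the built-in three-argument `pow`, which computes modular inverses directly (and raises ValueError when none exists). A minimal sketch with the question's values:

```python
p = 1090660992520643446103273789680343
q = 1162435056374824133712043309728653
e = 65537
ct = 299604539773691895576847697095098784338054746292313044353582078965

n = p * q
phi = (p - 1) * (q - 1)

# Python 3.8+: pow(e, -1, phi) is the modular inverse of e mod phi
d = pow(e, -1, phi)

# Decrypt
pt = pow(ct, d, n)
print("d :", d)
print("pt:", pt)
```

This yields the same d and plaintext reported in the accepted answer, and it sidesteps the negative-exponent crash discussed in the comments, since the result is always reduced into [0, phi).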
# Raspberry Pi crashes when I try to build the Catkin workspace

Currently I am trying to follow this guide here to start using the Robot Operating System (currently ROS Indigo). I am at the very final stage, where I am trying to build the Catkin workspace using the command:

    sudo ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/indigo

This command is from Section 3.3. It works perfectly fine for the first 50 packages or so, but starts hanging when it reaches rospack. I am currently using Raspbian Jessie with the x11 desktop environment, and I am trying to install the Desktop GUI version of ROS. I have followed every instruction up to this point and am a bit baffled as to my next step.

Here's a picture of the terminal up until when the Raspberry Pi freezes.

Is there a way to change the command mentioned above so that I can ignore rospack completely and deal with the issue once the dependencies have been resolved (the objective of Section 3.3 of the guide)? It can also be observed that the Raspberry Pi hits 100% processing and the screen blacks out after a little bit. The clock skew warning comes up on every package; I am not sure if this has something to do with the crash. Please ask me if you would like a higher-definition picture of the terminal up until the freeze and crash.

UPDATE

Here is a picture of what happens after a little while when I try to move the mouse. I'm going to try and run this on a different terminal altogether. The current terminal is Yakuake (Guake).

UPDATE 2

As I suspected, it has nothing to do with the terminal I am using. I have found this resource from this very question, to ignore the package rospack. I am going to try this out in a bit.

The freezes I encounter are usually either due to disk I/O (especially memory swapping) or hardware drivers.
I guess the latter can be excluded, so you should monitor the memory exhaustion (e.g. use watch -n 0.5 free -m). You may try to disable swap (sudo swapoff -a), so the compilation is simply killed when the physical memory is full.

Edit: I am not sure whether catkin_make_isolated automatically parallelizes. If so, you might want to disable that (maybe append "-j1"?). If it is the memory problem and disabling parallelization doesn't help, try to lower the complexity of the compilation of the problematic files:

• compile without the -g flag (in CMake, set CMAKE_BUILD_TYPE to Release)
• reduce optimization (this will make it slower) by explicitly using -Og or even -O0

Otherwise, getting more RAM is usually an option. The above are just ideas from the top of my head. Make sure it is the memory swapping problem first. Also: have you tried waiting?

But I don't want the compilation killed; rospack is certainly very important and I would like to know how to actually build it in my catkin workspace. ( 2016-12-07 04:15:26 -0500 )

I'll try appending the -j1 option and seeing what happens. Let me get back to you in a few minutes. ( 2016-12-07 04:24:40 -0500 )

The -j1 option seems to be working, but I am facing other errors. I guess that's a topic for another question! ( 2016-12-07 04:48:17 -0500 )

-j1 did the trick. Thanks! I have covered the setup as a writeup here: https://hareshmiriyala.wordpress.com/... ( 2018-02-26 18:56:00 -0500 )

-j1 did the trick ( 2019-08-13 07:06:44 -0500 )
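For reference, the -j1 workaround from the comments is just appended to the original build command, which passes it through to the underlying make so that only one compile job runs at a time (a sketch of the invocation as I understand the thread, not a tested recipe):

```shell
# Watch free memory in a second terminal while the build runs:
#   watch -n 0.5 free -m
sudo ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release \
    --install-space /opt/ros/indigo -j1
```

With a single job, peak memory use during the heavy packages stays far lower, at the cost of a much longer build.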
## Curious Constants (part 2 of 4)

This continues my series of notes for my talk on some mathematical constants. I'd like to make a small diversion from talking about these constants to provide some background on continued fractions. What follows is a brief, abridged summary of Khinchin's short book, "Continued Fractions," which I highly recommend.

## Continued Fractions

A (simple, regular) continued fraction is an expression (possibly finite)

$a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\ddots}}}$

which I abbreviate $[a_0;a_1,\ldots,a_n,\ldots]$. Here the values $a_i$ are natural numbers (allowing $a_0$ to be 0, but all others positive), and we call them the elements of the continued fraction. If the continued fraction is finite, ending at $a_k$, we will say that it has order $k$. Finite continued fractions clearly correspond to rational numbers.

We can convert any number to a continued fraction by repeatedly splitting off integer parts and then inverting. For example

$\frac{13}{9}=1+\frac{4}{9}=1+\frac{1}{9/4}=1+\cfrac{1}{2+\frac{1}{4}}$

The same process works for real numbers; we just iterate without end.

Fix a continued fraction $[a_0;a_1,\ldots,a_n,\ldots]$, corresponding to the real number $\alpha$. Let $s_k=[a_0;a_1,\ldots,a_k]$, which we call the $k$th segment. This is a fraction that is already in lowest terms, which we typically denote $p_k/q_k$, and call the $k$th convergent. We highlight, now, a few equations and inequalities concerning convergents:

$\displaystyle \frac{p_{k-1}}{q_{k-1}}-\frac{p_k}{q_k} =\frac{(-1)^k}{q_{k-1}q_k}$

$q_k \geq 2^{\frac{k-1}{2}}$

$\displaystyle \frac{1}{q_k(q_{k+1}+q_k)}<\left| \alpha-\frac{p_k}{q_k}\right|<\frac{1}{q_kq_{k+1}}$

The convergents of even order form an increasing sequence, and those of odd order form a decreasing sequence, with the limit, $\alpha$, in the middle. One of the most interesting things about continued fractions is their relation to best rational approximations.
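The splitting-off process, and the standard recurrences for the convergents, are easy to mechanize. A quick sketch (the helper names are mine, not from Khinchin):

```python
from fractions import Fraction

def cf_expansion(x, n_terms=20):
    """Elements of the continued fraction of a Fraction x:
    repeatedly split off the integer part, then invert the remainder."""
    a = []
    for _ in range(n_terms):
        q = x.numerator // x.denominator
        a.append(q)
        x = x - q
        if x == 0:          # rational input: expansion terminates
            break
        x = 1 / x
    return a

def convergents(a):
    """Convergents p_k/q_k via p_k = a_k p_{k-1} + p_{k-2},
    q_k = a_k q_{k-1} + q_{k-2}."""
    p_prev, p = 1, a[0]
    q_prev, q = 0, 1
    out = [Fraction(p, q)]
    for ak in a[1:]:
        p, p_prev = ak * p + p_prev, p
        q, q_prev = ak * q + q_prev, q
        out.append(Fraction(p, q))
    return out
```

For the worked example above, `cf_expansion(Fraction(13, 9))` gives the elements [1; 2, 4], and its convergents are 1, 3/2, 13/9.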
Loosely, every convergent is a best rational approximation, and vice-versa. For more specifics, see Khinchin's book. Along the same lines, if $a/b$ is an irreducible fraction with $|\alpha-a/b|<1/(2b^2)$, then $a/b$ is a convergent of the continued fraction for $\alpha$.

For every irrational $\alpha$ with bounded elements, and small enough value for $c$, $|\alpha-p/q|<c/q^2$ has no integer solutions for $p$ and $q$. However, if the elements are unbounded, then for any $c$, there are infinitely many solutions.

More generally, Liouville showed that if $\alpha$ is an irrational of degree $n$ (i.e., it is algebraic with minimal polynomial of degree $n$), then there is a $c>0$ such that for any $p,q$, $|\alpha-p/q|>c/q^n$. Thus, if for every positive $c$, and every positive integer $n$, there are integers $p,q$ such that $|\alpha-p/q|\leq c/q^n$, then $\alpha$ is transcendental. In a sense, this is saying that transcendental numbers have a rapidly converging sequence of rational approximations. This led to the first proofs that chosen numbers were transcendental. To construct a transcendental, you can pick any collection of positive integers $a_0,\ldots,a_k$, then form the continued fraction $[a_0;a_1,\ldots,a_k]$, find its denominator $q_k$, and choose any $a_{k+1}>q_k^{k+1}$. Repeating this process will generate an infinite continued fraction, with unbounded elements, that is provably transcendental.

Considerations like the above, concerning just how good rational approximations can be, earned Klaus Roth a Fields Medal in 1958.

Other posts in this series

1. The First Batch
2. Continued Fractions
3. The Second Batch
4. Resources
# Thread: Probability Question? Choosing bills?

Out of 10 $20 bills, 2 are counterfeit, and 6 bills are chosen at random. What is the probability that both counterfeit bills are chosen?

--my attempt--

C(10,6)/(C(6,2) * C(10,2))

Is this correct? I am not sure. Thanks

2. ## Re: Probability Question? Choosing bills?

Originally Posted by blackZ
Out of 10 $20 bills, 2 are counterfeit, and 6 bills are chosen at random. What is the probability that both counterfeit bills are chosen?

Standard notation is $\binom{n}{k}=\frac{n!}{k!(n-k)!}$, n things choosing k. So the answer will be $\frac{\dbinom{8}{4}}{\dbinom{10}{6}}$.

3. ## Re: Probability Question? Choosing bills?

Hello, blackZ!

Out of ten \$20 bills, two are counterfeit. Six bills are chosen at random. What is the probability that both counterfeit bills are chosen?

There are: $_{10}C_6 \:=\:210$ possible outcomes.

There are: 2 fake and 8 real bills. We want: 2 fake and 4 real bills.

There are: $(_2C_2)(_8C_4) \:=\:1\cdot70 \:=\:70$ ways.

$P(\text{2 fake, 4 real}) \:=\:\frac{70}{210} \:=\:\frac{1}{3}$
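Both answers give 1/3, and the counting can be checked exactly with Python's math.comb (a quick cross-check, not part of the thread):

```python
from fractions import Fraction
from math import comb

# Choose 6 of the 10 bills; favorable selections contain both counterfeits,
# i.e. the 2 fake bills together with 4 of the 8 real ones.
p = Fraction(comb(2, 2) * comb(8, 4), comb(10, 6))
```

Exact arithmetic with Fraction avoids any floating-point rounding in the ratio 70/210.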
# Bouncing Block

Challenge: Find the time period of the simple harmonic motion of the spring block system shown in the picture.

Note by Rohan Rao, 3 years ago

It has already been asked on brilliant here · 3 years ago

$2\pi\sqrt{2m/3k}$ · 2 years, 11 months ago

Are the 2 springs of equal spring constants? · 2 years, 10 months ago
Chi Square test with massive difference in sample size

Let's say I have a frequency table of two independent groups, structured like so:

    Control    0.0     1.0     All
    ------------------------------
    False     3648    2205    5853
    True     33480   18132   51612
    All      37128   20337   57465

And I want to run an A/B test to see whether the two randomly assigned populations performed a certain action. Are these two populations so vastly different in sample size that it will mess up the math to see if there is a statistically significant relationship?

• Please give the 2-way table with row and column totals. If 33480 is for Gp A, is it the total in Gp A or is it the count of subjects in A with 'no action'? How was it determined whether a subject is in Gp A or Gp B. Random assignment? // Just the fact that Gps A and B are of different sizes is not a problem. Why they are so different may be. Jan 7, 2021 at 22:50

• Updated the frequency table, and added the margins. The two populations were just randomly assigned. – Pwon Jan 7, 2021 at 23:27

• There are assumptions for the chi-squared test regarding relative sample sizes between groups. I see no problems performing the test. Jan 7, 2021 at 23:42

One possibility is to consider binomial proportions of False responses in the two groups. Is the difference in the observed proportions of False (roughly $$0.098$$ and $$0.108,$$ respectively) statistically significant between the two groups? In R, this test (which uses a normal approximation) is done using the prop.test procedure. (I have opted not to use continuity correction on account of the large sample sizes.)
The null hypothesis that proportions are equal is strongly rejected with P-value $$0.00012 < .05 = 5\%.$$

    prop.test(c(3648, 2205), c(37128, 20337), cor=F)

            2-sample test for equality of proportions
            without continuity correction

    data:  c(3648, 2205) out of c(37128, 20337)
    X-squared = 14.851, df = 1, p-value = 0.0001163
    alternative hypothesis: two.sided
    95 percent confidence interval:
     -0.01540543 -0.00493134
    sample estimates:
        prop 1     prop 2
    0.09825469 0.10842307

An almost-equivalent test is to consider whether a chi-squared test of homogeneity across groups is rejected. The appropriate $$2\times 2$$ table and the results of the test are shown below. Again the null hypothesis of homogeneity between the two groups is strongly rejected.

    False = c(3648, 2205); True = c(33480, 18132)
    TBL = rbind(False, True); TBL
           [,1]  [,2]
    False  3648  2205
    True  33480 18132

    chisq.test(TBL)

            Pearson's Chi-squared test with Yates' continuity correction

    data:  TBL
    X-squared = 14.74, df = 1, p-value = 0.0001234

Notes: You can look at some of the "Related" pages (right margin) found by our robots for more details. Alternatively, you can look at this NIST page on tests of binomial proportions. Another recent related page.

• Thanks for a detailed response. I use python and used chi2_contingency() to arrive at the second result. – Pwon Jan 8, 2021 at 2:27
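For the Python route mentioned in the comment, the uncorrected prop.test numbers can also be cross-checked with the standard library alone, since the Pearson statistic here equals the square of the pooled two-proportion z statistic (a sketch; scipy's chi2_contingency with correction=False should agree):

```python
import math

# Two-proportion z-test, pooled variance; reproduces the uncorrected
# prop.test(c(3648, 2205), c(37128, 20337), cor=F) output above.
x1, n1 = 3648, 37128
x2, n2 = 2205, 20337

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)

z = (p1 - p2) / math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
chi_sq = z * z                               # X-squared with df = 1
p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail
```

The unequal group sizes enter only through the 1/n1 + 1/n2 term in the standard error, which is why they pose no problem for the test itself.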
## Riesz-Thorin interpolation theorem

I had, a while ago, the great pleasure of going through the proof of the Riesz-Thorin interpolation theorem. I believe I understand the general strategy of the proof, though for sure, I glossed over some details. It is my hope that in writing this, I can fill in the holes for myself at the more microscopic level. Let us begin with a statement of the theorem.

Riesz-Thorin Interpolation Theorem. Suppose that $(X,\mathcal{M}, \mu)$ and $(Y, \mathcal{N}, \nu)$ are measure spaces and $p_0, p_1, q_0, q_1 \in [1, \infty]$. If $q_0 = q_1 = \infty$, suppose also that $\nu$ is semifinite. For $0 < t < 1$, define $p_t$ and $q_t$ by $\frac{1}{p_t} = \frac{1-t}{p_0} + \frac{t}{p_1}, \qquad \frac{1}{q_t} = \frac{1-t}{q_0} + \frac{t}{q_1}$. If $T$ is a linear map from $L^{p_0}(\mu) + L^{p_1}(\mu)$ into $L^{q_0}(\nu) + L^{q_1}(\nu)$ such that $\left\|Tf\right\|_{q_0} \leq M_0 \left\|f\right\|_{p_0}$ for $f \in L^{p_0}(\mu)$ and $\left\|Tf\right\|_{q_1} \leq M_1 \left\|f\right\|_{p_1}$ for $f \in L^{p_1}(\mu)$, then $\left\|Tf\right\|_{q_t} \leq M_0^{1-t}M_1^t \left\|f\right\|_{p_t}$ for $f \in L^{p_t}(\mu)$, $0 < t < 1$.

We begin by noticing that in the special case where $p = p_0 = p_1$, $\left\|Tf\right\|_{q_t} \leq \left\|Tf\right\|_{q_0}^{1-t} \left\|Tf\right\|_{q_1}^t \leq M_0^{1-t}M_1^t \left\|f\right\|_p$, wherein the first inequality is a consequence of Hölder's inequality. Thus we may assume that $p_0 \neq p_1$ and in particular that $p_t < \infty$.

Observe that the space of all simple functions on $X$ that vanish outside sets of finite measure is dense in $L^p(\mu)$ for $p < \infty$, and the analogous statement holds for $Y$. To see this, take any $f \in L^p(\mu)$ and a sequence of simple functions $f_n$ with $|f_n| \leq |f|$ that converges to $f$ almost everywhere; then each $f_n \in L^p(\mu)$, from which it follows that each is non-zero only on a set of finite measure. Denote the respective spaces of such simple functions by $\Sigma_X$ and $\Sigma_Y$.
To show that $\left\|Tf\right\|_{q_t} \leq M_0^{1-t}M_1^t \left\|f\right\|_{p_t}$ for all $f \in \Sigma_X$, we use the fact that $\left\|Tf\right\|_{q_t} = \sup \left\{\left|\int (Tf)g d\nu \right| : g \in \Sigma_Y, \left\|g\right\|_{q_t'} = 1\right\}$, where $q_t'$ is the conjugate exponent to $q_t$. We can rescale $f$ such that $\left\|f\right\|_{p_t} = 1$. From this it suffices to show that across all $f \in \Sigma_X, g \in \Sigma_Y$ with $\left\|f\right\|_{p_t} = 1$ and $\left\|g\right\|_{q_t'} = 1$, we have $|\int (Tf)g d\nu| \leq M_0^{1-t}M_1^t$. For this, we use the three lines lemma, whose inequality has the same value on its RHS.

Three Lines Lemma. Let $\phi$ be a bounded continuous function on the strip $0 \leq \mathrm{Re} z \leq 1$ that is holomorphic on the interior of the strip. If $|\phi(z)| \leq M_0$ for $\mathrm{Re} z = 0$ and $|\phi(z)| \leq M_1$ for $\mathrm{Re} z = 1$, then $|\phi(z)| \leq M_0^{1-t} M_1^t$ for $\mathrm{Re} z = t$, $0 < t < 1$.

This is proven via application of the maximum modulus principle to $\phi_{\epsilon}(z) = \phi(z)M_0^{z-1} M_1^{-z} e^{\epsilon z(z-1)}$ for $\epsilon > 0$. The factor $e^{\epsilon z(z-1)}$ serves to ensure that $|\phi_{\epsilon}(z)| \to 0$ as $|\mathrm{Im} z| \to \infty$ for any $\epsilon > 0$.

We now construct a family $f_z$ such that $f_t = f$, where $t \in (0, 1)$ is the fixed value corresponding to the interpolated $p_t$. To do this, we express for convenience $f = \sum_1^m |c_j|e^{i\theta_j} \chi_{E_j}$ and $g = \sum_1^n |d_k|e^{i\psi_k} \chi_{F_k}$, where the $c_j$'s and $d_k$'s are nonzero and the $E_j$'s and $F_k$'s are disjoint in $X$ and $Y$, and raise each $|c_j|$ to the power $\alpha(z) / \alpha(t)$ for some $\alpha$ with $\alpha(t) > 0$. With this, we have $f_z = \displaystyle\sum_1^m |c_j|^{\alpha(z)/\alpha(t)}e^{i\theta_j}\chi_{E_j}$.
Needless to say, we can do similarly for $g$, with $\beta(t) < 1$, $g_z = \displaystyle\sum_1^n |d_k|^{(1-\beta(z))/(1-\beta(t))}e^{i\psi_k}\chi_{F_k}$. Together these turn the LHS of the inequality we desire to prove into a complex function $\phi(z) = \int (Tf_z)g_z d\nu$. To use the three lines lemma, we must satisfy $|\phi(is)| \leq \left\|Tf_{is}\right\|_{q_0}\left\|g_{is}\right\|_{q_0'} \leq M_0 \left\|f_{is}\right\|_{p_0}\left\|g_{is}\right\|_{q_0'} \leq M_0 \left\|f\right\|_{p_t}\left\|g\right\|_{q_t'} = M_0$. It is not hard to arrange that $\left\|f_{is}\right\|_{p_0} = 1 = \left\|g_{is}\right\|_{q_0'}$. A sufficient condition for that is $|f_{is}| = |f|^{p_t/p_0}$ and $|g_{is}| = |g|^{q_t'/q_0'}$, which equates to $\mathrm{Re} \alpha(is) = 1 / p_0$ and $\mathrm{Re} (1-\beta(is)) = 1 / q_0'$. Similarly, we find that $\mathrm{Re} \alpha(1+is) = 1 / p_1$ and $\mathrm{Re} (1-\beta(1+is)) = 1 / q_1'$. From this, we can solve that $\alpha(z) = (1-z)p_0^{-1} + zp_1^{-1}, \qquad \beta(z) = (1-z)q_0^{-1} + zq_1^{-1}$. With these functions inducing a $\phi(z)$ that satisfies the hypotheses of the three lines lemma, our interpolation theorem is shown for such simple functions, from which we extend our result to all $f \in L^{p_t}(\mu)$. To extend this to all of $L^p$, it suffices that $Tf_n \to Tf$ a.e. for some sequence of measurable simple functions $f_n$ with $|f_n| \leq |f|$ and $f_n \to f$ pointwise. Why? With this, we can invoke Fatou's lemma (and also that $\left\|f_n\right\|_p \to \left\|f\right\|_p$ by the dominated convergence theorem) to obtain the desired result, which is $\left\|Tf\right\|_q \leq \lim\inf \left\|Tf_n\right\|_q \leq \lim\inf M_0^{1-t} M_1^t\left\|f_n\right\|_p \leq M_0^{1-t} M_1^t \left\|f\right\|_p$. Recall that convergence in measure is a means to derive a subsequence that converges a.e.
So it is enough to show that $\displaystyle\lim_{n \to \infty} \nu(|Tf_n - Tf| > \epsilon) = 0$ for all $\epsilon > 0$. This can be done by upper bounding with something that goes to zero. By Chebyshev's inequality, we have $\nu(|Tf_n - Tf| > \epsilon) \leq \frac{\left\|Tf_n - Tf\right\|_q^q}{\epsilon^q}$. However, recall that in our hypotheses we have constant upper bounds on $T$ in the $p_0$ and $p_1$ norms respectively, assuming that $f$ is in $L^{p_0}$ and $L^{p_1}$, which we can make use of. So apply Chebyshev with any one of $q_0$ (let's use this) and $q_1$, and upper bound its upper bound with $M_0$ or $M_1$ times $\left\|f_n - f\right\|_{p_0}$, which must go to zero by the dominated convergence theorem (recall that $|f_n| \leq |f|$).

## Convergence in measure

Let $f, f_n (n \in \mathbb{N}) : X \to \mathbb{R}$ be measurable functions on a measure space $(X, \Sigma, \mu)$. $f_n$ converges to $f$ globally in measure if for every $\epsilon > 0$, $\displaystyle\lim_{n \to \infty} \mu(\{x \in X : |f_n(x) - f(x)| \geq \epsilon\}) = 0$.

To see that this yields a subsequence with pointwise convergence almost everywhere, choose $n_k$ increasing such that $\mu(\{x \in X : |f_{n_k}(x) - f(x)| \geq 2^{-k}\}) < 2^{-k}$. (We invoke the definition of the limit here.) Since $\sum_k 2^{-k} < \infty$, the Borel-Cantelli lemma tells us that almost every $x$ lies in only finitely many of these exceptional sets, and hence $f_{n_k}(x) \to f(x)$ for almost every $x$. This naturally extends to every subsequence's having a subsequence with pointwise convergence almost everywhere (the limit of a subsequence is the same as the limit of the sequence, provided the limit exists).

To prove the converse, suppose by contradiction that the set of $x \in X$ for which there are infinitely many $n$ with $|f_n(x) - f(x)| \geq \epsilon$, for some $\epsilon > 0$, has positive measure.
Then, there must be infinitely many $n$ for which $|f_n(x) - f(x)| \geq \epsilon$ holds on a set of positive measure. (If not, the bad pairs would form a countable set in $\mathbb{N} \times X$, whereas uncountably many points are bad for infinitely many $n$.) From this, we obtain a subsequence without a pointwise convergent subsequence.
Posted on:

#### 07 October 2020

Type

A continuing scholarship valued up to $1000 can be extended into upper years if students who entered Renison with a minimum admission average of 80% meet the following conditions:

- They must achieve an overall average of 83.0% each term.
- Students in Social Development Studies must take a minimum of 4 Renison courses per year.
- Students in Honours Arts must take a minimum of 2 Renison courses per year.
- Students must be Renison-registered.

RENISON ENTRANCE SCHOLARSHIPS

If you are applying to post-secondary education for the first time when you apply to Renison, you will be automatically considered for an entrance scholarship. Some conditions apply (see below).

Entrance Scholarship

The Renison Entrance Scholarship recipient with the highest overall admission average above 85% will be designated as The Rev'd Gerald T. Churchill Memorial Entrance Scholarship recipient. This designation is to honour the memory of The Rev'd Gerald T. Churchill, member of the College's Board of Governors from 1977 to 1987. This award offers an additional $500 to the financial value of the Renison University College Entrance Scholarship.

Note 1: If a student qualifies for a Renison Entrance or Continuing Scholarship, in addition to a bursary or fee remission from the College, the amount of the scholarship will be adjusted so that the total amount awarded shall not exceed the total tuition for the period in which the scholarship is held.

Note 2: If a student withdraws or otherwise fails to complete the term(s) covered by an award, the award will be prorated.

Note 3: The Board of Governors of Renison University College reserves the right to make changes without notice in the published value and number of entrance scholarships awarded each year.
THE RITA LEE-CHUI INTERNATIONAL EDUCATION FUND: $500 PER STUDENT

The Rita Lee-Chui International Education Fund exists to support students' international education experiences connected to Renison degree courses and programs through the allocation of a travel bursary. Students participating in international travel related to Renison degree courses or programs are eligible.

Apply to The Rita Lee-Chui International Education Fund

Rita Lee-Chui (1915-2016) was a long-time supporter of Renison. Her legacy at the College includes the Florence Li Tim-Oi Memorial Award and the Florence Li Tim-Oi Memorial Reading Room in Renison's library, both named in honour of her sister, The Rev. Florence Li Tim-Oi, the first woman in the world ordained an Anglican priest. For her commitment to Renison and support of its students, as well as for her passion for travel and education, Renison named this international education fund for Rita as a memorial to her efforts and generosity.

Other Scholarships

ACHIEVEMENT SCHOLARSHIPS

These scholarships are presented annually to a full-time student in each of the first, second and third years of study for outstanding academic performance. They are available to students registered in the Social Development Studies program and to students following an Arts program who have registered through Renison University College.

CHARTWELLS SCHOLARSHIP

The Chartwells Scholarship is available to Renison resident students who are part of the Warrior Academic Leadership Community. A total of $5000 shall be disbursed as follows:

- The student with the highest overall term average (based on a minimum of four courses) shall be awarded up to $3000.
- The student with the second-highest overall term average shall be awarded up to $2000.
If, for any reason, the full amount cannot be disbursed to these two students, the remainder shall be awarded to the student with the third-highest overall term average. This scholarship will be presented at the Awards Dinner, which is the last College dinner of the academic year.
# Electronic – Draw Bode-Plot of a transfer function

I want to draw the bode plot of a transfer function: $$H(j\omega)=\frac{100j\omega T}{j\omega T + 10}, T=1s$$ Now I have $$H(j\omega)=\frac{100}{1 + \frac{10}{j\omega T}}, T=1s$$ Using a double log scale: $$20*\log{H(j\omega)}=20*\log{\frac{100}{1 + \frac{10}{j\omega T}}}, T=1s$$ And can I just insert omega and compute the points for the plot? Like for omega = 1000 $$20*\log{\left(\frac{100}{1 + \frac{10}{2*\pi*1000}}\right)}=40-20*\log{\left(1 + \frac{10}{2*\pi*1000}\right)}=39.9…$$ Is that correct?

You've got to back up a step or three. The transfer function is complex valued so, to plot it, you need two plots, usually magnitude and phase. The magnitude plot is usually log-log but the phase plot is lin-log. So, you need to find the magnitude and then take the log before plotting the Bode magnitude. To find the magnitude, multiply H by its conjugate and then take the root. $$|H(j\omega)|^2 = \dfrac{100}{1 + \dfrac{10}{j \omega T}}\dfrac{100}{1 - \dfrac{10}{j \omega T}} = \dfrac{100^2}{1 + \dfrac{10^2}{(\omega T)^2}}$$ Also, omega is the radian frequency while f is the frequency. So, if omega = 1000, you don't multiply by 2 pi. However if f = 1000, you do.

UPDATE: fixed denominator of transfer function to match OP's original

UPDATE, PART DEUX: We should try to put this transfer function in standard form so that we can identify the asymptotic gain, the type, and the pole/zero frequency. Since the variable \$\omega\$ appears with highest exponent 1, it is a 1st order filter. There are only two types of 1st order filters: low-pass and high-pass. In standard form the OP's transfer function is: \$H(j\omega) = 100 \dfrac{\frac{j\omega}{\omega_0}}{1 + \frac{j\omega}{\omega_0}}\$ \$\omega_0 = \frac{10}{T}\$ Then: \$|H(j\omega)| = 100 \dfrac{\frac{\omega}{\omega_0}}{\sqrt{1 + (\frac{\omega}{\omega_0})^2}}\$ Now, if we stare at this a bit and ask it some questions, we can imagine exactly what this looks like.
When \$\omega << \omega_0\$, the denominator is effectively "1" and so, the transfer function is decreasing by a factor of 10 as \$\omega\$ decreases by a factor of 10. On a log-log scale, this is a line with a slope of +1. When \$\omega >> \omega_0\$, the denominator is effectively \$\frac{\omega}{\omega_0} \$ and so the transfer function is effectively constant with a value of 100. If we plot these two lines on a log-log plot and have them intersect at \$\omega = \omega_0\$, we've created an asymptotic Bode magnitude plot. In fact, it's easy to see that when \$\omega = \omega_0\$, the magnitude is \$\frac{100}{\sqrt{2}}\$ so the lines we plotted are actually the asymptotes of the (magnitude) transfer function. That is, the function approaches these lines at the frequency extremes but never actually gets to them (on a log-log plot, \$\omega = 0 \$ is at negative infinity)
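Those asymptotes and the crossover value are easy to check numerically with complex arithmetic (the helper name H_mag is mine, not from the answer):

```python
import math

def H_mag(w, T=1.0):
    """|H(jw)| for H(jw) = 100*jw*T / (jw*T + 10), so w0 = 10/T rad/s."""
    return abs(100j * w * T / (1j * w * T + 10))

# At w = w0 = 10 rad/s the magnitude is 100/sqrt(2);
# well above w0 it flattens toward 100; well below w0 the
# slope is +1 on a log-log plot (x10 in w gives x10 in |H|).
```

This also illustrates the answer's point about radian frequency: ω is fed in directly, with no factor of 2π.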
# How would I duplicate this plot and table in Mathematica?

I have struggled trying to duplicate this graphic from Spurious correlations using ListLinePlot, but I cannot get the tick options to display the year for every data point. I suspect there is a simple way to display the table, but I don't know where to start. Here is what I have:

    y = Range[2000, 2009];
    d = {5, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1};
    m = {8.2, 7, 6.5, 5.3, 5.2, 4, 4.6, 4.5, 4.2, 3.7};
    ListLinePlot[d, PlotStyle -> {ColorData[97, 1]},
     ImagePadding -> {{50, 50}, {20, 20}}, AspectRatio -> 1/4,
     ImageSize -> Large, Axes -> False, BaseStyle -> {FontSize -> 14},
     Ticks -> {{2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009}, {4.0, 5.0}},
     FrameTicks -> {None, All, None, None},
     Frame -> {True, True, False, False}]

• Please show us what you have tried. Also, check out Column and Grid. – C. E. Apr 29 '18 at 20:44

• Also provide raw data; you should not expect us to manually enter all of the data. – Bob Hanlon Apr 29 '18 at 20:52

• You will also be interested in Plot with multiple Y-axes. – MarcoB Apr 29 '18 at 21:20

• @MarcoB Thanks. That is exactly what I have been trying to do, but I cannot get the x-axis to have a label for every data point, so all the years display properly. – Brian Apr 29 '18 at 21:26

    {mscale, dscale} = MinMax /@ {m, d}
    rescaledm = Rescale[m, mscale, dscale];
    mticks = Charting`FindTicks[dscale, mscale];
    DateListPlot[{d, rescaledm}, {2000}, Joined -> True,
     FrameTicks -> {{Automatic, mticks},
       {DateRange[{2000}, {2009}, "Year"], Automatic}}]

• Very nice and concise – Brian Apr 29 '18 at 22:08

• @Brian, thank you for the accept. – kglr Apr 29 '18 at 22:09

• Why is the last blue data point (for 2009) missing? – Brian Apr 29 '18 at 22:12

• I think the last blue edge is only hidden by the last yellow one. – Henrik Schumacher Apr 29 '18 at 22:14

• @Brian, as Henrik noted, it is there but hidden behind the yellow line.
You can try something like PlotStyle -> {Directive[Opacity[.7], Thickness[.02]], Thick} to see both lines. – kglr Apr 29 '18 at 22:17

This might take you closer to your goal:

    y = Range[2000, 2009];
    d = {5, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1};
    m = {8.2, 7, 6.5, 5.3, 5.2, 4, 4.6, 4.5, 4.2, 3.7};
    mticks = Range[3, 9];
    {scale, shift} = NArgMin[Total[(d - (s m + b))^2], {s, b}];
    mscaled = (scale m + shift);
    ListLinePlot[
     {Transpose[{y, d}], Transpose[{y, mscaled}]},
     ImagePadding -> {{50, 50}, {20, 20}},
     AspectRatio -> 1/4,
     ImageSize -> Large,
     BaseStyle -> {FontSize -> 14},
     FrameTicks -> {
       {Range[4, 5, 0.2], Transpose[{scale mticks + shift, mticks}]},
       {y, None}
       },
     Frame -> {{True, True}, {True, False}},
     InterpolationOrder -> 3,
     PlotMarkers -> None
     ]

The format for FrameTicks and Frame is always {{left, right}, {bottom, top}}. Note also how you can introduce "fake" ticks on the right hand side with Transpose[{scale mticks + shift, mticks}]: the first entry in each sublist is the actual tick position; the second entry is what is shown to the beholder.

I was also a bit puzzled that we cannot specify the x-axis' ticks arbitrarily. But the point is that without explicit x-coordinates, ListLinePlot assumes the x-coordinates to run from 1 to 10, since the list d has length 10. So, we actually can enforce that x-ticks are only plotted at positions specified by y = Range[2000, 2009] --- but the ticks will then be way outside the plot range and hence invisible.

• That is great! I get that you need to scale and shift the 'm' data to plot on the same graph as the 'd' data, but I don't understand the mechanics of what line 4 is doing. – Brian Apr 29 '18 at 21:36

• Very nice. Instead of the Minimize, I'd suggest NArgMin[...] here, which would return the list of optimized parameters directly. The result is the same, of course. – MarcoB Apr 29 '18 at 21:36

• Very good point. Thanks!
– Henrik Schumacher Apr 29 '18 at 21:42 • When I run the code in a new notebook, I get this error, and the right side ticks don't display: Transpose::nmtx: The first two levels of {3.30863 +0.201386 mticks,mticks} cannot be transposed. – Brian Apr 29 '18 at 21:55 • Sorry. Fixed it. Please try again. – Henrik Schumacher Apr 29 '18 at 21:58
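Both answers hinge on the same linear rescaling of the m series onto the d series' range; a hypothetical re-implementation in Python makes the mechanics explicit (the rescale helper mimics Mathematica's Rescale[x, {a, b}, {c, e}]):

```python
# Map the m-series range linearly onto the d-series range so both
# curves can share one pair of axes (as Rescale does in the answer).
def rescale(x, src, dst):
    (a, b), (c, e) = src, dst
    return c + (x - a) * (e - c) / (b - a)

d = [5, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1]
m = [8.2, 7, 6.5, 5.3, 5.2, 4, 4.6, 4.5, 4.2, 3.7]

mscale = (min(m), max(m))   # (3.7, 8.2)
dscale = (min(d), max(d))   # (4.1, 5.0)
rescaled_m = [rescale(x, mscale, dscale) for x in m]
```

The right-hand "fake" ticks then apply the same map in reverse: place a tick at the rescaled position, but label it with the original m value.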
## Saturday, February 27, 2010 ### Sue's Top Ten Issues in Math Education In my last post, I mentioned my top ten list of 'problems'. I decided to call it my top ten 'issues' because I wanted to frame them in the positive - not what's wrong, but how to do it right. However, this is still a list of the top ten problems, because most of this is done wrong in most classrooms. And here they are: 1. If you're going to teach math, you need to enjoy it.* The best way to help kids learn to read is to read to them, lots of wonderful stories, so you can hook them on it. The best way to help kids learn math is to make it a game (see #3), or to make dozens of games out of it. Accessible mysteries. Number stories. Hook them on thinking. Get them so intrigued, they'll be willing to really sweat. 2. If you’re going to teach math, you need to know it deeply, and you need to keep learning. Read Liping Ma. Arithmetic is deeper than you knew (see #6). Every mathematical subject you might teach is connected to many, many others. Heck, I'm still learning about multiplication myself. Over at Axioms to Teach By, I said (back in September), "You don't want the product to be 'the same kind of thing'.  ...   5 students per row times 8 rows is 40 students. So I have students/row * rows = students." Owen disagreed with me, and Burt's comment on my last post got me re-reading that discussion. I think Owen and I may both be right, but I have no idea how to use a compass and straightedge to multiply. I'm looking forward to playing with that. I think it will give me new insight. 3. Games are to math as books are to reading. Let the kids play games (or make up their own games) instead of "doing math", and they might learn more math. Denise's game that's worth 1000 worksheets (addition war and its variations) is one place to start. And Pam Sorooshian has this to say about dice.  Learn to play games: Set, Blink, Quarto, Blokus, Chess, Nim, Connect Four. 4. 
Students are willing to do the deep work necessary to learn math if and only if they’re enjoying it. Which means that grades and coercion are really destructive. Maybe more so than in any other subject. People need to feel safe to take the risks that really learning math requires. Read Joe at For the Love of Learning. (Maybe you'll get to read him here soon.) I'm not sure if this is true in other cultures. Students in Japan seem to be very stressed from many accounts I read; they also do some great problem-solving lessons. 5. Math is not facts (times tables) and procedures (long division), although those are a part of it; more deeply, math is about concepts, connections, patterns. It can be a game, a language, an art form. Everything is connected, often in surprising and beautiful ways. My favorite math ed quote of all time comes from Marilyn Burns: "The secret key to mathematics is pattern." U.S. classrooms are way too focused on procedure in math. It's hard for any one teacher to break away from that, because the students come to expect it, and are likely to rebel if asked to really think. (See The Teaching Gap, by James Stigler.) See George Hart for the artform. I don't know who to recommend for the language angle. Any recommendations? 6. Math is not arithmetic, although arithmetic is a part of it. (And even arithmetic has its deep side.) Little kids can learn about infinity, geometry, probability, patterns, symmetry, tiling, map colorings, tangrams, ... And they can do arithmetic in another base to play games with the meaning of place value. (I wrote about base eight here, and base three here.) 7. Math itself is the authority - not the curriculum, not the teacher, not the standards committee. Read Math Mojo – you can’t want kids to do it the way you do. You have to be fearless, and you need to see the connections. If you’re trying to memorize it, you’re probably being pushed to learn something that hasn’t built up meaning for you. 
See Julie Brennan's article on Memorizing Math Facts. Yes, eventually you want to have the times tables memorized, just like you want to know words by sight. But the path there can be full of delicious entertainment. Learn your multiplications as a meditation, as part of the games you play, ... 8. Just like little kids, who ask why a thousand times a day, mathematicians ask why. Why are there only 5 Platonic (regular) solids? Why does a quadratic (y = x²), which gives a U-shaped parabola as its graph, have the same sort of U-shaped graph after you add a straight line equation (y = 2x + 1) to it? (A question asked and answered by James Tanton in this video.) Why does the anti-derivative give you area? Why does dividing by a fraction make something bigger? Why is the parallel postulate so much more complicated than the 4 postulates before it? 9. Earlier is not better. The schools are pushing academics earlier and earlier. That's not a good idea. If young people learn to read when they're ready for it, they enjoy reading. They read more and more; they get better and better at it; reading serves them well. (See Peter Gray's recent post on this.) The same can happen with math. Daniel Greenberg, working at a Sudbury school (democratic schools, where kids do not have enforced lessons), taught a group of 9- to 12-year-olds all of arithmetic in 20 hours. They were ready and eager, and that's all it took. In 1929, L.P. Benezet, superintendent of schools in Manchester, New Hampshire, believed that waiting until later would help children learn math more effectively. The experiment he conducted, waiting until 5th or 6th grade to offer formal arithmetic lessons, was very successful. (His report was published in the Journal of the NEA. Although some people disagree about the success of this experiment, there is nothing published which contradicts his evidence. I'd like to find more information about how this project ended.) 10. Textbooks are trouble.
Corollary: The one doing the work is the one doing the learning. (Is it the text and the teacher, or is it the student?) Hmm, this shouldn't be last, but when I look at the list, they all seem important. I guess this isn't a well-ordered domain. ;^) Textbook Free: Kicking the Habit is an article by Chris Shore on getting away from using a textbook. (After clicking the link, click on 'Articles'.) I've been duly inspired, and will report in the fall about how it goes for me in my classes to teach without a textbook. See dy/dan on being less helpful (so the students will learn more). 11. Multiplication is not (just) repeated addition, it’s much richer than that. Wait. I said that already... (I warned you, it's just not in my top ten.) What do you see as the biggest issues or problems in math education? ____ *I know, top ten lists are supposed to start at number ten to keep the suspense up. But the suspense is gone - I already told you my top two in my last post. And I can't help it, I just have to start at the top. ## Friday, February 26, 2010 ### What is Multiplication? "Oh, come on, Sue! What kind of a question is that?" ***     ***     ***     ***     ***     ***     ***     ***     ***     ***     ***     *** It seems like a simple question. But when you get involved in discussions about how to teach, it isn't. Many elementary teachers present multiplication as repeated addition, and really, it's much more than that - areas, combinations, stretching, and more. (Here's a cool poster, created by Maria Droujkova.) Many math education experts think that calling it repeated addition is a big problem. [Keith Devlin's articles started this discussion. Jason Dyer just posted on this issue from a computer science perspective.] I personally think that this is one tiny facet of the real problem, and in a comment at Rational Mathematics Education, I said, "I see plenty of problems in the way math gets taught, but this would not be in my top ten list."
(After saying that, I decided to figure out what my top ten list would be - I'll post on that soon.) The top two problems, in my opinion, are that so many elementary teachers don't like math, and that they don't have a deep understanding of the math they teach.* We have a vicious circle going, where those who dislike math teach the young to dislike it - and that's a hard thing to change. Back to the question at hand. Devlin says 'multiplication is not repeated addition'. I agree that it's not just that, but he and others say it's not that at all, and that saying it is messes kids up. I think it's more a case of the translation between English and math-language being rough sometimes. I trust that if we haven't gotten a student to give up thinking, they'll eventually construct their own definition of multiplication, as it becomes clear to them through what they do with it. Here's a scenario: My son (7 years old) wants to know how much 5 dimes are worth. He says 10, 20, 30, 40, 50 while holding up fingers. The process he's using to figure it is skip counting, which feels like repeated addition to me. But what he's thinking about is 5 dimes x 10 cents for each dime, which is multiplicative reasoning. So the repeated addition is the process he uses to solve his multiplication problem. It's important to note here that I didn't suggest this 'problem' to him. It was something he wanted to know. I didn't tell him he was doing multiplication, and I don't plan to 'extend his learning' with other problems that involve multiplicative reasoning. I expect his natural curiosity will lead him to explore lots of situations where he'll reason in whatever way helps him figure out what he wants to know. It's important to me not to push. I've noticed plenty of kids of mathematicians who don't like math, and I really want to give him the space to develop his own relationship with the beauty in math. 
Most people who write about this are imagining a conventional classroom, where all the students are supposed to be 'learning' the same thing at the same time. (An impossibility, no?) When I imagine that classroom, I see a little child coming up to the teacher after class, worried about this multiplication thing, and the well-meaning teacher trying to be reassuring, and saying, "Don't worry, it's just repeated addition." What the teacher is doing is connecting the new material with something old and familiar.  This is how our brains work; through connections. The teacher is also recognizing the child's concern about what they will have to do, and since math lessons are so procedurally-based in this country (on my top ten problems list), she's telling the child s/he can do the multiplication problems by repeated addition. So there are positive aspects to this sort of response, but there are also ways in which it's somewhat problematic. As we learn new concepts, we go through a phase where we feel confused. Recognizing that, and even celebrating it, is important. (Thanks, Maria, for that insight. It's hard to celebrate our confusion, though, when we're worried about grades.) I'm trying to think of an example that most kids would feel at home with... It's not confusion exactly, but when you learn to ride a bike, it feels all wrong, until suddenly, it feels right. Learning something new can be like that. Maybe that's a part of what I'd tell that worried child. I might also refer to the repeated addition metaphor to help them feel calm, since I know plenty of people shut down when confronted by the mysteries of math. But I'd also give them an easy area model to think with, so they'd see a basic 'real' multiplication problem. Repeated addition can get at it, and yet it's really something new - a shape made with 3 rows of 5 squares is also 5 rows of 3 squares. 
But (and here's my problem with imagining that 'conventional' sort of classroom) I think it's better to be playing with areas enough that the kids will tell me, "Oh wow, look at this! 5 threes is the same as 3 fives!" Devlin also says exponentiation is not repeated multiplication, and functions are not processes. He says you're starting with a lie if you explain these concepts using these metaphors. I disagree. We start out thinking of exponents as meaning repeated multiplication, and then we expand and extend that, to see exponential growth in a more continuous sense. (A 4% annual growth rate can be helpfully seen as multiplying by 1.04 each year, but the growth doesn't happen at one point in the year - it's smooth.) Here's Devlin on functions: (Dec 08) ...a significant proportion of university mathematics students do not have the correct concept of a function.  Do you? Here is a simple test. ... Consider the "doubling function" y = 2x (or, if you prefer more sophisticated notation, f(x) = 2x.) Question: When you start with a number, what does this function do to it? If you answered, "It doubles it," you are wrong. No, no going back now and saying "Well what I really meant was ..." That original answer was wrong, and shows that, even if you "know" the correct definition, your underlying concept of a function is wrong. Functions, as defined and used all the time in mathematics, don't do anything to anything. They are not processes. They relate things. I think seeing functions as processes is a fine perspective to start from - and very few students will go far enough in math to need another point of view. I also think Devlin's insistence is likely to make people think math is stranger and harder and less knowable than it really is. If our elementary teachers were well-educated mathematically, they could weigh in with their own opinions on this subject. I'm concerned that Devlin's tone sets up the notion that there is one right answer to this. 
(And his question was quite a setup, wasn't it? "What do functions do?" "Gotcha! They don't do anything.") Real mathematicians ask why, which is what I'm doing, along with some of the people  I respectfully disagree with. But others are focusing more on the 'right answer' to this pedagogical question than on the reasons, which encourages the wrong approach to math and its pedagogy. Here's one more part I'd like to think about (Keith Devlin, July 08): Part of the problem, I suspect, is that many people feel a need to make things concrete. But mathematics is abstract. That is where it gets its strength. ... Where does the "abstracted from everyday experience and developed by iterated metaphors" mathematics end and the "rule-based mathematics that has to be bootstrapped" begin? What if the mathematics that has to be bootstrapped in order to be properly mastered includes the real numbers? What if it includes the negative integers? What if it includes the concept of multiplication (a topic of three of my more recent columns)? What if teaching multiplication as repeated addition (see those previous columns) or introducing negative numbers using an everyday (explicit) metaphor (such as owing money) results in an incorrect concept that leads to increased difficulty later when the child needs to move on in math? I don't believe that explicit metaphors like these get in the way, unless just one metaphor is used all the time. I agree with Devlin's claim that the strength of mathematics lies in its ability to use abstraction, but I disagree that starting from the concrete is dangerous or even problematic. I'll address that issue in a future post. The real questions for me are broader: Are students getting a chance to explore lots of different multiplicative relationships? Are they maintaining their curiosity and rage to learn? Is math presented as a tool they can develop to help them think? 
I want schools in which: teachers are respected for the hard work they do,  they're given time daily in which to have professional discussions with their peers about what they are trying to help students learn, and they come in ready to approach math with comfort and joy. [Edited on 3-1 to add: Surprisingly to me, the discussions on this topic have often become hostile. It's important to me that people treat each other decently here at my bloghome, and I turned comment moderation on when I first posted this, to enforce that. I am rejecting any comments that don't meet this standard. Here's what you see when you post a comment: I would like this blog to be a safe place for people to disagree. Please do not attack the integrity of the person you disagree with. (Any comments which do so will not be accepted. If I can email you with my concern, I will. My email is suevanhattum on the hotmail system.) Comments with links unrelated to the topic at hand will not be accepted. Perhaps I should have said it more thoroughly. I will ask you to rewrite if you treat another person badly, or if you malign the intelligence of people on 'the other side'. etc. One comment has been rejected so far.] _______ *Note to any elementary teachers reading this: I think a good K12 teacher is a saint. You work harder than I do, you have less autonomy, and you get paid less. If you can really reach kids, you also make a bigger impact than I do. I'm guessing the fact that you're here means you either like math or want to do better with it. I'm grateful for all you do. Say hi in the comments, email me, point me to things I should know. ## Monday, February 22, 2010 ### Project Euler My colleague from the computer science department at Contra Costa College, Tom Murphy, thought I might enjoy some more math problems, and pointed me to Project Euler. 
Here's their intro: Project Euler is a series of challenging mathematical/computer programming problems that will require more than just mathematical insights to solve. Although mathematics will help you arrive at elegant and efficient methods, the use of a computer and programming skills will be required to solve most problems. My programming skills are a bit rusty, so I'm starting with purely mathematical problems. The first one I did is: If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. [I asked for the problems to be listed in ascending order of difficulty, and this one came up first.] I figured it out, and used my calculator to do a simple multiplication and addition problem at the end. The 'forum' discussing this problem had lots and lots of programs listed. I was shocked people would ask a computer to solve this one. I decided to do one a day. This morning I checked the next two and figured I'd at least want to use Excel on them, although I bet there's an elegant way to solve each of them without it. The fourth problem was once again simple enough to solve in a minute, with my trusty TI-83 doing my multiplication. It looks like a nice set of problems to challenge students with. Thanks, Tom. ## Saturday, February 20, 2010 ### Richmond Math Salon - Symmetry Lots of people showed up, right at 2. The adults played with the math games even more than the kids this time. The kids love the trampoline, but they still came back in to play with polydrons, mirror books, Blink!, Set, and tangrams. And to read You Can Count on Monsters. I'll tell more later. I want to get this post up so Madeline, and others, can comment here. (Hey Madeline, you're awesome too!) [Now edited - photos and activity details added.] Polydrons I had just gotten my big box of polydrons yesterday, and was excited to share them. We had oodles of fun with them. 
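Stepping back to the first Project Euler problem quoted above for a moment: besides brute force, it yields to inclusion–exclusion with arithmetic series, which is the kind of pencil-and-paper route a calculator handles easily. Here is a sketch of that approach in Python (my own framing and function name, not from the problem's forum):

```python
# Sum of the multiples of 3 or 5 below 1000, by inclusion-exclusion:
# add the multiples of 3 and of 5, then subtract the multiples of 15,
# which were counted twice. Each piece is an arithmetic series.
def sum_of_multiples(d, limit):
    n = (limit - 1) // d          # how many multiples of d lie below limit
    return d * n * (n + 1) // 2   # d * (1 + 2 + ... + n)

total = (sum_of_multiples(3, 1000)
         + sum_of_multiples(5, 1000)
         - sum_of_multiples(15, 1000))
print(total)  # 233168
```

On the example from the problem statement, `sum_of_multiples(3, 10) + sum_of_multiples(5, 10) - sum_of_multiples(15, 10)` gives 23, matching 3 + 5 + 6 + 9.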
The first thing I built when I opened the box on Friday was a tetrahedron. Each face is an equilateral triangle. There are 4 faces total - a pyramid on a triangular base. The picture you see here is not a tetrahedron, because the base is a square. Then I built an octahedron, using 8 of the same equilateral triangles. I don't have photos yet, so here's a generic octahedron. My son built a cube. So we had 3 of the 5 Platonic solids. On a Platonic solid, every face is exactly the same, and each face is a regular polygon. (Regular polygons have all sides the same length, all angles equal. What we call a side on a 2-dimensional polygon is an edge on the 3-dimensional solid.) There's one more criterion: At each vertex (think corner), the same number of faces have to meet. To me, it feels wild that there are only 5 possible ways to do this. As we sat down to build, I said I wanted to build an icosahedron (20 faces) and a dodecahedron (12 faces). D built an icosahedron pretty quickly. One of the parents was trying to build one, and thought he'd used the wrong triangles. (He hadn't.) I was excited - I got to explain one reason we're limited to just 5 Platonic solids. I showed him (and a few others) how you can see that the angles in an equilateral triangle are 60 degrees, and 6 of them make a whole circle. So putting 6 together, they'll lie flat. (It looked like the picture here, on the top, but not on the sides.) We need to put fewer of them together at the vertex if we want it to poke up. As we talked about changing it, he mentioned having lived in a yurt. So we kept his yurt, and started over for the icosahedron, which has 5 triangular faces at each vertex. I had a lot of trouble getting the last face snapped onto mine, and D helped me. So 3 triangular faces at each vertex makes the tetrahedron, 4 at each vertex makes the octahedron, 5 at each vertex makes the icosahedron (20 faces total).
You can't use more because 6 faces at each vertex would lie flat, and you can't use fewer because 2 faces at a vertex won't have space in between. The cube has square faces, 3 to a vertex. We can't make something with 4 squares meeting at a vertex; same problem, it would lie flat. The next shape is a pentagon, with 5 sides. If you put three of those together at each vertex you'd end up with 12 faces. D wanted to do that, but we didn't have enough pentagons. I'll have to order those separately. I got these polydrons very cheaply through Educator's Outlet. (When you go to the page I've linked to, you'll see the prices slashed, but then when you check out, they're cut even more. I paid about $30 for what would normally cost a few hundred, I think.)

Mirror Books

I got this idea from Maria Droujkova (who posts at Natural Math). Tape two little mirrors together. Draw something and set the mirrors in a V just behind. You get a kaleidoscope. Draw a straight line, and set the mirrors on it. As you adjust the angle between them, you get all sorts of polygons. We also played with writing upside down, writing in cursive and then drawing the mirror reversal of the writing, making a simple path and trying to follow it with the pen while looking in the mirror (way hard!), and looking at our faces in the mirrors (use a 90 degree angle to see your face as others see it, which doesn't happen in a regular mirror). These were all suggestions from participating parents, yeay!

More

Blink! and Set are the games I put out every month, so people can sit right down to something as others are arriving. (I see that Out of the Box sold the rights to Blink! to Mattel in 2007.) The tangrams also come out pretty often. The book, You Can Count on Monsters, by Richard Schwartz, is about prime and composite numbers, but can actually be enjoyed by very young kids, because it has a page for each number, 1 to 100, and each one has a different monster on it. It's a delightful concept.
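The vertex-angle argument from the polydron discussion above - at least 3 identical regular faces must meet at each vertex, with their angles summing to strictly less than 360 degrees - can be turned into a short enumeration. A sketch of my own (not part of the original salon):

```python
# Interior angle of a regular p-gon is (p - 2) * 180 / p degrees.
# A Platonic solid needs q >= 3 identical faces at each vertex, with
# the q angles summing to strictly less than 360 degrees
# (exactly 360 would lie flat, like the yurt of six triangles).
platonic = []
for p in range(3, 10):            # face shape: triangle up to 9-gon
    angle = (p - 2) * 180 / p
    for q in range(3, 10):        # faces meeting at each vertex
        if q * angle < 360:
            platonic.append((p, q))

print(platonic)
# [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]
# tetrahedron, octahedron, icosahedron, cube, dodecahedron
```

Raising the upper bounds of the loops changes nothing: hexagons and beyond already tile flat or overlap, so exactly five pairs survive.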
I had a few other activities planned, but never got around to them. I wanted to do the rep-tiles activity explained here. And I wanted to do an activity where you find a line of symmetry in an odd-shaped figure that I found in James Tanton's book, Math Without Words. [Note to participants who post comments: This is a public space, so just use initials if you mention other kids.] ## Thursday, February 18, 2010 ### Math Teachers at Play #23 (and #22) When I link to the MTaP, I like to point out one or two of my favorite items. That got me in trouble last month. I never finished reading all of the goodies at MTaP #22, and so neglected to post a link to a fabulous blog carnival. So this month I'm posting a link right away. When I get time to read it all, I'll add a comment, or another post, or something. Here's MTaP #23, thanks Dan! Perhaps on Sunday I'll take my laptop to Catahoula Coffee and relax with a latte while I read my MTaP. ## Tuesday, February 16, 2010 ### Tutoring and Conics My tutoring sessions with Artemis* continue to go well. I never prep, except for fleeting thoughts about topics and problems he might enjoy. Today, I showed him the books that came a few days ago from James Tanton. (I will blog about them soon.) We started to look at Thinking Mathematics, Volume 3: Lines, Circles, Trigonometry and Conics, and before we'd looked at two pages, I asked if he'd like to figure out the equation for a circle. Sure. I drew a circle, and asked him for the definition. He said, "All the points are the same distance from the center." I said that what we were going to do is called analytic geometry - the marriage of geometry and algebra that René Descartes helped found. I drew x and y-axes and asked him what we should call the center. He said (x,y). I felt bad that I'd asked the question, since I didn't want to use (x,y) for the center.
So I talked about how we have a tradition of using x and y for the points that would move around (said while tracing over the circle), and so the center would traditionally get another name. One tradition would be just to use (a,b), another would be to use (h,k). I have no idea why we use h and k... He chose a and b. I pointed to his definition and asked how we'd think about the distance. He said, "Use the Pythagorean Theorem." My algebra students tend to think the 'distance formula' is something separate, so of course it tickles me that he thinks of it in a way that feels more basic to me. I drew a triangle with a bad circle around it... ...and had him tell me what to do next. He worked it out to $r=\sqrt{\left(x-a\right)^2+\left(y-b\right)^2}$ and I told him the tradition is to write it as $\left(x-a\right)^2+\left(y-b\right)^2=r^2$. Then we looked at $x^2+y^2+4x-10y=7$ and completed the squares together. He told me the center and the radius. We decided to move on to other conics, and started with the definition which states that a parabola is all the points an equal distance from a focus (a point) and a directrix (a line). But we also were talking about how you cut the cone to get the conics, and I said I had never figured out how we know that a plane cutting the cone parallel to the side gives us the same thing as this definition that uses focus and directrix. We worked it out for an example, using x,y,z coordinates. After a bit of fiddling around, we called our cone $z=2\sqrt{x^2+y^2}$ and our plane $z=2\left(x-1\right)$. We used a 3D graphing calculator to check whether it was right. A bit of algebra gives us $y^2=1-2x$, which is a parabola opening to the left. That was nice, but has nothing to say about focus and directrix, and of course it's not a proof. We decided to look it up. The explanation I found is for an ellipse, not a parabola, but we decided to work our way through it.
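A quick numeric sanity check of the session's algebra above - a sketch of my own, noting that the squared form $z^2=4\left(x^2+y^2\right)$ describes the full double cone, which is what the squaring step implicitly uses:

```python
import math

# Completing the square: x^2 + y^2 + 4x - 10y = 7 becomes
# (x + 2)^2 + (y - 5)^2 = 36, i.e. center (-2, 5), radius 6.
def on_circle(x, y):
    return math.isclose(x**2 + y**2 + 4*x - 10*y, 7)

# walk around the claimed center at the claimed radius
assert all(on_circle(-2 + 6*math.cos(t), 5 + 6*math.sin(t))
           for t in [0.0, 1.0, 2.5, 4.0])

# Slicing the double cone z^2 = 4(x^2 + y^2) with the plane z = 2(x - 1):
# squaring and simplifying gives y^2 = 1 - 2x, a parabola opening left.
def on_slice(y):
    x = (1 - y**2) / 2        # point on the parabola y^2 = 1 - 2x
    z = 2 * (x - 1)           # ...lying in the plane
    return math.isclose(z**2, 4 * (x**2 + y**2))

assert all(on_slice(y) for y in [-3.0, -1.0, 0.0, 2.0])
print("both checks pass")
```

Spot checks like this aren't a proof either, of course, but they catch algebra slips cheaply.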
It's titled Dandelin's Spheres, after the French/Belgian mathematician Germinal Dandelin (1794–1847) who came up with this proof - and it's dazzling! I've never seen this before, and I want to know why - it's so elegant, and pretty simple. At the end, it says the hyperbola and parabola can be thought through following almost the same steps. I'm going to do it! This was the first time I did some stretching mathematically while tutoring him. It's going to happen more and more. I'm very curious when he'll "outgrow" me. ______ *Artemis (not his real name) is 8, and is very smart. Note: The equations were done at CodeCogs. I had to redo the drawing because the website I used to put it up is gone now. ## Saturday, February 13, 2010 ### Math Poetry Wiki! My poetry challenge got more comments than any previous post on this blog. And the poems people wrote are quite delightful - much too delightful to be buried in the comments. So I set up a wiki. I've gotten permission to repost their work from a few of the authors, but not all. Please let me know if you'd rather not have your work posted there, and I can just put a link instead. I'd love to have one poem show up on the home page, in some rotation. Does anyone know of a way to make that happen? It took me until today to respond to my own challenge. Here's what I came up with:

The Pleasure of Struggling

I can’t get this.
It doesn’t make sense.
What are they talking about?
I will never get this.
It’s crazy.
Wait…
Hmm…
Ah!
If I put this with this…
No…
Hmm…
Oh my!
Look at that!
How cool!
May I have another, please?

On re-reading this, I see it doesn't have to be about math. But it is. ;^) Your turn! Write a poem, and add it to the Math Poetry Wiki. ## Tuesday, February 9, 2010 ### One More Reason Why High-Stakes Tests Are Not a Good Idea Have you heard of Campbell's Law? On Bridging Differences, an educational policy blog, Diane Ravitch writes: Numbers don't lie, do they? Well, yes, they do.
A major front-page story in The New York Times on February 6 described a major study conducted by criminologists who found that the numbers do lie. More than 100 retired, high-ranking police officers in New York City told them that intense pressure to produce improved crime statistics had led to manipulation of the data. For the past 15 years or so, the city boasted that its data system, known as CompStat, had brought about a major reduction in crime. But the survey said that the data system had encouraged supervisors and precinct commanders to relabel crimes to less serious offenses. The data mattered more than truth. Some, for example, would scout eBay and other Web sites to find values for stolen items that would reduce the complaint from a grand larceny (over $1,000 in value) to a misdemeanor. There were reports of officers who persuaded crime victims not to file a complaint or to change their accounts so that a crime's seriousness could be downgraded. This is not only a major scandal, it is a validation once again of Campbell's Law, which holds that: "The more any quantitative social indicator is used for social decisionmaking, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." Anyone who wants to learn more about Campbell's law and how it applies to education should read Richard Rothstein's Grading Education and Daniel Koretz's Measuring Up. Or Google Rothstein's "Holding Accountability to Account," if you want to see what happens when data becomes our most important goal. ## Monday, February 8, 2010 ### A Link and a Problem Steven Strogatz is back, with his second math article in the weekly NY Times series which will go for 15 weeks. This one is called Rock Groups. He mentioned a puzzle series by John Tierney, and the one posted today reminds me of a problem I've posed in my Math for Elementary Teachers course - but this one's got a better storyline.
The Problem

Sol, at Wild About Math, asked for help solving a problem his brother heard on the radio: Bob and Alice are both millionaires. They’re both curious to know who is richer but they don’t want to tell the other one how much money they have. Without engaging a trusted third party, how can they both know who is richer? I have played with a similar problem that I think goes like this: 10 mathematicians are out to dinner, and want to know their average salary. Without anyone finding out anyone else’s salary, how can they do this? I remember that I saw the solution and liked it. (I may have solved it myself, even, but I'm stumped again now - the delights of a bad memory...) Sol wants the answer. I'd prefer hints, myself. ## Wednesday, February 3, 2010 ### In Honor of Black History On my other blog, I'm doing a series on African-American picture books. On this blog, I'd like to tell you the story of a Black mathematician. Years ago, I wrote an essay on Vivienne Malone Mayes, for a course in African American history. As I looked around online today, I came upon a sad facet of her story, which will remind us that racism is hardly overcome. The Black Women in Mathematics page on her describes her experience at the University of Texas, in her PhD program: In graduate school she was very much alone... In her first class, she was the only Black, the only woman. Her classmates ignored her completely, even terminating conversations if she came within earshot. She was denied a teaching assistantship, although she was an experienced ... and excellent teacher. She wrote: "I could not join my advisor and other classmates to discuss mathematics over coffee at Hilsberg's cafe.... Hilsberg's would not serve Blacks. Occasionally, I could get snatches of their conversation as they crossed our picket line outside the cafe." She could not enroll in Professor R.L. Moore's class as he explicitly stated that he did not teach Blacks. Part of the significance of this is R.L.
Moore's fame as a math educator. The 'Moore Method', whereby his students did not use textbooks and provided all the proofs in class, is famous, in part because more mathematicians came out of his program than typically come through any one teacher. So his personal racism is all the more abhorrent. My essay retold what I learned from Women In Mathematics: The Addition of Difference, by Claudia Henrion. This book contains interviews with both Vivienne Malone Mayes and Fern Hunt, both Black mathematicians. Both interviews point out the advantages of Black colleges, either studying at a Black college, as Malone-Mayes did, or working at one, as Hunt did. Vivienne Malone Mayes went to Fisk University in Tennessee, and earned both her B.A. and M.A. there. She returned to Waco, and ended up teaching at Bishop College, a small Black college nearby. For years she had encouraged her better students to go on to get doctorates, so that they could come back and teach in the Black colleges, which would help the colleges to become accredited. Her students finally persuaded her to follow her own advice. There were no Black colleges in Texas that offered Ph.D’s, so she applied to Baylor University in Waco. In 1961, she was denied admission there because they did not admit Blacks. She then applied to the University of Texas at Austin, was admitted, and earned her Ph.D. in 1966. Five years after refusing her admittance as a student, Baylor University offered her a position as a professor, which she accepted. When Vivienne started college at Fisk, her major was chemistry. But two of her teachers there inspired her love of mathematics, and so she switched her major, did graduate work, and became a college teacher herself. This switch was seen by her family as quite impractical, but Fisk had already been influencing her thought in other ways, so that she would say “we were DuBoisites”. 
(A huge influence in her life, her father’s history, views, and advice reflected the philosophy of Booker T. Washington: get the training that will get you work.) The two teachers who encouraged her to enter the study of mathematics were Evelyn Boyd Granville and Lee Lorch. “Evelyn Boyd Granville was one of the first Black women to receive a Ph.D. in mathematics in the United States.” (p. 200) Seeing another Black woman doing mathematics was important in giving Malone Mayes the confidence that she, too, could do this. Lee Lorch was a white teacher who (in Malone Mayes’ words) “believed that the students could understand the material, not just learn to do it”. (His commitment to interracial equality was clear: he was subpoenaed by the House Committee on Un-American Activities for his actions in support of integration, and subsequently lost his position at Fisk because of this.) Malone Mayes's goals as a teacher were, first, to support and respect her students so they would begin to have a sense of self-worth; second, to give them the tools of self-empowerment; and third, to create a path of opportunity. In math, the answers are right or wrong; when you know what you’re talking about, it’s clear. So she felt that her students would face less discrimination in this field. Malone Mayes was paid substantially less than her similarly qualified colleagues at Baylor. She sued, received a $5,000 raise, and was still $7,000 below her colleagues. Her health deteriorated over the years, partly due to the stresses of her work at Baylor, and she died in 1995, at the age of 63.

## Monday, February 1, 2010

### Math As Story

Once upon a time, math came packaged in textbooks, and most people had never (ever) even seen a math story. Unless you count things like: "The westbound train leaves at 1pm going 50 miles per hour and the eastbound train leaves at 2pm going 60 miles per hour. If they're 300 miles apart, when will they crash?"
(Except the textbook would have said 'meet', leaving out the tiny bit of drama I couldn't help adding.) I love stories, and I love how much they can add to the appeal of math. History is one storytelling genre that adds a lot of drama to math. Why do we need calculus? Seems to me every calc course should start with enough history for students to see the story unfolding. Where did that crazy number i come from? What's up with geometry proofs? (I still don't know enough to tell these stories properly. It's one of the things I'm doing during my sabbatical year.) Biographies of mathematicians are also a great way to ground all the headiness of math in the details of a life. (My review of The Man Who Knew Infinity, about Ramanujan, is here.) If you follow this blog, you've seen some of my storytelling attempts (Eight Fingers, Crash and Count). They pale in comparison to some of the literary delights below. Today was a blockbuster day for storytime in the math blogosphere. Glenn is posting an ongoing adventure story about a place called Verdania, at his blog, Off the Hypotenuse. Each chapter ends with a math puzzle. In the current chapter (Chapter 10), the main characters, who were shipwrecked, are leaving the children's village, and heading to the adults' village. At the end, we're asked to figure out the lengths of 3 paths the characters could travel to get from Sentry Point 1 to Adult Village. If you want to start at the beginning, he's made a new blog with just this story. But don't leave Off the Hypotenuse behind; there's lots of other great posts there. I am delighted over and over as I try to catch up on all his older posts. The strange thing is, I can't figure out who to thank for pointing me there. I just could not retrace my steps successfully the day I found it. Glenn linked today to another new delight: Number Gossip, hosted by Tanya Khovanova. It's not exactly stories, but it's in the same spirit, and fun. 
Then Jason Dyer, at Number Warrior, pointed to a gory delight over at Emily Short's blog. Word problems with a decidedly 'unfortunate events' twist. Then there's the pirate story, over at The Math Factor. Dave Richeson, at Division by Zero, pointed to a great series starting up in the NY Times, written by Steven Strogatz, on: ... the elements of mathematics, from pre-school to grad school, for anyone out there who’d like to have a second chance at the subject — but this time from an adult perspective. It’s not intended to be remedial. The goal is to give you a better feeling for what math is all about and why it’s so enthralling to those who get it. It starts with a story about penguins in a hotel, ordering fish, fish, fish ... fish. Check it out! It made me chuckle to see Richeson mentioning Strogatz. I just finished reading Steven Strogatz's lovely new book, The Calculus of Friendship, after hearing him give a talk about it at the Joint Mathematics Meeting a few weeks ago, and I've just started reading Dave Richeson's exciting new book, Euler's Gem. One last link, not quite a story: Mike Croucher, at Walking Randomly, posted this great piece on the Math Carnivals. (Sorry I neglected to link to the last Math Teachers at Play, over at Math Hombre. It's lovely, and I'm still working on getting through all the links.) The math blogosphere is exploding with stories today. Yeay!
Monday, July 29, 2019 Wilderness Encounter Levels We've spent some time on this blog in the past measuring the risk/reward level of the OD&D dungeon wandering monster tables (conclusion: as written, in total they're murderously lethal; even Gygax in AD&D massively ramped down the danger level). It recently occurred to me to ask a similar question about the OD&D wilderness encounter tables. One difference is that while the dungeon tables have "levels" which theoretically relate to the power level of the monsters there, and the suggested level of PCs adventuring there, the wilderness tables don't come with that same packaging. Instead (obviously) they come distinguished by "terrain types". We might assume that plains are designed to be safer than woods, and woods less dangerous than mountains, etc., but are they really? What I did was go through all the entries in those tables and compute average Equivalent Hit Dice (EHD) from each type of encounter, using EHDs estimated algorithmically as shown in the OED Monster Database (code on GitHub). For example: Working bottom-up, here's the sub-table for "Typical Men" (i.e., the default for men in most terrain types): OD&D Wilderness Subtable: Typical Men The "grand average" of all the encounter averages in the rightmost column is 122, but that masks the bimodal structure of the table. These encounters split neatly into two halves: six are with mass groups of men in the range of hundreds (with a host of leader-types, including a Superhero or Lord for any group 100+; I estimated the EHDs for this by adding 25%), while the other six are with small parties of 2-12 men and a single NPC (like a Superhero or Lord). The average total EHD for the first category is in the 200's, while for the second category it's in the 20's. Clearly there's a big difference between meeting one Lord and his 200 soldiers, versus another Lord and his 10 soldiers.
This was done for all the different sub-tables, with "grand averages" computed for each, resulting in the following: OD&D Wilderness Subtables: Average EHD Somewhat similarly, there's a big bifurcation in the danger levels of some of these subtables. For the Men and Giants (i.e., humanoids) tables, average EHDs are in the range of 100+ -- specifically those tables which can produce bands of men, goblins, etc., grouped in the hundreds (30-300 men/orcs, or 40-400 for kobolds/goblins, etc.). For the other tables, average EHDs are only in the range of 20-50 or so. Finally, we can turn to the top-level table, which serves as a function from terrain type to the different subtables, and see on average how dangerous each terrain type is on a per-encounter basis. We get this: OD&D Wilderness Encounters: Terrain Table What I've done there is compute the average encounter danger across all results for a given terrain type, and then divide by 8 (an assumed large PC party size?) to come up with a rough "suggested PC level" for adventuring in that terrain. Some of those assumptions can be easily debated, but at least it gives us a normalized basis by which to compare different terrain types. The result is that on average, there isn't that much difference between the various terrain types. Rounded to the nearest integer, the Clear type suggests maybe 9th-level PCs, while Woods, River, Swamp, and Mountains are only one pip up from that, at 10th level. The City is 11th (because technically it generates more bands of 100s of bandits and brigands from the "Typical Men" table, however unreasonable that may seem), and the Desert table is 12th level, somewhat more dangerous (again because the "Desert Men" table skews more towards 100s of nomads and dervishes).
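As a minimal sketch of that calculation (the subtable averages below are made-up placeholders, except for the 122 "Typical Men" grand average quoted above):

```python
# Average the per-encounter EHD across a terrain's possible results, then
# divide by an assumed 8-PC party to get a rough "suggested PC level".

def suggested_pc_level(encounter_ehds, party_size=8):
    avg = sum(encounter_ehds) / len(encounter_ehds)
    return round(avg / party_size)

# Hypothetical subtable averages for a Clear-like terrain (122 is the real
# "Typical Men" figure; the rest are illustrative placeholders):
clear = [122, 45, 30, 90, 60, 85]
print(suggested_pc_level(clear))  # -> 9
```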
So this series of averages is a somewhat rougher analysis than I've done for dungeons (which have been given a complete simulation in software at the level of individual fighters adventuring and gaining experience in separate encounters). The overall distribution of encounters is not entirely clear, although it's trivial to guess that the tables with fewer Men and Giants encounters (River and Swamp) will have less variation than the other tables. Here are some other factors abstracted out by this rough analysis: • No modifiers are made for parties with special equipment (horses, ships, underwater, etc.) • No distinction is made for parties that may be parleyed with and turn out to be friendly (likely dependent on alignment). • Dragons and lycanthropes do not have family/pack structure simulated (which mandates presence of some immature figures, but also makes adults fight more fiercely). Another thing that the numbers above overlook is that while the average encounter is roughly equivalent across different terrain types, the rate of those encounters is not. E.g.: Compared to Clear terrain, in Woods the party takes twice as long to cover a distance and has double chance for encounter each day; and in Mountains both time and encounter chances are tripled, etc. That is, for every 1 expected encounter for a given distance in the Clear, in Woods you'll expect 4 encounters, and in Mountains 9 encounters, for the same distance traveled. Ultimately that's where the real difference in danger levels comes from in this system. (On the other hand, with only one encounter per day, casters can unload their entire firepower capacity on each one, giving some buffer against that added danger.) Finally, this project suggests a significant limitation to the overall attempt at using our EHD values in sum to balance against total PC levels. 
Here we've come up with a rough suggestion that OD&D wilderness encounters are, on average, a fair fight for a party of eight 9th- or 10th-level PCs. However, we can look back to our experiences in Outdoor Spoliation games using this system, which we've run with fairly large parties of around the 8th level; at least four times we've documented battles with groups of men and goblins numbering 200+, and not had a single PC fatality in those encounters. (By the numbers a group of 200 bandits should be ~250 EHD, so for an eight-man party we would have suggested they be 250/8 ~ 30th level? That's clearly not right.) This points to a likely breakdown in simply summing EHDs, especially for very large groups of low-level monsters, versus PCs with high-level magic (not currently simulated in our program), very low armor classes for fighters, etc. It may be interesting to reflect on the exact magic used by players in those mass battles in Outdoor Spoliation sessions One, Two, and Three. Full spreadsheet available here for the tables and calculations shown above. Edit: Consider Arneson's rule in First Fantasy Campaign that (as I read it) wilderness encounter numbers are really for full lairs only, and encounters outside will only be 10-60% of those numbers (average 35%). If we take the charts above and multiply everything by 0.35 for expected outsiders, then the equated PC level (parties of 8) becomes 3 or 4 in each terrain. Which is kind of interesting, because reportedly at the start of Arneson's games everyone got Heroes from Chainmail -- fight as 4 men, D&D 4th level -- or else Wizards (I presume low-level, likely 4th-level equivalent?). Sunday, July 28, 2019 Sunday Survey: Wizard Armor A while back on the Facebook 1E AD&D group, a discussion occurred that had me quite surprised by the direction it was going.
Intrigued, I asked the following poll question: This was surprising to me, because given the context, the top result ("elven chain only") is clearly counter to the 1E AD&D rules text. Of course, when we say multiclass fighter/magic-users in 1E, we're just talking about elves and half-elves (the only races allowed for that multiclass). Under Elves on PHB p. 16, it says that they can "operate freely with the benefits of armor, weapons, and magical items available to the classes the character is operating in", with the exception being if thief activities are occurring (so: plate mail and anything else is clearly on the menu for fighter/magic-users). Note that this contrasts with gnomes on the same page who are restricted to leather for any multiclass combination. Furthermore, as of the 1E AD&D PHB, "elven chain" wasn't even a thing yet named or defined; it didn't appear until the later DMG (p. 27, as "Chain, Elfin") which says merely that it's thin and light, with no special notes about spellcasting. I think the result of this poll can partly be explained by later editions' rules "bleeding" into the memory banks of the many gamers who played mix-and-match a lot with different edition products. It was the 2E AD&D PHB that established elven chain as the necessary and sufficient condition for multiclass wizards to cast in armor: "A multi-classed wizard can freely combine the powers of the wizard with any other class allowed, although the wearing of armor is restricted. Elves wearing elven chain can cast spells in armor, as magic is part of the nature of elves." (Ch. 3). Moreover, we can look at 1E adventure products by Gary Gygax and possibly detect an "implied ruling" in the same direction on this issue. Looking at the many drow fighter/magic-users throughout the D1-3 series, all of them are equipped with fine chain mail (not a single one in plate, to my knowledge).
The 1983 World of Greyhawk boxed set's Glossography has wandering encounter listings for that world, including "Elves, Patrol"; these are led by high-level elven fighter/magic-users with base AC 4 or 5 (chain, with or without shield). On the same page, "Elves, Knights" (p. 4) are principally fighter/clerics with better AC, but they have fighter/magic-user assistants again with AC 4 (chain & shield). So the consistency of this pattern may be another telling point. Not initially knowing about the 2E AD&D rule or the apparent AD&D player consensus, I've done a similar thing in my OED house rules for OD&D for about a decade now; without reference to any special elven manufacture, multiclass fighter-wizards can cast spells in chain but not plate (also must have one hand free, no shield). Actually for quite some time I thought that was a semi-unique ruling; my surprise is that I've unintentionally matched how a lot of people elsewhere also play things. Friday, July 26, 2019 Friday Figures: Swords & Spells Stand Sizes I got started in the hobby at the point where the only ruleset for mass D&D combat that I could find in stores was the last supplement for original D&D: Gygax's Swords & Spells (1976). Looking at page 2, I saw numbers for scales and recommended sizes for miniature figure bases. To wit: Figure Mounting sizes from Swords & Spells Those fractional values looked odd, but I never questioned them, assuming that the author had some deeper reason for them. But looking back more critically today: Why 5/8" (instead of, say, 1/2")? Why 1-3/8", or 1-5/8"? Why so complex? Now, consider the following. We'll take a few key measurements in millimeters, in multiples of 5, and convert them to inches -- in each case rounding to the nearest eighth of an inch. We get: Conversions rounded to eighth of an inch That is (noting 6/8" = 3/4"), we get exactly the figures on display in the Swords & Spells table.
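That conversion is easy to check mechanically. Here's a quick sketch (the particular metric sizes are my guesses at which multiples of 5 mm were being converted):

```python
# Convert millimeter base sizes to inches, rounded to the nearest eighth,
# as theorized for the Swords & Spells mounting table.
from fractions import Fraction

MM_PER_INCH = 25.4

def to_eighths(mm):
    # Round mm / 25.4 inches to the nearest 1/8 inch.
    return Fraction(round(mm / MM_PER_INCH * 8), 8)

for mm in (15, 20, 25, 35, 40):
    print(mm, "mm ->", to_eighths(mm), "in")
# 15 -> 5/8, 20 -> 3/4, 25 -> 1, 35 -> 11/8 (i.e. 1-3/8), 40 -> 13/8 (1-5/8)
```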
It occurred to me to check this when looking at Chainmail (1971), which gives the option of using either 30mm or 40mm scale figures for man-size (or in the fantasy section for other races, corresponding options such as 10mm, 20mm, 25mm, etc.). It should also be noted that the base sizes shown in Swords & Spells, and even the Chainmail suggestions for 30mm-scale goblins, orcs, ogres, etc., closely matched those provided by the Warhammer Fantasy product all the way up to 2015! Theory: Gygax had miniatures that were originally based in metric measurements, which he mechanically converted to imperial figures (to the nearest eighth-of-an-inch) for the Swords & Spells publication. Monday, July 22, 2019 Damage Scales in LBBs and Supplements: There and Back Again One of the things I really like about DM'ing games from the Original D&D LBBs is that all hit dice and damage are d6-based. So I can set up with a big batch of d6's (wargame-style) and use that for all monster hits and damage without poking around for sufficient d8's for hit dice, or 5d4 damage or something. In addition, it's very rare for monsters to be noted with multiple attacks, so combat goes quite rapidly. This got massively reworked in Supplement-I (Greyhawk), and personally I think it's one of the off-the-rails mistakes in the history of D&D. In this work, you get the establishment of different hit dice by character class, variation of damage by weapon type, and also variation in attacks and damage by monsters (each listed as "Addition/Amendment" and "highly recommended"). Actually, the first two -- giving increased granularity on the player side -- I have no problem with, but simultaneously complicating all the monsters is the part I prefer not to use. As that was done, the damage output from monsters increased, approximately on the order of being doubled.
Let's take a closer look: Below I've compared all the monsters in OD&D Vol-2 that are given explicit damage specifiers in their text blocks, with the "new" damage specifiers given in Sup-I. Note that by default all the other monsters should have 1d6 damage in Vol-2, but for brevity I haven't listed those (and also out of a lack of confidence: did Gygax really make a deliberate choice that dragon bites, purple worms, etc., should do 1d6?). You can see above that a comparison of the average damage output for these types shows a linear relation from Vol-2 to Sup-I, being a bit less than doubling between those works. We should be a bit careful, because the correlation isn't perfect; for example, ogres have the same average damage in both volumes. There are also a number of monsters not shown here who effectively have reduced damage, by being given less than 1d6 damage in Sup-I (kobolds, goblins, giant rats, etc.). One thing that complicates my desire to stick with the LBB all-d6 (low damage) method is that while in Sup-I the amendments were quasi-optional, everything that came later on was designed only in those inflated, non-d6 terms. For example, there are a lot of interesting and memorable D&D monsters that only appear in later supplements: like lizard men, harpies, liches, ogre magi, hell hounds, owl bears, golems, giant frogs/toads/beetles, sahuagin, demons, and many more. Stat blocks for these types are only available with the inflated numbers. (Note there is one unique exception here: In Sup-I, the text entry for the new Storm Giant type is the last place to give LBB-scale damage, "unless the alternate damage system is used". So the text says 3d6+3 damage, while the revised table in the same book gives 7d8 damage; a big difference.) As a possible solution, consider taking the regression formula above and reverse-engineering all the supplement damage scores, so we get something back in scale of the LBBs.
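That back-conversion can be sketched in a few lines. The real fitted formula lives in the linked spreadsheet; the 1.9 slope here is only a stand-in for the "bit less than doubling" observed above:

```python
# Invert an assumed linear fit (Sup-I avg ~ 1.9 x LBB avg), then pick the
# all-d6 damage expression whose average (3.5*N + bonus) is closest to the
# back-adjusted value.

SLOPE = 1.9  # hypothetical; see the spreadsheet for the actual regression

def best_d6(target_avg, max_dice=10, max_bonus=3):
    candidates = ((n, b) for n in range(1, max_dice + 1)
                  for b in range(-max_bonus, max_bonus + 1))
    return min(candidates, key=lambda nb: abs(3.5 * nb[0] + nb[1] - target_avg))

def back_convert(sup1_avg):
    return best_d6(sup1_avg / SLOPE)

# A Sup-I 3d8 attack averages 13.5; back-adjusted that's ~7.1, i.e. 2d6:
print(back_convert(13.5))  # -> (2, 0)
```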
For simplicity, I'm only listing the maximum damage-dealing attack for any monster given multiple attacks in Sup-I. I've also made an executive decision that anything up to 1d6 in Sup-I is unchanged (so the kobold/goblin/rat 1d3 or whatever isn't further reduced, and neither is an orc's 1d6, etc.), but everything else is inverted by the formula. Having back-adjusted the average value, I use another spreadsheet function to suggest the best possible all-d6 damage dice. Here's a snippet from the first few results: The fifth column over has our formulaic suggestion for damage dice in LBB-scale. The sixth and seventh columns are my manual choices for what I'll use in my own OED house rule games. Orange boxes are entries explicitly noted in LBB Vol-2 text, and I'll leave those fixed in each case (note they're generally quite close to our calculated suggestions, e.g., for giants). That entire spreadsheet is available here, including suggested conversions of everything in the Sup-I and Sup-II tables. Note that the Sup-I damage table has three distinct parts: (1) revisions for monsters in Vol-2, (2) some damage specs for giant animal types possibly in LBB encounter charts but otherwise without stats, and (3) new monsters appearing in Sup-I itself; these are set off in white, yellow, and green sections of the spreadsheet. Meanwhile, looking at the Sup-II table, it's possible that Arneson was even more unhinged on the issue, e.g., damage of up to 24 points for a sub-1 HD fire beetle, 80 points for a plesiosaur bite, or 150 points for a whale fluke! (A lot of those figures were later reined in by Gygax in the AD&D Monster Manual.) Finally, I've done a recent revision to the OED Monster Database which (a) edited some damage figures to be consistent with this analysis, (b) added a number of giant and aquatic creatures from Sup-II, and (c) expanded the sourcing/reference information in the last column.
All of the damage values can now be rolled on d6 (previously I kept some d8 values in there, as per the supplements). There are currently a number of damage values like 1d6+1 or 1d6+2 (as the LBB Ogre), which shades towards fiddly for me, but I think I'm okay with it for now. Some of the EHD values moved up or down by one or two pips, as well. We now have 174 monsters in the database. :-) Sunday, July 21, 2019 Sunday Survey: Blind Spellcasting Before I asked the question, I scoured the OD&D and AD&D books for a ruling on the subject, and was surprised when I couldn't find any whatsoever. To my knowledge, there's not even any statement that a caster needs to see their target in general! Consider the idiom from Chainmail Fantasy, referenced in the text for OD&D spells like fireball and lightning bolt, that attack spells must have "range being called before the hit pattern is placed" (that is, casters specify a distance, not a target). AD&D DMG p. 65 has an example of a caster of fireball needing sight to the area of effect, but no general rule to that effect. However, all later editions do dictate that casters must have sight of their target. This first appears in the 2E AD&D PHB (Ch. 7): "If the spell is targeted on a person, place, or thing, the caster must be able to see the target. It is not enough to cast a fireball 150 feet ahead into the darkness; the caster must be able to see the point of explosion and the intervening distance." (Note the distinct change from Chainmail/OD&D, with 1E being silent/ambiguous on the issue.) The 3E D&D PHB (Ch. 10) says likewise: "The character must be able to see or touch the target, and the character must specifically choose that target." So this is all very consistent in any edition post-1E, and by their wording would seem to definitively shut the door on a blinded spellcaster being able to get their spells off (excepting a target in touch-contact).
Frank Mentzer actually chimed in on this discussion, saying, "btb if you can't detect/sense/see a target, AND a target is Required, then you can only hit it accidentally. (If you insist, you roll to hit, basically.)" Now, I don't think his recollection of "btb" (by-the-book) is correct, because I can't find anything in 1E materials requiring sight or detection; I can't even find it in his Red Box rules after a brief search. But, again, if it was a common house-ruling and a constant throughout all later editions, then we shouldn't be too surprised at some of it bleeding back into our earlier memory banks. That said, the consensus in the poll that most DMs would give some kind of probabilistic chance of a successful spell seems eminently reasonable as a ruling. It hasn't come up when I've been running a game, but if it did, I think I'd probably lean in the same direction. Related, today on WanderingDMs live chat (1 PM ET): How do you like your Infravision to work? Monday, July 15, 2019 Marvel Money For a couple reasons, we've been playing a few games of TSR's Marvel Super Heroes (FASERIP) recently. It's an enjoyable system but a bit wonky if you scratch the surface on it -- the numerical values for ranks and FEATs (for a variety of real-world assessments) advance in unpredictable jumps and increments. If it had been me, I would have wanted to establish some kind of consistent math at the outset, and then be able to easily slot in outside assessments to the system. On the other hand, I think the DC Heroes game did exactly that, and I don't see as much legacy of love for that system as FASERIP, so what do I know. But the most obviously broken part of the system was the Resources (money): it was wildly, insanely broken on a rarefied level for gaming systems in my experience -- on par with man-to-man missile fire in classic D&D. Resources was the sub-system that was entirely torn out and replaced with something brand-new in the switch from MSH Basic to Advanced rules.
Here it is in the Basic game (Campaign Book, p. 8): So: An individual of a given rank gets the indicated "resource points" to spend weekly as they see fit, and all items in the game are priced in terms of these resource points. Further up the same page there are some sample costs: a knife costs 1r, a plane ticket 10r, an acre of empty land 100r. Campaign Book p. 9 says, "One resource point equals anywhere from 50 to 75 dollars". Let's take $60 as a rounded average. Then we see that the "Typical" salaried employee is making about $360/week, or $18,000/year -- in the same ballpark as the 1984 U.S. median income of $22,415. But on the upper end, a large nation like the U.S. at "Monstrous" rank is indicated as only getting 75r = $4,500/week, or $225,000/year. E.g.: The U.S. government can only pay the salaries for a staff of 12 federal workers total, and absolutely nothing else. In reality, the 1984 U.S. revenue collected was approximately $666 billion, so this figure is over 6 orders of magnitude in error. Lesson: Income advancement isn't linear, it's exponential. 'Nuff said about that. In the Advanced game released two years later (all editions are by Jeff Grubb), you get the following alteration (Judges' Book, p. 6): Note that the whole idea of "resource points" is simply gone. Instead the system now uses the standard MSH mechanic of rolling on its Universal Table for success, comparing one's Resource rank versus a Cost rank of similar description. (If the cost is lower, then it's a very easy "green" roll; if equal, a difficult "yellow" roll; if more, then a nigh-impossible "red" roll.) One roll is allowed per game-week. The justification for this is as follows (Player's Book, p. 18): Resources are modified in the Advanced Set to cut down on the paperwork. As things stood previously in the Original Set, characters gained Resources like money. They had a physical amount of Resource points, and everything cost a certain amount of RPs.
This may work for Peter Parker, who has to make the rent every month, but for millionaire Tony Stark who can buy roadsters out of petty cash, this is a bit harder to handle. While the stated reason is to reduce record-keeping, I'd say the true benefit of this switch is to possibly correct -- or at least obscure -- the prior set's obvious lunacy on the issue. Costs for all items in the game (mostly weapons, vehicles, and headquarters furnishings) are in descriptive ranks, so it's possible that the underlying dollar costs are in a geometric progression. Or not. In the past I spent a lot of time trying to rationalize this system (I won't recreate all of that here). But it's still going to be very awkward when one puts normal people and the U.S. federal government on the same list. If we note on the table above that Typical people ($30,000/year) and U.S. Unearthly revenues ($666 billion/year) are 7 ranks apart, then the simplest geometric model would be to have each rank represent a multiplier of the 7th-root of (666 billion/30 thousand) = 7th root of (22 million) = about 11. Let's say it's times-10 per step to make it as simple as possible. Now, among the problems here is the attempt at equating personal revenues to large companies and countries. Looking at relative values today, the largest company is indeed about one order of magnitude below U.S. revenues. But the wealthiest person should be two orders of magnitude below. A "standard" millionaire should only be one step above a Typical middle-class person (not 4 steps higher, as shown above). Then if we look at the copious price charts, a lot of the prices seem to be out-of-sorts with this suggested times-10 model. A simple Axe is Good cost: say the weekly income of a Good-resourced person, so $300,000/50 = $6,000. A standard Sedan is Remarkable cost, suggesting the weekly revenue of a "large business", i.e., $30 million/50 = $600,000.
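Stepping back, the rank-spacing arithmetic above is quick to verify:

```python
# Seven ranks span Typical personal income ($30,000/year) up to U.S.
# "Unearthly" revenues ($666 billion/year); the implied per-rank multiplier
# is the 7th root of that ratio.

ratio = 666e9 / 30e3           # about 22 million
per_rank = ratio ** (1 / 7)
print(round(per_rank, 1))      # about 11, simplified to times-10 in the text
```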
A large Office Building (30+ floors) is weirdly set at a cost of Shift-Z, that is, 3 steps beyond what any Earthly entity can actually afford (around $6 trillion?). Maybe it's unfair for me to pick on cases like these; I'll stop for now. But you can sort of imagine trying to massage this system and just never getting rid of the many short corners. Now, one thing I noticed recently is that the 1991 Revised rules, which mostly just edit and repackage the prior Advanced Rules under a different name, have yet another go at this. It gives a fairly brief table of about 50 example Resource ranks (Revised Basic Book p. 41), including salaries and costs of many common comic-book items, and it has the distinct advantage of leaving out the attempt at including national governments. I took that table and did some research to fill in current real-world estimated dollar values, and then a regression on the logarithms of those values, broadly expecting the standard MSH Resource lunacy to appear. But what I found was actually not the most crazy thing I've ever seen: You can draw a simple straight regression line through that data, including the origin (0, 0), and have it be a 97% correlated match. The indicated model of f(x) = 0.80x means that the cost-multiplier for x ranks should be about 10^(0.8x); since 10^0.8 ~ 6.3, we could say roughly that each rank here represents about ×6 value over the preceding one (perhaps not what I'd have picked tabula rasa, but a more gentle advancement than the previously considered ×10 one). If we pick the 0-rank to be $1 cost, then the ranks represent costs with perhaps lower bounds of $6, $40, $250, $1,500, $10,000, etc. for Feeble, Poor, Typical, Good, Excellent, and so forth (and annual salaries of about 50 times those numbers). The other costs in this version of the rules are -- surprisingly -- kind of consistent with that model.
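The implied cost ladder from that fit can be tabulated directly, anchoring rank 0 at $1 as above:

```python
# Cost model from the regression: multiplier 10**(0.8 * rank), i.e. about
# x6.3 per rank, anchored at $1 for rank 0.

def rank_cost(rank, slope=0.8, base=1.0):
    return base * 10 ** (slope * rank)

for rank, name in enumerate(["(baseline)", "Feeble", "Poor", "Typical",
                             "Good", "Excellent"]):
    print(name, round(rank_cost(rank)))
# Feeble 6, Poor 40, Typical 251, Good 1585, Excellent 10000 -- i.e. the
# rounded $6 / $40 / $250 / $1,500 / $10,000 lower bounds quoted above.
```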
I could find a half-dozen items in the given list off from the real-world estimated value by 2 ranks, but nothing off by more than that. Disclosure: I did put my thumb on the scale here a tiny bit by re-interpreting a few of the items on the list from my first estimates. For example: Low-rank hotel costs I interpreted as per-night, whereas higher-ranked apartments I took as monthly rentals (none are defined one way or another in the published list). For "Private Plane" I used the cost of a multi-engine Piper instead of, say, a corporate jet. I used entry-level "Old Masters" artwork at around $10 million, instead of the world-record $450 million for a da Vinci painting in 2017 (and likewise for examples of "Archaic Texts"). At the top end of this scale, the Mega-corporation does get promoted 2 ranks from Unearthly to Shift-Y (judging from the example of Saudi Aramco's $356B/year revenue; identified as the one real-world example in the Wikipedia Megacorporation article). If we were to include the U.S. federal government, then that would come in at the Shift-Z level (based on revenues of $3.5T/year). In summary: This is now a system that I think I could use for Marvel RPG purchasing power, and be able to estimate and convert real-world prices into in-game mechanics pretty easily, and not think I'm going to stumble over things that are obviously insane and broken on a regular basis. I did massage a small number of the given ranks in those rules and printed a copy for my MSH house rules. Data and analysis in the spreadsheet below if you want to see it. Excelsior! Monday, July 8, 2019 24 Hours of D&D Over the July 4th weekend, our Wandering DMs channel livecast a total of 24 hours, 13 minutes, and 16 seconds of D&D play, with us battling for our lives in the lowest depths of Dyson's Delve. That's all available at our Wandering DMs YouTube channel if you want to check it out.
I'm currently crashing and my throat is pretty torn up from yelling in terror and laughing hysterically over the weekend. I'll point you over to Paul's Gameblog for more specifics and links to the individual episode/sessions. Hope your holiday was half as awesome as ours! More live D&D play than you can shake a magic sword at. Wednesday, July 3, 2019 A quick reminder that the Wandering DMs channel plans to livestream nonstop D&D play (well: some stops for meals and sleep and pool time) all this holiday weekend from the evening of Thursday July 4 to Sunday July 7th. This will feature birthday-boy Paul DM'ing sessions of Dyson's Delve using our OED house rules for original D&D, and myself playing with a number of our friends and family. Simulcast on these fine sites: As a preview, here's "the story so far" from our sessions one year ago: Hope you can tune in at some point when you've got downtime from your own weekend festivities. Tell us what you think! :-) Monday, July 1, 2019 Carrion Crawler Coaching The Facebook AD&D group had an interesting question posed the other day: "Okay DMs, what was the biggest mistake you ever made trying to homebrew something?" Here's one response that caught my eye: I kind of really love the honesty here. The major reason I love this is as a case-study that even the biggest-name D&D principals didn't always get things right the first time. First, it serves as a great counterexample to the camp of fundamentalist players who argue that everything in a given edition of D&D is perfect, beyond critique or improvement, and intentional in all ways by the original author (although, am I unwise to spend any time responding to that camp?). Second, it serves to highlight that gauging the danger level of a given monster is not something that even the most experienced DMs can do correctly by sight or instinct. 
Rather, it needs serious large-scale playtesting -- which I would argue requires some component of computer simulation to reach the right scale. Consider the Arena/Monster Metrics program (and related blog posts here you can search for) that we've developed to assess Equivalent Hit Dice (EHD) measurements for monsters -- results available in the OED Monster Database. Consider that carrion crawlers and other zero-damage monsters were highlighted as particularly broken in Turnbull's MonsterMarks and related a-la-carte point-buy systems. Recall that the stated "monster level" for carrion crawlers jumped around radically in early versions of D&D -- just 2nd level (of 6; say, 33%) in their first appearance in Sup-I (p. 64); then up to 6th level (of 10; so, 60%) by the time of the AD&D DMG (p. 178). I think that's the single biggest adjustment for any individual monster between those editions. Regarding Mentzer's comment above, I asked a follow-up question: "I've seen some people play that a crawler can only attack one PC at a time (w/all 8 atks), others they can attack 8 PCs at once. How'd you play that?" His reply: So that's clearly "more than one", normally around 3-4 from how I read that. Interestingly, if the designers had a systematic model like our Arena program available, then the danger of carrion crawlers would have been immediately evident. If I run the Monster Metrics assessment with the crawler allowed 8 attacks against different opponents, it estimates an EHD value of 12 (i.e., roughly 50% likely to win a fight against 4 3rd-level fighters, or 3 4th-level fighters), putting it in league with the top 6th-level bracket in OD&D (comparable to a chimera, gorgon, balrog, etc.). That's why in my own game for some time I've actually house-ruled them to halve their attacks, i.e., a total of 4 attacks -- basically the same as what Mentzer suggests for targetable opponents here. At this level carrion crawlers are estimated to be EHD 9, or about 5th level in OD&D terms.
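To make the simulation point concrete, here is a toy Monte Carlo duel in the spirit of the Arena program. Everything here (hit numbers, save target, fighter hit dice, the paralysis handling) is invented and drastically simplified for illustration; it is not the actual Monster Metrics code:

```python
import random

def crawler_win_rate(n_attacks, n_fighters=4, trials=4000, seed=1):
    """Toy estimate of how often a carrion crawler defeats a fighter party.
    Simplified stand-in for an Arena-style simulation: a hit followed by a
    failed save paralyzes one fighter out of the fight.  All combat numbers
    below are invented for illustration."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        crawler_hp = sum(rng.randint(1, 8) for _ in range(3)) + 1  # 3+1 HD
        active = n_fighters                  # fighters still un-paralyzed
        while crawler_hp > 0 and active > 0:
            # Crawler tentacles, spread over different targets
            for _ in range(n_attacks):
                if active == 0:
                    break
                hit = rng.randint(1, 20) >= 13       # invented to-hit number
                saved = rng.randint(1, 20) >= 12     # invented save target
                if hit and not saved:
                    active -= 1
            # Surviving fighters swing back (d8 damage on 10+, also invented)
            for _ in range(active):
                if rng.randint(1, 20) >= 10:
                    crawler_hp -= rng.randint(1, 8)
        wins += (active == 0)
    return wins / trials

print(crawler_win_rate(8), crawler_win_rate(4))  # full attacks vs. halved
```

Even this crude sketch shows the qualitative effect the real program quantifies: doubling the tentacle attacks sharply raises the crawler's win rate against the same party.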
Anyway, big props to Mr. Mentzer for this important peek behind the screen, and the not-too-surprising lesson that we can always continue to make improvements to our game art. Don't forget about our July 4th game with WanderingDMs on YouTube and Twitch: broadcasting live play all weekend, four days straight! (Starts Thursday night.)
# How do I respec from DPS to healing? My main character is a druid that just hit level 80 on Tuesday. While leveling I have played exclusively in cat form. This worked great for solo play and 5-man dungeons. However, my guild is completely overloaded with tanks and DPS (mostly DKs, hunters and warlocks) but has essentially no healers. So I've been considering either respeccing or dual-speccing into healing. My questions and concerns are: • I'm already aware that I will need different armor, since healer and DPS armor have completely different stats. What is the best way to prepare for a spec change like this? • How easy is it to "learn" healing, and how should I go about it? • The reason I'm interested in dual-spec is that I still love my feral cat. Is it better to do it this way (despite the extra gear requirements) or to focus on one spec? 0 2019-05-04 09:55:07 Source Share Answers: 3 Go for dual-spec. Use your main spec for hard encounters and roll on healer items for your off-spec (just like before, when you didn't have two talent trees available for "free"). You can also say you wanna roll only on OS (off-spec) and pass on main-spec, so someone else can get an item for their OS and pass for you, and so on. 0 2019-05-08 05:46:36 Source The short answer is, you are going to have to gear up for healing, as it calls for completely different equipment than feral. Though if you don't have a rogue in your group, I'm sure you would be able to pick up feral gear alongside your party healing gear.
Every PuG is different; just ask if you can roll on things for your off-spec, but don't take gear from people who might need it for the instance you are doing. Most people are understanding in 5-mans that you will want gear for multiple specs. Anyway, I've put together a list of links that I think will give you a hand in understanding the Druid as a healing class, along with available options from lots of different players. To start, here's a quick summary of how people view druid healers: Resto Druids: Druids are arguably the most essential healer in any 10/25 composition, primarily because of Blizzard's encounter/item design during and after Ulduar. After Burning Crusade, Druids were searching for a niche because Blizzard nerfed their hugely outstanding Lifebloom. I think a lot of Druids went into Wrath looking to continue being tank healers, and it's something they're still good at. However, they found their niche with raid healing via something called Restoration Blanketing. Basically, it just means that they put Rejuvenation on as many raid members as their haste/positioning allows. Blizzard reinforced this through incredibly strong set bonuses for T8 and T9 that boost the power of Rejuvenation enormously. In addition, the pulsing, constant raid damage of a lot of Ulduar/ToC fights basically plays right into Druids' hands. Rejuv blanketing produces a staggering amount of throughput that is virtually unrivaled. Like I said, rolling HoTs on tanks is also a strength of Druids and really smooths out tank burst, though I rarely assign them to tank healing unless really necessary. They can tank heal just fine, but it's a waste of their real power.
There has also been an incredibly valuable survey of Druid healers called "Circle of Healers", with many blog post responses: ## Druids Other topics that might help: 0 2019-05-08 05:37:16 Source The best (and really the only) way to gear up is to farm heroics. Emblem of Triumph goes a long way. Learning the basics of healing is easy: if they die, it's bad. It's doing it right that's really hard, especially on resolve- and movement-heavy fights. You absolutely need an addon that will show you the whole raid, however. Dual spec is definitely worth it, especially for the transition period. One extra thing to consider, though: why doesn't your guild have more healers? Does the guild blame them for everything? Are they pushed into surprise raids with "If you don't come, we can't raid and it's all your fault!"? Healer burnout is a widespread problem in some kinds of guilds; don't become a victim of it yourself. 0 2019-05-08 00:11:36 Source
Help differentiating energy wrt time. 1. Jan 27, 2013 caius 1. The problem statement, all variables and given/known data I have a problem where I have a mass suspended in a system of springs. I need to differentiate the equation wrt time so I can show equivalence with Newton's second law. The mass and springs are vertically aligned so the motion is in one dimension. The actual problem has several springs, but for simplicity I am describing a system with just two. The equation below I think shows the total energy of the system. 2. Relevant equations E = 1/2 mv^2 + k(x-l)^2 + 2k(l-x)^2 - mgx where m = mass, v = velocity, k = stiffness, x = current position and l = spring's natural length. 3. The attempt at a solution I think the way to approach it is to substitute dx/dt in place of the velocity, however I can't see what to do with the spring parts. I seem to have some kind of mental block on this, and it's very frustrating. Any assistance on how to approach it would be gratefully received! 2. Jan 28, 2013 apelling E = 1/2 mv^2 + k(x-l)^2 + 2k(l-x)^2 - mgx - I don't think this is quite right. I am guessing that we have one spring above the mass and one below? The terms for elastic spring energy are based on (1/2)ke^2, where e is the spring extension. Isn't the extension of the springs l+x and l-x? Are both springs of the same stiffness constant k? Perhaps there is a typo here? Make these corrections and multiply out the brackets before differentiating. Remember $$\frac{d}{dt}v^2=2v\frac{dv}{dt}=2\frac{dx}{dt}a$$ where a is acceleration. So the kinetic energy term after differentiating will have ma in it, which is starting to look like N2L. It will also have dx/dt in it. All other terms of your equation will either go to zero because they are independent of t or be a multiple of dx/dt. So dx/dt will cancel throughout. This will leave terms that are all forces. 3. Jan 28, 2013 haruspex I don't see how it could be l+x.
l-x and x-l make sense if the springs are in series with the endpoints fixed 2l apart, and x being the position of the join. But then, the expression simplifies to 3k(l-x)^2. 4. Jan 28, 2013 caius Thanks for your suggestions. I think I have it now. The lengths were the correct way round, but I didn't make it quite clear in the question I asked. The distance between the two fixtures is known, so the deformation can be expressed in relation to that. The answer I got did end up as m.a equated with the spring forces according to Hooke's law and gravity, which is what I needed. It's amazing the difference a night's sleep makes. 5. Jan 29, 2013 apelling I was thinking l was the extension of the springs when at equilibrium and x was the displacement from equilibrium. Anyhow it does not matter much since the constants drop out during differentiation.
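For what it's worth, the whole computation can be checked with a computer algebra system. Here is a minimal SymPy sketch using the energy expression exactly as posted (taking the spring prefactors at face value):

```python
import sympy as sp

t, m, k, l, g = sp.symbols('t m k l g')
x = sp.Function('x')(t)
v = sp.diff(x, t)

# Total energy exactly as written in the original post
E = sp.Rational(1, 2)*m*v**2 + k*(x - l)**2 + 2*k*(l - x)**2 - m*g*x

# Energy conservation: dE/dt = 0.  Every term carries a factor dx/dt,
# so dividing it out leaves Newton's second law.
dEdt = sp.diff(E, t)
eom = sp.expand(sp.simplify(dEdt / v))
print(eom)  # equals m*x'' + 6*k*(x - l) - m*g; set to zero for N2L
```

Dividing dE/dt through by dx/dt, exactly as apelling describes, leaves ma together with the Hooke's-law and gravity force terms.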
# How do you find all the real and complex roots of x^6-64=0? Jan 20, 2016 $x = \pm 2 , - 1 \pm \sqrt{3} i , 1 \pm \sqrt{3} i$ #### Explanation: Knowing the following factoring techniques is imperative: • Difference of squares: ${a}^{2} - {b}^{2} = \left(a + b\right) \left(a - b\right)$ • Sum of cubes: ${a}^{3} + {b}^{3} = \left(a + b\right) \left({a}^{2} - a b + {b}^{2}\right)$ • Difference of cubes: ${a}^{3} - {b}^{3} = \left(a - b\right) \left({a}^{2} + a b + {b}^{2}\right)$ ${x}^{6} - 64 = 0$ Apply difference of squares: ${x}^{6} = {\left({x}^{3}\right)}^{2} , 64 = {8}^{2}$. $\textcolor{red}{\left({x}^{3} + 8\right)} \textcolor{green}{\left({x}^{3} - 8\right)} = 0$ Use both sum & difference of cubes: ${x}^{3} = {\left(x\right)}^{3} , 8 = {2}^{3}$. $\textcolor{red}{\left(x + 2\right) \left({x}^{2} - 2 x + 4\right)} \textcolor{green}{\left(x - 2\right) \left({x}^{2} + 2 x + 4\right)} = 0$ From here, set each portion of the product equal to $0$. The linear factors $x + 2$ and $x - 2$ are easiest: $x + 2 = 0 \Rightarrow \textcolor{blue}{x = -2}$ and $x - 2 = 0 \Rightarrow \textcolor{blue}{x = 2}$ The following two quadratic factors can be solved via completing the square or using the quadratic formula. Solving ${x}^{2} - 2 x + 4 = 0$: $\textcolor{blue}{x} = \frac{2 \pm \sqrt{4 - 16}}{2} = \frac{2 \pm 2 \sqrt{3} i}{2} \textcolor{blue}{= 1 \pm \sqrt{3} i}$ Solving ${x}^{2} + 2 x + 4 = 0$: $\textcolor{blue}{x} = \frac{-2 \pm \sqrt{4 - 16}}{2} = \frac{-2 \pm 2 \sqrt{3} i}{2} \textcolor{blue}{= -1 \pm \sqrt{3} i}$
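As a quick numerical cross-check of the factoring above, one can ask NumPy for the roots of x^6 - 64 (this only verifies the six values; it is not how the algebraic factoring proceeds):

```python
import numpy as np

# Roots of x^6 - 64 = 0 via the companion-matrix method
roots = np.roots([1, 0, 0, 0, 0, 0, -64])

# Expected from the factoring: +-2, 1 +- sqrt(3)i, -1 +- sqrt(3)i
expected = [2, -2,
            1 + np.sqrt(3)*1j, 1 - np.sqrt(3)*1j,
            -1 + np.sqrt(3)*1j, -1 - np.sqrt(3)*1j]
for e in expected:
    # every expected root matches some numerical root to high precision
    assert min(abs(roots - e)) < 1e-8
print(sorted(roots, key=lambda z: (z.real, z.imag)))
```

Note that all six roots have modulus 64^(1/6) = 2, i.e. they sit evenly spaced on the circle of radius 2 in the complex plane.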
## Commented out December 17, 2007 Posted by Alexandre Borovik in Uncategorized. I want to add that I plan to touch on, in our blog, a theme which was commented out from our grant proposal as excessively controversial: % \subsection*{And the last, but not least\dots} % % We shall try to sort out the mess of misunderstanding surrounding % the concept of infinity in literature on mathematical education. A language question: is the expression “to comment out” used outside of the TeX and programming communities? The issue of infinity in education is interesting in view of Peter McBurney’s comment on my post “Case Study III: Computer science: The bestiary of potential infinities”: Your example from Computer Science reminds me of something often forgotten or overlooked in present-day discussions of infinite structures: that some of the motivation for the study of the infinite in mathematics in the 19th and early 20th centuries came from physics where (strange as it may seem to a modern mathematician or a computer scientist) the infinite was used as an approximation to the very-large-finite. I had a personal learning experience related to that issue. It so happened that I was a guinea pig in a bold educational experiment: at my boarding school, my lecturer in mathematics attempted to build the entire calculus in terms of finite elements. It sounded like a good idea at the time: physicists formulate their equations in terms of finite differences — working with finite elements of volume, mass, etc., then they take the limit $\Delta V\rightarrow 0$ and replace $\Delta V$ by the differential $dV$, etc., getting a differential equation instead of the original finite difference equation. After that, numerical analysts solve this equation by replacing it with an equation in finite differences. The question: “Why bother with the differential equations?” is quite natural. Hence my lecturer bravely started to re-build, from scratch, calculus in terms of finite differences.
Even braver was his decision to test it on schoolchildren. Have you ever tried to prove, within the $\epsilon$-$\delta$ language for limits, the continuity of the function $y = x^{m/n}$ at an arbitrary point $x_0$ by a direct explicit computation of $\delta$ in terms of $\epsilon$? The scale of the disaster became apparent only when my friends and I started, in revising for exams, to actually read the mimeographed lecture notes. We realized very soon that we had stronger feelings about mathematical rigor than our lecturer possibly had (or was prepared to admit, being a very good and practically minded numerical analyst); perhaps my teacher could be excused because it was not possible to squeeze the material into a short lecture course without sacrificing rigor. So we started to recover missing links, and search through books for proofs, etc. The ambitious project deflated, like a pricked balloon, and started to converge to a good traditional calculus course. The sheer excitement of the hunt for another error in the lecture notes still stays with me. And I learned to love actual infinity — it makes life so much easier. My story, however, has a deeper methodological aspect. Vladimir Arnold forcefully stated in one of his books that it is wrong to think about finite difference equations as approximations of differential equations. It is the differential equation which approximates the finite difference laws of physics; it is the result of taking an asymptotic limit at zero. Being an approximation, it is easier to solve and study. In support of his thesis, Arnold refers to a scene almost everyone has seen: old tires hanging on sea piers to protect boats from bumps.
If you control a boat by measuring its speed and distance from the pier and select the acceleration of the boat as a continuous function of the speed and distance, you can come to a complete stop precisely at the wall of the pier, but only after infinite time: this is an immediate consequence of the uniqueness theorem for solutions of differential equations. To complete the task in sensible time, you have to allow your boat to gently bump into the pier. The asymptotic at zero is not always an ideal solution in the real world. But it is easier to analyze! [Here, I cannibalise a fragment from my book.] 1. Andy - December 17, 2007 I love the idea that the “infinite was used as an approximation to the very-large-finite.” That’s how I view it! 2. Beans - December 17, 2007 I knew there was a reason why I really dislike Numerical Analysis! “We realized very soon that we had stronger feelings about mathematical rigor..” I think this is due to the course content too, and how much of it nowadays really depends on computers. When it comes to giving a theorem about convergence, it is left to the appendix because “it was too frightening and might scare us”. Pfft. Then I wonder why I hate this course. I risk offending applied mathematicians, but they seem to say “I will leave the proof of such and such a theorem to the pure mathematicians”. \end{aside!} 3. Manifestation of Infinity « A Dialogue on Infinity - December 18, 2007 […] of Infinity According to Vladimir Arnold (see my previous post), these are manifestations of infinity in the real […] 4. Kea - December 18, 2007 I never knew Arnold said that. Thanks a lot! It’s very much how I feel about physics. 5. Mitch - December 19, 2007 “commented out of” – 308,000 google hits (mostly programmers but not much tex) “commented out from” – 18,100 google hits (top hit, your site) 6. Já agora, o que é o Infinito?
« Lost in my thoughts - December 20, 2007 […] post on the blog A Dialogue on Infinity, Alexandre Borovik writes about an experience that one of his […] 7. anonym - December 21, 2007 We realized very soon that we had stronger feelings about mathematical rigor than our lecturer possibly had (or was prepared to admit, being a very good and practically minded numerical analyst); perhaps my teacher could be excused because it was not possible to squeeze the material into a short lecture course without sacrificing rigor. Sacrificing the rigour might, or even should, have been part of the point, in an attempt to develop the intuition first, cf. an essay of Poincare on intuition and logic. Some physicists do start teaching basic analysis using finite approximations; it is only later that rigorous definitions involving infinity are given. 8. Peter - December 21, 2007 I’ve never met a physicist with a sense of mathematical rigour. The whole discipline of physics is premised on the making of grand, sweeping claims (called “laws of nature”) which are invariably contradicted by real-life details in every particular case. Physics is a theory based on abstraction away from these confounding details. The same problem bedevils economics. On the question (comment #7) about development of intuition: Surely the point of rigour in analysis is to demonstrate to us that our (raw) intuition is often dead wrong. Nothing could be more intuitive, for instance, than that the infinite limit of a converging sequence of continuous functions is also continuous, or that a function cannot be both everywhere continuous and nowhere differentiable, or that it is impossible to fill a 2-dimensional space with a 1-dimensional curve. Rigour and intuition are polar opposites, not complementary, at least in any math involving the infinite. 9.
Alexandre Borovik - December 22, 2007 Peter wrote: “Rigour and intuition are polar opposites, not complementary, at least in any math involving the infinite.” I see it differently: rigour is a foundation of intuition. 10. serg271 - December 24, 2007 Peter wrote: Surely the point of rigour in analysis is to demonstrate to us that our (raw) intuition is often dead wrong. Nothing could be more intuitive, for instance, than that the infinite limit of a converging sequence of continuous functions is also continuous, or that a function cannot be both everywhere continuous and nowhere differentiable, or that it is impossible to fill a 2-dimensional space with a 1-dimensional curve. I would say those are examples of perverted intuition, which has lost touch with physics or geometry. It’s pretty intuitive that you can stuff a box full of very thin threads, and if you bend a wire a lot you will get a scratchy saw-like shape. Of course there are examples of bad intuition, like for example some probability tricks, but they mostly depend not on lack of rigor, but on bad understanding of the underlying concepts. Of course rigor by itself often helps, kind of like a syntactic check. 11. Lucas - December 30, 2007 I think that an infinite sequence of continuous functions having a discontinuous limit is very intuitive. A nowhere differentiable continuous function is much less intuitive, as are space filling curves. I had one of my students (a CS major) ask me this semester why we bother with Turing machines, since real computers have only a finite amount of memory. I tried to explain how we frequently approximate finite things with infinite things, but I don’t think he really got it. 12. Be Sex Binary, We Are Not « Polytropy - July 7, 2020 […] his book, Mathematics Under the Microscope (2007), as well as in his blog, Alexandre Borovik remarks on the challenge of taking a finitistic course of calculus, while […]
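Arnold's boat-and-pier example from the post above can be mimicked numerically. In this toy sketch (the linear feedback law is my own invention, chosen critically damped, and not anything from Arnold's book), the time to get within a distance eps of the pier grows without bound as eps shrinks, which is the numerical face of "a complete stop at the wall takes infinite time":

```python
# Toy model of Arnold's boat: acceleration is a continuous (linear) feedback
# on distance x and speed v, chosen critically damped so x(t) = (1+t)e^{-t}.
def time_to_within(eps, dt=1e-3, t_max=100.0):
    x, v, t = 1.0, 0.0, 0.0        # start 1 unit from the pier, at rest
    while x > eps and t < t_max:
        a = -2.0*v - x             # smooth feedback a(x, v); never bumps
        v += a*dt                  # explicit Euler step
        x += v*dt
        t += dt
    return t

for eps in (1e-1, 1e-3, 1e-6):
    print(f"within {eps:g} of the pier after t = {time_to_within(eps):.2f}")
```

Each extra factor of 1000 in precision costs roughly a constant extra chunk of time, so exact contact is never reached; a discontinuous "bump" control ends the approach in finite time.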
# A novel AI device for real-time optical characterization of colorectal polyps ## Abstract Accurate in-vivo optical characterization of colorectal polyps is key to select the optimal treatment regimen during colonoscopy. However, reported accuracies vary widely among endoscopists. We developed a novel intelligent medical device able to seamlessly operate in real-time using conventional white light (WL) endoscopy video stream without virtual chromoendoscopy (blue light, BL). In this work, we evaluated the standalone performance of this computer-aided diagnosis device (CADx) on a prospectively acquired dataset of unaltered colonoscopy videos. An international group of endoscopists performed optical characterization of each polyp acquired in a prospective study, blinded to both histology and CADx result, by means of an online platform enabling careful video assessment. Colorectal polyps were categorized by reviewers, subdivided into 10 experts and 11 non-experts endoscopists, and by the CADx as either “adenoma” or “non-adenoma”. A total of 513 polyps from 165 patients were assessed. CADx accuracy in WL was found comparable to the accuracy of expert endoscopists (CADxWL/Exp; OR 1.211 [0.766–1.915]) using histopathology as the reference standard. Moreover, CADx accuracy in WL was found superior to the accuracy of non-expert endoscopists (CADxWL/NonExp; OR 1.875 [1.191–2.953]), and CADx accuracy in BL was found comparable to it (CADxBL/CADxWL; OR 0.886 [0.612–1.282]).
The proposed intelligent device shows the potential to support non-expert endoscopists in systematically reaching the performances of expert endoscopists in optical characterization. ## Introduction Colorectal cancer is one of the most common malignancies1. Optical colonoscopy with white light (WL) endoscopy is the gold standard for the detection and resection of colorectal mucosal polyps and its adoption in population-based screening programs has resulted in a significant reduction in the incidence and mortality of colorectal cancer2. Accurate real-time visual differentiation between adenomatous and non-adenomatous polyps (optical characterization, OC) during colonoscopy is clinically relevant to select optimal treatment regimen, avoid inappropriate endoscopic resection, improve cost-effectiveness, and reduce the number of polypectomies3,4. In order to standardize OC, several classification schemes have been proposed with the aim of being incorporated into clinical practice5,6,7. These classifications are based on combinations of vascular and mucosal patterns, specific features of the polyp surface, and the presence of a cloudy or irregular appearance and indistinct borders. Moreover, although optical colonoscopy is performed using WL large spectrum illumination, all these classification schemes are based on virtual chromoendoscopy illumination (narrow-spectrum blue light [BL]8) able to enhance the appearance of superficial mucosal vascular patterns. Nevertheless, these classifications in BL showed significant inter- and intra-observer variability when prospectively evaluated, limiting their widespread adoption by the endoscopic community9,10. Endoscopy procedures are an ideal arena for the development of intelligent medical devices11,12. This is due to the huge quantity of information that the physician needs to extract and interpret from the video flow, in real-time, under time pressure, and with repetitive modalities during long working hours. 
In similar situations, where humans may act in a non-Bayesian way by violating probabilistic rules and thus making inconsistent decisions, artificial intelligence (AI) has proven to be a valuable tool to help humans in making better decisions13. The first generation of AI-based medical devices in colonoscopy authorized by regulatory bodies has focused on improving the task of polyp detection14,15. Different randomized controlled trials have demonstrated the ability of such computer-aided detection (CADe) devices to improve the detection of precancerous polyps during colonoscopy16,17,18,19. However, AI-based algorithms in endoscopy also have the potential to support physicians in the task of OC (CADx), thereby reducing the limitations described above. Nevertheless, AI algorithms proposed for the task of OC have failed to be implemented in mainstream clinical practice so far20,21,22. This might be due to limitations in design that prevent seamless integration into clinical workflow, such as classifying still images rather than videos22, or requiring additional technology such as virtual chromoendoscopy (BL) or endocytoscopy as a prerequisite to operate21. In this work, we propose an intelligent medical device for real-time OC of colorectal polyps that can overcome the limitations of current solutions and can be integrated easily into clinical workflow. The device can operate on an unaltered conventional WL video stream without human intervention. We validate this AI on a prospectively acquired dataset with a multi-reader study design. For this purpose, we benchmark the performance of the AI against a group of expert endoscopists and a group of non-expert endoscopists. Our hypotheses are that AI accuracy is comparable to experts and superior to non-experts, with a substantial equivalence between performances in WL illumination and virtual chromoendoscopy (BL).
Our predictions were pre-registered before the start of the data gathering, together with the study plan and statistical models and analyses (available at https://osf.io/m5cxt). ## Results Figure 1 depicts the intended use in the clinical workflow of the proposed intelligent device. Briefly, the CADx system is designed to automatically activate when a new polyp is detected by a CADe detection algorithm in a colonoscopy video stream. For each polyp, the device overlays a frame-by-frame live decision specifying its binary histology (“adenoma” or “non-adenoma”). The CADx can also abstain from predicting the polyp histology in a frame, either by printing “no-prediction” if the system is unsure about the histology or “analyzing” if an insufficient number of features across multiple frames was detected. Example videoclips of the CADx real-time output for three polyps of the study are provided as Supplementary Videos. This study included lossless video recording and histology information on 513 prospectively acquired colorectal polyps (198 adenomas, 315 non-adenomas) in a total of 165 subjects (77 males, 88 females, mean age 66.6 ± 10.2). The proposed medical device was applied to each full-procedure video recording, hence reproducing the same frame-by-frame output that was shown in the clinical room. The full per-frame processing time of the device (including on-screen output visualization) was always below 60 ms, while the average CADx processing time was 2 ms, as per the device specifications. A total of 10 expert and 11 non-expert endoscopists performed OC of each polyp via an online platform enabling careful assessment of the video recordings. Video recordings contained imaging of the polyp using both WL and BL technology.
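One plausible way to realize the four on-screen states is sketched below. To be clear, the window size, thresholds, and running-mean aggregation are all hypothetical illustrations; the paper does not disclose the device's internal decision rule:

```python
from collections import deque

def frame_decisions(scores, window=16, min_frames=8, lo=0.35, hi=0.65):
    """Map per-frame P(adenoma) scores for a tracked polyp to the four
    on-screen states.  All thresholds here are hypothetical, not the
    device's actual logic."""
    buf = deque(maxlen=window)
    out = []
    for s in scores:
        buf.append(s)
        if len(buf) < min_frames:
            out.append("analyzing")        # too few features/frames so far
            continue
        p = sum(buf) / len(buf)            # running mean over recent frames
        if p >= hi:
            out.append("adenoma")
        elif p <= lo:
            out.append("non-adenoma")
        else:
            out.append("no-prediction")    # unsure: abstain
    return out

print(frame_decisions([0.9]*10))
```

Such a rule abstains early ("analyzing") until enough frames accumulate, then either commits to a class or keeps abstaining ("no-prediction") when the evidence stays ambiguous, mirroring the behavior described for the device.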
The expert reviewer group had an experience (measured in years of activity) of 12.3 ± 7.3 years [range: 6–29 years], while the non-expert reviewer group had an experience (measured in number of colonoscopies performed) of 363 ± 136 colonoscopies [range: 100–500 colonoscopies, range of years of experience: 1–3 years]. ### Performance comparisons Study endpoints were evaluated using both log-binomial regression and bootstrapping methods. Table 1 shows the results of group comparisons using log-binomial regression. In detail, CADx accuracy in WL was found to be non-inferior to the accuracy of expert endoscopists (CADxWL/Experts; OR 1.211 [0.766–1.915]; p < 0.001) using histopathology as a reference standard. Moreover, CADx accuracy in WL was found superior to the accuracy of non-expert endoscopists (CADxWL/Non-experts; OR 1.875 [1.191–2.953]; p = 0.003), and CADx accuracy in BL was found non-inferior to it (CADxBL/CADxWL; OR 0.886 [0.612–1.282]; p = 0.003). Performances of individual reviewers are reported in Supplementary Results 4. Figure 2 shows the results of group comparisons using the bootstrap method. In detail, the area under the curve (AUC) for CADx in WL (AUCWL: 0.8653 [0.8304–0.8967]) was found non-inferior to that of expert endoscopists (AUCExp: 0.8553 [0.8203–0.8881]). Moreover, CADx accuracy in WL was found superior to the accuracy of non-expert endoscopists (AUCNonExp: 0.7769 [0.7356–0.8171]), and CADx accuracy in BL was found non-inferior to it (AUCBL: 0.8545 [0.8141–0.8915]). Figure 3 shows the agreement of expert and non-expert endoscopists with ground truth, computed as the fraction of endoscopists correctly predicting the polyp histology. Notably, both experts and non-expert endoscopists were unanimously in disagreement with the ground truth for nine non-adenomatous polyps (five diminutive (≤5 mm), two small (6–9 mm), and two large (≥10 mm)) and six adenomatous (all diminutive) polyps. 
CADx classified the same nine non-adenomas and six adenomas in disagreement with histology and in agreement with endoscopists. Figure 4 shows example images of such polyps. ## Discussion Compared to previous studies in this area, our study contributed uniquely in the following aspects: we have developed for the first time an intelligent medical device that can perform the task of OC in real-time on unaltered conventional WL videos; the device does not need additional technology such as virtual chromoendoscopy on the endoscopy tower, that might slow down the clinical workflow; we have validated the device using a prospectively acquired dataset with a multi-reader design. In clinical practice, the task of OC performed during live colonoscopy is not a static assessment of a polyp portrait, but rather a fluid and dynamic process of decision build-up in the endoscopist’s brain. This process is heavily affected by polyp appearance and its morphological characteristics (i.e., location relative to folds, level of cleansing, size, etc.). Thus, the time needed to complete the assessment of a single polyp can vary wildly, ranging from a fraction of a second to several minutes. During this examination, it is very frequent for the endoscopist to change opinion for a given polyp, possibly jumping from adenoma to non-adenoma or vice-versa, whenever a particular illumination or viewing angle highlights a feature of a specific class. Consequently, the process of OC can be considered as a weighted average of different features over time. Previous studies on OC CADx failed to capture this dynamic process and likely over-inflated reported performances for several reasons. First, previous trials focused on the classification of single images, with the physician asked to freeze the video during live endoscopy, and subsequently submit the selected frame to the image classifier21,22. 
This approach introduces a bias, since the physician, especially if non-expert, might not select the most representative image of a polyp. Moreover, a single image fails to represent the complex variability of observing a polyp from different viewpoints, zoom levels, or illumination angles. For the very same reasons, assessing the performance of CADx on human-chosen snapshots or short videoclips of polyps20 is likely to over-inflate performance, since the CADx would be trained and validated only on near-perfect images. Second, previous studies required additional technologies such as endocytoscopy21 or proprietary virtual chromoendoscopy illuminations, preventing generalizability of results. Moreover, these approaches require the physician to manually engage and disengage the OC module, adding steps to the normal succession of tasks in real-life colonoscopy. To be used in a real-world setting, an AI device should integrate seamlessly into clinical practice. For this purpose, the intelligent device in this work is designed to engage automatically when a polyp is framed consistently, thus "interpreting" the wish of the physician to know more about the imaged region (Fig. 1). By the very same mechanism, it disengages automatically when normal navigation is resumed. This feature is possible because in our device the image classification process is cascaded after the detection process, so the classifier can "follow" the polyp as it moves around the image frame. Other CADx algorithms that are not linked to a detection module (CADe) are bound to output a classification on the entire image. An additional benefit of this design is the ability to track and characterize different polyps even when they appear simultaneously in the image frame.
Another important finding of our study was the ability to reach very high accuracy using conventional wide-spectrum WL illumination, indirectly showing that WL delivers all the information needed to be as accurate as experts who use BL technology. A noteworthy result of this study was the disagreement observed between a large panel of endoscopists with heterogeneous expertise and histology (Fig. 3). A similar phenomenon, although measured against the output of a single very experienced senior endoscopist, has been reported recently23,24. The reasons for this disagreement could be multiple and related to specimen retrieval and subsequent processing, rather than to misdiagnosis. This could explain our observation that the effect is exacerbated for diminutive polyps (≤5 mm), for which manual handling is more difficult and thus more prone to error. This observation led Shahidi et al.24 to question pathology as the gold standard for assessing diminutive colorectal polyps. Although this position could be questioned25, it suggests a potential arbitration role for AI when endoscopist and pathologist assessments of the same polyp diverge. This work has limitations. First, the dataset acquired for performance assessment originates from a single center, although different endoscope manufacturers were used. Second, the current version of CADx characterizes polyps according to a two-class model that includes sessile serrated polyps in the non-adenoma class26, forcing the endoscopist to look for serrated features, as likewise happens with the WASP criteria27. Although identifying sessile serrated polyps as a separate class could be beneficial, the current size of the dataset used for training the CADx does not support a three-class model with reliable accuracy. Future CADx releases including this category as a separate output are warranted.
Real-time colonoscopy is a fertile area for developing intelligent devices that effectively allocate tasks between humans and AI, thereby achieving an outcome superior to that of either part alone. In this context, the medical device described in this study may allow non-experts to leverage the predictive power of AI while using their own knowledge to make a choice from the predictions of the AI. In conclusion, this device offers the potential to standardize the practice of OC and to ensure in all colonoscopies the same accuracy that is otherwise met only by a few very experienced expert physicians.

## Methods

The CADx system comprises two online algorithms working on the outputs of two convolutional neural network models. The first convolutional neural network model is named Polyp Characterization Network and has a two-fold purpose: (1) to classify each detected polyp in a single video frame as an "adenoma" or "non-adenoma" polyp, and (2) to provide a polyp image appearance descriptor for each detected polyp in the current frame to be used for polyp tracking. The second convolutional neural network, named Polyp Imaging Quality Network, is responsible for providing an imaging quality score to each detected polyp, expressing how clearly the polyp features are imaged in the current video frame. This second network is needed because low-quality images can introduce noise into the spatio-temporal reasoning of the CADx. The first online algorithm is responsible for polyp tracking across multiple frames, while the second is an online temporal aggregation algorithm that aggregates frame-by-frame classification and imaging quality information for each tracked polyp and provides a live decision based on a moving temporal window.
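The first of these online algorithms must pair each new detection with an already-followed polyp. A minimal sketch of such an IoU-based spatial assignment, assuming axis-aligned boxes and SciPy's Hungarian solver (function and parameter names are illustrative, not the device's actual code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


def assign_detections(tracked_boxes, new_boxes, min_iou=0.3):
    """Match new detections to tracked polyps by maximizing total IoU.

    Returns (tracked_idx, new_idx) pairs; unmatched detections would be
    handed to an appearance-based (cosine-distance) second stage.
    """
    cost = np.array([[1.0 - iou(t, d) for d in new_boxes] for t in tracked_boxes])
    rows, cols = linear_sum_assignment(cost)  # Hungarian (Kuhn–Munkres) algorithm
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]
```

An appearance-based pass over the remaining detections would follow the same cost-matrix pattern, with cosine distances between feature descriptors in place of `1 − IoU`.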
### Polyp characterization network

Given an input frame $\mathcal{I}_t$ at time t of a regular colonoscopy video and a set of N polyp detections $\mathcal{B}_t = \{\mathbf{b}_{t,i}\}_{i=1}^{N}$ as detected by any polyp detection model on that frame, the first component of the proposed AI system is a learning-based model that learns from data the mapping between the image content of each bounding box $\mathbf{b}_{t,i}$ and the histology $h_i$ of the polyp it contains. We employ ResNet18, a deep convolutional neural network commonly used for classifying histopathological images28, for this task. In order to input the polyp images at an appropriate resolution while providing some contextual information, the input of the characterization network is an image $\mathcal{X}_{t,i}$ obtained by cropping the input frame $\mathcal{I}_t$ around the bounding box $\mathbf{b}_{t,i}$ plus a 50-pixel margin, rescaled to 512 × 512 pixels. The output of the characterization network is a score $c_{t,i} \in [0,1]$ expressing the probability that the content of the bounding box is an adenoma (1) or a non-adenoma (0) polyp. By applying the classification network to all N detections in a frame, a set of characterization scores $\mathcal{C}_t = \{c_{t,i}\}_{i=1}^{N}$ can hence be obtained. The characterization model is trained using binary classification cross entropy (CE) as a loss function; the ground-truth histology is represented as a two-dimensional vector $\mathbf{y}_{t,i}$ and its predicted value as softmax scores $\hat{\mathbf{y}}_{t,i}$, giving $\mathcal{L}_{cl} = \mathcal{L}_{CE}(\hat{\mathbf{y}}_{t,i}, \mathbf{y}_{t,i})$. The mixup training method was adopted to provide a better-calibrated network and to reduce overfitting29.

### Polyp re-identification algorithm

The task of OC is performed by a human by considering many subsequent frames before expressing a decision.
The proposed AI system aims at mimicking this decision-making process by producing a frame-by-frame, temporally weighted decision for each detected polyp once enough confidence about a prediction has been acquired. To achieve this, an important milestone is the ability to follow a polyp across multiple frames of a colonoscopy video. In our system, we propose an online polyp re-identification algorithm that exploits both single-frame polyp appearance and spatio-temporal information for this task. To extract single-frame polyp appearance information, we modify the characterization network so that an 8k-dimensional feature descriptor $\mathbf{f}_{t,i}$ can be extracted for each input cropped image $\mathcal{X}_{t,i}$. Specifically, we use a multi-task learning approach, attaching a second convolutional neural network branch at the end of the characterization network encoder. In this way, the network learns how to encode and reconstruct each input $\mathcal{X}_{t,i}$ using only its 8k-dimensional descriptor $\mathbf{f}_{t,i}$, and for each frame a set of bounding-box image appearance descriptors $\mathcal{F}_t = \{\mathbf{f}_{t,i}\}_{i=1}^{N}$ can be obtained. This second branch is trained end-to-end with the classification network by means of a reconstruction loss $\mathcal{L}_{rec} = \mathcal{L}_{MSE}(\hat{\mathcal{X}}_{t,i}, \mathcal{X}_{t,i})$, the pixel-wise mean squared error in RGB space between the input image $\mathcal{X}_{t,i}$ and its reconstruction $\hat{\mathcal{X}}_{t,i}$. As a consequence, the overall loss for the classification network becomes $\mathcal{L}_{tot} = \mathcal{L}_{rec} + \mathcal{L}_{cl}$. The proposed re-identification algorithm outputs at each time t a set $\mathcal{T}_t = \{\mathbf{L}_j\}_{j=1}^{N_{at}}$ of $N_{at}$ actively followed polyps.
Each element $\mathbf{L}_j$ is a set representing a polyp history by means of its bounding box coordinates $\{\mathcal{B}_k\}_{k=1}^{K}$ and the corresponding appearance vectors $\{\mathcal{F}_k\}_{k=1}^{K}$ and classification scores $\{\mathcal{C}_k\}_{k=1}^{K}$ across all the time frames k in which the polyp has been found since it started being actively followed. The set of actively followed polyps $\mathcal{T}_t$ is obtained by assigning the polyps detected at frame t, $\mathcal{B}_t$, to the set of actively followed polyps $\mathcal{T}_{t-1}$ of the previous frame using the Hungarian (Kuhn–Munkres) algorithm30. In particular, the proposed algorithm first tries to associate each polyp detection in $\mathcal{B}_t$ with an actively followed polyp by means of a spatial assignment and then, in a second step, by means of an appearance-based assignment. Both assignments are obtained by solving an unbalanced linear assignment problem given the corresponding cost matrices. The cost matrix of the spatial assignment is computed from the intersection over union (IoU) between each new detection and the last detected bounding box of each active polyp, while the cost matrix of the appearance-based assignment is computed using the cosine distance between the appearance features $\mathcal{F}_t$.

### Online temporal aggregation algorithm

The online temporal aggregation algorithm is responsible for displaying live, on each frame t, a characterization decision for each visible box in $\mathcal{B}_t$ belonging to the list of followed polyps $\mathcal{T}_t$. The algorithm is applied after ternary quality scores $q_{t,i}$ have been computed for the N detections in the current frame via the Polyp Imaging Quality Network. Four different types of prediction can be produced by the algorithm: "adenoma", "non-adenoma", "no-prediction" or "analysing".
“analysing" is printed near the polyp to communicate to the endoscopist to keep imaging the target polyp until the minimum number Nm of frames is reached. The value of Nm was chosen so that the algorithm could take into account a sufficient number of frames to produce an OC prediction while at the same time causing only a short, affordable delay in the prediction from when the polyp was first detected. When the minimum number of frames Nm is reached, the number of non-adenoma and adenoma frame by frame predictions are computed with the introduction of two hyperparameters δlow and δhigh that define when a frame by frame prediction has low confidence: Nna = ({ck,j {Lj}(cj < 0.5 − δlow) qi,t ≥ 1} and Na = {ck,j {Lj}(cj > 0.5 + δhigh) qi,t ≥ 1}. If Nna or Na is the majority of the total number of valid frames the algorithm prints “adenoma" or “non-adenoma" on the bounding box, otherwise “no-prediction" is printed. The Polyp Characterization Network and the Polyp Imaging Quality Network were trained using data extracted from the study “The Safety and Efficacy of Methylene Blue MMX Modified Release Tablets Administered to Subjects Undergoing Screening or Surveillance Colonoscopy” (ClinicalTrials.gov NCT01694966) a multinational, multicenter study that enrolled over 1000 patients. The study recorded lossless, high-definition, full-procedure colonoscopy videos and complete information on polyp characteristics and histology. The histopathological evaluation was based on the revised Vienna classification of gastrointestinal epithelial neoplasia26. Polyps corresponding to Vienna category 1 (negative for neoplasia) or 2 (indefinite for neoplasia) were considered “non-adenoma”. Polyps corresponding to Vienna category 3 (mucosal low-grade neoplasia), 4 (mucosal high-grade neoplasia), or 5 (submucosal invasion of neoplasia), were considered “adenoma”. 
To avoid any possible histology mismatch in case multiple polyps appeared and were biopsied in the field of view at the same time, we chose to exclude from the dataset all polyps appearing simultaneously or in close succession. The video dataset thus obtained was further split into training, validation, and test subsets, and frames containing a polyp were manually annotated by trained personnel, with patients/polyps/images distributed as follows: 345/957/63,445 (training), 44/133/8645 (validation), and 165/405/26,412 (test). The proposed CADx feature was implemented for GI Genius v2.0 (developed by Cosmo AI, Ireland, and distributed by Medtronic, US), a CADe device for the detection of colorectal polyps that received marketing clearance in the United States from the FDA in 202114. The new device, named GI Genius v3.0, received CE clearance under the European Medical Device Directive (MDD, 93/42/EEC) in 2021 as a class IIa medical device. The performance assessment and results reported in this paper were obtained using GI Genius v3.0 on a prospective dataset, acquired after approval and distinct from the dataset used during the development of the device, as described in detail in the following paragraph.

### Prospective dataset for performance testing: CHANGE study description

The CHANGE study ("Characterization Helping in the Assessment of Neoplasia in Gastrointestinal Endoscopy", ClinicalTrials.gov NCT04884581), a single-center, single-arm, prospective study, acquired high-resolution videos of colonoscopy procedures conducted using GI Genius CADx v3.0 from May 2021 until July 2021. The study was approved by the local Institutional Review Board (Comitato Etico Lazio 1, prot. 611/CE Lazio 1) and conducted in accordance with the Declaration of Helsinki. Before participation, all participants provided written informed consent.
The 165 patients screened in the CHANGE study were considered for the Standalone CADx study ("Standalone Performances of Artificial Intelligence CADx for Optical Characterization of Colorectal Polyps", https://osf.io/m5cxt), a study aiming at assessing the standalone performance of the CADx, whose results are reported in this manuscript. A diagram illustrating the collection of the prospective dataset used in this study is shown in Fig. 5. All the colonoscopy videos considered in the study were acquired in full length with unaltered quality, bearing no trace of the AI used (no overlay). Patients' clinical data and polyp histopathological information were saved in an electronic Case Report Form (eCRF). The localization of each polyp in each patient was carefully annotated by scientific annotation experts. This was cross-checked against the eCRF data for the same patient to avoid any erroneous correspondence between a polyp in the video and the related histology. Polyps for which video recording failed or for which no histology could be obtained were excluded. For each polyp, a short videoclip was prepared, starting a few seconds before the first polyp appearance and ending with polyp endoscopic resection. If multiple polyps were present in the same video section, a separate clip was generated for each individual polyp. This resulted in a total of 513 videoclips: 198 adenomas and 315 non-adenomas.

### CHANGE study polyps review by endoscopists

To assess the performance of the CADx against a panel of international endoscopists, the study aimed at a minimum target of eight expert and eight non-expert endoscopist reviewers. Reviewers with colonoscopy experience of at least 5 years and proficiency in optical biopsy with virtual chromoendoscopy were considered experts, while reviewers who had performed fewer than 500 colonoscopies at the time of the study invitation were considered non-experts.
To reach the target, 20 invitations were sent, allowing for a 20% dropout rate. However, 10 additional invitations were needed, and a final number of 10 expert and 11 non-expert reviewers was reached. Videos were shown in a randomized order to each endoscopist via a dedicated secure website. Endoscopists were blinded to histology and CADx results, and a green box was manually drawn (overlaid) around the target polyp in each videoclip frame to remove any ambiguity in the identification of the region of interest.

### Measurement variables, study endpoints, and sample size considerations

The CADx decision could assume a three-class output for each polyp videoclip: "adenoma", "non-adenoma" and "undetermined". A polyp was classified as "adenoma" if the number of frames where the CADx output the label "adenoma" was greater than or equal to the number of frames where the CADx output "non-adenoma", and classified as "non-adenoma" if the number of frames classified as "non-adenoma" was greater than the number of frames classified as "adenoma". A polyp was considered "undetermined" if the CADx failed to output either the label "adenoma" or "non-adenoma" for the entire polyp videoclip. A decision in WL and BL was retrieved by operating the CADx only on the frames in WL and BL, respectively. Reviewers were asked to classify each polyp videoclip into five classes: "adenoma", "hyperplastic", "SSL", "carcinoma" or "uncertain". A reviewer decision was considered "adenoma" if the reviewer selected either "adenoma" or "carcinoma", "non-adenoma" if "hyperplastic" or "SSL" was selected, and "undetermined" if "uncertain" was selected. If not "uncertain", the reviewer was asked to rate confidence on a four-level scale: "very high confidence", "high confidence", "low confidence" and "very low confidence". The primary endpoint of the Standalone CADx study was that CADx accuracy in WL be non-inferior to the accuracy of expert endoscopists, with histopathology as the reference standard.
The exploratory endpoints were that (1) CADx accuracy in WL be superior to the accuracy of non-expert endoscopists and (2) CADx accuracy in BL be non-inferior to CADx accuracy in WL. A previous pilot study involving GI Genius CADx on 60 patients reported an accuracy of 85%. The sample size for the Standalone CADx study was calculated assuming that experts can perform OC with an accuracy of 87%. Using a one-sided alpha level of 0.025, a total of 480 lesions was required to achieve 80% power; this number was increased by 5% to account for dropouts. The minimum number of polyps needed for the Standalone CADx study was therefore determined to be 504. Since the CHANGE study collected a total of 513 polyps with a valid video recording and a valid histopathology outcome, all these polyps were included in the Standalone CADx statistical analysis.

### Statistical analysis

The analysis for the primary endpoint assessed whether the lower bound of the 95% confidence interval (CI) for the difference in accuracies (CADxWL − Experts) was higher than −10%. The analysis for the first exploratory endpoint assessed whether the lower bound of the 95% CI for the difference in accuracies (CADxWL − Non-Experts) was greater than 0. The analysis for the second exploratory endpoint assessed whether the lower bound of the 95% CI for the difference in accuracies (CADxBL − CADxWL) was greater than −10%. The main analysis of the primary and exploratory endpoints was carried out using log-binomial regression. As the primary and first exploratory endpoints involve repeated measures carried out by different readers, reader was considered a random effect (random intercept) to account for intra-reader correlations. A second analysis on performance comparisons was carried out using areas under the receiver operating characteristic (ROC) curves. Both non-inferiority and superiority were evaluated using 95% two-sided CIs calculated using bootstrap resampling for the paired difference in AUC.
Success for non-inferiority was claimed when the lower bound of the CI for the difference in AUCs was greater than −10%. Success for superiority was claimed when the lower bound of the CI for the difference in AUCs was greater than 0. Non-expert and expert group ROC curves were obtained by transforming the survey confidence output assigned by each reviewer to each polyp into an eight-level score. CADx ROC curves were obtained by associating with each polyp the ratio between the number of polyp frames classified as "adenoma" and the number of frames on which a prediction was given by the CADx. For each bootstrap iteration, we randomly sampled with replacement all 513 polyps to obtain new CADx WL and BL ROC curves and, subsequently, for the same set of sampled polyps, we randomly sampled at the reviewer level to compute expert and non-expert reviewers' ROC curves. We repeated this 10,000 times to define the 95% CI.

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

## Data availability

De-identified study data may be made available at publication upon request to the corresponding author. Data sharing will only be available for academic research, not for commercial use or other purposes. A data use agreement and institutional review board approval will be required as appropriate.

## Code availability

The underlying algorithm is copyrighted by Cosmo AI/Linkverse and will not be available to the public. The authors agree to apply the algorithm to data provided by other academic researchers on their behalf for research purposes only.

## References

1. Sung, H. et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 71, 209–249 (2021).
2. Kaminski, M. F. et al. Increased rate of adenoma detection associates with reduced risk of colorectal cancer and death. Gastroenterology 153, 98–105 (2017).
3.
Rex, D. K. et al. Colorectal cancer screening: recommendations for physicians and patients from the U.S. Multi-Society Task Force on Colorectal Cancer. Am. J. Gastroenterol. 112, 1016–1030 (2017).
4. Bisschops, R. et al. Advanced imaging for detection and differentiation of colorectal neoplasia: European Society of Gastrointestinal Endoscopy (ESGE) guideline – update 2019. Endoscopy 51, 1155–1179 (2019).
5. Sano, Y. et al. Narrow-band imaging (NBI) magnifying endoscopic classification of colorectal tumors proposed by the Japan NBI Expert Team. Dig. Endosc. 28, 526–533 (2016).
6. Bisschops, R. et al. BASIC (BLI Adenoma Serrated International Classification) classification for colorectal polyp characterization with blue light imaging. Endoscopy 50, 211–220 (2018).
7. Iacucci, M. et al. Development and validation of the simple endoscopic classification of diminutive and small colorectal polyps. Endoscopy 50, 779–789 (2018).
8. Manfredi, M. A. et al. Electronic chromoendoscopy. Gastrointest. Endosc. 81, 249–261 (2015).
9. Ladabaum, U. et al. Real-time optical biopsy of colon polyps with narrow band imaging in community practice does not yet meet key thresholds for clinical decisions. Gastroenterology 144, 81–91 (2013).
10. Rees, C. J. et al. Narrow band imaging optical diagnosis of small colorectal polyps in routine clinical practice: the Detect Inspect Characterise Resect and Discard 2 (DISCARD 2) study. Gut 66, 887–895 (2017).
11. Ahmad, O. F. et al. Artificial intelligence and computer-aided diagnosis in colonoscopy: current evidence and future directions. Lancet Gastroenterol. Hepatol. 4, 71–80 (2019).
12. Berzin, T. M. et al. Position statement on priorities for artificial intelligence in GI endoscopy: a report by the ASGE Task Force. Gastrointest. Endosc. 92, 951–959 (2020).
13. Dellermann, D., Ebel, P., Söllner, M. & Leimeister, J. Hybrid intelligence. Bus. Inf. Syst. Eng. 61, 637–643 (2019).
14. FDA.
FDA Authorizes Marketing of First Device that Uses Artificial Intelligence to Help Detect Potential Signs of Colon Cancer. https://www.fda.gov/news-events/press-announcements/fda-authorizes-marketing-first-device-uses-artificial-intelligence-help-detect-potential-signs-colon (2021).
15. Walradt, T., Glissen Brown, J. R., Alagappan, M., Lerner, H. P. & Berzin, T. M. Regulatory considerations for artificial intelligence technologies in GI endoscopy. Gastrointest. Endosc. 92, 801–806 (2020).
16. Repici, A. et al. Efficacy of real-time computer-aided detection of colorectal neoplasia in a randomized trial. Gastroenterology 159, 512–520 (2020).
17. Wang, P. et al. Effect of a deep-learning computer-aided detection system on adenoma detection during colonoscopy (CADe-DB trial): a double-blind randomised study. Lancet Gastroenterol. Hepatol. 5, 343–351 (2020).
18. Repici, A. et al. Artificial intelligence and colonoscopy experience: lessons from two randomised trials. Gut 71, 757–765 (2021).
19. Glissen Brown, J. R. & Berzin, T. M. Adoption of new technologies: artificial intelligence. Gastrointest. Endosc. Clin. N. Am. 31, 743–758 (2021).
20. Byrne, M. F. et al. Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model. Gut 68, 94–100 (2019).
21. Mori, Y. et al. Real-time use of artificial intelligence in identification of diminutive polyps during colonoscopy: a prospective study. Ann. Intern. Med. 169, 357–366 (2018).
22. Nogueira-Rodríguez, A. et al. Deep neural networks approaches for detecting and classifying colorectal polyps. Neurocomputing 423, 721–734 (2021).
23. Ponugoti, P. et al. Disagreement between high confidence endoscopic adenoma prediction and histopathological diagnosis in colonic lesions ≤ 3 mm in size. Endoscopy 51, 221–226 (2019).
24. Shahidi, N. et al.
Use of endoscopic impression, artificial intelligence, and pathologist interpretation to resolve discrepancies between endoscopy and pathology analyses of diminutive colorectal polyps. Gastroenterology 158, 783–785 (2020).
25. Vieth, M. & Neurath, M. F. Challenges for the crosstalk between endoscopists and pathologists. Endoscopy 51, 212–214 (2019).
26. Schlemper, R. J., Kato, Y. & Stolte, M. Diagnostic criteria for gastrointestinal carcinomas in Japan and Western countries: proposal for a new classification system of gastrointestinal epithelial neoplasia. J. Gastroenterol. Hepatol. 15, G49–G57 (2000).
27. IJspeert, J. et al. Development and validation of the WASP classification system for optical diagnosis of adenomas, hyperplastic polyps and sessile serrated adenomas/polyps. Gut 65, 963–970 (2016).
28. Wei, J. et al. Learn like a pathologist: curriculum learning by annotator agreement for histopathology image classification. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2473–2483 (2021).
29. Thulasidasan, S., Chennupati, G., Bilmes, J. A., Bhattacharya, T. & Michalak, S. On mixup training: improved calibration and predictive uncertainty for deep neural networks. Adv. Neural Inf. Process. Syst. 32 (2019).
30. Bewley, A., Ge, Z., Ott, L., Ramos, F. & Upcroft, B. Simple online and realtime tracking. In 2016 IEEE International Conference on Image Processing (ICIP), 3464–3468 (IEEE, 2016).

## Acknowledgements

The authors are grateful to all the endoscopists of the GI Genius CADx Study Group who provided the review of polyp video recordings.

## Author information

### Contributions

C.B. and Pi.S. contributed to algorithm development, data analysis and interpretation, and manuscript writing. A.C. contributed to algorithm design and development, design of the study, data analysis and interpretation, and manuscript writing. N.N.D. contributed to algorithm design and development, study design, and data analysis. C.H.
contributed to the study design, data collection and interpretation, and manuscript writing. GI G.C.S.G. contributed to data collection. Pr.S. contributed to data interpretation and manuscript writing.

### Corresponding author

Correspondence to Andrea Cherubini.

## Ethics declarations

### Competing interests

C.B., Pi.S., N.N.D., and A.C. are inventors of patents related to the submitted work and are employees of the company manufacturing the device. C.H. is a consultant for Medtronic and Fujifilm. Pr.S. is a consultant for Medtronic, Olympus, Boston Scientific, Fujifilm, and Lumendi, and receives grant support from Ironwood, Erbe, Docbot, Cosmo Pharmaceuticals, and CDx Labs. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Biffi, C., Salvagnini, P., Dinh, N.N. et al. A novel AI device for real-time optical characterization of colorectal polyps. npj Digit. Med. 5, 84 (2022). https://doi.org/10.1038/s41746-022-00633-6
# Micrometre

[Infobox figure: a 6 μm diameter carbon filament above a 50 μm diameter human hair]

General information: unit system SI; a unit of length; symbol μm. Conversions: 1 μm = 10−6 m (SI base units) = 1.8897×104 a0 (natural units) = 3.2808×10−6 ft = 3.9370×10−5 in (imperial/US units).

The micrometre (international spelling as used by the International Bureau of Weights and Measures;[1] SI symbol: μm) or micrometer (American spelling), also commonly known as a micron, is a unit of length in the International System of Units (SI) equalling 1×10−6 metre (SI standard prefix "micro-" = 10−6); that is, one millionth of a metre (or one thousandth of a millimetre, 0.001 mm, or about 0.00004 inch).[1] The nearest smaller common SI unit is the nanometre, equivalent to one thousandth of a micrometre, or one billionth of a metre (0.000000001 m). The micrometre is a common unit of measurement for wavelengths of infrared radiation as well as sizes of biological cells and bacteria,[1] and for grading wool by the diameter of the fibres.[2] The width of a single human hair ranges from approximately 20 to 200 μm. The longest human chromosome, chromosome 1, is approximately 10 μm in length.

## Examples

How big is 1 micrometre? Between 10 μm and 100 μm:

• about 10–12 μm – thickness of plastic wrap (cling wrap)
• 10 to 55 μm – width of wool fibre[5]
• 17 to 181 μm – diameter of human hair[6]
• 70 to 180 μm – thickness of paper

## SI standardization

The term micron and the symbol μ were officially accepted for use in isolation to denote the micrometre in 1879, but officially revoked by the International System of Units (SI) in 1967.[7] This became necessary because the older usage was incompatible with the official adoption of the unit prefix micro-, denoted μ, during the creation of the SI in 1960. In the SI, the systematic name micrometre became the official name of the unit, and μm became the official unit symbol.
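The imperial/US conversions quoted in the infobox follow directly from the exact definitions of the international foot (0.3048 m) and inch (0.0254 m); a short check:

```python
UM_IN_M = 1e-6     # 1 μm = 10⁻⁶ m by definition of the micro- prefix
FT_IN_M = 0.3048   # exact definition of the international foot
IN_IN_M = 0.0254   # exact definition of the international inch

um_in_ft = UM_IN_M / FT_IN_M  # micrometres expressed in feet
um_in_in = UM_IN_M / IN_IN_M  # micrometres expressed in inches

# Both match the infobox values to the quoted 5-digit precision.
assert abs(um_in_ft - 3.2808e-6) < 1e-10
assert abs(um_in_in - 3.9370e-5) < 1e-9
```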
Additionally, in American English, the use of "micron" helps differentiate the unit from the micrometer, a measuring device, because the unit's name in mainstream American spelling is a homograph of the device's name. In spoken English, they may be distinguished by pronunciation, as the name of the measuring device is often stressed on the second syllable (/maɪˈkrɒmɪtər/ my-KROM-it-ər), whereas the systematic pronunciation of the unit name, in accordance with the convention for pronouncing SI units in English, places the stress on the first syllable (/ˈmaɪkroʊˌmiːtər/ MY-kroh-meet-ər). The plural of micron is normally microns, though micra was occasionally used before 1950.[8][9][10]

## Symbol

The official symbol for the SI prefix micro- is a Greek lowercase mu.[11] In Unicode, there is also a micro sign with the code point U+00B5 (µ), distinct from the code point U+03BC (μ) of the Greek letter lowercase mu. According to the Unicode Consortium, the Greek letter character is preferred,[12] but implementations must recognize the micro sign as well. Most fonts use the same glyph for the two characters.

## Notes and references

1. "micrometre". Encyclopædia Britannica Online. Retrieved 18 May 2014.
2. "Wool Fibre". NSW Department of Education and Communities. Archived from the original (Word Document download) on 17 June 2016. Retrieved 18 May 2014.
3. Ramel, Gordon. "Spider Silk". Archived from the original on 4 December 2008. Retrieved 14 December 2008. A typical strand of garden spider silk has a diameter of about 0.003 mm ... Dragline silk (about .00032 inch (.008 mm) in Nephila)
4. Smith, D.J.; Gaffney, E.A.; Blake, J.R.; Kirkman-Brown, J.C. (25 February 2009). "Human sperm accumulation near surfaces: a simulation study" (PDF). Journal of Fluid Mechanics. Cambridge University Press. 621: 295. Bibcode:2009JFM...621..289S. doi:10.1017/S0022112008004953. S2CID 3942426. Archived from the original (PDF) on 6 November 2013.
5. "Fibreshape applications".
IST - Innovative Sintering Technologies Ltd. Retrieved 4 December 2008. Histogram of Fiber Thickness [micrometre] 6. ^ The diameter of human hair ranges from 17 to 181 μm. Ley, Brian (1999). Elert, Glenn (ed.). "Diameter of a human hair". The Physics Factbook. Retrieved 8 December 2018. 7. ^ BIPM - Resolution 7 of the 13th CGPM (1967/68), "Abrogation of earlier decisions (micron, new candle)." 8. ^ Proceedings of the Royal Society of Queensland. Part I. Vol. XIX. H. Pole & Co. 1907 – via Google Books. 9. ^ Bigalow, Edward Fuller; Agassiz Association (1905). The Observer. Vol. 7–8 – via Google Books. 10. ^ 10 micra/10 microns (Start at 1885; before that, the word "micron", singular or plural, was rare) 11. ^ "Prefixes of the International System of Units". International Bureau of Weights and Measures. Archived from the original on 23 May 2018. Retrieved 9 May 2016. 12. ^ Beeton, Barbara; Freytag, Asmus; Sargent, Murray III (30 May 2017). "Unicode Technical Report #25". Unicode Technical Reports. Unicode Consortium. p. 11.
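Both the conversion factors quoted above and the micro sign vs. Greek mu distinction from the Symbol section can be checked programmatically. A minimal Python sketch (the constants are the ones quoted in this article; the function name is mine):

```python
import unicodedata

# Conversions quoted above: 1 um = 1e-6 m, and ~3.9370e-5 in.
MICROMETRE_IN_METRES = 1e-6
METRES_PER_INCH = 0.0254

def um_to_inches(um: float) -> float:
    """Convert micrometres to inches via metres."""
    return um * MICROMETRE_IN_METRES / METRES_PER_INCH

# U+00B5 MICRO SIGN and U+03BC GREEK SMALL LETTER MU are distinct
# code points, but NFKC normalization folds the micro sign into mu,
# matching the Unicode Consortium's preference for the Greek letter.
micro_sign = "\u00b5"
greek_mu = "\u03bc"
assert micro_sign != greek_mu
assert unicodedata.normalize("NFKC", micro_sign) == greek_mu

print(f"1 {greek_mu}m = {um_to_inches(1):.4e} in")
```

Normalizing text with NFKC before comparing symbols is a simple way to treat the two code points as the same unit symbol.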
# How to make plots with factor levels below the x-axis (bench-biology style) The motivation for this post was to create a pipeline for generating publication-ready plots entirely within ggplot, avoiding post-generation touch-ups in Illustrator or Inkscape. These scripts are a start. The ideal modification would be to turn the chunks into functions with personalized detail so that a research team could quickly and efficiently generate multiple plots. I might try to turn the scripts into a very-general-but-not-ready-for-r-package function for my students. # What is an interaction? A factorial experiment is one in which there are two or more factor variables (categorical \(X\)) that are crossed, resulting in a group for each combination of the levels of each factor. Factorial experiments are used to estimate the interaction effect between factors. Two factors interact when the effect of one factor depends on the level of the other factor. Interactions are ubiquitous, although sometimes they are small enough to ignore with little to no loss of understanding. # How to estimate synergism or antagonism motivating source: Integration of two herbivore-induced plant volatiles results in synergistic effects on plant defense and resistance What is synergism or antagonism? (This post is a follow-up to What is an interaction?) In the experiment for Figure 1 of the motivating source article, the researchers were explicitly interested in measuring any synergistic effects of HAC and indole on the response. What is a synergistic effect? If HAC and indole act independently, then the response should be additive – the HAC+Indole effect should simply be the sum of the independent HAC and Indole effects. # Estimate of marginal ("main") effects instead of ANOVA for factorial experiments Contents:
- Background
- Comparing marginal effects to main effect terms in an ANOVA table
- First, some fake data
- Comparison of marginal effects vs. "main" effects term of ANOVA table when data are balanced
- Comparison of marginal effects vs. "main" effects term of ANOVA table when data are unbalanced
- When to estimate marginal effects

keywords: estimation, ANOVA, factorial, model simplification, conditional effects, marginal effects Background: I recently read a paper from a very good ecology journal that communicated the results of an ANOVA like that below (Table 1) using a statement similar to “The removal of crabs strongly decreased algae cover (\(F_{1,36} = 17. # Is the power to test an interaction effect less than that for a main effect? I was googling around and somehow landed on a page stating, “When effect coding is used, statistical power is the same for all regression coefficients of the same size, whether they correspond to main effects or interactions, and irrespective of the order of the interaction”. Really? How could this be? The p-value for an interaction effect is the same regardless of dummy or effects coding, and, with dummy coding (R’s default), the power to detect the interaction effect is lower than that for the main-factor coefficients when they have the same magnitude, so my intuition said this statement must be wrong. # Interaction plots with ggplot2 ggpubr is a fantastic resource for teaching applied biostats because it makes ggplot a bit easier for students. I’m not super familiar with everything ggpubr can do, but I’m not sure it includes a good “interaction plot” function. Maybe I’m wrong. But if I’m not, here is a simple function to create a gg_interaction plot. The gg_interaction function returns a ggplot of the modeled means and standard errors, not the raw means and standard errors computed from each group independently. #### R doodles. Some ecology. Some physiology. Much fake data. Thoughts on R, statistical best practices, and teaching applied statistics to Biology majors. 
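The additive-vs-synergistic logic from the posts above can be checked on fake data: simulate a 2×2 factorial in which the combined treatment exceeds the sum of the individual effects, then estimate the interaction coefficient with an ordinary dummy-coded linear model. A sketch in Python rather than R; all effect sizes are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # replicates per cell

# Fake 2x2 factorial: main effects of A and B plus a synergistic interaction.
beta_a, beta_b, beta_ab = 1.0, 2.0, 1.5  # hypothetical true effects
rows = []
for a in (0, 1):
    for b in (0, 1):
        y = beta_a * a + beta_b * b + beta_ab * a * b + rng.normal(0, 0.5, n)
        rows += [(a, b, yi) for yi in y]
A, B, Y = (np.array(c, dtype=float) for c in zip(*rows))

# Design matrix with dummy coding: intercept, A, B, A:B.
X = np.column_stack([np.ones_like(A), A, B, A * B])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(f"estimated interaction: {coef[3]:.2f} (true {beta_ab})")
```

A positive interaction coefficient is the "synergy": the combined cell exceeds the additive prediction by roughly that amount, matching the HAC+Indole framing above.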
Jeff Walker, Professor of Biological Sciences University of Southern Maine, Portland, Maine, United States
Interpretability models Why is interpretability so important in machine learning? Why can't we simply trust the prediction of a supervised model? There are several reasons: improving social acceptance of ML algorithms integrated into our lives; correcting a model by discovering a bias in the population of the training set; understanding the cases in which the model fails; and complying with laws and regulations. Nowadays, complex supervised models can be very accurate on specific tasks but remain largely uninterpretable; simple models, on the other hand, are easy to interpret but often less accurate. How can we resolve this dilemma? This post addresses the question by going through the ML literature on interpretability and focusing on the class of additive feature attribution methods [11]. 1. The main idea The problem of interpreting a model's prediction can be recast as follows: which part of the input is particularly important in explaining the output of the model? To illustrate, consider the example given at the ICML conference by Shrikumar. Suppose you have trained a model on DNA mutations that cause diseases, and consider a DNA sequence as input. The model predicts whether this sequence can be linked to any of the diseases it has learnt. If so, what you would like to understand is why the model gives this particular prediction, i.e., which part of the input sequence led it to predict a specific disease. You would therefore like higher weights on the parts of the sequence that best explain the model's decision, and lower weights on those that do not. To achieve this, most approaches iterate between two steps: 1. Block ("prohibit") some part of the input. 2. 
Observe the change in the output (the fitted answer). Repeat steps 1 and 2 for different blocked parts of the input. 2. Existing approaches The need for tools to explain prediction models came with the development of more complex models for more complex data, and the recent Computer Vision and Machine Learning literature has accordingly developed a field around interpretability. 2.1. Cooperative game theory based In the early 2000s, Lipovetsky et al. (2001) [1] highlighted the multicollinearity problem in the analysis of regressor importance in the multiple regression context: collinearity can distort the coefficients of important variables. To address this, they used a tool from cooperative game theory to obtain the comparative importance of predictors: the Shapley value imputation [0], which is derived from an axiomatic approach and produces a unique solution satisfying general requirements of Nash equilibrium. A decade later, Strumbelj et al. (2011) [2] generalized the use of Shapley values to black-box models such as SVMs and artificial neural networks, in order to make such models more informative, easier to understand and easier to use. They proposed an approximation algorithm that assumes mutual independence of the individual features, in order to overcome the time complexity of the exact solution. 2.2. Architecture specific: Deep Neural Networks Since then, several methods have been proposed that take advantage of the structure/architecture of the model. For neural networks, there are backpropagation-based methods such as guided backpropagation (Springenberg et al. 2014) [4], which use the relationship between the neurons and the output. The idea is to assign each neuron a score according to how much it affects the output; this is done in a single backward pass, which yields scores for all parts of the input. 
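The perturb-and-observe loop sketched at the start of this post (block part of the input, watch the output move) can be written in a few lines. The model below is a stand-in; any black-box f works, and the function name is mine:

```python
import numpy as np

def occlusion_importance(f, x, baseline=0.0):
    """Score each input position by how much the output changes
    when that position is replaced by a 'prohibited' baseline value."""
    ref = f(x)
    scores = np.empty(len(x))
    for i in range(len(x)):
        x_blocked = x.copy()
        x_blocked[i] = baseline              # step 1: prohibit part of the input
        scores[i] = abs(ref - f(x_blocked))  # step 2: observe the change
    return scores

# Stand-in black box: position 2 matters most, position 0 not at all.
f = lambda x: 0.0 * x[0] + 1.0 * x[1] + 5.0 * x[2]
scores = occlusion_importance(f, np.array([1.0, 1.0, 1.0]))
print(scores)  # importance ordering reflects the weights: x[2] > x[1] > x[0]
```

Real methods differ mainly in how they block the input (occlude, marginalize, set to a reference) and how they aggregate the observed changes.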
Other approaches build a linear model that locally approximates the more complicated model around the prediction being explained (LIME [6]). Shrikumar et al. (2017) [7] introduced DeepLIFT, which assigns contribution scores to features based on the difference between the activation of each neuron and its 'reference activation'. Other DNN-specific explanation models have been proposed in the literature, and the reader may consult [8] for additional references on this subject. Several of these model-specific methods have Python code available: for random forests [3], and for deep neural networks [4, 5, 7, 9, 10]. 2.3. A unified approach The most recent and most general approach to interpretability is the SHAP model of Lundberg et al. [11]. It proposes a class of methods, called additive feature attribution methods, that contains most of the approaches cited above. These methods share the same form of explanation model (i.e., an interpretable approximation of the original model), which we introduce next. An explanation model is a simple model that describes the behaviour of the complex model; additive feature attribution methods use a linear function of binary variables as the explanation model. 3.1. The SHAP model Let f be the original prediction model to be explained and g the explanation model. Additive feature attribution methods have an explanation model that is a linear function of binary variables: g(z') = Φ0 + Σ_{i=1}^{M} Φi z'i, where M is the number of features; the variables z'i ∈ {0,1} indicate whether feature i is observed (z'i = 1) or unknown (z'i = 0); and the Φi ∈ ℝ are the feature attribution values. There is only one solution for the Φi satisfying general requirements of Nash equilibrium and three natural properties, explained in the paragraph that follows. 3.2. 
The natural properties (1) Local accuracy: the output of the explanation model matches the original model for the prediction being explained: g(x') = f(x). (2) Missingness: a feature missing from the input gets no attribution: x'i = 0 ⇒ Φi = 0. (3) Consistency: if removing a feature always makes at least as big a difference in one model as in another, then its importance should be at least as high in the first model as in the second. Writing z' \ i for z' with z'i set to 0: for any two models f1 and f2, if fx1(z') − fx1(z' \ i) ≥ fx2(z') − fx2(z' \ i) for all inputs z' ∈ {0,1}^M, then Φi(f1, x) ≥ Φi(f2, x). 3.3 Computing SHAP values 3.3.1. Back to the Shapley values The computation of the feature importances (the SHAP values) comes from cooperative game theory [0] via the Shapley values. In our context, the Shapley value of feature i is a weighted average of all possible differences between predictions of the model with feature i and without it: Φi(f, x) = Σ_{z' ⊆ x'} [ |z'|! (M − |z'| − 1)! / M! ] ( fx(z') − fx(z' \ i) ), where |z'| is the number of non-zero entries of z', and z' ⊆ x' ranges over all vectors whose non-zero entries are a subset of the non-zero entries of x', excluding feature i. Since the problem is combinatorial, different strategies have been proposed in the literature to approximate the solution ([0, 1]). 3.3.2. The SHAP values In the more general setting, the SHAP values are the Shapley values of a conditional expectation function of the original model: fx(z') = E[ f(z) | zS ], where S is the set of non-zero entries of z'. In practice, computing SHAP values is challenging, which is why Lundberg et al. [11] propose different approximation algorithms depending on the specifics of your model or your data (tree ensembles, independent features, deep networks, ...). 4. 
Practical example with the SHAP library Lundberg maintains a GitHub repository with very nice and quite complete notebooks explaining different use cases for SHAP and its approximation algorithms (Tree / Deep / Gradient / Linear / Kernel explainers). I really encourage the reader to visit the author's page: https://github.com/slundberg/shap. Here, I introduce a very simple example to give a feel for the kind of results one obtains when interpreting a prediction. Consider the heart disease dataset from a Kaggle competition (https://www.kaggle.com/ronitf/heart-disease-uci). The dataset consists of 13 variables describing 303 patients and one label describing the angiographic disease status (target ∈ {0,1}). The set is fairly balanced: 165 patients have label 1 and 138 have label 0. The data were lightly pre-processed so that only the most informative variables are kept: ['sex', 'cp', 'thalach', 'exang', 'oldpeak', 'ca', 'thal']. We then split the dataset into a random train set (75% of the data) and test set (the remaining 25%) and scale them. An SVM classifier is trained, reaching a classification accuracy of about 91%. Once the classification model is learnt, we explain a particular prediction (a true positive) with the shap library developed by Lundberg. You need to install shap (https://github.com/slundberg/shap) before running the code below. The resulting figure shows the features that push the prediction higher (in pink) and those that push it lower (in blue), starting from a base value equal to the average model output on the training dataset. For this true positive, the features pushing the probability towards 1 are mainly 'sex', 'oldpeak', 'thalach' and 'exang', whereas the 'ca' feature pushes the prediction score down. 
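The exact Shapley computation from section 3.3.1 can be written down directly when M is small. A sketch in pure Python: replacing the features outside the coalition by a fixed baseline is a crude stand-in for the conditional expectation E[f(z) | zS] (exact only under feature independence), and the toy model and values below are hypothetical:

```python
import math
from itertools import combinations

def shapley_values(f, x, baseline):
    """Exact Shapley values of model f at point x.

    Features outside the coalition S are replaced by `baseline`.
    Cost is O(2^M), so this is only viable for a handful of features.
    """
    M = len(x)
    phis = []
    for i in range(M):
        others = [j for j in range(M) if j != i]
        phi = 0.0
        for k in range(M):
            for S in combinations(others, k):
                # Shapley weight |S|! (M - |S| - 1)! / M!
                w = math.factorial(k) * math.factorial(M - k - 1) / math.factorial(M)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(M)]
                without_i = [x[j] if j in S else baseline[j] for j in range(M)]
                phi += w * (f(with_i) - f(without_i))
        phis.append(phi)
    return phis

# Toy linear model: attributions recover the weighted feature deviations.
f = lambda v: 2 * v[0] + 3 * v[1] - v[2]
phis = shapley_values(f, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
print(phis)  # roughly [2.0, 3.0, -1.0]
```

Note that the attributions sum to f(x) − f(baseline), which is exactly the local accuracy property of section 3.2.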
We can apply this explainer model to all correctly predicted examples in the test set. In the resulting figure, the individual feature contributions are stacked horizontally and ordered by output value: the first 39 predictions are correctly classified in class 1 and the last 30 are labelled and correctly classified in class 0. The visualisation is interactive: you can inspect the effect of a particular feature by changing the y-axis in the menu on the left side of the figure, and, symmetrically, change the x-axis menu to order the samples by output value, similarity, or the SHAP values of a given feature. It is also very interesting to have, in one plot, an overview of the distribution of SHAP values for each feature together with an idea of their overall impact (again on the correctly predicted samples). The first plot (subfigure a.) carries three kinds of information: the x-axis shows the SHAP values of each feature listed on the y-axis; each line is the set of SHAP values computed for a specific feature, for every feature of the model; and the third dimension is the colour of the points, representing the feature value (pink for a high value of the feature, blue for a low one). You can therefore see the dispersion of SHAP values per feature as well as their impact on the model output. For instance, high values of the 'cp' feature imply high SHAP values and tend to push the prediction up, whereas high values of the 'thal' feature (pink points) tend to lower the predicted score. The second plot (subfigure b.) shows the mean absolute SHAP value obtained for each feature; it can be seen as a summary of the first plot. Conclusion Many approaches have been proposed in the literature to deal with interpretable/explainable models in the supervised context. 
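The bar chart in subfigure b. is simply the mean absolute SHAP value per feature: given a matrix of per-sample attributions, it is one line to reproduce. The attribution matrix below is made-up illustration data, not the output of the heart-disease model:

```python
import numpy as np

# Hypothetical SHAP values: one row per sample, one column per feature.
features = ["sex", "cp", "thalach", "exang", "oldpeak", "ca", "thal"]
rng = np.random.default_rng(0)
shap_values = rng.normal(0.0, [0.3, 0.8, 0.2, 0.1, 0.4, 0.6, 0.5], (100, 7))

# Global importance as in subfigure b.: mean of |SHAP| over the samples.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name:8s} {imp:.3f}")
```

Averaging absolute values matters: signed SHAP values cancel across samples, so a raw mean would understate features that push different predictions in different directions.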
The main strengths of the additive feature attribution model are, on the one hand, its theoretical properties and, on the other, its general framework, which encompasses most of the explainable models developed in the literature. Different approximation algorithms have been proposed by Lundberg et al. to take advantage of the structure of the model and the type of data and so improve computation time. If you work with deep neural networks or tree ensembles, I really encourage the reader to see more examples on the author's GitHub repository: https://github.com/slundberg/shap. Bibliography [0] Shapley, Lloyd S. "A Value for N-Person Games." Contributions to the Theory of Games 2 (28): 307–17 (1953). [1] Lipovetsky, S. and Conklin, M. "Analysis of regression in game theory approach." Applied Stochastic Models in Business and Industry (17-4): 319–330 (2001). [2] Strumbelj et al. "A General Method for Visualizing and Explaining Black-Box Regression Models." Adaptive and Natural Computing Algorithms, ICANNGA 2011, Lecture Notes in Computer Science, vol 6594 (2011). [3] Saabas et al. "Interpreting random forests", https://blog.datadive.net/interpreting-random-forests/ [4] Springenberg et al. "Striving for simplicity: The all convolutional net", arXiv:1412.6806 (2014). [5] Bach et al. "On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation", PLOS ONE (10-7): 130–140 (2015). [6] Ribeiro et al. "Why Should I Trust You? Explaining the Predictions of Any Classifier", Proceedings of the 22nd ACM SIGKDD: 1135–1144 (2016). [7] Shrikumar et al. "Learning Important Features Through Propagating Activation Differences", Proceedings of ICML (2017). [8] "Explainable and Interpretable Models in Computer Vision and Machine Learning", Springer, The Springer Series on Challenges in Machine Learning, ISBN 9783319981307 (2018). [9] Sundararajan et al. "Axiomatic Attribution for Deep Networks", Proceedings of ICML (2017). 
[10] Montavon et al. "Explaining nonlinear classification decisions with deep Taylor decomposition", Pattern Recognition (65): 211–222 (2017). [11] Lundberg et al. "A Unified Approach to Interpreting Model Predictions", NIPS (2017).
trings:!0,regexLiterals:!0}),X={};p(U,[\"default-code\"]);p(C([],[[\"pln\",\/^[^\u0026lt;?]+\/],[\"dec\",\/^\u0026lt;!\\w[^\u0026gt;]*(?:\u0026gt;|)\/],[\"com\",\/^\u0026lt;\\!--[\\s\\S]*?(?:-\\-\u0026gt;|)\/],[\"lang-\",\/^\u0026lt;\\?([\\s\\S]+?)(?:\\?\u0026gt;|)\/],[\"lang-\",\/^\u0026lt;%([\\s\\S]+?)(?:%\u0026gt;|)\/],[\"pun\",\/^(?:\u0026lt;[%?]|[%?]\u0026gt;)\/],[\"lang-\",\n\/^\u0026lt;xmp\\b[^\u0026gt;]*\u0026gt;([\\s\\S]+?)\u0026lt;\\\/xmp\\b[^\u0026gt;]*\u0026gt;\/i],[\"lang-js\",\/^\u0026lt;script\\b[^\u0026gt;]*\u0026gt;([\\s\\S]*?)(\u0026lt;\\\/script\\b[^\u0026gt;]*\u0026gt;)\/i],[\"lang-css\",\/^\u0026lt;style\\b[^\u0026gt;]*\u0026gt;([\\s\\S]*?)(\u0026lt;\\\/style\\b[^\u0026gt;]*\u0026gt;)\/i],[\"lang-in.tag\",\/^(\u0026lt;\\\/?[a-z][^\u0026lt;\u0026gt;]*\u0026gt;)\/i]]),\"default-markup htm html mxml xhtml xml xsl\".split(\" \"));p(C([[\"pln\",\/^[\\s]+\/,null,\" \\t\\r\\n\"],[\"atv\",\/^(?:\\\"[^\\\"]*\\\"?|\\'[^\\']*\\'?)\/,null,\"\\\"'\"]],[[\"tag\",\/^^\u0026lt;\\\/?[a-z](?:[\\w.:-]*\\w)?|\\\/?\u0026gt;\/i],[\"atn\",\/^(?!style[\\s=]|on)[a-z](?:[\\w:-]*\\w)?\/i],[\"lang-uq.val\",\/^=\\s*([^\u0026gt;\\'\\\"\\s]*(?:[^\u0026gt;\\'\\\"\\s\\\/]|\\\/(?=\\s)))\/],\n[\"pun\",\/^[=\u0026lt;\u0026gt;\\\/]+\/],[\"lang-js\",\/^on\\w+\\s*=\\s*\\\"([^\\\"]+)\\\"\/i],[\"lang-js\",\/^on\\w+\\s*=\\s*\\'([^\\']+)\\'\/i],[\"lang-js\",\/^on\\w+\\s*=\\s*([^\\\"\\'\u0026gt;\\s]+)\/i],[\"lang-css\",\/^style\\s*=\\s*\\\"([^\\\"]+)\\\"\/i],[\"lang-css\",\/^style\\s*=\\s*\\'([^\\']+)\\'\/i],[\"lang-css\",\/^style\\s*=\\s*([^\\\"\\'\u0026gt;\\s]+)\/i]]),[\"in.tag\"]);p(C([],[[\"atv\",\/^[\\s\\S]+\/]]),[\"uq.val\"]);p(x({keywords:S,hashComments:!0,cStyleComments:!0,types:Q}),\"c cc cpp cxx cyc m\".split(\" 
\"));p(x({keywords:\"null,true,false\"}),[\"json\"]);p(x({keywords:N,hashComments:!0,cStyleComments:!0,\nverbatimStrings:!0,types:Q}),[\"cs\"]);p(x({keywords:M,cStyleComments:!0}),[\"java\"]);p(x({keywords:K,hashComments:!0,multiLineStrings:!0}),[\"bash\",\"bsh\",\"csh\",\"sh\"]);p(x({keywords:O,hashComments:!0,multiLineStrings:!0,tripleQuotedStrings:!0}),[\"cv\",\"py\",\"python\"]);p(x({keywords:\"caller,delete,die,do,dump,elsif,eval,exit,foreach,for,goto,if,import,last,local,my,next,no,our,print,package,redo,require,sub,undef,unless,until,use,wantarray,while,BEGIN,END\",hashComments:!0,multiLineStrings:!0,regexLiterals:2}),\n[\"perl\",\"pl\",\"pm\"]);p(x({keywords:P,hashComments:!0,multiLineStrings:!0,regexLiterals:!0}),[\"rb\",\"ruby\"]);p(x({keywords:L,cStyleComments:!0,regexLiterals:!0}),[\"javascript\",\"js\"]);p(x({keywords:\"all,and,by,catch,class,else,extends,false,finally,for,if,in,is,isnt,loop,new,no,not,null,of,off,on,or,return,super,then,throw,true,try,unless,until,when,while,yes\",hashComments:3,cStyleComments:!0,multilineStrings:!0,tripleQuotedStrings:!0,regexLiterals:!0}),[\"coffee\"]);p(C([],[[\"str\",\/^[\\s\\S]+\/]]),[\"regex\"]);\nvar V=R.PR={createSimpleLexer:C,registerLangHandler:p,sourceDecorator:x,PR_ATTRIB_NAME:\"atn\",PR_ATTRIB_VALUE:\"atv\",PR_COMMENT:\"com\",PR_DECLARATION:\"dec\",PR_KEYWORD:\"kwd\",PR_LITERAL:\"lit\",PR_NOCODE:\"nocode\",PR_PLAIN:\"pln\",PR_PUNCTUATION:\"pun\",PR_SOURCE:\"src\",PR_STRING:\"str\",PR_TAG:\"tag\",PR_TYPE:\"typ\",prettyPrintOne:function(a,d,f){f=f||!1;d=d||null;var b=document.createElement(\"div\");b.innerHTML=\"\u0026lt;pre\u0026gt;\"+a+\"\u0026lt;\/pre\u0026gt;\";b=b.firstChild;f\u0026amp;\u0026amp;B(b,f,!0);H({j:d,m:f,h:b,l:1,a:null,i:null,c:null,g:null});return b.innerHTML},\nprettyPrint:g=g=function(a,d){function f(){for(var b=R.PR_SHOULD_USE_CONTINUATION?c.now()+250:Infinity;r\u0026lt;p.length\u0026amp;\u0026amp;c.now()\u0026lt;b;r++){for(var d=p[r],k=h,q=d;q=q.previousSibling;){var 
# import dataset
>>> import pandas as pd
>>> data = pd.read_csv('heart.csv')
>>> data.describe()

# split the label and the explanatory data
>>> y = data['target']
>>> cols = ['sex', 'cp', 'thalach', 'exang', 'oldpeak', 'ca', 'thal']
>>> X = data[cols]

# split the dataset into train and test sets
>>> from sklearn.model_selection import train_test_split
>>> from sklearn import preprocessing
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=40)  # fixed seed for reproducible results

# scale the data
>>> scaler = preprocessing.StandardScaler().fit(X_train)
>>> X_train_sc = scaler.transform(X_train)
>>> X_test_sc = scaler.transform(X_test)

# learn a classifier
>>> from sklearn.svm import SVC
>>> from sklearn.metrics import accuracy_score
>>> classifier = SVC(kernel='linear', probability=True)
>>> classifier.fit(X_train_sc, y_train)

# prediction
>>> pred = classifier.predict(X_test_sc)
>>> print(accuracy_score(y_test, pred))

classification accuracy on the test set: 0.9078947368421053

Interpretability models

Why is interpretability so important in machine learning? Why can't we just trust the prediction of a supervised model? There are several possible answers: improving social acceptance of the integration of ML algorithms into our lives; correcting a model by discovering a bias in the population of the training set; understanding the cases in which the model fails; and complying with laws and regulations. Nowadays, complex supervised models can be very accurate on specific tasks but remain largely uninterpretable; conversely, simple models are easy to interpret but often less accurate. How can we resolve this dilemma? This post addresses the question by going through the ML literature on interpretability models, focusing on the class of additive feature attribution methods [11].

1. The main idea

The problem of interpreting a model's prediction can be recast as follows: which part of the input is particularly important in explaining the output of the model? To illustrate this, let's consider the example given at the ICML conference by Shrikumar. Suppose you have already trained a model on DNA mutations that cause diseases.
Now, let's consider a DNA sequence as input. The model predicts whether this sequence can be linked to any of the diseases the model has learnt. If so, what you would like to understand is why your model gives this particular prediction, i.e., which part of the input sequence leads the model to predict a specific disease. You would therefore like high weights on the parts of the sequence that explain most of the model's decision, and low weights on the parts that do not. To achieve this, most approaches iterate between two steps: 1. mask (perturb) some part of the input; 2. observe the change in the output (the fitted answer). Repeat steps 1 and 2 for different maskings of the input.

2. Existing approaches

The need for tools that explain prediction models came with the development of more complex models to deal with more complex data, and the recent Computer Vision and Machine Learning literature has therefore developed a new field around interpretability.

2.1. Cooperative game theory based

At the beginning of the 21st century, Lipovetsky et al. (2001) [1] highlighted the multicollinearity problem in the analysis of regressor importance in multiple regression: variables can have significant coefficients merely because of their collinearity. To address this, they used a tool from cooperative game theory to obtain the comparative importance of predictors: Shapley value imputation [0], which derives from an axiomatic approach and produces a unique solution satisfying general requirements of a Nash equilibrium. A decade later, Strumbelj et al. (2011) [2] generalized the use of Shapley values to black-box models such as SVMs and artificial neural networks, in order to make models more informative, easier to understand and easier to use. They proposed an approximation algorithm that assumes mutual independence of the individual features, in order to get around the time complexity of the exact solution.
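The Shapley value imputation used by these game-theoretic approaches can be made concrete with a small brute-force implementation. This is my own illustrative sketch, not code from the original post; the enumeration is exponential in the number of features, which is exactly the limitation the approximation algorithms address:

```python
from itertools import combinations
from math import factorial

def shapley_values(value, players):
    """Exact Shapley values by enumerating every coalition (toy sizes only).

    `value(S)` must return the payoff of the coalition S (a frozenset)."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Toy additive game: the payoff of a coalition is the sum of its members'
# weights, so each player's Shapley value is exactly its own weight.
weights = {"a": 2.0, "b": 1.0, "c": 0.0}
value = lambda S: sum(weights[p] for p in S)
print(shapley_values(value, list(weights)))  # ≈ {'a': 2.0, 'b': 1.0, 'c': 0.0}
```

For M features this loop touches every subset of the remaining M-1 features for each player, which is why sampling-based approximations such as the one of Strumbelj et al. [2] are needed for realistic models.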
2.2. Architecture specific: deep neural networks

Since then, several specific methods have been proposed in the literature that take advantage of the structure or architecture of the model. For neural networks, one can think of back-propagation-based methods such as Guided Backpropagation (Springenberg et al. 2014) [4], which uses the relationship between the neurons and the output: the idea is to assign a score to each neuron according to how much it affects the output, which is done in a single backward pass that yields scores for all parts of the input. Other approaches build a linear model that locally approximates the more complicated model around the prediction to be explained (LIME [6]). Shrikumar et al. [7] introduce DeepLIFT, which assigns contribution scores to features based on the difference between each neuron's activation and its 'reference activation'. Other DNN-specific methods for explaining predictions have been proposed in the literature; the reader can consult [8] for additional references on this subject. Below is a list of the most recent approaches for specific models with available Python code (the original links are not preserved here):

Random Forest:
Deep Neural Network:
Cooperative game theory based:

2.3. A unified approach

The most recent and most general approach to interpretability models is the SHAP framework of Lundberg et al. (2017) [11]. It defines a class of methods, called additive feature attribution methods, that contains most of the approaches cited above. These methods share the same form of explanation model (i.e., an interpretable approximation of the original model), which we introduce in the next paragraph.

3. SHAP: additive feature attribution methods

An explanation model is a simple model that describes the behavior of the complex model. Additive feature attribution methods use a linear function of binary variables as the explanation model.

3.1.
The SHAP model

Let f be the original prediction model to be explained and g the explanation model. Additive feature attribution methods have an explanation model that is a linear function of binary variables:

g(z′) = Φ0 + Σ_{i=1..M} Φi z′i,

where M is the number of features, the z′i variables represent a feature being observed (z′i = 1) or unknown (z′i = 0), and the Φi ∈ ℝ are the feature attribution values. There is only one solution for the Φi that satisfies general requirements of a Nash equilibrium together with the three natural properties explained in the paragraph that follows.

3.2. The natural properties

(1) Local accuracy: the output of the explanation model matches the original model for the prediction being explained: g(x′) = f(x).

(2) Missingness: a feature missing from the original input receives no attribution: x′i = 0 ⇒ Φi = 0.

(3) Consistency: if turning a feature off always makes at least as big a difference in one model as in another, then the feature's importance should be at least as high in the first model as in the second. Writing z′ \ i for z′ with z′i set to 0: for any two models f1 and f2, if fx1(z′) − fx1(z′ \ i) ≥ fx2(z′) − fx2(z′ \ i) for all inputs z′ ∈ {0,1}^M, then Φi(f1, x) ≥ Φi(f2, x).

3.3. Computing SHAP values

3.3.1. Back to the Shapley values

The computation of feature importance, i.e., the SHAP values, comes from cooperative game theory [0] via the Shapley values. In our context, a Shapley value can be viewed as a weighted average of all possible differences between predictions of the model without feature i and predictions with feature i:

Φi(f, x) = Σ_{z′ ⊆ x′} [ |z′|! (M − |z′| − 1)! / M! ] [ fx(z′) − fx(z′ \ i) ],

where |z′| stands for the number of non-zero entries of z′, and z′ ⊆ x′ ranges over all z′ vectors whose non-zero entries are a subset of the non-zero entries of x′. Since the problem is combinatorial, different strategies have been proposed in the literature to approximate the solution ([0,1]).

3.3.2.
The SHAP values

In the more general context, the SHAP values can be viewed as the Shapley values of a conditional expectation function of the original model:

fx(z′) = E[ f(z) | z_S ],

where S is the set of non-zero entries of z′. In practice, computing SHAP values is challenging, which is why Lundberg et al. [11] propose different approximation algorithms according to the specifics of your model or your data (tree ensembles, independent features, deep networks, ...).

4. Practical example with the SHAP library

Lundberg maintains a GitHub repository with very nice and quite complete notebooks explaining the different use cases for SHAP and its different approximation algorithms (Tree, Deep, Gradient, Linear and Kernel explainers). I really encourage the reader to visit the author's page: https://github.com/slundberg/shap . Here I will only walk through a very simple example, to give an idea of the kind of results we can obtain when interpreting a prediction. Let's consider the heart dataset from the Kaggle competition at https://www.kaggle.com/ronitf/heart-disease-uci. The dataset consists of 13 variables describing 303 patients, plus one label describing the angiographic disease status (target ∈ {0,1}). The set is quite balanced: 165 patients have label 1 and 138 have label 0. The data have been pre-processed slightly so that we keep only the most informative variables, namely ['sex', 'cp', 'thalach', 'exang', 'oldpeak', 'ca', 'thal']. We then split the dataset into a random train set (75% of the data) and test set (the remaining 25%) and scale them. An SVM classifier is learnt, and we obtain a classification accuracy of about 91%. Once the classification model is learnt, we want to explain a particular prediction (a true one ;) ) using the shap library developed by Lundberg.
You need to install the shap library (https://github.com/slundberg/shap) before running the code below. The resulting force plot illustrates the features that push the prediction higher (in pink) and those that push it lower (in blue), starting from a base value computed as the average model output on the training dataset. For this true-positive fitted answer, we can see that the push of the probability towards 1 is mainly explained by the 'sex', 'oldpeak', 'thalach' and 'exang' features, whereas the 'ca' feature tends to push the prediction score down. We can also apply this explainer to all correctly predicted examples in the test set: the resulting figure stacks the individual feature contributions horizontally and orders them by output value. The first 39 predictions are correctly classified in class 1 and the last 30 are correctly classified in class 0. Note that the visualisation is interactive: we can inspect the effect of a particular feature by changing the y-axis in the menu on the left side of the figure and, symmetrically, change the x-axis menu to order the samples by output value, similarity, or per-feature SHAP values. It can also be very interesting to see, in one plot, an overview of the distribution of SHAP values for each feature together with an idea of their overall impact (note that the example is still restricted to the correctly predicted samples). In the first plot (subfigure a.), there are three kinds of information: the x-axis shows the SHAP values of each feature listed on the y-axis; each line stands for the set of SHAP values computed for a specific feature, for every feature of your model; and the third dimension is the color of the points, which represents the feature value (pink for a high value of the feature, blue for a low one). You can therefore see the dispersion of the SHAP values per feature and their impact on the model output.
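The base value in these force plots is the average model output, and the local accuracy property of Section 3.2 guarantees that this base value plus the SHAP values recovers each prediction exactly. Here is a toy numerical check of that identity, with hand-enumerated conditional expectations over two independent uniform binary features (an illustration of mine, not the blog's data):

```python
from itertools import product

# Toy model on two independent binary features, each 0 or 1 with probability 1/2.
f = lambda x1, x2: 3.0 * x1 + 2.0 * x2 + 1.0 * x1 * x2

def expect(fixed):
    """E[f(z) | z_S = x_S] for independent uniform binary features.

    `fixed` maps a feature index in S to its clamped value."""
    vals = [f(fixed.get(0, z1), fixed.get(1, z2))
            for z1, z2 in product([0, 1], repeat=2)]
    return sum(vals) / len(vals)

x = (1, 1)          # the instance being explained
base = expect({})   # average model output: the force plot's "base value"

# Exact Shapley values for two players: average over the two feature orderings.
phi0 = 0.5 * ((expect({0: x[0]}) - base) + (f(*x) - expect({1: x[1]})))
phi1 = 0.5 * ((expect({1: x[1]}) - base) + (f(*x) - expect({0: x[0]})))

# Local accuracy: base value plus the contributions recovers the prediction.
print(base, phi0, phi1)             # 2.75 1.875 1.375
print(base + phi0 + phi1 == f(*x))  # True
```

This is exactly why, in shap's force plots, the pink and blue arrows added to the base value land on the model's output for the explained example.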
For instance, high values of the 'cp' feature imply high SHAP values and tend to push the prediction up, whereas high values of the 'thal' feature (pink points) tend to lower the predicted score. The second plot (subfigure b.) shows the mean absolute SHAP value obtained for each feature; it can be seen as a summary of the first plot.

Conclusion

Many approaches have been proposed in the literature to deal with interpretable/explainable models in the supervised context. The main strength of the additive feature attribution framework is, on the one hand, its theoretical properties and, on the other hand, its generality: it covers most of the explanation models developed in the literature. Different approximation algorithms have been proposed by Lundberg et al. to take advantage of the structure of the model and the type of data, and so improve computation time. If you work with deep neural networks or tree ensembles, I really encourage you to look at more examples in the author's GitHub repository: https://github.com/slundberg/shap.

Bibliography

[0] Shapley, Lloyd S. "A Value for N-Person Games." Contributions to the Theory of Games 2 (28): 307-317 (1953).
[1] Lipovetsky, S. and Conklin, M. "Analysis of regression in game theory approach." Applied Stochastic Models in Business and Industry (17-4): 319-330 (2001).
[2] Strumbelj et al. "A General Method for Visualizing and Explaining Black-Box Regression Models." Adaptive and Natural Computing Algorithms, ICANNGA 2011, Lecture Notes in Computer Science, vol. 6594 (2011).
[3] Saabas et al. "Interpreting random forests", https://blog.datadive.net/interpreting-random-forests/
[4] Springenberg et al. "Striving for Simplicity: The All Convolutional Net", arXiv:1412.6806 (2014).
[5] Bach et al. "On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation", PLOS ONE (10-7): 130-140 (2015).
[6] Ribeiro et al. "Why Should I Trust You?
Explaining the predictions of any classifier", Proceedings of the 22nd ACM SIGKDD: 1135-1144 (2016). [7] Shrikumar et al. "Learning Important Features Through Propagating Activation Difference", Proceedings in ICML (2017). [8] "Explainable and Interpretable Models in Computer Vision and Machine Learning", Springer Verlag, The Springer Series on Challenges in Machine Learning, 9783319981307 (2018). [9] Sunderarajan et al., "Axiomatic Attribution for Deep Networks", Proceedings in ICML (2017). [10] Montavon et al. "Explaining nonlinear classification decision with deep Taylor decomposition", Pattern Recognition (65):211-222 (2017). [11] Lundberg et al., ''A unified approach to interpreting model predictions'', NIPS (2017). All Posts × Almost done… We just sent you an email. Please click the link in the email to confirm your subscription! ,' Interpretability models​ Why interpretability is so important in machine learning ? Why can't we just trust the prediction of a supervised model ? Several possible explanations to that: we can think about improving social acceptance for the integration of ML algorithms into our lives ; correcting a model by discovering a bias in the population of the training set; understanding the cases for which the model fails; following the law and regulations. Nowadays, complex supervised models can be very accurate on specific tasks but remain quite uninterpretable; at the opposite, when using simple models, it is indeed easy to interpret them but are often less accurate. How can we solve such a dilemma ? This post tends to answer to this question by going through the ML literature in interpretability models and by focusing on a class of additive feature attribution methods [11]. 1. The main idea The problem of giving an interpretation to the model prediction can be recasted as it follows: Which part of the input is particularly important to explain the output of the model ? 
In order to illustrate this purpose, let's consider the example given during the ICML conference by Shrikumar. Lets suppose you have already trained a model with DNA mutations causing diseases. Now, let's consider a DNA sequence as input, as for instance: The model is going to predict if this sequence can be linked to any known diseases the model learnt. If so, what you would like to understand is why your model gives this prediction in particular; ie which part of the input sequence leads your model to predict a specific disease. So, you would like to have higher weights for the parts of the sequence which explain the most the decision of your model and lower ones for those which do not explain the prediction: To achieve that, most of approaches iterate between 2 steps: 1. Set a prohibition to some part of the input 2. Observe the change in the output (fitted answer) Repeat step 1 and step 2 for different prohibitions of the input. 2. Existing approaches The need of tools for explaining prediction models came with the development of more complex models to deal with more complex data and therefore the recent literature in Computer Vision and Machine Learning has developed a new field linked to interpretability. 2.1. Cooperative game theory based Back to the beginning of the 21th century, Lipovetsky et al. (2001)[1] highlight the multicollinearity problem in the analysis of regressor importance in the multiple regression context: important variables can have significant coefficient because of their collinearity. To that end, they use a tool from the cooperative game theory to obtain comparative importance of predictors: the Shapley Values imputation [0] derived from an axiomatic approach and produces a unique solution satisfying general requirements of Nash equilibrium. A decade later, Strumbelj et al. 
(2011)[2] generalize the use of Shapley values for black box models such as SVM and artificial neural network models in order to make models more informative, easier to understand and to use. They propose an approximation algorithm by assuming mutual independence of individual features in order to encompass the time complexity limitation of the solution. 2.2. Architecture specific: Deep Neural Network Since then, several specific methods have been proposed in the literature and take advantages of the structure/architecture of the model. For neural networks, we can think about back-propagation based methods such as Guided Propagation (Springenberg et al. 2014)[4] which use the relationship between the neurons and the output. The idea is to assign a score to neurons according to how much they affect the output. This is done in the single backward pass where you get the scores for all parts of the input. Other approaches propose to build a linear model to locally approximate the more complicated model based on data which affects the output (LIME [6]). Shrikumar et al. 2016 [7] introduces DeepLift which assigns contribution scores to the feature based on the difference between the activation of each neuron to its ‘reference activation’. Other explaining prediction models Deep Neural Network-specific have been proposed in the literature and the reader could read [8] for additional references on this subject. Below, a list of methods and their available python code which summarizes the most recent approaches for specific-models: Random Forest: Deep Neural Network: 2.3. A unified approach The most recent and general approach for interpretability models is the SHAP model from Lundberg et al. 2018. It proposes a class of methods called Additive Feature attribution methods that contains most of the approaches cited above. These methods use the same explanation model (ie any interpretable approximation of the original model) that we introduce in the next paragraph. 
Cooperative Game theory-based: 3. SHAP: Additive feature attribution methods An explanation model is a simple model which describes the behavior of the complex model. The additive attribution methods introduce a linear function of binary variables to represent such an explanation model. 3.1. The SHAP model Let f be the original prediction model to be explained and g the explanation model. Additive feature attribution methods have an explanation model that is a linear function of binary variables such that: where M is the number of features ; the z'i variables represent a feature being observed (zi' = 1) or unknown (zi'= 0), and the Φi ∈ ’s are the feature attribution values. There is only one solution for Φ_i satisfying general requirements of Nash equilibrium and satisfying three natural properties explained in the paragraph that follows. 3.2. The Natural properties (1) Local Accuracy: the output of the explanation model matches the original model for the prediction being explained: g(x') = f(x) (2) Missingness: put the output to 0 corresponds to turning the feature off: x'i = 0 ⇒ Φi = 0 (3) Accuracy: if turning the feature off in one model which always makes a bigger difference in another model then the importance should be higher in the first model than in the second one. Lets consider z' \ i meaning z'i = 0, then for any 2 models f 1 and f 2, if: fx1(z') - fx1(z' \ i) ≥ fx2(z') - fx2(z' \ i) then for all input z' ∈ {0,1}M : Φi (f 1, x) ≥ Φi (f 2, x) 3.3 Computing SHAP values 3.3.1. Back to the Shapley values The computation of features importance -- the SHAP values -- comes from cooperative games theory [0] with the Shapley values. 
In our context, a Shapley value can be viewed as a weighted average of all possible differences between predictions of the model without feature i, and the ones with feature i as expressed below: where |z′| stands for the number of features different from zero, and z′ ⊆ x′ stands for all z′ vectors where the non-zero entries are a subset of entries of x′ except feature i. Since the problem is combinatorial different strategies have been proposed in the literature to approximate the solution ([0,1]). 3.3.2. The SHAP values In the more general context the SHAP values can be viewed as Shapley values of a conditional expectation function of the original model such that: where S is the set of non-zero entries of z'. In practice, the computation of SHAP values are challenging that is why Lundberg and al.[11] propose different approximation algorithms according to the specificities of your model or your data (tree ensembles, independent features, deep network,...). 4. Practical example with SHAP library Lundenberg created a GitHub repository to that end with very nice and quite complete notebooks explaining different use cases for SHAP and its different approximation algorithms (Tree/ Deep / Gradient /Linear or Kernel Explainers). I do really encourage the reader to visit the page of the author: https://github.com/slundberg/shap . By the way, I am just going to introduce a very simple example in order to give insights of the kind of results we could obtain when looking for interpreting a prediction. Lets consider the heart dataset coming from kaggle competition (https://www.kaggle.com/ronitf/heart-disease-uci). The dataset consists in 13 variables describing 303 patients and 1 label describing the angiographic disease status (target \in {0,1}). The set is quite balanced since 165 patients have label 1 and 138 have label 0. 
The data have been pre-processed a little: we keep only the most informative variables, namely ['sex', 'cp', 'thalach', 'exang', 'oldpeak', 'ca', 'thal']. Then we split the dataset into a random train set (75% of the data) and test set (the remaining 25%) and scale them. An SVM classifier has been trained, and we obtain a classification accuracy of around 91%.

Once the classification model is trained, we want to explain a particular prediction (a correct one ;) ) using the shap library developed by Lundberg. You need to install the shap library (https://github.com/slundberg/shap) before running the code below:

This figure illustrates the features that push the prediction higher (in pink) and those that push it lower (in blue), starting from a base value computed as the average model output on the training dataset. For this true positive example, we can see that the push of the probability towards 1 is mainly explained by the 'sex', 'oldpeak', 'thalach' and 'exang' features, whereas the 'ca' feature tends to push the prediction score down.

We can apply this explainer to all correctly predicted examples in the test set, as below:

The figure above shows all the individual feature contributions, stacked horizontally and ordered by output value. The first 39 predictions are correctly classified in class 1 and the last 30 are labeled and correctly classified in class 0. Note that the visualization is interactive: you can see the effect of a particular feature by changing the y-axis in the menu on the left side of the figure. Symmetrically, you can change the x-axis menu to order the samples by output value, similarity or per-feature SHAP value.
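For reference, the preprocessing and training steps described above can be sketched as follows. Since the Kaggle CSV is not reproduced here, a synthetic dataset of the same shape stands in for the heart data, so the accuracy will differ from the 91% reported; with the real data you would select the columns 'sex', 'cp', 'thalach', 'exang', 'oldpeak', 'ca', 'thal' instead.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 303 samples, 7 features, like the reduced heart dataset
X, y = make_classification(n_samples=303, n_features=7, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)           # 75% / 25% split

scaler = StandardScaler().fit(X_train)              # fit the scaler on train only
X_train_sc = scaler.transform(X_train)
X_test_sc = scaler.transform(X_test)

clf = SVC(probability=True).fit(X_train_sc, y_train)  # SVM classifier
acc = clf.score(X_test_sc, y_test)
print("test accuracy:", acc)
```

Note `probability=True`: it enables probability outputs on the SVC, which is convenient if you later want to explain predicted probabilities rather than hard labels.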
It can also be very interesting to have, in one plot, an overview of the distribution of SHAP values for each feature together with an idea of their overall impact (the example is still on the correctly predicted samples):

In the first plot (subfigure a.), there are three kinds of information: the x-axis shows the SHAP values of each feature listed on the y-axis. Each line stands for the set of SHAP values computed for a specific feature, and this is done for every feature of the model. The third dimension is the color of the points: it represents the feature value (pink for a high value of the feature, blue for a low one). You can therefore see the dispersion of the SHAP values per feature as well as their impact on the model output. For instance, high values of the 'cp' feature have high SHAP values and tend to push the prediction up, whereas high values of the 'thal' feature (pink points) tend to lower the predicted score.

The second plot (subfigure b.) shows the mean absolute SHAP value obtained for each feature. It can be seen as a summary of the left figure.

Conclusion

Many approaches have been proposed in the literature to deal with interpretable/explainable models in the supervised context. The main strength of the additive feature attribution model is its theoretical properties on the one hand and, on the other hand, its general framework, which encompasses most of the explanation models developed in the literature. Different approximation algorithms have been proposed by Lundberg et al. to take advantage of the structure of the model and the type of data and thus improve computation time. If you deal with deep neural networks or tree ensembles, I really encourage you to see more examples in the author's GitHub repository: https://github.com/slundberg/shap.

Bibliography

[0] Shapley, L. S., "A Value for N-Person Games", Contributions to the Theory of Games 2 (28): 307-317 (1953).
[1] Lipovetsky, S.
and Conklin, M., "Analysis of regression in game theory approach", Applied Stochastic Models in Business and Industry 17 (4): 319-330 (2001).
[2] Strumbelj, E. et al., "A General Method for Visualizing and Explaining Black-Box Regression Models", Adaptive and Natural Computing Algorithms (ICANNGA 2011), Lecture Notes in Computer Science, vol. 6594 (2011).
[3] Saabas, A., "Interpreting random forests", https://blog.datadive.net/interpreting-random-forests/
[4] Springenberg, J. T. et al., "Striving for Simplicity: The All Convolutional Net", arXiv:1412.6806 (2014).
[5] Bach, S. et al., "On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation", PLOS ONE 10 (7) (2015).
[6] Ribeiro, M. T. et al., "'Why Should I Trust You?' Explaining the Predictions of Any Classifier", Proceedings of the 22nd ACM SIGKDD: 1135-1144 (2016).
[7] Shrikumar, A. et al., "Learning Important Features Through Propagating Activation Differences", Proceedings of ICML (2017).
[8] "Explainable and Interpretable Models in Computer Vision and Machine Learning", The Springer Series on Challenges in Machine Learning, Springer, ISBN 9783319981307 (2018).
[9] Sundararajan, M. et al., "Axiomatic Attribution for Deep Networks", Proceedings of ICML (2017).
[10] Montavon, G. et al., "Explaining nonlinear classification decisions with deep Taylor decomposition", Pattern Recognition 65: 211-222 (2017).
[11] Lundberg, S. et al., "A Unified Approach to Interpreting Model Predictions", NIPS (2017).
# Code for the practical example in section 4. It assumes the train/test
# split, scaling and SVM classifier built earlier in the post
# (X_train, X_train_sc, X_test_sc, y_test, pred, classifiers).

>>> import shap
>>> import pandas as pd

# load JS visualization code to notebook
>>> shap.initjs()

# True Positive and False Positive indices for label 1:
>>> TP = [i for i in range(len(pred)) if pred[i] == y_test.values[i] == 1]
>>> FP = [i for i in range(len(pred)) if pred[i] != y_test.values[i] and y_test.values[i] == 0]

# Explaining a True Positive example:
>>> x = X_test_sc[TP[2], :]
>>> dx = pd.DataFrame(x.reshape(1, -1))
>>> dx.columns = X_train.columns

>>> explainer = shap.KernelExplainer(classifiers.predict, X_train_sc)
>>> shap_values = explainer.shap_values(dx, nsamples=100)
>>> shap.force_plot(explainer.expected_value, shap_values, feature_names=X_train.columns)

Interpretability models

Why is interpretability so important in machine learning?
Why can't we just trust the prediction of a supervised model? There are several possible answers: improving social acceptance of the integration of ML algorithms into our lives; correcting a model by discovering a bias in the population of the training set; understanding the cases for which the model fails; complying with laws and regulations. Nowadays, complex supervised models can be very accurate on specific tasks but remain quite uninterpretable; conversely, simple models are easy to interpret but often less accurate. How can we solve such a dilemma? This post tries to answer this question by going through the ML literature on interpretability models, focusing on the class of additive feature attribution methods [11].

1. The main idea

The problem of interpreting a model prediction can be recast as follows: which part of the input is particularly important for explaining the output of the model? To illustrate this, let's consider the example given at the ICML conference by Shrikumar. Suppose you have already trained a model on DNA mutations that cause diseases. Now, let's consider a DNA sequence as input, for instance:

The model will predict whether this sequence can be linked to any of the diseases it has learned. If so, what you would like to understand is why your model gives this particular prediction, i.e., which part of the input sequence leads the model to predict a specific disease. So you would like higher weights for the parts of the sequence that best explain the decision of your model, and lower ones for those that do not:

To achieve that, most approaches iterate between two steps:
1. Forbid (mask) some part of the input.
2. Observe the change in the output (fitted answer).
Repeat steps 1 and 2 for different masked parts of the input.

2.
2. Existing approaches

The need for tools that explain prediction models came with the development of more complex models for more complex data, and the recent Computer Vision and Machine Learning literature has accordingly developed a new field around interpretability.

2.1. Cooperative game theory based

At the beginning of the 21st century, Lipovetsky et al. (2001) [1] highlighted the multicollinearity problem in the analysis of regressor importance in multiple regression: variables can obtain significant coefficients merely because of their collinearity. To address this, they used a tool from cooperative game theory to obtain the comparative importance of predictors: Shapley value imputation [0], which derives from an axiomatic approach and produces a unique solution satisfying a set of natural requirements. A decade later, Strumbelj et al. (2011) [2] generalized the use of Shapley values to black-box models such as SVMs and artificial neural networks, in order to make models more informative and easier to understand and use. They proposed an approximation algorithm that assumes mutual independence of the individual features, to overcome the time complexity of the exact solution.

2.2. Architecture specific: Deep Neural Networks

Since then, several methods have been proposed that take advantage of the structure or architecture of the model. For neural networks, one can think of backpropagation-based methods such as Guided Backpropagation (Springenberg et al., 2014) [4], which use the relationship between the neurons and the output: the idea is to assign each neuron a score according to how much it affects the output, obtained in a single backward pass that yields scores for all parts of the input. Other approaches build a linear model that locally approximates the more complicated model around the prediction to be explained (LIME [6]). Shrikumar et al.
(2017) [7] introduced DeepLIFT, which assigns contribution scores to the features based on the difference between the activation of each neuron and its 'reference activation'. Other DNN-specific explanation methods have been proposed in the literature; the reader may consult [8] for additional references. Summarizing the most recent model-specific approaches: Random Forest: [3]; Deep Neural Networks: [4], [5], [7], [9], [10]; cooperative game theory based: [0], [1], [2].

2.3. A unified approach

The most recent and most general approach to interpretability is the SHAP framework of Lundberg et al. (2017) [11]. It defines a class of methods, called additive feature attribution methods, that contains most of the approaches cited above. These methods share the same form of explanation model (i.e., an interpretable approximation of the original model), which we introduce in the next paragraph.

3. SHAP: Additive feature attribution methods

An explanation model is a simple model that describes the behavior of the complex model. Additive attribution methods represent such an explanation model as a linear function of binary variables.

3.1. The SHAP model

Let f be the original prediction model to be explained and g the explanation model. Additive feature attribution methods use an explanation model that is a linear function of binary variables:

g(z') = Φ0 + Σ_{i=1}^{M} Φi z'i

where M is the number of features; each binary variable z'i indicates whether feature i is observed (z'i = 1) or unknown (z'i = 0); and the Φi ∈ ℝ are the feature attribution values. There is exactly one choice of the Φi satisfying the three natural properties described in the paragraph that follows, a uniqueness result inherited from the axiomatic characterization of the Shapley value [0].
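In code, the explanation model g above is just an affine function of the binary pattern z'. The attribution values below are made-up toy numbers, not the output of any real explainer; the sketch only illustrates that when every feature is marked as observed (z' = 1, ..., 1), g reduces to the base value plus the sum of the attributions, which local accuracy requires to equal the original model's prediction f(x).

```python
# Additive explanation model g(z') = phi0 + sum_i phi_i * z'_i.
def g(z, phi0, phi):
    return phi0 + sum(p * zi for p, zi in zip(phi, z))

phi0 = 0.40                # base value (average model output), toy number
phi = [0.25, -0.10, 0.15]  # toy attribution values for M = 3 features

# All features observed: g reproduces the full prediction for this instance.
full = g([1, 1, 1], phi0, phi)
print(round(full, 2))

# No features observed: g falls back to the base value phi0.
print(g([0, 0, 0], phi0, phi))
```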
3.2. The natural properties

(1) Local accuracy: the output of the explanation model matches the original model for the prediction being explained: g(x') = f(x).

(2) Missingness: a feature that is missing from the input has no attributed impact: x'i = 0 ⇒ Φi = 0.

(3) Consistency: if turning a feature off always makes a bigger difference in one model than in another, then that feature's importance should be at least as high in the first model as in the second. Writing z' \ i for z' with z'i set to 0, for any two models f1 and f2: if

fx1(z') - fx1(z' \ i) ≥ fx2(z') - fx2(z' \ i) for all inputs z' ∈ {0,1}^M,

then Φi(f1, x) ≥ Φi(f2, x).

3.3. Computing SHAP values

3.3.1. Back to the Shapley values

The computation of feature importances — the SHAP values — comes from cooperative game theory [0] via the Shapley values. In our context, the Shapley value of feature i is a weighted average of all possible differences between predictions of the model with feature i and predictions without it:

Φi(f, x) = Σ_{z' ⊆ x'} [ |z'|! (M - |z'| - 1)! / M! ] ( fx(z') - fx(z' \ i) )

where |z'| is the number of non-zero entries of z', and z' ⊆ x' ranges over all z' vectors whose non-zero entries are a subset of the non-zero entries of x', excluding feature i. Since the problem is combinatorial, different strategies have been proposed in the literature to approximate the solution ([0, 1]).

3.3.2. The SHAP values

In the most general setting, the SHAP values are the Shapley values of a conditional expectation function of the original model:

fx(z') = E[ f(z) | zS ]

where S is the set of non-zero entries of z'. In practice, computing SHAP values is challenging, which is why Lundberg et al. [11] propose different approximation algorithms depending on the specifics of the model or the data (tree ensembles, independent features, deep networks, ...).
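For a small number of features M, the Shapley formula above can be evaluated exactly by enumerating subsets. The value function `f` below is a made-up toy (two features with an interaction bonus), not anything from the post; the sketch illustrates the weighting |S|!(M - |S| - 1)!/M! and the efficiency property that the attributions sum to f(all features) - f(no features).

```python
from itertools import combinations
from math import factorial

def shapley(f, M):
    """Exact Shapley values for value function f over subsets of range(M)."""
    phi = [0.0] * M
    features = list(range(M))
    for i in features:
        others = [j for j in features if j != i]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                S = frozenset(subset)
                # Shapley weight: |S|! (M - |S| - 1)! / M!
                w = factorial(len(S)) * factorial(M - len(S) - 1) / factorial(M)
                # Marginal contribution of feature i to coalition S.
                phi[i] += w * (f(S | {i}) - f(S))
    return phi

# Toy value function: feature 0 contributes 2, feature 1 contributes 1,
# plus a bonus of 0.5 when both are present.
def f(S):
    v = 0.0
    if 0 in S:
        v += 2.0
    if 1 in S:
        v += 1.0
    if 0 in S and 1 in S:
        v += 0.5
    return v

phi = shapley(f, 2)
print(phi)  # each feature gets its own effect plus half of the interaction
```

Note the exponential cost in M — exactly the combinatorial blow-up that motivates the approximation algorithms mentioned above.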
4. Practical example with the SHAP library

Lundberg maintains a GitHub repository with very nice and fairly complete notebooks covering the different use cases for SHAP and its approximation algorithms (Tree, Deep, Gradient, Linear and Kernel explainers). I really encourage the reader to visit the author's page: https://github.com/slundberg/shap. Here I will walk through a very simple example to give a feel for the kind of results one can obtain when interpreting a prediction.

Consider the heart dataset from a Kaggle competition (https://www.kaggle.com/ronitf/heart-disease-uci). The dataset consists of 13 variables describing 303 patients and one label describing the angiographic disease status (target ∈ {0,1}). The set is fairly balanced: 165 patients have label 1 and 138 have label 0. The data were lightly pre-processed to keep only the most informative variables: ['sex', 'cp', 'thalach', 'exang', 'oldpeak', 'ca', 'thal']. We then split the dataset into a random train set (75% of the data) and test set (the remaining 25%) and scale them. An SVM classifier is trained and reaches a classification accuracy of around 91%.

Once the classification model is trained, we want to explain a particular prediction (a correct one) using the shap library developed by Lundberg. You need to install shap (https://github.com/slundberg/shap) before running the code below.

This figure illustrates the features that push the prediction higher (in pink) and those that push it lower (in blue), relative to a base value computed as the average model output on the training dataset. For this true positive, the features that push the probability towards 1 are mainly 'sex', 'oldpeak', 'thalach' and 'exang', whereas the 'ca' feature pushes the prediction score down.
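The original code for this example did not survive extraction, so here is a minimal reconstruction of the pipeline just described, using synthetic data in place of the heart dataset; the feature values, model settings and accuracy will not match the post's. The optional shap part mirrors the standard `KernelExplainer` usage and runs only if shap is installed.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the 7-variable, 303-patient heart data.
X = rng.normal(size=(303, 7))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=303) > 0).astype(int)

# 75% / 25% random split, then scaling, as in the post.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# SVM classifier with probability outputs, so we can explain predict_proba.
model = SVC(probability=True).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")

try:
    import shap  # optional: explain one prediction with KernelExplainer
    explainer = shap.KernelExplainer(model.predict_proba, X_train[:50])
    shap_values = explainer.shap_values(X_test[:1])
    # shap.force_plot(explainer.expected_value[1], shap_values[1], X_test[:1])
except ImportError:
    pass
```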
We can apply this explainer model to all correctly predicted examples in the test set. The resulting figure stacks the individual feature contributions horizontally, ordered by output value: the first 39 predictions are correctly classified in class 1 and the last 30 are correctly classified in class 0. Note that the visualisation is interactive: you can study the effect of a particular feature by changing the y-axis in the menu on the left side of the figure and, symmetrically, change the x-axis menu to order the samples by output value, by similarity, or by the SHAP values of a feature.

It is also very useful to get, in one plot, an overview of the distribution of SHAP values for each feature together with an idea of their overall impact (still computed on the correctly predicted samples). The first plot (subfigure a.) carries three kinds of information: the x-axis shows the SHAP values of each feature listed on the y-axis, each line being the set of SHAP values computed for one feature of the model. The third dimension is the color of the points, which encodes the feature value (pink for a high value, blue for a low one). You can therefore see the dispersion of the SHAP values per feature as well as their impact on the model output. For instance, high values of the 'cp' feature imply high SHAP values and tend to push the prediction up, whereas high values of the 'thal' feature (pink points) tend to lower the predicted score. The second plot (subfigure b.) shows the mean absolute SHAP value obtained for each feature and can be seen as a summary of the first plot.

Conclusion

Many approaches have been proposed in the literature to deal with interpretable/explainable models in the supervised setting.
The main strengths of the additive feature attribution framework are, on the one hand, its theoretical properties and, on the other hand, its generality: it subsumes most of the explanation models developed in the literature. Lundberg et al. propose several approximation algorithms that exploit the structure of the model and the type of data to improve computation time. If you work with deep neural networks or tree ensembles, I really encourage you to look at the further examples in the author's GitHub repository: https://github.com/slundberg/shap.

Bibliography

[0] Shapley, Lloyd S. "A Value for N-Person Games." Contributions to the Theory of Games 2 (28): 307-317 (1953).
[1] Lipovetsky, S. and Conklin, M. "Analysis of regression in game theory approach." Applied Stochastic Models in Business and Industry 17(4): 319-330 (2001).
[2] Strumbelj et al. "A General Method for Visualizing and Explaining Black-Box Regression Models." Adaptive and Natural Computing Algorithms, ICANNGA 2011, Lecture Notes in Computer Science, vol. 6594 (2011).
[3] Saabas, "Interpreting random forests", https://blog.datadive.net/interpreting-random-forests/
[4] Springenberg et al. "Striving for Simplicity: The All Convolutional Net", arXiv:1412.6806 (2014).
[5] Bach et al. "On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation", PLOS ONE 10(7) (2015).
[6] Ribeiro et al. "Why Should I Trust You? Explaining the Predictions of Any Classifier", Proceedings of the 22nd ACM SIGKDD: 1135-1144 (2016).
[7] Shrikumar et al. "Learning Important Features Through Propagating Activation Differences", Proceedings of ICML (2017).
[8] "Explainable and Interpretable Models in Computer Vision and Machine Learning", Springer, The Springer Series on Challenges in Machine Learning, ISBN 9783319981307 (2018).
[9] Sundararajan et al. "Axiomatic Attribution for Deep Networks", Proceedings of ICML (2017).
[10] Montavon et al. "Explaining nonlinear classification decisions with deep Taylor decomposition", Pattern Recognition 65: 211-222 (2017).
[11] Lundberg et al. "A Unified Approach to Interpreting Model Predictions", NIPS (2017).
g=(b=1\u0026lt;b?\"\":\"\\n\\r\")?\".\":\"[\\\\S\\\\s]\";f.push([\"lang-regex\",RegExp(\"^(?:^^\\\\.?|[+-]|[!=]=?=?|\\\\#|%=?|\u0026amp;\u0026amp;?=?|\\\|\\\\*=?|[+\\\\-]=|-\u0026gt;|\\\\\/=?|::?|\u0026lt;\u0026lt;?=?|\u0026gt;\u0026gt;?\u0026gt;?=?|,|;|\\\\?|@|\\\\[|~|{|\\\\^\\\\^?=?|\\\\|\\\\|?=?|break|case|continue|delete|do|else|finally|instanceof|return|throw|try|typeof)\\\\s*(\"+\n(\"\/(?=[^\/*\"+b+\"])(?:[^\/\\\\x5B\\\\x5C\"+b+\"]|\\\\x5C\"+g+\"|\\\\x5B(?:[^\\\\x5C\\\\x5D\"+b+\"]|\\\\x5C\"+g+\")*(?:\\\\x5D|))+\/\")+\")\")])}(b=a.types)\u0026amp;\u0026amp;f.push([\"typ\",b]);b=(\"\"+a.keywords).replace(\/^ | \/g,\"\");b.length\u0026amp;\u0026amp;f.push([\"kwd\",new RegExp(\"^(?:\"+b.replace(\/[\\s,]+\/g,\"|\")+\")\\\\b\"),null]);d.push([\"pln\",\/^\\s+\/,null,\" \\r\\n\\t\\u00a0\"]);b=\"^.[^\\\\s\\\\w.@'\\\"\/\\\\\\\$*\";a.regexLiterals\u0026amp;\u0026amp;(b+=\"(?!s*\/)\");f.push([\"lit\",\/^@[a-z_][a-z_@0-9]*\/i,null],[\"typ\",\/^(?:[@_]?[A-Z]+[a-z][A-Za-z_@0-9]*|\\w+_t\\b)\/,null],[\"pln\",\/^[a-z_][a-z_@0-9]*\/i,\nnull],[\"lit\",\/^(?:0x[a-f0-9]+|(?:\\d(?:_\\d+)*\\d*(?:\\.\\d*)?|\\.\\d\\+)(?:e[+\\-]?\\d+)?)[a-z]*\/i,null,\"0123456789\"],[\"pln\",\/^\\\\\s\\S]?\/,null],[\"pun\",new RegExp(b),null]);return C(d,f)}function B(a,d,f){function b(a){var c=a.nodeType;if(1==c\u0026amp;\u0026amp;!k.test(a.className))if(\"br\"===a.nodeName)g(a),a.parentNode\u0026amp;\u0026amp;a.parentNode.removeChild(a);else for(a=a.firstChild;a;a=a.nextSibling)b(a);else if((3==c||4==c)\u0026amp;\u0026amp;f){var d=a.nodeValue,p=d.match(q);p\u0026amp;\u0026amp;(c=d.substring(0,p.index),a.nodeValue=c,(d=d.substring(p.index+p[0].length))\u0026amp;\u0026amp;\na.parentNode.insertBefore(m.createTextNode(d),a.nextSibling),g(a),c||a.parentNode.removeChild(a))}}function g(a){function b(a,c){var d=c?a.cloneNode(!1):a,n=a.parentNode;if(n){var n=b(n,1),e=a.nextSibling;n.appendChild(d);for(var f=e;f;f=e)e=f.nextSibling,n.appendChild(f)}return 
d}for(;!a.nextSibling;)if(a=a.parentNode,!a)return;a=b(a.nextSibling,0);for(var d;(d=a.parentNode)\u0026amp;\u0026amp;1===d.nodeType;)a=d;c.push(a)}for(var k=\/(?:^|\\s)nocode(?:\\s|)\/,q=\/\\r\\n?|\\n\/,m=a.ownerDocument,p=m.createElement(\"li\");a.firstChild;)p.appendChild(a.firstChild);\nfor(var c=[p],r=0;r\u0026lt;c.length;++r)b(c[r]);d===(d|0)\u0026amp;\u0026amp;c[0].setAttribute(\"value\",d);var t=m.createElement(\"ol\");t.className=\"linenums\";d=Math.max(0,d-1|0)||0;for(var r=0,u=c.length;r\u0026lt;u;++r)p=c[r],p.className=\"L\"+(r+d)%10,p.firstChild||p.appendChild(m.createTextNode(\"\\u00a0\")),t.appendChild(p);a.appendChild(t)}function p(a,d){for(var f=d.length;0\u0026lt;=--f;){var b=d[f];X.hasOwnProperty(b)?R.console\u0026amp;\u0026amp;console.warn(\"cannot override language handler %s\",b):X[b]=a}}function F(a,d){a\u0026amp;\u0026amp;X.hasOwnProperty(a)||(a=\/^\\s*\u0026lt;\/.test(d)?\n\"default-markup\":\"default-code\");return X[a]}function H(a){var d=a.j;try{var f=q(a.h,a.l),b=f.a;a.a=b;a.c=f.c;a.i=0;F(d,b)(a);var g=\/\\bMSIE\\s(\\d+)\/.exec(navigator.userAgent),g=g\u0026amp;\u0026amp;8\u0026gt;=+g[1],d=\/\\n\/g,p=a.a,k=p.length,f=0,m=a.c,t=m.length,b=0,c=a.g,r=c.length,x=0;c[r]=k;var u,e;for(e=u=0;e\u0026lt;r;)c[e]!==c[e+2]?(c[u++]=c[e++],c[u++]=c[e++]):e+=2;r=u;for(e=u=0;e\u0026lt;r;){for(var A=c[e],D=c[e+1],w=e+2;w+2\u0026lt;=r\u0026amp;\u0026amp;c[w+1]===D;)w+=2;c[u++]=A;c[u++]=D;e=w}c.length=u;var h=a.h;a=\"\";h\u0026amp;\u0026amp;(a=h.style.display,h.style.display=\"none\");\ntry{for(;b\u0026lt;t;){var l=m[b+2]||k,n=c[x+2]||k,w=Math.min(l,n),E=m[b+1],G;if(1!==E.nodeType\u0026amp;\u0026amp;(G=p.substring(f,w))){g\u0026amp;\u0026amp;(G=G.replace(d,\"\\r\"));E.nodeValue=G;var aa=E.ownerDocument,v=aa.createElement(\"span\");v.className=c[x+1];var 
B=E.parentNode;B.replaceChild(v,E);v.appendChild(E);f\u0026lt;l\u0026amp;\u0026amp;(m[b+1]=E=aa.createTextNode(p.substring(w,l)),B.insertBefore(E,v.nextSibling))}f=w;f\u0026gt;=l\u0026amp;\u0026amp;(b+=2);f\u0026gt;=n\u0026amp;\u0026amp;(x+=2)}}finally{h\u0026amp;\u0026amp;(h.style.display=a)}}catch(y){R.console\u0026amp;\u0026amp;console.log(y\u0026amp;\u0026amp;y.stack||y)}}var R=window,K=[\"break,continue,do,else,for,if,return,while\"],\nL=[[K,\"auto,case,char,const,default,double,enum,extern,float,goto,inline,int,long,register,short,signed,sizeof,static,struct,switch,typedef,union,unsigned,void,volatile\"],\"catch,class,delete,false,import,new,operator,private,protected,public,this,throw,true,try,typeof\"],S=[L,\"alignof,align_union,asm,axiom,bool,concept,concept_map,const_cast,constexpr,decltype,delegate,dynamic_cast,explicit,export,friend,generic,late_check,mutable,namespace,nullptr,property,reinterpret_cast,static_assert,static_cast,template,typeid,typename,using,virtual,where\"],\nM=[L,\"abstract,assert,boolean,byte,extends,finally,final,implements,import,instanceof,interface,null,native,package,strictfp,super,synchronized,throws,transient\"],N=[L,\"abstract,as,base,bool,by,byte,checked,decimal,delegate,descending,dynamic,event,finally,fixed,foreach,from,group,implicit,in,interface,internal,into,is,let,lock,null,object,out,override,orderby,params,partial,readonly,ref,sbyte,sealed,stackalloc,string,select,uint,ulong,unchecked,unsafe,ushort,var,virtual,where\"],L=[L,\"debugger,eval,export,function,get,instanceof,null,set,undefined,var,with,Infinity,NaN\"],\nO=[K,\"and,as,assert,class,def,del,elif,except,exec,finally,from,global,import,in,is,lambda,nonlocal,not,or,pass,print,raise,try,with,yield,False,True,None\"],P=[K,\"alias,and,begin,case,class,def,defined,elsif,end,ensure,false,in,module,next,nil,not,or,redo,rescue,retry,self,super,then,true,undef,unless,until,when,yield,BEGIN,END\"],K=[K,\"case,done,elif,esac,eval,fi,function,in,local,set,then,until\"],Q=\
/^(DIR|FILE|vector|(de|priority_)?queue|list|stack|(const_)?iterator|(multi)?(set|map)|bitset|u?(int|float)\\d*)\\b\/,\nT=\/\\S\/,U=x({keywords:[S,N,M,L,\"caller,delete,die,do,dump,elsif,eval,exit,foreach,for,goto,if,import,last,local,my,next,no,our,print,package,redo,require,sub,undef,unless,until,use,wantarray,while,BEGIN,END\",O,P,K],hashComments:!0,cStyleComments:!0,multiLineStrings:!0,regexLiterals:!0}),X={};p(U,[\"default-code\"]);p(C([],[[\"pln\",\/^[^\u0026lt;?]+\/],[\"dec\",\/^\u0026lt;!\\w[^\u0026gt;]*(?:\u0026gt;|)\/],[\"com\",\/^\u0026lt;\\!--[\\s\\S]*?(?:-\\-\u0026gt;|)\/],[\"lang-\",\/^\u0026lt;\\?([\\s\\S]+?)(?:\\?\u0026gt;|)\/],[\"lang-\",\/^\u0026lt;%([\\s\\S]+?)(?:%\u0026gt;|)\/],[\"pun\",\/^(?:\u0026lt;[%?]|[%?]\u0026gt;)\/],[\"lang-\",\n\/^\u0026lt;xmp\\b[^\u0026gt;]*\u0026gt;([\\s\\S]+?)\u0026lt;\\\/xmp\\b[^\u0026gt;]*\u0026gt;\/i],[\"lang-js\",\/^\u0026lt;script\\b[^\u0026gt;]*\u0026gt;([\\s\\S]*?)(\u0026lt;\\\/script\\b[^\u0026gt;]*\u0026gt;)\/i],[\"lang-css\",\/^\u0026lt;style\\b[^\u0026gt;]*\u0026gt;([\\s\\S]*?)(\u0026lt;\\\/style\\b[^\u0026gt;]*\u0026gt;)\/i],[\"lang-in.tag\",\/^(\u0026lt;\\\/?[a-z][^\u0026lt;\u0026gt;]*\u0026gt;)\/i]]),\"default-markup htm html mxml xhtml xml xsl\".split(\" \"));p(C([[\"pln\",\/^[\\s]+\/,null,\" 
\\t\\r\\n\"],[\"atv\",\/^(?:\\\"[^\\\"]*\\\"?|\\'[^\\']*\\'?)\/,null,\"\\\"'\"]],[[\"tag\",\/^^\u0026lt;\\\/?[a-z](?:[\\w.:-]*\\w)?|\\\/?\u0026gt;\/i],[\"atn\",\/^(?!style[\\s=]|on)[a-z](?:[\\w:-]*\\w)?\/i],[\"lang-uq.val\",\/^=\\s*([^\u0026gt;\\'\\\"\\s]*(?:[^\u0026gt;\\'\\\"\\s\\\/]|\\\/(?=\\s)))\/],\n[\"pun\",\/^[=\u0026lt;\u0026gt;\\\/]+\/],[\"lang-js\",\/^on\\w+\\s*=\\s*\\\"([^\\\"]+)\\\"\/i],[\"lang-js\",\/^on\\w+\\s*=\\s*\\'([^\\']+)\\'\/i],[\"lang-js\",\/^on\\w+\\s*=\\s*([^\\\"\\'\u0026gt;\\s]+)\/i],[\"lang-css\",\/^style\\s*=\\s*\\\"([^\\\"]+)\\\"\/i],[\"lang-css\",\/^style\\s*=\\s*\\'([^\\']+)\\'\/i],[\"lang-css\",\/^style\\s*=\\s*([^\\\"\\'\u0026gt;\\s]+)\/i]]),[\"in.tag\"]);p(C([],[[\"atv\",\/^[\\s\\S]+\/]]),[\"uq.val\"]);p(x({keywords:S,hashComments:!0,cStyleComments:!0,types:Q}),\"c cc cpp cxx cyc m\".split(\" \"));p(x({keywords:\"null,true,false\"}),[\"json\"]);p(x({keywords:N,hashComments:!0,cStyleComments:!0,\nverbatimStrings:!0,types:Q}),[\"cs\"]);p(x({keywords:M,cStyleComments:!0}),[\"java\"]);p(x({keywords:K,hashComments:!0,multiLineStrings:!0}),[\"bash\",\"bsh\",\"csh\",\"sh\"]);p(x({keywords:O,hashComments:!0,multiLineStrings:!0,tripleQuotedStrings:!0}),[\"cv\",\"py\",\"python\"]);p(x({keywords:\"caller,delete,die,do,dump,elsif,eval,exit,foreach,for,goto,if,import,last,local,my,next,no,our,print,package,redo,require,sub,undef,unless,until,use,wantarray,while,BEGIN,END\",hashComments:!0,multiLineStrings:!0,regexLiterals:2}),\n[\"perl\",\"pl\",\"pm\"]);p(x({keywords:P,hashComments:!0,multiLineStrings:!0,regexLiterals:!0}),[\"rb\",\"ruby\"]);p(x({keywords:L,cStyleComments:!0,regexLiterals:!0}),[\"javascript\",\"js\"]);p(x({keywords:\"all,and,by,catch,class,else,extends,false,finally,for,if,in,is,isnt,loop,new,no,not,null,of,off,on,or,return,super,then,throw,true,try,unless,until,when,while,yes\",hashComments:3,cStyleComments:!0,multilineStrings:!0,tripleQuotedStrings:!0,regexLiterals:!0}),[\"coffee\"]);p(C([],[[\"str\",\/^[\\s\\S]+\/]]),[\"regex\"
]);\nvar V=R.PR={createSimpleLexer:C,registerLangHandler:p,sourceDecorator:x,PR_ATTRIB_NAME:\"atn\",PR_ATTRIB_VALUE:\"atv\",PR_COMMENT:\"com\",PR_DECLARATION:\"dec\",PR_KEYWORD:\"kwd\",PR_LITERAL:\"lit\",PR_NOCODE:\"nocode\",PR_PLAIN:\"pln\",PR_PUNCTUATION:\"pun\",PR_SOURCE:\"src\",PR_STRING:\"str\",PR_TAG:\"tag\",PR_TYPE:\"typ\",prettyPrintOne:function(a,d,f){f=f||!1;d=d||null;var b=document.createElement(\"div\");b.innerHTML=\"\u0026lt;pre\u0026gt;\"+a+\"\u0026lt;\/pre\u0026gt;\";b=b.firstChild;f\u0026amp;\u0026amp;B(b,f,!0);H({j:d,m:f,h:b,l:1,a:null,i:null,c:null,g:null});return b.innerHTML},\nprettyPrint:g=g=function(a,d){function f(){for(var b=R.PR_SHOULD_USE_CONTINUATION?c.now()+250:Infinity;r\u0026lt;p.length\u0026amp;\u0026amp;c.now()\u0026lt;b;r++){for(var d=p[r],k=h,q=d;q=q.previousSibling;){var m=q.nodeType,v=(7===m||8===m)\u0026amp;\u0026amp;q.nodeValue;if(v?!\/^\\??prettify\\b\/.test(v):3!==m||\/\\S\/.test(q.nodeValue))break;if(v){k={};v.replace(\/\\b(\\w+)=([\\w:.%+-]+)\/g,function(a,b,c){k[b]=c});break}}q=d.className;if((k!==h||u.test(q))\u0026amp;\u0026amp;!e.test(q)){m=!1;for(v=d.parentNode;v;v=v.parentNode)if(w.test(v.tagName)\u0026amp;\u0026amp;v.className\u0026amp;\u0026amp;u.test(v.className)){m=\n!0;break}if(!m){d.className+=\" prettyprinted\";m=k.lang;if(!m){var m=q.match(t),C;!m\u0026amp;\u0026amp;(C=A(d))\u0026amp;\u0026amp;z.test(C.tagName)\u0026amp;\u0026amp;(m=C.className.match(t));m\u0026amp;\u0026amp;(m=m[1])}if(x.test(d.tagName))v=1;else var v=d.currentStyle,y=g.defaultView,v=(v=v?v.whiteSpace:y\u0026amp;\u0026amp;y.getComputedStyle?y.getComputedStyle(d,null).getPropertyValue(\"white-space\"):0)\u0026amp;\u0026amp;\"pre\"===v.substring(0,3);y=k.linenums;(y=\"true\"===y||+y)||(y=(y=q.match(\/\\blinenums\\b(?::(\\d+))?\/))?y[1]\u0026amp;\u0026amp;y[1].length?+y[1]:!0:!1);y\u0026amp;\u0026amp;B(d,y,v);H({j:m,h:d,m:y,l:v,a:null,i:null,c:null,\ng:null})}}}r\u0026lt;p.length?R.setTimeout(f,250):\"function\"===typeof 
a\u0026amp;\u0026amp;a()}for(var b=d||document.body,g=b.ownerDocument||document,b=[b.getElementsByTagName(\"pre\"),b.getElementsByTagName(\"code\"),b.getElementsByTagName(\"xmp\")],p=[],k=0;k\u0026lt;b.length;++k)for(var m=0,q=b[k].length;m\u0026lt;q;++m)p.push(b[k][m]);var b=null,c=Date;c.now||(c={now:function(){return+new Date}});var r=0,t=\/\\blang(?:uage)?-([\\w.]+)(?!\\S)\/,u=\/\\bprettyprint\\b\/,e=\/\\bprettyprinted\\b\/,x=\/pre|xmp\/i,z=\/^code\/i,w=\/^(?:pre|code|xmp)\/i,h={};f()}},\nS=R.define;\"function\"===typeof S\u0026amp;\u0026amp;S.amd\u0026amp;\u0026amp;S(\"google-code-prettify\",[],function(){return V})})();return g}();T||t.setTimeout(U,0)})();}()\n\u0026lt;\/script\u0026gt;\n\n\u0026lt;div\u0026gt;\n\u0026lt;pre class=\"prettyprint\"\u0026gt;\n\u0026gt;\u0026gt;\u0026gt; TP_TN = [i for i in range(len(pred)) if pred[i] == y_test.values[i]] \n\u0026gt;\u0026gt;\u0026gt; shap_values = explainer.shap_values(X_test_sc[TP_TN,:], nsamples=100)\n\u0026gt;\u0026gt;\u0026gt; shap.force_plot(explainer.expected_value, shap_values, \\ \n X_test_sc[TP_TN,:],feature_names=X_train.columns)\n\n\u0026lt;\/pre\u0026gt;\n\u0026lt;\/div\u0026gt;\n\u0026lt;hr\u0026gt;","render_as_iframe":true,"selected_app_name":"HtmlApp","app_list":"{\"HtmlApp\":556073}"}},{"type":"Blog.Section","id":"f_218767d0-d624-4b11-9246-70f4fc7ff95b","defaultValue":null,"component":{"type":"Image","id":"f_fd8dcd2b-e94e-4368-af25-3286b8e9c150","defaultValue":null,"link_url":"","thumb_url":"!","url":"!","caption":"","description":"","storageKey":"1225579\/Screen_Shot_2019-02-14_at_10.31.09_AM_ugv7ek","storage":"c","storagePrefix":null,"format":"png","h":449,"w":1200,"s":97644,"new_target":true,"noCompression":null,"cropMode":null}},{"type":"Blog.Section","id":"f_7d0d8381-62db-4d7b-9895-bfe6dafc61bb","defaultValue":null,"component":{"type":"RichText","id":"f_48293fc9-7d71-4273-b0de-6ea849fdff43","defaultValue":null,"value":"\u003cp style=\"text-align: justify;\"\u003eThe figure above stands for 
Interpretability models

Why is interpretability so important in machine learning? Why can't we just trust the predictions of a supervised model? There are several possible reasons: improving social acceptance of ML algorithms integrated into our lives; correcting a model after discovering a bias in the training population; understanding the cases in which the model fails; and complying with laws and regulations.
Nowadays, complex supervised models can be very accurate on specific tasks but remain largely uninterpretable; conversely, simple models are easy to interpret but often less accurate. How can we resolve this dilemma? This post addresses the question by going through the ML literature on interpretability models and focusing on the class of additive feature attribution methods [11].

1. The main idea

The problem of interpreting a model's prediction can be recast as follows: which part of the input is particularly important in explaining the output of the model? To illustrate, consider the example given at ICML by Shrikumar. Suppose you have trained a model on DNA mutations that cause diseases, and you now feed it a DNA sequence. The model predicts whether this sequence can be linked to any of the diseases it has learned. If so, what you would like to understand is why the model gave this particular prediction, i.e., which part of the input sequence led it to predict a specific disease. You would therefore like high weights on the parts of the sequence that best explain the model's decision and low weights on those that do not. To achieve this, most approaches iterate between two steps:

1. Mask ("prohibit") some part of the input.
2. Observe the change in the output (the fitted answer).

Steps 1 and 2 are repeated for different maskings of the input.

2. Existing approaches

The need for tools that explain prediction models came with the development of more complex models for more complex data, and the recent Computer Vision and Machine Learning literature has accordingly developed a field devoted to interpretability.

2.1. Cooperative game theory based

At the beginning of the 21st century, Lipovetsky and Conklin (2001) [1] highlighted the multicollinearity problem in the analysis of regressor importance in multiple regression: variables can appear important simply because their coefficients are inflated by collinearity. To address this, they used a tool from cooperative game theory to obtain the comparative importance of predictors: the Shapley value [0], which is derived from an axiomatic approach and produces the unique solution satisfying a set of natural fairness requirements. A decade later, Strumbelj et al. (2011) [2] generalized the use of Shapley values to black-box models such as SVMs and artificial neural networks, making such models more informative and easier to understand and use. To sidestep the combinatorial time complexity of the exact solution, they proposed an approximation algorithm that assumes mutual independence of the individual features.

2.2. Architecture specific: deep neural networks

Since then, several methods have been proposed that take advantage of the structure or architecture of a specific model. For neural networks, there are back-propagation-based methods such as Guided Backpropagation (Springenberg et al. 2014) [4], which use the relationship between the neurons and the output: each neuron is assigned a score according to how much it affects the output, and a single backward pass yields the scores for all parts of the input. Other approaches build a linear model that locally approximates the more complicated model around the prediction being explained (LIME [6]). Shrikumar et al. (2017) [7] introduced DeepLIFT, which assigns contribution scores to features based on the difference between each neuron's activation and its 'reference activation'. Further DNN-specific explanation methods have been proposed in the literature; the reader can consult [8] for additional references.
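The simplest instance of the mask-and-observe recipe from Section 1 is occlusion: replace one part of the input with a baseline value and record how the output moves. This is an illustrative sketch only; the stand-in "model" and its weights are made up, and masking-by-baseline is just one way to "prohibit" a feature.

```python
def occlusion_attributions(model, x, baseline=0.0):
    """Score each feature by how much masking it changes the model output."""
    full = model(x)
    scores = []
    for i in range(len(x)):
        masked = list(x)
        masked[i] = baseline                 # step 1: "prohibit" one part of the input
        scores.append(full - model(masked))  # step 2: observe the change in the output
    return scores

# Hypothetical stand-in model: a fixed linear score (weights are made up).
weights = [2.0, -1.0, 0.5]
model = lambda z: sum(w * v for w, v in zip(weights, z))

print(occlusion_attributions(model, [1.0, 1.0, 1.0]))  # → [2.0, -1.0, 0.5]
```

For this linear stand-in, occlusion recovers each feature's weighted contribution exactly; for nonlinear models the scores depend on the chosen baseline, which is one motivation for the more principled attributions discussed below.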
Below is a non-exhaustive list of model-specific methods with available Python code: for random forests, the tree interpretation approach of Saabas [3]; for deep neural networks, the methods of [4], [5], [7], [9] and [10].

2.3. A unified approach

The most recent and most general approach to interpretability is the SHAP model of Lundberg et al. (2017) [11]. It defines a class of methods, called additive feature attribution methods, that contains most of the approaches cited above, including the cooperative-game-theory-based ones [1, 2]. These methods share the same form of explanation model (i.e., an interpretable approximation of the original model), which we introduce in the next paragraph.

3. SHAP: additive feature attribution methods

An explanation model is a simple model that describes the behavior of the complex model. Additive feature attribution methods use a linear function of binary variables as the explanation model.

3.1. The SHAP model

Let f be the original prediction model to be explained and g the explanation model. Additive feature attribution methods have an explanation model that is a linear function of binary variables:

    g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i,

where M is the number of features, the binary variables z'_i indicate whether feature i is observed (z'_i = 1) or unknown (z'_i = 0), and the \phi_i \in \mathbb{R} are the feature attribution values. There is exactly one choice of the \phi_i that satisfies the three natural properties explained in the paragraph that follows.

3.2. The natural properties

(1) Local accuracy: the output of the explanation model matches the original model for the prediction being explained: g(x') = f(x).

(2) Missingness: a feature missing from the original input receives no attribution: x'_i = 0 implies \phi_i = 0.

(3) Consistency: if turning a feature off always makes at least as big a difference in one model as in another, then that feature's importance should be at least as high in the first model as in the second.
Formally, let z' \setminus i denote setting z'_i = 0. Then, for any two models f^1 and f^2, if

    f^1_x(z') - f^1_x(z' \setminus i) \ge f^2_x(z') - f^2_x(z' \setminus i)   for all inputs z' \in \{0,1\}^M,

then \phi_i(f^1, x) \ge \phi_i(f^2, x).

3.3. Computing SHAP values

3.3.1. Back to the Shapley values

The computation of the feature importances, the SHAP values, comes from cooperative game theory [0] and the Shapley values. In our context, the Shapley value of feature i is a weighted average of the differences between the model's predictions with feature i and without it, taken over all subsets of the other features:

    \phi_i(f, x) = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|! \, (M - |S| - 1)!}{M!} \, \big[ f_x(S \cup \{i\}) - f_x(S) \big],

where F is the set of all M features and f_x(S) denotes the model evaluated with only the features in S known. Since the problem is combinatorial, different strategies have been proposed in the literature to approximate the solution [0, 1].

3.3.2. The SHAP values

In the general setting, SHAP values are the Shapley values of a conditional expectation function of the original model:

    f_x(S) = E[ f(z) \mid z_S ],

where S is the set of known (non-zero) entries of z'. In practice the computation of SHAP values is challenging, which is why Lundberg et al. [11] propose different approximation algorithms depending on the specifics of the model or the data (tree ensembles, independent features, deep networks, ...).

4. Practical example with the SHAP library

Lundberg maintains a GitHub repository with very nice and quite complete notebooks explaining the different use cases for SHAP and its approximation algorithms (Tree / Deep / Gradient / Linear / Kernel explainers). I really encourage the reader to visit the author's page: https://github.com/slundberg/shap. Here I will just walk through a very simple example to give a feel for the kind of results one obtains when interpreting a prediction. Consider the heart dataset from a Kaggle competition (https://www.kaggle.com/ronitf/heart-disease-uci).
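Before diving into the library, the exact Shapley computation of Section 3.3.1 can be brute-forced for a toy model by enumerating all feature subsets (feasible only for small M). Everything here is a stand-in: the model f is hypothetical, and a baseline of 0 plays the role of "feature unknown".

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature subsets (small M only)."""
    M = len(x)

    def f_S(S):
        # Features in S keep their value from x; the others fall back to the baseline.
        return f([x[j] if j in S else baseline[j] for j in range(M)])

    phis = []
    for i in range(M):
        others = [j for j in range(M) if j != i]
        phi = 0.0
        for size in range(M):
            for S in combinations(others, size):
                # Weight |S|! (M - |S| - 1)! / M! from the Shapley formula.
                weight = factorial(size) * factorial(M - size - 1) / factorial(M)
                phi += weight * (f_S(set(S) | {i}) - f_S(set(S)))
        phis.append(phi)
    return phis

# Hypothetical model with an interaction between features 0 and 1; feature 2 is unused.
f = lambda z: z[0] + 2 * z[1] + z[0] * z[1]
x, base = [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]
phis = shapley_values(f, x, base)
print(phis)  # ≈ [1.5, 2.5, 0.0]: the interaction term is split evenly
assert abs(sum(phis) - (f(x) - f(base))) < 1e-9  # local accuracy holds
```

Note how the unused feature gets zero attribution (missingness) and the interaction term is shared fairly between the two features that create it, which is exactly the behavior the axioms of Section 3.2 demand.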
The dataset consists of 13 variables describing 303 patients, plus one label giving the angiographic disease status (target ∈ {0,1}). The set is fairly balanced: 165 patients have label 1 and 138 have label 0. The data were lightly pre-processed to keep only the most informative variables, namely ['sex', 'cp', 'thalach', 'exang', 'oldpeak', 'ca', 'thal']. We then split the dataset into a random train set (75% of the data) and test set (the remaining 25%) and scale them. An SVM classifier is trained and reaches a classification accuracy of around 91%.

Once the classification model is learned, we explain a particular prediction (a correct one ;) ) with the shap library developed by Lundberg. You need to install shap (https://github.com/slundberg/shap) first. The resulting force plot illustrates the features that push the prediction higher (in pink) and those that push it lower (in blue), starting from a base value computed as the average model output on the training dataset. For this true-positive fitted answer, the push of the probability towards 1 is mainly explained by 'sex', 'oldpeak', 'thalach' and 'exang', whereas the 'ca' feature tends to push the prediction score down.

We can apply this explainer to all correctly predicted examples in the test set, as below:

>>> TP_TN = [i for i in range(len(pred)) if pred[i] == y_test.values[i]]
>>> shap_values = explainer.shap_values(X_test_sc[TP_TN,:], nsamples=100)
>>> shap.force_plot(explainer.expected_value, shap_values,
...                 X_test_sc[TP_TN,:], feature_names=X_train.columns)

The resulting figure stacks all the individual feature contributions horizontally, ordered by output value. The first 39 predictions are correctly classified in class 1 and the last 30 are correctly classified in class 0. The visualisation is interactive: you can inspect the effect of a particular feature by changing the y-axis in the menu on the left side of the figure, and, symmetrically, change the x-axis menu to order the samples by output value, by similarity, or by the SHAP values of a given feature.
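For intuition on the approximation algorithms mentioned in Section 3.3.2, here is a pure-Python sketch of the sampling scheme of Strumbelj et al. under the feature-independence assumption. It is not the shap library's actual implementation; the model and the background data are toy stand-ins.

```python
import random

def sampled_shapley(f, x, background, i, n_samples=1000, seed=0):
    """Monte Carlo estimate of feature i's Shapley value, assuming independent features.

    Each draw takes a random feature ordering and a random background row, reveals
    from x the features up to and including i in that ordering, and measures the
    marginal effect of revealing i (the sampling scheme of Strumbelj et al.).
    """
    rng = random.Random(seed)
    M = len(x)
    total = 0.0
    for _ in range(n_samples):
        perm = rng.sample(range(M), M)   # random ordering of the features
        b = rng.choice(background)       # "unknown" features come from this row
        pos = perm.index(i)
        with_i = [x[j] if perm.index(j) <= pos else b[j] for j in range(M)]
        without_i = list(with_i)
        without_i[i] = b[i]              # hide feature i again
        total += f(with_i) - f(without_i)
    return total / n_samples

f = lambda z: 3 * z[0] + z[1]   # hypothetical stand-in model
background = [[0.0, 0.0]]       # toy background data set
print(sampled_shapley(f, [1.0, 1.0], background, i=0))  # → 3.0 (exact here: f is linear)
```

For a linear model every draw yields the same marginal contribution, so the estimate is exact; for models with interactions, the estimate converges to the Shapley value as n_samples grows.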
It is also very useful to get, in a single plot, an overview of the distribution of SHAP values for each feature together with an idea of their overall impact (still on the correctly predicted samples).

The first plot (subfigure a) carries three kinds of information: the x-axis shows the SHAP values of each feature listed on the y-axis, and each line stands for the set of SHAP values computed for one specific feature, for every feature of the model. The third dimension is the color of the points, which encodes the feature value (pink for a high value, blue for a low one). You can therefore see both the dispersion of each feature's SHAP values and its impact on the model output. For instance, high values of the 'cp' feature imply high SHAP values and tend to push the prediction up, whereas high values of the 'thal' feature (pink points) tend to lower the predicted score. The second plot (subfigure b) shows the mean absolute SHAP value of each feature; it can be read as a summary of the first plot.

Conclusion

Many approaches have been proposed in the literature for interpretable/explainable models in the supervised setting. The main strengths of the additive feature attribution framework are, on the one hand, its theoretical properties and, on the other, its generality: it subsumes most of the explanation models developed in the literature. Lundberg et al. have also proposed several approximation algorithms that exploit the structure of the model and the type of data to improve computation time. If you work with deep neural networks or tree ensembles, I encourage you to look at the many examples in the author's GitHub repository: https://github.com/slundberg/shap.

Bibliography

[0] Shapley, Lloyd S. "A Value for N-Person Games." Contributions to the Theory of Games 2 (28): 307-317 (1953).
[1] Lipovetsky, S. and Conklin, M. "Analysis of regression in game theory approach." Applied Stochastic Models in Business and Industry 17(4): 319-330 (2001).
[2] Strumbelj, E. et al. "A General Method for Visualizing and Explaining Black-Box Regression Models." Adaptive and Natural Computing Algorithms, ICANNGA 2011, Lecture Notes in Computer Science, vol. 6594 (2011).
[3] Saabas, A. "Interpreting random forests." https://blog.datadive.net/interpreting-random-forests/
[4] Springenberg, J. et al. "Striving for Simplicity: The All Convolutional Net." arXiv:1412.6806 (2014).
[5] Bach, S. et al. "On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation." PLOS ONE 10(7) (2015).
[6] Ribeiro, M. et al. "Why Should I Trust You? Explaining the Predictions of Any Classifier." Proceedings of the 22nd ACM SIGKDD: 1135-1144 (2016).
[7] Shrikumar, A. et al. "Learning Important Features Through Propagating Activation Differences." Proceedings of ICML (2017).
[8] "Explainable and Interpretable Models in Computer Vision and Machine Learning." The Springer Series on Challenges in Machine Learning, Springer (2018).
[9] Sundararajan, M. et al. "Axiomatic Attribution for Deep Networks." Proceedings of ICML (2017).
[10] Montavon, G. et al. "Explaining nonlinear classification decisions with deep Taylor decomposition." Pattern Recognition 65: 211-222 (2017).
[11] Lundberg, S. et al. "A Unified Approach to Interpreting Model Predictions." NIPS (2017).
Several possible explanations to that: we can think about improving social acceptance for the integration of ML algorithms into our lives ; correcting a model by discovering a bias in the population of the training set; understanding the cases for which the model fails; following the law and regulations. Nowadays, complex supervised models can be very accurate on specific tasks but remain quite uninterpretable; at the opposite, when using simple models, it is indeed easy to interpret them but are often less accurate. How can we solve such a dilemma ? This post tends to answer to this question by going through the ML literature in interpretability models and by focusing on a class of additive feature attribution methods [11]. 1. The main idea The problem of giving an interpretation to the model prediction can be recasted as it follows: Which part of the input is particularly important to explain the output of the model ? In order to illustrate this purpose, let's consider the example given during the ICML conference by Shrikumar. Lets suppose you have already trained a model with DNA mutations causing diseases. Now, let's consider a DNA sequence as input, as for instance: The model is going to predict if this sequence can be linked to any known diseases the model learnt. If so, what you would like to understand is why your model gives this prediction in particular; ie which part of the input sequence leads your model to predict a specific disease. So, you would like to have higher weights for the parts of the sequence which explain the most the decision of your model and lower ones for those which do not explain the prediction: To achieve that, most of approaches iterate between 2 steps: 1. Set a prohibition to some part of the input 2. Observe the change in the output (fitted answer) Repeat step 1 and step 2 for different prohibitions of the input. 2. 
Existing approaches

The need for tools that explain prediction models came with the development of more complex models for more complex data, and the recent Computer Vision and Machine Learning literature has accordingly developed a new field around interpretability.

2.1. Cooperative game theory based

At the beginning of the 21st century, Lipovetsky and Conklin (2001) [1] highlighted the multicollinearity problem in the analysis of regressor importance in multiple regression: variables can appear important simply because their coefficients are inflated by collinearity. To address this, they used a tool from cooperative game theory to obtain the comparative importance of predictors: the Shapley value imputation [0], which is derived from an axiomatic approach and produces a unique solution satisfying a set of natural axioms. A decade later, Strumbelj et al. (2011) [2] generalised the use of Shapley values to black-box models such as SVMs and artificial neural networks, in order to make these models more informative and easier to understand and use. They proposed an approximation algorithm that assumes mutual independence of the individual features in order to get around the time complexity of the exact solution.

2.2. Architecture specific: Deep Neural Network

Since then, several methods have been proposed that take advantage of the structure or architecture of a specific model class. For neural networks, there are back-propagation based methods such as guided backpropagation (Springenberg et al. 2014) [4], which exploit the relationship between the neurons and the output: the idea is to assign each neuron a score according to how much it affects the output, in a single backward pass that yields scores for all parts of the input. Other approaches build a linear model that locally approximates the more complicated model around the prediction of interest (LIME [6]). Shrikumar et al.
(2017) [7] introduced DeepLIFT, which assigns contribution scores to the features based on the difference between each neuron's activation and its 'reference activation'. Other DNN-specific explanation methods have been proposed in the literature; the reader can consult [8] for additional references on this subject. Below is a list of methods with available Python code, covering the most recent model-specific approaches:

Random Forest:

Deep Neural Network:

Cooperative game theory based:

2.3. A unified approach

The most recent and most general approach to interpretability is the SHAP framework of Lundberg et al. (2017) [11]. It defines a class of methods, called additive feature attribution methods, that contains most of the approaches cited above. These methods share the same form of explanation model (i.e., an interpretable approximation of the original model), which we introduce in the next paragraph.

3. SHAP: Additive feature attribution methods

An explanation model is a simple model that describes the behaviour of the complex model. Additive feature attribution methods represent the explanation model as a linear function of binary variables.

3.1. The SHAP model

Let f be the original prediction model to be explained and g the explanation model. Additive feature attribution methods have an explanation model that is a linear function of binary variables:

g(z′) = Φ0 + Σi=1..M Φi zi′

where M is the number of features; the zi′ variables indicate whether a feature is observed (zi′ = 1) or unknown (zi′ = 0); and the Φi ∈ ℝ are the feature attribution values. There is only one solution for the Φi that satisfies the three natural properties explained in the paragraph that follows.

3.2.
The natural properties

(1) Local accuracy: the output of the explanation model matches the original model for the prediction being explained: g(x′) = f(x).

(2) Missingness: a feature missing from the input receives no attribution, i.e., turning the feature off contributes nothing: xi′ = 0 ⇒ Φi = 0.

(3) Consistency: if turning a feature off always makes a bigger difference in one model than in another, then that feature's importance should be higher in the first model than in the second. Let z′ \ i denote z′ with zi′ = 0. For any two models f1 and f2, if for all inputs z′ ∈ {0,1}M

fx1(z′) − fx1(z′ \ i) ≥ fx2(z′) − fx2(z′ \ i)

then Φi(f1, x) ≥ Φi(f2, x).

3.3. Computing SHAP values

3.3.1. Back to the Shapley values

The computation of feature importances (the SHAP values) comes from cooperative game theory [0], via the Shapley values. In our context, a Shapley value can be viewed as a weighted average of all possible differences between predictions of the model with feature i and predictions without it:

Φi(f, x) = Σz′ ⊆ x′ [ |z′|! (M − |z′| − 1)! / M! ] · [ fx(z′ ∪ i) − fx(z′) ]

where |z′| stands for the number of non-zero features, and z′ ⊆ x′ ranges over all z′ vectors whose non-zero entries are a subset of the non-zero entries of x′, feature i excluded. Since the problem is combinatorial, different strategies have been proposed in the literature to approximate the solution ([0,1]).

3.3.2. The SHAP values

In the more general setting, the SHAP values can be viewed as the Shapley values of a conditional expectation function of the original model:

fx(z′) = E[ f(z) | zS ]

where S is the set of non-zero entries of z′. In practice, computing SHAP values exactly is challenging, which is why Lundberg et al. [11] propose different approximation algorithms according to the specificities of your model or data (tree ensembles, independent features, deep networks, ...).

4.
Practical example with the SHAP library

Lundberg maintains a GitHub repository with very nice and fairly complete notebooks covering different use cases of SHAP and its various approximation algorithms (Tree, Deep, Gradient, Linear and Kernel explainers). I really encourage the reader to visit the author's page: https://github.com/slundberg/shap. Here, I will just go through a very simple example to give a sense of the kind of results you can obtain when interpreting a prediction.

Let's consider the heart dataset from a Kaggle competition (https://www.kaggle.com/ronitf/heart-disease-uci). The dataset consists of 13 variables describing 303 patients and one label describing the angiographic disease status (target ∈ {0,1}). The set is fairly balanced, since 165 patients have label 1 and 138 have label 0. The data have been lightly pre-processed: we keep only the most informative variables, namely ['sex', 'cp', 'thalach', 'exang', 'oldpeak', 'ca', 'thal']. We then split the dataset into a random train set (75% of the data) and a test set (the remaining 25%) and scale them. An SVM classifier is trained, and we obtain a classification accuracy of around 91%.

Once the classification model is learnt, we want to explain a particular prediction (a true one ;) ) using the shap library developed by Lundberg. You need to install shap (https://github.com/slundberg/shap) before reproducing this example. The resulting force plot shows the features that push the prediction higher (in pink) and the ones that push it lower (in blue), starting from a base value computed as the average model output on the training dataset. For this true-positive prediction, the push of the probability towards 1 is mainly explained by 'sex', 'oldpeak', 'thalach' and 'exang', whereas the 'ca' feature tends to push the prediction score down.
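Under the hood, explainers like these estimate the Shapley values defined in Section 3.3. For intuition, here is a minimal, self-contained sketch (pure Python, not the shap library) that computes exact Shapley values for a hypothetical toy model by brute-force subset enumeration, replacing 'missing' features with a baseline value, i.e., under the feature-independence assumption mentioned earlier:

```python
# Illustrative sketch (NOT the shap library): exact Shapley values for a
# hypothetical toy model, by brute-force enumeration of feature subsets.
# "Missing" features are replaced by a baseline value, which corresponds
# to the feature-independence assumption discussed above.
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """phi_i = sum over subsets S not containing i of
    |S|! (M - |S| - 1)! / M! * (f(S + {i}) - f(S))."""
    M = len(x)

    def eval_on(subset):
        # Features outside `subset` are "turned off", i.e. set to baseline.
        z = [x[j] if j in subset else baseline[j] for j in range(M)]
        return f(z)

    phi = []
    for i in range(M):
        others = [j for j in range(M) if j != i]
        total = 0.0
        for size in range(M):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(M - size - 1) / factorial(M)
                total += weight * (eval_on(set(S) | {i}) - eval_on(set(S)))
        phi.append(total)
    return phi

# Toy model: linear terms plus an interaction between features 0 and 2.
f = lambda z: 2.0 * z[0] + 1.0 * z[1] + 0.5 * z[0] * z[2]
x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]

phi = shapley_values(f, x, baseline)
# Local accuracy: attributions sum to f(x) - f(baseline); the interaction
# term (worth 1.5 at x) is split equally between features 0 and 2,
# so phi is approximately [2.75, 2.0, 0.75].
assert abs(sum(phi) - (f(x) - f(baseline))) < 1e-9
```

With M features this enumerates 2^(M−1) subsets per feature, which is exactly why shap ships model-specific and sampling-based approximations instead of this brute-force computation.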
We can apply this explainer to all correctly predicted examples in the test set. The resulting figure stacks all the individual feature contributions horizontally, ordered by output value: the first 39 predictions are correctly classified in class 1 and the last 30 are correctly classified in class 0. Note that the visualisation is interactive: you can inspect the effect of a particular feature by changing the y-axis in the menu on the left side of the figure. Symmetrically, you can change the x-axis menu to order the samples by output value, similarity, or the SHAP values of a given feature.

It can also be very interesting to get, in a single plot, an overview of the distribution of SHAP values for each feature together with a sense of their overall impact (still on the correctly predicted samples).

The first plot (subfigure a) carries three kinds of information: the x-axis shows the SHAP values of each feature listed on the y-axis, with one row per feature of the model. The third dimension is the colour of the points, which encodes the feature value (pink for a high value of the feature, blue for a low one). You can therefore see the dispersion of the SHAP values per feature as well as their impact on the model output. For instance, high values of the 'cp' feature imply high SHAP values and tend to push the prediction up, whereas high values of the 'thal' feature (pink points) tend to lower the predicted score.

The second plot (subfigure b) shows the mean absolute SHAP value obtained for each feature; it can be seen as a summary of the first plot.

Conclusion

Many approaches have been proposed in the literature for building interpretable/explainable models in the supervised setting.
The main strength of the additive feature attribution framework is, on the one hand, its theoretical properties and, on the other hand, its generality: it encompasses most of the explanation methods developed in the literature. Lundberg et al. have proposed different approximation algorithms that take advantage of the model structure and the type of data to improve computation time. If you work with deep neural networks or tree ensembles, I really encourage you to look at more examples in the author's GitHub repository: https://github.com/slundberg/shap.

Bibliography

[0] Shapley, Lloyd S. "A Value for N-Person Games." Contributions to the Theory of Games 2 (28): 307–317 (1953).
[1] Lipovetsky, S. and Conklin, M. "Analysis of regression in game theory approach." Applied Stochastic Models in Business and Industry 17(4): 319–330 (2001).
[2] Strumbelj et al. "A General Method for Visualizing and Explaining Black-Box Regression Models." Adaptive and Natural Computing Algorithms, ICANNGA 2011, Lecture Notes in Computer Science, vol. 6594 (2011).
[3] Saabas, A. "Interpreting random forests", https://blog.datadive.net/interpreting-random-forests/
[4] Springenberg et al. "Striving for simplicity: The all convolutional net", arXiv:1412.6806 (2014).
[5] Bach et al. "On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation", PLOS ONE 10(7): e0130140 (2015).
[6] Ribeiro et al. "Why Should I Trust You? Explaining the Predictions of Any Classifier", Proceedings of the 22nd ACM SIGKDD: 1135–1144 (2016).
[7] Shrikumar et al. "Learning Important Features Through Propagating Activation Differences", Proceedings of ICML (2017).
[8] "Explainable and Interpretable Models in Computer Vision and Machine Learning", Springer, The Springer Series on Challenges in Machine Learning, ISBN 9783319981307 (2018).
[9] Sundararajan et al. "Axiomatic Attribution for Deep Networks", Proceedings of ICML (2017).
[10] Montavon et al. "Explaining nonlinear classification decisions with deep Taylor decomposition", Pattern Recognition 65: 211–222 (2017).
[11] Lundberg et al. "A unified approach to interpreting model predictions", NIPS (2017).
If you deal with Deep Neural Network or tree ensembles, I really encourage the reader to see more examples on the GitHub repository of the author: \u003ca target=\"_blank\" href=\"https:\/\/github.com\/slundberg\/shap\"\u003ehttps:\/\/github.com\/slundberg\/shap\u003c\/a\u003e.\u003c\/p\u003e","backupValue":null,"version":1}},{"type":"Blog.Section","id":"f_e14e977f-909f-449e-8949-a0a1cd801896","defaultValue":null,"component":{"type":"Separator","id":"f_1a677746-633c-4d6e-8165-d3a15cae3e5b","defaultValue":null,"value":null}},{"type":"Blog.Section","id":"f_977a40d3-0cf9-4cb1-bb15-9e63242d372a","defaultValue":null,"component":{"type":"Blog.Title","id":"f_5adb6f39-17ab-4688-96e2-c3d92b212d03","defaultValue":false,"value":"\u003cp\u003e\u003cspan class=\"s-text-color-gray\"\u003eBibliography\u003c\/span\u003e\u003c\/p\u003e","backupValue":null,"version":1}},{"type":"Blog.Section","id":"f_7b2fb3a8-c705-4ed2-8f20-20389af56b9e","defaultValue":null,"component":{"type":"RichText","id":"f_93d9bfaf-adfb-4841-b02f-389fb0b450b9","defaultValue":false,"value":"\u003cp style=\"font-size: 100%; text-align: justify;\"\u003e\u003cspan class=\"s-text-color-black\"\u003e[0] Shapley, Lloyd S. \u201cA Value for N-Person Games.\u201d Contributions to the Theory of Games 2 (28): 307\u201317, (1953).\u003c\/span\u003e\u003c\/p\u003e\u003cp style=\"font-size: 100%; text-align: justify;\"\u003e\u003cspan class=\"s-text-color-black\"\u003e[1] Lipovetsky, S. and Conklin, M. \"Analysis of regression in game theory approach.\" Applied Stochastic Models in business and industry (17-4):319-330, (2001).\u003c\/span\u003e\u003c\/p\u003e\u003cp style=\"text-align: justify;\"\u003e\u003cspan class=\"s-text-color-black\"\u003e[2] Strumbelj et al., \"A General Method for Visualizing and Explaining Black-Box Regression Models.\" Adaptive and Natural Computing Algorithms. ICANNGA 2011. Lecture Notes in Computer Science, vol 6594. 
(2011).\u003c\/span\u003e\u003c\/p\u003e\u003cp style=\"text-align: justify;\"\u003e\u003cspan class=\"s-text-color-black\"\u003e[3] Saabas et al., \"Interpreting random forests\", \u003c\/span\u003e\u003ca target=\"_blank\" href=\"http:\/\/Interpreting%20random%20forests%20(https:\/\/blog.datadive.net\/interpreting-random-forests\/)\"\u003e\u003cspan class=\"s-text-color-black\"\u003ehttps:\/\/blog.datadive.net\/interpreting-random-forests\/\u003c\/span\u003e\u003c\/a\u003e\u003c\/p\u003e\u003cp style=\"text-align: justify;\"\u003e\u003cspan class=\"s-text-color-black\"\u003e[4] Springenberg et al., \"Striving for simplicity: The all convolutional net\", arXiv:1412.6806 (2014).\u003c\/span\u003e\u003c\/p\u003e\u003cp style=\"text-align: justify;\"\u003e\u003cspan class=\"s-text-color-black\"\u003e[5] Bach et al. \"On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation\", PLOS ONE: (10-7): 130-140, (2015).\u003c\/span\u003e\u003c\/p\u003e\u003cp style=\"text-align: justify;\"\u003e\u003cspan class=\"s-text-color-black\"\u003e[6] Ribeiro et al. \"Why should I Trust You ? Explaining the predictions of any classifier\", Proceedings of the 22nd ACM SIGKDD: 1135-1144 (2016).\u003c\/span\u003e\u003c\/p\u003e\u003cp style=\"text-align: justify;\"\u003e\u003cspan class=\"s-text-color-black\"\u003e[7] Shrikumar et al. 
\"Learning Important Features Through Propagating Activation Difference\", Proceedings in ICML (2017).\u003c\/span\u003e\u003c\/p\u003e\u003cp style=\"text-align: justify;\"\u003e[8] \"Explainable and Interpretable Models in Computer Vision and Machine Learning\", Springer Verlag, The Springer Series on Challenges in Machine Learning, 9783319981307 (2018).\u003c\/p\u003e\u003cp style=\"text-align: justify;\"\u003e\u003cspan class=\"s-text-color-black\"\u003e[9] Sunderarajan et al., \"Axiomatic Attribution for Deep Networks\", Proceedings in ICML (2017).\u003c\/span\u003e\u003c\/p\u003e\u003cp style=\"text-align: justify;\"\u003e\u003cspan class=\"s-text-color-black\"\u003e[10] Montavon et al. \"Explaining nonlinear classification decision with deep Taylor decomposition\", Pattern Recognition (65):211-222 (2017).\u003c\/span\u003e\u003c\/p\u003e\u003cp style=\"text-align: justify;\"\u003e\u003cspan class=\"s-text-color-black\"\u003e[11] Lundberg et al., ''A unified approach to interpreting model predictions'', NIPS (2017).\u003c\/span\u003e\u003c\/p\u003e","backupValue":null,"version":1}}]},"settings":{"hideBlogDate":false},"pageMode":null};$S.siteData={"terms_text":null,"privacy_policy_text":null,"show_terms_and_conditions":false,"show_privacy_policy":false,"gdpr_html":null,"live_chat":false};$S.stores={"fonts_v2":[{"name":"roboto condensed","fontType":"google","displayName":"Roboto Condensed","cssValue":"\"roboto 
condensed\"","settings":{"weight":"300,700"},"hidden":false,"cssFallback":"sans-serif","disableBody":null,"isSuggested":true}],"features":{"allFeatures":[{"name":"analytics","canBeUsed":true,"hidden":false},{"name":"fb_image","canBeUsed":true,"hidden":false},{"name":"twitter_card","canBeUsed":true,"hidden":false},{"name":"favicon","canBeUsed":true,"hidden":false},{"name":"style_panel","canBeUsed":true,"hidden":false},{"name":"google_analytics","canBeUsed":true,"hidden":false},{"name":"blog_custom_url","canBeUsed":true,"hidden":false},{"name":"page_collaboration","canBeUsed":true,"hidden":false},{"name":"premium_templates","canBeUsed":true,"hidden":false},{"name":"custom_domain","canBeUsed":true,"hidden":false},{"name":"premium_support","canBeUsed":true,"hidden":false},{"name":"remove_branding_title","canBeUsed":true,"hidden":false},{"name":"full_analytics","canBeUsed":true,"hidden":false},{"name":"ecommerce_layout","canBeUsed":true,"hidden":false},{"name":"portfolio_layout","canBeUsed":true,"hidden":false},{"name":"password_protection","canBeUsed":true,"hidden":false},{"name":"remove_logo","canBeUsed":true,"hidden":false},{"name":"optimizely","canBeUsed":true,"hidden":false},{"name":"custom_code","canBeUsed":true,"hidden":false},{"name":"blog_custom_code","canBeUsed":true,"hidden":false},{"name":"mobile_actions","canBeUsed":true,"hidden":false},{"name":"premium_assets","canBeUsed":true,"hidden":false},{"name":"premium_apps","canBeUsed":true,"hidden":false},{"name":"premium_sections","canBeUsed":true,"hidden":false},{"name":"blog_mailchimp_integration","canBeUsed":true,"hidden":false},{"name":"ecommerce_coupon","canBeUsed":true,"hidden":false},{"name":"ecommerce_shipping_region","canBeUsed":true,"hidden":false},{"name":"multiple_page","canBeUsed":true,"hidden":false},{"name":"ecommerce_taxes","canBeUsed":true,"hidden":false},{"name":"ecommerce_layout","canBeUsed":true,"hidden":false},{"name":"portfolio_layout","canBeUsed":true,"hidden":false},{"name":"ecommerce_categ
Interpretability models

Why is interpretability so important in machine learning? Why can't we just trust the predictions of a supervised model? There are several possible answers: improving social acceptance of the integration of ML algorithms into our lives; correcting a model by discovering a bias in the population of the training set; understanding the cases in which the model fails; complying with laws and regulations. Nowadays, complex supervised models can be very accurate on specific tasks but remain quite uninterpretable; conversely, simple models are easy to interpret but often less accurate. How can we resolve this dilemma? This post tries to answer that question by going through the ML literature on interpretability models, focusing on the class of additive feature attribution methods [11].

1. The main idea

The problem of giving an interpretation to a model prediction can be recast as follows: which part of the input is particularly important to explain the output of the model? To illustrate this, let's consider the example given during the ICML conference by Shrikumar.
Let's suppose you have already trained a model on DNA mutations that cause diseases. Now, consider a DNA sequence as input. The model predicts whether this sequence can be linked to any of the known diseases it has learnt. If so, what you would like to understand is why your model gives this prediction in particular, i.e., which part of the input sequence leads your model to predict a specific disease. So, you would like higher weights on the parts of the sequence that best explain the decision of your model, and lower weights on those which do not explain the prediction.

To achieve that, most approaches iterate between two steps:
1. Forbid (mask) some part of the input.
2. Observe the change in the output (fitted answer).
Repeat steps 1 and 2 for different maskings of the input.

2. Existing approaches

The need for tools to explain prediction models came with the development of more complex models for more complex data, and the recent Computer Vision and Machine Learning literature has therefore developed a new field around interpretability.

2.1. Cooperative game theory based

Back at the beginning of the 21st century, Lipovetsky et al. (2001) [1] highlight the multicollinearity problem in the analysis of regressor importance in multiple regression: important variables can end up with insignificant coefficients because of their collinearity. To address this, they use a tool from cooperative game theory to obtain the comparative importance of predictors: Shapley value imputation [0], which derives from an axiomatic approach and produces a unique solution satisfying general requirements of Nash equilibrium. A decade later, Strumbelj et al. (2011) [2] generalize the use of Shapley values to black-box models such as SVMs and artificial neural networks, in order to make models more informative and easier to understand and use.
They propose an approximation algorithm that assumes mutual independence of the individual features in order to get around the time complexity of the exact solution.

2.2. Architecture specific: Deep Neural Network

Since then, several model-specific methods have been proposed in the literature that take advantage of the structure or architecture of the model. For neural networks, we can think of back-propagation based methods such as Guided Backpropagation (Springenberg et al. 2014) [4], which use the relationship between the neurons and the output. The idea is to assign a score to each neuron according to how much it affects the output; this is done in a single backward pass, from which you get the scores for all parts of the input. Other approaches build a linear model that locally approximates the more complicated model around the point being explained (LIME [6]). Shrikumar et al. (2017) [7] introduce DeepLIFT, which assigns a contribution score to each feature based on the difference between the activation of each neuron and its 'reference activation'. Other Deep Neural Network-specific explanation models have been proposed in the literature; the reader can consult [8] for additional references on this subject.

Below is a list of methods, with available Python code, summarizing the most recent model-specific approaches:
- Random Forest:
- Deep Neural Network:

2.3. A unified approach

The most recent and general approach to interpretability models is the SHAP model of Lundberg et al. [11]. It proposes a class of methods called additive feature attribution methods that contains most of the approaches cited above. These methods share the same notion of explanation model (i.e., any interpretable approximation of the original model), which we introduce in the next paragraph. An explanation model is a simple model which describes the behavior of the complex model.
The additive attribution methods represent such an explanation model by a linear function of binary variables.

3.1. The SHAP model

Let f be the original prediction model to be explained and g the explanation model. Additive feature attribution methods have an explanation model that is a linear function of binary variables:

g(z') = Φ0 + Σ_{i=1}^{M} Φi z'i

where M is the number of features; the z'i variables represent a feature being observed (z'i = 1) or unknown (z'i = 0), and the Φi ∈ ℝ are the feature attribution values. There is only one solution for the Φi satisfying general requirements of Nash equilibrium and the three natural properties explained in the paragraph that follows.

3.2. The natural properties

(1) Local accuracy: the output of the explanation model matches the original model for the prediction being explained: g(x') = f(x).

(2) Missingness: a feature that is missing from the input gets no attribution: x'i = 0 ⇒ Φi = 0.

(3) Consistency: if turning a feature off always makes a bigger difference in one model than in another, then the importance of that feature should be higher in the first model than in the second. Let z' \ i denote z' with z'i = 0; then for any two models f1 and f2, if

f1_x(z') - f1_x(z' \ i) ≥ f2_x(z') - f2_x(z' \ i) for all inputs z' ∈ {0,1}^M,

then Φi(f1, x) ≥ Φi(f2, x).

3.3. Computing SHAP values

3.3.1. Back to the Shapley values

The computation of feature importance -- the SHAP values -- comes from cooperative game theory [0] via the Shapley values. In our context, the Shapley value of feature i can be viewed as a weighted average of all possible differences between predictions of the model with feature i and predictions of the model without it:

Φi(f, x) = Σ_{z' ⊆ x' \ i} [ |z'|! (M - |z'| - 1)! / M! ] ( f_x(z' ∪ i) - f_x(z') )

where |z'| stands for the number of non-zero entries of z', z' ⊆ x' \ i ranges over all z' vectors whose non-zero entries are a subset of the entries of x' except feature i, and z' ∪ i denotes z' with z'i set to 1.
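To make the formula concrete, here is a small brute-force sketch in pure Python (a toy illustration, not the SHAP library itself). "Missing" features are replaced by a fixed baseline value, which amounts to an independence assumption; the model, point and baseline below are made up for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of f at x, replacing 'missing' features
    by the corresponding baseline value (independence assumption)."""
    M = len(x)
    phi = [0.0] * M
    for i in range(M):
        others = [j for j in range(M) if j != i]
        for size in range(M):
            for S in combinations(others, size):
                # Shapley weight |S|! (M - |S| - 1)! / M!
                w = factorial(size) * factorial(M - size - 1) / factorial(M)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(M)]
                without_i = [x[j] if j in S else baseline[j] for j in range(M)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Toy linear model: for a linear model with independent features, the
# Shapley value of feature i reduces to w_i * (x_i - baseline_i).
f = lambda z: 2 * z[0] + 3 * z[1] - z[2]
print([round(v, 6) for v in shapley_values(f, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])])
# → [2.0, 3.0, -1.0]
```

Note that local accuracy holds: the attributions sum to f(x) − f(baseline) = 4 − 0.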
Since the problem is combinatorial, different strategies have been proposed in the literature to approximate the solution ([0], [1]).

3.3.2. The SHAP values

In the more general context, the SHAP values can be viewed as the Shapley values of a conditional expectation function of the original model:

f_x(z') = E[ f(z) | z_S ]

where S is the set of non-zero entries of z'. In practice, the computation of SHAP values is challenging, which is why Lundberg et al. [11] propose different approximation algorithms depending on the specifics of your model or data (tree ensembles, independent features, deep networks, ...).

4. Practical example with the SHAP library

Lundberg created a GitHub repository with very nice and fairly complete notebooks explaining different use cases of SHAP and its approximation algorithms (Tree / Deep / Gradient / Linear or Kernel explainers). I really encourage the reader to visit the author's page: https://github.com/slundberg/shap. Here, I will just introduce a very simple example to give an idea of the kind of results we can obtain when interpreting a prediction.

Let's consider the heart dataset from a Kaggle competition (https://www.kaggle.com/ronitf/heart-disease-uci). The dataset consists of 13 variables describing 303 patients and one label describing the angiographic disease status (target ∈ {0,1}). The set is fairly balanced, since 165 patients have label 1 and 138 have label 0. The data have been pre-processed a little so that we keep only the most informative variables, ['sex', 'cp', 'thalach', 'exang', 'oldpeak', 'ca', 'thal']. Then we split the dataset into a random train set (75% of the data) and test set (the remaining 25%) and scale them. An SVM classifier is then trained, and we obtain a classification accuracy of around 91%.
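A minimal scikit-learn sketch of the pipeline just described. Since the Kaggle CSV is not reproduced here, a synthetic stand-in with the same column names is generated, so the ~91% accuracy figure will not be reproduced; with the real data, replace the synthetic block by loading `heart.csv`.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

cols = ['sex', 'cp', 'thalach', 'exang', 'oldpeak', 'ca', 'thal']

# Synthetic stand-in for the 303 heart-disease patients (the real CSV is
# not reproduced here): the label depends on 'cp' and 'thal' plus noise.
rng = np.random.RandomState(0)
X = rng.normal(size=(303, len(cols)))
y = (X[:, cols.index('cp')] - X[:, cols.index('thal')]
     + 0.5 * rng.normal(size=303) > 0).astype(int)

# 75% / 25% random split, then scaling fitted on the training set only.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train_sc = scaler.transform(X_train)
X_test_sc = scaler.transform(X_test)

clf = SVC(probability=True, random_state=0).fit(X_train_sc, y_train)
print("test accuracy: %.2f" % clf.score(X_test_sc, y_test))
```

From there, the explanation step uses the shap package, e.g. `explainer = shap.KernelExplainer(clf.predict_proba, X_train_sc)` followed by `shap_values = explainer.shap_values(X_test_sc)`; the summary plots later in the post were produced by `shap.summary_plot(shap_values, X_test_sc[TP_TN,:], feature_names=X_train.columns)` and the same call with `plot_type="bar"` (calls recovered from the page source, where `TP_TN` indexes the correctly classified test examples).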
Once the classification model is learnt, we want to explain a particular prediction (a true positive) using the shap library developed by Lundberg. You need to install the shap library (https://github.com/slundberg/shap) before running the code.

The first figure illustrates the features that push the prediction higher (in pink) and those that push it lower (in blue), starting from a base value computed as the average model output over the training dataset. For this true positive fitted answer, the push of the probability towards 1 is mainly explained by the 'sex', 'oldpeak', 'thalach' and 'exang' features, whereas the 'ca' feature tends to push the prediction score down.

We can apply this explainer to all correctly predicted examples in the test set. The resulting figure stacks all the individual feature contributions horizontally, ordered by output value. The first 39 predictions are correctly classified in class 1 and the last 30 are correctly classified in class 0. Note that the visualisation is interactive: we can see the effect of a particular feature by changing the y-axis in the menu on the left side of the figure. Symmetrically, you can change the x-axis menu to order the samples by output value, similarity or per-feature SHAP values.

It can also be very interesting to get, in a single plot, an overview of the distribution of SHAP values for each feature together with an idea of their overall impact (note that the example is still restricted to the correctly predicted samples). In the first plot (subfigure a.), there are three kinds of information: on the x-axis you have the SHAP values of each feature described on the y-axis. Each line stands for the set of SHAP values computed for a specific feature, and this is done for every feature of your model.
The third dimension is the color of the points: it represents the feature value (pink for a high value of the feature, blue for a low value). You can therefore see the dispersion of SHAP values per feature, and also their impact on the model output. For instance, high values of the 'cp' feature imply high SHAP values and tend to push the prediction up, whereas high values of the 'thal' feature (pink points) tend to lower the predicted score. The second plot (subfigure b.) shows the mean absolute SHAP value obtained for each feature; it can be seen as a summary of the previous figure.

Conclusion

Many approaches have been proposed in the literature to deal with interpretable/explainable models in the supervised context. The main strength of the additive feature attribution model is, on the one hand, its theoretical properties and, on the other hand, its general framework, which covers most of the explainable models developed in the literature. Different approximation algorithms have been proposed by Lundberg et al. to take advantage of the structure of the model and the type of data to improve computation time. If you deal with deep neural networks or tree ensembles, I really encourage the reader to see more examples on the author's GitHub repository: https://github.com/slundberg/shap.

Bibliography

[0] Shapley, Lloyd S. "A Value for N-Person Games." Contributions to the Theory of Games 2 (28): 307-17, (1953).
[1] Lipovetsky, S. and Conklin, M. "Analysis of regression in game theory approach." Applied Stochastic Models in Business and Industry (17-4): 319-330, (2001).
[2] Strumbelj et al., "A General Method for Visualizing and Explaining Black-Box Regression Models." Adaptive and Natural Computing Algorithms, ICANNGA 2011, Lecture Notes in Computer Science, vol 6594, (2011).
[3] Saabas et al., "Interpreting random forests", https://blog.datadive.net/interpreting-random-forests/
[4] Springenberg et al., "Striving for Simplicity: The All Convolutional Net", arXiv:1412.6806 (2014).
[5] Bach et al., "On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation", PLOS ONE (10-7): 130-140, (2015).
[6] Ribeiro et al., "Why Should I Trust You? Explaining the Predictions of Any Classifier", Proceedings of the 22nd ACM SIGKDD: 1135-1144, (2016).
[7] Shrikumar et al., "Learning Important Features Through Propagating Activation Differences", Proceedings of ICML (2017).
[8] "Explainable and Interpretable Models in Computer Vision and Machine Learning", Springer Verlag, The Springer Series on Challenges in Machine Learning, 9783319981307 (2018).
[9] Sundararajan et al., "Axiomatic Attribution for Deep Networks", Proceedings of ICML (2017).
[10] Montavon et al., "Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition", Pattern Recognition (65): 211-222, (2017).
[11] Lundberg et al., "A Unified Approach to Interpreting Model Predictions", NIPS (2017).
Big Data/Analytics Zone

Arthur Charpentier, ENSAE, PhD in Mathematics (KU Leuven), Fellow of the French Institute of Actuaries, professor at UQàM in Actuarial Science. Former professor-assistant at ENSAE Paristech, associate professor at Ecole Polytechnique and professor assistant in economics at Université de Rennes 1. Arthur is a DZone MVB and has posted 159 posts at DZone.

# Normality Versus Goodness-of-Fit Tests

11.07.2012

In many cases in statistical modeling, we would like to test whether the underlying distribution of an i.i.d. sample lies in a given (parametric) family, e.g. the Gaussian family

$H_0:F(\cdot)\in\mathcal F$ where $\mathcal F=\{\Phi(\cdot;\mu,\sigma^2);\mu\in\mathbb{R},\sigma^2\in\mathbb{R}_+\}$

Consider a sample (of size 200, as below):

> library(nortest)
> n=200
> X=rnorm(n)

A natural idea is then to use goodness-of-fit tests (natural is not necessarily correct; we'll get back to that later on), i.e.

$H_0:F(\cdot)=\Phi(\cdot;\mu,\sigma^2)$

for some $\mu$ and $\sigma^2$. But since those two parameters are unknown, it is not uncommon to see people substituting estimators for those two unknown parameters, i.e.

$H_0:F(\cdot)=\Phi(\cdot;\widehat\mu_n,\widehat\sigma^2_n)$

Using the Kolmogorov-Smirnov test, we get

> pn=function(x){pnorm(x,mean(X),sd(X))}
> P.KS.Norm.estimated.param=
+ ks.test(X,pn)$p.value

But since we choose the parameters based on the same sample we use to run the goodness-of-fit test, we should expect trouble somewhere. So another natural idea is to split the sample: the first half is used to estimate the parameters, and then we use the second half to run a goodness-of-fit test (e.g.
using the Kolmogorov-Smirnov test):

> pn=function(x){pnorm(x,mean(X[1:(n/2)]),
+ sd(X[1:(n/2)]))}
> P.KS.Norm.out.of.sample=
+ ks.test(X[(n/2+1):n],pn)$p.value

As a benchmark, we can use Lilliefors test, where the distribution of the Kolmogorov-Smirnov statistic is corrected to take into account the fact that we use estimators of the parameters:

> P.Lilliefors.Norm=
+ lillie.test(X)$p.value

Here, let us consider i.i.d. samples of size 200 (100,000 samples were generated). The distribution of the $p$-value of each test is shown below. In red, the Lilliefors test, where we see that the correction works well: the $p$-value is uniformly distributed on the unit interval, so there is a 95% chance of accepting the normality assumption if we accept whenever the $p$-value exceeds 5%. On the other hand,

• with the Kolmogorov-Smirnov test on the overall sample, we (almost) always accept the normality assumption, with a lot of extremely large $p$-values;
• with the Kolmogorov-Smirnov test with out-of-sample estimation, we observe the opposite: in many simulations the $p$-value is lower than 5% (even though the sample was drawn from a $\mathcal N(0,1)$ distribution).

Looking at the cumulative distribution function of the $p$-value: the proportion of samples with $p$-value exceeding 5% is 95% for the Lilliefors test (as expected), 85% for the out-of-sample estimator, and 99.99% for Kolmogorov-Smirnov with estimated parameters,

> mean(P.KS.Norm.out.of.sample>.05)
[1] 0.85563
> mean(P.KS.Norm.estimated.param>.05)
[1] 0.99984
> mean(P.Lilliefors.Norm>.05)
[1] 0.9489

So using Kolmogorov-Smirnov with estimated parameters is not good, since we might accept $H_0$ too often. On the other hand, if we use the technique with two samples (one to estimate the parameters, one to run the goodness-of-fit test), it looks much better, even if we reject $H_0$ too often: for one test the type I error rate is rather large, while for the other it is the type II error rate...
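For readers who prefer Python, the same experiment can be sketched with NumPy/SciPy (2,000 replications here instead of 100,000, so the proportions only approximate the ones above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 200, 2000
p_est, p_split = np.empty(reps), np.empty(reps)
for i in range(reps):
    x = rng.normal(size=n)
    # KS test with parameters estimated on the same sample
    p_est[i] = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue
    # out of sample: estimate on the first half, test on the second half
    mu, sd = x[:n // 2].mean(), x[:n // 2].std(ddof=1)
    p_split[i] = stats.kstest(x[n // 2:], "norm", args=(mu, sd)).pvalue

print("P(p > .05), estimated parameters:", (p_est > .05).mean())   # close to 1
print("P(p > .05), out-of-sample       :", (p_split > .05).mean())  # around .85
```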
Published at DZone with permission of Arthur Charpentier, author and DZone MVB. (Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
# Prove that $\pi>3$ using geometry I was asked this question today in an interview. Question: Prove that $\pi>3$ using geometry. They gave me hints about drawing a unit circle and then inscribing an equilateral triangle and then proceeding. But I could not follow. Can anyone help? • How is pi defined? – miracle173 Nov 22 '15 at 12:09 • @miracle173 How does that help? – SchrodingersCat Nov 22 '15 at 12:13 • How do you want to show something about pi if you have not definition of it? – miracle173 Nov 22 '15 at 12:15 • Inscribe a regular hexagon! – Christian Blatter Nov 22 '15 at 12:16 • An equilateral triangle inscribed in a unit circle has perimeter $3\sqrt{3}$, which is not of itself obviously helpful. Offhand I'd guess the interviewer was trying to guide you toward the idea of inscribing a suitable polygon, in this case a regular hexagon. – Andrew D. Hwang Nov 22 '15 at 13:33 The inscribed hexagon in the unit circle has perimeter $6$. The perimeter of the circle is $2\pi$, hence $\pi > 3$. • Shouldn't it be $>$ and not $\ge$? – SchrodingersCat Nov 22 '15 at 12:30 • imgur.com/IbOElsj – Paulistic Nov 22 '15 at 12:48 • Arguably this involves six equilateral triangles – Henry Nov 22 '15 at 15:06 • How do you prove that the perimeter of the hexagon is smaller than the perimeter of the circle (really "prove", not say "it is clear that...")? – Sebastien Nov 23 '15 at 8:25 The inscribed $12$-gon in the unit circle has area $\frac{12}{2}\sin (2\pi/12)=3$. The area of the unit circle is $\pi$. Hence $\pi\ge 3$. • Thanks, But can you use the triangle and do so? – SchrodingersCat Nov 22 '15 at 12:20 • Shouldn't it be $>$ and not $\ge$? – SchrodingersCat Nov 22 '15 at 12:30 • I like this argument based on area than another one on this thread based on perimeter. 
– Kim Jong Un Nov 22 '15 at 12:38 I am not sure if this is redundant, but: If an equilateral triangle is inscribed in a unit circle, and if, on each side of the inscribed triangle, an isosceles triangle is further inscribed in the circle, then an equilateral hexagon with each side of length $=1$ results; but then $6 < 2\pi$ implies $3 < \pi$. So is this something you are after? • Not really.. I just want the proof using a triangle.. without constructing a hexagon. – SchrodingersCat Nov 22 '15 at 12:53 • @Aniket I would say the hint just provides a starting point; by inscribing suitable triangles twice we arrives at something useful, is not it? :) – Megadeth Nov 22 '15 at 12:54 • In that case.. I mean if you look at it in that way... It is fine. – SchrodingersCat Nov 22 '15 at 12:56 With a little bit of cheating, you don't need the whole hexagon... Let $O$ be the centre of the unit circle with equilateral $\triangle ABC$ inscribed in it. Extend $\vec {AO}$ to meet the circle at D. As $BC$ and $OD$ are perpendicular bisectors of each other, $\triangle OBD$ is isosceles, and hence $|BD|=1$. But this must be smaller than the minor arc subtended, which has length $\dfrac{\pi}3$. • Could the downvoter comment why? – Macavity Dec 29 '15 at 14:50
# An electric toy car with a mass of 4 kg is powered by a motor with a voltage of 7 V and a current supply of 1 A. How long will it take for the toy car to accelerate from rest to 3/2 m/s?

Jun 18, 2016

$t=\frac{9}{14}\ s$

#### Explanation:

Given
• $m \to$ mass of toy car $= 4\ kg$
• $V \to$ applied voltage $= 7\ V$
• $I \to$ current supply $= 1\ A$
• $v \to$ velocity gained by toy car $= \frac{3}{2}\ m{s}^{-1}$
• Let $t$ be the time taken to reach the velocity $\frac{3}{2}\ m{s}^{-1}$ from rest.

Assuming there is no loss of energy due to friction etc., we can apply conservation of energy to solve the problem.

Now, power $P = I \times V$, and

$\text{Electrical work done}=\text{Gain in KE}$
$\implies P \times t = \frac{1}{2} \times m \times {v}^{2}$
$\implies I \times V \times t = \frac{1}{2} \times m \times {v}^{2}$
$\implies 1 \times 7 \times t = \frac{1}{2} \times 4 \times {\left(\frac{3}{2}\right)}^{2}$
$\implies t = \frac{9}{14}\ s$
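The arithmetic in the energy balance above can be checked in a couple of lines:

```python
# Check t = (1/2) m v^2 / (I V) for m = 4 kg, V = 7 V, I = 1 A, v = 1.5 m/s
m, V, I, v = 4.0, 7.0, 1.0, 1.5
t = 0.5 * m * v**2 / (I * V)
print(t)  # 0.642857... seconds, i.e. 9/14 s
```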
# Questions about double and triple integrals 1. Aug 16, 2011 ### Maurice7510 Hey, I was just going through my vector calc textbook for this year and everything was going well until I reached double and triple integrals. My problem is the whole symmetry thing; when does (forgive me, I can't figure out the symbols) the integral from a to b become twice the integral from 0 to a versus becoming zero? Other than that, I think I was doing fine, but if you guys wouldn't mind posting some double and triple integral questions for me so I could get some practice that would be great. Thanks, Maurice 2. Aug 16, 2011 ### HallsofIvy Staff Emeritus Much the same thing as you saw with single integrals: if f(x) is an even function, then $\int_{-a}^a f(x)dx= 2\int_0^a f(x)dx$ and if f(x) is an odd function, then $\int_{-a}^a f(x)dx= 0$. More generally, if f(x,y,z) is exactly the same in two regions, then the integral over the two is just twice the integral over one: I + I = 2I. If f(x,y,z) is the same in two regions, except that one is the negative of the other, they cancel and the integral is 0: I - I = 0. 3. Aug 16, 2011 ### Maurice7510 That helps but there's a couple things: even fcn is where f(-x) = f(x) and odd fcn is where f(-x) = -f(x)? I just kind of forget, sorry. Also, what always bothered me about this is that, for example, if you have a fcn whose graph is symmetrical with one region negative but equal to the other, how does the area "cancel out"? 4. Aug 16, 2011 ### Maurice7510 I think another problem I'm having is interpreting the integrals. For example, there's a problem I'm looking at in the textbook, where it asks us to calculate the triple integral of y^2 over, and this is the hard part for me, the tetrahedron in the first octant bounded by the coordinate planes and the plane 2x+3y+z=6. I have no idea how to choose my limits of integration, i.e. integrate from what to what? If somebody can explain how to choose these, that would be greatly appreciated.
Thanks again, Maurice
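One way to see the limits for the tetrahedron problem in post #4: inside the region, z runs from 0 up to the plane, 6 − 2x − 3y; with z eliminated, y runs from 0 up to (6 − 2x)/3; and x runs from 0 to 3. A small Python sketch (not from the thread; a numeric midpoint-rule check of those limits):

```python
# Midpoint-rule estimate of the triple integral of y**2 over the tetrahedron
# in the first octant bounded by the coordinate planes and 2x + 3y + z = 6.
# Limits: 0 <= x <= 3, 0 <= y <= (6 - 2x)/3, 0 <= z <= 6 - 2x - 3y.
def f(x, y, z):
    return y * y

n = 80                 # subdivisions per axis
total = 0.0
hx = 3.0 / n
for i in range(n):
    x = (i + 0.5) * hx
    ymax = (6 - 2 * x) / 3
    hy = ymax / n
    for j in range(n):
        y = (j + 0.5) * hy
        zmax = 6 - 2 * x - 3 * y
        hz = zmax / n
        for k in range(n):
            z = (k + 0.5) * hz
            total += f(x, y, z) * hx * hy * hz

print(total)   # close to the exact value 12/5 = 2.4
```

Doing the iterated integral by hand with those limits gives exactly 12/5.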
# A sinusoidal current has a maximum value of 10 A. If the signal is half rectified, its rms value will be: ## Options: 1. 10 A 2. 7.07 A 3. 5 A 4. 14.14 A ### Correct Answer: Option 3 This question was previously asked in PGCIL Diploma Trainee EE Official Paper (Held on 17 Dec 2020) ## Solution: Half Wave Rectifier: • The basic circuit of a half-wave rectifier and its waveform with a resistive load is shown in Fig. • During the positive half-cycle of the input AC voltage, the diode is forward-biased (ON) and conducts. • While conducting, the diode acts as a short-circuit so that circuit current flows, and hence, a positive half-cycle of the input ac voltage is dropped across RL. • During the negative input half-cycle, the diode is reverse-biased (OFF) and so, does not conduct i.e. there is no current flow. Hence, there is no voltage drop across RL. Let Im be the peak value of the half-wave rectified current. ∴ Average Value, $$I_{av}=\frac{I_m}{\pi}$$ ∴ RMS Value, $$I_R=\frac{I_m}{2}$$ Conclusion: Given Im = 10 A ∴ RMS Value, $$I_R=\frac{I_m}{2}=\frac{10}{2}=5 \ A$$
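The Im/2 result can also be checked numerically; a short Python sketch (not part of the original solution) that averages i² over one full cycle, with the diode conducting only on the positive half:

```python
import math

Im = 10.0            # peak current, A
N = 100_000          # samples over one full period
total = 0.0
for k in range(N):
    theta = (k + 0.5) * (2 * math.pi / N)
    # half-wave rectified: current flows only during the positive half-cycle
    i = Im * math.sin(theta) if theta < math.pi else 0.0
    total += i * i

rms = math.sqrt(total / N)
print(rms)   # ≈ 5.0 = Im / 2
```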
Simple programs can be put in a single file, but when your program grows larger, it's impossible to keep it all in just one file. When you move parts of a program to a separate file, you create a header file. A header file looks like a normal C file, except it ends with .h instead of .c, and instead of the implementations of your functions and the other parts of a program, it holds the declarations. You already used header files when you first used the printf() function, or another I/O function, and you had to type: #include <stdio.h> to use it. #include is a preprocessor directive. The preprocessor goes and looks up the stdio.h file in the standard library, because you used brackets around it. To include your own header files, you'll use quotes, like this: #include "myfile.h" The above will look up myfile.h in the current folder. You can also use a folder structure for libraries: #include "myfolder/myfile.h" Let's make an example. This program calculates the years since a given year: #include <stdio.h> int calculateAge(int year) { const int CURRENT_YEAR = 2020; return CURRENT_YEAR - year; } int main(void) { printf("%d", calculateAge(1983)); } Suppose I want to move the calculateAge function to a separate file. 
I create a calculate_age.c file: int calculateAge(int year) { const int CURRENT_YEAR = 2020; return CURRENT_YEAR - year; } And a calculate_age.h file where I put the function prototype, which is the same as the function in the .c file, except without the body: int calculateAge(int year); Now in the main .c file we can remove the calculateAge() function definition, and we can import calculate_age.h, which will make the calculateAge() function available: #include <stdio.h> #include "calculate_age.h" int main(void) { printf("%d", calculateAge(1983)); } Don't forget that to compile a program composed of multiple files, you need to list them all in the command line, like this: gcc -o main main.c calculate_age.c And with more complex setups, a Makefile is necessary to tell the compiler how to compile the program.
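For this two-file example, a minimal Makefile might look like the following (a sketch, assuming gcc and the file names used above; the target names and flag choices are mine, not from the article):

```makefile
CC = gcc
CFLAGS = -Wall

# link the two object files into the final executable
main: main.o calculate_age.o
	$(CC) $(CFLAGS) -o main main.o calculate_age.o

# both objects depend on their .c file; main.o also needs the header
main.o: main.c calculate_age.h
calculate_age.o: calculate_age.c

%.o: %.c
	$(CC) $(CFLAGS) -c $<

clean:
	rm -f main *.o
```

Running `make` then rebuilds only the files whose sources changed.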
Magnetic interpolation inequalities in dimensions 2 and 3 Maria Esteban (Paris) Tuesday 19 May 2020 16:15 Zoom Mathematics Seminar In this talk I will present various results concerning interpolation inequalities, best constants and information about the extremal functions for Schrödinger magnetic operators in dimensions 2 and 3. The particular, and physically interesting, cases of constant and of Aharonov-Bohm magnetic fields will be discussed in detail. This is joint work with D. Bonheure, J. Dolbeault, A. Laptev and M. Loss.
## Finding missing citation entries in an org-file | categories: | tags: |

Today we consider how to find citations in a document that have no corresponding entries in a bibtex file. There are a couple of pieces to this which we work out in stages below. First, we specify the bibtex file using a bibliography link defined in jorg-bib.el. jorg-bib provides a function that gives us the relevant bibliography files found in this file.

(cite-find-bibliography)

bib1.bib bib2.bib

We can get a list of keys in these files

(let ((bibtex-files (cite-find-bibliography)))
  (bibtex-global-key-alist))

(adams-1993-orien-imagin . t) (aarik-1997-effec-tio2 . t) (aruga-1985-struc-iron . t)

Now, here are some citations that we want to include in this document. cite:aruga-1985-struc-iron,aarik-1997-effec-tio2 Here is a citation that is not in the bibtex file cite:kitchin-2016-nobel-lecture

To find out if any of these are missing, we need a list of the citation keys in this document. We first get all the content from the cite links. We parse the buffer, and for each cite link, we get the path of the link, which contains our keys.

(let ((parsetree (org-element-parse-buffer)))
  (org-element-map parsetree 'link
    (lambda (link)
      (let ((plist (nth 1 link)))
        (when (equal (plist-get plist ':type) "cite")
          (plist-get plist ':path))))))

aruga-1985-struc-iron,aarik-1997-effec-tio2 kitchin-2016-nobel-lecture

That is almost what we need, but we need to separate the keys that are joined by commas. That function already exists in jorg-bib as cite-split-keys. We need to make a slight variation to get a list of all the entries, since cite-split-keys returns a list of entries for each link. Here is one approach to that. 
(let ((parsetree (org-element-parse-buffer))
      (results '()))
  (org-element-map parsetree 'link
    (lambda (link)
      (let ((plist (nth 1 link)))
        (when (equal (plist-get plist ':type) "cite")
          (setq results (append results (cite-split-keys (plist-get plist ':path))))))))
  results)

aruga-1985-struc-iron aarik-1997-effec-tio2 kitchin-2016-nobel-lecture

Ok, now we just need to check each entry of that list against the list of entries in the bibtex files, and highlight any that are not good. We use an index function below to tell us if an element is in a list. This index function works for strings. We use the strange remove-if-not function, which requires something like triple negative logic to get the list of keys that are not in the bibtex files.

(require 'cl)

(defun index (substring list)
  "return the index of string in a list of strings"
  (let ((i 0) (found nil))
    (dolist (arg list i)
      (if (string-match substring arg)
          (progn (setq found t) (return i)))
      (setq i (+ i 1)))
    ;; return counter if found, otherwise return nil
    (if found i nil)))

;; generate the list of bibtex-keys and cited keys
(let* ((bibtex-files (cite-find-bibliography))
       (bibtex-keys (mapcar (lambda (x) (car x)) (bibtex-global-key-alist)))
       (parsetree (org-element-parse-buffer))
       (cited-keys))
  (org-element-map parsetree 'link
    (lambda (link)
      (let ((plist (nth 1 link)))
        (when (equal (plist-get plist ':type) "cite")
          (setq cited-keys (append cited-keys (cite-split-keys (plist-get plist ':path))))))))
  (princ (remove-if-not (lambda (arg) (not (index arg bibtex-keys))) cited-keys)))

(kitchin-2016-nobel-lecture)

The only improvement from here would be if this generated a temporary buffer with clickable links to find that bad entry! Let us take a different approach here, and print this to a temporary buffer of clickable links. 
(require 'cl)

(defun index (substring list)
  "return the index of string in a list of strings"
  (let ((i 0) (found nil))
    (dolist (arg list i)
      (if (string-match substring arg)
          (progn (setq found t) (return i)))
      (setq i (+ i 1)))
    ;; return counter if found, otherwise return nil
    (if found i nil)))

;; generate the list of bibtex-keys and cited keys
(let* ((bibtex-files (cite-find-bibliography))
       (bibtex-keys (mapcar (lambda (x) (car x)) (bibtex-global-key-alist)))
       (parsetree (org-element-parse-buffer)))
  (org-element-map parsetree 'link
    (lambda (link)
      (let ((plist (nth 1 link)))
        (when (equal (plist-get plist ':type) "cite")
          (dolist (key (cite-split-keys (plist-get plist ':path)))
            (when (not (index key bibtex-keys))
              (princ (format "%s [[elisp:(progn (find-file \"%s\")(goto-char %s))]]\n"
                             key (buffer-file-name) (plist-get plist ':begin))))))))))

kitchin-2016-nobel-lecture elisp:(progn (find-file "/home-research/jkitchin/Dropbox/blogofile-jkitchin.github.com/_blog/blog.org")(goto-char 1052))

That is likely to come in handy. I have put a variation of this code in jorg-bib, in the function called jorg-bib-find-bad-citations.

org-mode source Org-mode version = 8.2.6 | categories: | tags: |

I have been exploring ways to get more information out of links in org-mode. I have considered popups , and right-clicking . Here I show how to get a popup menu on a citation link. The idea is that clicking or opening the citation link should give you a menu. The menu should give you some context, e.g. whether the bibtex key even exists. If it does, you should be able to get a quick view of the citation in the minibuffer. You should be able to open the entry in the bibtex file from the menu. If you have a pdf of the reference, you should have an option to open it. You should be able to open the url associated with the entry from the menu too. Here is the function. We use https://github.com/auto-complete/popup-el , and some code from https://github.com/jkitchin/jmax/blob/master/jorg-bib.el . 
(org-add-link-type
 "cite"
 ;; this function is run when you click on the link
 (lambda (path)
   (let* (;; this is in jorg-bib.el
          (results (get-bibtex-key-and-file))
          (key (car results))
          (cb (current-buffer))
          (pdf-file (format (concat jorg-bib-pdf-directory "%s.pdf") key))
          (bibfile (cdr results))
          (selection
           (popup-menu*
            (list
             (popup-make-item
              (if (progn (let ((cb (current-buffer)) result)
                           (find-file bibfile)
                           (setq result (bibtex-search-entry key))
                           (switch-to-buffer cb)
                           result))
                  "Simple citation"
                "No key found")
              :value "cite")
             (popup-make-item
              (if (progn (let ((cb (current-buffer)) result)
                           (find-file bibfile)
                           (setq result (bibtex-search-entry key))
                           (switch-to-buffer cb)
                           result))
                  (format "Open %s in %s" key bibfile)
                "No key found")
              :value "bib")
             (popup-make-item
              ;; check if pdf exists. jorg-bib-pdf-directory is a user defined directory.
              ;; pdfs are stored by bibtex key in that directory
              (if (file-exists-p pdf-file)
                  (format "Open PDF for %s" key)
                "No pdf found")
              :value "pdf")
             (popup-make-item "Open URL" :value "web")
             (popup-make-item "Open Notes" :value "notes")))))
     (cond
      ;; goto entry in bibfile
      ((string= selection "bib")
       (find-file bibfile)
       (bibtex-search-entry key))
      ;; goto entry and try opening the url
      ((string= selection "web")
       (let ((cb (current-buffer)))
         (save-excursion
           (find-file bibfile)
           (bibtex-search-entry key)
           (bibtex-url))
         (switch-to-buffer cb)))
      ;; goto entry and open notes, create notes entry if there is none
      ((string= selection "notes")
       (find-file bibfile)
       (bibtex-search-entry key)
       (jorg-bib-open-bibtex-notes))
      ;; open the pdf file if it exists
      ((string= selection "pdf")
       (when (file-exists-p pdf-file)
         (org-open-file pdf-file)))
      ;; print citation to minibuffer
      ((string= selection "cite")
       (let ((cb (current-buffer)))
         (message "%s" (save-excursion
                         (find-file bibfile)
                         (bibtex-search-entry key)
                         (jorg-bib-citation)))
         (switch-to-buffer cb))))))
 ;; formatting
 (lambda (keyword desc format)
   (cond
    ((eq format 'html) (format "(<cite>%s</cite>)" keyword))
    ((eq format 'latex)
     (concat "\\cite{"
             (mapconcat (lambda (key) key) (cite-split-keys keyword) ",")
             "}")))))

cite:daza-2014-carbon-dioxid,mehta-2014-ident-poten,test,ahuja-2001-high-ruo2 Here you can see 
an example of a menu where I have the PDF: Here is an example menu of a key with no entry: And an entry with no PDF: Here is the simple citation: And a reference from the other bibliography: Not bad! I will probably replace the cite link in jorg-bib with something like this.

org-mode source Org-mode version = 8.2.6

## A better insert citation function for org-mode | categories: | tags: |

I have setup a reftex citation format that inserts a cite link using reftex like this.

(eval-after-load 'reftex-vars
  '(progn
     (add-to-list 'reftex-cite-format-builtin
                  '(org "Org-mode citation"
                        ((?\C-m . "cite:%l"))))))

I mostly like this, but it does not let me add citations to an existing citation; doing that leads to the insertion of an additional cite within the citation, which is an error. One way to make this simple is to add another cite format which simply returns the selected keys. You would use this with the cursor at the end of the link, and it will just append the results.

(add-to-list 'reftex-cite-format-builtin
             '(org "Org-mode citation"
                   ((?\C-m . "cite:%l")
                    (?a . ",%l"))))

That actually works nicely. I would like a better approach though, that involves fewer keystrokes. Ideally, a single function that does what I want, which is when on a link, append to it, and otherwise insert a new citation link. Today I will develop a function that fixes that problem.

(defun insert-cite-link ()
  (interactive)
  (let* ((object (org-element-context))
         (path (org-element-property :path object)))
    (if (and (equal (org-element-type object) 'link)
             (equal (org-element-property :type object) "cite"))
        (progn
          (insert (concat "," (mapconcat 'identity (reftex-citation t ?a) ","))))
      (insert (concat "cite:" (mapconcat 'identity (reftex-citation t) ","))))))

That function is it! Org-mode just got a lot better. That function only puts a cite link in, but since that is all I use 99.99+% of the time, it works fine for me! 
org-mode source Org-mode version = 8.2.6

## Putting link references to lines of code in a source block | categories: org-mode | tags: |

I keep forgetting about this interesting gem of a feature in org-mode code blocks. You can put references to specific lines of code outside the block! http://orgmode.org/manual/Literal-examples.html#Literal-examples The following code block has some references in it that we can refer to later:

#+BEGIN_SRC emacs-lisp -n -r
(save-excursion (ref:sc)
(goto-char (point-min))) (ref:jump)
#+END_SRC

1: (save-excursion 2: (goto-char (point-min)))

In line (sc) we remember the current position. (jump) jumps to point-min. To make this work with python we have to make a slight change to the reference format in the header.

#+BEGIN_SRC python -n -r -l "#(ref:%s)"
for i in range(5): #(ref:for)
    print i #(ref:body)
#+END_SRC

1: for i in range(5): 2: print i

0 1 2 3 4

In line (for) we initialize the loop, and in line (body) we run it.

org-mode source Org-mode version = 8.2.5h

## Literate programming in python with org-mode and noweb | categories: | tags: |

This post examines a different approach to literate programming with org-mode that uses noweb. I have adapted an example from http://home.fnal.gov/~neilsen/notebook/orgExamples/org-examples.html which has some pretty cool ideas in it. The gist of using noweb is that in your source blocks you have labels like <<imports>>, that refer to other named code blocks that get substituted in place of the label. In the example below, we put labels for a code block of imports, for a function definition, a class definition, and a main function. This code block will get tangled to main.py. 
The noweb expansion happens at export, so here is the literal code block:

#+BEGIN_SRC python :noweb yes :tangle main.py
<<imports>>

<<some-func>>

<<class-dfn>>

<<main-func>>

if __name__ == '__main__':
    status = main()
    sys.exit(status)
#+END_SRC

You may want to just check out the org-mode source link at the bottom of the post to see all the details.

import sys
import numpy as np
import matplotlib.pyplot as plt
from argparse import ArgumentParser

def utility_func(arg=None):
    return 'you called a utility function with this arg: {0}'.format(arg)

class HelloWorld(object):
    def __init__(self, who):
        self.who = who

    def __call__(self):
        return 'Hello {0}'.format(self.who)

    def test(self):
        return True

def main():
    parser = ArgumentParser(description="Say hi")
    parser.add_argument("--who",
                        type=str, default="world",
                        help="Who to say hello to")
    args = parser.parse_args()
    who = args.who
    greeter = HelloWorld(who)
    greeter()
    print 'test func = ', greeter.test()
    print utility_func()
    print utility_func(5)
    return 0

if __name__ == '__main__':
    status = main()
    sys.exit(status)

## 1 imports

Now, we define a block that gives us the imports. We do not have to use any tangle headers here because noweb will put it in where it belongs.

import sys
import numpy as np
import matplotlib.pyplot as plt
from argparse import ArgumentParser

## 2 utility function

Now we define a function we will want imported from the main file.

def utility_func(arg=None):
    return 'you called a utility function with this arg: {0}'.format(arg)

## 3 class definition

Finally, let us define a class. Note we use noweb here too, and we get the indentation correct!

class HelloWorld(object):
    def __init__(self, who):
        self.who = who

    def __call__(self):
        return 'Hello {0}'.format(self.who)

    def test(self):
        return True

### 3.1 some class function

Now, let us make the some-other-func. This block is not indented, but with the noweb syntax above, it seems to get correctly indented. Amazing. 
def test(self):
    return True

## 4 The main function

This is a typical function that could be used to make your module into a script, and is only run when the module is used as a script.

def main():
    parser = ArgumentParser(description="Say hi")
    parser.add_argument("--who",
                        type=str, default="world",
                        help="Who to say hello to")
    args = parser.parse_args()
    who = args.who
    greeter = HelloWorld(who)
    greeter()
    print 'test func = ', greeter.test()
    print utility_func()
    print utility_func(5)
    return 0

## 5 Tangle and run the code

This link will extract the code to main.py: elisp:org-babel-tangle

We can run the code like this (linux):

python main.py --w John 2>&1
true

test func = True
you called a utility function with this arg: None
you called a utility function with this arg: 5

or this (windows, which has no sh):

from main import *
main()

test func = True
you called a utility function with this arg: None
you called a utility function with this arg: 5

## 6 Summary thoughts

The use of noweb syntax is pretty cool. I have not done anything serious with it, but it looks like you could pretty easily create a sophisticated python module this way that is documented in org-mode.
# User:Soul windsurfer

This user has uploaded images to Wikimedia Commons.

Babel user information BG-2 This user is able to contribute at an intermediate level on bitmap graphics. Users by language
Babel user information SVG-2 This user is able to contribute with a good level on scalable vector graphics. Users by language
Babel user information pl This user is a native speaker of Polish. en-2 This user is able to contribute with an intermediate level of English. Users by language

Wikipedia - my page in wiki Mathematics Stack Exchange MathOverflow Wikibooks -pl - Adam majewski

The Photographer's Barnstar foto

This file was selected as the media of the day for 06 March 2012. It was captioned as follows: English: Quadratic Julia set with Internal level sets for c values along internal ray 0 of main cardioid of Mandelbrot set

Other languages English: Quadratic Julia set with Internal level sets for c values along internal ray 0 of main cardioid of Mandelbrot set Македонски: Квадратно Жулиино множество со множества на внатрешно ниво за вредностите на c, заедно со внатрешен зрак 0 на главната кардоида од Манделбротово множество. 中文(简体):朱利亚集合

## syntaxhighlight

{{Galeria |Nazwa = Trzy krzywe w różnych skalach |wielkość = 400 |pozycja = right 
|Plik:LinLinScale.svg|Skala liniowo-liniowa |Plik:LinLogScale.svg|Skala liniowo-logarytmiczna |Plik:LogLinScale.svg|Skala logarytmiczno-liniowa |Plik:LogLogScale.svg|Skala logarytmiczno-logarytmiczna }}

== c source code==
<syntaxhighlight lang="c">
</syntaxhighlight>

== bash source code==
<syntaxhighlight lang="bash">
</syntaxhighlight>

==make==
<syntaxhighlight lang=makefile>
all:
	chmod +x d.sh
	./d.sh
</syntaxhighlight>

To run the program, simply run make.

==text output==
<pre>
</pre>

==references==
<references/>

# function

Mathematical Function Plot
Description: Function displaying a cusp at (0,1)
Equation: ${\displaystyle y={\sqrt {|x|}}+1}$
Co-ordinate System: Cartesian
X Range: -4 .. 4
Y Range: 0 .. 3
Derivative: ${\displaystyle {\frac {dy}{dx}}={\frac {\operatorname {sgn}(x)}{2{\sqrt {|x|}}}}}$
Points of Interest in this Range:
Minima ${\displaystyle \left(0,1\right)\,}$
Cusps ${\displaystyle \left(0,1\right)\,}$
Derivatives at Cusp ${\displaystyle \lim _{x\to 0^{+}}f'(x)=+\infty }$, ${\displaystyle \lim _{x\to 0^{-}}f'(x)=-\infty }$

The program can calculate many of the objects found in Singularity theory:
• Algebraic curves defined by a single polynomial equation in two variables, e.g. a circle x^2 + y^2 - r^2;
• Algebraic surfaces defined by a single polynomial equation in three variables, e.g. a cone x^2 + y^2 - z^2;
• Parametrised curves defined by a 3D vector expression in a single variable, e.g. a helix [cos(pi t), sin(pi t), t];
• Parametrised surfaces defined by a 3D vector expression in two variables, e.g. a cross-cap [x, x y, y^2];
• Intersection of surfaces with sets defined by an equation. Can be used to calculate non-polynomial curves.
• Mapping from R^3 to R^3 defined by a 3D vector equation in three variables, e.g. a rotation [cos(pi th) x - sin(pi th) y, sin(pi th) x + cos(pi th) y, z];
• Intersections where the equation depends on the definition of another curve or surface, e.g. the profile of a surface N . 
[A,B,C]; N = diff(S,x) ^^ diff(S,y);
• Mappings where the equation depends on another surface. For example projection of a curve onto a surface.
• Intersections where the equations depend on a pair of curves. For example the pre-symmetry set of a curve.
• Mapping where the equation depends on a pair of curves. For example the Symmetry set.

# video

### formats

YT: • .MOV • .MPEG-1 ( also commons) • .MPEG-2 ( also commons) • .MPEG4 • .MP4 • .MPG • .AVI ( also commons) • .WMV • .MPEGPS • .FLV • 3GPP • WebM ( also commons) • DNxHR • ProRes • CineForm • HEVC (h265)

Commons: Codecs, Text

# Image guidelines

Files

## Color calibration

Two kinds of calibration:
• software (via the tool supplied by the panel manufacturer)
• hardware (using a measuring device, for example a monitor calibrator).

The delta-E result after calibration:
• should not exceed 1.5 (good)
• < 1.0 (excellent)

In practice it is better to rely on hardware calibration, which preserves all tones, shades of colour and smooth gradation.

Monitors:
• with factory colour calibration (the gamma curve is adjusted at the factory for each panel)
  • Eizo
  • BenQ
• an automatic calibration sensor, which eliminates the need for external devices
• resolution (4k)
• pixel density (high is 163 pixels per inch)
• flicker-free technology
• a darkroom mode, particularly useful in post-processing
• an animation mode that keeps the displayed shades consistent
• a CAD/CAM mode that brings out the finest details on the screen
• colour gamut: 100% of the sRGB gamut

Save the monitor settings as a profile, and ideally (if possible) several different profiles, each matched to a specific use.

Before reviewing the work of authors it is recommended that you calibrate your monitor. If you don't do this, keep in mind that you might not see details in very bright or very dark areas. 
Also, some monitors may be tinted too much towards a certain color, and not have "neutral" color. See the image below full screen on a completely black background. You should be able to see at least three of the four circles on this image. If you see four then your brightness setting is on the high side, if you see three, it is fine, but if fewer than three are discernible then it is set too low. Monitor gamma checkerboard On a gamma-adjusted display, the four circles in the color image blend into the background when seen from a few feet away. If they do not, you could adjust the gamma setting (found in the computer's settings, not on the display), until they do. This may be very difficult to attain, and a slight error is not detrimental. Uncorrected PC displays usually show the circles darker than the background. Note that on most consumer LCD displays (laptop or flat screen) viewing angle strongly affects these images - correct adjustment on one part of the screen might be incorrect on another part for a stationary head position. Click on the images for more technical information. If possible, calibration with a hardware monitor calibrator is recommended. It is also important to ensure that your browser is displaying images at the correct resolution. To verify this, open the full 4000x2000 version of the grid you see at the left. The grid is 8 squares wide by 4 squares high, with the squares measuring 500 pixels on each side. After zooming in to 100%, you should check if it looks reasonable given your monitor resolution. For example, if you have a 1920x1080 monitor, the horizontal width should be just short of covering 4 squares, which would be 2000 pixels. Due to scroll bars, menus, etc. the actual amount of space available will be slightly less than your screen resolution. 
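The checkerboard comparison above works because a fine black-and-white checker averages to 50% linear luminance, while the matching solid gray must be encoded through the display gamma. A small Python sketch (the 2.2 gamma is an assumption about a typical sRGB-like display, not a value from this page):

```python
# An 8-bit gray that should look as bright as a fine black/white checkerboard:
# the checker averages to 50% *linear* luminance, so the gray value must be
# raised through the inverse of the display gamma.
gamma = 2.2                  # assumed display gamma
linear_target = 0.5          # average luminance of the checkerboard
gray = round(255 * linear_target ** (1 / gamma))
print(gray)                  # 186
```

If the circles in the test image look darker than the background, the display is applying more gamma than the image was prepared for.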
Firefox is known for having issues on a Windows computer where the Display setting in Control Panel has been set to anything other than 100% scaling; it is common to have a default of 125% (Medium) for large monitors. This results in Firefox upsampling everything by 25%, causing images to appear less sharp than they really are! To fix this issue, you can either change the scaling factor in Display to 100% or go to about:config (in the URL bar) and change layout.css.devPixelsPerPx to 1.0.

# commons

{{ValiCat|+|type=stop}} [[Category:CommonsRoot]] Help:

## Category

Image • Static • non-photographic • computer graphic • art • AI • human

Criteria for fractal classifications • by method • by fractal type • by country ??? • by year ( ? creation ?) • by file type ( or file format) • static image • raster • vector • animation • video • by quality • by technical criteria • by features • Fractal images - media of the day

Help

# spike

It can be used for: • highlighting the boundary ( 1D) • specular reflection in the Phong reflection model ( 3D )

# slope

// "Psychofract" by Carlos Ureña - 2015
mat3 tran1, tran2 ; // transform matrix for each branch
const float pi = 3.1415926535 ;
const float rsy = 0.30 ; // length of each tree root trunk in NDC
// -----------------------------------------------------------------------------
mat3 RotateMat( float rads )
{
   float c = cos( rads ),
         s = sin( rads ) ;
   return mat3( c, s, 0.0, -s, c, 0.0, 0.0, 0.0, 1.0 );
}
// -----------------------------------------------------------------------------
mat3 TranslateMat( vec2 d )
{
   return mat3( 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, d.x, d.y, 1.0 );
}
// -----------------------------------------------------------------------------
mat3 ScaleMat( vec2 s )
{
   return mat3( s.x, 0.0, 0.0, 0.0, s.y, 0.0, 0.0, 0.0, 1.0 );
}
// -----------------------------------------------------------------------------
mat3 ChangeFrameToMat( vec2 org, float angg, float scale )
{
   float angr = (angg*pi)/180.0 ;
   return ScaleMat( vec2( 1.0/scale, 1.0/scale ) ) *
          RotateMat( -angr ) * 
          TranslateMat( -org ) ;
}
// -----------------------------------------------------------------------------
float RectangleDistSq( vec3 p )
{
   if ( 0.0 <= p.y && p.y <= rsy )
      return p.x * p.x;
   if ( p.y > rsy )
      return p.x*p.x + (p.y-rsy)*(p.y-rsy) ;
   return p.x*p.x + p.y*p.y ;
}
// -----------------------------------------------------------------------------
float BlendDistSq( float d1, float d2, float d3 )
{
   float dmin = min( d1, min(d2,d3)) ;
   return 0.5*dmin ;
}
// -----------------------------------------------------------------------------
vec4 ColorF( float distSq, float angDeg )
{
   float b = min(1.0, 0.1/(sqrt(distSq)+0.1)),
         v = 0.5*(1.0+cos( 200.0*angDeg/360.0 + b*15.0*pi ));
   return vec4( b*b*b,b*b,0.0,distSq) ; // returns squared distance in alpha component
}
// -----------------------------------------------------------------------------
float Trunk4DistSq( vec3 p )
{
   float d1 = RectangleDistSq( p );
   return d1 ;
}
// -----------------------------------------------------------------------------
float Trunk3DistSq( vec3 p )
{
   float d1 = RectangleDistSq( p ),
         d2 = Trunk4DistSq( tran1*p ),
         d3 = Trunk4DistSq( tran2*p );
   return BlendDistSq( d1, d2, d3 ) ;
}
// -----------------------------------------------------------------------------
float Trunk2DistSq( vec3 p )
{
   float d1 = RectangleDistSq( p ),
         d2 = Trunk3DistSq( tran1*p ),
         d3 = Trunk3DistSq( tran2*p );
   return BlendDistSq( d1, d2, d3 ) ;
}
// -----------------------------------------------------------------------------
float Trunk1DistSq( vec3 p )
{
   float d1 = RectangleDistSq( p ),
         d2 = Trunk2DistSq( tran1*p ),
         d3 = Trunk2DistSq( tran2*p );
   return BlendDistSq( d1, d2, d3 ) ;
}
// -----------------------------------------------------------------------------
float Trunk0DistSq( vec3 p )
{
   float d1 = RectangleDistSq( p ),
         d2 = Trunk1DistSq( tran1*p ),
         d3 = Trunk1DistSq( tran2*p );
   return BlendDistSq( d1, d2, d3 ) ;
}
// -----------------------------------------------------------------------------
// compute the 
color and distance to tree, for a point in NDC coords
vec4 ComputeColorNDC( vec3 p, float angDeg )
{
   vec2 org = vec2(0.5,0.5) ;
   vec4 col = vec4( 0.0, 0.0, 0.0, 1.0 );
   float dmin ;
   for( int i = 0 ; i < 4 ; i++ )
   {
      mat3 m = ChangeFrameToMat( org, angDeg + float(i)*90.0, 0.7 );
      vec3 p_transf = m*p ;
      float dminc = Trunk0DistSq( p_transf ) ;
      if ( i == 0 )
         dmin = dminc ;
      else if ( dminc < dmin )
         dmin = dminc ;
   }
   return ColorF( dmin, angDeg ); // returns squared dist in alpha component
}
// -----------------------------------------------------------------------------
vec3 ComputeNormal( vec3 p, float dd, float ang, vec4 c00 )
{
   vec4 //c00 = ComputeColorNDC( p, ang ) ,
        c10 = ComputeColorNDC( p + vec3(dd,0.0,0.0), ang ) ,
        c01 = ComputeColorNDC( p + vec3(0.0,dd,0.0) , ang ) ;
   float h00 = sqrt(c00.a),
         h10 = sqrt(c10.a),
         h01 = sqrt(c01.a);
   vec3 tanx = vec3( dd, 0.0, h10-h00 ),
        tany = vec3( 0.0, dd, h01-h00 );
   vec3 n = normalize( cross( tanx,tany ) );
   if ( n.z < 0.0 )
      n *= -1.0 ;
   return n ;
}
// -----------------------------------------------------------------------------
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
   const float width = 0.1 ;
   vec2 res = iResolution.xy ;
   float mind = min(res.x,res.y);
   vec2 pos = fragCoord.xy ;
   float x0 = (res.x - mind)/2.0 ,
         y0 = (res.y - mind)/2.0 ,
         px = pos.x - x0 ,
         py = pos.y - y0 ;
   // compute 'tran1' and 'tran2':
   vec2 org1 = vec2( 0.0, rsy ) ;
   float ang1_deg = +20.0 + 30.00*cos( 2.0*pi*iTime/4.05 ),
         scale1 = +0.85 + 0.40*cos( 2.0*pi*iTime/2.10 ) ;
   vec2 org2 = vec2( 0.0, rsy ) ;
   float ang2_deg = -30.0 + 40.00*sin( 2.0*pi*iTime/2.52 ),
         scale2 = +0.75 + 0.32*sin( 2.0*pi*iTime/4.10 ) ;
   tran1 = ChangeFrameToMat( org1, ang1_deg, scale1 ) ;
   tran2 = ChangeFrameToMat( org2, ang2_deg, scale2 ) ;
   // compute pixel color (pixCol)
   float mainAng = 360.0*iTime/15.0 , // main angle, proportional to time
         dd = 1.0/float(mind) ; // pixel width or height in ndc
   vec3 pixCen = vec3( px*dd, py*dd, 1.0 ) ; // pixel center
   vec4 pixCol = ComputeColorNDC( 
pixCen, mainAng ), resCol  ; // compute output color as a function 'use_normal' const bool use_gradient = true ; { vec3 nor = ComputeNormal( pixCen, dd, mainAng, pixCol ); vec4 gradCol = vec4( max(nor.x,0.0), max(nor.y,0.0), max(nor.z,0.0), 1.0 ) ; } else resCol = pixCol ; fragColor = vec4( resCol.rgb, 1.0 ) ; } # rays // https://geometricolor.wordpress.com/2013/02/28/fractal-surprise-from-complex-function-iteration-the-code/ //Fractal surprise from complex function iteration: The code Posted on February 28, 2013 by Peter Stampfli // here is the program for the last post: Fractal surprise from complex function iteration // simply use it in processing 1.5 (I don’t know if it works in the new version 2.) float range, c,step; int n, iter, count; float rsqmax; void setup() { size(600, 600); c=0.4; step=0.002; iter=40; rsqmax=100; range=1.2; count=0; } void draw() { int i, j, k; float d=2*range/width; float x, y, h; float phi, phiKor; // phiKor is the trivial phase change to subtract colorMode(HSB, 400, 100, 100); for (j=0;j<height;j++) { for (i=0;i y=-range+d*j; x=-range+d*i; phiKor=0; for (k=0;k<iter;k++) { if (x*x+y*y>rsqmax) { break; } phiKor+=atan2(y, x); h=x*x-y*y+c; y=2*x*y; x=h; } phi=atan2(y, x)-phiKor; // correction phi=0.5*phi/PI; phi=(phi-floor(phi))*2*PI; // calculate phase phi mod 2PI pixels[i+j*width]=color(phi/PI*200, 100, 100); } } updatePixels(); // saveFrame(“a-####.jpg”); println(count+” “+c); c-=step; if (c<0) { noLoop(); } count++; } ${\displaystyle 0\leq \operatorname {mod} \left(\ \operatorname {floor} \left({\frac {\operatorname {floor} \left(x\right)}{2^{\operatorname {floor} \left(y\right)}}}\right)\ ,\ 2\ \right)}$ pins=[] pins_num=150 setup=()=> { createCanvas(windowWidth, windowHeight) y=0 for (i=0; i<pins_num; i++ ) { pins[i]=createVector(random(width), random(height)) } } draw=()=> { if (y<height) { for (x=0; x<width; x++ ) { angle=0 pins.forEach(p=>angle+=atan2(p.y-y, p.x-x)) colour=map(sin(angle), -1, 1, 0, 256) stroke(colour) line(x, 
y, x, y+colour) } y++ } } mousePressed=()=>setup() ## The field lines In a Fatou domain (that is not neutral) there is a system of lines orthogonal to the system of equipotential lines, and a line of this system is called a field line. If we colour the Fatou domain according to the iteration number (and not the real iteration number), the bands of iteration show the course of the equipotential lines, and so also the course of the field lines. If the iteration is towards ∞, we can easily show the course of the field lines, namely by altering the colour according to whether the last point in the sequence is above or below the x-axis, but in this case (more precisely: when the Fatou domain is super-attracting) we cannot draw the field lines coherently (because we use the argument of the product of ${\displaystyle f'(z_{i})}$ for the points of the cycle). For an attracting cycle C, the field lines issue from the points of the cycle and from the (infinite number of) points that iterate into a point of the cycle. And the field lines end on the Julia set in points that are non-chaotic (that is, generating a finite cycle). Let • r be the order of the cycle C • z* be a point in C. • ${\displaystyle f(f(...f(z*)))=z*}$ (the r-fold composition) • the complex number ${\displaystyle \alpha }$ by ${\displaystyle \alpha =(d(f(f(...f(z))))/dz)_{z=z*}}$ If the points of C are ${\displaystyle z_{i}(i=1,2,...,r,z_{1}=z*)}$, ${\displaystyle \alpha }$ is the product of the r numbers ${\displaystyle f'(z_{i})}$. The real number 1/${\displaystyle |\alpha |}$ is the attraction of the cycle, and our assumption that the cycle is neither neutral nor super-attracting, means that 1 < 1/${\displaystyle |\alpha |}$ < ∞. 
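The multiplier α can be checked numerically for the quadratic family f(z) = z² + c: iterate the critical point 0 until it settles onto the cycle, then multiply the derivatives f'(zᵢ) = 2zᵢ around one period. A minimal sketch; the sample values of c and the iteration counts are illustrative assumptions, not taken from the text above:

```python
# Numerical check of the cycle multiplier alpha for f(z) = z^2 + c.
# alpha is the product of f'(z_i) = 2*z_i over the attracting cycle;
# the attraction is 1/|alpha|, and "attracting but not super-attracting"
# means 0 < |alpha| < 1, i.e. 1 < 1/|alpha| < infinity.

def cycle_multiplier(c, period, settle=1000):
    """Iterate the critical point 0 until it is close to the attracting
    cycle, then multiply the derivatives 2*z_k around one period."""
    z = 0j
    for _ in range(settle):      # settle onto the cycle
        z = z * z + c
    alpha = 1 + 0j
    for _ in range(period):      # one trip around the cycle
        alpha *= 2 * z
        z = z * z + c
    return alpha

# c = -0.9 lies inside the period-2 component (off-center), so the
# 2-cycle is attracting but not super-attracting:
alpha = cycle_multiplier(-0.9 + 0j, period=2)
assert 0 < abs(alpha) < 1        # hence 1 < 1/|alpha| < infinity

# c = -1 is the center of that component: the cycle {0, -1} contains the
# critical point, so the cycle is super-attracting and alpha = 0:
assert cycle_multiplier(-1 + 0j, period=2) == 0
```

For the period-2 cycle of z² + c the two cycle points are the roots of z² + z + (c + 1) = 0, so α = (2z₁)(2z₂) = 4(c + 1); at c = −0.9 this gives α = 0.4 and an attraction of 1/|α| = 2.5.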
The point z* is a fixed point for ${\displaystyle f(f(...f(z)))}$, and near this point the map ${\displaystyle f(f(...f(z)))}$ has (in connection with field lines) the character of a rotation with the argument ${\displaystyle \beta }$ of ${\displaystyle \alpha }$ (so that ${\displaystyle \alpha =|\alpha |e^{\beta i}}$).

### colour the Fatou domain

In order to colour the Fatou domain, we choose a small number ${\displaystyle \epsilon }$ and stop the sequences of iteration ${\displaystyle z_{k}(k=0,1,2,...,z_{0}=z)}$ when ${\displaystyle |z_{k}-z*|<\epsilon }$. We then colour the point z according to

• the number k, which gives the Level Set Method (LSM)
• the real iteration number, if we prefer a smooth colouring

### FL def

• the field lines are the lines orthogonal to the equipotential lines
• a field line is orthogonal to the equipotential surfaces, which are the loci of the points of constant iteration number

The colouring is determined by

• the distance to the centre line of the field line (density/across)
• the potential function (density/along)

This colouring can be mixed with the colouring of the background. As two colour scales are required, these must be imported. The field lines are determined by

• their number
• their (relative) thickness (≤ 1)
• a number "transition" determining the mixing of the colours of the field lines with the background, e.g.
  • 0.1 for a soft transition, and therefore indistinct and thinner-looking field lines
  • 4 for more well-defined field lines

If the field lines do not run precisely coherently, the bailout number must be diminished and the maximum iteration number increased. Here is the function 1 1 0 0 1, and the background is made of one colour by setting the density to 0:

Within a field line there are two local distances:

• the distance from the centre line
• the distance from the equipotential line

In addition there are two whole numbers:

• the number of the field line
• the iteration number
The two local distances establish local coordinate systems within the field lines, and this means that it is possible to colour on the basis of mathematical procedures or pictures that are input. And the two whole numbers mean that such procedures or pictures can be made to depend on the field line number and the iteration number.

When the Fatou domain is associated to a super-attracting cycle, the field lines cannot be drawn coherently. However, it is possible to draw a system of bands that follow their courses, but whose number increases for each increase in the iteration number. If the function is a polynomial (that is, if the denominator is a constant), the factor of increase is the degree of the polynomial. For a general rational function the factor is the difference between the degrees of the numerator and the denominator, but the constellation of the partitions is not regular, because field lines also originate from the points that are zeros of the denominator, and from the (infinitely many) points that are iterated into one of these zeros. For the function z⁴/(1 + z), the field lines are separated in three, but field lines also come from the point -1 and the points iterated into -1:

### colouring of the field lines

If we choose a direction from z* given by an angle ${\displaystyle \theta }$, the field line issuing from z* in this direction consists of the points z such that the argument ${\displaystyle \psi }$ of the number ${\displaystyle z_{k}-z*}$ satisfies the condition ${\displaystyle \psi -k\beta =\theta {\pmod {2\pi }}}$. For if we pass an iteration band in the direction of the field lines (and away from the cycle), the iteration number k is increased by 1 and the number ${\displaystyle \psi }$ is increased by ${\displaystyle \beta }$; therefore the number ${\displaystyle \psi -k\beta {\pmod {2\pi }}}$ is constant along the field line.
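This invariant can be checked numerically in the simplest setting: f(z) = z² + c with an attracting fixed point z* (a cycle of order r = 1, so α = f'(z*) = 2z*). The particular multiplier and start point below are illustrative assumptions:

```python
# Sketch of the field-line invariant psi - k*beta (mod 2pi) for
# f(z) = z^2 + c with an attracting fixed point z*.
# Near z*, z_{k+1} - z* ~ alpha*(z_k - z*), so arg(z_k - z*) grows by
# beta = arg(alpha) per iteration, and psi - k*beta settles to a constant
# along the orbit: it labels the field line the orbit travels along.
import cmath

# construct a c with a chosen multiplier alpha (|alpha| < 1, attracting)
alpha = 0.6 * cmath.exp(1j * cmath.pi / 3)
zstar = alpha / 2               # fixed point of z^2 + c has f'(z*) = 2 z*
c = zstar - zstar * zstar       # then z*^2 + c = z*
beta = cmath.phase(alpha)

z = zstar + 0.3                 # a start point in the basin of attraction
phases = []
for k in range(1, 46):
    z = z * z + c
    psi = cmath.phase(z - zstar)
    phases.append((psi - k * beta) % (2 * cmath.pi))

# the invariant settles: late values agree (compare on the unit circle
# to avoid wrap-around at 0 / 2pi)
late = [cmath.exp(1j * p) for p in phases[-8:]]
assert all(abs(w - late[0]) < 1e-3 for w in late)
```

Stopping too close to z* loses precision (the difference z − z* cancels catastrophically), which is why the sketch stops after 45 iterations rather than iterating to machine epsilon.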
A colouring of the field lines of the Fatou domain means that we colour the spaces between pairs of field lines: we choose a number of regularly situated directions issuing from z*, and around each of these we choose two bounding directions; the coloured "interval of field lines" between them is what we call a field line in the following. As it can happen that the two bounding field lines do not end in the same point of the Julia set, our coloured field lines can ramify (endlessly) on their way towards the Julia set. We can colour on the basis of the distance to the centre line of the field line, and we can mix this colouring with the usual colouring.

Let n be the number of field lines and let t be their relative thickness (a number in the interval [0, 1]). For the point z, we have calculated the number ${\displaystyle \psi -k\beta {\pmod {2\pi }}}$, and z belongs to a field line if the number ${\displaystyle v=(\psi -k\beta {\pmod {2\pi }})/(2\pi )}$ (in the interval [0, 1]) satisfies |v - i/n| < t/(2n) for one of the integers i = 0, 1, ..., n. We can use the number |v - i/n|/(t/(2n)) (in the interval [0, 1], the relative distance to the centre of the field line) for the colouring.

Where

• C is an attracting cycle
• r is the order of the cycle C
• the points of C are ${\displaystyle z_{i}(i=1,2,...,r,z_{1}=z*)}$
• z* is a point in C; it is a fixed point of ${\displaystyle f^{r}}$, i.e. ${\displaystyle f^{r}(z*)=z*}$
• the argument ${\displaystyle \psi }$ of the number ${\displaystyle z=z_{k}-z*}$ is ${\displaystyle \psi =\operatorname {atan2} \left(\operatorname {Im} (z),\operatorname {Re} (z)\right)}$

### Pictures

• In the first picture, the function is of the form ${\displaystyle z/2+1/(z-z^{3}/6)+c}$ and we have only coloured a single Fatou domain.
• The second picture shows that field lines can be made very decorative (the function is of the form ${\displaystyle z/(1+z^{3})+c}$).
• Third picture: a (coloured) field line is divided up by the iteration bands, and such a part can be put into a one-to-one correspondence with the unit square: the one coordinate is the relative distance to one of the bounding field lines, this number is (v - i/n)/(t/(2n)) + 1/2; the other is the relative distance to the inner iteration band, this number is the non-integral part of the real iteration number. Therefore we can put pictures into the field lines, as many as we desire, if we index them according to the iteration number and the number of the field line. However, it seems to be difficult to find fractal motives suitable for the placing of pictures - if the intention is a picture of some artistic value. But we can restrict the drawing to the field lines (and possibly introduce transparency in the inlaid pictures), and let the domain outside the field lines be another fractal motif.

# relief

Cartographic "Sketchy with lines" style: define the colors used for different terrain heights:

# Digital Image Processing

• data analysis
• image processing
• image enhancement = process an image so that the result is more suitable than the original image for a specific application
• Histogram equalization

# computer graphic

www.pling.org.uk/cs/cgv.html
www.tutorialspoint.com/computer_graphics/computer_graphics_curves.htm
github.com/jagregory/abrash-black-book
pages.mtu.edu/~shene/COURSES/cs3621/NOTES/model/b-rep.html
pages.mtu.edu/~shene/COURSES/cs3621/NOTES/notes.html

## libraries

### icc

convert image_rgb.tiff -profile "RGB.icc" -profile "CMYK.icc" image_cmyk.tiff

## algorithm

• Render your image using correct radiometric calculations. You trace individual wavelengths of light or buckets of wavelengths. Whatever. In the end, you have an image that has a representation of the spectrum received at every point.
• At each pixel, you take the spectrum you rendered, and convert it to the CIE XYZ color space.
This works out to be integrating the product of the spectrum with the standard observer functions (see the CIE XYZ definition).

• This produces three scalar values, which are the CIE XYZ colors.
• Use a matrix transform to convert this to linear RGB, and then from there use a linear/power transform to convert linear RGB to sRGB.
• Convert from floating point to uint8 and save, clamping values out of range (your monitor can't represent them).
• Send the uint8 pixels to the framebuffer.
• The display takes the sRGB colors, does the inverse transform to produce three primaries of particular intensities. Each scales the output of whatever picture element it is responsible for. The picture elements light up, producing a spectrum. This spectrum will be (hopefully) a metamer for the original spectrum you rendered.
• You perceive the spectrum as you would have perceived the rendered spectrum.

## hdr

• Set your ISO to 200, set your camera to Aperture Priority.
• Take three photos with exposure settings EV 0, EV-2, and EV+2. The more differently exposed photos you have, the better.
• Merge to HDR.
• Select 32-bit/channel and tick Remove Ghosts.
• Click Image > Mode > 16-bit/channel.
• Tone mapping: adjust the settings depending on how you want your HDR photo to look.

## image enhancement techniques

• gray-level transformation functions
  • linear (negative and identity transformations)
  • logarithmic (log and inverse-log transformations)
  • power-law (nth power and nth root transformations)

## bit depth and bitrate

• 8-bit = 2^8 = 256 values; 256 shades of gray make an 8-bit grayscale image
• 16-bit grayscale: this means that they capture over 65,000 shades of gray.
[1]

• 24-bit = with three colour channels, that's 256 (red) × 256 (green) × 256 (blue) for a total of 256^3 = 2^24 = 16 777 216 individual colours (RGB = true color)

## aliasing

• "A simple way to prevent aliasing of cosine functions (the color palette in this case) by removing frequencies as oscillations become smaller than a pixel. You can think of it as an LOD system. Move the mouse to compare naive versus band-limited cos(x)" Inigo Quilez

# color

"color operations should be done ... to either model human perception or the physical behavior of light" Björn Ottosson: How software gets color wrong

## Method for domain coloring

HL plot of z, as per the simple color function example described in the text (left), and the graph of the complex function z³ − 1 (right) using the same color function, showing the three zeros as well as the negative real numbers as pink rays starting at the zeros.

Representing a four-dimensional complex mapping with only two variables is undesirable, as methods like projections can result in a loss of information. However, it is possible to add variables that keep the four-dimensional process without requiring a visualization of four dimensions. In this case, the two added variables are visual inputs such as color and brightness, because they are naturally two variables easily processed and distinguished by the human eye. This assignment is called a "color function". There are many different color functions used. A common practice is to represent the complex argument (also known as "phase" or "angle") with a hue following the color wheel, and the magnitude by other means, such as brightness or saturation.
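That common practice can be sketched in a few lines. This is a generic illustration, not the exact scheme of any figure here; the magnitude is compressed with r/(1 + r) so that 0 maps to black and large |z| toward full brightness, and HSV stands in for HSL for simplicity:

```python
# Domain-coloring sketch: hue encodes arg(z), brightness encodes |z|.
import cmath
import colorsys

def domain_color(z):
    """Map a complex value to an (r, g, b) triple with components in [0, 1]."""
    hue = (cmath.phase(z) / (2 * cmath.pi)) % 1.0  # argument -> color wheel
    mag = abs(z)
    value = mag / (1.0 + mag)                      # 0 -> black, large |z| -> bright
    return colorsys.hsv_to_rgb(hue, 1.0, value)    # full saturation throughout

def f(z):
    return z ** 3 - 1

# colour a small grid of the complex plane under f
xs = [-1.5 + 0.25 * i for i in range(13)]
pixels = [[domain_color(f(complex(x, y))) for x in xs] for y in xs]

# the cube roots of unity are zeros of f, so with this scheme they come
# out black (rather than being marked by pink rays as in the figure)
assert domain_color(f(1 + 0j)) == (0.0, 0.0, 0.0)
```

In a real renderer the `pixels` grid would be written out through an image library; the point here is only the per-pixel color function.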
### Simple color function

The following example colors the origin in black, 1 in green, −1 in magenta, and the point at infinity in white:

${\displaystyle {\begin{cases}H&=\arg z+2\pi /3,\\S&=100\%,\\L&=\ell (|z|).\end{cases}}}$

There are a number of choices for the function ${\displaystyle \ell :[0,\infty )\to [0,1)}$. ${\displaystyle \ell }$ should be strictly monotonic and continuous. Another desirable property is ${\displaystyle \ell (1/r)=1-\ell (r)}$, such that the inverse of a function is exactly as light as the original function is dark (and the other way around). Possible choices include

• ${\displaystyle \ell _{1}(r)={\frac {2}{\pi }}\arctan(r)}$ and
• ${\displaystyle \ell _{2}(r)={\frac {r^{a}}{r^{a}+1}}}$ (with some parameter ${\displaystyle a>0}$). With ${\displaystyle a=2}$, this corresponds to the stereographic projection onto the Riemann sphere.

A widespread choice which does not have this property is the function ${\displaystyle \ell _{3}(r)=1-a^{|r|}}$ (with some parameter ${\displaystyle 0<a<1}$), which for ${\displaystyle a=1/2}$ and ${\displaystyle 0\leq r\leq 1}$ is very close to ${\displaystyle \ell _{1}}$.

This approach uses the HSL (hue, saturation, lightness) color model. Saturation is always set at the maximum of 100%. Vivid colors of the rainbow rotate in a continuous way on the complex unit circle, so the sixth roots of unity (starting with 1) are: green, cyan, blue, magenta, red, and yellow. Since the HSL color space is not perceptually uniform, one can see streaks of perceived brightness at yellow, cyan, and magenta (even though their absolute values are the same as red, green, and blue) and a halo around L = 1/2. More modern color spaces, e.g., the Lab color space or CIECAM02, correct this, making the images more accurate and less saturated.

### Discontinuous color changing

Many color graphs have discontinuities, where instead of evenly changing brightness and color, the color suddenly changes, even when the function itself is still smooth.
This is done for a variety of reasons, such as showing more detail or highlighting certain aspects of a function. It is also sometimes done unintentionally: for example, if determining the direction of a complex number first depends on calculating the angle, say in the range [-π, π), and then assigning a color as some function of this angle, then the transition across -π = π can be discontinuous. This kind of artifact rarely contributes to the usefulness of a graph.

#### Magnitude growth

A discontinuous color function. In the graph, each discontinuity occurs when ${\displaystyle |z|=2^{n}}$ for integers n.

Unlike the argument, which has finite range, the magnitude of a complex number can range from 0 to ∞. Therefore, in functions that have large ranges of magnitude, changes in magnitude can sometimes be hard to differentiate when a very large change is also pictured in the graph. This can be remedied with a discontinuous color function which shows a repeating brightness pattern for the magnitude based on a given equation. This allows smaller changes to be easily seen, as well as larger changes that "discontinuously jump" to a higher magnitude. In the graph on the right, these discontinuities occur in circles around the center, and show a dimming of the graph that can then start becoming brighter again. A similar color function has been used for the graph on top of the article. Equations that determine the discontinuities may be linear, such as for every integer magnitude; exponential, such as for every magnitude ${\displaystyle 2^{n}}$ where n is an integer; or any other equation.

#### Highlighting properties

Discontinuities may be placed where outputs have a certain property, to highlight which parts of the graph have that property. For instance, a graph may, instead of showing the color cyan, jump from green to blue.
This causes a discontinuity that is easy to spot, and can highlight lines such as where the argument is zero.[2] Discontinuities may also affect large portions of a graph, such as a graph where the color wheel divides the graph into quadrants. In this way, it is easy to show where each quadrant ends up in relation to the others.[3]

## Color depth

• 1-bit color = binary image
• 8-bit color = 256 colors, usually from a fully-programmable palette (VGA)
• 24-bit = True color
• 64-bit color = stores 16-bit R, 16-bit G, 16-bit B and 16-bit A. All integer, no floats

Number type

### store

Image formats to store RGB images with floating point pixels:[4]

• you cannot use PNG (png64 stores 16-bit R, 16-bit G, 16-bit B and 16-bit A; all integer, no floats), JPEG, TGA or GIF formats
• use either

### processing

For processing such images:

• you cannot use PIL/Pillow, because it doesn't support 32-bit float RGB files
• use :

## Dynamic range

Dynamic range types:

• image
• display
• print

HDR and:

## color variations

Variations created by color mixing[5]

• Shades: created by adding black to a base color, increasing its darkness. Shades appear more dramatic and richer.
• Tints: created by adding white to a base color, increasing its lightness. Tints are likely to look pastel and less intense.
• Tones: created by adding gray to a base color, muting its intensity. Tones look more sophisticated and complex than base colors.

Other:

• Hues: refers to the basic family of a color, from red to violet. Hues are variations of a base color on the color wheel.
• Temperatures: colors are often divided into cool and warm according to how we perceive them.
Greens and blues are cool, whilst reds and yellows are warm.

4 categories related to color brightness and saturation:[6]

• vivid (light) = high saturation and brightness
• vivid dark = high saturation, low brightness
• pastel = low saturation and high brightness
• pale (pastel) dark = low saturation and brightness

Shades of Color

Notice that since (0,0,0) is black and (1,1,1) is white, shades of any particular color are created by moving closer to black or to white. You can use the parametric equation for a linear relationship between two values to make shades of a color darker or lighter. A parametric equation to calculate a linear change between values A and B:

C = A + (B-A)*t; // where t varies between 0 and 1

To change a color (r,g,b) to make it lighter, move it closer to (1,1,1).

newR = r + (1-r)*t; // where t varies between 0 and 1
newG = g + (1-g)*t; // where t varies between 0 and 1
newB = b + (1-b)*t; // where t varies between 0 and 1

To change a color (r,g,b) to make it darker, move it closer to (0,0,0).

newR = r + (0-r)*t; // where t varies between 0 and 1
newG = g + (0-g)*t; // where t varies between 0 and 1
newB = b + (0-b)*t; // where t varies between 0 and 1
// or
newR = r*t; // where t varies between 1 and 0
newG = g*t; // where t varies between 1 and 0
newB = b*t; // where t varies between 1 and 0

### tints or pastel

Names:

• tints (a mixture of a base color with white)
• pale colors = increased lightness
• pastel colors
• soft or muted type of color
• light color

Effect:

• soothing to the eye
• looks less intense; a pastel, pale, faded look
• subtle, modern or sophisticated design

Algorithm:

• mix the base color with white = heavily tinted with white = desaturated with white
• first generate a random color, then saturate it a little and mix this color with white[7]
• in HSV: take a hue, desaturate the color a bit (= 80% saturation), use 100% for value
• in the HSV color space, pastel colors have high value and low saturation

```
# https://mdigi.tools/random-pastel-color/
random_color = { r: Random(0, 255), g: Random(0, 255), b: Random(0, 255) }
pastel_color = random_color.saturate( 10% ).mix( white )
```

```java
/*
 * https://sighack.com/post/procedural-color-algorithms-color-variations
 * Mix randomly-generated RGB colors with a specified
 * base color. The mixing is performed taking into
 * account a user-specified weight parameter 'w', which
 * specifies what percentage of the final color should
 * come from the base color.
 *
 * A value of 0 for the weight specifies that 100%
 * of the final RGB components should come from the
 * randomly generated color, while a value of 0.5
 * specifies an equal proportion from both the base color
 * and the randomly-generated one.
 */
color rgbMixRandom(color base, float w) {
  float r, g, b;

  /* Check bounds for weight parameter */
  w = w > 1 ? 1 : w;
  w = w < 0 ? 0 : w;

  /* Generate components for a random RGB color */
  r = random(256);
  g = random(256);
  b = random(256);

  /* Mix user-specified color using given weight */
  r = (1-w) * r + w * red(base);
  g = (1-w) * g + w * green(base);
  b = (1-w) * b + w * blue(base);

  return color(r, g, b);
}
```

## Intermediate Colour Values

Smooth colouring requires the ability to obtain a colour in between two colours picked from a discrete palette:

```
function: getIntermediateColourValue
parameters: c1(R,G,B), c2(R',G',B'), m: 0 <= m <= 1
return: [R + m * (R' - R), G + m * (G' - G), B + m * (B' - B)]
```

# Gimp

• flatpak run org.gimp.GIMP

# nested squares

```java
int VECT = 1;
float SCALE;

void setup() {
  size(640, 640);
  rectMode(CENTER);
  noStroke();
}

void draw() {
  background(0);
  translate(width/2, height/2);
  int i = frameCount*2 % width;
  if (i == 0) VECT = -VECT;
  SCALE = dist(i, 0, 0, width-i)/width;
  for (int j = 0; j < 16; j++) {
    scale(SCALE);
    fill(map(j, 0, 15, 64, 255));
    rect(0, 0, width, height);
  }
}
```

# log-polar mapping examples

## mapping

• Twisted Mandelbrot Set „Let p be the pixel location relative to the image's
center. Normally, c = c0 + p. Twisted, c = c0 + p^2.” And you change c0 along some circle to get the rotation?

Exponential map [8]

### notation

${\displaystyle f:A\to B}$ is meant to say that ${\displaystyle f}$ is a map whose domain is $A$ and whose codomain is $B$; $A$ and $B$ are both sets. E.g. ${\displaystyle {\text{sq}}:\mathbb {R} \to \mathbb {R} }$ means $\text{sq}$ is a real-valued function defined over the real numbers.

If you have $f:A\to B$, then we also have the notation $$f:a\mapsto b$$ where $a$ is an element of $A$ and $b$ is an element of $B$. This can be used to fix the notation for the evaluation of the map, e.g. $\text{sq}:x\mapsto \text{sq}(x)$. You can also prescribe the actual map in that moment, by defining what $\text{sq}(x)$ is, e.g. $\text{sq}:x\mapsto \text{sq}(x) := x^{2}$.

Another use for this notation is to simply say to what element of $B$ a particular $a\in A$ is mapped. E.g. $\text{sq}:8\mapsto 64$.

# Mandelbrot

## dense

• Example: c0 = -0.39055 + 0.58680 i @ 1e5 magnification, 64k iterations
• the set of all Misiurewicz points is dense on the boundary of the Mandelbrot set (Local connectivity of the Mandelbrot set at certain infinitely renormalizable points by Yunping Jiang) https://arxiv.org/abs/math/9508212

## period

• 0. Assume a well-behaved formula like the quadratic Mandelbrot set.
• 1. A hyperbolic component of period P is surrounded by an atom domain of period P.
• 2. The size of the atom domain is around 4x larger than a disk-like component, often larger for cardioid-like components.
• 3. Conjecture: Newton's method in 1 complex variable (f_c^P(z)-z=0) can find the limit cycle Z_1 .. Z_P when starting from the Pth iterate of 0, when c is sufficiently within the atom domain.
• 4. The limit cycle has multiplier (the product of the derivatives 2 Z_k) equal to 0 at the center and 1 in magnitude at the boundary of the component.
• 5. The atom domain coordinate is 0 at the center and 1 in magnitude at the boundary of the atom domain.
• 6.
Perturbed Newton's method can be used for deep zooms.
• 7. The limit cycle can also be used for interior distance estimation.

Start from period 1 increasing: you get a sequence of atom domains. At each, if the atom domain coordinate is small, use Newton's method to find the limit cycle. If the limit cycle has a small multiplier, stop: the period is detected. If the iterate escapes, stop: the pixel is exterior.

## 22-legged ant in the m-set

```
$ m-describe double 1000 1000 -0.72398340 0.28671980 1
the input point was -0.723983400000000055 + +0.286719800000000025 i
the point didn't escape after 1000 iterations
nearby hyperbolic components to the input point:
- a period 1 cardioid with nucleus at +0.000000000000000000 + +0.000000000000000000 i
the component has size 1 and is pointing west
the atom domain has size 0
the nucleus domain coordinate is 0 at turn 0.000000000000000000
the atom domain coordinate is -nan at turn -nan
the nucleus is 0.77869 to the east-south-east of the input point
the input point is exterior to this component at radius 1.0354 and angle 0.45522 (in turns)
a point in the attractor is -0.497319450905372551 + +0.143745216108897733 i
external angles of this component are:
.(0)
.(1)
- a period 2 circle with nucleus at -1.000000000000000000 + +0.000000000000000000 i
the component has size 0.5 and is pointing west
the atom domain has size 1
the nucleus domain coordinate is 1.2361 at turn 0.500000000000000000
the atom domain coordinate is 1 at turn 0.000000000000000000
the nucleus is 0.39799 to the south-west of the input point
the input point is exterior to this component at radius 1.5919 and angle 0.12803 (in turns)
a point in the attractor is -0.861857110872783827 + +0.396178203198002954 i
external angles of this component are:
.(01)
.(10)
- a period 105 cardioid with nucleus at -0.723948838355664148 + +0.286852337885451558 i
the component has size 3.7103e-07 and is pointing east-south-east
the atom domain has size 0.00010235
the nucleus domain coordinate is 1.5924 at turn 0.128053542315208269
the atom domain coordinate is 0.39811 at turn 0.128053542315208269
the nucleus is 0.00013697 to the north-north-east of the input point
external angles of this component are:
.(010101010011010010101010100101010101001010101010010101010100101010101001010101010010101010100101010101001)
.(010101010011010100101010101001010101010010101010100101010101001010101010010101010100101010101001010101010)
- a period 116 cardioid with nucleus at -0.723983481008419805 + +0.286719746468555081 i
the component has size 1.7465e-07 and is pointing east-south-east
the atom domain has size 7.0286e-05
the nucleus domain coordinate is 36.617 at turn 0.370795651751681277
the atom domain coordinate is 0.73603 at turn 0.878154585895997375
the nucleus is 9.7098e-08 to the west-south-west of the input point
the input point is interior to this component at radius 0.83522 and angle 0.62253 (in turns)
a point in the attractor is +0.000065706660165846 + -0.000227325037589120 i
the interior distance estimate is 4.491e-08
external angles of this component are:
.(01010101001101001010101010010101010100101010101001010101010010101010100101010101001010101010010101010100101010101001)
.(01010101001101010010101010100101010101001010101010010101010100101010101001010101010010101010100101010101001010101010)
nearby Misiurewicz points to the input point:
- 2p1
the strength is 0.38133
the center is at -2.000000000000000000 + 0.000000000000000000 i
the center is 1.3078 to the west-south-west of the input point
the multiplier has radius 4 and angle 0.00000 (in turns)
- 3p1
the strength is 0.35938
the center is at -1.543689012692076368 + -0.000000000000000000 i
the center is 0.8684 to the west-south-west of the input point
the multiplier has radius 1.6786 and angle -0.50000 (in turns)
- 12p1
the strength is 0.00037767
the center is at -0.724112682973573563 + 0.286456567676711404 i
the center is 0.00029327 to the south-south-west of the input point
the multiplier has radius 1.0354 and angle 0.45526 (in turns)
- 12p11
the strength is 2.0286e-05
the center is at -0.724112682973573563 + 0.286456567676711460 i
the center is 0.00029327 to the south-south-west of the input point
the multiplier has radius 1.4656 and angle 0.00789 (in turns)
- 2p116
the strength is 2.3938e-14
the center is at -0.723874171053648929 + 0.286847598521453362 i
the center is 0.00016812 to the north-east of the input point
the multiplier has radius 41405 and angle -0.45166 (in turns)
- 3p116
the strength is 9.1963e-15
the center is at -0.723892769678345482 + 0.286784362650608693 i
the center is 0.00011128 to the north-east of the input point
the multiplier has radius 8827.5 and angle 0.13635 (in turns)

$ cabal repl  # while in folder with mandelbrot prelude project
...
ghci> fmap Txt.plain . Sym.angledAddress . Sym.rational =<< Txt.parse ".(01010101001101001010101010010101010100101010101001010101010010101010100101010101001010101010010101010100101010101001)"
Just "1_5/11->11_1/2->22_1/2->33_1/2->44_1/2->55_1/2->66_1/2->77_1/2->88_1/2->99_1/2->110_1/2->116"
```

${\displaystyle 1_{\frac {5}{11}}\to 11_{\frac {1}{2}}\to 22_{\frac {1}{2}}\to 33_{\frac {1}{2}}\to 44_{\frac {1}{2}}\to 55_{\frac {1}{2}}\to 66_{\frac {1}{2}}\to 77_{\frac {1}{2}}\to 88_{\frac {1}{2}}\to 99_{\frac {1}{2}}\to 110_{\frac {1}{2}}\to 116}$

${\displaystyle 1\xrightarrow {5/11} 11\xrightarrow {1/2} 22\xrightarrow {1/2} 33\xrightarrow {1/2} 44\xrightarrow {1/2} 55\xrightarrow {1/2} 66\xrightarrow {1/2} 77\xrightarrow {1/2} 88\xrightarrow {1/2} 99\xrightarrow {1/2} 110\xrightarrow {1/2} 116}$

## alg by Claude

(Coloured with atom domain quadrant, exterior binary decomposition, and interior and exterior distance estimates.) Double precision, no perturbation. The atom domain coordinate is the smallest z divided by the previous smallest z.
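That bookkeeping can be sketched in a few lines of Python (an illustrative translation of the idea, not the actual implementation): iterate z → z² + c from 0, track the running minimum of |z|², and each time a new minimum appears, record the iteration number as a period candidate together with the squared atom domain coordinate (new minimum divided by the previous one). The sample values of c and the thresholds are illustrative assumptions:

```python
# Sketch of atom-domain period detection for z -> z^2 + c:
# each time |z| reaches a new minimum, the iteration number is a period
# candidate and |z|^2 / previous_min^2 is the squared atom domain
# coordinate; a small value suggests c lies well inside an atom domain.
def atom_domain_candidates(c, max_iter=1000, escape_radius=2.0):
    """Return (period candidate, squared atom domain coordinate) pairs."""
    z = 0j
    mz2 = float("inf")   # smallest |z|^2 seen so far
    out = []
    for n in range(1, max_iter + 1):
        z = z * z + c
        z2 = abs(z) ** 2
        if z2 > escape_radius ** 2:
            break        # escaped: exterior, stop collecting candidates
        if z2 < mz2:
            # first minimum compares against infinity, giving coordinate 0.0
            out.append((n, z2 / mz2))
            mz2 = z2
    return out

# c in the period-2 disk: 2 shows up as a candidate with a small coordinate
cands = dict(atom_domain_candidates(-1.1 + 0.1j))
assert 2 in cands and cands[2] <= 0.25
```

As the orbit converges to the attracting cycle, further multiples of the period keep producing slightly smaller minima, so several candidates accumulate; this mirrors the `partials` list in the C code below, where each candidate with a small enough coordinate is checked for an actual limit cycle.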
Check for interior only if the atom domain coordinate is small (I think the smallest atom domains are for circle-like components, and are about 4x the size of the component; cardioid-like components tend to have much larger atom domains). Atom domain quadrant colouring is based on the quadrant of the atom domain coordinate, something like floor(4 * arg(a)/2pi).

How many iterations do you perform for distance estimation? For exterior distance estimation, you need a large escape radius, e.g. 100100. For interior distance estimation, you need the period, then a number of Newton steps (maybe 10 or so should usually be enough) to find the limit cycle. The iteration count limit is arbitrary; with a finite limit some pixels will always be classified as "unknown". It looks like your code is finding interior unexpectedly (points that should be exterior are falsely determined to be interior), but without seeing the source it's hard to tell.

<syntaxhighlight lang="c">
extern bool m_d_compute_step(m_d_compute *px, int steps) {
  if (! px) {
    return false;
  }
  if (px->tag != m_unknown) {
    return true;
  }
  double er2 = px->er2;
  double _Complex c = px->c;
  double _Complex z = px->z;
  double _Complex dc = px->dc;
  double _Complex zp = px->zp;
  double _Complex zq = px->zq;
  double mz2 = px->mz2;
  double mzq2 = px->mzq2;
  int p = px->p;
  int q = px->q;
  for (int i = 1; i <= steps; ++i) {
    dc = 2 * z * dc + 1;
    z = z * z + c;
    double z2 = cabs2(z);
    if (z2 < mzq2 && px->filter && px->filter->accept && px->filter->accept(px->filter, px->n + i))
    {
      mzq2 = z2;
      q = px->n + i;
      zq = z;
    }
    if (z2 < mz2) {
      double atom_domain_radius_squared = z2 / mz2;
      mz2 = z2;
      p = px->n + i;
      zp = z;
      if (atom_domain_radius_squared <= 0.25) {
        if (px->bias == m_interior) {
          double _Complex dz = 0;
          double de = -1;
          if (m_d_interior_de(&de, &dz, z, c, p, 64)) {
            px->tag = m_interior;
            px->p = p;
            px->z = z;
            px->dz = dz;
            px->zp = zp;
            px->de = de;
            return true;
          }
        } else {
          if (px->partials && px->np < px->npartials) {
            px->partials[px->np].z = z;
            px->partials[px->np].p = p;
            px->np = px->np + 1;
          }
        }
      }
    }
    if (! (z2 < er2)) {
      px->tag = m_exterior;
      px->n = px->n + i;
      px->p = p;
      px->q = q;
      px->z = z;
      px->zp = zp;
      px->zq = zq;
      px->dc = dc;
      px->de = 2 * cabs(z) * log(cabs(z)) / cabs(dc);
      return true;
    }
  }
  if (px->bias != m_interior && px->partials) {
    for (int i = 0; i < px->np; ++i) {
      z = px->partials[i].z;
      zp = z;
      int p = px->partials[i].p;
      double _Complex dz = 0;
      double de = -1;
      if (m_d_interior_de(&de, &dz, z, c, p, 64)) {
        px->tag = m_interior;
        px->p = p;
        px->z = z;
        px->dz = dz;
        px->zp = zp;
        px->de = de;
        return true;
      }
    }
  }
  px->tag = m_unknown;
  px->n = px->n + steps;
  px->p = p;
  px->q = q;
  px->mz2 = mz2;
  px->mzq2 = mzq2;
  px->z = z;
  px->dc = dc;
  px->zp = zp;
  px->zq = zq;
  return false;
}
</syntaxhighlight>

mandelbrot-numerics / c / lib / m_d_interior_de.c

<syntaxhighlight lang="c">
// mandelbrot-numerics -- numerical algorithms related to the Mandelbrot set
// Copyright (C) 2015-2018 Claude Heiland-Allen

#include <mandelbrot-numerics.h>
#include "m_d_util.h"

extern bool m_d_interior_de(double *de_out, double _Complex *dz_out, double _Complex z, double _Complex c, int p, int steps) {
  double _Complex z00 = 0;
  if (m_failed != m_d_attractor(&z00, z, c, p, steps)) {
    double _Complex z0 = z00;
    double _Complex dz0 = 1;
    for (int j = 0; j < p; ++j) {
      dz0 = 2 * z0 * dz0;
      z0 = z0 * z0 + c;
    }
    if (cabs2(dz0) <= 1) {
      double _Complex z1 = z00;
      double _Complex dz1 = 1;
      double _Complex dzdz1 = 0;
      double _Complex dc1 = 0;
      double _Complex dcdz1 = 0;
      for (int j = 0; j < p; ++j) {
        dcdz1 = 2 * (z1 * dcdz1 + dz1 * dc1);
        dc1 = 2 * z1 * dc1 + 1;
        dzdz1 = 2 * (dz1 * dz1 + z1 * dzdz1);
        dz1 = 2 * z1 * dz1;
        z1 = z1 * z1 + c;
      }
      *de_out = (1 - cabs2(dz1)) / cabs(dcdz1 + dzdz1 * dc1 / (1 - dz1));
      *dz_out = dz1;
      return true;
    }
  }
  return false;
}
</syntaxhighlight>

## inversion

• z^2 + c to z^2 + 1/c

First, there's the transformation from the cardioid of the body of the set to a circle. This is done in c-space (a+ib) as follows:

 rho = sqrt(a*a+b*b) - 1/4
 phi = arctan(b/a)
 a_new = rho*(2*cos(phi) - cos(2*phi))/3
 b_new = rho*(2*sin(phi) - sin(2*phi))/3

Then, there are 4 different ways to get from c to 1/c, grouped into 3 families:

• addition: c -> c - c*t + t/c with t = 0..1
• multiplication: c -> c/(t*(c*c-1) + 1) with t = 0..1
• exponentiation: c -> c^t with t = 1..-1

The first 2 are pretty elementary to work out. The 3rd makes use of the fact that c = a+ib = r*exp(i*phi), where r = sqrt(a*a+b*b) and phi = arctan(b/a); then c^t = r^t*exp(t*i*phi) = r^t*[cos(t*phi) + i*sin(t*phi)]. The top left is method 1, top right method 2, and the bottom 2 are variants of method 3. Hope this helps somewhat.

z_(n+1) = (z_n)^2 + (tan(t) + i*c)/(i + c*tan(t)) with t from 0.2*pi to 0.35*pi

Another way to invert the Mandelbrot set | Closer look at t from 0.2*pi to 0.35*pi by Fraktoler

# Julia

Computing the Julia set of f(z) = z^2 + i, where J(f) = K(f). [9]

• approximating the filled Julia set from above: the first 15 preimages of a large disk D = B(0, R) ⊃ K(f)
• approximating the Julia set from below: ${\displaystyle \bigcup _{0\leq k\leq 12}f^{-k}(\beta )}$ where β is a repelling fixed point in J(f) = IIM (inverse iteration method)
• a good-quality picture of J(f).

# other Julia

### 610

https://fractalforums.org/fractal-mathematics-and-new-theories/28/julia-and-parameter-space-images-of-polynomials/2786/msg16210#msg16210 marcm200

A very basic cubic Julia set p(z)=z^3+(0.099609375-0.794921875i), but with very interesting entry points into the attracting periodic cycle of length 610.
The image shows the Julia set (yellow), its attracting cycle (cyan), and some white pixels which have not (yet) entered the cycle at the current maxit of 15000 (there were many more at maxit 10000, so they will enter the cycle as expected). I was interested to see at what point(s) the attracted numbers "enter" the cycle. Entering was defined as: if a complex number is bounded, I checked whether its last iterate is in the attracting cycle. If so, I went backwards in the orbit until I encountered the first non-cycle point (epsilon of 10^-7). Its image was set to be the entry point. The result was quite unambiguous. The 4k image had ~222,000 interior points, of which the top 3 entry points were:

 0.09960906311, -0.794921357  => used 220,130 times
 0.09976627035, -0.7951977228 => 251
 0.09942639427, -0.7946757595 => 208

So there is a preferred point to enter the cycle - it almost looks like the not-often-used entry points might be numerical errors and everything enters the cycle at the same number (or the cycle itself is a numerical error - darn, I hope my question is still valid). Would this distribution still be accurate in the limit, when one could actually test all interior points of the set and not just a finite number of rational coordinate complex numbers? Since a point never actually goes exactly into the cycle unless it is a preimage or an image of a cycle point, but will come arbitrarily close to (some?) periodic points: does this imply that periodic points have an event horizon - once in, never out again? Or can an orbit point be close to a periodic point, jump out into the vicinity of another and come closer there - and so on, so actually never getting stuck near one specific periodic point?

## z^d + c

Internal angle = 0 (one main component)

• d = 2 c = 1/4 = 0.25
• d = 3 c = 0.384900179459751 +0.000000000000000 i period = 10000
• d = 4 c = -0.236235196855289 +0.409171363489396 i period = 10000 ???
it should be c = 0.472464424146544;

• d = 5 c = 0.534992243981138 +0.000000000000000 i period = 10000
• d = 6 c = -0.471135846013573 +0.342300228596646 i period = 10000 ??

it should be c = 0.582559084495983 +0.000000000000000 i period = 0

Internal angle 1/3 from main component

• d = 2 c = -0.125000000000000 +0.649519052838329 i period = 10000 ( Douady rabbit )
• d = 3 c = 0.481125224324688 +0.500000000000000 i period = 10000
• d = 4 c = -0.619317130969330 +0.370556691297005 i period = 10000
• d = 5 c = 0.694975311172961 +0.267496121990569 i period = 10000
• d = 6 c = -0.719547645525888 +0.256065008698348 i period = 10000
• d = 7 c = 0.758540222557608 +0.180894796695791 i period = 10000
• d = 8 c = -0.768629397583800 +0.197192545338461 i period = 10000

## cubic

• A 4k cubic Julia by Chris Thomasson. Here is the formula: z = pow(z, 3) - (pow(-z, 2.00001) - 1.0008875); link
• cubic
• "Let c = (.387848...) + i(.6853...). The left picture shows the filled Julia set Kc of the cubic map z^3 + c, covered by level 0 of the puzzle. The center of symmetry is at 0, the point where the rays converge is α and the other fixed points are marked by dotted arrows. In this example the rotation number around α is ρ_α = 2/5 and the ray angles are 5/121 ↦ 15/121 ↦ 45/121 ↦ 14/121 ↦ 42/121 ↦ 5/121. The right picture illustrates level 1 of the puzzle for the same map.
"A New Partition Identity Coming from Complex Dynamics"

• Owen Maresh: (-0.4999999999999998 + 0.8660254037844387*I) - (0.2926009682749477 + 0.252068970984772*I)*z - (0.4916379276414715 + 0.2509264824918978*I)*z^2 + (0.2511839558093919 - 1.0778044459985288*I)*z^3
• f(z) = z^5 + (0.8+0.8i)z^4 + z, which has the following fixed points:
• p1 = 0 with multiplier |λ1| = |f′(0)| = 1 (parabolic),
• p2 = −0.8−0.8i with multiplier |λ2| = |f′(−0.8−0.8i)| ≈ |−13.7| > 1 (repelling),
• p3 = ∞ with multiplier |λ3| = |lim_{z→∞} f(z)|^(−1) = 0 (super-attracting)
• f(z) = (−0.090706+0.27145i) + (2.41154−0.133695i)z^2 − z^3 has one critical orbit attracted to an orbit of period two and one critical orbit attracted to an orbit of period three. Basic Complex Dynamics: A Computational Approach by Mark McClure

## rational

Julia sets of rational maps (not polynomials)

### Christopher Williams

Family: ${\displaystyle z_{k+1}=z_{k}^{P}+c-\lambda z_{k}^{-Q}}$

Examples:

• Julia set of ${\displaystyle z^{2}-0.0625z^{-2}}$ The most obvious feature is that it's full of holes! The fractal is homeomorphic to (topologically the same as) the Sierpinski carpet
• Julia set of z^2 - 1 - 0.005z^-2
• f = z^2 - 0.01z^-2 Phoenix formula
• z^5 - 0.06iz^-2

### Robert L.
Devaney

• Singular perturbations of complex polynomials ${\displaystyle G_{\lambda ,c}(z)=z^{n}+c+{\frac {\lambda }{z^{n}}}}$

# Julia sets for fc(z) = z*z + c

const double Cx=-0.74543; const double Cy=0.11301;

-0.808 +0.174i;
-0.1 +0.651i; ( between 1 and 3 period component of Mandelbrot set )
-0.294 +0.634i

## tuned rabbit

• c = 0
• 5/13
• c = -0.407104083085098 +0.584852842868102 i period = 13
• 1/2
• c = -0.410177342420846 +0.590406710726110 i period = 10000

the 5/13 Rabbit tuned with the Basilica approximates the golden Siegel disk ( LOCAL CONNECTIVITY OF THE MANDELBROT SET AT SOME SATELLITE PARAMETERS OF BOUNDED TYPE by DZMITRY DUDKO AND MIKHAIL LYUBICH )

## spirals

• on the parameter plane
• part of M-set near Misiurewicz points
• on the dynamic plane
• Julia set near cut points
• critical orbits
• external rays landing on the parabolic or repelling periodic points

https://imagej.net/Directionality a flat directionality histogram is a good metric for many-armed spirals (at least with distance estimation colouring)

${\displaystyle (r,t)\to \lambda \to c}$

${\displaystyle t={\frac {p}{q}}+\epsilon }$

${\displaystyle \lambda =re^{2\pi ti}}$

In period 1 component of Mandelbrot set: ${\displaystyle c={\frac {\lambda }{2}}-{\frac {\lambda ^{2}}{4}}}$

In period 2 component of Mandelbrot set: ${\displaystyle c={\frac {\lambda }{4}}-1}$

Period 1

• c = -0.106956704870346 +0.648733714002516 i , inside period 1 parent component , near period 100 child component , critical orbit is a spiral that starts near internal ray 33/100

### table

{| class="wikitable"
|+ Caption text
! Period !! c !! r !! 1-r !! t !! p/q !! p/q - t !! image !! author !! address
|-
| 1 || 0.37496784+i*0.21687214 || 0.99993612384259 || 0.000063879203489 || 0.1667830755386747 || 1/6 || 0.00011640887201 || Cr6spiral.png || || 1
|-
| 1 || -0.749413589136570+0.015312826507689*i || 0.9995895293978963 || 0.00041047060211000001 || 0.4975611481254812 || 1/2 || -0.00243885187451881 || png || || 1
|-
| 2 || -0.757 + 0.027i || 0.977981594918841 || 0.02201840508115904 || 0.01761164373863864 || 0/1 || -0.01761164373863864 || || pauldelbrot || 1 -(1/2)-> 2
|-
| 2 || -0.752 + 0.01i || 0.9928061240745848 || 0.007193875925415205 || 0.006414063302849116 || 0/1 || -0.006414063302849116 || || pauldelbrot || 1 -(1/2)-> 2
|-
| 10 || -1.2029905319213867188 + 0.14635562896728515625 i || 0.979333 || 0.02490599999999998 || 0.985187275828761422 || 0/1 || -0.01481272417123857 || || marcm200 || 1 -(1/2)-> 2 -(2/5)-> 10
|-
| 14 || -1.2255649566650390625 + 0.1083774566650390625 i || 0.951928 || 0.048072 || 0.992666114460366900 || 1/1 || 0.0073338855396331 || || marcm200 || 1 -(1/2)-> 2 -(3/7)-> 14
|-
| 14 || -1.2256811857223510742 +0.10814088582992553711 i || 0.955071 || 0.044929 || 0.984062994677356362 || 1/1 || 0.01593700532264363 || || marcm200 || 1 -(1/2)-> 2 -(3/7)-> 14
|-
| 14 || -0.8422698974609375 -0.19476318359375 i || 0.952171 || 0.04782900000000001 || 0.935491618649184731 || 1/1 || 0.06450838135081527 || || marcm200 || 1 -(1/2)-> 2 -(6/7)-> 14
|}

### pauldelbrot

"c=0.027*%i-0.757"
period = 1
z= 0.01345178808414596*%i-0.5035840525848648 r = |m(z)| = 1.007527366616821 1-r = -0.007527366616821407 t = turn(m(z)) = 0.4957496478171055 p/q = 1/2 p/q-t = -0.004250352182894435
z= 1.503584052584865-0.01345178808414596*%i r = |m(z)| = 3.007288448945452 1-r = -2.007288448945452 t = turn(m(z)) = 0.9985761611087214 p/q = 1/2 p/q-t = 0.4985761611087214
period = 2
z= (-0.1022072682395012*%i)-0.3679154600985363 r = |m(z)| = 0.977981594918841 1-r = 0.02201840508115904 t = turn(m(z)) = 0.01761164373863864 p/q = 1/2 p/q-t = -0.4823883562613613
z= 0.1022072682395012*%i-0.6320845399014637 r = |m(z)| = 0.9779815949188409 1-r = 0.02201840508115915 t = turn(m(z)) = 0.01761164373863864 p/q = 1/2 p/q-t = -0.4823883562613613

"c=0.01*%i-0.752"
period = 1
z= 0.0049949453016411*%i-0.501011962705025 r = |m(z)| = 1.002073722342039 1-r = -0.00207372234203973 t = turn(m(z)) = 0.498413323518715 p/q = 0 p/q-t = 0.498413323518715
z= 1.501011962705025-0.0049949453016411*%i r = |m(z)| = 3.002040547136002 1-r =
-2.002040547136002 t = turn(m(z)) = 0.9994703791038458 p/q = 0 p/q-t = 0.9994703791038458 period = 2 z= (-0.06402358560400053*%i)-0.4219037804142045 r = |m(z)| = 0.9928061240745848 1-r = 0.007193875925415205 t = turn(m(z)) = 0.006414063302849116 p/q = 0 p/q-t = 0.006414063302849116 z= 0.06402358560400053*%i-0.5780962195857955 r = |m(z)| = 0.9928061240745848 1-r = 0.007193875925415205 t = turn(m(z)) = 0.006414063302849116 p/q = 0 p/q-t = 0.006414063302849116 (%o296) "/home/a/Dokumenty/periodic/MaximaCAS/p1/p.mac" (%i297) ### marcm200 the input point was -1.2029905319213867e+00 + +1.4635562896728516e-01 i the point didn't escape after 10000 iterations nearby hyperbolic components to the input point: - a period 1 cardioid with nucleus at +0e+00 + +0e+00 i the component has size 1.00000e+00 and is pointing west the atom domain has size 0.00000e+00 the atom domain coordinates of the input point are -nan + -nan i the atom domain coordinates in polar form are nan to the east the nucleus is 1.21186e+00 to the east of the input point the input point is exterior to this component at radius 1.41904e+00 and angle 0.486382891412633800 (in turns) the multiplier is -1.41385e+00 + +1.21263e-01 i a point in the attractor is -7.0694e-01 + +6.06308e-02 i external angles of this component are: .(0) .(1) - a period 2 circle with nucleus at -1e+00 + +0e+00 i the component has size 5.00000e-01 and is pointing west the atom domain has size 1.00000e+00 the atom domain coordinates of the input point are -0.20299 + +0.14636 i the atom domain coordinates in polar form are 0.25025 to the north-west the nucleus is 2.50250e-01 to the south-east of the input point the input point is exterior to this component at radius 1.00100e+00 and angle 0.400579159596292533 (in turns) the multiplier is -8.11962e-01 + +5.85423e-01 i a point in the attractor is +1.81557e-01 + -1.07368e-01 i - a period 4 circle with nucleus at -1.310703e+00 + +3.761582e-37 i the component has size 1.17960e-01 and is pointing 
west the atom domain has size 2.34844e-01 the atom domain coordinates of the input point are +0.507 + +0.53837 i the atom domain coordinates in polar form are 0.73952 to the north-east the nucleus is 1.81719e-01 to the south-west of the input point the input point is exterior to this component at radius 1.00200e+00 and angle 0.801158319192584956 (in turns) the multiplier is +3.16563e-01 + -9.50682e-01 i a point in the attractor is +1.815562e-01 + -1.073687e-01 i external angles of this component are: .(0110) .(1001) - a period 10 circle with nucleus at -1.2103996e+00 + +1.5287483e-01 i the component has size 2.02739e-02 and is pointing north-west the atom domain has size 4.09884e-02 the atom domain coordinates of the input point are +0.16767 + -0.13713 i the atom domain coordinates in polar form are 0.2166 to the south-east the nucleus is 9.86884e-03 to the north-west of the input point the input point is interior to this component at radius 9.79333e-01 and angle 0.985187275828761422 (in turns) the multiplier is +9.75094e-01 + -9.10160e-02 i a point in the attractor is +7.0332348e-02 + -8.243835e-02 i external angles of this component are: .(0110010110) .(0110011001) the input point was -1.2255649566650391e+00 + +1.0837745666503906e-01 i the point didn't escape after 10000 iterations nearby hyperbolic components to the input point: - a period 1 cardioid with nucleus at +0e+00 + +0e+00 i the component has size 1.00000e+00 and is pointing west the atom domain has size 0.00000e+00 the atom domain coordinates of the input point are -nan + -nan i the atom domain coordinates in polar form are nan to the east the nucleus is 1.23035e+00 to the east of the input point the input point is exterior to this component at radius 1.43387e+00 and angle 0.490097175551864883 (in turns) the multiplier is -1.43109e+00 + +8.91595e-02 i a point in the attractor is -7.15546e-01 + +4.45805e-02 i external angles of this component are: .(0) .(1) - a period 2 circle with nucleus at -1e+00 + 
+0e+00 i the component has size 5.00000e-01 and is pointing west the atom domain has size 1.00000e+00 the atom domain coordinates of the input point are -0.22556 + +0.10838 i the atom domain coordinates in polar form are 0.25025 to the west-north-west the nucleus is 2.50250e-01 to the east-south-east of the input point the input point is exterior to this component at radius 1.00100e+00 and angle 0.428714049282459153 (in turns) the multiplier is -9.02260e-01 + +4.33510e-01 i a point in the attractor is +1.94019e-01 + -7.80792e-02 i - a period 4 circle with nucleus at -1.310703e+00 + +0e+00 i the component has size 1.17960e-01 and is pointing west the atom domain has size 2.34844e-01 the atom domain coordinates of the input point are +0.38418 + +0.4137 i the atom domain coordinates in polar form are 0.56457 to the north-east the nucleus is 1.37819e-01 to the south-west of the input point the input point is exterior to this component at radius 1.00200e+00 and angle 0.857428098564918195 (in turns) the multiplier is +6.26142e-01 + -7.82277e-01 i a point in the attractor is +1.940184e-01 + -7.80797e-02 i external angles of this component are: .(0110) .(1001) - a period 14 circle with nucleus at -1.2299714e+00 + +1.1067143e-01 i the component has size 1.06543e-02 and is pointing west-north-west the atom domain has size 1.95731e-02 the atom domain coordinates of the input point are +0.16376 + -0.15414 i the atom domain coordinates in polar form are 0.22489 to the south-east the nucleus is 4.96796e-03 to the west-north-west of the input point the input point is interior to this component at radius 9.51928e-01 and angle 0.992666114460366900 (in turns) the multiplier is +9.50917e-01 + -4.38495e-02 i a point in the attractor is +7.5747371e-02 + -5.129233e-02 i external angles of this component are: .(01100110010110) .(01100110011001) the input point was -1.2256811857223511e+00 + +1.0814088582992554e-01 i the point didn't escape after 10000 iterations nearby hyperbolic 
components to the input point: - a period 1 cardioid with nucleus at +0e+00 + +0e+00 i the component has size 1.00000e+00 and is pointing west the atom domain has size 0.00000e+00 the atom domain coordinates of the input point are -nan + -nan i the atom domain coordinates in polar form are nan to the east the nucleus is 1.23044e+00 to the east of the input point the input point is exterior to this component at radius 1.43394e+00 and angle 0.490119703062896594 (in turns) the multiplier is -1.43118e+00 + +8.89616e-02 i a point in the attractor is -7.15576e-01 + +4.44813e-02 i external angles of this component are: .(0) .(1) - a period 2 circle with nucleus at -1e+00 + +0e+00 i the component has size 5.00000e-01 and is pointing west the atom domain has size 1.00000e+00 the atom domain coordinates of the input point are -0.22568 + +0.10814 i the atom domain coordinates in polar form are 0.25025 to the west-north-west the nucleus is 2.50253e-01 to the east-south-east of the input point the input point is exterior to this component at radius 1.00101e+00 and angle 0.428881674271762436 (in turns) the multiplier is -9.02725e-01 + +4.32564e-01 i a point in the attractor is +1.9408e-01 + -7.79018e-02 i - a period 4 circle with nucleus at -1.310703e+00 + +0e+00 i the component has size 1.17960e-01 and is pointing west the atom domain has size 2.34844e-01 the atom domain coordinates of the input point are +0.38355 + +0.41286 i the atom domain coordinates in polar form are 0.56353 to the north-east the nucleus is 1.37561e-01 to the south-west of the input point the input point is exterior to this component at radius 1.00202e+00 and angle 0.857763348543524873 (in turns) the multiplier is +6.27801e-01 + -7.80972e-01 i a point in the attractor is +1.940823e-01 + -7.790208e-02 i external angles of this component are: .(0110) .(1001) - a period 14 circle with nucleus at -1.2299714e+00 + +1.1067143e-01 i the component has size 1.06543e-02 and is pointing west-north-west the atom 
domain has size 1.95731e-02 the atom domain coordinates of the input point are +0.15716 + -0.16242 i the atom domain coordinates in polar form are 0.22601 to the south-east the nucleus is 4.98108e-03 to the west-north-west of the input point the input point is interior to this component at radius 9.55071e-01 and angle 0.984062994677356362 (in turns) the multiplier is +9.50287e-01 + -9.54765e-02 i a point in the attractor is +6.2976569e-02 + -5.7543144e-02 i external angles of this component are: .(01100110010110) .(01100110011001) the input point was -8.422698974609375e-01 + -1.9476318359375e-01 i the point didn't escape after 10000 iterations nearby hyperbolic components to the input point: - a period 1 cardioid with nucleus at +0e+00 + +0e+00 i the component has size 1.00000e+00 and is pointing west the atom domain has size 0.00000e+00 the atom domain coordinates of the input point are -nan + -nan i the atom domain coordinates in polar form are nan to the east the nucleus is 8.64495e-01 to the east-north-east of the input point the input point is exterior to this component at radius 1.11403e+00 and angle 0.526643305764455283 (in turns) the multiplier is -1.09846e+00 + -1.85625e-01 i a point in the attractor is -5.49225e-01 + -9.28116e-02 i external angles of this component are: .(0) .(1) - a period 2 circle with nucleus at -1e+00 + +0e+00 i the component has size 5.00000e-01 and is pointing west the atom domain has size 1.00000e+00 the atom domain coordinates of the input point are +0.15773 + -0.19476 i the atom domain coordinates in polar form are 0.25062 to the south-east the nucleus is 2.50622e-01 to the north-west of the input point the input point is exterior to this component at radius 1.00249e+00 and angle 0.858340235783291439 (in turns) the multiplier is +6.30920e-01 + -7.79053e-01 i a point in the attractor is -1.0771e-01 + +2.48238e-01 i - a period 14 circle with nucleus at -8.4076071e-01 + -1.9927227e-01 i the component has size 1.01164e-02 and is 
pointing south-east the atom domain has size 1.66560e-02 the atom domain coordinates of the input point are +0.13324 + -0.2159 i the atom domain coordinates in polar form are 0.25371 to the south-south-east the nucleus is 4.75495e-03 to the south-south-east of the input point the input point is interior to this component at radius 9.52171e-01 and angle 0.935491618649184731 (in turns) the multiplier is +8.75023e-01 + -3.75452e-01 i a point in the attractor is +4.338561e-02 + +7.777828e-02 i external angles of this component are: .(10101010100110) .(10101010101001)

## basilica

• San Marco Fractal = Basilica : c = - 3/4
• parabolic perturbation
• 5/11 :

### 5/11

• The angle 681/2047 or p01010101001 has preperiod = 0 and period = 11
• The angle 682/2047 or p01010101010 has preperiod = 0 and period = 11.

## disconnected

near c = -0.750357820200574 +0.047756163825227 i

## Siegel Disk

Julia set = Jordan curve

Irrational recurrent cycles

• 0.59...+i0.43...
• 0.33...+i0.07...
• C= -0.408792866 -0.577405 i
• c= -0.390541 -0.586788 i

## Other

Feigenbaum: C= -1.4011552 +0.0 i
Tower: C= -1 + 0.0 i
Cauliflower: C= 0.25 +0.0 i

## Dendrite

Critical point eventually periodic: 0 -> -2 -> 2 (fixed). C= i

c^3 + 2c^2 + 2c + 2 = 0

### 3

The core entropy for polynomials of higher degree, Yan Hong Gao, Giulio Tiozzo

The Julia set of fc : z → z^3 + 0.22036 + 1.18612i

To show the non-uniqueness, let us consider the following example, which comes from [Ga]. We consider the postcritically finite polynomial fc(z) = z^3 + c with c ≈ 0.22036 + 1.18612i. The critical value c receives two rays with arguments 11/72 and 17/72. Then, Θ := { Θ1(0) := {11/216, 83/216} , Θ2(0) := {89/216, 161/216} }

## Other

Circle: C= 0.0 +0.0 i
Segment: C= -2 +0.0 i

## centers

• c = -0.748490484405062 +0.048290910555737 i period = 65 ( 1 -?-> 65 )
• by Chris Thomasson
• (0.355534, -0.337292) is a center of the period 85 component. Address 1 -> 5 -> 85
• 29 cycle power of two Julia set at point: (-0.742466, -0.107902) address 1 -> 29
• 38-cycle power of 3 Julia at point (0.388823, -0.000381453)
• c = -0.051707765779845 +0.683880135777732 i period = 273, 1 -(1/3)-> 3 -(1/7)-> 21 -(1/13)-> 273 https://www.math.stonybrook.edu/~jack/tune-b.pdf
• https://arxiv.org/pdf/math/9411238.pdf see Figure.2
• 1-(1/3) → 3 -(2/3) → 7
• 1-(1/2) → 2 -(1/2) → 4 -(1/3) → 7
• 1-(1/3) → 3 -(1/2) → 4
• 1-(1/3) → 3 -(1/2) → 5 -(1/2) → 6
• 1-(1/3) → 3 -(1/2) → 5 -(1/2) → 7
• 1-(1/3) → 3 -(1/3) → 7
• 1-(3/4) → 4 -(?/5) → 20 : c = 0.300078079301992 -0.489531524188048 i period = 20
• http://wrap.warwick.ac.uk/35776/1/WRAP_THESIS_Sharland_2010.pdf

### minibrot

• c = 0.284912968784722 +0.484816779093857 i period = 84

### Superattracting per 3 (up to complex conjugate)

C= -1.75488 (airplane)
C= -0.122561 + 0.744862 i (rabbit)

Douady's Rabbit

Rabbit: C= -0.122561 +0.744862 i = ( -1/8+3/4 i ??? ) whose critical point 0 is on a periodic orbit of length 3

### Superattracting per 4 (up to complex conjugate)

C= -1.9408
C= -1.3107
C= -1.62541
C= -0.15652 +1.03225 i
C= 0.282271 +0.530061 i

#### Kokopelli

• p = γM (3/15)
• p(z) = z^2 - 0.156 + 1.032*i
• The angle 3/15 or p0011 has preperiod = 0 and period = 4.
• The conjugate angle is 4/15 or p0100 .
• The kneading sequence is AAB* and the internal address is 1-3-4 .
• The corresponding parameter rays are landing at the root of a primitive component of period 4.
• c = -0.156520166833755 +1.032247108922832 i period = 4

### Superattracting per 5 (up to complex conjugate)

C= -1.98542
C= -1.86078
C= -1.62541
C= -1.25637 +0.380321 i
C= -0.50434 +0.562766 i
C= -0.198042 +1.10027 i
C= -0.0442124 +0.986581 i
C= 0.359259 +0.642514 i
C= 0.379514 +0.334932 i

### A superattracting per 15

C= -0.0384261 +0.985494 i

## dense

### Vivid-Thick-Spiral

<pre>
VividThickSpiral {
fractal: title="Vivid Thick Spiral" width=800 height=600 layers=1
  credits="Ingvar Kullberg;1/7/2014;Frederik Slijkerman;7/23/2002"
layer: caption="Background" opacity=100 method=multipass
mapping: center=-0.745655283381030620841/0.07611785523064665900025 magn=3.7328517E10
formula: maxiter=1000000 percheck=off filename="Standard.ufm" entry="Mandelbrot"
  p_start=0/0 p_power=2/0 p_bailout=100
inside: transfer=none
outside: density=0.1 transfer=arctan filename="Standard.ucl" entry="Basic"
  p_type=Iteration smooth=yes rotation=93 index=151 color=16580604
  index=217 color=1909029 index=227 color=2951679 index=248 color=6682867
  index=262 color=223 index=291 color=255 index=-29 color=55539
opacity: smooth=no index=0 opacity=255
}
</pre>

### Pink-Labyrinth

<pre>
PinkLabyrinth {
fractal: title="Pink Labyrinth" width=800 height=600 layers=1
  credits="Ingvar Kullberg;1/7/2014;Frederik Slijkerman;7/23/2002"
layer: caption="Background" opacity=100 method=multipass
mapping: center=-0.745655283616919391525/0.0761178553836136017335 magn=6.6880259E9
formula: maxiter=1000000 filename="Standard.ufm" entry="Mandelbrot"
  p_start=0/0 p_power=2/0 p_bailout=100
inside: transfer=none
outside: density=0.25 transfer=arctan filename="Standard.ucl" entry="Basic"
  p_type=Iteration smooth=yes rotation=74 index=132 color=16580604
  index=198 color=1909029 index=208 color=2951679 index=229 color=6682867
  index=243 color=223 index=272 color=255 index=-48 color=55539
opacity: smooth=no index=0 opacity=255
}
</pre>

### Density near the cardioid 3

by DinkydauSet, https://fractalforums.org/image-threads/25/mandelbrot-set-various-structures/716/

Again a location that's not deep but super dense.

• Magnification: 2^35.769 = 5.8552024369543422761426995117521 E10
• Coordinates:
• Re = 0.360999615968828800
• Im = -0.121129382033034400
• XaoS coordinates (maxiter 50000): (view -0.775225602760841 -0.136878655029377 7.14008235131944E-11 7.14008235674045E-11)

https://arxiv.org/pdf/1703.01206.pdf "Limbs 8/21, 21/55, 55/144, 144/377, . . . scale geometrically fast on the right side of the (anti-)golden Siegel parameter, while limbs 5/13, 13/34, 34/89, 89/233, . . . scale geometrically fast on the left side. The bottom picture is a zoom of the top picture."

## a

https://plus.google.com/u/0/photos/115452635610736407329/albums/6124078542095960129/6124078748270945650 frond tail Misiurewicz point of the period-27 bulb of the quintic Mandelbrot set (I don't have a number ATM, but you can find that)

# SVG

<pre>
<?xml version="1.0" encoding="utf-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
</pre>

# ps

"The images in my individual contributions were made with self-written PASCAL (FREEPASCAL) programs and/or XFIG (a drawing program for LINUX), initially created as EPS files. Unfortunately, direct integration of EPS files into Wikipedia is not possible. Converting EPS files to XFIG files with PSTOEDIT is useful. Converting EPS files to SVG files is possible with INKSCAPE, as is exporting xfig files to SVG files. However, I have not found a replacement for LATEX labels, which I include in a LATEX file in a drawing environment using psfrag. Also, as far as I know, inkscape (still) has no equivalent for many of xfig's ways of manipulating curves.
From PS files I create SVG files with ps2pdf and pdf2svg, and I use the inkscape program to adjust the pages (margins) to the drawings." de:Benutzer:Ag2gaeh

# Help

This chart was created with R.

## R code

<syntaxhighlight lang="r">
dx = 800; dy = 600                  # define grid size
C = complex( real = rep(seq(-2.2, 1.0, length.out = dx), each = dy),
             imag = rep(seq(-1.2, 1.2, length.out = dy), dx) )
C = matrix(C, dy, dx)               # convert from vector to matrix
Z = 0                               # initialize Z to zero
X = array(0, c(dy, dx, 20))         # allocate memory for all the frames
for (k in 1:20) {                   # perform 20 iterations
  Z = Z^2 + C                       # the main equation
  X[,,k] = exp(-abs(Z))             # store magnitude of the complex number
}
library(caTools)                    # load library with write.gif function
jetColors = colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan",
  "#7FFF7F", "yellow", "#FF7F00", "red", "#7F0000"))
write.gif(X, "Mandelbrot.gif", col = jetColors, delay = 100, transparent = 0)
</syntaxhighlight>

differences between :

<gallery> </gallery>

{{SUL Box|en|wikt}}

[[1]]

## compare with

==Compare with==

<gallery caption="Sample gallery" widths="100px" heights="100px" perrow="6"> </gallery>

</nowiki>

Change in your preferences: Show hidden categories

## syntaxhighlight

{{Galeria |Nazwa = Trzy krzywe w różnych skalach |wielkość = 400 |pozycja = right |Plik:LinLinScale.svg|Skala liniowo-liniowa |Plik:LinLogScale.svg|Skala liniowo-logarytmiczna |Plik:LogLinScale.svg|Skala logarytmiczno-liniowa |Plik:LogLogScale.svg|Skala logarytmiczno-logarytmiczna }}

== c source code==
<syntaxhighlight lang="c"> </syntaxhighlight>

== bash source code==
<syntaxhighlight lang="bash"> </syntaxhighlight>

==make==
<syntaxhighlight lang=makefile>
all:
	chmod +x d.sh
	./d.sh
</syntaxhighlight>

To run the program, simply run make.

==text output==
<pre> </pre>

==references==
<references/>

## references

1. colortrac glossary: greyscale
2. May 2004. http://users.mai.liu.se/hanlu09/complex/domain_coloring.html Retrieved 13 December 2018.
3. (September 2012). "Domain Coloring of Complex Functions: An Implementation-Oriented Introduction". IEEE Computer Graphics and Applications 32 (5): 90–97. DOI:10.1109/MCG.2012.100. PMID 24806991.
4. stackoverflow.com/questions/71856674/how-to-save-images-after-normalizing-the-pixels/71863957?noredirect=1
5. coolors by Fabrizio Bianchi
6. https://colors.artyclick.com/color-names-dictionary/
7. random-pastel-color by Micro Digital Tools
8. theinnerframe : exponential-function/, Playing with Infinity
9. Computability of Brolin-Lyubich Measure by Ilia Binder, Mark Braverman, Cristobal Rojas, Michael Yampolsky
10. fractal_ken An "escape time" fractal generated by homemade software using recurrence relation z(n) = z(n - 1)^2 + 0.3305 + 0.06i

### GeSHi

description: <syntaxhighlight lang="c" enclose="none"> g; a=2; </syntaxhighlight>

### source template (deprecated)

<syntaxhighlight lang="gnuplot">ssssssssssssss</syntaxhighlight>

### prettytable

Mathematical Function Plot

Description: Function displaying a cusp at (0,1)
Equation: ${\displaystyle y={\sqrt {|x|}}+1}$
Co-ordinate System: Cartesian
X Range: -4 .. 4
Y Range: 0 .. 3
Derivative (x ≠ 0): ${\displaystyle {\frac {dy}{dx}}={\frac {\operatorname {sgn}(x)}{2{\sqrt {|x|}}}}}$

Points of Interest in this Range

Minima: ${\displaystyle \left(0,1\right)\,}$
Cusps: ${\displaystyle \left(0,1\right)\,}$
Derivatives at Cusp: ${\displaystyle \lim _{x\to 0^{+}}f'(x)=+\infty }$, ${\displaystyle \lim _{x\to 0^{-}}f'(x)=-\infty }$
### Review the experimental question and hypothesis. Which statement best explains why the hypothesis is testable?

Question: How does the amount of water individuals drink every day affect the number of facial blemishes they receive every month?

Hypothesis: If the amount of water people drink every day affects the concentration of certain hormones in their blood, then increasing the amount of water consumed daily will decrease the number of facial blemishes they receive each month.

- The hypothesis includes an explanation and makes a prediction.
- The hypothesis is written using an "If …, then …" format.
- The hypothesis includes both an independent variable and a dependent variable.
- The hypothesis can be used to accurately explain observable facts.

### Which text in this excerpt from Twenty Years at Hull House by Jane Addams demonstrates the lack of concern for poor immigrant communities that was frequently shown by politicians and others with influence?

"One of the striking features of our neighborhood twenty years ago, and one to which we never became reconciled, was the presence of huge wooden garbage boxes fastened to the street pavement in which the undisturbed refuse accumulated day by day. The system of garbage collecting was inadequate..."

### Which of the following correctly applies to a catalyst?

a. They do not actually participate in the chemical reaction and therefore are unchanged at the end of the reaction.
b. They provide an alternate lower-energy mechanism by which the reaction proceeds.
c. They must be in the same phase as the reactants in the chemical reaction.
d. Biological catalysts are proteins called substrates.

### A ball is thrown straight up into the air from the top of a building standing at 50 feet with an initial velocity of 65 feet per second. The height of the ball in feet can be modeled by the following function: h(t) = -16t^2 + 16t + 96. When does the ball reach its maximum height?

### Which forms of contraception have an 85% success rate or higher?

### Jordon, a leukemia patient, can be treated using (smooth muscle cells / human stem cells / mature bone cells) because these cells have the capacity to (attack and destroy cancer cells / cause cancer cells to differentiate / produce healthy cells).
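For the ball question above, the maximum height of h(t) = -16t^2 + 16t + 96 occurs at the parabola's vertex, t = -b/(2a). A quick check in Python (using the coefficients exactly as given in the question, which are not consistent with the stated 50 ft / 65 ft/s values):

```python
# Vertex of h(t) = a*t^2 + b*t + c gives the time of maximum height.
a, b, c = -16, 16, 96
t_max = -b / (2 * a)               # -16 / (2 * -16) = 0.5 s
h_max = a * t_max**2 + b * t_max + c
assert t_max == 0.5
assert h_max == 100.0              # -16*0.25 + 16*0.5 + 96 = -4 + 8 + 96
```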
### Discuss the differences between a wage, a salary and a commission.

### Estimate the sum by rounding the numbers to the nearest hundred and then adding: 539 + 221

### What is the difference between the commerce power and the currency power?

### Which number is 0.25 / -0.25?

### Determine the molecular mass of the following compound:

    Cl
      \
       C=O
      /
    Cl

### Which statement best describes how some trees respond to decreasing temperatures and shorter days in the fall?

A. They drop their leaves and go into dormancy.
B. They grow new leaves and go into dormancy.
C. They drop their leaves and increase photosynthesis.
D. They grow new leaves and increase photosynthesis.

### An aeroplane takes off from a runway by covering a distance of 1200 metres in 90 s. What is the average speed of the aeroplane?

### Which man organized a plan to kill President Lincoln?

### Which two factors led to Latin American revolutions?

### In a class of 30 students, 13 of them are boys. What percentage of the class are girls? Give your answer to 1 decimal place.

### The act of giving instruction or important information

### Which extraction procedure will completely separate an amide from the by-product of the reaction between an amine and excess carboxylic acid anhydride?

A) Add 0.1 M NaOH (aq) to quench unreacted anhydride. Then add diethyl ether and separate the layers. The amide can be obtained from the ether layer by evaporating the solvent.
B) Add 0.1 M HCl (aq) to quench unreacted anhydride. Then add diethyl ether and separate the layers. The amide can be obtained from the ether layer by evaporating the solvent.
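Several of the arithmetic questions above can be checked directly from the numbers as stated (a sketch; no claims beyond the given figures):

```python
# Estimate 539 + 221 by rounding each to the nearest hundred first.
estimate = round(539, -2) + round(221, -2)   # 500 + 200
assert estimate == 700

# Percentage of girls in a class of 30 with 13 boys, to 1 decimal place.
girls_pct = round((30 - 13) / 30 * 100, 1)
assert girls_pct == 56.7

# Average speed of the aeroplane: distance / time.
avg_speed = 1200 / 90
assert abs(avg_speed - 13.3) < 0.05          # about 13.3 m/s
```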
# Why do we think of group compositions as multiplication?

This has bothered me for some time: The composition in a group is usually denoted $xy$ or $x\cdot y$. Powers (note the word) are denoted by $x^n$, inverses by $x^{-1}$, and the neutral element by $1$. Someone clearly seemed to think of multiplication when these conventions were adopted. But wait: in most algebraic (ring-like) structures where multiplication is defined, this operation almost never makes the structure into a group, even if you take away zero. Wouldn't it have been much more natural to use additive notation for groups? Obviously, when we call something "addition," it is usually commutative, but then again, this is mostly a result of conventions; multiplication was also traditionally thought of as commutative until evil people invented non-commutative rings. Are there any historical/heuristic/practical explanations for this (in my opinion) strange choice of notation? The best explanation I can come up with is that it just works: non-commutative rings turned out to be such an interesting topic that people stopped thinking of multiplication as always commutative. Hence they used multiplicative notation when the group was not assumed to be Abelian.

• Well, you're going to need addition and multiplication for a ring anyway... and addition is the commutative operation (whereas multiplication isn't necessarily, still). – Batman Apr 10 '15 at 18:42
• @Batman part of the question seems to be "when did multiplication first become non-commutative?" – Omnomnomnom Apr 10 '15 at 18:43
• You might be able to blame Arthur Cayley; he invented both matrix multiplication and groups as we know them today.
– Omnomnomnom Apr 10 '15 at 18:46
• There's also Hamilton with quaternion multiplication – Omnomnomnom Apr 10 '15 at 18:48
• Keep in mind that a lot of the pioneering work in group theory was done in permutation groups, where the operation is composition of permutations (and keep in mind Cayley's theorem) – Bill Dubuque Apr 10 '15 at 19:02

A group $G$ is a set endowed with a binary operation, a map $\cdot : G\times G \to G$ that obeys certain properties. The notation is then $\cdot(g,h) := g\cdot h$. Of course, nothing stops us from using the plus symbol for the map $+ : G\times G \to G$. But mathematicians are lazy, and a dot is much easier to write than a plus sign!

Alternatively, if $*$ is the lone object of a one-object groupoid $\mathcal C$, then a group is $\text{Aut}_\mathcal{C}(*)$. Its elements are morphisms $f: * \to *$, whose binary operation is composition. Composition is traditionally denoted by $g \circ f$ or sometimes even $gf$, which is reminiscent of multiplication.
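The permutation-group point made in the comments can be made concrete: writing composition multiplicatively, $gf$ means "apply $f$, then $g$", and this "multiplication" is genuinely non-commutative. A small sketch (the tuple encoding of permutations is my own illustrative choice, not from the thread):

```python
# A permutation on {0, 1, 2} is a tuple p with p[i] = image of i.
# Multiplicative notation: (g*f)(i) = g(f(i)).
def compose(g, f):
    return tuple(g[f[i]] for i in range(len(f)))

f = (1, 0, 2)  # swaps 0 and 1
g = (0, 2, 1)  # swaps 1 and 2

assert compose(g, f) != compose(f, g)     # "multiplication" is non-commutative

identity = (0, 1, 2)
f_inv = tuple(f.index(i) for i in range(3))  # the inverse, written f^(-1)
assert compose(f, f_inv) == identity         # f * f^(-1) = 1
```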
### Physics: Definition and Proof Based Problems

Q 3114134959 (i) Define moment of inertia. Write the parallel and perpendicular axis theorems. (ii) Derive an expression for the moment of inertia of a disc of radius $r$ and mass $m$ about an axis along its diameter.

Solution: (i) Moment of inertia is the inertial equivalent in rotational motion. The moment of inertia of a rigid body is defined as the sum of the products of the constituent masses and the squares of their perpendicular distances from the axis of rotation. If $m_1, m_2, \ldots, m_n$ are the masses at perpendicular distances $r_1, r_2, \ldots, r_n$, then
$$I = m_1 r_1^2 + m_2 r_2^2 + \cdots + m_n r_n^2 = \sum_{i=1}^{n} m_i r_i^2.$$
It is measured in $\mathrm{kg\,m^2}$ and has the dimensions $ML^2$.

Parallel axis theorem: the moment of inertia about an axis parallel to the axis through the centre of mass of a rigid body is the sum of the moment of inertia $I_{cm}$ of the body about the axis through the centre of mass and the product of its mass $M$ and the square of the separation $a$ between the parallel axes, i.e.
$$I = I_{cm} + Ma^2.$$

Perpendicular axis theorem: if $I_x$ and $I_y$ are the moments of inertia of a plane lamina about the x- and y-axes, then the moment of inertia $I_z$ about the z-axis is the sum of $I_x$ and $I_y$.

(ii) Moment of inertia of a uniform circular disc about its diameter: using the theorem of perpendicular axes, $I_d + I_d$ equals the moment of inertia of the disc about the perpendicular axis through its centre, i.e.
$$2I_d = \frac{1}{2}MR^2 \quad\Rightarrow\quad I_d = \frac{1}{4}MR^2.$$

Q 3164534455 Establish a relation between angular momentum and moment of inertia of a rigid body. Define moment of inertia in terms of it.

Solution: We know $\vec L = \vec r \times \vec p = \vec r \times m\vec v$. Since $v = r\omega$, we have $L = mr^2\omega = I\omega$. Thus the moment of inertia may be defined as the angular momentum per unit angular velocity, $I = L/\omega$.

Q 3154634554 Define a rigid body. Name two kinds of motion which a rigid body can execute. What is meant by the term equilibrium?
For the equilibrium of a body, two conditions need to be satisfied. State them.

Solution: A rigid body is one in which the separation between any two constituent masses remains constant. It can execute translatory and rotational motion. Equilibrium identifies the stability of a body. For equilibrium, (i) $\sum \vec F = 0$, i.e. the total unbalanced force should be zero, and (ii) $\sum \vec \tau = 0$, i.e. the net torque acting on the body should be zero.

Q 3104834758 Define moment of inertia. Write any two factors on which it depends. When the diver leaves the diving board, why does he bring his hands and feet closer together in order to make a somersault?

Solution: The moment of inertia of a body about a given axis is defined as the sum of the products of the masses of all the particles of the body and the squares of their respective perpendicular distances from the axis of rotation. Moment of inertia depends on (i) the distribution of mass and (ii) the orientation of the axis of rotation. The diver does so so that the moment of inertia $I$ of his body decreases. As the angular momentum $I\omega$ remains constant, the angular velocity $\omega$ of his body increases.

Q 3134623552 Prove that the torque acting due to a force $\vec F$ in the xy plane is $\tau_z = xF_y - yF_x$.

Solution: We know $\vec\tau = \vec r \times \vec F$. If $\vec r = x\hat i + y\hat j + z\hat k$ and $\vec F = F_x\hat i + F_y\hat j + F_z\hat k$, we have
$$\vec\tau = \tau_x\hat i + \tau_y\hat j + \tau_z\hat k = \begin{vmatrix} \hat i & \hat j & \hat k \\ x & y & z \\ F_x & F_y & F_z \end{vmatrix} = \hat i\,(yF_z - zF_y) - \hat j\,(xF_z - zF_x) + \hat k\,(xF_y - yF_x).$$
Comparing the coefficients of $\hat i$, $\hat j$ and $\hat k$, we have $\tau_z = xF_y - yF_x$.

Q 3174623556 State and prove the law of conservation of angular momentum.

Solution: Torque $\vec\tau = \vec r \times \vec F = \vec r \times \frac{d\vec p}{dt} = \frac{d}{dt}(\vec r \times \vec p) = \frac{d\vec L}{dt}$, using $\frac{d\vec r}{dt} \times \vec p = \vec v \times m\vec v = 0$. If no external torque acts on a body, $\vec\tau = 0$, then $\frac{d\vec L}{dt} = 0$, i.e. $\vec L$ is conserved.

Q 3124112951 Prove Kepler's law that the line joining the sun and the planet sweeps equal areas in equal times, using conservation of the planet's angular momentum.
Solution: When the planet moves along its orbit, the line joining the sun and the planet sweeps an area
$$A = \frac{1}{2} r^2 \theta,$$
where $\theta$ is the angular displacement. Therefore
$$\frac{dA}{dt} = \frac{1}{2} r^2 \frac{d\theta}{dt} = \frac{1}{2} r^2 \omega = \frac{1}{2m}\, m r^2 \omega = \frac{L}{2m}.$$
Since no torque acts, the angular momentum $L$ is constant, so $\frac{dA}{dt}$ is constant, i.e. the line joining the sun and the planet sweeps equal areas in equal intervals of time.

Q 3104723658 State and prove the parallel axis theorem.

Solution: The moment of inertia about an axis parallel to the axis through the centre of mass is the sum of the moment of inertia about the axis through the centre of mass and the product of the mass $M$ and the square of the separation $a$ between the axes, i.e. $I = I_{cm} + Ma^2$.

Moment of inertia is the product of mass and the square of the separation from the axis. For a system of $n$ masses with coordinates $x_i$ measured from the centre of mass, the moment of inertia about the parallel axis at separation $a$ is
$$I = \sum_{i=1}^{n} m_i (x_i + a)^2 = \sum_{i=1}^{n} m_i x_i^2 + \sum_{i=1}^{n} m_i a^2 + 2a \sum_{i=1}^{n} m_i x_i = I_{cm} + Ma^2,$$
since $\sum_i m_i x_i = 0$ when the centre of mass is taken as the origin.

Q 3154112954 What is torque? Write its formula in vector form. The handle to open a door is always provided at the free edge of the door. Why?

Solution: Torque is the moment of force. It is written as the cross product of the position vector of the point of application of the force and the force, i.e. $\vec\tau = \vec r \times \vec F$, with magnitude $\tau = rF\sin\theta$, where $\theta$ is the angle between $\vec r$ and $\vec F$. It is measured in N m and has the dimensions $ML^2T^{-2}$. Equivalently, torque is the product of the force and the perpendicular distance between the axis of rotation and the line of action of the force. It is the cause of rotational motion. To open the door, the handle is to be rotated, so we need a torque. With less force one can exert more torque if the handle is at the edge, where the perpendicular distance is largest.
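Both the torque z-component formula and the parallel axis theorem above are easy to sanity-check numerically. The sketch below uses arbitrary sample values (not from the original solutions): a direct cross product to confirm $\tau_z = xF_y - yF_x$, and random point masses on a line to confirm $I = I_{cm} + Ma^2$:

```python
import random

def cross(a, b):
    """Cross product of two 3-vectors."""
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

# tau_z = x*F_y - y*F_x for arbitrary r and F
r, F = (2.0, 3.0, 5.0), (1.0, 4.0, -2.0)
tau = cross(r, F)
assert tau[2] == r[0] * F[1] - r[1] * F[0]

# Parallel axis theorem for point masses on a line: I = I_cm + M*a^2
random.seed(0)
masses = [(random.uniform(0.1, 2.0), random.uniform(-3, 3)) for _ in range(50)]
M = sum(m for m, _ in masses)
x_cm = sum(m * x for m, x in masses) / M
a = 1.7  # separation between the parallel axes
I_cm = sum(m * (x - x_cm) ** 2 for m, x in masses)
I_parallel = sum(m * (x - x_cm - a) ** 2 for m, x in masses)
assert abs(I_parallel - (I_cm + M * a ** 2)) < 1e-9
```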
Q 3114823750 State and prove the perpendicular axis theorem.

Solution: According to the perpendicular axis theorem, the sum of the moments of inertia of a plane lamina about the x- and y-axes is equal to the moment of inertia about the z-axis. A mass $m$ has coordinates $(x, y)$. The moment of inertia about the x-axis is $I_x = my^2$ and about the y-axis is $I_y = mx^2$, so
$$I_x + I_y = m(x^2 + y^2) = mr^2 = I_z,$$
where $r = \sqrt{x^2 + y^2}$ is the distance of the mass from the z-axis. Summing over all masses gives $I_x + I_y = I_z$.

Q 3174112956 Establish the relation between the rotational kinetic energy and the angular momentum of a rigid body. (HOTS)

Solution: Angular momentum $L = I\omega$. Rotational kinetic energy $E_k = \frac{1}{2}I\omega^2$, so
$$E_k = \frac{1}{2}\frac{(I\omega)^2}{I} = \frac{L^2}{2I}.$$

Q 3154212154 What is the torque provided by a force acting through the centre of mass of a sphere?

Solution: Zero about the centre, since $\vec\tau = \vec r \times \vec F$ and for a force whose line of action passes through the centre, $\vec r$ is parallel to $\vec F$, so the cross product vanishes.

Q 3154312254 A projectile fired into the air suddenly explodes into several fragments. What can you say about the motion of the fragments after the collision?

Solution: No new external force acts in the explosion, so even with the particles scattered, the centre of mass will continue to follow the original projectile path.

Q 3174834756 Prove that the rate of change of the angular momentum of a particle is equal to the torque acting on it.

Solution: The torque rotating a particle in the xy plane is
$$\tau = xF_y - yF_x. \qquad (i)$$
Here $p_x = mv_x$ and $p_y = mv_y$ are the x and y components of the linear momentum of the body. According to Newton's second law of motion,
$$F_x = \frac{dp_x}{dt} = m\frac{dv_x}{dt}, \qquad F_y = \frac{dp_y}{dt} = m\frac{dv_y}{dt}.$$
Substituting in equation (i), we get
$$\tau = m\left(x\frac{dv_y}{dt} - y\frac{dv_x}{dt}\right) = m\frac{d}{dt}(xv_y - yv_x) = \frac{d}{dt}(xp_y - yp_x) = \frac{dL}{dt},$$
where $L = xp_y - yp_x$ is the angular momentum.

Q 3154723654 What is the analogue of mass in rotational motion? Derive the expression for the kinetic energy of a rotating body.

Solution: Moment of inertia is the analogue of mass in rotational motion.
Let the body consist of particles of masses $m_1, m_2, m_3, \ldots$ at perpendicular distances $r_1, r_2, r_3, \ldots$ respectively from the axis of rotation. If $v_1, v_2, v_3, \ldots$ are the respective linear velocities of the particles, then $v_1 = r_1\omega$, $v_2 = r_2\omega$, $v_3 = r_3\omega$, and so on. The kinetic energy of mass $m_1$ is $\frac{1}{2}m_1 v_1^2 = \frac{1}{2}m_1 (r_1\omega)^2$. Similarly, the kinetic energies of the other particles of the body are $\frac{1}{2}m_2 r_2^2\omega^2$, $\frac{1}{2}m_3 r_3^2\omega^2$, etc. Adding,
$$E_k = \frac{1}{2}\left(\sum_{i=1}^{n} m_i r_i^2\right)\omega^2 = \frac{1}{2}I\omega^2, \qquad \text{where } I = \sum_{i=1}^{n} m_i r_i^2.$$

Q 3124123051 Derive the relations (i) $L = I\omega$ and (ii) $\tau = I\alpha$.

Solution: (i) We know $\vec L = \vec r \times \vec p = \vec r \times m\vec v$; for circular motion, $L = rmv = mr^2\omega = I\omega$. (ii) $\tau = rF = rma = mr^2\alpha = I\alpha$, using $a = r\alpha$.

Q 3134034852 Derive an expression for the moment of inertia of a thin circular ring about an axis passing through its centre and perpendicular to the plane of the ring.

Solution: Consider a ring of mass $M$ and radius $r$. Divide the ring into a large number of small segments, each of length $dl$. Each segment has mass $m' = \frac{M}{2\pi r}\,dl$ and moment of inertia $I' = m' r^2$. The net moment of inertia of the ring about the perpendicular axis is
$$I = \int_0^{2\pi r} \frac{M}{2\pi r}\, r^2\, dl = \frac{Mr}{2\pi}\,\big[\,l\,\big]_0^{2\pi r} = Mr^2.$$
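The last two derivations can be checked by brute-force summation over discrete mass elements (a sketch with assumed values for M, R, ω and the number of elements N):

```python
# Every element of a thin ring sits at distance R from the central axis,
# so summing dm*R^2 over N elements must give I = M*R^2, and summing the
# elements' kinetic energies 0.5*dm*(R*omega)^2 must give 0.5*I*omega^2.
M, R, omega, N = 2.0, 0.5, 3.0, 1000
dm = M / N
I = sum(dm * R**2 for _ in range(N))
KE = sum(0.5 * dm * (R * omega)**2 for _ in range(N))
assert abs(I - M * R**2) < 1e-12           # I = M R^2
assert abs(KE - 0.5 * I * omega**2) < 1e-12  # E_k = (1/2) I omega^2
```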
## Fall 2018 – Summer 2019 – Abstracts

Wednesday, October 31, 2018. Mathematics Colloquium. Prof. Tyrone Crisp, Department of Mathematics and Statistics, University of Maine. “Representations of finite groups: from $S_n$ to $GL_n$ and into the wilderness” 3:30 – 4:20 pm, Hill Auditorium, Barrows (ESRB). Snacks at 3:15pm.

Abstract: In representation theory one studies the ways in which an abstract group can be represented by linear transformations of a vector space. The case where the group is a symmetric group $S_n$ and the vector space is finite-dimensional over the complex numbers has been intensively studied since the beginnings of the subject. In this talk I shall explain how the well-understood theory for $S_n$ can be used to organize the representation theory of other families of groups, such as hyperoctahedral groups and finite general linear groups. I shall also present some work in progress which aims to apply similar techniques to families of groups whose representations we cannot reasonably expect to classify. This talk will be aimed at a general mathematical audience.

Wednesday, November 6, 2017. Teaching workshop. Prof. Natasha Speer and Jen Tyne, Department of Mathematics and Statistics, University of Maine. “i-Clickers” 3:30 – 4:20 pm, Hill Auditorium, Barrows (ESRB). Snacks at 3:15pm.

Some faculty are using clickers for the first time, and others are considering using clickers in future semesters. This workshop will be focused on math classes, including how to manage clicker questions, what makes an effective math/stats clicker question, and more. Even if you have never thought about using clickers in your classroom, we welcome you to join us!

Wednesday, January 30, 2019. Mathematics Colloquium. Prof. Robert Franzosa, Department of Mathematics and Statistics, University of Maine. “Two Talks in One” 3:30 – 4:20 pm, Hill Auditorium, Barrows (ESRB). Snacks at 3:15pm.
Abstracts:

True/False Cards: A Hands-on Deductive Reasoning Calculator. The True/False deck of cards can be used for a hands-on approach to basic deductive reasoning topics typically seen in an introductory abstract mathematics course. I will show how the deck can be used to compare propositions for equivalence, identify tautologies, identify contradictions, construct truth tables, construct valid arguments, and solve logic puzzles.

What is a Walk a Game Worth? The Baseball Simulator is a baseball simulation program that replays Major League Baseball games and seasons using team statistics. I will introduce the program and show how it can be used as a baseball analytics tool to answer questions like: How many more wins would a team have if they drew one more walk per game during the season?

Wednesday, February 4, 2019. Mathematics Colloquium. Dr. Joan Ferrini-Mundy, President, University of Maine and the University of Maine at Machias. “Integrating Research, Policy, and Practice in Mathematics Education: What Does It Mean to ‘Make a Difference’?” 3:00 – 4:00 pm, Hill Auditorium, Barrows (ESRB). Refreshments will be served.

Abstract: What does it mean for educational research to “make a difference”? Using examples from mathematics education, I will explore relationships among basic, foundational, applied, and use-inspired research. Research on student learning, teaching, and instructional materials has impacted and informed federal and state policy, reform and transformation efforts, and educational practice both by design and by serendipity. We will discuss key grand challenges in education today, both in Maine and beyond, and the potential for research to have a role in their solution. This colloquium is co-sponsored with the Maine Center for Research in STEM Education (RiSE Center).

Wednesday, March 6, 2019. Mathematics Colloquium. Prof. Julian Rosen, Department of Mathematics and Statistics, University of Maine.
“How to take a picture of something very far away” 3:30 – 4:20 pm, Hill Auditorium, Barrows (ESRB). Refreshments will be served at 3:15pm. Abstract: Very long baseline interferometry (VLBI) is a technique for imaging distant celestial objects. VLBI involves combining simultaneous observations from an array of telescopes spread across the globe, allowing much greater resolution than a single telescope could provide. However, recovering an image from VLBI data is mathematically difficult because the data is almost always sparse and noisy. In this talk, I will describe some of the mathematics of VLBI. Wednesday, April 3, 2019. Mathematics Colloquium. Prof. Timothy Boester, Department of Mathematics and Statistics, University of Maine. “Scaffolding student thinking: two examples of describing changing quantities” 3:30 – 4:20 pm, Hill Auditorium, Barrows (ESRB). Refreshments will be served at 3:15pm. Abstract: What types of questions or classroom experiences can help students learn how to describe changing quantities? This talk will connect two different projects focused on the research of student thinking of covariation, the “reasoning about values of two or more quantities varying simultaneously” (Thompson & Carlson, 2017). First we’ll examine how a sixth grade classroom developed meta-representational competence of the slope of linear functions. Then we’ll turn our attention to how the Pathways Pre-Calculus curriculum, currently used in MAT 122, develops the concept of exponential growth. Wednesday, April 17, 2019. Mathematics Colloquium. Dr. Frank Thorne, University of South Carolina. “An Analytic Perspective on Arithmetic Statistics” 3:30 – 4:20 pm, Hill Auditorium, Barrows (ESRB). Refreshments will be served at 3:15pm. Abstract: Arithmetic statistics is the science of counting arithmetic objects –number fields, class groups, and so on. 
Often, problems can be separated into two parts: first, prove a “parametrization theorem”, connecting these objects to lattice point counting problems; second, carry out the lattice point counting problem. In this talk, I will give an overview of this subject area with a focus on the second part — how can we count lattice points efficiently, and what kinds of theorems can one expect to obtain as a result? Friday, April 19, 2019. Graduate Seminar. Jaeho Choi, MA Mathematics Candidate, University of Maine. “Generalized Derivatives for Nonsmooth Problems” 3:00 – 4:00 pm, Room 421, Neville Hall. Abstract: Derivative information is useful for many problems found in science and engineering that require equation solving or optimization. Driven by its utility and mathematical curiosity, researchers over the years have developed a variety of generalized derivatives. In this talk, we will focus our attention on Clarke’s generalized derivative for Lipschitzian functions, which roughly is the smallest convex set containing all nearby derivatives of a domain point of interest. Clarke’s generalized derivative possesses a strong theoretical and numerical toolkit analogous to that of the classical derivative. This includes, for example, nonsmooth equation-solving and optimization methods, as well as nonsmooth versions of the MVT and the implicit function theorem. However, it is generally difficult to calculate Clarke’s generalized derivative. We will discuss pros and cons of Clarke’s generalized derivative in the finite dimensional setting and recent tools developed to calculate Clarke’s generalized derivative in a straightforward way. We will end the talk by stating the goal of our work, which is to extend Clarke’s theory and their recent tools to Banach spaces so that they can be used to tackle problems that are naturally set in infinite dimensions. Tuesday, April 23, 2019. Graduate Seminar. Puspanjali Subudhi, MA Mathematics Candidate, University of Maine. 
“The Conway-Maxwell-Poisson Distribution and its Application” 10:00 – 11:00 am, Room 421, Neville Hall.

Abstract: The Poisson distribution is generally employed to analyze discrete data. However, its reliance on a single parameter limits its flexibility in many applications. In this talk we will present a data set where the Poisson distribution does not fit. To analyze such data we will present a more flexible model with more than one parameter. The structural properties of the proposed model will be presented and a data set will be analyzed using the new model.

Thursday, April 25, 2019. Mathematics Colloquium. Kevin Roberge, Department of Mathematics and Statistics, University of Maine. “A Story About Small and Large Things” 3:30 – 4:20 pm, Hill Auditorium, Barrows (ESRB). Refreshments will be served at 3:15pm.

Abstract: Are you at your limit with limits? Does your life need more infinitely large and infinitesimal numbers? Do you enjoy complicated semantic conversations in and around the foundations of mathematics? If you said yes to those questions, you might enjoy learning more about Internal Set Theory, created by Edward Nelson in the seventies as an attempt to recreate Abraham Robinson’s nonstandard analysis starting from set theory. Nonstandard analysis is a rigorous treatment of infinitesimal and infinitely large real numbers. Internal Set Theory pulls back the curtain and supposes that these nonstandard elements were there all along, not only within the real numbers but within any infinite set. We’ll begin and end with the Dirac delta function, taking a circuitous tour of the amazing and the odd within Internal Set Theory.

Tuesday, May 7, 2019. Graduate Seminar. Puspanjali Subudhi, MA Mathematics Candidate, University of Maine. “Analysis of Survival Data by Weibull Conway-Maxwell Poisson” 10:00 – 11:00 am, Room 421, Neville Hall.
Abstract: In life testing and survival analysis, the components are sometimes arranged in series or parallel systems and the number of components is unknown. This unknown number of components is considered to be random, following an appropriate probability mass function. More specifically, this problem arises in cancer clinical trials, where the number of metastatic cells (clonogens), denoted by N, is unknown and the event occurs as soon as one of the clonogens metastasizes. In damage models, the number of cracks is unknown and the event occurs as soon as the first failure occurs. In this presentation we will model the survival data with a Weibull baseline distribution and with N following the Conway-Maxwell-Poisson distribution. This gives rise to four parameters in the model, and to increasing, decreasing, bathtub and upside-down bathtub failure rates.

Tuesday, May 7, 2019. Graduate Seminar. Ariel Farber, MA Mathematics Candidate, University of Maine. “Poisson Processes: Background and Beginning to Build Models” 12:30 pm – 1:30 pm, Room 421, Neville Hall.

Abstract: Real-world events are often modeled using known probability distributions that behave similarly to the events themselves in nature. However, many distributions prove difficult to work with when developing such models. For this reason, Poisson processes are often utilized to model discrete events in continuous time. Characteristics of Poisson processes lend themselves to both simulating the behavior of events and deriving differential equations that describe the system. These characteristics will be presented and used to begin derivation of an epidemiological model.
# Simulate, Analyze, Measure Electronics (SAME)

In electronics, there are 3 perspectives on a circuit:

1. Simulation - simulate to observe what the real-world results should be like
2. Analysis - use mathematical models to analyze and predict the results
3. Measurement - prototype the circuit and measure

If the results from the 3 perspectives are the SAME, one can independently conclude that the result is correct. This is a good way to learn electronics, and indispensable when one needs to modify an existing circuit or design a new circuit.

## We like short equations

An equation taking up more than two finger-widths on each side is not very useful in design.

The voltage gain of a common-emitter amplifier can be written out in a long, exact form, or in this shorter version:

$$A_V = -{R_C \over r_e} = -{{R_C I_E} \over 26mV }$$

On this site, we prefer the shorter one, because it enables the reader to see quickly that the main determinants of the voltage gain are the parameters $R_C$ and $I_E$. This is the first-order effect. The long equation is more useful for higher-order analysis.

## The odds are against you in the lab

"The odds are overwhelmingly against the learner getting a new circuit working right the first time." (Dr GH Tan)

The circuit shown is a simple LED circuit. The circled parts of the circuit are the places where the learner has to wire or orient the part correctly. Let's say that there is a 1/2 chance of getting each action done right. The circuit works if and only if all the actions are done correctly, and the probability of that happening is

$$P = {1 \over 2} \times {1 \over 2} \times {1 \over 2} \times{1 \over 2} = {1 \over 16}$$

So you actually have better odds guessing the outcome of a coin toss than getting this 'simple' circuit wired up correctly the first time you attempt it. We improve the odds by

• using a breadboard simulator to check the wiring.
• using online simulation to show the user the outcome of a correct circuit operation.
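Both calculations on this page are one-liners to verify. In the sketch below, the component values (R_C = 3.3 kΩ, I_E = 1 mA) are hypothetical, chosen only to exercise the short gain formula; they are not from the original page:

```python
# Probability that all four independent wiring actions are done correctly,
# each with a 1/2 chance:
p_all_correct = (1 / 2) ** 4
assert p_all_correct == 1 / 16

# Short-form common-emitter voltage gain: A_V = -R_C * I_E / 26 mV
R_C = 3300.0   # ohms (hypothetical value)
I_E = 0.001    # amperes (hypothetical value)
A_V = -(R_C * I_E) / 0.026
assert abs(A_V - (-126.92)) < 0.01   # about -127, set by R_C and I_E alone
```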
# High-resolution tip-enhanced Raman scattering probes sub-molecular density changes

## Abstract

Tip-enhanced Raman spectroscopy (TERS) exhibits a new selection rule and sub-nanometer spatial resolution, which is attributed to the plasmonic near-field confinement. Despite recent advances in simulations of TERS spectra under highly confined fields, a simple physical mechanism has remained elusive. In this work we show that single-molecule TERS images can be explained by local sub-molecular density changes induced by the confined near field during the Raman process. The local sub-molecular density changes determine the spatial resolution in TERS and the gradient-based selection rule. Using this approach we find that the four-fold symmetry of meso-tetrakis(3,5-di-tert-butylphenyl)porphyrin (H2TBPP) TERS images observed in experiments arises from the combination of degenerate normal modes localized in the functional side groups rather than the porphyrin ring, as previously considered. As an illustration of the potential of the method, we demonstrate how this new theory can be applied to microscopic structure characterization.

## Introduction

Tip-enhanced Raman spectroscopy (TERS) is a powerful technique to measure molecular properties with microscopic precision1,2,3,4,5,6,7,8,9,10,11. TERS measurements provide much richer information than traditional Raman spectroscopy, which is enabled by using a sharp metallic tip to probe the molecules in a sub-nanometer junction12,13,14. The sharp metallic tip confines the plasmonic near field in an extremely small volume, where field-gradient effects become prominent, leading to the significantly modified selection rule in TERS15. TERS spectra vary with the spatial movement of the tip. Many works have reported that such spatial resolution achieves nanometer and sub-nanometer scales when atomically sharp tips are used16,17,18,19,20,21,22,23. One prime demonstration of these unique features is high-resolution TERS imaging.
A TERS image of a given normal mode is a 2-dimensional (2D) mapping of the TERS intensities varying with the position of the tip24,25. The TERS images of different normal modes are predicted to be different, as they contain the information of the unique selection rules in TERS26. Therefore, rationalizing the mode specificity and spatial variation of TERS imaging is necessary for fully understanding the physical mechanism of high-resolution TERS. Many theoretical modeling works were carried out to simulate TERS images by calculating the molecular responses to narrowly distributed near fields24,27,28,29. It was reported that atomic resolution can be achieved when the near-field confinement reaches a few Ångstroms in diameter, and each normal mode can be uniquely resolved26,30,31. TERS images in both simulations and experiments suggested strong correlations between the hotspot distributions and the vibrating atoms. However, the molecular response is a non-local property and thus not easily localized on individual atoms. Therefore, despite the success in simulating TERS images, existing theories do not provide clear and consistent explanations of how the vibrating atoms locally affect the TERS intensities. In this work, we address the question of what local property of a molecule is probed by the TERS tip. We demonstrate that sub-molecular density changes are probed by the confined near field in TERS and lead to the varying Raman intensities over normal modes and space. The probed density change, which we define as the Raman polarizability density, is a truly local property and is strongly correlated with the given vibrational mode. The density distribution is extracted from a small volume defined by the highly confined near field, leading to the spatially variant TERS intensities. This approach offers a clear explanation for the mode specificity and spatial variation of TERS signals.
We show that the proposed mechanism accurately reproduces atomistic simulations and experimental results, and more importantly provides intuitive interpretations of TERS images. Finally, we demonstrate how TERS imaging combined with the new theory can be applied to microscopic characterization, and discuss its potential to compete with scanning tunneling microscopy (STM).

## Results

### Locally integrated Raman polarizability density

The method adopted in this work is termed locally integrated Raman polarizability density (LIRPD). All the molecular properties and the near field depend on the frequency of the external field and are in tensor form. The explicit notations of frequency and the tensor subscripts are omitted for simplicity. A detailed justification of the method is provided in Supplementary Methods. The concept of distributed polarizability density was first introduced by Maaskant and Oosterhoff in the theory of optical rotation32, and was later generalized by Hunt33,34. Briefly, the molecular polarizability α can be expressed as a spatial integration of a polarizability density ρ(α), $$\alpha = - {\int} {\rho ^{(\alpha )}} ({\mathbf{r}})\cdot d{\mathbf{r}} = - {\int} {\hat \mu ^{{\mathrm{eff}}}} \cdot \delta \rho ({\mathbf{r}})\cdot d{\mathbf{r}},$$ (1) where δρ(r) is the linear change in the electron density of a molecule due to an external electric field, $$\hat \mu ^{{\mathrm{eff}}}$$ is the effective dipole operator, and r is a vector in space. The polarizability density, ρ(α)(r), is a local property as it is derived from the electron density distribution, which differs from the definition in refs. 33,34. However, the concept of “polarizability density” is similar in that its spatial integral gives rise to the molecular polarizability.
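As an illustration of Eq. (1), the spatial integration can be discretized on a grid. The snippet below is a schematic sketch, not the authors' implementation: it assumes the zz component, a unit external field so the effective dipole operator reduces to the z coordinate, and the sign convention of Eq. (1); `delta_rho` stands in for the field-induced density change.

```python
import numpy as np

def polarizability_zz(grid, delta_rho, dV):
    """Discretized sketch of Eq. (1), zz component: with the effective
    dipole operator taken as the z coordinate in a unit external field,
    alpha_zz ~ -sum_i z_i * delta_rho_i * dV.
    grid: (N, 3) array of grid points; delta_rho: induced density at
    those points; dV: volume element (all conventions assumed here)."""
    z = grid[:, 2]
    return -np.sum(z * delta_rho) * dV
```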
In the linear-response time-dependent density functional theory, the induced electron density of the molecule due to an electric field perturbation is expressed as $$\delta \rho ({\mathbf{r}}) = {\int} \chi ({\mathbf{r}},{\mathbf{r}}\prime )\hat v^{{\mathrm{pert}}}({\mathbf{r}}\prime )d{\mathbf{r}}\prime ,$$ (2) where χ(r, r′) is the density-density linear response function35 and $$\hat v^{{\mathrm{pert}}}({\mathbf{r}}\prime )$$ is the perturbation operator. Because the confined near field dominates the field distribution in the TERS junction, we can represent both the perturbation and effective dipole operators as the product of the near field distribution F(r − R) centered at R and a free-molecule operator in the unit external field, $$\begin{array}{l}\hat \mu ^{{\mathrm{eff}}}({\mathbf{r}}) = - {\mathbf{F}}({\mathbf{r}} - {\mathbf{R}})\cdot \hat \mu ,\\ \hat v^{{\mathrm{pert}}}({\mathbf{r}}\prime - {\mathbf{R}}) = - {\mathbf{F}}({\mathbf{r}}\prime - {\mathbf{R}})\cdot \hat \mu ({\mathbf{r}}\prime ).\end{array}$$ (3) Here $$\hat \mu$$ is the dipole operator, and the perturbation operator entails the plasmon-induced near field. Combining the first three equations, we obtain the molecular polarizability that is now dependent on the tip position, $$\begin{array}{c}\alpha ({\mathbf{R}}) = {\int} {\hat \mu ^{{\mathrm{eff}}}} ({\mathbf{r}})[{\int} \chi ({\mathbf{r}},{\mathbf{r}}\prime )\hat v^{{\mathrm{pert}}}({\mathbf{r}}\prime )d{\mathbf{r}}\prime ]d{\mathbf{r}}\\ = {\int} {\mathbf{F}} ({\mathbf{r}} - {\mathbf{R}})\hat \mu [{\int} \chi ({\mathbf{r}},{\mathbf{r}}\prime ,\omega )\hat \mu ({\mathbf{r}}\prime ){\mathbf{F}}({\mathbf{r}}\prime - {\mathbf{R}})d{\mathbf{r}}\prime ]d{\mathbf{r}}.\end{array}$$ (4) Because the near field is highly confined in high-resolution TERS, the induced density away from the near-field center diminishes quickly. 
Thus, we make a local approximation to the induced density perturbed by the near field, and take the near-field distribution out of the inner integral. Then we obtain the molecular polarizability in the form of locally integrated polarizability density, $$\begin{array}{c}\alpha ({\mathbf{R}}) = {\int} {\mathbf{F}} ({\mathbf{r}} - {\mathbf{R}})\hat \mu [{\int} \chi ({\mathbf{r}},{\mathbf{r}}\prime ,\omega )\hat \mu ({\mathbf{r}}\prime ){\mathbf{F}}({\mathbf{r}}\prime - {\mathbf{R}})d{\mathbf{r}}\prime ]d{\mathbf{r}}\\ \mathop { \approx }\limits^{{\mathrm{local}}} {\int} {\mathbf{F}} ({\mathbf{r}} - {\mathbf{R}})\hat \mu [{\int} {\chi ^{{\mathrm{free}}}} ({\mathbf{r}},{\mathbf{r}}\prime ,\omega )\hat \mu ({\mathbf{r}}\prime )d{\mathbf{r}}\prime ]{\mathbf{F}}({\mathbf{r}} - {\mathbf{R}})d{\mathbf{r}}\\ = {\int} {{\mathbf{F}}({\mathbf{r}} - {\mathbf{R}})} \cdot \hat \mu {\mkern 1mu} \delta \rho ^{{\mathrm{free}}}({\mathbf{r}})\cdot {\mathbf{F}}({\mathbf{r}} - {\mathbf{R}})\cdot d{\mathbf{r}}\\ = {\int} {{\mathbf{F}}({\mathbf{r}} - {\mathbf{R}})} \cdot \rho ^{(\alpha )}({\mathbf{r}})\cdot {\mathbf{F}}({\mathbf{r}} - {\mathbf{R}})\cdot d{\mathbf{r}}.\end{array}$$ (5) Here ρ(α)(r) is the free-molecule polarizability density as given in Eq. (1). The validity of this local approximation will be verified  by explicit comparison with the fully non-local response as shown below. The Raman polarizability density, denoted as δρ(α) = ∂ρ(α)/∂Qk, is the change of molecular polarizability density due to the vibrational mode k. It is calculated by the finite differentiation of polarizability densities with respect to small atomic displacements in a given vibrational mode. 
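The finite-differentiation step can be sketched numerically. This is a minimal illustration rather than the authors' code; `rho_alpha_at` is a hypothetical callback standing in for an electronic-structure evaluation of the polarizability density at a geometry displaced along the normal-mode coordinate:

```python
import numpy as np

def raman_polarizability_density(rho_alpha_at, dq=0.01):
    """Three-point central difference of the polarizability density
    along the normal-mode coordinate Q_k:
        d(rho^(alpha))/dQ_k ~ [rho(+dq) - rho(-dq)] / (2 dq).
    rho_alpha_at(q) returns the polarizability density (as an array)
    for a geometry displaced by q along the mode."""
    return (rho_alpha_at(+dq) - rho_alpha_at(-dq)) / (2.0 * dq)
```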
In a TERS junction, the effective Raman polarizability density is then represented as the free-molecule Raman polarizability density distributed over the near-field distribution (F(r − R)), $$\delta \rho _{{\mathrm{loc}}}^{(\alpha )}({\mathbf{r}},{\mathbf{R}}) = {\mathbf{F}}({\mathbf{r}} - {\mathbf{R}})\cdot \delta \rho ^{(\alpha )}({\mathbf{r}})\cdot {\mathbf{F}}({\mathbf{r}} - {\mathbf{R}}).$$ (6) The TERS intensity of a certain vibrational mode is proportional to the square of the integrated effective Raman scattering polarizability density, formulated as $$I(Q_k) \propto [{\int} \delta \rho _{{\mathrm{loc}}}^{(\alpha )}({\mathbf{r}})\cdot d{\mathbf{r}}]^2.$$ (7) Due to the confinement of the near field, the integration over all space can be effectively approximated by local integration within a finite volume. This integration volume is determined by the full width at half maximum (FWHM) of the near-field distribution. Here we have briefly summarized the method of LIRPD without explicitly writing down the element form of all matrices, since only the zz component of the polarizability tensor is considered in calculating the Raman intensities (the long axis of the TERS junction aligns with the z direction). A detailed description of LIRPD in full tensor representation is provided in Supplementary Methods. The local approximation made in Eq. (5) can be improved by including the densities of higher-order polarizability tensors, for example, the quadrupole-dipole polarizability ($${\cal{A}}$$ tensor)36 density. This is equivalent to applying a multipole expansion to the effective dipole operator37,38, which introduces a semi-local correction to the local approximation. Including $${\cal{A}}$$-tensor densities slightly improves the accuracy when an atomically confined field is applied to a small molecule (benzene). But we find that for larger systems, since the required near field is less confined, the contribution from $${\cal{A}}$$-tensor densities becomes negligible.
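Equations (6) and (7) translate directly into a numerical recipe: weight the free-molecule Raman polarizability density by the near-field profile on both sides, integrate on a grid, and square. The sketch below assumes an isotropic 3D Lorentzian field and a density sampled on discrete grid points; all names are illustrative rather than taken from the LIRPD code:

```python
import numpy as np

def lorentzian_field(grid, R, fwhm):
    """Isotropic 3D Lorentzian near-field profile centered at tip
    position R; with gamma = FWHM / 2 the profile is 1 at the center
    and 1/2 at |r - R| = gamma."""
    gamma = fwhm / 2.0
    d2 = np.sum((grid - R) ** 2, axis=-1)
    return gamma ** 2 / (d2 + gamma ** 2)

def ters_intensity(grid, drho_alpha, R, fwhm, dV):
    """Eqs. (6)-(7): I(Q_k) ~ [sum_i F_i * drho_i * F_i * dV]^2."""
    F = lorentzian_field(grid, R, fwhm)
    return (np.sum(F * drho_alpha * F) * dV) ** 2

# Toy demonstration: an antisymmetric density (+1 at x = +1, -1 at
# x = -1) integrates to zero for a centered tip but not for an
# off-center tip that breaks the symmetry.
grid = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
drho = np.array([1.0, -1.0])
I_center = ters_intensity(grid, drho, np.array([0.0, 0.0, 0.0]), 2.0, 1.0)
I_offset = ters_intensity(grid, drho, np.array([1.0, 0.0, 0.0]), 2.0, 1.0)
```

The toy density mimics the symmetry argument made in the text for the benzene modes: a far-field-silent antisymmetric distribution gives zero intensity when the tip sits on the symmetry point, and a finite intensity once the confined field breaks the symmetry.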
Therefore, all presented TERS images are calculated by considering only the dipole-dipole polarizability density. The TERS images with additional $${\cal{A}}$$-tensor density contributions are provided for comparison in Supplementary Fig. 1. Here we take a benzene molecule as an example to demonstrate how the LIRPD approach works for TERS imaging. The Raman polarizability density distribution is plotted on the right panel of Fig. 1. Positive density values are colored blue and negative values yellow. The near field is confined in a red sphere, which we call the effective integration volume. In this work the near-field distribution is expressed as a 3D Lorentzian function. Compared with the widely used Gaussian field model, the Lorentzian distribution has slightly more pronounced tails, which better capture the background near field on the substrate away from the tip, as obtained from our atomistic electrodynamics calculations15. The diameter of the integration volume is the full width at half maximum (FWHM) of the field distribution. The Raman polarizability densities distributed within the red sphere are locally enhanced by the near field, leading to the effective scattering polarizability densities, which are then integrated over all space to obtain the Raman intensity that corresponds to the specific tip position recorded in the TERS image (Fig. 1, inset in the right panel). The imaging pattern is not sensitive to the field shape, as shown in Supplementary Fig. 2. Without the confined near field, the integration of the Raman polarizability density over all space leads to the far-field Raman signals of the molecule. The mechanism of LIRPD explains the gradient-based selection rule in plasmon-enhanced Raman spectroscopy as well as the spatial localization of the TERS intensity.

### TERS imaging and selection rule

TERS images are obtained by scanning the tip over a sample molecule and simultaneously collecting the Raman signals.
Atomically resolved TERS images were previously simulated, and confinement of the near field down to 5 Å in diameter was found to be necessary to achieve the ultrahigh resolution26. However, the local properties probed by the highly confined near field, which are key to establishing the dependence of high-resolution TERS images on molecular normal modes, were still not clear. For example, the simulated TERS images are drastically different between the symmetric and anti-symmetric bending modes of benzene, even though the same atoms are vibrating in these two normal modes. Using the LIRPD method, we illustrate where such spatial variation originates and how it is affected by the atomic vibrations. The consistency of this model is evidenced by reproducing the TERS images calculated by the hybrid atomistic electrodynamics/quantum mechanics method (DIM/QM) in ref. 26, which includes the fully non-local response. Here we use the same symmetric and anti-symmetric bending modes of benzene as examples. The normal modes of the symmetric (Fig. 2a) and anti-symmetric bending vibrations (Fig. 2e) and the related Raman polarizability density distributions (Fig. 2b, f) were calculated with the molecule-substrate mutual polarization taken into account. The spatial distributions of the Raman polarizability densities and the molecular vibrations are highly correlated. In the 664 cm−1 mode, all the hydrogen atoms symmetrically bend out of the molecular plane. The corresponding density distribution preserves the symmetry. The densities are largely localized on the hydrogen atoms and benzene ring (top of Fig. 2b). The distribution is symmetric, but the signs are opposite with respect to the molecular plane (bottom of Fig. 2b). The large atomic displacement leads to the prominent density distributions on the hydrogen atoms. The densities distributed over the benzene ring come from the coupled motions of the carbon atoms.
The 835 cm−1 mode is featured by anti-symmetric out-of-plane vibrations (Fig. 2e). The corresponding density distribution inherits the same anti-symmetry by having opposite signs in the xy plane either above or below the molecular plane. The in-plane opposite signs stem from the para-hydrogen atoms, coupled with the attached carbon atoms, vibrating in opposite directions. Across the molecular plane, the densities also have opposite signs around the same atoms. The near field is here represented by a 3D Lorentzian distribution with FWHMs of 1.3 Å centered at 1.0 Å above the benzene plane in our simulations. By using Eq. (6), the Raman polarizability densities are enhanced within the Lorentzian peak, while the densities outside the peak are smeared out. In this way the Raman density distribution is extracted from a small volume defined by the confined near field. In other words, the Raman densities are locally selected by the confined near field. The selected densities are then integrated over space to obtain a Raman intensity. For the mode at 664 cm−1, only the densities at the near-field position are greatly boosted. The integrated density (local polarizability density) is large because densities with the same sign are accumulated, resulting in a strong TERS signal. In contrast, the integrated density is close to zero in the integration volume above the center of benzene, because the local densities are distributed with opposite signs and thus integrate to zero. We accordingly see a quite low intensity in the center of the TERS image at 835 cm−1. Using the LIRPD method to calculate Raman intensities while scanning the tip over a molecule, we are able to reproduce the high-resolution TERS images predicted by the DIM/QM method (Fig. 2c, d, g, h).
In general, the TERS intensity is predominantly determined by two factors: the Raman polarizability density distribution and the local integration volume (near-field distribution) in terms of size and position. The Raman polarizability density distribution is dominated by the large atomic displacements in a normal mode, and governs the pattern of its TERS image. A narrow near-field distribution leads to the atomic resolution in TERS images. For instance, the image resolution is sensitive to the field FWHMs of the x and y components rather than that of the z component. Moreover, the height from the tip to the molecular plane, similarly defined as the near-field focal plane in ref. 26, plays a vital role in TERS imaging. A small change of tip height leads to a significantly different TERS image. These findings suggest that distributing the near fields within atomic dimensions over an appropriate imaging plane is the key to atomic resolution in TERS images (see Supplementary Fig. 3). It is noted that the integration of the Raman polarizability density without the confined near field leads to the typical far-field property of the molecule. For the selected two modes of benzene, the Raman polarizability density distributions are symmetric with opposite signs, so that the integration over all space is zero. This means the Raman signals are silent for these specific modes, which is consistent with the traditional selection rules. However, the confined near field breaks the symmetry, and thus leads to non-zero values after the integration. This provides the explanation for inactive Raman modes being evoked in plasmon-enhanced Raman spectra. This symmetry breaking of the Raman polarizability density distribution aligns with the field-gradient effects typically invoked to explain the high spatial resolution15,26,39. In TERS images, the hotspots indicate the tip positions that locally break the symmetry.
### Complex Raman polarizability density in resonant TERS

The LIRPD model is naturally transferable to resonant TERS spectra. Contributions from both the electronic and the vibrational transitions are coherently included in the Raman polarizability, which now has a non-trivial imaginary part. We take free-base porphyrin as an example to explore the correlation between Raman polarizability densities and resonant TERS images. Two representative modes of porphyrin are selected: one out-of-plane vibrational mode at 678 cm−1 and one in-plane mode at 1539 cm−1. The 678 cm−1 mode is characterized by the opposite out-of-plane bending of two hydrogen atoms attached to the para-nitrogens (Fig. 3a). The applied excitation energy is 2.29 eV, corresponding to the Qy(0, 0) transition of porphyrin. As shown in Fig. 3b, c, the real part of the density distribution reflects the dominant atomic displacement, and is symmetrically distributed with respect to the molecular plane with opposite signs. In the 678 cm−1 mode, the atoms vibrate perpendicularly to the molecular plane. Similar to the benzene out-of-plane modes, the real Raman polarizability density here is distributed closely around the vibrating atoms, and has opposite signs above and below the molecular plane. In contrast, the imaginary density distribution is asymmetric with respect to the molecular plane, with most of the densities distributed underneath the porphyrin molecule. However, underneath the molecule the imaginary Raman density distribution preserves the same symmetry as the vibrational mode. A similar trend is also observed in the 1539 cm−1 mode (Fig. 3e, f). As the 1539 cm−1 mode is an in-plane mode, the real Raman polarizability densities are more broadly distributed in-plane. The direction from positive to negative values follows the overall trend of the atomic displacement.
By locally integrating the complex Raman polarizability densities enhanced by the near field, mode-specific resonant TERS images with atomic resolution are obtained (Fig. 3g–j). The near fields with FWHMs of 2 Å are placed 1.5 Å above the molecular plane. The effective densities distributed in the scanning volume are illustrated in Fig. 3c, f and the locally enhanced Raman polarizability densities by the given tips are given in Supplementary Fig. 4. We again see the strong resonant Raman intensities at the dominant atomic displacements, which is consistent with the atomistic simulation results (Fig. 3g–j). The patterns in TERS images are mostly similar to the real Raman polarizability density distributions, which is attributed to the facts that the imaginary part of the density is overall much weaker than the real part and that the imaginary Raman densities are largely distributed underneath the molecule. The weak imaginary zz polarizability is expected, because the Qy(0, 0) band has a very weak oscillator strength due to the transition dipole moment on the xy plane. The mutual polarization dominates the interaction between the molecule and the substrate, which explains the imaginary Raman polarizability densities underneath the molecule. The overall patterns in these Raman polarizability density distributions and the corresponding TERS images align with the electronic transition dipole moment, which is along y axis in this specific example. We note that the TERS image of the 1539 cm−1 mode is not intuitively correlated to the real density distribution shown in Fig. 3e. Actually, the maximal density corresponding to the brighter hotspots in the TERS image is four-fold larger than the densities around the nitrogen atoms. This density value difference explains the contrast in the TERS image. 
Self-consistent solutions26,27,30 of the molecular property perturbed by a confined near field are considered the most accurate at the TDDFT level of theory, as they calculate the fully non-local response of the molecule to the near field (Eq. (4)). DIM/QM is regarded as the benchmark in this work, because it provides a consistent treatment of both the near-field distribution and the molecular properties. The local approximation made in Eq. (5) qualitatively reproduces the results from DIM/QM, which is evidence of the validity of the LIRPD approach. The agreement between the LIRPD and DIM/QM results is qualitatively good: the overall symmetry patterns of the benzene TERS images are preserved in LIRPD, and the two key features for benzene are also captured, namely that the hotspots are slightly off the atoms and that Raman-inactive modes are activated by the strong field gradient (Supplementary Fig. 5). Because of this local treatment of the electronic density, the FWHM of the Lorentzian field has to be smaller than in DIM/QM. The agreement between the LIRPD and DIM/QM results can be improved by considering the multipole expansion at the density level. In Supplementary Fig. 1, we have shown that for small molecules like benzene and porphyrin, the $${\cal{A}}$$-tensor densities drive the hotspots slightly further away from the vibrating atoms, and the effective integration volume becomes closer to that in the DIM/QM simulations. However, the contribution from the $${\cal{A}}$$ tensor decreases when the near-field confinement is beyond the atomic scale, which is the case for the following analysis interpreting experimental results. Moreover, because the Raman polarizability density in LIRPD is independent of the tip position, the LIRPD calculation is orders of magnitude (roughly the total number of grid points) faster than DIM/QM, which is advantageous for analyzing the large molecules seen in experiments.
### Interpreting experimental TERS images

In the pioneering work of TERS imaging24, a single molecule of meso-tetrakis(3,5-di-tert-butylphenyl)porphyrin (H2TBPP) was visualized with sub-nanometer resolution via precise tuning of the plasmon resonance coupled with molecular vibrations. The four-fold symmetry in both experimental and simulated TERS images is invariant across different normal modes, which was attributed to electronic resonance and tautomerization24,27. Using the energetically favored concave configuration of H2TBPP23,27, we find that the strong interaction between the molecule and the silver substrate leads to a more than 100 nm red shift of the Q and B bands in the absorption spectra in Supplementary Fig. 6. Hence it is questionable to assume free-molecule H2TBPP excitations in resonant TERS simulations. In our simulations we took the polarization interactions between the molecule and the metal substrate into account, and found that the Bx(0, 0) and By(0, 0) transitions of H2TBPP are excited at around 560 nm, while the Qy(0, 0) band is excited at 760 nm (Supplementary Table 1). So the 532 nm laser used in the experiment is more likely to excite the B-band transition of H2TBPP. Therefore, we revisit the resonant TERS imaging of H2TBPP, and interpret the invariant patterns across different normal modes based on the LIRPD mechanism. To accurately describe the near-field distributions in the plasmonic junction, the correlation between gap size and near-field confinement was investigated. The details are provided in Supplementary Table 2. A reasonable approximation to the plasmonic near field is that the FWHMs are 12 Å for the x and y components and 6.0 Å for the z component. The narrower distribution along the z-axis is due to the fact that the near field is squeezed by the short-range dipole-dipole interaction in the nanocavity15,40. The center of this field distribution is placed 2.7 Å above the molecule.
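The anisotropic field model can be written down compactly. The factorized product of one-dimensional Lorentzians below is an assumption of this sketch; the text specifies only a 3D Lorentzian with FWHMs of 12 Å in x/y and 6 Å along z:

```python
def anisotropic_lorentzian(dx, dy, dz, fwhm_xy=12.0, fwhm_z=6.0):
    """Near-field profile with different widths in-plane and along the
    junction axis (z), using the FWHMs estimated for the H2TBPP
    junction; displacements and widths in Angstroms, gamma = FWHM / 2.
    The product form (rather than a single radial Lorentzian) is an
    assumption of this sketch."""
    gxy, gz = fwhm_xy / 2.0, fwhm_z / 2.0
    return (gxy**2 / (dx**2 + gxy**2)) \
         * (gxy**2 / (dy**2 + gxy**2)) \
         * (gz**2 / (dz**2 + gz**2))
```

At the field center the profile is 1, and it falls to 1/2 at half the FWHM along each axis, i.e. at 6 Å in-plane and 3 Å vertically.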
The TERS spectrum obtained by LIRPD with the field center on top of a lobe agrees well with the experimental spectrum (Supplementary Fig. 7). Based on the simulated TERS spectrum, the TERS mappings at the critical spectral peaks were explored in detail. We find that there exist multiple degenerate modes within the integration window in ref. 24, and each of these modes has a distinct TERS image. The modes with the largest TERS intensities are featured by prominent butyl vibrations. For example, the region around 810 cm−1 is associated with the modes at 807.8, 808.4, 810.0, and 811.5 cm−1, which are characterized by the vibrations of different butyl groups (Fig. 4a). The simulated TERS images with By(0, 0) excitation are shown in Fig. 4. Combining the TERS images of the dominant modes around 810 cm−1 within a 20 cm−1 band width, we find the total TERS image matches the experimental mapping (Fig. 4c), also exhibiting the four-lobe symmetric pattern covering the butyl groups. Since the integration volume is above the entire molecule, the pyrrole vibrations are not captured in the TERS image (see Supplementary Fig. 8). The combined TERS image at around 1185 cm−1 is similar to that at 810 cm−1. The simulated four-lobe pattern matches the experimental mapping (Fig. 4d), and the multiple modes featured by butyl vibrations contribute dominantly to the TERS image (Fig. 4b). The TERS images at frequencies of 900, 990, and 1520 cm−1 are simulated as well (Supplementary Fig. 9), and all are consistent with the experimental results. In particular, the contrast and the central dark area become smaller toward the high wavenumbers in our simulations. The four-fold symmetry in H2TBPP TERS images was previously attributed to hydrogen tautomerization26,27. However, in this work we clearly see that the four-fold symmetry is obtained by combining the TERS images of degenerate modes, without tautomer contributions.
The degenerate vibrations come from the symmetry of the molecular structure. In the experiment reported in ref. 24, it is very likely that all four degenerate modes are included in the integration window, which leads to the same four-fold symmetry across different frequency regions. By enforcing tautomerization, the TERS images remain the same except for being slightly smoother and more symmetric (Supplementary Fig. 10). Thus, we believe that TERS images of H2TBPP are not sensitive to hydrogen tautomerization. We will further discuss the tautomerization effect on TERS images using a porphycene molecule whose tautomers have been clearly identified in STM experiments. Moreover, we find the TERS images calculated at the Qy(0, 0) and By(0, 0) transitions are almost identical, as shown in Supplementary Fig. 11. This suggests that the Raman scattering properties of the side groups, which dominate the TERS images, are insensitive to these excited states. This is expected because both of these electronic excitations are localized in the base porphyrin ring. The TERS tip will not be able to probe the base ring unless it is forced down to the bottom of the molecule. It is generally difficult to differentiate H2TBPP normal modes based on TERS images, as was seen in experiment. Our simulation results suggest that the prevailing four-fold symmetry in H2TBPP TERS images is largely due to the combination of multiple degenerate modes with butyl vibrations, rather than tautomerization or electronic resonance effects. One would expect the TERS images of H2TBPP to be more differentiable if higher spatial resolution is achieved in experiments, and if more precise Raman measurements are performed so that the integration window becomes narrow enough to eliminate multiple-mode contributions. Nevertheless, the LIRPD method offers a consistent and flexible approach to the interpretation of experimental measurements on large molecules.
### TERS imaging for microscopic structure characterization

We further explore the effect of hydrogen tautomerization on TERS images, and at the same time demonstrate how TERS imaging can be applied as a structural characterization tool. We take porphycene as an example, whose tautomers have been identified in experiment with the help of low-temperature STM41. The optimized geometries of three porphycene tautomers, one trans and two cis configurations (denoted as cis and cis′), are shown in Supplementary Fig. 12. The trans and cis porphycene are planar, while the hydrogen atoms in the cavity of cis′ porphycene are out of the macrocycle plane due to a strong steric repulsion. In the TERS simulation using the LIRPD method, we examine the normal mode around 1250 cm−1, as it was previously reported to be a prominent peak in resonant SERS42. The near field is represented as a 3D Lorentzian distribution with an FWHM of 5 Å for all three Cartesian components and is centered 2 Å above the molecule. The resonant TERS images generated by the LIRPD method at the excitation energy of 2.21 eV are shown in Fig. 5. The simulations suggest that two modes contribute to the total TERS image at 1250 cm−1 with a band width of 20 cm−1. The dominant mode for each tautomer is characterized by central hydrogen-atom vibrations coupled with the pyrrole moieties (Fig. 5a, b). The Raman polarizability density distributions of the individual modes within the scanning volumes are illustrated in Supplementary Fig. 13. We again see that the resonant TERS image is largely determined by the real densities. The para-hydrogen atoms vibrating oppositely in the cavity lead to the large density distributions on the para-pyrrole moieties in the trans configuration. In the cis configuration the prominent density distributions are related to the ortho-hydrogen vibrations. The modes with large displacements of the central hydrogen atoms provide the major contributions to the total TERS images.
Generally, the overall hotspot symmetry follows the configuration of the two central hydrogens. There are four hotspots, with one brighter pair on the para-pyrrole moieties, for the trans configuration. For the cis configuration, there are two connected hotspots on the adjacent pyrroles and separate lobes on the other two pyrrole moieties. The TERS image of the cis′ configuration was simulated as well (Supplementary Fig. 14). The simulations indicate that different tautomers can be identified and differentiated through distinct TERS images, and the patterns are either trans or cis following the configuration of the central hydrogens. TERS imaging carries both structural and chemical information of mode vibrations, and TERS images can be even more distinguishable among tautomers. Thus, by combining the LIRPD interpretation with high-resolution measurements, we envision TERS to be complementary to STM for microscopic characterization.

## Discussion

In this work we illustrated that high-resolution TERS probes the molecule’s local polarizability density changes. The Raman polarizability densities are locally enhanced by the confined near field and then integrated over space, giving rise to the molecule’s near-field response. The density distribution is unique to each normal mode, and leads to a correspondingly unique TERS image. The local symmetry breaking in the integrated density distribution is at the root of the spatial variation of TERS intensities, and explains the gradient-based selection rules in TERS. The locally integrated Raman polarizability density provides theoretical insights into experimental TERS images and the origin of the hotspots from the point of view of the molecule’s locally probed property. The LIRPD mechanism is a simple and intuitive approach to the interpretation of high-resolution TERS images. With the help of the LIRPD interpretation, we demonstrated that TERS imaging can be applied to resolve subtle changes in molecular structure with atomic precision.
The key to achieving atomistic resolution is to confine the near field down to a few Ångstroms in experiment. Previous simulations indicate that such confinement requires the tip to be atomically sharp28,40,43,44,45. To maintain a stable sharp tip during scanning, a cryogenic, high-vacuum environment is preferred in experiments15,46. Moreover, this work and previous simulations26 suggest that TERS images are also very sensitive to the height of the focal plane relative to the molecule and to the field confinement along the vertical axis; thus flat molecules are generally favored. All of these conditions for obtaining high-resolution TERS images are difficult to fulfill. Nevertheless, we expect TERS imaging to have the potential to rival state-of-the-art scanning tunneling microscopy for microscopic characterization, and thus it holds great promise for monitoring chemical structure and transformation with sub-molecular resolution. ## Methods ### DIM/QM calculations A locally modified version of the Amsterdam Density Functional (ADF) program package47,48,49 was employed to perform all the simulations. The geometry optimizations, frequency, and linear-response calculations were carried out using the Becke–Perdew (BP86) exchange-correlation functional with the triple-ζ polarized (TZP) Slater-type basis, except for the H2TBPP molecule, which was calculated at the BP86/DZP level of theory in order to reduce the computational cost. The geometries of the benzene and porphyrin molecules were optimized with a small frozen core in the absence of the metal substrate, to be consistent with the conditions of previous work26. The adsorbed structures of the H2TBPP and porphycene molecules are strongly influenced by molecule–substrate interactions, so the metal substrate was included in those geometry optimizations.
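The differential cross section expression used in the Methods, eq. (8) below, is straightforward to evaluate numerically. The sketch below is a hedged illustration: the CODATA constants are real, but the polarizability derivative `alpha_k` is a made-up placeholder value, not a number from the simulations.

```python
import math

H = 6.62607015e-34       # Planck constant, J s
C = 2.99792458e8         # speed of light, m s^-1
KB = 1.380649e-23        # Boltzmann constant, J K^-1
EPS0 = 8.8541878128e-12  # vacuum permittivity, F m^-1

def raman_cross_section(nu_in_cm, nu_k_cm, alpha_k, T=298.0):
    """dsigma/dOmega of eq. (8); wavenumbers in cm^-1, alpha_k in SI units."""
    nu_in = nu_in_cm * 100.0  # cm^-1 -> m^-1
    nu_k = nu_k_cm * 100.0
    prefactor = (math.pi**2 / EPS0**2) * (nu_in - nu_k) ** 4
    mode_factor = H / (8 * math.pi**2 * C * nu_k)
    # thermal occupation correction for mode k
    boltzmann = 1.0 - math.exp(-H * C * nu_k / (KB * T))
    return prefactor * mode_factor * abs(alpha_k) ** 2 / boltzmann

# 2.21 eV excitation (~17825 cm^-1), the 1250 cm^-1 mode, placeholder alpha_k
sigma = raman_cross_section(17825.0, 1250.0, alpha_k=1e-42)
```

At 298 K and 1250 cm−1 the Boltzmann denominator is close to 1, so the thermal correction is minor for this mode; it matters more for low-frequency modes.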
### Polarizability density calculations The excited-state lifetime is set to 0.1 eV50, and the metal substrates, large enough to support the sample molecule, were treated with the discrete interaction model (DIM)51. The frequency-dependent complex dielectric functions of the metal surfaces were obtained from Johnson and Christy52. The cubic grids used for representing the density are determined by the sample molecule's structure and orientation on the surface. The boundary of the box is 4 Å away from the H2TBPP molecule and 3 Å for the other molecules. The step size is 0.4 Å for grids parallel to the metal surface; in the vertical direction it is 0.2 Å for H2TBPP and 0.1 Å for the other molecules. The Raman polarizability densities are obtained by the three-point numerical differentiation method. The Raman polarizability densities are locally enhanced by the near fields and then locally integrated. From the locally integrated Raman polarizability density, the differential cross section (dσ/dΩ) of Raman scattering is written as $$\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega} = \frac{\pi^2}{\epsilon_0^2}(\tilde\nu_{\mathrm{in}} - \tilde\nu_k)^4 \frac{h}{8\pi^2 c \tilde\nu_k} \frac{|\alpha_k^\prime|^2}{1 - \mathrm{exp}(-hc\tilde\nu_k/k_{\mathrm{B}}T)},$$ (8) where $$\tilde\nu_{\mathrm{in}}$$ is the incident frequency and $$\tilde\nu_k$$ is the frequency of the kth normal mode. $$\alpha_k^\prime$$ is the locally integrated Raman polarizability density of the kth normal mode. Here only the zz component is considered to contribute to the TERS cross section, and the temperature was set to 298 K. ## Data availability The Raman polarizability densities used to generate TERS images are available upon request. Exemplar data are provided along with the source code repository. The LIRPD code is available at https://github.com/jensengrouppsu/LIRPD. ## References 1. Bonhommeau, S. & Lecomte, S.
Tip-enhanced Raman spectroscopy: a tool for nanoscale chemical and structural characterization of biomolecules. ChemPhysChem 19, 8–18 (2018). 2. Kradolfer, S. et al. Vibrational changes induced by electron transfer in surface bound azurin metalloprotein studied by tip-enhanced Raman spectroscopy and scanning tunneling microscopy. ACS Nano 11, 12824–12831 (2017). 3. Jiang, R. H. et al. Near-field plasmonic probe with super resolution and high throughput and signal-to-noise ratio. Nano Lett. 18, 881–885 (2018). 4. van Schrojenstein Lantman, E. M., Deckert-Gaudig, T., Mank, A. J. G., Deckert, V. & Weckhuysen, B. M. Catalytic processes monitored at the nanoscale with tip-enhanced Raman spectroscopy. Nat. Nanotechnol. 7, 583–586 (2012). 5. Sun, M., Zhang, Z., Zheng, H. & Xu, H. In-situ plasmon-driven chemical reactions revealed by high vacuum tip-enhanced Raman spectroscopy. Sci. Rep. 2, 647 (2012). 6. Birmingham, B. et al. Probing interaction between individual submonolayer nanoislands and bulk MoS2 using ambient TERS. J. Phys. Chem. C 122, 2753–2760 (2018). 7. Rahaman, M. et al. Highly localized strain in a MoS2/Au heterostructure revealed by tip-enhanced Raman spectroscopy. Nano Lett. 17, 6027–6033 (2017). 8. Bhattarai, A., Joly, A. G., Hess, W. P. & El-Khoury, P. Z. Visualizing electric fields at Au(111) step edges via tip-enhanced Raman scattering. Nano Lett. 17, 7131–7137 (2017). 9. Kumar, N. et al. Nanoscale chemical imaging of solid-liquid interfaces using tip-enhanced Raman spectroscopy. Nanoscale 10, 1815–1824 (2018). 10. Sheng, S. et al. Vibrational properties of a monolayer silicene sheet studied by tip-enhanced Raman spectroscopy. Phys. Rev. Lett. 119, 1–5 (2017). 11. Chiang, N. et al. Probing intermolecular vibrational symmetry breaking in self-assembled monolayers with ultrahigh vacuum tip-enhanced Raman spectroscopy. J. Am. Chem. Soc. 139, 18664–18669 (2017). 12. Stöckle, R. M., Suh, Y. D., Deckert, V.
& Zenobi, R. Nanoscale chemical analysis by tip-enhanced Raman spectroscopy. Chem. Phys. Lett. 318, 131–136 (2000). 13. Zhang, W., Yeo, B. S., Schmid, T. & Zenobi, R. Single molecule tip-enhanced Raman spectroscopy with silver tips. J. Phys. Chem. C 111, 1733–1738 (2007). 14. Schmid, T., Opilik, L., Blum, C. & Zenobi, R. Nanoscale chemical imaging using tip-enhanced Raman spectroscopy: a critical review. Angew. Chem., Int. Ed. 52, 5940–5954 (2013). 15. Lee, J. et al. Tip-enhanced Raman spectromicroscopy of Co(II)-tetraphenylporphyrin on Au(111): toward the chemists’ microscope. ACS Nano 11, 11466–11474 (2017). 16. Neacsu, C. C., Dreyer, J., Behr, N. & Raschke, M. B. Scanning-probe Raman spectroscopy with single-molecule sensitivity. Phys. Rev. B 73, 193406 (2006). 17. Hayazawa, N., Inouye, Y., Sekkat, Z. & Kawata, S. Near-field Raman scattering enhanced by a metallized tip. Chem. Phys. Lett. 335, 369–374 (2001). 18. Steidtner, J. & Pettinger, B. Tip-enhanced Raman spectroscopy and microscopy on single dye molecules with 15 nm resolution. Phys. Rev. Lett. 100, 236101 (2008). 19. Stadler, J., Schmid, T. & Zenobi, R. Nanoscale chemical imaging using top-illumination tip-enhanced Raman spectroscopy. Nano Lett. 10, 4514–4520 (2010). 20. Sonntag, M. D. et al. Single-molecule tip-enhanced Raman spectroscopy. J. Phys. Chem. C 116, 478–483 (2012). 21. Chen, C., Hayazawa, N. & Kawata, S. A 1.7 nm resolution chemical analysis of carbon nanotubes by tip-enhanced Raman imaging in the ambient. Nat. Commun. 5, 3312 (2014). 22. Jiang, N. et al. Nanoscale chemical imaging of a dynamic molecular phase boundary with ultrahigh vacuum tip-enhanced Raman spectroscopy. Nano Lett. 16, 3898–3904 (2016). 23. Chiang, N. et al. Conformational contrast of surface-mediated molecular switches yields ångstrom-scale spatial resolution in ultrahigh vacuum tip-enhanced Raman spectroscopy. Nano Lett. 16, 7774–7778 (2016). 24. Zhang, R. et al.
Chemical mapping of a single molecule by plasmon-enhanced Raman scattering. Nature 498, 82–86 (2013). 25. Lee, J., Crampton, K. T., Tallarida, N. & Apkarian, V. A. Visualizing vibrational normal modes of a single molecule with atomically confined light. Nature 568, 78–82 (2019). 26. Liu, P., Chulhai, D. V. & Jensen, L. Single-molecule imaging using atomistic near-field tip-enhanced Raman spectroscopy. ACS Nano 11, 5094–5102 (2017). 27. Duan, S. et al. Theoretical modeling of plasmon-enhanced Raman images of a single molecule with subnanometer resolution. J. Am. Chem. Soc. 137, 9515–9518 (2015). 28. Benz, F. et al. Single-molecule optomechanics in “picocavities”. Science 354, 726–729 (2016). 29. Shin, H.-H. et al. Frequency-domain proof of the existence of atomic-scale SERS hot-spots. Nano Lett. 18, 262–271 (2018). 30. Duan, S., Tian, G. & Luo, Y. Visualization of vibrational modes in real space by tip-enhanced non-resonant Raman spectroscopy. Angew. Chem., Int. Ed. 55, 1041–1045 (2016). 31. Duan, S., Tian, G. & Luo, Y. Theory for modeling of high resolution resonant and nonresonant Raman images. J. Chem. Theor. Comput. 12, 4986–4995 (2016). 32. Maaskant, W. J. A. & Oosterhoff, L. Theory of optical rotatory power. Mol. Phys. 8, 319–344 (1964). 33. Hunt, K. L. C. Nonlocal polarizability densities and van der Waals interactions. J. Chem. Phys. 78, 6149–6155 (1983). 34. Hunt, K. L. C. Nonlocal polarizability densities and the effects of short-range interactions on molecular dipoles, quadrupoles, and polarizabilities. J. Chem. Phys. 80, 393–407 (1984). 35. Gross, E. K. U., Dobson, J. F. & Petersilka, M. Density functional theory of time-dependent phenomena, pp. 81–172 (Springer, Heidelberg, 1996). 36. Barron, L. D. Molecular Light Scattering and Optical Activity, 2nd edn (Cambridge University Press, Cambridge, UK, 2004). 37. Janesko, B. G. & Scuseria, G. E.
Surface enhanced Raman optical activity of molecules on orientationally averaged substrates: theory of electromagnetic effects. J. Chem. Phys. 125, 124704 (2006). 38. Chulhai, D. V., Hu, Z., Moore, J. E., Chen, X. & Jensen, L. Theory of linear and nonlinear surface-enhanced vibrational spectroscopies. Annu. Rev. Phys. Chem. 67, 541–564 (2016). 39. Chulhai, D. V. & Jensen, L. Determining molecular orientation with surface-enhanced Raman scattering using inhomogenous electric fields. J. Phys. Chem. C 117, 19622–19631 (2013). 40. Chen, X. & Jensen, L. Morphology dependent near-field response in atomistic plasmonic nanocavities. Nanoscale 10, 11410–11417 (2018). 41. Kumagai, T. et al. Controlling intramolecular hydrogen transfer in a porphycene molecule with single atoms or molecules located nearby. Nat. Chem. 6, 41–46 (2014). 42. Gawinkowski, S. et al. Single molecule Raman spectra of porphycene isotopologues. Nanoscale 8, 3337–3349 (2016). 43. Barbry, M. et al. Atomistic near-field nanoplasmonics: reaching atomic-scale resolution in nanooptics. Nano Lett. 15, 3410–3419 (2015). 44. Trautmann, S. et al. A classical description of subnanometer resolution by atomic features in metallic structures. Nanoscale 9, 391–401 (2017). 45. Urbieta, M. et al. Atomic-scale lightning rod effect in plasmonic picocavities: a classical view to a quantum effect. ACS Nano 12, 585–595 (2018). 46. Pozzi, E. A. et al. Ultrahigh-vacuum tip-enhanced Raman spectroscopy. Chem. Rev. 117, 4961–4982 (2017). 47. te Velde, G. et al. Chemistry with ADF. J. Comput. Chem. 22, 931–967 (2001). 48. Fonseca Guerra, C., Snijders, J. G., te Velde, G. & Baerends, E. J. Towards an order-N DFT method. Theor. Chem. Acc. 99, 391–403 (1998). 49. Baerends, E. J. et al. ADF2018, SCM, Theoretical Chemistry (Vrije Universiteit, Amsterdam, https://www.scm.com). 50. Jensen, L., Zhao, L. L., Autschbach, J. & Schatz, G. C.
Theory and method for calculating resonance Raman scattering from resonance polarizability derivatives. J. Chem. Phys. 123, 174110 (2005). 51. Chen, X., Moore, J., Zekarias, M. & Jensen, L. Atomistic electrodynamics simulations of bare and ligand-coated nanoparticles in the quantum size regime. Nat. Commun. 6, 8921 (2015). 52. Johnson, P. B. & Christy, R. W. Optical constants of the noble metals. Phys. Rev. B 6, 4370–4379 (1972). ## Acknowledgements This work was supported by the National Science Foundation Center for Chemical Innovation dedicated to Chemistry at the Space-Time Limit (CaSTL), Grant CHE-1414466. This work was conducted with Advanced CyberInfrastructure computational resources provided by The Institute for CyberScience at The Pennsylvania State University (http://ics.psu.edu), and portions used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. Specifically, it used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC). ## Author information L.J. conceived the basic idea. X.C., P.L., and Z.H. implemented the method. X.C. and P.L. carried out the simulations. X.C., P.L., and L.J. analyzed the results and wrote the manuscript. Correspondence to Lasse Jensen. ## Ethics declarations ### Competing interests The authors declare no competing interests. Journal peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Chen, X., Liu, P., Hu, Z. et al. High-resolution tip-enhanced Raman scattering probes sub-molecular density changes.
Nat. Commun. 10, 2567 (2019). doi:10.1038/s41467-019-10618-x
## Cryptology ePrint Archive: Report 2020/1048 An Algebraic Formulation of the Division Property: Revisiting Degree Evaluations, Cube Attacks, and Key-Independent Sums Kai Hu and Siwei Sun and Meiqin Wang and Qingju Wang Abstract: Since it was proposed in 2015 as a generalization of integral properties, the division property has evolved into a powerful tool for probing the structures of Boolean functions whose algebraic normal forms are not available. We capture the most essential elements for the detection of division properties from a pure algebraic perspective, proposing a technique named monomial prediction, which can be employed to determine the presence or absence of a monomial in any product of the coordinate functions of a vectorial Boolean function $\boldsymbol f$ by counting the number of so-called monomial trails across a sequence of simpler functions whose composition is $\boldsymbol f$. Within the framework of monomial prediction, we formally prove that most algorithms for detecting division properties in the literature raise no false alarms but may miss some properties. We also establish the equivalence between monomial prediction and the three-subset bit-based division property without unknown subset presented at EUROCRYPT 2020, and show that these two techniques are perfectly accurate. The monomial prediction technique can be regarded as a purification of the definitions of the division properties without resorting to external multisets. This algebraic formulation gives more insight into division properties and inspires new search strategies. With monomial prediction, we obtain the exact algebraic degrees of TRIVIUM up to 834 rounds for the first time. In the context of cube attacks, we are able to explore a larger search space in limited time and recover the exact algebraic normal forms of complex superpolies with the help of a divide-and-conquer strategy.
As a result, we identify more cubes with smaller dimensions, leading to improvements of some near-optimal attacks against 840-, 841- and 842-round TRIVIUM. Category / Keywords: secret-key cryptography / Division Property, Monomial Prediction, Detection Algorithm, Algebraic Degree, Cube Attack, TRIVIUM Original Publication (with minor differences): IACR-ASIACRYPT-2020 Date: received 30 Aug 2020, last revised 30 Aug 2020 Contact author: hukai at mail sdu edu cn,siweisun isaac@gmail com,mqwang@sdu edu cn,qingju wang@uni lu Available format(s): PDF | BibTeX Citation Short URL: ia.cr/2020/1048 [ Cryptology ePrint archive ]
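As a toy, brute-force illustration of the object the abstract is about (NOT the paper's monomial-trail algorithm): given a Boolean function, one can decide whether a given monomial is present in its algebraic normal form (ANF) by recovering the ANF coefficients with the binary Möbius transform. Monomial prediction answers the same presence/absence question for compositions of functions without ever expanding the full ANF, which is what makes it usable on real ciphers. The small maps `h` and `g` below are made-up examples.

```python
def anf(truth_table, n):
    """Binary Moebius transform: truth table -> ANF coefficients.
    After the transform, a[u] == 1 iff the monomial prod_{i in u} x_i
    occurs in the ANF."""
    a = list(truth_table)
    for i in range(n):
        for x in range(1 << n):
            if x & (1 << i):
                a[x] ^= a[x ^ (1 << i)]
    return a

def has_monomial(truth_table, n, mask):
    """Is the monomial indexed by bit mask present in the ANF?"""
    return anf(truth_table, n)[mask] == 1

# AND(x0, x1), truth table indexed by x = 2*x1 + x0:
print(anf([0, 0, 0, 1], 2))  # -> [0, 0, 0, 1]: only the monomial x0*x1
# XOR(x0, x1):
print(anf([0, 1, 1, 0], 2))  # -> [0, 1, 1, 0]: monomials x0 and x1

# For a composition f = g o h, monomial presence in a coordinate of f can be
# decided by composing truth tables first; this is feasible only for tiny n,
# hence the paper's trail-counting approach for real ciphers.
h = [x ^ (x >> 1) for x in range(8)]         # 3-bit Gray-code map
g = [(x * 3) & 7 for x in range(8)]          # another toy 3-bit map
f0 = [(g[h[x]] >> 0) & 1 for x in range(8)]  # coordinate 0 of g o h
print(has_monomial(f0, 3, 0b011))            # is x0*x1 present?
```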
Does there exist anywhere a comprehensive list of small genus modular curves $X_G$, for $G$ a subgroup of $\mathrm{GL}(2,\mathbb{Z}/n\mathbb{Z})$ (say genus $\le 2$), together with equations? I'm particularly interested in genus one cases, and more so in split/non-split Cartan, with or without normalizers. Ken McMurdy has a list here for $X_0(N)$, and Burcu Baran writes down equations for all $X_{ns}^+(p)$ of genus $\le 2$ in this preprint. - ## 5 Answers No, there does not exist a comprehensive list of equations: the known equations are spread out over several papers, and some people (e.g., Noam Elkies, John Voight, and even me) know equations which have not been published anywhere. When I have more time, I will give bibliographic data for some of the papers which give lists of some of these equations. Some names of the relevant authors: Ogg, Elkies, Gonzalez, Reichert. In my opinion, it would be a very worthy service to the number theory community to create an electronic source for information on modular curves (including Shimura curves) of low genus, including genus formulas, gonality, automorphism groups, explicit defining equations... In my absolutely expert opinion (that is, I make and use such computations in my own work, but am not an especially good computational number theorist: i.e., even I can do these calculations, so I know they're not so hard), this is a doable and even rather modest project compared to some related things that are already out there, e.g. William Stein's modular forms databases and John Voight's quaternion algebra packages. It is possible that it is a little too easy for our own good, i.e., there is the sense that you should just do it yourself. But I think that by current standards of what should be communal mathematical knowledge, this is a big waste of a lot of people's time. E.g., by coincidence I just spoke to one of my students, J.
Stankewicz, who has spent some time implementing software to enumerate all full Atkin-Lehner quotients of semistable Shimura curves (over Q) with bounded genus. I assigned him this little project on the grounds that it would be nice to have such information, and I think he's learned something from it, but the truth is that there are people who probably already have code to do exactly this and I sort of regret that he's spent so much time reinventing this particular wheel. (Yes, he reads MO, and yes, this is sort of an apology on my behalf.) Maybe this is a good topic for the coming SAGE days at MSRI? Addendum: Some references: Kurihara, Akira. On some examples of equations defining Shimura curves and the Mumford uniformization. J. Fac. Sci. Univ. Tokyo Sect. IA Math. 25 (1979), no. 3, 277–300. Reichert, Markus A. Explicit determination of nontrivial torsion structures of elliptic curves over quadratic number fields. Math. Comp. 46 (1986), no. 174, 637–658. http://www.math.uga.edu/~pete/Reichert86.pdf Gonzàlez Rovira, Josep. Equations of hyperelliptic modular curves. Ann. Inst. Fourier (Grenoble) 41 (1991), no. 4, 779–795. http://www.math.uga.edu/~pete/Gonzalez.pdf Noam Elkies, equations for some hyperelliptic modular curves, early 1990's. [So far as I know, these have never been made publicly available, but if you want to know an equation of a modular curve, try emailing Noam Elkies!] Elkies, Noam D. Shimura curve computations. Algorithmic number theory (Portland, OR, 1998), 1–47, Lecture Notes in Comput. Sci., 1423, Springer, Berlin, 1998. http://arxiv.org/abs/math/0005160 An algorithm which was used to find explicit defining equations for $X_1(N)$, $N$ prime, can be found in: Pete L. Clark, Patrick K. Corn and the UGA VIGRE Number Theory Group, Computation on Elliptic Curves with Complex Multiplication, preprint. http://math.uga.edu/~pete/TorsCompv6.pdf This is just a first pass.
I probably have encountered something like 10 more papers on this subject, and I wasn't familiar with some of the papers that others have mentioned. - Really nice answer. One question: Do any of the algorithmic methods you mention for computing defining equations work integrally, or only over Q? – Tyler Lawson Feb 4 2010 at 4:44 @TL: good question! (Sounds familiar, in fact.) Off the top of my head, I would say that the key issue is which of these algorithms work over a field of positive characteristic, and for which characteristics? E.g. "my" algorithm (a.k.a. the most immediately obvious algorithm) for $X_1(N)$ will work verbatim over a field of characteristic not dividing $N$, hence (I'm pretty sure) over $\mathbb{Z}[\frac{1}{N}]$. Doing things correctly at primes of bad reduction will be much harder in general, I fear, although in some very special cases (e.g. genus 0!) you can work these things out. – Pete L. Clark Feb 4 2010 at 6:07 Galbraith's thesis has a bunch: http://www.isg.rhul.ac.uk/~sdg/thesis.html - There is code in Magma packages to do ModularCurveQuotient, which is $X_0(N)$ mod Atkin-Lehners, via Galbraith's thesis. Looking at it, it seems that you can just change ModularForms(N,2) to ModularForms(Gamma1(N),2) in the function internals and hope to work with no Atkin-Lehners. This gives a canonical embedding to $C^{g-1}$ if so. Why you want this for $g=48$ with $X_1(50)$ as 1035 quadratics is unclear, but it ran in 2 minutes. – Junkie May 1 2010 at 1:51 Cummins and Pauli have calculated generators for the function fields of all congruence subgroups of $\text{PSL}_2(\mathbb{Z})$ of genus $\le 24$ in: http://www.mathstat.concordia.ca/faculty/cummins/congruence/ I haven't looked at this for a few months but I believe that the companion paper http://www.emis.de/journals/EM/expmath/volumes/12/12.2/pp243_255.pdf discusses the generators.
In the meantime there is a paper by Yifan Yang, "Defining equations of modular curves", Advances in Mathematics, Volume 204, Issue 2, 20 August 2006, Pages 481–508, which gives tables of equations for many modular curves, and discusses a methodology for finding "good" equations (i.e. those with small coefficients and a small number of terms in the defining polynomials). - These are very nice tables for what they contain, but I didn't see any data about defining equations. Am I missing something? (If you will permit a pedantic remark: I can tell you what the generators are for the function field of any integral algebraic curve over $\mathbb{Q}$: $x$ and $y$. It's the relation that's not so easy...) – Pete L. Clark Feb 4 2010 at 6:10 @VM: In the paper of Cummins and Pauli I don't see any data about equations or function fields (and again, please let me know if I'm missing it). On the other hand, the paper of Yang seems like a must-read for those interested in the problem. – Pete L. Clark Feb 4 2010 at 14:43 There is also the paper by Broker, Lauter and Sutherland, "Modular polynomials via isogeny volcanoes" http://arxiv.org/abs/1001.0402, which gives a fast (in practice) algorithm to calculate modular polynomials $\Phi_l(X,Y)$ for $l$ prime (this is the polynomial satisfying $\Phi_l(j(z),j(lz)) = 0$, where $j$ is the usual $j$-invariant, and it gives a highly singular model for $X_0(l)$), and other analogous polynomials associated, for example, with the modular function $f$ which generates a function field of degree 72 over that associated with $\Gamma_0(1)$. Sutherland just spoke here yesterday on this. For example he can calculate $\Phi_l(X,Y)$ for $l$ about 20000. The interesting feature in this algorithm is that he can calculate $\Phi_l$ modulo a small prime without actually calculating it over $\mathbb{Z}$ and reducing. In the papers by Cummins and Pauli and Yang they essentially do their calculations by using "modular units" (cf.
Kubert and Lang), which are explicit functions on $X(N)$ (sometimes with character) for which we know the divisor, and then combining them in various ways and using Riemann-Roch type calculations. The method by Broker, Lauter and Sutherland uses the modular interpretation of $\Phi_l$ in terms of isogenies, in a rather clever way. I feel that eventually this will be the way to go. - +1: there is lots more good work being done in this area than I knew about. MO strikes again! – Pete L. Clark Feb 4 2010 at 19:39 Explicit equations for $X_1(N)$ that have been optimized to reduce both the degree and coefficient sizes are available for $N \le 50$ at http://math.mit.edu/~drew/X1_curves.txt. These were obtained using the algorithm described in http://arxiv.org/abs/0811.0296. EDIT: Tables of defining equations for $X_1(N)$ for $N \le 189$ are now available at http://www-math.mit.edu/~drew/X1_altcurves.html -
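A small plain-Python sanity check related to the modular polynomials discussed in the Broker-Lauter-Sutherland answer above. The coefficients below are the classical level-2 modular polynomial $\Phi_2$ (double-check them against a CAS before relying on them). Since $\tau = i$ and $2i$ correspond to 2-isogenous CM elliptic curves with the well-known values $j(i) = 1728$ and $j(2i) = 66^3 = 287496$, the identity $\Phi_2(j(\tau), j(2\tau)) = 0$ can be verified in exact integer arithmetic, with no Sage or Magma needed:

```python
def phi2(x, y):
    """Classical modular polynomial Phi_2(X, Y) for X_0(2)."""
    return (x**3 + y**3 - x**2 * y**2
            + 1488 * (x**2 * y + x * y**2)
            - 162000 * (x**2 + y**2)
            + 40773375 * x * y
            + 8748000000 * (x + y)
            - 157464000000000)

print(phi2(1728, 287496))  # -> 0 (j(i) and j(2i) are 2-isogenous values)
print(phi2(287496, 1728))  # -> 0 (Phi_2 is symmetric in X and Y)
```

Checks like this are a cheap way to catch transcription errors when copying large modular polynomials out of tables or databases.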