# What was the first bit of mathematics that made you realize that math is beautiful? (For children's book) I'm a children's book writer and illustrator, and I want to create a book for young readers that exposes the beauty of mathematics. I recently read Paul Lockhart's essay "The Mathematician's Lament," and found that I, too, lament the uninspiring quality of my elementary math education. I want to make a book that discredits the notion that math is merely a series of calculations, and inspires a sense of awe and genuine curiosity in young readers. However, I myself am mathematically unsophisticated. What was the first bit of mathematics that made you realize that math is beautiful? For the purposes of this children's book, accessible answers would be appreciated. - For me Euclid's proof of the infinitude of primes was the first thing that made me realize the beauty of mathematics. – Manjil P. Saikia Mar 7 '13 at 7:02 Wow. Just last night I had a fierce argument with one of the bartenders of my usual watering hole, who is a mechanical engineering student. He insisted that he has a better idea than me of what mathematics is. I am so going to print him a copy of Lockhart's text. Thank you for that link! – Asaf Karagila Mar 7 '13 at 7:59 I can't remember a time when I didn't think that mathematics was beautiful and fascinating. – Brian M. Scott Mar 7 '13 at 15:06 Although I don't know if it's what you are looking for, try looking up "vihart" on youtube--Even if it's not helpful, I guarantee you will appreciate it. – Bill K Mar 8 '13 at 2:57 I think it's a shame that this question was voted closed... – Will Mar 10 '13 at 19:50 I have spent many decades studying why so many highly intelligent people are so mystified by mathematics. Lockhart's view is very serious and cannot be negated by the personal experiences of mathematically inclined people. My study has clearly shown that the best advice is to be simple and sensible.
For example, our place number system is an ingenious solution to the problem of too many different names and shorthand symbols for quantities. The solution is not sensible if the problem is not clear. Addition is immensely useful regardless of how it is done, including by a calculator. So is subtraction. Multiplication is a wonderfully ingenious way to count when the items counted come in fixed size packages. Division is also very useful, again completely aside from how to do it. Our conventional emphasis on HOW is terribly off-putting. In this electronic age, "how" is far less important anyway. Mathematics is not a skill and should not be identified as one. Numbers and numerical operations and functions and condition equations and so on, and the properties of all of these, are completely real and sensible and have nothing to do with so-called "reasoning" or "rigor" or "skill" etc. Everything sensible involves reasoning. And rigor is the concern of mathematicians, not lay appreciators and users of mathematics. And "skill" is vastly overrated. It is easy to develop skill if you understand what the subject is about. It is the latter that is missing in our education. - Telescoping series. Double counting to prove combinatorial identities. All the patterns in Pascal's triangle. The medians of a triangle always intersect at one point. Using a roots-of-unity filter to solve combinatorics problems. Gauge invariance over Floer homologies for conformal Khovanov manifolds in $n$-dimensional geometries. - My favourite maths book when I was little was 'Magic House of Numbers' by Irving Adler. - Personally, I thought math was beautiful on a number of occasions: $$1x+2x=3x$$ $$1\,\text{zebra}+2\,\text{zebras}=3\,\text{zebras}$$ Applying words can really help young children understand mathematics better. Another time I found mathematics beautiful was when I learned that almost all functions have a writable inverse, using Lagrange's Inversion Theorem. Another cool thing for me was big numbers.
It started with infinity; then I discovered very large finite numbers, which are studied in googology. The discovery of infinity led me to infinite summations, and I found it interesting that they were calculable and sometimes yielded weird solutions. The discovery of $i=\sqrt{-1}$ was cool, but even cooler was the discovery that $\sqrt{i}=\frac{1+i}{\sqrt{2}}$, making me realize that I could not make new types of imaginary numbers by taking further square roots. This led me to complex analysis and the solution to $x^i$. By sitting down and writing out the formula for the perimeter of an $n$-sided polygon, I discovered $C=2\pi r$ by taking what I didn't know was a limit to infinity. It required a bit of help though. My own realization that some of the solutions to $f(f(x))=x$ could be found using $f(x)=x$, and that this could be extended to any number of iterations of $f$. The disappointing discovery that one cannot find the inverse of the general quintic polynomial in terms of a finite number of elementary operations. Of course, you can still approximate with root-finding algorithms or Lagrange inversion, but they are neither exact nor finite in their method of reaching the solution, and sometimes they fail. The discovery that one can find the square root of a number using algorithms was pretty impressive for me. The discovery of the Lambert W function allowed me to solve soooo many exponential problems, but then I hit an edge: a barrier of exponential problems that cannot currently be inverted, like $x^{x^x}=y$, given $y$ and trying to find $x$. The discovery of the factorial is often a fun little thing for young students; it makes them think of the interesting ways that math can work. I personally tried to extend it to all positive reals, but, like some other answers here, it appeared to be beyond my talents. Then I discovered the Gamma function and learned calculus.
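The polygon-perimeter discovery mentioned above can be replayed in a few lines: a regular $n$-gon inscribed in a circle of radius $r$ has perimeter $2nr\sin(\pi/n)$, and letting $n$ grow large recovers $C = 2\pi r$. (A sketch of that limit; the function name is my own.)

```python
import math

def inscribed_perimeter(n, r):
    # A regular n-gon inscribed in a circle of radius r has n sides,
    # each subtending an angle of 2*pi/n at the center, so each side
    # has length 2*r*sin(pi/n).
    return 2 * n * r * math.sin(math.pi / n)

# As n grows, the perimeter approaches the circumference 2*pi*r.
for n in (6, 100, 10**6):
    print(n, inscribed_perimeter(n, r=1.0))
```

For $r = 1$ the values climb toward $2\pi \approx 6.2832$, exactly the "limit to infinity" described above.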
The definition of the Euler-Mascheroni constant was truly amazing, as it gave me a method for easily approximating the natural logarithm of positive whole numbers, which extends to all positive numbers through logarithmic properties. And lastly, I would like to point at a mathematically rigorous idea in physics where velocity affects air drag, which in turn affects velocity, which will again affect air drag, etc. The sheer confusion in all of this was mind-blowing. - One of the biggest awes I experienced was when I could fully understand how you could prove that addition and multiplication of real numbers are commutative: trying to understand this made me go back to the basic construction of the naturals, integers, rationals, and finally the reals (via the Dedekind cuts approach). I just thought that journey was lovely. - The first thing for me is the working of an equation. It is, to me, like a stanza of a poem that tells us many things in a minimum of words. No one would have ever thought of describing a geometrical figure; everyone used to draw it before math's entry into the real world. It's awesome for a mathematician to be able to say "write me a circle, an ellipse", etc. In order to tell people that math is not only concerned with problem-solving, I have produced my own quote: "Practice is hollow without understanding". - This makes me think of physics, where practice has led to revelations that have called for mathematical expressions and intuitive understandings, much like String Theory's origin or Quantum Theory, or, now that I think about it, pretty much all of physics' history. – Simple Art Jan 24 at 1:55 @SimpleArt I used the phrase "problem-solving", which clarifies my intention. For example, in chess, once you understand the purpose of the move "en passant", you are never going to forget 1) the way to make it, 2) the terms and conditions of the move; otherwise, you may forget how to execute that move.
The move is: "If your pawn is on the 5th rank and an opponent's pawn makes its first move by advancing 2 squares, you can capture it and occupy the square it skipped." Condition: "You can only make this move at the first immediate opportunity." The purpose is to reduce the freedom of the pawn (in short). – Sufyan Naeem Jan 24 at 10:56 Well, that's nice. Reading your comment makes me realize how different our interpretations are, yet they can be described in essentially the same way. The first time I fell for en passant, I was really confused. Now, I get to use it on other people, and I have never forgotten it. – Simple Art Jan 25 at 23:28 @SimpleArt This is another thing. Of course, when you repeatedly practice something, it takes shelter in your brain. Even an animal can learn something by repeatedly practicing it. BTW, my quote states "Practice is hollow..." and this is different from saying "Practice is useless..." or from saying "Practice is nothing...". – Sufyan Naeem Jan 26 at 6:55 I remember being fascinated by amicable numbers, the subject of my junior high science fair project in the early 1970s. I was using a huge book of factorization tables that I couldn't check out from the public library. I spent hours trying to plug prime numbers into the formulas given by Euler and Erdős. DEFINITION: A pair of numbers x and y is called amicable if the sum of the proper divisors (or aliquot parts) of either one is equal to the other. For a list see https://oeis.org/A259180 - The first equation: knowing that the selling price is $220$ and the margin is $10\%$, what is the purchase price? At the time I was able to derive the benefit, $200\times 10\%=20$, or the selling price from the purchase price, $200+200\times10\%=220$, but I had no idea how to do the reverse (purchase price from sale price), as the unknown "had to be known" to compute itself with $?=220-?\times 10\%$. The rewrite with a symbolic quantity, $P+P\times10\%=P\times(1+10\%)=220$, was a revelation!
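The amicable-number definition above is easy to check by machine. Here is a small sketch (function names are my own) verifying the classic pair 220 and 284:

```python
def aliquot_sum(n):
    # Sum of the proper divisors (aliquot parts) of n,
    # i.e. all divisors of n except n itself.
    return sum(d for d in range(1, n) if n % d == 0)

def is_amicable(x, y):
    # x and y are amicable if each equals the sum of
    # the other's proper divisors (and they are distinct).
    return x != y and aliquot_sum(x) == y and aliquot_sum(y) == x

print(is_amicable(220, 284))  # the smallest amicable pair
```

The proper divisors of 220 sum to 284, and those of 284 sum back to 220, which is exactly the definition quoted in the answer.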
- The first time I was fascinated by mathematics was when I read Christian Goldbach's conjecture. From that day onwards, I have been trying to decode the mystery of the primes, which seem simple at first sight but are actually very difficult to understand. That's the beauty of mathematics. - What was the first bit of mathematics that made you realize that math is beautiful? For me, it was when I was 3 years old (possibly 4), contemplating my hands and fingers. I had the sudden epiphany that 5+5 absolutely had to equal 10 every time that you added them together -- not merely that they had done so repeatedly, mind you, but that they must do so in every event. I was admittedly a little off base there, not yet knowing of quirks such as modular arithmetic, but it was so astounding that I ran to the bathroom to tell my mother. There have been a lot of other wonderful moments in math for me, but that initial one was like seeing into the mind of god, reading the very fabric of creation, and fully knowing that reality is comprehensible. :-) - I tried to find the number of ways in which a number can be expressed as a sum of two numbers, and I ended up learning about partitions, which showed me how everything can be expressed mathematically. - I don't find it beautiful, but I still find the idea expressed by the following something of a psychological curiosity: how can it be that when some algebraists say "AND" and "OR" they mean exactly the same thing? OR means that "false or false" is false, while "false or true", "true or false" as well as "true or true" are true, or more compactly:

    OR | F T
    ---+----
     F | F T
     T | T T

AND means this:

    AND | F T
    ----+----
      F | F F
      T | F T

But, since NOT(x OR y) = (NOT x AND NOT y), and NOT(T) = F and NOT(F) = T, OR and AND, to an algebraist, mean exactly the same thing! - Your answer implies that $\neg ( \perp \lor \top) \iff ( \neg \perp \land \neg \top) \iff ( \top \land \perp ) \iff ( \perp \lor \top)$. Your truth table for $\land$ is wrong.
– Andrew Salmon Mar 23 '13 at 21:10 @AndrewSalmon Thanks, I don't know how I did that. – Doug Spoonwood Mar 24 '13 at 2:45

## protected by Zev Chonoles Mar 7 '13 at 22:43
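The duality the last answer gestures at is De Morgan's law; a brute-force check over all truth assignments confirms it:

```python
from itertools import product

# De Morgan's law: NOT(x OR y) == (NOT x) AND (NOT y),
# for every assignment of truth values to x and y.
for x, y in product([False, True], repeat=2):
    assert (not (x or y)) == ((not x) and (not y))

print("De Morgan's law holds for all four cases")
```

This is what lets an algebraist translate any statement about OR into one about AND (and vice versa) by sprinkling in negations, which is the "psychological curiosity" described above.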
# John wants to make a 100 ml of 5% alcohol solution mixing a quantity o

Intern
Joined: 06 Aug 2015
Posts: 48

### Show Tags

09 Jun 2017, 10:28

Difficulty: 35% (medium). Question Stats: 73% (02:06) correct, 27% (02:51) wrong, based on 102 sessions.

John wants to make 100 ml of a 5% alcohol solution by mixing a quantity of a 2% alcohol solution with a 7% alcohol solution. What are the quantities of each of the two solutions (2% and 7%) he has to use?
A) 60 B) 120 C) 30 D) 10 E) 90

Math Expert
Joined: 02 Aug 2009
Posts: 7112

Re: John wants to make a 100 ml of 5% alcohol solution mixing a quantity o [#permalink]

### Show Tags

09 Jun 2017, 10:41

HARRY113 wrote:
John wants to make 100 ml of a 5% alcohol solution by mixing a quantity of a 2% alcohol solution with a 7% alcohol solution. What are the quantities of each of the two solutions (2% and 7%) he has to use?
A) 60 B) 120 C) 30 D) 10 E) 90

Total qty = 100. If the average is 5% and it is made from 2% and 7%, find the ratio of the two quantities by the weighted average method: the fraction of the 2% solution is $$\frac{7-5}{7-2}=\frac{2}{5}$$, so qty of 2% = $$100*\frac{2}{5}=40$$ and of 7% = 100 - 40 = 60. Answer: A.

HARRY113, you must be looking for the qty of the 7% solution; otherwise the qty of both has to total 100.

Senior SC Moderator
Joined: 22 May 2016
Posts: 2223

John wants to make a 100 ml of 5% alcohol solution mixing a quantity o [#permalink]

### Show Tags

09 Jun 2017, 14:14

HARRY113 wrote:
John wants to make 100 ml of a 5% alcohol solution by mixing a quantity of a 2% alcohol solution with a 7% alcohol solution. What are the quantities of each of the two solutions (2% and 7%) he has to use?
A) 60 B) 120 C) 30 D) 10 E) 90

Another way to do the weighted average method.
Let A = the volume of the 7% solution
Let B = the volume of the 2% solution
The volume of the resultant mixture is given as 100 ml, so A + B = 100, and B = 100 - A.
(strength of A)*(Vol of A) + (strength of B)*(Vol of B) = (strength of resultant mixture)*(Vol of resultant mixture)
.07(A) + .02(B) = .05(100)
Substitute B = 100 - A:
.07(A) + .02(100 - A) = .05(100)
.07A + 2 - .02A = 5
.05A = 3
A = 60 ml, and B = (100 - A) = 40 ml

chetan2u wrote:
HARRY113, you must be looking for the qty of the 7% solution; otherwise the qty of both has to total 100.

Director
Joined: 27 May 2012
Posts: 639

Re: John wants to make a 100 ml of 5% alcohol solution mixing a quantity o [#permalink]

### Show Tags

05 Jul 2018, 09:49

Can a moderator please correct this question? We are looking for the 7% solution only, not BOTH the 2% and 7% solutions.

Manager
Joined: 23 Nov 2017
Posts: 109
Location: Singapore
GMAT 1: 660 Q49 V33

Re: John wants to make a 100 ml of 5% alcohol solution mixing a quantity o [#permalink]

### Show Tags

22 Jul 2018, 22:59

Isn't the question asking for both of the solutions (which would be 40 and 60)?
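The alcohol-balance computation in the answers above can be sketched in a few lines (function and variable names are my own): mixing $v$ ml at the low strength with $100-v$ ml at the high strength and solving for the target strength.

```python
def mix_volumes(total, low, high, target):
    # Alcohol balance: low*v + high*(total - v) = target*total.
    # Solving for v gives the volume of the weaker solution:
    #   v = total * (high - target) / (high - low)
    v_low = total * (high - target) / (high - low)
    return v_low, total - v_low

low_ml, high_ml = mix_volumes(100, 0.02, 0.07, 0.05)
print(low_ml, high_ml)  # 40 ml of the 2% and 60 ml of the 7% solution
```

This reproduces the weighted-average ratio $2:3$ from the first answer: 40 ml at 2% plus 60 ml at 7% carries $0.8 + 4.2 = 5$ ml of alcohol, i.e. 5% of 100 ml.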
IEE Proceedings H - Microwaves, Antennas and Propagation

• Rigorous field theory analysis of quasiplanar waveguides. Publication Year: 1985, Page(s): 1-6. Cited by: Papers (11). An accurate field theory solution for calculating the hybrid eigenmodes of quasiplanar structures is introduced which includes both the influence of the finite strip thickness and waveguide housing grooves. Using the transverse resonance method the size of the characteristic matrix equation is kept constant even for an increasing number of discontinuities. Dispersion characteristics of various str...

• Finite-element analysis of the shielded cylindrical dielectric resonator. Publication Year: 1985. Cited by: Papers (9). Results are presented of the application of the finite-element method to evaluate the resonant frequency and external Q-factor Qe of a cylindrical dielectric resonator operating in the TE01δ mode, taking into account the effects of the conducting shield and the supporting substrate. Normalised design charts covering a wide range of practical geometrical and physical parameters are presented. Meas...

• Noise parameters of embedded noisy two-port networks. Publication Year: 1985, Page(s): 17-22. Embedding networks such as bias or feedback elements etc. modify the noise performance of a two-port. This influence is investigated for both lumped and distributed embedding networks and formulas are derived for the noise parameters of the embedded network.

• Phase constant characteristics of generalised asymmetric three-coupled microstrip lines. Publication Year: 1985, Page(s): 23-26. Cited by: Papers (5). Phase constant characteristics of generalised asymmetric three-coupled microstrip lines are presented and discussed.
The influence of various structural parameters, as well as the effects of dispersion, are incorporated to show the behaviour of three fundamental guided modes.

• Permittivity measurements using a frequency-tuned microwave TE01 cavity resonator. Publication Year: 1985, Page(s): 27-32. Cited by: Papers (3). A measuring system for determining the complex permittivity of low-loss solids using a frequency-tuned TE01 cavity resonator of fixed length is described. Its mechanical construction is simple, and measurements, in particular of the real part ε′ of the permittivity, can be performed within a relatively short measuring time with high precision (|Δε′/ε′| < 7 × 10⁻⁴), for a large number of f...

• Exact ray path calculations in a modified Bradley/Dudeney model ionosphere. Publication Year: 1985, Page(s): 33-38. Bradley and Dudeney's model of the vertical distribution of the ionospheric electron concentration consists of three distinct sections. Both the E-region and F2-region are described by parabolic variations. A linear increase is assumed to exist between the E-region peak and the F2-region. This model is currently recommended by the International Radio Consultative Committee (CCIR) for use in long-t...

• Model of rainfall-rate distribution for radio system design. Publication Year: 1985, Page(s): 39-43. Cited by: Papers (3). Owing to the interest raised by the precipitation effects in the design of satellite and terrestrial microwave radiolinks, it is necessary to have a simple and valuable mathematical model which can represent, with good accuracy, the whole rain-rate distribution, from lowest rain-rate values to the tail of this distribution. Previously the author suggested a two-parameter model which adequately rep...
• Generalised array pattern synthesis using the projection matrix. Publication Year: 1985, Page(s): 44-46. Cited by: Papers (1). An analytic solution to the problem of array pattern synthesis having null and norm constraints is given. We show that the required complex weight vector can be obtained by finding the eigenvector corresponding to the maximum eigenvalue of a Hermitian matrix. The generality of the method is illustrated by applying it to synthesise patterns with multiple look directions.

• General design of two-reflector antennas. Publication Year: 1985, Page(s): 47-52. A new description of the profiles of the main reflector and subreflector of a dual-reflector antenna is given in terms of a single parameter, the angle subtended by the normal to the main reflector and the axis of the system. It is shown that the same parameter describes the nature of the focusing in both the illuminating beam and the image space. Hence, quite general illumination, focusing and ma...

• Decomposition of antenna signals into plane waves. Publication Year: 1985, Page(s): 53-57. A modification of Prony's method, in which an attempt is made to constrain the roots of the polynomial to the unit circle, is used to resolve a number of plane waves illuminating a uniformly spaced line array. A least squares method based on the eigenstructure of a certain correlation matrix is developed to handle the overdetermined case when there are excess numbers of antenna elements. Angle esti...

• Book review: Microwave Imaging with Large Antenna Arrays. Publication Year: 1985.

• General method for the computation of radiation in stratified media. Publication Year: 1985, Page(s): 58-62. Cited by: Papers (3). A general universal computational method for the general electromagnetic problem in stratified media is established.
Computations are successfully performed for the field of a horizontal magnetic dipole in a microwave substrate backed by a conducting plane. The general field problem in stratified media is expressed in the form of integrals over kp. The integrands involve Bessel functions which are...

• Optimising the synthesis of shaped beam antenna patterns. Publication Year: 1985, Page(s): 63-68. Cited by: Papers (69). A technique is introduced which uses the conventional polynomial representation of the antenna pattern produced by an equispaced linear array. Certain roots are displaced from the unit circle radially, to fill a portion of the pattern which before this displayed lobes interspersed by deep nulls. The angular and radial positions of all the roots are simultaneously adjusted so that the amplitude of...

• Technical memorandum. Fast, accurate computational method for asymmetric coupled microstrip parameters. Publication Year: 1985, Page(s): 69-70. Cited by: Papers (2). An accurate and direct method is introduced for determining the design parameters of asymmetric coupled microstrip lines for any substrate material, in terms of known design parameters for any other substrate material. To check the accuracy of the method two examples are given. In the first example a quartz substrate is considered, while in the second example a teflon substrate is considered. In b...

Aims & Scope: Published from 1985-1993, IEE Proceedings H contained significant and original contributions on microwaves, antenna engineering and radiowave propagation.
# Thread: Functions of several variables

1. ## Functions of several variables

How do you go about solving this? Find the dimensions of the rectangular box of least surface area that has a volume of 1000 cubic inches.

2. Let's call the length, width and height of the box $l,w,h$ respectively. From the formula for volume we obtain $V = l\times w \times h \implies 1000 = l\times w \times h$. From the rule for surface area, $SA = 2(lw+lh+hw)$. No further information is needed: this is an optimization problem in several variables, and the idea is to use one equation to reduce the number of variables in the other. Substituting $h = \frac{1000}{lw}$ gives $SA(l,w) = 2\left(lw + \frac{1000}{w} + \frac{1000}{l}\right)$. Setting the partial derivatives to zero, $\frac{\partial SA}{\partial l} = 2\left(w - \frac{1000}{l^2}\right) = 0$ and $\frac{\partial SA}{\partial w} = 2\left(l - \frac{1000}{w^2}\right) = 0$, gives $l^2 w = l w^2 = 1000$, so $l = w = h = 10$: the box of least surface area is a cube with side 10 inches and surface area 600 square inches.
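A quick numerical sanity check of the answer above (a crude grid search, not a proof): fix $h = 1000/(lw)$ via the volume constraint and scan the surface area over $l$ and $w$.

```python
def surface_area(l, w):
    # Height is forced by the volume constraint l * w * h = 1000.
    h = 1000.0 / (l * w)
    return 2 * (l * w + l * h + w * h)

# Crude grid search over l, w in [5.0, 19.9] in steps of 0.1.
best = min(
    ((surface_area(l / 10, w / 10), l / 10, w / 10)
     for l in range(50, 200) for w in range(50, 200)),
    key=lambda t: t[0],
)
print(best)  # (600.0, 10.0, 10.0): the cube with side 10
```

The minimum lands exactly at $l = w = 10$ with $SA = 600$, matching the calculus result.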
You are given a 0-indexed binary string s and two integers minJump and maxJump. In the beginning, you are standing at index 0, which is equal to '0'. You can move from index i to index j if the following conditions are fulfilled: • i + minJump <= j <= min(i + maxJump, s.length - 1), and • s[j] == '0'. Return true if you can reach index s.length - 1 in s, or false otherwise. Example 1: Input: s = "011010", minJump = 2, maxJump = 3 Output: true Explanation: In the first step, move from index 0 to index 3. In the second step, move from index 3 to index 5. Example 2: Input: s = "01101110", minJump = 2, maxJump = 3 Output: false Constraints: • 2 <= s.length <= 10^5 • s[i] is either '0' or '1'. • s[0] == '0' • 1 <= minJump <= maxJump < s.length

## Solution 1: TreeSet/Deque + Binary Search

Maintain a set of reachable indices so far; for each '0' index, check whether it can be reached from any of the elements in the set. Time complexity: O(nlogn). Space complexity: O(n)

## Solution 2: Queue

Same idea, but we can replace the sorted set in Solution 1 with a plain queue and only check the smallest (front) element in the queue, which brings the time complexity down. Time complexity: O(n). Space complexity: O(n)
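The original post's code is omitted here, so the following is my own reconstruction of the Solution 2 idea in Python: a BFS queue plus a `farthest` pointer so that each index is scanned at most once, giving the stated O(n) time.

```python
from collections import deque

def can_reach(s: str, min_jump: int, max_jump: int) -> bool:
    # BFS over reachable '0' indices. `farthest` records the rightmost
    # index already examined, so every index is inspected at most once.
    n = len(s)
    queue = deque([0])
    farthest = 0
    while queue:
        i = queue.popleft()
        # Only scan indices not already covered by earlier queue entries.
        start = max(i + min_jump, farthest + 1)
        end = min(i + max_jump, n - 1)
        for j in range(start, end + 1):
            if s[j] == '0':
                if j == n - 1:
                    return True
                queue.append(j)
        farthest = max(farthest, end)
    return False
```

On the examples above: `can_reach("011010", 2, 3)` returns `True` (0 → 3 → 5) and `can_reach("01101110", 2, 3)` returns `False`.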
# stat841f14

## Principal components analysis (Lecture 1: Sept. 10, 2014)

### Introduction

Principal Component Analysis (PCA), invented by Karl Pearson in 1901, is a statistical technique for data analysis. Its main purpose is to reduce the dimensionality of the data. Suppose there is a set of data points in a d-dimensional space. The goal of PCA is to find a linear subspace with lower dimensionality $\, p$ ($p \leq d$), such that maximum variance is retained in this lower-dimensional space. The linear subspace can be specified by p orthogonal vectors, such as $\, u_1 , u_2 , ... , u_p$, which form a new coordinate system. Ideally $\, p \ll d$ (the worst case would be to have $\, p = d$). These vectors are called the 'Principal Components'. In other words, PCA aims to reduce the dimensionality of the data while preserving its information (or minimizing the loss of information). Information comes from variation: capturing more variation in the lower-dimensional subspace means that more information about the original data is preserved. For example, if all data points have the same value along one dimension (as depicted in the figures below), then that dimension does not carry any information. To preserve the original information in a lower-dimensional subspace, the subspace needs to contain the components (or dimensions) along which the data has most of its variability; in this case, we can ignore the dimension where all data points have the same value. In practical problems, however, finding a linear subspace of lower dimensionality where no information about the data is lost is not possible, so the loss of information is inevitable. Through PCA, we try to reduce this loss and capture most of the features of the data. The figure below demonstrates an example of PCA: data is transformed from the original 3D space to a 2D coordinate system where each coordinate is a principal component.
Now, consider the ability of two of the above example components (PC1 and PC2) to retain information from the original data. The data in the original space is projected onto each of these two components separately in the figures below. Notice that PC1 is better able to capture the variation in the original data than PC2. If the goal were to reduce the original d=3 dimensional data to p=1 dimension, PC1 would be preferable to PC2. Comparison of two different components used for dimensionality reduction

### PCA Plot example

For example<ref> https://onlinecourses.science.psu.edu/stat857/sites/onlinecourses.science.psu.edu.stat857/files/lesson05/PCA_plot.gif </ref>, in the top left corner of the image below, the point $x_i$ is shown in a two-dimensional space, and its coordinates are $(x_{i,1}, x_{i,2})$. All the red points in the plot are represented by their values on the two original coordinates (Feature 1, Feature 2). PCA uses new coordinates (shown in green). As the coordinate system changes, each point is likewise described by a new pair of coordinate values, for example $(z_{i,1}, z_{i,2})$; these values are determined by the original vector and by the relation between the rotated coordinates and the original ones. In the original coordinates the two features are correlated. After applying PCA, if there are two principal components and we use those as the new coordinates, it is guaranteed that the data along the two coordinates have correlation = 0. By rotating the coordinates we performed an orthonormal transform on the data; after the transform, the correlation is removed, which is the whole point of PCA. As an extreme case, consider points that all lie on a straight line: in the original coordinate system, two coordinates are needed to describe these points.
In short, after rotating the coordinates, we need only one coordinate, and clearly the value on the other one is always zero. This example shows PCA's dimension reduction in an extreme case; in the real world, points may not fit exactly on a line, but only approximately.

### PCA applications

As mentioned, PCA is a method to reduce the dimension of data, where possible, to principal components such that those PCs cover as much of the data's variation as possible. This technique is useful in many types of applications that involve high-dimensional data, such as data pre-processing, neuroscience, computer graphics, meteorology, oceanography, gene expression, economics, and finance, among other applications. Data is usually represented by many variables. In data preprocessing, PCA is a technique for selecting a subset of variables in order to find the best model for the data. In neuroscience, PCA is used to identify the specific properties of a stimulus that increase a neuron's probability of generating an action potential. Figures below show some examples of PCA in real world applications. Click to enlarge.

Tracking the useful features based on PCA: In real-world applications, we need to know which features are important after dimensionality reduction, so that we can use those features as our new data set and continue our project. To do this, we examine the principal components that have been chosen. The entries of a principal component (eigenvector) represent the weights of the features within that component; based on these weights we can tell the importance of each feature in the principal component, and then choose the several most important features from the original data instead of using all of them.

### Mathematical details

PCA is a transformation from the original (d-dimensional) space to a linear subspace with a new (p-dimensional) coordinate system. Each coordinate of this subspace is called a Principal Component.
The first principal component is the coordinate of this system along which the data points have the maximum variation. That is, if we project the data points onto this coordinate, we obtain the maximum variance of the data (compared to projection onto any other vector in the original space). The second principal component is the coordinate in the direction of the second greatest variance of the data, and so on. For example, when mapping from a 3-dimensional space to 2 dimensions, the first two principal components span the 2-dimensional plane that retains the maximum variance when each 3-D data point is mapped to its nearest point on that plane.

Let's denote the basis of the original space by $\mathbf{v_1}$, $\mathbf{v_2}$, ... , $\mathbf{v_d}$. Our goal is to find the principal components (the coordinates of the linear subspace), denoted by $\mathbf{u_1}$, $\mathbf{u_2}$, ... , $\mathbf{u_p}$, with $p \leq d$. First, we would like to obtain the first principal component $\mathbf{u_1}$, the component in the direction of maximum variance. This component is a vector in the original space and so can be written as a linear combination of the original basis:

$\mathbf{u_1}=w_1\mathbf{v_1}+w_2\mathbf{v_2}+...+w_d\mathbf{v_d}$

The vector $\mathbf{w}$ contains the weight of each basis vector in this combination:

$\mathbf{w}=\begin{bmatrix} w_1\\w_2\\w_3\\...\\w_d \end{bmatrix}$

Suppose we have n data points in the original space, represented by $\mathbf{x_1}$, $\mathbf{x_2}$, ..., $\mathbf{x_n}$. The projection of each point $\mathbf{x_i}$ onto $\mathbf{u_1}$ is $\mathbf{w}^T\mathbf{x_i}$. Let $\mathbf{S}$ be the sample covariance matrix of the data points in the original space. The variance of the projected data points, denoted by $\Phi$, is

$\Phi = Var(\mathbf{w}^T \mathbf{x_i}) = \mathbf{w}^T \mathbf{S} \mathbf{w}$

We would like to maximize $\Phi$ over the set of all vectors $\mathbf{w}$ in the original space.
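The identity $Var(\mathbf{w}^T \mathbf{x_i}) = \mathbf{w}^T \mathbf{S} \mathbf{w}$ stated above can be verified directly. A minimal Python sketch (synthetic data and an arbitrary direction $\mathbf{w}$, both illustrative assumptions):

```python
import numpy as np

# 200 points in d = 3 dimensions with different scales per axis (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.diag([2.0, 1.0, 0.5])

S = np.cov(X, rowvar=False)        # sample covariance matrix (ddof = 1)
w = np.array([0.3, -0.8, 0.5])     # an arbitrary, hypothetical direction

proj = X @ w                       # w^T x_i for every data point
lhs = proj.var(ddof=1)             # Var(w^T x_i)
rhs = w @ S @ w                    # w^T S w
print(np.isclose(lhs, rhs))
```

The identity holds exactly (up to floating-point rounding) for any $\mathbf{w}$, not only unit vectors; the unit-norm constraint enters only in the maximization that follows.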
But this problem is not yet well-defined, because for any choice of $\mathbf{w}$ we can increase $\Phi$ simply by multiplying $\mathbf{w}$ by a scalar greater than one, so the maximum goes to infinity, which is not a proper solution to our problem. We therefore add the following constraint to bound the length of the vector $\mathbf{w}$:

$max_w \Phi = \mathbf{w}^T \mathbf{S} \mathbf{w}$

subject to: $\mathbf{w}^T \mathbf{w} =1$

This constraint makes $\mathbf{w}$ a unit vector in d-dimensional Euclidean space, and as a result the optimization problem is now constrained. Using the Lagrange multiplier technique we have:

$L(\mathbf{w} , \lambda ) = \mathbf{w}^T \mathbf{S} \mathbf{w} - \lambda (\mathbf{w}^T \mathbf{w} - 1 )$

Taking the derivative of $L$ w.r.t. the primal variable $\mathbf{w}$ gives:

$\frac{\partial L}{\partial \mathbf{w}}=( \mathbf{S}^T + \mathbf{S})\mathbf{w} -2\lambda\mathbf{w}= 2\mathbf{S}\mathbf{w} -2\lambda\mathbf{w}= 0$

Note that $\mathbf{S}$ is symmetric, so $\mathbf{S}^T = \mathbf{S}$. From the above equation we have $\mathbf{S} \mathbf{w} = \lambda\mathbf{w}$, so $\mathbf{w}$ is an eigenvector and $\lambda$ an eigenvalue of the matrix $\mathbf{S}$. (Taking the derivative of $L$ w.r.t. $\lambda$ just regenerates the constraint of the optimization problem.) Multiplying both sides of the above equation by $\mathbf{w}^T$ and using the constraint, we obtain:

$\Phi = \mathbf{w}^T \mathbf{S} \mathbf{w} = \mathbf{w}^T \lambda \mathbf{w} = \lambda \mathbf{w}^T \mathbf{w} = \lambda$

The interesting result is that the objective function equals an eigenvalue of the covariance matrix. So, to obtain the first principal component, which maximizes the objective function, we just need the eigenvector corresponding to the largest eigenvalue of $\mathbf{S}$. Subsequently, the second principal component is the eigenvector corresponding to the second largest eigenvalue, and so on.
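The two conclusions of this derivation, that $\Phi = \lambda$ for a unit eigenvector of $\mathbf{S}$, and that the largest-eigenvalue eigenvector maximizes the projected variance, can be checked numerically. A Python sketch on synthetic data (seed, sizes, and names are illustrative assumptions):

```python
import numpy as np

# Anisotropic synthetic data: 300 points in d = 4 dimensions.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4)) @ np.diag([3.0, 1.5, 1.0, 0.2])
S = np.cov(X, rowvar=False)

# (1) For every unit eigenvector w of S, the projected variance
#     w^T S w equals the corresponding eigenvalue (Phi = lambda).
eigvals, eigvecs = np.linalg.eigh(S)          # eigenvalues in ascending order
for lam, w in zip(eigvals, eigvecs.T):        # each column is a unit eigenvector
    assert abs(w @ S @ w - lam) < 1e-10

# (2) No random unit vector attains more projected variance than the
#     eigenvector of the largest eigenvalue (the first principal component).
pc1 = eigvecs[:, -1]
trials = rng.normal(size=(2000, 4))
trials /= np.linalg.norm(trials, axis=1, keepdims=True)   # random unit vectors
best_random = max(float(w @ S @ w) for w in trials)
print(best_random <= pc1 @ S @ pc1 + 1e-12)
```

Check (2) is guaranteed in exact arithmetic: for a unit vector, $\mathbf{w}^T \mathbf{S} \mathbf{w}$ is a Rayleigh quotient and is bounded by the largest eigenvalue.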
### Principal component extraction using singular value decomposition

Singular value decomposition (SVD) is a well-known way to decompose any m by n matrix $\mathbf{A}$ into three useful matrices:

$\mathbf{A} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^T$

where $\mathbf{U}$ is an m by m unitary matrix, $\mathbf{UU^T} = I_m$, and each column of $\mathbf{U}$ is an eigenvector of $\mathbf{AA^T}$; $\mathbf{\Sigma}$ is an m by n diagonal matrix whose non-zero elements are the square roots of the eigenvalues of $\mathbf{AA^T}$; and $\mathbf{V}$ is an n by n unitary matrix, $\mathbf{VV^T} = I_n$, each column of which is an eigenvector of $\mathbf{A^TA}$.

Comparing the concepts of PCA and SVD, one finds that PCA can be performed using SVD. Let's construct a d by n matrix from our n data points such that each column of this matrix represents one data point in the d-dimensional space:

$\mathbf{X} = [\mathbf{x_1} \mathbf{x_2} .... \mathbf{x_n} ]$

and make another matrix $\mathbf{X}^*$ simply by subtracting the mean of the data points from $\mathbf{X}$:

$\mathbf{X}^* = \mathbf{X} - \mu_X[1, 1, ... , 1], \quad \mu_X = \frac{1}{n} \sum_{i=1}^n \mathbf{x_i}$

This gives a zero-mean version of our data points, for which $\mathbf{X^*} \mathbf{X^{*^T}}$ is proportional to the sample covariance matrix (the constant factor $\frac{1}{n-1}$ changes the eigenvalues only by that factor and leaves the eigenvectors unchanged). So, applying SVD to $\mathbf{X^*}$, the columns of $\mathbf{U}$ give the eigenvectors of the covariance matrix and the squared diagonal entries of $\mathbf{\Sigma}$ give the corresponding eigenvalues (up to that factor). We can then easily extract the desired principal components.

### MATLAB example

In this example we use different pictures of a man's face with different facial expressions. The pictures are noisy, so the face is not easily recognizable. The data set consists of 1965 pictures, each 20 by 28 pixels, so the dimensionality is 20*28 = 560. Our goal is to remove the noise using PCA, by means of the svd function in MATLAB. We know that noise is not the main feature that makes these pictures look different, so the noise lies mostly among the components of least variance.
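Before the MATLAB session, the SVD route just described can be sketched in Python as a sanity check: center a d by n data matrix, take its SVD, and compare the squared singular values against the eigenvalues of $\mathbf{X^*}\mathbf{X^{*T}}$. The synthetic data and all names here are illustrative assumptions, not the face data from the notes:

```python
import numpy as np

# d = 3 features, n = 500 data points stored as columns, with
# different scales per feature (illustrative synthetic data).
rng = np.random.default_rng(4)
d, n = 3, 500
X = rng.normal(size=(d, n)) * np.array([[2.0], [1.0], [0.5]])

Xc = X - X.mean(axis=1, keepdims=True)          # subtract the mean of the data points
U, sigma, Vt = np.linalg.svd(Xc, full_matrices=False)

# The eigenvalues of Xc Xc^T are the squared singular values, and the
# columns of U are the corresponding eigenvectors (principal components).
eigvals = np.linalg.eigvalsh(Xc @ Xc.T)[::-1]   # sort in descending order
print(np.allclose(eigvals, sigma**2))
```

The columns of `U` ordered by `sigma` are the principal components; dividing `sigma**2` by `n - 1` recovers the sample-covariance eigenvalues.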
We first extract the principal components and then remove those corresponding to the smallest eigenvalues of the covariance matrix. Here the eigenvalues are sorted in descending order along the diagonal of the matrix S. For example, the first column of matrix U, which corresponds to the first element of S, is the first principal component. We then reconstruct the pictures using the first d principal components. If d is too large, we cannot completely remove the noise; if it is too small, we lose some information from the original data, for example the facial expression (smile, sadness, etc.). We can adjust d until we achieve a reasonable balance between noise reduction and information loss.

File:noisy-face.jpg
Noise-reduced version of a picture in the MATLAB example

>> % load the noisy data; the file "noisy" stores our variable X, which contains the pictures
>> load noisy
>>
>> % show a sample image stored in column 1 of matrix X
>> imagesc(reshape(X(:,1),20,28)')
>>
>> % set the color of the image to grayscale
>> colormap gray
>>
>> % perform SVD; if matrix X is full rank, we obtain 560 PCs
>> [U S V] = svd(X);
>>
>> d = 10;
>>
>> % reconstruct X using only the first d principal components, removing noise
>> XX = U(:, 1:d)* S(1:d,1:d)*V(:,1:d)' ;
>>
>> % show the image in column 1 of XX, which is a noise-reduced version
>> imagesc(reshape(XX(:,1),20,28)')

## PCA continued, Lagrange multipliers, singular value decomposition (SVD) (Lecture 2: Sept. 15, 2014)

### Principal component analysis (continued)

PCA is a method for reducing the dimension of data or extracting features from data. Given a data point in vector form $x$, the goal of PCA is to map $x$ to $y$, where $x \in \mathbb{R}^d$ and $y \in \mathbb{R}^p$ with $p$ much less than $d$. For example, we could map a two-dimensional data point onto any one-dimensional vector in the original 2D space, or we could map a three-dimensional data point onto any 2D plane in the original 3D space.
The transformation from the d-dimensional space to the p-dimensional space is chosen in such a way that maximum variance is retained in the lower-dimensional subspace. This is useful because variance represents the main differences between the data points, which is exactly what we are trying to capture when we perform data visualization. For example, when taking 64-dimensional images of hand-written digits, projecting them onto 2-dimensional space using PCA captures a significant amount of the structure we care about.

In terms of data visualization, it is fairly clear why PCA is important. High-dimensional data can be impossible to visualize. Even a 3D space can be fairly difficult to visualize, while a 2D space, which can be printed on a piece of paper, is much easier. If a higher-dimensional data set can be reduced to only 2 dimensions, it can easily be represented by plots and graphs.

In the case of many data points $x_i$ in d-dimensional space:

$X_{d \times n} = [x_1 \, x_2 \, ... \, x_n]$

There are infinitely many vectors in $\mathbb{R}^p$ onto which the points in $X$ can be mapped, and information is lost in any such mapping. To preserve as much information as possible, the points are mapped to the directions in $\mathbb{R}^p$ along which the projected data retain the maximum variance.
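The d-to-p mapping described above can be sketched concretely: project the data onto the top p eigenvectors of the covariance matrix and measure how much of the total variance survives. A minimal Python sketch on synthetic data (the sizes, scales, and seed are illustrative assumptions):

```python
import numpy as np

# 400 points in d = 5 dimensions; almost all variance lives in two axes.
rng = np.random.default_rng(5)
d, p, n = 5, 2, 400
X = rng.normal(size=(n, d)) * np.array([4.0, 2.0, 0.3, 0.2, 0.1])

Xc = X - X.mean(axis=0)
S = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(S)   # ascending eigenvalues
W = eigvecs[:, -p:]                    # d x p matrix of the top-p principal components
Y = Xc @ W                             # n x p reduced representation

# Fraction of the total variance retained by the p-dimensional projection.
retained = Y.var(axis=0, ddof=1).sum() / np.trace(S)
print(retained > 0.95)
```

Because the data were built so that two axes dominate, the two leading principal components retain nearly all of the variance; with real data one inspects `retained` (or the sorted eigenvalues) to choose p.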
{}
Referencing is a key element of your assessed work at university. It involves acknowledging your sources of information/ideas/theories. Absent or inadequate referencing shows a lack of academic integrity. This book will explain:
1. Why referencing is important
2. What referencing involves
   • How to reference in your text
   • How to compile a final reference list
3. How to reference using Harvard and APA referencing styles
{}
## IBPS Quant Test 21

Instructions: In these questions, two equations numbered I and II are given. You have to solve both the equations and mark the appropriate option.
a: If x > y
b: If x < y
c: If x ≥ y
d: If x ≤ y
e: If x = y or the relationship cannot be established

Question 1
I. $$x^{2} - 11x + 30 = 0$$
II. $$2y^{2} - 9y + 10 = 0$$

Question 2
I. $$15x^{2} + 8x + 1 = 0$$
II. $$3y^{2} + 14y + 8 = 0$$

Question 3
I. $$4x^{2} - 17x + 18 = 0$$
II. $$2y^{2} - 21y + 40 = 0$$

Question 4
I. $$6x^{2} - 25x + 14 = 0$$
II. $$9y^{2} - 9y + 2 = 0$$

Question 5
I. $$8x^{2} + 25x + 3 = 0$$
II. $$2y^{2} + 17y + 30 = 0$$
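Questions of this form can be answered mechanically: solve each quadratic with the quadratic formula, then compare every root of x against every root of y. A minimal Python sketch (the helper names `roots` and `compare` are mine, not part of the test material):

```python
import math

def roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    return [(-b + math.sqrt(disc)) / (2 * a), (-b - math.sqrt(disc)) / (2 * a)]

def compare(xcoef, ycoef):
    """Relation between all x-roots and all y-roots of two quadratics."""
    xs, ys = roots(*xcoef), roots(*ycoef)
    pairs = [(x, y) for x in xs for y in ys]
    # Check the strict relations first, then the weak ones.
    if all(x > y for x, y in pairs):
        return "x > y"
    if all(x < y for x, y in pairs):
        return "x < y"
    if all(x >= y for x, y in pairs):
        return "x >= y"
    if all(x <= y for x, y in pairs):
        return "x <= y"
    return "cannot be established"
```

For Question 1, $x^{2} - 11x + 30 = 0$ has roots 5 and 6, while $2y^{2} - 9y + 10 = 0$ has roots 2 and 2.5, so every x exceeds every y and the answer is option a.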
{}
program to detect hooks/detours/hijacked functions

Recommended Posts

nuclear123    119
I'm curious as to how this can be done. Say I wanted to make a program that will scan for hooks made by another program. Any advice/knowledge on where to start, or on how I could perform such a task, would be greatly appreciated! hook/detour/hijacked function

Anddos    588
I remember using a program called HookShark; search for that on Google.

Atrix256    539
It isn't exactly what you asked for, but a good way to detect whether people are tampering with your memory is to define important variables in several places and check every so often that they are all equal to each other. For instance, you could have 3 variables holding the player's health, stored in different areas of memory. Every time your program changes the health, it changes all 3 places. Then, every so many game loops, you check that all 3 places hold the same value. To get trickier, you could semi-encrypt the data in each place and "unencrypt" it for the comparison. Just be careful that your "encryption" is 100% reversible. If you divide a number by 2 to "encrypt" it, then later multiply it by 2 to see if it's the right number, you have lost data in the division, even with floating-point numbers. You might do something like this (assuming ints) for your 3 health variables (pseudocode):

Writing:

    void SetHealth(int NewHealth)
    {
        Health        = NewHealth;
        SecureHealth1 = NewHealth + 20;
        SecureHealth2 = NewHealth ^ 42;
    }

Verifying:

    bool HealthHasBeenTamperedWith(void)
    {
        /* Note the parentheses around the XOR: in C, != binds tighter
           than ^, so "Health != SecureHealth2 ^ 42" would do the
           comparison first and XOR the boolean result afterwards. */
        return (Health != SecureHealth1 - 20) || (Health != (SecureHealth2 ^ 42));
    }

Doing this sort of thing can help thwart people who try to edit memory while the game is running to cheat that way.

nuclear123    119
Thanks for the useful insight! As for HookShark, yes, I've heard of it; I just doubt I have sufficient knowledge to reverse-engineer it to understand how it works.
If anyone else has advice/info, please let me know. Thanks!
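The mirrored-variable scheme described in the reply above can be sketched in Python (the class and method names are illustrative only, not from any real anti-cheat API; a real implementation would live in the game's native code):

```python
class GuardedHealth:
    """Store one value in three forms; a memory edit that changes only
    the primary copy leaves the mirrors inconsistent."""

    OFFSET = 20
    XOR_KEY = 42

    def __init__(self, value):
        self.set(value)

    def set(self, value):
        # Update all three copies together, as the forum reply suggests.
        self.health = value
        self.secure1 = value + self.OFFSET
        self.secure2 = value ^ self.XOR_KEY

    def tampered(self):
        # Both transforms are exactly reversible, unlike e.g. integer
        # division, which loses information.
        return (self.health != self.secure1 - self.OFFSET or
                self.health != (self.secure2 ^ self.XOR_KEY))
```

Setting `health` directly, the way a memory editor would, makes `tampered()` return `True`, while going through `set()` keeps all three copies consistent.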
{}
## Convergence and Asymptotic Results
September 24, 2015 By
$\overline{X}_n\ \xrightarrow{\text{a.s.}}\ \mathbb{E}(X)$
Last week, in our mathematical statistics course, we've seen the law of large numbers (that was proven in the probability course), claiming that given a collection of i.i.d. random variables, with To visualize that convergence, we can use

    > m=100
    > mean_samples=function(n=10){
    +   X=matrix(rnorm(n*m),nrow=m,ncol=n)
    +   return(apply(X,1,mean))
    + }
    > B=matrix(NA,100,20)
    > for(i in 1:20){
    +   B[,i]=mean_samples(i*10)   # store each sample size in its own column
    + }
    > colnames(B)=as.character(seq(10,200,by=10))
    > boxplot(B)

It is...

## Rentrez 1.0 released
September 24, 2015 By
A new version of rentrez, our package for the NCBI's EUtils API, is making its way around the CRAN mirrors. This release represents a substantial improvement to rentrez, including a new vignette that documents the whole package. This post describes some of the new things in rentrez, and gives us a chance to thank some of the people that have contributed to...

## Chinese R conference
September 24, 2015 By
I will be speaking at the Chinese R conference in Nanchang, to be held on 24–25 October, on "Forecasting Big Time Series Data using R". Details (for those who can read Chinese) are at china-r.org.

## Running Back and Wide Receiver Gold Mining – Week 3
September 23, 2015 By
The graphs below summarize the projections from a variety of sources. This week's summary includes projections from: CBS: CBS Average, Yahoo Sports, NFL, FOX Sports, NumberFire, FantasySharks, ESPN and FantasyFootballNerd. The post Running Back and Wide Receiver Gold Mining – Week 3 appeared first on Fantasy Football Analytics.

## subsetting data in ggtree
September 23, 2015 By
Subsetting is commonly used in ggtree as we would like to, for example, separate internal nodes from tips. We may also want to display annotation for specific node(s)/tip(s). Some software may store clade information (e.g. bootstrap values) as internal node labels.
Indeed we want to manipulate such information and taxa labels separately. Read More: 962 Words... ## Are you headed to Strata? It’s next week! September 23, 2015 By RStudio will again teach the new essentials for doing (big) data science in R at this year’s Strata NYC conference, September 29 2015 (http://strataconf.com/big-data-conference-ny-2015/public/schedule/detail/44154).  You will learn from Garrett Grolemund, Yihui Xie, and Nathan Stephens who are all working on fascinating new ways to keep the R ecosystem apace of the challenges facing those who work with data. Topics include: R Quickstart: Wrangle, ## Interpolation and smoothing functions in base R September 23, 2015 By by Andrie de Vries Every once in a while I try to remember how to do interpolation using R. This is not something I do frequently in my workflow, so I do the usual sequence of finding the appropriate help page: ?interpolate Help pages: stats::approx Interpolation Functions stats::NLSstClosestX Inverse Interpolation stats::spline Interpolating Splines So, the help tells me to... ## Kasseler useR Group: Data Science and Networking September 23, 2015 By From October, the Kasseler useR Group meeting will be held on the second Wednesday of each month at 6.30 pm. The events will take place at Science Park Kassel. The Kasseler useR Group supports active exchange of information between R users. 
Discussions about experiences with R and news of R are appreciated as well as ## Using mutate from dplyr inside a function: getting around non-standard evaluation September 23, 2015 By To edit or add columns to a data.frame, you can use mutate from the dplyr package: Here, dplyr uses non-standard evaluation in finding the contents for mpg and wt, knowing that it needs to look in the context of… See more › ## Simulating backtests of stock returns using Monte-Carlo and snowfall in parallel September 23, 2015 By You could say that the following post is an answer/comment/addition to Quintuitive, though I would consider it as a small introduction to parallel computing with snowfall using the thoughts of Quintuitive as an example. A quick recap: Say you create a model that is able to forecast 60% of market directions (that is, in 6 ## Fitting a neural network in R; neuralnet package September 23, 2015 By Neural networks have always been one of the most fascinating machine learning model in my opinion, not only because of the fancy backpropagation algorithm, but also because of their complexity (think of deep learning with many hidden layers) and structure inspired by the brain. Neural networks have not always been popular, partly because they were, ## Post-doc Researcher in Big Data Analytics! September 23, 2015 By DESPINA Big Data Lab at the Department of Economics and ## More on the Heteroscedasticity Issue September 22, 2015 By In my last post, I dsciussed R software, including mine, that handles heteroscedastic settings for linear and nonlinear regression models. Several readers had interesting comments and questions, which I will address here. To review: Though most books and software assume homoscedasticity, i.e. constancy of the variance of the response variable at all levels of the … Continue reading... ## Version 0.9.0 of eeptools released! 
September 22, 2015 By A long overdue overhaul of my eeptools package for R was released to CRAN today and should be showing up in the mirrors soon. The release notes for this version are extensive as this represents a modernization of the package infrastructure and the reim... ## How do you know if your model is going to work? September 22, 2015 By Authors: John Mount (more articles) and Nina Zumel (more articles). Our four part article series collected into one piece. Part 1: The problem Part 2: In-training set measures Part 3: Out of sample procedures Part 4: Cross-validation techniques “Essentially, all models are wrong, but some are useful.” George Box Here’s a caricature of a data … Continue reading... ## How do you know if your model is going to work? Part 4: Cross-validation techniques September 22, 2015 By by John Mount (more articles) and Nina Zumel (more articles). In this article we conclude our four part series on basic model testing. When fitting and selecting models in a data science project, how do you know that your final model is good? And how sure are you that it's better than the models that you rejected? In this... ## Parsing a large amount of characters into a POSIXct object September 22, 2015 By When trying to parse a large amount of datetime characters into POSXIct objects, it struck me that strftime and as.POSIXct where actually quite slow. When using the parsing functions from lubridate, these where a lot faster. The following benchmark shows… See more › ## Drug Interaction Studies – Statistical Analysis September 22, 2015 By This post is actually a continuation of the previous post, and is motivated by this article that discusses the graphics and statistical analysis for a two treatment, two period, two sequence (2x2x2) crossover drug interaction study of a new treatment versus the standard. 
Whereas the previous post was devoted to implementing some of the graphics ## Rummaging through dusty books: Maucha diagrams in R September 22, 2015 By Do you know the Maucha diagram? If you are not an Hungarian limnologist, probably not! This diagram was proposed by Rezso Maucha in 1932 as a way to vizualise the relative ionic composition of water samples. However, as far I … Lire la suite → ## Upcoming talks in California September 22, 2015 By I’m back in California for the next couple of weeks, and will give the following talk at Stanford and UC-Davis. Optimal forecast reconciliation for big time series data Time series can often be naturally disaggregated in a hierarchical or grouped structure. For example, a manufacturing company can disaggregate total demand for their products by country of ## Notes from the Kölner R meeting, 18 September 2015 September 22, 2015 By Last Friday the Cologne R user group came together for the 15th time. Since its inception over three years ago the group evolved from a small gathering in a pub into an active data science community, covering wider topics than just R. Still, R is the link and clue between the different interests. Last Friday's agenda was a... ## How do you know if your model is going to work? Part 4: Cross-validation techniques September 21, 2015 By Authors: John Mount (more articles) and Nina Zumel (more articles). In this article we conclude our four part series on basic model testing. When fitting and selecting models in a data science project, how do you know that your final model is good? And how sure are you that it’s better than the models that … Continue reading... ## EARL London 2015: Our Highlights September 21, 2015 By We were overwhelmed by the positive comments from attendees at last week’s EARL conference in London. 
We are in the process of collecting survey responses from all delegates, but in the meantime a quick straw poll at Mango … Continue reading → ## Applications of R at EARL 2015 September 21, 2015 By The Effective Applications of R (EARL) Conference (held last week in London) is well-named. At the event I saw many examples of R being used to solve real-world industry problems with advanced statistics and data visualization. Here are just a few examples: AstraZeneca, the pharmaceutical company, uses R to design clinical trials, and to predict the ending date of... ## 4 new R jobs (from R-users.com ; 2015-09-21) September 21, 2015 By This is the bimonthly post (for 2015-09-21) for new R Jobs from R-users.com. Employers: you may visit this link to post a new R job to the R community (it’s free and quick). Job seekers: please follow the links below to learn more and apply for your job of interest (or go to R-users.com to see all the R jobs that are currently available) Full-Time Programmatic Marketing: Data Scientist – @... ## Warsaw Meetings of R Users / Warszawskie Spotkania Entuzjastów R September 20, 2015 By With the summer holiday season coming to an end, we are back with Warsaw Meetings of R Users (Warszawskie Spotkania Entuzjastów R). Three meetings ahead: September 26 th (this Saturday) – let’s start with data-hack-day (DHD). Having data from Polish Seym (votes and transcripts), we are going to prepare some nice summaries of last cadency. … Czytaj dalej... ## Working With SEM Keywords in R September 20, 2015 By The following post is taken from two previous posts from an older blog of mine that is no longer available. These are from several years ago, and related to two critical questions that I encountered. One, how can I automatically generate hundreds of thousands of keywords for a search engine marketing campaign. 
Two, how can I ## Six lines to install and start SparkR on Mac OS X Yosemite September 20, 2015 By I know there are many R users who like to test out SparkR without all the configuration hassle. Just these six lines and you can start SparkR from both RStudio and command line. One line for Spark and SparkR Apache Spark is a fast and general-purpose cluster computing system SparkR is an R package that...
{}
# Why are physicists interested in graph theory?

Can you tell me how graph theory comes into physics, and the concept of small-world graphs? (Inspired to ask by a comment from Sean Tilson in:) Which areas in physics overlap with those of social network theory for the analysis of graphs? Best, -

Both answers have nothing to do with complex network theory, which the concept of "small world graphs" originates from. –  kennytm Dec 13 '10 at 16:10
@KennyTM: well, I don't know anything about that topic (at least under this terminology), so I answered the question in the title and the question of how graph theory comes into physics. Perhaps a little clarification on the side of the OP wouldn't hurt. Still, I think the answers are quite useful (even if they answer something a little different). –  Marek Dec 13 '10 at 16:37
@KennyTM: but reading about the references further, I think I've seen this stuff come up at some place or other in statistical physics. I'll try to look up the references and perhaps will provide another answer more directly connected to these topics. –  Marek Dec 13 '10 at 16:40
@Marek: I know the topic (BTW you can find lots of papers about it on PRE) but I don't know why physicists study them. :) –  kennytm Dec 13 '10 at 17:51
I hope the advice ends up being beneficial. –  Sean Tilson Dec 13 '10 at 18:30

I don't know why physicists are interested in complex network theory, but well, whenever you can create a physical model describing some behavior you could call it "physics" (econophysics, sociophysics, etc.), and this is likely the reason why they study complex networks. I will just answer the 2nd part of the question: the concept of the small-world network.

## Short average distance

The small-world phenomenon is best known as the six degrees of separation, i.e. two persons are related to each other by at most 6 steps in a network of human relationships.
In this network, people are represented by nodes, and if two people know each other directly we create a link between the two nodes. If there are two nodes without a direct link, we can take other routes; the smallest number of links that need to be traversed is called the distance (a.k.a. path length) between the two nodes. A small-world network (of size N) needs to satisfy the condition that the average distance $\langle\ell\rangle$ over all possible node pairs is "short", i.e. $\langle\ell\rangle \sim \log N$. Many natural and artificial complex networks, like the human relationship network above, neural networks, metabolic networks, etc., have a short average distance, which is why this property is studied: physicists wanted a model that describes most of these complex networks. The Erdős-Rényi model, which is essentially a network in which links are created with some constant probability, has the short-average-distance property.

## High clustering coefficient

The ER model was one of the first models used to describe real-world complex networks, but it was soon found to be insufficient because it lacks another important property: a "high" clustering coefficient. The (local) clustering coefficient was introduced by Duncan J. Watts and Steven Strogatz in 1998 as a measure of how well nodes are "clustered" together locally. It is defined as $$c_{v} = \frac{\text{number of triangles containing }v}{\frac{k_v(k_v-1)}2}$$ where $k_v$ is the number of links connected to $v$, i.e. its degree. The local clustering coefficient is actually the ratio of "neighbors of $v$ that know each other" to the "maximum number of potential neighbors of $v$ that can know each other".
For example, in this graph: the clustering coefficient of the central green node is $$c = \frac{\text{number of triangles}}{\frac{k(k-1)}2} = \frac{4}{\frac{5(5-1)}2} = 0.4.$$ If the local clustering coefficient is 0, then none of your friends know each other, which is not what we see in social networks. The expected behavior is a large clustering coefficient, where groups of your friends know each other (thus forming triangles: A knows B, B knows C, C knows A). This is where the ER model breaks down: its clustering coefficient is close to zero for the same number of nodes and links as a real network. When a network has both a short average distance and a high average local clustering coefficient, we call it a small-world network.

## And beyond

The Watts-Strogatz model was invented to address the small-world property. However, it was soon determined that even the WS model is not good enough, as it is not scale-free. The Barabási-Albert model was then created to describe why real-world networks are both small-world and scale-free, although it also cannot explain other properties like the clustering coefficient distribution, hierarchical structure, etc., and of course more and more sophisticated models have been proposed as well. In the end, these properties are studied to construct a universal model that describes all (or at least "most") real-world complex networks, and to use it to test and improve behaviors such as error and attack tolerance, evolution dynamics, etc. If you don't fear a lot of mathematics, the 2002 review paper Statistical mechanics of complex networks by Réka Albert and Albert-László Barabási is (IMO) a must-read classic for anyone beginning to study complex networks. All of the above can be found in this paper.

- +1 Although there is no physics in this I like the answer very much. Also, I took the liberty of fixing some typos and one link; I hope you don't mind.
–  Marek Dec 13 '10 at 21:55
By the way, do you know whether there is any connection to things like $k$-SAT and graph coloring problems? I know some of these problems (which seem to be purely in the domain of complexity theory) were studied by methods very close in spirit to statistical physics. In particular, random $k$-SAT problems can be shown to have different phases characterized as "clustered" and "frozen" (among others). See e.g. this paper. Of course, this was made quite famous recently because of a $P\neq NP$ attack of Deolalikar. –  Marek Dec 13 '10 at 22:00
This is a very nice answer +1 –  user346 Dec 13 '10 at 22:50
@Marek: Thanks. I don't think it is related to algorithmic/combinatorics stuff like k-SAT and graph coloring. –  kennytm Dec 14 '10 at 15:02

Richard Feynman reformulated quantum mechanics (and quantum field theory) in terms of a path integral, meaning that in order to find the likelihood of some process occurring, you take a kind of weighted average over all potential trajectories. The weighting function is the exponentiated "action", $\exp(iS/\hbar)$, and the dominant contribution comes from paths which extremize this function, i.e. classical trajectories. Typically -- almost always -- this integral is too hard for anyone to do (let alone define rigorously), so Feynman developed a perturbation theory, an expansion in terms of graphs. The nature of the graphs depends on the interactions and coupling constants of your model -- that is, on the action. An (oversimplified) example: Suppose you only had one degree of freedom, x, and the action is $S_0(x) = i\hbar x^2/2$, so that $\exp(iS_0/\hbar) = \exp(-x^2/2).$ (You can ignore $\hbar$ in this example.) Then the path integral is $\int \exp(-x^2/2)\, dx$ and equals $\sqrt{2\pi}$.
However, if we add a cubic "interaction" term, so $S = S_0 - i\hbar a x^3$, then we can expand $\int \exp(iS/\hbar)$ in powers of $a$, the first nonzero contribution being $\frac{a^2}{2} \int (x^3)^2 \exp(-x^2/2)\, dx,$ which you can do exactly. (The term linear in $a$ is zero because the three powers of $x$ can't be paired up ["contracted"].) The graph for this term has two vertices (the two powers of $a$), each with three edges attached (the three powers of $x$ in the interaction term). So graphs are ubiquitous in QFT!

- Not sure this is what you are aiming at, but graphs are ubiquitous in statistical physics! They really crop up all over the place. To give you some ideas from the top of my head:

## Probability

Classical statistical physics is built on the concept of microstates. Each microstate, with a certain energy $E$, gets a probability $P = {1 \over Z} \exp(-\beta E)$ where $\beta$ is a parameter depending on the temperature of the system and $$Z = \sum_{\text{microstates}} \exp(-\beta E)$$ is the partition function. Once you make this assignment of probabilities you can ask for things like the average energy of the system, fluctuations, and lots of other interesting stuff. And all of that stuff can be computed easily if you can somehow carry out the summation and determine the partition function. That is, the problem can often (at least in the discrete case) be reduced to counting, which means combinatorics. And often the combinatorial problem has to do with graph theory (either directly or by means of some clever duality).

## Crystals

Statistical physics often investigates lattices. This is because they model crystals or other ordered forms of matter. These are very special graphs that possess translational symmetries (and more generally also some rotational and reflectional symmetries; think about the hexagonal lattice). Once again you can define the energy of microstates as in the first example and proceed to formulate the probabilistic problem.
But it can be observed that lots of methods that work for investigation of the system on the lattice can actually be generalized to arbitrary graphs.

## Arbitrary graphs

Perhaps the nicest connection (and one positively stunning if you haven't heard about it before) is between the correlation function of the Gaussian free field on a graph and random walks on the same graph (see e.g. this recent blog on the topic). This is a discrete version of the Feynman path integral, which gives you probability amplitudes of a particle getting from one place to another in terms of a sum over every path between the two points.

## Polymer model

One more model I'd like to mention is the polymer model. The idea is that you have some objects, called polymers, that usually live on some kind of lattice (imagine e.g. cycles on the edges of the hexagonal lattice). Now the requirement is that these objects do not occupy the same part of the lattice (i.e. they don't intersect). This idea can be rigorously captured by means of a huge infinite-dimensional graph where the vertices are all of the possible polymers, with edges between any two of them that are not compatible (that is, when they intersect on the original lattice). This looks like a hard problem but actually it can be investigated by means of cluster expansion. To give (an oversimplified) idea: it all stems from the simple but very useful combinatorial identity $$\exp\Big(\sum_i x_i\Big) = \sum_N {1 \over N!} \Big(\sum_i x_i\Big)^N = \sum_N \sum_{i_1 + \cdots + i_n = N} \prod_k {x_k^{i_k} \over i_k!}$$ Using this, one can transform the partition function $Z$ (which is an ugly sum over all possible subgraphs of the original huge graph) into an exponential of a sum over just the connected subgraphs of the polymer graph. Now connected subgraphs are much nicer objects than arbitrary subgraphs, and one indeed obtains nice results by expanding the logarithm of the partition function in this way.
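The "reduce the problem to counting" point above can be made concrete with a toy example: enumerating the microstates of three Ising-like spins on a line and computing the partition function and average energy by brute force. This is a generic illustration of my own, not a model taken from the answer:

```python
import itertools
import math

def partition_function(beta, J=1.0, n=3):
    """Z = sum over all 2^n spin microstates of exp(-beta * E),
    with nearest-neighbour energy E = -J * sum_i s_i * s_{i+1}."""
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=n):
        E = -J * sum(spins[i] * spins[i + 1] for i in range(n - 1))
        Z += math.exp(-beta * E)
    return Z

def average_energy(beta, J=1.0, n=3):
    """<E> = (1/Z) * sum over microstates of E * exp(-beta * E)."""
    Z = partition_function(beta, J, n)
    acc = 0.0
    for spins in itertools.product((-1, 1), repeat=n):
        E = -J * sum(spins[i] * spins[i + 1] for i in range(n - 1))
        acc += E * math.exp(-beta * E)
    return acc / Z
```

At infinite temperature (beta = 0) all 8 microstates are equally likely, so Z = 8 and the average energy vanishes by symmetry; at low temperature the average energy approaches the ground-state value -2J. The exponential cost of this enumeration is exactly why the combinatorial and graph-theoretic tricks in the answer matter.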
- If you care to add spectral graph theory and its role to that list, you'll get my vote. :) –  Noldorin Dec 13 '10 at 20:50
@Noldorin: I will have to think about it. Most of the uses of spectral graph theory I know of are pure mathematics or computer science. But thanks to your comment I recalled other places graphs pop up: the relation between the Potts model and the number of graph colorings; the relation between electrical resistance on a graph and its spanning trees; and also Markov chains on graphs. The latter two cases can be studied by means of some associated matrix. But it's not quite an adjacency matrix, and so not quite spectral theory. If you have anything concrete in mind, please let me know :-) –  Marek Dec 13 '10 at 21:38
@Marek: That is in fact true nowadays, but spectral graph theory (as hinted by its name) actually originated in quantum chemistry... Not that I know much about it - was hoping you would know more heh. –  Noldorin Dec 13 '10 at 22:15
@Noldorin: ah, if the quantum chemistry origin is indeed correct, I had no idea about that at all! What I always implicitly assumed is that "spectral" refers to the spectrum of the (adjacency) matrix. And this is quite a general term pertaining to many areas of mathematics (as in the spectrum of some operator). I'll try to find out which version is the correct one. –  Marek Dec 13 '10 at 22:25
@Marek: You could be right, though I do remember some relation with quantum chemistry.... –  Noldorin Dec 13 '10 at 23:09

One context in which graphs can be useful in physics is in the discrete representation of spacetime in quantum gravity, where events are represented by the nodes of a type of poset (partially ordered set) called a causet and causal relationships are represented by the edges. This is particularly suited to a graph-theoretic interpretation, since posets can be intuitively visualized as DAGs.

- Graph theory is very useful in the design and analysis of electronic circuits.
It is very useful in designing various control systems. E.g. signal flow graphs and Mason's gain formula make your life a lot easier when trying to find transfer functions. Also, when solving differential equations numerically, graph theory is used for mesh generation.

- ## protected by Qmechanic♦ May 17 '14 at 15:16
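Coming back to the small-world answer above: the local clustering coefficient is easy to check in code. The graph below reproduces the worked example, a central node of degree 5 whose neighbourhood contains 4 triangles, giving c = 0.4 (the adjacency data is my reconstruction of such a graph, not the exact figure from the post):

```python
def local_clustering(adj, v):
    """c_v = (# triangles through v) / (k_v * (k_v - 1) / 2),
    following the Watts-Strogatz definition quoted in the answer."""
    nbrs = sorted(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    # Each edge between two neighbours of v closes one triangle through v.
    triangles = sum(
        1
        for i, a in enumerate(nbrs)
        for b in nbrs[i + 1:]
        if b in adj[a]
    )
    return triangles / (k * (k - 1) / 2)

# Central node 0 with five neighbours; the four edges among the
# neighbours (1-2, 2-3, 3-4, 4-5) give four triangles through node 0.
adj = {
    0: {1, 2, 3, 4, 5},
    1: {0, 2},
    2: {0, 1, 3},
    3: {0, 2, 4},
    4: {0, 3, 5},
    5: {0, 4},
}
```

Here `local_clustering(adj, 0)` gives 4 / 10 = 0.4, matching the hand computation in the answer.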
{}
1. ## Vector

vector u = 2i - 5j, w = -i - 6j. Find the magnitude of w - u.

2. Originally Posted by SHORTY
vector u = 2i - 5j, w = -i - 6j. Find the magnitude of w - u.
Originally Posted by scrappy
If the magnitude of the vector v = 1/2 and a = 113 degrees, write the vector v in terms of i and j. u = 2i - 5j, w = -i - 6j.
Are you guys in the same class?

3. Originally Posted by SHORTY
vector u = 2i - 5j, w = -i - 6j. Find the magnitude of w - u.

w - u = (-i - 6j) - (2i - 5j) = (-i - 2i) + (-6j + 5j) = -3i - j

$|w - u| = \sqrt{(-3)^2 + (-1)^2} = \sqrt{10}$

-Dan
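The arithmetic is quick to double-check in Python, treating the i and j components as tuples (the helper names are mine):

```python
import math

def sub(a, b):
    """Component-wise difference of two 2-D vectors."""
    return (a[0] - b[0], a[1] - b[1])

def magnitude(v):
    """Euclidean length of a 2-D vector."""
    return math.hypot(v[0], v[1])

u = (2, -5)   # u = 2i - 5j
w = (-1, -6)  # w = -i - 6j

d = sub(w, u)  # w - u = (-3, -1), i.e. -3i - j
```

The magnitude of `d` is sqrt(9 + 1) = sqrt(10), about 3.162, agreeing with the thread's answer.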
{}
# SIR Model for COVID-19: Estimating $R_0$

Roberto Berwa, MIT [berwa.mit.edu]

At the beginning of the pandemic, with a limited amount of data and knowledge of the transmission mechanisms behind COVID-19, various models from the scientific community, mathematicians, and epidemiologists were useful in gauging a general view of how the pandemic would develop. In this article, we discuss the most basic mathematical model of an infectious disease, the SIR model, and we also use the model to estimate the basic reproductive number. We estimate $R_0$ to be 4.90 (95% CI 4.67294, 5.13887) in the US, which agrees with a recent study based on early data from Wuhan, China, whose estimate puts the median $R_0$ at 5.7 (95% CI 3.8–8.9) (Sanche, Lin, Xu, Romero-Severson, Hengartner and Ke 2020). Here is what is covered in this article.

• SIR model description
• Solving the SIR ODEs
• Estimating $R_0$
• General use of this model

Disclaimer: The author of this article has no considerable expertise in the subject matter discussed. Opinions and results shown in this document should not be referenced or used while taking any decisions unless advised by an expert or a specialist.

## Model Description: SIR MODEL

In mathematical modeling of infectious diseases, models can be of two types: stochastic (probabilistic) and deterministic. Stochastic models tend to be more complicated to analyze than deterministic models, but they also tend to be very useful. I will discuss a deterministic SIR model using ordinary differential equations.

### Compartments of the SIR model

The SIR model, originating from the 1700s, classifies a fixed population into three compartments at a given time: S(t) (susceptible group), I(t) (infected group), and R(t) (recovered group) (Kermack and McKendrick 1927).

• S(t): From a fixed population, this group counts the number of people who are susceptible to being infected at time t.
• I(t): The number of people who are currently infectious at time t. (i.e.
people who can infect other people).
• R(t): The number of people who are no longer infectious at time t, either because they have been cured or, unfortunately, because they have died.

In this model, the population moves from being susceptible to infected and from infected to recovered: $S(t) \rightarrow I(t) \rightarrow R(t)$

To move from the S to the I compartment, effective contact between a susceptible person and an infectious person must take place. Effective contact is defined as an interaction between a susceptible person and an infectious person that results in the susceptible person getting infected. The effective contact rate per infectious person, $\sigma$, is defined as the average number of people that an infected patient infects per unit time. It can be computed from the transmission risk $p$, the probability of infecting a susceptible person with whom an infected patient is in contact, and $\tau$, the number of contacts with susceptible people per unit time: $$\sigma = p\,\tau.$$ If $\eta$ denotes the average number of people that a person comes in contact with per unit time, then the number of susceptible contacts is $$\tau = \eta\,\frac{S}{N}.$$ Let us define $\beta = p\,\eta$, the number of people infected per unit time by an infectious person given that all people in his/her contact are susceptible. Substituting the second relation into the first, we find $$\sigma = \beta\,\frac{S}{N}.$$

To move from the I to the R compartment, let $\gamma$ be the rate (per unit time) at which an infectious person recovers at time t.

### Mathematical description of the model

In ODE (ordinary differential equation) form, the model reads
$$\frac{dS}{dt} = -\frac{\beta}{N} S I, \qquad \frac{dI}{dt} = \frac{\beta}{N} S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I.$$

### Dependent variable

For this article, we focus on the ratio of $\beta$ to $\gamma$, known as the basic reproductive number $R_0$, defined as the expected number of cases directly generated by one case in a population where all individuals are susceptible to infection. This ratio is useful to determine whether the virus will cause an epidemic: when $R_0 > 1$, the infectious disease evolves into an epidemic.

### Assumptions

As already implied in the equations, there are various assumptions taken into account.
First, the probability of contracting the disease or recovering from it is the same for everyone. Second, we have a fixed population. In the case of a pandemic, that means there are no births or deaths and the net international migration is zero. (The validity of these assumptions is revisited in Section 4.)

## Solving the SIR ODE model in Julia

The differential equations describing the model can be solved by various methods, the most basic being the Euler method. However, in this case we will solve the ODE system using DifferentialEquations.jl, a well-optimized package in Julia built around state-of-the-art numerical ODE solvers. (Julia benchmarks better than most single-purpose languages.) To reduce the number of variables in the model and to avoid having to estimate the population, we absorb the factor $1/N$ into $\beta$ and work with population fractions.

using Pkg; Pkg.add("DifferentialEquations");
using Plots
using DifferentialEquations

# define the ODE system:
function sir_ode!(δu, u, p, t)
    # unpack variables and parameters:
    S, I, R = u
    β, γ = p
    # define the differential equations:
    δS = -β*S*I
    δI = +β*S*I - γ*I
    δR = +γ*I
    δu .= (δS, δI, δR)  # copy the values into the vector δu; note the .
    δu
end

# define the parameters:
β = 0.1
γ = 0.05
parameters = [β, γ]

# define the initial values:
S₀ = 0.99
I₀ = 0.01
R₀ = 0.0
initial_values = [S₀, I₀, R₀]
time_span = [0.0, 200.0]  # initial and final time

# set up the problem:
problem = ODEProblem(sir_ode!, initial_values, time_span, parameters)

# solve:
solution = solve(problem, saveat = 0.1);

# plot:
plot(solution, label = ["S" "I" "R"], title = "SIR Model Evolution",
     ylabel = "% of the population", xlabel = "days")

Given $\beta = 0.1$ and $\gamma = 0.05$, starting from a scenario in which 1% of the population is infectious, the infection peaks at roughly 16% of the population. This case does not reflect COVID-19. (The interpretation is revisited in Section 4.)
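As a quick sanity check on the solver output, the Euler method mentioned above can be sketched in a few lines. The snippet below is written in Python purely for illustration (parameter values copied from the Julia code; it is not part of the original notebook):

```python
# Forward-Euler integration of the SIR model with the 1/N factor absorbed
# into beta, mirroring the Julia setup above (beta = 0.1, gamma = 0.05).
def euler_sir(beta, gamma, S0, I0, R0, t_end, dt):
    S, I, R = S0, I0, R0
    peak_I = I
    for _ in range(int(t_end / dt)):
        dS = -beta * S * I
        dI = beta * S * I - gamma * I
        dR = gamma * I
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
        peak_I = max(peak_I, I)
    return S, I, R, peak_I

S, I, R, peak = euler_sir(0.1, 0.05, 0.99, 0.01, 0.0, 200.0, 0.01)
print(peak)  # peak infected fraction (about 0.16 with these parameters)
```

The peak can also be checked analytically: the infected fraction is maximal when $S = \gamma/\beta$, and the conserved quantity $I + S - (\gamma/\beta)\ln S$ gives $I_{\max} \approx 0.158$.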
## Fitting the model

Models can be useful for estimating key characteristics of an epidemic, such as the recovery rate ($\gamma$) and the number of people infected per unit time by an infectious person given that all people in his/her contact are susceptible ($\beta$). Here, we only estimate $R_0$.

### Methodology

We use the analytical solution to our ODEs and estimate the parameter by non-linear least squares. For non-linear least squares, the most common method is the Levenberg-Marquardt algorithm (LMA), also known as the damped least-squares (DLS) method. It interpolates between the Gauss-Newton algorithm and gradient descent for efficiency (Gavin 2013). This article uses its implementation in Julia, but it is very popular and available in most programming languages.

### Data Manipulation

We use the data from the Center for Systems Science and Engineering at Johns Hopkins University's Whiting School of Engineering. We will focus on one country, the US. This data is not the most accurate, but the errors in the data are mainly due to external factors, such as lack of testing of those exposed. As part of the model's assumptions, we fix the population of the US at 329,686,270, as of May 2020. Note that the US Census estimates about one birth every 8 seconds and one death every 12 seconds; however, we ignore these rates. More complicated, and possibly more accurate, models take them into account.
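To make the Levenberg-Marquardt idea concrete, here is a minimal single-parameter sketch in pure Python. The exponential toy model, the data, and all values are illustrative only; they are not the article's actual fit:

```python
import math

# Toy data generated from y = exp(0.3 * x); we then "forget" p = 0.3 and recover it.
xs = [0.5 * i for i in range(11)]          # x = 0.0, 0.5, ..., 5.0
ys = [math.exp(0.3 * x) for x in xs]

def lm_fit(xs, ys, p0, lam=1e-3, iters=50):
    """Levenberg-Marquardt for the one-parameter model y = exp(p * x)."""
    p = p0
    def sse(q):
        return sum((y - math.exp(q * x)) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        # residuals r_i = y_i - f(x_i; p); Jacobian of f: J_i = x_i * exp(p * x_i)
        r = [y - math.exp(p * x) for x, y in zip(xs, ys)]
        J = [x * math.exp(p * x) for x in xs]
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        step = Jtr / (JtJ + lam * JtJ)   # damped Gauss-Newton step
        if sse(p + step) < sse(p):
            p += step
            lam *= 0.5                   # good step: act more like Gauss-Newton
        else:
            lam *= 2.0                   # bad step: act more like gradient descent
    return p

print(lm_fit(xs, ys, p0=1.0))  # recovers p close to 0.3
```

The damping parameter `lam` is exactly the interpolation knob mentioned above: small values give Gauss-Newton behavior, large values give (scaled) gradient-descent behavior.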
using DataFrames
using Dates
using Plots
using CSV

# download the data:
url_confirmed_cases = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv"
url_death = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv"
url_recovered = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv"
download(url_confirmed_cases, "confirmed_cases_global.csv");
download(url_death, "death_global.csv");
download(url_recovered, "recovered_global.csv");

# read, clean and view the data
# (rename! is provided by DataFrames, not CSV):
data_confirmed_cases = CSV.read("confirmed_cases_global.csv")
data_death = CSV.read("death_global.csv")
data_recovered = CSV.read("recovered_global.csv")
rename!(data_recovered, 1 => "state", 2 => "country")
rename!(data_death, 1 => "state", 2 => "country")
rename!(data_confirmed_cases, 1 => "state", 2 => "country")
data_us_confirmed = data_confirmed_cases[data_confirmed_cases.country .== "US", :]
data_us_death = data_death[data_death.country .== "US", :]
data_us_recovered = data_recovered[data_recovered.country .== "US", :]
date_strings = String.(names(data_confirmed_cases))[5:end];
format = Dates.DateFormat("m/d/Y");
dates = parse.(Date, date_strings, format) .+ Year(2000);

# plot the data:
p = plot(title = "US COVID-19", ylabel = "US population", xlabel = "date")
us_confirmed_vec = vec(Array(data_us_confirmed[:, 5:end]))
us_recovered_vec = vec(Array(data_us_recovered[:, 5:end]))
us_death_vec = vec(Array(data_us_death[:, 5:end]))
us_tot_recovered_vec = vec(Array(data_us_recovered[:, 5:end])) + vec(Array(data_us_death[:, 5:end]))
us_total_agents = us_tot_recovered_vec + us_confirmed_vec

# build the susceptible compartment:
us_population = 329686270
us_susceptible_vec = us_population .- us_total_agents
plot!(dates,
us_recovered_vec, label = "Recovered")
plot!(dates, us_confirmed_vec, label = "Infected")
plot!(dates, us_tot_recovered_vec, label = "Recovered + Death")
plot!(dates, us_death_vec, label = "Death")
p

From the graph, the total number of confirmed cases grows exponentially, which implies that our model captures some general description of the pandemic, since the solution to the model has an exponential form. (This can be checked by re-plotting on a logarithmic scale.) (In the dataset, Infected refers to the total number of cases recorded, not active cases; and in the graph, Recovered + Death is the recovered compartment discussed in Section 1.)

### Estimating the basic reproductive number

To solve for $R_0$, we use the exact analytical solution of the evolution of the SIR model (Harko, Lobo, and Mak 2014) and solve for I(t). We then look for the parameter that minimizes the least-squares error using LMA. Because LMA is not guaranteed to find the global minimum, we run it from a few random initial parameters to increase the likelihood of reaching the global minimum.

Pkg.add("LsqFit");
using LsqFit;

# set up the analytical relation I = N - R - N*exp(-(R₀/N)*R) with N = us_population:
model(R, param) = 329686270 .- R - 329686270 .* exp.(-(param[1]/329686270).*R)

# apply LMA:
params = []
active_infections_vec = us_confirmed_vec - us_recovered_vec
for i in 1:10
    pₒ = [randn()]
    global fit = curve_fit(model, us_recovered_vec, active_infections_vec, pₒ)
    push!(params, fit.param[1])
end
@show params'

# compute the confidence interval at the 5% significance level:
@show confidence_intervals = confidence_interval(fit, 0.05)

Using this analytical method we cannot recover the individual values of $\beta$ and $\gamma$, but we can find $R_0$, which for COVID-19 in the US is estimated to be 4.90 (95% CI 4.67–5.14).
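The fitting function `model` used above follows from the classic first integral of the SIR system. Dividing the $S$ equation by the $R$ equation and taking $S_0 \approx N$:

```latex
% First integral of the SIR system (assuming S_0 \approx N):
\frac{dS}{dR} = \frac{dS/dt}{dR/dt}
             = -\frac{\beta}{\gamma}\,\frac{S}{N}
             = -\frac{R_0}{N}\,S
\;\Longrightarrow\;
S = S_0\, e^{-R_0 R/N} \approx N e^{-R_0 R/N},
\qquad
I = N - S - R = N - R - N e^{-R_0 R/N}.
```

This last expression for $I$ as a function of $R$ is exactly what `model(R, param)` encodes, with `param[1]` playing the role of $R_0$.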
Early estimates of the reproductive number, using a hybrid deterministic–stochastic SEIR (susceptible-exposed-infectious-recovered) model, set its median value at 5.7 (95% CI 3.8–8.9) (Sanche, Lin, Xu, Romero-Severson, Hengartner and Ke 2020).

### Visualize the fitted model

Due to its limitations, a deterministic SIR model only weakly captures the details of the evolution of the pandemic. It should not be taken as an exact predictive tool for COVID-19, at least in our judgment (refer to Section 4). However, it still captures some general aspects of the data; for instance, the exponential growth characteristic of COVID-19, as seen in the plot below.

fitted_I = model(us_recovered_vec, [params[1]])
p = plot(title = "Active cases in the US", ylabel = "Active cases", xlabel = "date")
plot!(dates, active_infections_vec, label = "active I")
plot!(dates, fitted_I, linestyle = :dot, label = "fitted I")

## General thoughts on using this model

Much about scientific modeling can be summarized with an aphorism commonly used in the statistics community: "All models are wrong, but some of them are useful." Informally, modeling is the process of creating a simple representation of a complex system that captures the general underlying truth about the system. The more realistic the assumptions, the more accurate the results; however, more realistic assumptions often make a model hard, and sometimes impossible, to analyze. The SIR model is built on assumptions (see Section 1) that are not a reasonable reflection of the real world. Beyond the assumptions, the model does not take into account changes in policies and measures put into effect to reduce the spread of COVID-19, and it will not self-correct when given incomplete data (as was the case due to poor testing capacity in the early days of the pandemic).
This rules out using such simplistic models to compute or predict specific quantities such as the death rate, the infection rate, or the number of infectious people in two weeks. However, such a model can still be useful for cautiously estimating parameters such as the basic reproductive number. Some of the issues mentioned with SIR are addressed by more complex models (stochastic derivations of the SIR model, spatial models, or Bayesian tree models) and by performing different tests (antibody testing) to gauge the correct number of infections.

## References

1. Harko, Tiberiu, Lobo, Francisco S.N., and Mak, M.K. (2014) "Exact Analytical Solutions of the Susceptible-Infected-Recovered (SIR) Epidemic Model and of the SIR Model with Equal Death and Birth Rates." Applied Mathematics and Computation 236: 184–194.
2. Gavin, Henri P. (2013) "The Levenberg-Marquardt method for nonlinear least-squares curve-fitting problems".
3. Kermack, W. O., McKendrick, A. G. (1927). "A Contribution to the Mathematical Theory of Epidemics". Proceedings of the Royal Society A 115 (772): 700–721.
4. Sanche, S., Lin, Y.T., Xu, C., Romero-Severson, E., Hengartner, N., Ke, R. (2020) "High contagiousness and rapid spread of severe acute respiratory syndrome coronavirus 2". Emerg Infect Dis. https://doi.org/10.3201/eid2607.200282
# Chern insulator vs topological insulator

What is the basic distinction between a Chern insulator and a topological insulator? Right now I know that a Chern insulator has a "topologically non-trivial band structure" and that a topological insulator has "symmetry-protected surface states".

• Chern insulators must break time-reversal symmetry, while topological insulators require time-reversal symmetry for protection. Both have nontrivial band structures and edge/surface states; the difference is that a TI is nontrivial only in the presence of time-reversal symmetry (and its edge states are protected by that symmetry). – Meng Cheng Dec 30 '15 at 20:33

I don't think the provided comment gives the right answer. Topological insulators are the bigger group, and Chern insulators are a subgroup of that. This means that every Chern insulator is a topological insulator, but not every topological insulator is a Chern insulator. Can someone confirm that this is indeed true?

In general, a topological insulator is a material that has a gapped bulk but conducting edge states that are protected by some symmetry. The surface Hamiltonian is gapless and cannot be gapped by perturbations that do not break the symmetry protecting the edge states; people say that the edge states are topologically protected.

A Chern insulator is a 2-dimensional insulator with broken time-reversal symmetry. (If you instead have a 2-dimensional insulator with time-reversal symmetry, it can exhibit a quantum spin Hall phase.) The topological invariant of such a system is called the Chern number, and it gives the number of edge states. So a non-trivial Chern insulator has edge states. The edge states of a Chern insulator are chiral, meaning that in one channel the electrons only go one way and in the other channel the electrons go the other way. This may remind you of the integer quantum Hall effect (IQHE), which also has chiral edge states.
You can see a Chern insulator as a 2D lattice version of the IQHE (it is also called the quantum anomalous Hall effect). You can go from the trivial phase to the topological phase by changing parameters in the lattice model, such as the on-site or hopping energies. The first Chern insulator was the Haldane model for graphene, where time-reversal symmetry is broken by introducing complex second-nearest-neighbour hopping while inversion symmetry still survives. This gave the chiral edge states characteristic of what are now called Chern insulators.

• When you say "every Chern insulator is a topological insulator", you need to define what a "topological insulator" is. According to a standard definition, a "topological insulator" is a fermionic insulator with time-reversal symmetry and U(1) symmetry. According to this definition, a Chern insulator is NOT a topological insulator. – Xiao-Gang Wen Mar 13 '19 at 0:21

2+1d Chern insulator (CI):

1) belongs to the classes of systems realizing integer quantum Hall states on the lattice without an external magnetic field. It belongs to the long-range entangled topological orders by the definition of X.-G. Wen of MIT, but it is still part of the theory of invertible topological quantum field theories (of Dan Freed; see references and papers by him), with partition function $|Z|=1$ on a closed manifold, and it can be gapped by coupling to its time-reversal partner (with opposite sign of the Chern number). It is, however, short-range entangled by the definition of A. Kitaev of Caltech.

2) The Chern insulator on the lattice without an external magnetic field realizes the so-called quantum anomalous Hall effect.
3) And the CI is characterized by the Chern number $C_1$, a $\mathbb{Z}$-valued topological invariant: $$C_1=\frac{1}{2\pi}\int_{\mathbf{k} \in \text{BZ}} d^2\mathbf{k}\; \epsilon^{\mu \nu } \partial_{k_\mu} \langle \psi(\mathbf{k}) | -i \partial_{k_\nu} | \psi(\mathbf{k}) \rangle \in \mathbb{Z}.$$

2+1d and 3+1d topological insulator (TI):

1) belongs to the classes of systems realizing symmetry-protected topological states, which need to be protected by time-reversal and U(1) charge symmetry. The TI has NO intrinsic bulk topological order.

2) The 2+1d and 3+1d free (quadratic-Hamiltonian) TIs are both characterized by a $\mathbb{Z}_2$ invariant instead of a $\mathbb{Z}$ invariant; see the paper of Fu and Kane. They can also be characterized by $\Theta=\pi$ in the probed bulk U(1) field action (see Qi, Hughes, Zhang, Phys. Rev. B 2008): $$S_{3+1d\;bulk}= \frac{\Theta}{8 \pi} \int F \wedge F$$ with some proper normalization. Namely, the 3+1d $\mathbb{Z}_2$ invariant for the free TI is $$\Theta=0, \pi \in \mathbb{Z}_2,$$ which respects the time-reversal symmetry of the TI.

A topological insulator has a non-trivial (non-zero) topological invariant. The Chern number is one such topological invariant: if the Chern number is non-zero, then the system is a Chern insulator. Hence, Chern insulators form a subgroup of topological insulators.

You cannot discuss the classification of topology without symmetry. Any gapped system with nontrivial edge states can be called a topological insulator. The Haldane model on the honeycomb lattice is an example of a Chern insulator: it has a non-zero Chern number and doesn't need any symmetry to protect it from an adiabatic transformation (without gap closing) into a trivial insulator. The Z2 topological insulator is a subset, but its Chern number is zero. However, it still has nontrivial edge modes, which need to be described by a new topological index called the Z2 index.
The reason why it belongs to this subset is that it needs time-reversal symmetry to protect it from being adiabatically connected to a phase with a different Z2 number.
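As a numerical aside, the momentum-space Chern number defined above can be evaluated on a discretized Brillouin zone with the Fukui–Hatsugai link method. Below is a minimal pure-Python sketch for a generic two-band model; the Qi–Wu–Zhang model is used purely as an illustration and is not discussed in the answers above:

```python
import cmath, math

def lower_band_state(kx, ky, u):
    """Lower-band eigenvector of h(k) = sin(kx) sx + sin(ky) sy + (u + cos kx + cos ky) sz."""
    dx, dy, dz = math.sin(kx), math.sin(ky), u + math.cos(kx) + math.cos(ky)
    d = math.sqrt(dx*dx + dy*dy + dz*dz)
    # eigenvector for eigenvalue -d of [[dz, dx - i dy], [dx + i dy, -dz]]
    psi = (dz - d, dx + 1j*dy)
    n = math.sqrt(abs(psi[0])**2 + abs(psi[1])**2)
    return (psi[0]/n, psi[1]/n)

def chern_number(u, N=40):
    """Fukui-Hatsugai lattice Chern number: sum of Berry phases around plaquettes."""
    def overlap(a, b):
        return a[0].conjugate()*b[0] + a[1].conjugate()*b[1]
    ks = [2*math.pi*(i + 0.5)/N for i in range(N)]  # offset grid avoids degeneracies
    total = 0.0
    for i in range(N):
        for j in range(N):
            corners = ((ks[i], ks[j]), (ks[(i+1) % N], ks[j]),
                       (ks[(i+1) % N], ks[(j+1) % N]), (ks[i], ks[(j+1) % N]))
            s1, s2, s3, s4 = (lower_band_state(k[0], k[1], u) for k in corners)
            loop = overlap(s1, s2)*overlap(s2, s3)*overlap(s3, s4)*overlap(s4, s1)
            total += cmath.phase(loop)  # Berry flux through this plaquette
    return round(total / (2*math.pi))

print(chern_number(-1.0), chern_number(-3.0))  # |C| = 1 in the nontrivial phase, 0 in the trivial one
```

The method returns an exact integer for any gapped model once the grid is fine enough, which is what makes it a convenient numerical diagnostic for "topologically non-trivial band structure".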
# Existence of certain almost invariant functions related to amenability and piecewise transformations (MathOverflow)

Asked by Kate Juschenko, 2012-06-06; answered 2012-06-10.

**We would like very much to know the answer to the following question:**

Let $\|\cdot\|$ be any norm on $\mathbb{Z}^d$ and let $W(\mathbb{Z}^d)$ be the group of all bijections $g$ of $\mathbb{Z}^d$ such that $$\|g(j)-j\|\leq C_g,$$ for some constant $C_g$ which depends only on the element $g\in W(\mathbb{Z}^d)$. Consider the Hilbert space $L^2(\{0,1\}^{\mathbb{Z}^d},\mu)$, where $\{0,1\}^{\mathbb{Z}^d}$ comes with the standard Bernoulli measure $\mu$.

> We are looking for a sequence of functions $f_n\in L^2(\{0,1\}^{\mathbb{Z}^d},\mu)$ with $\|f_n\|_2=1$ such that $$\|g.f_n-f_n\|_2\rightarrow 0 \text{ for every } g\in W(\mathbb{Z}^d),$$ and $$\|f_n\cdot\chi_{\{\omega\in \{0,1\}^{\mathbb{Z}^d}:\omega_0=0\}}\|_2\rightarrow 1,$$ where $\chi_{\{(\omega_j)_{j\in \mathbb{Z}^d}\in \{0,1\}^{\mathbb{Z}^d}:\omega_0=0\}}$ is the characteristic function of the cylinder set $\{(\omega_j)_{j\in \mathbb{Z}^d}\in \{0,1\}^{\mathbb{Z}^d}:\omega_0=0\}$.

**Motivation:**

The existence of such a sequence for all $d$ would disprove the conjecture of Katok that the interval exchange transformation group contains a free subgroup.

**What we know about the question above:**

In joint work with Nicolas Monod [arXiv:1204.2132], we showed that for $d=1$ the following function satisfies the properties above: $$f_n(\omega)=e^{-n \sum\limits_{j\in \mathbb{Z}} \omega_j e^{-\frac{|j|}{n}}}=\prod_{j\in \mathbb{Z}} a_j^{\omega_j}, \quad\text{where } a_j=e^{-n e^{-\frac{|j|}{n}}}.$$ We are interested in extending this result to higher dimensions. Note that the function above is a product of functions of independent identically distributed (i.i.d.) random variables.

Let $G< W(\mathbb{Z}^d)$ be a finitely generated subgroup of $W(\mathbb{Z}^d)$. In addition to the above (in collaboration with Nicolas Monod and Mikael de la Salle), we know that the existence of functions with the property above within the class of functions that can be written as products of functions of i.i.d. random variables is equivalent to a certain property of the Schreier graph of the action of $G$ on $\mathbb{Z}^d$. Let me give more details on this.

The Schreier graph of the action of $G$ on $X$ with respect to $S$ is the graph with vertex set $X$ and an edge between $x$ and $y$ for each $g \in S$ with $g x=y$.

We say that an infinite graph $G=(V,E)$ satisfies a Sobolev inequality rooted at $x_0 \in V$ if the value at $x_0$ of any $c_0$-function on $V$ is bounded by the $\ell^2$-norm of its gradient, i.e., there is a constant $C>0$ such that $$\|f\|_{c_0(V)} \leq C \sum_{x \sim x' \in V} |f(x') - f(x)|^2.$$

We can show the following: the functions $f_n$ with the property above can be found in the class of products if and only if the Schreier graph of the action of $G$ on $X$ with respect to $S$ does not satisfy a Sobolev inequality. Moreover, for $d=1,2$ and $G$ a finitely generated subgroup with symmetric generating set $S$, the Schreier graph of the action of $G$ on $\mathbb{Z}^d$ does not satisfy a rooted Sobolev inequality. However, there are subgroups of $W(\mathbb{Z}^3)$ whose Schreier graphs do satisfy a Sobolev inequality.

To summarize, we can find a sequence of functions in the class of products with the above property only in the cases $d=1,2$.

> Any suggestions on potential examples of functions that could handle the higher-dimensional cases?

**Answer by Nicolas Monod** (2012-06-10):

This is my first visit to MO, so I have to apologize for making this an "answer": it is just a comment on Nik Weaver's suggestion.

I don't think that NW's conjecture is much different from the original question. Indeed, you comment that it is different because $f$ should work against all permutations $g$. But in fact you restrict to those $g$ that have $C_g$ bounded by a universal constant (say, 2). This is a very small set; it is even compact in the natural topology relevant to the action on $\mathbb{Z}^d$ (which is the action underlying the whole question). Therefore, it is not so different from considering finitely many $g$'s and thus from the original question.

Anyway, this is my intuition, it is not a mathematical statement. Nicolas Monod

[Edit after reading NW's argument]: at first sight, it seems that your reduction to your conjecture is indeed just exploiting that the set of $g$ with $C_g$ less than a constant is compact (approximation argument).
IIT JEE 1982 Maths - 'Adapted' - Subjective to Multi-Correct Q8

Calculus Level 3

Let $f$ be a twice differentiable function and $g, h$ be functions such that $$f''(x)=-f(x),\qquad f'(x)=g(x),\qquad h(x)=[f(x)]^2+[g(x)]^2,\qquad h(5)=11.$$ Then which of the following is/are not incorrect?

• (A) h(10) = 11
• (B) h'(10) = 0
• (C) h(0) = 5
• (D) h'(0) = -2

Enter your answer as a 4-digit string of 1s and 9s - 1 for a correct option, 9 for a wrong one. E.g. 1199 indicates that A and B are correct while C and D are incorrect. None, one, or all may also be correct.

In case you are preparing for IIT JEE, you may want to try the IIT JEE 1982 Mathematics Archives.
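A solution sketch (my own working, not part of the original problem page): differentiating $h$ shows it is constant.

```latex
h'(x) = 2 f(x) f'(x) + 2 g(x) g'(x)
      = 2 f(x) g(x) + 2 g(x) f''(x)  % since g = f' and g' = f''
      = 2 f(x) g(x) - 2 g(x) f(x)    % since f'' = -f
      = 0,
```

so $h(x) = h(5) = 11$ for all $x$. Hence (A) and (B) hold, while (C) and (D) fail.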
# How can I typeset the formula for the cosine of the angle between two vectors nicely?

I want to typeset the cosine of the angle between two vectors, and I have defined the command \cross for the cross product of two vectors. I tried

\documentclass[12pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{fourier}
\usepackage{esvect}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}
\newcommand{\cross}[2]{\biggl[\vv{#1},\vv{#2} \biggr]}
\begin{document}
$\cos \varphi =\dfrac{\cross{CA'}{CB} \cdot \cross{CA'}{CD}}{\left \vert \cross{CA'}{CB} \right\vert \cdot \left \vert \cross{CA'}{CD} \right\vert}.$
\end{document}

I feel the brackets in the command \cross are not good. How can I repair them?

- The square brackets seem to be needlessly tall. Specifically, I don't think it's necessary to make the square brackets tall enough to enclose the arrows; nobody should be confused by the arrows "sticking out" above the brackets. Hence, using \big instead of \bigg for the size of the brackets should be fine. Where I also see room for improvement, typographically speaking, is in the uneven heights of the arrows produced by \vv. Since the uneven heights are caused by the presence of the "primes" in the first argument of the \cross macro, one way to address this issue is to automatically add a "vertical phantom" (composed of #1...) to the second argument of the \cross macro.

\documentclass[12pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{mathtools}
\DeclarePairedDelimiter{\abs}{\lvert}{\rvert}
\usepackage{fourier,esvect}
\usepackage[margin=2cm]{geometry}
\newcommand{\cross}[2]{\bigl[ \vv{#1},\vv{#2\vphantom{#1}} \bigr]}
\newcommand\z{\vphantom{{}'}} % insert a vertical phantom as tall as a superscript prime
\begin{document}
$\cos \varphi =\dfrac{\cross{CA'}{CB} \cdot \cross{CA'}{CD}} {\abs*{\cross{CA'}{CB}} \cdot \abs*{\cross{CA'}{CD}} }\,.$
\end{document}

- Another option is to use bold letters for vectors.
I have also changed the brackets to parentheses, hoping that won't change the meaning in your subject. The physics package is used for making vectors bold with the \vb* macro. If you want upright letters for vectors, use \vb without the star. Since \cross is already defined by physics, I have renamed the macro to \Cross.

\documentclass[12pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{fourier}
\usepackage{physics}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}
\newcommand{\Cross}[2]{(\vb*{#1},\vb*{#2})}
\begin{document}
$\cos \varphi =\dfrac{\Cross{CA'}{CB} \cdot \Cross{CA'}{CD}}{\vert \Cross{CA'}{CB} \vert \cdot \vert \Cross{CA'}{CD} \vert}$
\end{document}

I have also removed \left and \right from \vert.

- One possible solution (since you haven't told us how you would like it to be, it's a guess):

\documentclass{article}
\usepackage{mathtools}
\usepackage{fourier}
\DeclarePairedDelimiter{\abs}{\lvert}{\rvert}
\newcommand*\cross[2]{\left[\overrightarrow{#1},\overrightarrow{#2}\right]}
\begin{document}
$$\cos\varphi = \frac{\cross{CA'}{CB} \cdot \cross{CA'}{CD}}{\abs*{\cross{CA'}{CB}} \cdot \abs*{\cross{CA'}{CD}}}.$$
\end{document}

In this way the brackets will automatically scale relative to the enclosed material. Note that I have used \overrightarrow instead of \vv to avoid loading the esvect package. (I don't think it's a 'bad' package; I just prefer to load as few packages as possible.)
Update: In case you always have vectors like in the example, you can make the code simpler:

\documentclass{article}
\usepackage{mathtools}
\usepackage{fourier}
\DeclarePairedDelimiter{\abs}{\lvert}{\rvert}
\newcommand*\cross[2]{\left[\overrightarrow{#1},\overrightarrow{#2}\right]}
\newcommand*\crossProduct[3]{
  \frac{\cross{#1}{#2} \cdot \cross{#1}{#3}}% numerator
       {\abs*{\cross{#1}{#2}} \cdot \abs*{\cross{#1}{#3}}}% denominator
}
\begin{document}
$$\cos\varphi = \crossProduct{CA'}{CB}{CD}$$
\end{document}

- One must be careful when disregarding the math axis, but it seems from your question that you are unhappy with the extra space below the vectors that the brackets enclose. That extra space is there to give symmetry to the over-arrow vector notation. Many would say it should not be disturbed, even if it looks odd. However, since you were looking for alternatives, here is one solution that removes that space below the brackets. But see what it does: it keeps the \cdot centered on the letters and therefore asymmetric with respect to the height of the brackets. So this is an option, but many would not say it is an improvement.

\documentclass[12pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{fourier}
\usepackage{esvect}
\usepackage{scalerel}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}
\newcommand{\cross}[2]{{\stretchleftright{[}{\vv{#1},\vv{#2}}{]}}}
\begin{document}
$\cos \varphi =\dfrac{\cross{CA'}{CB} \cdot \cross{CA'}{CD}} {\stretchleftright{\vert}{\protect\cross{CA'}{CB}}{\vert} \cdot \stretchleftright{\vert}{\cross{CA'}{CD}}{\vert}}.$
\end{document}

- "many would not say it is an improvement" I'm one of them. :) – Svend Tveskæg Mar 2 '14 at 17:04

@SvendTveskæg Having received "the heat of the flame" when disturbing the math axis in the past, I have grown more sensitive to the sanctity of the notion 8^O.
If nothing else, the answer can visualize, for the OP, the negatives associated with a notional "fix". – Steven B. Segletes Mar 2 '14 at 17:14

I'm not saying your answer is bad at all! You are simply coming up with one way of doing it, so I think it's absolutely fine. (I'm just saying that I'm not fond of the solution ... just as you aren't, if I'm not totally mistaken.) – Svend Tveskæg Mar 2 '14 at 17:18

@SvendTveskæg As with all things, there are tradeoffs. On this particular one, I really have no preference, and would myself therefore stick with the conventional solution. All too often, one gets an idea of the advantages that arise from a different approach only to find, upon implementation, that there are significant negatives, too. Only then does the wisdom of the original approach become truly manifest. – Steven B. Segletes Mar 2 '14 at 17:28

I know exactly what you mean. :) – Svend Tveskæg Mar 2 '14 at 17:35
# Linear Transformations with a given basis

I am currently taking a linear algebra class, studying linear transformations given by matrices. My question is: what are the steps to solve this problem?

$$\text{Let }T: \mathbb{R}^3 \longrightarrow \mathbb{R}^3 \text{ be a linear transformation. Consider the following bases of }\mathbb{R}^3: \\ B_1= \{(1,1,1),(1,1,0),(1,0,0)\}\\ B_2= \{(1,0,1),(0,1,1),(0,1,0)\}\\$$ $$\text{If we know that } [T]_{B_1}^{B_2}=\begin{pmatrix}3&0&2\\ 0&1&1\\ 1&0&2\end{pmatrix},$$ find $$T(2,2,0)$$.

How should I solve this problem?

• Seems to me that there should've been at least one example of this sort of thing in the material you're meant to have studied before attempting the exercise. What have you learned about representing a linear transformation as a matrix and how to change basis? – amd Mar 26 '20 at 2:07

It all depends on your notation. For me, $\;[T]_{B_1}^{B_2}\;$ means the matrix of $T$ taking coordinates with respect to $B_1$ to coordinates with respect to $B_2$, which means:

$$\begin{cases}T(1,1,1)=3(1,0,1)+0(0,1,1)+1(0,1,0)=(3,1,3)\\{}\\ T(1,1,0)=0(1,0,1)+1(0,1,1)+0(0,1,0)=(0,1,1)\\{}\\ T(1,0,0)=2(1,0,1)+1(0,1,1)+2(0,1,0)=(2,3,3)\end{cases}$$

Since $\;(2,2,0)=0\cdot(1,1,1)+2\cdot(1,1,0)+0\cdot(1,0,0)\;$, we get by linearity of $\;T\;$:

$$T(2,2,0)=0\cdot T(1,1,1)+2\,T(1,1,0)+0\cdot T(1,0,0)=2(0,1,1)=(0,2,2).$$

(The images $T(1,1,1)$, etc., above are already written in standard coordinates, so $(0,2,2)$ is the final answer.)

• So I should do the same process (creating the linear combination) but for $B_1$? – SWAT Mar 26 '20 at 1:50

I suggest the following interpretation:

• Vector $$\begin{pmatrix}2\\ 2\\ 0\end{pmatrix}=\color{blue}{2}\begin{pmatrix}1\\ 1\\ 0\end{pmatrix}$$ has wrt. basis $$B_1$$ the coordinates: $$\begin{pmatrix}0\\ \color{blue}{2}\\ 0\end{pmatrix}_{B_1}$$
• We know the coordinates of $$T\begin{pmatrix}2\\ 2\\ 0\end{pmatrix}$$ wrt.
basis $$B_2$$: $$[T]_{B_1}^{B_2}\begin{pmatrix}0\\ 2\\ 0\end{pmatrix}_{B_1}=\begin{pmatrix}3&0&2\\ 0&1&1\\ 1&0&2\end{pmatrix}\begin{pmatrix}0\\ 2\\ 0\end{pmatrix}_{B_1} = \begin{pmatrix}0\\ 2\\ 0\end{pmatrix}_{B_2}=\color{blue}{2}\begin{pmatrix}0\\ 1\\ 0\end{pmatrix}_{B_2}$$ • Since the second basis vector in $$B_2$$ is $$\begin{pmatrix}0\\ 1\\ 1\end{pmatrix}$$, you get $$T\begin{pmatrix}2\\ 2\\ 0\end{pmatrix} = \color{blue}{2}\begin{pmatrix}0\\ 1\\ 1\end{pmatrix} = \begin{pmatrix}0\\ 2\\ 2\end{pmatrix}$$
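Both approaches can be checked numerically. A minimal sketch with NumPy (an illustration, not from either answer), treating each basis as the columns of a matrix, so that converting a vector to basis coordinates amounts to solving a linear system:

```python
import numpy as np

# Basis vectors of B1 and B2 as matrix columns
B1 = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 0]], dtype=float).T
B2 = np.array([[1, 0, 1], [0, 1, 1], [0, 1, 0]], dtype=float).T

# Matrix of T taking B1-coordinates to B2-coordinates
M = np.array([[3, 0, 2],
              [0, 1, 1],
              [1, 0, 2]], dtype=float)

v = np.array([2, 2, 0], dtype=float)

c  = np.linalg.solve(B1, v)  # coordinates of v in B1      -> [0, 2, 0]
w  = M @ c                   # coordinates of T(v) in B2    -> [0, 2, 0]
Tv = B2 @ w                  # T(v) in standard coordinates -> [0, 2, 2]
print(Tv)
```

Here `np.linalg.solve(B1, v)` plays the role of writing v as a linear combination of the B1 vectors, and multiplying by B2 converts the resulting B2-coordinates back to standard coordinates.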
{}
# PoS(ICRC2017)025 Tracking cosmic-ray spectral variations with neutron monitor time-delay measurements at high cutoff rigidity during 2007-2017 C. Banglieng, D. Ruffolo, A. Sáiz, P. Evenson, T. Nutaro Contribution: pdf Abstract We present measurements of the leader fraction of neutron monitor counts that did not follow other counts in the same counter tube from the same cosmic ray shower. We use time-delay histograms collected at the Princess Sirindhorn Neutron Monitor at Doi Inthanon, Thailand, which has the world's highest vertical cutoff rigidity for a fixed station (16.8 GV). Changes in the leader fraction are a precise indicator of cosmic ray spectral variations above the cutoff. Our data set from 2007 to 2017 spans a full cycle of solar modulation, including the all-time cosmic ray maximum of 2009 and minimum near the end of 2014, the count rate now having returned to its initial value. The electronics to collect time-delay histograms have been upgraded twice, and we have corrected for such changes to develop a long-term leader fraction dataset. We examine the spectral variation of Galactic cosmic rays above $\sim$17 GV resulting from solar modulation and its solar magnetic polarity dependence.
{}
# Kooperativer Bibliotheksverbund ## Berlin Brandenburg • 1 Article In: Nature, Volume 468, Issue 7325, pp. 796-798 (2010) Description: Observations of the 21-centimetre line of atomic hydrogen in the early Universe directly probe the history of the reionization of the gas between galaxies. The observations are challenging, though, because of the low expected signal strength (~10 mK), and contamination by strong (>100 K) foreground synchrotron emission in the Milky Way and extragalactic continuum sources2. If reionization happened rapidly, there should be a characteristic signature visible against the smooth foreground in an all-sky spectrum. Here we report an all-sky spectrum between 100 and 200 MHz, corresponding to the redshift range 6 < z < 13 for the 21-centimetre line. The data exclude a rapid reionization timescale of dz < 0.06 at the 95% confidence level. Comment: 8 pages, 2 figures, Published in Nature, Volume 468, Issue 7325, pp. 796-798 (2010) Keywords: Astrophysics - Cosmology And Nongalactic Astrophysics Source: Cornell University Library • 2 Article In: Monthly Notices of the Royal Astronomical Society, 2012, Vol. 419(2), pp.1070-1084 Description: Efforts are being made to observe the 21-cm signal from the ‘cosmic dawn’ using sky-averaged observations with individual radio dipoles. In this paper, we develop a model of the observations accounting for the 21-cm signal, foregrounds and several major instrumental effects. Given this model, we apply Markov Chain Monte Carlo techniques to demonstrate the ability of these instruments to separate the 21-cm signal from foregrounds and quantify their ability to constrain properties of the first galaxies.
For concreteness, we investigate observations between 40 and 120 MHz with the proposed Dark Ages Radio Explorer mission in lunar orbit, showing its potential for science return. Keywords: Methods: Statistical ; Cosmology: Theory ; Diffuse Radiation ; Radio Lines: General ISSN: 0035-8711 E-ISSN: 1365-2966 • 3 Article Description: We report absolutely calibrated measurements of diffuse radio emission between 90 and 190 MHz from the Experiment to Detect the Global EoR Signature (EDGES). EDGES employs a wide beam zenith-pointing dipole antenna centred on a declination of -26.7$^\circ$. We measure the sky brightness temperature as a function of frequency averaged over the EDGES beam from 211 nights of data acquired from July 2015 to March 2016. We derive the spectral index, $\beta$, as a function of local sidereal time (LST) and find -2.60 > $\beta$ > -2.62 $\pm$0.02 between 0 and 12 h LST. When the Galactic Centre is in the sky, the spectral index flattens, reaching $\beta$ = -2.50 $\pm$0.02 at 17.7 h. The EDGES instrument is shown to be very stable throughout the observations with night-to-night reproducibility of $\sigma_{\beta}$ < 0.003. Including systematic uncertainty, the overall uncertainty of $\beta$ is 0.02 across all LST bins. These results improve on the earlier findings of Rogers & Bowman (2008) by reducing the spectral index uncertainty from 0.10 to 0.02 while considering more extensive sources of errors. We compare our measurements with spectral index simulations derived from the Global Sky Model (GSM) of de Oliveira-Costa et al. (2008) and with fits between the Guzmán et al. (2011) 45 MHz and Haslam et al. (1982) 408 MHz maps. We find good agreement at the transit of the Galactic Centre.
Away from transit, the GSM tends to over-predict (GSM less negative) by 0.05 < $\Delta_{\beta} = \beta_{\text{GSM}}-\beta_{\text{EDGES}}$ < 0.12, while the 45-408 MHz fits tend to over-predict by $\Delta_{\beta}$ < 0.05. Keywords: Astrophysics - Instrumentation And Methods For Astrophysics ; Astrophysics - Astrophysics Of Galaxies ISSN: 00358711 E-ISSN: 13652966 • 4 Article Language: English In: The Astrophysical Journal, 2018, Vol.863(1), p.11 (11pp) Description: We use the sky-average spectrum measured by EDGES High-band (90–190 MHz) to constrain parameters of early galaxies independent of the absorption feature at 78 MHz reported by Bowman et al. These parameters represent traditional models of cosmic dawn and the epoch of reionization produced with the 21cmFAST simulation code. The parameters considered are (1) the UV ionizing efficiency (ζ); (2) minimum halo virial temperature hosting efficient star-forming galaxies ( ); (3) integrated soft-band X-ray luminosity ( ); and (4) minimum X-ray energy escaping the first galaxies (E₀), corresponding to a typical H i column density for attenuation through the interstellar medium. The High-band spectrum disfavors high values of and ζ, which correspond to signals with late absorption troughs and sharp reionization transitions. It also disfavors intermediate values of , which produce relatively deep and narrow troughs within the band. Specifically, we rule out (95% C.L.). We then combine the EDGES High-band data with constraints on the electron-scattering optical depth from Planck and the hydrogen neutral fraction from high-z quasars. This produces a lower degeneracy between ζ and than that reported by Greig & Mesinger using the Planck and quasar constraints alone. Our main result in this combined analysis is the estimate (95% C.L.).
We leave the evaluation of 21 cm models using data from EDGES Low- and High-band simultaneously for future work. Keywords: Astrophysics - Cosmology And Nongalactic Astrophysics ; Astrophysics - Astrophysics Of Galaxies; ISSN: 0004-637X E-ISSN: 1538-4357 • 5 Conference Proceeding Language: English In: The Evolution Of Galaxies Through The Neutral Hydrogen Window, Arecibo Observatory (Puerto Rico) (1–3 February 2008): In: AIP Conference Proceedings, 01 August 2008, Vol.1035(1), pp.296-302 Description: There are three distinct regimes in which radio observations of the redshifted 21 cm line of H I can contribute directly to cosmology in unique ways. The regimes are naturally divided by redshift, from high to low, into: inflationary physics, the Dark Ages and reionization, and galaxy evolution and Dark Energy. Each measurement presents its own set of technical, theoretical, and observational challenges, making “what we need to know” not so much an astrophysical question at this early stage as a comprehensive experimental question. A wave of new pathfinder projects is exploring the fundamental aspects of what we need to know (and what we should expect to learn in the coming years) in order to achieve the goals of the Square Kilometer Array (SKA) and beyond. Keywords: Astronomy and Astrophysics ISBN: 978-0-7354-0558-5 ISSN: 0094-243X E-ISSN: 1551-7616 Source: © 2008 American Institute of Physics (AIP) • 6 Article Language: English In: The Astrophysical Journal, 2014, Vol.782(2), p.66 (25pp) Description: A number of experiments are currently working toward a measurement of the 21 cm signal from the epoch of reionization (EoR).
Whether or not these experiments deliver a detection of cosmological emission, their limited sensitivity will prevent them from providing detailed information about the astrophysics of reionization. In this work, we consider what types of measurements will be enabled by the next generation of larger 21 cm EoR telescopes. To calculate the type of constraints that will be possible with such arrays, we use simple models for the instrument, foreground emission, and the reionization history. We focus primarily on an instrument modeled after the 0.1 km² collecting area Hydrogen Epoch of Reionization Array concept design and parameterize the uncertainties with regard to foreground emission by considering different limits to the recently described wedge footprint in k space. Uncertainties in the reionization history are accounted for using a series of simulations that vary the ionizing efficiency and minimum virial temperature of the galaxies responsible for reionization, as well as the mean free path of ionizing photons through the intergalactic medium. Given various combinations of models, we consider the significance of the possible power spectrum detections, the ability to trace the power spectrum evolution versus redshift, the detectability of salient power spectrum features, and the achievable level of quantitative constraints on astrophysical parameters. Ultimately, we find that 0.1 km² of collecting area is enough to ensure a very high significance ( 30) detection of the reionization power spectrum in even the most pessimistic scenarios. This sensitivity should allow for meaningful constraints on the reionization history and astrophysical parameters, especially if foreground subtraction techniques can be improved and successfully implemented. Keywords: Astrophysics - Cosmology And Nongalactic Astrophysics; ISSN: 0004-637X E-ISSN: 1538-4357
• 7 Article Description: Above redshift 6, the dominant source of neutral hydrogen in the Universe shifts from localized clumps in and around galaxies and filaments to a pervasive, diffuse component of the intergalactic medium (IGM). This transition tracks the global neutral fraction of hydrogen in the IGM and can be studied, in principle, through the redshifted 21 cm hyperfine transition line. During the last half of the reionization epoch, the mean (global) brightness temperature of the redshifted 21 cm emission is proportional to the neutral fraction, but at earlier times (10 < z < 25), the mean brightness temperature should probe the spin temperature of neutral hydrogen in the IGM. Measuring the (of order 10 mK) mean brightness temperature of the redshifted 21 cm line as a function of frequency (and hence redshift) would chart the early evolution of galaxies through the heating and ionizing of the IGM by their stellar populations. Experiments are already underway to accomplish this task or, at least, provide basic constraints on the evolution of the mean brightness temperature. We provide a brief overview of one of these projects, the Experiment to Detect the Global EoR Signature (EDGES), and discuss prospects for future results. Comment: From AIP Conference Proceedings, Volume 1035, 2008, "The Evolution of Galaxies through the Neutral Hydrogen Window". 3 pages Keywords: Astrophysics - Galaxy Astrophysics Source: Cornell University Library • 8 Article Language: English In: The Astrophysical Journal, 2004, Vol.617(1), pp.81-101 Description: Observational measurements of the relationship between supermassive black holes (SMBHs) and the properties of their host galaxies are an important method for probing theoretical hierarchical growth models. Gravitational lensing is a unique mechanism for acquiring this information in systems at cosmologically significant redshifts.
We review the calculations required to include SMBHs in two standard galactic lens models, a cored isothermal sphere and a broken power law. The presence of the SMBH produces two primary effects depending on the lens configuration, either blocking the "core" image that is usually predicted to form from a softened lens model or adding an extra, highly demagnified image to the predictions of the unaltered lens model. The magnitudes of these effects are very sensitive to galaxy core sizes and SMBH masses. Therefore, observations of these lenses would probe the properties of the inner regions of galaxies, including their SMBHs. Lensing cross sections and optical depth calculations indicate that to fully observe these characteristic signatures, flux ratios of order 10⁶ or more between the brightest and faintest images of the lens must be detectable, and thus, the next generation of radio telescope technology offers the first opportunity for a serious observational campaign. Core images, however, are already detectable, and with additional observations their statistics may be used to guide future SMBH searches. Keywords: Astrophysics; ISSN: 0004-637X E-ISSN: 1538-4357 • 9 Article In: Monthly Notices of the Royal Astronomical Society, 2015, Vol. 447(3), pp.2468-2478 Description: Recent observations with the Murchison Widefield Array at 185 MHz have serendipitously unveiled a heretofore unknown giant and relatively nearby (z = 0.0178) radio galaxy associated with NGC 1534. The diffuse emission presented here is the first indication that NGC 1534 is one of a rare class of objects (along with NGC 5128 and NGC 612) in which a galaxy with a prominent dust lane hosts radio emission on scales of ∼700 kpc. We present details of the radio emission along with a detailed comparison with other radio galaxies with discs.
NGC 1534 is the lowest surface brightness radio galaxy known, with an estimated scaled 1.4-GHz surface brightness of just 0.2 mJy arcmin⁻². The radio lobes have one of the steepest spectral indices yet observed: α = −2.1 ± 0.1, and the core to lobe luminosity ratio is <0.1 per cent. We estimate the space density of this low brightness (dying) phase of radio galaxy evolution as 7 × 10⁻⁷ Mpc⁻³ and argue that normal AGN cannot spend more than 6 per cent of their lifetime in this phase if they all go through the same cycle. Keywords: Techniques: Interferometric ; Galaxies: Active ; Galaxies: General ; Galaxies: Individual:Ngc 1534 ; Radio Continuum: Galaxies ISSN: 0035-8711 E-ISSN: 1365-2966 • 10 Conference Proceeding Language: English In: The Evolution Of Galaxies Through The Neutral Hydrogen Window, Arecibo Observatory (Puerto Rico) (1–3 February 2008): In: AIP Conference Proceedings, 01 August 2008, Vol.1035(1), pp.87-89 Description: Above redshift 6, the dominant source of neutral hydrogen in the Universe shifts from localized clumps in and around galaxies and filaments to a pervasive, diffuse component of the intergalactic medium (IGM). This transition tracks the global neutral fraction of hydrogen in the IGM and can be studied, in principle, through the redshifted 21 cm hyperfine transition line. During the last half of the reionization epoch, the mean (global) brightness temperature of the redshifted 21 cm emission is proportional to the neutral fraction, but at earlier times (10 < z < 25), the mean brightness temperature should probe the spin temperature of neutral hydrogen in the IGM. Measuring the (of order 10 mK) mean brightness temperature of the redshifted 21 cm line as a function of frequency (and hence redshift) would chart the early evolution of galaxies through the heating and ionizing of the IGM by their stellar populations.
Experiments are already underway to accomplish this task or, at least, provide basic constraints on the evolution of the mean brightness temperature. We provide a brief overview of one of these projects, the Experiment to Detect the Global EoR Signature (EDGES), and discuss prospects for future results. Keywords: Astronomy and Astrophysics ISBN: 978-0-7354-0558-5 ISSN: 0094-243X E-ISSN: 1551-7616 Source: © 2008 American Institute of Physics (AIP)
{}
# Saving from R Studio Hello. One of our (non-IT) staff is using R to "... create an HTML or Notebook using RStudio...". The error is: R "C:/Users/CRISTEL/AppData/Local/Pandoc/pandoc" +RTS -K512m -RTS test.utf8.md --to html4 --from markdown+autolink_bare_uris+ascii_identifiers+tex_math_single_backslash+smart --output test.html --email-obfuscation none --self-contained --standalone --section-divs --template "\\DOMAIN\dfs\Personal\cristel\R\win-library\3.5\rmarkdown\rmd\h\default.html" --no-highlight --variable highlightjs=1 --variable "theme:bootstrap" --include-in-header "C:\Users\CRISTEL\AppData\Local\Temp\Rtmpk3OLnj\rmarkdown-str1a8463ba62ad.html" --mathjax --variable "mathjax-url:https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" Could not fetch http://DOMAIN/dfs/Personal/cristel/R/win-library/3.5/rmarkdown/rmd/h/jquery-1.11.3/jquery.min.js HttpExceptionRequest Request { host = "DOMAIN" port = 80 secure = False path = "/dfs/Personal/cristel/R/win-library/3.5/rmarkdown/rmd/h/jquery-1.11.3/jquery.min.js" queryString = "" method = "GET" proxy = Nothing rawBody = False redirectCount = 10 responseTimeout = ResponseTimeoutDefault requestVersion = HTTP/1.1 } (ConnectionFailure Network.Socket.connect: <socket: 168>: failed (Socket is not connected (WSAENOTCONN))) Error: pandoc document conversion failed with error 61 Execution halted The problem itself is fairly obvious: RStudio is expecting to save the HTML file to a web server on the default port 80, but is actually pointing at nothing more than the UNC path of a DFS namespace. So the question is not "What is the problem?" but "How do we get round the problem?", i.e. how to get RStudio to just save the .htm(l) file to a folder. ## 2 Replies · · · Poblano OP I'm no R expert (I've had a look in RStudio, but never really used it), so I'm speculating here.. :) Anyway, from the error message, it seems it can't retrieve the jquery library from that path; are you sure it is correct? I see no mention of saving errors in that log. Text Could not fetch http://DOMAIN/dfs/Personal/cristel/R/win-library/3.5/rmarkdown/rmd/h/jquery-1.11.3/jquery.min.js Actually, it doesn't seem to be able to reach the host; does it have port 80 open? Text (ConnectionFailure Network.Socket.connect: <socket: 168>: failed (Socket is not connected (WSAENOTCONN))) Although, the path of the fetch error is probably in the template; that seems to be on the same host and folder, so it probably can reach it but can't find the minified jquery file. 0 · · · Cayenne OP asdrubalecanchi wrote: I'm no R expert (I've had a look in RStudio, but never really used it), so I'm speculating here.. :) Anyway, from the error message, it seems it can't retrieve the jquery library from that path; are you sure it is correct? I see no mention of saving errors in that log. Text Could not fetch http://DOMAIN/dfs/Personal/cristel/R/win-library/3.5/rmarkdown/rmd/h/jquery-1.11.3/jquery.min.js Actually, it doesn't seem to be able to reach the host; does it have port 80 open? Text (ConnectionFailure Network.Socket.connect: <socket: 168>: failed (Socket is not connected (WSAENOTCONN))) Although, the path of the fetch error is probably in the template; that seems to be on the same host and folder, so it probably can reach it but can't find the minified jquery file. Thanks. Happy to report this issue has become moot, as the end-user resolved the issue herself (not sure how).
{}
Letter # Attosecond optical-field-enhanced carrier injection into the GaAs conduction band ## Abstract Resolving the fundamental carrier dynamics induced in solids by strong electric fields is essential for future applications, ranging from nanoscale transistors1,2 to high-speed electro-optical switches3. How fast and at what rate can electrons be injected into the conduction band of a solid? Here, we investigate the sub-femtosecond response of GaAs induced by resonant intense near-infrared laser pulses using attosecond transient absorption spectroscopy. In particular, we unravel the distinct role of intra- versus interband transitions. Surprisingly, we found that despite the resonant driving laser, the optical response during the light–matter interaction is dominated by intraband motion. Furthermore, we observed that the coupling between the two mechanisms results in a significant enhancement of the carrier injection from the valence into the conduction band. This is especially unexpected as the intraband mechanism itself can accelerate carriers only within the same band. This physical phenomenon could be used to control ultrafast carrier excitation and boost injection rates in electronic switches in the petahertz regime. ## Main Shrinking structure sizes in integrated circuits inevitably lead to increasing field strengths in the involved semiconductor materials1,2. At the same time, ultrafast optical technologies enable the extension of operation frequencies of electro-optical devices to the petahertz regime3. Both applications ultimately require a deep fundamental understanding of ultrafast electron dynamics in solids in the presence of strong fields for the development of the next generation of compact and fast electronic devices.
A number of pioneering experiments demonstrated the potential to measure and control carrier dynamics induced by intense near-infrared laser pulses (peak intensity I_peak ~ 10¹² W cm⁻²) in semiconductors4,5,6,7,8 and dielectrics9,10 on a sub- to few-femtosecond timescale using transient absorption and polarization spectroscopy. So far, resolving such dynamics with attosecond resolution has been limited to the non-resonant excitation regime, where the bandgap of the investigated material is larger than the energy of a single pump photon. Here, in contrast, we unravel the sub-femtosecond response of gallium arsenide (GaAs), a prototype and technologically relevant direct-bandgap semiconductor, in the resonant regime. Besides the ‘vertical’ optical transition in the momentum space that corresponds to the absorption of infrared pump photons (so-called interband transition, Fig. 1b), the pump field can also accelerate electrons within the electronic bands (intraband motion, Fig. 1c). In a simplified picture, one can think of inter- and intraband transitions as a consequence of the dual nature of the pump light that behaves either as photons (interband) or as a classical electromagnetic field (intraband). The role of intra- versus interband transitions in the presence of strong electric fields is highly debated11,12,13,14,15,16,17. For the infrared intensities used in this experiment, we can neglect contributions from the magnetic laser fields18. In a recent publication, we demonstrated that during the interaction of a wide-bandgap dielectric such as diamond with a short, intense, non-resonant infrared pump pulse, intraband motion completely dominates the transient optical response10. However, it is still unclear whether and how this situation changes in the technologically much more relevant resonant case where a single photon from the pulse has enough energy to induce an interband transition that creates real carriers in the conduction band (CB).
The question of whether intraband motion still dominates the interaction and how the coupling between the two mechanisms influences the carrier injection is not obvious and has not been experimentally investigated so far. To study the electronic response of GaAs when driven out of equilibrium, we combine a 5–6 fs infrared pump pulse (centre energy $\hbar\omega_{IR} \approx 1.59$ eV) with a delayed phase-locked single attosecond pulse (SAP) probe as illustrated in Fig. 1a (further details are given in the Supplementary Information and in ref. 19). The infrared pump pulse has a peak intensity in vacuum of ~2.31 ± 0.17 × 10¹² W cm⁻², which corresponds to a peak electric field of ~0.42 V Å⁻¹. The estimated intensity inside the sample reaches up to 60% of the intensity in vacuum. The two beams are focused into a double target that consists of a gas jet followed by a 100-nm-thick single-crystalline GaAs membrane. The neon gas target enables the extraction of the temporal shape of both pulses as well as a precise delay calibration via a simultaneously recorded streaking measurement20,21. We calibrate the time axis of the streaking trace by taking into account the spatial separation of the two targets22. The pump–probe principle of attosecond transient absorption spectroscopy is illustrated in Fig. 1. The infrared pulse can induce both inter- and intraband transitions. The SAP probes the modified charge distribution by exciting electrons from the As-3d core levels to available states around the bandgap region. Figure 1d shows the measured static absorption spectrum of the GaAs membrane. It is important to note that the broad extreme-ultraviolet (XUV) spectrum of the SAP simultaneously probes the dynamics in the valence band (VB) and the CB. Figure 2a displays the absorption modification of GaAs induced by the resonant pump pulse, $ΔAbs(E,τ)$ (for definition, see Supplementary Information). A red (blue) region indicates increased (decreased) absorption.
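As a rough plane-wave sanity check on the quoted numbers (an illustration, not part of the paper's methods), the quoted peak field follows from the quoted vacuum intensity via I = ½cε₀E², and the quoted photon energy fixes the period of the twice-the-pump-frequency oscillations discussed in the analysis:

```python
import math

# Physical constants (SI)
c    = 2.998e8       # speed of light, m/s
eps0 = 8.854e-12     # vacuum permittivity, F/m
hbar = 1.0546e-34    # reduced Planck constant, J s
eV   = 1.602e-19     # joules per electronvolt

# Values quoted in the text
I_peak   = 2.31e12 * 1e4   # 2.31e12 W/cm^2 converted to W/m^2
E_photon = 1.59 * eV       # pump photon energy, J

# Peak field of a plane wave: I = (1/2) * c * eps0 * E^2
E_field = math.sqrt(2 * I_peak / (c * eps0))            # V/m
print(f"peak field ~ {E_field * 1e-10:.2f} V/Angstrom")  # ~0.42, as quoted

# Period of oscillations at twice the pump frequency: 2*pi/(2*omega) = pi/omega
omega = E_photon / hbar                                  # rad/s
print(f"2w_IR period ~ {math.pi / omega * 1e15:.2f} fs") # ~1.3 fs
```

The ~0.42 V Å⁻¹ figure matches the field quoted for the vacuum intensity; inside the sample the field is correspondingly lower.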
In the following analysis, we concentrate on two different delay regimes: (1) when the pump and probe overlap, and (2) when the probe pulse arrives well after the pump. Without temporal overlap after the infrared pump pulse, we see a long-lasting signal (that is, regime (2) in Fig. 2a), which persists after the pump interaction over a considerable delay range. During the interaction, electrons are excited via interband transitions from the VB to the CB. This mechanism fully takes into account the nonlinear injection of carriers (see Supplementary Information). The creation of holes in the VB and electrons in the CB causes an increased XUV absorption at the upper VB edge (around 40 eV) and a bleached absorption at the lower CB range (around 43 eV), respectively. The system returns to its equilibrium ground state through electron–hole recombination, which happens for bulk GaAs on a timescale of 2.1 ns (ref. 23). By looking at negative delays, we can see that the absorption of the system recovers completely between subsequent pulses, which means that there are no accumulative effects and heating of the sample by the laser is negligible. During the temporal overlap of the infrared pump and the XUV probe pulse, we observe a transient signal (that is, regime (1) in Fig. 2a), which oscillates with $2ω_{IR}$ and lasts for the duration of the pump pulse (Fig. 2b). The oscillations are visible in a broad probe energy range, most pronounced in the CB between 42.5 and 46 eV. Below 42 eV, they are not well resolved due to stronger fluctuations of the SAP spectral amplitude. However, attosecond transient absorption spectroscopy measurements performed with attosecond pulse trains characterized by a more stable spectrum confirmed the appearance of oscillations also in the VB, around 40 eV (see Supplementary Information). Figure 2d shows the squared vector potential $A(t)^2$ of the measured infrared pump and the measured transient absorbance for two energy windows.
A comparison among them reveals a strong energy dependence of the oscillation phase, which is reflected in the tilted shape of the oscillation features in $ΔAbs(E,τ)$. To understand the microscopic origin of the measured features, we performed a first-principles electron dynamics simulation (see Supplementary Information for details). We simulated the pump–probe experiment24 and calculated the pump-induced change of the dielectric function including propagation effects, $Δε(E,τ)$, which is directly related to the absorption change $ΔAbs(E,τ)$ (ref. 10). The numerical results show oscillations with a tilted shape and a long-lasting signal, in good agreement with the experiment (Fig. 2c). With a decomposition of the probe Hamiltonian of the first-principles simulation into Houston states10,25, we can disentangle the contributions of the two probe transitions (As-3d level to either VB or CB) in the observed dynamics (see Supplementary Information). The energy range above 42 eV, where the strongest transient signal appears, is dominated by probe transitions from the core level to the CB (Fig. 3a). Therefore, in the following, we focus on the CB response. In a previous study10, we demonstrated that a non-resonant pump can excite virtual electrons on a sub-femtosecond timescale via intraband motion. Virtual electron excitations live only transiently during the presence of the driving field. For the present experiment, the resonant part of the pump radiation will also inject real carriers into the CB via interband transitions. A population of real carriers persists after the driving pulse has passed and decays orders of magnitude slower than the timescale considered here. To study the ultrafast carriers, we have to investigate the respective signal contributions of infrared-induced intra- and interband transitions.
Therefore, we simplify the description of our system to a three-band model, which includes the As-3d level, the light-hole VB and the lowest CB (see Supplementary Information). The advantage of the three-band model is that intraband motion and interband transitions between the VB and CB can be numerically included or excluded. Figure 3b shows the CB response with both types of transition involved. The good qualitative agreement with the first-principles decomposition (Fig. 3a) justifies the use of this model to study the respective optical response induced by the two mechanisms. In the intraband limit, no real electrons are excited from the VB to the CB26. This explains why the dielectric function of GaAs fully recovers immediately after the pump pulse (Fig. 3c). In the interband limit, real carriers are injected into the CB by resonant photon absorption, thus resulting in the blue long-lasting signal around 43 eV (Fig. 3d). In both cases, absorption oscillations with twice the pump frequency appear (Fig. 3e). They originate from the dynamical Franz–Keldysh effect10,27 (DFKE, intraband limit) and the dynamical Stark effect28 (interband limit). In contrast to the interband case, the intraband limit clearly shows the strong energy dispersion as in the experiment. In addition, a closer look reveals that the intraband trace oscillates nearly in phase with the decomposed first-principles simulation and therefore with the experimental results, while the inter-band picture clearly fails to reproduce the experimental phase (Fig. 3e). To further verify this, we compare the energy dispersion of the oscillation delay between the measured and simulated signal for the different models and limits (Fig. 3f). The pure interband case of the three-band model fails to reproduce the experiment while the delay of the intraband limit shows excellent agreement with the experimental results. 
Therefore, by looking at the attosecond timing of the transient signal, we can conclude that infrared-induced intraband motion (namely the DFKE) dominates the ultrafast response in the CB of GaAs during the pump–probe overlap even in a resonant pumping condition. This is a surprising result, as in the case of a resonant intense pump it is believed that one should not be able to observe DFKE around the bandgap10,26,27. Finally, we look at the injection of real carriers from the VB into the CB. We define the CB population, nCB, by the projection of the time-dependent wavefunction of the three-band model on the CB state (see Supplementary Information). In the case of neglected intraband motion (only interband transitions), the calculation predicts a stepwise oscillating increase of nCB following the intensity of the pump pulse (Fig. 4). During the second part of the pump interaction, Rabi-flopping partly depopulates the CB. Surprisingly, in the realistic case involving both excitation mechanisms, the amount of excited carriers increases by nearly a factor of three compared to the model with only interband transitions. This result shows that, although intraband motion does not create real carriers in the CB by itself26, it assists in the carrier injection initiated by the resonant part of the pump. This indicates that the nonlinear interplay between intra- and interband transitions opens a new excitation channel via virtually excited states at high pump intensities. It is worth emphasizing that the observed enhancement of the injection rate can also be seen in the multi-photon resonant pump regime (see Supplementary Information). Further, it does not depend on the pulse duration. However, using significantly longer pulses or continuous-wave laser light with the same field strength could lead to the target being irreversibly damaged. 
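As a deliberately simplified illustration of the resonant interband (Rabi) injection discussed above — a toy two-level model in the rotating-wave approximation, not the paper's three-band or first-principles calculation, and with purely illustrative parameters — the pulse-driven population transfer can be sketched as:

```python
import math

def rabi_population(area=math.pi / 2, tau=1.0, steps=4000):
    """Resonantly driven two-level system (amplitudes cv, cc) in the RWA:
    dcv/dt = (i/2) Omega(t) cc,  dcc/dt = (i/2) Omega(t) cv,
    with a Gaussian envelope whose pulse area, integral of Omega(t) dt,
    equals `area`. Analytically the final excited population is sin^2(area/2).
    """
    omega0 = area / (tau * math.sqrt(math.pi))
    t, dt = -6.0 * tau, 12.0 * tau / steps
    cv, cc = 1.0 + 0j, 0.0 + 0j

    def deriv(t, cv, cc):
        om = omega0 * math.exp(-(t / tau) ** 2)
        return 0.5j * om * cc, 0.5j * om * cv

    for _ in range(steps):  # classic fourth-order Runge-Kutta
        k1v, k1c = deriv(t, cv, cc)
        k2v, k2c = deriv(t + dt / 2, cv + dt / 2 * k1v, cc + dt / 2 * k1c)
        k3v, k3c = deriv(t + dt / 2, cv + dt / 2 * k2v, cc + dt / 2 * k2c)
        k4v, k4c = deriv(t + dt, cv + dt * k3v, cc + dt * k3c)
        cv += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        cc += dt / 6 * (k1c + 2 * k2c + 2 * k3c + k4c)
        t += dt
    return abs(cc) ** 2
```

For a pulse area of π/2 the final population is 0.5; increasing the area toward π drives the population up before Rabi flopping partly depopulates the excited state again, mirroring the behaviour described above.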
To conclude, our measurements and simulations reveal the mechanisms of the sub-femtosecond electron injection in GaAs driven by intense and resonant infrared laser pulses. In contrast to expectations, our results demonstrate that ultrafast transient absorption features, which characterize the early response of the semiconductor to the resonant pump excitation, are dominated by intraband motion, rather than by interband transitions. Furthermore, our simulations show that the virtual carriers created by the intraband motion assist in the injection of real carriers from the VB into the CB. Hence, the interplay between both transition types significantly influences the injection mechanism in the presence of strong electric fields. This process is expected to be universal and persist in a large range of excitation parameters. Therefore, our observation reveals important information about sub-femtosecond electron dynamics in a solid induced by strong fields, which is required for the scaling of the next generation of efficient and fast optical switches and electronics driven in the petahertz regime. ### Data availability The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## References 1. 1. Mei, X. et al. First demonstration of amplification at 1 THz using 25-nm InP high electron mobility transistor process. IEEE Electron Device Lett. 36, 327–329 (2015). 2. 2. Desai, S. B. et al. MoS2 transistors with 1-nanometer gate lengths. Science 354, 99–102 (2016). 3. 3. Krausz, F. & Stockman, M. I. Attosecond metrology: from electron capture to future signal processing. Nat. Photon. 8, 205–213 (2014). 4. 4. Schultze, M. et al. Attosecond band-gap dynamics in silicon. Science 346, 1348–1352 (2014). 5. 5. 
Mashiko, H., Oguri, K., Yamaguchi, T., Suda, A. & Gotoh, H. Petahertz optical drive with wide-bandgap semiconductor. Nat. Phys. 12, 741–745 (2016). 6. 6. Sommer, A. et al. Attosecond nonlinear polarization and light–matter energy transfer in solids. Nature 534, 86–90 (2016). 7. 7. Zürch, M. et al. Ultrafast carrier thermalization and trapping in silicon–germanium alloy probed by extreme ultraviolet transient absorption spectroscopy. Struct. Dyn. 4, 044029 (2017). 8. 8. Zürch, M. et al. Direct and simultaneous observation of ultrafast electron and hole dynamics in germanium. Nat. Commun. 8, 15734 (2017). 9. 9. Schultze, M. et al. Controlling dielectrics with the electric field of light. Nature 493, 75–78 (2013). 10. 10. Lucchini, M. et al. Attosecond dynamical Franz–Keldysh effect in polycrystalline diamond. Science 353, 916–919 (2016). 11. 11. Golde, D., Meier, T. & Koch, S. W. High harmonics generated in semiconductor nanostructures by the coupled dynamics of optical inter- and intraband excitations. Phys. Rev. B 77, 075330 (2008). 12. 12. Ghimire, S. et al. Observation of high-order harmonic generation in a bulk crystal. Nat. Phys. 7, 138–141 (2011). 13. 13. Malard, L. M., Mak, K. F., Castro Neto, A. H., Peres, N. M. R. & Heinz, T. F. Observation of intra- and inter-band transitions in the transient optical response of graphene. New J. Phys. 15, 015009 (2013). 14. 14. Al-Naib, I., Sipe, J. E. & Dignam, M. M. High harmonic generation in undoped graphene: Interplay of inter- and intraband dynamics. Phys. Rev. B 90, 245423 (2014). 15. 15. Luu, T. T. et al. Extreme ultraviolet high-harmonic spectroscopy of solids. Nature 521, 498–502 (2015). 16. 16. Wismer, M. S., Kruchinin, S. Y., Ciappina, M., Stockman, M. I. & Yakovlev, V. S. Strong-field resonant dynamics in semiconductors. Phys. Rev. Lett. 116, 197401 (2016). 17. 17. Paasch-Colberg, T. et al. Sub-cycle optical control of current in a semiconductor: from the multiphoton to the tunneling regime. 
Optica 3, 1358 (2016). 18. 18. Ludwig, A. et al. Breakdown of the dipole approximation in strong-field ionization. Phys. Rev. Lett. 113, 243001 (2014). 19. 19. Locher, R. et al. Versatile attosecond beamline in a two-foci configuration for simultaneous time-resolved measurements. Rev. Sci. Instrum. 85, 013113 (2014). 20. 20. Hentschel, M. et al. Attosecond metrology. Nature 414, 509–513 (2001). 21. 21. Itatani, J. et al. Attosecond streak camera. Phys. Rev. Lett. 88, 173903 (2002). 22. 22. Schlaepfer, F. et al. Gouy phase shift for annular beam profiles in attosecond experiments. Opt. Express 25, 3646–3655 (2017). 23. 23. Beard, M. C., Turner, G. M. & Schmuttenmaer, C. A. Transient photoconductivity in GaAs as measured by time-resolved terahertz spectroscopy. Phys. Rev. B 62, 15764–15777 (2000). 24. 24. Sato, S. A., Yabana, K., Shinohara, Y., Otobe, T. & Bertsch, G. F. Numerical pump–probe experiments of laser-excited silicon in nonequilibrium phase. Phys. Rev. B 89, 064304 (2014). 25. 25. Houston, W. V. Acceleration of electrons in a crystal lattice. Phys. Rev. 57, 184–186 (1940). 26. 26. Srivastava, A., Srivastava, R., Wang, J. & Kono, J. Laser-induced above-band-gap transparency in GaAs. Phys. Rev. Lett. 93, 157401 (2004). 27. 27. Novelli, F., Fausti, D., Giusti, F., Parmigiani, F. & Hoffmann, M. Mixed regime of light–matter interaction revealed by phase sensitive measurements of the dynamical Franz–Keldysh effect. Sci. Rep. 3, 1227 (2013). 28. 28. Bakos, J. S. AC stark effect and multiphoton processes in atoms. Phys. Rep. 31, 209–235 (1977). 29. 29. Vurgaftman, I., Meyer, J. R. & Ram-Mohan, L. R. Band parameters for III–V compound semiconductors and their alloys. J. Appl. Phys. 89, 5815–5875 (2001). 30. 30. Kraut, E. A., Grant, R. W., Waldrop, J. R. & Kowalczyk, S. P. Precise determination of the valence-band edge in X-ray photoemission spectra: application to measurement of semiconductor interface potentials. Phys. Rev. Lett. 44, 1620–1623 (1980). 
## Acknowledgements We thank M. C. Golling for growing the GaAs, and J. Leuthold and C. Bolognesi for helpful discussion. The authors acknowledge the support of the technology and cleanroom facility at Frontiers in Research: Space and Time (FIRST) of ETH Zurich for advanced micro- and nanotechnology. This work was supported by the National Center of Competence in Research Molecular Ultrafast Science and Technology (NCCR MUST) funded by the Swiss National Science Foundation, and by JSPS KAKENHI grant no. 26-1511. ## Author information ### Author notes • M. Lucchini Present address: Department of Physics, Politecnico di Milano, Milano, Italy ### Affiliations 1. #### Department of Physics, ETH Zurich, Zurich, Switzerland • F. Schlaepfer • , M. Lucchini • , M. Volkov • , L. Kasmi • , N. Hartmann • , L. Gallmann •  & U. Keller 2. #### Max Planck Institute for the Structure and Dynamics of Matter, Hamburg, Germany • S. A. Sato •  & A. Rubio ### Contributions F.S., M.L., L.G. and U.K. supervised the study. F.S., M.L., M.V., L.K. and N.H. conducted the experiments. M.V. also improved the experimental set-up and data acquisition system. F.S. fabricated the sample and analysed the experimental data. S.A.S. and A.R. developed the theoretical modelling. All authors were involved in the interpretation and contributed to the final manuscript. ### Competing interests The authors declare no competing interests. ### Corresponding authors Correspondence to F. Schlaepfer or U. Keller. ## Supplementary information 1. ### Supplementary Information Supplementary Figure 1–13, Supplementary Table 1, Supplementary References
# How to get rid of the error message when evaluating a highly oscillatory numerical integral?

I have a simple numerical integral with a highly oscillatory integrand:

In[256]:= Integrate[Exp[-x^2] Cos[100 x], {x, -10, 10}] // N

Out[256]= 5.11136*10^-46 + 0. I

For numerical integration, I used the "LevinRule" method and increased the WorkingPrecision to 50. I got the right result, but it came with an error message:

In[273]:= NIntegrate[Exp[-x^2] Cos[100 x], {x, -10, 10}, Method -> {"LevinRule"}, WorkingPrecision -> 50]

Out[273]= 5.1113608752199056046419863980883842786372578322323*10^-46

The error message says that the integral failed to converge, which makes me worry in cases where I don't know the exact answer in advance.

During evaluation of In[273]:= NIntegrate::slwcon: Numerical integration converging too slowly; suspect one of the following: singularity, value of the integration is 0, highly oscillatory integrand, or WorkingPrecision too small. >>

During evaluation of In[273]:= NIntegrate::ncvb: NIntegrate failed to converge to prescribed accuracy after 9 recursive bisections in x near {x} = {0.06347548783822353791148881980667872878998059126103862089665838701205185579592695769871303841009927124}. NIntegrate obtained 5.111360875219905604641986398088384278637257832232254321100441868877978318265582017097113446297596528100.*^-46 and 2.115650450746176575259758050321352748848126918694539847444939975975907083592875723586821709970659407100.*^-59 for the integral and error estimates. >>

How do I get rid of this error message, without merely suppressing it?

• Have you ever heard about Quiet? – mmal Jul 13, 2016 at 2:28
• @mmal What if I don't know the exact answer? I suppose suppressing the error message is dangerous. Jul 13, 2016 at 2:30

You can try setting the AccuracyGoal lower than WorkingPrecision, which yields the correct result without a reported warning.
NIntegrate[Exp[-x^2] Cos[100 x], {x, -10, 10}, Method -> {"LevinRule"}, WorkingPrecision -> 50, AccuracyGoal -> 35]

5.1113608752199120138254477520179596033660767259737*10^-46

NIntegrate[Exp[-x^2] Cos[100 x], {x, -10, 10}, Method -> {"LevinRule"}, WorkingPrecision -> 100, AccuracyGoal -> 90]

5.111360875219902996413070441758823793106400095561424326362064748747903858004996545055056958734154112*10^-46

When you give a setting for WorkingPrecision, this typically defines an upper limit on the precision of the results from a computation. But within this constraint you can tell the Wolfram Language how much precision and accuracy you want it to try to get. In a highly oscillatory function, the last few digits of precision may not converge and/or may take extended time.

• Can you explain why this works and why the previous code reports an error? The error says "failed to converge to prescribed accuracy"; I assume this means that the AccuracyGoal set by default is very high (even higher than 35 and 90?)... Jul 13, 2016 at 3:37
• The link says AccuracyGoal->Automatic normally yields an accuracy goal equal to half the setting for WorkingPrecision. I suppose when I leave AccuracyGoal unspecified it is set to 25, but then why can't it converge, when it can with the goal set explicitly to 35? Jul 13, 2016 at 3:52
• I didn't find where it explicitly says this is true only for NDSolve Jul 13, 2016 at 3:55
• We are not talking about the same link... You are right, maybe the default value in NIntegrate is the WorkingPrecision. Jul 13, 2016 at 4:01
• The link in your first sentence... Click on details; it doesn't say this is true only for NDSolve. Jul 13, 2016 at 4:08

The shortest and best way between two truths of the real domain often passes through the imaginary one. By taking a complex path, I get the answer without any complaints from Mathematica.
parabolic[a_, x_] = Simplify[InterpolatingPolynomial[{{-10, 0}, {0, a}, {10, 0}}, x]] With[{a = 1}, Re[NIntegrate[With[{x = x + I parabolic[a, x]}, Exp[-x^2 + 100 I x]] (1 + I Derivative[0, 1][parabolic][a, x]), {x, -10, 10}, WorkingPrecision -> 20]]] 5.1113608752199029964*10^-46 If you want further reading, see this or this. • This is a very smart way! Jul 13, 2016 at 8:49 • Pfft! Using brute force is better! May 24, 2020 at 0:43 • @Anton, sometimes ;D May 24, 2020 at 0:46 When in doubt, use brute force. -- Ken Thompson AbsoluteTiming[ NIntegrate[Exp[-x^2] Cos[100 x], {x, -10, 10}, Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0, "SingularityHandler" -> None}, MaxRecursion -> 100, AccuracyGoal -> 90, WorkingPrecision -> 100] ] (* {100.201, 5.111360875219902996413070441758823793106400095561424326362063281173449212599595171095728154857298543*10^-46} *) Another brute-force approach, but quick: NIntegrate[Exp[-x^2] Cos[100 x], Evaluate@Flatten@{x, Range[-10., 10, 2 Pi/100], 10.}, Method -> {"GaussKronrodRule", "Points" -> 31}, MaxRecursion -> 0, PrecisionGoal -> 25, WorkingPrecision -> 100] // N[#, 50] & // AbsoluteTiming (* {0.231645, 5.1113608752199029964130704417588237931064000955614*10^-46} *)
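As a language-agnostic cross-check (an illustrative sketch, not part of the original thread): the truncated integral can be estimated in ordinary double precision by noting that the full-line value, √π·e^(−2500) ≈ 10^(−1086), is negligible, so the answer is minus twice the endpoint tail at x = 10, which after shifting the contour is an O(0.01) quantity times e^(−100):

```python
import cmath
import math

def truncated_gaussian_cos_integral():
    """Estimate I = Integral_{-10}^{10} exp(-x^2) cos(100 x) dx in doubles.

    Since the full-line value sqrt(pi)*exp(-2500) is negligible,
    I ~= -2 * Re[exp(-100 + 1000j) * K], where (substituting x = 10 + s)
    K = Integral_0^inf exp(-s^2 - (20 - 100j) s) ds is O(0.01) and
    perfectly representable in ordinary floats.
    """
    a = 20.0 - 100.0j
    f = lambda s: cmath.exp(-s * s - a * s)
    # composite Simpson's rule on [0, 2]; the integrand is < 1e-19 beyond s = 2
    n, h = 4000, 2.0 / 4000
    acc = f(0.0) + f(2.0)
    for k in range(1, n):
        acc += (4 if k % 2 else 2) * f(k * h)
    K = acc * h / 3.0
    return -2.0 * math.exp(-100.0) * (cmath.exp(1000j) * K).real
```

This reproduces the leading digits of Mathematica's result, ≈ 5.11136×10⁻⁴⁶, and makes clear why naive double-precision quadrature of the original integrand cannot: the answer sits some 46 orders of magnitude below the integrand's peak.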
# Asymptotic formula of $\sum_{n \le x} \frac{d(n)}{n^a}$ As the title says, I'm trying to prove $$\sum_{n \le x} \frac{d(n)}{n^a}= \frac{x^{1-a} \log x}{1-a} + \zeta(a)^2+O(x^{1-a}),$$ for $x \ge 2$ and $a>0,a \ne 1$, where $d(n)$ is the number of divisors of $n$. There is a post here dealing with the case $a=1$. This is what I have done so far: \begin{align*} \sum_{n \le x} \frac{d(n)}{n^a} &= \sum_{n \le x} \frac{1}{n^a} \sum_{d \mid n} 1 = \sum_{d \le x} \sum_{\substack{n \le x \\ d \mid n}} \frac{1}{n^a} = \sum_{d \le x} \sum_{q \le x/d } \frac{1}{(qd)^a} = \sum_{d \le x} \frac{1}{d^a} \sum_{q \le x/d} \frac{1}{q^a} \\ &= \sum_{d \le x} \frac{1}{d^a} \left( \frac{(x/d)^{1-a}}{1-a} + \zeta(a) + O((x/d)^{-a}) \right) \\ &= \sum_{d \le x} \left( \frac{x^{1-a}}{d(1-a)} + \frac{\zeta(a)}{d^a} \right) + O(x^{1-a}), \end{align*} from here things start to go out of hand... I've tried using the relevant formulas from this page, but I can't get it to "fit". Any help would be appreciated. - If you know complex analysis, try to use Perron's formula for the function $\zeta (s)^2$. –  Soarer Nov 5 '11 at 15:53 @Soarer: I haven't studied complex analysis... –  Carolus Nov 5 '11 at 16:09 What you're trying to show isn't true. You should have $\zeta(a)^2$ rather than $\zeta(a)$. (See Exercise 3.3 in Apostol's number theory book in the link above.) \begin{align} &\sum_{d \le x} \left( \frac{x^{1-a}}{d(1-a)} + \frac{\zeta(a)}{d^a} \right) + O\left(x^{1-a}\right) \\ &= \frac{x^{1-a}}{1-a} \sum_{d \le x} \frac{1}{d} + \zeta(a) \sum_{d \le x}\frac{1}{d^a} + O\left(x^{1-a}\right) \\ &= \frac{x^{1-a}}{1-a} \Big(\log x + O(1)\Big) + \zeta(a) \left(\frac{x^{1-a}}{1-a} + \zeta(a) + O(x^{-a})\right) + O\left(x^{1-a}\right) \\ &= \frac{x^{1-a} \log x}{1-a} + \zeta(a)^2 + O\left(x^{1-a}\right). \\ \end{align} Thanks! And, yes, that was a typo. By the way, I haven't studied any analysis and I can't say I really understand this asymptotic stuff/big Oh just yet. 
Could you please explain to me the following: 1) Why is $(\log x+O(1))$ equivalent to $(\log x+\gamma+O(\log x/x))$ - is it because the $O$-term now "includes" the $\gamma$? 2) How come the term $(x^{1-a} \zeta(a))/(1-a)$ disappear? As I said, I'm very much a beginner, so sorry if I'm asking really stupid questions! Thanks for the help. –  Carolus Nov 6 '11 at 5:40 @Carolus: 1) They aren't equivalent. An expression that is $\log x + \gamma + O(\log x/x)$ is also $\log x + O(1)$ (which is the direction I'm using), but the implication doesn't go in the other direction. Saying that an expression is $\log x + O(1)$ means that the dominant term is $\log x$ and the rest of the terms are of constant order. The expression $\log x + \gamma + O(\log x/x)$ has that property. So, in a sense, you're right to say that the $O$ term "includes" the $\gamma$. –  Mike Spivey Nov 6 '11 at 20:52 @Carolus: 2) It is absorbed into the $O(x^{1-a})$ expression. Since $\zeta(a)/(1-a)$ is a constant, the expression $(x^{1-a} \zeta(a))/(1-a)$ is $O(x^{1-a})$. Then the sum of two $O(x^{1-a})$ expressions is also $O(x^{1-a})$. (And no need to apologize for asking questions! Asymptotic expressions can be tricky to deal with when you first meet them.) –  Mike Spivey Nov 6 '11 at 20:57
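As a numerical sanity check on the final asymptotic (a sketch added here, not part of the original thread), one can compute the sum directly and confirm that the error really is O(x^{1−a}); the value ζ(1/2) ≈ −1.4603545 is used as a known constant:

```python
import math

ZETA_HALF = -1.4603545  # zeta(1/2), known numerical value

def S(x, a):
    """Sum_{n <= x} d(n) / n^a, computed by summing 1/(q d)^a over pairs q*d <= x."""
    total = 0.0
    for d in range(1, x + 1):
        for m in range(d, x + 1, d):  # m = q*d runs over the multiples of d
            total += m ** (-a)
    return total

x, a = 100_000, 0.5
main = x ** (1 - a) * math.log(x) / (1 - a) + ZETA_HALF ** 2
err = S(x, a) - main
```

In this range the observed error behaves like a bounded constant (roughly −1.7) times √x, consistent with the O(x^{1−a}) term in the formula, and it is small compared with the main term x^{1−a} log x / (1−a).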
# How do you solve 2x^2 - 32 = 0?

Apr 17, 2016

$x = 4, - 4$

#### Explanation:

You can factor out a 2 as the GCF:

$2 \left({x}^{2} - 16\right) = 0$

${x}^{2} - 16$ is a difference of squares, which factors as $\left(x - 4\right) \left(x + 4\right)$.

Therefore, you now have $2 \left(x - 4\right) \left(x + 4\right) = 0$.

To solve this quadratic means to find its roots (the points where the parabola crosses the x-axis):

$x - 4 = 0$
$x = 4$

and

$x + 4 = 0$
$x = - 4$

Therefore, $x = -4, 4$.
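If you want to check the answer mechanically, here is a quick sketch in Python (the function name is just illustrative):

```python
import math

def solve_ax2_plus_c(a, c):
    """Solve a*x^2 + c = 0 over the reals; assumes -c/a >= 0, as in 2x^2 - 32 = 0."""
    r = math.sqrt(-c / a)
    return sorted({r, -r})

roots = solve_ax2_plus_c(2, -32)
print(roots)  # [-4.0, 4.0]
```

Plugging either root back into 2x² − 32 gives 0, confirming the factoring argument above.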
# Paint Work Calculator (IS 15489)

## Paint Work Calculation

#### Total Paint Area

312.15 m2 | 3360.00 ft2

| Sr. | Material | Quantity |
| --- | --- | --- |
| 1 | Paint | 33.60 liters |
| 2 | Primer | 33.60 liters |
| 3 | Putty | 84.00 kgs |

## Paint-Work Calculation

#### Paint Area

Note: Approximate paint area, including walls and ceiling.

#### Door & Window Area

$\mathrm{Door\ Area}=\mathrm{Width}×\mathrm{Height}×\mathrm{No.\ of\ Doors}$

#### Paint

$Paint =\frac{\mathrm{Actual\ Paint\ Area}}{100}$

$Paint =\frac{3360.00\ \mathrm{ft}^{2}}{100}$

$Paint = 33.60\ \mathrm{liters}$

Note: 1 liter of paint covers up to 100 sq. ft. of area.

#### Primer

$Primer =\frac{\mathrm{Actual\ Paint\ Area}}{100}$

$Primer =\frac{3360.00\ \mathrm{ft}^{2}}{100}$

$Primer = 33.60\ \mathrm{liters}$

Note: 1 liter of primer covers up to 100 sq. ft. of area.

#### Putty

$Putty =\frac{\mathrm{Actual\ Paint\ Area}}{40}$

$Putty =\frac{3360.00\ \mathrm{ft}^{2}}{40}$

$Putty = 84.00\ \mathrm{kgs}$

Note: 2.5 kg of putty covers up to 100 sq. ft. of area.

## What is paint calculation?

Paint is a liquid or mastic material that can be applied to surfaces to colour, protect and provide texture. Paints are usually stored as a liquid and dry into a thin film after application. Paints can be categorised as decorative (applied on site) or industrial (applied in factories as part of the manufacturing process). The Paint Calculator helps you calculate the area to be painted and gives you an estimate of the required amount of paint. Paints can be applied with a brush or roller, or by dipping, flowcoating, spraying, hot spraying, electrostatic spraying, airless spraying, electrodeposition, powder coating, vacuum impregnation, immersion, and so on.
##### Paint Calculation

$\mathrm{Paint\ Area}=\mathrm{Carpet\ Area}×3.5$

$\mathrm{Door\ Area}=\mathrm{Door\ Height}×\mathrm{Door\ Width}×\mathrm{No.\ of\ Doors}$

$\mathrm{Window\ Area}=\mathrm{Window\ Height}×\mathrm{Window\ Width}×\mathrm{No.\ of\ Windows}$

$\mathrm{Actual\ Paint\ Area}=\mathrm{Paint\ Area}-\mathrm{Door\ Area}-\mathrm{Window\ Area}$

$Paint=\frac{\mathrm{Actual\ Paint\ Area}}{100}$

$Primer=\frac{\mathrm{Actual\ Paint\ Area}}{100}$

$Putty=\frac{\mathrm{Actual\ Paint\ Area}}{40}$

Where,

• m2 (square meters) and ft2 (square feet) give the total area.
• Length and width are in meters/cm or feet/inches.

Note: 1 m2 = 10.7639 ft2; 1 litre = 0.264172 gallons.

## What are the coverage areas for paint?

• Paint: Paint is a substance used as the final finish to all surfaces and as a coating to protect or decorate the surface. 1 liter of paint covers up to 100 square feet of paint area.
• Primer: Primer is a paint product that allows finishing paint to adhere much better than if it were used alone. It is designed to adhere to surfaces and to form a binding layer that is better prepared to receive the paint. Compared to paint, a primer is not intended to be used as the outermost durable finish and can instead be engineered to have improved filling and binding properties with the material underneath. 1 liter of primer covers up to 100 square feet of paint area.
• Putty: Putty is a kind of paste prepared for applying on walls to fill in any minor dents or to level the surface. It is recommended that paint is not applied directly on putty. Always apply a coat of primer over putty and then the final coat. 2.5 kg of putty covers up to 100 square feet of paint area.
• Spray paint coverage is higher than brush and roller coverage. Most brands mention the coverage area on the package, so refer to the package for the exact coverage area.

1 coat means one layer of paint; 1 coat of paint is sufficient for repainting or tenancy painting. 2 coats means two layers of paint.
At least 2 coats of paint are required for fresh painting, a new paint job, or a color change.

## Why is paint important?

Paint is used to protect all sorts of buildings and structures from the effects of water and sun. Wooden buildings such as houses are usually painted because a coat of paint prevents water seeping into the wood and making it rot. The paint also helps to prevent the wood from drying out in the hot sun. Incredibly, some forms of paint can actually reduce the danger of a fire spreading in your house. Fire resistant paints, also known as intumescent coatings, are a passive fire protection measure that are well worth the investment. Obviously these products aren't going to save your home from a fire, but they can form an important part of your fire safety measures in the house.
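The rules of thumb above can be wrapped into a small function. This is an illustrative sketch using the page's own coverage assumptions (paintable area ≈ 3.5 × carpet area; 100 sq. ft. per liter of paint or primer; 100 sq. ft. per 2.5 kg of putty); the default door and window sizes are hypothetical:

```python
def paint_estimate(carpet_area_sqft, doors=0, door_w=3.0, door_h=7.0,
                   windows=0, win_w=4.0, win_h=4.0):
    """Rough paint/primer/putty estimate following the page's rules of thumb."""
    paint_area = carpet_area_sqft * 3.5                     # walls + ceiling
    openings = doors * door_w * door_h + windows * win_w * win_h
    actual = paint_area - openings                          # actual paint area
    return {
        "area_sqft": actual,
        "paint_l": actual / 100.0,    # 1 L covers ~100 sq ft
        "primer_l": actual / 100.0,   # 1 L covers ~100 sq ft
        "putty_kg": actual / 40.0,    # 2.5 kg per 100 sq ft => area / 40
    }
```

With a carpet area of 960 sq. ft. and no openings this reproduces the worked numbers above: a 3360 sq. ft. paint area, 33.60 liters each of paint and primer, and 84.00 kg of putty.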
# Expectation values, induction and conditioning Suppose I have a series $X_t$ of random variables, $t \in \mathbb{N}_0$. I am not sure if the following reasoning is sound: Let $f(x)$ be a function of the random variables. Let $E[f(X_t)]$ denote the expectation value of $f$ for variable $t$, and let $E[f(X_t) | X_{t-1} = x]$ be the expectation value of $f(X_t)$ when we already know that $X_{t-1}$ had value $x$. Think of the $X_t$ as states of a system and $f(x)$ some function of these states. I have proven the following result: Lemma 1 If $f(x) > f_c$ for a certain critical value $f_c$, then $$E[f(X_t) | X_{t-1} = x] \leq \alpha \cdot f(x)$$ for $0 < \alpha < 1$. I now want to prove the following: Lemma 2 Let $T \ge 0$. Then either there is a $t < T$ so that $f(X_t) \leq f_c$, or it holds $$E[f(X_T)] \leq \alpha^T \cdot E[f(X_0)].$$ Proof Either there is a $t < T$ so that $f(X_t) \leq f_c$. Then we are done. Or there is no such $t$ and we can use the previous bound: $$E[f(X_T)] = E[E[f(X_T)|X_{T-1}=x]] \leq \alpha \cdot E[f(X_{T-1})]$$ I can apply the induction hypothesis to that and obtain the claim. Problem Now, I feel a bit queasy: In the expectation value, would I also have to define some event $\xi_T$ as the event that there is no $t < T$ such that $X_t \leq f_c$, and condition on that or is that, via the induction, already taken care of? - The step $E[X^T] = E[E[X^T|X^{T-1}=x]] \leq \alpha \cdot E[X^{T-1}]$ does not make sense to me. $E[X^T|X^{T-1}=x]$ is already some deterministic function of $x$. Taking expectation doesn't get rid of $x$. This is not the tower property of conditional expectation. –  GWu May 3 '11 at 23:38 Maybe it is just sloppy writing on my part? What I mean is this: $E[X^T] = \sum_k k \cdot P[X^T = k] = \sum_k k \cdot \sum_x P[X^T | X^{T-1} = x] = \sum_x E[X^T | X^{T-1} = x] = E[E[X^T|X^{T-1}=x]$ –  Lagerbaer May 4 '11 at 0:11 @GWu, you might reconsider your comment. –  cardinal May 4 '11 at 0:25 @Lagerbaer I see what you mean. 
So $E[X^T]=\sum_x E[X^T|X^{T-1}=x]P[X^{T-1}=x]$. If you want to use your previous result to conclude $E[X^T]\le \alpha E[X^{T-1}]$, you would need $X(\omega)>x_c$ for a.e. $\omega$. But the opposite of "there exists a $t<T$ so that $X^t\le x_c$" is "for all $t<T$, there's an $\omega$ such that $X^t(\omega)>x_c$". Unless I didn't understand your proven result, this is not enough for your purpose. –  GWu May 4 '11 at 1:06 @Lagerbaer: When you write $X^T$, is that a power, or just a superscript? And when you write $\alpha^T$? –  Henry May 4 '11 at 7:18 The conclusion that $E(X_T)\le\alpha^TE(X_0)$ for $T$ large enough cannot hold. Forget about probability for a minute and consider a deterministic sequence $(x_t)$ whose dynamics is $x_{t+1}=x_t+1$ if $x_t< x_c$ and $x_{t+1}=\alpha x_t$ if $x_t\ge x_c$. After a while, the sequence $(x_t)$ wil oscillate between $\alpha x_c$ and $x_c+1$ hence it cannot converge to zero. Coming back to the probabilistic setting, the typical behaviour that your condition implies is that $(X_t)$ is positive recurrent and that the sequence $(E(X_t))$ converges to a positive and finite limit. I don't claim that it holds "for $T$ large enough". I claim that it holds as long as the $X_t, t < T$ satisfy a certain condition. The purpose of this is to prove that this cannot go on for ever so I can prove a bound on the expected time $T$ for which that condition is violated. –  Lagerbaer May 5 '11 at 18:13 The problem is that you mix deterministic statements (such as $E(X_t)\ge x_c$ or not, but for each $t$ this is either one or the other) and probabilistic ones (the event $[X_t\ge x_c]$ happens in general with a probability strictly between $0$ and $1$). To go further, you might want to state rigorously the result you want to prove. –  Did May 5 '11 at 18:41 (1) Sorry but the new version is equivalent to the old one (replace $X_t$ by $f(X_t)$ everywhere). 
(2) In your proof, a faulty step is to write $E[f(X_T)] = E[E[f(X_T)|X_{T-1}=x]] \leq \alpha \cdot E[f(X_{T-1})]$. –  Did May 5 '11 at 20:01
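Did's deterministic counterexample is easy to check numerically. The sketch below (with arbitrary choices x_c = 10, α = 0.5, and the convention that the multiplicative branch applies once x ≥ x_c) shows the orbit settling into a cycle instead of decaying geometrically:

```python
def iterate(x0, x_c=10.0, alpha=0.5, steps=500):
    """x -> x + 1 while x < x_c, and x -> alpha * x once x >= x_c."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + 1.0 if x < x_c else alpha * x)
    return xs

tail = iterate(0.0)[-100:]
# the orbit cycles through 5, 6, ..., 10 forever, so f(X_t) cannot
# satisfy a bound like E[f(X_T)] <= alpha^T E[f(X_0)] for all large T
```

The long-run values stay between α·x_c and x_c, which is exactly the obstruction Did describes: the conditional-contraction bound only applies on the event that the process has stayed above the critical value.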
# Issues in Empirical Finance Research

2020-10-27

## The challenge

• I will discuss some issues in using plain OLS models in Corporate Finance & Governance research
• I will avoid the word "endogeneity" as much as I can
• I will also avoid the word "identification", because identification does not guarantee causality and vice versa (Kahn and Whited 2017)

## The challenge

• Imagine that you want to investigate the effect of Governance on Q
• You may have more covariates explaining Q (omitted from slides)

$Q_{i} = \alpha + \beta_{i} \times Gov + Controls + error$

All the issues in the next slides will make it impossible to infer that changing Gov will CAUSE a change in Q. That is, we cannot infer causality.

## 1) Reverse causation

One source of bias is: reverse causation

• Perhaps it is Q that causes Gov
• OLS-based methods do not tell the difference between these two betas:

$Q_{i} = \alpha + \beta_{i} \times Gov + Controls + error$

$Gov_{i} = \alpha + \beta_{i} \times Q + Controls + error$

• If one beta is significant, the other will most likely be significant too
• You need a sound theory!

## 2) Omitted variable bias (OVB)

The second source of bias is: OVB

• Imagine that you do not include an important "true" predictor of Q
• Let's say the long model is:

$Q_{i} = \alpha_{long} + \beta_{long} \cdot gov_{i} + \delta \cdot omitted + error$

• But you estimate the short model:

$Q_{i} = \alpha_{short} + \beta_{short} \cdot gov_{i} + error$

• $\beta_{short}$ will be:
• $\beta_{short} = \beta_{long} + bias$
• $\beta_{short} = \beta_{long}$ + (relationship between the omitted variable and the included one, Gov) × (effect of the omitted variable in the long model, $\delta$)
• where the relationship between the omitted variable and Gov is:

$Omitted = \alpha + \phi \cdot gov_{i} + u$

• Thus, the OVB is: $\beta_{short} - \beta_{long} = \phi \cdot \delta$
• See an example in R here

## 3) Specification error

The third source of bias is: specification error

• Even if we could perfectly measure gov and all relevant covariates, we would not know for sure the functional form through which each influences q
• Functional form: linear? Quadratic? Log-log? Semi-log?
• Misspecification of x’s is similar to OVB ## 4) Signaling The fourth source of bias is: Signaling • Perhaps, some individuals are signaling the existence of an X without truly having it: • For instance: firms signaling they have good governance without having it • This is similar to the OVB because you cannot observe the full story ## 5) Simultaneity The fifth source of bias is: Simultaneity • Perhaps gov and some other variable x are determined simultaneously • Perhaps there is bidirectional causation, with q causing gov and gov also causing q • In both cases, OLS regression will provide a biased estimate of the effect • Also, the sign might be wrong ## 6) Heterogeneous effects The sixth source of bias is: Heterogeneous effects • Maybe the causal effect of gov on q depends on observed and unobserved firm characteristics: • Let’s assume that firms seek to maximize q • Different firms have different optimal gov • Firms know their optimal gov • If we observed all factors that affect q, each firm would be at its own optimum and OLS regression would give a non-significant coefficient • In such case, we may find a positive or negative relationship. • Neither is the true causal relationship ## 7) Construct validity The seventh source of bias is: Construct validity • Some constructs (e.g. Corporate governance) are complex, and sometimes have conflicting mechanisms • We usually don’t know for sure what “good” governance is, for instance • It is common that we use imperfect proxies • They may poorly fit the underlying concept ## 8) Measurement error The eighth source of bias is: Measurement error • “Classical” random measurement error for the outcome will inflate standard errors but will not lead to biased coefficients. 
• $y^{*} = y + \sigma_{1}$
• If you estimate $y^{*} = f(x)$, you have $y + \sigma_{1} = x + \epsilon$
• That is, $y^{*} = x + u$
• where $u = \epsilon + \sigma_{1}$
• “Classical” random measurement error in x’s will bias coefficient estimates toward zero
• $x^{*} = x + \sigma_{2}$
• In the extreme, imagine that $x^{*}$ is pure noise
• It would not explain anything
• Thus, your estimates are biased toward zero

## 9) Observation bias

The ninth source of bias is: Observation bias

• This is analogous to the Hawthorne effect, in which observed subjects behave differently because they are observed
• Firms which change gov may behave differently because their managers or employees think the change in gov matters, when in fact it has no direct effect

## 10) Interdependent effects

The tenth source of bias is: Interdependent effects

• A governance reform that would not affect share prices if adopted by a single firm might be effective if several firms adopt it
• Conversely, a reform that improves efficiency for a single firm might not improve profitability if adopted widely, because the gains will be competed away
• “One swallow doesn’t make a summer”

## 11) Selection bias

The eleventh source of bias is: Selection bias

• If you run a regression with two types of companies
• High gov (let’s say they are the treated group)
• Low gov (let’s say they are the control group)
• Without any matching method, these companies are likely not comparable
• Thus, the estimated beta will contain selection bias
• The bias can be either positive or negative
• It is similar to OVB

## 12) Self-Selection

The twelfth source of bias is: Self-Selection

• Self-selection is a type of selection bias
• Usually, firms decide which level of governance they adopt
• There are reasons why firms adopt high governance
• If those reasons are observable, you need to control for them
• If unobservable, you have a problem
• It is like they “self-select” into the treatment
• Units decide whether they receive the treatment or not
• Your coefficients will be biased
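The attenuation described under 8) Measurement error can be checked with a quick simulation; a minimal sketch in Python (the true slope of 2 and the unit-variance noise terms are illustrative choices, not values from the notes):

```python
# Sketch: "classical" measurement error in x attenuates the OLS slope toward zero.
# All numbers (true beta = 2, unit-variance noise) are illustrative.
import random

random.seed(0)
n = 100_000
beta = 2.0
x = [random.gauss(0, 1) for _ in range(n)]
y = [beta * xi + random.gauss(0, 1) for xi in x]

# Observed regressor x* = x + noise. With var(x) = var(noise) = 1, theory
# predicts the slope shrinks by var(x) / (var(x) + var(noise)) = 1/2.
x_star = [xi + random.gauss(0, 1) for xi in x]

def ols_slope(xs, ys):
    """Slope of a simple OLS regression of ys on xs."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

print(ols_slope(x, y))       # close to the true beta = 2.0
print(ols_slope(x_star, y))  # close to 1.0: attenuated toward zero
```

Measurement error in y, by contrast, only fattens the residual (larger standard errors, no bias), which is the asymmetry the bullets above describe.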
{}
Algebraic Complete Integrability of the Bloch-Iserles System The goal of this paper is the proof of the algebraic complete integrability of the Bloch-Iserles Hamiltonian system [5]. This result was conjectured in [4], based on its validity in certain special cases. Published in: International Mathematics Research Notices, 14, 5806-5817 Year: 2015 Publisher: Oxford, Oxford University Press ISSN: 1073-7928
{}
# What is the mathematical basis for the percent symbol (%)?

Percent means 1 part of 100 or 1/100 and is indicated with %. Per mille means 1 part of 1000 or 1/1000 and is indicated with ‰, so it seems that these symbols indicate the mathematical operations that they perform (i.e., the divisor in per mille is 10X greater). However, I can't seem to reconcile how % suggests 1/100 and ‰ suggests 1/1000. Is this just convention or is there a deeper meaning? Thank you.

- Have you seen this? – Guess who it is. Sep 5 '11 at 17:53
- % has two 0s and a slanted 1 in it, and likewise for ‰... :) – Rahul Sep 5 '11 at 21:23
- @Rahul: you made up the "slanted 1", right? That's why you put the :) joke sign at the end? – GEdgar Sep 5 '11 at 22:05
- @GEdgar, as far as I know, I made up everything in that comment. – Rahul Sep 5 '11 at 23:58
- @Rahul: It's a good mnemonic nonetheless... – Assad Ebrahim Jun 30 '14 at 3:31
- I can't find it; is there a permyriad symbol in Unicode or $\TeX$? :) – Guess who it is. Sep 6 '11 at 1:37
{}
# Z bosons from radioactive elements? 1. Jun 15, 2010 ### Yevgenia So I heard a little rumor that Beta decays in certain isotopes release an electron at "ultrarelativistic" speeds. I also heard a rumor that Z bosons can be created temporarily by a positron annihilating an electron when the two collide. My question is, can one place two highly radioactive metals into a single pile, and expect to get at least some Z bosons from this oppositely-charged radioactivity? If yes, how much can we expect to get? A plausible method of achieving this is to bring enriched 234m-Protactinium into proximity with a heavy isotope of Samarium (143 or such). Samarium (Sm) is used in medicine, and Protactinium (Pa) can be isolated from uranium enrichment processes. Obviously, a whole library of various isotopes could be mixed and matched here, but in particular we need two isotopes, one producing Beta minus, and the other Beta plus, such that the speeds of the emitted radiative particles are very high. I have no idea which two are best to couple together, because of various factors. One, it is difficult to find the energy emission spectrum of isotopes. And even if it were easy, this doesn't tell me whether these short-lived isotopes can be collected into large amounts or only exist in a lab. 2. Jun 16, 2010 ### Staff: Mentor The rest-mass energy of the Z is about 91 GeV = 91000 MeV. If you can find a $\beta^{-}$ emitter and a $\beta^{+}$ emitter with decay energies that add to give 91000 MeV, I will be very surprised. Beta decay energies are generally only a few MeV. I don't know what the maximum is, but I'd be surprised if it's larger than 10 MeV. 3. Jun 16, 2010 ### Yevgenia Thank you for responding. I was under the impression that particle/antiparticle collisions are a complete annihilation of both particles including their rest masses. e+ + e- --> Z0 + ?? The electrons emitted by this kind of radiation are themselves the products of the decay of a heavy boson. 
Other than the requisite neutrino that this decay emits, the bulk of this energy goes into the electron and positron already. These initial particles are going to have heavier relativistic masses in this case. (In other words, you just can't throw in the momentum and be done.) Do you know the exact calculation? One can keep in mind what made these to begin with, namely add up the energy from: the electron is the result of the decay of a massive W-, with some small portion going to an electron antineutrino; the positron is the result of the decay of a massive W+, with some small portion going to an electron neutrino. Even if a mere 57% of the energy goes to the electron, and 57% to the positron, all of this energy will come to bear on their collision, since complete annihilation will happen. In essence, 45.7 GeV in one and 45.7 GeV in the other, for a grand total of 91.5 GeV. And yet I thought the extra neutrino in the result carried away a "negligible" amount of energy, not, say, 43% of it. Correct me if I'm wrong here. I heard a rumor that the collision above was being done with the SLC at SLAC, and that this started around 1991, but I cannot find a paper describing the energies used.

4. Jun 16, 2010 ### humanino

Beta decay turns a neutron into a proton (or vice-versa for beta+). No nucleus will gain 90 times the mass of a nucleon by doing so, thus it cannot produce a real Z. Besides, beta decay proceeds via virtual W exchange (the discovery of weak neutral currents was a big deal). So clearly no: beta decay does not produce Zs. However, tuning the center-of-mass energy of electron-positron annihilation to the Z mass will indeed produce a whole bunch of them. from CERN courier

5. Jun 16, 2010 ### humanino

? = nothing See plot 1.1 page 15 of Measurements @ Z-pole Note that pretty much anything that proceeds through the Z also may proceed through the photon (and vice-versa), albeit with usually much different strength for the two channels, depending on available energy.

6.
Jun 16, 2010 ### humanino

I have a hard time understanding the question. It seems you only ask for relativistic kinematics (energy-momentum conservation). I do not mean to throw references randomly, but a good free one to know is the PDG review, kinematics section

7. Jun 17, 2010 ### Yevgenia

humanino, thank you for your responses. I have a few additional questions in regard to what you said above. I am very fuzzy on this conceptual distinction you are making between a so-called "virtual exchange of a W" versus a "real Z". I understand these particles have masses near 100 GeV. All the available Feynman diagrams for beta decay show the emission of a W, which then later decays into a neutrino and an electron (or their oppositely-charged counterparts). Is it the case that these diagrams are misleading, and are mere conceptual tools for the layman? You tell me. So in order for me to square these diagrams with what you are saying, I would assume that the GeV contained in the mass of a W is never real, but it is better described as a nucleon exchanging a virtual W with the vacuum, and then the vacuum spontaneously creates a real electron and a real neutrino. The virtually-exchanging nucleon switches from neutron to proton, or vice-versa. If answering this is too complicated for this thread, feel free to link me to something to read off-site. Last edited: Jun 17, 2010

8. Jun 18, 2010 ### kaksmet

I believe you can indeed take the diagrams pretty seriously, at least as long as you realize that it is only an approximation. The W in the beta decay is called virtual because it cannot survive; the energy needed to create a W is much larger than the energy released in the beta decay. However, you can see this as a manifestation of the uncertainty principle: thus, the W can exist, but only for a short period of time.
If it then decays into a neutrino and an electron, which have a combined mass that is less than the energy in the beta decay, these two particles can be so-called real, i.e. their existence is not time limited. If you have two beta decays, producing an electron and a positron, these could in principle combine to form a Z. However, since their energy would not be enough for the Z mass, the Z would have to decay quickly (same reason as for the W). But the cross section to produce a Z in e+ e- at such low energies is vanishingly small! Hope I could clarify things a little. Also, doing a calculation of the e+e- to Z to X, where X are the decay products of the Z boson, requires a little bit of work. Perhaps you could take a look in Peskin and Schroeder's Introduction to Quantum Field Theory. They do a detailed calculation of mu+ mu- going via photon to e+ e-. Doing the case with a Z boson would then be very similar, but you need to add one projection operator (1-gamma) factor in the vertices and also change the propagator to 1/(q^2+m^2). Since the mass of the Z is so large, this process will have a very small cross section.

9. Jun 18, 2010 ### arivero

Related to W in nuclei, I had some intriguing plots here in this thread: About Z, look again at the two-dimensional histogram (actually, a contour plot) of https://www.physicsforums.com/attachment.php?attachmentid=13427&d=1207614889 the attachment. The Z line is painted parallel to the W line, a bit hidden because it seems less relevant. Still, the nuclei with a mass slightly greater than the mass of Z seem to be more stable than usual, or at least not so many beta rays are known for them. Last edited: Jun 18, 2010

10. Jun 18, 2010 ### Yevgenia

Dear arivero. Thanks for all the help. In the far future I'd like to eventually be able to ask the question about whether we could construct a material that increases SNU. Or if not an increase in SNU, a scattering spectrum of solar neutrinos would be just as good.
If scattering is possible, we could then catalog the neutrino-scattering capability of various metals in the same way IOR is cataloged for visible light. (Electromagnetic properties determine IOR, so "bosonic" properties should determine neutrino scattering.) The long-term technologies are obvious, for example, a genuine neutrino telescope, with the ability to focus the beams from deep space rather than sitting passively. I read your other threads and I wanted to ask, do you happen to know Mr. Lubos Motl? I contacted him briefly regarding some equations from string field theory. He was very helpful and cordial.
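On the kinematics point running through this thread: the relevant quantity is the invariant mass √s of the e+e- pair, which for beta-decay-scale energies is tiny compared with the Z mass. A quick illustrative check in Python (the 5 MeV beam energies are made-up beta-scale values, not data for any particular isotope):

```python
# Center-of-mass energy sqrt(s) for a head-on e+ e- collision (energies in MeV).
import math

M_E = 0.511    # electron mass, MeV
M_Z = 91187.6  # Z boson mass, MeV (approximately)

def sqrt_s_head_on(E1, E2):
    """Invariant mass of an e+ e- pair colliding head-on with total energies E1, E2."""
    p1 = math.sqrt(E1 ** 2 - M_E ** 2)
    p2 = math.sqrt(E2 ** 2 - M_E ** 2)
    # Total four-momentum is (E1 + E2, p1 - p2): the momenta point in opposite directions.
    return math.sqrt((E1 + E2) ** 2 - (p1 - p2) ** 2)

print(sqrt_s_head_on(5.0, 5.0))  # 10 MeV: roughly four orders of magnitude below M_Z
```

To make real Zs you need √s near 91 GeV per collision, which is why it takes a collider tuned to the Z pole rather than radioactive sources.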
{}
Characterizing ethnic interactions from human communication patterns in Ivory Coast

• Towards the consolidation of peace and national development, Ivory Coast must overcome the lack of cohesion responsible for the emergence of two civil wars in recent years. As in many African countries, ethnic violence is a result of the way territories are organized and the prevalence of some groups over others. Nowadays the increasing availability of electronic data allows us to quantify and unveil societal relationships in an unprecedented way. In this sense, the present work analyzes mobile phone data in order to provide information about the regional and ethnic interactions in Ivory Coast. We accomplish this by means of the construction and analysis of complex social networks with several types of interactions, such as calling activity and human mobility. We found that at the subregional scale ethnic identity plays an important role in communication patterns, while at the interregional scale other factors arise, such as economic interests and available infrastructure.

Mathematics Subject Classification: 91D30, 05C82, 91C20.
{}
# Inverse Problems and Data Assimilation Wiki and International Community

# Inverse Blog: Global Atmospheric State Determination Every Three Hours!

Example Infrared radiation, image by German Meteorological Service (DWD)

Tue, July 29, 2014. Last week I discussed a dozen inverse problems with a crowd of school kids. We started with discussing simulations. Assume you try to simulate the weather, for example. Today, we are able to simulate quite complex problems. For example, we know many physical laws of the atmosphere. It is basically a fluid which flows over a rotating sphere. But there are, of course, many physical processes influencing this motion. There is radiation coming from the sun all the time. It heats up the atmosphere, which then rises and leads to particular patterns of atmospheric motion. We all know the low- and high-pressure areas, which strongly determine our weather all the time. Please look at the infrared radiation coming from the atmosphere, which reflects the temperature distribution all over the globe; compare the image on the right-hand side. Standard models work globally, with grid points every 15 kilometers, and with heights from the ground to about 70 km, with 90 grid points distributed vertically; compare for example Wikipedia. Please keep in mind that this is changing about every 18 months! All over the world there are teams which push this forward all the time. Assume you have a model which can calculate the next step in time, when it is given the current state. You need to provide the state of the atmosphere with 90 grid points vertically and 15 km spacing horizontally. The earth has a radius of about R = 6371 km, c.f. http://en.wikipedia.org/wiki/Earth_radius. The earth's surface area A is calculated by $$A = 4 \pi R^2$$ which with $R = 6371km$ is approximately $$A = 5.1 \cdot 10^8 km^2,$$ compare for example Earth Surface.
You will have approximately one grid column for every $15\,km \times 15\,km = 225\,km^2$ patch, i.e. a density $$d = \frac{1}{225\,km^2},$$ so that $$n = 90 \cdot A \cdot d \approx 2 \cdot 10^8 \text{ points}.$$ The task of data assimilation is to determine the state of the atmosphere at $n$ points globally every few hours. Today, this period for global weather models is usually 3 hours. But we are moving down: we might soon go to a one-hour “assimilation cycle”, which means that we determine the state of the atmosphere globally for each hour, using a broad range of measurements which are available today. If you want to know more about how the state of the atmosphere is calculated every three hours, then either look into current books on NWP = Numerical Weather Prediction, or come back to this blog on a regular basis. Also, we will talk about various inverse problems, i.e. the reconstruction of physical quantities from measurements. It is a highly fascinating area, which touches many parts of science. All the best, Roland Potthast for http://inverseproblems.info. Roland Potthast, University of Reading and DWD (German Meteorological Service) r.w.e.potthast@reading.ac.uk

global_atmospheric_state_determination_every_three_hours.txt · Last modified: 2014/08/06 21:17 by potthast
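The back-of-envelope count from the post can be reproduced in a few lines of Python (numbers as in the text: 15 km spacing, 90 levels, R = 6371 km):

```python
# The post's estimate of the number of global grid points: 15 km horizontal
# spacing, 90 vertical levels, Earth radius R = 6371 km.
import math

R = 6371.0                # km
A = 4 * math.pi * R ** 2  # surface area, ~5.1e8 km^2
d = 1 / (15 * 15)         # grid columns per km^2 (one per 225 km^2)
n = 90 * A * d            # total grid points, ~2e8

print(f"A = {A:.3e} km^2, n = {n:.3e} points")
```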
{}
# 1 X 5 X 2 4 X 2

## Limit Calculator with steps

Limit calculator helps you find the limit of a function with respect to a variable. It is an online tool that assists you in calculating the value of a function when an input approaches some specific value. Limit calculator with steps shows the step-by-step solution of limits along with a plot and series expansion. It employs all limit rules such as sum, product, quotient, and L’Hôpital’s rule to calculate the exact value. You can evaluate limits with respect to $$\text{x , y, z , v, u, t}$$ and $$w$$ using this limits calculator. That’s not all. By using this tool, you can also find:

1. Right-hand limit (+)
2. Left-hand limit (-)
3. Two-sided limit

## How does the limit calculator work?

To evaluate the limit using this limit solver, follow the steps below.

• Enter the function in the given input box.
• Select the concerning variable.
• Enter the limit value.
• Choose the side of the limit, i.e., left, right, or two-sided.
• Hit the Calculate button for the result.
• Use the Reset button to enter new values.

You will find the answer below the tool. Click on Show Steps to see the step-by-step solution.

## What is a limit in Calculus?

The limit of a function is the value that f(x) gets closer to as x approaches some number. Limits can be used to define derivatives, integrals, and continuity by finding the limit of a given function. It is written as:

$$\lim _{x\to a}\:f\left(x\right)=L$$

If f is a real-valued function and a is a real number, then the above expression is read as, the limit of f of x as x approaches a equals L.

## How to find a limit? – With steps

The value that x approaches can be a number, a constant (π, G, k), infinity, etc. Let’s go through a few examples to learn how to calculate limits.
Example – Right-hand Limit

$$\lim _{x\to \:2^+}\frac{\left(x^2+2\right)}{\left(x-1\right)}$$

Solution: A right-hand limit means the limit of a function as it approaches from the right-hand side.

Step 1: Apply the limit x➜2 to the above function. Put the limit value in place of x.

$$\lim \:_{x\to 2^+}\frac{\left(x^2+2\right)}{\left(x-1\right)} =\frac{\left(2^2+2\right)}{\left(2-1\right)}$$

Step 2: Solve the equation to reach a result.

$$=\frac{\left(4+2\right)}{\left(2-1\right)} =\frac{6}{1} =6$$

Step 3: Write the expression with its answer.

$$\lim \:_{x\to \:\:2^+}\frac{\left(x^2+2\right)}{\left(x-1\right)}=6$$

Graph

Example – Left-hand Limit

$$\lim _{x\to 3^-}\left(\frac{x^2-3x+4}{5-3x}\right)$$

Solution: A left-hand limit means the limit of a function as it approaches from the left-hand side.

Step 1: Place the limit value in the function.

$$\lim _{x\to 3^-}\left(\frac{x^2-3x+4}{5-3x}\right) =\frac{\left(3^2-3\left(3\right)+4\right)}{\left(5-3\left(3\right)\right)}$$

Step 2: Solve the equation further.

$$=\frac{\left(9-9+4\right)}{\left(5-9\right)} =\frac{\left(0+4\right)}{\left(-4\right)} =\frac{4}{-4} =-1$$

Step 3: Write down the function as written below.

$$\lim \:_{x\to \:3^-}\left(\frac{x^2-3x+4}{5-3x}\right)=-1$$

Graph

Example – Two-sided Limit

$$\lim _{x\to 5}\left(cos^3\left(x\right)\cdot sin\left(x\right)\right)$$

Solution: A two-sided limit exists if the limit coming from both directions (positive and negative) is the same; this common value is what is simply called the limit.

Step 1: Substitute the value of the limit in the function.

$$\lim _{x\to 5}\left(cos^3\left(x\right)\cdot sin\left(x\right)\right) =cos^3\left(5\right)\cdot \:sin\left(5\right)$$

Step 2: The above equation can be considered as the final answer. However, if you want to solve it further, evaluate the trigonometric values in the equation.
$$\approx\frac{1141}{50000}\cdot \left(-\frac{23973}{25000}\right) \approx-\frac{10941}{500000}$$

$$\lim \:\:_{x\to \:\:5}\left(cos^3\left(x\right)\cdot \:\:sin\left(x\right)\right) \approx-0.021882$$

Graph

## FAQ’s

Does sin x have a limit?

As x approaches infinity, sin x has no limit, because the y-value keeps oscillating between 1 and −1.

What is the limit of e as x approaches infinity?

Since e is a constant that does not depend on x, its limit as x approaches infinity (∞) is e.

What is the limit of e^x as x approaches 0?

The limit of e^x as x approaches 0 is 1.

What is the limit as x approaches the infinity of ln(x)?

The limit as x approaches infinity of ln(x) is +∞. This can be proved by reductio ad absurdum:

• If x > 1, then ln(x) > 0, so the limit must be positive.
• Since ln(x₂) − ln(x₁) = ln(x₂/x₁), if x₂ > x₁ the difference is positive, so ln(x) is always increasing.
• If lim x→∞ ln(x) = M ∈ ℝ, we would have ln(x) < M, hence x < e^M; but x → ∞, so M cannot be in ℝ, and the limit must be +∞.

Source: https://www.limitcalculator.online/
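The worked examples above can be sanity-checked numerically by evaluating each function at a point very close to the limit; a small Python sketch (the helper names are made up, and this is a numerical approximation, not a symbolic limit):

```python
# Numerical sanity check of the worked limit examples.
import math

def right_limit(f, a, h=1e-7):
    """Approximate the right-hand limit of f at a by evaluating just above a."""
    return f(a + h)

def left_limit(f, a, h=1e-7):
    """Approximate the left-hand limit of f at a by evaluating just below a."""
    return f(a - h)

def f1(x):  # example 1: (x^2 + 2) / (x - 1), x -> 2+
    return (x ** 2 + 2) / (x - 1)

def f2(x):  # example 2: (x^2 - 3x + 4) / (5 - 3x), x -> 3-
    return (x ** 2 - 3 * x + 4) / (5 - 3 * x)

def f3(x):  # example 3: cos^3(x) * sin(x), continuous at x = 5
    return math.cos(x) ** 3 * math.sin(x)

print(right_limit(f1, 2))  # close to 6
print(left_limit(f2, 3))   # close to -1
print(f3(5))               # close to -0.021882
```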
{}
# Analysis of Variance and the Completely Randomized Design

In this section we show how analysis of variance can be used to test for the equality of k population means for a completely randomized design. The general form of the hypotheses tested is

$$H_0: \mu_1 = \mu_2 = \cdots = \mu_k \qquad H_a: \text{Not all population means are equal}$$

We assume that a simple random sample of size $n_j$ has been selected from each of the k populations or treatments. For the resulting sample data, let $x_{ij}$ denote the value of observation $i$ for treatment $j$, $n_j$ the number of observations for treatment $j$, $\bar{x}_j$ the sample mean for treatment $j$, and $s_j^2$ the sample variance for treatment $j$. The formulas for the sample mean and sample variance for treatment j are as follows:

$$\bar{x}_j = \frac{\sum_{i=1}^{n_j} x_{ij}}{n_j} \qquad s_j^2 = \frac{\sum_{i=1}^{n_j}\left(x_{ij}-\bar{x}_j\right)^2}{n_j - 1}$$

The overall sample mean, denoted $\bar{\bar{x}}$, is the sum of all the observations divided by the total number of observations. That is,

$$\bar{\bar{x}} = \frac{\sum_{j=1}^{k}\sum_{i=1}^{n_j} x_{ij}}{n_T} \quad (13.3)$$

If the size of each sample is n, nT = kn; in this case equation (13.3) reduces to

$$\bar{\bar{x}} = \frac{\sum_{j=1}^{k} \bar{x}_j}{k} \quad (13.5)$$

In other words, whenever the sample sizes are the same, the overall sample mean is just the average of the k sample means. Because each sample in the Chemitech experiment consists of n = 5 observations, the overall sample mean can be computed by using equation (13.5). For the data in Table 13.1 we obtained an overall sample mean of 60. If the null hypothesis is true (μ1 = μ2 = μ3 = μ), this overall sample mean of 60 is the best estimate of the population mean μ.

### 1. Between-Treatments Estimate of Population Variance

In the preceding section, we introduced the concept of a between-treatments estimate of σ2 and showed how to compute it when the sample sizes were equal. This estimate of σ2 is called the mean square due to treatments and is denoted MSTR. The general formula for computing MSTR is

$$\mathrm{MSTR} = \frac{\sum_{j=1}^{k} n_j\left(\bar{x}_j - \bar{\bar{x}}\right)^2}{k - 1} \quad (13.6)$$

The numerator in equation (13.6) is called the sum of squares due to treatments and is denoted SSTR. The denominator, k – 1, represents the degrees of freedom associated with SSTR. Hence, the mean square due to treatments can be computed using the following formula:

$$\mathrm{MSTR} = \frac{\mathrm{SSTR}}{k - 1} \quad (13.7)$$

If H0 is true, MSTR provides an unbiased estimate of σ2. However, if the means of the k populations are not equal, MSTR is not an unbiased estimate of σ2; in fact, in that case, MSTR should overestimate σ2.
For the Chemitech data in Table 13.1, we obtain the following results:

### 2. Within-Treatments Estimate of Population Variance

Earlier, we introduced the concept of a within-treatments estimate of σ2 and showed how to compute it when the sample sizes were equal. This estimate of σ2 is called the mean square due to error and is denoted MSE. The general formula for computing MSE is

$$\mathrm{MSE} = \frac{\sum_{j=1}^{k}\left(n_j - 1\right)s_j^2}{n_T - k} \quad (13.9)$$

The numerator in equation (13.9) is called the sum of squares due to error and is denoted SSE. The denominator, $n_T - k$, is referred to as the degrees of freedom associated with SSE. Hence, the formula for MSE can also be stated as follows:

$$\mathrm{MSE} = \frac{\mathrm{SSE}}{n_T - k} \quad (13.10)$$

Note that MSE is based on the variation within each of the treatments; it is not influenced by whether the null hypothesis is true. Thus, MSE always provides an unbiased estimate of σ2. For the Chemitech data in Table 13.1 we obtain the following results.

### 3. Comparing the Variance Estimates: The F Test

If the null hypothesis is true, MSTR and MSE provide two independent, unbiased estimates of σ2. Based on the material covered in Chapter 11 we know that for normal populations, the sampling distribution of the ratio of two independent estimates of σ2 follows an F distribution. Hence, if the null hypothesis is true and the ANOVA assumptions are valid, the sampling distribution of MSTR/MSE is an F distribution with numerator degrees of freedom equal to k – 1 and denominator degrees of freedom equal to nT – k. In other words, if the null hypothesis is true, the value of MSTR/MSE should appear to have been selected from this F distribution. However, if the null hypothesis is false, the value of MSTR/MSE will be inflated because MSTR overestimates σ2. Hence, we will reject H0 if the resulting value of MSTR/MSE appears to be too large to have been selected from an F distribution with k – 1 numerator degrees of freedom and nT – k denominator degrees of freedom.
Because the decision to reject H0 is based on the value of MSTR/MSE, the test statistic used to test for the equality of k population means is as follows:

$$F = \frac{\mathrm{MSTR}}{\mathrm{MSE}}$$

Let us return to the Chemitech experiment and use a level of significance α = .05 to conduct the hypothesis test. The value of the test statistic is F = 9.18. The numerator degrees of freedom is k – 1 = 3 – 1 = 2 and the denominator degrees of freedom is nT – k = 15 – 3 = 12. Because we will only reject the null hypothesis for large values of the test statistic, the p-value is the upper tail area of the F distribution to the right of the test statistic F = 9.18. Figure 13.4 shows the sampling distribution of F = MSTR/MSE, the value of the test statistic, and the upper tail area that is the p-value for the hypothesis test. From Table 4 of Appendix B we find the following areas in the upper tail of an F distribution with 2 numerator degrees of freedom and 12 denominator degrees of freedom. Because F = 9.18 is greater than 6.93, the area in the upper tail at F = 9.18 is less than .01. Thus, the p-value is less than .01. Statistical software can be used to show that the exact p-value is .004. With p-value < α = .05, H0 is rejected. The test provides sufficient evidence to conclude that the means of the three populations are not equal. In other words, analysis of variance supports the conclusion that the population mean number of units produced per week is not the same for the three assembly methods. As with other hypothesis testing procedures, the critical value approach may also be used. With α = .05, the critical F value occurs with an area of .05 in the upper tail of an F distribution with 2 and 12 degrees of freedom. From the F distribution table, we find F.05 = 3.89. Hence, the appropriate upper tail rejection rule for the Chemitech experiment is: Reject H0 if F ≥ 3.89. With F = 9.18, we reject H0 and conclude that the means of the three populations are not equal.
A summary of the overall procedure for testing for the equality of k population means follows.

### 4. ANOVA Table

The results of the preceding calculations can be displayed conveniently in a table referred to as the analysis of variance or ANOVA table. The general form of the ANOVA table for a completely randomized design is shown in Table 13.2; Table 13.3 is the corresponding ANOVA table for the Chemitech experiment. The sum of squares associated with the source of variation referred to as “Total” is called the total sum of squares (SST). Note that the results for the Chemitech experiment suggest that SST = SSTR + SSE, and that the degrees of freedom associated with this total sum of squares is the sum of the degrees of freedom associated with the sum of squares due to treatments and the sum of squares due to error. We point out that SST divided by its degrees of freedom nT – 1 is nothing more than the overall sample variance that would be obtained if we treated the entire set of 15 observations as one data set. With the entire data set as one sample, the formula for computing the total sum of squares, SST, is

$$\mathrm{SST} = \sum_{j=1}^{k}\sum_{i=1}^{n_j}\left(x_{ij} - \bar{\bar{x}}\right)^2$$

It can be shown that the results we observed for the analysis of variance table for the Chemitech experiment also apply to other problems. That is, SST = SSTR + SSE    (13.14) In other words, SST can be partitioned into two sums of squares: the sum of squares due to treatments and the sum of squares due to error. Note also that the degrees of freedom corresponding to SST, nT – 1, can be partitioned into the degrees of freedom corresponding to SSTR, k – 1, and the degrees of freedom corresponding to SSE, nT – k. The analysis of variance can be viewed as the process of partitioning the total sum of squares and the degrees of freedom into their corresponding sources: treatments and error.
Dividing the sum of squares by the appropriate degrees of freedom provides the variance estimates, the F value, and the p-value used to test the hypothesis of equal population means.

### 5. Computer Results for Analysis of Variance

Using statistical software, analysis of variance computations with large sample sizes or a large number of populations can be performed easily. Appendixes 13.1 and 13.2 show the steps required to use JMP and Excel to perform the analysis of variance computations. In Figure 13.5 we show statistical software output for the Chemitech experiment. The first part of the output contains the familiar ANOVA table format. Comparing Figure 13.5 with Table 13.3, we see that the same information is available, although some of the headings are slightly different. The heading Source is used for the source of variation column, Factor identifies the treatments row, and the sum of squares and degrees of freedom columns are interchanged. Following the ANOVA table in Figure 13.5, the output contains the respective sample sizes, the sample means, and the standard deviations. In addition, 95% confidence interval estimates of each population mean are given. In developing these confidence interval estimates, MSE is used as the estimate of σ2. Thus, the square root of MSE provides the best estimate of the population standard deviation σ. This estimate of σ in Figure 13.5 is Pooled StDev; it is equal to 5.323. To provide an illustration of how these interval estimates are developed, we will compute a 95% confidence interval estimate of the population mean for Method A. From our study of interval estimation in Chapter 8, we know that the general form of an interval estimate of a population mean is

$$\bar{x}_j \pm t_{\alpha/2}\frac{s}{\sqrt{n_j}} \quad (13.15)$$

where s is the estimate of the population standard deviation σ. Because the best estimate of σ is provided by the Pooled StDev, we use a value of 5.323 for s in expression (13.15).
The degrees of freedom for the t value is 12, the degrees of freedom associated with the error sum of squares. Hence, with t.025 = 2.179 we obtain

$$62 \pm 2.179\frac{5.323}{\sqrt{5}} = 62 \pm 5.19$$

Thus, the individual 95% confidence interval for Method A goes from 62 − 5.19 = 56.81 to 62 + 5.19 = 67.19. Because the sample sizes are equal for the Chemitech experiment, the individual confidence intervals for Method B and Method C are also constructed by adding and subtracting 5.19 from each sample mean.

### An Observational Study

We have shown how analysis of variance can be used to test for the equality of k population means for a completely randomized experimental design. It is important to understand that ANOVA can also be used to test for the equality of three or more population means using data obtained from an observational study. As an example, let us consider the situation at National Computer Products, Inc. (NCP). NCP manufactures printers and fax machines at plants located in Atlanta, Dallas, and Seattle. To measure how much employees at these plants know about quality management, a random sample of 6 employees was selected from each plant and the employees selected were given a quality awareness examination. The examination scores for these 18 employees are shown in Table 13.4. The sample means, sample variances, and sample standard deviations for each group are also provided. Managers want to use these data to test the hypothesis that the mean examination score is the same for all three plants.

We define population 1 as all employees at the Atlanta plant, population 2 as all employees at the Dallas plant, and population 3 as all employees at the Seattle plant. Let μ1, μ2, and μ3 denote the mean examination scores for populations 1, 2, and 3, respectively. Although we will never know the actual values of μ1, μ2, and μ3, we want to use the sample results to test the following hypotheses:

H0: μ1 = μ2 = μ3
Ha: Not all population means are equal

Note that the hypothesis test for the NCP observational study is exactly the same as the hypothesis test for the Chemitech experiment.
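The Method A interval arithmetic above can be reproduced in a few lines of pure Python; the t value 2.179 is the one quoted in the text for 12 degrees of freedom.

```python
from math import sqrt

# 95% interval for Method A, using the pooled estimate of sigma from the
# ANOVA table (Pooled StDev = 5.323, n_j = 5 observations per method,
# t value for 12 error degrees of freedom taken from the text).
x_bar = 62.0
pooled_stdev = 5.323
n_j = 5
t_025 = 2.179

margin = t_025 * pooled_stdev / sqrt(n_j)
lower, upper = x_bar - margin, x_bar + margin
print(round(margin, 2), round(lower, 2), round(upper, 2))  # 5.19 56.81 67.19
```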
Indeed, the same analysis of variance methodology we used to analyze the Chemitech experiment can also be used to analyze the data from the NCP observational study.

Even though the same ANOVA methodology is used for the analysis, it is worth noting how the NCP observational statistical study differs from the Chemitech experimental statistical study. The individuals who conducted the NCP study had no control over how the plants were assigned to individual employees. That is, the plants were already in operation and a particular employee worked at one of the three plants. All that NCP could do was to select a random sample of 6 employees from each plant and administer the quality awareness examination. To be classified as an experimental study, NCP would have had to be able to randomly select 18 employees and then assign the plants to each employee in a random fashion.

Source: Anderson David R., Sweeney Dennis J., Williams Thomas A. (2019), Statistics for Business & Economics, Cengage Learning; 14th edition.
# Properties

| | |
|---|---|
| Label | 2.3.ae_k |
| Base field | $\F_{3}$ |
| Dimension | $2$ |
| $p$-rank | $2$ |
| Ordinary | yes |
| Supersingular | no |
| Simple | no |
| Geometrically simple | no |
| Primitive | yes |
| Principally polarizable | yes |
| Contains a Jacobian | yes |

## Invariants

Base field: $\F_{3}$
Dimension: $2$
L-polynomial: $( 1 - 2 x + 3 x^{2} )^{2} = 1 - 4 x + 10 x^{2} - 12 x^{3} + 9 x^{4}$
Frobenius angles: $\pm0.304086723985$ (with multiplicity $2$)
Angle rank: $1$ (numerical)
Jacobians: 1

This isogeny class is not simple; it is primitive, ordinary, and not supersingular. It is principally polarizable and contains a Jacobian.

## Newton polygon

This isogeny class is ordinary.

$p$-rank: $2$
Slopes: $[0, 0, 1, 1]$

## Point counts

This isogeny class contains the Jacobian of 1 curve (which is hyperelliptic), and hence is principally polarizable:

- $y^2=2x^6+2x^4+2x^2+2$

| $r$ | $1$ | $2$ | $3$ | $4$ | $5$ |
|---|---|---|---|---|---|
| $A(\F_{q^r})$ | $4$ | $144$ | $1444$ | $9216$ | $58564$ |

| $r$ | $1$ | $2$ | $3$ | $4$ | $5$ | $6$ | $7$ | $8$ | $9$ | $10$ |
|---|---|---|---|---|---|---|---|---|---|---|
| $C(\F_{q^r})$ | $0$ | $14$ | $48$ | $110$ | $240$ | $638$ | $2016$ | $6494$ | $20064$ | $60014$ |

## Decomposition and endomorphism algebra

Endomorphism algebra over $\F_{3}$: the isogeny class factors as 1.3.ac$^2$ and its endomorphism algebra is $\mathrm{M}_{2}(\Q(\sqrt{-2}))$. All geometric endomorphisms are defined over $\F_{3}$.

## Base change

This is a primitive isogeny class.

## Twists

Below is a list of all twists of this isogeny class.

| Twist | Extension degree | Common base change |
|---|---|---|
| 2.3.a_c | $2$ | 2.9.e_w |
| 2.3.e_k | $2$ | 2.9.e_w |
| 2.3.c_b | $3$ | 2.27.u_fy |
| 2.3.a_ac | $4$ | 2.81.bc_nu |
| 2.3.ac_b | $6$ | 2.729.ado_fhm |
| 2.3.ae_i | $8$ | (not in LMFDB) |
| 2.3.e_i | $8$ | (not in LMFDB) |
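As a sketch (not LMFDB's own code), the point-count tables above can be recovered numerically from the L-polynomial, using the standard relations between point counts and the inverse roots $\alpha_i$ of $P(T)$:

```python
import numpy as np

# Recover A(F_{q^r}) and C(F_{q^r}) from the L-polynomial
#   P(T) = 1 - 4T + 10T^2 - 12T^3 + 9T^4  over F_3.
# If alpha_1, ..., alpha_4 are the inverse roots of P, then
#   #C(F_{q^r}) = q^r + 1 - sum_i alpha_i^r
#   #A(F_{q^r}) = prod_i (1 - alpha_i^r)
q = 3
coeffs = [9, -12, 10, -4, 1]       # P(T), descending powers of T
alphas = 1 / np.roots(coeffs)      # inverse roots, each of absolute value sqrt(q)

def curve_count(r):
    return int(round((q**r + 1 - np.sum(alphas**r)).real))

def abvar_count(r):
    return int(round(np.prod(1 - alphas**r).real))

print([curve_count(r) for r in range(1, 6)])   # [0, 14, 48, 110, 240]
print([abvar_count(r) for r in range(1, 6)])   # [4, 144, 1444, 9216, 58564]
```

Here the inverse roots are $1 \pm i\sqrt{2}$, each with multiplicity two, consistent with the factored form $(1 - 2x + 3x^2)^2$.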
# Simple Harmonic Motion and time period

1. Sep 5, 2005

### dpsguy

A particle follows SHM with its equation of motion being

s(x,y) = A sin wt i + 3A sin 3wt j

What is its time period? Also find the expression for total mechanical energy wrt time.

I tried it in the following way:

V = Aw cos wt i + 3Aw cos 3wt j = Aw[(cos wt)^2 + 9(cos 3wt)^2]^(1/2)

KE = 0.5m(Aw)^2[(cos wt)^2 + 9(cos 3wt)^2]

a = -[A(w^2){sin wt i + 9 sin 3wt j}]

F = ma

PE = F.s = -m[(Aw)^2{(sin wt)^2 + 27(sin 3wt)^2}]

Total energy = KE + PE = m(Aw)^2[0.5{(cos wt)^2 + 9(cos 3wt)^2} - {(sin wt)^2 + 27(sin 3wt)^2}]

Is this correct? And how to calculate the time period?

2. Sep 5, 2005

### HallsofIvy

Staff Emeritus

"V = Aw cos wt i + 3Aw cos 3wt j = Aw[(cos wt)^2 + 9(cos 3wt)^2]^(1/2)"

This appears to be saying that cos wt + 3cos 3wt = ((cos wt)^2 + 9(cos 3wt)^2)^(1/2), which is NOT true. $$(a+ b)^2 \ne a^2+ b^2$$ and $$a+ b \ne \sqrt{a^2+ b^2}$$.

1. What is the period of cos wt?
2. What is the period of cos 3wt?
3. What is the least common multiple of those?

3. Sep 5, 2005

### andrevdh

Why don't you try the basic SHM formulae for the x and y motion separately - because energy is a scalar quantity you can add these up to get the total energy. That is $$U_x = \frac{1}{2}kx^2$$ for the potential energy and $$T_x = \frac{1}{2}mv_x^2$$ for the kinetic energy. Just remember that the $$\omega$$ differs for the two components and use the basic relation $$\omega = \sqrt{\frac{k}{m}}$$

Great - if you get the same answer! It just means you have improved the confidence in your result.

Anyway, by graphing the motion for a phase from 0 to 2 pi for $$\omega t$$ the mass moves from point 1 to point 5 in the attached figure. Thereby the x motion has completed one oscillation, while the y motion has completed one oscillation at point a, another at b and another at point 5. From then on the motion repeats itself. The y motion therefore runs 3x faster than the x motion.

#### Attached Files:

- 2D SHM.bmp (27.8 KB)

Last edited: Sep 6, 2005

4.
Sep 7, 2005

### dpsguy

While adding two vectors A and B which are mutually perpendicular, A^2 + B^2 = (A+B)^2. Hence I think I was right while finding the velocity, HallsofIvy.

Also, I think I have figured out the answer, with help from andrevdh. The total energy of the particle = 41m(Aw)^2 = constant. The time period comes out to be 3w.

But can't we find the time period of the particle by differentiating the equation for energy twice and using a = -xw^2? Can someone please tell me how I can find the time period here using this method?

5. Sep 7, 2005

### andrevdh

The period is by definition the amount of time needed to complete one oscillation. The attachment displays the motion for $$\omega t$$ ranging from $$0\ \rightarrow 2\pi$$ giving one oscillation in the x direction, but three oscillations in the y direction for the corresponding time. After this (1, 2, ..., 5) the motion will repeat itself. The angular frequency of the x-motion is $$\omega$$ while it is $$3\omega$$ for the y-motion. Clearly the period of the combined motion is therefore the period of the x-motion, which can be obtained from $$\omega=\frac{2\pi}{T}$$

See also 1, 2 and 3 of HallsofIvy above.
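andrevdh's period argument can be checked numerically. The sketch below (with illustrative values A = 1, w = 2, which are assumptions for the demo) verifies that 2π/w is a period of the combined motion while 2π/(3w) is a period of the y component only:

```python
import numpy as np

# x(t) = A sin(wt), y(t) = 3A sin(3wt): the combined motion repeats with
# the x-period T = 2*pi/w (the least common multiple of 2*pi/w and
# 2*pi/(3w)), not with the shorter y-period alone.
A, w = 1.0, 2.0
t = np.linspace(0.0, 5.0, 1001)

def pos(t):
    return np.array([A * np.sin(w * t), 3 * A * np.sin(3 * w * t)])

T = 2 * np.pi / w
assert np.allclose(pos(t), pos(t + T))          # full motion repeats after T
assert not np.allclose(pos(t), pos(t + T / 3))  # T/3 shifts the x component
```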
# On best proximity points for pseudocontractions in the intermediate sense for non-cyclic and cyclic self-mappings in metric spaces Manuel De la Sen Author Affiliations Institute of Research and Development of Processes, University of the Basque Country, Campus of Leioa (Bizkaia), P.O. Box 644, Bilbao, 48080, Spain Fixed Point Theory and Applications 2013, 2013:146  doi:10.1186/1687-1812-2013-146 Received: 17 September 2012 Accepted: 17 May 2013 Published: 5 June 2013 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. ### Abstract This paper discusses a more general contractive condition for a class of extended 2-cyclic self-mappings on the union of a finite number of subsets of a metric space which are allowed to have a finite number of successive images in the same subsets of its domain. If the space is uniformly convex and the subsets are nonempty, closed and convex, then all the iterations converge to a unique closed limiting finite sequence, which contains the best proximity points of adjacent subsets, and reduce to a unique fixed point if all such subsets intersect. ### 1 Introduction Strict pseudocontractive mappings and pseudocontractive mappings in the intermediate sense formulated in the framework of Hilbert spaces have received a certain attention in the last years concerning their convergence properties and the existence of fixed points. See, for instance, [1-4] and references therein. Results about the existence of a fixed point are discussed in those papers. On the other hand, important attention has been paid during the last decades to the study of the convergence properties of distances in cyclic contractive self-mappings on p subsets of a metric space , or a Banach space . 
The cyclic self-mappings under study have been of standard contractive or weakly contractive types and of Meir-Keeler type. The convergence of sequences to fixed points and best proximity points of the involved sets has been investigated in the last years. See, for instance, [5-20] and references therein. It has to be noticed that every nonexpansive mapping [21,22] is a 0-strict pseudocontraction and also that strict pseudocontractions in the intermediate sense are asymptotically nonexpansive [2]. The uniqueness of the best proximity points to which all the sequences of iterations converge is proven in [6] for the extension of the contractive principle for cyclic self-mappings in either uniformly convex Banach spaces (then being strictly convex and reflexive [23]) or in reflexive Banach spaces [13]. The p subsets of the metric space , or the Banach space , where the cyclic self-mappings are defined, are supposed to be nonempty, convex and closed. If the involved subsets have nonempty intersections, then all best proximity points coincide, with a unique fixed point being allocated in the intersection of all the subsets, and framework can be simply given on complete metric spaces. The research in [6] is centered on the case of the 2-cyclic self-mapping being defined on the union of two subsets of the metric space. Those results are extended in [7] for Meir-Keeler cyclic contraction maps and, in general, with the -cyclic self-mapping defined on any number of subsets of the metric space with . Other recent research which has been performed in the field of cyclic maps is related to the introduction and discussion of the so-called cyclic representation of a set M, as the union of a set of nonempty sets as , with respect to an operator [14]. Subsequently, cyclic representations have been used in [15] to investigate operators from M to M which are cyclic φ-contractions, where is a given comparison function, and is a metric space. 
The above cyclic representation has also been used in [16] to prove the existence of a fixed point for a self-mapping defined on a complete metric space which satisfies a cyclic weak φ-contraction. In [18], a characterization of best proximity points is studied for individual and pairs of non-self-mappings , where A and B are nonempty subsets of a metric space. The existence of common fixed points of self-mappings is investigated in [24] for a class of nonlinear integral equations, while fixed point theory is investigated in locally convex spaces and non-convex sets in [25-28]. More recently, the existence and uniqueness of best proximity points of more general cyclic contractions have been investigated in [29,30] and a study of best proximity points for generalized proximal contractions, a concept referred to non-self-mappings, has been proposed and reported in detail in [31]. Also, the study and characterization of best proximity points for cyclic weaker Meir-Keeler contractions have been performed in [32] and recent contributions on the study of best proximity and proximal points can be found in [33-38] and references therein. In general, best proximity points do not fulfill the usual ‘best proximity’ condition under this framework. However, best proximity points are proven to jointly globally optimize the mappings from x to the distances and . Furthermore, a class of cyclic φ-contractions, which contains the cyclic contraction maps as a subclass, has been proposed in [18] in order to investigate the convergence and existence results of best proximity points in reflexive Banach spaces completing previous related results in [6]. Also, the existence and uniqueness of best proximity points of cyclic φ-contractive self-mappings in reflexive Banach spaces have been investigated in [19]. 
This paper is devoted to the convergence properties and the existence of fixed points of a generalized version of pseudocontractive, strict pseudocontractive and asymptotically pseudocontractive in the intermediate sense in the more general framework of metric spaces. The case of 2-cyclic pseudocontractive self-mappings is also considered. The combination of constants defining the contraction may be different on each of the subsets and only the product of all the constants is requested to be less than unity. It is assumed that the considered self-mapping can perform a number of iterations on each of the subsets before transferring its image to the next adjacent subset of the 2-cyclic self-mapping. The existence of a unique closed finite limiting sequence on any sequence of iterations from any initial point in the union of the subsets is proven if X is a uniformly convex Banach space and all the subsets of X are nonempty, convex and closed. Such a limiting sequence is of size (with the inequality being strict if there is at least one iteration with image in the same subset as its domain), where p of its elements (all of them if ) are best proximity points between adjacent subsets. In the case that all the subsets intersect, the above limit sequence reduces to a unique fixed point allocated within the intersection of all such subsets. ### 2 Asymptotic contractions and pseudocontractions in the intermediate sense in metric spaces If H is a real Hilbert space with an inner product and a norm and A is a nonempty closed convex subset of H, then is said to be an asymptotically β-strictly pseudocontractive self-mapping in the intermediate sense for some if (2.1) for some sequence , as [1-4,23]. Such a concept was firstly introduced in [1]. If (2.1) holds for , then is said to be an asymptotically pseudocontractive self-mapping in the intermediate sense. 
Finally, if as , then is asymptotically β-strictly contractive in the intermediate sense, respectively, asymptotically contractive in the intermediate sense if . If (2.1) is changed to the stronger condition (2.2), then the above concepts translate into being an asymptotically β-strictly pseudocontractive self-mapping, an asymptotically pseudocontractive self-mapping and an asymptotically contractive one, respectively. Note that (2.1) is equivalent to (2.3) or, equivalently, (2.4) where (2.5). Note that the right-hand-side term of (2.3) is expanded as follows for any : (2.6)

The objective of this paper is to discuss the various pseudocontractive-in-the-intermediate-sense concepts in the framework of metric spaces endowed with a homogeneous and translation-invariant metric, and also to generalize them so that the β-parameter is eventually replaced with a sequence in . Now, if instead of a real Hilbert space H endowed with an inner product and a norm , we deal with any generic Banach space , then its norm induces a homogeneous and translation-invariant metric defined by ; so that (2.6) takes the form (2.7). Define (2.8), which exists since it follows from (2.7), because the metric is homogeneous and translation-invariant, that (2.9). The following result holds related to the discussion (2.7)-(2.9) in metric spaces.

Theorem 2.1 Let be a metric space and consider a self-mapping . Assume that the following constraint holds: (2.10) with (2.11) for some parameterizing bounded real sequences , and of general terms , , satisfying the following constraints: (2.12) with and, furthermore, the following condition is satisfied: (2.13) if and only if ; as . Then the following properties hold:

(i) for any so that is asymptotically nonexpansive.

(ii) Let be complete, let be, in addition, a translation-invariant homogeneous norm and let , with being the metric-induced norm from , be a uniformly convex Banach space. Assume also that is continuous.
Then any sequence ; is bounded and convergent to some point , being in general dependent on x, in some nonempty bounded, closed and convex subset C of A, where A is any nonempty bounded subset of X. Also, is bounded; , ; , and is a fixed point of the restricted self-mapping ; . Furthermore, (2.14)

Proof Consider two possibilities for the constraint (2.10), subject to (2.11), to hold for each given and as follows:

(A) for any , . Then one gets from (2.10) (2.15) , , where (2.16), which holds from (2.12)-(2.13) if , since as in (2.13) is equivalent to (2.16). Note that is ensured either with or with if (2.17). However, with has to be excluded because of the unboundedness or nonnegativity of the second right-hand-side term of (2.15).

(B) for some , . Then one gets from (2.10) (2.18) where (2.19), which holds from (2.12) and if , and (2.20). Thus, (2.15)-(2.16), with the second option in the logical disjunction being true if and only if , together with (2.18)-(2.20), are equivalent to (2.12)-(2.13) by taking to be either or for each . It then follows that ; from (2.15)-(2.19) since and ; as . Thus, is asymptotically nonexpansive, and Property (i) has been proven.

Property (ii) is proven as follows. Consider the metric-induced norm equivalent to the translation-invariant homogeneous metric . Such a norm exists since the metric is homogeneous and translation-invariant, so that norm and metric are formally equivalent. Rename and define a sequence of subsets of X. From Property (i), is bounded; if is finite, since it is bounded for any finite and, furthermore, it has a finite limit as . Thus, all the collections of subsets ; are bounded since is bounded. Define the set , which is nonempty, bounded, closed and convex by construction. Since is complete, is a uniformly convex Banach space and is asymptotically nonexpansive from Property (i), it has a fixed point [1,23].
Since the restricted self-mapping is also continuous, one gets from Property (i) (2.21) Then any sequence is convergent (otherwise, the above limit would not exist contradicting Property (i)), and then bounded in C; . This also implies is bounded; , and ; , . This implies also as ; such that ; which is then a fixed point of (otherwise, the above property ; , would be contradicted). Hence, Property (ii) is proven. □ First of all, note that Property (ii) of Theorem 2.1 applies to a uniformly convex space which is also a complete metric space. Since the metric is homogeneous and translation-invariant, a norm can be induced by such a metric. Alternatively, the property could be established on any uniformly convex Banach space by taking a norm-induced metric which always exists. Conceptually similar arguments are used in later parallel results throughout the paper. Note that the proof of Theorem 2.1(i) has two parts: Case (A) refers to an asymptotically nonexpansive self-mapping which is contractive for any number of finite iteration steps and Case (B) refers to an asymptotically nonexpansive self-mapping which is allowed to be expansive for a finite number of iteration steps. It has to be pointed out concerning such a Theorem 2.1(ii) that the given conditions guarantee the existence of at least a fixed point but not its uniqueness. Therefore, the proof is outlined with the existence of a for any nonempty, bounded and closed subset A of X. Note that the set C, being in general dependent on the initial set A, is bounded, convex and closed by construction while any taken nonempty set of initial conditions is not required to be convex. However, the property that all the sequences converge to fixed points opens two potential possibilities depending on particular extra restrictions on the self-mapping , namely: (1) the fixed point is not unique so that for any (and any A in X) so that some set for some contains more than one point. 
In other words, as ; has not been proven, although it is true that ; ; (2) there is only one fixed point in X. The following result extends Theorem 2.1 for a modification of the asymptotically nonexpansive condition (2.10).

Theorem 2.2 Let be a metric space and consider the self-mapping . Assume that the constraint below holds: (2.22) with (2.23) for some parameterizing real sequences , and satisfying, for any , (2.24). Then the following properties hold:

(i) so that is asymptotically nonexpansive, and then ; if (2.25) and the following limit exists: (2.26)

(ii) Property (ii) of Theorem 2.1 holds if is complete and is a uniformly convex Banach space under the metric-induced norm.

Sketch of the proof Property (i) follows in the same way as the proof of Property (i) of Theorem 2.1 for Case (B). Using arguments similar to those used to prove Theorem 2.1, one proves Property (ii). □

The part of Theorem 2.1 that is of use concerning the asymptotic pseudocontractions in the intermediate sense and the asymptotic strict contractions in the intermediate sense relies on Case (B) in the proof of Property (i), with the sequence of constants ; , and ; as , . The concepts of an asymptotic pseudocontraction and an asymptotic strict pseudocontraction in the intermediate sense, motivated in Theorem 2.1 by (2.7)-(2.9) under the asymptotically nonexpansive constraints (2.10) subject to (2.11), and in Theorem 2.2 by (2.22) subject to (2.23), are revisited as follows in the context of metric spaces.

Definition 2.3 Assume that is a complete metric space with being a homogeneous translation-invariant metric. Thus, is asymptotically β-strictly pseudocontractive in the intermediate sense if (2.27) for ; and some real sequences , being, in general, dependent on the initial points, i.e., , and (2.28)

Definition 2.4 is asymptotically pseudocontractive in the intermediate sense if (2.30) holds with , , , , , as and the remaining conditions as in Definition 2.3 with , and .
Definition 2.5 is asymptotically β-strictly contractive in the intermediate sense if , , ; , , as , in Definition 2.3 with , . Definition 2.6 is asymptotically contractive in the intermediate sense if , , ; , , , and as in Definition 2.3 with , and . Remark 2.7 Note that Definitions 2.3-2.5 lead to direct interpretations of their role in the convergence properties under the constraint (2.22), subject to (2.23), by noting the following: (1) If is asymptotically β-strictly pseudocontractive in the intermediate sense (Definition 2.3), then the real sequence of asymptotically nonexpansive constants has a general term ; , and it converges to a limit since and as ; from (2.22) since from (2.27). Then is trivially asymptotically nonexpansive as expected. (2) If is asymptotically pseudocontractive in the intermediate sense (Definition 2.4), then the sequence of asymptotically nonexpansive constants has the general term: ; , and it converges to a limit since , as . Then is also trivially asymptotically nonexpansive as expected. Since , note that and for any , while , as since as ; from (2.22)-(2.23). (3) If is asymptotically β-strictly contractive in the intermediate sense (Definition 2.5), then the sequence of asymptotically contractive constants is defined by ; and as for any such that as , since . Then is an asymptotically strict contraction as expected since as ; from (2.22)-(2.23). Note that the asymptotic convergence rate is arbitrarily fast as α and β are arbitrarily close to zero, since becomes also arbitrarily close to zero, and with . (4) If is asymptotically contractive in the intermediate sense (Definition 2.6), then the sequence of asymptotically contractive constants is defined by with and as for some since with so that . Then is an asymptotically strict contraction as expected since as ; from (2.23). Note that if and and . Note also that if and , while if and . 
In the first case, the convergence to fixed points (see Theorem 2.8 below) is guaranteed to be asymptotically faster if the self-mapping is asymptotically β-strictly contractive in the intermediate sense than if it is just asymptotically contractive in the intermediate sense if , . Note also that if the sequences and are identical in both cases, then for any such that and for any such that .

(5) The above considerations could also be applied to Theorem 2.1 for the case (Case (B) in the proof of Property (i)) being asymptotically nonexpansive for the asymptotically nonexpansive condition (2.10) subject to (2.11).

The subsequent result, being supported by Theorem 2.2, relies on the concepts of asymptotically contractive and pseudocontractive self-mappings in the intermediate sense. Therefore, it is assumed that .

Theorem 2.8 Let be a complete metric space endowed with a homogeneous translation-invariant metric and consider the self-mapping . Assume that is a uniformly convex Banach space endowed with a metric-induced norm from the metric . Assume that the asymptotically nonexpansive condition (2.22), subject to (2.23), holds for some parameterizing real sequences , and satisfying, for any , (2.29) , . Then for any satisfying the conditions (2.30). Furthermore, the following properties hold:

(i) is asymptotically β-strictly pseudocontractive in the intermediate sense for some nonempty, bounded, closed and convex set and any given nonempty, bounded and closed subset of initial conditions if (2.29) hold with , , , and as ; , . Also, has a fixed point for any such set C if is continuous.

(ii) is asymptotically pseudocontractive in the intermediate sense for some nonempty, bounded, closed and convex set and any given nonempty, bounded and closed subset of initial conditions if (2.29) hold with , , , , and as ; , . Also, has a fixed point for any such set C if is continuous.

(iii) If (2.29) hold with , , , ; and as , then is asymptotically β-strictly contractive in the intermediate sense.
Also, has a unique fixed point. (iv) If (2.29) hold with, , , ; , andas, thenis asymptotically strictly contractive in the intermediate sense. Also, has a unique fixed point. Proof (i) It follows from Definition 2.3 and the fact that Theorem 2.2 holds under the particular nonexpansive condition (2.22), subject to (2.23), so that is asymptotically nonexpansive (see Remark 2.7(1)). Property (ii) follows in a similar way from Definition 2.4 (see Remark 2.7(2)). Properties (iii)-(iv) follow from Theorem 2.2 and Definitions 2.5-2.6 implying also that the asymptotically nonexpansive self-mapping is also a strict contraction, then continuous with a unique fixed point, since (see Remark 2.7(3)) and with (see Remark 2.7(4)), respectively. (The above properties could also be got from Theorem 2.1 for Case (B) of the proof of Theorem 2.1(ii) - see Remark 2.7(5).) □ Example 2.9 Consider the time-varying pth order nonlinear discrete dynamic system (2.31) for some given nonempty bounded set , where is a matrix sequence of elements with and with ; , and defines the state-sequence trajectory solution . Equation (2.13) requires the consistency constraint to calculate . However, other discrete systems being dealt with in the same way as, for instance, that obtained by replacing in (2.31) with the initial condition (and appropriate ad hoc re-definition of the mapping which generates the trajectory solution from given initial conditions) do not require such a consistency constraint. The dynamic system (2.31) is asymptotically linear if as ; . Note that for the Euclidean distance (and norm), ; . Assume that the squared spectral norm of is upper-bounded by for some parameterizing scalar sequences , and which can be dependent, in a more general case, on the state . This holds, for instance, if , where is a real positive sequence satisfying and both being potentially dependent on the state as the rest of the parameterizing sequences. 
Since the spectral norm equalizes the spectral radius if the matrix is symmetric, then can be taken exactly as the spectral radius of in such a case, i.e., it equalizes the absolute value of its dominant eigenvalue. We have to check the condition (2.32) provided, for instance, that the distance is the Euclidean distance, induced by the Euclidean norm, then both being coincident, and provided also that we take the metric space which holds, in particular, if (a) , , , ; , and , , as ; . This implies that ; and as ; . Thus, is asymptotically nonexpansive being also an asymptotic strict β-pseudocontraction in the intermediate sense. This also implies that (2.31) is globally stable as it is proven as follows. Assume the contrary so that there is an infinite subsequence of which is unbounded, and then there is also an infinite subsequence which is strictly increasing. Since and as ; , one has that for , any given and some sufficiently large , , , such that and ; , . Now, take and . Then ; and any given . If , then stability holds trivially. Assume not, and there are unbounded solutions. Thus, take such that for any given , and some . Note that since is a strictly increasing real sequence implying as , which leads to a contradiction to the inequality for for some sufficiently large , then for some sufficiently large M, if such a strictly increasing sequence exists. Hence, there is no such sequence, and then no unbounded sequence for any initial condition in . As a result, for any initial condition in any given subset of (even if it is unbounded), any solution sequence of (2.31) is bounded, and then (2.31) is globally stable. The above reasoning implies that there is an infinite collection of numerable nonempty bounded closed sets , which are not necessarily connected, such that ; and any given . 
Assume that the set of initial conditions is bounded, convex and closed and consider the collection of convex envelopes , define constructively the closure convex set which is trivially bounded, convex and closed. Note that it is not guaranteed that is either open or closed since there is a union of infinitely many closed sets involved. Note also that the convex hull of all the convex envelopes of the collection of sets is involved to ensure that A is convex since the union of convex sets is not necessarily convex (so that is not guaranteed to be convex while A is convex). Consider now the self-mapping which defines exactly the same solution as for initial conditions in so that is identified with the restricted self-mapping from a nonempty bounded, convex and closed set to itself. Note that for the Euclidean distance is a convex metric space which is also complete since it is finite dimensional. Then and are both continuous, then is also continuous and has a fixed point in A from Theorem 2.8(i). (b) If the self-mapping is asymptotically pseudocontractive in the intermediate sense, then the above conclusions still hold with the modification and as ; . From Remark 2.7(2), and for any . Thus the convergence is guaranteed to be faster for an asymptotic β-strict pseudocontraction in the intermediate sense than for an asymptotic pseudocontraction in the intermediate sense with a sequence such that ; with the remaining parameters and parametrical sequences being identical in both cases. If and ; are both continuous, then is continuous and has a fixed point in A from Theorem 2.8(ii). (c) If is asymptotically β-strictly contractive in the intermediate sense, then ; so that it is asymptotically strictly contractive and has a unique fixed point from Theorem 2.8(iii). (d) If is asymptotically contractive in the intermediate sense, ; . Thus, is an asymptotic strict contraction and has a unique fixed point from Theorem 2.8(iv). 
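As a toy numerical sketch of the behaviour described in Example 2.9 (the matrices and constants below are illustrative assumptions, not taken from the paper), one can iterate a time-varying linear system x_{k+1} = A_k x_k whose spectral norm exceeds 1 for the first few steps, in the spirit of the "intermediate sense" allowance, but tends to a limit below 1; every trajectory then stays bounded and converges to the fixed point at the origin:

```python
import numpy as np

# Toy time-varying system: A(k) = scale(k) * R, where R is a rotation
# (so the spectral norm of A(k) is exactly |scale(k)|) and
#   scale(k) = 0.8 + 0.5/(k+1)  (> 1 for k = 0, 1, then < 1, -> 0.8).
rng = np.random.default_rng(0)

def A(k):
    theta = 0.3
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    scale = 0.8 + 0.5 / (k + 1)
    return scale * R

x = rng.normal(size=2) * 10      # arbitrary initial condition
norms = []
for k in range(200):
    x = A(k) @ x
    norms.append(np.linalg.norm(x))

# Early expansion is allowed, but the composed map is asymptotically
# strictly contractive, so the trajectory converges to the origin.
assert norms[-1] < 1e-8
```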
Remark 2.10 Note that conditions like (2.32) can be tested on dynamic systems different from (2.31) by redefining, in an appropriate way, the self-mapping which generates the solution sequence from given initial conditions. This makes it possible to investigate the asymptotic properties of the self-mapping, the convergence of the solutions to fixed points, and hence the system stability, in a unified way for different dynamic systems. Similar considerations apply to the convergence of the solutions generated by the various cyclic self-mappings defined on the union of several subsets to the best proximity points of each of the subsets involved.

### 3 Asymptotic contractions and pseudocontractions of cyclic self-mappings in the intermediate sense

Let be nonempty subsets of X. is a cyclic self-mapping if and . Assume that the asymptotically nonexpansive condition (2.10), subject to (2.11), is modified as follows: (3.1) (3.2) with ; as , and that the asymptotically nonexpansive condition (2.22), subject to (2.23), is modified as follows: (3.3) (3.4) with ; as , where and . If , then , and Theorems 2.1, 2.2 and 2.8 hold with the replacement . In that case, if A and B are closed and convex, then there is a unique fixed point of in . In the following, we consider the case that so that . The subsequent result, based on Theorems 2.1, 2.2 and 2.8, holds.

Theorem 3.1 Let be a metric space and let be a cyclic self-mapping, i.e., and , where A and B are nonempty subsets of X.
Define the sequence of asymptotically nonexpansive iteration-dependent constants as follows: , provided that satisfies the constraint (3.1), subject to (3.2), and (3.6) and (3.7) for () and for () provided that satisfies the constraint (3.3) subject to (3.4), provided that the parameterizing bounded real sequences , , and of general terms , and fulfill the following constraints: (3.8) and assuming that the following limits exist: (3.9) Then, the following properties hold: (i) satisfies (3.3) subject to (3.4)-(3.9); . Then so that is a cyclic asymptotically nonexpansive self-mapping. If is a best proximity point of A and is a best proximity point of B, then and and , which are best proximity points of A and B (not necessarily identical to x and y), respectively, if is continuous. (ii) Property (i) also holds if satisfies (3.1) subject to (3.2), (3.7), (3.8)-(3.9) and (3.5b) provided that ; .

Proof The second condition of (2.18) now becomes under either (3.1)-(3.2) and (3.8)-(3.9) (3.10) and it now becomes under (3.3)-(3.4) and (3.8)-(3.9) (3.11) since ; since and , and and as ; . Note that (3.8) implies that there is no division by zero in (3.11). Now, assume that (3.10) holds with . From (3.8) and (3.2), , equivalently, and , which contradicts (3.5a) if so that in (3.5a) under (3.7) implies that and, since from (3.6), there is no division by zero on the right-hand side of (3.10) if . Also, if is continuous, then so that ; , , and since and . This proves Properties (i)-(ii). □

Remark 3.2 Note that Theorem 3.1 does not guarantee the convergence of and to best proximity points if the initial points for the iterations and are not best proximity points when is not contractive. The following result specifies Theorem 3.1 for asymptotically nonexpansive mappings with ; subject to .

Theorem 3.3 Let be a metric space and let be a cyclic self-mapping which satisfies the asymptotically nonexpansive constraint (3.1), subject to (3.2), where A and B are nonempty subsets of X.
Let the sequence of asymptotically nonexpansive iteration-dependent constants be defined by a general term under the constraints , , and . Then the subsequent properties hold: (i) The following limits exist: (3.12) (ii) Assume, furthermore, that is complete, A and B are closed and convex, is translation-invariant and homogeneous, and is uniformly convex, where is the metric-induced norm. Then (3.13) , ; , and , ; , , where z and Tz are unique best proximity points in A and B, respectively. If , then is the unique fixed point of .

Proof Note from (3.9), under (3.6) and (3.7), that there is no division by zero on the right-hand side of (3.10) and if . Then one has from (3.1)-(3.2), (3.5a), (3.6) and (3.7) that (3.14) There are several possible cases as follows. Case A: is non-increasing. Then as ; . Since , one gets (3.12). Case B: is non-decreasing. Then either as ; or it is unbounded. In the latter case, it has a subsequence which diverges, from which a strictly increasing subsequence can be taken. But this contradicts , following from (3.14) subject to the given parametrical constraints. Thus, if is non-decreasing, it cannot have a strictly increasing subsequence, so that it is bounded and has a finite limit as in Case A. Case C: has an oscillating subsequence. It is proven that such a subsequence is finite. Assume not; then if , there is an integer sequence of general term subject to such that but the above expression is equivalent, for and which are in , but not jointly in either A or B, to , which contradicts since both sequences and are bounded; . Then there is no infinite oscillating sequence for some , so that there is a finite limit of , . Now, proceed by contradiction by assuming the existence of some such that as ; . Thus, for any , there is some such that there are two consecutive nonzero elements of a nonzero real sequence , which can depend on x and y, which satisfy and (3.15) . Otherwise, if for any and any given and , then as ; .
One gets, by combining (3.14) and (3.15), that (3.16) since ; , and some nonnegative real sequence which converges to zero since as ; for any , so that as ; . The relations (3.16) contradict , since is positive (and it does not converge to zero) and , as . Thus, one concludes that converges to zero, and then ; ; . This leads to ; by taking with if and if . Property (i) has been proven. Now, Property (ii) is proven. It is first proven that ; if the metric is translation-invariant and homogeneous, so that it induces a norm, if A and B are nonempty, closed and convex subsets of X and is a uniformly convex Banach space. Assume not, and take such a norm to yield . Then if A is nonempty, closed and convex and B is nonempty and closed and , then . It is known that from Theorem 3.1(i) for . Since is a uniformly convex Banach space for the metric-induced norm (being equivalent to the translation-invariant homogeneous metric), we have the following property for the sequences and satisfying for some strictly increasing nonnegative sequence of functions and any nonnegative sequences and satisfying and any sequence ; that (3.17) (3.18) (3.19) , , which implies that (3.20) which has to be valid for ; . Now, for and ; , it follows that ; , which contradicts being strictly increasing, and then contradicts being a uniformly convex Banach space, unless as , so that converges to . Taking , ; , (3.15) for as implies the existence of the first zero limit in (3.13). The existence of the second zero limit in (3.13) is proven in the same way since . Since those limits are zero, , are Cauchy sequences in A converging to a best proximity point for . Note that is necessarily the unique best proximity point in A since and converge to the same point.
Otherwise, the first limit of (3.13) would not exist if the sequences did not converge, contradicting a proven result; also, Property (i) would not be true, since (3.12) would not hold, if the limit of the sequence were not a best proximity point in A, again a contradiction to a proven result. In the same way, , converge to a unique best proximity point for any . Now, . Assume not. Then since , and , one has . Assume that so that since A and B are convex, which is a contradiction. Then is the unique best proximity point of B. If , then is the unique fixed point of which coincides with the unique best proximity point in A and B. □

Remark 3.4 Theorem 3.3 is known for strictly contractive cyclic self-mappings [20] satisfying the contractive condition (3.1) in the case that and , and [5-7]. It is now assumed that the cyclic self-mapping is asymptotically nonexpansive while not being strictly contractive for any finite number of iterations. The concepts of cyclic pseudocontractions and a strict contraction in the intermediate sense play an important role in the obtained results.

Theorem 3.5 Let be a uniformly convex Banach space endowed with a metric-induced norm from a translation-invariant homogeneous metric, where A and are nonempty, closed and convex subsets of X, and assume that is a cyclic self-mapping. Define the sequence of asymptotically nonexpansive iteration-dependent constants as follows: , provided that satisfies the constraint (3.1), subject to (3.2); and (3.22) for () and for () provided that satisfies the constraint (3.3), subject to (3.4), provided that the parameterizing bounded real sequences , , and of general terms , and fulfill the following constraints: (3.23) and assuming that the following limits exist: (3.24) Then the following properties hold: (i) If satisfies (3.3) subject to (3.20)-(3.24); , then so that is asymptotically nonexpansive.
If is a best proximity point of A and is a best proximity point of B, then and and , which are best proximity points of A and B (not necessarily identical to x and y), respectively, if, furthermore, is continuous. (ii) Property (i) also holds if satisfies (3.1) subject to (3.2), (3.22), (3.23)-(3.24) and (3.5b) with ; . (iii) Assume that is asymptotically β-strictly pseudocontractive in the intermediate sense so that (3.21a)-(3.21b) holds with , , , and , as ; , . Then is asymptotically nonexpansive and Property (i) holds. (iv) is asymptotically pseudocontractive in the intermediate sense if (3.22) holds with , , , , and as ; , . Then is asymptotically nonexpansive and Property (i) holds. (v) If the conditions of Property (iv) are modified as , , ; , as and in (3.22), then is asymptotically β-strictly contractive in the intermediate sense. Also, has a unique best proximity point z in A and a unique best proximity point Tz in B to which the sequences and converge; . If , then and as . (vi) If (3.4) is modified by , , , ; , and as , then is asymptotically -strictly contractive in the intermediate sense. Also, has a unique best proximity point in A and a unique best proximity point in B to which the sequences and converge as in Property (v).

Proof The second condition of (2.18) now becomes under (3.1)-(3.2), or (3.3)-(3.4), and (3.23)-(3.24) since and as ; . Also, if is continuous, then so that , , and since A and B are closed and and . This proves Properties (i)-(ii). To prove Property (iii), note that if is asymptotically β-strictly pseudocontractive in the intermediate sense under (3.21a)-(3.21b)-(3.23) with ; , as and (3.22) holds for as , then is asymptotically nonexpansive and as with if and are best proximity points. Also, ; and , , and if is continuous. Then Property (i) holds. Property (iv) is proven in a similar way to (iii), since is again asymptotically nonexpansive.
Properties (v)-(vi) follow since in both cases becomes a cyclic strictly contractive self-mapping for all with ; and some finite in Theorem 3.3, Eq. (3.14). Thus, it follows directly that ; with and if and since and . Also, ; . Furthermore, and ; , and there are unique best proximity points and . The convergence of the iterations to the unique best proximity points follows using similar arguments to those used in the proof of Theorem 3.3(ii), based on the uniform convexity of the complete metric space and the fact that the subsets A and B are nonempty, convex and closed. □

Remark 3.6 Note that the existence in Theorem 3.5 of and such that is guaranteed if A is nonempty, bounded, closed and convex and B is nonempty, closed and convex; it is also guaranteed if A is compact and B is approximately compact with respect to A, i.e., if every sequence , such that for some , has a convergent subsequence [6,7,31].

Example 3.7 Consider the time-varying scalar controlled discrete dynamic system: (3.25) under the feedback control sequence (3.26) (3.27) so that (3.28) where ; for some given nonempty bounded set , where is the control sequence. The above model can describe discrete-time dynamic systems under time-varying sampling periods or under a time-varying parameterization in general [39]. Assume that the suitably controlled solution (3.28) is of the form Then (3.29) (3.30) The identities (3.30) allow the feedback generation of the control sequence (3.26) from its previous values and previous solution values as follows: (3.31) for given parameterizing scalar sequences which can depend on the state (see Example 2.9). We now define a cyclic self-map so that the solution belongs alternately to positive (respectively, nonnegative) and negative (respectively, nonpositive) real intervals and if (respectively, if ), that is, and .
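To make the alternation just described concrete before fixing the parameter sequences, here is a small numerical sketch with an assumed toy map (not the controlled system (3.25)): a cyclic self-map T sending A = [d, +inf) into B = (-inf, -d] and back, halving the distance of each iterate to the near edge of its set, so the even and odd iterates approach the two best proximity points d and -d (and, for d = 0, the unique fixed point 0).

```python
# Assumed toy cyclic map, for illustration only (not Example 3.7's system):
# A = [d, +inf), B = (-inf, -d].  T maps A into B and B into A, halving
# the excess |x| - d at every step, so the orbit alternates sign while
# |x_n| -> d: the even/odd subsequences converge to the best proximity
# points d and -d (dist(A, B) = 2d).  With d = 0 the two sets touch and
# the orbit converges to the unique fixed point 0.

D = 1.0  # half the gap between A and B; set D = 0.0 for the fixed-point case

def sign(x):
    return 1.0 if x >= 0 else -1.0

def T(x, d=D):
    """Cyclic map: jump across the gap, halving the excess over the edge."""
    excess = abs(x) - d
    return -sign(x) * (d + excess / 2.0)

xs = [5.0]  # start in A = [1, +inf)
for _ in range(60):
    xs.append(T(xs[-1]))

assert all(a * b < 0 for a, b in zip(xs, xs[1:]))  # strict alternation A <-> B
print(xs[-2], xs[-1])  # close to the best proximity points -1 and 1
```

Theorem 3.5(v)-(vi) gives general conditions under which this kind of convergence of the even and odd subsequences to unique best proximity points is guaranteed.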
For such an objective, consider the scalar bounded sequences , and such that , and ; , which satisfy (3.32a) (3.32b) Note that, by using the Euclidean distance and norm on R, it is possible to apply the theoretical formalism to the expressions ; to prove convergence to the best proximity points to which the sequences and converge, respectively if and conversely if . Assume that: (1) The constraints (3.32a)-(3.32b) hold; (2) The parametrical constraints of the various parts (a) to (d) of Example 2.9 hold with the replacements and the appropriate replacements of the constraints , ; (3) and are redefined for this example from and , respectively, from (3.32a)-(3.32b). From Theorem 3.5, the various properties of Example 2.9 also hold for this example if , so that the cyclic self-map alternates the values of the solution sequence between and . The unique fixed point to which the solution converges is . If , then the corresponding results are modified by convergence to each of the unique best proximity points to which the sequences and converge; .

### Competing interests

The author declares that he has no competing interests.

### Acknowledgements

The author is very grateful to the Spanish Government for its support of this research through Grant DPI2012-30651, and to the Basque Government for its support of this research through Grants IT378-10 and SAIOTEK S-PE12UN015. He is also grateful to the University of the Basque Country for its financial support through Grant UFI 2011/07 and to the referees for their useful comments.

### References

1. Sahu, DR, Xu, HK, Yao, JC: Asymptotically strict pseudocontractive mappings in the intermediate sense. Nonlinear Anal., Theory Methods Appl. 70, 3502–3511 (2009)
2. Qin, X, Kim, JK, Wang, T: On the convergence of implicit iterative processes for asymptotically pseudocontractive mappings in the intermediate sense. Abstr. Appl. Anal. 2011, Article ID 468716 (2011). doi:10.1155/2011/468716
3.
Ceng, LC, Petrusel, A, Yao, JC: Iterative approximation of fixed points for asymptotically strict pseudocontractive type mappings in the intermediate sense. Taiwan. J. Math. 15(2), 587–606 (2011)
4. Duan, P, Zhao, J: Strong convergence theorems for system of equilibrium problems and asymptotically strict pseudocontractions in the intermediate sense. Fixed Point Theory Appl. 2011, Article ID 13 (2011). doi:10.1186/1687-1812-2011-13
5. Kirk, WA, Srinivasan, PS, Veeramani, P: Fixed points for mappings satisfying cyclical contractive conditions. Fixed Point Theory 4(1), 79–89 (2003)
6. Eldred, AA, Veeramani, P: Existence and convergence of best proximity points. J. Math. Anal. Appl. 323, 1001–1006 (2006)
7. Karpagam, S, Agrawal, S: Best proximity point theorems for p-cyclic Meir-Keeler contractions. Fixed Point Theory Appl. 2009, Article ID 197308 (2009). doi:10.1155/2009/197308
8. Di Bari, C, Suzuki, T, Vetro, C: Best proximity points for cyclic Meir-Keeler contractions. Nonlinear Anal., Theory Methods Appl. 69(11), 3790–3794 (2008)
9. De la Sen, M: Linking contractive self-mappings and cyclic Meir-Keeler contractions with Kannan self-mappings. Fixed Point Theory Appl. 2010, Article ID 572057 (2010). doi:10.1155/2010/572057
10. De la Sen, M: Some combined relations between contractive mappings, Kannan mappings, reasonable expansive mappings and T-stability. Fixed Point Theory Appl. 2009, Article ID 815637 (2009). doi:10.1155/2009/815637
11. Suzuki, T: Some notes on Meir-Keeler contractions and L-functions. Bull. Kyushu Inst. Technol. 53, 12–13 (2006)
12. Derafshpour, M, Rezapour, S, Shahzad, N: On the existence of best proximity points of cyclic contractions. Adv. Dyn. Syst. Appl. 6(1), 33–40 (2011)
13. Rezapour, S, Derafshpour, M, Shahzad, N: Best proximity points of cyclic φ-contractions on reflexive Banach spaces. Fixed Point Theory Appl. 2010, Article ID 946178 (2010). doi:10.1155/2010/946178
14.
Al-Thagafi, MA, Shahzad, N: Convergence and existence results for best proximity points. Nonlinear Anal., Theory Methods Appl. 70(10), 3665–3671 (2009)
15. Rus, IA: Cyclic representations and fixed points. Ann. T. Popoviciu Semin. Funct. Equ. Approx. Convexity 3, 171–178 (2005)
16. Pacurar, M, Rus, IA: Fixed point theory for cyclic φ-contraction. Nonlinear Anal., Theory Methods Appl. 72(3-4), 1181–1187 (2010)
17. Karapinar, E: Fixed point theory for cyclic weak ϕ-contraction. Appl. Math. Lett. 24, 822–825 (2011)
18. Shahzad, N, Sadiq Basha, S, Jeyaraj, R: Common best proximity points: global optimal solutions. J. Optim. Theory Appl. 148(1), 69–78 (2011)
19. Vetro, C: Best proximity points: convergence and existence theorems for p-cyclic mappings. Nonlinear Anal., Theory Methods Appl. 73(7), 2283–2291 (2010)
20. De la Sen, M: On a general contractive condition for cyclic self-mappings. J. Appl. Math. 2011, Article ID 542941 (2011). doi:10.1155/2011/542941
21. Yao, YH, Liu, YC, Chen, CP: Algorithms construction for nonexpansive mappings and inverse-strongly monotone mappings. Taiwan. J. Math. 15(5), 1979–1998 (2011)
22. Yao, YH, Chen, RD: Regularized algorithms for hierarchical fixed-point problems. Nonlinear Anal., Theory Methods Appl. 74(17), 6826–6834 (2011)
23. Qin, X, Kang, SM, Agarwal, RP: On the convergence of an implicit iterative process for generalized asymptotically quasi-nonexpansive mappings. Fixed Point Theory Appl. 2010, Article ID 714860 (2010). doi:10.1155/2010/714860
24. Pathak, HK, Khan, MS, Tiwari, R: A common fixed point theorem and its application to nonlinear integral equations. Comput. Math. Appl. 53(6), 961–971 (2007). doi:10.1016/j.camwa.2006.08.046
25. Khan, MS, Nashine, HK: On invariant approximation for noncommutative mappings in locally convex spaces.
J. Comput. Anal. Appl. 10(1), 7–15 (2008)
26. Nashine, HK, Khan, MS: An application of fixed point theorem to best approximation in locally convex space. Appl. Math. Lett. 23(2), 121–127 (2010). doi:10.1016/j.aml.2009.06.025
27. Nashine, HK, Khan, MS: Common fixed points versus invariant approximation in nonconvex sets. Appl. Math. E-Notes 9, 72–79 (2009)
28. Pathak, HK, Tiwari, R, Khan, MS: A common fixed point theorem satisfying integral type implicit relations. Appl. Math. E-Notes 7, 222–228 (2007)
29. De la Sen, M, Agarwal, RP: Fixed point-type results for a class of extended cyclic self-mappings under three general weak contractive conditions of rational type. Fixed Point Theory Appl. 2012, Article ID 102 (2012). doi:10.1186/1687-1812-2011-102
30. De la Sen, M, Agarwal, RP: Some fixed point-type results for a class of extended cyclic self-mappings with a more general contractive condition. Fixed Point Theory Appl. 2011, Article ID 59 (2011). doi:10.1186/1687-1812-2011-59
31. Basha, SS, Shahzad, N: Best proximity point theorems for generalized proximal contractions. Fixed Point Theory Appl. 2012, Article ID 42 (2012). doi:10.1186/1687-1812-2012-42
32. Chen, CM, Lin, CJ: Best periodic proximity point theorems for cyclic weaker Meir-Keeler contractions. J. Appl. Math. 2012, Article ID 856974 (2012). doi:10.1155/2012/856974
33. Caballero, J, Harjani, J, Sadarangani, K: A best proximity point theorem for Geraghty-contractions. Fixed Point Theory Appl. 2012, Article ID 231 (2012)
34. Mongkolkeha, C, Cho, YJ, Kumam, P: Best proximity points for generalized proximal C-contraction mappings in metric spaces with partial orders. J. Inequal. Appl. 2013, Article ID 94 (2013)
35. Karapinar, E: Best proximity points of Kannan type cyclic weak ϕ-contractions in ordered metric spaces. An. Stiint. Univ. Ovidius Constanta, Ser. Mat. 20(3), 51–63 (2012)
36.
Karapinar, E: Best proximity points of cyclic mappings. Appl. Math. Lett. 25(11), 1761–1766 (2012)
37. Karapinar, E, Erhan, IM: Best proximity points on different type contractions. Appl. Math. Inf. Sci. 5(3), 558–569 (2011)
38. Raj, VS: A best proximity theorem for weakly contractive non-self mappings. Nonlinear Anal., Theory Methods Appl. 74(14), 4804–4808 (2011)
39. De la Sen, M: Application of the nonperiodic sampling to the identifiability and model-matching problems in dynamic systems. Int. J. Syst. Sci. 14(4), 367–383 (1983)
# Do one-relator groups satisfy the Haagerup property?

The question is in the title: do one-relator groups satisfy the Haagerup property? I think the answer is known at least in some specific cases, but has the problem been completely solved?

• Most 1-relator groups are known to be free-by-(virtually solvable), which implies Haagerup. It sounds plausible to me that 1-relator groups all have this property. – YCor Sep 13 '15 at 12:13
• @YCor, do you know if this holds for Baumslag's famous example $\langle a,b\mid a^{(a^b)}=a^2\rangle$? – HJRW Sep 17 '15 at 13:39
• @HJRW good question, I'll think about it. – YCor Sep 17 '15 at 21:46
:: Bounding boxes for compact sets in ${\calE}^2$ :: by Czes{\l}aw Byli\'nski and Piotr Rudnicki :: :: Copyright (c) 1997-2018 Association of Mizar Users Lm1: for X being Subset of REAL st X is bounded_below holds -- X is bounded_above proof end; Lm2: for X being non empty set for f being Function of X,REAL st f is with_min holds - f is with_max proof end; definition let T be 1-sorted ; mode RealMap of T is Function of the carrier of T,REAL; end; registration let T be non empty 1-sorted ; existence ex b1 being RealMap of T st b1 is bounded proof end; end; definition let T be 1-sorted ; let f be RealMap of T; func lower_bound f -> Real equals :: PSCOMP_1:def 1 lower_bound (f .: the carrier of T); coherence lower_bound (f .: the carrier of T) is Real ; func upper_bound f -> Real equals :: PSCOMP_1:def 2 upper_bound (f .: the carrier of T); coherence upper_bound (f .: the carrier of T) is Real ; end; :: deftheorem defines lower_bound PSCOMP_1:def 1 : for T being 1-sorted for f being RealMap of T holds lower_bound f = lower_bound (f .: the carrier of T); :: deftheorem defines upper_bound PSCOMP_1:def 2 : for T being 1-sorted for f being RealMap of T holds upper_bound f = upper_bound (f .: the carrier of T); theorem Th1: :: PSCOMP_1:1 for T being non empty TopSpace for f being V50() RealMap of T for p being Point of T holds f . p >= lower_bound f proof end; theorem :: PSCOMP_1:2 for T being non empty TopSpace for f being V50() RealMap of T for s being Real st ( for t being Point of T holds f . t >= s ) holds lower_bound f >= s proof end; theorem :: PSCOMP_1:3 for r being Real for T being non empty TopSpace for f being RealMap of T st ( for p being Point of T holds f . p >= r ) & ( for t being Real st ( for p being Point of T holds f . p >= t ) holds r >= t ) holds r = lower_bound f proof end; theorem Th4: :: PSCOMP_1:4 for T being non empty TopSpace for f being V49() RealMap of T for p being Point of T holds f . 
p <= upper_bound f proof end; theorem :: PSCOMP_1:5 for T being non empty TopSpace for f being V49() RealMap of T for t being Real st ( for p being Point of T holds f . p <= t ) holds upper_bound f <= t proof end; theorem :: PSCOMP_1:6 for r being Real for T being non empty TopSpace for f being RealMap of T st ( for p being Point of T holds f . p <= r ) & ( for t being Real st ( for p being Point of T holds f . p <= t ) holds r <= t ) holds r = upper_bound f proof end; theorem Th7: :: PSCOMP_1:7 for T being non empty 1-sorted for f being bounded RealMap of T holds lower_bound f <= upper_bound f proof end; definition let T be TopStruct ; let f be RealMap of T; attr f is continuous means :: PSCOMP_1:def 3 for Y being Subset of REAL st Y is closed holds f " Y is closed ; end; :: deftheorem defines continuous PSCOMP_1:def 3 : for T being TopStruct for f being RealMap of T holds ( f is continuous iff for Y being Subset of REAL st Y is closed holds f " Y is closed ); registration let T be non empty TopSpace; existence ex b1 being RealMap of T st b1 is continuous proof end; end; registration let T be non empty TopSpace; let S be non empty SubSpace of T; existence ex b1 being RealMap of S st b1 is continuous proof end; end; theorem Th8: :: PSCOMP_1:8 for T being TopStruct for f being RealMap of T holds ( f is continuous iff for Y being Subset of REAL st Y is open holds f " Y is open ) proof end; theorem Th9: :: PSCOMP_1:9 for T being TopStruct for f being RealMap of T st f is continuous holds - f is continuous proof end; theorem Th10: :: PSCOMP_1:10 for r3 being Real for T being TopStruct for f being RealMap of T st f is continuous holds r3 + f is continuous proof end; theorem Th11: :: PSCOMP_1:11 for T being TopStruct for f being RealMap of T st f is continuous & not 0 in rng f holds Inv f is continuous proof end; theorem :: PSCOMP_1:12 for T being TopStruct for f being RealMap of T for R being Subset-Family of REAL st f is continuous & R is open holds (" f) .: R is open 
proof end; theorem Th13: :: PSCOMP_1:13 for T being TopStruct for f being RealMap of T for R being Subset-Family of REAL st f is continuous & R is closed holds (" f) .: R is closed proof end; definition let T be non empty TopStruct ; let f be RealMap of T; let X be Subset of T; :: original: | redefine func f | X -> RealMap of (T | X); coherence f | X is RealMap of (T | X) proof end; end; registration let T be non empty TopSpace; let f be continuous RealMap of T; let X be Subset of T; cluster K50(f,X) -> continuous for RealMap of (T | X); coherence for b1 being RealMap of (T | X) st b1 = f | X holds b1 is continuous proof end; end; registration let T be non empty TopSpace; let P be non empty compact Subset of T; cluster T | P -> compact ; coherence T | P is compact by COMPTS_1:3; end; theorem Th14: :: PSCOMP_1:14 for T being non empty TopSpace holds ( ( for f being RealMap of T st f is continuous holds f is with_max ) iff for f being RealMap of T st f is continuous holds f is with_min ) proof end; theorem Th15: :: PSCOMP_1:15 for T being non empty TopSpace holds ( ( for f being RealMap of T st f is continuous holds f is bounded ) iff for f being RealMap of T st f is continuous holds f is with_max ) proof end; definition let T be TopStruct ; attr T is pseudocompact means :Def4: :: PSCOMP_1:def 4 for f being RealMap of T st f is continuous holds f is bounded ; end; :: deftheorem Def4 defines pseudocompact PSCOMP_1:def 4 : for T being TopStruct holds ( T is pseudocompact iff for f being RealMap of T st f is continuous holds f is bounded ); registration coherence for b1 being non empty TopSpace st b1 is compact holds b1 is pseudocompact proof end; end; registration existence ex b1 being TopSpace st ( not b1 is empty & b1 is compact ) proof end; end; registration let T be non empty pseudocompact TopSpace; cluster Function-like V32( the carrier of T, REAL ) continuous -> with_max with_min bounded for RealMap of ; coherence for b1 being RealMap of T st b1 is continuous 
holds ( b1 is bounded & b1 is with_max & b1 is with_min ) proof end; end; theorem Th16: :: PSCOMP_1:16 for T being non empty TopSpace for X being non empty Subset of T for Y being compact Subset of T for f being continuous RealMap of T st X c= Y holds lower_bound (f | Y) <= lower_bound (f | X) proof end; theorem Th17: :: PSCOMP_1:17 for T being non empty TopSpace for X being non empty Subset of T for Y being compact Subset of T for f being continuous RealMap of T st X c= Y holds upper_bound (f | X) <= upper_bound (f | Y) proof end; registration let n be Element of NAT ; let X, Y be compact Subset of (); cluster K29(X,Y) -> compact for Subset of (); coherence for b1 being Subset of () st b1 = X /\ Y holds b1 is compact by COMPTS_1:11; end; definition func proj1 -> RealMap of () means :Def5: :: PSCOMP_1:def 5 for p being Point of () holds it . p = p 1 ; existence ex b1 being RealMap of () st for p being Point of () holds b1 . p = p 1 proof end; uniqueness for b1, b2 being RealMap of () st ( for p being Point of () holds b1 . p = p 1 ) & ( for p being Point of () holds b2 . p = p 1 ) holds b1 = b2 proof end; func proj2 -> RealMap of () means :Def6: :: PSCOMP_1:def 6 for p being Point of () holds it . p = p 2 ; existence ex b1 being RealMap of () st for p being Point of () holds b1 . p = p 2 proof end; uniqueness for b1, b2 being RealMap of () st ( for p being Point of () holds b1 . p = p 2 ) & ( for p being Point of () holds b2 . p = p 2 ) holds b1 = b2 proof end; end; :: deftheorem Def5 defines proj1 PSCOMP_1:def 5 : for b1 being RealMap of () holds ( b1 = proj1 iff for p being Point of () holds b1 . p = p 1 ); :: deftheorem Def6 defines proj2 PSCOMP_1:def 6 : for b1 being RealMap of () holds ( b1 = proj2 iff for p being Point of () holds b1 . 
p = p 2 ); theorem Th18: :: PSCOMP_1:18 for r, s being Real holds proj1 " ].r,s.[ = { |[r1,r2]| where r1, r2 is Real : ( r < r1 & r1 < s ) } proof end; theorem Th19: :: PSCOMP_1:19 for P being Subset of () for r3, q3 being Real st P = { |[r1,r2]| where r1, r2 is Real : ( r3 < r1 & r1 < q3 ) } holds P is open proof end; theorem Th20: :: PSCOMP_1:20 for r, s being Real holds proj2 " ].r,s.[ = { |[r1,r2]| where r1, r2 is Real : ( r < r2 & r2 < s ) } proof end; theorem Th21: :: PSCOMP_1:21 for P being Subset of () for r3, q3 being Real st P = { |[r1,r2]| where r1, r2 is Real : ( r3 < r2 & r2 < q3 ) } holds P is open proof end; registration coherence proof end; coherence proof end; end; theorem Th22: :: PSCOMP_1:22 for X being Subset of () for p being Point of () st p in X holds () . p = p 1 proof end; theorem Th23: :: PSCOMP_1:23 for X being Subset of () for p being Point of () st p in X holds () . p = p 2 proof end; definition let X be Subset of (); func W-bound X -> Real equals :: PSCOMP_1:def 7 lower_bound (); coherence lower_bound () is Real ; func N-bound X -> Real equals :: PSCOMP_1:def 8 upper_bound (); coherence upper_bound () is Real ; func E-bound X -> Real equals :: PSCOMP_1:def 9 upper_bound (); coherence upper_bound () is Real ; func S-bound X -> Real equals :: PSCOMP_1:def 10 lower_bound (); coherence lower_bound () is Real ; end; :: deftheorem defines W-bound PSCOMP_1:def 7 : for X being Subset of () holds W-bound X = lower_bound (); :: deftheorem defines N-bound PSCOMP_1:def 8 : for X being Subset of () holds N-bound X = upper_bound (); :: deftheorem defines E-bound PSCOMP_1:def 9 : for X being Subset of () holds E-bound X = upper_bound (); :: deftheorem defines S-bound PSCOMP_1:def 10 : for X being Subset of () holds S-bound X = lower_bound (); Lm3: for p being Point of () for X being non empty compact Subset of () st p in X holds ( lower_bound () <= p 1 & p 1 <= upper_bound () & lower_bound () <= p 2 & p 2 <= upper_bound () ) proof end; theorem :: 
PSCOMP_1:24 for p being Point of () for X being non empty compact Subset of () st p in X holds ( W-bound X <= p 1 & p 1 <= E-bound X & S-bound X <= p 2 & p 2 <= N-bound X ) by Lm3; definition let X be Subset of (); func SW-corner X -> Point of () equals :: PSCOMP_1:def 11 |[(),()]|; coherence |[(),()]| is Point of () ; func NW-corner X -> Point of () equals :: PSCOMP_1:def 12 |[(),()]|; coherence |[(),()]| is Point of () ; func NE-corner X -> Point of () equals :: PSCOMP_1:def 13 |[(),()]|; coherence |[(),()]| is Point of () ; func SE-corner X -> Point of () equals :: PSCOMP_1:def 14 |[(),()]|; coherence |[(),()]| is Point of () ; end; :: deftheorem defines SW-corner PSCOMP_1:def 11 : for X being Subset of () holds SW-corner X = |[(),()]|; :: deftheorem defines NW-corner PSCOMP_1:def 12 : for X being Subset of () holds NW-corner X = |[(),()]|; :: deftheorem defines NE-corner PSCOMP_1:def 13 : for X being Subset of () holds NE-corner X = |[(),()]|; :: deftheorem defines SE-corner PSCOMP_1:def 14 : for X being Subset of () holds SE-corner X = |[(),()]|; theorem :: PSCOMP_1:25 for P being Subset of () holds () 1 = () 1 proof end; theorem :: PSCOMP_1:26 for P being Subset of () holds () 1 = () 1 proof end; theorem :: PSCOMP_1:27 for P being Subset of () holds () 2 = () 2 proof end; theorem :: PSCOMP_1:28 for P being Subset of () holds () 2 = () 2 proof end; definition let X be Subset of (); func W-most X -> Subset of () equals :: PSCOMP_1:def 15 (LSeg ((),())) /\ X; coherence (LSeg ((),())) /\ X is Subset of () ; func N-most X -> Subset of () equals :: PSCOMP_1:def 16 (LSeg ((),())) /\ X; coherence (LSeg ((),())) /\ X is Subset of () ; func E-most X -> Subset of () equals :: PSCOMP_1:def 17 (LSeg ((),())) /\ X; coherence (LSeg ((),())) /\ X is Subset of () ; func S-most X -> Subset of () equals :: PSCOMP_1:def 18 (LSeg ((),())) /\ X; coherence (LSeg ((),())) /\ X is Subset of () ; end; :: deftheorem defines W-most PSCOMP_1:def 15 : for X being Subset of () holds W-most 
X = (LSeg ((),())) /\ X; :: deftheorem defines N-most PSCOMP_1:def 16 : for X being Subset of () holds N-most X = (LSeg ((),())) /\ X; :: deftheorem defines E-most PSCOMP_1:def 17 : for X being Subset of () holds E-most X = (LSeg ((),())) /\ X; :: deftheorem defines S-most PSCOMP_1:def 18 : for X being Subset of () holds S-most X = (LSeg ((),())) /\ X; registration let X be non empty compact Subset of (); cluster W-most X -> non empty compact ; coherence ( not W-most X is empty & W-most X is compact ) proof end; cluster N-most X -> non empty compact ; coherence ( not N-most X is empty & N-most X is compact ) proof end; cluster E-most X -> non empty compact ; coherence ( not E-most X is empty & E-most X is compact ) proof end; cluster S-most X -> non empty compact ; coherence ( not S-most X is empty & S-most X is compact ) proof end; end; definition let X be Subset of (); func W-min X -> Point of () equals :: PSCOMP_1:def 19 |[(),(lower_bound (proj2 | ()))]|; coherence |[(),(lower_bound (proj2 | ()))]| is Point of () ; func W-max X -> Point of () equals :: PSCOMP_1:def 20 |[(),(upper_bound (proj2 | ()))]|; coherence |[(),(upper_bound (proj2 | ()))]| is Point of () ; func N-min X -> Point of () equals :: PSCOMP_1:def 21 |[(lower_bound (proj1 | ())),()]|; coherence |[(lower_bound (proj1 | ())),()]| is Point of () ; func N-max X -> Point of () equals :: PSCOMP_1:def 22 |[(upper_bound (proj1 | ())),()]|; coherence |[(upper_bound (proj1 | ())),()]| is Point of () ; func E-max X -> Point of () equals :: PSCOMP_1:def 23 |[(),(upper_bound (proj2 | ()))]|; coherence |[(),(upper_bound (proj2 | ()))]| is Point of () ; func E-min X -> Point of () equals :: PSCOMP_1:def 24 |[(),(lower_bound (proj2 | ()))]|; coherence |[(),(lower_bound (proj2 | ()))]| is Point of () ; func S-max X -> Point of () equals :: PSCOMP_1:def 25 |[(upper_bound (proj1 | ())),()]|; coherence |[(upper_bound (proj1 | ())),()]| is Point of () ; func S-min X -> Point of () equals :: PSCOMP_1:def 26 
|[(lower_bound (proj1 | ())),()]|; coherence |[(lower_bound (proj1 | ())),()]| is Point of () ; end; :: deftheorem defines W-min PSCOMP_1:def 19 : for X being Subset of () holds W-min X = |[(),(lower_bound (proj2 | ()))]|; :: deftheorem defines W-max PSCOMP_1:def 20 : for X being Subset of () holds W-max X = |[(),(upper_bound (proj2 | ()))]|; :: deftheorem defines N-min PSCOMP_1:def 21 : for X being Subset of () holds N-min X = |[(lower_bound (proj1 | ())),()]|; :: deftheorem defines N-max PSCOMP_1:def 22 : for X being Subset of () holds N-max X = |[(upper_bound (proj1 | ())),()]|; :: deftheorem defines E-max PSCOMP_1:def 23 : for X being Subset of () holds E-max X = |[(),(upper_bound (proj2 | ()))]|; :: deftheorem defines E-min PSCOMP_1:def 24 : for X being Subset of () holds E-min X = |[(),(lower_bound (proj2 | ()))]|; :: deftheorem defines S-max PSCOMP_1:def 25 : for X being Subset of () holds S-max X = |[(upper_bound (proj1 | ())),()]|; :: deftheorem defines S-min PSCOMP_1:def 26 : for X being Subset of () holds S-min X = |[(lower_bound (proj1 | ())),()]|; theorem Th29: :: PSCOMP_1:29 for P being Subset of () holds ( () 1 = () 1 & () 1 = () 1 & () 1 = () 1 & () 1 = () 1 & () 1 = () 1 ) proof end; theorem Th30: :: PSCOMP_1:30 for X being non empty compact Subset of () holds ( () 2 <= () 2 & () 2 <= () 2 & () 2 <= () 2 & () 2 <= () 2 & () 2 <= () 2 & () 2 <= () 2 ) proof end; theorem Th31: :: PSCOMP_1:31 for p being Point of () for Z being non empty Subset of () st p in W-most Z holds ( p 1 = () 1 & ( Z is compact implies ( () 2 <= p 2 & p 2 <= () 2 ) ) ) proof end; theorem Th32: :: PSCOMP_1:32 for X being non empty compact Subset of () holds W-most X c= LSeg ((),()) proof end; theorem :: PSCOMP_1:33 for X being non empty compact Subset of () holds LSeg ((),()) c= LSeg ((),()) proof end; theorem Th34: :: PSCOMP_1:34 for X being non empty compact Subset of () holds ( W-min X in W-most X & W-max X in W-most X ) proof end; theorem :: PSCOMP_1:35 for X being non 
empty compact Subset of () holds ( (LSeg ((),())) /\ X = {()} & (LSeg ((),())) /\ X = {()} ) proof end; theorem :: PSCOMP_1:36 for X being non empty compact Subset of () st W-min X = W-max X holds W-most X = {()} proof end; theorem Th37: :: PSCOMP_1:37 for P being Subset of () holds ( () 2 = () 2 & () 2 = () 2 & () 2 = () 2 & () 2 = () 2 & () 2 = () 2 ) proof end; theorem Th38: :: PSCOMP_1:38 for X being non empty compact Subset of () holds ( () 1 <= () 1 & () 1 <= () 1 & () 1 <= () 1 & () 1 <= () 1 & () 1 <= () 1 & () 1 <= () 1 ) proof end; theorem Th39: :: PSCOMP_1:39 for p being Point of () for Z being non empty Subset of () st p in N-most Z holds ( p 2 = () 2 & ( Z is compact implies ( () 1 <= p 1 & p 1 <= () 1 ) ) ) proof end; theorem Th40: :: PSCOMP_1:40 for X being non empty compact Subset of () holds N-most X c= LSeg ((),()) proof end; theorem :: PSCOMP_1:41 for X being non empty compact Subset of () holds LSeg ((),()) c= LSeg ((),()) proof end; theorem Th42: :: PSCOMP_1:42 for X being non empty compact Subset of () holds ( N-min X in N-most X & N-max X in N-most X ) proof end; theorem :: PSCOMP_1:43 for X being non empty compact Subset of () holds ( (LSeg ((),())) /\ X = {()} & (LSeg ((),())) /\ X = {()} ) proof end; theorem :: PSCOMP_1:44 for X being non empty compact Subset of () st N-min X = N-max X holds N-most X = {()} proof end; theorem Th45: :: PSCOMP_1:45 for P being Subset of () holds ( () 1 = () 1 & () 1 = () 1 & () 1 = () 1 & () 1 = () 1 & () 1 = () 1 ) proof end; theorem Th46: :: PSCOMP_1:46 for X being non empty compact Subset of () holds ( () 2 <= () 2 & () 2 <= () 2 & () 2 <= () 2 & () 2 <= () 2 & () 2 <= () 2 & () 2 <= () 2 ) proof end; theorem Th47: :: PSCOMP_1:47 for p being Point of () for Z being non empty Subset of () st p in E-most Z holds ( p 1 = () 1 & ( Z is compact implies ( () 2 <= p 2 & p 2 <= () 2 ) ) ) proof end; theorem Th48: :: PSCOMP_1:48 for X being non empty compact Subset of () holds E-most X c= LSeg ((),()) proof end; 
theorem :: PSCOMP_1:49 for X being non empty compact Subset of () holds LSeg ((),()) c= LSeg ((),()) proof end; theorem Th50: :: PSCOMP_1:50 for X being non empty compact Subset of () holds ( E-min X in E-most X & E-max X in E-most X ) proof end; theorem :: PSCOMP_1:51 for X being non empty compact Subset of () holds ( (LSeg ((),())) /\ X = {()} & (LSeg ((),())) /\ X = {()} ) proof end; theorem :: PSCOMP_1:52 for X being non empty compact Subset of () st E-min X = E-max X holds E-most X = {()} proof end; theorem Th53: :: PSCOMP_1:53 for P being Subset of () holds ( () 2 = () 2 & () 2 = () 2 & () 2 = () 2 & () 2 = () 2 & () 2 = () 2 ) proof end; theorem Th54: :: PSCOMP_1:54 for X being non empty compact Subset of () holds ( () 1 <= () 1 & () 1 <= () 1 & () 1 <= () 1 & () 1 <= () 1 & () 1 <= () 1 & () 1 <= () 1 ) proof end; theorem Th55: :: PSCOMP_1:55 for p being Point of () for Z being non empty Subset of () st p in S-most Z holds ( p 2 = () 2 & ( Z is compact implies ( () 1 <= p 1 & p 1 <= () 1 ) ) ) proof end; theorem Th56: :: PSCOMP_1:56 for X being non empty compact Subset of () holds S-most X c= LSeg ((),()) proof end; theorem :: PSCOMP_1:57 for X being non empty compact Subset of () holds LSeg ((),()) c= LSeg ((),()) proof end; theorem Th58: :: PSCOMP_1:58 for X being non empty compact Subset of () holds ( S-min X in S-most X & S-max X in S-most X ) proof end; theorem :: PSCOMP_1:59 for X being non empty compact Subset of () holds ( (LSeg ((),())) /\ X = {()} & (LSeg ((),())) /\ X = {()} ) proof end; theorem :: PSCOMP_1:60 for X being non empty compact Subset of () st S-min X = S-max X holds S-most X = {()} proof end; :: Degenerate cases theorem :: PSCOMP_1:61 for P being Subset of () st W-max P = N-min P holds W-max P = NW-corner P proof end; theorem :: PSCOMP_1:62 for P being Subset of () st N-max P = E-max P holds N-max P = NE-corner P proof end; theorem :: PSCOMP_1:63 for P being Subset of () st E-min P = S-max P holds E-min P = SE-corner P proof end; 
theorem :: PSCOMP_1:64 for P being Subset of () st S-min P = W-min P holds S-min P = SW-corner P proof end; theorem :: PSCOMP_1:65 for r, s being Real holds ( proj2 . |[r,s]| = s & proj1 . |[r,s]| = r ) proof end; :: Moved from JORDAN1E, AK, 23.02.2006 theorem :: PSCOMP_1:66 for X being non empty Subset of () for Y being compact Subset of () st X c= Y holds N-bound X <= N-bound Y by Th17; theorem :: PSCOMP_1:67 for X being non empty Subset of () for Y being compact Subset of () st X c= Y holds E-bound X <= E-bound Y by Th17; theorem :: PSCOMP_1:68 for X being non empty Subset of () for Y being compact Subset of () st X c= Y holds S-bound X >= S-bound Y by Th16; theorem :: PSCOMP_1:69 for X being non empty Subset of () for Y being compact Subset of () st X c= Y holds W-bound X >= W-bound Y by Th16;
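Informally, the corner functors above pick out the vertices of a set's axis-aligned bounding box: the SW-corner pairs the west bound with the south bound, and so on. A rough Python illustration for a finite point set — the function and names are ours, not Mizar's, and this only mimics the finite case of the formal definitions:

```python
# Illustrative only: Mizar's W-/E-/S-/N-bounds are bounds of the coordinate
# projections of a compact subset of TOP-REAL 2; for a finite point set they
# are simply the minima and maxima of the coordinates.

def bounding_corners(points):
    """Return the SW, NW, NE, SE corners of the bounding box of `points`."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w, e = min(xs), max(xs)   # W-bound, E-bound
    s, n = min(ys), max(ys)   # S-bound, N-bound
    return {"SW": (w, s), "NW": (w, n), "NE": (e, n), "SE": (e, s)}

corners = bounding_corners([(0.0, 0.0), (2.0, 3.0), (1.0, -1.0)])
```

Note that, as in the theorems above, the corners need not belong to the set itself — only to its bounding box.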
On-line version ISSN 1807-0302

### Comput. Appl. Math. vol.30 no.3 São Carlos 2011

#### http://dx.doi.org/10.1590/S1807-03022011000300010

Operational Tau approximation for a general class of fractional integro-differential equations

S. Karimi Vanani; A. Aminataei

Department of Mathematics, K.N. Toosi University of Technology, P.O. Box: 16315-1618, Tehran, Iran. E-mails: solatkarimi@dena.kntu.ac.ir / ataei@kntu.ac.ir

ABSTRACT

In this work, an extension of the algebraic formulation of the operational Tau method (OTM) for the numerical solution of linear and nonlinear fractional integro-differential equations (FIDEs) is proposed. The main idea behind the OTM is to convert the fractional differential and integral parts of the desired FIDE to some operational matrices. The FIDE then reduces to a set of algebraic equations. We demonstrate the Tau matrix representation for solving FIDEs based on arbitrary orthogonal polynomials. Some advantages of using the method, error estimation and a computer algorithm are also presented. Illustrative linear and nonlinear experiments are included to show the validity and applicability of the presented method.

Mathematical subject classification: 65M70, 34A25, 26A33, 47Gxx.

Key words: spectral methods, operational Tau method, fractional integro-differential equations, error estimation, computer algorithm of the method.

1 Introduction

The main object of this work is to solve the fractional integro-differential equation of the following form:

where F and G are given smooth functions, D^α is a fractional differential operator of order α in the Caputo sense, Bj, j = 0,1,...,m-1, are m supplementary conditions and u(x) is the solution to be determined.
Differential and integral equations involving derivatives of non-integer order have been shown to be adequate models for various phenomena arising in damping laws, diffusion processes, models of earthquakes [1], the fluid-dynamics traffic model [2], mathematical physics and engineering [3], fluid and continuum mechanics [4], chemistry, acoustics and psychology [5]. Some numerical methods for the solution of FIDEs are presented in the literature. We can point to the collocation method [6], the Adomian decomposition method (ADM) [7]-[9], the spline collocation method [10], the fractional differential transform method [11] and the method of combination of forward and central differences [12]. Of the aforesaid methods, we consider OTM for solving FIDEs. Spectral methods provide a computational approach which has achieved substantial popularity in the last three decades. The Tau method is one of the most important spectral methods and is extensively applied to the numerical solution of many problems. The method was invented by Lanczos [13] for solving ordinary differential equations (ODEs) and was later extended to many different problems such as partial differential equations (PDEs) [14]-[16], integral equations (IEs) [17], integro-differential equations (IDEs) [18] and others [19]-[22]. In this work, we are interested in solving FIDEs with an operational approach to the Tau method, because in the Tau method we deal with a system of equations wherein the matrix of unknown coefficients is sparse and easily invertible. Moreover, the differential and integral parts appearing in the equation are replaced by their operational Tau representations, so we obtain a system of algebraic equations that is easy to solve. This work is organized as follows. In Section 2, we briefly state the basic definitions of fractional calculus. Section 3 is devoted to introducing the OTM and its application to FIDEs.
In Section 4, the Legendre and Laguerre polynomials are considered as basis functions. An efficient error estimation for the proposed method is presented in Section 5. The accuracy and efficiency of the scheme are investigated with illustrative numerical experiments in Section 6. Finally, Section 7 consists of the conclusions.

2 Basic definitions of the fractional calculus

In this section, we give some basic definitions and properties of the fractional calculus theory, which are used in this work [3,23].

Definition 1. A real function u(x), x > 0, is said to be in the space C_µ, µ ∈ ℝ, if there exists a real number p > µ such that u(x) = x^p v(x), where v(x) ∈ C[0,∞), and it is said to be in the space C_µ^m iff u^(m)(x) ∈ C_µ, m ∈ ℕ.

Definition 2. The Riemann-Liouville fractional integral operator of order α > 0 of a function u(x) ∈ C_µ, µ > -1, is defined as

$$J^{\alpha}u(x)=\frac{1}{\Gamma(\alpha)}\int_{0}^{x}(x-t)^{\alpha-1}u(t)\,dt,\qquad J^{0}u(x)=u(x),$$

where Γ is the Gamma function. Some of the most important properties of the operator J^α, for u(x) ∈ C_µ, µ > -1, α, β > 0 and γ > -1, are: i) J^α J^β u(x) = J^{α+β} u(x); ii) J^α J^β u(x) = J^β J^α u(x); iii) J^α x^γ = (Γ(γ+1)/Γ(α+γ+1)) x^{α+γ}.

Definition 3. The fractional derivative of u(x) in the Caputo sense is defined as

$$D^{\alpha}u(x)=J^{m-\alpha}u^{(m)}(x)=\frac{1}{\Gamma(m-\alpha)}\int_{0}^{x}(x-t)^{m-\alpha-1}u^{(m)}(t)\,dt,$$

when m-1 < α < m, m ∈ ℕ, x > 0, u(x) ∈ C_{-1}^m.

3 Operational Tau method

In this section, we state the structure of OTM and its application to FIDEs. Also, we present some preliminaries and notations used in this work. For any integrable functions ψ(x) and ϕ(x) on [a,b], we define the scalar product by

$$\langle\psi,\phi\rangle=\int_{a}^{b}\omega(x)\psi(x)\phi(x)\,dx,$$

where ω(x) is a weight function. Let L²_ω[a,b] be the space of all functions f:[a,b] → ℝ with ⟨f,f⟩ < ∞. The main idea of the method is to seek a polynomial to approximate u(x). Let ϕ_x = ΦX_x be a set of arbitrary orthogonal polynomial bases defined by a lower triangular matrix Φ and X_x = [1,x,x²,...]^T.

Lemma 1. Suppose that u(x) is a polynomial, u(x) = uX_x; then u'(x) = uMX_x, xu(x) = uNX_x and ∫_a^x u(t) dt = uP(X_x - X_a), where u = [u_0,u_1,...,u_n,...], X_a = [1,a,a²,...]^T, a ∈ ℝ, and M, N and P are infinite matrices with only nonzero elements M_{i+1,i} = i+1, N_{i,i+1} = 1 and P_{i,i+1} = 1/(i+1).

Proof. See [24].
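Definition 3 admits a simple closed form on power functions, which is what the operational construction below exploits via property iii): for m-1 < α < m and integer k ≥ m, D^α x^k = (Γ(k+1)/Γ(k-α+1)) x^{k-α}, while polynomials of degree below m are annihilated. A small sketch of this rule (the helper is our own, not from the paper):

```python
import math

def caputo_power(k, alpha, x):
    """Caputo fractional derivative of u(x) = x**k at x, for integer k,
    using D^alpha x^k = Gamma(k+1)/Gamma(k-alpha+1) * x**(k-alpha)."""
    m = math.ceil(alpha)          # smallest integer with alpha <= m
    if k < m:
        return 0.0                # u^(m) vanishes, so the Caputo derivative is 0
    return math.gamma(k + 1) / math.gamma(k - alpha + 1) * x ** (k - alpha)

d = caputo_power(3, 1.0, 2.0)     # alpha = 1 recovers the classical derivative 3x^2
```

For α = 1 the rule reduces to ordinary differentiation, which gives a quick sanity check on the Gamma-function coefficient.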
Let us consider u(x) = uΦX_x to be an orthogonal series expansion of the exact solution of equations (1) and (2), where u = [u_0,u_1,u_2,...] is a vector of unknown coefficients and ΦX_x is an orthogonal basis for polynomials in L²_ω[a,b]. In the Tau method, the aim is to convert equations (1) and (2) to an algebraic system using some operational matrices. First of all, we describe an operational form of D^α u(x). Using equations (3), (4) and (7), we get:

Using property iii) of Definition 2, we have:

where Γ is an infinite diagonal matrix with elements

and

By approximating x^{m-α+γ}, γ = 0,1,..., as follows:

Substituting equation (10) in equation (9) and using equation (8), we obtain:

In the next step, the aim is to linearize the analytic functions F(x,u(x)) and G(x,t,u(t)). These functions can be written as:

and

Now, we state the following lemma and corollary.

Lemma 2. Let X_x = [1,x,x²,...]^T and u = [u_0,u_1,u_2,...] be infinite vectors and Φ = [ϕ_0|ϕ_1|ϕ_2|...], where the ϕ_i are the infinite columns of the matrix Φ. Then we have:

where U is an upper triangular matrix as:

In addition, if we suppose that u(x) = uΦX_x represents a polynomial, then for any positive integer p the relation

is valid.

Proof. We have:

therefore, if we call the last upper triangular coefficient matrix U, then we have:

Now, in order to prove equation (16), we apply induction. For p = 1, it is obvious that u(x) = uΦX_x. For p = 2, we rewrite u²(x) = (uΦX_x)(uΦX_x) = uΦ(X_x uΦX_x), and using equation (14) we have:

therefore, equation (16) holds for p = 2. Now, suppose that equation (16) holds for p = k; we must prove that the relation is valid for p = k+1. Writing u^{k+1}(x) = u(x)u^k(x) and applying equation (14) once more gives the required form. So, equation (16) is proved.
We rewrite equation (12) as:

The first summation can be considered as:

For the second summation, using equations (5) and (16) yields:

Therefore, we have:

In the same manner, we rewrite:

Using equation (5), the second summation is:

Therefore,

The first part can easily be written as follows:

where

Using equations (5) and (6), the second part of the equation is written as follows:

Thus, we have:

In addition, suppose that the supplementary conditions are generally of the form:

Therefore, using equation (4) we have:

Using equations (11), (18) and (27), we replace equation (1) by the following operational form:

So, the residual R(x) of equation (1) can be written as:

where

Now, we set the residual matrix R = 0, or we use the following inner products:

Therefore, an infinite system of algebraic equations is obtained. Since in practice we require a finite number of terms of the approximation, we must truncate the series. Thus, we choose the first n+1-m equations in R and the following system is obtained:

or

Solving the aforesaid system yields the unknown vector u = [u_0,u_1,...,u_n]. This is the so-called operational Tau method, which is applicable to finite, infinite, regular and irregular domains. We summarize OTM in the following algorithm.

4 Some orthogonal polynomials

Orthogonal functions have received considerable attention in dealing with several problems. Their most important characteristic is reducing the computations and converting the problem to a system of algebraic equations. In this work, we use Legendre polynomials for the finite domain [0,h] and Laguerre polynomials for the infinite domain [0,∞).

4.1 Legendre polynomials

The well-known Legendre polynomials are defined by the Rodrigues formula

$$P_{n}(x)=\frac{1}{2^{n}n!}\frac{d^{n}}{dx^{n}}\left[(x^{2}-1)^{n}\right].$$

They are orthogonal on the interval [-1,1] with respect to the weight function w(x) = 1 and satisfy

$$\int_{-1}^{1}P_{m}(x)P_{n}(x)\,dx=\frac{2}{2n+1}\,\delta_{mn}.$$

Since the interval of orthogonality of these polynomials may differ from the domain of the problem, we must shift the polynomials to the desired interval.
Thus, to construct the shifted Legendre polynomials on an arbitrary interval [0,h], it is sufficient to make the change of variable x ↦ 2x/h - 1. So, the shifted Legendre polynomials are defined by P̃_n(x) = P_n(2x/h - 1) and satisfy

$$\int_{0}^{h}\tilde{P}_{m}(x)\tilde{P}_{n}(x)\,dx=\frac{h}{2n+1}\,\delta_{mn}.$$

4.2 Laguerre polynomials

The Laguerre polynomials are defined by

$$L_{n}(x)=\frac{e^{x}}{n!}\frac{d^{n}}{dx^{n}}\left(x^{n}e^{-x}\right).$$

They are orthogonal on the interval [0,∞) with respect to the weight function w(x) = e^{-x} and satisfy

$$\int_{0}^{\infty}e^{-x}L_{m}(x)L_{n}(x)\,dx=\delta_{mn}.$$

5 Error estimation

In this section, an error estimation for the approximate solution of equation (1) with supplementary conditions (2) is obtained. Let us call e_n(x) = u(x) - u_n(x) the error function of the approximate solution u_n(x) to u(x), where u(x) is the exact solution of equation (1). Hence, u_n(x) satisfies the following equations:

The perturbation term H_n(x) can be obtained by substituting the computed solution u_n(x) into the equation:

We proceed to find an approximation e_{n,N}(x) to the error function e_n(x) in the same way as we did before for the solution of equation (1). Note that N is the degree of the approximation. By subtracting equations (31) and (32) from equations (1) and (2), respectively, we have:

or

It should be noted that in order to construct the approximation e_{n,N}(x) to e_n(x), only the related equations, like equations (7) through (31), need to be recomputed, and the structure of the method remains the same.

6 Illustrative numerical experiments with some comments

In this section, four experiments on linear and nonlinear FIDEs are given to illustrate the results. In all experiments, we consider the shifted Legendre polynomials as basis functions for finite domains and Laguerre polynomials for infinite domains. The computations associated with the experiments were performed in Maple 13 on a PC with a CPU of 2.4 GHz.

Experiment 6.1. Consider the following FIDE [6,7]:

with the initial condition u(0) = 0. The exact solution is u(x) = x³. We have solved this experiment using OTM with Laguerre polynomials, and some approximations are obtained as follows:

and so on.
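The shifted Legendre basis of Section 4.1 is easy to evaluate with the standard three-term recurrence (n+1)P_{n+1}(t) = (2n+1)tP_n(t) - nP_{n-1}(t) applied at t = 2x/h - 1. A sketch (the helper name is ours, not the paper's):

```python
def shifted_legendre(n, h, x):
    """Evaluate the degree-n Legendre polynomial shifted to [0, h] at x,
    i.e. P_n(2x/h - 1), via the standard three-term recurrence."""
    t = 2.0 * x / h - 1.0
    p_prev, p = 1.0, t            # P_0(t), P_1(t)
    if n == 0:
        return p_prev
    for k in range(1, n):
        # (k+1) P_{k+1} = (2k+1) t P_k - k P_{k-1}
        p_prev, p = p, ((2 * k + 1) * t * p - k * p_prev) / (k + 1)
    return p

val = shifted_legendre(2, 1.0, 1.0)   # P_2 at the right endpoint, where P_n(1) = 1
```

The recurrence avoids the factorials of the Rodrigues formula and is the usual way such bases are evaluated in practice.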
Thus, we have u(x) = x³, which is the exact solution of the problem.

Experiment 6.2. Consider the following nonlinear FIDE [8]:

with the boundary conditions u(0) = u'(0) = 1, u(1) = u'(1) = e. The only case in which we know the exact solution is α = 4, where u(x) = eˣ. We have solved this experiment for n = 7 with different α and have compared the results with the closed-form series solutions obtained by ADM [8]. The comparison is shown in Table 1. From the numerical results in Table 1, it is easy to conclude that the results obtained by OTM are in good agreement with those obtained using the ADM. In this experiment, the exact solution is not known, so a natural question arises: which method is more accurate? We conclude that OTM is more accurate by considering the following note and discussion.

Note 1. In the theory of fractional calculus, when the fractional order α (m-1 < α < m) tends to the positive integer m, the approximate solution continuously tends to the exact solution of the problem with derivative of order m. A closer look at the values obtained by ADM in Table 1 shows that they do not have this characteristic: the values for α = 3.25 are less than the values for α = 3.5, so the values for α = 3.5 should in turn be less than the values for α = 3.75. This does not occur for the ADM solutions, but for the OTM solutions we do observe this monotone behavior. Therefore, OTM is more reliable than ADM. Moreover, for α = 4, using Legendre polynomials, the following sequence of approximate solutions is obtained:

and so on. Thus, we obtain:

This has the closed form u(x) = eˣ, which is the exact solution of the problem. Thus, for positive integer derivatives, if the exact solution exists, then OTM produces its series solution.

Experiment 6.3. Consider the following nonlinear FIDE:

with the initial conditions:

The only case in which we know the exact solution is α = 2, where u(x) = sin x.
We have solved this experiment for n = 5 with different α, using Laguerre polynomials as basis functions. The results are given in Table 2. For α = 2, using Laguerre polynomials, the following sequence of approximate solutions is obtained:

and so on. Thus, we obtain:

This has the closed form u(x) = sin x, which is the exact solution of the problem.

Experiment 6.4. Consider the following nonlinear FIDE:

with the initial conditions:

The only case in which we know the exact solution is α = 2, where u(x) = e⁻ˣ. We have solved this experiment for n = 4 with different α, using shifted Legendre polynomials as basis functions. Figure 1 shows the approximate solutions and illustrates the fact stated in Note 1. The figure illustrates the convergence of the method and the fact that the method tends continuously to the exact solution as the fractional order tends to an integer order. For α = 2, using shifted Legendre polynomials, the following sequence of approximate solutions is obtained:

and so on. Thus, we obtain:

This has the closed form u(x) = e⁻ˣ, which is the exact solution of the problem.

7 Conclusion

In this work, the operational Tau method is employed successfully to solve FIDEs. Arbitrary orthogonal polynomial bases were applied as basis functions. Reducing the FIDEs to algebraic equations is the first characteristic of the proposed method. The applications of OTM to some problems including linear and nonlinear terms were considered and some useful results were obtained. The most important ones are the simplicity of the method, the reduction of the computations using orthogonal polynomials and the low run time of its algorithm. Furthermore, this method yields the desired accuracy in only a few terms of a series form of the exact solution. All of these advantages assert the OTM as a convenient, reliable and powerful tool for nonlinear problems.

REFERENCES

[1] J.H. He, Nonlinear oscillation with fractional derivative and its applications.
In: International Conference on Vibrating Engineering, Dalian, China (1998), 288-291.
[2] J.H. He, Some applications of nonlinear fractional differential equations and their approximations. Bull. Sci. Technol., 15 (1999), 86-90.
[3] I. Podlubny, Fractional Differential Equations. Academic Press, New York (1999).
[4] F. Mainardi, Fractals and Fractional Calculus Continuum Mechanics. Springer Verlag (1997), 291-348.
[5] W.M. Ahmad and R. El-Khazali, Fractional-order dynamical models of love. Chaos, Solitons & Fractals, 33 (2007), 1367-1375.
[6] E.A. Rawashdeh, Numerical solution of fractional integro-differential equations by collocation method. Applied Mathematics and Computation, 176 (2005), 1-6.
[7] R.C. Mittal and R. Nigam, Solution of fractional integro-differential equations by Adomian decomposition method. Int. J. of Appl. Math. and Mech., 4 (2008), 87-94.
[8] S. Momani and M.A. Noor, Numerical methods for fourth-order fractional integro-differential equations. Applied Mathematics and Computation, 182 (2006), 754-760.
[9] W.G. El-Sayed and A.M.A. El-Sayed, On the functional integral equations of mixed type and integro-differential equations of fractional orders. Applied Mathematics and Computation, 154 (2004), 461-467.
[10] A. Pedas and E. Tamme, Spline collocation method for integro-differential equations with weakly singular kernels. Journal of Computational and Applied Mathematics, 197 (2006), 253-269.
[11] D. Nazari and S. Shahmorad, Application of the fractional differential transform method to fractional-order integro-differential equations with nonlocal boundary conditions. Journal of Computational and Applied Mathematics, 234 (2010), 883-891.
[12] M.F. Al-Jamal and E.A. Rawashde, The Approximate Solution of Fractional Integro-Differential Equations. Int. J. Contemp. Math. Sciences, 4 (2009), 1067-1078.
[13] C. Lanczos, Trigonometric interpolation of empirical and analytical functions. J. Math. Phys., 17 (1938), 123-199.
[14] K.M. Liu and E.L. Ortiz, Numerical solution of eigenvalue problems for partial differential equations with the Tau-lines method. Comp. Math. Appl. B, 12 (1986), 1153-1168.
[15] E.L. Ortiz and K.S. Pun, Numerical solution of nonlinear partial differential equations with Tau method. J. Comp. Appl. Math., 12 (1985), 511-516.
[16] E.L. Ortiz and H. Samara, Numerical solution of partial differential equations with variable coefficients with an operational approach to the Tau method. Comp. Math. Appl., 10 (1984), 5-13.
[17] M.K. EL-Daou and H.G. Khajah, Iterated solutions of linear operator equations with the Tau method. Math. Comput., 66(217) (1997), 207-213.
[18] J. Pour-Mahmoud, M.Y. Rahimi-Ardabili and S. Shahmorad, Numerical solution of the system of Fredholm integro-differential equations by the Tau method. Applied Mathematics and Computation, 168 (2005), 465-478.
[19] K.M. Liu and E.L. Ortiz, Approximation of eigenvalues defined by ordinary differential equations with the Tau method. Matrix Pencils, Springer, Berlin (1983), 90-102.
[20] K.M. Liu and E.L. Ortiz, Tau method approximation of differential eigenvalue problems where the spectral parameter enters nonlinearly. J. Comput. Phys., 72 (1987), 299-310.
[21] K.M. Liu and E.L. Ortiz, Numerical solution of ordinary and partial functional-differential eigenvalue problems with the Tau method. Computing, 41 (1989), 205-217.
[22] E.L. Ortiz and H. Samara, Numerical solution of differential eigenvalue problems with an operational approach to the Tau method. Computing, 31 (1983), 95-103.
[23] S. Samko, A. Kilbas and O. Marichev, Fractional Integrals and Derivatives. Gordon and Breach, Yverdon (1993).
[24] K.M. Liu and C.K. Pan, The automatic solution to systems of ordinary differential equations by the Tau method. Computers Math. Applicat., 38 (1999), 197-210.
# Changes

## Papacy

, 3 years ago

Papal influence

The yearly change of papal influence is affected by the opinion of the Papal state towards a country, namely by 0.5% times this opinion.

::$\text{yearly papal influence} = \left(1 + \frac{\text{opinion of the Papal state}}{200}\right)\sum \text{modifiers}$

Papal influence can be spent in the following manner:
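As a quick illustration of the formula above in code (the function and variable names are ours, not the game's):

```python
def yearly_papal_influence(opinion, modifiers):
    """Yearly papal influence = (1 + opinion/200) * sum(modifiers):
    each point of Papal-state opinion adds 0.5% to the modifier total."""
    return (1.0 + opinion / 200.0) * sum(modifiers)

# At +100 opinion the base modifier total of 1.5 is scaled by 1.5.
gain = yearly_papal_influence(opinion=100, modifiers=[1.0, 0.5])
```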
# Sum of Cauchy sequences [duplicate] Possible Duplicate: Sum of Cauchy Sequences Cauchy? Let $(X,||\cdot||)$ be a normed space. Show that if $(x_{n})_{n}$ and $(y_{n})_{n}$ are Cauchy sequences in $X$, then the sequence $(x_{n}+y_{n})_{n}$ is also Cauchy in $X$. I have used the definitions: If $(x_{n})_{n}$ is Cauchy, $\forall\epsilon>0:\exists N\in\mathbb{N}:n,m\ge N\implies||x_{n}-x_{m}||<\frac{\epsilon}{2}$ If $(y_{n})_{n}$ is Cauchy, $\forall\epsilon>0:\exists M\in\mathbb{N}:n,m\ge M\implies||y_{n}-y_{m}||<\frac{\epsilon}{2}$ To come up with $||x_{n}+x_{m}|| + ||y_{n}+y_{m}|| \le \epsilon$ How can I rephrase the left hand side of the inequality to come up with $||(x_{n}+y_{n}) - (x_{m}+y_{m})||$ to show Cauchyness? - Triangle inequality. –  Asaf Karagila Feb 29 '12 at 15:50 I tried the triangle inequality, how're you supposed to go backwards with it to get it into the form desired? –  dplanet Feb 29 '12 at 15:51 Don't you mean $\|x_n - x_m\| + \|y_n - y_m\| \leq \epsilon$? Then do what Ragib suggested below. –  Neal Feb 29 '12 at 16:01 I am voting to close this as Duplicate! –  user21436 Mar 1 '12 at 7:21 You were almost there! Rewrite $\| (x_n + y_n ) - (x_m+ y_m) \| = \| (x_n - x_m) + (y_n - y_m) \|$ and apply the triangle inequality.
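Spelling out the accepted hint: for $n,m\ge\max(N,M)$,

$$\|(x_{n}+y_{n})-(x_{m}+y_{m})\| = \|(x_{n}-x_{m})+(y_{n}-y_{m})\| \le \|x_{n}-x_{m}\|+\|y_{n}-y_{m}\| < \frac{\epsilon}{2}+\frac{\epsilon}{2} = \epsilon,$$

so $(x_{n}+y_{n})_{n}$ is Cauchy.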
Pauls Online Notes

Home / Calculus II / Applications of Integrals / Surface Area

### Section 2-2 : Surface Area

1. Set up, but do not evaluate, an integral for the surface area of the object obtained by rotating $$x = \sqrt {y + 5}$$ , $$\sqrt 5 \le x \le 3$$ about the $$y$$-axis using,
   1. $$\displaystyle ds = \sqrt {1 + {{\left[ {\frac{{dy}}{{dx}}} \right]}^2}} \,dx$$
   2. $$\displaystyle ds = \sqrt {1 + {{\left[ {\frac{{dx}}{{dy}}} \right]}^2}} \,dy$$

   Solution

2. Set up, but do not evaluate, an integral for the surface area of the object obtained by rotating $$y = \sin \left( {2x} \right)$$ , $$\displaystyle 0 \le x \le \frac{\pi }{8}$$ about the $$x$$-axis using,
   1. $$\displaystyle ds = \sqrt {1 + {{\left[ {\frac{{dy}}{{dx}}} \right]}^2}} \,dx$$
   2. $$\displaystyle ds = \sqrt {1 + {{\left[ {\frac{{dx}}{{dy}}} \right]}^2}} \,dy$$

   Solution

3. Set up, but do not evaluate, an integral for the surface area of the object obtained by rotating $$y = {x^3} + 4$$ , $$1 \le x \le 5$$ about the given axis. You can use either $$ds$$.
   1. the $$x$$-axis
   2. the $$y$$-axis

   Solution

4. Find the surface area of the object obtained by rotating $$y = 4 + 3{x^2}$$ , $$1 \le x \le 2$$ about the $$y$$-axis. Solution

5. Find the surface area of the object obtained by rotating $$y = \sin \left( {2x} \right)$$ , $$\displaystyle 0 \le x \le \frac{\pi }{8}$$ about the $$x$$-axis. Solution
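For problem 4 above, the surface generated by rotating $$y = 4 + 3x^2$$ about the $$y$$-axis has area $$S = \int_1^2 2\pi x\sqrt{1 + 36x^2}\,dx$$, which integrates in closed form to $$\frac{\pi}{54}\left[(1+36x^2)^{3/2}\right]_1^2$$. A quick numerical cross-check — a sketch using a hand-rolled composite Simpson's rule rather than any particular library:

```python
import math

def surface_area_about_y_axis(dydx, a, b, n=1000):
    """S = integral of 2*pi*x*sqrt(1 + (dy/dx)^2) dx over [a, b],
    approximated by composite Simpson's rule (n must be even)."""
    f = lambda x: 2.0 * math.pi * x * math.sqrt(1.0 + dydx(x) ** 2)
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

numeric = surface_area_about_y_axis(lambda x: 6.0 * x, 1.0, 2.0)  # dy/dx = 6x
closed_form = math.pi / 54.0 * (145.0 ** 1.5 - 37.0 ** 1.5)
```

Agreement between the two values is a useful sanity check on both the setup of the integral and the antiderivative.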
h h A malleable material can be pounded into a very thin sheet. g Polymers are usually ductile; however there are brittle polymers available. Malleable metals, like copper and nickel, are able to be stretched out into thin wires. DBTT can also be influenced by external factors such as neutron radiation, which leads to an increase in internal lattice defects and a corresponding decrease in ductility and increase in DBTT. − e But in that case the phrasing "most ductile metals" should be changed. Introduction. The ductile–brittle transition temperature (DBTT), nil ductility temperature (NDT), or nil ductility transition temperature of a metal is the temperature at which the fracture energy passes below a predetermined value (for steels typically 40 J[18] for a standard Charpy impact test). Ductility is a mechanical property commonly described as a material's amenability to drawing (e.g. Ductility plays a major role in formability. Elongation results are affected by changes in gage length, specimen geometry, and speed of testing or strain rate. In metallic bonds valence shell electrons are delocalized and shared between many atoms.   0 [4] Materials that are generally described as ductile include gold and copper. However, lead shows an exception by becoming more brittle when it is heated. The ductility decreases sharply as the grain size in a polycrystalline metal is reduced. Copper historically has served as an excellent conductor of electricity, but it can conduct just about anything. A ∗ n For experiments conducted at higher temperatures, dislocation activity[clarification needed] increases. FCC metals remain ductile down to very low temperatures. Recall pulling is applying tensile stress. Most metals are both malleable and ductile.   A popular example of this is the sinking of the Titanic. 
The relatively good deformability of metals (also referred to as ductility) compared to other materials is a significant feature. The reason for this lies in the special metallic bond. The good formability is the basis for many manufacturing processes such as bending, deep drawing, forging, etc. The term "ductile" literally means that a metal substance is capable of being stretched into a thin wire without becoming weaker or more brittle in the process.[1] In materials science, ductility is defined by the degree to which a material can sustain plastic deformation under tensile stress before failure. More ductile metals are those that more readily twin. Metals are a common type of ductile material. Many reasons have been hypothesized for why the ship sank, and among those reasons is the impact of the cold water on the steel of the ship's hull. In addition, heat dissipation caused by devices that contact the specimen, such as grips and extensometers, becomes a factor when specimens are not tested at ambient temperatures. Ductility is a measure of a metal's ability to withstand tensile stress, that is, any force that pulls the two ends of an object away from each other. Percent elongation, or engineering strain at fracture, can be written as:[14][15][16]

$$\%EL = \frac{l_f - l_0}{l_0} \times 100$$

where $$l_0$$ is the original gage length, $$l_f$$ is the gage length at fracture, and the area of concern is the cross-sectional area of the gage of the specimen. According to Shigley's Mechanical Engineering Design,[17] significant denotes about 5.0 percent elongation. High degrees of ductility occur due to metallic bonds, which are found predominantly in metals; this leads to the common perception that metals are ductile in general. Gold, platinum, and silver often are drawn into long strands for use in jewelry, for example. Take iron, for example, with its physical ductility of D = 0.43. The game of tug-of-war provides a good example of tensile stress being applied to a rope.
Ductility is the plastic deformation that occurs in metal as a result of such types of strain. However, ductility is not an absolute constant for a metal or alloy under all conditions. Ductility of metals defines their ability to deform under tensile stress; this is often characterized by the metal's ability to be stretched into a wire. As a measure of ductility: ductility is the capacity of a material to deform permanently in response to stress. The most accurate method of measuring the DBTT of a material is by fracture testing. Reduction of area is given by

$$\%RA = \frac{\text{change in area}}{\text{original area}} = \frac{A_0 - A_f}{A_0} \times 100$$

[12] The quantities commonly used to define ductility in a tension test are percent elongation (sometimes denoted $$\%EL$$) and reduction of area (sometimes denoted $$\%RA$$). For example, the bodies of cars and trucks need to be formed into specific shapes, as do cooking utensils, cans for packaged food and beverages, construction materials, and more. Metals that have high ductility include gold, platinum, silver and iron. Metals like tungsten, rhenium, tantalum, hafnium, and osmium show this property. Ductility is especially important in metalworking, as materials that crack, break or shatter under stress cannot be manipulated using metal-forming processes such as hammering, rolling, drawing or extruding. If experiments are performed at a higher strain rate, more dislocation shielding is required to prevent brittle fracture, and the transition temperature is raised. Ductile metals can be used for many different applications beyond conductive wiring; they are especially common in construction projects, such as bridges, and in factory settings for things such as pulley mechanisms.
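The two tensile-test ductility measures discussed here, percent elongation and percent reduction of area, reduce to one-line formulas. A small sketch with hypothetical specimen numbers (the gage lengths and areas below are made-up illustration values, not from the text):

```python
def percent_elongation(l0, lf):
    """%EL = (l_f - l_0) / l_0 * 100, from gage length before and at fracture."""
    return (lf - l0) / l0 * 100.0

def percent_reduction_of_area(a0, af):
    """%RA = (A_0 - A_f) / A_0 * 100, from cross-sectional area before and at fracture."""
    return (a0 - af) / a0 * 100.0

# Hypothetical specimen: a 50 mm gage stretches to 65 mm before fracture,
# while the cross-section necks down from 100 mm^2 to 60 mm^2.
el = percent_elongation(50.0, 65.0)        # 30 percent elongation
ra = percent_reduction_of_area(100.0, 60.0)  # 40 percent reduction of area
```

Higher values of either measure indicate a more ductile specimen under the given test conditions.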
In some materials, the transition is sharper than others and typically requires a temperature-sensitive deformation mechanism. Gage length is important to elongation values; as the gage length increases, elongation values become smaller. As temperature decreases, a ductile material can become brittle (the ductile-to-brittle transition), and alloying usually increases the ductile-to-brittle transition temperature. An increase in temperature will increase ductility. Ductility owes to the ability of metal atoms to slip over each other and deform under stress; one such deformation process is called twinning. Practically, a ductile material is a material that can easily be stretched into a wire when pulled as shown in the figure below. By contrast, malleability is the measure of a metal's ability to withstand compression, such as hammering, rolling, or pressing. For example, in materials with a body-centered cubic (bcc) lattice the DBTT is readily apparent, as the motion of screw dislocations is very temperature sensitive because the rearrangement of the dislocation core prior to slip requires thermal activation. Temperature also impacts ductility in metals.[5] Malleability, a similar mechanical property, is characterized by a material's ability to deform plastically without failure under compressive stress. The temperature at which this occurs is the ductile–brittle transition temperature. The water was colder than the ductile-brittle transition temperature of the metal in the ship's hull, increasing how brittle it was and making it more susceptible to damage.
If the base metal properties do not match the strength, ductility and corrosion resistance that they should, their life may be much shorter. Ductility is desirable in the high temperature and high pressure applications in reactor plants because of the added stresses on the metals. Malleability in metals is useful in multiple applications that require specific shapes designed from metals that have been flattened or rolled into sheets. Materials that are extremely ductile can be stretched thin without cracking and losing their strength. In metallic bonds, the valence shell electrons are delocalised and shared between many atoms. Typically four point bend testing at a range of temperatures is performed on pre-cracked bars of polished material. DBTT is a very important consideration in selecting materials that are subjected to mechanical stresses. For example, zamak 3 exhibits good ductility at room temperature but shatters when impacted at sub-zero temperatures. In other words, most metals become more ductile when they're heated and can be more easily drawn into wires without breaking. Many plastics and amorphous solids, such as Play-Doh, are also malleable. Malleable materials can be formed cold using stamping or pressing, whereas brittle materials may be cast or thermoformed. Most common steels, for example, are quite ductile and hence can accommodate local stress concentrations.

In malleable metals, atoms roll over each other into new, permanent positions without breaking their metallic bonds. Metals exposed to temperatures below their ductile-brittle transition point are susceptible to fracturing, making this an important consideration when choosing which metals to use in extremely cold environments. Gold is usually alloyed with base metals for use in jewellery, altering its hardness and ductility, melting point, color and other properties. For the general ductility treatment of Section VII, the ductility measure for cast iron is about that of T/C = 2/5 = 0.4, giving general agreement between the two very different methodologies, which still have the same limits. When metals are heated their ductility increases; lead proves to be an exception to this rule, as it becomes more brittle as it is heated. For ceramics, this type of transition occurs at much higher temperatures than for metals. As they are heated, metals generally become less brittle, allowing for plastic deformation. Considering the properties of non-metals: they are not shiny, malleable or ductile, nor are they good conductors of electricity.[1] Lead is an example of a material which is, relatively, malleable but not ductile.[5][8] A metal's ductile-brittle transition temperature is the point at which it can withstand tensile stress or other pressure without fracturing.[2][3] Ductility is an important consideration in engineering and manufacturing, defining a material's suitability for certain manufacturing operations (such as cold working) and its capacity to absorb mechanical overload. Gold and platinum are generally considered to be among the most ductile metals; ductility is the capacity to undergo a change of physical form without breaking, and high ductility and very low hardness made gold easy to work using primitive techniques. Metals with low ductilities, such as bismuth, will rupture when they're put under tensile stress.[10][11] When highly stretched, such metals distort via formation, reorientation and migration of dislocations and crystal twins without noticeable hardening. The ductility of a material will change as its temperature is changed; in fact, it gets modified by the process parameters, which is why the same material may show different formability in different forming processes. Copper, aluminum, and steel are examples of ductile metals. One ounce of gold could be drawn to a length of 50 miles. At a certain temperature, dislocations shield the crack tip to such an extent that the applied deformation rate is not sufficient for the stress intensity at the crack-tip to reach the critical value for fracture (KIC). Some materials become brittle under extreme conditions, such as at high pressures or when frozen solid. Brittle materials, such as glass, cannot accommodate concentrations of stress.
Ductility is usually defined as the extent to which a material can be deformed plastically, and it is measured in uniaxial tension. Usually, if two materials have the same strength and hardness, the one that has the higher ductility is more desirable. The atomic particles that make up metals can deform under stress either by slipping over each other or by stretching away from each other. A similar phenomenon, the glass transition temperature, occurs with glasses and polymers, although the mechanism is different in these amorphous materials. Aluminum, which is used in cans for food, is an example of a metal that is malleable but not ductile. Fracture strain is the strain at which a test specimen fractures during a uniaxial tensile test; by this measure, the most ductile metal is gold. Ductility also depends on the alloying constituents of a metal, and in nano-structured materials it can increase due to grain boundary sliding. Copper has long been valued for its resistance to corrosion, its electrical conductivity, and its ductility. Malleability, by contrast, is the ability to undergo significant plastic deformation under compressive stress before rupture, and the contrasting properties of non-metals provide one means by which we can distinguish metals from non-metals.
FEATool Multiphysics v1.15.5 Finite Element Analysis Toolbox

Vibration Modes of a Hollow Cylinder

This model studies the vibration modes of a free and hollow cylinder using an axisymmetric approximation. No loads or constraints are applied as the free modes are sought. The cylinder is 10 m long, has a center diameter of 2 m, and is 0.4 m thick. Moreover, the material of the cylinder is considered to be steel with E = 2·10^11 Pa, density 8000 kg/m^3, and a Poisson's ratio of 0.3. Target free vibration frequencies are given in [1].

# Tutorial

This model is available as an automated tutorial by selecting Model Examples and Tutorials... > Structural Mechanics > Vibration Modes of a Hollow Cylinder from the File menu. Or alternatively, follow the step-by-step instructions below.

1. To start a new model click the New Model toolbar button, or select New Model... from the File menu.
2. Select the Axisymmetry radio button.
3. Select the Axisymmetric Stress-Strain physics mode from the Select Physics drop-down menu.
4. Press OK to finish the physics mode selection.
5. To create a rectangle, first click on the Create square/rectangle Toolbar button. Then left click in the main plot axes window, and hold down the mouse button. Move the mouse pointer to draw the shape outline, and release the button to finalize the shape.
6. Select R1 in the geometry object Selection list box.
7. To modify and edit the selected rectangle, click on the Inspect/edit selected geometry object Toolbar button to open the Edit Geometry Object dialog box.
8. Enter 1.8 into the xmin edit field.
9. Enter 2.2 into the xmax edit field.
10. Enter 0 into the ymin edit field.
11. Enter 10 into the ymax edit field.
12. Press OK to finish and close the dialog box.
13. Switch to Grid mode by clicking on the corresponding Mode Toolbar button.
14. Enter 0.2 into the Grid Size edit field.
15. Press the Generate button to call the grid generation algorithm.
16. Switch to Equation mode by clicking on the corresponding Mode Toolbar button.
17. Enter 0.3 into the Poisson's ratio edit field.
18. Enter 2e11 into the Modulus of elasticity edit field.
19. Enter 8000 into the Density edit field.
20. Press OK to finish the equation and subdomain settings specification.
21. Switch to Boundary mode by clicking on the corresponding Mode Toolbar button.
22. Select 1, 2, 3, and 4 in the Boundaries list box.
23. Press OK to finish the boundary condition specification.
24. Switch to Solve mode by clicking on the corresponding Mode Toolbar button.
25. Press the Settings Toolbar button.
26. Select Eigenvalue from the Solution and solver type drop-down menu.
27. Press the Solve button.

Open the postprocessing settings dialog box, look at the Solution eigenvalue/frequency drop-menu box, and verify that they correspond to the target values 0, 243.8, 378.5, 394, 397.5, 405 Hz. Select the second mode and plot its Total displacement as well as a Deformation plot with a scale factor of 0.2.

1. Press the Plot Options Toolbar button.
2. Select Total displacement from the Predefined surface plot expressions drop-down menu.
3. Select the Deformation Plot check box.
4. Enter 0.2 into the Deformation scale factor edit field.
5. Select 2.34132e+06 (243.529 Hz) from the Available solutions/eigenvalues (frequencies) drop-down menu.
6. Press OK to plot and visualize the selected postprocessing options.

The vibration modes of a hollow cylinder structural mechanics model has now been completed and can be saved as a binary (.fea) model file, or exported as a programmable MATLAB m-script text file (available as the example ex_axistressstrain1 script file), or GUI script (.fes) file.

# CLI Postprocessing

To visualize the full 3D solution from the axisymmetric model, the data can be exported and processed on the MATLAB command line interface (CLI) console with the Export Model Data Struct to MATLAB option from the File menu.
The postrevolve and postplot functions can then be applied to revolve and visualize the data, for example

```
fea_revolved = postrevolve( fea, 36, 1 );

% Note that the radial coordinate is replaced by
% "x" and "y" in the revolved 3D fea data struct
postplot( fea_revolved, 'surfexpr', 'sqrt((sqrt(x^2+y^2)*u)^2+w^2)', ...
          'deformexpr', {'x*u', 'y*u', '0'}, 'deformscale', -0.2, ...
          'solnum', 4, 'parent', figure, 'axis', 'off', 'colorbar', 'off' )
view(3)
```

# Reference

[1] Abbassian F, Dawswell DJ, Knowles NC. Free Vibration Benchmarks. NAFEMS, Test 41, 1987.
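The eigenvalues listed in the solver's drop-down menu can be converted to the frequencies shown next to them, assuming the usual structural-dynamics convention that the reported eigenvalue is lambda = (2*pi*f)^2. A quick Python check of the second mode's value from the tutorial:

```python
import math

# Convention assumed here: an undamped structural eigenproblem reports
# eigenvalues lambda = (2*pi*f)^2, so f = sqrt(lambda) / (2*pi).
eigenvalue = 2.34132e6  # the value selected in the tutorial's drop-down menu
freq_hz = math.sqrt(eigenvalue) / (2 * math.pi)
```

This reproduces the 243.529 Hz label shown in the menu, close to the 243.8 Hz benchmark target.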
# Explain in detail Carnot engine

A Carnot engine is a reversible heat engine operating between two temperatures. The steps involved in a Carnot cycle are explained as follows.

Step 1 : Isothermal expansion of gas from state (P1, V1, T1) to state (P2, V2, T1), as shown in the figure.
Step 2 : Adiabatic expansion of gas from (P2, V2, T1) to (P3, V3, T2).
Step 3 : Isothermal compression of gas from (P3, V3, T2) to (P4, V4, T2).
Step 4 : Adiabatic compression of gas from (P4, V4, T2) to (P1, V1, T1).

These steps constitute one Carnot cycle. The process is repeated thereafter. The efficiency of the Carnot engine, $\eta$, is given by $\eta = \frac{Q_1 - Q_2}{Q_1} = \frac{T_1 - T_2}{T_1}$
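The efficiency formula translates directly into code. A small sketch with hypothetical reservoir temperatures (the 500 K and 300 K values are illustration only, not from the text):

```python
def carnot_efficiency(t_hot, t_cold):
    """eta = (T1 - T2) / T1 = 1 - T2/T1, with reservoir temperatures in kelvin."""
    return (t_hot - t_cold) / t_hot

# Hypothetical reservoirs: a 500 K heat source and a 300 K sink.
eta = carnot_efficiency(500.0, 300.0)
```

For these values the efficiency is 0.4, i.e. at most 40% of the absorbed heat can be converted to work by any engine operating between those two temperatures.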
# Math Help - Problem with vectors

1. ## Problem with vectors

Question: Forces of magnitude 3 Newtons, 5 Newtons and 7 Newtons act along the vectors (-3,-2,-9), (8,5,-9) and (4,-1,-9). Find the x component. I have the answer at 4.968, however I can not arrive at it; attached is my working out. d has been used instead of c. I have the x components as 2.531, -2.774 and -6.93, however these are not correct as they do not equal the answer of 4.968. Thanks

2. ## Re: Problem with vectors

You need to find the x-component of each of the three force vectors, then add them. So starting with 3 N acting along (-3, -2, -9): the magnitude of this directional vector is sqrt(3^2+2^2+9^2) = 9.695, and the portion acting in the x direction is -3/9.695 = -0.3094. Multiply by 3 N to get -0.928 N. This is the amount of force acting in the x-direction for the 3 N force. Repeat this for the other two vectors and add, and you should get Fx = -0.928N + 3.068N + 2.828N = 4.968N.

3. ## Re: Problem with vectors

Ah, this looks right, thank you. Can I ask why the -3 is divided by 9.695?

4. ## Re: Problem with vectors

What you really want is the directional vectors to be expressed as unit vectors (i.e. have magnitude 1). So the first directional vector, given as (-3, -2, -9), expressed as a unit vector is (-3/9.695, -2/9.695, -9/9.695), or (-0.3094, -0.2063, -0.9283). You should double check that this has magnitude of 1. Now you can multiply the x, y, and z components of this unit vector by 3 N to find the x, y, and z components of the 3 N force.

5. ## Re: Problem with vectors

Thanks ebaines, cleared that up for me.

6. ## Re: Problem with vectors

Hi ebaines, is it possible for you to give me some help to get started on another question.

Three vectors: a = xi - 3j + 9k, b = -3i + xj - 10k, c = 9i - 10j + xk. Find the largest value of x for which the magnitude of the resultant is equal to 17. I am given the correct answer of 8.564 but I don't know how to get that.
I have no working out to show for this as I don't know where to start, but some advice would be helpful. Thanks

7. ## Re: Problem with vectors

When you add vectors a, b and c the resultant is (x - 3 + 9)i + (-3 + x - 10)j + (9 - 10 + x)k, i.e. (x + 6)i + (x - 13)j + (x - 1)k. Now write out the formula for the magnitude of this vector, and solve for x such that the magnitude is 17.
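Both questions in this thread can be checked numerically. The sketch below follows the method of reply 2 for the x-components, and for the second question expands the magnitude condition |(x+6, x-13, x-1)| = 17 into the quadratic 3x^2 - 16x - 83 = 0:

```python
import math

# First question: scale each direction vector to a unit vector, take its
# x-entry, multiply by the force magnitude, and sum over the three forces.
forces = [3.0, 5.0, 7.0]
directions = [(-3.0, -2.0, -9.0), (8.0, 5.0, -9.0), (4.0, -1.0, -9.0)]

fx = 0.0
for f, d in zip(forces, directions):
    mag = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
    fx += f * d[0] / mag

# Second question: |(x+6, x-13, x-1)|^2 = 17^2 gives 3x^2 - 16x - 83 = 0;
# the larger root of the quadratic is the requested value.
a, b, c = 3.0, -16.0, -83.0
x_largest = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
```

The sum of x-components comes out to about 4.968 N and the larger root to about 8.564, matching the answers quoted in the thread.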
# Given a steady state vector is it possible to calculate the corresponding transition (probability) matrix

Knowing that there is a probability matrix M (where all columns add to 1) which, when applied to a given vector P, produces the same vector P, what is the best solution to find M? I can get my head around it for a 2x2 example but cannot work out a general solution for larger matrices. My simple example is where we know our steady state vector P: $$\begin{bmatrix}1/3\\2/3\end{bmatrix}$$ So with eigenvalue = 1, looking for matrix M where MP = P, therefore (M - I)P = 0: $$\begin{bmatrix}a-1&b\\c&d-1\end{bmatrix}$$ This gives equations: (a-1) 1/3 + b 2/3 = 0, a = 0.5 b; c 1/3 + (d-1) 2/3 = 0, c = 2 - 2d; and because it is a probability matrix we know c = 1-a and d = 1-b. We can substitute to find the values of M: $$\begin{bmatrix}0.2&0.4\\0.8&0.6\end{bmatrix}$$ But my question is how do I find the solution for bigger vectors, such as the following where M would be 4x4, or even bigger where M is 10x10 etc. What is the fastest way to compute the solution? $$\begin{bmatrix}0.1\\0.2\\0.4\\0.3\end{bmatrix}$$

The first equation, $$\frac13 (a-1) + \frac23 b = 0,$$ is actually equivalent to $$a = 1 - 2b.$$ It does not necessarily imply that $$a = \frac12b.$$ It just happens that by setting $$a=\frac15$$ and $$b = \frac25,$$ both the actual condition implied by $$(M-I)P$$ and the additional condition $$a = \frac12b$$ are satisfied. But that is not the only solution.
For example, try $$M = M_1 = \begin{pmatrix} \frac13 & \frac13 \\ \frac23 & \frac23 \end{pmatrix}.$$ This satisfies both $$\frac13 (a-1) + \frac23 b = 0$$ and $$\frac13 c + \frac23 (d-1) = 0,$$ but most importantly, its columns each sum to $$1$$ and it satisfies $$M_1P = P.$$ In fact, it satisfies $$M_1v = P$$ for any vector $$v$$ whose entries have the sum $$1.$$ A trivial example, of course, is $$M=I$$, since $$IP = P.$$ Moreover, if $$M_1P = P$$ and $$M_2P=P$$ then $$(qM_1)P = qP$$ and $$(rM_2)P = rP,$$ so $$(qM_1 + rM_2)P = (q+r)P.$$ If also $$q+r=1$$ then $$(qM_1 + rM_2)P = P,$$ that is, the linear combination of matrices $$qM_1 + rM_2$$ satisfies the conditions you set for $$M.$$ In other words, any weighted average of two matrices that satisfy the conditions for $$M$$ (that is, each matrix is multiplied by a scalar, the sum of the scalars is $$1,$$ and after scalar multiplication the resulting matrices are added together) will also satisfy the requirements of $$M.$$ Since you can choose $$q$$ to be anything you want and then set $$r = 1 - q$$ to satisfy $$q+r=1,$$ this gives you infinitely many choices as long as you can find two different solutions for $$M$$ (which we have done). For example, the matrix you found in the question is $$\frac65 M_1 - \frac15 I.$$ To derive this linear combination of matrices you simply set $$q M_1 + r I$$ equal to the matrix in the question and see if you can solve for $$q$$ and $$r$$. As it turns out, you can, and the result is $$q=\frac65,$$ $$r=-\frac15.$$ You can check the result by writing out the results of the scalar multiplications and matrix addition in $$\frac65 M_1 - \frac15 I$$ and comparing the final result with your original matrix.
As we can see from these examples, the choice of a matrix $$M$$ is not uniquely determined by the requirements that $$M$$ be a probability matrix and that $$MP = P.$$ A general approach to finding an $$n\times n$$ matrix $$M$$ for a vector $$P$$ of $$n$$ entries is to identify a non-zero entry in $$P$$ (there must be at least one of these). Suppose $$p_k \neq 0.$$ Then construct a set of $$n(n-1)$$ matrices of dimension $$n\times n$$ as follows: For each $$i$$ and $$j$$ such that $$1\leq i \leq n,$$ $$1\leq j \leq n,$$ and $$i\neq k,$$ construct a matrix $$B(j,i)$$ with entries $$b_{ji} = 1$$ and $$b_{jk} = -\frac{p_i}{p_k},$$ and set all other entries of this matrix to zero. Each such matrix $$B(j,i)$$ satisfies the condition $$B(j,i)\,P = 0,$$ since the only non-zero entry of $$B(j,i)\,P$$ is $$p_i - \frac{p_i}{p_k}\,p_k = 0.$$ These matrices are all independent of each other and span a vector space $$\mathcal L$$ of $$n(n-1)$$ dimensions. If $$L$$ is any matrix in that vector space, then $$LP = 0.$$ We can construct an affine space of matrices $$\mathcal A$$ by adding the identity matrix $$I$$ to each matrix $$L$$ in $$\mathcal L.$$ If $$A$$ is a matrix in $$\mathcal A$$ then for some $$L \in \mathcal L,$$ $$AP = (I + L)P = P + LP = P + 0 = P,$$ and therefore $$M=A$$ is a solution of $$MP=P.$$ Another way to put it is, given that $$p_k\neq 0,$$ you can construct an $$n\times n$$ matrix $$A$$ by putting any real numbers you like everywhere except in the $$k$$th column and then setting $$a_{j,k}$$ so that $$a_{j,k} = \frac{1}{p_k} (p_j - a_{j,1}p_1 - \cdots - a_{j,k-1}p_{k-1} - a_{j,k+1}p_{k+1} - \cdots - a_{j,n}p_n).$$ The independent choices you make in each of the $$n(n-1)$$ entries other than the $$k$$th column give you a space of $$n(n-1)$$ dimensions. As long as $$P\neq 0,$$ however, $$M=0$$ is not a solution to $$MP = P$$ and therefore the space is an affine space rather than a vector space.
But this only shows that a matrix in this affine space is a solution to the equation $$MP=P.$$ Not all matrices in the space are probability matrices. The requirement that $$M$$ must be a probability matrix imposes more constraints. To help discuss the implications of these constraints, suppose that a particular probability matrix $$A$$ can be written $$A = I + r_{11} B(1,1) + \cdots + r_{1n} B(1,n) + \cdots + r_{n1} B(n,1) + \cdots + r_{nn} B(n,n),$$ that is, as the sum of $$I$$ and a linear combination of the matrices $$B(j,i)$$ previously described where $$j$$ runs from $$1$$ to $$n$$ and $$i$$ runs from $$1$$ to $$n$$ excluding $$k.$$ One set of constraints says that every entry of $$A$$ must be in $$[0,1].$$ For each $$i \neq k,$$ we have $$a_{ii} = 1 + r_{ii},$$ so the constraint implies that $$-1 \leq r_{ii} \leq 0.$$ But for $$i\neq k$$ and $$i\neq j$$ we have $$a_{ji} = r_{ji},$$ so the constraint implies that $$0 \leq r_{ji} \leq 1.$$ A further constraint is that the sum of each column must be $$1.$$ But the matrix $$I$$ has entry $$1$$ in each column, so the sum of entries in column $$i$$ of $$A$$ (where $$i\neq k$$) is $$1 = 1 + r_{1i} + \cdots + r_{ni},$$ so the sum of all the coefficients $$r_{ji}$$ must be zero. This implies that the contributions of the matrices $$B(j,i)$$ to the sum of entries in column $$k$$ is also zero, so we find that the $$k$$th column also sums to $$1$$ without imposing any further constraint. However, the constraints on the sum of $$r_{ji}$$ for every $$i$$ such that $$i\neq k$$ reduces the dimension of the solution by $$n - 1.$$ The remaining solution is a convex subset (bounded by the conditions $$-1 \leq r_{ii} \leq 0$$ and $$0 \leq r_{ji} \leq 1$$ for $$i\neq j$$) within an $$(n-1)^2$$-dimensional affine space. 
When $$n=2$$ the solution is a convex subset of a space of dimension $$(2-1)^2 = 1.$$ So it is no coincidence after all that the matrix found in the question is a linear combination of $$I$$ and the matrix $$M_1$$ found earlier in this answer. All solutions to that particular problem have that property. For the vector $$P = \begin{pmatrix} 1/3 \\ 2/3 \end{pmatrix},$$ the solution space of $$M$$ consists of all matrices that can be written $$\begin{pmatrix} 1&0 \\ 0&1 \end{pmatrix} + r \begin{pmatrix} -2 & 1 \\ 0&0 \end{pmatrix} - r \begin{pmatrix} 0&0 \\ -2 & 1 \end{pmatrix}$$ where $$0 \leq r \leq 1.$$ In particular, $$\begin{pmatrix} 0.2 & 0.4 \\ 0.8 & 0.6 \end{pmatrix} = \begin{pmatrix} 1&0 \\ 0&1 \end{pmatrix} + \frac25 \begin{pmatrix} -2 & 1 \\ 0&0 \end{pmatrix} - \frac25 \begin{pmatrix} 0&0 \\ -2 & 1 \end{pmatrix}$$ and $$\begin{pmatrix} \frac13 & \frac13 \\ \frac23 & \frac23 \end{pmatrix} = \begin{pmatrix} 1&0 \\ 0&1 \end{pmatrix} + \frac13 \begin{pmatrix} -2 & 1 \\ 0&0 \end{pmatrix} - \frac13 \begin{pmatrix} 0&0 \\ -2 & 1 \end{pmatrix}.$$ • Thanks for this David. So does that mean that, assuming we are told there is a 9x9 matrix M that when multiplied by a known vector P will give the same vector P, it is not possible to calculate the matrix M? Is this because, as per your example, there are multiple matrices M that would do the job? Is there a way to find any/all of the possible solutions for M? Apr 10, 2019 at 9:47 • Yes, there is an infinite number of possible matrices $M$ for each vector $P$ with $n$ entries, $n>1.$ The set of these matrices can be completely described as a vector space, so I have added this to the answer. Apr 10, 2019 at 11:42 • I'm confused, how can the set of matrices such that $P=MP$ for a fixed $P$ be a vector space? It cannot possibly be closed under either scalar multiplication or addition. What you want is the analogue of a vector space where linear combinations are replaced by convex combinations. 
– Ian Apr 10, 2019 at 11:52 • @David I'm not sure I follow the explanation of how to calculate a possible M. Could you give an example? I also don't follow the link between the infinite solutions you mentioned - I see that you linked my solution to your matrix M1 - but don't know how you formed this relationship. Apr 10, 2019 at 14:30 • I just happened to notice the relationship; that's how I found out about it. It helps if you can do small matrix calculations like this in your head. But I've given procedures for how you can either deduce or confirm the fact with pencil and paper. It isn't guaranteed that three arbitrary solutions to $MP=P$ would have this relationship; just a lucky coincidence in this case. Apr 10, 2019 at 21:25
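For the larger vectors asked about in the question, one valid choice of M (among the infinitely many discussed above) is the direct generalization of M_1: make every column of M equal to P. Each column then sums to 1, and MP = P times the sum of P's entries, which is P. A quick Python sketch for the 4-entry vector from the question:

```python
p = [0.1, 0.2, 0.4, 0.3]  # the 4-entry steady-state vector from the question
n = len(p)

# Every column of M equals p: row i is constant p[i], so column j is p itself.
# Then columns sum to 1, and (M p)_i = p_i * sum(p) = p_i since sum(p) = 1.
M = [[p[i]] * n for i in range(n)]

Mp = [sum(M[i][j] * p[j] for j in range(n)) for i in range(n)]
col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
```

This is only one point in the solution space; any other matrix in the affine family described in the answer works equally well.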
# projection matrix least squares

A linear model is defined as an equation that is linear in the coefficients. OLS in matrix form: let X be an n × k matrix where we have observations on k independent variables for n observations. The Linear Algebra View of Least-Squares Regression. Least-squares via QR factorization: for A ∈ R^(m×n) skinny and of full rank, factor A = QR with Q^T Q = I_n and R ∈ R^(n×n) upper triangular and invertible; the pseudo-inverse is (A^T A)^(-1) A^T = (R^T Q^T Q R)^(-1) R^T Q^T = R^(-1) Q^T, so x_ls = R^(-1) Q^T y, and the projection onto R(A) is given by the matrix A(A^T A)^(-1) A^T = A R^(-1) Q^T = Q Q^T. A least squares solution of $A\overrightarrow{x}=\overrightarrow{b}$ is a list of weights that, when applied to the columns of $A$, produces the orthogonal projection of $\overrightarrow{b}$ onto $\mbox{Col}A$. A projection matrix P is orthogonal iff P = P^*, where P^* denotes the adjoint matrix of P. Compared to the previous article where we simply used vector derivatives, we'll now try to derive the formula for least squares simply by the properties of linear transformations and the four fundamental subspaces of linear algebra. That is, $\|\vec x - \mathrm{proj}_V(\vec x)\| < \|\vec x - \vec v\|$ for all $\vec v \in V$ with $\vec v \neq \mathrm{proj}_V(\vec x)$. Application to the least squares approximation. Topics: Projection Using Matrix Algebra; Least Squares Regression; Orthogonalization and Decomposition; Exercises; Solutions. Orthogonal projection is a cornerstone of vector space methods, with many diverse applications. For a full column rank m-by-n real matrix A, the solution of the least squares problem becomes x̂ = (A^T A)^(-1) A^T b. Consider the problem Ax = b where A is an n×r matrix of rank r (so r ≤ n and the columns of A form a basis for its column space R(A)).
The proposed LSPTSVC finds a projection axis for every cluster in a manner that minimizes the within-class scatter and keeps the clusters of other classes far away. Using the expression (3.9) for b, the residuals may be written as $e = y - Xb = y - X(X'X)^{-1}X'y = My$ (3.11), where $M = I - X(X'X)^{-1}X'$ (3.12). The matrix M is symmetric ($M' = M$) and idempotent ($M^2 = M$). This problem has a solution only if $b \in R(A)$. The vector $\hat x$ is a solution to the least squares problem when the error vector $e = b - A\hat x$ is perpendicular to the subspace. Fix a subspace $V \subset \mathbb{R}^n$ and a vector $\vec x \in \mathbb{R}^n$. That is, $\hat y = Hy$ where $H = Z(Z'Z)^{-1}Z'$; Tukey coined the term "hat matrix" for H because it puts the hat on y. The orthogonal projection $\operatorname{proj}_V(\vec x)$ onto V is the vector in V closest to $\vec x$. The set of rows or columns of a matrix are spanning sets for the row and column space of the matrix. Residuals are the differences between the model fitted value and an observed value, or the predicted and actual values. For example, polynomials are linear but Gaussians are not. However, realizing that $v_1$ and $v_2$ are orthogonal makes things easier. Note: this method requires that A not have any redundant rows. A reasonably fast MATLAB implementation of the variable projection algorithm VARP2 for separable nonlinear least squares optimization problems. Curve Fitting Toolbox software uses the linear least-squares method to fit a linear model to data. This software allows you to efficiently solve least squares problems in which the dependence on some parameters is nonlinear. A projection matrix P is an n×n square matrix that gives a vector space projection from $\mathbb{R}^n$ to a subspace W. The columns of P are the projections of the standard basis vectors, and W is the image of P. A square matrix P is a projection matrix iff $P^2 = P$.
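The two defining properties just stated, symmetry and idempotence, can be checked numerically for a small example. The following stdlib-only Python sketch (the helper names are mine, not from any of the quoted sources) builds $P = A(A^TA)^{-1}A^T$ for a 3×2 design matrix and verifies $P^2 = P$ and $P = P^T$:

```python
# Sketch: verify that P = A (A^T A)^{-1} A^T is symmetric and idempotent
# for a small full-column-rank matrix A. Helper names are illustrative.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[1, 1], [1, 2], [1, 3]]
AtA = matmul(transpose(A), A)                       # [[3, 6], [6, 14]]
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
inv = [[AtA[1][1] / det, -AtA[0][1] / det],
       [-AtA[1][0] / det, AtA[0][0] / det]]         # explicit 2x2 inverse
P = matmul(matmul(A, inv), transpose(A))            # the 3x3 projection matrix
PP = matmul(P, P)
assert all(abs(P[i][j] - PP[i][j]) < 1e-9 for i in range(3) for j in range(3))
assert all(abs(P[i][j] - P[j][i]) < 1e-9 for i in range(3) for j in range(3))
```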
Least squares is a projection of b onto the columns of A. The matrix $A^TA$ is square, symmetric, and positive definite if A has independent columns; with $A^TA$ positive definite the matrix is invertible, and the normal equation produces $\hat u = (A^TA)^{-1}A^Tb$. Weighted and generalized least squares (4 min read, published July 01, 2018). This video provides an introduction to the concept of an orthogonal projection in least squares estimation. [Actually, here it is obvious what the projection is going to be if we realize that W is the x-y-plane.] We can write the whole vector of fitted values as $\hat y = Z\hat\beta = Z(Z'Z)^{-1}Z'Y$. After all, in orthogonal projection, we're trying to project stuff at a right angle onto our target space. Least squares via projections. Least-squares solutions and the Fundamental Subspaces theorem. Least Squares Method & Matrix Multiplication. View MATH140_lecture13.3.pdf from MATH 7043 at New York University. Orthogonal projection as closest point: the following minimizing property of orthogonal projection is very important (Theorem 1.1). It is a bit more convoluted to prove that any idempotent matrix is the projection matrix for some subspace, but that's also true. Linear Least Squares, Projection, Pseudoinverses (Cameron Musco): over-determined systems; in linear regression, A is a data matrix. Least squares and linear equations: minimize $\|Ax - b\|^2$; a solution of the least squares problem is any $\hat x$ that satisfies $\|A\hat x - b\| \le \|Ax - b\|$ for all x. $\hat r = A\hat x - b$ is the residual vector; if $\hat r = 0$, then $\hat x$ solves the linear equation Ax = b; if $\hat r \neq 0$, then $\hat x$ is a least squares approximate solution of the equation. In most least squares applications, m > n and Ax = b has no solution. Therefore, the projection matrix (and hat matrix) is given by $P = A(A^TA)^{-1}A^T$. We know how to do this using least squares. b is like your y-values: the values you want to predict.
Least Squares Solution (Linear Algebra, Naima Hammoud). Let A be an m × n matrix and $b \in \mathbb{R}^m$: many samples (rows), few parameters (columns). 1. Construct the matrix A and the vector b described by (4.2). Therefore, solving the least squares problem is equivalent to finding the orthogonal projection matrix P on the column space such that $Pb = A\hat x$. These are the least-squares estimates we've already derived, which are of course $\hat\beta_1 = c_{XY}/s_X^2 = (\overline{xy} - \bar x\,\bar y)/(\overline{x^2} - \bar x^2)$ (20) and $\hat\beta_0 = \bar y - \hat\beta_1\bar x$ (21), and this projection matrix is always idempotent. The fitted value for observation i, using the least squares estimates, is $\hat y_i = Z_i\hat\beta$. Orthogonality and Least Squares: inner product, length and orthogonality (36 min, 10 examples): overview of the inner product and length; four examples finding the inner product and length for the given vectors; how to find the distance between two vectors, with an example; orthogonal vectors and the law of cosines. A Projection Method for Least Squares Problems with a Quadratic Equality Constraint. Why least-squares is an orthogonal projection: by now, you might be a bit confused. But this is also equivalent to minimizing the sum of squares: $e_1^2 + e_2^2 + e_3^2 = (C + D - 1)^2 + (C + 2D - 2)^2 + (C + 3D - 2)^2$. $Pb = A\hat x$. Solution: x is the vector of linear coefficients in the regression. Find the least squares line that relates the year to the housing price index (i.e., let year be the x-axis and index the y-axis). The projection matrix for S? Proof. Some simple properties of the hat matrix are important in interpreting least squares. This column should be treated exactly the same as any other column in the X matrix. We know that $A^TA$ times our least squares solution is going to be equal to $A^Tb$; verify that it agrees with that given by equation (1). Overdetermined system. Least squares seen as projection: the least squares method can be given a geometric interpretation, which we discuss now.
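The sum-of-squares expression above comes from the inconsistent system C + D = 1, C + 2D = 2, C + 3D = 2. The normal equations for it can be solved in a few lines of plain Python; a sketch of the calculation, not library code:

```python
# Solve A^T A [C, D]^T = A^T b for the system C + D = 1, C + 2D = 2, C + 3D = 2.
A = [[1, 1], [1, 2], [1, 3]]
b = [1, 2, 2]

# Form A^T A and A^T b for the 3x2 system.
AtA = [[sum(A[i][r] * A[i][c] for i in range(3)) for c in range(2)]
       for r in range(2)]                                        # [[3, 6], [6, 14]]
Atb = [sum(A[i][r] * b[i] for i in range(3)) for r in range(2)]  # [5, 11]

# Solve the 2x2 normal equations by Cramer's rule.
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]              # 6
C = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det              # 2/3
D = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det              # 1/2
```

This reproduces the values D = 1/2 and C = 2/3 quoted earlier.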
Use the least squares method to find the orthogonal projection of b = [2 -2 1]' onto the column space of the matrix A. We note that $T = C'[CC']^{-}C$ is a projection matrix, where $[CC']^{-}$ denotes some g-inverse of $CC'$. Projections and Least-squares Approximations; projection onto 1-dimensional subspaces. In this work, we propose an alternative algorithm based on projection axes, termed least squares projection twin support vector clustering (LSPTSVC). One method of approaching linear analysis is the least squares method, which minimizes the sum of the squared residuals. Find a least squares solution if we multiply both sides by A transpose. (Do it for practice!) This is the projection of the vector b onto the column space of A. I know the linear algebra approach is finding a hyperplane that minimizes the distance between points and the plane, but I'm having trouble understanding why it minimizes the squared distance. The m-by-m projection matrix onto the subspace of columns of A (the range of the m-by-n matrix A) is $P = A(A^TA)^{-1}A^T = AA^{\dagger}$. We consider the least squares problem with a quadratic equality constraint (LSQE), i.e., minimizing $\|Ax - b\|_2$ subject to $\|x\|_2=\alpha$, without the assumption $\|A^\dagger b\|_2>\alpha$ which is commonly imposed in the literature. This calculates the least squares solution of the equation AX = B by solving the normal equation $A^TAX = A^TB$. Since our model will usually contain a constant term, one of the columns in the X matrix will contain only ones. Linear regression: least squares with orthogonal projection. The fitted value for observation i, using the least squares estimates, is $\hat y_i = Z_i\hat\beta$. Orthogonal Projection, Least Squares, Gram-Schmidt, Determinants, Eigenvalues (from MATH 415 at University of Illinois, Urbana-Champaign). If a vector $y \in \mathbb{R}^n$ is not in the image of A, then (by definition) the equation Ax = y has no solution. Suppose A is an m×n matrix with more rows than columns, and that the rank of A equals the number of columns.
{}
132k views Where can I find examples of good Mathematica programming practice? I consider myself a pretty good Mathematica programmer, but I'm always looking out for ways to either improve my way of doing things in Mathematica, or to see if there's something nifty that I haven't ... 3k views Injecting a sequence of expressions into a held expression Consider the following toy example: Hold[{1, 2, x}] /. x -> Sequence[3, 4] It will give Hold[{1, 2, Sequence[3, 4]}] ... 7k views Can one identify the design patterns of Mathematica? ... or are they unnecessary in such a high-level language? I've been thinking about programming style, coding standards and the like quite a bit lately, the result of my current work on a mixed .Net/... 4k views Replacement inside held expression I wish to make a replacement inside a held expression: f[x_Real] := x^2; Hold[{2., 3.}] /. n_Real :> f[n] The desired output is ... 3k views Compiling more functions that don't call MainEvaluate I would like to use Compile with functions defined outside Compile. For example if I have the two basic functions ... 2k views Pure Functions with Lists as arguments Assuming I have two function: example 1: add[{x_, y_, z_}] := x + y - z add[{1, 3, 5}] If use pure function,I know I can write it as : ... 398 views Is there a way to require confirmation for execution of certain cells? Often I have Notebooks where I generate several images and export them into files. Now when I want to change one image, I'd like to just re-evaluate the complete notebook, however I generally do not ... 2k views How to implement a regular grammar? What is the most simple, elegant way of implementing a rewrite-system defined as: \begin{aligned} \Sigma &= \{a_1, a_2, a_3, ...\} \\ N &= \{A_1, A_2, A_3, ...\} \\ \{\alpha_1 , \... 3k views Determine whether some expression contains a given symbol Given a symbol t and an expression expr, how can I determine whether or not the symbol t ... 
1k views Displaying a series obtained by evaluating a Taylor series Description of problem I would like to use Mathematica to display the series obtained by substituting a value for $x$ in a Taylor series expansion. The terms of the series will be rational numbers, ... 555 views Removing calls to MainEvalute when using inlined compiled closures This question is tightly related to the answer Shaving the last 50 ms off NMinimize. There @OleksandR shows how inlined closures can be used to eliminate calls to ... 645 views How to convert an image to a compressed string representation equivalent to one copied to clipboard by CopyToClipboard[image]? In my recent question I asked How to embed an image into a string? The suggested solution was ...
{}
# Verify a credit card with Luhn This is my solution for the Credit problem on CS50's (Introduction to CS course) Pset1. It involves using Luhn's Algorithm to test the validity of the credit card number entered and, based on a few conditions, attempts to identify the credit card company. check_length attempts to find the length of the number entered. check_company attempts to ID the company. check_luhn validates the number based on Luhn's Algorithm. Luhn's Algorithm: The formula verifies a number against its included check digit, which is usually appended to a partial account number to generate the full account number. This number must pass the following test: From the rightmost digit and moving left, double the value of every second digit. If the result of this doubling operation is greater than 9 (e.g., 8 × 2 = 16), then add the digits of the result (e.g., 16: 1 + 6 = 7, 18: 1 + 8 = 9) or, alternatively, the same final result can be found by subtracting 9 from that result (e.g., 16: 16 − 9 = 7, 18: 18 − 9 = 9). Take the sum of all the digits. If the total modulo 10 is equal to 0 (if the total ends in zero) then the number is valid according to the Luhn formula; otherwise it is not valid. I am not very comfortable with so many if-else conditions. I'd like to know if they can be eliminated. Any other changes are also welcome.
#include <stdio.h>
#include <cs50.h>
#include <stdbool.h>

int check_length(long);
void check_company(int, long);
bool check_luhn(long, int);

int length;

int main(void)
{
    long c = get_long("Enter Credit Card Number: ");
    check_length(c);
    check_luhn(c, length);
    if (check_luhn(c, length) == true)
    {
        check_company(length, c);
    }
    else
        printf("INVALID\n");
}

int check_length(long w)
{
    for (int i = 12; i < 16; i++)
    {
        long power = 1;
        for (int k = 1; k < i + 1; k++)
        {
            power = power * 10;
        }
        int scale = w / power;
        if (scale < 10 && scale > 0)
        {
            length = i + 1;
        }
    }
    return length;
}

void check_company(int x, long z)
{
    if (x == 15)
    {
        int y = z / 10000000000000; // z/10^13
        if (y == 34 || y == 37)
        {
            printf("AMEX\n");
        }
        else
        {
            printf("INVALID\n");
        }
    }
    else if (x == 13)
    {
        int y = z / 100000000000; // z/10^11
        if (y == 4)
        {
            printf("VISA\n");
        }
    }
    else if (x == 16)
    {
        int q = z / 1000000000000000;
        int y = z / 100000000000000;
        if (y == 51 || y == 52 || y == 53 || y == 54 || y == 55)
        {
            printf("MASTERCARD\n");
        }
        else if (q == 4)
        {
            printf("VISA\n");
        }
        else
            printf("INVALID\n");
    }
    else
        printf("INVALID\n");
}

bool check_luhn(long a, int b)
{
    int f = 0;
    int j = 0;
    for (int d = 1; d < b + 1; d++)
    {
        int e = a % 10;
        a = a / 10;
        if (d % 2 == 1)
        {
            f = f + e;
        }
        else
        {
            int m = 2 * e;
            int g = m % 10;
            int h = m / 10;
            j = j + g + h;
        }
    }
    int l = j + f;
    if (l % 10 == 0)
    {
        return true;
    }
    else
        return false;
}

• Why the focus on the line count? Can you tell us more about the challenge you completed? Questions should stand on their own, with all relevant information included. – Mast Apr 21 '20 at 12:10
• Please add a brief description of Luhn's Algorithm. We don't know what CS50 is and don't have access to it. – pacmaninbw Apr 21 '20 at 14:34
• @Mast Thanks for the comment. I want to know if the code can be made simpler. I wasn't very comfortable because I'd used a lot of if-else statements. I'd like to minimize those. Any other changes would be welcome. This was not a challenge, it was a problem on an introductory CS course.
– MelPradeep Apr 22 '20 at 3:36

Instead of all the if-else statements, you could use a switch statement, like so (note the braces around each case body, which C requires before a declaration such as int y):

switch (x)
{
    case 15:
    {
        int y = z / 10000000000000;
        if (y == 34 || y == 37)
        ...
        break;
    }
    case 13:
    {
        ...
        break;
    }
    default:
        printf("INVALID\n");
}

Also, to make your code look cleaner, eliminate any curly brackets around if statements that execute only one statement. For example,

if (l%10==0)
{
    return true;
}

could be:

if (l%10==0) return true;

Similarly,

if (y==51||y==52||y==53||y==54||y==55)

can be shortened to:

if (y>=51 && y<=55)

• Optionally OP could return (l%10==0), this would eliminate the else – Shaun H Apr 24 '20 at 10:55
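For cross-checking the arithmetic, here is a minimal pure-Python sketch of the same Luhn test (the function name luhn_valid is mine; the card numbers in the comment are standard test numbers, not real accounts):

```python
def luhn_valid(number: int) -> bool:
    """Return True if `number` passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(str(number))):
        d = int(ch)
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9          # same as adding the digits of the product
        total += d
    return total % 10 == 0

# e.g. luhn_valid(4111111111111111) is True, luhn_valid(4111111111111112) is False
```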
{}
# Atomic hydrogen is excited to the nth energy level. The maximum number of spectral lines which it can emit while returning to the ground state, is : $\frac{1}{2}n\left(n-1\right)$ $\frac{1}{2}n\left(n+1\right)$
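The first option can be checked by brute force: with levels 1 through n, each downward transition i → j with i > j gives one possible line, so there are C(n, 2) of them. A small Python sketch (my own check, not part of the original question):

```python
from math import comb

def max_spectral_lines(n: int) -> int:
    # Count all transitions i -> j with n >= i > j >= 1.
    return sum(1 for i in range(2, n + 1) for j in range(1, i))

# For every n this equals comb(n, 2) = n(n-1)/2, matching the first option.
```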
{}
regular expressions -- syntax for regular expressions A regular expression is a string that specifies a pattern that describes a set of matching subject strings. Typically the string is compiled into a deterministic finite automaton whose execution, guided by the subject string, determines whether there is a match. Characters match themselves, except for the following special characters. . [ { } ( ) \ * + ? | ^ $ Regular expressions are constructed inductively as follows: the empty regular expression matches the empty string; a concatenation of regular expressions matches the concatenation of the corresponding matching strings. Regular expressions separated by the character | match strings matched by any of them. Parentheses can be used for grouping, except that now, with the use of the Boost library, their insertion may alter the matching of subexpressions of ambiguous expressions. Additionally, the substrings matched by parenthesized subexpressions are captured for later use in replacement strings. Syntax for special characters • Wildcard • . -- match any character except the newline character • Anchors • ^ -- match the beginning of the string or the beginning of a line • $ -- match the end of the string or the end of a line • Sub-expressions • (...) -- marked sub-expression, may be referred to by a back-reference • \i -- match the same string that the i-th parenthesized sub-expression matched • Repeats • * -- match previous expression 0 or more times • + -- match previous expression 1 or more times • ?
-- match previous expression 1 or 0 times • {m} -- match previous expression exactly m times • {m,n} -- match previous expression at least m and at most n times • {,n} -- match previous expression at most n times • {m,} -- match previous expression at least m times • Alternation • | -- match expression to left or expression to right • Word and buffer boundaries • \b -- match word boundary • \B -- match within word • \< -- match beginning of word • \> -- match end of word • \` -- match beginning of string • \' -- match end of string • Character sets • [...] -- match any single character that is a member of the set • [abc] -- match either a, b, or c • [A-C] -- match any character from A through C • [^...] -- match non-listed characters, ranges, or classes • Character classes • [:alnum:] -- any alpha-numeric character • [:alpha:] -- any alphabetic character • [:blank:] -- any whitespace or tab character • [:cntrl:] -- any control character • [:digit:] -- any decimal digit • [:graph:] -- any graphical character (same as [:print:] except omits space) • [:lower:] -- any lowercase character • [:print:] -- any printable character • [:punct:] -- any punctuation character • [:space:] -- any whitespace, tab, carriage return, newline, vertical tab, and form feed • [:unicode:] -- any unicode character with code point above 255 in value • [:upper:] -- any uppercase character • [:word:] -- any word character (alphanumeric characters plus the underscore) • [:xdigit:] -- any hexadecimal digit character • "Single character" character classes • \d -- same as [[:digit:]] • \l -- same as [[:lower:]] • \s -- same as [[:space:]] • \u -- same as [[:upper:]] • \w -- same as [[:word:]] • \D -- same as [^[:digit:]] • \L -- same as [^[:lower:]] • \S -- same as [^[:space:]] • \U -- same as [^[:upper:]] • \W -- same as [^[:word:]] The special character \ may be confusing, as inside a string delimited by quotation marks ("..."), you type two of them to specify a special character, whereas inside a
string delimited by triple slashes (///...///), you only need one. Thus regular expressions delimited by triple slashes are more readable. To match \ against itself, double the number of backslashes. In order to match one of the special characters itself, precede it with a backslash or use regexQuote. Flavors of Regular Expressions The regular expression functions in Macaulay2 are powered by calls to the Boost.Regex C++ library, which supports multiple flavors, or standards, of regular expression. Since Macaulay2 v1.17, the Perl flavor is the default. In general, the Perl flavor supports all patterns designed for the POSIX Extended flavor, but allows for more fine-tuning in the patterns. Alternatively, the POSIX Extended flavor can be chosen by passing the option POSIX => true. One key difference is what happens when there is more than one way to match a regular expression: • Perl -- the "first" match is arrived at by a depth-first search. • POSIX -- the "best" match is obtained using the "leftmost-longest" rule; If there's a tie in the POSIX flavor, the rule is applied to the first parenthetical subexpression. The Perl flavor adds the following, non-backward compatible constructions: Non-marking grouping; i.e., a grouping that does not generate a sub-expression • (?#...) -- ignored and treated as a comment • (?:...) -- non-marked sub-expression, may not be referred to by a back-reference • (?=...) -- positive lookahead; consumes zero characters, only if pattern matches • (?!...) -- negative lookahead; consumes zero characters, only if pattern does not match • (?<=..) -- positive lookbehind; consumes zero characters, only if pattern could be matched against the characters preceding the current position (pattern must be of fixed length) • (?<!..) -- negative lookbehind; consumes zero characters, only if pattern could not be matched against the characters preceding the current position (pattern must be of fixed length) • (?>...)
-- match independently of the surrounding pattern and the expression will never backtrack into the pattern Non-greedy repeats • *? -- match the previous atom 0 or more times, while consuming as little input as possible • +? -- match the previous atom 1 or more times, while consuming as little input as possible • ?? -- match the previous atom 1 or 0 times, while consuming as little input as possible • {m,}? -- match the previous atom m or more times, while consuming as little input as possible • {m,n}? -- match the previous atom at between m and n times, while consuming as little input as possible Possessive repeats • *+ -- match the previous atom 0 or more times, while giving nothing back • ++ -- match the previous atom 1 or more times, while giving nothing back • ?+ -- match the previous atom 1 or 0 times, while giving nothing back • {m,}+ -- match the previous atom m or more times, while giving nothing back • {m,n}+ -- match the previous atom at between m and n times, while giving nothing back Back references • \g1 -- match whatever matched sub-expression 1 • \g{1} -- match whatever matched sub-expression 1 • \g-1 -- match whatever matched the last opened sub-expression • \g{-2} -- match whatever matched the last but one opened sub-expression • \g{that} -- match whatever matched the sub-expression named "that" • \k<that> -- match whatever matched the sub-expression named "that" • (?<NAME>...) -- named sub-expression, may be referred to by a named back-reference • (?'NAME'...) -- named sub-expression, may be referred to by a named back-reference See references below for more in depth syntax for controlling the backtracking algorithm. 
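Python's re module follows the same Perl-flavored semantics, so it can serve as a quick stand-in to see the greedy versus non-greedy distinction described above (an illustration only; Macaulay2 itself calls Boost.Regex):

```python
import re

s = "<b>one</b><b>two</b>"
greedy = re.search(r"<b>.*</b>", s).group()   # `.*` consumes as much as possible
lazy = re.search(r"<b>.*?</b>", s).group()    # `.*?` consumes as little as possible
# greedy == "<b>one</b><b>two</b>"; lazy == "<b>one</b>"
```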
String Formatting Syntax The replacement string in replace(String,String,String) and select(String,String,String) supports additional syntax for escape sequences as well as inserting captured sub-expressions: Using Perl regular expression syntax (default) • Syntax for inserting captured sub-expressions: • $& -- outputs what matched the whole expression • $` -- outputs the text between the end of the last match (or beginning if no previous match was found) and the start of the current match • $' -- outputs the text following the end of the current match • $+ -- outputs what matched the last marked sub-expression in the regular expression • $^N -- outputs what matched the last sub-expression to be actually matched • $n -- outputs what matched the n-th sub-expression • ${n} -- outputs what matched the n-th sub-expression • $+{NAME} -- outputs what matched the sub-expression named NAME (Perl syntax only) • $$ -- outputs a literal $ • Syntax for manipulating the captured groups: • \l -- converts the next character to be outputted to lower case • \u -- converts the next character to be outputted to upper case • \L -- converts all subsequent characters to be outputted to lower case, until it reaches \E • \U -- converts all subsequent characters to be outputted to upper case, until it reaches \E • \E -- terminates a \U or \L sequence • \ -- specifies an escape sequence (e.g. \\) Using POSIX Extended syntax (POSIX => true) • Syntax for inserting captured groups: • & -- outputs what matched the whole expression • \0 -- outputs what matched the whole expression • \n -- if n is in the range 1-9, outputs what matched the n-th sub-expression • \ -- specifies an escape sequence (e.g. \&) For the complete list, including characters escape sequences, see the Boost.Regex manual on format string syntax.
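Captured-group substitution can likewise be illustrated with Python's re as a stand-in; note that Python writes back-references in the replacement string as \1 or \g<name>, where the Boost Perl syntax above uses $1:

```python
import re

# Swap "last, first" into "first last" using two captured sub-expressions.
swapped = re.sub(r"(\w+), (\w+)", r"\2 \1", "Doe, Jane")
# swapped == "Jane Doe"
```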
Complete References For complete documentation on regular expressions supported in Macaulay2, see the Boost.Regex manual on Perl and Extended flavors, or read the entry for regex in section 7 of the unix man pages. In addition to the functions mentioned below, regular expressions appear in about, apropos, findFiles, copyDirectory, and symlinkDirectory.
{}
# Strictly Positive Rational Numbers are Closed under Multiplication

## Theorem

$\forall a, b \in \Q_{>0}: a b \in \Q_{>0}$

## Proof

Let $a$ and $b$ be expressed in canonical form:

$a = \dfrac {p_1} {q_1}, \quad b = \dfrac {p_2} {q_2}$

where $p_1, p_2 \in \Z$ and $q_1, q_2 \in \Z_{>0}$.

As $a, b \in \Q_{>0}$, it follows that $p_1, p_2 \in \Z_{>0}$.

By definition of rational multiplication:

$\dfrac {p_1} {q_1} \times \dfrac {p_2} {q_2} = \dfrac {p_1 \times p_2} {q_1 \times q_2}$

From Integers form Ordered Integral Domain, it follows that:

$p_1 \times p_2 > 0 \quad \text{and} \quad q_1 \times q_2 > 0$

and hence:

$\dfrac {p_1} {q_1} \times \dfrac {p_2} {q_2} > 0$

$\blacksquare$
{}
We derive the derivatives of inverse exponential functions using implicit differentiation. Geometrically, there is a close relationship between the plots of $e^x$ and $\ln(x)$: they are reflections of each other over the line $y = x$. One may suspect that we can use the fact that $e^{\ln(x)} = x$ to deduce the derivative of $\ln(x)$. We will use implicit differentiation to exploit this relationship computationally.

Compute: $\dfrac{d}{dx} \ln(x)$.

From the derivative of the natural logarithm, we can deduce another fact.

Compute: $\dfrac{d}{dx} \log_b(x)$.

We can also compute the derivative of an arbitrary exponential function.

Compute: $\dfrac{d}{dx} a^x$.
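Worked out explicitly, the first computation (implicit differentiation applied to $y = \ln(x)$, the standard setup this passage relies on) runs as follows:

```latex
\begin{align*}
y = \ln(x) &\iff e^{y} = x && \text{rewrite implicitly}\\
\frac{d}{dx}\, e^{y} = \frac{d}{dx}\, x &\implies e^{y}\,\frac{dy}{dx} = 1 && \text{chain rule on the left}\\
&\implies \frac{dy}{dx} = \frac{1}{e^{y}} = \frac{1}{x}.
\end{align*}
```

The same idea, together with $\log_b(x) = \ln(x)/\ln(b)$ and $a^{x} = e^{x\ln(a)}$, gives $\dfrac{d}{dx}\,\log_b(x) = \dfrac{1}{x\ln(b)}$ and $\dfrac{d}{dx}\, a^{x} = a^{x}\ln(a)$.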
{}
Randomforest code taking longer time every iteration

I have a prediction code that runs RandomForestRegressor and RandomForestClassifier. I call each of the functions 9 times, and both are tuned with GridSearchCV. The first time it ran, it took around 2 hrs 20 mins, and after almost every run cycle the duration has increased steadily; today it took 3 hrs 45 mins. I have run the code 20 times so far, and every time the duration increases slightly even though there is no change in the underlying training data or the size of the testing data. While I take care to clear the cache every time I run the code, I am unsure why it takes an increasing amount of time to run.

Well, the general question would be "How can I optimise code?", but I guess this would be specific to scikit-learn. The rest of my code does not show the same behaviour; it is specific to the prediction code.

For RandomForestRegressor:

    param_grid_rf = {
        'max_features': ['auto', 'sqrt', 'log2'],
        # 'criterion': ['mse', 'mae']  # mae takes forever to run and mse is the default
    }
    rf = RandomForestRegressor()
    rf = GridSearchCV(estimator=rf, param_grid=param_grid_rf, n_jobs=-2)

For RandomForestClassifier:

    param_grid_rc = {
        'max_features': ['auto', 'sqrt', 'log2'],
        'criterion': ['gini', 'entropy']
    }
    rc = RandomForestClassifier()
    rc = GridSearchCV(estimator=rc, param_grid=param_grid_rc, n_jobs=-2)

I cannot post the code in its entirety, hence this open-ended question. I am using Windows 10 and PyCharm as my environment.

• Can you post the gridsearchcv part of your code? Aug 15 at 11:14
• Sure. I have updated my question with the code. Aug 16 at 2:11

1.) You have kept n_jobs = -2, which does not utilize all your cores. Set the value to -1 to use every core and speed up your search.

2.) I had a similar question (to which the link is given below) where my Decision Tree was taking too long to execute. I was using mae as a metric. Afterwards I came across multiple articles stating that mae takes longer to calculate than mse.
I changed my metric to mse and it took a fraction of the time to execute, so you might want to try that. The link to the question is: Decision Tree taking too long to execute
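Before blaming scikit-learn, it may be worth instrumenting the runs. A minimal stdlib-only sketch (the helper name is mine, not from the question) that times a single call and records its peak Python memory allocation, so you can log both across the 20 runs and see whether they really creep upward together:

```python
import time
import tracemalloc

def profile_run(fn, *args, **kwargs):
    """Time one call and report its peak Python memory allocation in bytes."""
    tracemalloc.start()
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

# Usage sketch with the GridSearchCV objects from the question:
#   _, seconds, peak_bytes = profile_run(rf.fit, X_train, y_train)
# Demonstration on a trivial stand-in function:
result, seconds, peak_bytes = profile_run(sum, range(1000))
print(result, round(seconds, 4), peak_bytes)
```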
{}
# Do I have to transform all Observables (all Operators) simultaneously?

Let's say I have a quantum system with 2 observables $\hat{O}_1$ and $\hat{O}_2$. Those are supposed to be "functions" of $\hat{X}$ and $\hat{P}$. Let's say I want to transform $\hat{O}_1$ using a unitary transformation: $$\hat{O}_1' = \hat{U}^{-1} \hat{O}_1 \hat{U}$$ By doing that, do I have to transform the second operator in the same way?

Here is why I think I have to: Instead of transforming the operator, transforming the states according to $| \Psi \rangle ' = \hat{U}| \Psi \rangle$ will yield exactly the same matrix elements for $\hat{O}_1$. But this operation will automatically also change the matrix elements of operator $\hat{O}_2$, in the same way that transforming operator $\hat{O}_2$ would have.

• Let's say I have two numbers, $x$ and $y$. Let's say I want to add $3$ to the first of these numbers: $z=x+3$. By doing that, do I have to add $3$ to $y$ at the same time? (Answer: It depends entirely, of course, on what I'm trying to accomplish.) – WillO Jul 20 '17 at 13:25

Yes. A change of basis has to be global, or otherwise your system will not give you the same physics. As an analogy, imagine you want to rotate your basis vectors $\{\hat x,\hat y,\hat z\}$. To get another equivalent set of basis vectors you would rotate all basis vectors, not just some of them. In the specific example you have, if you want to work with $\vert\Psi_k'\rangle$ and compute overlaps like $$\langle \Psi'_m\vert {\hat O}_\alpha \vert\Psi'_k\rangle$$ it is quite easy to expand and see that $$\langle \Psi'_m\vert {\hat O}_\alpha \vert\Psi'_k\rangle = \langle \Psi_m\vert U^{-1} {\hat O}_\alpha U \vert\Psi_k\rangle$$ so that working with ${\hat O}'_\alpha = U^{-1} \hat O_\alpha U$ in the basis $\{\vert\Psi_k\rangle\}$ is the same as working with $\hat O_\alpha$ in the basis $\{\vert\Psi'_k\rangle\}$.

Yes, you are right. You have to transform both. I will try to explain it by analogy. Suppose you have two matrices $A, B \in GL(n,\mathbb{R})$.
By picking a basis you also determine the explicit form of these two matrices. What you do by transforming the matrices is that you transform the basis, and thus both matrices look different afterwards. The same applies to operators, which in quantum mechanics are in a sense the infinite-dimensional analogues of matrices. As an example take the Pauli matrices, which represent the operators for measuring spin in the $x$, $y$ and $z$ directions: $$\sigma_x=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\quad \sigma_y=\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},\quad \sigma_z=\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}$$ With respect to the canonical basis $e_1=(1,0)$ and $e_2=(0,1)$ we find that $\sigma_z$ is diagonal with eigenvalues $\pm 1$ (corresponding to spin up or down). If I now choose another basis, say $\tilde e_1=(1,1)$ and $\tilde e_2=(1,-1)$, we find that $\sigma_x$ is diagonal with eigenvalues $\pm 1$, but $\sigma_z$ is no longer diagonal.
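Both answers can be checked numerically; this sketch (assuming numpy, which neither answer mentions) builds the change of basis from the normalized $\tilde e_1, \tilde e_2$ above and verifies that transforming the states is equivalent to transforming every operator:

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# Columns of U are the normalized new basis vectors (1,1) and (1,-1).
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# In the new basis sigma_x becomes diagonal and sigma_z does not.
assert np.allclose(U.conj().T @ sigma_x @ U, np.diag([1, -1]))
assert np.allclose(U.conj().T @ sigma_z @ U, sigma_x)

# <phi'| O |psi'> equals <phi| U^{-1} O U |psi> for arbitrary states.
rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
phi = rng.normal(size=2) + 1j * rng.normal(size=2)
lhs = (U @ phi).conj() @ sigma_z @ (U @ psi)          # transform the states
rhs = phi.conj() @ (U.conj().T @ sigma_z @ U) @ psi   # transform the operator
assert np.isclose(lhs, rhs)
print("matrix elements agree:", lhs)
```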
{}
## IB Individuals & Societies MYP G8

Welcome to IB Individuals & Societies MYP: Integrated Humanities

Introduction: In this course you will be introduced to the Humanities through a survey of human social and cultural life in a global setting. By investigating the social, artistic, religious, and economic developments of countries throughout the world, you will better understand each country’s cultural identity as well as begin to appreciate cultural continuity and change as defining characteristics of the human experience.

Description: This course is designed to benefit every level of prior knowledge, from students who have just joined the IB to experienced IB students looking to develop further. The Readings, Discussions, and Activities you complete will help you focus on the skills you would like to improve in order to reach your academic goals.

Assessment: This course is separated into five modules. Although you can start with any unit, you are strongly encouraged to complete the units in the order in which they are presented, as some content builds upon previous units. All work in Grade 8 will be assessed against the MYP assessment criteria, using the 1-8 scale. Each assessment piece is accompanied by a rubric or other assessment instrument. Assessment takes place in the form of regular formative tasks and a summative task per unit.

This course covers the following outcomes:

●     Unit 1: Assess the relationship between social, political, and cultural developments and artistic expressions.
●     Unit 2: Analyze how creative expressions can broaden perspectives.
●     Unit 3: Examine cross-cultural influence in global cultures.

Course & Resource Fee:

$9.99 per student per annum
$29.99 per teacher per annum

*If you are a school, discounted bulk accounts can be purchased. Continue to the payment portal to see the pricing table.
## IB Individuals & Societies MYP G9

Welcome to IB Individuals & Societies MYP: Integrated Humanities

Introduction: In this course you will be introduced to the Humanities through a survey of human social and cultural life in a global setting. By investigating the social, artistic, religious, and economic developments of countries throughout the world, you will better understand each country’s cultural identity as well as begin to appreciate cultural continuity and change as defining characteristics of the human experience.

Description: This course is designed to benefit every level of prior knowledge, from students who have just joined the IB to experienced IB students looking to develop further. The Readings, Discussions, and Activities you complete will help you focus on the skills you would like to improve in order to reach your academic goals.

Assessment: This course is separated into five modules. Although you can start with any unit, you are strongly encouraged to complete the units in the order in which they are presented, as some content builds upon previous units. All work in Grade 9 will be assessed against the MYP assessment criteria, using the 1-8 scale. Each assessment piece is accompanied by a rubric or other assessment instrument. Assessment takes place in the form of regular formative tasks and a summative task per unit.

This course covers the following outcomes:

●     Unit 1: Examine the effects of cultural revolutions on societies.
●     Unit 2: Relate forms of cultural expression to your life.

Course & Resource Fee:

$9.99 per student per annum
$29.99 per teacher per annum

*If you are a school, discounted bulk accounts can be purchased. Continue to the payment portal to see the pricing table.
{}
# zbMATH — the first resource for mathematics

Anticipative diffusion and related change of measures. (English) Zbl 0804.60072

The authors consider anticipating drifts of the form $$G(t)= \int_0^t g(s,w)\, ds$$, $$t\in [0,1]$$, in the standard Wiener space $$\Omega$$. If $$g$$ is square integrable, under some assumptions on its Lipschitz norm one can find $$e_g$$ such that for all bounded measurable $$\varphi$$ on $$\Omega$$, $$E[\varphi (\cdot+ G)e_g]= E\varphi$$. This formula for $$e_g$$ makes it possible to obtain estimates on its moments. The approach is based on embedding the shift by $$G$$ in a suitable flow which evolves from the identity transformation.

##### MSC:

60J65 Brownian motion
{}
# Code review of code to strip 0 from decimal series

The following code would strip 0 (all numbers ending with 0; the number can contain 0, eg: 105 is valid but 150 should be eliminated) and return a number in the decimal series as if all numbers ending with 0 were stripped. Example: 0 -> 1, 10 -> 11, 19 -> 22. I would just appreciate any code reviews.

    public static int getStrippedNumber(int num) {
        int setId = (num / 10) + 1;
        int newNum = num + setId;
        int newNumSetId = (newNum / 10) + 1;
        int numberToReturn = num + newNumSetId;
        return numberToReturn % 10 == 0 ? numberToReturn + 1 : numberToReturn;
    }

• Your question is unclear. Why 19 -> 21? What is the expected output for 100 and 998 and why? – Anirban Nag 'tintinmj' Aug 24 '13 at 20:28
• Running this code in my head gives 0 -> 2, 10 -> 14, and 19 -> 23. You should post here once you have working code so we can help you improve it. Non-working code is for Stack Overflow. – David Harkness Aug 24 '13 at 20:32
• There are no 0 in 19 then why it increases to 22? – Anirban Nag 'tintinmj' Aug 24 '13 at 20:40
• This ideone result may help future reviewers. – Anirban Nag 'tintinmj' Aug 24 '13 at 21:09
• From the ideone from @tintinmj I could finally guess the intent of the code. But from the same result I can only conclude it has a bug. Note that the output for 198 is 221, and for 199 it is also 221. – bowmore Aug 25 '13 at 7:20

I'll start by stating what I think your intent is, as it's not 100% clear from your question. Imagine the series that consists of all positive natural numbers without multiples of 10: ℕ \ 10ℕ, in order. Write a function that, given a 0-based index, returns the number at that index in the series.

This is what seems to correspond most with what you have described and what your code does. However, there is a discrepancy between what you say your code does and what it actually does. According to these semantics 10 should map to 12, and this is what your code does, yet as an example you say 10 should map to 11.
The code you have does not seem to fulfill its contract (i.e. it has a semantic bug): it starts misbehaving for inputs higher than 198. 198 gives 221 and 199 also gives 221. Or, of course, I am way off track...

Code that does fulfill the contract I describe above is actually fairly simple:

    public static int getStrippedNumber(int num) {
        return num + num/9 + 1;
    }

Borrowing from @tintinmj, I have also submitted an ideone to demonstrate.

Of course, naming this function more appropriately and having a clearer contract explanation are major points of attention. Imagine a developer trying to maintain the code when all he has is your code and what you have documented. So improved this would be:

    /**
     * Determines the number in the series ℕ \ 10ℕ (the natural numbers without multiples of 10) at the given index.
     * @param index zero-based index
     * @return the number in the series ℕ \ 10ℕ at the given index.
     */
    public static int getNwithout10NAtIndex(int index) {
        if (index < 0) {
            throw new IllegalArgumentException();
        }
        return index + index/9 + 1;
    }

Well... Sorry to say this, but it's crappy!

• The function name doesn't explain what the function does (and using your explanation above for Javadoc would only make things worse, judging from the confusion in the comments)
• Even knowing the implementation doesn't help; it's hard to tell if the steps performed inside the function correspond to a real-world concept or represent some algorithm
• It's not clear if this function makes sense for negative values; I'm pretty sure it would return unexpected values (i.e. output for x and -x would be drastically different). If the function is not defined for negative values, throw an IllegalArgumentException when num < 0
• You use confusing names for variables; I think even a, b, c etc. would be better than num, newNum, numberToReturn (nb. this last one is additionally misleading, as you still process it before returning)
• The function seems to solve some general problem but it uses arbitrary depth: it repeats the same operation twice (in lines 1&2 and 3&4) instead of using a loop or recursion (which would also communicate your intentions better)

I know this is the part where you give your recommendations, but I believe this function is broken beyond repair until you make it clear what its purpose and contract are. Remember that most real-world mathematical problems are already solved and have well-described algorithms; using well-tested solutions is always a better idea than trying to reinvent the wheel.

• function is broken beyond repair ? – JavaDeveloper Aug 25 '13 at 2:13

Your question and output differ. Your logic is not well defined, but I can't downvote your question because it is working code... sigh! You said:

The following code would strip 0 (all numbers ending with 0, it can contain 0, eg: 105 is valid but 150 should be eliminated)

But by running your code, 105 goes to 117. WHY? However, from the question title (code to strip 0 from decimal series) and from the (misleading) problem description, I think:

    public static int getStrippedNumber(int num) {
        return (num + 1) % 10 == 0 ? (num + 2) : (num + 1);
    }

Since you didn't say anything about negative numbers, I'm assuming you didn't think about it. Go with @kryger's answer: throw an IllegalArgumentException for negative numbers.
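To make the contract concrete, here is a brute-force cross-check (in Python; the function name is mine) of the closed form `index + index/9 + 1` from the first answer against the literal series ℕ \ 10ℕ:

```python
def nth_skipping_multiples_of_ten(index):
    """Number at the given 0-based index of the series 1, 2, ..., 9, 11, ..., 19, 21, ..."""
    if index < 0:
        raise ValueError("index must be non-negative")
    return index + index // 9 + 1

# Build the series directly and compare it against the closed form.
series = [n for n in range(1, 1000) if n % 10 != 0]
assert all(nth_skipping_multiples_of_ten(i) == series[i]
           for i in range(len(series)))

# The indices discussed in the review: 198 and 199 now map to distinct values.
print([nth_skipping_multiples_of_ten(i) for i in (0, 9, 10, 198, 199)])
# [1, 11, 12, 221, 222]
```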
{}
## Cryptology ePrint Archive: Report 2008/049

An Efficient Protocol for Secure Two-Party Computation in the Presence of Malicious Adversaries

Yehuda Lindell and Benny Pinkas

Abstract: We show an efficient secure two-party protocol, based on Yao's construction, which provides security against malicious adversaries. Yao's original protocol is only secure in the presence of semi-honest adversaries, and can be transformed into a protocol that achieves security against malicious adversaries by applying the compiler of Goldreich, Micali and Wigderson (the ``GMW compiler''). However, this approach does not seem to be very practical as it requires using generic zero-knowledge proofs.

Our construction is based on applying cut-and-choose techniques to the original circuit and inputs. Security is proved according to the {\sf ideal/real simulation paradigm}, and the proof is in the standard model (with no random oracle model or common reference string assumptions). The resulting protocol is computationally efficient: the only usage of asymmetric cryptography is for running $O(1)$ oblivious transfers for each input bit (or for each bit of a statistical security parameter, whichever is larger). Our protocol combines techniques from folklore (like cut-and-choose) along with new techniques for efficiently proving consistency of inputs. We remark that a naive implementation of the cut-and-choose technique with Yao's protocol does \emph{not} yield a secure protocol. This is the first paper to show how to properly implement these techniques, and to provide a full proof of security.

Our protocol can also be interpreted as a constant-round black-box reduction of secure two-party computation to oblivious transfer and perfectly-hiding commitments, or a black-box reduction of secure two-party computation to oblivious transfer alone, with a number of rounds which is linear in a statistical security parameter.
These two reductions are comparable to Kilian's reduction, which uses OT alone but incurs a number of rounds which is linear in the depth of the circuit~\cite{Kil}. Category / Keywords: cryptographic protocols / secure two-party computation, efficiency Publication Info: An extended abstract appeared in Eurocrypt 2007. This is the full version
{}
The matrix is one of the most powerful tools in mathematics. We use matrices in branches of science as diverse as economics, computer science, psychology and management. A matrix is an arrangement of numbers or symbols in a rectangular array. The items in a matrix are called the elements of the matrix. Given below is an example of a matrix with eight elements:

$\begin{bmatrix} 1 &3 &4 & -1 \\ 2 &5 &-3 & 5 \end{bmatrix}$

With the help of matrices, we can solve systems of linear equations.

## What is a Matrix?

A matrix is a rectangular arrangement of numbers. We arrange numbers in rows (horizontal lines) and columns (vertical lines). In general, a matrix with m rows and n columns is denoted by $A_{m\times n}$ and is called a matrix of order $m\times n$. The order of a matrix (also called the dimension of the matrix) gives the number of rows and columns in the matrix.

So, $A_{m\times n} = \left[c_{i,j}\right]_{m\times n}$, where $c_{i,j}$ refers to the elements of the matrix and the subscripts m and n refer to the number of rows and columns respectively. The element $c_{i,j}$ belongs to the ith row and jth column and is sometimes called the (i,j)th element of the matrix. Here i = 1, 2, ..., m and j = 1, 2, ..., n. In a matrix, the number of rows and columns need not be equal.

## Types of Matrices

In mathematics, there are different types of matrices, classified according to their shape and entries, as follows:

• Row Matrix
• Column Matrix
• Square Matrix
• Diagonal Matrix
• Symmetric and Skew-symmetric Matrix
• Null Matrix
• Unit Matrix

## Equal Matrices

Let A and B be two matrices of the same order. These matrices are said to be equal if their corresponding elements are equal. Let $A_{m\times n}$ = [xij] and $B_{m\times n}$ = [yij] be two matrices of the same order $(m\times n)$. If $x_{ij} = y_{ij}$ for all i and j, we write A = B.
### Solved Examples

Question 1: A = $\begin{bmatrix} 1 &2 \\ 1 &2 \end{bmatrix}$ and B = $\begin{bmatrix} 1 &2 \\ 1 &2 \end{bmatrix}$ Find whether the matrices A and B are equal.

Solution: It is clear that matrices A and B are of the same order $(2\times 2)$ and their corresponding elements are equal. So, A and B are equal matrices.

Question 2: Find whether the following matrices are equal matrices. C = $\begin{bmatrix} 1 &3 &5 &7 \end{bmatrix}$ and D = $\begin{bmatrix} 8 &1 \\ 2 &3 \\ 4 &5 \end{bmatrix}$

Solution: Here, the order of C, which is $1\times 4$, is not equal to the order of D, which is $3\times 2$. So, C $\neq$ D.

By the use of the above property, we can find the value of unknown variables.

Question 3: Find the values of x and y in the matrices A =$\begin{bmatrix} x &3 &6 \\ 1 &y-2 &9 \end{bmatrix}$ and B =$\begin{bmatrix} 2 &3 &6 \\ 1 &1 &9 \end{bmatrix}$ given A = B.

Solution: Both matrices have the same order $(2\times3)$, so we use the property that corresponding elements must be equal, i.e. x = 2, 3 = 3, 6 = 6, 1 = 1, y - 2 = 1, 9 = 9. So, we get x = 2 and y = 3.

## Matrix Operations

Just as with numbers, we use certain operations on matrices: addition, subtraction, multiplication etc.

Matrix Addition and Subtraction: Matrix addition and subtraction are possible only if the orders of the two matrices A and B are the same. We add or subtract the corresponding elements of the matrices, i.e. A $\pm$ B = [aij] $\pm$ [bij].

Matrix Multiplication: If $A_{p\times q}$ = [aij] and $B_{q\times r}$ = [bij] are two matrices, then AB is a matrix of order $p\times r$. So, $(AB)_{p\times r}$ = [cij].

## Transpose of a Matrix

Let A = [aij] be a given matrix. If we interchange the rows and columns of matrix A, then the resultant matrix is known as the transpose of matrix A and is denoted by $A^{T}$, i.e. if A = [aij], then $A^{T}$ = [aji].
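The equality and transpose definitions above can be cross-checked numerically; this is a sketch assuming numpy, which is not part of the lesson itself:

```python
import numpy as np

# Equal matrices: same order and equal corresponding elements (Question 1).
A = np.array([[1, 2], [1, 2]])
B = np.array([[1, 2], [1, 2]])
assert np.array_equal(A, B)

# Question 3: equating corresponding elements gives x = 2, y = 3.
x, y = 2, 3
lhs = np.array([[x, 3, 6], [1, y - 2, 9]])
rhs = np.array([[2, 3, 6], [1, 1, 9]])
assert np.array_equal(lhs, rhs)

# Transpose swaps rows and columns: a 1x4 row matrix becomes a 4x1 column matrix.
M = np.array([[1, 3, 5, 7]])
assert M.T.shape == (4, 1)
print("equality and transpose checks passed")
```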
### Solved Example

Question: Find the transpose of the matrix $A_{3\times 3}$=$\begin{bmatrix} a &b &-c \\ -d &e &s \\ u &v &w \end{bmatrix}$

Solution: $A^{T}$=$\begin{bmatrix} a &-d &u \\ b &e &v \\ -c &s &w \end{bmatrix}$

## Determinant of a Matrix

If A = [aij] is a square matrix, then the determinant whose elements are aij is called the determinant of A and is denoted by $\left | A \right |$.

Let A = $\begin{bmatrix} a_{11} &a_{12} \\ a_{21} &a_{22} \end{bmatrix}$ The determinant of A is as follows: $\left | A \right |$ = (a11a22 - a21a12).

## Adjoint of a Matrix

If A = [aij] is a square matrix, then to determine the adjoint of matrix A, first we calculate the cofactor matrix of A and then take the transpose of that cofactor matrix. The resultant matrix is known as the adjoint matrix of the given matrix and is denoted by adj A. For A = [aij], the adjoint of matrix A is: adj A =$\left [c _{ij} \right ]^{T}$

## Inverse of a Matrix

If A is a given square matrix, then the inverse of A is written as $A^{-1}$ and is defined as follows: $A^{-1}$ =$\frac{adj\,A}{\left | A \right |}$, where $\left | A \right | \neq 0$.

## Matrix Row Operations

In arithmetic, there are four basic operations on numbers, namely addition, subtraction, multiplication and division. But on the rows of a matrix, there are only three operations that we can apply:

• Interchanging two rows, e.g. $R_{1}\leftrightarrow R_{2}$
• Multiplying any row by a scalar, e.g. $3R_{1}$ or $4R_{2}$
• Replacing a row by k times that row plus p times another row, e.g. $R_{1}\rightarrow kR_{1}+pR_{2}$, where k ($k\neq 0$) and p are real numbers

These operations are called Elementary Row Transformations.
We can easily understand these operations from the examples shown below. Let A = $\begin{bmatrix} 2 &1 &3 &7 \\ 2 &6 &4 &8 \\ 1 &1 &0 &1 \end{bmatrix}$

Apply the following operation on matrix A, $R_{1}\leftrightarrow R_{3}$ (interchanging two rows):

B $\approx$ $\begin{bmatrix} 1 &1 &0 &1 \\ 2 &6 &4 &8 \\ 2 &1 &3 &7 \end{bmatrix}$

Apply the following operation on matrix B, $R_{2}\rightarrow$$\frac{1}{2}\times R_{2}$ (multiplying a row by a number):

C $\approx$ $\begin{bmatrix} 1 &1 &0 &1 \\ 1 &3 &2 &4 \\ 2 &1 &3 &7 \end{bmatrix}$

Apply the following operation on matrix C, $R_{3}\rightarrow R_{3}-2R_{2}$ (an operation between two rows):

D $\approx$ $\begin{bmatrix} 1 &1 &0 &1 \\ 1 &3 &2 &4 \\ 0 &-5 &-1 &-1 \end{bmatrix}$

Thus, after applying elementary transformations to A, we have obtained the matrix D.

## Properties of Matrices

Listed below are some of the properties of matrices.

Properties of addition: Let A, B and C be matrices of the same order $(m\times n)$ and c be any scalar.

• Commutative Property: A + B = B + A
• Associative Property: (A + B) + C = A + (B + C)
• Existence of an Additive Identity: A + 0 = A = 0 + A
• Existence of an Additive Inverse: A + (-A) = 0
• Distributive Property: c(A + B) = cA + cB

Properties of multiplication: Let A, B and C be matrices of compatible orders and t, q be scalars.

• Associative Property: A(BC) = (AB)C
• Distributive Property: A(B + C) = AB + AC
• Multiplicative Identity: IA = A
• Property of Scalar Multiplication: (tq)A = t(qA)
• Existence of a Multiplicative Inverse: $AA^{-1} = I_{n}$ (for an invertible square matrix A)

Properties of transpose:

• $(AB)^{T} = B^{T}A^{T}$
• $(A + B)^{T} = A^{T} + B^{T}$
• $(A^{T})^{T} = A$
• $(qA)^{T} = qA^{T}$
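The row operations of the worked example and the determinant, adjoint and inverse definitions can all be sanity-checked numerically; again this sketch assumes numpy, which the lesson itself does not use:

```python
import numpy as np

# Elementary row operations from the worked example.
A = np.array([[2., 1, 3, 7],
              [2., 6, 4, 8],
              [1., 1, 0, 1]])
B = A[[2, 1, 0], :]          # R1 <-> R3
C = B.copy()
C[1] = 0.5 * C[1]            # R2 -> (1/2) R2
D = C.copy()
D[2] = D[2] - 2 * D[1]       # R3 -> R3 - 2 R2
assert np.array_equal(D, np.array([[1., 1, 0, 1],
                                   [1., 3, 2, 4],
                                   [0., -5, -1, -1]]))

# Determinant, adjoint and inverse for a 2x2 matrix.
M = np.array([[4., 7.], [2., 6.]])
det = M[0, 0] * M[1, 1] - M[1, 0] * M[0, 1]   # a11*a22 - a21*a12
adj = np.array([[M[1, 1], -M[0, 1]],
                [-M[1, 0], M[0, 0]]])         # transpose of the cofactor matrix
M_inv = adj / det                             # A^{-1} = adj A / |A|
assert np.allclose(M_inv, np.linalg.inv(M))
assert np.allclose(M @ M_inv, np.eye(2))

# A transpose property: (AB)^T = B^T A^T.
P, Q = np.arange(6).reshape(2, 3), np.arange(12).reshape(3, 4)
assert np.array_equal((P @ Q).T, Q.T @ P.T)
print("row-operation and inverse checks passed")
```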
{}
Generalized Multi-Output Gaussian Process Censored Regression

@article{Gammelli2022GeneralizedMG,
  title={Generalized Multi-Output Gaussian Process Censored Regression},
  author={Daniele Gammelli and Kasper Pryds Rolsted and Dario Pacino and Filipe Rodrigues},
  journal={Pattern Recognit.},
  year={2022},
  volume={129},
  pages={108751}
}

• Published 10 September 2020 • Computer Science, Mathematics • Pattern Recognit.

2 Citations

• Gaussian Process Latent Class Choice Models (ArXiv, 2021): results show that GP-LCCM allows for a more complex and flexible representation of heterogeneity and improves both in-sample fit and out-of-sample predictive power.
• Predictive and Prescriptive Performance of Bike-Sharing Demand Forecasts for Inventory Management (Transportation Research Part C: Emerging Technologies, 2022)

References (showing 1-10 of 43)

• Gaussian Process-Mixture Conditional Heteroscedasticity (IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014): a novel nonparametric Bayesian mixture of Gaussian process regression models, each component of which models the noise variance process that contaminates the observed data as a separate latent Gaussian process driven by the observed data.
• Gaussian Process Regression with Censored Data Using Expectation Propagation (PGM 2012): proposes an extension of Gaussian process regression models to data in which some observations are subject to censoring, and uses expectation propagation to perform approximate inference.
• Heterogeneous Multi-output Gaussian Process Prediction (NeurIPS 2018): a novel extension of multi-output Gaussian processes for handling heterogeneous outputs that uses a covariance function with a linear model of coregionalisation form to obtain tractable variational bounds amenable to stochastic variational inference.
• Variational Heteroscedastic Gaussian Process Regression (ICML 2011): presents a non-standard variational approximation that allows accurate inference in heteroscedastic GPs (i.e., under input-dependent noise conditions); its effectiveness is illustrated on several synthetic and real datasets of diverse characteristics.
• Gaussian Processes for Survival Analysis (NIPS 2016): a semi-parametric Bayesian model for survival analysis that handles the left, right and interval censoring mechanisms common in survival analysis, with an MCMC algorithm for inference and an approximation scheme based on random Fourier features to make computations faster.
• Censored Quantile Regression Forest (AISTATS 2020): the proposed procedure, named censored quantile regression forest, allows estimating quantiles of time-to-event without any parametric modeling assumption; its consistency is established under mild model specifications.
• Deep Multi-task Gaussian Processes for Survival Analysis with Competing Risks: a nonparametric Bayesian model for survival analysis with competing risks, which can be used for jointly assessing a patient’s risk of multiple (competing) adverse outcomes, and which outperforms state-of-the-art survival models.
• Gaussian Processes for Big Data (UAI 2013): introduces stochastic variational inference for Gaussian process models and shows how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables which factorize the model in the manner necessary to perform variational inference.
{}
Two closed geodesics on compact simply connected bumpy Finsler manifolds. (English) Zbl 1357.53086

Author’s abstract: We prove the existence of at least two distinct closed geodesics on a compact simply connected manifold $$M$$ with a bumpy and irreversible Finsler metric, when $$H^{\ast }(M;\mathbf Q)\cong T_{d,h+1}(x)$$ for some integer $$h\geq 2$$ and even integer $$d\geq 2$$. Consequently, together with earlier results on $$S^{n}$$, this implies the existence of at least two distinct closed geodesics on every compact simply connected manifold $$M$$ with a bumpy irreversible Finsler metric.

MSC:

53C60 Global differential geometry of Finsler spaces and generalizations (areal metrics)
53C22 Geodesics in global differential geometry
58E10 Variational problems in applications to the theory of geodesics (problems in one independent variable)
58E05 Abstract critical point theory (Morse theory, Lyusternik-Shnirel’man theory, etc.) in infinite-dimensional spaces
{}
## Sunday, July 17, 2011 ### Quicksort partitioning During my years in school and at university I was always exposed to only one algorithm of quicksort partitioning. Let's take a look at some ways to partition a list for sorting. Quicksort itself works by taking a list, picking an element from the list which is referred to as the "pivot" and splitting the list into two sub lists, the first containing all the elements smaller than the pivot and the second containing all the elements greater than the pivot. Once you have these two sub lists you can sort each one independently of the other since elements in one list will not be moved into the other after the sort is complete. This splitting into two lists is called "partitioning". Partitioning once will not sort the list but it will allow you to either use a different sorting algorithm on each sub list (partition) or to recursively partition the two partitions until you end up with a partition of 1 or 0 elements, which is necessarily sorted. For example, partitioning the list [7,3,6,4,1,7,3] using 4 as a pivot will give us a first partition of [3,1,3] and a second partition of [7,6,7]. The pivot itself, along with other duplicates of it, may or may not go in one of the partitions, depending on how the partitioning is done. If it does not go into one of the partitions, then the sort will place the pivot between the 2 partitions after they have been sorted. Partitioning by filtering The most intuitive way to partition is by creating 2 new lists, going through the unsorted list and copying elements from the unsorted list into one of the 2 lists. This is memory expensive however as you end up needing twice as much space as the unsorted list takes. The following partitioning algorithms are "in-place" and hence do not need any new lists. Partitioning by moving the pivot This is the partitioning algorithm I was familiar with at school. It's quite intuitive but slow when compared to the next algorithm. 
The way this works is by putting the pivot into its sorted place, that is, the place where it will be after the whole list has been sorted. All the elements smaller than the pivot will be on its left and all the elements larger than the pivot will be on its right. Therefore you would have created 2 partitions: the left side of the pivot and the right side.

The algorithm uses a pivot pointer, which keeps track of where the pivot is, and an index pointer, which is used to compare the pivot to other elements. The pivot pointer starts at the right end of the list (you can choose a pivot and swap it with the last element if you don't want to stick to the element which happens to be there) and the index pointer starts at the left end of the list. The index pointer moves towards the pivot pointer until it encounters an element which is not on the correct side of the pivot, upon which that element and the pivot are swapped, and the index pointer and pivot pointer swap locations. Once the index pointer and pivot pointer meet, the pivot is in its sorted location and the left and right sides of the pivot are partitions.

Pseudo code:

    function partition(arr, left, right)
        pivotPtr = right
        indexPtr = left
        while pivotPtr != indexPtr
            if indexPtr < pivotPtr //if index pointer is to the left of the pivot
                while arr[indexPtr] <= arr[pivotPtr] and indexPtr < pivotPtr
                    indexPtr++ //move index pointer towards the pivot
                if indexPtr < pivotPtr
                    swap(arr[indexPtr], arr[pivotPtr])
                    swap(indexPtr, pivotPtr)
            else //if index pointer is to the right of the pivot
                while arr[indexPtr] >= arr[pivotPtr] and indexPtr > pivotPtr
                    indexPtr-- //move index pointer towards the pivot
                if indexPtr > pivotPtr
                    swap(arr[pivotPtr], arr[indexPtr])
                    swap(pivotPtr, indexPtr)
        return pivotPtr

### Partitioning by dividing

In the previous partitioning algorithm, we had to constantly swap the pivot in order to eventually put it in its place.
This is however unnecessary, as partitioning does not require the pivot to be in its sorted place, only that we have 2 partitions, even if the pivot itself is in one of the partitions (it doesn't matter in which one, as it could eventually be placed in its sorted place in either partition). This time we will not care where the pivot is, as long as we know its value.

We will need 2 pointers, a high and a low pointer, which will be moving towards each other. The low pointer will expect to encounter only elements which are smaller than the pivot and the high pointer will expect to encounter only elements which are larger than the pivot. When both pointers encounter a wrong element, they swap the elements and continue moving towards each other. When they eventually meet, all the elements to the left of the meeting point will be smaller than or equal to the pivot and all the elements to the right of the meeting point will be greater than or equal to the pivot. Since both pointers will be moving toward each other before swapping, this algorithm will do fewer swaps than the previous one and hence will be much faster. In fact a simple experiment will show that it does half the number of swaps.

Pseudo code:

    function partition(arr, left, right, pivot)
        lo = left
        hi = right
        while lo < hi
            while arr[lo] <= pivot and lo < hi
                lo++ //move low pointer towards the high pointer
            while arr[hi] >= pivot and hi > lo
                hi-- //move high pointer towards the low pointer
            if lo < hi
                swap(arr[lo], arr[hi])
        /* Since the high pointer moves last, the meeting point should be on an element that is
           greater than the pivot, that is, the meeting point marks the start of the second
           partition. However if the pivot happens to be the maximum element, the meeting point
           will simply be the last element and hence will not have any significant meaning.
           Therefore we need to make sure that the returned meeting point is where the starting
           point of the second partition is, including if the second partition is empty. */
        if arr[lo] < pivot
            return lo+1
        else
            return lo
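The dividing pseudocode translates almost line for line into Python. A minimal sketch (function name mine), which partitions in place and returns the start index of the second partition:

```python
def partition(arr, left, right, pivot):
    """Partition arr[left..right] (inclusive) around a pivot VALUE.

    After the call, arr[left:m] <= pivot and arr[m:right+1] >= pivot,
    where m is the returned index (start of the second partition)."""
    lo, hi = left, right
    while lo < hi:
        while arr[lo] <= pivot and lo < hi:
            lo += 1          # move low pointer towards the high pointer
        while arr[hi] >= pivot and hi > lo:
            hi -= 1          # move high pointer towards the low pointer
        if lo < hi:
            arr[lo], arr[hi] = arr[hi], arr[lo]
    # If the pivot was the maximum, the pointers meet on the last element;
    # adjust so the returned index always marks the second partition's start.
    return lo + 1 if arr[lo] < pivot else lo
```

Running it on the post's example list [7,3,6,4,1,7,3] with pivot 4 leaves all elements ≤ 4 before the returned index and all elements ≥ 4 from it onwards. Note that a full quicksort driver built on this variant needs some care with pivot choice, since an empty first partition can otherwise cause the recursion to never shrink.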
{}
## Binary Search - Problem: Glyph Recognition

In this task you will have to solve problem G from NWERC 2017. Read the problem statement below. Since the goal of this task is not to teach you geometry, you can use the following class that represents a regular polygon with $$n$$ vertices and radius $$r$$.

    static double eps = 1e-8;

    static class RegularPolygon {
        Point[] vertices;
        int n;
        double r;

        // create a regular polygon with n vertices and radius r
        public RegularPolygon(int n, double r) {
            this.n = n;
            this.r = r;
            vertices = new Point[n];
            double alpha = 2 * Math.PI / n;
            double cur = 0;
            for (int i = 0; i < n; i++) {
                double x = r * Math.cos(cur);
                double y = r * Math.sin(cur);
                vertices[i] = new Point(x, y);
                cur += alpha;
            }
        }

        // compute the area of the regular polygon
        public double area() {
            return 0.5 * n * r * r * Math.sin(2.0 * Math.PI / n);
        }

        // check whether the polygon contains point p
        public boolean contains(Point p) {
            int target = orient(vertices[0], vertices[1], vertices[2]);
            for (int i = 0; i < n; i++) {
                int j = (i + 1) % n;
                int o = orient(p, vertices[i], vertices[j]);
                if (o != 0 && o != target) return false;
            }
            return true;
        }

        // compute the orientation of 3 points
        private int orient(Point p, Point q, Point r) {
            double value = q.x * r.y - r.x * q.y - p.x * (r.y - q.y) + p.y * (r.x - q.x);
            return sgn(value);
        }

        // compute the sign of a double
        private int sgn(double x) {
            if (x < -eps) return -1;
            if (x > eps) return 1;
            return 0;
        }
    }

Points are represented by a simple class with the two coordinates.

    static class Point {
        double x, y;

        public Point(double x, double y) {
            this.x = x;
            this.y = y;
        }
    }

##### Glyph Recognition

Adapted from problem G from NWERC 2017

You are an archaeologist working at an excavation site where your team has found hundreds of clay tablets containing glyphs written in some ancient language.
Not much is known about the language yet, but you know that there are only six different glyphs, each of them in the shape of a regular polygon with one vertex pointing to the right (see the figure (a) below). Only the boundary of each polygon is carved out of the clay. You want to start analysing the language right away, so you need to get the text on the tablets into some machine readable format. Ideally, you would like to use an OCR (optical character recognition) tool for that, but you do not have one installed on your laptop and there is no internet connection at the site. Because of this you have devised your own scheme to digitise the ancient writings: for every glyph on a tablet you first find a number of sample points that are in the carved out region, i.e. on the boundary of the polygon. Based on those sample points you then calculate a score for each of the six glyphs and mark the one with the highest score as the recognised glyph. For a given number of corners $$k \ (3 \leq k \leq 8)$$, the score is computed as follows. Two regular $$k$$-gons are fitted to the sample points, one from the inside and one from the outside, such that the following hold: • Each polygon is centered at the origin, i.e. all vertices have equal distance to $$(0, 0)$$. • Each polygon has a vertex on the positive x-axis. • The inner polygon is the largest such polygon containing none of the sample points. • The outer polygon is the smallest such polygon containing all of the sample points. An example can be seen in figure (c). The score for this value of $$k$$ is $$A_{inner} \ / \ A_{outer}$$, where $$A_{inner}$$ and $$A_{outer}$$ are the areas of the inner and outer polygon, respectively. Given a set of sample points, find the glyph with the highest score. Input • One line with one integer $$n$$, the number of sample points. • $$n$$ lines, each with two integers $$x, y$$, specifying a point at coordinates $$(x, y)$$. No sample point is at the origin and all points are distinct. 
Output

Output the optimal number of corners $$k \ (3 \leq k \leq 8)$$, followed by the score obtained for that value of $$k$$. Your answer will be accepted if the absolute error does not exceed $$10^{-6}$$. If several values of $$k$$ result in a score that is within $$10^{-6}$$ of the optimal score, any one of them will be accepted.

Constraints

• $$1 \leq n \leq 1000$$
• $$-10^6 \leq x, y \leq 10^6$$
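Since containment in a centred regular polygon is monotone in its radius, the smallest $$k$$-gon containing a given point can be found by binary search on $$r$$, mirroring the `contains` method from the template above. A Python sketch of this approach (all names mine, not part of the judge's template): the outer radius is the largest such critical radius over the sample points, the inner radius is the smallest, and because area scales with $$r^2$$ the score is simply $$(r_{inner}/r_{outer})^2$$.

```python
import math

def vertices(k, r):
    """Vertices of a regular k-gon of radius r, one vertex on the positive x-axis."""
    a = 2 * math.pi / k
    return [(r * math.cos(i * a), r * math.sin(i * a)) for i in range(k)]

def contains(k, r, p, eps=1e-12):
    """True if the k-gon of radius r contains point p (boundary included)."""
    vs = vertices(k, r)
    for i in range(k):
        ax, ay = vs[i]
        bx, by = vs[(i + 1) % k]
        # cross product (b-a) x (p-a); negative means p lies outside this edge
        if (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax) < -eps:
            return False
    return True

def critical_radius(k, p):
    """Smallest r such that the k-gon of radius r contains p, by binary search."""
    d = math.hypot(p[0], p[1])
    lo, hi = 0.0, d / math.cos(math.pi / k) + 1e-9  # hi is guaranteed to contain p
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if contains(k, mid, p):
            hi = mid
        else:
            lo = mid
    return hi

def best_glyph(points):
    """Return (k, score) maximising A_inner / A_outer over k = 3..8."""
    best_k, best_score = None, -1.0
    for k in range(3, 9):
        crits = [critical_radius(k, p) for p in points]
        score = (min(crits) / max(crits)) ** 2  # areas scale with r^2
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score
```

For instance, two sample points at (1, 0) and (0, 1) sit exactly on the vertices of the square glyph, so the square (and also the octagon) scores 1.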
{}
# Math Help - Proving Horrible exponent 1. ## Proving Horrible exponent Hi All, Prove $\frac{a}{ (\sqrt[3]{ax}}^2 = \frac{\sqrt[3]{ax}{x}$ $\frac{a}{ (ax)^{1/3} )^2 = \frac{a}{(ax)^{2/3} }$ $\frac{a}{ (ax)^{1/3} \times (ax)^{1/3}}$ Had Latex trouble (again); question is in attachment below. Thanks 2. ## Re: Proving Horrible exponent Originally Posted by BobBali Prove: $\frac{a}{\sqrt[3]{ax}^2} = \frac{\sqrt[3]{ax}}{x}$ $\frac{a}{ \left((ax)^{1/3} \right)^2} = \frac{a}{ (ax)^{2/3} }}$ $\frac{a}{ (ax)^{1/3} \times (ax)^{1/3}}$ I fixed the LaTeX only! (Not sure if this was meant ....?) 3. ## Re: Proving Horrible exponent Yes it is! How did you do it? My latex working in the second step (the one u fixed) is below, can you tell me where i went wrong?? Thanks. $\frac{a}{ ( (\sqrt[3]{ax}) )^2 = \frac{a}{ ( (ax)^{1/3} )^2$ 4. ## Re: Proving Horrible exponent Originally Posted by BobBali Yes it is! How did you do it? My latex working in the second step (the one u fixed) is below, can you tell me where i went wrong?? Thanks. $\frac{a}{ ( (\sqrt[3]{ax}) )^2 = \frac{a}{ ( (ax)^{1/3} )^2$ \frac{a}{ \left(\sqrt[3]{ax} \right)^2 } = \frac{a}{ \left( (ax)^{1/3} \right)^2 } yields $\frac{a}{ \left(\sqrt[3]{ax} \right)^2 } = \frac{a}{ \left( (ax)^{1/3} \right)^2 }$ I've marked the changes and additions in red. 5. ## Re: Proving Horrible exponent Ok, so: $\frac{a}{ \left (ax)^{2/3} \right }$ = $\frac{a}{ \left(ax)^{1/3} \times (ax)^{1/3} \right)$ $a \times a^{-1/3} \times x^{-1/3} \times a^{-1/3} \times x^{-1/3} =$ $a^{1/3} \times x^{-2/3}$ = $\frac{\sqrt[3]{a}{x}^{2/3}$ 6. ## Re: Proving Horrible exponent Originally Posted by BobBali Ok, so: $\frac{a}{ \left (ax)^{2/3} \right }$ = $\frac{a}{ \left(ax)^{1/3} \times (ax)^{1/3} \right)}$ $a \times a^{-1/3} \times x^{-1/3} \times a^{-1/3} \times x^{-1/3} =$ $a^{1/3} \times x^{-2/3}$ = $\frac{\sqrt[3]{a}}{x^{2/3}}$ Please use the \frac-command in such a way: \frac{numerator}{denominator} yields $\frac{numerator}{denominator}$ 7. 
## Re: Proving Horrible exponent Hello, BobBali! $\text{Prove: }\:\frac{a}{(\sqrt[3]{ax})^2} \:=\:\frac{\sqrt[3]{ax}}{x}$ On the left side, we have: . $\frac{a}{(ax)^{\frac{2}{3}}}$ Multiply by $\tfrac{(ax)^{\frac{1}{3}}}{(ax)^{\frac{1}{3}}}: \;\;\frac{a}{(ax)^{\frac{2}{3}}} \cdot\frac{(ax)^{\frac{1}{3}}}{(ax)^{\frac{1}{3}}} \;=\;\frac{a(ax)^{\frac{1}{3}}}{ax} \;=\;\frac{(ax)^{\frac{1}{3}}}{x} \;=\;\frac{\sqrt[3]{ax}}{x}$
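A quick numeric spot-check of the identity (the values of $a$ and $x$ below are arbitrary positive picks, not from the thread) agrees with the algebra above:

```python
# Check  a / ((ax)^(1/3))^2  ==  (ax)^(1/3) / x  for some positive a, x.
a, x = 5.0, 7.0
lhs = a / ((a * x) ** (1 / 3)) ** 2
rhs = (a * x) ** (1 / 3) / x
print(abs(lhs - rhs))  # effectively zero, up to floating-point noise
```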
{}
# recursion – How to perform AND on binary “recursive repeating sequences”?

Suppose we have two binary sequences, encoded as “recursive repeating sequences” (I don’t know exactly what to call them). Each sequence can contain other sequences, together with a number specifying how many times that sequence is repeated, or can contain a bit, either 0 or 1. The following image describes it visually:

On this image repetition is denoted by a lower index number near the closing bracket. The bits that need to be expanded are denoted by regular size font. Each sequence can produce a binary sequence (a sequence of bits) when we expand it. The questions are:

1. Can we create an algorithm that performs AND on two recursive repeating sequences, without expanding both of them? (The algorithm should produce a third recursive repeating sequence, such that when we expand it, it equals the AND of both expanded input sequences.)
2. If the answer to the first question is yes, what would such an algorithm look like (for example in pseudo code)?

Note: In practice, the numbers responsible for repetition may be very large, so expansion of the whole sequence is not practical in terms of computation – but if the algorithm needs to do it partially, it may be accepted.
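To make the encoding concrete (the image is not reproduced here), one possible representation, purely illustrative since the exact format is open: a sequence is a list whose items are either a bit (0 or 1) or a pair `(count, subsequence)`. Expansion is then a straightforward recursion:

```python
def expand(seq):
    """Expand a recursive repeating sequence into a flat list of bits.

    Items are either a bit (0/1) or a tuple (count, subsequence),
    meaning: repeat the expansion of `subsequence` `count` times."""
    out = []
    for item in seq:
        if isinstance(item, tuple):
            count, sub = item
            out.extend(expand(sub) * count)
        else:
            out.append(item)
    return out

# e.g. [(3, [1, 0])] expands to [1, 0, 1, 0, 1, 0]
```

The question is precisely whether AND can be computed on this structure directly, without going through `expand`.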
{}
Question

# Select the incorrect statement/statements regarding the exchange rate management by the International Monetary Fund (IMF) using the code given below:

1. Exchange rate of the member countries was managed by the IMF as per the 'fixed currency system' till $$1976$$, which did not allow the market forces.
2. 'Floating currency system' was first used in $$1971$$ by the UK.

A. Only 1
B. Only 2
C. 1 and 2
D. None of the above

Solution

## The correct option is A (Only $$1$$)

The 'fixed currency system' functioned from $$1945$$ to $$1971$$. In $$1971$$, after the UK shifted to the 'floating system', the IMF gave member countries the option of selecting either of these two systems of exchange rate management.
{}
# The region between two concentric spheres of radii ‘a’ and ‘b’, respectively (see figure), has volume charge density $$\rho = \frac{A}{r}$$, where A is a constant and r is the distance from the centre. At the centre of the spheres is a point charge Q. The value of A such that the electric field in the region between the spheres will be constant, is

Here Q_enc is the charge enclosed by the closed surface.

$\int \vec{E} \cdot \vec{ds} = \frac{Q+q}{\epsilon_o} \Rightarrow E \times 4 \pi r^2 = \frac{Q+q}{\epsilon_o} \:\:\:\:\:\: -(i)$

$q = \int_{a}^{r}\frac{A}{x}\, 4 \pi x^2\, dx = 4 \pi A \int_{a}^{r}x\,dx = 4 \pi A \left [ \frac{x^2}{2} \right ]_{a}^{r} = 2 \pi A(r^2-a^2)$

Now putting the value of q in equation (i):

$E \times 4 \pi r^2 = \frac{1}{\epsilon_o}\left [ Q + 2 \pi A(r^2-a^2) \right ]$

$E = \frac{1}{4 \pi \epsilon_o}\left [\frac{Q}{r^2} + 2 \pi A-\frac{2 \pi A a^2}{r^2} \right ]$

E will be constant if it is independent of r:

$\therefore \frac{Q}{r^2}= \frac{2 \pi A a^2}{r^2} \quad\text{or}\quad A = \frac{Q}{2 \pi a^2}$
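A quick numeric check (the values of Q and a are arbitrary picks, and units are chosen so that the overall 1/(4πε₀) factor can be dropped) confirms that with A = Q/(2πa²) the bracketed field expression is independent of r:

```python
import math

Q, a = 3.0, 0.5                       # arbitrary positive values (not from the problem)
A = Q / (2 * math.pi * a ** 2)        # the derived value of A

def E(r):
    # bracket from the solution: Q/r^2 + 2*pi*A - 2*pi*A*a^2/r^2
    return Q / r ** 2 + 2 * math.pi * A - 2 * math.pi * A * a ** 2 / r ** 2

# with A = Q/(2*pi*a^2) the r-dependent terms cancel, leaving Q/a^2
print(E(0.7), E(1.3))
```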
{}
# Math Help - need help with this statistics question (point me in right direction please)

1. ## need help with this statistics question (point me in right direction please)

Hello, sort of left an assignment to the last minute and am in desperate need of help.

The results are in for the paper toss game and we have found that each participant's accuracy (closeness to the coin) is dependent on the weight of the paper used as a ball. To show the distinction, we have constructed the following contingency table from 200 samples:

          Wc    W
    D     84    22
    Dc    36    58

where W is the event that the weight of the paper is less than 80 gsm, and D is the event that the distance recorded is less than or equal to 20 cm from a paper toss.

Note: you are allowed to use the text book probability tables when answering these questions. When a table lookup has been performed, make it obvious by writing 'from Table x', where x is replaced by the table number used or the page number.

1 Probabilities

From the above contingency table, compute:

1. the probability that the distance is less than or equal to 20 cm and the weight is greater than or equal to 80 gsm.
2. the probability that the distance is greater than 20 cm and the weight is greater than or equal to 80 gsm.
3. the probability that the distance is less than or equal to 20 cm given that the weight is greater than or equal to 80 gsm.
4. the probability that the distance is less than or equal to 20 cm given that the weight is less than 80 gsm.

i dont get the whole table part, i know c = the event that it doesnt occur, so am i suppose to put like probability 22 that both occur? or is there an equation that i need to do? help will be greatly appreciated thanks.
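For direction: each cell count divided by the 200 samples gives a joint probability, and conditioning on W or Wᶜ just divides by that column's total instead. A quick sketch of the four answers from the table:

```python
n = 200
d_wc, d_w = 84, 22    # distance <= 20 cm, split by weight >= 80 gsm (Wc) vs < 80 gsm (W)
dc_wc, dc_w = 36, 58  # distance > 20 cm, same split

p1 = d_wc / n                 # P(D and Wc): joint probability
p2 = dc_wc / n                # P(Dc and Wc)
p3 = d_wc / (d_wc + dc_wc)    # P(D | Wc): condition on the Wc column total
p4 = d_w / (d_w + dc_w)       # P(D | W)
print(p1, p2, p3, p4)         # 0.42 0.18 0.7 0.275
```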
{}
# Rich Content Editor HTML Cheatsheet

Community Champion

Update 1/1/2020. The new rich content editor is available in beta and will be available in production in the January 2020 release. See my new post about the new rich content editor for the latest updates.

Update 6/11/19. In Quizzes.Next the rich content editor is different and does not include the right sidebar content browser. You can upload content from the toolbar. See this guide for details. Additionally, the rich content editor is now on the short term roadmap for improvements.

Updated 8/26/17. A new option (see 14 below) was added in the August 26, 2017 release. This new option allows users to add media without switching to HTML code. I also wanted to note that the new Teacher mobile app has a simplified version of the rich content editor toolbar that is available when editing content in the app. Big shout out to the community's mobile guru @rseilham for pointing this out in the comments below.

Updated 9/16/16. Please note this cheatsheet is subject to change with Canvas updates. In the August 6, 2016 release notes the rich content editor got some updates and added functionality with pages and the syllabus. You may also want to check out the following discussion about the changes. Friendly advice: Disable "Use remote version of Rich Content Editor AND sidebar"

Below is an image of the toolbar highlighted with numbers of each command. Each numbered command has a code example with some tips on using it in the HTML Editor.

## 1-Bold

### Code example:

    <strong>Some Text</strong>

### Notes:

The strong element is used to bold text. It is generally not recommended to use the strong element to create page headings. Use the actual heading elements to create this type of structure. See number 20 below for details on why.
## 2-Italics

### Code Example:

    <em>Some Text</em>

### Notes:

Italics should be used to emphasize text and should be used sparingly on webpages. Depending on the font, it can be hard to read italicized text on a monitor.

## 3-Underline

### Code Example:

    <u>Some Text</u>

### Notes:

This element can be used to emphasize text; however, on webpages underlined text is often confused with hyperlink text. I generally don't recommend using this element.

## 4-Text Color

### Code Example:

    <span style="color: #ff0000;">Some Text</span>

### Notes:

This command creates a span element and inline CSS (the style attribute) to create the colored text. The style attribute can be applied to any text element such as paragraphs and headers. In the toolbar there are only about 40 colors to choose from; however, in the code view you can change the color to any color you want by altering the hex color code. See Resources for Hex Colors for details on how to find hex colors.

## 5-Background Color

### Code Example:

    <span style="background-color: #ff9900;">content</span>

### Notes:

This command uses the span element and inline CSS (the style attribute) to create the background color. This should be used cautiously with text. If the background color and text color do not have enough contrast between them, the text can be hard to read. In the example below the text is hard to read. This can be especially hard on color blind people or people who are losing their sight to old age. For further reading, view this Smashing Magazine article, Design Accessibly, See Differently: Color Contrast Tips And Tools. On a side note, the Jive editor does not have a background color element in the toolbar and strips it when you try to add it in code view, so I had to use an image for this example.
## 6-Clear Formatting

### Notes:

This option is handy for getting rid of the extra HTML code that sometimes comes over when you copy and paste text from other locations such as from Word or other websites. It is important to note that this option works with most elements but doesn't seem to work with the background element (see number 5 above). You can go to code view to remove the span element. If you are a designer, I recommend using a text editor that has a good find and replace command to remove any extra HTML and CSS code that you don't want before moving the text to Canvas. I use Dreamweaver's find and replace for this type of task a lot and it saves me quite a bit of time.

## 7-Text Alignment

### Code Examples:

There are three alignment options. These attributes can be applied to headings and paragraph elements. The left alignment is the default in the editor. Note: It is best to only use center and right alignment for headers or short lines of text. It is generally not recommended for longer lines of text because the text is hard to read.

    <p style="text-align: left;">Paragraph of text</p>
    <p style="text-align: center;">Paragraph of text</p>
    <p style="text-align: right;">Paragraph of text</p>

### Notes:

For further reading I recommend the WebAIM articles, Writing Clearly and Simply and Text/Typographical Layout.

## 8-Outdent/Indent

### Code Example:

What this option does depends on the element it is applied to in the code. See examples below. When applied to a paragraph element, the style attribute is applied to the paragraph element with padding of 30 pixels.

    <p style="padding-left: 30px;">Some text</p>

When applied to an unordered or ordered list a new nested list is created.
    <ul>
    <li>Some Text</li>
        <ul>
        <li>Some indented text</li>
        </ul>
    </ul>

### Notes:

See 11 and 12 for more details on lists.

## 9-Superscript

### Code Example:

    H<sup>2</sup>0

### Notes:

Be sure to only select the text that should be superscript when applying this command. You can always switch to code view to fix any issues that you might not be able to fix with the toolbar.

## 10-Subscript

### Code Example:

    2<sub>4</sub>

### Notes:

The same applies as number 9 above.

## 11-Unordered List

### Code Example:

    <ul>
    <li>List Item</li>
    <li>List item</li>
    <li>List Item</li>
    </ul>

### Notes:

Unordered lists are good for lists of items where the sequence of the items does not matter. Lists can be nested using the indent option. I have found this to be tricky sometimes so I prefer to edit lists in code view. See the example of a nested list in number 8.

## 12-Ordered List

### Code Example:

    <ol>
    <li>Do this first</li>
    <li>Do this second</li>
    <li>Do this third</li>
    </ol>

### Notes:

Ordered lists are good when you are giving students a set of instructions for homework assignments. You can alter the list numbering to display letters if preferred. This must be done in code view. See example code below.

    <ol type="a">
    <li>Do this first</li>
    <li>Do this second</li>
    <li>Do this third</li>
    </ol>

## 13-Table

### Code Example:

Table code involves several different elements. See code example below.
    <table border="0">
    <caption>Caption</caption>
    <tbody>
    <tr><td>Row 1</td><td>28</td></tr>
    <tr><td>Row 2</td><td>23</td></tr>
    </tbody>
    </table>

### Notes:

The new toolbar has a much improved table editor so you may not need to switch to code view that much now. I will note that tables should only be used for tabular data; however, the majority of people do not use them this way. This stems from some bad web design hacks from the late 90s which can still spark heated debate about their use in designing webpages. The key point to remember is that you want your pages to be accessible to all. For further reading, visit the WebAIM article, Creating Accessible Tables.

## 14-Insert/Edit Media

This is a new addition to the toolbar in the August 26, 2017 Canvas release. This option allows you to add a share link or embed code from a video on a video sharing site like YouTube, Vimeo, or Teacher Tube without needing to switch to HTML view. If you add the share link the embed code will auto populate. You can determine the video dimensions and provide an alternative source for the video.

## 15-Link

### Code Example:

    <a href="http://www.google.com">Google</a>

### Notes:

The color of the link is controlled by the CSS (Cascading Style Sheets) that is linked to the HTML document. See the Canvas Styleguide for more details. For further reading, read the WebAIM articles, Links and Hypertext and Accessible CSS.

## 16-Picture

### Code Example:

    <img src="https://farm4.static.flickr.com/3433/3927529272_e6e5448807.jpg" alt="dog" width="500" height="332" />

### Notes:

Images can be pulled from the web or Canvas files.
Images have several attributes you can add. When you add or edit the image in the editor, the dialog box has options for adding alternative text and changing the width and height attributes. It is important to note that students must use this option to embed images in discussions. Be sure to vote for when it becomes available for voting. For further reading, read the WebAIM article, Accessible Images.

## 17-Symbol

### Code Example:

    <img class="equation_image" title="\frac{3}{4}+5" src="/equation_images/%255Cfrac%257B3%257D%257B4%257D%2B5" alt="\frac{3}{4}+5" />

### Notes:

When this option is used in the editor, the equation editor will display. You can use the editor options or write the equations in LaTeX. The equation will be rendered as an image with the LaTeX as alternative text.

## 18-Embedded Objects, Media Comment & Other LTI Tools

### Code Example:

    <iframe width="640" height="360" src="//www.youtube-nocookie.com/embed/WetLiIvTwZE?rel=0" frameborder="0" allowfullscreen></iframe>

### Notes:

When using the LTI and Media Comment tools the content in most cases will be embedded objects. The main issue with some of the LTI tools is unsecure content. Canvas is hosted on a secure server and almost all browsers will now block unsecured embedded content on secure webpages. You can also paste embed code from YouTube. You can also embed documents such as Google Docs and Microsoft documents.

## 19-Text Direction

### Code Example:

    <p dir="rtl">Some text</p>

### Notes:

This attribute is essential for setting how script languages will display on the webpage. For more details, go to the W3C article, Structural markup and right-to-left text in HTML.

## 20-Font Sizes

### Code Example:

    <span style="font-size: x-large;">some text</span>

### Notes:

Uses the span element and inline CSS (the style attribute) to create the larger text. It is generally not recommended to use this option to create headers in your content.
Header content should be properly marked up using the header elements. See number 21 below.

## 21-Paragraph Formats

### Code Example:

You can use the style attribute to change the font and margins if desired, to have a different look than the default editor settings. I generally don't recommend doing that for all your pages because you must edit each element to make this change. That is too much work. A better option would be to get your IT people at your institution to set up KennethWare. I am working with ours to hopefully get this set up for our instance of Canvas.

#### Paragraph

    <p>Some body text</p>

#### Heading

    <h1>Some Header text</h1>

#### Preformatting

    <pre>Some Text that will display as you type it</pre>

### Notes:

Paragraphs and headings are considered structural elements in HTML and are essential to making your pages accessible to all. For further reading, visit the WebAIM articles, Semantic Structure and Designing for Screen Reader Compatibility. I also recommend viewing the recording of the CanvasLIVE webinar and joining the Canvas Mobile Users Group.

## 22-Keyboard Shortcuts

The keyboard shortcut icon was added recently and provides a quick view of the keyboard shortcuts you can use with the rich content editor. See the following discussion about the keyboard shortcuts and other hidden gems in Canvas: Your ideas of Canvas' best kept secrets

## 23-HTML View

Use the HTML editor to switch to code view so you can edit the code. Please note there are only certain HTML elements (tags) that are allowed in the editor, and any elements added that are not allowed will be stripped out of the page when you save the page.
{}
## Parallel Axis Theorem

If the inertia tensor for a set of axes with the center of mass at the origin is calculated, the tensor for any set of parallel axes can be easily derived. The translation of the coordinates is given by

$$\vec{r}\,'_\alpha = \vec{r}_\alpha + \vec{a}$$

where $$\vec{a}$$ is a constant vector. We now simply compute the inertia tensor for the new set of axes. Because the center of mass is at the original origin, $$\sum_\alpha m_\alpha \vec{r}_\alpha = 0$$, so the cross terms in the expansion vanish and

$$I'_{jk} = \sum_\alpha m_\alpha \left[ |\vec{r}_\alpha + \vec{a}|^2 \delta_{jk} - (r_{\alpha j} + a_j)(r_{\alpha k} + a_k) \right] = I^{(\mathrm{cm})}_{jk} + M \left( a^2 \delta_{jk} - a_j a_k \right)$$

This result is called the Parallel Axis Theorem. It can save us a lot of time recalculating the inertia tensor for some object. Note that the parallel axis theorem shows how the inertia tensor depends on the origin. Angular momentum, torque, and kinetic energy all depend on the origin. This is physically relevant if the origin is a fixed point in the rotation. The origin should be chosen to satisfy the conditions of the physical problem being solved.

Jim Branson 2012-10-21
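The theorem is easy to verify numerically for a small set of point masses. A sketch (the masses, positions, and displacement vector are arbitrary picks) comparing a direct computation about displaced axes with the theorem's prediction:

```python
def inertia_tensor(masses, positions):
    """I_jk = sum_i m_i * ((r_i . r_i) * delta_jk - r_ij * r_ik)."""
    I = [[0.0] * 3 for _ in range(3)]
    for m, r in zip(masses, positions):
        r2 = sum(c * c for c in r)
        for j in range(3):
            for k in range(3):
                I[j][k] += m * ((r2 if j == k else 0.0) - r[j] * r[k])
    return I

masses = [1.0, 2.0, 3.0]
pos = [[1.0, 0.0, 2.0], [0.0, 1.0, -1.0], [-2.0, 0.5, 0.0]]

# shift so the center of mass sits at the origin (required by the theorem)
M = sum(masses)
cm = [sum(m * r[j] for m, r in zip(masses, pos)) / M for j in range(3)]
pos = [[r[j] - cm[j] for j in range(3)] for r in pos]

a = [0.3, -1.0, 2.0]                 # constant displacement vector of the coordinates
a2 = sum(c * c for c in a)

I_cm = inertia_tensor(masses, pos)
I_direct = inertia_tensor(masses, [[r[j] + a[j] for j in range(3)] for r in pos])

# parallel axis theorem: I'_jk = I_jk(cm) + M * (a^2 delta_jk - a_j a_k)
I_theorem = [[I_cm[j][k] + M * ((a2 if j == k else 0.0) - a[j] * a[k])
              for k in range(3)] for j in range(3)]
```

The two tensors agree component by component, up to floating-point rounding.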
{}
# Fight Finance

Your firm's research scientists can begin an exciting new project at a cost of $10m now, after which there’s a:

• 70% chance that cash flows will be $1m per year forever, starting in 5 years (t=5). This is the A state of the world.
• 20% chance that cash flows will be $3m per year forever, starting in 5 years (t=5). This is the B state of the world.
• 10% chance of a major breakthrough, in which case the cash flows will be $20m per year forever starting in 5 years (t=5), or the project can be expanded by investing another $10m (at t=5), which is expected to give cash flows of $60m per year forever, starting at year 9 (t=9). This is the C state of the world.

The firm's cost of capital is 10% pa. What's the present value (at t=0) of the option to expand in year 5?
## The curvature of contact structure on 3-manifolds

##### Authors

We study the sectional curvature of plane distributions on 3-manifolds. We show that if the distribution is a contact structure, it is easy to manipulate this curvature. As a corollary we obtain that for every transversally oriented contact structure on a closed 3-dimensional manifold $M$ there is a metric such that the sectional curvature of the contact distribution is equal to $-1$. We also introduce the notion of the Gaussian curvature of a plane distribution. For this notion of curvature we obtain similar results.
Chapter 12.I, Problem 10RE

### Contemporary Mathematics for Busin... 8th Edition
Robert Brechner + 1 other
ISBN: 9781305585447

Textbook Problem

# Use Table 12-1 to calculate the future value of the following annuities due.

| Annuity Payment | Payment Frequency | Time Period (years) | Nominal Rate (%) | Interest Compounded | Future Value of the Annuity |
|---|---|---|---|---|---|
| 10. $4,400 | every 6 months | 8 | 6 | Semiannually | ______________ |

To determine

To calculate: The future value of an annuity due where the annuity payment is $4,400, the frequency of payment is 6 months, the time duration is 8 years, the nominal rate of return is 6%, and interest is compounded semiannually.

Explanation

Given Information: The annuity payment is $4,400, the frequency of payment is 6 months, the time duration is 8 years, the nominal rate of return is 6%, and interest is compounded semiannually.

Formula used: The steps for calculating the future value of an annuity due are:

Step 1: First, the number of periods of the annuity must be calculated.
Step 2: The interest rate per period must be calculated.
Step 3: Use Table 12-1 to locate the ordinary annuity table factor that lies at the intersection of the rate column and the periods row.
Step 4: 1.00000 must be subtracted from the ordinary annuity table factor in order to get the annuity due factor.
Step 5: Finally, calculate the future value of the annuity due.

The formulas to compute the future value of the annuity due are:

Future Value = Annuity due table factor × Annuity payment
Annuity due table factor = Ordinary annuity table factor − 1.00000

Calculation: Consider that the annuity payment is $4,400, the frequency of payment is 6 months, the time duration is 8 years, the nominal rate of return is 6%, and interest is compounded semiannually. As the interest is compounded semiannually, the interest rate per period is 6% ÷ 2 = 3%.
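The steps above can be sketched in code. Since Table 12-1 itself is not reproduced here, this sketch substitutes the closed-form ordinary-annuity factor $((1+i)^n - 1)/i$ for the table lookup (an assumption), and uses the fact that every payment of an annuity due earns one extra period of interest:

```python
# Sketch of the annuity-due calculation; Table 12-1 is replaced by the
# closed-form factor, so cent-level results may differ slightly from the
# textbook's five-decimal table factors.
payment = 4400.0
periods = 8 * 2                  # Step 1: 8 years of semiannual payments
i = 0.06 / 2                     # Step 2: 6% nominal / 2 = 3% per period

# Step 3 equivalent: closed-form ordinary annuity factor.
ordinary_factor = ((1 + i)**periods - 1) / i
# Annuity due: each payment compounds for one extra period.
future_value = payment * ordinary_factor * (1 + i)
print(round(future_value, 2))
```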
# Homework Help: Understanding a problem that uses the epsilon-delta definition

1. Apr 10, 2014

### Genericcoder

Let E = epsilon and D = delta; the problem is as follows:

Let f(x) = (2x^2 - 3x + 3). Prove that lim as x approaches 3 of f(x) = 21. We write

|f(x) - 21| = |x^2 + 2x - 15| = |x + 5||x - 3|

To make this small, we need a bound on the size of |x + 5| when x is close to 3. For example, if we arbitrarily require that |x - 3| < 1, then

|x + 5| = |x - 3 + 8| <= |x - 3| + |8| < 1 + 8 = 9

To make f(x) within E of 21, we want to have |x + 5| < 9 and |x - 3| < E/9.

I don't understand how he got E/9, i.e., |x - 3| < E/9?

2. Apr 10, 2014

### jbunniii

This is clearly false. Note that $f(3) = 12$, not $21$. And $f$ is continuous, so $\lim_{x \rightarrow 3}f(x) = f(3)$. Is there a typo?

3. Apr 10, 2014

### jbunniii

Also, how did you get this:

|f(x) - 21| = |x^2 + 2x - 15|

from $f(x) = 2x^2 - 3x + 3$? Are you sure you are not mixing up two different problems?

4. Apr 10, 2014

### Genericcoder

It should be 2x^2 + 2x + 6, you're right! But the same logic holds for the problem that I typed. I don't know how he got |x - 3| < E/9...

Last edited: Apr 10, 2014

5. Apr 10, 2014

### Genericcoder

Sorry, I had a typo, you're right, it should be 2x^2 + 2x + 6!

6. Apr 10, 2014

### jbunniii

That still doesn't have a limit of $21$ as $x \rightarrow 3$. It's hard to help if you don't write down the correct problem! From the line

|f(x) - 21| = |x^2 + 2x - 15| = |x + 5||x - 3|

I am going to assume that you meant $f(x) = x^2 + 2x + 6$, which does have the limit $21$ as $x \rightarrow 3$. So, proceeding from that assumption:

Clearly it's not going to be a problem to make $|x-3|$ as small as we like as $x \rightarrow 3$. So as your narrative says, we just need to make sure that $|x+5|$ doesn't grow without bound as we shrink $|x-3|$ to zero. I assume you are OK with the logic that shows that if $|x-3| < 1$, then $|x+5| < 9$.

So now our goal is to make $|x+5||x-3| < \epsilon$. We already know we need $|x-3|< 1$ in order for the bound $|x+5| < 9$ to be valid.
If we ALSO had $|x-3| < \epsilon / 9$, then we could conclude that $$|x+5||x-3| < 9 \cdot \frac{\epsilon}{9} = \epsilon$$ So how should we define $\delta$?

7. Apr 10, 2014

### Genericcoder

But why did you assume |x - 3| < E/9 out of nowhere?

8. Apr 10, 2014

### jbunniii

Because I recognized that was the factor I needed in order to get $|x+5||x-3| < \epsilon$, given that $|x+5| < 9$. So all that needs to be done is to show that the assumption can be achieved. In other words, we need a $\delta$ such that if $|x-3| < \delta$, then both of the assumptions that we have made are satisfied, namely $|x-3| < \epsilon/9$ and $|x-3| < 1$. How can I choose $\delta$ to guarantee this?

9. Apr 10, 2014

### Genericcoder

Oh I see, okay, so in order to achieve this we make E = min{1, E/9}, right?

10. Apr 10, 2014

### jbunniii

I assume you mean D = min{1, E/9}. That is correct.

11. Apr 10, 2014

### Genericcoder

Can you give me a website that has a lot of examples of epsilon-delta proofs of limits?

12. Apr 11, 2014

### jbunniii

Have you looked in the "Mathematics Learning Materials" section? https://www.physicsforums.com/forumdisplay.php?f=178 [Broken]

Last edited by a moderator: May 6, 2017
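The choice $\delta = \min(1, \epsilon/9)$ from the thread can be spot-checked numerically; this is a quick sketch (not part of the original thread) using the corrected $f(x) = x^2 + 2x + 6$:

```python
# Numerical spot-check of delta = min(1, eps/9) for f(x) = x^2 + 2x + 6
# and the limit 21 at x = 3.
def f(x):
    return x**2 + 2*x + 6

for eps in (10.0, 1.0, 0.1, 0.001):
    delta = min(1.0, eps / 9.0)
    for k in range(1, 100):              # sample points inside (3-delta, 3+delta)
        for x in (3 + delta * k / 100, 3 - delta * k / 100):
            assert abs(f(x) - 21) < eps
print("delta = min(1, eps/9) works at every sampled point")
```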
# Applied Atomic Spectroscopy

By H. Jäger (auth.), E. L. Grove (eds.)

From the first appearance of the classic The Spectrum Analysis in 1885 to the present, the field of emission spectroscopy has been evolving and changing. During the last twenty to thirty years in particular there has been an explosion of new ideas and developments. Of late, the aura of glamour has supposedly been transferred to other techniques; nevertheless, it is estimated that seventy-five percent or more of the analyses done on metals are accomplished by emission spectroscopy. Further, the excellent sensitivity of plasma sources has created a demand for this technique in such divergent areas as direct trace-element analyses of polluted waters. Developments in the replication process and advances in the art of producing ruled and holographic gratings, as well as improvements in the materials from which these gratings are made, have made excellent gratings available at reasonable prices. This availability, and the development of plane grating mounts, has contributed to the increasing popularity of grating spectrometers as compared with the large prism spectrograph and concave grating mounts. Other areas of progress include new and improved methods for excitation, the use of controlled atmospheres and the extension of spectrometry into the vacuum region, the widespread application of these techniques for the analysis of nonmetals in metals, the increasing use of polychromators with concave or echelle gratings, and improved readout systems for better reading of spectrographic plates and more efficient data handling.

Similar applied books

Efficient numerical methods for non-local operators

Hierarchical matrices present an efficient way of treating dense matrices that arise in the context of integral equations, elliptic partial differential equations, and control theory. Whereas a dense $n\times n$ matrix in standard representation requires $n^2$ units of storage, a hierarchical matrix can approximate the matrix in a compact representation requiring only $O(nk \log n)$ units of storage, where $k$ is a parameter controlling the accuracy.

CRC Standard Mathematical Tables and Formulae, 31st Edition

A perennial bestseller, the 30th edition of CRC Standard Mathematical Tables and Formulae was the first "modern" edition of the handbook, adapted to be useful in the era of personal computers and powerful handheld devices. This version will quickly establish itself as the "user-friendly" edition.

The State of Deformation in Earthlike Self-Gravitating Objects

This book presents an in-depth continuum mechanics analysis of the deformation due to self-gravitation in terrestrial objects, such as the inner planets, rocky moons and asteroids. Following a brief history of the problem, modern continuum mechanics tools are presented in order to derive the underlying field equations, both for solid and fluid material models.
# zbMATH — the first resource for mathematics The sharp Jackson inequality in the space $$L_p$$ on the sphere. (English. Russian original) Zbl 1058.41502 Math. Notes 66, No. 1, 40-50 (1999); translation from Mat. Zametki 66, No. 1, 50-62 (1999). The author gives a proof of his theorem which was announced in [”The Jackson theorem in $$L_p(S^{n-1})$$” (Russian), in Voronezh Winter School on Modern Methods in the Theory of Functions and Related Problems in Applied Mathematics and Mechanics (abstracts), p. 56, Izdat. Voronezh. Gos. Univ., Voronezh, 1997; per bibl.]. Let $$S^{n-1}=\{x\in\mathbb R^n\colon |x|=1\}, \Gamma_{\alpha}(x)=\{y\in S^{n-1}\colon xy=\cos\alpha\}$$, and $$\sigma$$ and $$\gamma_{\alpha}$$ be normalized measures on $$S^{n-1}$$ and $$\Gamma_{\alpha}(x)$$ respectively. Let $\begin{split} L_p(S^{n-1})=\Big\{f\colon S^{n-1}\to \mathbb C| \| f\|_p=\Big(\int_{S^{n-1}} |f|^p \, d\sigma\Big)^{1/p}<\infty\Big\},\\ 1\leq p\leq\infty,\end{split}$ and let $E_R(f,S^{n-1})_p=\inf\Big\{\|f-g\|_p\colon g\in\sum_{l=0}^{R-1}\bigoplus H_l\Big\},\quad R\in\mathbb N,$ be the best approximation of the function $$f\in L_p(S^{n-1})$$ by spherical harmonics $$H_l$$ of degrees $$\leq R-1$$. Let $\omega(\delta,f,S^{n-1})_p=\sup_{\alpha\leq\delta}\Big(\int_{S^{n-1}} \int_{\Gamma_{\alpha}(x)}|f(y)-f(x)|^p\, d_{\gamma_{\alpha}}(y)\, d\sigma(x) \Big)^{1/p},$ $D(\delta,R,S^{n-1})_p=\sup_{f\in L_p(S^{n-1})}\frac{E_R(f,S^{n-1})_p} {\omega(\delta,f,S^{n-1})_p}$ be the exact constant in the Jackson inequality and $$t_R=\cos\tau_R$$ be the greatest zero of Gegenbauer’s polynomial $$P_R(t)$$. Theorem. If $$n\geq 3, 1\leq p<2$$, then $D(2\tau_R,2R-1,S^{n-1})_p=2^{-1/p'},\quad p'=p/(p-1).$ ##### MSC: 41A17 Inequalities in approximation (Bernstein, Jackson, Nikol’skiĭ-type inequalities) Full Text: ##### References: [1] D. V. 
Gorbachev, ”Jackson’s theorem in $$L_p(S^{n-1})$$,” in: Abstracts of Papers of the Voronezh Winter School ”Contemporary Methods of Function Theory and Related Problems of Mathematics and Mechanics” [in Russian], Izd. Voronezh Gos. Univ., Voronezh (1997), p. 56. [2] N. Ya. Vilenkin, Special Functions and Group Representation Theory [in Russian], Nauka, Moscow (1991). [3] Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (M. Abramowitz and I. Stegun, editors), National Bureau of Standards, Washington, D.C. (1964). [4] A. G. Babenko, ”Sharp Jackson-Stechkin inequality in $$L_2$$ for multidimensional spheres,” Mat. Zametki [Math. Notes], 60, No. 3, 333–355 (1996). · Zbl 0903.41014 [5] N. P. Korneichuk, Extremum Problems in Approximation Theory [in Russian], Nauka, Moscow (1976). [6] N. I. Chernykh, ”Jackson’s inequality in $$L_p(0,2\pi)$$ with sharp constant,” Trudy Mat. Inst. Steklov [Proc. Steklov Inst. Math.], 198, 232–241 (1992). · Zbl 0822.42002 [7] V. I. Ivanov, ”On the approximation of functions in the spaces $$L_p$$,” Mat. Zametki [Math. Notes], 56, No. 2, 15–40 (1994). [8] V. A. Yudin, ”Lower bounds for spherical designs,” Izv. Ross. Akad. Nauk Ser. Mat. [Russian Acad. Sci. Izv. Math.], 61, No. 3, 213–223 (1997). · Zbl 0890.05015 [9] E. Hewitt and K. Ross, Abstract Harmonic Analysis, Springer-Verlag, Heidelberg (1970). · Zbl 0213.40103 [10] V. I. Levenshtein, ”Bounds for packings of metric spaces with applications,” Problems of Cybernetics, 40, 43–110 (1983). [11] A. Benedek and R. Panzone, ”The spaces $$L_p$$ with mixed norm,” Duke Math. J., 28, 301–324 (1961). · Zbl 0107.08902 · doi:10.1215/S0012-7094-61-02828-9 [12] G. Szegő, Orthogonal Polynomials, Colloquium Publ., Vol. XXIII, Amer. Math. Soc., Providence, R.I. (1959). [13] E. Stein and G. Weiss, Introduction to Fourier Analysis on Euclidean Spaces, Princeton Univ. Press, Princeton, N.J. (1974). [14] V. I. Berdyshev, ”On Jackson’s theorem in $$L_p$$,” Trudy Mat. Inst. Steklov [Proc.
Steklov Inst. Math.], 88, 13–16 (1967). · Zbl 0162.36201
# Hardness of Maximum Independent Set in 3-Colorable Graphs

Let $$G = (V,E)$$ be an undirected graph such that there is a proper coloring of the vertices of $$G$$ in three colors.

Question: In such graphs, are there known results for the hardness of finding a maximum independent set? E.g., can one find an independent set of cardinality at least $$\varepsilon \cdot |V|$$ in polynomial time for some constant $$\varepsilon>0$$?

As detailed below, the problem of finding an independent set of size $$\Omega(n^{1-\delta})$$ in 3-colorable graphs is essentially equivalent to $$O(n^\delta)$$-approximating 3-COLOR. Currently, the best poly-time approximation ratio known to be achievable for 3-COLOR is $$O(n^\delta)$$ for $$\delta>0.19$$. If you could find independent sets of size $$\Omega(n^{0.81})$$ in 3-colorable graphs in polynomial time, you could achieve a poly-time approximation ratio of $$O(n^{0.19})$$ for 3-COLOR, beating the best current ratio. (Here, by $$f(n)$$-approximating 3-COLOR, I mean the following problem: given a 3-colorable graph with $$n$$ vertices, find a coloring that uses at most $$3f(n)$$ colors.)

Lemma 1. Fix any constant $$\delta>0$$. There is a poly-time $$O(n^\delta)$$-approximation algorithm for 3-COLOR if and only if there is a poly-time algorithm for finding an independent set of size $$\Omega(n^{1-\delta})$$ in any given 3-colorable graph.

Proof. The "only if" direction is easy. Just compute a coloring that uses at most $$O(n^\delta)$$ colors and return the largest color class. This is necessarily an independent set of size at least $$\Omega(n/n^{\delta}) = \Omega(n^{1-\delta})$$.

The "if" direction is not much harder. Suppose that algorithm $$A$$ finds an independent set of size $$\Omega(n^{1-\delta})$$ in any given 3-colorable graph in polynomial time.
Given a graph $$G=(V, E)$$, define algorithm $$B$$ to find a coloring for $$G$$ as follows: use $$A$$ to find a large independent set $$S$$ in $$G$$, recurse on the graph $$G'$$ obtained by deleting the vertices in $$S$$ from $$G$$ to find a coloring of $$G'$$, then add $$S$$ as a new color to this coloring to produce a coloring of $$G$$. (Of course the base case is when the graph has no vertices.) Note that $$G'$$ is 3-colorable, so the algorithm is well-defined.

Let $$C(n)$$ be the maximum number of colors $$B$$ uses to color any 3-colorable $$n$$-vertex graph. Then $$C(0) = 0$$ and for $$n\ge 1$$ we have $$C(n) \le 1 + C(\lfloor n - \Omega(n^{1-\delta})\rfloor).$$ Expanding the recurrence $$O((n/2)^\delta)$$ times until the argument has size at most $$n/2$$ we have $$C(n) \le O((n/2)^\delta) + C(n/2).$$ Expanding this in turn gives the bound $$C(n) \le O\big(\sum_{i\ge 1} (n/2^i)^\delta\big) = O(n^\delta/\delta) = O(n^\delta).~~~~~\Box$$

The best poly-time approximation algorithms known for 3-COLOR are apparently $$O(n^{\delta})$$-approximation algorithms for $$\delta > 0.19$$. (For a good survey of these results see the introduction of this paper.) If you could find independent sets of size $$\Omega(n^{0.81})$$ in 3-colorable graphs in polynomial time, this would directly yield an $$O(n^{0.19})$$-approximation algorithm for 3-COLOR, better than these currently known results.

For graph coloring in general, Lund and Yannakakis showed that, unless P=NP, for some constant $$\delta > 0$$, there is no polynomial-time $$O(n^\delta)$$-approximation algorithm for coloring. This may hold for 3-COLOR as well, but as far as I know this has not yet been shown. A quick search turns up a couple of recent hardness results: this and this.

• Thank you very much for answering my question and for the detailed survey.
A follow-up question if I may: Would the problem remain as hard (or, at least, without a known algorithm) if each vertex $v \in V$ has degree $d(v) \geq \Omega(\delta \cdot n)$? – John Nov 28, 2022 at 10:44
• Not as hard, I think. For example, you could do something like the following. Given a 3-colorable graph $G$, let $d$ be the maximum degree of any vertex in $G$ and let $v$ be a vertex of degree $d$. Find a 2-coloring $C$ of the neighbor set of $v$ (this must exist and can be found in linear time). Return the larger of the two color classes in $C$. Then each color class in $C$ is an independent set, and the largest one has size at least $d/2$, so this linear-time algorithm returns an independent set of size at least $d/2$. In your case this is $\Omega(\delta n)$. Probably one can do better. Nov 28, 2022 at 15:17
• Thank you! This was helpful :) – John Nov 28, 2022 at 18:06
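The linear-time procedure sketched in the comments can be implemented directly. This is a hedged sketch (the function name and the adjacency-dict graph representation are my own); it relies on the fact that in any proper 3-coloring the neighbors of a vertex avoid that vertex's color, so the subgraph induced on a neighborhood is bipartite:

```python
from collections import deque

def large_independent_set(adj):
    """Return an independent set of size >= max_degree/2 in a 3-colorable
    graph, given as a dict mapping each vertex to its set of neighbors.

    The subgraph induced on N(v) is bipartite, so either side of a
    2-coloring of it is an independent set; return the larger side.
    """
    v = max(adj, key=lambda u: len(adj[u]))      # vertex of maximum degree
    nbrs = adj[v]
    color = {}
    for start in nbrs:                           # BFS 2-coloring of N(v)
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u] & nbrs:              # stay inside the induced subgraph
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
    side0 = {u for u in nbrs if color[u] == 0}
    side1 = nbrs - side0
    return side0 if len(side0) >= len(side1) else side1

# Demo on the complete tripartite graph K_{2,2,2} (3-colorable, max degree 4):
parts = [{0, 1}, {2, 3}, {4, 5}]
adj = {u: {w for p in parts if u not in p for w in p} for u in range(6)}
S = large_independent_set(adj)
assert len(S) >= 2 and all(w not in adj[u] for u in S for w in S)
```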
# The Drake equation and the Cambrian explosion

This summer billionaire Yuri Milner announced that he would spend upwards of 100 million dollars to search for extraterrestrial intelligent life (here is the New York Times article). This quest to see if we have company started about fifty years ago when Frank Drake pointed a radio telescope at some stars. To help estimate the number of possible civilizations, $N$, Drake wrote down his celebrated equation,

$N = R_* f_p n_e f_l f_i f_c L$

where $R_*$ is the rate of star formation, $f_p$ is the fraction of stars with planets, $n_e$ is the average number of planets per star that could support life, $f_l$ is the fraction of planets that develop life, $f_i$ is the fraction of those planets that develop intelligent life, $f_c$ is the fraction of civilizations that emit signals, and $L$ is the length of time civilizations emit signals.

The past few years have demonstrated that planets in the galaxy are likely to be plentiful, and although the technology to locate earth-like planets does not yet exist, my guess is that they will also be plentiful. So does that mean that it is just a matter of time before we find ET? I'm going to come on record here and say no. My guess is that life is rare and intelligent life may be so rare that there could be only one civilization at a time in any given galaxy. While we are now filling in the numbers for the astronomical factors at the front of Drake's equation, we have absolutely no idea about the biological factors that follow. However, I have good reason to believe that their product is astronomically small, and that reason is statistical independence. Although Drake characterized the probability of intelligent life as the probability of life forming times the probability it goes on to develop extra-planetary communication capability, there are actually a lot of factors in between. One striking example is the probability of the formation of multicellular life.
In earth's history, for the better part of three and a half billion years we had mostly single-cellular life and maybe a smattering of multicellular experiments. Then suddenly, about half a billion years ago, we had the Cambrian Explosion, where the multicellular animal life from which we are descended suddenly came onto the scene. This implies that forming multicellular life is extremely difficult, and it is easy to envision an earth where it never formed at all.

We can continue. If it weren't for an asteroid impact, the dinosaurs may never have gone extinct and mammals may not have developed. Even more recently, there seem to have been many species of advanced primates, yet only one invented radios. Agriculture only developed ten thousand years ago, which means that modern humans took about a hundred thousand years to discover it, and only in one place. I think it is equally plausible that humans could have gone extinct like all of our other australopithecus and homo cousins. Life in the sea has existed much longer than life on land, and there is no technologically advanced sea creature, although I do think octopuses, dolphins and whales are intelligent.

We have around 100 billion stars in the galaxy, and let's just say that each has a habitable planet. Well, if the probability of each stage of life is one in a billion, and if we need say three stages to attain technology, then the probability of finding ET is one in $10^{16}$. I would say that this is an optimistic estimate. Probabilities get small really quickly when you multiply them together. The probability of single-cellular life will be much higher. It is possible that there could be a hundred planets in our galaxy that have life, but the chance that one of those is within a hundred light years will again be very low. However, I do think it is a worthwhile exercise to look for extraterrestrial life, especially for oxygen or other gases emitted by life in the atmospheres of exoplanets.
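The back-of-the-envelope arithmetic above can be checked in a couple of lines (the stage probabilities are, as the text says, guesses):

```python
import math

# ~100 billion habitable planets, three hard evolutionary stages,
# each with an assumed probability of one in a billion.
planets = 1e11
p_stage = 1e-9
stages = 3

expected_civilizations = planets * p_stage**stages
print(f"{expected_civilizations:.0e}")   # prints 1e-16: one chance in 10**16
assert math.isclose(expected_civilizations, 1e-16)
```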
It could tell us a lot about biology on earth.

2015-10-1: I corrected a factor of 10 error in some of the numbers.

# World's fastest bicycle

University of Toronto Engineering Science alumni and students construct the world's fastest human-powered vehicle. Link here. (In case you were wondering, I'm EngSci 8T5.)

# Selection of the week

Famed Northern Irish flautist Sir James Galway (the man with the golden flute) and pianist Phillip Moll play the first movement of Sergei Prokofiev's Flute Sonata in D, Op. 94, composed in 1943.

# Selection of the week

Canadian clarinet sensation Eric Abramovitz playing Francis Poulenc's Clarinet Sonata in B-flat, FP 184.

# The blurry line between human and ape

Primate researcher extraordinaire Frans de Waal pens an excellent commentary in the New York Times on the recent discovery of Homo naledi. His thesis, that the distinction between human and nonhuman is not clear cut, is something I wholeheartedly subscribe to. No matter what we look at, the difference between humans and other species is almost always quantitative and not qualitative. Here are some excerpts, and I recommend you read the whole thing:

The fabulous find, named Homo naledi, has rightly been celebrated for both the number of fossils and their completeness. It has australopithecine-like hips and an ape-size brain, yet its feet and teeth are typical of the genus Homo. The mixed features of these prehistoric remains upset the received human origin story, according to which bipedalism ushered in technology, dietary change and high intelligence. Part of the new species' physique lags behind this scenario, while another part is ahead. It is aptly called a mosaic species. We like the new better than the old, though, and treat every fossil as if it must fit somewhere on a timeline leading to the crown of creation.
Chris Stringer, a prominent British paleoanthropologist who was not involved in the study, told BBC News: "What we are seeing is more and more species of creatures that suggests that nature was experimenting with how to evolve humans, thus giving rise to several different types of humanlike creatures originating in parallel in different parts of Africa."

This represents a shockingly teleological view, as if natural selection is seeking certain outcomes, which it is not. It doesn't do so any more than a river seeks to reach the ocean. News reports spoke of a "new ancestor," even a "new human species," assuming a ladder heading our way, whereas what we are actually facing when we investigate our ancestry is a tangle of branches. There is no good reason to put Homo naledi on the branch that produced us. Nor does this make the discovery any less interesting…

…The problem is that we keep assuming that there is a point at which we became human. This is about as unlikely as there being a precise wavelength at which the color spectrum turns from orange into red. The typical proposition of how this happened is that of a mental breakthrough — a miraculous spark — that made us radically different. But if we have learned anything from more than 50 years of research on chimpanzees and other intelligent animals, it is that the wall between human and animal cognition is like a Swiss cheese…

…It is an odd coincidence that "naledi" is an anagram of "denial." We are trying way too hard to deny that we are modified apes. The discovery of these fossils is a major paleontological breakthrough. Why not seize this moment to overcome our anthropocentrism and recognize the fuzziness of the distinctions within our extended family? We are one rich collection of mosaics, not only genetically and anatomically, but also mentally.

# Abraham Bers, 1930 – 2015

I was saddened to hear that my PhD thesis advisor at MIT, Professor Abraham Bers, passed away last week at the age of 85.
Abe was a fantastic physicist and mentor. He will be dearly missed by his many students. I showed up at MIT in the fall of 1986 with the intent of doing experimental particle physics. I took Abe’s plasma physics course as a breadth requirement for my degree. When I began, I didn’t know what a plasma was but by the end of the term I had joined his group. Abe was one of the best teachers I have ever had. His lectures exemplified his extremely clear and insightful mind. I still consult the notes from his classes from time to time. Abe also had a great skill in finding the right problem for students. I struggled to get started doing research but one day Abe came to my desk with this old Russian book and showed me a figure. He said that it didn’t make sense according to the current theory and asked me to see if I could understand it. Somehow, this lit a spark in me and pursuing that little puzzle resulted in my first three papers. However, Abe also realized, even before I did I think, that I actually liked applied math better than physics. Thus, after finishing these papers and building some command in the field, he suggested that I completely switch my focus to nonlinear dynamics and chaos, which was very hot at the time. This turned out to be the perfect thing for me and it also made me realize that I could always change fields. I have never been afraid of going outside of my comfort zone since. I am always thankful for the excellent training I received at MIT. The most eventful experience of those days was our weekly group meetings. These were famous no holds barred affairs where the job of the audience was to try to tear down everything the presenter said. I would prepare for a week to get ready when it was my turn. I couldn’t even get through the first slide my first time but by the time I graduated, nothing could faze me. Although the arguments could get quite heated at times, Abe never lost his cool. 
He would also come to my office after a particularly bad presentation to cheer me up. I don't ever have any stress when giving talks or speaking in public now because I know that there could never be a sharper or tougher audience than Abe. To me, Abe will always represent the gentleman scholar to which I've always aspired. He was always impeccably dressed with his tweed jacket, Burberry trench coat, and trademark bow tie. Well before good coffee became de rigueur in the US, Abe was a connoisseur and kept his coffee in a freezer in his office. He led a balanced life. He took work very seriously but also made sure to have time for his family and other pursuits. I visited him at MIT a few years ago and he was just as excited about what he was doing then as he was when I was a graduate student. Although he is gone, he will not be forgotten. The book he had been working on, Plasma Waves and Fusion, will be published this fall. I will be sure to get a copy as soon as it comes out.

2015-9-16: Here is a link to his MIT obituary.

# Selection of the week

Samuel Barber's Adagio for Strings. Appropriate for this sad day. Leonard Slatkin conducts the Detroit Symphony Orchestra.

# Paper on the effect of food intake fluctuations on body weight

Chow, C. C. & Hall, K. D. Short and long-term energy intake patterns and their implications for human body weight regulation. Physiology & Behavior 134:60–65 (2014). doi:10.1016/j.physbeh.2014.02.044

Abstract: Adults consume millions of kilocalories over the course of a few years, but the typical weight gain amounts to only a few thousand kilocalories of stored energy. Furthermore, food intake is highly variable from day to day and yet body weight is remarkably stable. These facts have been used as evidence to support the hypothesis that human body weight is regulated by active control of food intake operating on both short and long time scales.
Here, we demonstrate that active control of human food intake on short time scales is not required for body weight stability and that the current evidence for long term control of food intake is equivocal. To provide more data on this issue, we emphasize the urgent need for developing new methods for accurately measuring energy intake changes over long time scales. We propose that repeated body weight measurements can be used along with mathematical modeling to calculate long-term changes in energy intake and thereby quantify adherence to a diet intervention and provide dynamic feedback to individuals that seek to control their body weight. # The world of Gary Taubes Science writer Gary Taubes has a recent New York Times commentary criticizing Kevin Hall’s recent paper on the differential metabolic effects of low fat vs low carbohydrate diets. See here for my recent post on the experiment. Taubes is probably best known for his views on nutrition and as an advocate for low carb diets although he has two earlier books on the sociology of physics. The main premise running through his four books is that science is susceptible to capture by the vanity, ambition, arrogance, and plain stupidity of scientists. He is pro-science but anti-scientist. His first book on nutrition – Good Calories, Bad Calories, was about how the medical establishment and in particular nutritionists have provided wrong and potentially dangerous advice on diets for decades. He takes direct aim at Ancel Keys as one of the main culprits for pushing the reduction of dietary fat to prevent heart disease. The book is a great read and clearly demonstrates Taubes’s sharp mind and gifts as a story teller. In the course of researching the book, Taubes also discovered the biological mechanisms of insulin and this is what has mostly shaped his thinking about carbohydrates and obesity. He spells it out in more detail in his subsequent book – Why We Get Fat. 
I think that these two books are a perfect demonstration of why having a little knowledge and a high IQ can be a dangerous thing. Most people know of insulin as the hormone that goes awry in diabetes. When we fast, our insulin levels are low and our body, except for our brain, burns fat. If we then ingest carbohydrates, our insulin levels rise, which induces our body to utilize glucose (the main source of fuel in carbs) in favour of fat. Exercise will also cause a switch in fuel choice from fat to glucose. What is less well known is that insulin also suppresses the release of fat from fat cells (adipocytes), which is something I have modeled (see here). This seems to have been a revelation to Taubes – Clearly, if you eat lots of carbs, you will have lots of insulin, which will sequester fat in fat cells. Ergo, eating carbs makes you fat! Nutritionists were so focused on their poorly designed studies that they missed the blatantly obvious. This is just another example of how arrogant scientists get things wrong. Taubes then proposed a simple experiment – take two groups of people and put one group on a high carb diet and the other on a low carb diet with the same caloric content, and see who loses weight. Well, Kevin Hall anticipated this request with basically the same experiment, although for a different purpose. What Kevin noticed in his model was that if you cut carbs and keep everything else the same, insulin goes down and the body responds by burning much more fat. However, if you cut fat, there is nothing in the model that told the body that the fat was missing. Insulin didn't change and thus the body just burned the same amount of carbs as before. He found this puzzling. Surely there must be a fat detector that we don't know about, so he went about testing it. I remember he and his fellows labouring diligently for what seemed like years writing the protocol and getting the necessary approval and resources to do the experiment.
The result was exactly as the model predicted. We really don't have a fat sensor. However, the subjects lost more fat on the low fat diet than they did on the low carb diet. This is not exactly the experiment Taubes wanted to do, which was to change the macronutrient composition but keep the calories the same. He then hypothesized that those on the low carb diet would lose weight and those on the low fat, high carb diet would gain weight. Kevin and a consortium of top obesity researchers have since done that experiment and the results will come out shortly. Now is this surprising? Well, not really, for while Taubes is absolutely correct that insulin suppresses fat utilization, the net outcome of insulin reduction is a quantitative and not a qualitative question. You cannot deduce the outcome with formal logic. The reason is that insulin cannot be elevated all the time. Even a continuous grazer must sleep at some point, whereupon insulin falls. You then must consider the net effect of high and low insulin over a day or longer to assess the outcome. This can only be determined empirically, and this is what Taubes fails to see or accept. He also commits a logical fallacy – just because a scientist is stupid doesn't mean he is wrong. Taubes's recent commentary criticizes Kevin's experiment by saying that 1) it uses a diet that is impossible to follow and 2) it ignores appetite. The response to the first point is that the experiment was meant to test a metabolic hypothesis and was not meant to test the effect of a diet. My response to his second point is to stare agape. When Taubes visited NIH a few years ago after his Good Calories, Bad Calories book came out, I offered the hypothesis that low carb diets could suppress appetite and this could be why they may be effective in reducing weight. However, he had no interest in this idea and Kevin has told me that he has repeatedly shown no interest in it.
(I don't need to give details on how people have been interested in appetite for decades since it is well done in this post.) I came to the conclusion that appetite control was the primary driver of the obesity epidemic shortly after arriving at NIH. In fact my first BSC presentation was on this topic. The recommendation by the committee was that I should do something else and that NIH was a bad fit for me. However, I am still here and I still believe appetite control is the key. # Paper on new myopia associated gene The prevalence of nearsightedness, or myopia, has almost doubled in the past thirty years from about 25% to 44%. No one knows why, but it is probably a gene-environment effect, like obesity. This recent paper in PLoS Genetics: APLP2 Regulates Refractive Error and Myopia Development in Mice and Humans, sheds light on the subject. It reports that a variant of the APLP2 gene is associated with myopia in people if they read a lot as children. Below is a figure of the result of a GWAS study showing the increase in myopia (more negative is more myopic) with age for those with the risk variant (GA) and for time spent reading. The effect size is pretty large and a myopic effect of APLP2 is seen in monkeys, mice, and humans. Thus, I think that this result will hold up. The authors also show that the APLP2 gene is involved in retinal signaling, particularly in amacrine cells. It is thus consistent with the theory that myopia is the result of feedback from the retina during development. Hence, if you are constantly focused on near objects, the eye will develop to accommodate that. So maybe you should send your 7 year old outside to play instead of sitting inside reading or playing video games. # Selection of the week Who said classical music couldn't be funny? Here is a rendition of PDQ Bach's version of Beethoven's Fifth Symphony. PDQ Bach is the brainchild of composer and music scholar Peter Schickele to put some fun back in classical music.
You can learn a lot about the construction of classical pieces from his work.
# Showing subgroups with equal Lie algebras are equal Let $$k$$ be a field. It might as well be algebraically closed, but I do not want to assume that it has characteristic $$0$$. I will write "group" for "affine group scheme over $$k$$", not assuming smoothness. Two groups can have the same Lie algebras without being equal. For example, if $$k$$ has characteristic $$2$$, then every maximal torus in $$\operatorname{SL}_2$$ has the same Lie algebra as the centre $$\mu_2$$. Even two smooth groups can have the same Lie algebras without being equal: for example, all maximal tori in $$\operatorname{SL}_2$$ have the same Lie algebra. At least it is true that, if a smooth group $$H$$ is contained in a connected group $$G$$, and their Lie algebras are equal, then $$H$$ equals $$G$$; and so, if two connected subgroups $$H_1$$ and $$H_2$$ of $$G$$ have equal Lie algebras and smooth intersection, then they are equal. I'm looking more for a result in line with Borel - Linear algebraic groups, Theorem 13.18(4)(d): given a maximal torus $$T$$ in a smooth, reductive group $$G$$, and a root $$\alpha$$ of $$T$$ in $$G$$, there is a unique smooth, connected subgroup of $$G$$ that is normalised by $$T$$ and whose Lie algebra is the $$\alpha$$-weight space of $$T$$ on $$\operatorname{Lie}(G)$$. The key ingredients here are reductivity and the torus action. So I'm interested in any more general results of this sort that allow one to deduce equality of groups from equality of their Lie algebras. If that's too broad, I'll focus a bit: suppose that $$G$$ is a smooth, reductive group; $$H_1$$ and $$H_2$$ are smooth, connected, reductive subgroups; and $$T$$ is a torus in $$H_1 \cap H_2$$ that is not necessarily maximal in $$G$$, but is maximal in both $$H_1$$ and $$H_2$$. In this setting, if the Lie algebras of $$H_1$$ and $$H_2$$ are equal, then can we conclude that the groups are equal? 
EDIT: I forgot to add, in case it helps, that, in my situation, $$\operatorname C_G(T)^\circ$$ (the connectedness being automatic if $$G$$ itself is connected) is a torus.

• I suggest the following question, that seems to be equivalent to yours: Let $U,\,U'$ be two 1-dimensional unipotent subgroups in a smooth connected reductive group $G$. Assume that ${\rm Lie\,}U={\rm Lie\,}U'$. Does it follow that $U=U'$? In char. 0 the answer is Yes... (I have no intuition in positive characteristic.) – Mikhail Borovoi Aug 8 at 19:13
• @MikhailBorovoi, I agree that question is natural. It is clear that it would suffice, but not clear to me that it is equivalent to mine. (I think that the torus action has got to play some role.) I originally hoped the torus action would be enough: if $T$ acts with "large orbits" on $U$ and $U'$, then it acts so on $U \cap U'$, so it must be smooth, right?—but it isn't. (The group often called $\alpha_p$, the $p$th-order neighbourhood of the identity, has such a torus action.) – LSpice Aug 8 at 19:16
• However, in line with @MikhailBorovoi's ideas, a reduction in the spirit of Borel's proof may be applied: by considering a fixed root $\alpha$ of $T$ in $\DeclareMathOperator\Lie{Lie}\Lie(H_1) = \Lie(H_2)$ and working inside $\operatorname C_G(\ker(\alpha)^\circ)$, we may assume that $H_1$ and (therefore) $H_2$ have semisimple rank $1$. – LSpice Aug 8 at 19:21
• The answer to my question is No, see this answer of Will Sawin. – Mikhail Borovoi Aug 8 at 20:00
• Indeed sorry I didn't read carefully. Possibly it would help writing "subgroups" instead of "groups" in the title. – YCor Aug 8 at 20:18

$$\DeclareMathOperator\Ad{Ad}\DeclareMathOperator\Cent{C}\DeclareMathOperator\GL{GL}\DeclareMathOperator\Lie{Lie}$$The key point is not, as I expected, whether $$\Cent_G(T)^\circ$$ is a torus, but whether it equals $$\Cent_G(\Lie(T))^\circ$$.
Certainly it is contained in the latter group, so this is the same as asking whether $$T$$ centralises $$\Cent_G(\Lie(T))^\circ$$. If we do not require this, then we may adapt a construction by @WillSawin, pointed out by @MikhailBorovoi, to give a counterexample that is quite close to the one I attempted in the comments. Specifically, we give connected, reductive subgroups $$H_1$$ and $$H_2$$ of $$G = \GL_4$$ that contain a common maximal torus $$T$$ (for which $$\Cent_G(T)^\circ$$ is itself a maximal torus in $$G$$), and satisfy $$\Lie(H_1) = \Lie(H_2)$$, but $$H_1 \ne H_2$$. Namely, let $$t$$ be any non-scalar diagonal matrix in $$\GL_2$$, and put $$H_1 = \left\{\begin{pmatrix} g & 0 \\ 0 & g^{[p]} \end{pmatrix} \mathrel\colon g \in \GL_2\right\}$$ and $$H_2 = \left\{\begin{pmatrix} g & 0 \\ 0 & t g^{[p]}t^{-1} \end{pmatrix} \mathrel\colon g \in \GL_2\right\}$$, where $$g^{[p]}$$ is the matrix obtained by raising every entry of $$g$$ to the $$p$$th power. Next we prove that, if $$H_1$$ and $$H_2$$ are connected, reductive subgroups of a common group $$G$$ that contain a common maximal torus $$T$$, and satisfy $$\Lie(H_1) = \Lie(H_2)$$, and if in addition $$T$$ centralises $$\Cent_G(\Lie(T))^\circ$$, then $$H_1$$ must equal $$H_2$$. As suggested by @MikhailBorovoi, it suffices to show that, for every root $$b$$ of $$T$$ in $$\Lie(H_1) = \Lie(H_2)$$, the corresponding root subgroups of $$b$$ in $$H_1$$ and $$H_2$$ are equal. Let $$\mathfrak u$$ be the common $$b$$-root subspace of $$\Lie(H_1) = \Lie(H_2)$$. Then we have $$T$$-equivariant isomorphisms $$e_{i\,b} \colon \mathfrak u \to H_i$$ such that $$\Ad(e_{i\,b}(X))Y$$ equals $$Y - \mathrm db(Y)X$$ for all $$X \in \mathfrak u$$ and all $$Y \in \Lie(T)$$. That is, $$e_{1\,b}(X)e_{2\,b}(X)^{-1}$$ lies in $$\Cent_G(\Lie(T))$$ for all $$X \in \mathfrak u$$, and hence, since $$\mathfrak u$$ is connected, in $$\Cent_G(\Lie(T))^\circ$$. 
Since this group is centralised by $$T$$, we see upon conjugating $$e_{1\,b}(X)e_{2\,b}(X)^{-1}$$ by $$t$$ that it equals $$e_{1\,b}(b(t)X)e_{2\,b}(b(t)X)^{-1}$$, for all $$X \in \mathfrak u$$ and all $$t \in T$$. In particular, $$e_{1\,b}(X)e_{2\,b}(X)^{-1}$$, as a function of $$X$$, is constant on $$\mathfrak u \setminus \{0\}$$, and hence, since it is continuous, is constant on $$\mathfrak u$$; but its value at $$X = 0$$ is the identity. • You write: " Then we have isomorphisms $e_{i\,b} \colon \mathfrak u \to H_i$ such that ${\rm Ad}(e_{i\,b}(X))Y$ equals $Y - \mathrm db(Y)X$ for all $X \in \mathfrak u$ and all $Y \in {\rm Lie}(T)$". What are these isomorphisms $e_{i\,b}$ ? – Mikhail Borovoi Aug 10 at 13:24 • @MikhailBorovoi, they are the automorphisms coming from a choice of Chevalley basis (see Carter - Simple groups of Lie type, p. 64). They are defined at first over $\mathbb Z$, so make sense over any field. A priori they exist only on the adjoint group, but the adjoint-quotient map is an isomorphism on unipotent subgroups, so they may be lifted uniquely to $H_i$. – LSpice Aug 10 at 18:26 • @MikhailBorovoi, $\operatorname C_G(T)^\circ = \operatorname C_G(T)$ in my counterexample is a maximal torus (the standard diagonal torus in $G = \operatorname{GL}_4$), whereas $\operatorname C_G(\operatorname{Lie}(T))^\circ = \operatorname C_G(\operatorname{Lie}(T))$ is $\operatorname C_G(T)^\circ\cdot(1 \times \operatorname{GL}_2)$. – LSpice Aug 11 at 11:00
# 2D array zeroing C++ style [closed]

I have valid code in C style:

```cpp
int arr[10][10] = {0}; // all elements of the array are 0
```

What is the optimal way to do the same for a C++-style array? My idea is:

```cpp
#include <array>
std::array<std::array<int, 10>, 10> arr = {0}; // is it correct?
```

• This appears to be a general "best-practice" question, rather than a review of an existing function or program. Mar 1, 2021 at 10:49

I feel this question is better suited for Stack Overflow, but I'll answer it anyway. In C++ the preferred way to do this is:

```cpp
std::array<std::array<int, 10>, 10> arr = {}; // without the 0
```

Also remember that static variables are zero-initialised anyway, so in that case you wouldn't need to do this at all. Because this is Code Review, I would suggest using a typedef to replace std::array<std::array<int, 10>, 10> and make your code more readable.

EDIT: As pointed out in the comments, using is much more powerful than typedef and more modern-C++-ish, so one should preferably use it rather than typedef. It supports alias templates and other useful features, which is why it is preferable. (Though never use using namespace ...;)

• I would suggest using using instead of typedef, as we want to follow modern C++ practices. Mar 1, 2021 at 16:07
• Thank you!!!!!! – swor Mar 1, 2021 at 16:59
• @AndreasBrunnet Yes, you are totally right. Will edit the answer. Mar 1, 2021 at 19:31
# Fill the plane with pentagons as tightly as possible in a regular way

I wrote a small python script with recursion to create a "lattice" of non-overlapping pentagons. Below one can see the first stages of recursion. One can see that 5 small pentagons are missing in the second stage of recursion. The code in python to generate these figures is this:

```python
# fill the plane with pentagons as tightly as possible in a regular way
#!/usr/bin/python
import numpy as np
import matplotlib.pyplot as plt

N = 5
phi = (1.0 + np.sqrt(5.0))/2

def pentagon(X, Y, n):
    if n == 0:
        rx = np.append(X, X[0])
        ry = np.append(Y, Y[0])
        plt.plot(rx, ry, 'r-')
        return
    for i in range(N):
        Xn = np.array([X[i], X[i], X[i], X[i], X[i]])
        Yn = np.array([Y[i], Y[i], Y[i], Y[i], Y[i]])
        U = Xn/phi + X*(1 - 1.0/phi)
        V = Yn/phi + Y*(1 - 1.0/phi)
        pentagon(U, V, n-1)
    Xn = X/phi + np.roll(X, 2)*(1 - 1.0/phi)
    Yn = Y/phi + np.roll(Y, 2)*(1 - 1.0/phi)
    # center pentagon
    pentagon(Xn, Yn, n-1)

Ind = np.arange(N)
theta = np.pi/10
X = np.cos(Ind*2*np.pi/N + theta)
Y = np.sin(Ind*2*np.pi/N + theta)

plt.figure()
plt.axes().set_aspect('equal')
n_iter = 2  # number of iterations
pentagon(X, Y, n_iter)
figure = "pentagons%d.png" % n_iter
plt.axis('off')
plt.savefig(figure)
plt.show()
```

Any suggestions on how to fill in the missing pentagons in a simple way (e.g. keeping the recursive nature of the algorithm)?

• Which ones are missing? It looks correct at first glance. Feb 8 '17 at 9:34
• I put a hand-drawn circle into each pentagon (blue from the first iteration and black for the second iteration). I counted 25 black circles and 5 blue ones. Feb 8 '17 at 13:20
• The ones in the middle of each large side. The tips from these pentagons would stick out of the big pentagon. If I add one more recurrence one sees the "cracks" opening up. Feb 9 '17 at 0:24

If I understand the general idea correctly, you are going to close the gaps which appear after further structure growth. I cannot do it algorithmically, but geometrically it is quite easy to do.
I use Illustrator here and manually cover the gaps with two additional shapes. So in addition to the pentagon and flat rhombus, I will need a 5-fold star and another rhombus: So after the first steps we get such a structure and the filling will be: (the initial center is marked red) I don't fill all the holes and I leave out the obvious ones just to make the arrangements more visible. Then the next step will give this: Here I do only halves of the crack fillings; those are symmetrical. As seen, those cracks indeed can be covered using the same shapes without gaps and have self-similar patterns. How this can be achieved algorithmically, I don't know unfortunately. But this is an interesting question; probably there are solutions somewhere, IDK. Theoretically, if one analyses several crack fillings for each step, one can guess a relatively simple algorithm.

• I think you can do this with an L-system. Like the Penrose tiling example on this page, see examples on the right. (An L-system is pretty easy to program; if you don't know how, it's one of those classic CS things.) An early alpha version of an L-system generator for Illustrator can be found here Mar 3 '17 at 12:10
• Based on the answer by @Mikhail V and by looking at the fractal structure given in my code with number of iterations up to 5, I wonder if one could make a "fractal complement" (or dual) to fill in the gaps/cracks, since they are also self-similar. Then one could patch both fractals together in the end. Mar 7 '17 at 1:46

If you're trying to tile the plane with regular pentagons with no spacing, that's not possible. A regular pentagonal tiling on the Euclidean plane is impossible because the internal angle of a regular pentagon, 108°, is not a divisor of 360°, the angle measure of a whole turn. From the same link, though, there are several non-regular pentagons you can use to tile it, as shown there. Additionally, you can do a pentagonal/hexagonal tiling of the plane. If I've misunderstood what you're trying to do, please clarify.
• I know it's not possible to tile the plane only with pentagons. I'm trying to use pentagons and rhombuses (for the gaps), which is possible in another configuration, but not in the above one. If you use the code I provided with n_iter=3, the figure generated will have big cracks of white space in it. I'm trying to avoid that. Feb 9 '17 at 0:44 • One strategy for growing the pentagon "lattice" above, would be to add another pentagon to each side of the small pentagons whenever possible, without overlapping one another. It would be like a crystal growth. One gets a quasicrystal like in this link: pnas.org/content/93/25/14271/F4.large.jpg, in this case one needs 3 basic shapes. I think though the recursive method won't work. Feb 9 '17 at 0:56 • Ah, sorry, I misunderstood what you were trying to do. I'll see if I can come up with something that addresses your actual question (no promises!). Feb 9 '17 at 5:13 Although this is not the solution I was looking for initially, the figure below is the tightest filling without overlaps of the plane with pentagons that I could come up with so far. I haven't found a recursive algorithm yet. I think it's not possible or it is too complicated to code. 
Below is the python script:

```python
#!/usr/bin/python
import numpy as np
import matplotlib.pyplot as plt

N = 5

def pentagon(Xorig, Yorig, theta):
    Ind = np.arange(N+1)
    X = np.cos(Ind*2*np.pi/N + theta) + Xorig
    Y = np.sin(Ind*2*np.pi/N + theta) + Yorig
    plt.fill(X, Y, 'r-')

plt.axes().set_aspect('equal')
r = 2*np.cos(np.pi/5)
plt.xlim(-2, 30)
plt.ylim(-2, 30)
Lx = np.cos(np.pi/10)
for j in np.arange(0, 10):
    Xorig = (j % 2)*Lx*np.ones(N+1)
    Yorig = j*(1 + 2*np.cos(np.pi/5) + np.cos(2*np.pi/5))*np.ones(N+1)
    theta = np.pi/(2*N)
    sign = 1
    for i in np.arange(1, 32):
        pentagon(Xorig, Yorig, theta)
        Xorig += r*np.cos(3*theta)*np.ones(N+1)
        Yorig += r*np.sin(3*theta)*np.ones(N+1)
        sign *= -1
        theta = sign*np.pi/(2*N)
plt.axis('off')
plt.savefig("pentagonLattice.png", bbox_inches='tight')
plt.show()
```

Also I haven't found a simple way to make a quasi-periodic tiling of the plane with only pentagons and rhombi yet. I did find out though how to fill the plane with pentagons and rhombi with a center with 5-fold symmetry. Here's a python script with a recursive algorithm (though not optimized):

```python
#!/usr/bin/python
import numpy as np
import matplotlib.pyplot as plt

N = 5
# The golden ratio
phi = (1.0 + np.sqrt(5.0))/2
# angle increment
theta0 = np.pi/5
LEFT = 0
RIGHT = 1

# fill plot a pentagon with center at (Xorig, Yorig) and orientation theta
def pentagon(Xorig, Yorig, theta):
    Ind = np.arange(N+1)
    X = np.cos(Ind*2*np.pi/N + theta) + Xorig
    Y = np.sin(Ind*2*np.pi/N + theta) + Yorig
    plt.fill(X, Y, 'r-')

# At each step three edges are generated
# and two recursive calls are made
def stepSplit(previousPoint, previousTurn, theta, nIter):
    if nIter == 0:
        return
    if previousTurn == RIGHT:
        # Turn left
        theta = theta + theta0
    elif previousTurn == LEFT:
        # Turn right
        theta = theta - theta0
    nextPoint = previousPoint + phi*np.array([np.cos(theta), np.sin(theta)])
    X, Y = zip(previousPoint, nextPoint)
    pentagon(nextPoint[0], nextPoint[1], theta)
    # Update
    previousPoint[:] = nextPoint[:]
    # Bifurcation
    # Turn left
    theta1 = theta + theta0
    nextPoint = previousPoint + phi*np.array([np.cos(theta1), np.sin(theta1)])
    X, Y = zip(previousPoint, nextPoint)
    pentagon(nextPoint[0], nextPoint[1], theta1)
    stepSplit(nextPoint, LEFT, theta1, nIter-1)
    # Turn right
    theta1 = theta - theta0
    nextPoint = previousPoint + phi*np.array([np.cos(theta1), np.sin(theta1)])
    X, Y = zip(previousPoint, nextPoint)
    pentagon(nextPoint[0], nextPoint[1], theta1)
    stepSplit(nextPoint, RIGHT, theta1, nIter-1)

# fill plot pentagons
# Central pentagon
pentagon(0.0, 0.0, 0.0)
for i in [1, 3, 5, 7, 9]:
    stepSplit(np.array([0.0, 0.0]), -1, i*theta0, 5)
for i in [0, 2, 4, 6, 8]:
    theta = i*np.pi/5
    x0 = phi*(1.0 + 2*np.cos(np.pi/5))*np.cos(theta)
    y0 = phi*(1.0 + 2*np.cos(np.pi/5))*np.sin(theta)
    pentagon(x0, y0, theta0)
    stepSplit(np.array([x0, y0]), -1, theta, 4)
plt.axes().set_aspect('equal')
plt.axis('off')
plt.savefig("pentagonLattice4.png", bbox_inches='tight')
plt.show()
```

• The densest known packing of the regular pentagon is slightly different. – user106 Mar 25 '17 at 22:55
Preprint A774/2016

On OM-decomposable sets

Alfredo Noel Iusem | Maxim Todorov

Keywords: Motzkin decomposable sets | convex sets | convex cones

We introduce and study the family of sets in a finite dimensional Euclidean space which can be written as the Minkowski sum of an open, convex and bounded set and a closed and convex set. We establish several properties of the class of such sets, called OM-decomposable, some of which extend related properties that hold for the class of Motzkin decomposable sets (i.e., those for which the convex and bounded set in the decomposition is required to be closed, hence compact).
# matrix.py

Define the base Matrix class.

class openmdao.matrices.matrix.Matrix(comm, is_internal)[source]

Bases: object

Base matrix class. This class is used for global Jacobians.

Parameters

    comm : MPI.Comm or <FakeComm>
        Communicator of the top-level system that owns the <Jacobian>.
    is_internal : bool
        If True, this is the int_mtx of an AssembledJacobian.

Attributes

    _comm : MPI.Comm or <FakeComm>
        Communicator of the top-level system that owns the <Jacobian>.
    _matrix : object
        Implementation-specific representation of the actual matrix.
    _submats : dict
        Dictionary of sub-jacobian data keyed by (out_name, in_name); implementation-specific data for the sub-jacobians.

__init__(comm, is_internal)[source]

    Initialize all attributes.

set_complex_step_mode(active)[source]

    Turn on or off complex stepping mode. When turned on, the value in each subjac is cast as complex, and when turned off, they are returned to real values.

    Parameters

        active : bool
            Complex mode flag; set to True prior to commencing complex step.
# How can I prove that a sequence such that every converging subsequence converges to the same limit, converges?

I want to claim that if $(x_n)_{n\in \mathbb N}$ is a sequence, and there is $a$ such that if $(x_{n_k})$ converges, then $\lim x_{n_k} = a$ (that is, all converging subsequences have the same limit), then $(x_n)$ converges. (I don't really mind a sequence of what... it could be numbers, or a sequence in any Hilbert space.) Is my proposition even right? Assume that a converging subsequence exists, if it helps; I think it should. My intuition is YES, somehow using that $\liminf =\limsup$ (why is that right, exactly? As explicitly as you can). Does it also hold for weak convergence? Thanks!

Added: assume it's bounded. I understand it is false if not bounded.

• Looks incorrect: how about $a_n = (-1)^n$ – Simon S Nov 20 '14 at 20:41
• Not if the sequence is unbounded. – David Mitra Nov 20 '14 at 20:41
• @SimonS: it is not a good counterexample because I asked that all converging subsequences converge to the same limit. – user188400 Nov 20 '14 at 20:42
• @DavidMitra: could you explain how boundedness is used/required? – user188400 Nov 20 '14 at 20:43
• What about a sequence with no convergent subsequence? Then vacuously, all convergent subsequences converge to the same limit, but the sequence does not converge. – Nishant Nov 20 '14 at 20:44

Here is a counterexample: $$a_n = \begin{cases} 0, & n \mbox{ even }\\ n, & n \mbox{ odd } \end{cases}$$ However, as David Mitra points out, if you require boundedness, then the result should hold. Let $a_n$ be a bounded sequence of real numbers such that every convergent subsequence converges to the same number $a$. Suppose toward contradiction that it does not converge to $a$. Then there is an $\epsilon>0$ such that $|a_n - a|>\epsilon$ for infinitely many $n$. Index these as $b_k$. This is a bounded sequence of real numbers, hence has a convergent subsequence.
By construction, this convergent subsequence cannot converge to $a$. However, it is a subsequence of $a_n$. Contradiction. (The proof is the same for such sequences in any compact metric space.)

• Thanks. Could you prove it using boundedness? – user188400 Nov 20 '14 at 20:46
• @Functional_Analysis Yep! – Neal Nov 20 '14 at 20:49
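To spell out the $\liminf$/$\limsup$ route asked about in the question, for a bounded real sequence:

```latex
% For a bounded sequence (x_n), there exist subsequences converging to
% \liminf_n x_n and to \limsup_n x_n. If every convergent subsequence
% has limit a, then both of these subsequential limits equal a, so
\liminf_{n\to\infty} x_n \;=\; a \;=\; \limsup_{n\to\infty} x_n ,
% and \liminf_{n\to\infty} x_n = \limsup_{n\to\infty} x_n = a is
% equivalent to x_n \to a.
```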
# The omega one of chess, CUNY, March, 2013 This is a talk for the New York Set Theory Seminar on March 1, 2013. This talk will be based on my recent paper with C. D. A. Evans, Transfinite game values in infinite chess. Infinite chess is chess played on an infinite chessboard. Since checkmate, when it occurs, does so after finitely many moves, this is technically what is known as an open game, and is therefore subject to the theory of open games, including the theory of ordinal game values. In this talk, I will give a general introduction to the theory of ordinal game values for open games, before diving into several examples illustrating high transfinite game values in infinite chess. The supremum of these values is the omega one of chess, denoted by $\omega_1^{\mathfrak{Ch}}$ in the context of finite positions and by $\omega_1^{\mathfrak{Ch}_{\hskip-1.5ex {\ \atop\sim}}}$ in the context of all positions, including those with infinitely many pieces. For lower bounds, we have specific positions with transfinite game values of $\omega$, $\omega^2$, $\omega^2\cdot k$ and $\omega^3$. By embedding trees into chess, we show that there is a computable infinite chess position that is a win for white if the players are required to play according to a deterministic computable strategy, but which is a draw without that restriction. Finally, we prove that every countable ordinal arises as the game value of a position in infinite three-dimensional chess, and consequently the omega one of infinite three-dimensional chess is as large as it can be, namely, true $\omega_1$.
# How to approach this sum-minimization problem

I am new to math. How do I approach the following problem?

$\min_{a,b} \sum_{t=1}^N (-4aX_t\sin(Z_tb) -4aY_t\cos(Z_tb)+a^2Z_t^2 + X_t^2 + Y_t^2)$

where $X_t,Y_t,Z_t$ for $t\in \{1,\dots,N\}$ are given. Say $N$ is around 800. Do I need software? Which? I am not having luck with a simple optim() in R, but maybe I am using the wrong parameters. Would this be a problem that could be solved in Mathematica? (I don't have access to Mathematica, and am also not familiar with it. I just heard that it is powerful.)

-

With a little bit of work you can convert your two-dimensional minimization problem into a one-dimensional root-finding problem. Call the quantity to be minimised $L(a,b)$. Then computing the partial derivatives and setting them to zero gets you $$\frac{\partial L}{\partial a} = 0 \;\Rightarrow\; \frac{a}{2} \sum_{t=1}^N Z_t^2 - \sum_{t=1}^N \left(X_t\sin(Z_tb) + Y_t\cos(Z_tb)\right) = 0$$ $$\frac{\partial L}{\partial b} =0 \;\Rightarrow\; a \sum_{t=1}^N Z_t\left(X_t\cos(Z_tb) - Y_t\sin(Z_tb)\right) = 0$$ and hence either $a=0$ or the sum in the second equation is zero. If you can find $b$ such that the sum is zero (using some numerical root finder) then $a$ is determined by the first equation. On the other hand, if $a$ is zero then you determine $b$ from the first equation using a numerical root finder.

- Thanks for your reply. It is still not clear to me: how do I find the global minimum? – Frank Seifert May 27 '11 at 12:58

Global minimization is a difficult problem in general. The extremal points of your equation are given by the solutions to the two equations in my answer. This will find all local maxima, minima and saddle points. To find the global minimum you should take those solutions and put them back into your original expression to find out which one gives the smallest result. – Chris Taylor May 27 '11 at 13:22

Thanks, I had hoped there was some way to avoid this. – Frank Seifert May 27 '11 at 13:58
{}
Coder's Cat 2020-02-07 ## Challenge Description Given an array of size n, find the majority element. The majority element is the element that appears more than ⌊ n/2 ⌋ times. You may assume that the array is non-empty and the majority element always exists in the array. ## Naive Solution Iterate over all the elements in the array, using an unordered_map to store the count of each element; as soon as one element’s count > size/2, we can break the loop and return that element. Time complexity: $O(N)$. ## Approach with sorting If we sort the array, the element at index size/2 will be the answer. Time complexity: $O(N\log N)$.
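The original post's code is not shown, so here is our own Python rendering of the two approaches described above (the counting approach mirrors the C++ unordered_map idea):

```python
from collections import Counter

def majority_element(nums):
    """Counting approach: return early once some count exceeds n/2."""
    counts = Counter()
    half = len(nums) // 2
    for x in nums:
        counts[x] += 1
        if counts[x] > half:
            return x

def majority_element_sorted(nums):
    """Sorting approach: the majority element must occupy index n//2."""
    return sorted(nums)[len(nums) // 2]
```

Both rely on the problem's guarantee that a majority element exists; for example, both return 2 on [2, 2, 1, 1, 1, 2, 2].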
{}
# Suppose a plot of inverse wavelength vs frequency has slope equal to 0.388, what is the... ###### Question: Suppose a plot of inverse wavelength vs frequency has slope equal to 0.388, what is the speed of sound traveling in the tube to 2 decimal places?
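One way to reason about this (assuming $1/\lambda$ is plotted against $f$ and the slope 0.388 is in units of s/m, which the question does not state):

```latex
v = f\lambda \;\Rightarrow\; \frac{1}{\lambda} = \frac{1}{v}\,f,
\qquad \text{slope} = \frac{1}{v} = 0.388
\;\Rightarrow\; v = \frac{1}{0.388} \approx 2.58.
```

That is, the slope of a $1/\lambda$ vs $f$ plot is the reciprocal of the wave speed, so $v \approx 2.58$ in the reciprocal of the slope's units.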
{}
# 8.9. Density Functional Theory (DFT)¶ ## 8.9.1. Many Body Schrödinger Equation¶ We use (Hartree) atomic units in this whole section about DFT. We use the Born-Oppenheimer approximation, which says that the nuclei of the treated atoms are seen as fixed. A stationary electronic state (for electrons) is then described by a wave function fulfilling the many-body Schrödinger equation where is the kinetic term, is the electron-electron interaction term and is the interaction term between electrons and nuclei, where are positions of nuclei and the charges (atomic numbers) of the nuclei (we are using atomic units). So for one atomic calculation with the atom nucleus in the origin, we have just . gives the probability density of measuring the first electron at the position , the second at , …, and the Nth electron at the position . The normalization is such that . The is antisymmetric, i.e. etc. Integrating over the first electrons is the probability density that the -th electron is at the position . Thus the probability density that any of the N electrons (i.e. the first, or the second, or the third, …, or the -th) is at the position is called the particle (or number) density and is therefore given by: (8.9.1.1) Thus gives the number of particles in the region of integration . Obviously . Note that the number density and potential in the Schrödinger equation is related to the electron charge density and electrostatic potential energy by: where is the particle elementary charge, which for electrons is in atomic units. The amount of electronic charge in the region is given by: The energy of the system is given by (8.9.1.2) where (8.9.1.3) It needs to be stressed that, in general, is not a functional of alone, only the is. In the next section we show, however, that if the is a ground state (of any system), then becomes a functional of . ## 8.9.2. The Hohenberg-Kohn Theorem¶ The Schrödinger equation gives the map where is the ground state.
C is bijective (one-to-one correspondence), because to every we can compute the corresponding from the Schrödinger equation and two different and (differing by more than a constant) give two different , because if and gave the same , then by subtracting from we would get , which is a contradiction with the assumption that and differ by more than a constant. Similarly, from the ground state wavefunction we can compute the charge density giving rise to the map which is also bijective, because to every we can compute from (8.9.1.1) and two different and give two different and , because different and give adding these two inequalities together gives which for gives , which is nonsense, so . So we have proved that for a given ground state density (generated by a potential ) it is possible to calculate the corresponding ground state wavefunction , in other words, is a unique functional of : so the ground state energy is also a functional of We define an energy functional (8.9.2.1) where is any ground state wavefunction (generated by an arbitrary potential), that is, is a ground state density belonging to an arbitrary system. which is generated by the potential can then be expressed as and for we have (from the Ritz principle) and one has to minimize the functional : (8.9.2.2) The term in (8.9.2.1) is universal in the sense that it doesn’t depend on . It can be proven [DFT] that is a functional of for degenerate ground states too, so (8.9.2.2) stays true as well. The ground state densities in (8.9.2.1) and (8.9.2.2) are called pure-state v-representable because they are the densities of (possibly degenerate) ground states of the Hamiltonian with some local potential . One may ask whether all possible functions are v-representable (this is called the v-representability problem). The question is relevant, because we need to know which functions to take into account in the minimization process (8.9.2.2).
Even though not every function is v-representable [DFT], every density defined on a grid (finite or infinite) which is strictly positive, normalized and consistent with the Pauli principle is ensemble v-representable. Ensemble v-representation is just a simple generalization of the above, for details see [DFT]. The functional in (8.9.2.2) depends on the particle number , so in order to get , we need to solve the variational formulation so (8.9.2.3) Let the be the solution of (8.9.2.3) with a particle number and the energy : The Lagrangian multiplier is the exact chemical potential of the system because so ## 8.9.3. The Kohn-Sham Equations¶ Consider an auxiliary system of noninteracting electrons (noninteracting gas): the Schrödinger equation then becomes: and the total energy is: where So: The total energy is the sum of eigenvalues (energies of the individual independent particles) as expected. From the last equation it follows: In other words, the kinetic energy of the noninteracting particles is equal to the sum of eigenvalues minus the potential energy coming from the total effective potential used to construct the single particle orbitals . From (8.9.2.3) we get (8.9.3.1) The solution to this equation gives the density . Now we want to express the energy in (8.9.1.2) using and for convenience, where is the classical electrostatic interaction energy of the charge distribution , defined using the following relations: we start with a Poisson equation in atomic units and substitute , and we use the fact that in atomic units: or equivalently by expressing using the Green function: (8.9.3.2) and finally is related to using: so we get: Using the rules for functional differentiation, we can check that: Using the above relations, we can see that So from (8.9.2.1) we get (8.9.3.3) The rest of the energy is denoted by and it is called the exchange and correlation energy functional.
From (8.9.2.3) From (8.9.3.2) we have from (8.9.1.3) we get we define (8.9.3.4) so we arrive at (8.9.3.5) The solution to this equation gives the density . Comparing (8.9.3.5) to (8.9.3.1) we see that if we choose (8.9.3.6) then . So we solve the Kohn-Sham equations of this auxiliary non-interacting system (8.9.3.7) which yield the orbitals that reproduce the density of the original interacting system (8.9.3.8) The sum is taken over the lowest energies. Some of the can be degenerate, but it doesn’t matter - the index counts every eigenfunction, including all the degenerate ones. In plain words, the trick is in realizing that the ground state energy can be found by minimizing the energy functional (8.9.2.1) and in rewriting this functional into the form (8.9.3.3), which shows that the interacting system can be treated as a noninteracting one with a special potential. ## 8.9.4. The XC Term¶ The exchange and correlation functional can always be written in the form where the is called the XC energy density. The XC potential is defined as: ## 8.9.5. Total Energy¶ We already derived all the necessary things above, so we just summarize it here. The total energy is given by: where This is the correct, quadratically convergent expression for the total energy. We use the whole input potential and its associated eigenvalues to calculate the kinetic energy , this follows from the derivation of the expression for . Then we use the calculated charge density to express , and . If one is not careful about the potential associated with the eigenvalues, i.e., confusing with , one gets a slowly converging formula for the total energy. By expanding using (8.9.3.6): So is equal to: And then the slowly converging form of total energy is: The reason it is slowly converging is that the new formula for kinetic energy is mixing with , so it is not as precise (see above) and converges much more slowly with SCF iterations. Once self-consistency has been achieved (i.e.
), the two expressions for total energy are equivalent. ## 8.9.6. XC Approximations¶ All the expressions above are exact (no approximation has been made so far). Unfortunately, no one knows exactly (yet). As such, various approximations for it exist. ### LDA¶ The simplest approximation is the local density approximation (LDA), for which the xc energy density at is taken as that of a homogeneous electron gas (the nuclei are replaced by a uniform positively charged background, density ) with the same local density: The xc potential defined by (8.9.3.4) is then which in the LDA becomes (8.9.6.1) The xc energy density of the homogeneous gas can be computed exactly: where the is the electron gas exchange term given by the rest of is hidden in for which no analytic formula exists, but the correlation energies are known exactly from quantum Monte Carlo (QMC) calculations by Ceperley and Alder [pickett]. The energies were fitted by Vosko, Wilk and Nusair (VWN) with and they got accurate results with errors less than in , which means that is virtually known exactly. VWN result: where , , , , , (note that the value of is wrong in [pickett]), and is the electron gas parameter, which gives the mean distance between electrons (in atomic units): The xc potential is then computed from (8.9.6.1): Some people also use Perdew and Zunger formulas, but they give essentially the same results. The LDA, although very simple, is surprisingly successful. More sophisticated approximations exist, for example the generalized gradient approximation (GGA), which sometimes gives better results than the LDA, but is not perfect either. Other options include orbital-dependent (implicit) density functionals or linear-response-type functionals, but this topic is still evolving. The conclusion is that the LDA is a good approximation to start with; only if we are not satisfied will we have to try a more accurate and modern approximation.
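As a small illustration (our own sketch, in Hartree atomic units; the function names are ours), the analytic exchange part of the LDA described above, together with the electron gas parameter, can be coded directly: $\epsilon_x(n) = -\tfrac{3}{4}(3n/\pi)^{1/3}$ per particle, $v_x = \frac{d(n\epsilon_x)}{dn} = \tfrac{4}{3}\epsilon_x$, and $r_s = (3/(4\pi n))^{1/3}$:

```python
import numpy as np

def lda_exchange(n):
    """Exchange part of the homogeneous-electron-gas XC in Hartree a.u.

    Returns (eps_x, v_x): the exchange energy density per particle and
    the exchange potential v_x = d(n * eps_x)/dn = (4/3) * eps_x.
    """
    eps_x = -(3.0 / 4.0) * (3.0 * n / np.pi) ** (1.0 / 3.0)
    v_x = (4.0 / 3.0) * eps_x
    return eps_x, v_x

def rs_from_density(n):
    # Electron gas parameter: (4/3) * pi * r_s^3 = 1/n.
    return (3.0 / (4.0 * np.pi * n)) ** (1.0 / 3.0)
```

The correlation part would, as the text explains, come from a fit such as VWN and has no similarly simple closed form.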
### RLDA¶ Relativistic corrections to the energy-density functional (RLDA) were proposed by MacDonald and Vosko: where We now calculate : (8.9.6.2) where the derivative can be evaluated as follows: And in exactly the same manner: So we can write where where we used the derivative of , which, after a tedious but straightforward differentiation, is: Plugging this back in, we get: For we get , and as expected, because Code: >>> from sympy import limit, var, sqrt, log >>> var("beta") beta >>> limit((beta*sqrt(1+beta**2) - log(beta+sqrt(1+beta**2)))/beta**2, beta, 0) 0 ### Kohn-Sham Equations¶ For spherically symmetric potentials, we write all eigenfunctions as: and we need to solve the following Kohn-Sham equations: With normalization: For the Schrödinger equation, the charge density is calculated by adding all “(n, l, m)” states together, counting each one twice (for spin up and spin down): where we have introduced the occupation numbers by Normalization of the charge density is: So we can see that it must hold: where is the atomic number (number of electrons), and as such, are indeed the occupation numbers (integers). The factor is explicitly factored out, as it cancels with the spherical harmonics: assuming all states are occupied, this can be simplified to: We can also use this machinery to prescribe “chemical occupation numbers”, that don’t necessarily correspond to the DFT ground state. For example for atom we get: By summing all these , we get 92 as expected: But this isn’t the DFT ground state, because some KS energies are skipped, for example there is only one state for , , but there are nine more states with the same energy — instead two more states are occupied in , , but those have higher energy. So this corresponds to an excited DFT state, which is strictly speaking not physically valid in the DFT formalism, but in practice this approach is often used. One can also prescribe fractional occupation numbers (in the Dirac case).
### Poisson Equation¶ The Poisson equation becomes: ### Total Energy¶ The total energy is given by: where doing the integrals a bit we get: We can also express everything using the charge density : ## 8.9.8. DFT As a Nonlinear Problem¶ The task is to find such a charge density , so that all the equations below hold (i.e., they are self-consistent): This is a standard nonlinear problem, except that the Jacobian is dense, as shown below. ### Reformulation¶ Let’s write everything in terms of explicitly: Now we can write everything as just one (nonlinear) equation: ### FE Discretization¶ The corresponding discrete problem has the form where Here is the vector of unknown coefficients for the -th wavefunction . Our equation can then be written in the compact form where with ### Jacobian¶ The Jacobi matrix has the elements: The only possible dense term is: Now we can see that we have in there the following term: which is dense in , as can be easily seen by fixing and writing so for each there is some contribution from the integral for such where is nonzero, thus making the Jacobian dense. ## 8.9.9. Thomas-Fermi-Dirac Theory¶ There are two ways to derive equations for Thomas-Fermi-Dirac theory. One way is to start from the grand potential and derive all equations from it. The other way is to start with low level equations and build our way up. We will start with the former. ### Top Down Approach¶ The potential is the total potential that the electrons experience (it contains nuclear, Hartree, and XC terms) and is the Hartree energy: For simplicity, we assume here that only contains the exchange of the homogeneous electron gas. For a general XC functional, the relation is nonlinear and one must simply numerically calculate the XC energy density and calculate the XC energy using: In our case here, we have , which is only true for the exchange in the homogeneous electron gas. Otherwise the relation is nonlinear.
In the general case, the correction that must be applied is: The density is a functional derivative with respect to : By defining the function : we can express the grand potential using as follows: Now we can calculate the free energy: where we used the fact that , i.e. the left hand side is a constant, thus the sum of the terms on the right hand side is also constant (even though the individual terms are not). We can calculate the entropy as follows: The total energy is then equal to: From which we can see that the kinetic energy is equal to: The relation between the total energy and free energy can be also written as: But it gives the same result as we obtained above. To determine the kinetic part of the free energy, we set all potentials equal to zero () and obtain: If the potentials are zero, then the pressure can be calculated from: If the potentials are not zero, then one can calculate the pressure using: Summary: where: and is calculated as follows: So can also be expressed as: ### Bottom Up Approach¶ The other way to derive these equations is to use the following considerations. The number of states in a box of side is given by: We use atomic units, so . The electronic particle density is: (8.9.9.1) where we used the relation for Fermi energy . The potential is the total potential that the electrons experience (it contains Hartree, nuclear and XC terms). At finite temperature we need to use the Fermi distribution and this generalizes to: Now we use the relation and substitutions , to rewrite this using the Fermi-Dirac Integral: At low temperature () we have , and we obtain: Identical with (8.9.9.1). We can see that the chemical potential becomes the Fermi energy in the limit . 
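The statement that the chemical potential reduces to the Fermi energy as $T \to 0$ can be checked numerically. The sketch below (our own, in Hartree atomic units, using SciPy) integrates the Fermi-Dirac occupied density of states of the free electron gas at a small temperature and compares the result with $n = k_F^3/(3\pi^2)$, i.e. with $E_F = \mu$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

def density(mu, T):
    """n = (1/pi^2) * Integral k^2 f_FD(k^2/2) dk (Hartree a.u., spin included)."""
    kF = np.sqrt(2.0 * mu)
    # expit(-x) = 1/(1 + e^x) avoids overflow in the Boltzmann tail.
    f = lambda k: k**2 * expit(-(0.5 * k**2 - mu) / T)
    # The integrand drops sharply near kF at low T, so tell quad about it.
    val, _ = quad(f, 0.0, kF + 5.0, points=[kF], limit=200)
    return val / np.pi**2

mu = 1.0                                       # chemical potential (Ha)
n_cold = density(mu, 1e-3)                     # nearly degenerate gas
n_T0 = (2.0 * mu) ** 1.5 / (3.0 * np.pi**2)    # zero-temperature limit
```

At $T = 10^{-3}$ Ha the two densities agree to high accuracy, consistent with the Sommerfeld correction being of order $(T/\mu)^2$.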
In the finite-temperature case, is determined from the normalization condition for the number of electrons : The kinetic energy is From the last formula it can be shown that the kinetic energy is equal to The potential energy is equal to: The internal energy is equal to: The entropy is equal to: where is the number of states at energy . We used the following calculation expressing one of the sums in terms of the kinetic energy: where we used . The free energy is equal to: The grand potential is equal to: We can now express the free energy functional as a function of the density: ## 8.9.10. Orbital Free Density Functional Theory¶ The orbital-free electronic free energy is given by: where the kinetic energy can be written in a few different equivalent ways as where is a special function of one variable, composed of a Fermi-Dirac Integral of order and its inverse of order : the electron-nuclei term has the form The electron-electron (Hartree) term takes the form: and the exchange and correlation functional is given by the Perdew-Zunger LDA: is the (positive) electron density, is the (positive) nuclei density. We minimize this free energy under the condition of particle conservation. The constrained functional is (we use from now on): The variational solution is: Or: (8.9.10.1) Finally we get: (8.9.10.2) The individual terms are: and and and All together the Hamiltonian is: We can also introduce an artificial orbital as follows: and minimize with respect to : We will use tilde to denote functions in terms of . So . Using the relation we obtain: So the equation (8.9.10.1) gets multiplied by : as well as the equation (8.9.10.2): So the Hamiltonian expressed using and the Hamiltonian expressed using are related by . ### Free Energy Minimization¶ For clarity, we will be using from equation (8.9.10.2) as our main quantity, but we will also write the final relations using for completeness. We start with some initial guess for (it must be normalized as ). 
Let’s calculate : We calculate the steepest-descent (SD) vector : The conjugate-gradient (CG) vector is calculated as: To satisfy the normalization constraint of , the CG vector is further orthogonalized to and normalized to (this step is one particular way, but not the only way, to impose the normalization constraint): That is, now and . The new CG vector is then updated as usual in CG by , but then it must be normalized. As such, equivalently, it is updated by a linear combination of and : such that it remains normalized: So , are any real numbers satisfying the equation , whose parametric solution is , with : where is determined by minimizing the free energy as a function of . ## 8.9.11. References¶ [DFT] (1, 2, 3) R. M. Dreizler, E. K. U. Gross: Density functional theory: an approach to the quantum many-body problem [pickett] (1, 2) W. E. Pickett, Pseudopotential methods in condensed matter applications, Computer Physics Reports, Volume 9, Issue 3, April 1989, Pages 115-197, ISSN 0167-7977, DOI: 10.1016/0167-7977(89)90002-6. (http://www.sciencedirect.com/science/article/B6X3V-46R02CR-1J/2/804d9ecaa49469aa5e1050dc007f4a61)
{}
KNM MFF UK Department of Numerical Mathematics Faculty of Mathematics and Physics Charles University Analysis of algebraic flux correction schemes • ID: 2737, RIV: 10330010 • ISSN: 0036-1429, ISBN: not specified • source: SIAM Journal on Numerical Analysis • keywords: algebraic flux correction method; linear boundary value problem; well-posedness; discrete maximum principle; convergence analysis; convection-diffusion-reaction equations • authors: Gabriel R. Barrenechea, Volker John, Petr Knobloch • authors from KNM: Knobloch Petr Abstract A family of algebraic flux correction (AFC) schemes for linear boundary value problems in any space dimension is studied. These methods' main feature is that they limit the fluxes along each one of the edges of the triangulation, and we suppose that the limiters used are symmetric. For an abstract problem, the existence of a solution, existence and uniqueness of the solution of a linearized problem, and an a priori error estimate are proved under rather general assumptions on the limiters. For a particular (but standard in practice) choice of the limiters, it is shown that a local discrete maximum principle holds. The theory developed for the abstract problem is applied to convection-diffusion-reaction equations, where in particular an error estimate is derived. Numerical studies show its sharpness.
{}
# 'Tightness' of Alternating Series Error term vs Lagrange Error term when a Taylor polynomial can have both Is it true that, if a Taylor expansion of a function has the form of an alternating series, then the alternating series error bound will always be a 'tighter' estimate of the true error than the Lagrange error bound?
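One hedged numerical check (not a proof, and only for one function): for $f(x) = e^{-x}$ at $x = 1$ the Maclaurin series alternates, the alternating-series bound is the first omitted term $x^{N+1}/(N+1)!$, and the Lagrange bound is $M\,x^{N+1}/(N+1)!$ with $M = \max|f^{(N+1)}|$ on $[0, x]$, which equals 1 here. Since $M \ge |f^{(N+1)}(0)|$ always holds, the Lagrange bound can never beat the next-term bound for such series; in this example they coincide:

```python
import math

# Partial sums of exp(-x) = sum((-1)^k x^k / k!) at x = 1, with both bounds.
x = 1.0
true_val = math.exp(-x)

rows = []
partial = 0.0
for k in range(0, 9):
    partial += (-1) ** k * x**k / math.factorial(k)
    err = abs(true_val - partial)                       # true remainder
    alt_bound = x ** (k + 1) / math.factorial(k + 1)    # first omitted term
    # Lagrange: max |f^(k+1)(c)| on [0, 1] is e^0 = 1 for f = exp(-x).
    lagrange_bound = 1.0 * x ** (k + 1) / math.factorial(k + 1)
    rows.append((k, err, alt_bound, lagrange_bound))
```

For every truncation order the true error sits below the alternating bound, which in turn never exceeds the Lagrange bound; whether the inequality is strict depends on where $|f^{(N+1)}|$ attains its maximum.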
{}
# Monte Carlo integration An illustration of Monte Carlo integration. In this example, the domain D is the inner circle and the domain E is the square. Because the square's area (4) can be easily calculated, the area of the circle (π·1²) can be estimated by the ratio (0.8) of the points inside the circle (40) to the total number of points (50), yielding an approximation for the circle's area of 4·0.8 = 3.2 ≈ π·1². In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand at a regular grid,[1] Monte Carlo randomly chooses the points at which the integrand is evaluated.[2] This method is particularly useful for higher-dimensional integrals.[3] There are different methods to perform a Monte Carlo integration, such as uniform sampling, stratified sampling and importance sampling. ## Overview In numerical integration, methods such as the Trapezoidal rule use a deterministic approach. Monte Carlo integration, on the other hand, employs a non-deterministic approach: each realization provides a different outcome. In Monte Carlo, the final outcome is an approximation of the correct value with respective error bars, and the correct value is likely to lie within those error bars. The problem Monte Carlo integration addresses is the computation of a multidimensional definite integral $I = \int_{\Omega}f(\overline{\mathbf{x}}) \, d\overline{\mathbf{x}}$ where Ω, a subset of Rm, has volume $V = \int_{\Omega}d\overline{\mathbf{x}}$ The naive Monte Carlo approach is to sample points uniformly on Ω:[4] given N uniform samples, $\overline{\mathbf{x}}_1, \cdots, \overline{\mathbf{x}}_N\in \Omega,$ I can be approximated by $I \approx Q_N \equiv V \frac{1}{N} \sum_{i=1}^N f(\overline{\mathbf{x}}_i) = V \langle f\rangle$. This is because the law of large numbers ensures that $\lim_{N \to \infty} Q_N = I$.
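The naive estimator $Q_N = V\langle f\rangle$ and its error bar can be sketched in a few lines of Python (our own illustration, using the unit-disc indicator over $\Omega = [-1,1]^2$ as the integrand, so the exact answer is π):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000
V = 4.0  # volume of Omega = [-1, 1] x [-1, 1]

# Uniform samples on Omega and the indicator H(x, y) of the unit disc.
pts = rng.uniform(-1.0, 1.0, size=(N, 2))
h = (pts[:, 0] ** 2 + pts[:, 1] ** 2 <= 1.0).astype(float)

Q_N = V * h.mean()                 # Monte Carlo estimate of pi
sigma_N = h.std(ddof=1)            # sample standard deviation of the integrand
err = V * sigma_N / np.sqrt(N)     # one-sigma error bar, V * sigma_N / sqrt(N)
```

With these settings the estimate typically lands within a few times `err` of π, and quadrupling N roughly halves the error bar, in line with the $1/\sqrt{N}$ behaviour.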
Given the estimation of I from QN, the error bars of QN can be estimated by the sample variance using the unbiased estimate of the variance: $\mathrm{Var}(f)\equiv\sigma_N^2 = \frac{1}{N-1} \sum_{i=1}^N \left (f(\overline{\mathbf{x}}_i) - \langle f \rangle \right )^2.$ $\mathrm{Var}(Q_N) = \frac{V^2}{N^2} \sum_{i=1}^N \mathrm{Var}(f) = V^2\frac{\mathrm{Var}(f)}{N} = V^2\frac{\sigma_N^2}{N}$. As long as the sequence $\left \{ \sigma_1^2, \sigma_2^2, \sigma_3^2, \ldots \right \}$ is bounded, this variance decreases asymptotically to zero as 1/N. The estimation of the error of QN is thus $\delta Q_N\approx\sqrt{\mathrm{Var}(Q_N)}=V\frac{\sigma_N}{\sqrt{N}},$ which decreases as $\tfrac{1}{\sqrt{N}}$. This result does not depend on the number of dimensions of the integral, which is the promised advantage of Monte Carlo integration against most deterministic methods that depend exponentially on the dimension.[5] It is important to notice that, as in deterministic methods, the estimate of the error is not a strict error bound; random sampling may not uncover all the important features of the integrand, which can result in an underestimate of the error. While the naive Monte Carlo works for simple examples, this is not the case in most problems. A large part of the Monte Carlo literature is dedicated to developing strategies to improve the error estimates. In particular, stratified sampling (dividing the region into sub-domains) and importance sampling (sampling from non-uniform distributions) are two such techniques. ### Example Relative error as a function of the number of samples, showing the scaling $\tfrac{1}{\sqrt{N}}$ A paradigmatic example of a Monte Carlo integration is the estimation of π. Consider the function $H\left(x,y\right)=\begin{cases} 1 & \text{if }x^{2}+y^{2}\leq1\\ 0 & \text{else} \end{cases}$ and the set Ω = [−1,1] × [−1,1] with V = 4.
Notice that $I_\pi = \int_\Omega H(x,y) dx dy = \pi.$ Thus, a crude way of calculating the value of π with Monte Carlo integration is to pick N random numbers on Ω and compute $Q_N = 4 \frac{1}{N}\sum_{i=1}^N H(x_{i},y_{i})$ In the figure on the right, the relative error is measured as a function of N, confirming the $\tfrac{1}{\sqrt{N}}$ scaling. ### Wolfram Mathematica Example The code below describes a process of integrating the function $f(x) = \frac{1}{1+\sinh(2x)\log(x)^2}$ using the Monte-Carlo method in Mathematica: code:
func[x_] := 1/(1 + Sinh[2*x]*(Log[x])^2);
p = Plot[func[x], {x, 0.8, 3}];
p1 = Plot[PDF[NormalDistribution[1, 0.399], 1.1*x - 0.1], {x, 0.8, 3}];
Show[{p, p1}];
NSolve[D[func[x], x] == 0, x, Reals] (*will output the maximum*)
Distrib[x_, average_, var_] := PDF[NormalDistribution[average, var], 1.1*x - 0.1];
n = 10;
RV = RandomVariate[TruncatedDistribution[{0.8, 3}, NormalDistribution[1, 0.399]], n];
(*importance-sampling estimate of the integral*)
Int = 1/n Total[func[RV]/Distrib[RV, 1, 0.399]]*Integrate[Distrib[x, 1, 0.399], {x, 0.8, 3}];
Print[Int];
Print[NIntegrate[func[x], {x, 0.8, 3}]]; (*deterministic reference value*)
(*crude estimate using uniform weights over [0.8, 3]*)
Print[Int2 = ((3 - 0.8)/n) Total[func[RV]]]
## Recursive stratified sampling An illustration of Recursive Stratified Sampling. In this example, the function: $f(x,y) = \begin{cases}1 & x^2+y^2<1 \\0 & x^2+y^2 \ge 1 \end{cases}$ from the above illustration was integrated within a unit square using the suggested algorithm. The sampled points were recorded and plotted. Clearly the stratified sampling algorithm concentrates the points in the regions where the variation of the function is largest. Recursive stratified sampling is a generalization of one-dimensional adaptive quadratures to multi-dimensional integrals. On each recursion step the integral and the error are estimated using a plain Monte Carlo algorithm.
If the error estimate is larger than the required accuracy the integration volume is divided into sub-volumes and the procedure is recursively applied to sub-volumes. The ordinary 'dividing by two' strategy does not work for multi-dimensions as the number of sub-volumes grows far too quickly to keep track. Instead one estimates along which dimension a subdivision should bring the most dividends and only subdivides the volume along this dimension. The stratified sampling algorithm concentrates the sampling points in the regions where the variance of the function is largest thus reducing the grand variance and making the sampling more effective, as shown on the illustration. The popular MISER routine implements a similar algorithm. ### MISER Monte Carlo The MISER algorithm is based on recursive stratified sampling. This technique aims to reduce the overall integration error by concentrating integration points in the regions of highest variance.[6] The idea of stratified sampling begins with the observation that for two disjoint regions a and b with Monte Carlo estimates of the integral $E_a(f)$ and $E_b(f)$ and variances $\sigma_a^2(f)$ and $\sigma_b^2(f)$, the variance Var(f) of the combined estimate $E(f) = \tfrac{1}{2} \left (E_a(f) + E_b(f) \right )$ is given by, $\mathrm{Var}(f) = \frac{\sigma_a^2(f)}{4 N_a} + \frac{\sigma_b^2(f)}{4 N_b}$ It can be shown that this variance is minimized by distributing the points such that, $\frac{N_a}{N_a + N_b} = \frac{\sigma_a}{\sigma_a + \sigma_b}$ Hence the smallest error estimate is obtained by allocating sample points in proportion to the standard deviation of the function in each sub-region. The MISER algorithm proceeds by bisecting the integration region along one coordinate axis to give two sub-regions at each step. The direction is chosen by examining all d possible bisections and selecting the one which will minimize the combined variance of the two sub-regions. 
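The optimal-allocation rule $N_a/(N_a+N_b) = \sigma_a/(\sigma_a+\sigma_b)$ can be checked numerically. The Python sketch below (our own; the toy integrand, pilot-sample size and total budget are arbitrary choices) estimates the sub-region standard deviations with a pilot run and compares the combined variance under optimal and equal allocation:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 2                  # toy integrand on [0, 1]

# Pilot samples estimate the standard deviation of f in each half-interval.
pa = f(rng.uniform(0.0, 0.5, 1000))
pb = f(rng.uniform(0.5, 1.0, 1000))
sa, sb = pa.std(ddof=1), pb.std(ddof=1)

N = 10_000
# MISER-style allocation: points proportional to each sub-region's std dev.
Na = max(1, int(round(N * sa / (sa + sb))))
Nb = N - Na

def combined_variance(na, nb):
    # Var of (E_a(f) + E_b(f)) / 2 with independent sub-estimates.
    return sa**2 / (4 * na) + sb**2 / (4 * nb)

var_opt = combined_variance(Na, Nb)
var_equal = combined_variance(N // 2, N // 2)
```

Because f varies much more on [0.5, 1] than on [0, 0.5], the proportional allocation sends most points to the right half and yields a smaller combined variance than the even split.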
The variance in the sub-regions is estimated by sampling with a fraction of the total number of points available to the current step. The same procedure is then repeated recursively for each of the two half-spaces from the best bisection. The remaining sample points are allocated to the sub-regions using the formula for Na and Nb. This recursive allocation of integration points continues down to a user-specified depth where each sub-region is integrated using a plain Monte Carlo estimate. These individual values and their error estimates are then combined upwards to give an overall result and an estimate of its error. ## Importance sampling Main article: Importance sampling ### VEGAS Monte Carlo Main article: VEGAS algorithm The VEGAS algorithm takes advantage of the information stored during the sampling, and uses it and importance sampling to efficiently estimate the integral I. It samples points from the probability distribution described by the function |f| so that the points are concentrated in the regions that make the largest contribution to the integral. In general, if the Monte Carlo integral of f is sampled with points distributed according to a probability distribution described by the function g, we obtain an estimate: $E_g(f; N) = E \left (\tfrac{f}{g}; N \right )$ with a corresponding variance, $\mathrm{Var}_g(f; N) = \mathrm{Var} \left (\tfrac{f}{g}; N \right )$ If the probability distribution is chosen as $g = \tfrac{|f|}{I(|f|)}$ then it can be shown that the variance $V_g(f; N)$ vanishes, and the error in the estimate will be zero. In practice it is not possible to sample from the exact distribution g for an arbitrary function, so importance sampling algorithms aim to produce efficient approximations to the desired distribution. The VEGAS algorithm approximates the exact distribution by making a number of passes over the integration region which creates the histogram of the function f. 
Each histogram is used to define a sampling distribution for the next pass. Asymptotically this procedure converges to the desired distribution.[7] In order to avoid the number of histogram bins growing like $K^d$, the probability distribution is approximated by a separable function: $g(x_1, x_2, \ldots) = g_1(x_1) g_2(x_2) \ldots$ so that the number of bins required is only Kd (K bins for each of the d one-dimensional factors). This is equivalent to locating the peaks of the function from the projections of the integrand onto the coordinate axes. The efficiency of VEGAS depends on the validity of this assumption. It is most efficient when the peaks of the integrand are well-localized. If an integrand can be rewritten in a form which is approximately separable this will increase the efficiency of integration with VEGAS. VEGAS incorporates a number of additional features, and combines both stratified sampling and importance sampling.[7] The integration region is divided into a number of "boxes", with each box getting a fixed number of points (the goal is 2). Each box can then have a fractional number of bins, but if bins/box is less than two, Vegas switches to a kind of variance reduction (rather than importance sampling). This routine uses the VEGAS Monte Carlo algorithm to integrate the function f over the dim-dimensional hypercubic region defined by the lower and upper limits in the arrays xl and xu, each of size dim. The integration uses a fixed number of function calls. The result and its error estimate are based on a weighted average of independent samples. The VEGAS algorithm computes a number of independent estimates of the integral internally, according to its iterations parameter, and returns their weighted average. Random sampling of the integrand can occasionally produce an estimate where the error is zero, particularly if the function is constant in some regions. An estimate with zero error causes the weighted average to break down and must be handled separately.
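The importance-sampling estimator $E_g(f; N) = E(f/g; N)$ underlying VEGAS is easy to demonstrate in isolation (our own Python sketch; the Gaussian integrand over [−1000, 1000] echoes the example discussed in the next subsection, and σ = 2 for the proposal is an arbitrary choice rather than the optimal one):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Integrand: standard normal density; its integral over the whole line is 1.
f = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

# Uniform sampling on [-1000, 1000]: almost every sample lands where f ~ 0.
xu = rng.uniform(-1000.0, 1000.0, N)
Q_uniform = 2000.0 * f(xu).mean()

# Importance sampling: draw from a wider Gaussian p = N(0, sigma=2) and
# weight each sample by f(x) / p(x).
sigma = 2.0
xp = rng.normal(0.0, sigma, N)
p = lambda x: np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
Q_importance = (f(xp) / p(xp)).mean()
```

Both estimators are unbiased, but the importance-sampled one has a far smaller variance here because the proposal concentrates samples where the integrand is significant.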
### Importance sampling algorithm

Importance sampling provides a very important tool to perform Monte Carlo integration.[3] The main result of importance sampling for this method is that the uniform sampling of $\overline{\mathbf{x}}$ is a particular case of a more generic choice, in which the samples are drawn from any distribution $p(\overline{\mathbf{x}})$. The idea is that $p(\overline{\mathbf{x}})$ can be chosen to decrease the variance of the measurement QN.

Consider the following example, where one would like to numerically integrate a Gaussian function, centered at 0, with σ = 1, from −1000 to 1000. Naturally, if the samples are drawn uniformly on the interval [−1000, 1000], only a very small part of them would be significant to the integral. This can be improved by choosing a different distribution from which the samples are drawn, for instance by sampling according to a Gaussian distribution centered at 0, with σ = 1. Of course the "right" choice strongly depends on the integrand.

Formally, given a set of samples chosen from a distribution

$p(\overline{\mathbf{x}}) : \qquad \overline{\mathbf{x}}_1, \cdots, \overline{\mathbf{x}}_N \in V,$

the estimator for I is given by[3]

$Q_N \equiv \frac{1}{N} \sum_{i=1}^N \frac{f(\overline{\mathbf{x}}_i)}{p(\overline{\mathbf{x}}_i)}$

Intuitively, this says that if we pick a particular sample twice as often as other samples, we weight it half as much as the other samples. This estimator is naturally valid for uniform sampling, the case where $p(\overline{\mathbf{x}})$ is constant.

The Metropolis–Hastings algorithm is one of the most used algorithms to generate $\overline{\mathbf{x}}$ from $p(\overline{\mathbf{x}})$,[3] thus providing an efficient way of computing integrals.

## Notes

1. ^ Press et al, 2007, Chap. 4.
2. ^ Press et al, 2007, Chap. 7.
3. ^ a b c d Newman, 1999, Chap. 2.
4. ^ Newman, 1999, Chap. 1.
5. ^ Press et al, 2007
6. ^ Press, 1990, pp. 190–195.
7. ^ a b Lepage, 1978

## References

• R. E. Caflisch, Monte Carlo and quasi-Monte Carlo methods, Acta Numerica vol. 7, Cambridge University Press, 1998, pp. 1–49.
• S. Weinzierl, Introduction to Monte Carlo methods.
• W.H. Press, G.R. Farrar, Recursive Stratified Sampling for Multidimensional Monte Carlo Integration, Computers in Physics, v4 (1990), pp. 190–195.
• G.P. Lepage, A New Algorithm for Adaptive Multidimensional Integration, Journal of Computational Physics 27, 192–203 (1978).
• G.P. Lepage, VEGAS: An Adaptive Multi-dimensional Integration Program, Cornell preprint CLNS 80-447, March 1980.
• J. M. Hammersley, D.C. Handscomb (1964). Monte Carlo Methods. Methuen. ISBN 0-416-52340-4.
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
• Newman, MEJ; Barkema, GT (1999). Monte Carlo Methods in Statistical Physics. Clarendon Press.
• Robert, CP; Casella, G (2004). Monte Carlo Statistical Methods (2nd ed.). Springer. ISBN 978-1-4419-1939-7.
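The recursive stratified sampling procedure described at the start of this article can be sketched in a toy one-dimensional form (all names below are invented for illustration; MISER proper bisects d-dimensional regions along the best coordinate and allocates points by the $N_a$/$N_b$ rule):

```python
import math
import random

def _sample_std(f, a, b, n, rng):
    # Estimate the standard deviation of f on [a, b] from n probe points.
    ys = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(ys) / n
    return math.sqrt(sum((y - mean) ** 2 for y in ys) / (n - 1))

def stratified_mc(f, a, b, n, depth, rng):
    """Recursively bisect [a, b] at the midpoint, allocate the remaining
    points to the two halves in proportion to their estimated standard
    deviations, and return (estimate, error_estimate)."""
    if depth == 0 or n < 32:
        # Plain Monte Carlo estimate in this sub-region.
        k = max(n, 2)
        ys = [f(a + (b - a) * rng.random()) for _ in range(k)]
        mean = sum(ys) / k
        var = sum((y - mean) ** 2 for y in ys) / (k - 1)
        vol = b - a
        return vol * mean, vol * math.sqrt(var / k)
    m = (a + b) / 2
    probe = max(n // 10, 8)  # fraction of the points used to probe the variance
    sa = _sample_std(f, a, m, probe, rng)
    sb = _sample_std(f, m, b, probe, rng)
    rest = n - 2 * probe
    na = int(rest * sa / (sa + sb)) if sa + sb > 0 else rest // 2
    ea, erra = stratified_mc(f, a, m, na, depth - 1, rng)
    eb, errb = stratified_mc(f, m, b, rest - na, depth - 1, rng)
    # Combine sub-region results; errors add in quadrature.
    return ea + eb, math.sqrt(erra ** 2 + errb ** 2)

rng = random.Random(0)
est, err = stratified_mc(lambda x: x * x, 0.0, 1.0, 20000, 4, rng)
print(est, err)  # the exact integral of x^2 on [0, 1] is 1/3
```

The recursion mirrors the text: probe samples estimate each half's variance, the remaining points are split in proportion to the estimated standard deviations, and the sub-region estimates and errors are combined upwards.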
# Fault detection

We’ll consider a problem of identifying faults that have occurred in a system based on sensor measurements of system performance.

# Problem statement

Each of $n$ possible faults occurs independently with probability $p$. The vector $x \in \lbrace 0,1 \rbrace^{n}$ encodes the fault occurrences, with $x_i = 1$ indicating that fault $i$ has occurred. System performance is measured by $m$ sensors. The sensor output is

$$y = Ax + v = \sum_{i=1}^n a_i x_i + v,$$

where $A \in \mathbf{R}^{m \times n}$ is the sensing matrix with column $a_i$ being the fault signature of fault $i$, and $v \in \mathbf{R}^m$ is a noise vector where $v_j$ is Gaussian with mean 0 and variance $\sigma^2$.

The objective is to guess $x$ (which faults have occurred) given $y$ (sensor measurements). We are interested in the setting where $n > m$, that is, when we have more possible faults than measurements. In this setting, we can expect a good recovery when the vector $x$ is sparse. This is the subject of compressed sensing.

# Solution approach

To identify the faults, one reasonable approach is to choose $x \in \lbrace 0,1 \rbrace^{n}$ to minimize the negative log-likelihood function

$$\ell(x) = \frac{1}{2 \sigma^2} \|Ax-y\|_2^2 + \log(1/p-1)\mathbf{1}^T x + c.$$

However, this problem is nonconvex and NP-hard, due to the constraint that $x$ must be Boolean. To make the problem tractable, we can relax the Boolean constraint and instead constrain $x_i \in [0,1]$. The optimization problem

$$\begin{array}{ll} \mbox{minimize} & \|Ax-y\|_2^2 + 2 \sigma^2 \log(1/p-1)\mathbf{1}^T x \\ \mbox{subject to} & 0 \leq x_i \leq 1, \quad i=1, \ldots, n \end{array}$$

is convex. We’ll refer to the solution of the convex problem as the relaxed ML estimate. By taking the relaxed ML estimate of $x$ and rounding the entries to the nearest of 0 and 1, we recover a Boolean estimate of the fault occurrences.
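For completeness, the negative log-likelihood $\ell(x)$ above can be derived as a standard maximum a posteriori calculation, since $-\log p(x \mid y) = -\log p(y \mid x) - \log p(x) + \text{const}$; the constant $c$ absorbs all terms independent of $x$:

```latex
\begin{aligned}
-\log p(y \mid x) &= \frac{1}{2\sigma^2}\,\|Ax - y\|_2^2 + \text{const}
  && \text{(Gaussian noise $v$)} \\
-\log p(x) &= \sum_{i=1}^n \Big[ x_i \log\tfrac{1}{p} + (1 - x_i)\log\tfrac{1}{1-p} \Big]
  = \log\!\Big(\tfrac{1}{p} - 1\Big)\,\mathbf{1}^T x + \text{const}
  && \text{(independent Bernoulli$(p)$ faults)}
\end{aligned}
```

Adding the two terms recovers $\ell(x)$; multiplying through by $2\sigma^2$ gives the objective of the relaxed problem.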
# Example

We’ll generate an example with $n = 2000$ possible faults, $m = 200$ measurements, and fault probability $p = 0.01$. We’ll choose $\sigma^2$ so that the signal-to-noise ratio is 5. That is,

$$\sqrt{\frac{\mathbf{E}\|Ax\|^2_2}{\mathbf{E}\|v\|_2^2}} = 5.$$

```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(1)

n = 2000
m = 200
p = 0.01
snr = 5

sigma = np.sqrt(p*n/(snr**2))
A = np.random.randn(m, n)
x_true = (np.random.rand(n) <= p).astype(int)  # np.int is deprecated; use int
v = sigma*np.random.randn(m)
y = A.dot(x_true) + v
```

Below, we show $x$, $Ax$ and the noise $v$.

```python
plt.plot(range(n), x_true)
```

```python
plt.plot(range(m), A.dot(x_true), range(m), v)
plt.legend(('Ax', 'v'))
```

# Recovery

We solve the relaxed maximum likelihood problem with CVXPY and then round the result to get a Boolean solution.

```python
%%time
import cvxpy as cp

x = cp.Variable(shape=n)
tau = 2*cp.log(1/p - 1)*sigma**2
obj = cp.Minimize(cp.sum_squares(A @ x - y) + tau*cp.sum(x))
const = [0 <= x, x <= 1]
cp.Problem(obj, const).solve(verbose=True)
print("final objective value: {}".format(obj.value))

# relaxed ML estimate
x_rml = np.array(x.value).flatten()
# rounded solution
x_rnd = (x_rml >= .5).astype(int)
```

```
ECOS 2.0.4 - (C) embotech GmbH, Zurich Switzerland, 2012-15. Web: www.embotech.com/ECOS

It     pcost       dcost      gap    pres   dres    k/t    mu    step   sigma    IR  |  BT
 0  +7.343e+03  -3.862e+03  +5e+04  5e-01  5e-04  1e+00  1e+01   ---    ---   1 1 - | - -
 1  +4.814e+02  -9.580e+02  +8e+03  1e-01  6e-05  2e-01  2e+00  0.8500  1e-02  1 2 2 | 0 0
 2  -2.079e+02  -1.428e+03  +6e+03  1e-01  4e-05  8e-01  2e+00  0.7544  7e-01  2 2 2 | 0 0
 3  -1.321e+02  -1.030e+03  +5e+03  8e-02  3e-05  7e-01  1e+00  0.3122  2e-01  2 2 2 | 0 0
 4  -2.074e+02  -8.580e+02  +4e+03  6e-02  2e-05  6e-01  9e-01  0.7839  7e-01  2 2 2 | 0 0
 5  -1.121e+02  -6.072e+02  +3e+03  5e-02  1e-05  5e-01  7e-01  0.3859  4e-01  2 3 3 | 0 0
 6  -4.898e+01  -4.060e+02  +2e+03  3e-02  8e-06  3e-01  5e-01  0.5780  5e-01  2 2 2 | 0 0
 7  +7.778e+01  -5.711e+01  +8e+02  1e-02  3e-06  1e-01  2e-01  0.9890  4e-01  2 3 2 | 0 0
 8  +1.307e+02  +6.143e+01  +4e+02  6e-03  1e-06  6e-02  1e-01  0.5528  1e-01  3 3 3 | 0 0
 9  +1.607e+02  +1.286e+02  +2e+02  3e-03  4e-07  3e-02  5e-02  0.8303  3e-01  3 3 3 | 0 0
10  +1.741e+02  +1.557e+02  +1e+02  2e-03  2e-07  2e-02  3e-02  0.6242  3e-01  3 3 3 | 0 0
11  +1.834e+02  +1.749e+02  +5e+01  8e-04  9e-08  8e-03  1e-02  0.8043  3e-01  3 3 3 | 0 0
12  +1.888e+02  +1.861e+02  +2e+01  3e-04  3e-08  2e-03  4e-03  0.9175  3e-01  3 3 2 | 0 0
13  +1.909e+02  +1.902e+02  +4e+00  7e-05  7e-09  6e-04  1e-03  0.8198  1e-01  3 3 3 | 0 0
14  +1.914e+02  +1.912e+02  +1e+00  2e-05  2e-09  2e-04  3e-04  0.8581  2e-01  3 2 3 | 0 0
15  +1.916e+02  +1.916e+02  +1e-01  2e-06  3e-10  2e-05  4e-05  0.9004  3e-02  3 3 3 | 0 0
16  +1.916e+02  +1.916e+02  +4e-02  7e-07  8e-11  7e-06  1e-05  0.8174  1e-01  3 3 3 | 0 0
17  +1.916e+02  +1.916e+02  +8e-03  1e-07  1e-11  1e-06  2e-06  0.8917  9e-02  3 2 2 | 0 0
18  +1.916e+02  +1.916e+02  +2e-03  4e-08  4e-12  4e-07  5e-07  0.8588  2e-01  3 3 3 | 0 0
19  +1.916e+02  +1.916e+02  +2e-04  3e-09  3e-13  3e-08  5e-08  0.9309  2e-02  3 2 2 | 0 0
20  +1.916e+02  +1.916e+02  +2e-05  4e-10  4e-14  4e-09  6e-09  0.8768  1e-02  4 2 2 | 0 0
21  +1.916e+02  +1.916e+02  +4e-06  6e-11  6e-15  6e-10  9e-10  0.9089  6e-02  4 2 2 | 0 0
22  +1.916e+02  +1.916e+02  +1e-06  2e-11  2e-15  2e-10  2e-10  0.8362  1e-01  2 1 1 | 0 0

OPTIMAL (within feastol=1.8e-11, reltol=5.1e-09, abstol=9.8e-07).

Runtime: 6.538894 seconds.

final objective value: 191.6347201927456
CPU times: user 6.51 s, sys: 291 ms, total: 6.8 s
Wall time: 7.5 s
```

# Evaluation

We define a function for computing the estimation errors, and a function for plotting $x$, the relaxed ML estimate, and the rounded solutions.

```python
import matplotlib

def errors(x_true, x, threshold=.5):
    '''Return estimation errors.

    Return the true number of faults, the number of false positives,
    and the number of false negatives.
    '''
    n = len(x_true)
    k = sum(x_true)
    false_pos = sum(np.logical_and(x_true < threshold, x >= threshold))
    false_neg = sum(np.logical_and(x_true >= threshold, x < threshold))
    return (k, false_pos, false_neg)

def plotXs(x_true, x_rml, x_rnd, filename=None):
    '''Plot true, relaxed ML, and rounded solutions.'''
    matplotlib.rcParams.update({'font.size': 14})
    xs = [x_true, x_rml, x_rnd]
    titles = ['x_true', 'x_rml', 'x_rnd']
    n = len(x_true)
    fig, ax = plt.subplots(1, 3, sharex=True, sharey=True, figsize=(12, 3))
    for i, x in enumerate(xs):
        ax[i].plot(range(n), x)
        ax[i].set_title(titles[i])
        ax[i].set_ylim([0, 1])
    if filename:
        fig.savefig(filename, bbox_inches='tight')
    return errors(x_true, x_rml, .5)
```

We see that out of 20 actual faults, the rounded solution gives perfect recovery with 0 false negatives and 0 false positives.

```python
plotXs(x_true, x_rml, x_rnd, 'fault.pdf')
```

```
(20, 0, 0)
```
## { Information = Comprehension × Extension } • Discussion 14

Information and optimization go hand in hand — discovering the laws or constraints naturally governing the systems in which we live is a big part of moving toward our hearts’ desires within them.  I’m engaged in trying to clear up a few old puzzles about information at present but the dual relationship of information and control in cybernetic systems is never far from my mind.  At any rate, here’s a sampling of thoughts along those lines I thought I might add to the mix.

## { Information = Comprehension × Extension } • Discussion 13

As much as I incline toward Fisher’s views over those of Neyman and Pearson, I always find these controversies driving me back to Peirce.  It’s my personal sense there’s no chance (or hope) of resolving the issues until we get clear about the distinct roles of abductive, deductive, and inductive inference and quit confounding abduction and induction the way mainstream statistics has always done.

## { Information = Comprehension × Extension } Revisited • Comment 5

Let’s stay with Peirce’s example of abductive inference a little longer and try to clear up the more troublesome confusions that tend to arise.

Figure 1 shows the implication ordering of logical terms in the form of a lattice diagram.

Figure 1. Conjunctive Term z, Taken as Predicate

One thing needs to be stressed at this point.  It is important to recognize the conjunctive term itself — namely, the syntactic string “spherical bright fragrant juicy tropical fruit” — is not an icon but a symbol.  It has its place in a formal system of symbols, for example, a propositional calculus, where it would normally be interpreted as a logical conjunction of six elementary propositions, denoting anything in the universe of discourse with all six of the corresponding properties.  The symbol denotes objects which may be taken as icons of oranges by virtue of their bearing those six properties in common with oranges.
But there are no objects denoted by the symbol which aren’t already oranges themselves.  Thus we observe a natural reduction in the denotation of the symbol, consisting in the absence of cases outside of oranges which have all the properties indicated.

The above analysis provides another way to understand the abductive inference from the Fact $x \Rightarrow z$ and the Rule $y \Rightarrow z$ to the Case $x \Rightarrow y.$  The lack of any cases which are $z$ and not $y$ is expressed by the implication $z \Rightarrow y.$  Taking this together with the Rule $y \Rightarrow z$ gives the logical equivalence $y = z.$  But this reduces the Case $x \Rightarrow y$ to the Fact $x \Rightarrow z$ and so the Case is justified.

Viewed in the light of the above analysis, Peirce’s example of abductive reasoning exhibits an especially strong form of inference, almost deductive in character.  Do all abductive arguments take this form, or may there be weaker styles of abductive reasoning which enjoy their own levels of plausibility?  That must remain an open question at this point.

### Reference

• Peirce, C.S. (1866), “The Logic of Science, or, Induction and Hypothesis”, Lowell Lectures of 1866, pp. 357–504 in Writings of Charles S. Peirce : A Chronological Edition, Volume 1, 1857–1866, Peirce Edition Project, Indiana University Press, Bloomington, IN, 1982.

## { Information = Comprehension × Extension } Revisited • Comment 4

Many things still puzzle me about Peirce’s account at this point.  I indicated a few of them by means of question marks at several places in the last two Figures.  There is nothing for it but returning to the text and trying once more to follow the reasoning.

Let’s go back to Peirce’s example of abductive inference and try to get a clearer picture of why he connects it with conjunctive terms and iconic signs.

Figure 1 shows the implication ordering of logical terms in the form of a lattice diagram.

Figure 1.
Conjunctive Term z, Taken as Predicate

The relationship between conjunctive terms and iconic signs may be understood as follows.  If there is anything with all the properties described by the conjunctive term — spherical bright fragrant juicy tropical fruit — then sign users may use that thing as an icon of an orange, precisely by virtue of the fact it shares those properties with an orange.  But the only natural examples of things with all those properties are oranges themselves, so the only thing qualified to serve as a natural icon of an orange by virtue of those very properties is that orange itself or another orange.

### Reference

• Peirce, C.S. (1866), “The Logic of Science, or, Induction and Hypothesis”, Lowell Lectures of 1866, pp. 357–504 in Writings of Charles S. Peirce : A Chronological Edition, Volume 1, 1857–1866, Peirce Edition Project, Indiana University Press, Bloomington, IN, 1982.

## { Information = Comprehension × Extension } Revisited • Comment 3

Peirce identifies inference with a process he describes as symbolization.  Let us consider what that might imply.

I am going, next, to show that inference is symbolization and that the puzzle of the validity of scientific inference lies merely in this superfluous comprehension and is therefore entirely removed by a consideration of the laws of information.  (467).

Even if it were only a rough analogy between inference and symbolization, a principle of logical continuity, what is known in physics as a correspondence principle, would suggest parallels between steps of reasoning in the neighborhood of exact inferences and signs in the vicinity of genuine symbols.  This would lead us to expect a correspondence between degrees of inference and degrees of symbolization extending from exact to approximate (non-demonstrative) inferences and from genuine to approximate (degenerate) symbols.
For this purpose, I must call your attention to the differences there are in the manner in which different representations stand for their objects. In the first place there are likenesses or copies — such as statues, pictures, emblems, hieroglyphics, and the like.  Such representations stand for their objects only so far as they have an actual resemblance to them — that is agree with them in some characters.  The peculiarity of such representations is that they do not determine their objects — they stand for anything more or less;  for they stand for whatever they resemble and they resemble everything more or less. The second kind of representations are such as are set up by a convention of men or a decree of God.  Such are tallies, proper names, &c.  The peculiarity of these conventional signs is that they represent no character of their objects. Likenesses denote nothing in particular;  conventional signs connote nothing in particular. The third and last kind of representations are symbols or general representations.  They connote attributes and so connote them as to determine what they denote.  To this class belong all words and all conceptions.  Most combinations of words are also symbols.  A proposition, an argument, even a whole book may be, and should be, a single symbol.  (467–468). In addition to Aristotle, the influence of Kant on Peirce is very strongly marked in these earliest expositions.  The invocations of “conceptions of the understanding”, the “use of concepts” and thus of symbols in reducing the manifold of extension, and the not so subtle hint of the synthetic à priori in Peirce’s discussion, not only of natural kinds but also of the kinds of signs leading up to genuine symbols, can all be recognized as pervasive Kantian themes. 
In order to draw out these themes and see how Peirce was led to develop their leading ideas, let us bring together our previous Figures, abstracting from their concrete details, and see if we can figure out what is going on.

Figure 3 shows an abductive step of inquiry, as taken on the cue of an iconic sign.

Figure 3. Conjunctive Predicate z, Abduction of Case xy

Figure 4 shows an inductive step of inquiry, as taken on the cue of an indicial sign.

Figure 4. Disjunctive Subject u, Induction of Rule vw

To be continued …

### Reference

• Peirce, C.S. (1866), “The Logic of Science, or, Induction and Hypothesis”, Lowell Lectures of 1866, pp. 357–504 in Writings of Charles S. Peirce : A Chronological Edition, Volume 1, 1857–1866, Peirce Edition Project, Indiana University Press, Bloomington, IN, 1982.

## { Information = Comprehension × Extension } Revisited • Comment 2

Let’s examine Peirce’s second example of a disjunctive term — neat, swine, sheep, deer — within the style of lattice framework we used before.

Hence if we find out that neat are herbivorous, swine are herbivorous, sheep are herbivorous, and deer are herbivorous;  we may be sure that there is some class of animals which covers all these, all the members of which are herbivorous.  (468–469).

Accordingly, if we are engaged in symbolizing and we come to such a proposition as “Neat, swine, sheep, and deer are herbivorous”, we know firstly that the disjunctive term may be replaced by a true symbol.  But suppose we know of no symbol for neat, swine, sheep, and deer except cloven-hoofed animals.  (469).

This is apparently a stock example of inductive reasoning which Peirce is borrowing from traditional discussions, so let us pass over the circumstance that modern taxonomies may classify swine as omnivores.

In view of the analogical symmetries the disjunctive term shares with the conjunctive case, we can run through this example in fairly short order.
We have an aggregate of four terms:

$\begin{array}{lll} s_1 & = & \mathrm{neat} \\ s_2 & = & \mathrm{swine} \\ s_3 & = & \mathrm{sheep} \\ s_4 & = & \mathrm{deer} \end{array}$

Suppose $u$ is the logical disjunction of the above four terms:

$\begin{array}{lll} u & = & \texttt{((} s_1 \texttt{)(} s_2 \texttt{)(} s_3 \texttt{)(} s_4 \texttt{))} \end{array}$

Figure 2 diagrams the situation before us.

Figure 2. Disjunctive Term u, Taken as Subject

Here we have a situation that is dual to the structure of the conjunctive example.  There is a gap between the logical disjunction $u,$ in lattice terminology, the least upper bound (lub) of the disjoined terms, $u = \mathrm{lub} \{ s_1, s_2, s_3, s_4 \},$ and what we might regard as the natural disjunction or natural lub of these terms, namely, $v,$ cloven-hoofed.

Once again, the sheer implausibility of imagining the disjunctive term $u$ would ever be embedded exactly as such in a lattice of natural kinds leads to the evident naturalness of the induction to $v \Rightarrow w,$ namely, the rule that cloven-hoofed animals are herbivorous.

### Reference

• Peirce, C.S. (1866), “The Logic of Science, or, Induction and Hypothesis”, Lowell Lectures of 1866, pp. 357–504 in Writings of Charles S. Peirce : A Chronological Edition, Volume 1, 1857–1866, Peirce Edition Project, Indiana University Press, Bloomington, IN, 1982.

## { Information = Comprehension × Extension } Revisited • Comment 1

At this point in his inventory of scientific reasoning, Peirce is relating the nature of inference, information, and inquiry to the character of the signs mediating the process in question, a process he is presently describing as symbolization.

In the interest of clarity let’s draw from Peirce’s account a couple of quick sketches, designed to show how the examples he gives of conjunctive terms and disjunctive terms might look if they were cast within a lattice-theoretic frame.
Let’s examine Peirce’s example of a conjunctive term — spherical, bright, fragrant, juicy, tropical fruit — within a lattice framework.  We have these six terms: $\begin{array}{lll} t_1 & = & \mathrm{spherical} \\ t_2 & = & \mathrm{bright} \\ t_3 & = & \mathrm{fragrant} \\ t_4 & = & \mathrm{juicy} \\ t_5 & = & \mathrm{tropical} \\ t_6 & = & \mathrm{fruit} \end{array}$ Suppose $z$ is the logical conjunction of the above six terms: $\begin{array}{lll} z & = & t_1 \cdot t_2 \cdot t_3 \cdot t_4 \cdot t_5 \cdot t_6 \end{array}$ What on earth could Peirce mean by saying that such a term is not a true symbol or that it is of no use whatever? In particular, consider the following statement: If it occurs in the predicate and something is said to be a spherical bright fragrant juicy tropical fruit, since there is nothing which is all this which is not an orange, we may say that this is an orange at once. In other words, if something $x$ is said to be $z$ then we may guess fairly surely $x$ is really an orange, in short, $x$ has all the additional features otherwise summed up quite succinctly in the much more constrained term $y,$ where $y$ means an orange. Figure 1 shows the implication ordering of logical terms in the form of a lattice diagram. Figure 1. Conjunctive Term z, Taken as Predicate What Peirce is saying about $z$ not being a genuinely useful symbol can be explained in terms of the gap between the logical conjunction $z,$ in lattice terms, the greatest lower bound (glb) of the conjoined terms, $z = \mathrm{glb} \{ t_1, t_2, t_3, t_4, t_5, t_6 \},$ and what we might regard as the natural conjunction or natural glb of these terms, namely, $y,$ an orange.  That is to say, there is an extra measure of constraint that goes into forming the natural kinds lattice from the free lattice that logic and set theory would otherwise impose.  
The local manifestations of this global information are meted out over the structure of the natural lattice by just such abductive gaps as the one between $z$ and $y.$

### Reference

• Peirce, C.S. (1866), “The Logic of Science, or, Induction and Hypothesis”, Lowell Lectures of 1866, pp. 357–504 in Writings of Charles S. Peirce : A Chronological Edition, Volume 1, 1857–1866, Peirce Edition Project, Indiana University Press, Bloomington, IN, 1982.
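One way to make the gap between the free conjunction $z$ and the natural kind $y$ concrete is a small computational toy model (the universe of discourse and its property assignments below are entirely invented for illustration): represent each term by its extension, a set of objects; take the conjunction of terms as the intersection of extensions; and check that the extension of $z$ already coincides with the extension of $y.$

```python
# Toy universe of discourse: each object is tagged with the properties it bears.
universe = {
    "orange1":  {"spherical", "bright", "fragrant", "juicy", "tropical", "fruit"},
    "orange2":  {"spherical", "bright", "fragrant", "juicy", "tropical", "fruit"},
    "lemon":    {"bright", "fragrant", "juicy", "tropical", "fruit"},
    "baseball": {"spherical", "bright"},
    "mango":    {"fragrant", "juicy", "tropical", "fruit"},
}

def extension(term):
    # The extension of a term: all objects bearing that property.
    return {obj for obj, props in universe.items() if term in props}

# The conjunctive term z = t1 . t2 . ... . t6, taken in extension:
# the intersection (glb) of the six property-extensions.
terms = ["spherical", "bright", "fragrant", "juicy", "tropical", "fruit"]
ext_z = set(universe)
for t in terms:
    ext_z &= extension(t)

# The natural kind y = "orange": in this toy universe, exactly the oranges.
ext_y = {"orange1", "orange2"}

# The abductive gap closes: everything satisfying z is already an orange,
# so z => y holds over the universe (and y => z by construction).
print(ext_z == ext_y)  # True in this toy model
```

The sketch only dramatizes the point of Comment 1: in the free lattice of set theory nothing forces the intersection of the six terms to land on a natural kind, but in a universe shaped like ours it does, and that coincidence is exactly the extra information carried by the natural kinds lattice.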