http://theinfolist.com/html/ALL/s/point_source.html
A point source is a single identifiable ''localised'' source of something. A point source has negligible extent, distinguishing it from other source geometries. Sources are called point sources because, in mathematical modelling, these sources can usually be approximated as a mathematical point to simplify analysis. The actual source need not be physically small, if its size is negligible relative to other length scales in the problem. For example, in astronomy, stars are routinely treated as point sources, even though they are in actuality much larger than the Earth.

In three dimensions, the density of something leaving a point source decreases in proportion to the inverse square of the distance from the source, if the distribution is isotropic and there is no absorption or other loss.
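As a minimal numerical sketch of that inverse-square relationship (the source strength `P` and the helper `flux_density` are illustrative assumptions, not from the original article):

```python
import math

def flux_density(power_watts, distance_m):
    """Flux density (W/m^2) at a given distance from an isotropic point
    source: the emitted power spread over a sphere of area 4*pi*r^2."""
    return power_watts / (4 * math.pi * distance_m ** 2)

P = 100.0  # assumed source strength in watts (illustrative value)
for r in [1.0, 2.0, 4.0]:
    print(f"r = {r:4.1f} m  ->  {flux_density(P, r):.3f} W/m^2")
# Doubling the distance quarters the flux density, as the inverse-square law predicts.
```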
# Mathematics

In mathematics, a point source is a singularity from which flux or flow emanates. Although singularities such as this do not exist in the observable universe, mathematical point sources are often used as approximations to reality in physics and other fields.

# Light

Generally, a source of light can be considered a point source if the resolution of the imaging instrument is too low to resolve the source's apparent size. There are two types of light source: a point source and an extended source. Mathematically, an object may be considered a point source if its angular size, $\theta$, is much smaller than the resolving power of the telescope: $\theta \ll \lambda / D$, where $\lambda$ is the wavelength of light and $D$ is the telescope diameter (a numerical sketch of this criterion follows the examples below).

Examples:
* Light from a distant star seen through a small telescope
* Light passing through a pinhole or other small aperture, viewed from a distance much greater than the size of the hole
* Light from a street light in a large-scale study of light pollution or street lighting
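A rough numerical check of the point-source criterion above. All input values here are assumptions chosen for the sketch: a Sun-sized star at 10 parsecs, observed at 550 nm through a 0.1 m telescope, with an arbitrary factor-of-ten cutoff for "much smaller":

```python
import math

def is_point_source(angular_size_rad, wavelength_m, aperture_m, margin=10.0):
    """A source counts as point-like if its angular size is much smaller than
    the diffraction-limited resolution lambda/D (here: smaller by at least
    the factor `margin`, an arbitrary cutoff chosen for this sketch)."""
    resolution_rad = wavelength_m / aperture_m
    return angular_size_rad * margin < resolution_rad

# Assumed values: stellar radius ~7e8 m, distance 10 pc ~ 3.1e17 m.
theta = 2 * 7e8 / 3.1e17                    # angular diameter ~4.5e-9 rad
print(is_point_source(theta, 550e-9, 0.1))  # True: theta << lambda/D (~5.5e-6 rad)
```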
# Radio waves

Radio-wave sources which are smaller than one radio wavelength are also generally treated as point sources. Radio emissions generated by a fixed electrical circuit are usually polarized, producing anisotropic radiation. If the propagating medium is lossless, however, the radiant power in the radio waves at a given distance will still vary as the inverse square of the distance, provided the angle to the source polarization remains constant.

Gamma-ray and X-ray sources may likewise be treated as point sources if sufficiently small. Radiological contamination and nuclear sources are often point sources. This has significance in health physics and radiation protection.

Examples:
* Radio antennas are often smaller than one wavelength, even though they are many metres across
* Pulsars are treated as point sources when observed using radio telescopes
* In nuclear physics, a "hot spot" is a point source of radiation

# Sound

Sound is an oscillating pressure wave. As the pressure oscillates up and down, an audio point source acts in turn as a fluid point source and then a fluid point sink. (Such an object does not exist physically, but it is often a good simplified model for calculations.)

Examples:
* Seismic vibration from a localised seismic experiment searching for oil
* Noise pollution from a jet engine in a large-scale study of noise pollution
* A loudspeaker may be considered as a point source in a study of the acoustics of airport announcements
* A coaxial loudspeaker, in which the individual driver units radiate sound from the same point or axis, is designed to work as a point source to allow a wider field for listening

Point sources are used as a means of calibrating ionizing-radiation instruments. They are usually a sealed capsule, and are most commonly used for gamma-, X-ray- and beta-measuring instruments.

# Heat

In vacuum, heat escapes as radiation isotropically. If the source remains stationary in a compressible fluid such as air, flow patterns can form around the source due to convection,
leading to an anisotropic pattern of heat loss. The most common form of anisotropy is the formation of a thermal plume above the heat source.

Examples:
* Geological hotspots on the surface of the Earth, which lie at the tops of thermal plumes rising from deep inside the Earth
* Plumes of heat studied in thermal-pollution tracking

# Fluid

Fluid point sources are commonly used in fluid dynamics and aerodynamics. A point source of fluid is the inverse of a fluid point sink (a point where fluid is removed). Whereas fluid sinks exhibit complex, rapidly changing behaviour such as the vortices seen where water runs into a plug-hole, or where tornadoes are generated at points where air is rising, fluid sources generally produce simple flow patterns: a stationary isotropic point source generates an expanding sphere of new fluid. If the fluid is moving (such as wind in air or currents in water), a plume is generated from the point source; the simplest, stationary case is sketched below.
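In that simplest case, conservation of volume fixes the flow: everything emitted by a stationary isotropic point source of volume flow rate $Q$ must cross each surrounding sphere, so the radial speed at radius $r$ is $u(r) = Q / (4 \pi r^2)$. A minimal Python sketch (the flow-rate value is an assumed illustrative number):

```python
import math

def radial_speed(Q_m3_per_s, r_m):
    """Radial outflow speed of an isotropic point source of incompressible
    fluid: the volume flow rate Q spread over a sphere of area 4*pi*r^2."""
    return Q_m3_per_s / (4 * math.pi * r_m ** 2)

Q = 0.5  # assumed volume flow rate in m^3/s (illustrative value)
for r in [0.1, 0.5, 1.0]:
    print(f"r = {r:.1f} m  ->  u = {radial_speed(Q, r):.4f} m/s")
```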
Examples:
* Air pollution from a power plant flue-gas stack in a large-scale analysis of air pollution
* Water pollution from an oil refinery wastewater discharge outlet in a large-scale analysis of water pollution
* Gas escaping from a pressurised pipe in a laboratory
* Smoke released from a point source in a wind tunnel, in order to create a plume which highlights the flow of the wind over an object
* Smoke from a localised chemical fire, blown in the wind to form a plume of pollution

# Pollution

Sources of various types of pollution are often considered as point sources in large-scale studies of pollution.

See also:
* Line source
* Dirac delta function
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8099316358566284, "perplexity": 2148.028849513512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00017.warc.gz"}
http://lajeannedarc.com/dvyo/limit-definition-derivative-calculator.php
# Limit definition derivative calculator

In general, a fractional function will have an infinite limit if the limit of the denominator is zero and the limit of the numerator is not zero. Limits are the method by which the derivative, or rate of change, of a function is calculated, and they are essential to calculus; informally, "f(x) gets close to some limit as x gets close to some value".

The derivative of a function f(x) is written f'(x) and describes the rate of change of f(x). The derivative of a function of a real variable measures the sensitivity to change of the function value (output value) with respect to a change in its argument. Merriam-Webster defines the derivative of a function as "the limit, if it exists, of the quotient of an increment of a dependent variable to the corresponding increment of an associated independent variable as the latter increment tends to zero without being zero". (In finance, a derivative is instead a contract with a value that is derived from an underlying asset; only the calculus sense is meant here.)

There are a few different, but equivalent, versions of this definition. Here is the "official" definition of a derivative (the slope of a curve at a certain point), where $f'$ is a function of $x$ and $\Delta x$ is an increment of the variable:

$$f'(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}$$

Finding a derivative this way is also called using the limit method to take the derivative. So the derivative f'(x) of a function y = f(x) spews out the slope of the tangent to the graph y = f(x) at each x in the domain of f where there is a tangent line. The limit definition can be motivated through two routes: one using geometric intuition (the slope of a secant line between two nearby points approaching the slope of the tangent) and the other using physical intuition (average velocity approaching instantaneous velocity). A related skill is finding the nth derivative: take a few derivatives (1st, 2nd, 3rd, ...) and look for a pattern.

Example 1: find the derivative of the constant function f(x) = c using the definition of the derivative (solved later on this page). Other standard exercises are f(x) = 2x - x² at x = 0, whose derivative there is 2, and the derivative of e^x by the limit definition. As with any skill, you only improve with practice. A numerical illustration of the difference quotient follows.
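A minimal Python sketch of that numerical illustration, using the f(x) = 2x - x² example from the text (the helper name `difference_quotient` is an assumption for the demo):

```python
def difference_quotient(f, x, dx):
    """Slope of the secant line through (x, f(x)) and (x + dx, f(x + dx))."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: 2 * x - x ** 2   # the example f(x) = 2x - x^2 from the text
# As dx shrinks, the secant slopes approach the derivative f'(0) = 2.
for dx in [0.1, 0.01, 0.001]:
    print(dx, difference_quotient(f, 0.0, dx))   # 1.9, 1.99, 1.999
```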
In practice, once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using rules for obtaining the derivatives of more complicated functions. You should be able to calculate the derivative of a function both ways: by using the limit definition of the derivative, and by using the rules. Derivative by first principles refers to using algebra to find a general expression for the slope of a curve. In the limit, assuming the limit exists, we find the exact slope of the tangent line to the curve at the given point: we can construct a definition of a tangent as the limit of a secant of the curve, taken as the separation between the points tends to zero. In calculus, the (ε, δ)-definition of limit ("epsilon-delta definition of limit") is a formalization of the notion of limit; calculus teachers usually focus on the calculation of limits, sometimes on their graphical illustration, and rarely on this theoretical aspect.

Derivative of arctan(x): let's use the formula for the derivative of an inverse function to find the derivative of the inverse of the tangent function, y = tan⁻¹x = arctan x (the inverse function of the tangent is the arctangent function, noted arctan); the derivation continues further down this page.

The recipe for the limit method is mechanical: wherever there is an x in f(x), replace the x with x + h, subtract f(x), divide by h, and take the limit as h goes to 0. Preliminary questions of this kind: for the function $f(x) = x - x^2$, use the limit definition of the derivative to compute $f'(2)$; using the limit definition, find the derivative function of $f(x) = 3/x^2$. One attempted computer-algebra version of the latter, `expr = f.subs(x, dx)/dx` followed by `sp.limit(expr, dx, 0)`, outputs `oo` (infinity), because that expression is not the difference quotient; a corrected SymPy sketch follows.
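A corrected version of that SymPy computation for f(x) = 3/x², fixing the difference quotient (this is a sketch of the repair, not the original poster's code):

```python
import sympy as sp

x, dx = sp.symbols('x dx')
f = 3 / x**2  # the example function f(x) = 3/x^2 from the text

# The difference quotient must compare f at x + dx with f at x;
# using f(dx)/dx instead is what makes sympy return oo.
quotient = (f.subs(x, x + dx) - f) / dx
derivative = sp.limit(quotient, dx, 0)
print(derivative)        # -6/x**3
print(sp.diff(f, x))     # same result from sympy's own differentiator
```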
The concept is due to Augustin-Louis Cauchy, who never gave an (ε, δ) definition of limit in his Cours d'Analyse, but occasionally used ε, δ arguments in proofs.

Definition: the derivative of a function f at x = c is the limit of the slope of the secant line from x = c to x = c + h as h approaches 0. As the distance h tends to 0, the secant line becomes the tangent at the point x₀. But instead of saying a limit equals some value because it looked like it was going to, we can use the formal statement: for y = f(x),

$$f'(c) = \lim_{h \to 0} \frac{f(c+h) - f(c)}{h}.$$

A limit problem asks you to determine what y-value a function is zeroing in on as the x-value approaches a particular number. For non-linear functions the rate of change of the curve varies from point to point, and the derivative of a function at a given point is the rate of change of the function there, represented by the slope of the line tangent to the curve at that point. The derivative of a function can thus, in principle, be computed from the definition by considering the difference quotient and computing its limit. As a first exercise, calculate the derivative of g(x) = 2x - 3 from first principles; a worked version follows.
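The worked version of that first-principles exercise:

$$g'(x) = \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} = \lim_{h \to 0} \frac{[2(x+h) - 3] - [2x - 3]}{h} = \lim_{h \to 0} \frac{2h}{h} = 2.$$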
Returning to arctan: we simplify the equation by taking the tangent of both sides, so that y = tan⁻¹x becomes tan y = tan(tan⁻¹x), that is, tan y = x. Differentiating implicitly and solving for dy/dx yields the result quoted later on this page: the derivative of the arctangent of x equals 1/(1 + x²).

More exercises with the definition: use the definition of the derivative to show that f'(0) does not exist where f(x) = |x| (worked further down, by comparing one-sided limits). Thus one way to describe the derivative at a point is that, as another point approaches this one, the rate of change between the two points approaches the instantaneous rate of change at the point. Let y = f(x); then the derivative of y with respect to x is, by definition, $\frac{dy}{dx} = \lim_{h\to 0} \frac{f(x+h) - f(x)}{h}$. Using the limit definition one can also find the derivative at a point, for example for f(x) = 2x + 1 at x = 5, and then check it with the shortcut theorems (both give f'(5) = 2, the slope of the line), or find f'(x) for f(x) = 3x² + 4x - 3 without the rules (the definition gives f'(x) = 6x + 4). Extending the same limit idea along an arbitrary direction in several variables gives what is called the directional derivative. Consider first the simplest non-linear case, y = x²; the computation is worked just below.
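The y = x² case, worked directly from the definition:

$$\frac{d}{dx}\,x^2 = \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h} = \lim_{h \to 0} \frac{2xh + h^2}{h} = \lim_{h \to 0}\,(2x + h) = 2x.$$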
A common tutorial exercise is to use the limit process, that is, the definition of the derivative, to find the derivative of a function that contains square roots and fractions (a worked square-root example appears later on this page). Interactive graphing pages often illustrate the definition on the parabola $y = -\tfrac{1}{2}x^2 + 2x + 2$, plotting the points $(a,\, f(a))$ and $(a+h,\, f(a+h))$ and the secant line through them.

Definition as a partial derivative: for a function of several variables, the partial derivative is the derivative of the function with respect to one variable, where all the other variables are treated as unknown constants while doing the differentiation. The derivative of a function is one of the basic concepts of mathematics, and together with the integral it occupies a central place in calculus. It gives the instantaneous rate of change of y with respect to x; in symbols once more,

$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.$$
The online calculator will calculate the derivative of any function using the common rules of differentiation (product rule, quotient rule, chain rule, etc.), with steps shown. Determining a derivative the hard way, from the formal definition based on the difference quotient, is comparable to computing an area from the formal definition of the integral: like using the difference quotient to find a derivative, you won't use the limit of a Riemann sum to calculate area once you learn the shortcut methods. To summarize, we compute the derivative of f(x) by forming the difference quotient

$$\frac{f(x+\Delta x) - f(x)}{\Delta x}, \qquad (2.1)$$

which is the slope of a line, and then we figure out what happens when Δx gets very close to 0. For any point x = a, this reads f'(a) = lim as h → 0 of [f(a+h) - f(a)]/h (you can assume the function is defined at a + h and at a); the approach is also known as the delta method. Two limit laws help in the evaluation: the limit of a quotient is the quotient of the limits, provided the limit of the denominator is not 0 (when the denominator limit is 0 and the numerator limit is not, the function has an infinite limit), and the limit of a positive integer power of a function is the power of the limit of the function.

The inverse operation for differentiation is called integration. In physics, for example, the integration of acceleration yields velocity plus a constant; the constant is the initial-velocity term that would be lost upon taking the derivative of velocity, because the derivative of a constant term is zero. The ideas of velocity and acceleration are familiar in everyday experience, but now we want to connect them with calculus, and the concepts of tangent lines and secant lines make that connection. Outside mathematics, "derivative" also has a dictionary sense, "a word formed from another word or base", and an adjectival sense, "resulting from or employing derivation: a derivative word; a derivative process". In calculus, though, the derivative is the slope of the tangent line at the point x, and the derivative of y(x) is written y' or dy/dx (Leibniz notation).
Limit calculator: this is a calculator which computes the limit of a given function at a given point, with one-sided and two-sided limits supported. In addition to the formal definition, there are other methods that aid in the computation of limits, such as the limit laws above and L'Hospital's rule for indeterminate forms. Geometrically, the derivative of a function can be interpreted as the slope of the graph of the function or, more precisely, as the slope of the tangent line at a point. For complex functions the geometrical motivation is missing, but the definition is formally the same as the definition for derivatives of real functions. (Derivatives in the financial sense, products such as futures contracts, options and mortgage-backed securities whose price is dependent upon one or more underlying assets, share only the name.)

The formal definition of the limit is the ε-δ ("epsilon-delta"), or Cauchy, definition. There is also the Heine definition of the limit of a function, which states that a function f(x) has a limit L at x = a if, for every sequence {xₙ} which has limit a, the sequence f(xₙ) has limit L.

Solution to Example 1 (the constant function f(x) = c): for every Δx we have f(x + Δx) = c, so

$$f'(x) = \lim_{\Delta x \to 0} \frac{f(x+\Delta x) - f(x)}{\Delta x} = \lim_{\Delta x \to 0} \frac{c - c}{\Delta x} = \lim_{\Delta x \to 0} 0 = 0.$$

We can use the definition of the derivative in order to generalize solutions and develop the rules for finding derivatives: so f prime of x equals the limit, as h approaches zero, of f of x plus h minus f of x, all over h. There are, however, no ready-made formulas at points around which a function's definition is broken up piecewise; there one must fall back on the definition itself. The same limit idea extends to several variables: use the definition of the partial derivative as a limit to calculate ∂f/∂x and ∂f/∂y, as in the sketch below.
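A sketch of partial derivatives computed straight from the limit definition in SymPy. The example function is an assumption chosen for the demo, not one from the text:

```python
import sympy as sp

x, y, h = sp.symbols('x y h')
f = x**2 * y + sp.sin(y)   # assumed example function for the demo

# Partial derivatives from the limit definition: vary one variable,
# hold the other fixed as an unknown constant.
df_dx = sp.limit((f.subs(x, x + h) - f) / h, h, 0)
df_dy = sp.limit((f.subs(y, y + h) - f) / h, h, 0)
print(df_dx)  # 2*x*y
print(df_dy)  # x**2 + cos(y)
```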
The limit for this derivative may not exist even where the function itself is defined. The most basic way of calculating derivatives is using the definition, and the process of finding the derivative is called differentiation. If f(x) is continuous at a, the limit of f at a can be evaluated by substitution, and compositions of continuous functions are again continuous. If a derivative is taken n times, then the notation $d^n f/dx^n$ or $f^{(n)}(x)$ is used. Because differential calculus is based on the definition of the derivative, and the definition of the derivative involves a limit, there is a sense in which all of calculus rests on limits.

According to the definition of the derivative, the ratio Δy/Δx is considered in the limit as Δx → 0. Before such a limit can be evaluated, the expression must be expanded and simplified; to calculate derivatives this way is a skill. A way to check the related standard limit, that sin(x)/x tends to 1, is to graph it and see that indeed the limit as x gets closer to 0 is 1. [Graph of y = sin(x)/x.]

A related classical-mechanics exercise that also runs on derivatives: find the Euler-Lagrange equation for a spring with mass m and spring constant k, where the Lagrangian is the difference of kinetic energy T and potential energy V, both functions of the displacement x(t).

One student question shows why the technique matters: applying the definition to a function involving a square root (presumably f(x) = x + √x, which gives f'(x) = 1 + the limit as h → 0 of (√(x+h) - √x)/h) stalls at that limit, and a web search for "square root limit, definition of derivative" is of little help. The trick is to rationalize, as in the sketch below.
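A worked square-root example of the kind referred to above; the key move is to multiply by the conjugate to rationalize the numerator:

$$\frac{d}{dx}\sqrt{x} = \lim_{h \to 0} \frac{\sqrt{x+h} - \sqrt{x}}{h} \cdot \frac{\sqrt{x+h} + \sqrt{x}}{\sqrt{x+h} + \sqrt{x}} = \lim_{h \to 0} \frac{h}{h\left(\sqrt{x+h} + \sqrt{x}\right)} = \frac{1}{2\sqrt{x}}.$$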
Calculate the derivative by definition: enter the function and the value of x at which the derivative is to be obtained. A secant line is a straight line joining two points on a function, and Figure 1 pictures the derivative of a function as the limit of rise over run. The definition of the derivative is used to find the derivatives of basic functions, and it can be approached in two different ways: as the slope of the tangent line, or as the instantaneous rate of change. Internally, such a calculator first transforms the typed expression into a form that is better understandable by a computer, namely a tree.

A numerical caution: a calculator that uses the definition of the derivative as (change in y)/(change in x) at a point isn't always accurate to the last digit; one user reports an actual value of 5 2/3 being returned only approximately. This is not a flaw in the definition itself: the definition of a limit describes what happens to f(x) when x is near, but not equal to, the value a, and it is not affected by how (or even whether) f(a) is defined. For a single-variable function, the first derivative is the slope of the function at the point, equal to the slope of the tangent there. The computation of the slope of a tangent line, the instantaneous rate of change of a function, and the instantaneous velocity of an object at x = a all require the same limit, the usual definition of the derivative: $\lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$. This same pattern applies to the further derivatives and integrals of motion (position, velocity, acceleration, and so on). We pick a relatively simple example here, f(x) = x³, and the sketch below shows why the choice of step size matters for a numerical calculator.
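A minimal Python sketch of that step-size trade-off, using the assumed example f(x) = x³ at x = 1 (true derivative 3):

```python
def forward_difference(f, x, h):
    """Difference-quotient approximation to f'(x) from the limit definition."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 3
for h in [1e-1, 1e-4, 1e-8, 1e-12]:
    approx = forward_difference(f, 1.0, h)
    print(f"h = {h:.0e}  approx = {approx:.12f}  error = {abs(approx - 3):.2e}")
# Shrinking h first improves the estimate (truncation error ~ h), but for very
# small h, floating-point cancellation in f(x + h) - f(x) makes it worse again,
# which is why a numerical calculator "isn't always accurate to the last digit".
```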
From English to mathematics: how the derivative calculator works. The calculator computes the limit of a given function at a given point; for derivatives it can apply the sum rule, the elementary power rule, the generalized power rule, the reciprocal rule (inverse function rule), the product rule, the chain rule and logarithmic differentiation, and it allows one to draw graphs of the function and its derivatives. A typical exam problem is finding the derivative of 1/x² using the limit definition.

On the integral side: the derivative of a definite integral of f whose upper limit is the variable x and whose lower limit is the constant a equals the function f evaluated at x, and this is true regardless of the value of the lower limit a. If the upper limit is itself a function, it replaces the variable of integration wherever it appears in the integrand and the result is multiplied by the derivative of the upper limit; this formula is literally just the chain rule, since f is the derivative of its antiderivative (given by the indefinite integral). Improper integrals are integrals you can't immediately evaluate because of an infinite limit or a vertical asymptote in the interval; the reason you can't solve these without first turning them into a proper integral (one without infinity) is that in order to integrate, you need to know the interval length. The limit involved in the limit definition of the derivative, by contrast, always generates an indeterminate form of 0/0.

Evaluating f'(x) at x₀ gives the slope of the line tangent to f(x) at x₀. Differentiation from first principles, as in A-level mathematics revision, uses this definition to calculate the gradient at any particular point: 'lim' stands for 'limit', and we say that the limit, as dx tends to zero, of 2x + dx is 2x. We can use the definition to find the derivative function, or to find the value of the derivative at a particular point. Exercise: produce a function derivative_quotient that has three arguments f, a, h, where f is a function from float to float, a is a number, and h is a very small number; derivative_quotient(f, a, h) should return the number (f(a + h) - f(a)) / h. An implementation is sketched below.
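A direct implementation of that specification (the function name and signature come from the exercise above; the test function is an assumption for the demo):

```python
from typing import Callable

def derivative_quotient(f: Callable[[float], float], a: float, h: float) -> float:
    """Difference quotient (f(a + h) - f(a)) / h: an approximation to f'(a)
    that approaches the true derivative as h shrinks toward 0."""
    return (f(a + h) - f(a)) / h

def g(x: float) -> float:
    return 2 * x - 3   # the first-principles example from earlier; g'(x) = 2

print(derivative_quotient(g, 5.0, 1e-6))  # 2.0 (exact for a linear function)
```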
The derivative of a function at some point characterizes the rate of change of the function at that point. Depending on context, the limit is also known as a function limit, directed limit, iterated limit, nested limit or multivariate limit. Now that we know what the definition is, how do we use it? We plug our function into the formula: write the function with every x replaced by x + Δx, subtract the original function, divide by Δx, and take the limit. Once one sees the definition and learns the basic rules, one can calculate the derivatives of a lot of reasonable functions quickly. The simplest derivatives to find are those of polynomial functions; the standard first-principles catalogue also covers sin x, cos x, tan x, cot x, sec x, csc x, ln x, log_a x and e^x. For the exam problem above, f(x) = 1/x², plugging into the formula and taking the limit gives f'(x) = -2/x³, matching the power rule.

We can also calculate the slope of a secant line to a function at a value a, and formally define the tangent line to the graph of a function as the limit of such secants. If we let x approach x₀ and take the limit, we obtain the definition of the derivative in slope form, $f'(x_0) = \lim_{x \to x_0} \frac{f(x) - f(x_0)}{x - x_0}$; symbolically, the increment form is the limit of $[f(c+h) - f(c)]/h$ as $h \to 0$. We can do it either way and get the same results.

Worked example (derivative as a limit): show that f'(0) does not exist where f(x) = |x|. Using 0 in the definition, $\lim_{h \to 0} \frac{|0+h| - |0|}{h} = \lim_{h \to 0} \frac{|h|}{h}$, which does not exist because the left-hand limit (-1) and the right-hand limit (+1) are different. Even though the derivative at the point does not exist, the right and left limits of the ratio do exist; they could be seen as "half-tangents".

A historical question: the derivative of ln(x) was presumably calculated for the first time using the definition of the derivative, so how was it done? One classical route is sketched below.
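One classical sketch of that ln(x) computation, assuming the characterizing limit $\lim_{u \to 0}(1+u)^{1/u} = e$:

$$\frac{d}{dx}\ln x = \lim_{h \to 0} \frac{\ln(x+h) - \ln x}{h} = \lim_{h \to 0} \frac{1}{h}\ln\!\left(1 + \frac{h}{x}\right) = \lim_{h \to 0} \ln\left(1 + \frac{h}{x}\right)^{1/h} = \ln e^{1/x} = \frac{1}{x}.$$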
The derivative of a function is defined as the instantaneous rate of change of the function at a certain point. Back to top. Students, teachers, parents, and everyone can find solutions to their math problems instantly. Notice: even though h remains in the denominator, we can take the limit since it does not result in  Derivative Calculator computes derivatives of a function with respect to given variable using analytical differentiation and displays a step-by-step solution. Just for fun, we verify this result using the limit definition of the derivative: We already have f( x ) so it's easy to obtain f( x + h ). For a function , the second derivative is defined as Free Limit of Sum Calculator - find limits of sums step-by-step This website uses cookies to ensure you get the best experience. Another function with more complex radical terms. ), with steps shown. limit(expr, Δx, 0) But it outputs oo which means infinity. prisingly, we call this new function the derivative of f(x). Seperate the two quantities and put the functions with x in front of the limit (We are only concerned This example is interesting. Definition as a limit expression. The Integral Calculator supports definite and indefinite integrals (antiderivatives) as well as integrating functions with many variables. The derivative of log a x. Recall that an expression of the form fx fa( ) ( ) x a − − or fx h fx( ) ( ) h + − is called a difference quotient. With help … Continue reading → In physics, the integration of acceleration yields velocity plus a constant. Consequently, we cannot evaluate directly, but have to manipulate the expression first. If you are going to try these problems before looking at the solutions, you can avoid common mistakes by making proper use of functional notation and careful use of basic algebra. "The derivative of f equals the limit as Δ x goes to zero of f(x+Δx) - f(x) over Δx" Or sometimes the derivative is written like this (explained on Derivatives as dy/dx ): The process of finding a derivative is called "differentiation". derivative_quotient(f, a, h) should return the number (f(a + h) - f(a)) / h. Thus, The derivative of a function y = f(x) is the function defined by f0(x) = lim h→0 f(x+h)−f(x) h. 1) which is the slope of a line, then we figure out what happens when ∆x gets very close to 0. For more complex curves, we can find the rate of change between two points on the curve easily since we can draw a line through them. Please use this feedback form to send your feedback. These deriv-atives can be viewed in four ways: physically, numerically, symbolically, and graphically. Its calculation, in fact, derives from the slope formula for a straight line, except that a limiting process must be used for curves. The Mistakes. Free derivative applications calculator - find derivative application solutions step-by-step This website uses cookies to ensure you get the best experience. It gives the instantaneous rate of change of y with respect to x. This calculator computes volumes for a few of the most usual basic shapes. f'(x)=limh→0f(x+h)−f(x)h f ′ ( x ) = lim h → 0 ⁡ f  Solve derivatives using this free online calculator. Free Derivative using Definition calculator - find derivative using the definition step-by-step This website uses cookies to ensure you get the best experience. Limit Definition for sin: Using angle sum identity, we get. You can solve a limit problem with your calculator using the arrow-number. 
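The exercise above fixes only the signature and return value of derivative_quotient. Here is a minimal Python sketch of it, together with a SymPy check of the symbolic limit; the step size h = 1e-6 and the test function f(x) = x^2 - 5x + 6 are illustrative choices of mine, not part of the original text.

```python
import sympy as sp

def derivative_quotient(f, a, h):
    """Approximate f'(a) by the difference quotient (f(a + h) - f(a)) / h."""
    return (f(a + h) - f(a)) / h

# Numerical check against a function whose derivative is known:
# f(x) = x**2 - 5*x + 6 has f'(x) = 2*x - 5, so f'(2) = -1.
f = lambda t: t**2 - 5*t + 6
print(derivative_quotient(f, 2.0, 1e-6))  # close to -1

# Symbolic check: take the limit of the difference quotient as h -> 0.
x, h = sp.symbols('x h')
print(sp.limit((f(x + h) - f(x)) / h, h, 0))  # 2*x - 5
```

The forward difference used here is only first-order accurate in h; a centered quotient (f(a + h) - f(a - h)) / (2h) is second-order accurate, which is why numerical libraries usually prefer it.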
### Using the Definition: Worked Problems

To use the definition, plug the function into the formula: write the function with every x replaced by x + h (some books write x + Delta x), subtract the original function, divide by h, simplify, and take the limit. The same recipe works at a single point through the formal alternate form of the derivative,

$$f'(a) = \lim_{x \to a} \frac{f(x)-f(a)}{x-a},$$

provided the limit exists; f is said to be differentiable at a exactly when this limit exists. The derivative is denoted f'(x), read "f prime of x". The difference quotient is also the average rate of change of f between the two points, so the quantity (f(8) - f(3))/(8 - 3) represents the slope of the secant line through (3, f(3)) and (8, f(8)) on the graph of f. The limit definition yields the exact instantaneous rate of change because the slope of the secant line between x and x + Delta x converges to the tangent slope as Delta x tends to 0.

Typical exercises: use the limit definition to differentiate f(x) = x^2 - 5x + 6, f(x) = x^2 + x, f(x) = x^3, f(x) = 1/x^2, and f(t) = t/(t + 1), whose derivative works out to 1/(t + 1)^2; also write out, without evaluating, the limit definition for y = x^x and for an inverse trigonometric function. Beware of shortcuts that dodge the indeterminate form: a common incorrect attempt cancels prematurely and concludes something like f'(2) = 1 for a function whose derivative there is different, and a related algebra slip is multiplying the first term by 3x when the correct move is to multiply by (3x)/(3x). Limits are one of the most important ideas in calculus: they determine continuity and the values of functions in a graphical sense, they underlie both the derivative and the integral, and the first and second derivatives of a function in turn describe the shape of its graph. Limits involving functions of two variables can be considerably more difficult to deal with, although most functions met in practice are fairly easy, and the derivative itself admits generalizations: the Riemann-Liouville derivative is the most used generalization of the derivative to fractional order. Finally, some vocabulary for integrals: the value a at the bottom of the integral sign is called the lower limit and the value b at the top the upper limit; if f is continuous, then the function of x given by the definite integral of f from a to x has a derivative at every point, and that derivative is f evaluated at x. In physics the same relationship says that integrating acceleration yields velocity plus a constant.
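As a concrete instance of the recipe, here is the first-principles computation for f(x) = x^2 - 5x + 6 from the exercise list (standard algebra, shown step by step):

$$\begin{aligned} f'(x) &= \lim_{h\to 0}\frac{\big[(x+h)^2-5(x+h)+6\big]-\big[x^2-5x+6\big]}{h} = \lim_{h\to 0}\frac{2xh+h^2-5h}{h}\\ &= \lim_{h\to 0}\,(2x+h-5) = 2x-5. \end{aligned}$$

The common factor h cancels before the limit is taken; only after that cancellation can h = 0 be substituted.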
### Interpretations, One-Sided Derivatives, and Special Limits

The derivative has two classical interpretations: a geometrical one, as the slope of a curve, and a physical one, as a rate of change. A secant line for the function f(x) at x = x_0 is a line through the points (x_0, f(x_0)) and (x, f(x)) for some other x in the domain of f; the tangent at x_0 is the limiting position of such secants. In physics, the first derivative of position is velocity and the second derivative is acceleration; in probability, the density function is by definition the first derivative of the distribution function. (Interactive illustrations of these ideas, with draggable secant lines and the like, are collected in the GeoGebra calculus applets built by Marc Renault at Shippensburg University.)

The definition splits naturally into one-sided versions. The right-hand derivative of f at x = a is the limit of the difference quotient as h tends to 0 from the right, and the left-hand derivative is the limit from the left. The function f is differentiable on an interval I if it has a derivative at every interior point and, when I has a right-hand endpoint a, the left-hand derivative of f exists at x = a (and correspondingly at a left-hand endpoint). Where the two one-sided derivatives disagree, as with |x| at 0, the slope interpretation says the graph has two different lines close to it at the point under consideration, so no single tangent exists.

Some derivatives require special limits. Derivatives of logarithmic functions are simpler than they would seem to be, even though they come from an important limit, the one defining e; likewise, proving that the derivative of e^x is e^x straight from the limit definition stalls on the factor lim_{h->0} (e^h - 1)/h, which equals 1 precisely by the definition of e. Useful algebraic tricks include multiplying by a conjugate to rationalize a difference quotient containing radicals, and transforming an indeterminate 0/0 form into a 1^∞ form by using the properties of logarithms. L'Hopital's rule formalizes the pattern: if f(a) = g(a) = 0, the derivative of g(x) is not zero near the point a, and the limit of f'(x)/g'(x) as x tends to a exists, then the limit of f(x)/g(x) exists and is equal to the limit of the derivatives. The exact epsilon-style definitions of the limit and of continuity make all of these statements precise, including limits at infinity that are also infinite in value. Worked exercises of this flavor: compute the derivative of f(x) = x^2 + 2 using the limit definition, then find the equation of the tangent line to y = x^2 + 2 at x = 2 (since f'(x) = 2x and f(2) = 6, the tangent is y = 4x - 2); apply the limit method to f(x) = 2x - x^2; and find the derivative of f(x) = 1/(2x + 1) from the definition, for which a worked computation follows below. In calculator syntax the operations are written + for addition, - for subtraction, * for multiplication, / for division and ^ for powers; in MATLAB's Symbolic Math Toolbox, limit(f, a) takes the limit as the default variable (found by symvar) approaches a, and limit(f) returns the limit at 0.

Gotcha: the many meanings of "derivative". In finance the word has nothing to do with calculus: derivatives are contracts whose value is based on an underlying security, commodity, or other financial instrument, and they have been created to mitigate a remarkable number of risks: fluctuations in stock, bond, commodity, and index prices; changes in foreign exchange rates; changes in interest rates; and weather events, to name a few. A daily trading limit is the maximum gain or loss allowed on a derivative or currency in one trading day; when the price pins at that limit, the market is said to be locked. For example, a forward contract on Company XYZ stock might carry a trading limit of X.
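The f(x) = 1/(2x + 1) question above does not need a quotient limit law; the common-denominator route works, and the algebra below is standard:

$$\begin{aligned} f'(x) &= \lim_{h\to 0}\frac{1}{h}\left(\frac{1}{2(x+h)+1}-\frac{1}{2x+1}\right) = \lim_{h\to 0}\frac{(2x+1)-(2x+2h+1)}{h\,(2x+2h+1)(2x+1)}\\ &= \lim_{h\to 0}\frac{-2}{(2x+2h+1)(2x+1)} = \frac{-2}{(2x+1)^2}. \end{aligned}$$

As in the polynomial example, the factor h cancels from the numerator, which is what resolves the 0/0 form.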
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9625338315963745, "perplexity": 326.1604928286844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107884755.46/warc/CC-MAIN-20201024194049-20201024224049-00504.warc.gz"}
https://socratic.org/questions/how-do-you-solve-2-x-4-2-1#155569
# How do you solve 2(x-4)=2?

$x = 5$

Expand the left-hand side to give $2x - 8 = 2$; add 8 to both sides to give $2x = 10$; divide by 2 to give $x = 5$.
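The same steps can be verified mechanically with a computer algebra system; this SymPy one-liner is my illustration, not part of the Socratic answer:

```python
from sympy import Eq, solve, symbols

x = symbols('x')
print(solve(Eq(2*(x - 4), 2), x))  # [5]
```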
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31055641174316406, "perplexity": 239.89149983000908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710192.90/warc/CC-MAIN-20221127041342-20221127071342-00635.warc.gz"}
http://www.freemathhelp.com/forum/search.php?s=1fb6cc94dcbbe8dbc85f706532efd385&searchid=1122453
# Search results: posts by Btoepp

1. Thread: Range: movie lengths and classes
Okay, I think this is the forum I should be looking under. Pick a range of times that fits the spread of your movie lengths nicely. Make sure you have at least 5 classes that cover all possible...

2. Thread: Probability w/ dice: using data from my 100 rolls
Re: That was a great help! So again, using the 2-dice example, if I rolled eight sixes, the theoretical probability of me rolling a six would be 8/36, or 2/9? And how do you find out how many ways two...

3. Thread: Probability w/ dice: using data from my 100 rolls
This should be the easiest part to understand, but I'm confused: Use the data from your 100 rolls to estimate the probability of getting each possible numeric value. Repeat the process after...

4. Thread: Credit cards: How long to pay off balance, assuming that....
Well, if anyone wants to take a shot at it, here's my understanding: average daily balance = [5000(1) + 4700(30)] / 30 = 4866.67, and then I = Prt gives I = 4866.67(0.0075)(1) = 36.50.

5. Thread: Credit cards: How long to pay off balance, assuming that....
Does that mean that I don't count the payments until the next month? If it says his original payment was on the 10th and each payment is due on the 12th of each subsequent billing cycle, then I wouldn't...

6. Thread: Credit cards: How long to pay off balance, assuming that....
Average daily balance is found by adding the unpaid balances for each day of the billing period and dividing by the number of days in the billing period. I = Prt, with r the monthly rate and t one month.
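The average-daily-balance rule in the last post is mechanical enough to script. Below is a minimal Python sketch under illustrative assumptions of mine (a 30-day billing cycle, one payment that drops the balance from 5000 to 4700 after day 1, and the thread's 0.75% monthly rate); note that the hand calculation above sums 31 day-balances over a 30-day divisor, while the code counts exactly 30 days:

```python
def average_daily_balance(daily_balances):
    # Add the unpaid balance for each day of the billing period
    # and divide by the number of days in the period.
    return sum(daily_balances) / len(daily_balances)

def simple_interest(principal, monthly_rate, months=1):
    # I = P * r * t, with r quoted per month and t in months.
    return principal * monthly_rate * months

# 1 day at 5000, then 29 days at 4700 after the payment posts.
balances = [5000.0] * 1 + [4700.0] * 29
adb = average_daily_balance(balances)   # 4710.0
print(adb, simple_interest(adb, 0.0075))  # average balance 4710, interest ~ 35.33
```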
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5207536816596985, "perplexity": 3212.9528998516694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
https://arxiv.org/abs/0812.3158
cond-mat.str-el

# Title: Permutation Symmetric Critical Phases in Disordered Non-Abelian Anyonic Chains

Abstract: Topological phases supporting non-abelian anyonic excitations have been proposed as candidates for topological quantum computation. In this paper, we study disordered non-abelian anyonic chains based on the quantum groups $SU(2)_k$, a hierarchy that includes the $\nu=5/2$ FQH state and the proposed $\nu=12/5$ Fibonacci state, among others. We find that for odd $k$ these anyonic chains realize infinite randomness critical *phases* in the same universality class as the $S_k$ permutation symmetric multi-critical points of Damle and Huse (Phys. Rev. Lett. 89, 277203 (2002)). Indeed, we show that the pertinent subspace of these anyonic chains actually sits inside the ${\mathbb Z}_k \subset S_k$ symmetric sector of the Damle-Huse model, and this ${\mathbb Z}_k$ symmetry stabilizes the phase.

Comments: 13 pages
Subjects: Strongly Correlated Electrons (cond-mat.str-el); Disordered Systems and Neural Networks (cond-mat.dis-nn)
DOI: 10.1103/PhysRevB.79.155120
Cite as: arXiv:0812.3158 [cond-mat.str-el] (or arXiv:0812.3158v1 [cond-mat.str-el] for this version)

## Submission history
From: Lukasz Fidkowski
[v1] Tue, 16 Dec 2008 21:15:57 GMT (124kb)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7659963369369507, "perplexity": 3583.2767432590244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687711.44/warc/CC-MAIN-20170921082205-20170921102205-00324.warc.gz"}
http://physics.stackexchange.com/questions/45424/question-on-the-gell-mann-low-equation
# Question on the Gell-Mann Low equation

In this paper, http://arxiv.org/abs/1205.3365, page 21, the author argues that if $t \to \infty(1-i\epsilon)$, all the terms in equation (193) go to zero except the first term. Can anyone explain this to me?

If you substitute $t\rightarrow \tau (1-i\epsilon)$ in your eq. (193), then the exponent is no longer a pure phase (i.e. a complex number with modulus 1). All the terms receive a factor of the type $e^{-\tau E_n}$. All these terms go to zero as $\tau \rightarrow \infty$, but since the author says $E_n > E_0$ for all $n>0$, the first term goes to zero more slowly than all the other terms. Thus we keep only this first term. If you understand now, you can make it your own answer; it is OK to answer your own questions. – au700 Nov 29 '12 at 14:17
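To spell out the comment's step, assume eq. (193) has the usual eigenstate-expansion form with phase factors $e^{-iE_n t}$ (an assumption here, since the equation itself is not quoted). Substituting $t = \tau(1-i\epsilon)$ gives

$$e^{-iE_n t} = e^{-iE_n \tau(1-i\epsilon)} = e^{-iE_n\tau}\, e^{-\epsilon E_n \tau},$$

so each term acquires a real damping factor. Relative to the $n=0$ term, every other term is suppressed by $e^{-\epsilon (E_n - E_0)\tau} \to 0$ as $\tau \to \infty$, because $E_n > E_0$ for $n > 0$; this is why only the first term survives.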
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.923613965511322, "perplexity": 330.8371819540665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276564.72/warc/CC-MAIN-20160524002116-00216-ip-10-185-217-139.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/131544-question-about-subspace-print.html
• Mar 1st 2010, 08:14 PM
superdude

I am new to vector spaces and subspaces. The question asks: "Is the set of all vectors (x,y), where $0 \leq x \leq 1$ and $0 \leq y \leq 1$, a subspace of $\mathbb{R}^2$?"

There are 10 properties a "candidate" must satisfy to be classified as a vector space. These properties are listed alpha, a-d, beta, e-h; I don't know if alpha and beta have some sort of special significance. In the book I'm reading there are two new operators, vector addition and scalar multiplication. These confuse me because they appear to act in the same way as regular addition and multiplication. Here is my attempt at the question:

$\alpha)$ $\mathbf{u}+\mathbf{v}=(a_1+a_2,b_1+b_2)$: this doesn't hold, because (1,1)+(1,1)=(2,2), which is outside the set.

a) $\mathbf{u}+\mathbf{v}=(a_1,b_1)+(a_2,b_2)=(a_1+a_2,b_1+b_2)=(a_2+a_1,b_2+b_1)=\mathbf{v}+\mathbf{u}$, so this holds.

b) $\mathbf{u}+(\mathbf{v}+\mathbf{w}) = (a_1,b_1)+[(a_2,b_2)+(a_3,b_3)]=(a_1+a_2,b_1+b_2)+(a_3,b_3)=(\mathbf{u}+\mathbf{v})+\mathbf{w}$: this holds.

c) $\mathbf{0}+\mathbf{u}=\mathbf{u}$: I guess this holds; I don't know how it could not.

d) Property: for each u in V, there is an element -u in V such that u ⊕ (-u) = 0, where ⊕ denotes vector addition. I think this property does not hold: take for example u = (1,1), whose negative lies outside the set. How do I say this in a more formal way?

$\beta)$ If u is any element of V and c is any real number, then c scalar-multiplied with u is in V. I said this does not hold if c is outside of [0,1]. Or is the idea that c has to be in [0,1] to start with?

e) $c \cdot (\mathbf{u}+\mathbf{v})=c \cdot [(a_1,b_1)+(a_2,b_2)]=c \cdot (a_1+a_2,b_1+b_2)$: I have a similar problem; if c>1 or c<0 then it doesn't hold.

f), g) and h): I have a similar problem.

So since one property doesn't hold, the answer to the question is no. I'm trying really hard here; can someone help me correct any mistakes I have made?

• Mar 1st 2010, 08:16 PM
Drexel28

Quote: Originally posted by superdude (the full question above)

If any of the properties don't hold then it isn't a subspace.

• Mar 1st 2010, 08:21 PM
superdude

In what case would there not be a zero vector? And why is there the weird circle around the dot for scalar multiplication and the circle around the plus sign for vector addition?

• Mar 1st 2010, 08:22 PM
Drexel28

What?

• Mar 1st 2010, 08:34 PM
superdude

How can there not be an element 0 in V such that u ⊕ (-u) = 0? What do these operators mean? How do they differ from normal addition and multiplication? http://img59.imageshack.us/img59/831...roperators.jpg

• Mar 1st 2010, 08:43 PM
Drexel28

Oh, $\odot$ and $\oplus$. Give me an example; usually you consider $V\oplus V'$ where they are both vector spaces.

• Mar 1st 2010, 10:05 PM
superdude

How about using the question I started with and showing how u and v from $\mathbb{R}^2$ are such that u $\oplus$ v and c $\odot$ v are in $\mathbb{R}^2$? I'm confused as to what the difference between $+$ and $\oplus$, and between $\cdot$ and $\odot$, is. Does the circle around them mean that the operations are being done to vectors?

• Mar 1st 2010, 10:14 PM
Drexel28

I've never seen that notation before, but I would assume it is what you said.

• Mar 1st 2010, 10:25 PM
superdude

OK, thanks for saying that. My textbook is extremely unclear about what's going on: I was unsure whether the definition was for a real vector space, or whether what was being defined was the operators themselves. I'm looking at page 272 of Introductory Linear Algebra: An Applied First Course by Bernard Kolman and David R. Hill; if anyone has this book and could clear up the meaning of $\oplus, \odot$, that would be greatly appreciated. Another related question: it takes 10 requirements for something to be a vector space and only 2 of those 10 requirements for a set of matrices to be a subspace of a vector space. Does that mean that a subset satisfying those 2 might not necessarily be a vector space?
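For reference, and not from the thread itself: the reason only two of the ten axioms need checking is the standard subspace criterion. A nonempty subset $W$ of a vector space $V$ is a subspace precisely when it is closed under the two operations,

$$\mathbf{u},\mathbf{v}\in W \;\Rightarrow\; \mathbf{u}\oplus\mathbf{v}\in W, \qquad c\in\mathbb{R},\; \mathbf{u}\in W \;\Rightarrow\; c\odot\mathbf{u}\in W,$$

since the remaining eight axioms are inherited from $V$. The unit square fails both closures: $(1,1)\oplus(1,1)=(2,2)$ and $2\odot(1,1)=(2,2)$ lie outside it, so it is not a subspace of $\mathbb{R}^2$. Conversely, any nonempty subset satisfying both closures is automatically a vector space in its own right.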
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 40, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8930220603942871, "perplexity": 607.78157946559}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815500.61/warc/CC-MAIN-20180224073111-20180224093111-00630.warc.gz"}
https://fronkonstin.com/tag/functional/
# Flowers for Julia

Don't talk about the future, it's an illusion, ever since Rock & Roll conquered my heart (El Rompeolas, Loquillo y los Trogloditas)

In this post I create flowers inspired by the Julia sets, a family of fractal sets obtained from complex numbers iterated by a holomorphic function. Despite the ugly-sounding definition, the mechanism to create them is quite simple (a code sketch of the recipe appears at the end of this post):

- Take a grid of complex numbers with real and imaginary parts both between -2 and 2.
- Take a function of the form $f(z)=z^{n}+c$, setting the parameters $n$ and $c$.
- Iterate the function over the complex numbers several times. In other words: apply the function to each complex number, apply it again to the output, and repeat this process a number of times.
- Calculate the modulus of the resulting number.
- Represent the initial complex number in a scatter plot where the x-axis corresponds to the real part and the y-axis to the imaginary part, coloring each point by the modulus of the resulting number after applying $f(z)$ iteratively.

This image corresponds to a grid of 9 million points and 7 iterations of the function $f(z)=z^{5}+0.364716021116823$:

To color the points, I pick a random palette from the top list of the COLOURlovers site using the `colourlovers` package. Since each flower involves a huge amount of calculation, I use Reduce to make this process efficient. More examples:

There are two little Julias in the world to whom I would like to dedicate this post. I wish them all the best and I am sure they will discover the beauty of mathematics. These flowers are yours. The code is available here.
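The original post does all of this in R (hence the colourlovers and Reduce references above). Purely as an illustration, here is a minimal Python/NumPy sketch of the same recipe on a much smaller grid; the grid size, the modulus cap, the log scaling and the color map are my choices, not the post's:

```python
import numpy as np
import matplotlib.pyplot as plt

n, c, iters = 5, 0.364716021116823, 7      # parameters quoted in the post
side = 800                                  # the post uses ~9 million points

re = np.linspace(-2, 2, side)
im = np.linspace(-2, 2, side)
z = re[np.newaxis, :] + 1j * im[:, np.newaxis]  # grid of complex numbers

for _ in range(iters):                      # iterate f(z) = z**n + c
    z = z ** n + c
    big = np.abs(z) > 1e6                   # cap escaping moduli so that
    z[big] *= 1e6 / np.abs(z[big])          # float64 never overflows

plt.figure(figsize=(6, 6))
plt.axis("off")
# Color by the final modulus, log-compressed so the escaping
# points do not wash out the interior structure.
plt.imshow(np.log1p(np.abs(z)), extent=(-2, 2, -2, 2), cmap="magma")
plt.show()
```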
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38699549436569214, "perplexity": 844.7308325544659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247504594.59/warc/CC-MAIN-20190221111943-20190221133943-00400.warc.gz"}
https://www.arxiv-vanity.com/papers/nucl-th/9908040/
# The structure of superheavy elements newly discovered in the reaction of 86Kr with 208Pb

J. Meng and N. Takigawa

Department of Technical Physics, Peking University, Beijing 100871, P.R. China
Department of Physics, Tohoku University, Sendai 980-8578, Japan
Center of Theoretical Nuclear Physics, National Laboratory of Heavy Ion Accelerator, Lanzhou 730000, China

e-mail:

###### Abstract

The structure of the superheavy elements newly discovered in the 208Pb(86Kr,n) reaction at Berkeley is systematically studied in the Relativistic Mean Field (RMF) approach. It is shown that the various commonly employed RMF forces, which give a fair description of normal stable nuclei, give quite different predictions for superheavy elements. Among the effective forces we tested, TM1 is found to be a good candidate to describe superheavy elements. The binding energies of the element 118 nucleus and its decay daughter nuclei obtained using TM1 agree with those of FRDM within a few MeV. A similar conclusion, that TM1 is the good interaction, is also drawn from the binding energies calculated for Pb isotopes with the Relativistic Continuum Hartree-Bogoliubov (RCHB) theory. Using the pairing gaps obtained from RCHB, RMF calculations with pairing and deformation are carried out for the structure of superheavy elements. The binding energy, the shape, the single particle levels, and the Q values of the α decay are discussed, and it is shown that both the pairing correlation and deformation are essential to properly understand the structure of superheavy elements. A good agreement is obtained with the experimental data on Qα.

PACS numbers: 21.60.Jz, 21.65.+f, 21.10.-k, 21.10.Gv, 27.90.+b

Keywords: superheavy elements, Relativistic Mean Field, pairing correlation, deformation, magic number, Q-values of α decay

## I Introduction

Following the discovery of α-decaying isotopes of new elements at GSI [1, 2, 3], an isotope of element 118 and several of its decay daughter nuclei were announced to have been discovered at Berkeley Lab's 88-Inch Cyclotron, with the newly constructed Berkeley Gas-filled Separator, by bombarding a lead target with an intense beam of krypton ions of 449 MeV [4]. The sequence of decay events is consistent with the long-standing theoretical prediction that there exists an "island of stability" around 114 protons and 184 neutrons, and it reactivates the study of superheavy elements.

The study of superheavy elements has been a hot topic for the last two decades. Recent works on the collisions, structure and stability of heavy and superheavy elements can be found in Refs. [5, 6, 7, 8, 9, 10, 11]. In a recent paper, Smolanczuk claimed that this reaction should have a particularly favorable production rate [12]. This motivated the experiment at Berkeley. According to the authors, the synthesized superheavy element 118 decays by emitting an alpha particle within less than a millisecond, leaving behind an isotope of element 116 with mass number 289. This daughter nucleus is also radioactive, alpha-decaying to an isotope of element 114. The chain of successive alpha decays continues until element 106. Smolanczuk also discussed the properties of superheavy elements in this mass region under the constraint of a spherical shape, based on a macroscopic-microscopic approach [13]. In contrast to his approach, here we study the structure of superheavy element 118 and of the daughter nuclei in the sequence of α decays in the Relativistic Mean Field (RMF) theory. The effects of deformation and the pairing correlation will be taken into account.

The pairing gaps for the deformed RMF calculations are taken from the Relativistic Continuum Hartree-Bogoliubov (RCHB) theory [14], which is an extension of the Relativistic Mean Field and the Bogoliubov transformation in the coordinate representation [15]. As the spin-orbit splitting which governs the shell structure and the magic numbers is naturally obtained in the RMF theory, we expect that the structure of superheavy elements can be understood properly once the deformation and the pairing correlation are taken into account. We investigate the binding energy, the deformation, the Q-values of the alpha decay, the effect of the pairing correlation, the shell structure, and the structure of the single particle levels for protons and neutrons.

The paper is organized as follows. In Sect. II, we present the results of RMF calculations without the pairing correlation for several standard forces which give a fair description of normal stable nuclei, and thus discuss the appropriate force to describe superheavy elements. In Sect. III, the RCHB theory is used to investigate the pairing correlation in these superheavy elements; RCHB provides not only a unified description of the mean field and the pairing correlation but also a proper description of the continuum and of the coupling between bound states and the continuum [14]. We then perform in Sect. IV the study by a deformed RMF+BCS approach, using the pairing gaps supplied by RCHB. We summarize the paper in Sect. V.

## II Examination of various RMF parameter sets

There are many parameter sets for RMF calculations which provide nearly equal quality of description for stable nuclei. Therefore, we first wish to find which effective force in RMF is more suitable to describe superheavy elements; as claimed in Ref. [11], the results are strongly interaction dependent. For this purpose we perform RMF calculations that include deformation but ignore the pairing correlation, with different effective forces. The details of the method can be found in Ref. [16].

Table 1 compares the binding energies of the superheavy element and its decay daughter nuclei calculated with the effective forces TM1 [17], NL1, NL3 and NLSH. For comparison, the results of the phenomenological FRDM calculations are given in the last column [18]. The results of TM1 are nearly the same as those of FRDM for most nuclei in the chain: they lie within 1 MeV of each other. The difference between the TM1 and FRDM results gets larger for the remaining nuclei, but is still smaller than 3 MeV. Though there are differences of several MeV, NL1 and NLSH give similar results to TM1. The NL3 parameter set, on the other hand, gives a difference of about 50 MeV from the other calculations. One important difference between the RMF calculations with TM1 and FRDM is that the additional gain of binding energy when one moves from Z=112 to Z=114 is much smaller in the RMF calculations. In other words, Z=114 has a weaker meaning as a magic number in the RMF calculations. Though, strictly speaking, it may not be adequate, let us call this effect the change of the shell structure, or of the magic number property, at Z=112. As we will see shortly, a similar effect appears in the Z dependence of the nuclear shape, and it eventually plays an important role in reproducing the qualitative trend of the experimental data on the atomic number dependence of Qα.

A similar trend concerning the mutual comparison of the different forces appears also in Table 2, where the Q-values of the α-decay sequence (in MeV) are shown. The Q-values given by TM1 and FRDM are quite similar except for one nucleus, where the difference is 3.8 MeV.
This large difference is connected with the change of the shell structure mentioned above. Table 3 shows the corresponding deformation parameters in the ground state. TM1 predicts a stable prolate deformation for all the nuclei listed in the table, with a minimum along the chain. NL3 and NLSH give similar results to TM1, though for NL3 the minimum deformation is shifted. NL1 predicts a spherical shape for one nucleus, while FRDM gives an almost spherical shape for several. The shift of the atomic number at which the deformation becomes minimum, from Z=114 to Z=112, is what we already mentioned as evidence of the change of the shell structure. Table 4 compares the corresponding charge radii. In contrast to the big differences seen in the binding energies, the charge radii for the different forces lie close to each other.

## III Pairing correlation in superheavy elements: description by RCHB

In this section we study the effects of the pairing correlation in superheavy element 118 and its decay daughter nuclei by using the self-consistent and fully microscopic RCHB theory [14] under the constraint of a spherical shape. With the pairing gaps obtained from RCHB, self-consistent and more complete RMF calculations with both the pairing correlation and deformation will be carried out in the next section.

Before applying the RCHB theory to the newly discovered superheavy elements, we examine once again which effective force is the most suitable to describe superheavy elements, using the lead isotopes as test cases. The binding energies of six Pb isotopes calculated by RCHB with the four different effective forces are compared with the experimental data in Table 5. Although all the calculations except for NL3 reproduce the experimental binding energies of the Pb isotopes well, TM1 gives the best reproduction of the data. Therefore we expect that RMF calculations with TM1 and the pairing correlation will give a satisfactory description of superheavy elements. The neutron, proton, matter and charge radii calculated by RMF with TM1 are given in the last four columns of Table 5.

We have then calculated the binding energy, the one-neutron separation energy, the Q value of the α decay, the matter and charge radii, and the neutron and proton pairing gaps for the superheavy elements in RCHB with TM1. The results are shown in Table 6. The matter radius is larger than the charge radius for all nuclei due to the neutron excess. The Q value of the α decay increases monotonically along the chain. The proton pairing gap parameter is around 1 MeV, while the neutron pairing gap parameter is relatively small due to blocking effects. Calculations for additional reference nuclei are also given, to help understand the fusion barrier for synthesizing element 118.

In Figs. 1 and 2, the single particle levels in the canonical basis for neutrons and protons in element 118 are given. In order to avoid the irregularity due to the blocking effect, we give the single particle levels of the neighboring even-even isotope instead. The Fermi surface for neutrons and protons is given in each figure by the dashed line; the potential shown is the sum of the vector and scalar potentials. Fig. 1 indicates that, after a first sub-closed neutron shell, further closed or sub-closed shells occur at higher neutron numbers; closed or sub-closed proton shells appear as well. Although the Fermi level for protons is very close to the continuum, the wave functions of all the protons are well localized in a small region because of the Coulomb barrier.
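For reference, the one-neutron separation energy and the α-decay Q value tabulated above follow from binding energies in the standard way (textbook definitions, not formulas quoted from this paper):

$$S_n(Z,N) = B(Z,N) - B(Z,N-1), \qquad Q_\alpha(Z,N) = B(Z-2,N-2) + B(\alpha) - B(Z,N),$$

where $B(\alpha) \approx 28.3$ MeV is the binding energy of the α particle; a positive $Q_\alpha$ is what makes the successive α decays along the chain energetically possible.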
Figs. 3 and 4 show the change of the single particle neutron and proton levels near the Fermi surface along the decay chain from element 118. Similarly to Figs. 1 and 2, we give the single particle levels of the neighboring even-even nuclei in order to avoid the irregularity due to the blocking effect. Adding an α particle always raises the proton single particle levels and lowers the neutron single particle levels. Distinct gaps of about 2 MeV, and one of about 3 MeV, appear in the neutron spectrum.

The α-decay energy Qα is shown in Fig. 5 as a function of the atomic number along the decay chain from element 118. The observed data and the prediction of FRDM are also included; the former are taken from Fig. 4 in [4]. Compared with the data, the RCHB calculations give a systematically too small Qα. This reflects the deformation effect, which is ignored in the present RCHB calculations. The Qα calculated in the RMF calculations neglecting the pairing correlation but including deformation (the open circles) is somewhat larger than that in RCHB, but still smaller than the data at the lower end of the chain, and fluctuates for larger Z, showing a sharp peak at Z=114. This contrasts with the result of FRDM, which also fluctuates but shows a deep minimum at the same atomic number, reflecting a sub-closed shell at Z=114 in that model.

## IV The description of superheavy elements by DRMF+BCS

Using the pairing gaps from RCHB, we now perform RMF calculations including both deformation and the pairing correlation. The results are given in Table 7 for the binding energy, the particle energies, the matter and charge radii, and the neutron, proton and matter deformation parameters. The calculated binding energies of the reference nuclei are also given. Each binding energy increases by 0.3 to 2 MeV with the pairing correlation, which can noticeably alter the atomic number dependence of Qα. We have added the Qα calculated by DRMF+BCS to Fig. 5. Compared with the RMF calculations without pairing, the theoretical Qα becomes much closer to the experimental data once the pairing correlation is included; only for one nucleus does Qα remain the same, with a difference of 2 MeV from the data. Interestingly, Qα takes its maximum at Z=114 in DRMF+BCS, in accord with the hump in the data, while FRDM gives a minimum there.

## V Summary

We made a systematic study of the structure of the superheavy elements recently discovered at Berkeley Lab's 88-Inch Cyclotron in the reaction 86Kr + 208Pb at 449 MeV, in the framework of the Relativistic Mean Field (RMF) approach. We have shown that the various commonly used RMF forces, which provide a fair description of normal stable nuclei, give quite different predictions for superheavy elements. Among them, TM1 is found to be a good candidate to describe superheavy elements: the binding energies obtained from TM1 agree with those of FRDM within a few MeV. The same conclusion, that TM1 is the good interaction, has been drawn from calculations of the binding energies of Pb isotopes using the Relativistic Continuum Hartree-Bogoliubov (RCHB) theory. However, neither the deformation nor the pairing correlation alone could explain the Qα data. We then performed RMF calculations of superheavy elements which include both the pairing correlation and deformation, using the pairing gaps obtained from RCHB. We have thus shown that a good agreement can be obtained between theory and experimental data concerning the Q value of the α decay. In particular, our RMF calculations reproduce the peak at Z=114 seen in the experimental data. We conjecture that this peak appears because of the shift of the shell structure, e.g. concerning the nuclear shape, from Z=114 in FRDM to Z=112 here.

Finally, we wish to make a few comments on open questions. We kept the pairing gap parameter fixed once it had been determined for a spherical shape, ignoring the possibility of its shape dependence [19]. Another basic assumption is that the observed α decays proceed from ground state to ground state, though this might not be the case for a part of the decay chain. We noticed a paper by Cwiok et al. [20] after we had completed our study. The validity of the above-mentioned approximation and assumption is worth examining in order to obtain a more reliable understanding of superheavy elements. It would also be important to understand the difference between our conclusions and those of Ref. [20], where the authors predict systematically much smaller deformations for all nuclei and also claim that the deformation monotonically decreases towards Z=118. We will address these questions in a separate paper.

J.M. thanks the Department of Physics, Tohoku University for its hospitality and the Japan Society for the Promotion of Science for the financial support that made his stay possible. This work was partially sponsored by the National Science Foundation in China under Project No. 19847002 and by the SRF for ROCS, SEM, China.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.917600154876709, "perplexity": 1005.5320362430625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499710.49/warc/CC-MAIN-20230129080341-20230129110341-00064.warc.gz"}
https://portlandpress.com/biochemj/article-abstract/245/3/699/23443/Studies-on-the-interactions-of-human-pancreatic?redirectedFrom=fulltext
The interactions of human pancreatic elastase 2 with alpha 1-proteinase inhibitor and alpha 1-antichymotrypsin were compared by studies in vitro. The equimolar complexes obtained between the enzyme and either inhibitor were relatively stable at 25 °C, since they could be visualized for up to 5 days by an electrophoretic method. However, in both cases, a slow dissociation occurred with release of active enzyme. As the k_ass rate constants are of the same order of magnitude, with a slightly lower value for alpha 1-proteinase inhibitor when compared with alpha 1-antichymotrypsin ((5.6 ± 1.2) × 10⁵ and (8.9 ± 1.3) × 10⁵ M⁻¹·s⁻¹, respectively), partition of human pancreatic elastase 2 between both inhibitors in human plasma is mainly dependent on their respective concentrations. A comparative study by crossed immunoelectrophoresis of the interactions of this enzyme with the two inhibitors contained in normal human plasma and in a mimetic mixture of pure inhibitors was carried out. This allowed the visualization of complexes with either inhibitor. Formation of such a complex with alpha 1-antichymotrypsin had never been demonstrated previously. The patterns obtained are similar when working with normal plasma or with the synthetic mixture, suggesting that, in the conditions used, alpha 1-proteinase inhibitor and alpha 1-antichymotrypsin are the main inhibitors of human pancreatic elastase 2 in the plasma sample. However, it is also shown that part of the enzyme may be taken up by alpha 2-macroglobulin, which is responsible for the remaining enzyme activity on a synthetic substrate. The present work suggests that, according to the delay times of inhibition of human pancreatic elastase 2 calculated from the normal plasma concentrations of alpha 1-proteinase inhibitor and alpha 1-antichymotrypsin, a significant role can be assigned to both inhibitors. Moreover, the role of alpha 1-antichymotrypsin would be enhanced in alpha 1-proteinase-inhibitor deficiency.
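To give a sense of scale for the quoted rate constants (my illustration; the abstract does not state its delay-time formula): under pseudo-first-order conditions the characteristic inhibition time is roughly the reciprocal of k_ass times the inhibitor concentration,

$$\tau \approx \frac{1}{k_{\mathrm{ass}}[I]} = \frac{1}{(5.6 \times 10^{5}\ \mathrm{M^{-1}\,s^{-1}})(2 \times 10^{-5}\ \mathrm{M})} \approx 0.09\ \mathrm{s},$$

taking a plasma alpha 1-proteinase inhibitor concentration on the order of 2 × 10⁻⁵ M as an assumed round figure. The authors' "delay times" may follow a different convention, but this is the relevant order of magnitude.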
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8330497145652771, "perplexity": 2849.2357014240506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038878326.67/warc/CC-MAIN-20210419045820-20210419075820-00202.warc.gz"}
https://www.gamedev.net/forums/topic/646039-forward-declaration-how-and-when-do-we-use-it/
# Forward declaration, how and when do we use it?

## Recommended Posts

Now I know there are already a lot of threads talking about this. But no matter how much I read, I still can't seem to understand! So I decided to give it a try here - plus, I can ask a question myself here.

Suppose I have 2 classes: class A and class B. Both have their own .cpp and .h files. I heard that sometimes two classes need to use each other's member variables (quite common in game programming; I've come across it a lot, but don't know how to handle it). So how do I do forward declaration so that class A can access class B, and class B can access class A? I might want to implement this on an inventory/item class for my game. Note that class A and class B are NOT declared and defined in the same .cpp file. Thanks :)

##### Share on other sites

A.h:

```cpp
class B; // forward declaration

class A {
    // contents
};
```

B.h:

```cpp
class A; // forward declaration

class B {
    // contents
};
```

A.cpp:

```cpp
#include "A.h"
#include "B.h"
// code
```

B.cpp:

```cpp
#include "B.h"
#include "A.h"
// code
```

This is the safest way. Basically you can't include a file that includes the first file. Forward declaring a class allows you to define pointers using that type, but not access them. So in class A you can have `B* b;`, but you can't have an inline constructor that calls a function in b or accesses its variables. Since you don't usually #include a .cpp file directly, it's safe to just include all the classes' .h files in your .cpp files.

Edited by zacaj

##### Share on other sites

> Forward declaring a class allows you to define pointers using that type, but not access them.

I don't follow this. Example please? Thanks :)

##### Share on other sites

Usually, it is better to try and structure your game so there is a clear direction to such dependencies. For instance, I can see why an inventory needs to know about items, but why does the item care about inventories? What about items on the ground? Such a cyclical dependency is harder to reason about, and harder to test in isolation.

##### Share on other sites

```cpp
class Foo;

Foo *foo_pointer; // valid: it's just a pointer, so the compiler only needs to
                  // know that Foo is a type (which it does, from the forward
                  // declaration); it doesn't need to know anything about Foo itself
Foo foo;          // error: actually making a variable of type Foo (foo_pointer is
                  // of type Foo*, not Foo) can't be done, because we don't know
                  // how big Foo is

void accessFoo(Foo *foo) {
    foo->x = 4;   // also an error: we don't know what variables Foo contains,
                  // so there's no way we could access them
}

class Foo {       // declaring the type Foo down here doesn't affect the code above it
public:
    int x;
};

Foo foo2;         // this is now valid, because Foo is a fully defined type

int main() {
    foo_pointer = new Foo(); // although foo_pointer was declared above the Foo
                             // class, this code is below it, so it can access
                             // foo_pointer's members
    foo_pointer->x = 8;
    accessFoo(foo_pointer);
}
```

##### Share on other sites

You might find "Organizing Code Files in C and C++" a good read.
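##### Share on other sites

To make the pointer-vs-object distinction above concrete, here is a minimal complete pair that should compile as-is (the file layout, names and members are illustrative, not from any of the posts above):

```cpp
// B.h
#pragma once
class A;               // forward declaration: enough for pointers/references

class B {
public:
    int value = 42;
    void Touch(A* a);  // declaration only; the body needs A's full definition
};

// A.h
#pragma once
class B;               // forward declaration

class A {
public:
    int value = 7;
    void Touch(B* b);
};

// A.cpp
#include "A.h"
#include "B.h"         // full definition of B: required to dereference b
void A::Touch(B* b) { b->value = value; }

// B.cpp
#include "B.h"
#include "A.h"         // full definition of A: required to dereference a
void B::Touch(A* a) { a->value = value; }

// main.cpp
#include "A.h"
#include "B.h"
int main() {
    A a; B b;
    a.Touch(&b);       // b.value becomes 7
    b.Touch(&a);       // a.value becomes 42
}
```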
##### Share on other sites

typically for an inventory system, you'll have a master database of inventory item types, and some sort of list data structure that can hold a list of instances of individual items. i call mine a "stufflist". <g> it's a list of stuff: stuff lying on the ground in a given area, stuff the player is carrying, stuff an NPC has for use in combat, stuff you find in a treasure chest, stuff available for trade, stuff installed on a vehicle, stuff carried in a cargo bay, etc.

then different things in the game (PCs, NPCs, stores, treasure chests, vehicles, etc.) have their own stufflists. all inventory actions are about adding items to, removing items from, or transferring items between stufflists. this usually does not require a circular dependency between the parts of the system.

the item types database has data such as base price, weight, is_weapon, weapon_type, is_missile_weapon, ammo_type, damage, rate of fire, time to manufacture, rendering info (how to draw it), etc. the stufflists have the ability to add and remove objects, can perform calculations such as the total weight of items in the list, and can do searches for items based on type, quality, etc.

so at the low level you have the item types database, at the mid level you have the stufflist API for managing items, and at the high level you have the controlling code that manipulates the stufflists to do things such as move all the items in the treasure chest to the player's inventory ("Take all").

note that each stufflist is in essence a little database of its own. fields in its records would be things like itemtype, quantity, quality, % completed (if under construction / being manufactured), position and rotation (for dropped items), etc. things like encumbrance checks, container checks (do they have enough containers to carry it?), out-of-cargo-space checks, out-of-hull-space checks, etc. would be done by the controlling code.

##### Share on other sites

Meerul264,

You may have been misinformed about forward declarations. One thing a forward declaration does not give is visibility on class members.

**Gaining Visibility on Class Definitions**

This is generally done by including headers. As zacaj shows above, A.cpp can include B.h to see and use B's public members, and B.cpp can include A.h to see and use A's public members.

A.h:

```cpp
class A { /* members declared here */ };
```

B.h:

```cpp
class B { /* members declared here */ };
```

A.cpp:

```cpp
#include "A.h" // Includes A's class definition (so members declared in it can be defined here).
#include "B.h" // Includes B's class definition (so members of A can see and use B and its public members).
// ...
```

B.cpp:

```cpp
#include "B.h" // Includes B's class definition (so members declared in it can be defined here).
#include "A.h" // Includes A's class definition (so members of B can see and use A and its public members).
// ...
```

Header inclusion gets you visibility on a class definition. Forward declarations are neither necessary nor sufficient.

**When to Use Forward Declarations**

Forward declarations are for cases in which the compiler needs to know that something is defined elsewhere, but doesn't need the details of that definition.

A.h:

```cpp
class B;             // Forward declaration
class A { B * pB; }; // Use of B, but not of B's definition
```

In this case, the compiler doesn't need to know anything about B here except that it's a class which is defined somewhere. Note that we could include B.h here instead. However, that would unnecessarily include B.h everywhere that A.h is included (which is bad for compile times). Also, there are cases when you can't include a header:

A.h:

```cpp
#include "B.h" // Error - B.h includes this file
class A { B * pB; };
```

B.h:

```cpp
#include "A.h" // Error - A.h includes this file
class B { A * pA; };
```

Trying to include header files here results in infinite include recursion. Include guards and "#pragma once" can't solve this issue. They will prevent the recursion, but will result in a reference to an undefined type.
If you imagine the compiler reading A.h, it would include B.h, within which it would effectively ignore the directive to include A.h; it would then parse the definition of B, which includes a reference to A, which would not yet be defined. Forward declaration is the only way to make this work.

That being said, you'll run into other problems when you have classes mutually depending on each other in this way. As mentioned above, in this case you should probably re-design to reduce dependencies. I generally only use forward declarations to eliminate unnecessary include directives.

**Forward Declarations Can Only Do So Much**

Note that forward declarations allow A and B to contain pointers to each other because the compiler doesn't need to know the details of a class to define a pointer to that class. This wouldn't work for member objects.

A.h:

```cpp
#include "B.h"
class A { B MyB; };
```

In this case we must include the full definition of B. The compiler needs its details in order to fully define A. Because of this, there's no way for B to then contain a member of type A. Forward declarations don't make recursive class definitions possible.

Edited by VReality

##### Share on other sites

Generally, a forward declaration is necessary only if the declaration of class A needs to understand the declaration of class B and vice versa; it does not apply to definitions. To put it simply, suppose you have class CNode and class CScene. CScene needs to know about CNode, and suppose CNode needs to know about CScene as well, BUT only at the definition level. See this.

CNode.cpp:

```cpp
#include "CScene.h" // vital and unavoidable!

void CNode::Release() {
    m_pScene->RemoveNode(this); // m_pScene is of CScene* type
}
```

If you want to write such a definition of a CNode function, you need to include the CScene.h header in CNode.cpp, BUT THIS IS POSSIBLE ONLY IF CScene.h does not include CNode.h! But, unluckily, you find yourself in the following situation in the CScene declaration:

CScene.h:

```cpp
// #include "CNode.h" // can be removed if the following line is uncommented
class CNode; // forward declaration, needed to type members in the declaration of CScene

class CScene {
    // ...
    CNode* GetNodeByName(char* name);
    // ...
};
```

Do you see that the forward declaration allowed leaving the include of CNode.h out of CScene.h, thus allowing CScene.h to be used in CNode.cpp definitions while having CScene members correctly typed where they use CNode at the declaration level?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25923416018486023, "perplexity": 3747.7780594004785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805881.65/warc/CC-MAIN-20171119234824-20171120014824-00594.warc.gz"}
https://cbm-wiki.gsi.de/Public/PublicRich
# The Ring Imaging Cherenkov Detector (RICH)

## Cherenkov radiation and RICH detectors

Here you will find a general introduction to the basics of Cherenkov radiation and its use in RICH detectors.

## RICH detector concept

The RICH detector will serve for electron identification from the lowest momenta up to 10-12 GeV/c, as needed for the study of the dielectron decay channel of vector mesons. In the current CBM detector layout the RICH would be positioned behind the magnet, with the silicon tracking system (STS/MVD) in front of it and the first transition radiation detector (TRD) behind it; see the figure below for a sketch:

Figure 1: Detector layout of the RICH, positioned behind a large-aperture dipole magnet with a silicon tracking system inside. The RICH will be followed by several transition radiation detectors serving for further electron identification and tracking.

Combined with particle identification information from the other detectors, a pion suppression of 10000 is required, of which a factor 100-1000 has to be provided by the RICH alone. High detection efficiency for electrons is also required, which calls for at least 10-15 hits per electron ring. As global tracking has to connect tracks in the STS and TRD, the RICH detector should not extend beyond 3 m and should stay within a material budget of 3-4% of a radiation length, in order to limit multiple scattering. A large acceptance of 25° in the laboratory has to be covered to identify the vector mesons in a wide range of rapidity and transverse momentum.

The current detector concept foresees (for more detailed information on the components see below):

• a gaseous RICH detector with vertically separated mirrors (R = 450 cm), gas vessel ~(6-7) m x 5 m x 3 m
• radiator: N2 (if needed with an admixture of CO2 for suppression of fluorescent light)
• mirror: glass or carbon substrate, Al+MgF2 coating, surface ~(5-6) m x 4 m
• photodetector (shielded by the magnet yoke, granularity ~6 mm x 6 mm): preferably PMTs

As introduced above, the Ring Imaging Cherenkov detector (RICH) is designed to provide electron identification in the momentum range of electrons from low-mass vector-meson decays, i.e. from the lowest momenta up to 10-12 GeV/c. These requirements define the possible gaseous radiators for the RICH detector; in addition it would be preferable if the gas were easy to handle, chemically passive and non-flammable. Assuming that pions can be separated from electrons up to 90% of the maximum Cherenkov opening angle θ_max (with cos θ_max = 1/n), the momentum range for identification is illustrated in the figure below as a function of the threshold Lorentz factor γ_th = 1/√(1 − 1/n²), where n is the refractive index of the medium. A radiator with γ_th > 38 would be ideal, because then the Cherenkov angle of pions is less than 90% of θ_max for all momenta smaller than 12 GeV/c.

Figure 2: Momentum threshold for Cherenkov light production for pions and kaons as a function of γ_th. Also shown is the momentum at which the opening angle of pions corresponds to 90% of the opening angle of electrons. The green band thus indicates the approximate region of pion identification as a function of γ_th; the γ_th values for CO2 and N2 are indicated by the dashed lines.

Nitrogen would fulfill all requirements, but CO2 can also be of interest. Important characteristics are its chromatic dispersion and transmittance, see the figures below. One concern with nitrogen might be its fluorescence, which could be quenched by some admixture of CO2 [H. Morii et al., Nucl. Instr. and Meth. A 526 (2004) 399].
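As a cross-check of the numbers quoted on this page (γ_th ≈ 41 for N2, ≈ 33 for CO2, and pion/electron separation up to roughly 12-13 GeV/c), here is a small illustrative Python calculation; the refractive indices are typical visible-light values and are assumptions, not taken from this page:

```python
import math

M_PI = 0.13957  # charged pion mass, GeV/c^2

def gamma_th(n):
    """Threshold Lorentz factor: Cherenkov light requires beta > 1/n."""
    return 1.0 / math.sqrt(1.0 - 1.0 / n**2)

def p_threshold(n, m):
    """Momentum threshold (GeV/c) for a particle of mass m in a radiator with index n."""
    g = gamma_th(n)
    return m * math.sqrt(g**2 - 1.0)

def p_at_90pct_angle(n, m):
    """Momentum where the Cherenkov angle reaches 90% of its saturated value,
    solved by bisection on theta(p) = arccos(1/(n*beta))."""
    theta_sat = math.acos(1.0 / n)
    target = 0.9 * theta_sat

    def theta(p):
        beta = p / math.sqrt(p**2 + m**2)
        return math.acos(min(1.0, 1.0 / (n * beta)))

    lo, hi = p_threshold(n, m) * 1.001, 1000.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if theta(mid) < target:
            lo = mid      # angle still too small: need more momentum
        else:
            hi = mid
    return 0.5 * (lo + hi)

for name, n in [("N2", 1.000298), ("CO2", 1.000449)]:
    print(f"{name}: gamma_th = {gamma_th(n):.0f}, "
          f"pion threshold = {p_threshold(n, M_PI):.1f} GeV/c, "
          f"pion at 90% angle = {p_at_90pct_angle(n, M_PI):.1f} GeV/c")
```

For N2 this reproduces γ_th ≈ 41 and places the 90%-angle point for pions at roughly 13 GeV/c, consistent with the identification range stated above.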
As PMTs are foreseen as the photodetector, a lower wavelength cutoff of 175-150 nm is fine. Within this range the ring resolution also does not yet suffer from the chromatic dispersion.

• chromatic dispersion from [Landolt-Boernstein Series, 6th Edition, volume II/8] and [Ph.D. thesis of Annick Bideau-Mehu (1982)]
• transmittance of N2 from [Y. Tomkiewicz and E.L. Garwin, NIM 114 (1974) 413]
• absorption of water, O2 and CO2 from [Olav Ullaland, RICH04 workshop]

### Mirror

The most important consideration concerning the mirror material will come from global tracking simulations: STS and TRD tracks have to be connected with high precision, which limits the length and material budget of the RICH detector. The maximum length will determine the radius of curvature. And as the mirror gives the largest contribution to the material budget of the RICH, the maximum allowable radiation length will determine whether glass mirrors can be used or whether a lightweight material such as carbon has to be used. Currently, we aim at glass mirrors of 3-4 mm thickness and a diameter of 50-60 cm. Mirrors based on carbon fibres are kept as an alternative. The coating should provide the highest reflection over the full range of photons not absorbed in the gas and detected by the photodetector, i.e. down to about 150 nm. The choice will thus be an Al+MgF2 coating.

• reflection of Cherenkov photons as a function of wavelength, as measured by HADES [J. Friese et al., NIM A 502 (2003) 241]

### Photodetector

Highly granulated PMTs are foreseen as photodetectors. However, as the photodetector is the most important ingredient determining the final number of hits per ring, special care has to be taken to enhance the detection of photons at lower wavelengths. Basically two concepts are currently discussed:

• development of small-size PMTs (diameter ~6-7 mm) by IHEP Protvino with a bialkali photocathode, glass window and a wavelength-shifter film (p-terphenyl) to enhance the sensitivity in the near UV
• MAPMTs, e.g. from Hamamatsu (H8500), with pixel sizes of ~6 mm x 6 mm, bialkali photocathode and UV window to enhance the sensitivity in the near UV

## RICH in simulations

The RICH detector is implemented in the CBM simulation framework (CBMroot) as introduced above:

• gas vessel: entrance and exit windows of 0.25 mm kapton, a 5 mm aluminum vessel, a 0.5 mm carbon beam pipe, and chromatic dispersion and transmittance as presented in the figures above
• mirror: 3 mm glass substrate, radial curvature of 450 cm, and reflection properties as measured by HADES and presented above
• photodetector: different options available, i.e.
  • "Protvino PMT", diameter 8 mm, hexagonal packing, quantum efficiency enhanced in the UV region using wavelength-shifter films attached to each glass window (proposal from Protvino, see e.g. Development of RICH, S. Sadovsky)
  • multianode PMT from Hamamatsu, H8500 with UV glass for enhanced UV sensitivity: an 8x8 MAPMT with an effective pixel size of ~6 mm x 6 mm
  • gas detector with CsI-coated photocathode, see the ALICE development (NIM A 502 (2003) 101)

In order to cover a large acceptance (25° in the laboratory), the overall dimensions in the simulations are:

• gas vessel: ~(6-7) m x 5 m x 3 m
• mirror: 2 mirrors, vertically separated, full size of each ~(5-6) m x 4 m
• photodetector: 2 planes of 3.2 m x 1.4 m each

However, these dimensions and the overall layout are not yet optimized. The main aim of the simulations is currently to prove that the physics can in principle be done with the proposed setup.
This is a very valid concern, as the proposed CBM setup differs from "classical" dielectron experiments such as CERES or HADES: in those experiments the electron identification is performed in front of the magnetic field and in front of the main material budget introduced by the tracking detectors. The CBM setup is in this respect the other way around. This brings the problem that background rejection has to be performed to a large extent by precise tracking alone; see PublicPhysics for more information on feasibility studies of the dielectron measurement.

Simulations are usually performed using either single electrons or Au+Au events generated by the UrQMD model, which are transported by GEANT3 through the CBM setup. In the table below, typical numbers of rings per event are given for different energies and radiators. Besides these two factors, the number depends strongly on the material budget in the STS, which for this simulation was 3.4 mm of silicon equivalent. Tracks with 4 or more STS hits are considered well reconstructable.

| radiator (γ_th) | beam energy | # rings | # rings (e) | # rings (π) | # rings (e, ≥ 4 STS hits) |
|---|---|---|---|---|---|
| N2 (41) | 15 GeV | 57.2 | 41.6 | 6.6 | 8.1 |
| N2 (41) | 25 GeV | 91.3 | 63 | 19 | 14.1 |
| N2 (41) | 35 GeV | 117.6 | 78.8 | 29.7 | 18.3 |
| CO2 (33) | 25 GeV | 113.7 | 69.7 | 34.5 | 14.4 |

Assuming the different types of photodetectors described above, the resulting properties are (N2 radiator):

| photodetector | # channels | N hits/ring (e) | geometrical coverage | double hits |
|---|---|---|---|---|
| Protvino PMT | 160k | 40 | 91% | ~12% |
| Hamamatsu | 214k | 22 | 85% | ~7% |
| CsI | 140k | 20.7 | 99% | ~13% |

The values for the geometrical coverage include some spacing for assembly. The number of channels is calculated for the currently assumed area; an optimized layout will probably reduce the number by about 25-40%.

First estimates of the ring-radius resolution, which is important for e-π separation at higher momenta, indicate that 2-3% should be within reach. This would result in a 3σ separation of electrons and pions up to momenta of 13.5 GeV/c. The following table summarizes estimates of the single-photon Cherenkov opening-angle resolution for the main sources:

| source | contribution |
|---|---|
| multiple scattering | ~1 mrad (for p = 1 GeV/c) |
| magnetic stray field | < 1 mrad (for p = 1 GeV/c) |
| emission point | small because of corrections, optimization |
| angular deviation of mirror | < 1 mrad |
| chromatic dispersion | > 1 mrad (strongly dependent on wavelength) |
| pixel size | 1-2 mrad |

These are several independent errors of ~1 mrad each. The combined error for a full ring of N photons can be estimated as σ_ring = σ_single/√N; with a single-photon error of roughly 2.3 mrad and N ≈ 11-40 hits per ring, this gives approximately 0.36-0.7 mrad. The Cherenkov opening angle for electrons in an N2 radiator is θ_c = 24.4 mrad, so the estimated resolution lies in the range of 2-3%.

### Electron identification

With current, still preliminary simulations including the full CBM detector setup, ring-recognition algorithms, ring-track matching algorithms and certain ring-selection cuts, the following results are achieved using STS and RICH information alone (status as of Jan. 2007):

• Radius versus momentum for selected "good" rings; the lines show a band for electron identification.
• Pion suppression factor = (π identified as e) / (π in RICH (or RICH & TRD) acceptance). For the calculation of the pion suppression, all particles falling into the radius band shown in the plot above were identified as electrons. Up to 10 GeV/c the main source of misidentification is the matching of primary-vertex pions to rings from secondary electrons which themselves have no reconstructed track. From 10 GeV/c on, pions start to contribute to the sample which are identified as electrons because their ring radius falls into the electron cut.
Both sources can be reduced by additionally using information from the TRD detector in particular.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7940322160720825, "perplexity": 4400.0968804801805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00061.warc.gz"}
https://stats.stackexchange.com/questions/352082/the-curse-of-high-dimension-and-distance/352090
# The Curse of High Dimension and Distance

For extracting features from video frames (2 samples/sec) I use the Keras framework in Python and load VGG16, whose input size is (150, 150, 3) and output size is (4, 4, 512). After the feature extraction step I want to cluster the frame features with hierarchical k-means. My problems are as follows:

1. I save each frame's features in a vector of size 8192. For a video that has 8000 frames, if I only reduce each frame to (150, 150) and extract features, then I have a feature matrix of size (640, 8192). As you can see, the feature matrix for even one video is very large and, besides, sparse. What is the best way to reduce its dimension?
2. What is the best metric for calculating the distance between a pair of frame features? The space is very sparse and even the feature values are very small, so Euclidean distance is not a wise choice!

CLARIFICATION

What a frame feature is: as you already know, videos are nothing but frames, and with the help of deep learning (VGG16 without the last fully connected layer) we can extract their features in the way we like; for more information see kaggle.com/keras/vgg16. In this particular case, the output features have size (4*4*512), which becomes a row vector of 8192 numbers.

Data: my data, as mentioned above, is a very sparse and large matrix (640, 8192). Non-zero values rarely exceed 100 per row.

IDEAS for dimension reduction: two methods are available for DR.

1. Principal component analysis (PCA): a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. (source: https://en.wikipedia.org/wiki/Principal_component_analysis)
2. Singular value decomposition (SVD): a factorization of a real or complex matrix. It is the generalization of the eigendecomposition of a positive semi-definite normal matrix (for example, a symmetric matrix with positive eigenvalues) to any m*n matrix, via an extension of the polar decomposition. (source: https://en.wikipedia.org/wiki/Singular-value_decomposition)

The most important parameter of these two methods is "n_components", the number of components to keep. This parameter can be at most min(n_samples, n_features). As you can guess, the number of components we can keep depends on the number of samples and the number of features. Suppose we have two videos with feature matrices of size (140, 8192) and (640, 8192); the first element is the number of frames and the second is the number of features. The maximal PCA output for these two videos is (140, 140) and (640, 640). We need matrices with a common feature axis in order to compute distances and cluster. How do we solve this problem?

I know this clarification is long to read, but it's worth it!

• It is unclear what your data look like. Their values. What are frame features, and in what form were they extracted? – ttnphns Jun 19 '18 at 9:08
• @ttnphns As you probably know, videos are nothing but frames, and with the help of deep learning (VGG16 without the last fully connected layer) we can extract their features in the way we like; for more information see kaggle.com/keras/vgg16. In this particular case, the output features have size (4*4*512), which becomes a row vector of 8192 numbers. My data, as mentioned above, is a very sparse and large matrix (640, 8192). Non-zero values are rarely above 100. I hope this clarification helps you understand the problem. Feel free to ask for more :) – Shahroozevsky Andrea Jun 19 '18 at 9:26
• Very nice. It is what you ought to explain in your question. And maybe show a snippet of your data, so people not working with images or Keras can quickly understand the situation. – ttnphns Jun 19 '18 at 9:57
• If you read the question carefully, you can easily find the keywords in it, keywords like "keras" or "video". You cannot say whether a question belongs in the Python or the clustering category; I can solve it :) – Shahroozevsky Andrea Jun 19 '18 at 10:12
• You did not understand my comment, probably. I said that a well-asked question would be a question which could potentially be understood and answered by an analyst not working with images, Python or Keras. Anyway. – ttnphns Jun 19 '18 at 10:42

Dimension reduction is always a trade-off between resources and precision. You can use PCA (setting a high value of n, say 5000) and use the measure of exactly how much variation is explained by each component to try to determine the number of principal components to keep. Maybe 80% is enough, but if you've only explained 30% of your data's variance after 5000 principal components, then maybe you need an alternative. One way to see this visually is to plot the number of components as x and the explained variance as y. You're likely to get a curve that starts off shallow and gets steeper at some point (or starts steep and levels off to shallow, depending on which way your x-axis is ordered). Finding the "elbow" of this curve will give you the point at which the cost of adding an additional component yields less and less explanatory value. It's not a perfect answer, but it will give you an indication of roughly where to set your n. sklearn.decomposition.PCA has an output that shows the explained variance after fitting - use this to get a better feel for your data.

So the short version is: you can run PCA with any number n you want, but only pick the top k components where, in combination, they do a good enough job (where "good enough" is defined by you) of explaining the variance in your data. Alternatively, look into building an auto-encoder using a convolutional neural net (Keras/TensorFlow); these tend to do a very good job of compression, especially where more linear methods (like PCA) find the task more challenging.

[EDIT] Also, to address your final point: PCA won't ever reduce the feature space to more components than the number of observations. That is, where you're feeding it a small number of observations vs. a large number of features, you can't force PCA to produce more components than your observation count. Either build up more data, or generate it using your initial dataset as a starting point, duplicating it (sensibly, adding some random variation perhaps) so that you can reliably feed in a large enough dataset. This itself might pose you some problems, since now you'll have to deal with a larger memory footprint due to the input length rather than the feature width. Brute-forcing it so that your PCA squeezes your feature width down to only 50 components would limit your issue in the first place, and perhaps your resulting feature space will still explain enough of the variance to be useful. It's common to use only maybe 5 components, for example, but with an associated loss of precision. The truth lies in your data.

For high dimensions, I read that cosine similarity works better than Euclidean distance (https://en.wikipedia.org/wiki/Cosine_similarity). Maybe you can try this.
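For concreteness, a minimal Python sketch of both suggestions above: choosing the number of principal components from the cumulative explained variance, and comparing frames by cosine similarity. The array contents and the 80% threshold are illustrative assumptions, not from the question's data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in for one video's feature matrix: 640 frames x 8192 features
X = np.random.rand(640, 8192)

# Fit PCA with the maximum possible number of components
pca = PCA(n_components=min(X.shape))
X_reduced = pca.fit_transform(X)

# Keep the smallest k whose components explain >= 80% of the variance
cumvar = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cumvar, 0.80)) + 1
print(f"components needed for 80% variance: {k}")
X_k = X_reduced[:, :k]

# Pairwise cosine similarity between frame features
S = cosine_similarity(X_k)  # shape (640, 640)
print("similarity of frames 0 and 1:", S[0, 1])
```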
• Hi :) Do you have any implementation of cosine similarity (in any programming language)? – Shahroozevsky Andrea Jun 19 '18 at 9:34
• Yes. If you use Python, just use scikit-learn (scikit-learn.org/stable/modules/generated/…); otherwise, here is a post with several implementations (stackoverflow.com/questions/18424228/…). – alexandre_d Jun 19 '18 at 9:38
• Thank you, I will try them and let you know about the results :) – Shahroozevsky Andrea Jun 19 '18 at 9:41
• Note that the distance directly corresponding to cosine similarity, the chord distance, is the Euclidean distance computed after both vectors are normalized to a sum of squared values within each vector of 1. – ttnphns Jun 19 '18 at 10:06
• So by this definition, cosine similarity and Euclidean distance are the same? @ttnphns – Shahroozevsky Andrea Jun 19 '18 at 10:15
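On that last exchange: they are not the same in general, but after L2-normalizing each vector, squared Euclidean (chord) distance and cosine similarity are tied by the identity ‖u − v‖^2 = 2(1 − cos θ). A quick, illustrative numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.random(8192), rng.random(8192)

# L2-normalize so each vector has unit length
u = a / np.linalg.norm(a)
v = b / np.linalg.norm(b)

cos_sim = float(u @ v)                 # cosine similarity of unit vectors
chord_sq = float(np.sum((u - v) ** 2)) # squared Euclidean (chord) distance

print(chord_sq, 2 * (1 - cos_sim))     # the two values agree
```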
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49884647130966187, "perplexity": 770.5319993343018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251681412.74/warc/CC-MAIN-20200125191854-20200125221854-00421.warc.gz"}
http://oliverfriedmann.com/2012/12/06/applied-engineering-modules-functors-and-maps/
# Applied Engineering: Modules, Functors and Maps

In the last post of the series, we introduced the functional programming language OCaml and basic types like lists. We applied higher-order functions like map to them (recall that a higher-order function essentially is a function taking a function as an argument). Recall that we've considered the list of people:

```ocaml
let people = [
  (0, "Hank"); (1, "Karen"); (2, "Becka");
  (3, "Mia"); (4, "Julian"); (5, "Trixi")
]
```

If we now want to look up the name of the person with id, let's say, 4, we can only walk through the list until we find an entry that matches the respective number. This is a very inefficient algorithm for looking up data (its worst-case and average-case complexity are both linear). We can be much faster using a different data structure that stores all entries in order and applies binary search. The classic structure is a balanced tree, and all database systems rely on such an organization of data.

Luckily, OCaml already provides a module for handling such data. A module can be seen as a namespace for types and functions that belong together. We will use the map module, which allows us to quickly look up a key and return a corresponding value; in our case, the keys would be the people's ids and the values would be the people's names. The map module is general enough to allow arbitrary keys and arbitrary values to be stored in a balanced tree.

If you think about that for a minute, you might see an issue here. In order to organize the data in the tree, you need to be able to compare keys with each other, so they can be arranged in the right order. But how would the map module compare keys if it doesn't know how to compare certain types? Well, it doesn't. We have to specify how our keys should be compared. How do we explain to the map module how our keys can be compared? By parametrizing the map module with an auxiliary module that explains the key comparison. More precisely, we map the auxiliary module to a concrete map module by a functor (a mapping from a module to a module).

```ocaml
module IntMap = Map.Make(
  struct
    type t = int
    let compare = compare
  end
);;
```

In other words: we define the module IntMap to be the Map.Make functor applied to the auxiliary module with key type int (our people's ids) and the comparator compare (OCaml provides a "magic" compare function for every type, which we use here; for ints, it's the natural ordering on integers).

Next, we want to convert our list of people to an IntMap, mapping ids to people's names. In order to build and use the map, we need three IntMap functions: one to create an initial empty IntMap instance, one to add assignments to it, and one to look up keys.

```ocaml
IntMap.empty: 'a IntMap.t
IntMap.add: int -> 'a -> 'a IntMap.t -> 'a IntMap.t
IntMap.find: int -> 'a IntMap.t -> 'a
```

Recall that 'a is a type variable. In our case, the type variable corresponds to the value type, and since the map does not care about the specifics of this type, we don't have to fix it via any auxiliary modules and functors. The first function, IntMap.empty, simply returns an empty instance of an IntMap. The second function, IntMap.add, takes a key, a value and an existing IntMap as arguments and returns a new IntMap that adds the new key-value assignment to the existing IntMap. The third function, IntMap.find, takes a key and an existing IntMap as arguments and returns the corresponding value (assuming that the key-value pair is present).
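As a quick, illustrative sanity check of these three functions in the OCaml toplevel (not from the original post):

```ocaml
let m = IntMap.add 1 "Karen" (IntMap.add 0 "Hank" IntMap.empty);;
IntMap.find 1 m;;  (* - : string = "Karen" *)
```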
Our conversion function therefore looks as follows:

```ocaml
let int_list_to_map int_list =
  let empty_map = IntMap.empty in
  let rec helper rest_list acc_map =
    match rest_list with
      [] -> acc_map
    | (domain, range)::rest_list' ->
        helper rest_list' (IntMap.add domain range acc_map)
  in
  helper int_list empty_map
```

The function starts with an empty map and then walks recursively through the list, adding the assignments one by one to the map. We can now use it to generate our people mapping and use it to look up people's names:

```ocaml
let people_map = int_list_to_map people
let people_func p = IntMap.find p people_map
```

Recall that in our initial post, we had two more lists: the list of constraints (which person likes to sit next to which person) and the table configuration (which seat is close to which seat). Every entry in each of these lists is a triple of two ids and a value (how likable / how close).

```ocaml
let constraints = [
  (0, 1, 1.0); (0, 2, 1.0); (0, 3, -0.5); (0, 4, -1.0);
  (1, 0, 0.75); (1, 2, 1.0); (1, 3, 0.5); (1, 4, 0.5); (1, 5, -0.75);
  (2, 0, 0.5); (2, 1, 0.5); (2, 3, 0.75); (2, 4, -0.75);
  (3, 0, 1.0); (3, 5, 0.5);
  (4, 1, 0.5);
  (5, 0, 1.0); (5, 1, -0.5)
]

let table = [
  (0, 1, 1.0); (0, 4, 1.0);
  (1, 0, 1.0); (1, 4, 1.0); (1, 2, 1.0); (1, 5, 0.5);
  (2, 3, 1.0); (2, 1, 1.0); (2, 5, 1.0); (2, 4, 0.5);
  (3, 2, 1.0); (3, 5, 1.0);
  (4, 0, 1.0); (4, 1, 1.0); (4, 5, 1.0); (4, 2, 0.5);
  (5, 3, 1.0); (5, 4, 1.0); (5, 3, 1.0); (5, 1, 0.5)
]
```

We again want to convert both lists to maps, but in this case the key consists of two ids, i.e. it is an int product type. We need to create a new module Int2Map by mapping a new auxiliary module (one that allows key pairs to be compared with each other) to a map:

```ocaml
module Int2Map = Map.Make(
  struct
    type t = int * int
    let compare = compare
  end
)

let int2_list_to_map int2_list =
  let empty_map = Int2Map.empty in
  let rec helper rest_list acc_map =
    match rest_list with
      [] -> acc_map
    | (domain, domain2, range)::rest_list' ->
        helper rest_list' (Int2Map.add (domain, domain2) range acc_map)
  in
  helper int2_list empty_map

let constraint_map = int2_list_to_map constraints
let table_map = int2_list_to_map table
```

We again want to define functions to look up the respective values by key pairs. There will be cases, however, in which certain key pairs don't exist in the mappings. We didn't specify, for instance, any proximity for the two seats 0 and 2 with respect to each other, because we implicitly assign them the proximity value 0; the same holds for the constraints. We therefore need to make sure that our lookup routines catch the exception raised when no key pair can be found:

```ocaml
let constraint_func p q =
  try Int2Map.find (p, q) constraint_map with Not_found -> 0.0

let table_func p q =
  try Int2Map.find (p, q) table_map with Not_found -> 0.0
```

We say here: try to execute the lookup code provided by the Map module; if it raises the exception Not_found, return 0.

Next, we need to define a new type for handling dinner table assignments. It should be a map from a person's id to a seat id, in other words an int IntMap.t. To play around with it, let us set up a test assignment that assigns each person the seat with the same id:

```ocaml
let test_assignment = int_list_to_map [(0,0); (1,1); (2,2); (3,3); (4,4); (5,5)]
```

So far, so good. To get a feeling of what's going on, let us print the table assignment.
We will make use of another higher-order function provided by the Map module that allows us to walk through the map:

```ocaml
IntMap.fold: (int -> 'a -> 'b -> 'b) -> 'a IntMap.t -> 'b -> 'b
```

This looks pretty complicated, so let's check the details. The first argument is a function that accepts a key and a value, and maps some data (of type 'b) to updated data (of type 'b again). The second argument is the map, and the third argument is some initial data (of type 'b). Overall, the routine returns data of type 'b. Internally, the function starts with the initial data and the first key-value pair and calls the user function to obtain an updated data object. It continues in that fashion with the second key-value pair, until all key-value pairs have been handled. The final updated data object is then returned. We will make use of it to build a textual representation of dinner table assignments:

```ocaml
let format_assignment assignment =
  IntMap.fold (fun person seat acc_format ->
    acc_format ^ people_func person ^ " sits on seat #" ^ string_of_int seat ^ "\n"
  ) assignment ""
```

How does it work? It starts with the empty string "" and adds a line for every assignment pair in the user-defined function. The current string is given by the accumulator variable acc_format. The output accumulator is built by concatenating (denoted by ^ in OCaml) the accumulator variable with the person's name and the seat number. Applying that to our test assignment, we get the following output:

```ocaml
# print_string (format_assignment test_assignment);;
Hank sits on seat #0
Karen sits on seat #1
Becka sits on seat #2
Mia sits on seat #3
Julian sits on seat #4
Trixi sits on seat #5
```

We conclude the post by realizing the assignment value function of our initial post, which allows us to rate a given dinner table assignment:

$\sum_{p,q \in People} constraint(p,q) \cdot proximity(assign(p), assign(q))$

We make extensive use of the fold operation again (it corresponds to the sum operation here):

```ocaml
let assignment_value assignment =
  IntMap.fold (fun p _ acc_value ->
    IntMap.fold (fun q _ acc_value' ->
      acc_value' +. constraint_func p q *.
        table_func (IntMap.find p assignment) (IntMap.find q assignment)
    ) people_map acc_value
  ) people_map 0.0
```

For the record, the assignment value of our test assignment is 3.5. We should find an assignment that yields a higher value, right? We will see how to accomplish that in the final post of the series.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18750186264514923, "perplexity": 2538.347231295714}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818696696.79/warc/CC-MAIN-20170926212817-20170926232817-00076.warc.gz"}
http://slideplayer.com/slide/2535315/
# Lesson 1 - Basics

Objectives: to know how to approximate numbers to a required accuracy.

1. Basic Number Types
2. Decimal Places
3. Significant Figures
4. Writing numbers in Standard Form
5. Writing numbers in Engineering notation

## Presentation transcript:

**Number Types**

To a scientist, there are two types of numbers:
- Exact -> the amount of money in your pocket
- Approximate -> measurements like weight and height

Mathematicians have more definitions of numbers:

| Set | Description | Examples |
|---|---|---|
| Counting numbers | positive whole numbers | 1, 2, 3, 4, 5, ... |
| Natural numbers N | counting numbers and zero | 0, 1, 2, 3, 4, ... |
| Integers Z | all positive and negative whole numbers | ..., -2, -1, 0, 1, 2, ... |
| Rational numbers Q | numbers which can be written as a fraction m/n where m and n are integers | -1, 0, 1/2, 2 3/4 |
| Real numbers R | all rational and irrational numbers | -1, 2 3/4, π |

**Rational Numbers**

Most familiar numbers can be written as a fraction in its lowest form.

Example: express 0.123123123... as a fraction.

Trick: multiply by 1000 to line up the repeating block, then subtract to get rid of the decimals:

x = 0.123123123...
1000x = 123.123123...
1000x − x = 123, so 999x = 123 and x = 123/999 = 41/333.

**Irrational Numbers**

But some numbers cannot be expressed as fractions; examples include π and √2. These are numbers where the patterns in the decimals never repeat, so we cannot express numbers like this in fraction form. (In fact, in a precise sense there are far more irrational numbers than rational ones: the rationals are countable, the irrationals are not.)

**Proof that √2 is irrational**

Method: we assume that it is rational and then contradict this assumption.

Suppose √2 = m/n, where m and n are integers and the fraction cannot be simplified further (i.e. it is in lowest form). Squaring both sides gives 2 = m²/n², so m² = 2n², and m² is an even number.

m² even implies that m is even (compare m = 1, 2, 3, 4, 5 with m² = 1, 4, 9, 16, 25: only even m give even m²). So m can be written as 2a (as m is even), and then m² = 4a² = 2n², so n² = 2a², which means n is even too!

Both numerator and denominator are divisible by 2, therefore m/n is not in lowest form and can be simplified. Contradiction!

**Starter**

You need to buy some carpet for your bedroom. You measure the width and length of your room as 7.22 m x 6.58 m. You do not have a calculator or a pen, and you have to estimate the area quickly in your head!

- How do you estimate the area?
- What values did you use for the length and width?
- The carpet costs £5.80 per square metre; consider how much money you should take to the shop.

The area is 7.22 m x 6.58 m. The area must be smaller than 8 m x 7 m => 56 m², and larger than 7 m x 6 m => 42 m². So 42 m² < area < 56 m². But a better guess might be 7 m x 7 m => 49 m².

These workings are all to 1 significant figure (s.f.). Obviously taking more s.f. will result in a more accurate answer.

How much money should you take? It is easy to work out exactly how much if you are good with mental arithmetic or have a calculator, but in principle, if you take more than you need you can't go wrong! If you're bad with numbers, take 60 x 6 = £360.
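Returning to the repeating-decimal example above (0.123123... = 41/333), here is a quick mechanical check using Python's fractions module (illustrative, not part of the original lesson):

```python
from fractions import Fraction

x = Fraction(123, 999)  # from 1000x - x = 123
print(x)                # 41/333, automatically reduced to lowest form
print(float(x))         # 0.12312312312312312
```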
**Significant Figures**

Consider the real number 37.500. All the digits to the left of the decimal point are important. Only the 5 to the right of the decimal point is important, as 37.5 is the same as 37.500. Consider 37.5001: then all of the digits are important. SIGNIFICANT FIGURES (s.f.) means IMPORTANT DIGITS.

In 37.5: 3 is the 1st sig. fig., 7 is the 2nd sig. fig., 5 is the 3rd sig. fig. The significance of the digits decreases from left to right.

**Rounding to Sig Figs**

Example: approximate 37.5 to 1 significant figure. Look at the next most significant digit (the 2nd digit). 37.5 lies between 30 and 40: round up if ≥ 35, round down if < 35. So 37.5 is 40 to 1 significant figure; we write 40 (1 s.f.).

Example: approximate 37.5 to 2 significant figures. Look at the next most significant digit (now the 3rd digit). 37.5 lies between 37 and 38: round up if ≥ 37.5, round down if < 37.5. So 37.5 is 38 to 2 significant figures; we write 38 (2 s.f.).

Example: approximate 37.5 to 3 significant figures. This is just 37.5 (because there are only 3 digits).

Significant figures (s.f.) are counted from the left of a number. Always begin counting from the first digit that is not zero:

9 4 6 0 3 . 5 8
1st 2nd 3rd 4th 5th 6th 7th significant figure

In 0.000001490207, notice that a zero can be significant if it is in the middle of a number.

Exercise: write the following to 3 s.f.: a) 12.455  b) 0.013026  c) 0.1005  d) 13445.698  e) 0.1999

Find the following: 801296 to 1 s.f.; 801296 to 3 s.f.; -52.9000 to 3 s.f.; -52.9001 to 4 s.f.

**Decimal Places**

This is another way numbers are approximated or rounded. The principle is the same as for sig. figs., but we are only interested in the digits to the right of the decimal point.

Example: express π to 1 decimal place. π = 3.1415926535897932384... π lies between 3.1 and 3.2: round up if ≥ 3.15, round down otherwise. So π is 3.1 (1 d.p.).

Example: express π to 2 decimal places. The next digit (1) is < 5, so do not round up: π = 3.14 (2 d.p.).

Example: express π to 6 decimal places. The next digit is ≥ 5, so round up: π = 3.141593 (6 d.p.).

**Scientific Notation**

A shorthand way of writing large or small numbers without writing all of the zeros.

Example: the distance from the Sun to the Earth, 93,000,000. Step 1: move the decimal point left, leaving only one digit in front of it. Step 2: write the number without the trailing zeros: 9.3. Step 3: count how many places you moved the decimal point and make that your power of ten: 9.3 x 10^7.

Example: the partial pressure of CO2 in the atmosphere is 0.000356 atm. This number has 3 sig. figs., but the leading zeros are only place-keepers and can cause some confusion. Expressed in scientific notation this is 3.56 x 10^-4 atm, which is much less ambiguous, as the 3 sig. figs. are clearly shown.

**Engineering Notation**

This is the same as scientific notation except that the power of ten is written after the letter E:

| Number | Scientific notation | Engineering notation |
|---|---|---|
| 100 | 1. x 10^2 | 1.E2 |
| 1000 (1 sig. fig.) | 1. x 10^3 | 1.E3 |
| 1000 (2 dec. pl.) | 1.00 x 10^3 | 1.00E3 |
| -0.00123 | -1.23 x 10^-3 | -1.23E-3 |
| 1007 | 1.007 x 10^3 | 1.007E3 |
**Summary**

1. Significant figures are of more general use, as they don't depend on the units used, e.g. 2,301.2 m (1 d.p.) = 2.3012 km (4 d.p.).

2. Answers which are money should usually be given to 2 decimal places, i.e. to the nearest penny: 3 x £23.57895 = £70.73685 = £70.74 to the nearest penny.

3. You must use at least one more s.f. in your working than in your answer. To give an answer to 3 s.f. you generally need to use at least 4 s.f. in working; to give an answer to 4 s.f. you generally need to use at least 5 s.f. in working. Example: calculate 3.7545 x 8.91235 to 3 sig. figs. You should use at least 3.754 x 8.912, but I would use all the digits on the calculator unless otherwise stated.

4. When calculating with numbers that have been measured to different levels of accuracy, it makes sense to work the calculation to the lowest level of measurement: "treat like with like". Example: if a car's speed has been measured as 40 (1 sig. fig.) and the distance travelled as 10.91325 km (7 sig. figs.), it makes some sense to estimate the result as 40 (1 s.f.) x 10 (1 s.f.) = 400 sec. If instead the speed has been measured as 40.012 (5 sig. figs.) and the distance as 10.91325 km (7 sig. figs.), it makes sense to estimate 40.012 (5 s.f.) x 10.913 (5 s.f.) = 436.6501 sec = 436.65 (5 sig. figs.), or 436.65 (2 dec. pl.).

Try to work to at least one digit higher accuracy than your answer, and try to measure numbers to a sensible order of accuracy.
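The rounding rules in this lesson are easy to capture in code. A small illustrative Python helper follows; note that Python's built-in round uses banker's rounding, so exact .5 ties may round to the even digit rather than always up, as taught above:

```python
import math

def round_sig(x: float, n: int) -> float:
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    # Position of the n-th significant digit relative to the decimal point
    digits = n - 1 - int(math.floor(math.log10(abs(x))))
    return round(x, digits)

print(round_sig(37.5, 1))          # 40.0   (1 s.f.)
print(round_sig(37.5, 2))          # 38.0   (2 s.f.)
print(round_sig(0.013026, 3))      # 0.013  i.e. 0.0130 (3 s.f.)
print(round(3.14159265358979, 6))  # 3.141593 (6 d.p.)
print(f"{93_000_000:.1E}")         # 9.3E+07, the E-notation shown above
```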
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8343631625175476, "perplexity": 1832.4974230503708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867095.70/warc/CC-MAIN-20180624215228-20180624235228-00278.warc.gz"}
https://gamedev.stackexchange.com/questions/58006/grid-based-lighting-in-xna-monogame
# Grid Based Lighting in XNA/Monogame

I know that questions like this have been asked many times, but I have not found one exactly like this yet. I have implemented a top-down, grid-based world in MonoGame, and am starting on the lighting system soon. The way I want to do lighting is to have a grid that is 4 times wider and higher, basically splitting each world tile into a 4x4 system of "subtiles". I would like to use a flow-like system to spread light across the tiles, reducing the light by a small amount each step. This is the kind of effect I was going for: http://i.imgur.com/rv8LCxZ.png

The black grid lines are the light grid, the red lines are the actual tile grid, and the light drop-off is very exaggerated. I plan to render the world by drawing the unlit grid to a separate RenderTarget2D, then rendering the lighting grid to a separate target and overlaying the two.

Basically, my questions are:

1. What would be the algorithm for a flow-style lighting system like this?
2. Would there be a more efficient way of rendering this?
3. How would I handle the darkening of the light with colors: reducing the RGB values in each grid cell, or reducing the alpha in each grid cell, assuming that I render the light map over the grid using blending?
4. Even assuming the former are possible, what BlendState would I use for that?

A flood-fill lighting system should do what you want. First, set the light value of source tiles to 1, and all other tiles to 0. Next, to propagate the light, use a recursive DFS function to set the light values of neighboring tiles to some attenuation of the source light. So essentially, you would have something like:

```csharp
void PropogateLight(float sourceLight, int toX, int toY)
{
    // Stop at the grid edges ('width'/'height' are your grid dimensions).
    if (toX < 0 || toY < 0 || toX >= width || toY >= height)
        return;

    float newLight = attenuate(sourceLight); // e.g. sourceLight - falloff per subtile
    if (newLight <= 0)
        return; // too dim to matter; don't overwrite brighter tiles with 0

    // Already at least this bright: nothing to do. This check also
    // guarantees termination, since the propagated light strictly decreases.
    if (tiles[toX][toY].Light >= newLight)
        return;

    tiles[toX][toY].Light = newLight;

    PropogateLight(newLight, toX + 1, toY);
    PropogateLight(newLight, toX - 1, toY);
    PropogateLight(newLight, toX, toY + 1);
    PropogateLight(newLight, toX, toY - 1);
}
```

Once you have the light values set, just do:

```csharp
spriteBatch.Draw(texture, position, new Color(light, light, light));
```

• Thanks for the response. I'm assuming that this would work just as well if I wanted colored light; I would just replace the blanket light value with something like PropogateRLight or PropogateGLight? Also, I'm assuming here that the attenuate function just decreases the amount of light by a set amount? – sm81095 Jun 24 '13 at 3:58
• Also, since each world tile is split up into 16 smaller lit parts, would it be more efficient to draw all of the unlit tiles, then the light map, and combine the two, or to split each world tile into 16 smaller ones and render using the color setting into a single RenderTarget and not worry about separate render targets? – sm81095 Jun 24 '13 at 4:02
• Correct! You can split PropogateLight into different channels. The attenuate function would indeed reduce the intensity of the light. Is there any reason for the RenderTargets? Why not just tint it the first time? – untitled Jun 24 '13 at 4:13
• Alright, thanks for the answer! I'll test this out later today. – sm81095 Jun 24 '13 at 14:44
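On questions 2-4: a common approach is to draw the light grid into its own RenderTarget2D as grayscale (or colored) light values and then multiply it over the scene. Below is a sketch of that two-pass composite, assuming standard XNA/MonoGame APIs; the custom multiply BlendState is the key part, and names like sceneTarget, lightTarget, DrawWorldTiles and DrawLightGrid are illustrative placeholders:

```csharp
// result = sceneColor * lightColor (multiplicative blending)
BlendState multiplyBlend = new BlendState
{
    ColorSourceBlend = Blend.DestinationColor,
    ColorDestinationBlend = Blend.Zero,
    AlphaSourceBlend = Blend.DestinationAlpha,
    AlphaDestinationBlend = Blend.Zero,
};

// Pass 1: unlit world into sceneTarget.
GraphicsDevice.SetRenderTarget(sceneTarget);
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin();
DrawWorldTiles(spriteBatch);        // your normal tile drawing
spriteBatch.End();

// Pass 2: light grid into lightTarget, one quad per subtile tinted by its light.
GraphicsDevice.SetRenderTarget(lightTarget);
GraphicsDevice.Clear(Color.Black);  // unlit areas multiply the scene to black
spriteBatch.Begin();
DrawLightGrid(spriteBatch);         // e.g. Draw(white1x1, cellRect, new Color(l, l, l))
spriteBatch.End();

// Composite: scene first, then the light map multiplied on top.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin();
spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
spriteBatch.End();
spriteBatch.Begin(SpriteSortMode.Deferred, multiplyBlend);
spriteBatch.Draw(lightTarget, Vector2.Zero, Color.White);
spriteBatch.End();
```

With multiplicative blending, the "RGB vs. alpha" question resolves itself: a light cell tinted (r, g, b) scales the scene's channels directly, so colored light comes for free and alpha never needs to be touched.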
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16215112805366516, "perplexity": 1480.3202119175135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576355.92/warc/CC-MAIN-20190923105314-20190923131314-00101.warc.gz"}
http://export.arxiv.org/list/cond-mat.dis-nn/1801
# Disordered Systems and Neural Networks

## Authors and titles for cond-mat.dis-nn in Jan 2018

[ total of 71 entries; showing 1-25 ]

[1] Title: Kramers-Kronig relations and the properties of conductivity and permittivity in heterogeneous media
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn)

[2] Title: Stability and pre-thermalization in chains of classical kicked rotors
Comments: 21 single-column pages, 13 figures
Journal-ref: Atanu Rajak, Roberta Citro, Emanuele G. Dalla Torre, J. Phys. A: Math. Theor. 51 465001 (2018)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Quantum Gases (cond-mat.quant-gas); Quantum Physics (quant-ph)

[3] Title: A relativistic extension of Hopfield neural networks via the mechanical analogy
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Neural and Evolutionary Computing (cs.NE); Machine Learning (stat.ML)

[4] Title: Scaling Behavior in the 3D Random Field $XY$ Model
Authors: Ronald Fisch
Comments: 6 pages, 3 figures. arXiv admin note: text overlap with arXiv:0709.4658
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech)

[5] Title: Stretched Exponential Relaxation of Glasses: Origin of the Mixed Alkali Effect
Journal-ref: American Ceramic Society Bulletin 96, no. 4 (2017): 34-36
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Materials Science (cond-mat.mtrl-sci)

[6] Title: The plasmon-polariton mirroring due to strong fluctuations of the surface impedance
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn)

[7] Title: Fano Resonances in Flat Band Networks
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn)

[8] Title: Multicanonical Sampling of the Space of States of H(2,n)-Vector Models
Journal-ref: Shevchenko, Y.A., Makarov, A.G., Andriushchenko, P.D. et al. J. Exp. Theor. Phys. (2017) 124: 982
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn)

[9] Title: Spiking label propagation for community detection
Comments: Version 2: 8 pages, 6 figures
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Physics and Society (physics.soc-ph)

[10] Title: Large deviation theory for diluted Wishart random matrices
Journal-ref: Phys. Rev. E 97, 032124 (2018)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech); Data Analysis, Statistics and Probability (physics.data-an)

[11] Title: Spectral engineering and tunable thermoelectric behavior in a quasiperiodic ladder network
Journal-ref: Physics Letters A, Vol. 383, Issue 6, Page 570-577 (2019)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Mesoscale and Nanoscale Physics (cond-mat.mes-hall)

[12] Title: Three-dimensional chimera patterns in networks of spiking neuron oscillators
Journal-ref: Phys. Rev. E 97, 052213 (2018)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Chaotic Dynamics (nlin.CD); Pattern Formation and Solitons (nlin.PS)

[13] Title: Maximally Random Discrete-Spin Systems with Symmetric and Asymmetric Interactions and Maximally Degenerate Ordering
Comments: Final published version, 4 pages, 5 figures
Journal-ref: Phys. Rev. E 97, 052102 (2018)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech)

[14] Title: The quasi-periodic quantum Ising transition in 1D
Journal-ref: Phys. Rev. Lett. 120, 175702 (2018)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech); Strongly Correlated Electrons (cond-mat.str-el)

[15] Title: Feeding the multitude: A polynomial-time algorithm to improve sampling
Comments: 16 pages, 15 figures, 2 tables
Journal-ref: Phys. Rev. E 99, 043306 (2019)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Computational Physics (physics.comp-ph); Quantum Physics (quant-ph)

[16] Title: Many-body localization, symmetry, and topology
Comments: Key Issues Review for Reports on Progress in Physics. Published version
Journal-ref: Rep. Prog. Phys. 81, 082501 (2018)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech); Strongly Correlated Electrons (cond-mat.str-el)

[17] Title: Disorder engineering: From structural coloration to acoustic filters
Journal-ref: Phys. Rev. Materials 2, 075201 (2018)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Mesoscale and Nanoscale Physics (cond-mat.mes-hall); Optics (physics.optics)

[18] Title: Structural stability of interaction networks against negative external fields
Journal-ref: Phys. Rev. E 97, 042311 (2018)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Populations and Evolution (q-bio.PE)

[19] Title: Field Theory of Disordered Elastic Interfaces at 3-Loop Order: The $β$-Function
Comments: This is the first part of arXiv:1707.09802v1. The remaining part is in arXiv:1707.09802v2. 47 pages, 67 figures. v2: typos corrected and hyper-ref enabled
Journal-ref: Nucl. Phys. B 932 (2018) 540-588
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn)

[20] Title: Out-of-time-ordered measurements as a probe of quantum dynamics
Journal-ref: Phys. Rev. A 97, 030103 (2018)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech); Quantum Physics (quant-ph)

[21] Title: Dimensional Reduction by Conformal Bootstrap
Authors: Shinobu Hikami
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); High Energy Physics - Theory (hep-th); Mathematical Physics (math-ph)

[22] Title: Pinning by rare defects and effective mobility for elastic interfaces in high dimensions
Journal-ref: J. Phys. A: Math. Theo., 2018
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech)

[23] Title: Random matrix approach to plasmon resonances in the random impedance network model of disordered nanocomposites
Journal-ref: Phys. Rev. E 97, 050101(R) (2018)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Mesoscale and Nanoscale Physics (cond-mat.mes-hall); Optics (physics.optics)

[24] Title: Thermal conductivity in 1d: disorder-induced transition from anomalous to normal scaling
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech)

[25] Title: Spatiotemporal intermittency and localized dynamic fluctuations upon approaching the glass transition
Comments: New analysis technique introduced, and applied to previously published data
Journal-ref: Phys. Rev. E 97, 060601 (2018)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Soft Condensed Matter (cond-mat.soft)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5069295763969421, "perplexity": 9283.985688415098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400265461.58/warc/CC-MAIN-20200927054550-20200927084550-00066.warc.gz"}
https://www.khronos.org/registry/vulkan/specs/1.1-khr-extensions/html/chap22.html
## 22. Tessellation

Tessellation involves three pipeline stages. First, a tessellation control shader transforms control points of a patch and can produce per-patch data. Second, a fixed-function tessellator generates multiple primitives corresponding to a tessellation of the patch in (u,v) or (u,v,w) parameter space. Third, a tessellation evaluation shader transforms the vertices of the tessellated patch, for example to compute their positions and attributes as part of the tessellated surface. The tessellator is enabled when the pipeline contains both a tessellation control shader and a tessellation evaluation shader.

### 22.1. Tessellator

If a pipeline includes both tessellation shaders (control and evaluation), the tessellator consumes each input patch (after vertex shading) and produces a new set of independent primitives (points, lines, or triangles). These primitives are logically produced by subdividing a geometric primitive (rectangle or triangle) according to the per-patch outer and inner tessellation levels written by the tessellation control shader. These levels are specified using the built-in variables TessLevelOuter and TessLevelInner, respectively. This subdivision is performed in an implementation-dependent manner. If no tessellation shaders are present in the pipeline, the tessellator is disabled and incoming primitives are passed through without modification.

The type of subdivision performed by the tessellator is specified by an OpExecutionMode instruction in the tessellation evaluation or tessellation control shader using one of the execution modes Triangles, Quads, and IsoLines. Other tessellation-related execution modes can also be specified in either the tessellation control or tessellation evaluation shaders, and if they are specified in both then the modes must be the same. Tessellation execution modes include:

• Triangles, Quads, and IsoLines. These control the type of subdivision and topology of the output primitives. One mode must be set in at least one of the tessellation shader stages. If the VK_KHR_portability_subset extension is enabled, and VkPhysicalDevicePortabilitySubsetFeaturesKHR::tessellationIsolines is VK_FALSE, then isoline tessellation is not supported by the implementation, and IsoLines must not be used in either tessellation shader stage.

• VertexOrderCw and VertexOrderCcw. These control the orientation of triangles generated by the tessellator. One mode must be set in at least one of the tessellation shader stages.

• PointMode. Controls generation of points rather than triangles or lines. This functionality defaults to disabled, and is enabled if either shader stage includes the execution mode. If the VK_KHR_portability_subset extension is enabled, and VkPhysicalDevicePortabilitySubsetFeaturesKHR::tessellationPointMode is VK_FALSE, then point mode tessellation is not supported by the implementation, and PointMode must not be used in either tessellation shader stage.

• SpacingEqual, SpacingFractionalEven, and SpacingFractionalOdd. Controls the spacing of segments on the edges of tessellated primitives. One mode must be set in at least one of the tessellation shader stages.

• OutputVertices. Controls the size of the output patch of the tessellation control shader. One value must be set in at least one of the tessellation shader stages.

For triangles, the tessellator subdivides a triangle primitive into smaller triangles. For quads, the tessellator subdivides a rectangle primitive into smaller triangles. For isolines, the tessellator subdivides a rectangle primitive into a collection of line segments arranged in strips stretching across the rectangle in the u dimension (i.e. the coordinates in TessCoord are of the form (0,x) through (1,x) for all tessellation evaluation shader invocations that share a line).

Each vertex produced by the tessellator has an associated (u,v,w) or (u,v) position in a normalized parameter space, with parameter values in the range [0,1], as illustrated in figures Domain parameterization for tessellation primitive modes (upper-left origin) and Domain parameterization for tessellation primitive modes (lower-left origin). The domain space can have either an upper-left or lower-left origin, selected by the domainOrigin member of VkPipelineTessellationDomainOriginStateCreateInfo.

Figure 11. Domain parameterization for tessellation primitive modes (upper-left origin)

Figure 12. Domain parameterization for tessellation primitive modes (lower-left origin)

Caption: In the domain parameterization diagrams, the coordinates illustrate the value of TessCoord at the corners of the domain. The labels on the edges indicate the inner (IL0 and IL1) and outer (OL0 through OL3) tessellation level values used to control the number of subdivisions along each edge of the domain.

For triangles, the vertex’s position is a barycentric coordinate (u,v,w), where u + v + w = 1.0, and indicates the relative influence of the three vertices of the triangle on the position of the vertex. For quads and isolines, the position is a (u,v) coordinate indicating the relative horizontal and vertical position of the vertex relative to the subdivided rectangle. The subdivision process is explained in more detail in subsequent sections.

### 22.2. Tessellation Primitive Discard

A patch is discarded by the tessellator if any relevant outer tessellation level is less than or equal to zero. Patches will also be discarded if any relevant outer tessellation level corresponds to a floating-point NaN (not a number) in implementations supporting NaN. No new primitives are generated and the tessellation evaluation shader is not executed for patches that are discarded. For Quads, all four outer levels are relevant. For Triangles and IsoLines, only the first three or two outer levels, respectively, are relevant. Negative inner levels will not cause a patch to be discarded; they will be clamped as described below.

### 22.3. Tessellator Spacing

Each of the tessellation levels is used to determine the number and spacing of segments used to subdivide a corresponding edge. The method used to derive the number and spacing of segments is specified by an OpExecutionMode in the tessellation control or tessellation evaluation shader using one of the identifiers SpacingEqual, SpacingFractionalEven, or SpacingFractionalOdd.

If SpacingEqual is used, the floating-point tessellation level is first clamped to [1, maxLevel], where maxLevel is the implementation-dependent maximum tessellation level (VkPhysicalDeviceLimits::maxTessellationGenerationLevel). The result is rounded up to the nearest integer n, and the corresponding edge is divided into n segments of equal length in (u,v) space.

If SpacingFractionalEven is used, the tessellation level is first clamped to [2, maxLevel] and then rounded up to the nearest even integer n. If SpacingFractionalOdd is used, the tessellation level is clamped to [1, maxLevel - 1] and then rounded up to the nearest odd integer n. If n is one, the edge will not be subdivided. Otherwise, the corresponding edge will be divided into n - 2 segments of equal length, and two additional segments of equal length that are typically shorter than the other segments. The length of the two additional segments relative to the others will decrease monotonically with n - f, where f is the clamped floating-point tessellation level. When n - f is zero, the additional segments will have equal length to the other segments. As n - f approaches 2.0, the relative length of the additional segments approaches zero. The two additional segments must be placed symmetrically on opposite sides of the subdivided edge. The relative location of these two segments is implementation-dependent, but must be identical for any pair of subdivided edges with identical values of f.

When tessellating triangles or quads using point mode with fractional odd spacing, the tessellator may produce interior vertices that are positioned on the edge of the patch if an inner tessellation level is less than or equal to one. Such vertices are considered distinct from vertices produced by subdividing the outer edge of the patch, even if there are pairs of vertices with identical coordinates.

### 22.4. Tessellation Primitive Ordering

Few guarantees are provided for the relative ordering of primitives produced by tessellation, as they pertain to primitive order:

• The output primitives generated from each input primitive are passed to subsequent pipeline stages in an implementation-dependent order.

• All output primitives generated from a given input primitive are passed to subsequent pipeline stages before any output primitives generated from subsequent input primitives.

### 22.5. Tessellator Vertex Winding Order

When the tessellator produces triangles (in the Triangles or Quads modes), the orientation of all triangles is specified with an OpExecutionMode of VertexOrderCw or VertexOrderCcw in the tessellation control or tessellation evaluation shaders. If the order is VertexOrderCw, the vertices of all generated triangles will have clockwise ordering in (u,v) or (u,v,w) space. If the order is VertexOrderCcw, the vertices will have counter-clockwise ordering in that space.

If the tessellation domain has an upper-left origin, the vertices of a triangle have counter-clockwise ordering if

a = u0 v1 - u1 v0 + u1 v2 - u2 v1 + u2 v0 - u0 v2

is negative, and clockwise ordering if a is positive. ui and vi are the u and v coordinates in normalized parameter space of the ith vertex of the triangle. If the tessellation domain has a lower-left origin, the vertices of a triangle have counter-clockwise ordering if a is positive, and clockwise ordering if a is negative.

Note: The value a is proportional (with a positive factor) to the signed area of the triangle. In Triangles mode, even though the vertex coordinates have a w value, it does not participate directly in the computation of a, being an affine combination of u and v.

### 22.6. Triangle Tessellation

If the tessellation primitive mode is Triangles, an equilateral triangle is subdivided into a collection of triangles covering the area of the original triangle. First, the original triangle is subdivided into a collection of concentric equilateral triangles. The edges of each of these triangles are subdivided, and the area between each triangle pair is filled by triangles produced by joining the vertices on the subdivided edges.

The number of concentric triangles and the number of subdivisions along each triangle except the outermost is derived from the first inner tessellation level. The edges of the outermost triangle are subdivided independently, using the first, second, and third outer tessellation levels to control the number of subdivisions of the u = 0 (left), v = 0 (bottom), and w = 0 (right) edges, respectively. The second inner tessellation level and the fourth outer tessellation level have no effect in this mode.

If the first inner tessellation level and all three outer tessellation levels are exactly one after clamping and rounding, only a single triangle with (u,v,w) coordinates of (0,0,1), (1,0,0), and (0,1,0) is generated. If the inner tessellation level is one and any of the outer tessellation levels is greater than one, the inner tessellation level is treated as though it were originally specified as 1 + ε and will result in a two- or three-segment subdivision depending on the tessellation spacing. When used with fractional odd spacing, the three-segment subdivision may produce inner vertices positioned on the edge of the triangle.

If any tessellation level is greater than one, tessellation begins by producing a set of concentric inner triangles and subdividing their edges. First, the three outer edges are temporarily subdivided using the clamped and rounded first inner tessellation level and the specified tessellation spacing, generating n segments. For the outermost inner triangle, the inner triangle is degenerate (a single point at the center of the triangle) if n is two. Otherwise, for each corner of the outer triangle, an inner triangle corner is produced at the intersection of two lines extended perpendicular to the corner’s two adjacent edges running through the vertex of the subdivided outer edge nearest that corner. If n is three, the edges of the inner triangle are not subdivided and it is the final triangle in the set of concentric triangles. Otherwise, each edge of the inner triangle is divided into n - 2 segments, with the n - 1 vertices of this subdivision produced by intersecting the inner edge with lines perpendicular to the edge running through the n - 1 innermost vertices of the subdivision of the outer edge. Once the outermost inner triangle is subdivided, the previous subdivision process repeats itself, using the generated triangle as an outer triangle. This subdivision process is illustrated in Inner Triangle Tessellation.

Figure 13. Inner Triangle Tessellation

Caption: In the Inner Triangle Tessellation diagram, inner tessellation levels of (a) five and (b) four are shown (not to scale). Solid black circles depict vertices along the edges of the concentric triangles. The edges of inner triangles are subdivided by intersecting the edge with segments perpendicular to the edge passing through each inner vertex of the subdivided outer edge. Dotted lines depict edges connecting corresponding vertices on the inner and outer triangle edges.

Once all the concentric triangles are produced and their edges are subdivided, the area between each pair of adjacent inner triangles is filled completely with a set of non-overlapping triangles. In this subdivision, two of the three vertices of each triangle are taken from adjacent vertices on a subdivided edge of one triangle; the third is one of the vertices on the corresponding edge of the other triangle.

If the innermost triangle is degenerate (i.e., a point), the triangle containing it is subdivided into six triangles by connecting each of the six vertices on that triangle with the center point. If the innermost triangle is not degenerate, that triangle is added to the set of generated triangles as-is.

After the area corresponding to any inner triangles is filled, the tessellator generates triangles to cover the area between the outermost triangle and the outermost inner triangle. To do this, the temporary subdivision of the outer triangle edge above is discarded. Instead, the u = 0, v = 0, and w = 0 edges are subdivided according to the first, second, and third outer tessellation levels, respectively, and the tessellation spacing. The original subdivision of the first inner triangle is retained. The area between the outer and first inner triangles is completely filled by non-overlapping triangles as described above. If the first (and only) inner triangle is degenerate, a set of triangles is produced by connecting each vertex on the outer triangle edges with the center point.

After all triangles are generated, each vertex in the subdivided triangle is assigned a barycentric (u,v,w) coordinate based on its location relative to the three vertices of the outer triangle.

The algorithm used to subdivide the triangular domain in (u,v,w) space into individual triangles is implementation-dependent. However, the set of triangles produced will completely cover the domain, and no portion of the domain will be covered by multiple triangles. Output triangles are generated with a topology similar to triangle lists, except that the order in which each triangle is generated, and the order in which the vertices are generated for each triangle, are implementation-dependent. However, the order of vertices in each triangle is consistent across the domain as described in Tessellator Vertex Winding Order.

### 22.7. Quad Tessellation

If the tessellation primitive mode is Quads, a rectangle is subdivided into a collection of triangles covering the area of the original rectangle. First, the original rectangle is subdivided into a regular mesh of rectangles, where the number of rectangles along the u = 0 and u = 1 (vertical) and v = 0 and v = 1 (horizontal) edges are derived from the first and second inner tessellation levels, respectively. All rectangles, except those adjacent to one of the outer rectangle edges, are decomposed into triangle pairs. The outermost rectangle edges are subdivided independently, using the first, second, third, and fourth outer tessellation levels to control the number of subdivisions of the u = 0 (left), v = 0 (bottom), u = 1 (right), and v = 1 (top) edges, respectively. The area between the inner rectangles of the mesh and the outer rectangle edges are filled by triangles produced by joining the vertices on the subdivided outer edges to the vertices on the edge of the inner rectangle mesh.

If both clamped inner tessellation levels and all four clamped outer tessellation levels are exactly one, only a single triangle pair covering the outer rectangle is generated. Otherwise, if either clamped inner tessellation level is one, that tessellation level is treated as though it was originally specified as 1 + ε and will result in a two- or three-segment subdivision depending on the tessellation spacing. When used with fractional odd spacing, the three-segment subdivision may produce inner vertices positioned on the edge of the rectangle.

If any tessellation level is greater than one, tessellation begins by subdividing the u = 0 and u = 1 edges of the outer rectangle into m segments using the clamped and rounded first inner tessellation level and the tessellation spacing. The v = 0 and v = 1 edges are subdivided into n segments using the second inner tessellation level. Each vertex on the u = 0 and v = 0 edges are joined with the corresponding vertex on the u = 1 and v = 1 edges to produce a set of vertical and horizontal lines that divide the rectangle into a grid of smaller rectangles. The primitive generator emits a pair of non-overlapping triangles covering each such rectangle not adjacent to an edge of the outer rectangle. The boundary of the region covered by these triangles forms an inner rectangle, the edges of which are subdivided by the grid vertices that lie on the edge. If either m or n is two, the inner rectangle is degenerate, and one or both of the rectangle’s edges consist of a single point. This subdivision is illustrated in Figure Inner Quad Tessellation.

Figure 14. Inner Quad Tessellation

Caption: In the Inner Quad Tessellation diagram, inner quad tessellation levels of (a) (4,2) and (b) (7,4) are shown. The regions highlighted in red in figure (b) depict the 10 inner rectangles, each of which will be subdivided into two triangles. Solid black circles depict vertices on the boundary of the outer and inner rectangles, where the inner rectangle of figure (a) is degenerate (a single line segment). Dotted lines depict the horizontal and vertical edges connecting corresponding vertices on the inner and outer rectangle edges.

After the area corresponding to the inner rectangle is filled, the tessellator must produce triangles to cover the area between the inner and outer rectangles. To do this, the subdivision of the outer rectangle edge above is discarded. Instead, the u = 0, v = 0, u = 1, and v = 1 edges are subdivided according to the first, second, third, and fourth outer tessellation levels, respectively, and the tessellation spacing. The original subdivision of the inner rectangle is retained. The area between the outer and inner rectangles is completely filled by non-overlapping triangles. Two of the three vertices of each triangle are adjacent vertices on a subdivided edge of one rectangle; the third is one of the vertices on the corresponding edge of the other rectangle. If either edge of the innermost rectangle is degenerate, the area near the corresponding outer edges is filled by connecting each vertex on the outer edge with the single vertex making up the inner edge.

The algorithm used to subdivide the rectangular domain in (u,v) space into individual triangles is implementation-dependent. However, the set of triangles produced will completely cover the domain, and no portion of the domain will be covered by multiple triangles. Output triangles are generated with a topology similar to triangle lists, except that the order in which each triangle is generated, and the order in which the vertices are generated for each triangle, are implementation-dependent. However, the order of vertices in each triangle is consistent across the domain as described in Tessellator Vertex Winding Order.

### 22.8. Isoline Tessellation

If the tessellation primitive mode is IsoLines, a set of independent horizontal line segments is drawn. The segments are arranged into connected strips called isolines, where the vertices of each isoline have a constant v coordinate and u coordinates covering the full range [0,1]. The number of isolines generated is derived from the first outer tessellation level; the number of segments in each isoline is derived from the second outer tessellation level. Both inner tessellation levels and the third and fourth outer tessellation levels have no effect in this mode.

As with quad tessellation above, isoline tessellation begins with a rectangle. The u = 0 and u = 1 edges of the rectangle are subdivided according to the first outer tessellation level. For the purposes of this subdivision, the tessellation spacing mode is ignored and treated as equal_spacing. An isoline is drawn connecting each vertex on the u = 0 rectangle edge to the corresponding vertex on the u = 1 rectangle edge, except that no line is drawn between (0,1) and (1,1). If the number of isolines on the subdivided u = 0 and u = 1 edges is n, this process will result in n equally spaced lines with constant v coordinates of 0, 1/n, ..., (n-1)/n.

Each of the n isolines is then subdivided according to the second outer tessellation level and the tessellation spacing, resulting in m line segments. Each segment of each line is emitted by the tessellator. These line segments are generated with a topology similar to line lists, except that the order in which each line is generated, and the order in which the vertices are generated for each line segment, are implementation-dependent.

Note: If the VK_KHR_portability_subset extension is enabled, and VkPhysicalDevicePortabilitySubsetFeaturesKHR::tessellationIsolines is VK_FALSE, then isoline tessellation is not supported by the implementation.

### 22.9. Tessellation Point Mode

For all primitive modes, the tessellator is capable of generating points instead of lines or triangles. If the tessellation control or tessellation evaluation shader specifies the OpExecutionMode PointMode, the primitive generator will generate one point for each distinct vertex produced by tessellation, rather than emitting triangles or lines. Otherwise, the tessellator will produce a collection of line segments or triangles according to the primitive mode. These points are generated with a topology similar to point lists, except the order in which the points are generated for each input primitive is undefined.

Note: If the VK_KHR_portability_subset extension is enabled, and VkPhysicalDevicePortabilitySubsetFeaturesKHR::tessellationPointMode is VK_FALSE, then tessellation point mode is not supported by the implementation.

### 22.10. Tessellation Pipeline State

The pTessellationState member of VkGraphicsPipelineCreateInfo is a pointer to a VkPipelineTessellationStateCreateInfo structure. The VkPipelineTessellationStateCreateInfo structure is defined as:

```
// Provided by VK_VERSION_1_0
typedef struct VkPipelineTessellationStateCreateInfo {
    VkStructureType                           sType;
    const void*                               pNext;
    VkPipelineTessellationStateCreateFlags    flags;
    uint32_t                                  patchControlPoints;
} VkPipelineTessellationStateCreateInfo;
```

• sType is the type of this structure.
• pNext is NULL or a pointer to a structure extending this structure.
• flags is reserved for future use.
• patchControlPoints is the number of control points per patch.

Valid Usage

• VUID-VkPipelineTessellationStateCreateInfo-patchControlPoints-01214: patchControlPoints must be greater than zero and less than or equal to VkPhysicalDeviceLimits::maxTessellationPatchSize

Valid Usage (Implicit)

• VUID-VkPipelineTessellationStateCreateInfo-sType-sType: sType must be VK_STRUCTURE_TYPE_PIPELINE_TESSELLATION_STATE_CREATE_INFO
• VUID-VkPipelineTessellationStateCreateInfo-pNext-pNext: pNext must be NULL or a pointer to a valid instance of VkPipelineTessellationDomainOriginStateCreateInfo
• VUID-VkPipelineTessellationStateCreateInfo-sType-unique: The sType value of each struct in the pNext chain must be unique
• flags must be 0

```
// Provided by VK_VERSION_1_0
typedef VkFlags VkPipelineTessellationStateCreateFlags;
```

VkPipelineTessellationStateCreateFlags is a bitmask type for setting a mask, but is currently reserved for future use.

The VkPipelineTessellationDomainOriginStateCreateInfo structure is defined as:

```
// Provided by VK_VERSION_1_1
typedef struct VkPipelineTessellationDomainOriginStateCreateInfo {
    VkStructureType               sType;
    const void*                   pNext;
    VkTessellationDomainOrigin    domainOrigin;
} VkPipelineTessellationDomainOriginStateCreateInfo;
```

or the equivalent

```
// Provided by VK_KHR_maintenance2
typedef VkPipelineTessellationDomainOriginStateCreateInfo VkPipelineTessellationDomainOriginStateCreateInfoKHR;
```

• sType is the type of this structure.
• pNext is NULL or a pointer to a structure extending this structure.
• domainOrigin is a VkTessellationDomainOrigin value controlling the origin of the tessellation domain space.

If the VkPipelineTessellationDomainOriginStateCreateInfo structure is included in the pNext chain of VkPipelineTessellationStateCreateInfo, it controls the origin of the tessellation domain. If this structure is not present, it is as if domainOrigin was VK_TESSELLATION_DOMAIN_ORIGIN_UPPER_LEFT.

Valid Usage (Implicit)

• VUID-VkPipelineTessellationDomainOriginStateCreateInfo-sType-sType: sType must be VK_STRUCTURE_TYPE_PIPELINE_TESSELLATION_DOMAIN_ORIGIN_STATE_CREATE_INFO
• VUID-VkPipelineTessellationDomainOriginStateCreateInfo-domainOrigin-parameter: domainOrigin must be a valid VkTessellationDomainOrigin value

The possible tessellation domain origins are specified by the VkTessellationDomainOrigin enumeration:

```
// Provided by VK_VERSION_1_1
typedef enum VkTessellationDomainOrigin {
    VK_TESSELLATION_DOMAIN_ORIGIN_UPPER_LEFT = 0,
    VK_TESSELLATION_DOMAIN_ORIGIN_LOWER_LEFT = 1,
  // Provided by VK_KHR_maintenance2
    VK_TESSELLATION_DOMAIN_ORIGIN_UPPER_LEFT_KHR = VK_TESSELLATION_DOMAIN_ORIGIN_UPPER_LEFT,
  // Provided by VK_KHR_maintenance2
    VK_TESSELLATION_DOMAIN_ORIGIN_LOWER_LEFT_KHR = VK_TESSELLATION_DOMAIN_ORIGIN_LOWER_LEFT,
} VkTessellationDomainOrigin;
```

or the equivalent

```
// Provided by VK_KHR_maintenance2
typedef VkTessellationDomainOrigin VkTessellationDomainOriginKHR;
```

This enum affects how the VertexOrderCw and VertexOrderCcw tessellation execution modes are interpreted, since the winding is defined relative to the orientation of the domain.
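To make the clamp-and-round step of the spacing rules in section 22.3 concrete, here is a small Python sketch. It is an illustration, not part of the specification; `max_level` stands in for VkPhysicalDeviceLimits::maxTessellationGenerationLevel, and the mode names are shortened:

```python
import math

def segment_count(level, spacing, max_level=64):
    """Number of edge segments for one tessellation level,
    per the clamping and rounding rules of section 22.3."""
    if spacing == "equal":
        f = min(max(level, 1.0), max_level)      # clamp to [1, maxLevel]
        return math.ceil(f)                       # round up to nearest integer
    if spacing == "fractional_even":
        f = min(max(level, 2.0), max_level)      # clamp to [2, maxLevel]
        n = math.ceil(f)
        return n if n % 2 == 0 else n + 1         # round up to nearest even
    if spacing == "fractional_odd":
        f = min(max(level, 1.0), max_level - 1)  # clamp to [1, maxLevel - 1]
        n = math.ceil(f)
        return n if n % 2 == 1 else n + 1         # round up to nearest odd
    raise ValueError(spacing)

print(segment_count(3.3, "equal"))            # 4
print(segment_count(3.3, "fractional_even"))  # 4
print(segment_count(3.3, "fractional_odd"))   # 5
```

For the fractional modes an implementation also retains the clamped floating-point level f, since the relative lengths of the two extra segments depend on n - f as described in section 22.3.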
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5975638628005981, "perplexity": 1156.920800899995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151531.67/warc/CC-MAIN-20210724223025-20210725013025-00122.warc.gz"}
http://mathhelpforum.com/advanced-statistics/43949-binomial-distribution-help.html
# Math Help - Binomial distribution help!

1. ## Binomial distribution help!

Which of the following statements are true? Pick 4 of the statements!

1) The width of a confidence interval for a population proportion will change from sample to sample, for a fixed confidence level.
2) The width of a confidence interval will always change from sample to sample, regardless of what we're estimating (mean or proportion).
3) The width of a confidence interval for a population mean will change from sample to sample, for a fixed confidence level.
4) The width of a confidence interval will never change from sample to sample, regardless of what we're estimating (mean or proportion).
5) The Central Limit Theorem says that, for samples of size at least 30, the average of observations from any single distribution becomes normally distributed.
6) Confidence intervals for population proportions, constructed using z-scores as critical values, are only valid under the Central Limit Theorem.
7) Confidence intervals for population means of normal populations, constructed using z-scores as critical values, are valid even without the Central Limit Theorem.
8) The Central Limit Theorem says that, for samples of size at least 30, individual observations from any distribution become normally distributed.
9) The sample used for creating a confidence interval for a population proportion must have at least 30 observations, regardless of the number of successes.
10) A 90% confidence interval will be the same for every sample taken.

Thanks!

2. Originally Posted by Vedicmaths

[statements 1) through 4) quoted]

Mr F wonders: Does the confidence interval depend on the sample size n ...?

[statements 5) through 8) quoted]

Mr F ponders: What is the statement of the Central Limit Theorem? How large is large ...?

[statement 9) quoted]

Mr F muses: Can you construct a confidence interval for a sample of size n = 20 ...?

[statement 10) quoted]

Mr F contemplates: Does the confidence interval depend on the sample size n ...? Is every sample taken always the same size ...?
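Mr F's hints about statements 1) through 4) and 10) are easy to check numerically. The following Python sketch is not part of the thread, and its sample sizes and parameters are arbitrary; it shows that interval widths depend on the sample estimates, so they differ from sample to sample even at a fixed n and fixed confidence level:

```python
import random
import statistics

random.seed(1)
z = 1.645  # z critical value for a 90% confidence interval

def proportion_ci_width(n=100, p=0.3):
    # Width of the usual z-interval for a proportion: 2*z*sqrt(phat*(1-phat)/n).
    phat = sum(random.random() < p for _ in range(n)) / n
    return 2 * z * (phat * (1 - phat) / n) ** 0.5

def mean_ci_width(n=100, mu=0.0, sigma=1.0):
    # Width of the z-interval for a mean with estimated sd: 2*z*s/sqrt(n).
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    return 2 * z * statistics.stdev(sample) / n ** 0.5

# Three repetitions of each give three different widths at the same n.
print([round(proportion_ci_width(), 4) for _ in range(3)])
print([round(mean_ci_width(), 4) for _ in range(3)])
```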
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9281204342842102, "perplexity": 551.0123575898432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776420526.72/warc/CC-MAIN-20140707234020-00073-ip-10-180-212-248.ec2.internal.warc.gz"}
https://blogs.mathworks.com/loren/2015/05/22/including-external-code-in-published-document/
# Including External Code in Published Document

When I wanted to show you a code snippet in the past, I displayed the code in the text of my blog post.

#### How I Did It

To do so, I often used the function type, or sometimes dbtype (if I wanted to refer to line numbers). In this way, I did not require you to download code from elsewhere to see what I was discussing.

#### How I Do It Now

Now using R2015a I can take advantage of a new feature of the markup language for publish. Using the include markup allows me to include external content. How do I do this? With the delimiters

<include>fileOfInterest.m</include>

Not only that, but the content is included with the proper syntax highlighting for MATLAB code!

#### Let's Try It Out!

Here's some code from an older post of mine of methods for computing Fibonacci numbers.

```
function f = fibrec(n)
%FIBREC Recursive function for n Fibonacci numbers.
% Minimize the error checking so we don't bog down in it. I have included
% it here, commented out:
% if ~isscalar(n) | ~isreal(n) | n<0 | fix(n)~=n
%    error('ArtBlog:fibrec:MBRealPosInt','N must be a real positive integer')
% end
if n == 1,
    f = 1;          % First element is 1.
    return;
elseif n == 2
    f = [1 1];      % First two elements are 1.
else
    % Call fibrec with previous result and compute next one from it.
    fm1 = fibrec(n-1);
    f = [fm1 fm1(end)+fm1(end-1)];
end
```
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47070300579071045, "perplexity": 1916.5358987712955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347436466.95/warc/CC-MAIN-20200603210112-20200604000112-00511.warc.gz"}
http://math.stackexchange.com/questions/89870/x-y-are-independent-standard-normal-distributed-then-what-is-the-distribution-of
# X, Y are independent standard normal; what is the distribution of $\frac{X}{X+Y}$?

X, Y are independent standard normal random variables; what is the distribution of $$\frac{X}{X+Y}\,?$$ Could anyone help me with this? Thanks.

I have worked the problem by multivariable transformation. Let $$Z=\frac{X}{X+Y}\,,\qquad W=X$$ and consider the transformation $$(X,Y)\longrightarrow(Z,W).$$ Then $$X(Z,W)=W\,,\qquad Y(Z,W)=\frac{W(1-Z)}{Z}$$ defines the inverse transformation. The Jacobian is $$J(Z,W)=\frac{W}{Z^{2}}\,,$$ so $$f_{Z,W}(z,w)=f_{X,Y}\!\left(w,\frac{w(1-z)}{z}\right)\cdot\left|\frac{w}{z^{2}}\right|.$$ As X and Y are independent, the marginal pdf of Z is $$f_{Z}(z)=\int_{0}^{\infty}\frac{w}{z^{2}}\cdot f_{X}(w)\cdot f_{Y}\!\left(\frac{w(1-z)}{z}\right)dw+\int_{-\infty}^{0}-\frac{w}{z^{2}}\cdot f_{X}(w)\cdot f_{Y}\!\left(\frac{w(1-z)}{z}\right)dw.$$ The integrand is $\frac{|w|}{2\pi z^{2}}\exp\!\left(-\frac{w^{2}}{2}\cdot\frac{z^{2}+(1-z)^{2}}{z^{2}}\right)$; using $\int_{0}^{\infty}w\,e^{-aw^{2}/2}\,dw=\frac1a$ with $a=\frac{z^{2}+(1-z)^{2}}{z^{2}}$ and completing the square in $z^{2}+(1-z)^{2}=\frac12\left(1+\left(\frac{z-\frac12}{\frac12}\right)^{2}\right)$, we get $$f_{Z}(z)=\frac{1}{\pi\cdot\frac{1}{2}\cdot\left(1+\left(\frac{z-\frac{1}{2}}{\frac{1}{2}}\right)^{2}\right)}.$$ Hence $$Z\sim \mathrm{Cauchy}\!\left(\frac{1}{2},\frac{1}{2}\right).$$

- You could/should try using the general method expanded here. – Did Dec 9 '11 at 10:50
- Why did you erase every description of what you had tried? Now your post runs contrary to explicit recommendations about how to ask questions on the site... – Did Dec 9 '11 at 10:57
- I think I have worked out the question by using a multivariable transformation. And I just want to check if my answer is correct. – John Dec 9 '11 at 11:02
- If that is so, you might want to post your solution as an answer to your own question, then people will be able to check it. This is actually recommended on the site. – Did Dec 9 '11 at 11:05
- Perhaps a further edit is needed. As of right now, the question does not state that $X$ and $Y$ are independent random variables, but the solution included in the question does make the assumption that $X$ and $Y$ are independent. – Dilip Sarwate Dec 9 '11 at 13:49

Since $X$ and $Y$ are independent standard gaussian random variables, the distribution of $Z=\frac{X}{X+Y}$ has density $f_Z$, where for every $z$ in $\mathbb R$, $$\color{red}{f_Z(z)=\frac1\pi\,\frac1{z^2+(1-z)^2}}.$$ The direct way to prove this (as is now done by the OP) is to rely on the change of variables method expanded here. One can deduce from the expression of $f_Z$ that $Z=\frac12(1+T)$, where $T$ is standard Cauchy, that is, the distribution of $T$ has density $f_T$, where for every $t$ in $\mathbb R$, $$\color{purple}{f_T(t)=\frac1\pi\,\frac1{1+t^2}}.$$ But the formulas for $f_Z$ and $f_T$ are also direct consequences of two facts:

1. The ratio of two independent standard gaussian random variables is a standard Cauchy random variable.
2. If $X$ and $Y$ are independent standard gaussian random variables, then the random variables $\frac1{\sqrt2} (X+Y)$ and $\frac1{\sqrt2}(X-Y)$ are independent standard gaussian random variables as well.

- It took me half a page to work out the pdf of Z. But it seems that you can get it directly. Could you explain to me how you do that? – John Dec 9 '11 at 11:20
- @ZhouzhouDu: do you know the distribution of $\frac{X}{Y}$ where $X,Y$ are independent standard Gaussian? – Ilya Dec 9 '11 at 11:31
- @Ilya Yes. It is standard Cauchy. But X and X+Y are not independent. Does that matter? – John Dec 9 '11 at 11:35
- ZhouzhouDu: Quote: If that is so [that is, if you have worked out the question and just want to check if your answer is correct] you might want to post your solution as an answer to your own question, then people will be able to check it. – Did Dec 9 '11 at 11:43
- @ZhouzhouDu: I guess Didier meant that $X+Y$ and $X-Y$ are independent (since $X$ and $X+Y$ are clearly non-independent), so $$\frac{X}{X+Y} = \frac12\left(1+\frac{X-Y}{X+Y}\right),$$ and the $\frac{1}{\sqrt{2}}$ is used in the numerator and denominator to normalize them and make them standard Gaussian; then use the fact that the quotient of them is Cauchy. Btw, the independence of $X-Y$ and $X+Y$ you can verify by computing their covariance. – Ilya Dec 9 '11 at 11:44
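The claimed Cauchy(1/2, 1/2) law is also easy to check by simulation. This quick Monte Carlo sketch is not part of the thread; it compares the empirical CDF of $X/(X+Y)$ with the Cauchy(1/2, 1/2) CDF $F(t)=\frac12+\frac1\pi\arctan\frac{t-1/2}{1/2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
z = x / (x + y)

# Compare the empirical CDF with the Cauchy(1/2, 1/2) CDF at a few points.
for t in (-1.0, 0.0, 0.5, 1.0, 2.0):
    emp = (z <= t).mean()
    theo = 0.5 + np.arctan((t - 0.5) / 0.5) / np.pi
    print(f"t={t:5.1f}  empirical={emp:.4f}  Cauchy(1/2,1/2)={theo:.4f}")
```

The empirical and theoretical values should agree to about three decimal places at this sample size.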
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9582227468490601, "perplexity": 264.4518537175193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
https://kar.kent.ac.uk/51803/
# Strong Coupling Superconductivity in the Vicinity of the Structural Quantum Critical Point in (CaxSr1-x)3Rh4Sn13

Yu, Wing Chi, Cheung, Yiu Wing, Saines, Paul J., Imai, Masaki, Matsumoto, Takuya, Michioka, Chishiro, Yoshimura, Kazuyoshi, Goh, Swee K. (2015) Strong Coupling Superconductivity in the Vicinity of the Structural Quantum Critical Point in (CaxSr1-x)3Rh4Sn13. Physical Review Letters, 115, p. 207003. ISSN 0031-9007. E-ISSN 1079-7114. (doi:10.1103/PhysRevLett.115.207003) (KAR id:51803)

PDF (Author's Accepted Manuscript, English)
Official URL: http://dx.doi.org/10.1103/PhysRevLett.115.207003

## Abstract

The family of the superconducting quasiskutterudites (CaxSr1−x)3Rh4Sn13 features a structural quantum critical point at xc = 0.9, around which a dome-shaped variation of the superconducting transition temperature Tc is found. Using specific heat, we probe the normal and the superconducting states of the entire series straddling the quantum critical point. Our analysis indicates a significant lowering of the effective Debye temperature on approaching xc, which we interpret as a result of phonon softening accompanying the structural instability. Furthermore, a remarkably large enhancement of 2Δ/kBTc and ΔC/γTc beyond the Bardeen-Cooper-Schrieffer values is found in the vicinity of the structural quantum critical point. The phase diagram of (CaxSr1−x)3Rh4Sn13 thus provides a model system to study the interplay between structural quantum criticality and strong electron-phonon coupling superconductivity.

Item Type: Article
DOI: 10.1103/PhysRevLett.115.207003
Subjects: Q Science > QC Physics > QC173.45 Condensed Matter
Divisions: Faculties > Sciences > School of Physical Sciences > Functional Materials Group
https://kar.kent.ac.uk/id/eprint/51803
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.81816565990448, "perplexity": 7785.703983293239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738699.68/warc/CC-MAIN-20200810205824-20200810235824-00313.warc.gz"}
https://mxnet.apache.org/versions/1.5.0/api/clojure/docs/org.apache.clojure-mxnet.ndarray-api.html
# org.apache.clojure-mxnet.ndarray-api Experimental ### -copy (-copy {:keys [data out], :or {out nil}, :as opts}) Returns a copy of the input. From:src/operator/tensor/elemwise_unary_op_basic.cc:218 data: The input array. out: Output array. (optional) ### -linalg-extractdiag (-linalg-extractdiag {:keys [a offset out], :or {offset nil, out nil}, :as opts}) Extracts the diagonal entries of a square matrix. Input is a tensor *A* of dimension *n >= 2*. If *n=2*, then *A* represents a single square matrix which diagonal elements get extracted as a 1-dimensional tensor. If *n>2*, then *A* represents a batch of square matrices on the trailing two dimensions. The extracted diagonals are returned as an *n-1*-dimensional tensor. .. note:: The operator supports float32 and float64 data types only. Examples:: // Single matrix diagonal extraction A = [[1.0, 2.0], [3.0, 4.0]] extractdiag(A) = [1.0, 4.0] extractdiag(A, 1) = [2.0] // Batch matrix diagonal extraction A = [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]] extractdiag(A) = [[1.0, 4.0], [5.0, 8.0]] Defined in src/operator/tensor/la_op.cc:L495 a: Tensor of square matrices offset: Offset of the diagonal versus the main diagonal. 0 corresponds to the main diagonal, a negative/positive value to diagonals below/above the main diagonal. (optional) out: Output array. (optional) ### -linalg-extracttrian (-linalg-extracttrian {:keys [a offset lower out], :or {offset nil, lower nil, out nil}, :as opts}) Extracts a triangular sub-matrix from a square matrix. Input is a tensor *A* of dimension *n >= 2*. If *n=2*, then *A* represents a single square matrix from which a triangular sub-matrix is extracted as a 1-dimensional tensor. If *n>2*, then *A* represents a batch of square matrices on the trailing two dimensions. The extracted triangular sub-matrices are returned as an *n-1*-dimensional tensor. The *offset* and *lower* parameters determine the triangle to be extracted: - When *offset = 0* either the lower or upper triangle with respect to the main diagonal is extracted depending on the value of parameter *lower*. - When *offset = k > 0* the upper triangle with respect to the k-th diagonal above the main diagonal is extracted. - When *offset = k < 0* the lower triangle with respect to the k-th diagonal below the main diagonal is extracted. .. note:: The operator supports float32 and float64 data types only. Examples:: // Single triagonal extraction A = [[1.0, 2.0], [3.0, 4.0]] extracttrian(A) = [1.0, 3.0, 4.0] extracttrian(A, lower=False) = [1.0, 2.0, 4.0] extracttrian(A, 1) = [2.0] extracttrian(A, -1) = [3.0] // Batch triagonal extraction A = [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]] extracttrian(A) = [[1.0, 3.0, 4.0], [5.0, 7.0, 8.0]] Defined in src/operator/tensor/la_op.cc:L605 a: Tensor of square matrices offset: Offset of the diagonal versus the main diagonal. 0 corresponds to the main diagonal, a negative/positive value to diagonals below/above the main diagonal. (optional) lower: Refer to the lower triangular matrix if lower=true, refer to the upper otherwise. Only relevant when offset=0 (optional) out: Output array. (optional) ### -linalg-gelqf (-linalg-gelqf {:keys [a out], :or {out nil}, :as opts}) LQ factorization for general matrix. Input is a tensor *A* of dimension *n >= 2*. If *n=2*, we compute the LQ factorization (LAPACK *gelqf*, followed by *orglq*). *A* must have shape *(x, y)* with *x <= y*, and must have full rank *=x*. 
The LQ factorization consists of *L* with shape *(x, x)* and *Q* with shape *(x, y)*, so that: *A* = *L* \* *Q* Here, *L* is lower triangular (upper triangle equal to zero) with nonzero diagonal, and *Q* is row-orthonormal, meaning that *Q* \* *Q*\ :sup:T is equal to the identity matrix of shape *(x, x)*. If *n>2*, *gelqf* is performed separately on the trailing two dimensions for all inputs (batch mode). .. note:: The operator supports float32 and float64 data types only. Examples:: // Single LQ factorization A = [[1., 2., 3.], [4., 5., 6.]] Q, L = gelqf(A) Q = [[-0.26726124, -0.53452248, -0.80178373], [0.87287156, 0.21821789, -0.43643578]] L = [[-3.74165739, 0.], [-8.55235974, 1.96396101]] // Batch LQ factorization A = [[[1., 2., 3.], [4., 5., 6.]], [[7., 8., 9.], [10., 11., 12.]]] Q, L = gelqf(A) Q = [[[-0.26726124, -0.53452248, -0.80178373], [0.87287156, 0.21821789, -0.43643578]], [[-0.50257071, -0.57436653, -0.64616234], [0.7620735, 0.05862104, -0.64483142]]] L = [[[-3.74165739, 0.], [-8.55235974, 1.96396101]], [[-13.92838828, 0.], [-19.09768702, 0.52758934]]] Defined in src/operator/tensor/la_op.cc:L798 a: Tensor of input matrices to be factorized out: Output array. (optional) ### -linalg-gemm (-linalg-gemm a b c)(-linalg-gemm {:keys [a b c transpose-a transpose-b alpha beta axis out], :or {transpose-a nil, transpose-b nil, alpha nil, beta nil, axis nil, out nil}, :as opts}) Performs general matrix multiplication and accumulation. Input are tensors *A*, *B*, *C*, each of dimension *n >= 2* and having the same shape If *n=2*, the BLAS3 function *gemm* is performed: *out* = *alpha* \* *op*\ (*A*) \* *op*\ (*B*) + *beta* \* *C* Here, *alpha* and *beta* are scalar parameters, and *op()* is either the identity or matrix transposition (depending on *transpose_a*, *transpose_b*). If *n>2*, *gemm* is performed separately for a batch of matrices. The column indices of the matrices are given by the last dimensions of the tensors, the row indices by the axis specified with the *axis* parameter. By default, the trailing two dimensions will be used for matrix encoding. For a non-default axis parameter, the operation performed is equivalent to a series of swapaxes/gemm/swapaxes calls. For example let *A*, *B*, *C* be 5 dimensional tensors. Then gemm(*A*, *B*, *C*, axis=1) is equivalent A1 = swapaxes(A, dim1=1, dim2=3) B1 = swapaxes(B, dim1=1, dim2=3) C = swapaxes(C, dim1=1, dim2=3) C = gemm(A1, B1, C) C = swapaxis(C, dim1=1, dim2=3) When the input data is of type float32 and the environment variables MXNET_CUDA_ALLOW_TENSOR_CORE and MXNET_CUDA_TENSOR_OP_MATH_ALLOW_CONVERSION are set to 1, this operator will try to use pseudo-float16 precision (float32 math with float16 I/O) precision in order to use Tensor Cores on suitable NVIDIA GPUs. This can sometimes give significant speedups. .. note:: The operator supports float32 and float64 data types only. Examples:: A = [[1.0, 1.0], [1.0, 1.0]] B = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]] C = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]] gemm(A, B, C, transpose_b=True, alpha=2.0, beta=10.0) = [[14.0, 14.0, 14.0], [14.0, 14.0, 14.0]] A = [[[1.0, 1.0]], [[0.1, 0.1]]] B = [[[1.0, 1.0]], [[0.1, 0.1]]] C = [[[10.0]], [[0.01]]] gemm(A, B, C, transpose_b=True, alpha=2.0 , beta=10.0) = [[[104.0]], [[0.14]]] Defined in src/operator/tensor/la_op.cc:L89 a: Tensor of input matrices b: Tensor of input matrices c: Tensor of input matrices transpose-a: Multiply with transposed of first input (A). (optional) transpose-b: Multiply with transposed of second input (B). 
### -linalg-gemm2

(-linalg-gemm2 a b)
(-linalg-gemm2 {:keys [a b transpose-a transpose-b alpha axis out], :or {transpose-a nil, transpose-b nil, alpha nil, axis nil, out nil}, :as opts})

Performs general matrix multiplication.

Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape on the leading *n-2* dimensions.

If *n=2*, the BLAS3 function *gemm* is performed:

   *out* = *alpha* \* *op*\ (*A*) \* *op*\ (*B*)

Here *alpha* is a scalar parameter and *op()* is either the identity or the matrix transposition (depending on *transpose_a*, *transpose_b*).

If *n>2*, *gemm* is performed separately for a batch of matrices. The column indices of the matrices are given by the last dimensions of the tensors, the row indices by the axis specified with the *axis* parameter. By default, the trailing two dimensions will be used for matrix encoding.

For a non-default axis parameter, the operation performed is equivalent to a series of swapaxes/gemm/swapaxes calls. For example let *A*, *B* be 5 dimensional tensors. Then gemm2(*A*, *B*, axis=1) is equivalent to::

   A1 = swapaxes(A, dim1=1, dim2=3)
   B1 = swapaxes(B, dim1=1, dim2=3)
   C = gemm2(A1, B1)
   C = swapaxes(C, dim1=1, dim2=3)

When the input data is of type float32 and the environment variables MXNET_CUDA_ALLOW_TENSOR_CORE and MXNET_CUDA_TENSOR_OP_MATH_ALLOW_CONVERSION are set to 1, this operator will try to use pseudo-float16 precision (float32 math with float16 I/O) in order to use Tensor Cores on suitable NVIDIA GPUs. This can sometimes give significant speedups.

.. note:: The operator supports float32 and float64 data types only.

Examples::

   // Single matrix multiply
   A = [[1.0, 1.0], [1.0, 1.0]]
   B = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
   gemm2(A, B, transpose_b=True, alpha=2.0) = [[4.0, 4.0, 4.0], [4.0, 4.0, 4.0]]

   // Batch matrix multiply
   A = [[[1.0, 1.0]], [[0.1, 0.1]]]
   B = [[[1.0, 1.0]], [[0.1, 0.1]]]
   gemm2(A, B, transpose_b=True, alpha=2.0) = [[[4.0]], [[0.04]]]

Defined in src/operator/tensor/la_op.cc:L163

a: Tensor of input matrices
b: Tensor of input matrices
transpose-a: Multiply with the transpose of the first input (A). (optional)
transpose-b: Multiply with the transpose of the second input (B). (optional)
alpha: Scalar factor multiplied with A*B. (optional)
axis: Axis corresponding to the matrix row indices. (optional)
out: Output array. (optional)

### -linalg-inverse

(-linalg-inverse {:keys [a out], :or {out nil}, :as opts})

Compute the inverse of a matrix.

Input is a tensor *A* of dimension *n >= 2*.

If *n=2*, *A* is a square matrix. We compute:

   *out* = *A*\ :sup:-1

If *n>2*, *inverse* is performed separately on the trailing two dimensions for all inputs (batch mode).

.. note:: The operator supports float32 and float64 data types only.

Examples::

   // Single matrix inversion
   A = [[1., 4.], [2., 3.]]
   inverse(A) = [[-0.6, 0.8], [0.4, -0.2]]

   // Batch matrix inversion
   A = [[[1., 4.], [2., 3.]], [[1., 3.], [2., 4.]]]
   inverse(A) = [[[-0.6, 0.8], [0.4, -0.2]], [[-2., 1.5], [1., -0.5]]]

Defined in src/operator/tensor/la_op.cc:L917

a: Tensor of square matrix
out: Output array. (optional)

### -linalg-makediag

(-linalg-makediag {:keys [a offset out], :or {offset nil, out nil}, :as opts})

Constructs a square matrix with the input as diagonal.

Input is a tensor *A* of dimension *n >= 1*.

If *n=1*, then *A* represents the diagonal entries of a single square matrix. This matrix will be returned as a 2-dimensional tensor.

If *n>1*, then *A* represents a batch of diagonals of square matrices. The batch of diagonal matrices will be returned as an *n+1*-dimensional tensor.

.. note:: The operator supports float32 and float64 data types only.

Examples::

   // Single diagonal matrix construction
   A = [1.0, 2.0]
   makediag(A) = [[1.0, 0.0], [0.0, 2.0]]
   makediag(A, 1) = [[0.0, 1.0, 0.0], [0.0, 0.0, 2.0], [0.0, 0.0, 0.0]]

   // Batch diagonal matrix construction
   A = [[1.0, 2.0], [3.0, 4.0]]
   makediag(A) = [[[1.0, 0.0], [0.0, 2.0]], [[3.0, 0.0], [0.0, 4.0]]]

Defined in src/operator/tensor/la_op.cc:L547

a: Tensor of diagonal entries
offset: Offset of the diagonal versus the main diagonal. 0 corresponds to the main diagonal, a negative/positive value to diagonals below/above the main diagonal. (optional)
out: Output array. (optional)
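For -linalg-inverse, a short sketch matching the single-matrix example above (same illustrative aliases)::

   (def a (nd/array [1.0 4.0 2.0 3.0] [2 2]))
   ;; => [-0.6 0.8 0.4 -0.2] in row-major order
   (nd/->vec (nd-api/-linalg-inverse {:a a}))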
### -linalg-maketrian

(-linalg-maketrian {:keys [a offset lower out], :or {offset nil, lower nil, out nil}, :as opts})

Constructs a square matrix with the input representing a specific triangular sub-matrix. This is basically the inverse of *linalg.extracttrian*.

Input is a tensor *A* of dimension *n >= 1*.

If *n=1*, then *A* represents the entries of a triangular matrix, which is lower triangular if *offset < 0*, or if *offset = 0* and *lower = true*. The resulting matrix is derived by first constructing the square matrix with the entries outside the triangle set to zero, and then adding *offset*-times an additional diagonal with zero entries to the square matrix.

If *n>1*, then *A* represents a batch of triangular sub-matrices. The batch of corresponding square matrices is returned as an *n+1*-dimensional tensor.

.. note:: The operator supports float32 and float64 data types only.

Examples::

   // Single matrix construction
   A = [1.0, 2.0, 3.0]
   maketrian(A) = [[1.0, 0.0], [2.0, 3.0]]
   maketrian(A, lower=false) = [[1.0, 2.0], [0.0, 3.0]]
   maketrian(A, offset=1) = [[0.0, 1.0, 2.0], [0.0, 0.0, 3.0], [0.0, 0.0, 0.0]]
   maketrian(A, offset=-1) = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 3.0, 0.0]]

   // Batch matrix construction
   A = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
   maketrian(A) = [[[1.0, 0.0], [2.0, 3.0]], [[4.0, 0.0], [5.0, 6.0]]]
   maketrian(A, offset=1) = [[[0.0, 1.0, 2.0], [0.0, 0.0, 3.0], [0.0, 0.0, 0.0]], [[0.0, 4.0, 5.0], [0.0, 0.0, 6.0], [0.0, 0.0, 0.0]]]

Defined in src/operator/tensor/la_op.cc:L673

a: Tensor of triangular matrices stored as vectors
offset: Offset of the diagonal versus the main diagonal. 0 corresponds to the main diagonal, a negative/positive value to diagonals below/above the main diagonal. (optional)
lower: Refer to the lower triangular matrix if lower=true, refer to the upper otherwise. Only relevant when offset=0. (optional)
out: Output array. (optional)

### -linalg-potrf

(-linalg-potrf {:keys [a out], :or {out nil}, :as opts})

Performs Cholesky factorization of a symmetric positive-definite matrix.

Input is a tensor *A* of dimension *n >= 2*.

If *n=2*, the Cholesky factor *B* of the symmetric, positive definite matrix *A* is computed. *B* is triangular (entries of upper or lower triangle are all zero), has positive diagonal entries, and:

   *A* = *B* \* *B*\ :sup:T if *lower* = *true*
   *A* = *B*\ :sup:T \* *B* if *lower* = *false*

If *n>2*, *potrf* is performed separately on the trailing two dimensions for all inputs (batch mode).

.. note:: The operator supports float32 and float64 data types only.

Examples::

   // Single matrix factorization
   A = [[4.0, 1.0], [1.0, 4.25]]
   potrf(A) = [[2.0, 0], [0.5, 2.0]]

   // Batch matrix factorization
   A = [[[4.0, 1.0], [1.0, 4.25]], [[16.0, 4.0], [4.0, 17.0]]]
   potrf(A) = [[[2.0, 0], [0.5, 2.0]], [[4.0, 0], [1.0, 4.0]]]

Defined in src/operator/tensor/la_op.cc:L214

a: Tensor of input matrices to be decomposed
out: Output array. (optional)
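A sketch of the Cholesky factorization via -linalg-potrf, using the matrix from the first example above (illustrative aliases as before)::

   (def spd (nd/array [4.0 1.0 1.0 4.25] [2 2]))
   ;; lower-triangular factor [[2.0 0.0] [0.5 2.0]]
   (nd-api/-linalg-potrf {:a spd})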
### -linalg-potri

(-linalg-potri {:keys [a out], :or {out nil}, :as opts})

Performs matrix inversion from a Cholesky factorization.

Input is a tensor *A* of dimension *n >= 2*.

If *n=2*, *A* is a triangular matrix (entries of upper or lower triangle are all zero) with positive diagonal. We compute:

   *out* = *A*\ :sup:-T \* *A*\ :sup:-1 if *lower* = *true*
   *out* = *A*\ :sup:-1 \* *A*\ :sup:-T if *lower* = *false*

In other words, if *A* is the Cholesky factor of a symmetric positive definite matrix *B* (obtained by *potrf*), then

   *out* = *B*\ :sup:-1

If *n>2*, *potri* is performed separately on the trailing two dimensions for all inputs (batch mode).

.. note:: The operator supports float32 and float64 data types only.

.. note:: Use this operator only if you are certain you need the inverse of *B*, and cannot use the Cholesky factor *A* (*potrf*), together with backsubstitution (*trsm*). The latter is numerically much safer, and also cheaper.

Examples::

   // Single matrix inverse
   A = [[2.0, 0], [0.5, 2.0]]
   potri(A) = [[0.26563, -0.0625], [-0.0625, 0.25]]

   // Batch matrix inverse
   A = [[[2.0, 0], [0.5, 2.0]], [[4.0, 0], [1.0, 4.0]]]
   potri(A) = [[[0.26563, -0.0625], [-0.0625, 0.25]], [[0.06641, -0.01562], [-0.01562, 0.0625]]]

Defined in src/operator/tensor/la_op.cc:L275

a: Tensor of lower triangular matrices
out: Output array. (optional)

### -linalg-sumlogdiag

(-linalg-sumlogdiag {:keys [a out], :or {out nil}, :as opts})

Computes the sum of the logarithms of the diagonal elements of a square matrix.

Input is a tensor *A* of dimension *n >= 2*.

If *n=2*, *A* must be square with positive diagonal entries. We sum the natural logarithms of the diagonal elements; the result has shape (1,).

If *n>2*, *sumlogdiag* is performed separately on the trailing two dimensions for all inputs (batch mode).

.. note:: The operator supports float32 and float64 data types only.

Examples::

   // Single matrix reduction
   A = [[1.0, 1.0], [1.0, 7.0]]
   sumlogdiag(A) = [1.9459]

   // Batch matrix reduction
   A = [[[1.0, 1.0], [1.0, 7.0]], [[3.0, 0], [0, 17.0]]]
   sumlogdiag(A) = [1.9459, 3.9318]

Defined in src/operator/tensor/la_op.cc:L445

a: Tensor of square matrices
out: Output array. (optional)

### -linalg-syrk

(-linalg-syrk {:keys [a transpose alpha out], :or {transpose nil, alpha nil, out nil}, :as opts})

Multiplication of matrix with its transpose.

Input is a tensor *A* of dimension *n >= 2*.

If *n=2*, the operator performs the BLAS3 function *syrk*:

   *out* = *alpha* \* *A* \* *A*\ :sup:T if *transpose=False*, or
   *out* = *alpha* \* *A*\ :sup:T \* *A* if *transpose=True*.

If *n>2*, *syrk* is performed separately on the trailing two dimensions for all inputs (batch mode).

.. note:: The operator supports float32 and float64 data types only.

Examples::

   // Single matrix multiply
   A = [[1., 2., 3.], [4., 5., 6.]]
   syrk(A, alpha=1., transpose=False) = [[14., 32.], [32., 77.]]
   syrk(A, alpha=1., transpose=True) = [[17., 22., 27.], [22., 29., 36.], [27., 36., 45.]]

   // Batch matrix multiply
   A = [[[1., 1.]], [[0.1, 0.1]]]
   syrk(A, alpha=2., transpose=False) = [[[4.]], [[0.04]]]

Defined in src/operator/tensor/la_op.cc:L730

a: Tensor of input matrices
transpose: Use transpose of input matrix. (optional)
alpha: Scalar factor to be applied to the result. (optional)
out: Output array. (optional)
### -linalg-trmm

(-linalg-trmm a b)
(-linalg-trmm {:keys [a b transpose rightside lower alpha out], :or {transpose nil, rightside nil, lower nil, alpha nil, out nil}, :as opts})

Performs multiplication with a lower triangular matrix.

Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape.

If *n=2*, *A* must be triangular. The operator performs the BLAS3 function *trmm*:

   *out* = *alpha* \* *op*\ (*A*) \* *B* if *rightside=False*, or
   *out* = *alpha* \* *B* \* *op*\ (*A*) if *rightside=True*.

Here, *alpha* is a scalar parameter, and *op()* is either the identity or the matrix transposition (depending on *transpose*).

If *n>2*, *trmm* is performed separately on the trailing two dimensions for all inputs (batch mode).

.. note:: The operator supports float32 and float64 data types only.

Examples::

   // Single triangular matrix multiply
   A = [[1.0, 0], [1.0, 1.0]]
   B = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
   trmm(A, B, alpha=2.0) = [[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]]

   // Batch triangular matrix multiply
   A = [[[1.0, 0], [1.0, 1.0]], [[1.0, 0], [1.0, 1.0]]]
   B = [[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]]
   trmm(A, B, alpha=2.0) = [[[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]], [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]]

Defined in src/operator/tensor/la_op.cc:L333

a: Tensor of lower triangular matrices
b: Tensor of matrices
transpose: Use transpose of the triangular matrix. (optional)
rightside: Multiply triangular matrix from the right to non-triangular one. (optional)
lower: True if the triangular matrix is lower triangular, false if it is upper triangular. (optional)
alpha: Scalar factor to be applied to the result. (optional)
out: Output array. (optional)

### -linalg-trsm

(-linalg-trsm a b)
(-linalg-trsm {:keys [a b transpose rightside lower alpha out], :or {transpose nil, rightside nil, lower nil, alpha nil, out nil}, :as opts})

Solves matrix equation involving a lower triangular matrix.

Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape.

If *n=2*, *A* must be triangular. The operator performs the BLAS3 function *trsm*, solving for *out* in:

   *op*\ (*A*) \* *out* = *alpha* \* *B* if *rightside=False*, or
   *out* \* *op*\ (*A*) = *alpha* \* *B* if *rightside=True*.

Here, *alpha* is a scalar parameter, and *op()* is either the identity or the matrix transposition (depending on *transpose*).

If *n>2*, *trsm* is performed separately on the trailing two dimensions for all inputs (batch mode).

.. note:: The operator supports float32 and float64 data types only.

Examples::

   // Single matrix solve
   A = [[1.0, 0], [1.0, 1.0]]
   B = [[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]]
   trsm(A, B, alpha=0.5) = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]

   // Batch matrix solve
   A = [[[1.0, 0], [1.0, 1.0]], [[1.0, 0], [1.0, 1.0]]]
   B = [[[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]], [[4.0, 4.0, 4.0], [8.0, 8.0, 8.0]]]
   trsm(A, B, alpha=0.5) = [[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], [[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]]

Defined in src/operator/tensor/la_op.cc:L396

a: Tensor of lower triangular matrices
b: Tensor of matrices
transpose: Use transpose of the triangular matrix. (optional)
rightside: Multiply triangular matrix from the right to non-triangular one. (optional)
lower: True if the triangular matrix is lower triangular, false if it is upper triangular. (optional)
alpha: Scalar factor to be applied to the result. (optional)
out: Output array. (optional)
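A sketch for -linalg-trsm, solving the single-matrix example above (op(A) \* out = alpha \* B; illustrative aliases as before)::

   (def a (nd/array [1.0 0.0 1.0 1.0] [2 2]))
   (def b (nd/array [2.0 2.0 2.0 4.0 4.0 4.0] [2 3]))
   ;; => [[1.0 1.0 1.0] [1.0 1.0 1.0]]
   (nd-api/-linalg-trsm {:a a :b b :alpha 0.5})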
### -ravel-multi-index

(-ravel-multi-index {:keys [data shape out], :or {shape nil, out nil}, :as opts})

Converts a batch of index arrays into an array of flat indices. The operator follows numpy conventions so a single multi index is given by a column of the input matrix. The leading dimension may be left unspecified by using -1 as placeholder.

Examples::

   A = [[3,6,6],[4,5,1]]
   ravel(A, shape=(7,6)) = [22,41,37]
   ravel(A, shape=(-1,6)) = [22,41,37]

Defined in src/operator/tensor/ravel.cc:L42

data: Batch of multi-indices
shape: Shape of the array into which the multi-indices apply. (optional)
out: Output array. (optional)

### -shuffle

(-shuffle {:keys [data out], :or {out nil}, :as opts})

Randomly shuffle the elements.

This shuffles the array along the first axis. The order of the elements in each subarray does not change. For example, if a 2D array is given, the order of the rows randomly changes, but the order of the elements in each row does not change.

data: Data to be shuffled.
out: Output array. (optional)

### -unravel-index

(-unravel-index {:keys [data shape out], :or {shape nil, out nil}, :as opts})

Converts an array of flat indices into a batch of index arrays. The operator follows numpy conventions so a single multi index is given by a column of the output matrix. The leading dimension may be left unspecified by using -1 as placeholder.

Examples::

   A = [22,41,37]
   unravel(A, shape=(7,6)) = [[3,6,6],[4,5,1]]
   unravel(A, shape=(-1,6)) = [[3,6,6],[4,5,1]]

Defined in src/operator/tensor/ravel.cc:L67

data: Array of flat indices
shape: Shape of the array into which the multi-indices apply. (optional)
out: Output array. (optional)

### abs

(abs {:keys [data out], :or {out nil}, :as opts})

Returns element-wise absolute value of the input.

Example::

   abs([-2, 0, 3]) = [2, 0, 3]

The storage type of abs output depends upon the input storage type:

- abs(default) = default
- abs(row_sparse) = row_sparse
- abs(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L708

data: The input array.
out: Output array. (optional)

### activation

(activation data act-type)
(activation {:keys [data act-type out], :or {out nil}, :as opts})

Applies an activation function element-wise to the input.

The following activation functions are supported:

- relu: Rectified Linear Unit, :math:y = max(x, 0)
- sigmoid: :math:y = \frac{1}{1 + exp(-x)}
- tanh: Hyperbolic tangent, :math:y = \frac{exp(x) - exp(-x)}{exp(x) + exp(-x)}
- softrelu: Soft ReLU, or SoftPlus, :math:y = log(1 + exp(x))
- softsign: :math:y = \frac{x}{1 + abs(x)}

Defined in src/operator/nn/activation.cc:L167

data: The input array.
act-type: Activation function to be applied.
out: Output array. (optional)
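Both arities of activation documented above can be exercised directly; a minimal relu sketch (illustrative aliases as before)::

   (def x (nd/array [-2.0 0.0 3.0] [3]))
   ;; relu zeroes the negative entry => [0.0 0.0 3.0]
   (nd/->vec (nd-api/activation x "relu"))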
### adam-update

(adam-update weight grad mean var lr)
(adam-update {:keys [weight grad mean var lr beta1 beta2 epsilon wd rescale-grad clip-gradient lazy-update out], :or {beta1 nil, beta2 nil, epsilon nil, wd nil, rescale-grad nil, clip-gradient nil, lazy-update nil, out nil}, :as opts})

Update function for Adam optimizer. Adam is seen as a generalization of AdaGrad.

Adam update consists of the following steps, where g represents gradient and m, v are 1st and 2nd order moment estimates (mean and variance).

.. math::

   g_t = \nabla J(W_{t-1})\\
   m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t\\
   v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2\\
   W_t = W_{t-1} - \alpha \frac{ m_t }{ \sqrt{ v_t } + \epsilon }

In pseudo-code, the weight update is::

   w += - learning_rate * m / (sqrt(v) + epsilon)

However, if grad's storage type is row_sparse, lazy_update is True and the storage type of weight is the same as those of m and v, only the row slices whose indices appear in grad.indices are updated (for w, m and v)::

   w[row] += - learning_rate * m[row] / (sqrt(v[row]) + epsilon)

Defined in src/operator/optimizer_op.cc:L686

weight: Weight
grad: Gradient
mean: Moving mean
var: Moving variance
lr: Learning rate
beta1: The decay rate for the 1st moment estimates. (optional)
beta2: The decay rate for the 2nd moment estimates. (optional)
epsilon: A small constant for numerical stability. (optional)
wd: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
lazy-update: If true, lazy updates are applied if gradient's stype is row_sparse and all of w, m and v have the same stype (optional)
out: Output array. (optional)

### add-n

(add-n {:keys [args out], :or {out nil}, :as opts})

Adds all input arguments element-wise.

.. math::

   add\_n(a_1, a_2, ..., a_n) = a_1 + a_2 + ... + a_n

add_n is potentially more efficient than calling add n times.

The storage type of add_n output depends on storage types of inputs

- add_n(row_sparse, row_sparse, ..) = row_sparse
- add_n(default, csr, default) = default
- add_n(any input combinations longer than 4 (>4) with at least one default type) = default
- otherwise, add_n falls back to default storage for all inputs and generates default storage

Defined in src/operator/tensor/elemwise_sum.cc:L155

args: Positional input arguments
out: Output array. (optional)

### all-finite

(all-finite {:keys [data init-output out], :or {init-output nil, out nil}, :as opts})

Check if all the float numbers in the array are finite (used for AMP)

Defined in src/operator/contrib/all_finite.cc:L101

data: Array
init-output: Initialize output to 1. (optional)
out: Output array. (optional)

### amp-cast

(amp-cast data dtype)
(amp-cast {:keys [data dtype out], :or {out nil}, :as opts})

Cast function between low precision float/FP32 used by AMP. It casts only between low precision float/FP32 and does not do anything for other types.

Defined in src/operator/tensor/amp_cast.cc:L37

data: The input.
dtype: Output data type.
out: Output array. (optional)

### amp-multicast

(amp-multicast data num-outputs)
(amp-multicast {:keys [data num-outputs out], :or {out nil}, :as opts})

Cast function used by AMP, that casts its inputs to the common widest type. It casts only between low precision float/FP32 and does not do anything for other types.

Defined in src/operator/tensor/amp_cast.cc:L71

data: Weights
num-outputs: Number of input/output pairs to be casted to the widest type.
out: Output array. (optional)
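A single Adam step as a sketch (the hyperparameters and the in-place :out convention here are illustrative assumptions; the positional arity (adam-update weight grad mean var lr) documented above also applies)::

   (def w (nd/ones [2]))                ;; weights
   (def g (nd/array [0.1 0.1] [2]))     ;; gradient
   (def m (nd/zeros [2]))               ;; 1st-moment state
   (def v (nd/zeros [2]))               ;; 2nd-moment state
   ;; one update with learning rate 0.01, writing the new weights into w
   (nd-api/adam-update {:weight w :grad g :mean m :var v :lr 0.01 :out w})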
### arccos

(arccos {:keys [data out], :or {out nil}, :as opts})

Returns element-wise inverse cosine of the input array. The input should be in range [-1, 1]. The output is in the closed interval :math:[0, \pi]

.. math::

   arccos([-1, -.707, 0, .707, 1]) = [\pi, 3\pi/4, \pi/2, \pi/4, 0]

The storage type of arccos output is always dense

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L179

data: The input array.
out: Output array. (optional)

### arccosh

(arccosh {:keys [data out], :or {out nil}, :as opts})

Returns the element-wise inverse hyperbolic cosine of the input array.

The storage type of arccosh output is always dense

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L320

data: The input array.
out: Output array. (optional)

### arcsin

(arcsin {:keys [data out], :or {out nil}, :as opts})

Returns element-wise inverse sine of the input array. The input should be in the range [-1, 1]. The output is in the closed interval [:math:-\pi/2, :math:\pi/2].

.. math::

   arcsin([-1, -.707, 0, .707, 1]) = [-\pi/2, -\pi/4, 0, \pi/4, \pi/2]

The storage type of arcsin output depends upon the input storage type:

- arcsin(default) = default
- arcsin(row_sparse) = row_sparse
- arcsin(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L160

data: The input array.
out: Output array. (optional)

### arcsinh

(arcsinh {:keys [data out], :or {out nil}, :as opts})

Returns the element-wise inverse hyperbolic sine of the input array.

The storage type of arcsinh output depends upon the input storage type:

- arcsinh(default) = default
- arcsinh(row_sparse) = row_sparse
- arcsinh(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L306

data: The input array.
out: Output array. (optional)

### arctan

(arctan {:keys [data out], :or {out nil}, :as opts})

Returns element-wise inverse tangent of the input array. The output is in the closed interval :math:[-\pi/2, \pi/2]

.. math::

   arctan([-1, 0, 1]) = [-\pi/4, 0, \pi/4]

The storage type of arctan output depends upon the input storage type:

- arctan(default) = default
- arctan(row_sparse) = row_sparse
- arctan(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L200

data: The input array.
out: Output array. (optional)

### arctanh

(arctanh {:keys [data out], :or {out nil}, :as opts})

Returns the element-wise inverse hyperbolic tangent of the input array.

The storage type of arctanh output depends upon the input storage type:

- arctanh(default) = default
- arctanh(row_sparse) = row_sparse
- arctanh(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L337

data: The input array.
out: Output array. (optional)

### argmax

(argmax {:keys [data axis keepdims out], :or {axis nil, keepdims nil, out nil}, :as opts})

Returns indices of the maximum values along an axis. In the case of multiple occurrences of maximum values, the indices corresponding to the first occurrence are returned.

Examples::

   x = [[ 0., 1., 2.], [ 3., 4., 5.]]

   // argmax along axis 0
   argmax(x, axis=0) = [ 1., 1., 1.]

   // argmax along axis 1
   argmax(x, axis=1) = [ 2., 2.]

   // argmax along axis 1, keeping the same dims as the input array
   argmax(x, axis=1, keepdims=True) = [[ 2.], [ 2.]]

data: The input
axis: The axis along which to perform the reduction. Negative values means indexing from right to left. Requires axis to be set as int, because global reduction is not supported yet. (optional)
keepdims: If this is set to True, the reduced axis is left in the result as dimension with size one. (optional)
out: Output array. (optional)
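An argmax sketch over the example matrix above (illustrative aliases as before)::

   (def x (nd/array [0.0 1.0 2.0 3.0 4.0 5.0] [2 3]))
   ;; => [2.0 2.0]
   (nd/->vec (nd-api/argmax {:data x :axis 1}))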
### argmax-channel

(argmax-channel {:keys [data out], :or {out nil}, :as opts})

Returns argmax indices of each channel from the input array. The result will be an NDArray of shape (num_channel,). In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence are returned.

Examples::

   x = [[ 0., 1., 2.], [ 3., 4., 5.]]
   argmax_channel(x) = [ 2., 2.]

data: The input array
out: Output array. (optional)

### argmin

(argmin {:keys [data axis keepdims out], :or {axis nil, keepdims nil, out nil}, :as opts})

Returns indices of the minimum values along an axis. In the case of multiple occurrences of minimum values, the indices corresponding to the first occurrence are returned.

Examples::

   x = [[ 0., 1., 2.], [ 3., 4., 5.]]

   // argmin along axis 0
   argmin(x, axis=0) = [ 0., 0., 0.]

   // argmin along axis 1
   argmin(x, axis=1) = [ 0., 0.]

   // argmin along axis 1, keeping the same dims as the input array
   argmin(x, axis=1, keepdims=True) = [[ 0.], [ 0.]]

data: The input
axis: The axis along which to perform the reduction. Negative values means indexing from right to left. Requires axis to be set as int, because global reduction is not supported yet. (optional)
keepdims: If this is set to True, the reduced axis is left in the result as dimension with size one. (optional)
out: Output array. (optional)

### argsort

(argsort {:keys [data axis is-ascend dtype out], :or {axis nil, is-ascend nil, dtype nil, out nil}, :as opts})

Returns the indices that would sort an input array along the given axis. This function performs sorting along the given axis and returns an array of indices having same shape as an input array that index data in sorted order.

Examples::

   x = [[ 0.3, 0.2, 0.4], [ 0.1, 0.3, 0.2]]

   // sort along axis -1
   argsort(x) = [[ 1., 0., 2.], [ 0., 2., 1.]]

   // sort along axis 0
   argsort(x, axis=0) = [[ 1., 0., 1.], [ 0., 1., 0.]]

   // flatten and then sort
   argsort(x, axis=None) = [ 3., 1., 5., 0., 4., 2.]

Defined in src/operator/tensor/ordering_op.cc:L177

data: The input array
axis: Axis along which to sort the input tensor. If not given, the flattened array is used. Default is -1. (optional)
is-ascend: Whether to sort in ascending or descending order. (optional)
dtype: DType of the output indices. It is only valid when ret_typ is "indices" or "both". An error will be raised if the selected data type cannot precisely represent the indices. (optional)
out: Output array. (optional)

### batch-dot

(batch-dot lhs rhs)
(batch-dot {:keys [lhs rhs transpose-a transpose-b forward-stype out], :or {transpose-a nil, transpose-b nil, forward-stype nil, out nil}, :as opts})

Batchwise dot product.

batch_dot is used to compute the dot product of x and y when x and y are data in batch, namely 3D arrays in shape of (batch_size, :, :).

For example, given x with shape (batch_size, n, m) and y with shape (batch_size, m, k), the result array will have shape (batch_size, n, k), which is computed by::

   batch_dot(x,y)[i,:,:] = dot(x[i,:,:], y[i,:,:])

Defined in src/operator/tensor/dot.cc:L125

lhs: The first input
rhs: The second input
transpose-a: If true then transpose the first input before dot. (optional)
transpose-b: If true then transpose the second input before dot. (optional)
forward-stype: The desired storage type of the forward output given by user; if the combination of input storage types and this hint does not match any implemented ones, the dot operator will perform fallback operation and still produce an output of the desired storage type. (optional)
out: Output array. (optional)
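A batch-dot shape sketch (inputs are ones, so every output entry equals the shared inner dimension, 3.0; illustrative aliases as before)::

   (def x (nd/ones [2 1 3]))   ;; batch of two 1x3 matrices
   (def y (nd/ones [2 3 4]))   ;; batch of two 3x4 matrices
   ;; result shape (2, 1, 4)
   (nd-api/batch-dot x y)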
### batch-norm

(batch-norm data gamma beta moving-mean moving-var)
(batch-norm {:keys [data gamma beta moving-mean moving-var eps momentum fix-gamma use-global-stats output-mean-var axis cudnn-off out], :or {eps nil, momentum nil, fix-gamma nil, use-global-stats nil, output-mean-var nil, axis nil, cudnn-off nil, out nil}, :as opts})

Batch normalization.

Normalizes a data batch by mean and variance, and applies a scale gamma as well as offset beta.

Assume the input has more than one dimension and we normalize along axis 1. We first compute the mean and variance along this axis:

.. math::

   data\_mean[i] = mean(data[:,i,:,...]) \\
   data\_var[i] = var(data[:,i,:,...])

Then compute the normalized output, which has the same shape as input, as follows:

.. math::

   out[:,i,:,...] = \frac{data[:,i,:,...] - data\_mean[i]}{\sqrt{data\_var[i]+\epsilon}} * gamma[i] + beta[i]

Both *mean* and *var* return a scalar by treating the input as a vector.

Assume the input has size *k* on axis 1, then both gamma and beta have shape *(k,)*.

If output_mean_var is set to be true, then outputs both data_mean and the inverse of data_var, which are needed for the backward pass. Note that the gradients of these two outputs are blocked.

Besides the inputs and the outputs, this operator accepts two auxiliary states, moving_mean and moving_var, which are *k*-length vectors. They are global statistics for the whole dataset, which are updated by::

   moving_mean = moving_mean * momentum + data_mean * (1 - momentum)
   moving_var = moving_var * momentum + data_var * (1 - momentum)

If use_global_stats is set to be true, then moving_mean and moving_var are used instead of data_mean and data_var to compute the output. It is often used during inference.

The parameter axis specifies which axis of the input shape denotes the 'channel' (separately normalized groups). The default is 1. Specifying -1 sets the channel axis to be the last item in the input shape.

Both gamma and beta are learnable parameters. But if fix_gamma is true, then set gamma to 1 and its gradient to 0.

.. note:: When fix_gamma is set to True, no sparse support is provided. If fix_gamma is set to False, sparse tensors will fall back.

Defined in src/operator/nn/batch_norm.cc:L572

data: Input data to batch normalization
gamma: gamma array
beta: beta array
moving-mean: running mean of input
moving-var: running variance of input
eps: Epsilon to prevent div 0. Must be no less than CUDNN_BN_MIN_EPSILON defined in cudnn.h when using cudnn (usually 1e-5) (optional)
momentum: Momentum for moving average (optional)
fix-gamma: Fix gamma while training (optional)
use-global-stats: Whether to use global moving statistics instead of local batch-norm. This will force change batch-norm into a scale shift operator. (optional)
output-mean-var: Output the mean and inverse std (optional)
axis: Specify which shape axis the channel is specified (optional)
cudnn-off: Do not select CUDNN operator, if available (optional)
out: Output array. (optional)
### batch-norm-v1

(batch-norm-v1 data gamma beta)
(batch-norm-v1 {:keys [data gamma beta eps momentum fix-gamma use-global-stats output-mean-var out], :or {eps nil, momentum nil, fix-gamma nil, use-global-stats nil, output-mean-var nil, out nil}, :as opts})

Batch normalization. This operator is DEPRECATED.

Perform BatchNorm on the input. Normalizes a data batch by mean and variance, and applies a scale gamma as well as offset beta.

Assume the input has more than one dimension and we normalize along axis 1. We first compute the mean and variance along this axis:

.. math::

   data\_mean[i] = mean(data[:,i,:,...]) \\
   data\_var[i] = var(data[:,i,:,...])

Then compute the normalized output, which has the same shape as input, as follows:

.. math::

   out[:,i,:,...] = \frac{data[:,i,:,...] - data\_mean[i]}{\sqrt{data\_var[i]+\epsilon}} * gamma[i] + beta[i]

Both *mean* and *var* return a scalar by treating the input as a vector.

Assume the input has size *k* on axis 1, then both gamma and beta have shape *(k,)*.

If output_mean_var is set to be true, then outputs both data_mean and data_var as well, which are needed for the backward pass.

Besides the inputs and the outputs, this operator accepts two auxiliary states, moving_mean and moving_var, which are *k*-length vectors. They are global statistics for the whole dataset, which are updated by::

   moving_mean = moving_mean * momentum + data_mean * (1 - momentum)
   moving_var = moving_var * momentum + data_var * (1 - momentum)

If use_global_stats is set to be true, then moving_mean and moving_var are used instead of data_mean and data_var to compute the output. It is often used during inference.

Both gamma and beta are learnable parameters. But if fix_gamma is true, then set gamma to 1 and its gradient to 0.

There's no sparse support for this operator, and it will exhibit problematic behavior if used with sparse tensors.

Defined in src/operator/batch_norm_v1.cc:L95

data: Input data to batch normalization
gamma: gamma array
beta: beta array
eps: Epsilon to prevent div 0 (optional)
momentum: Momentum for moving average (optional)
fix-gamma: Fix gamma while training (optional)
use-global-stats: Whether to use global moving statistics instead of local batch-norm. This will force change batch-norm into a scale shift operator. (optional)
output-mean-var: Output the normalization mean and var (optional)
out: Output array. (optional)

### batch-take

(batch-take a indices)
(batch-take {:keys [a indices out], :or {out nil}, :as opts})

Takes elements from a data batch.

.. note:: batch_take is deprecated. Use pick instead.

Given an input array of shape (d0, d1) and indices of shape (i0,), the result will be an output array of shape (i0,) with::

   output[i] = input[i, indices[i]]

Examples::

   x = [[ 1., 2.], [ 3., 4.], [ 5., 6.]]

   // takes elements with specified indices
   batch_take(x, [0,1,0]) = [ 1. 4. 5.]

Defined in src/operator/tensor/indexing_op.cc:L753

a: The input array
indices: The index array
out: Output array. (optional)
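A batch-take sketch mirroring the example above (the index array is built with the same illustrative constructor, which produces a float NDArray; treat the exact index dtype handling as an assumption)::

   (def x (nd/array [1.0 2.0 3.0 4.0 5.0 6.0] [3 2]))
   (def idx (nd/array [0.0 1.0 0.0] [3]))
   ;; picks x[0,0], x[1,1], x[2,0] => [1.0 4.0 5.0]
   (nd-api/batch-take x idx)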
### bilinear-sampler

(bilinear-sampler data grid)
(bilinear-sampler {:keys [data grid cudnn-off out], :or {cudnn-off nil, out nil}, :as opts})

Applies bilinear sampling to input feature map. Bilinear Sampling is the key of [NIPS2015] \"Spatial Transformer Networks\". The usage of the operator is very similar to the remap function in OpenCV, except that the operator has the backward pass.

Given :math:data and :math:grid, the output is computed by

.. math::

   x_{src} = grid[batch, 0, y_{dst}, x_{dst}] \\
   y_{src} = grid[batch, 1, y_{dst}, x_{dst}] \\
   output[batch, channel, y_{dst}, x_{dst}] = G(data[batch, channel, y_{src}, x_{src}])

:math:x_{dst}, :math:y_{dst} enumerate all spatial locations in :math:output, and :math:G() denotes the bilinear interpolation kernel. The out-boundary points will be padded with zeros. The shape of the output will be (data.shape[0], data.shape[1], grid.shape[2], grid.shape[3]).

The operator assumes that :math:data has 'NCHW' layout and :math:grid has been normalized to [-1, 1].

BilinearSampler often cooperates with GridGenerator which generates sampling grids for BilinearSampler. GridGenerator supports two kinds of transformation: affine and warp. If users want to design a CustomOp to manipulate :math:grid, please first refer to the code of GridGenerator.

Example 1::

   ## Zoom out data two times
   data = array([[[[1, 4, 3, 6], [1, 8, 8, 9], [0, 4, 1, 5], [1, 0, 1, 3]]]])
   affine_matrix = array([[2, 0, 0], [0, 2, 0]])
   affine_matrix = reshape(affine_matrix, shape=(1, 6))
   grid = GridGenerator(data=affine_matrix, transform_type='affine', target_shape=(4, 4))
   out = BilinearSampler(data, grid)
   out = [[[[ 0, 0, 0, 0], [ 0, 3.5, 6.5, 0], [ 0, 1.25, 2.5, 0], [ 0, 0, 0, 0]]]]

Example 2::

   ## shift data horizontally by -1 pixel
   data = array([[[[1, 4, 3, 6], [1, 8, 8, 9], [0, 4, 1, 5], [1, 0, 1, 3]]]])
   warp_matrix = array([[[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]])
   grid = GridGenerator(data=warp_matrix, transform_type='warp')
   out = BilinearSampler(data, grid)
   out = [[[[ 4, 3, 6, 0], [ 8, 8, 9, 0], [ 4, 1, 5, 0], [ 0, 1, 3, 0]]]]

Defined in src/operator/bilinear_sampler.cc:L256

data: Input data to the BilinearsamplerOp.
grid: Input grid to the BilinearsamplerOp. grid has two channels: x_src, y_src
cudnn-off: whether to turn cudnn off (optional)
out: Output array. (optional)

### block-grad

(block-grad {:keys [data out], :or {out nil}, :as opts})

Stops gradient computation.

Stops the accumulated gradient of the inputs from flowing through this operator in the backward direction. In other words, this operator prevents the contribution of its inputs to be taken into account for computing gradients.

Example::

   v1 = [1, 2]
   v2 = [0, 1]
   a = Variable('a')
   b = Variable('b')
   b_stop_grad = stop_gradient(3 * b)
   loss = MakeLoss(b_stop_grad + a)

   executor = loss.simple_bind(ctx=cpu(), a=(1,2), b=(1,2))
   executor.forward(is_train=True, a=v1, b=v2)
   executor.outputs
   [ 1. 5.]

   executor.backward()
   executor.grad_arrays
   [ 0. 0.]
   [ 1. 1.]

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L299

data: The input array.
out: Output array. (optional)

### broadcast-add

(broadcast-add lhs rhs)
(broadcast-add {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns element-wise sum of the input arrays with broadcasting. broadcast_plus is an alias to the function broadcast_add.

Example::

   x = [[ 1., 1., 1.], [ 1., 1., 1.]]
   y = [[ 0.], [ 1.]]

   broadcast_add(x, y) = [[ 1., 1., 1.], [ 2., 2., 2.]]
   broadcast_plus(x, y) = [[ 1., 1., 1.], [ 2., 2., 2.]]

Supported sparse operations::

   broadcast_add(csr, dense(1D)) = dense

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-axis

(broadcast-axis {:keys [data axis size out], :or {axis nil, size nil, out nil}, :as opts})

Broadcasts the input array over particular axes. Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to (2,8,3,9). Elements will be duplicated on the broadcasted axes.

Example::

   // given x of shape (1,2,1)
   x = [[[ 1.], [ 2.]]]

   // broadcast x on axis 2
   broadcast_axis(x, axis=2, size=3) = [[[ 1., 1., 1.], [ 2., 2., 2.]]]

   // broadcast x on axes 0 and 2
   broadcast_axis(x, axis=(0,2), size=(2,3)) = [[[ 1., 1., 1.], [ 2., 2., 2.]], [[ 1., 1., 1.], [ 2., 2., 2.]]]

data: The input
axis: The axes to perform the broadcasting. (optional)
size: Target sizes of the broadcasting axes. (optional)
out: Output array. (optional)
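A broadcast-add sketch matching the example above (illustrative aliases as before)::

   (def x (nd/ones [2 3]))
   (def y (nd/array [0.0 1.0] [2 1]))
   ;; y broadcasts across the 3 columns => [[1 1 1] [2 2 2]]
   (nd-api/broadcast-add x y)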
### broadcast-div

(broadcast-div lhs rhs)
(broadcast-div {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns element-wise division of the input arrays with broadcasting.

Example::

   x = [[ 6., 6., 6.], [ 6., 6., 6.]]
   y = [[ 2.], [ 3.]]

   broadcast_div(x, y) = [[ 3., 3., 3.], [ 2., 2., 2.]]

Supported sparse operations::

   broadcast_div(csr, dense(1D)) = csr

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-equal

(broadcast-equal lhs rhs)
(broadcast-equal {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns the result of element-wise **equal to** (==) comparison operation with broadcasting.

Example::

   x = [[ 1., 1., 1.], [ 1., 1., 1.]]
   y = [[ 0.], [ 1.]]

   broadcast_equal(x, y) = [[ 0., 0., 0.], [ 1., 1., 1.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-greater

(broadcast-greater lhs rhs)
(broadcast-greater {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns the result of element-wise **greater than** (>) comparison operation with broadcasting.

Example::

   x = [[ 1., 1., 1.], [ 1., 1., 1.]]
   y = [[ 0.], [ 1.]]

   broadcast_greater(x, y) = [[ 1., 1., 1.], [ 0., 0., 0.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-greater-equal

(broadcast-greater-equal lhs rhs)
(broadcast-greater-equal {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns the result of element-wise **greater than or equal to** (>=) comparison operation with broadcasting.

Example::

   x = [[ 1., 1., 1.], [ 1., 1., 1.]]
   y = [[ 0.], [ 1.]]

   broadcast_greater_equal(x, y) = [[ 1., 1., 1.], [ 1., 1., 1.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-hypot

(broadcast-hypot lhs rhs)
(broadcast-hypot {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns the hypotenuse of a right angled triangle, given its "legs". It is equivalent to doing :math:sqrt(x_1^2 + x_2^2).

Example::

   x = [[ 3., 3., 3.]]

   y = [[ 4.], [ 4.]]
   broadcast_hypot(x, y) = [[ 5., 5., 5.], [ 5., 5., 5.]]

   z = [[ 0.], [ 4.]]
   broadcast_hypot(x, z) = [[ 3., 3., 3.], [ 5., 5., 5.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-lesser

(broadcast-lesser lhs rhs)
(broadcast-lesser {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns the result of element-wise **lesser than** (<) comparison operation with broadcasting.

Example::

   x = [[ 1., 1., 1.], [ 1., 1., 1.]]
   y = [[ 0.], [ 1.]]

   broadcast_lesser(x, y) = [[ 0., 0., 0.], [ 0., 0., 0.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-lesser-equal

(broadcast-lesser-equal lhs rhs)
(broadcast-lesser-equal {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns the result of element-wise **lesser than or equal to** (<=) comparison operation with broadcasting.

Example::

   x = [[ 1., 1., 1.], [ 1., 1., 1.]]
   y = [[ 0.], [ 1.]]

   broadcast_lesser_equal(x, y) = [[ 0., 0., 0.], [ 1., 1., 1.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)
### broadcast-like

(broadcast-like lhs rhs)
(broadcast-like {:keys [lhs rhs lhs-axes rhs-axes out], :or {lhs-axes nil, rhs-axes nil, out nil}, :as opts})

Broadcasts lhs to have the same shape as rhs.

Broadcasting is a mechanism that allows NDArrays to perform arithmetic operations with arrays of different shapes efficiently without creating multiple copies of arrays. Also see Broadcasting <https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html>_ for more explanation.

Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to (2,8,3,9). Elements will be duplicated on the broadcasted axes.

For example::

   broadcast_like([[1,2,3]], [[5,6,7],[7,8,9]]) = [[ 1., 2., 3.], [ 1., 2., 3.]]
   broadcast_like([9], [1,2,3,4,5], lhs_axes=(0,), rhs_axes=(-1,)) = [9,9,9,9,9]

lhs: First input.
rhs: Second input.
lhs-axes: Axes to perform broadcast on in the first input array (optional)
rhs-axes: Axes to copy from the second input array (optional)
out: Output array. (optional)

### broadcast-logical-and

(broadcast-logical-and lhs rhs)
(broadcast-logical-and {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns the result of element-wise **logical and** with broadcasting.

Example::

   x = [[ 1., 1., 1.], [ 1., 1., 1.]]
   y = [[ 0.], [ 1.]]

   broadcast_logical_and(x, y) = [[ 0., 0., 0.], [ 1., 1., 1.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-logical-or

(broadcast-logical-or lhs rhs)
(broadcast-logical-or {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns the result of element-wise **logical or** with broadcasting.

Example::

   x = [[ 1., 1., 0.], [ 1., 1., 0.]]
   y = [[ 1.], [ 0.]]

   broadcast_logical_or(x, y) = [[ 1., 1., 1.], [ 1., 1., 0.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-logical-xor

(broadcast-logical-xor lhs rhs)
(broadcast-logical-xor {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns the result of element-wise **logical xor** with broadcasting.

Example::

   x = [[ 1., 1., 0.], [ 1., 1., 0.]]
   y = [[ 1.], [ 0.]]

   broadcast_logical_xor(x, y) = [[ 0., 0., 1.], [ 1., 1., 0.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-maximum

(broadcast-maximum lhs rhs)
(broadcast-maximum {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns element-wise maximum of the input arrays with broadcasting. This function compares two input arrays and returns a new array having the element-wise maxima.

Example::

   x = [[ 1., 1., 1.], [ 1., 1., 1.]]
   y = [[ 0.], [ 1.]]

   broadcast_maximum(x, y) = [[ 1., 1., 1.], [ 1., 1., 1.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-minimum

(broadcast-minimum lhs rhs)
(broadcast-minimum {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns element-wise minimum of the input arrays with broadcasting. This function compares two input arrays and returns a new array having the element-wise minima.

Example::

   x = [[ 1., 1., 1.], [ 1., 1., 1.]]
   y = [[ 0.], [ 1.]]

   broadcast_minimum(x, y) = [[ 0., 0., 0.], [ 1., 1., 1.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-mod

(broadcast-mod lhs rhs)
(broadcast-mod {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns element-wise modulo of the input arrays with broadcasting.

Example::

   x = [[ 8., 8., 8.], [ 8., 8., 8.]]
   y = [[ 2.], [ 3.]]

   broadcast_mod(x, y) = [[ 0., 0., 0.], [ 2., 2., 2.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)
### broadcast-mul

(broadcast-mul lhs rhs)
(broadcast-mul {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns element-wise product of the input arrays with broadcasting.

Example::

   x = [[ 1., 1., 1.], [ 1., 1., 1.]]
   y = [[ 0.], [ 1.]]

   broadcast_mul(x, y) = [[ 0., 0., 0.], [ 1., 1., 1.]]

Supported sparse operations::

   broadcast_mul(csr, dense(1D)) = csr

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-not-equal

(broadcast-not-equal lhs rhs)
(broadcast-not-equal {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns the result of element-wise **not equal to** (!=) comparison operation with broadcasting.

Example::

   x = [[ 1., 1., 1.], [ 1., 1., 1.]]
   y = [[ 0.], [ 1.]]

   broadcast_not_equal(x, y) = [[ 1., 1., 1.], [ 0., 0., 0.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-power

(broadcast-power lhs rhs)
(broadcast-power {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns result of first array elements raised to powers from second array, element-wise with broadcasting.

Example::

   x = [[ 2., 2., 2.], [ 2., 2., 2.]]
   y = [[ 1.], [ 2.]]

   broadcast_power(x, y) = [[ 2., 2., 2.], [ 4., 4., 4.]]

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-sub

(broadcast-sub lhs rhs)
(broadcast-sub {:keys [lhs rhs out], :or {out nil}, :as opts})

Returns element-wise difference of the input arrays with broadcasting. broadcast_minus is an alias to the function broadcast_sub.

Example::

   x = [[ 1., 1., 1.], [ 1., 1., 1.]]
   y = [[ 0.], [ 1.]]

   broadcast_sub(x, y) = [[ 1., 1., 1.], [ 0., 0., 0.]]
   broadcast_minus(x, y) = [[ 1., 1., 1.], [ 0., 0., 0.]]

Supported sparse operations::

   broadcast_sub/minus(csr, dense(1D)) = dense

lhs: First input to the function
rhs: Second input to the function
out: Output array. (optional)

### broadcast-to

(broadcast-to {:keys [data shape out], :or {shape nil, out nil}, :as opts})

Broadcasts the input array to a new shape.

Broadcasting is a mechanism that allows NDArrays to perform arithmetic operations with arrays of different shapes efficiently without creating multiple copies of arrays. Also see Broadcasting <https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html>_ for more explanation.

Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to (2,8,3,9). Elements will be duplicated on the broadcasted axes.

For example::

   broadcast_to([[1,2,3]], shape=(2,3)) = [[ 1., 2., 3.], [ 1., 2., 3.]]

The dimension which you do not want to change can also be kept as 0, which means copy the original value. So with shape=(2,0), we will obtain the same result as in the above example.

data: The input
shape: The shape of the desired array. We can set the dim to zero if it's same as the original. E.g A = broadcast_to(B, shape=(10, 0, 0)) has the same meaning as A = broadcast_axis(B, axis=0, size=10). (optional)
out: Output array. (optional)

### cast

(cast data dtype)
(cast {:keys [data dtype out], :or {out nil}, :as opts})

Casts all elements of the input to a new type.

.. note:: Cast is deprecated. Use cast instead.

Example::

   cast([0.9, 1.3], dtype='int32') = [0, 1]
   cast([1e20, 11.1], dtype='float16') = [inf, 11.09375]
   cast([300, 11.1, 10.9, -1, -3], dtype='uint8') = [44, 11, 10, 255, 253]

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L634

data: The input.
dtype: Output data type.
out: Output array. (optional)
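A cast sketch (the dtype literal is shown here as a string, which is an assumption about how this binding accepts it; illustrative aliases as before)::

   (def x (nd/array [0.9 1.3] [2]))
   ;; truncates toward zero => [0 1] as int32
   (nd-api/cast x "int32")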
### cast-storage

(cast-storage data stype)
(cast-storage {:keys [data stype out], :or {out nil}, :as opts})

Casts tensor storage type to the new type.

When an NDArray with default storage type is cast to csr or row_sparse storage, the result is compact, which means:

- for csr, zero values will not be retained
- for row_sparse, row slices of all zeros will not be retained

The storage type of cast_storage output depends on stype parameter:

- cast_storage(csr, 'default') = default
- cast_storage(row_sparse, 'default') = default
- cast_storage(default, 'csr') = csr
- cast_storage(default, 'row_sparse') = row_sparse
- cast_storage(csr, 'csr') = csr
- cast_storage(row_sparse, 'row_sparse') = row_sparse

Example::

   dense = [[ 0., 1., 0.], [ 2., 0., 3.], [ 0., 0., 0.], [ 0., 0., 0.]]

   # cast to row_sparse storage type
   rsp = cast_storage(dense, 'row_sparse')
   rsp.indices = [0, 1]
   rsp.values = [[ 0., 1., 0.], [ 2., 0., 3.]]

   # cast to csr storage type
   csr = cast_storage(dense, 'csr')
   csr.indices = [1, 0, 2]
   csr.values = [ 1., 2., 3.]
   csr.indptr = [0, 1, 3, 3, 3]

Defined in src/operator/tensor/cast_storage.cc:L71

data: The input.
stype: Output storage type.
out: Output array. (optional)

### cbrt

(cbrt {:keys [data out], :or {out nil}, :as opts})

Returns element-wise cube-root value of the input.

.. math::

   cbrt(x) = \sqrt[3]{x}

Example::

   cbrt([1, 8, -125]) = [1, 2, -5]

The storage type of cbrt output depends upon the input storage type:

- cbrt(default) = default
- cbrt(row_sparse) = row_sparse
- cbrt(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L950

data: The input array.
out: Output array. (optional)

### ceil

(ceil {:keys [data out], :or {out nil}, :as opts})

Returns element-wise ceiling of the input. The ceil of the scalar x is the smallest integer i, such that i >= x.

Example::

   ceil([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-2., -1., 2., 2., 3.]

The storage type of ceil output depends upon the input storage type:

- ceil(default) = default
- ceil(row_sparse) = row_sparse
- ceil(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L786

data: The input array.
out: Output array. (optional)

### clip

(clip data a-min a-max)
(clip {:keys [data a-min a-max out], :or {out nil}, :as opts})

Clips (limits) the values in an array. Given an interval, values outside the interval are clipped to the interval edges. Clipping x between a_min and a_max would be::

   clip(x, a_min, a_max) = max(min(x, a_max), a_min)

Example::

   x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
   clip(x,1,8) = [ 1., 1., 2., 3., 4., 5., 6., 7., 8., 8.]

The storage type of clip output depends on storage types of inputs and the a_min, a_max parameter values:

- clip(default) = default
- clip(row_sparse, a_min <= 0, a_max >= 0) = row_sparse
- clip(csr, a_min <= 0, a_max >= 0) = csr
- clip(row_sparse, a_min < 0, a_max < 0) = default
- clip(row_sparse, a_min > 0, a_max > 0) = default
- clip(csr, a_min < 0, a_max < 0) = csr
- clip(csr, a_min > 0, a_max > 0) = csr

Defined in src/operator/tensor/matrix_op.cc:L723

data: Input array.
a-min: Minimum value
a-max: Maximum value
out: Output array. (optional)
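A clip sketch using the positional arity documented above (illustrative aliases as before)::

   (def x (nd/array [0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0] [10]))
   ;; limits values to the interval [1, 8]
   (nd/->vec (nd-api/clip x 1.0 8.0))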
### concat

(concat data num-args)
(concat {:keys [data num-args dim out], :or {dim nil, out nil}, :as opts})

Joins input arrays along a given axis.

.. note:: Concat is deprecated. Use concat instead.

The dimensions of the input arrays should be the same except the axis along which they will be concatenated. The dimension of the output array along the concatenated axis will be equal to the sum of the corresponding dimensions of the input arrays.

The storage type of concat output depends on storage types of inputs

- concat(csr, csr, ..., csr, dim=0) = csr
- otherwise, concat generates output with default storage

Example::

   x = [[1,1],[2,2]]
   y = [[3,3],[4,4],[5,5]]
   z = [[6,6],[7,7],[8,8]]

   concat(x,y,z,dim=0) = [[ 1., 1.], [ 2., 2.], [ 3., 3.], [ 4., 4.], [ 5., 5.], [ 6., 6.], [ 7., 7.], [ 8., 8.]]

   Note that you cannot concat x,y,z along dimension 1 since dimension 0 is not the same for all the input arrays.

   concat(y,z,dim=1) = [[ 3., 3., 6., 6.], [ 4., 4., 7., 7.], [ 5., 5., 8., 8.]]

Defined in src/operator/nn/concat.cc:L371

data: List of arrays to concatenate
num-args: Number of inputs to be concatenated.
dim: the dimension to be concatenated. (optional)
out: Output array. (optional)
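A concat sketch for the axis-0 example (note num-args in the opts map, matching the signature above; illustrative aliases as before)::

   (def x (nd/array [1.0 1.0 2.0 2.0] [2 2]))
   (def y (nd/array [3.0 3.0 4.0 4.0 5.0 5.0] [3 2]))
   ;; => shape (5, 2)
   (nd-api/concat {:data [x y] :num-args 2 :dim 0})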
### convolution

(convolution data weight bias kernel num-filter)
(convolution {:keys [data weight bias kernel stride dilate pad num-filter num-group workspace no-bias cudnn-tune cudnn-off layout out], :or {no-bias nil, cudnn-off nil, stride nil, dilate nil, workspace nil, layout nil, out nil, pad nil, num-group nil, cudnn-tune nil}, :as opts})

Compute *N*-D convolution on *(N+2)*-D input.

In the 2-D convolution, given input data with shape *(batch_size, channel, height, width)*, the output is computed by

.. math::

   out[n,i,:,:] = bias[i] + \sum_{j=0}^{channel} data[n,j,:,:] \star weight[i,j,:,:]

where :math:\star is the 2-D cross-correlation operator.

For general 2-D convolution, the shapes are

- **data**: *(batch_size, channel, height, width)*
- **weight**: *(num_filter, channel, kernel[0], kernel[1])*
- **bias**: *(num_filter,)*
- **out**: *(batch_size, num_filter, out_height, out_width)*.

Define::

   f(x,k,p,s,d) = floor((x+2*p-d*(k-1)-1)/s)+1

then we have::

   out_height = f(height, kernel[0], pad[0], stride[0], dilate[0])
   out_width = f(width, kernel[1], pad[1], stride[1], dilate[1])

If no_bias is set to be true, then the bias term is ignored.

The default data layout is *NCHW*, namely *(batch_size, channel, height, width)*. We can choose other layouts such as *NHWC*.

If num_group is larger than 1, denoted by *g*, then split the input data evenly into *g* parts along the channel axis, and also evenly split weight along the first dimension. Next compute the convolution on the *i*-th part of the data with the *i*-th weight part. The output is obtained by concatenating all the *g* results.

1-D convolution does not have *height* dimension but only *width* in space. The shapes are

- **data**: *(batch_size, channel, width)*
- **weight**: *(num_filter, channel, kernel[0])*
- **bias**: *(num_filter,)*
- **out**: *(batch_size, num_filter, out_width)*.

3-D convolution adds an additional *depth* dimension besides *height* and *width*. The shapes are

- **data**: *(batch_size, channel, depth, height, width)*
- **weight**: *(num_filter, channel, kernel[0], kernel[1], kernel[2])*
- **bias**: *(num_filter,)*
- **out**: *(batch_size, num_filter, out_depth, out_height, out_width)*.

Both weight and bias are learnable parameters.

There are other options to tune the performance.

- **cudnn_tune**: enabling this option leads to higher startup time but may give faster speed. Options are

  - **off**: no tuning
  - **limited_workspace**: run test and pick the fastest algorithm that doesn't exceed workspace limit.
  - **fastest**: pick the fastest algorithm and ignore workspace limit.
  - **None** (default): the behavior is determined by the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT. 0 for off, 1 for limited workspace (default), 2 for fastest.

- **workspace**: A large number leads to more (GPU) memory usage but may improve the performance.

Defined in src/operator/nn/convolution.cc:L472

data: Input data to the ConvolutionOp.
weight: Weight matrix.
bias: Bias parameter.
kernel: Convolution kernel size: (w,), (h, w) or (d, h, w)
stride: Convolution stride: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension. (optional)
dilate: Convolution dilate: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension. (optional)
pad: Zero pad for convolution: (w,), (h, w) or (d, h, w). Defaults to no padding. (optional)
num-filter: Convolution filter(channel) number
num-group: Number of group partitions. (optional)
workspace: Maximum temporary workspace allowed (MB) in convolution. This parameter has two usages. When CUDNN is not used, it determines the effective batch size of the convolution kernel. When CUDNN is used, it controls the maximum temporary storage used for tuning the best CUDNN kernel when limited_workspace strategy is used. (optional)
no-bias: Whether to disable bias parameter. (optional)
cudnn-tune: Whether to pick convolution algo by running performance test. (optional)
cudnn-off: Turn off cudnn for this layer. (optional)
layout: Set layout for input, output and weight. Empty for default layout: NCW for 1d, NCHW for 2d and NCDHW for 3d. NHWC and NDHWC are only supported on GPU. (optional)
out: Output array. (optional)

### convolution-v1

(convolution-v1 data weight bias kernel num-filter)
(convolution-v1 {:keys [data weight bias kernel stride dilate pad num-filter num-group workspace no-bias cudnn-tune cudnn-off layout out], :or {no-bias nil, cudnn-off nil, stride nil, dilate nil, workspace nil, layout nil, out nil, pad nil, num-group nil, cudnn-tune nil}, :as opts})

This operator is DEPRECATED. Apply convolution to input then add a bias.

data: Input data to the ConvolutionV1Op.
weight: Weight matrix.
bias: Bias parameter.
kernel: convolution kernel size: (h, w) or (d, h, w)
stride: convolution stride: (h, w) or (d, h, w) (optional)
dilate: convolution dilate: (h, w) or (d, h, w) (optional)
pad: pad for convolution: (h, w) or (d, h, w) (optional)
num-filter: convolution filter(channel) number
num-group: Number of group partitions. Equivalent to slicing input into num_group partitions, apply convolution on each, then concatenate the results (optional)
workspace: Maximum temporary workspace allowed for convolution (MB). This parameter determines the effective batch size of the convolution kernel, which may be smaller than the given batch size. Also, the workspace will be automatically enlarged to make sure that we can run the kernel with batch_size=1 (optional)
no-bias: Whether to disable bias parameter. (optional)
cudnn-tune: Whether to pick convolution algo by running performance test. Leads to higher startup time but may give faster speed. Options are: 'off': no tuning; 'limited_workspace': run test and pick the fastest algorithm that doesn't exceed workspace limit; 'fastest': pick the fastest algorithm and ignore workspace limit. If set to None (default), behavior is determined by environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT: 0 for off, 1 for limited workspace (default), 2 for fastest. (optional)
cudnn-off: Turn off cudnn for this layer. (optional)
layout: Set layout for input, output and weight. Empty for default layout: NCHW for 2d and NCDHW for 3d. (optional)
out: Output array. (optional)
### correlation

(correlation data1 data2)(correlation {:keys [data1 data2 kernel-size max-displacement stride1 stride2 pad-size is-multiply out], :or {kernel-size nil, max-displacement nil, stride1 nil, stride2 nil, pad-size nil, is-multiply nil, out nil}, :as opts})

Applies correlation to inputs.

The correlation layer performs multiplicative patch comparisons between two feature maps. Given two multi-channel feature maps :math:f_{1}, f_{2}, with :math:w, :math:h, and :math:c being their width, height, and number of channels, the correlation layer lets the network compare each patch from :math:f_{1} with each patch from :math:f_{2}.

For now we consider only a single comparison of two patches. The 'correlation' of two patches centered at :math:x_{1} in the first map and :math:x_{2} in the second map is then defined as:

.. math:: c(x_{1}, x_{2}) = \sum_{o \in [-k,k] \times [-k,k]} <f_{1}(x_{1} + o), f_{2}(x_{2} + o)>

for a square patch of size :math:K:=2k+1. (A NumPy sketch of this single-patch comparison appears after the cosh entry below.)

Note that the equation above is identical to one step of a convolution in neural networks, but instead of convolving data with a filter, it convolves data with other data. For this reason, it has no training weights.

Computing :math:c(x_{1}, x_{2}) involves :math:c * K^{2} multiplications. Comparing all patch combinations involves :math:w^{2}*h^{2} such computations.

Given a maximum displacement :math:d, for each location :math:x_{1} it computes correlations :math:c(x_{1}, x_{2}) only in a neighborhood of size :math:D:=2d+1, by limiting the range of :math:x_{2}. We use strides :math:s_{1}, s_{2}, to quantize :math:x_{1} globally and to quantize :math:x_{2} within the neighborhood centered around :math:x_{1}.

The final output is defined by the following expression:

.. math:: out[n, q, i, j] = c(x_{i, j}, x_{q})

where :math:i and :math:j enumerate spatial locations in :math:f_{1}, and :math:q denotes the :math:q^{th} neighborhood of :math:x_{i,j}.

Defined in src/operator/correlation.cc:L198

data1: Input data1 to the correlation.
data2: Input data2 to the correlation.
kernel-size: kernel size for Correlation; must be an odd number (optional)
max-displacement: Max displacement of Correlation (optional)
stride1: stride1 quantizes data1 globally (optional)
stride2: stride2 quantizes data2 within the neighborhood centered around data1 (optional)
pad-size: pad for Correlation (optional)
is-multiply: operation type is either multiplication or subtraction (optional)
out: Output array. (optional)

### cos

(cos {:keys [data out], :or {out nil}, :as opts})

Computes the element-wise cosine of the input array.

The input should be in radians (:math:2\pi rad equals 360 degrees).

.. math:: cos([0, \pi/4, \pi/2]) = [1, 0.707, 0]

The storage type of cos output is always dense

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L89

data: The input array.
out: Output array. (optional)

### cosh

(cosh {:keys [data out], :or {out nil}, :as opts})

Returns the hyperbolic cosine of the input array, computed element-wise.

.. math:: cosh(x) = 0.5\times(exp(x) + exp(-x))

The storage type of cosh output is always dense

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L272

data: The input array.
out: Output array. (optional)
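As referenced under correlation above, a minimal NumPy sketch of the single-patch comparison c(x1, x2); it illustrates only the defining equation, not the full operator (the patch_correlation helper and the (H, W, C) layout are our assumptions):

Example (sketch)::

  import numpy as np

  def patch_correlation(f1, f2, x1, x2, k):
      """c(x1, x2) = sum over offsets o in [-k,k]^2 of <f1[x1+o], f2[x2+o]>.
      f1, f2 have shape (H, W, C); x1, x2 are (row, col) patch centers."""
      total = 0.0
      for dy in range(-k, k + 1):
          for dx in range(-k, k + 1):
              v1 = f1[x1[0] + dy, x1[1] + dx]  # C-dim feature vector
              v2 = f2[x2[0] + dy, x2[1] + dx]
              total += np.dot(v1, v2)          # <.,.> inner product
      return total

  f1 = np.random.rand(8, 8, 3)
  f2 = np.random.rand(8, 8, 3)
  print(patch_correlation(f1, f2, (3, 3), (4, 4), k=1))  # K = 3 patch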
### crop

(crop data num-args)(crop {:keys [data num-args offset h-w center-crop out], :or {offset nil, h-w nil, center-crop nil, out nil}, :as opts})

.. note:: Crop is deprecated. Use slice instead.

Crop the 2nd and 3rd dim of input data, with the corresponding size of h_w or with width and height of the second input symbol, i.e., with one input, we need h_w to specify the crop height and width; otherwise the second input symbol's size will be used.

Defined in src/operator/crop.cc:L50

data: Tensor or List of Tensors, the second input will be used as crop_like shape reference
num-args: Number of inputs for crop; if equals one, then we will use the h_w for crop height and width, else if equals two, then we will use the height and width of the second input symbol, which we name crop_like here
offset: crop offset coordinate: (y, x) (optional)
h-w: crop height and width: (h, w) (optional)
center-crop: If set to true, then it will use the center crop, or it will crop using the shape of crop_like (optional)
out: Output array. (optional)

### ctc-loss

(ctc-loss data label data-lengths label-lengths)(ctc-loss {:keys [data label data-lengths label-lengths use-data-lengths use-label-lengths blank-label out], :or {use-data-lengths nil, use-label-lengths nil, blank-label nil, out nil}, :as opts})

Connectionist Temporal Classification Loss.

.. note:: The existing alias contrib_CTCLoss is deprecated.

The shapes of the inputs and outputs:

- **data**: (sequence_length, batch_size, alphabet_size)
- **label**: (batch_size, label_sequence_length)
- **out**: (batch_size)

The data tensor consists of sequences of activation vectors (without applying softmax), with i-th channel in the last dimension corresponding to i-th label for i between 0 and alphabet_size-1 (i.e., always 0-indexed). Alphabet size should include one additional value reserved for blank label. When blank_label is "first", the 0-th channel is reserved for activation of blank label, or otherwise if it is "last", (alphabet_size-1)-th channel should be reserved for blank label.

label is an index matrix of integers. When blank_label is "first", the value 0 is then reserved for blank label, and should not be passed in this matrix. Otherwise, when blank_label is "last", the value (alphabet_size-1) is reserved for blank label.

If a sequence of labels is shorter than *label_sequence_length*, use the special padding value at the end of the sequence to conform it to the correct length. The padding value is 0 when blank_label is "first", and -1 otherwise.

For example, suppose the vocabulary is [a, b, c], and in one batch we have three sequences 'ba', 'cbb', and 'abac'. When blank_label is "first", we can index the labels as {'a': 1, 'b': 2, 'c': 3}, and we reserve the 0-th channel for blank label in data tensor. The resulting label tensor should be padded to be::

  [[2, 1, 0, 0], [3, 2, 2, 0], [1, 2, 1, 3]]

When blank_label is "last", we can index the labels as {'a': 0, 'b': 1, 'c': 2}, and we reserve the channel index 3 for blank label in data tensor. The resulting label tensor should be padded to be::

  [[1, 0, -1, -1], [2, 1, 1, -1], [0, 1, 0, 2]]

out is a list of CTC loss values, one per example in the batch.

See *Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks*, A. Graves *et al*. for more information on the definition and the algorithm.

Defined in src/operator/nn/ctc_loss.cc:L100

data: Input ndarray
label: Ground-truth labels for the loss.
data-lengths: Lengths of data for each of the samples. Only required when use_data_lengths is true.
label-lengths: Lengths of labels for each of the samples. Only required when use_label_lengths is true.
use-data-lengths: Whether the data lengths are decided by data_lengths. If false, the lengths are equal to the max sequence length. (optional)
use-label-lengths: Whether the label lengths are decided by label_lengths, or derived from padding_mask. If false, the lengths are derived from the first occurrence of the value of padding_mask. The value of padding_mask is 0 when first CTC label is reserved for blank, and -1 when last label is reserved for blank. See blank_label. (optional)
blank-label: Set the label that is reserved for blank label. If "first", 0-th label is reserved, and label values for tokens in the vocabulary are between 1 and alphabet_size-1, and the padding mask is -1. If "last", last label value alphabet_size-1 is reserved for blank label instead, and label values for tokens in the vocabulary are between 0 and alphabet_size-2, and the padding mask is 0. (optional)
out: Output array. (optional)
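To make the padding scheme concrete, a short Python sketch that builds the padded label matrix for the blank_label = "first" convention from the vocabulary example above (helper names are ours):

Example (sketch)::

  # Vocabulary [a, b, c]; with blank_label="first", tokens map to 1..3
  # and 0 is both the blank channel and the padding value.
  vocab = {'a': 1, 'b': 2, 'c': 3}
  batch = ['ba', 'cbb', 'abac']
  label_seq_len = max(len(s) for s in batch)  # 4

  def pad_labels(seq, pad_value=0):
      ids = [vocab[ch] for ch in seq]
      return ids + [pad_value] * (label_seq_len - len(ids))

  labels = [pad_labels(s) for s in batch]
  # -> [[2, 1, 0, 0], [3, 2, 2, 0], [1, 2, 1, 3]]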
### deconvolution

(deconvolution data weight bias kernel num-filter)(deconvolution {:keys [data weight bias kernel stride dilate pad adj target-shape num-filter num-group workspace no-bias cudnn-tune cudnn-off layout out], :or {target-shape nil, no-bias nil, cudnn-off nil, stride nil, dilate nil, workspace nil, layout nil, adj nil, out nil, pad nil, num-group nil, cudnn-tune nil}, :as opts})

Computes 1D or 2D transposed convolution (aka fractionally strided convolution) of the input tensor. This operation can be seen as the gradient of Convolution operation with respect to its input. Convolution usually reduces the size of the input. Transposed convolution works the other way, going from a smaller input to a larger output while preserving the connectivity pattern.

data: Input tensor to the deconvolution operation.
weight: Weights representing the kernel.
bias: Bias added to the result after the deconvolution operation.
kernel: Deconvolution kernel size: (w,), (h, w) or (d, h, w). This is the same as the kernel size used for the corresponding convolution
stride: The stride used for the corresponding convolution: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension. (optional)
dilate: Dilation factor for each dimension of the input: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension. (optional)
pad: The amount of implicit zero padding added during convolution for each dimension of the input: (w,), (h, w) or (d, h, w). (kernel-1)/2 is usually a good choice. If target_shape is set, pad will be ignored and a padding that will generate the target shape will be used. Defaults to no padding. (optional)
adj: Adjustment for output shape: (w,), (h, w) or (d, h, w). If target_shape is set, adj will be ignored and computed accordingly. (optional)
target-shape: Shape of the output tensor: (w,), (h, w) or (d, h, w). (optional)
num-filter: Number of output filters.
num-group: Number of group partitions. (optional)
workspace: Maximum temporary workspace allowed (MB) in deconvolution. This parameter has two usages. When CUDNN is not used, it determines the effective batch size of the deconvolution kernel. When CUDNN is used, it controls the maximum temporary storage used for tuning the best CUDNN kernel when limited_workspace strategy is used. (optional)
no-bias: Whether to disable bias parameter. (optional)
cudnn-tune: Whether to pick convolution algorithm by running performance test. (optional)
cudnn-off: Turn off cudnn for this layer. (optional)
layout: Set layout for input, output and weight. Empty for default layout, NCW for 1d, NCHW for 2d and NCDHW for 3d. NHWC and NDHWC are only supported on GPU. (optional)
out: Output array. (optional)

### degrees

(degrees {:keys [data out], :or {out nil}, :as opts})

Converts each element of the input array from radians to degrees.

.. math:: degrees([0, \pi/2, \pi, 3\pi/2, 2\pi]) = [0, 90, 180, 270, 360]

The storage type of degrees output depends upon the input storage type:

- degrees(default) = default
- degrees(row_sparse) = row_sparse
- degrees(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L219

data: The input array.
out: Output array. (optional)

### depth-to-space

(depth-to-space data block-size)(depth-to-space {:keys [data block-size out], :or {out nil}, :as opts})

Rearranges (permutes) data from depth into blocks of spatial data. Similar to ONNX DepthToSpace operator: https://github.com/onnx/onnx/blob/master/docs/Operators.md#DepthToSpace. The output is a new tensor where the values from depth dimension are moved in spatial blocks to height and width dimension. The reverse of this operation is space_to_depth.

.. math::

  \begin{gather*}
  x \prime = reshape(x, [N, block\_size, block\_size, C / (block\_size ^ 2), H, W]) \\
  x \prime \prime = transpose(x \prime, [0, 3, 4, 1, 5, 2]) \\
  y = reshape(x \prime \prime, [N, C / (block\_size ^ 2), H * block\_size, W * block\_size])
  \end{gather*}

where :math:x is an input tensor with default layout as :math:[N, C, H, W]: [batch, channels, height, width] and :math:y is the output tensor of layout :math:[N, C / (block\_size ^ 2), H * block\_size, W * block\_size]

Example::

  x = [[[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]], [[12, 13, 14], [15, 16, 17]], [[18, 19, 20], [21, 22, 23]]]]

  depth_to_space(x, 2) = [[[[0, 6, 1, 7, 2, 8], [12, 18, 13, 19, 14, 20], [3, 9, 4, 10, 5, 11], [15, 21, 16, 22, 17, 23]]]]

Defined in src/operator/tensor/matrix_op.cc:L1050

data: Input ndarray
block-size: Blocks of [block_size, block_size] are moved
out: Output array. (optional)
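The reshape/transpose chain above can be reproduced directly in NumPy; a minimal sketch (the depth_to_space_np name is ours, and it assumes the first reshape to [N, bs, bs, C/bs^2, H, W] as written above):

Example (sketch)::

  import numpy as np

  def depth_to_space_np(x, bs):
      """NumPy equivalent of the math above; x has layout [N, C, H, W]."""
      n, c, h, w = x.shape
      xp = x.reshape(n, bs, bs, c // bs**2, h, w)
      xpp = xp.transpose(0, 3, 4, 1, 5, 2)
      return xpp.reshape(n, c // bs**2, h * bs, w * bs)

  x = np.arange(24).reshape(1, 4, 2, 3)
  print(depth_to_space_np(x, 2))
  # [[[[ 0  6  1  7  2  8]
  #    [12 18 13 19 14 20]
  #    [ 3  9  4 10  5 11]
  #    [15 21 16 22 17 23]]]]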
### diag

(diag {:keys [data k axis1 axis2 out], :or {k nil, axis1 nil, axis2 nil, out nil}, :as opts})

Extracts a diagonal or constructs a diagonal array.

diag's behavior depends on the input array dimensions:

- 1-D arrays: constructs a 2-D array with the input as its diagonal, all other elements are zero.
- N-D arrays: extracts the diagonals of the sub-arrays with axes specified by axis1 and axis2. The output shape would be decided by removing the axes numbered axis1 and axis2 from the input shape and appending to the result a new axis with the size of the diagonals in question. For example, when the input shape is (2, 3, 4, 5), axis1 and axis2 are 0 and 2 respectively and k is 0, the resulting shape would be (3, 5, 2).

Examples::

  x = [[1, 2, 3], [4, 5, 6]]
  diag(x) = [1, 5]
  diag(x, k=1) = [2, 6]
  diag(x, k=-1) = [4]

  x = [1, 2, 3]
  diag(x) = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]
  diag(x, k=1) = [[0, 1, 0], [0, 0, 2], [0, 0, 0]]
  diag(x, k=-1) = [[0, 0, 0], [1, 0, 0], [0, 2, 0]]

  x = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
  diag(x) = [[1, 7], [2, 8]]
  diag(x, k=1) = [[3], [4]]
  diag(x, axis1=-2, axis2=-1) = [[1, 4], [5, 8]]

Defined in src/operator/tensor/diag_op.cc:L87

data: Input ndarray
k: Diagonal in question. The default is 0. Use k>0 for diagonals above the main diagonal, and k<0 for diagonals below the main diagonal. If input has shape (S0 S1) k must be between -S0 and S1 (optional)
axis1: The first axis of the sub-arrays of interest. Ignored when the input is a 1-D array. (optional)
axis2: The second axis of the sub-arrays of interest. Ignored when the input is a 1-D array. (optional)
out: Output array. (optional)

### dot

(dot lhs rhs)(dot {:keys [lhs rhs transpose-a transpose-b forward-stype out], :or {transpose-a nil, transpose-b nil, forward-stype nil, out nil}, :as opts})

Dot product of two arrays.

dot's behavior depends on the input array dimensions:

- 1-D arrays: inner product of vectors
- 2-D arrays: matrix multiplication
- N-D arrays: a sum product over the last axis of the first input and the first axis of the second input

For example, given 3-D x with shape (n,m,k) and y with shape (k,r,s), the result array will have shape (n,m,r,s). It is computed by::

  dot(x,y)[i,j,a,b] = sum(x[i,j,:]*y[:,a,b])

Example::

  x = reshape([0,1,2,3,4,5,6,7], shape=(2,2,2))
  y = reshape([7,6,5,4,3,2,1,0], shape=(2,2,2))

  dot(x,y)[0,0,1,1] = 0
  sum(x[0,0,:]*y[:,1,1]) = 0

The storage type of dot output depends on storage types of inputs, transpose option and forward_stype option for output storage type. Implemented sparse operations include:

- dot(default, default, transpose_a=True/False, transpose_b=True/False) = default
- dot(csr, default, transpose_a=True) = row_sparse
- dot(csr, default) = default
- dot(csr, row_sparse) = default
- dot(default, csr) = csr (CPU only)
- dot(default, csr, forward_stype='default') = default
- dot(default, csr, transpose_b=True, forward_stype='default') = default

If the combination of input storage types and forward_stype does not match any of the above patterns, dot will fallback and generate output with default storage.

.. Note:: If the storage type of the lhs is "csr", the storage type of gradient w.r.t rhs will be "row_sparse". Only a subset of optimizers support sparse gradients, including SGD, AdaGrad and Adam. Note that by default lazy updates are turned on, which may perform differently from standard updates. For more details, please check the Optimization API at: /api/python/optimization/optimization.html

Defined in src/operator/tensor/dot.cc:L77

lhs: The first input
rhs: The second input
transpose-a: If true then transpose the first input before dot. (optional)
transpose-b: If true then transpose the second input before dot. (optional)
forward-stype: The desired storage type of the forward output given by user, if the combination of input storage types and this hint does not match any implemented ones, the dot operator will perform fallback operation and still produce an output of the desired storage type. (optional)
out: Output array. (optional)
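The N-D rule above (a sum product over the last axis of the first input and the first axis of the second) corresponds to NumPy's tensordot with one contracted axis; a quick sketch reproducing the example:

Example (sketch)::

  import numpy as np

  x = np.arange(8).reshape(2, 2, 2)           # shape (n, m, k)
  y = np.arange(7, -1, -1).reshape(2, 2, 2)   # shape (k, r, s)

  z = np.tensordot(x, y, axes=([2], [0]))     # shape (2, 2, 2, 2)
  # Same entry as in the example above:
  assert z[0, 0, 1, 1] == np.sum(x[0, 0, :] * y[:, 1, 1])  # == 0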
### dropout

(dropout {:keys [data p mode axes cudnn-off out], :or {p nil, mode nil, axes nil, cudnn-off nil, out nil}, :as opts})

Applies dropout operation to input array.

- During training, each element of the input is set to zero with probability p. The whole array is rescaled by :math:1/(1-p) to keep the expected sum of the input unchanged.
- During testing, this operator does not change the input if mode is 'training'. If mode is 'always', the same computation as during training will be applied.

Example::

  random.seed(998)
  input_array = array([[3., 0.5, -0.5, 2., 7.], [2., -0.4, 7., 3., 0.2]])
  a = symbol.Variable('a')
  dropout = symbol.Dropout(a, p = 0.2)
  executor = dropout.simple_bind(a = input_array.shape)

  ## If training
  executor.forward(is_train = True, a = input_array)
  executor.outputs
  [[ 3.75 0.625 -0. 2.5 8.75 ]
   [ 2.5 -0.5 8.75 3.75 0. ]]

  ## If testing
  executor.forward(is_train = False, a = input_array)
  executor.outputs
  [[ 3. 0.5 -0.5 2. 7. ]
   [ 2. -0.4 7. 3. 0.2 ]]

Defined in src/operator/nn/dropout.cc:L95

data: Input array to which dropout will be applied.
p: Fraction of the input that gets dropped out during training time. (optional)
mode: Whether to only turn on dropout during training or to also turn on for inference. (optional)
axes: Axes for variational dropout kernel. (optional)
cudnn-off: Whether to turn off cudnn in dropout operator. This option is ignored if axes is specified. (optional)
out: Output array. (optional)

### elemwise-add

(elemwise-add lhs rhs)(elemwise-add {:keys [lhs rhs out], :or {out nil}, :as opts})

Adds arguments element-wise.

The storage type of elemwise_add output depends on storage types of inputs

- elemwise_add(row_sparse, row_sparse) = row_sparse
- elemwise_add(csr, csr) = csr
- elemwise_add(default, csr) = default
- elemwise_add(csr, default) = default
- elemwise_add(default, rsp) = default
- elemwise_add(rsp, default) = default
- otherwise, elemwise_add generates output with default storage

lhs: first input
rhs: second input
out: Output array. (optional)

### elemwise-div

(elemwise-div lhs rhs)(elemwise-div {:keys [lhs rhs out], :or {out nil}, :as opts})

Divides arguments element-wise.

The storage type of elemwise_div output is always dense

lhs: first input
rhs: second input
out: Output array. (optional)

### elemwise-mul

(elemwise-mul lhs rhs)(elemwise-mul {:keys [lhs rhs out], :or {out nil}, :as opts})

Multiplies arguments element-wise.

The storage type of elemwise_mul output depends on storage types of inputs

- elemwise_mul(default, default) = default
- elemwise_mul(row_sparse, row_sparse) = row_sparse
- elemwise_mul(default, row_sparse) = row_sparse
- elemwise_mul(row_sparse, default) = row_sparse
- elemwise_mul(csr, csr) = csr
- otherwise, elemwise_mul generates output with default storage

lhs: first input
rhs: second input
out: Output array. (optional)

### elemwise-sub

(elemwise-sub lhs rhs)(elemwise-sub {:keys [lhs rhs out], :or {out nil}, :as opts})

Subtracts arguments element-wise.

The storage type of elemwise_sub output depends on storage types of inputs

- elemwise_sub(row_sparse, row_sparse) = row_sparse
- elemwise_sub(csr, csr) = csr
- elemwise_sub(default, csr) = default
- elemwise_sub(csr, default) = default
- elemwise_sub(default, rsp) = default
- elemwise_sub(rsp, default) = default
- otherwise, elemwise_sub generates output with default storage

lhs: first input
rhs: second input
out: Output array. (optional)
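A small sketch of how these storage-type rules surface at runtime, assuming the MXNet Python bindings (tostype and the stype attribute on NDArray):

Example (sketch)::

  import mxnet as mx

  a = mx.nd.array([[0, 1], [2, 0]]).tostype('csr')
  b = mx.nd.array([[3, 0], [0, 4]]).tostype('csr')

  # Per the tables above: elemwise_add(csr, csr) = csr, elemwise_mul(csr, csr) = csr
  print(mx.nd.elemwise_add(a, b).stype)  # -> 'csr'
  print(mx.nd.elemwise_mul(a, b).stype)  # -> 'csr'

  # elemwise_div output is always dense
  dense = mx.nd.ones((2, 2))
  print(mx.nd.elemwise_div(a.tostype('default'), dense).stype)  # -> 'default'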
### embedding

(embedding data weight input-dim output-dim)(embedding {:keys [data weight input-dim output-dim dtype sparse-grad out], :or {dtype nil, sparse-grad nil, out nil}, :as opts})

Maps integer indices to vector representations (embeddings).

This operator maps words to real-valued vectors in a high-dimensional space, called word embeddings. These embeddings can capture semantic and syntactic properties of the words. For example, it has been noted that in the learned embedding spaces, similar words tend to be close to each other and dissimilar words far apart.

For an input array of shape (d1, ..., dK), the shape of an output array is (d1, ..., dK, output_dim). All the input values should be integers in the range [0, input_dim).

If the input_dim is ip0 and output_dim is op0, then shape of the embedding weight matrix must be (ip0, op0).

By default, if any index mentioned is too large, it is replaced by the index that addresses the last vector in an embedding matrix.

Examples::

  input_dim = 4
  output_dim = 5

  // Each row in weight matrix y represents a word. So, y = (w0,w1,w2,w3)
  y = [[ 0., 1., 2., 3., 4.],
       [ 5., 6., 7., 8., 9.],
       [ 10., 11., 12., 13., 14.],
       [ 15., 16., 17., 18., 19.]]

  // Input array x represents n-grams(2-gram). So, x = [(w1,w3), (w0,w2)]
  x = [[ 1., 3.],
       [ 0., 2.]]

  // Mapped input x to its vector representation y.
  Embedding(x, y, 4, 5) = [[[ 5., 6., 7., 8., 9.],
                            [ 15., 16., 17., 18., 19.]],
                           [[ 0., 1., 2., 3., 4.],
                            [ 10., 11., 12., 13., 14.]]]

The storage type of weight can be either row_sparse or default.

.. Note:: If "sparse_grad" is set to True, the storage type of gradient w.r.t weights will be "row_sparse". Only a subset of optimizers support sparse gradients, including SGD, AdaGrad and Adam. Note that by default lazy updates are turned on, which may perform differently from standard updates. For more details, please check the Optimization API at: /api/python/optimization/optimization.html

Defined in src/operator/tensor/indexing_op.cc:L519

data: The input array to the embedding operator.
weight: The embedding weight matrix.
input-dim: Vocabulary size of the input indices.
output-dim: Dimension of the embedding vectors.
dtype: Data type of weight. (optional)
sparse-grad: Compute row sparse gradient in the backward calculation. If set to True, the grad's storage type is row_sparse. (optional)
out: Output array. (optional)

### erf

(erf {:keys [data out], :or {out nil}, :as opts})

Returns element-wise gauss error function of the input.

Example::

  erf([0, -1., 10.]) = [0., -0.8427, 1.]

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L964

data: The input array.
out: Output array. (optional)

### erfinv

(erfinv {:keys [data out], :or {out nil}, :as opts})

Returns element-wise inverse gauss error function of the input.

Example::

  erfinv([0, 0.5, -1.]) = [0., 0.4769, -inf]

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L985

data: The input array.
out: Output array. (optional)

### exp

(exp {:keys [data out], :or {out nil}, :as opts})

Returns element-wise exponential value of the input.

.. math:: exp(x) = e^x \approx 2.718^x

Example::

  exp([0, 1, 2]) = [1., 2.71828175, 7.38905621]

The storage type of exp output is always dense

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L1044

data: The input array.
out: Output array. (optional)

### expand-dims

(expand-dims data axis)(expand-dims {:keys [data axis out], :or {out nil}, :as opts})

Inserts a new axis of size 1 into the array shape. For example, given x with shape (2,3,4), then expand_dims(x, axis=1) will return a new array with shape (2,1,3,4).

Defined in src/operator/tensor/matrix_op.cc:L416

data: Source input
axis: Position where new axis is to be inserted. Suppose that the input NDArray's dimension is ndim, the range of the inserted axis is [-ndim, ndim]
out: Output array. (optional)

### expm1

(expm1 {:keys [data out], :or {out nil}, :as opts})

Returns exp(x) - 1 computed element-wise on the input. This function provides greater precision than exp(x) - 1 for small values of x.

The storage type of expm1 output depends upon the input storage type:

- expm1(default) = default
- expm1(row_sparse) = row_sparse
- expm1(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L1189

data: The input array.
out: Output array. (optional)

### fill-element-0index

(fill-element-0index lhs mhs rhs)(fill-element-0index {:keys [lhs mhs rhs out], :or {out nil}, :as opts})

Fill one element of each line (row for python, column for R/Julia) in lhs according to index indicated by rhs and values indicated by mhs. This function assumes rhs uses 0-based indexing.

lhs: Left operand to the function.
mhs: Middle operand to the function.
rhs: Right operand to the function.
out: Output array. (optional)
### fix

(fix {:keys [data out], :or {out nil}, :as opts})

Returns element-wise rounded value to the nearest integer towards zero of the input.

Example::

  fix([-2.1, -1.9, 1.9, 2.1]) = [-2., -1., 1., 2.]

The storage type of fix output depends upon the input storage type:

- fix(default) = default
- fix(row_sparse) = row_sparse
- fix(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L843

data: The input array.
out: Output array. (optional)

### flatten

(flatten {:keys [data out], :or {out nil}, :as opts})

Flattens the input array into a 2-D array by collapsing the higher dimensions.

.. note:: Flatten is deprecated. Use flatten instead.

For an input array with shape (d1, d2, ..., dk), flatten operation reshapes the input array into an output array of shape (d1, d2*...*dk).

Note that the behavior of this function is different from numpy.ndarray.flatten, which behaves similar to mxnet.ndarray.reshape((-1,)).

Example::

  x = [[ [1,2,3], [4,5,6], [7,8,9] ], [ [1,2,3], [4,5,6], [7,8,9] ]],

  flatten(x) = [[ 1., 2., 3., 4., 5., 6., 7., 8., 9.],
                [ 1., 2., 3., 4., 5., 6., 7., 8., 9.]]

Defined in src/operator/tensor/matrix_op.cc:L291

data: Input array.
out: Output array. (optional)

### floor

(floor {:keys [data out], :or {out nil}, :as opts})

Returns element-wise floor of the input.

The floor of the scalar x is the largest integer i, such that i <= x.

Example::

  floor([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-3., -2., 1., 1., 2.]

The storage type of floor output depends upon the input storage type:

- floor(default) = default
- floor(row_sparse) = row_sparse
- floor(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L805

data: The input array.
out: Output array. (optional)

### ftml-update

(ftml-update weight grad d v z lr t)(ftml-update {:keys [weight grad d v z lr beta1 beta2 epsilon t wd rescale-grad clip-grad out], :or {beta1 nil, beta2 nil, epsilon nil, wd nil, rescale-grad nil, clip-grad nil, out nil}, :as opts})

The FTML optimizer described in *FTML - Follow the Moving Leader in Deep Learning*, available at http://proceedings.mlr.press/v70/zheng17a/zheng17a.pdf.

.. math::

  g_t = \nabla J(W_{t-1})\\
  v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2\\
  d_t = \frac{ 1 - \beta_1^t }{ \eta_t } (\sqrt{ \frac{ v_t }{ 1 - \beta_2^t } } + \epsilon)\\
  \sigma_t = d_t - \beta_1 d_{t-1}\\
  z_t = \beta_1 z_{ t-1 } + (1 - \beta_1^t) g_t - \sigma_t W_{t-1}\\
  W_t = - \frac{ z_t }{ d_t }

Defined in src/operator/optimizer_op.cc:L638

weight: Weight
grad: Gradient
d: Internal state d_t
v: Internal state v_t
z: Internal state z_t
lr: Learning rate.
beta1: Generally close to 0.5. (optional)
beta2: Generally close to 1. (optional)
epsilon: Epsilon to prevent div 0. (optional)
t: Number of update.
wd: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-grad: Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
out: Output array. (optional)
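A NumPy sketch of one FTML step, transcribed from the equations above (illustrative only; the beta1/beta2/epsilon values here are placeholders, not necessarily the operator's defaults):

Example (sketch)::

  import numpy as np

  def ftml_step(w, g, d_prev, v, z, lr, t, beta1=0.6, beta2=0.999, eps=1e-8):
      """One FTML update, following the math above."""
      v = beta2 * v + (1 - beta2) * g ** 2
      d = (1 - beta1 ** t) / lr * (np.sqrt(v / (1 - beta2 ** t)) + eps)
      sigma = d - beta1 * d_prev
      z = beta1 * z + (1 - beta1 ** t) * g - sigma * w
      w = -z / d
      return w, d, v, z

  w = np.zeros(3); d = np.zeros(3); v = np.zeros(3); z = np.zeros(3)
  g = np.array([0.1, -0.2, 0.3])  # gradient of J at w
  w, d, v, z = ftml_step(w, g, d, v, z, lr=0.01, t=1)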
### ftrl-update

(ftrl-update weight grad z n lr)(ftrl-update {:keys [weight grad z n lr lamda1 beta wd rescale-grad clip-gradient out], :or {lamda1 nil, beta nil, wd nil, rescale-grad nil, clip-gradient nil, out nil}, :as opts})

Update function for Ftrl optimizer. Referenced from *Ad Click Prediction: a View from the Trenches*, available at http://dl.acm.org/citation.cfm?id=2488200.

It updates the weights using::

  z += rescaled_grad - (sqrt(n + rescaled_grad**2) - sqrt(n)) * weight / learning_rate
  n += rescaled_grad**2
  w = (sign(z) * lamda1 - z) / ((beta + sqrt(n)) / learning_rate + wd) * (abs(z) > lamda1)

If w, z and n are all of row_sparse storage type, only the row slices whose indices appear in grad.indices are updated (for w, z and n)::

  z[row] += rescaled_grad[row] - (sqrt(n[row] + rescaled_grad[row]**2) - sqrt(n[row])) * weight[row] / learning_rate
  n[row] += rescaled_grad[row]**2
  w[row] = (sign(z[row]) * lamda1 - z[row]) / ((beta + sqrt(n[row])) / learning_rate + wd) * (abs(z[row]) > lamda1)

Defined in src/operator/optimizer_op.cc:L874

weight: Weight
grad: Gradient
z: z
n: Square of grad
lr: Learning rate
lamda1: The L1 regularization coefficient. (optional)
beta: Per-Coordinate Learning Rate beta. (optional)
wd: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
out: Output array. (optional)

### fully-connected

(fully-connected data weight bias num-hidden)(fully-connected {:keys [data weight bias num-hidden no-bias flatten out], :or {no-bias nil, flatten nil, out nil}, :as opts})

Applies a linear transformation: :math:Y = XW^T + b.

If flatten is set to be true, then the shapes are:

- **data**: (batch_size, x1, x2, ..., xn)
- **weight**: (num_hidden, x1 * x2 * ... * xn)
- **bias**: (num_hidden,)
- **out**: (batch_size, num_hidden)

If flatten is set to be false, then the shapes are:

- **data**: (x1, x2, ..., xn, input_dim)
- **weight**: (num_hidden, input_dim)
- **bias**: (num_hidden,)
- **out**: (x1, x2, ..., xn, num_hidden)

The learnable parameters include both weight and bias.

If no_bias is set to be true, then the bias term is ignored. (A NumPy sketch of this transformation appears after the gammaln entry below.)

.. Note:: The sparse support for FullyConnected is limited to forward evaluation with row_sparse weight and bias, where the length of weight.indices and bias.indices must be equal to num_hidden. This could be useful for model inference with row_sparse weights trained with importance sampling or noise contrastive estimation. To compute linear transformation with 'csr' sparse data, sparse.dot is recommended instead of sparse.FullyConnected.

Defined in src/operator/nn/fully_connected.cc:L277

data: Input data.
weight: Weight matrix.
bias: Bias parameter.
num-hidden: Number of hidden nodes of the output.
no-bias: Whether to disable bias parameter. (optional)
flatten: Whether to collapse all but the first axis of the input data tensor. (optional)
out: Output array. (optional)

### gamma

(gamma {:keys [data out], :or {out nil}, :as opts})

Returns the gamma function (extension of the factorial function to the reals), computed element-wise on the input array.

The storage type of gamma output is always dense

data: The input array.
out: Output array. (optional)

### gammaln

(gammaln {:keys [data out], :or {out nil}, :as opts})

Returns element-wise log of the absolute value of the gamma function of the input.

The storage type of gammaln output is always dense

data: The input array.
out: Output array. (optional)
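As promised under fully-connected above, a minimal NumPy sketch of Y = XW^T + b with flatten set to true (shapes follow the list above; all names are illustrative):

Example (sketch)::

  import numpy as np

  batch, num_hidden = 4, 8
  data = np.random.rand(batch, 3, 5, 5)        # (batch_size, x1, x2, x3)
  weight = np.random.rand(num_hidden, 3*5*5)   # (num_hidden, x1*x2*x3)
  bias = np.random.rand(num_hidden)            # (num_hidden,)

  # flatten=true: collapse all but the first axis, then Y = X W^T + b
  x = data.reshape(batch, -1)
  out = x @ weight.T + bias
  print(out.shape)  # (4, 8)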
### gather-nd

(gather-nd data indices)(gather-nd {:keys [data indices out], :or {out nil}, :as opts})

Gather elements or slices from data and store to a tensor whose shape is defined by indices.

Given data with shape (X_0, X_1, ..., X_{N-1}) and indices with shape (M, Y_0, ..., Y_{K-1}), the output will have shape (Y_0, ..., Y_{K-1}, X_M, ..., X_{N-1}), where M <= N. If M == N, output shape will simply be (Y_0, ..., Y_{K-1}).

The elements in output are defined as follows::

  output[y_0, ..., y_{K-1}, x_M, ..., x_{N-1}] = data[indices[0, y_0, ..., y_{K-1}], ..., indices[M-1, y_0, ..., y_{K-1}], x_M, ..., x_{N-1}]

(A NumPy sketch of this indexing rule appears after the identity-attach-kl-sparse-reg entry below.)

Examples::

  data = [[0, 1], [2, 3]]
  indices = [[1, 1, 0], [0, 1, 0]]
  gather_nd(data, indices) = [2, 3, 0]

  data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
  indices = [[0, 1], [1, 0]]
  gather_nd(data, indices) = [[3, 4], [5, 6]]

data: data
indices: indices
out: Output array. (optional)

### grid-generator

(grid-generator data transform-type)(grid-generator {:keys [data transform-type target-shape out], :or {target-shape nil, out nil}, :as opts})

Generates 2D sampling grid for bilinear sampling.

data: Input data to the function.
transform-type: The type of transformation. For affine, input data should be an affine matrix of size (batch, 6). For warp, input data should be an optical flow of size (batch, 2, h, w).
target-shape: Specifies the output shape (H, W). This is required if transformation type is affine. If transformation type is warp, this parameter is ignored. (optional)
out: Output array. (optional)

### hard-sigmoid

(hard-sigmoid {:keys [data alpha beta out], :or {alpha nil, beta nil, out nil}, :as opts})

Computes hard sigmoid of x element-wise.

.. math:: y = max(0, min(1, alpha * x + beta))

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L133

data: The input array.
alpha: Slope of hard sigmoid (optional)
beta: Bias of hard sigmoid. (optional)
out: Output array. (optional)

### identity-attach-kl-sparse-reg

(identity-attach-kl-sparse-reg {:keys [data sparseness-target penalty momentum out], :or {sparseness-target nil, penalty nil, momentum nil, out nil}, :as opts})

Apply a sparse regularization to the output of a sigmoid activation function.

data: Input data.
sparseness-target: The sparseness target (optional)
penalty: The tradeoff parameter for the sparseness penalty (optional)
momentum: The momentum for running average (optional)
out: Output array. (optional)
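As referenced under gather-nd above: for the M == N case the indexing rule reduces to NumPy advanced indexing with one index row per axis; a quick sketch reproducing the first example:

Example (sketch)::

  import numpy as np

  data = np.array([[0, 1], [2, 3]])
  indices = np.array([[1, 1, 0], [0, 1, 0]])  # shape (M, Y_0) with M == N == 2

  # output[y] = data[indices[0, y], indices[1, y]]
  out = data[tuple(indices)]
  print(out)  # [2 3 0], matching the first example above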
### instance-norm

(instance-norm data gamma beta)(instance-norm {:keys [data gamma beta eps out], :or {eps nil, out nil}, :as opts})

Applies instance normalization to the n-dimensional input array.

This operator takes an n-dimensional input array where (n > 2) and normalizes the input using the following formula:

.. math:: out = \frac{x - mean[data]}{ \sqrt{Var[data]} + \epsilon} * gamma + beta

This layer is similar to batch normalization layer (BatchNorm) with two differences: first, the normalization is carried out per example (instance), not over a batch. Second, the same normalization is applied both at test and train time. This operation is also known as contrast normalization.

If the input data is of shape [batch, channel, spatial_dim1, spatial_dim2, ...], gamma and beta parameters must be vectors of shape [channel].

This implementation is based on the paper:

.. [1] Instance Normalization: The Missing Ingredient for Fast Stylization, D. Ulyanov, A. Vedaldi, V. Lempitsky, 2016 (arXiv:1607.08022v2).

Examples::

  // Input of shape (2,1,2)
  x = [[[ 1.1, 2.2]],
       [[ 3.3, 4.4]]]

  // gamma parameter of length 1
  gamma = [1.5]

  // beta parameter of length 1
  beta = [0.5]

  // Instance normalization is calculated with the above formula
  InstanceNorm(x,gamma,beta) = [[[-0.997527 , 1.99752665]],
                                [[-0.99752653, 1.99752724]]]

Defined in src/operator/instance_norm.cc:L95

data: An n-dimensional input array (n > 2) of the form [batch, channel, spatial_dim1, spatial_dim2, ...].
gamma: A vector of length 'channel', which multiplies the normalized input.
beta: A vector of length 'channel', which is added to the product of the normalized input and the weight.
eps: An epsilon parameter to prevent division by 0. (optional)
out: Output array. (optional)

### khatri-rao

(khatri-rao {:keys [args out], :or {out nil}, :as opts})

Computes the Khatri-Rao product of the input matrices.

Given a collection of :math:n input matrices,

.. math:: A_1 \in \mathbb{R}^{M_1 \times N}, \ldots, A_n \in \mathbb{R}^{M_n \times N},

the (column-wise) Khatri-Rao product is defined as the matrix,

.. math:: X = A_1 \otimes \cdots \otimes A_n \in \mathbb{R}^{(M_1 \cdots M_n) \times N},

where the :math:k-th column is equal to the column-wise outer product :math:{A_1}_k \otimes \cdots \otimes {A_n}_k where :math:{A_i}_k is the k-th column of the i-th matrix.

Example::

  >>> A = mx.nd.array([[1, -1],
  >>>                  [2, -3]])
  >>> B = mx.nd.array([[1, 4],
  >>>                  [2, 5],
  >>>                  [3, 6]])
  >>> C = mx.nd.khatri_rao(A, B)
  >>> print(C.asnumpy())
  [[  1.  -4.]
   [  2.  -5.]
   [  3.  -6.]
   [  2. -12.]
   [  4. -15.]
   [  6. -18.]]

Defined in src/operator/contrib/krprod.cc:L108

args: Positional input matrices
out: Output array. (optional)

### l2-normalization

(l2-normalization {:keys [data eps mode out], :or {eps nil, mode nil, out nil}, :as opts})

Normalize the input array using the L2 norm.

For 1-D NDArray, it computes::

  out = data / sqrt(sum(data ** 2) + eps)

For N-D NDArray, if the input array has shape (N, N, ..., N),

with mode = instance, it normalizes each instance in the multidimensional array by its L2 norm.::

  for i in 0...N
    out[i,:,:,...,:] = data[i,:,:,...,:] / sqrt(sum(data[i,:,:,...,:] ** 2) + eps)

with mode = channel, it normalizes each channel in the array by its L2 norm.::

  for i in 0...N
    out[:,i,:,...,:] = data[:,i,:,...,:] / sqrt(sum(data[:,i,:,...,:] ** 2) + eps)

with mode = spatial, it normalizes the cross channel norm for each position in the array by its L2 norm.::

  for dim in 2...N
    for i in 0...N
      out[.....,i,...] = take(out, indices=i, axis=dim) / sqrt(sum(take(out, indices=i, axis=dim) ** 2) + eps)

Example::

  x = [[[1,2], [3,4]], [[2,2], [5,6]]]

  L2Normalization(x, mode='instance')
  = [[[ 0.18257418 0.36514837]
      [ 0.54772252 0.73029673]]
     [[ 0.24077171 0.24077171]
      [ 0.60192931 0.72231513]]]

  L2Normalization(x, mode='channel')
  = [[[ 0.31622776 0.44721359]
      [ 0.94868326 0.89442718]]
     [[ 0.37139067 0.31622776]
      [ 0.92847669 0.94868326]]]

  L2Normalization(x, mode='spatial')
  = [[[ 0.44721359 0.89442718]
      [ 0.60000002 0.80000001]]
     [[ 0.70710677 0.70710677]
      [ 0.6401844 0.76822126]]]

Defined in src/operator/l2_normalization.cc:L196

data: Input array to normalize.
eps: A small constant for numerical stability. (optional)
mode: Specify the dimension along which to compute L2 norm. (optional)
out: Output array. (optional)
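A NumPy sketch of the mode = instance case above (illustrative):

Example (sketch)::

  import numpy as np

  eps = 1e-10
  x = np.array([[[1., 2.], [3., 4.]], [[2., 2.], [5., 6.]]])

  # mode='instance': each x[i] is divided by its own L2 norm
  out = np.stack([xi / np.sqrt((xi ** 2).sum() + eps) for xi in x])
  print(out[0])  # [[0.18257419 0.36514837], [0.54772256 0.73029674]]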
### layer-norm

(layer-norm data gamma beta)(layer-norm {:keys [data gamma beta axis eps output-mean-var out], :or {axis nil, eps nil, output-mean-var nil, out nil}, :as opts})

Layer normalization.

Normalizes the channels of the input tensor by mean and variance, and applies a scale gamma as well as offset beta.

Assume the input has more than one dimension and we normalize along axis 1. We first compute the mean and variance along this axis and then compute the normalized output, which has the same shape as input, as following:

.. math:: out = \frac{data - mean(data, axis)}{\sqrt{var(data, axis) + \epsilon}} * gamma + beta

Both gamma and beta are learnable parameters.

Unlike BatchNorm and InstanceNorm, the *mean* and *var* are computed along the channel dimension.

Assume the input has size *k* on axis 1, then both gamma and beta have shape *(k,)*.

If output_mean_var is set to be true, then outputs both data_mean and data_std. Note that no gradient will be passed through these two outputs.

The parameter axis specifies which axis of the input shape denotes the 'channel' (separately normalized groups). The default is -1, which sets the channel axis to be the last item in the input shape.

Defined in src/operator/nn/layer_norm.cc:L155

data: Input data to layer normalization
gamma: gamma array
beta: beta array
axis: The axis to perform layer normalization. Usually, this should be the axis of the channel dimension. Negative values mean indexing from right to left. (optional)
eps: An epsilon parameter to prevent division by 0. (optional)
output-mean-var: Output the mean and std calculated along the given axis. (optional)
out: Output array. (optional)

### leaky-re-lu

(leaky-re-lu data gamma)(leaky-re-lu {:keys [data gamma act-type slope lower-bound upper-bound out], :or {act-type nil, slope nil, lower-bound nil, upper-bound nil, out nil}, :as opts})

Applies Leaky rectified linear unit activation element-wise to the input.

Leaky ReLUs attempt to fix the "dying ReLU" problem by allowing a small slope when the input is negative and has a slope of one when input is positive.

The following modified ReLU Activation functions are supported:

- *elu*: Exponential Linear Unit. y = x > 0 ? x : slope * (exp(x)-1)
- *selu*: Scaled Exponential Linear Unit. y = lambda * (x > 0 ? x : alpha * (exp(x) - 1)) where *lambda = 1.0507009873554804934193349852946* and *alpha = 1.6732632423543772848170429916717*.
- *leaky*: Leaky ReLU. y = x > 0 ? x : slope * x
- *prelu*: Parametric ReLU. This is same as *leaky* except that slope is learnt during training.
- *rrelu*: Randomized ReLU. same as *leaky* but the slope is uniformly and randomly chosen from *[lower_bound, upper_bound)* for training, while fixed to be *(lower_bound+upper_bound)/2* for inference.

Defined in src/operator/leaky_relu.cc:L65

data: Input data to activation function.
gamma: Slope parameter for PReLU. Only required when act_type is 'prelu'. It should be either a vector of size 1, or the same size as the second dimension of data.
act-type: Activation function to be applied. (optional)
slope: Init slope for the activation. (For leaky and elu only) (optional)
lower-bound: Lower bound of random slope. (For rrelu only) (optional)
upper-bound: Upper bound of random slope. (For rrelu only) (optional)
out: Output array. (optional)
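A NumPy sketch of the leaky and elu variants listed above (illustrative; the slope value is arbitrary):

Example (sketch)::

  import numpy as np

  def leaky(x, slope=0.25):
      return np.where(x > 0, x, slope * x)

  def elu(x, slope=0.25):
      return np.where(x > 0, x, slope * (np.exp(x) - 1))

  x = np.array([-2.0, -0.5, 0.0, 1.5])
  print(leaky(x))  # [-0.5    -0.125   0.      1.5   ]
  print(elu(x))    # [-0.2162 -0.0984  0.      1.5   ] (approx.)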
### linear-regression-output

(linear-regression-output data label)(linear-regression-output {:keys [data label grad-scale out], :or {grad-scale nil, out nil}, :as opts})

Computes and optimizes for squared loss during backward propagation. Just outputs data during forward propagation.

If :math:\hat{y}_i is the predicted value of the i-th sample, and :math:y_i is the corresponding target value, then the squared loss estimated over :math:n samples is defined as

:math:\text{SquaredLoss}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_2

.. note:: Use the LinearRegressionOutput as the final output layer of a net.

The storage type of label can be default or csr

- LinearRegressionOutput(default, default) = default
- LinearRegressionOutput(default, csr) = default

By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example. The parameter grad_scale can be used to change this scale to grad_scale/m.

Defined in src/operator/regression_output.cc:L92

data: Input data to the function.
label: Input label to the function.
grad-scale: Scale the gradient by a float factor (optional)
out: Output array. (optional)

### log

(log {:keys [data out], :or {out nil}, :as opts})

Returns element-wise Natural logarithmic value of the input. The natural logarithm is logarithm in base *e*, so that log(exp(x)) = x.

The storage type of log output is always dense

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L1057

data: The input array.
out: Output array. (optional)

### log-softmax

(log-softmax {:keys [data axis temperature dtype out], :or {axis nil, temperature nil, dtype nil, out nil}, :as opts})

Computes the log softmax of the input. This is equivalent to computing softmax followed by log.

Examples::

  >>> x = mx.nd.array([1, 2, .1])
  >>> mx.nd.log_softmax(x).asnumpy()
  array([-1.41702998, -0.41702995, -2.31702995], dtype=float32)

  >>> x = mx.nd.array( [[1, 2, .1],[.1, 2, 1]] )
  >>> mx.nd.log_softmax(x, axis=0).asnumpy()
  array([[-0.34115392, -0.69314718, -1.24115396],
         [-1.24115396, -0.69314718, -0.34115392]], dtype=float32)

data: The input array.
axis: The axis along which to compute softmax. (optional)
temperature: Temperature parameter in softmax (optional)
dtype: DType of the output in case this can't be inferred. Defaults to the same as input's dtype if not defined (dtype=None). (optional)
out: Output array. (optional)

### log10

(log10 {:keys [data out], :or {out nil}, :as opts})

Returns element-wise Base-10 logarithmic value of the input, so that 10**log10(x) = x.

The storage type of log10 output is always dense

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L1074

data: The input array.
out: Output array. (optional)

### log1p

(log1p {:keys [data out], :or {out nil}, :as opts})

Returns element-wise log(1 + x) value of the input. This function is more accurate than log(1 + x) for small x so that :math:1+x\approx 1.

The storage type of log1p output depends upon the input storage type:

- log1p(default) = default
- log1p(row_sparse) = row_sparse
- log1p(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L1171

data: The input array.
out: Output array. (optional)

### log2

(log2 {:keys [data out], :or {out nil}, :as opts})

Returns element-wise Base-2 logarithmic value of the input, so that 2**log2(x) = x.

The storage type of log2 output is always dense

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L1086

data: The input array.
out: Output array. (optional)

### logical-not

(logical-not {:keys [data out], :or {out nil}, :as opts})

Returns the result of logical NOT (!) function

Example::

  logical_not([-2., 0., 1.]) = [0., 1., 0.]

data: The input array.
out: Output array. (optional)
### logistic-regression-output

(logistic-regression-output data label)(logistic-regression-output {:keys [data label grad-scale out], :or {grad-scale nil, out nil}, :as opts})

Applies a logistic function to the input.

The logistic function, also known as the sigmoid function, is computed as :math:\frac{1}{1+exp(-\textbf{x})}.

Commonly, the sigmoid is used to squash the real-valued output of a linear model :math:w^T x + b into the [0,1] range so that it can be interpreted as a probability. It is suitable for binary classification or probability prediction tasks.

.. note:: Use the LogisticRegressionOutput as the final output layer of a net.

The storage type of label can be default or csr

- LogisticRegressionOutput(default, default) = default
- LogisticRegressionOutput(default, csr) = default

The loss function used is the Binary Cross Entropy Loss:

:math:-{(y\log(p) + (1 - y)\log(1 - p))}

Where y is the ground truth probability of positive outcome for a given example, and p the probability predicted by the model. By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example. The parameter grad_scale can be used to change this scale to grad_scale/m.

Defined in src/operator/regression_output.cc:L152

data: Input data to the function.
label: Input label to the function.
grad-scale: Scale the gradient by a float factor (optional)
out: Output array. (optional)

### lrn

(lrn data nsize)(lrn {:keys [data alpha beta knorm nsize out], :or {alpha nil, beta nil, knorm nil, out nil}, :as opts})

Applies local response normalization to the input.

The local response normalization layer performs "lateral inhibition" by normalizing over local input regions.

If :math:a_{x,y}^{i} is the activity of a neuron computed by applying kernel :math:i at position :math:(x, y) and then applying the ReLU nonlinearity, the response-normalized activity :math:b_{x,y}^{i} is given by the expression:

.. math:: b_{x,y}^{i} = \frac{a_{x,y}^{i}}{\Bigg({k + \frac{\alpha}{n} \sum_{j=max(0, i-\frac{n}{2})}^{min(N-1, i+\frac{n}{2})} (a_{x,y}^{j})^{2}}\Bigg)^{\beta}}

where the sum runs over :math:n "adjacent" kernel maps at the same spatial position, and :math:N is the total number of kernels in the layer.

Defined in src/operator/nn/lrn.cc:L164

data: Input data to LRN
alpha: The variance scaling parameter :math:\alpha in the LRN expression. (optional)
beta: The power parameter :math:\beta in the LRN expression. (optional)
knorm: The parameter :math:k in the LRN expression. (optional)
nsize: normalization window width in elements.
out: Output array. (optional)

### mae-regression-output

(mae-regression-output data label)(mae-regression-output {:keys [data label grad-scale out], :or {grad-scale nil, out nil}, :as opts})

Computes mean absolute error of the input.

MAE is a risk metric corresponding to the expected value of the absolute error.

If :math:\hat{y}_i is the predicted value of the i-th sample, and :math:y_i is the corresponding target value, then the mean absolute error (MAE) estimated over :math:n samples is defined as

:math:\text{MAE}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_1

.. note:: Use the MAERegressionOutput as the final output layer of a net.
The storage type of label can be default or csr

- MAERegressionOutput(default, default) = default
- MAERegressionOutput(default, csr) = default

By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example. The parameter grad_scale can be used to change this scale to grad_scale/m.

Defined in src/operator/regression_output.cc:L120

data: Input data to the function.
label: Input label to the function.
grad-scale: Scale the gradient by a float factor (optional)
out: Output array. (optional)

### make-loss

(make-loss {:keys [data out], :or {out nil}, :as opts})

Make your own loss function in network construction.

This operator accepts a customized loss function symbol as a terminal loss and the symbol should be an operator with no backward dependency. The output of this function is the gradient of loss with respect to the input data.

For example, if you are making a cross entropy loss function. Assume out is the predicted output and label is the true label, then the cross entropy can be defined as::

  cross_entropy = label * log(out) + (1 - label) * log(1 - out)
  loss = make_loss(cross_entropy)

We will need to use make_loss when we are creating our own loss function or we want to combine multiple loss functions. Also we may want to stop some variables' gradients from backpropagation. See more detail in BlockGrad or stop_gradient.

The storage type of make_loss output depends upon the input storage type:

- make_loss(default) = default
- make_loss(row_sparse) = row_sparse

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L332

data: The input array.
out: Output array. (optional)

### max

(max {:keys [data axis keepdims exclude out], :or {axis nil, keepdims nil, exclude nil, out nil}, :as opts})

Computes the max of array elements over given axes.

data: The input
axis: The axis or axes along which to perform the reduction. The default, axis=(), will compute over all elements into a scalar array with shape (1,). If axis is int, a reduction is performed on a particular axis. If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple. If exclude is true, reduction will be performed on the axes that are NOT in axis instead. Negative values mean indexing from right to left. (optional)
keepdims: If this is set to True, the reduced axes are left in the result as dimension with size one. (optional)
exclude: Whether to perform reduction on axis that are NOT in axis instead. (optional)
out: Output array. (optional)

### mean

(mean {:keys [data axis keepdims exclude out], :or {axis nil, keepdims nil, exclude nil, out nil}, :as opts})

Computes the mean of array elements over given axes.

data: The input
axis: The axis or axes along which to perform the reduction. The default, axis=(), will compute over all elements into a scalar array with shape (1,). If axis is int, a reduction is performed on a particular axis. If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple. If exclude is true, reduction will be performed on the axes that are NOT in axis instead. Negative values mean indexing from right to left. (optional)
keepdims: If this is set to True, the reduced axes are left in the result as dimension with size one. (optional)
exclude: Whether to perform reduction on axis that are NOT in axis instead. (optional)
out: Output array. (optional)

### min

(min {:keys [data axis keepdims exclude out], :or {axis nil, keepdims nil, exclude nil, out nil}, :as opts})

Computes the min of array elements over given axes.
data: The input
axis: The axis or axes along which to perform the reduction. The default, axis=(), will compute over all elements into a scalar array with shape (1,). If axis is int, a reduction is performed on a particular axis. If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple. If exclude is true, reduction will be performed on the axes that are NOT in axis instead. Negative values mean indexing from right to left. (optional)
keepdims: If this is set to True, the reduced axes are left in the result as dimension with size one. (optional)
exclude: Whether to perform reduction on axis that are NOT in axis instead. (optional)
out: Output array. (optional)

### moments

(moments {:keys [data axes keepdims out], :or {axes nil, keepdims nil, out nil}, :as opts})

Calculate the mean and variance of data.

The mean and variance are calculated by aggregating the contents of data across axes. If x is 1-D and axes = [0] this is just the mean and variance of a vector.

Example::

  x = [[1, 2, 3], [4, 5, 6]]

  mean, var = moments(data=x, axes=[0])
  mean = [2.5, 3.5, 4.5]
  var = [2.25, 2.25, 2.25]

  mean, var = moments(data=x, axes=[1])
  mean = [2.0, 5.0]
  var = [0.66666667, 0.66666667]

  mean, var = moments(data=x, axes=[0, 1])
  mean = [3.5]
  var = [2.9166667]

Defined in src/operator/nn/moments.cc:L54

data: Input ndarray
axes: Array of ints. Axes along which to compute mean and variance. (optional)
keepdims: produce moments with the same dimensionality as the input. (optional)
out: Output array. (optional)

### mp-nag-mom-update

(mp-nag-mom-update weight grad mom weight32 lr)(mp-nag-mom-update {:keys [weight grad mom weight32 lr momentum wd rescale-grad clip-gradient out], :or {momentum nil, wd nil, rescale-grad nil, clip-gradient nil, out nil}, :as opts})

Update function for multi-precision Nesterov Accelerated Gradient (NAG) optimizer.

Defined in src/operator/optimizer_op.cc:L743

weight: Weight
grad: Gradient
mom: Momentum
weight32: Weight32
lr: Learning rate
momentum: The decay rate of momentum estimates at each epoch. (optional)
wd: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
out: Output array. (optional)

### mp-sgd-mom-update

(mp-sgd-mom-update weight grad mom weight32 lr)(mp-sgd-mom-update {:keys [weight grad mom weight32 lr momentum wd rescale-grad clip-gradient lazy-update out], :or {momentum nil, wd nil, rescale-grad nil, clip-gradient nil, lazy-update nil, out nil}, :as opts})

Updater function for multi-precision sgd optimizer

weight: Weight
grad: Gradient
mom: Momentum
weight32: Weight32
lr: Learning rate
momentum: The decay rate of momentum estimates at each epoch. (optional)
wd: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
lazy-update: If true, lazy updates are applied if gradient's stype is row_sparse and both weight and momentum have the same stype (optional)
out: Output array. (optional)

### mp-sgd-update

(mp-sgd-update weight grad weight32 lr)(mp-sgd-update {:keys [weight grad weight32 lr wd rescale-grad clip-gradient lazy-update out], :or {wd nil, rescale-grad nil, clip-gradient nil, lazy-update nil, out nil}, :as opts})

Updater function for multi-precision sgd optimizer

weight: Weight
grad: gradient
weight32: Weight32
lr: Learning rate
wd: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
lazy-update: If true, lazy updates are applied if gradient's stype is row_sparse. (optional)
out: Output array. (optional)

### multi-all-finite

(multi-all-finite {:keys [data num-arrays init-output out], :or {num-arrays nil, init-output nil, out nil}, :as opts})

Check if all the float numbers in all the arrays are finite (used for AMP)

Defined in src/operator/contrib/all_finite.cc:L133

data: Arrays
num-arrays: Number of arrays. (optional)
init-output: Initialize output to 1. (optional)
out: Output array. (optional)

### multi-mp-sgd-mom-update

(multi-mp-sgd-mom-update data lrs wds)(multi-mp-sgd-mom-update {:keys [data lrs wds momentum rescale-grad clip-gradient num-weights out], :or {momentum nil, rescale-grad nil, clip-gradient nil, num-weights nil, out nil}, :as opts})

Momentum update function for multi-precision Stochastic Gradient Descent (SGD) optimizer.

Momentum update has better convergence rates on neural networks. Mathematically it looks like below:

.. math::

  v_1 = \alpha * \nabla J(W_0)\\
  v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\
  W_t = W_{t-1} + v_t

It updates the weights using::

  v = momentum * v - learning_rate * gradient
  weight += v

Where the parameter momentum is the decay rate of momentum estimates at each epoch.

Defined in src/operator/optimizer_op.cc:L470

data: Weights
lrs: Learning rates.
wds: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
momentum: The decay rate of momentum estimates at each epoch. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
num-weights: Number of updated weights. (optional)
out: Output array. (optional)
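A tiny NumPy sketch of one step of the momentum update above (illustrative):

Example (sketch)::

  import numpy as np

  weight = np.array([1.0, -2.0])
  v = np.zeros_like(weight)
  momentum, learning_rate = 0.9, 0.1

  grad = np.array([0.5, -0.5])  # gradient of J at the current weight
  v = momentum * v - learning_rate * grad
  weight += v
  print(weight)  # [ 0.95 -1.95]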
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
num-weights: Number of updated weights. (optional)
out: Output array. (optional)

### multi-sgd-mom-update

(multi-sgd-mom-update data lrs wds)
(multi-sgd-mom-update {:keys [data lrs wds momentum rescale-grad clip-gradient num-weights out], :or {momentum nil, rescale-grad nil, clip-gradient nil, num-weights nil, out nil}, :as opts})

Momentum update function for the Stochastic Gradient Descent (SGD) optimizer. Momentum update has better convergence rates on neural networks. Mathematically it looks like below:

.. math::

  v_1 = \alpha * \nabla J(W_0)\\
  v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\
  W_t = W_{t-1} + v_t

It updates the weights using::

  v = momentum * v - learning_rate * gradient
  weight += v

Where the parameter momentum is the decay rate of momentum estimates at each epoch.

Defined in src/operator/optimizer_op.cc:L372

data: Weights, gradients and momentum
lrs: Learning rates.
wds: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
momentum: The decay rate of momentum estimates at each epoch. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
num-weights: Number of updated weights. (optional)
out: Output array. (optional)

### multi-sgd-update

(multi-sgd-update data lrs wds)
(multi-sgd-update {:keys [data lrs wds rescale-grad clip-gradient num-weights out], :or {rescale-grad nil, clip-gradient nil, num-weights nil, out nil}, :as opts})

Update function for the Stochastic Gradient Descent (SGD) optimizer::

  weight = weight - learning_rate * (gradient + wd * weight)

Defined in src/operator/optimizer_op.cc:L327

data: Weights
lrs: Learning rates.
wds: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
num-weights: Number of updated weights. (optional)
out: Output array. (optional)

### nag-mom-update

(nag-mom-update weight grad mom lr)
(nag-mom-update {:keys [weight grad mom lr momentum wd rescale-grad clip-gradient out], :or {momentum nil, wd nil, rescale-grad nil, clip-gradient nil, out nil}, :as opts})

Update function for the Nesterov Accelerated Gradient (NAG) optimizer. It updates the weights using the following formula:

.. math::

  v_t = \gamma v_{t-1} + \eta * \nabla J(W_{t-1} - \gamma v_{t-1})\\
  W_t = W_{t-1} - v_t

Where :math:\eta is the learning rate of the optimizer, :math:\gamma is the decay rate of the momentum estimate, :math:v_t is the update vector at time step t, and :math:W_t is the weight vector at time step t.

Defined in src/operator/optimizer_op.cc:L724

weight: Weight
grad: Gradient
mom: Momentum
lr: Learning rate
momentum: The decay rate of momentum estimates at each epoch. (optional)
wd: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
out: Output array. (optional)
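As a usage sketch of one NAG step (the shapes, hyperparameters, and the in-place use of :out are illustrative assumptions, not part of this reference):

```clojure
;; one hypothetical NAG step on dummy 2x2 parameters
(def weight (ndarray/ones [2 2]))
(def grad   (ndarray/ones [2 2]))
(def mom    (ndarray/zeros [2 2]))

;; write the result back into weight via :out
(ndarray/nag-mom-update {:weight weight :grad grad :mom mom
                         :lr 0.01 :momentum 0.9 :out weight})
```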
### nanprod

(nanprod {:keys [data axis keepdims exclude out], :or {axis nil, keepdims nil, exclude nil, out nil}, :as opts})

Computes the product of array elements over given axes treating Not a Numbers (NaN) as one.

data: The input
axis: The axis or axes along which to perform the reduction. The default, axis=(), will compute over all elements into a scalar array with shape (1,). If axis is int, a reduction is performed on a particular axis. If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple. If exclude is true, reduction will be performed on the axes that are NOT in axis instead. Negative values mean indexing from right to left. (optional)
keepdims: If this is set to True, the reduced axes are left in the result as dimensions with size one. (optional)
exclude: Whether to perform reduction on axes that are NOT in axis instead. (optional)
out: Output array. (optional)

### nansum

(nansum {:keys [data axis keepdims exclude out], :or {axis nil, keepdims nil, exclude nil, out nil}, :as opts})

Computes the sum of array elements over given axes treating Not a Numbers (NaN) as zero.

data: The input
axis: The axis or axes along which to perform the reduction. The default, axis=(), will compute over all elements into a scalar array with shape (1,). If axis is int, a reduction is performed on a particular axis. If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple. If exclude is true, reduction will be performed on the axes that are NOT in axis instead. Negative values mean indexing from right to left. (optional)
keepdims: If this is set to True, the reduced axes are left in the result as dimensions with size one. (optional)
exclude: Whether to perform reduction on axes that are NOT in axis instead. (optional)
out: Output array. (optional)

### negative

(negative {:keys [data out], :or {out nil}, :as opts})

Numerical negative of the argument, element-wise. The storage type of negative output depends upon the input storage type:

- negative(default) = default
- negative(row_sparse) = row_sparse
- negative(csr) = csr

data: The input array.
out: Output array. (optional)

### norm

(norm {:keys [data ord axis out-dtype keepdims out], :or {ord nil, axis nil, out-dtype nil, keepdims nil, out nil}, :as opts})

Computes the norm on an NDArray. This operator computes the norm on an NDArray with the specified axis, depending on the value of the ord parameter. By default, it computes the L2 norm on the entire array. Currently only ord=2 supports sparse ndarrays.

Examples::

  x = [[[1, 2], [3, 4]], [[2, 2], [5, 6]]]
  norm(x, ord=2, axis=1) = [[3.1622777 4.472136 ] [5.3851647 6.3245554]]
  norm(x, ord=1, axis=1) = [[4., 6.], [7., 8.]]
  rsp = x.cast_storage('row_sparse')
  norm(rsp) = [5.47722578]
  csr = x.cast_storage('csr')
  norm(csr) = [5.47722578]

data: The input
ord: Order of the norm. Currently ord=1 and ord=2 are supported. (optional)
axis: The axis or axes along which to perform the reduction. The default, axis=(), will compute over all elements into a scalar array with shape (1,). If axis is int, a reduction is performed on a particular axis. If axis is a 2-tuple, it specifies the axes that hold 2-D matrices, and the matrix norms of these matrices are computed. (optional)
out-dtype: The data type of the output. (optional)
keepdims: If this is set to True, the reduced axis is left in the result as a dimension with size one. (optional)
out: Output array. (optional)
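A small sketch of `norm`, assuming the `ndarray` alias from the earlier sketch; the expected values follow from the definitions above:

```clojure
(def x (ndarray/array [1 2 3 4] [2 2]))

;; L2 norm over the whole array: sqrt(1+4+9+16) = 5.477226
(ndarray/->vec (ndarray/norm {:data x}))

;; L1 norm along axis 0 (column sums of absolute values): [4.0 6.0]
(ndarray/->vec (ndarray/norm {:data x :ord 1 :axis 0}))
```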
### one-hot

(one-hot indices depth)
(one-hot {:keys [indices depth on-value off-value dtype out], :or {on-value nil, off-value nil, dtype nil, out nil}, :as opts})

Returns a one-hot array. The locations represented by indices take value on_value, while all other locations take value off_value. A one_hot operation with indices of shape (i0, i1) and depth of d would result in an output array of shape (i0, i1, d) with::

  output[i,j,:] = off_value
  output[i,j,indices[i,j]] = on_value

Examples::

  one_hot([1,0,2,0], 3) = [[ 0. 1. 0.]
                           [ 1. 0. 0.]
                           [ 0. 0. 1.]
                           [ 1. 0. 0.]]

  one_hot([1,0,2,0], 3, on_value=8, off_value=1, dtype='int32') = [[1 8 1]
                                                                   [8 1 1]
                                                                   [1 1 8]
                                                                   [8 1 1]]

  one_hot([[1,0],[1,0],[2,0]], 3) = [[[ 0. 1. 0.] [ 1. 0. 0.]]
                                     [[ 0. 1. 0.] [ 1. 0. 0.]]
                                     [[ 0. 0. 1.] [ 1. 0. 0.]]]

Defined in src/operator/tensor/indexing_op.cc:L799

indices: array of locations where to set on_value
depth: Depth of the one hot dimension.
on-value: The value assigned to the locations represented by indices. (optional)
off-value: The value assigned to the locations not represented by indices. (optional)
dtype: DType of the output (optional)
out: Output array. (optional)

### ones-like

(ones-like {:keys [data out], :or {out nil}, :as opts})

Return an array of ones with the same shape and type as the input array.

Examples::

  x = [[ 0., 0., 0.],
       [ 0., 0., 0.]]
  ones_like(x) = [[ 1., 1., 1.],
                  [ 1., 1., 1.]]

data: The input
out: Output array. (optional)

### pad

(pad data mode pad-width)
(pad {:keys [data mode pad-width constant-value out], :or {constant-value nil, out nil}, :as opts})

Pads an input array with a constant or edge values of the array.

.. note:: `Pad` is deprecated. Use `pad` instead.

.. note:: Current implementation only supports 4D and 5D input arrays with padding applied only on axes 1, 2 and 3. Expects axes 4 and 5 in pad_width to be zero.

This operation pads an input array with either a constant_value or edge values along each axis of the input array. The amount of padding is specified by pad_width. pad_width is a tuple of integer padding widths for each axis of the format (before_1, after_1, ... , before_N, after_N). The pad_width should be of length 2*N where N is the number of dimensions of the array. For dimension N of the input array, before_N and after_N indicate how many values to add before and after the elements of the array along dimension N. The widths of the higher two dimensions before_1, after_1, before_2, after_2 must be 0.

Example::

  x = [[[[ 1. 2. 3.] [ 4. 5. 6.]] [[ 7. 8. 9.] [ 10. 11. 12.]]]
       [[[ 11. 12. 13.] [ 14. 15. 16.]] [[ 17. 18. 19.] [ 20. 21. 22.]]]]

  pad(x, mode="edge", pad_width=(0,0,0,0,1,1,1,1)) =
      [[[[ 1. 1. 2. 3. 3.] [ 1. 1. 2. 3. 3.] [ 4. 4. 5. 6. 6.] [ 4. 4. 5. 6. 6.]]
        [[ 7. 7. 8. 9. 9.] [ 7. 7. 8. 9. 9.] [ 10. 10. 11. 12. 12.] [ 10. 10. 11. 12. 12.]]]
       [[[ 11. 11. 12. 13. 13.] [ 11. 11. 12. 13. 13.] [ 14. 14. 15. 16. 16.] [ 14. 14. 15. 16. 16.]]
        [[ 17. 17. 18. 19. 19.] [ 17. 17. 18. 19. 19.] [ 20. 20. 21. 22. 22.] [ 20. 20. 21. 22. 22.]]]]

  pad(x, mode="constant", constant_value=0, pad_width=(0,0,0,0,1,1,1,1)) =
      [[[[ 0. 0. 0. 0. 0.] [ 0. 1. 2. 3. 0.] [ 0. 4. 5. 6. 0.] [ 0. 0. 0. 0. 0.]]
        [[ 0. 0. 0. 0. 0.] [ 0. 7. 8. 9. 0.] [ 0. 10. 11. 12. 0.] [ 0. 0. 0. 0. 0.]]]
       [[[ 0. 0. 0. 0. 0.] [ 0. 11. 12. 13. 0.] [ 0. 14. 15. 16. 0.] [ 0. 0. 0. 0. 0.]]
        [[ 0. 0. 0. 0. 0.] [ 0. 17. 18. 19. 0.] [ 0. 20. 21. 22. 0.] [ 0. 0. 0. 0. 0.]]]]

data: An n-dimensional input array.
mode: Padding type to use. "constant" pads with constant_value, "edge" pads using the edge values of the input array, "reflect" pads by reflecting values with respect to the edges.
pad-width: Widths of the padding regions applied to the edges of each axis. It is a tuple of integer padding widths for each axis of the format (before_1, after_1, ... , before_N, after_N). It should be of length 2*N where N is the number of dimensions of the array. This is equivalent to pad_width in numpy.pad, but flattened.
constant-value: The value used for padding when mode is "constant". (optional)
out: Output array. (optional)
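A minimal `pad` sketch; the vector literal for pad_width and the exact return dtype are assumptions. It adds a one-element zero border on the last two axes of a 4-D array:

```clojure
;; zero-pad the two trailing axes of a [1 1 2 2] array by 1 on each side
(def x (ndarray/ones [1 1 2 2]))
(ndarray/pad {:data x :mode "constant" :constant-value 0
              :pad-width [0 0 0 0 1 1 1 1]})
;; => shape [1 1 4 4]: a 2x2 block of ones surrounded by zeros
```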
### pick

(pick data index)
(pick {:keys [data index axis keepdims mode out], :or {axis nil, keepdims nil, mode nil, out nil}, :as opts})

Picks elements from an input array according to the input indices along the given axis. Given an input array of shape (d0, d1) and indices of shape (i0,), the result will be an output array of shape (i0,) with::

  output[i] = input[i, indices[i]]

By default, if any index mentioned is too large, it is replaced by the index that addresses the last element along an axis (the clip mode). This function supports n-dimensional input and (n-1)-dimensional indices arrays.

Examples::

  x = [[ 1., 2.],
       [ 3., 4.],
       [ 5., 6.]]

  // picks elements with specified indices along axis 0
  pick(x, y=[0,1], 0) = [ 1., 4.]

  // picks elements with specified indices along axis 1
  pick(x, y=[0,1,0], 1) = [ 1., 4., 5.]

  y = [[ 1.], [ 0.], [ 2.]]

  // picks elements with specified indices along axis 1 using 'wrap' mode
  // to place indices that would normally be out of bounds
  pick(x, y=[2,-1,-2], 1, mode='wrap') = [ 1., 4., 5.]

  y = [[ 1.], [ 0.], [ 2.]]

  // picks elements with specified indices along axis 1 and dims are maintained
  pick(x, y, 1, keepdims=True) = [[ 2.], [ 3.], [ 6.]]

data: The input array
index: The index array
axis: int or None. The axis along which to pick the elements. Negative values mean indexing from right to left. If None, the elements in the index w.r.t the flattened input will be picked. (optional)
keepdims: If true, the axis where we pick the elements is left in the result as a dimension with size one. (optional)
mode: Specify how out-of-bound indices behave. Default is "clip". "clip" means clip to the range, so if all indices mentioned are too large, they are replaced by the index that addresses the last element along an axis. "wrap" means to wrap around. (optional)
out: Output array. (optional)
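A sketch mirroring the second `pick` example above (the `ndarray` alias is assumed):

```clojure
(def x   (ndarray/array [1 2 3 4 5 6] [3 2]))
(def idx (ndarray/array [0 1 0] [3]))

;; pick one element per row along axis 1
(ndarray/->vec (ndarray/pick {:data x :index idx :axis 1}))
;; => [1.0 4.0 5.0]
```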
### pooling

(pooling {:keys [data kernel pool-type global-pool cudnn-off pooling-convention stride pad p-value count-include-pad layout out], :or {cudnn-off nil, stride nil, layout nil, p-value nil, pooling-convention nil, count-include-pad nil, pool-type nil, out nil, pad nil, global-pool nil, kernel nil}, :as opts})

Performs pooling on the input.

The shapes for 1-D pooling are

- **data** and **out**: *(batch_size, channel, width)* (NCW layout) or *(batch_size, width, channel)* (NWC layout).

The shapes for 2-D pooling are

- **data** and **out**: *(batch_size, channel, height, width)* (NCHW layout) or *(batch_size, height, width, channel)* (NHWC layout), with::

  out_height = f(height, kernel[0], pad[0], stride[0])
  out_width = f(width, kernel[1], pad[1], stride[1])

The definition of *f* depends on pooling_convention, which has two options:

- **valid** (default):: f(x, k, p, s) = floor((x+2*p-k)/s)+1
- **full**, which is compatible with Caffe:: f(x, k, p, s) = ceil((x+2*p-k)/s)+1

But if global_pool is set to true, then global pooling is performed, namely resetting kernel=(height, width).

Four pooling options are supported by pool_type:

- **avg**: average pooling
- **max**: max pooling
- **sum**: sum pooling
- **lp**: Lp pooling

The shapes for 3-D pooling are analogous, with an additional *depth* dimension before *height*. Namely, the input data and output will have shape *(batch_size, channel, depth, height, width)* (NCDHW layout) or *(batch_size, depth, height, width, channel)* (NDHWC layout).

Notes on Lp pooling: Lp pooling was first introduced by this paper: https://arxiv.org/pdf/1204.3968.pdf. L-1 pooling is simply sum pooling, while L-inf pooling is simply max pooling. We can see that Lp pooling stands between those two; in practice the most common value for p is 2. For each window X, the mathematical expression for Lp pooling is :math:f(X) = \sqrt[p]{\sum_{x}^{X} x^p}.

Defined in src/operator/nn/pooling.cc:L416

data: Input data to the pooling operator.
kernel: Pooling kernel size: (y, x) or (d, y, x) (optional)
pool-type: Pooling type to be applied. (optional)
global-pool: Ignore kernel size, do global pooling based on current input feature map. (optional)
cudnn-off: Turn off cudnn pooling and use MXNet pooling operator. (optional)
pooling-convention: Pooling convention to be applied. (optional)
stride: Stride for pooling: (y, x) or (d, y, x). Defaults to 1 for each dimension. (optional)
pad: Pad for pooling: (y, x) or (d, y, x). Defaults to no padding. (optional)
p-value: Value of p for Lp pooling, can be 1 or 2, required for Lp pooling. (optional)
count-include-pad: Only used for AvgPool; specify whether to count padding elements for average calculation. For example, with a 5*5 kernel on a 3*3 corner of an image, the sum of the 9 valid elements will be divided by 25 if this is set to true, or it will be divided by 9 if this is set to false. Defaults to true. (optional)
layout: Set layout for input and output. Empty for default layout: NCW for 1d, NCHW for 2d and NCDHW for 3d. (optional)
out: Output array. (optional)

### pooling-v1

(pooling-v1 {:keys [data kernel pool-type global-pool pooling-convention stride pad out], :or {kernel nil, pool-type nil, global-pool nil, pooling-convention nil, stride nil, pad nil, out nil}, :as opts})

This operator is DEPRECATED. Performs pooling on the input.

The shapes for 2-D pooling are

- **data**: *(batch_size, channel, height, width)*
- **out**: *(batch_size, num_filter, out_height, out_width)*, with::

  out_height = f(height, kernel[0], pad[0], stride[0])
  out_width = f(width, kernel[1], pad[1], stride[1])

The definition of *f* depends on pooling_convention, which has two options:

- **valid** (default):: f(x, k, p, s) = floor((x+2*p-k)/s)+1
- **full**, which is compatible with Caffe:: f(x, k, p, s) = ceil((x+2*p-k)/s)+1

But if global_pool is set to true, then global pooling is performed, namely resetting kernel=(height, width).
Three pooling options are supported by pool_type:

- **avg**: average pooling
- **max**: max pooling
- **sum**: sum pooling

1-D pooling is a special case of 2-D pooling with *width=1* and *kernel[1]=1*. The shapes for 3-D pooling are analogous, with an additional *depth* dimension before *height*. Namely, the input data will have shape *(batch_size, channel, depth, height, width)*.

Defined in src/operator/pooling_v1.cc:L104

data: Input data to the pooling operator.
kernel: Pooling kernel size: (y, x) or (d, y, x) (optional)
pool-type: Pooling type to be applied. (optional)
global-pool: Ignore kernel size, do global pooling based on current input feature map. (optional)
pooling-convention: Pooling convention to be applied. (optional)
stride: Stride for pooling: (y, x) or (d, y, x) (optional)
pad: Pad for pooling: (y, x) or (d, y, x) (optional)
out: Output array. (optional)

### prod

(prod {:keys [data axis keepdims exclude out], :or {axis nil, keepdims nil, exclude nil, out nil}, :as opts})

Computes the product of array elements over given axes.

data: The input
axis: The axis or axes along which to perform the reduction. The default, axis=(), will compute over all elements into a scalar array with shape (1,). If axis is int, a reduction is performed on a particular axis. If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple. If exclude is true, reduction will be performed on the axes that are NOT in axis instead. Negative values mean indexing from right to left. (optional)
keepdims: If this is set to True, the reduced axes are left in the result as dimensions with size one. (optional)
exclude: Whether to perform reduction on axes that are NOT in axis instead. (optional)
out: Output array. (optional)

### radians

(radians {:keys [data out], :or {out nil}, :as opts})

Converts each element of the input array from degrees to radians.

.. math:: radians([0, 90, 180, 270, 360]) = [0, \pi/2, \pi, 3\pi/2, 2\pi]

The storage type of radians output depends upon the input storage type:

- radians(default) = default
- radians(row_sparse) = row_sparse
- radians(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L238

data: The input array.
out: Output array. (optional)

### rcbrt

(rcbrt {:keys [data out], :or {out nil}, :as opts})

Returns element-wise inverse cube-root value of the input.

.. math:: rcbrt(x) = 1/\sqrt[3]{x}

Example::

  rcbrt([1,8,-125]) = [1.0, 0.5, -0.2]

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L1004

data: The input array.
out: Output array. (optional)

### reciprocal

(reciprocal {:keys [data out], :or {out nil}, :as opts})

Returns the reciprocal of the argument, element-wise. Calculates 1/x.

Example::

  reciprocal([-2, 1, 3, 1.6, 0.2]) = [-0.5, 1.0, 0.33333334, 0.625, 5.0]

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L686

data: The input array.
out: Output array. (optional)

### relu

(relu {:keys [data out], :or {out nil}, :as opts})

Computes rectified linear activation.

.. math:: max(features, 0)

The storage type of relu output depends upon the input storage type:

- relu(default) = default
- relu(row_sparse) = row_sparse
- relu(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L85

data: The input array.
out: Output array. (optional)
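The element-wise operators above compose naturally; a quick sketch with `relu` and `reciprocal`, with expected values taken from the examples above:

```clojure
(def x (ndarray/array [-2 1 3 1.6 0.2] [5]))

(ndarray/->vec (ndarray/relu {:data x}))
;; => [0.0 1.0 3.0 1.6 0.2]

(ndarray/->vec (ndarray/reciprocal {:data x}))
;; => [-0.5 1.0 0.33333334 0.625 5.0]
```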
### repeat

(repeat data repeats)
(repeat {:keys [data repeats axis out], :or {axis nil, out nil}, :as opts})

Repeats elements of an array. By default, repeat flattens the input array into 1-D and then repeats the elements::

  x = [[ 1, 2],
       [ 3, 4]]
  repeat(x, repeats=2) = [ 1., 1., 2., 2., 3., 3., 4., 4.]

The parameter axis specifies the axis along which to perform repeat::

  repeat(x, repeats=2, axis=1) = [[ 1., 1., 2., 2.],
                                  [ 3., 3., 4., 4.]]
  repeat(x, repeats=2, axis=0) = [[ 1., 2.],
                                  [ 1., 2.],
                                  [ 3., 4.],
                                  [ 3., 4.]]
  repeat(x, repeats=2, axis=-1) = [[ 1., 1., 2., 2.],
                                   [ 3., 3., 4., 4.]]

Defined in src/operator/tensor/matrix_op.cc:L796

data: Input data array
repeats: The number of repetitions for each element.
axis: The axis along which to repeat values. Negative numbers are interpreted as counting from the back. By default, use the flattened input array, and return a flat output array. (optional)
out: Output array. (optional)

### reshape

(reshape {:keys [data shape reverse target-shape keep-highest out], :or {shape nil, reverse nil, target-shape nil, keep-highest nil, out nil}, :as opts})

Reshapes the input array.

.. note:: `Reshape` is deprecated; use `reshape` instead.

Given an array and a shape, this function returns a copy of the array in the new shape. The shape is a tuple of integers such as (2,3,4). The size of the new shape should be same as the size of the input array.

Example::

  reshape([1,2,3,4], shape=(2,2)) = [[1,2], [3,4]]

Some dimensions of the shape can take special values from the set {0, -1, -2, -3, -4}. The significance of each is explained below:

- 0 copies this dimension from the input to the output shape. Example::

  - input shape = (2,3,4), shape = (4,0,2), output shape = (4,3,2)
  - input shape = (2,3,4), shape = (2,0,0), output shape = (2,3,4)

- -1 infers the dimension of the output shape by using the remainder of the input dimensions, keeping the size of the new array the same as that of the input array. At most one dimension of shape can be -1. Example::

  - input shape = (2,3,4), shape = (6,1,-1), output shape = (6,1,4)
  - input shape = (2,3,4), shape = (3,-1,8), output shape = (3,1,8)
  - input shape = (2,3,4), shape = (-1,), output shape = (24,)

- -2 copies all/remainder of the input dimensions to the output shape. Example::

  - input shape = (2,3,4), shape = (-2,), output shape = (2,3,4)
  - input shape = (2,3,4), shape = (2,-2), output shape = (2,3,4)
  - input shape = (2,3,4), shape = (-2,1,1), output shape = (2,3,4,1,1)

- -3 uses the product of two consecutive dimensions of the input shape as the output dimension. Example::

  - input shape = (2,3,4), shape = (-3,4), output shape = (6,4)
  - input shape = (2,3,4,5), shape = (-3,-3), output shape = (6,20)
  - input shape = (2,3,4), shape = (0,-3), output shape = (2,12)
  - input shape = (2,3,4), shape = (-3,-2), output shape = (6,4)

- -4 splits one dimension of the input into the two dimensions passed subsequent to -4 in shape (which can contain -1). Example::

  - input shape = (2,3,4), shape = (-4,1,2,-2), output shape = (1,2,3,4)
  - input shape = (2,3,4), shape = (2,-4,-1,3,-2), output shape = (2,1,3,4)

If the argument reverse is set to 1, then the special values are inferred from right to left. Example::

  - without reverse=1, for input shape = (10,5,4), shape = (-1,0), output shape would be (40,5)
  - with reverse=1, output shape will be (50,4).

Defined in src/operator/tensor/matrix_op.cc:L202

data: Input data to reshape.
shape: The target shape (optional)
reverse: If true then the special values are inferred from right to left (optional)
target-shape: (Deprecated! Use shape instead.) Target new shape. One and only one dim can be 0, in which case it will be inferred from the rest of dims (optional)
keep-highest: (Deprecated! Use shape instead.) Whether to keep the highest dim unchanged. If set to true, then the first dim in target_shape is ignored and always fixed as the input's first dim. (optional)
out: Output array. (optional)
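A short `reshape` sketch using the special value -1; the helper name `shape-vec` for reading back the result shape is an assumption:

```clojure
(def x (ndarray/array [1 2 3 4 5 6] [2 3]))

;; -1 infers the second dimension: 6 elements / 3 rows = 2 columns
(ndarray/shape-vec (ndarray/reshape {:data x :shape [3 -1]}))
;; => [3 2]
```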
### reshape-like

(reshape-like lhs rhs)
(reshape-like {:keys [lhs rhs out], :or {out nil}, :as opts})

Reshape some or all dimensions of lhs to have the same shape as some or all dimensions of rhs. Returns a **view** of the lhs array with a new shape without altering any data.

Example::

  x = [1, 2, 3, 4, 5, 6]
  y = [[0, -4], [3, 2], [2, 2]]
  reshape_like(x, y) = [[1, 2], [3, 4], [5, 6]]

More precise control over how dimensions are inherited is achieved by specifying slices over the lhs and rhs array dimensions. Only the sliced lhs dimensions are reshaped to the rhs sliced dimensions, with the non-sliced lhs dimensions staying the same.

Examples::

  - lhs shape = (30,7), rhs shape = (15,2,4), lhs_begin=0, lhs_end=1, rhs_begin=0, rhs_end=2, output shape = (15,2,7)
  - lhs shape = (3, 5), rhs shape = (1,15,4), lhs_begin=0, lhs_end=2, rhs_begin=1, rhs_end=2, output shape = (15,)

Negative indices are supported, and None can be used for either lhs_end or rhs_end to indicate the end of the range.

Example::

  - lhs shape = (30, 12), rhs shape = (4, 2, 2, 3), lhs_begin=-1, lhs_end=None, rhs_begin=1, rhs_end=None, output shape = (30, 2, 2, 3)

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L485

lhs: First input.
rhs: Second input.
out: Output array. (optional)

### reverse

(reverse data axis)
(reverse {:keys [data axis out], :or {out nil}, :as opts})

Reverses the order of elements along given axis while preserving array shape. Note: reverse and flip are equivalent. We use reverse in the following examples.

Examples::

  x = [[ 0., 1., 2., 3., 4.],
       [ 5., 6., 7., 8., 9.]]
  reverse(x, axis=0) = [[ 5., 6., 7., 8., 9.],
                        [ 0., 1., 2., 3., 4.]]
  reverse(x, axis=1) = [[ 4., 3., 2., 1., 0.],
                        [ 9., 8., 7., 6., 5.]]

Defined in src/operator/tensor/matrix_op.cc:L898

data: Input data array
axis: The axis along which to reverse elements.
out: Output array. (optional)

### rint

(rint {:keys [data out], :or {out nil}, :as opts})

Returns element-wise rounded value to the nearest integer of the input.

.. note::

  - For input n.5 rint returns n while round returns n+1.
  - For input -n.5 both rint and round return -n-1.

Example::

  rint([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2., 1., -2., 2., 2.]

The storage type of rint output depends upon the input storage type:

- rint(default) = default
- rint(row_sparse) = row_sparse
- rint(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L767

data: The input array.
out: Output array. (optional)
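A sketch of `reverse` matching the example above:

```clojure
(def x (ndarray/array [0 1 2 3 4 5 6 7 8 9] [2 5]))

;; reverse within each row
(ndarray/->vec (ndarray/reverse {:data x :axis 1}))
;; => [4.0 3.0 2.0 1.0 0.0 9.0 8.0 7.0 6.0 5.0]
```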
### rmsprop-update

(rmsprop-update weight grad n lr)
(rmsprop-update {:keys [weight grad n lr gamma1 epsilon wd rescale-grad clip-gradient clip-weights out], :or {gamma1 nil, epsilon nil, wd nil, rescale-grad nil, clip-gradient nil, clip-weights nil, out nil}, :as opts})

Update function for the RMSProp optimizer. RMSProp is a variant of stochastic gradient descent where the gradients are divided by a cache which grows with the sum of squares of recent gradients.

RMSProp is similar to AdaGrad, a popular variant of SGD which adaptively tunes the learning rate of each parameter. AdaGrad lowers the learning rate for each parameter monotonically over the course of training. While this is analytically motivated for convex optimizations, it may not be ideal for non-convex problems. RMSProp deals with this heuristically by allowing the learning rates to rebound as the denominator decays over time.

Define the Root Mean Square (RMS) error criterion of the gradient as :math:RMS[g]_t = \sqrt{E[g^2]_t + \epsilon}, where :math:g represents gradient and :math:E[g^2]_t is the decaying average over past squared gradient. The :math:E[g^2]_t is given by:

.. math:: E[g^2]_t = \gamma * E[g^2]_{t-1} + (1-\gamma) * g_t^2

The update step is

.. math:: \theta_{t+1} = \theta_t - \frac{\eta}{RMS[g]_t} g_t

The RMSProp code follows the version in http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf (Tieleman & Hinton, 2012). Hinton suggests the momentum term :math:\gamma to be 0.9 and the learning rate :math:\eta to be 0.001.

Defined in src/operator/optimizer_op.cc:L795

weight: Weight
grad: Gradient
n: n
lr: Learning rate
gamma1: The decay rate of momentum estimates. (optional)
epsilon: A small constant for numerical stability. (optional)
wd: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
clip-weights: Clip weights to the range of [-clip_weights, clip_weights]. If clip_weights <= 0, weight clipping is turned off. weights = max(min(weights, clip_weights), -clip_weights). (optional)
out: Output array. (optional)

### rmspropalex-update

(rmspropalex-update weight grad n g delta lr)
(rmspropalex-update {:keys [weight grad n g delta lr gamma1 gamma2 epsilon wd rescale-grad clip-gradient clip-weights out], :or {gamma1 nil, gamma2 nil, epsilon nil, wd nil, rescale-grad nil, clip-gradient nil, clip-weights nil, out nil}, :as opts})

Update function for the RMSPropAlex optimizer. RMSPropAlex is a non-centered version of RMSProp.

Define :math:E[g^2]_t as the decaying average over past squared gradient and :math:E[g]_t as the decaying average over past gradient.

.. math::

  E[g^2]_t = \gamma_1 * E[g^2]_{t-1} + (1 - \gamma_1) * g_t^2\\
  E[g]_t = \gamma_1 * E[g]_{t-1} + (1 - \gamma_1) * g_t\\
  \Delta_t = \gamma_2 * \Delta_{t-1} - \frac{\eta}{\sqrt{E[g^2]_t - E[g]_t^2 + \epsilon}} g_t\\

The update step is

.. math:: \theta_{t+1} = \theta_t + \Delta_t

The RMSPropAlex code follows the version in http://arxiv.org/pdf/1308.0850v5.pdf Eq(38) - Eq(45) by Alex Graves, 2013. Graves suggests the momentum term :math:\gamma_1 to be 0.95, :math:\gamma_2 to be 0.9 and the learning rate :math:\eta to be 0.0001.

Defined in src/operator/optimizer_op.cc:L834

weight: Weight
grad: Gradient
n: n
g: g
delta: delta
lr: Learning rate
gamma1: Decay rate. (optional)
gamma2: Decay rate. (optional)
epsilon: A small constant for numerical stability. (optional)
wd: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
clip-weights: Clip weights to the range of [-clip_weights, clip_weights]. If clip_weights <= 0, weight clipping is turned off. weights = max(min(weights, clip_weights), -clip_weights). (optional)
out: Output array. (optional)
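One hypothetical RMSProp step as a sketch; the shapes are illustrative, and `n` is the running cache of squared gradients:

```clojure
(def weight (ndarray/ones [2 2]))
(def grad   (ndarray/ones [2 2]))
(def n      (ndarray/zeros [2 2]))

;; gamma1 and lr follow the values suggested in the text above
(ndarray/rmsprop-update {:weight weight :grad grad :n n
                         :lr 0.001 :gamma1 0.9 :out weight})
```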
### rnn

(rnn data parameters state state-cell sequence-length state-size num-layers mode)
(rnn {:keys [data parameters state state-cell sequence-length state-size num-layers bidirectional mode p state-outputs projection-size lstm-state-clip-min lstm-state-clip-max lstm-state-clip-nan use-sequence-length out], :or {projection-size nil, p nil, lstm-state-clip-min nil, state-outputs nil, lstm-state-clip-max nil, use-sequence-length nil, lstm-state-clip-nan nil, out nil, bidirectional nil}, :as opts})

Applies recurrent layers to input data. Currently, vanilla RNN, LSTM and GRU are implemented, with both multi-layer and bidirectional support. When the input data is of type float32 and the environment variables MXNET_CUDA_ALLOW_TENSOR_CORE and MXNET_CUDA_TENSOR_OP_MATH_ALLOW_CONVERSION are set to 1, this operator will try to use pseudo-float16 precision (float32 math with float16 I/O) in order to use Tensor Cores on suitable NVIDIA GPUs. This can sometimes give significant speedups.

**Vanilla RNN**

Applies a single-gate recurrent layer to input X. Two kinds of activation function are supported: ReLU and Tanh.

With ReLU activation function:

.. math:: h_t = relu(W_{ih} * x_t + b_{ih} + W_{hh} * h_{(t-1)} + b_{hh})

With Tanh activation function:

.. math:: h_t = \tanh(W_{ih} * x_t + b_{ih} + W_{hh} * h_{(t-1)} + b_{hh})

Reference paper: Finding structure in time - Elman, 1988. https://crl.ucsd.edu/~elman/Papers/fsit.pdf

**LSTM**

Long Short-Term Memory - Hochreiter, 1997. http://www.bioinf.jku.at/publications/older/2604.pdf

.. math::

  \begin{array}{ll}
  i_t = \mathrm{sigmoid}(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) \\
  f_t = \mathrm{sigmoid}(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) \\
  g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hc} h_{(t-1)} + b_{hg}) \\
  o_t = \mathrm{sigmoid}(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) \\
  c_t = f_t * c_{(t-1)} + i_t * g_t \\
  h_t = o_t * \tanh(c_t)
  \end{array}

**GRU**

Gated Recurrent Unit - Cho et al. 2014. http://arxiv.org/abs/1406.1078

The definition of GRU here is slightly different from the paper but compatible with CUDNN.

.. math::

  \begin{array}{ll}
  r_t = \mathrm{sigmoid}(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\
  z_t = \mathrm{sigmoid}(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\
  n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)} + b_{hn})) \\
  h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \\
  \end{array}

Defined in src/operator/rnn.cc:L690

data: Input data to RNN
parameters: Vector of all RNN trainable parameters concatenated
state: initial hidden state of the RNN
state-cell: initial cell state for LSTM networks (only for LSTM)
sequence-length: Vector of valid sequence lengths for each element in batch. (Only used if use_sequence_length kwarg is True)
state-size: size of the state for each layer
num-layers: number of stacked layers
bidirectional: whether to use bidirectional recurrent layers (optional)
mode: the type of RNN to compute
p: drop rate of the dropout on the outputs of each RNN layer, except the last layer. (optional)
state-outputs: Whether to have the states as symbol outputs. (optional)
projection-size: size of the projection (optional)
lstm-state-clip-min: Minimum clip value of LSTM states. This option must be used together with lstm_state_clip_max. (optional)
lstm-state-clip-max: Maximum clip value of LSTM states. This option must be used together with lstm_state_clip_min. (optional)
lstm-state-clip-nan: Whether to stop NaN from propagating in state by clipping it to min/max.
If clipping range is not specified, this option is ignored. (optional)
use-sequence-length: If set to true, this layer takes in an extra input parameter sequence_length to specify variable length sequence (optional)
out: Output array. (optional)

### roi-pooling

(roi-pooling data rois pooled-size spatial-scale)
(roi-pooling {:keys [data rois pooled-size spatial-scale out], :or {out nil}, :as opts})

Performs region of interest (ROI) pooling on the input array. ROI pooling is a variant of a max pooling layer, in which the output size is fixed and the region of interest is a parameter. Its purpose is to perform max pooling on the inputs of non-uniform sizes to obtain fixed-size feature maps. ROI pooling is a neural-net layer mostly used in training a Fast R-CNN network for object detection.

This operator takes a 4D feature map as an input array and region proposals as rois, then it pools over sub-regions of input and produces a fixed-sized output array regardless of the ROI size. To crop the feature map accordingly, you can resize the bounding box coordinates by changing the parameters rois and spatial_scale. The cropped feature maps are pooled by standard max pooling operation to a fixed size output indicated by a pooled_size parameter. batch_size will change to the number of region bounding boxes after ROIPooling. The size of each region of interest doesn't have to be perfectly divisible by the number of pooling sections (pooled_size).

Example::

  x = [[[[ 0., 1., 2., 3., 4., 5.],
         [ 6., 7., 8., 9., 10., 11.],
         [ 12., 13., 14., 15., 16., 17.],
         [ 18., 19., 20., 21., 22., 23.],
         [ 24., 25., 26., 27., 28., 29.],
         [ 30., 31., 32., 33., 34., 35.],
         [ 36., 37., 38., 39., 40., 41.],
         [ 42., 43., 44., 45., 46., 47.]]]]

  // region of interest i.e. bounding box coordinates.
  y = [[0,0,0,4,4]]

  // returns array of shape (2,2) according to the given roi with max pooling.
  ROIPooling(x, y, (2,2), 1.0) = [[[[ 14., 16.], [ 26., 28.]]]]

  // region of interest is changed due to the change in spatial_scale parameter.
  ROIPooling(x, y, (2,2), 0.7) = [[[[ 7., 9.], [ 19., 21.]]]]

Defined in src/operator/roi_pooling.cc:L295

data: The input array to the pooling operator, a 4D feature map
rois: Bounding box coordinates, a 2D array of [[batch_index, x1, y1, x2, y2]], where (x1, y1) and (x2, y2) are the top left and bottom right corners of the designated region of interest. batch_index indicates the index of the corresponding image in the input array
pooled-size: ROI pooling output shape (h,w)
spatial-scale: Ratio of input feature map height (or w) to raw image height (or w). Equals the reciprocal of the total stride in convolutional layers
out: Output array. (optional)

### round

(round {:keys [data out], :or {out nil}, :as opts})

Returns element-wise rounded value to the nearest integer of the input.

Example::

  round([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2., 2., -2., 2., 2.]

The storage type of round output depends upon the input storage type:

- round(default) = default
- round(row_sparse) = row_sparse
- round(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L746

data: The input array.
out: Output array. (optional)

### rsqrt

(rsqrt {:keys [data out], :or {out nil}, :as opts})

Returns element-wise inverse square-root value of the input.

.. math:: rsqrt(x) = 1/\sqrt{x}

Example::

  rsqrt([4,9,16]) = [0.5, 0.33333334, 0.25]

The storage type of rsqrt output is always dense.

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L927

data: The input array.
out: Output array. (optional)
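Returning to the roi-pooling entry above, a sketch that mirrors its first example; the vector literal for pooled-size is an assumption:

```clojure
;; one ROI over a 1x1x8x6 feature map, as in the roi-pooling example
(def data (ndarray/array (vec (range 48)) [1 1 8 6]))
(def rois (ndarray/array [0 0 0 4 4] [1 5]))

(ndarray/roi-pooling data rois [2 2] 1.0)
;; expected per the example: [[[[14. 16.] [26. 28.]]]]
```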
### scatter-nd

(scatter-nd data indices shape)
(scatter-nd {:keys [data indices shape out], :or {out nil}, :as opts})

Scatters data into a new tensor according to indices. Given data with shape (Y_0, ..., Y_{K-1}, X_M, ..., X_{N-1}) and indices with shape (M, Y_0, ..., Y_{K-1}), the output will have shape (X_0, X_1, ..., X_{N-1}), where M <= N. If M == N, data shape should simply be (Y_0, ..., Y_{K-1}). The elements in output are defined as follows::

  output[indices[0, y_0, ..., y_{K-1}], ..., indices[M-1, y_0, ..., y_{K-1}], x_M, ..., x_{N-1}] = data[y_0, ..., y_{K-1}, x_M, ..., x_{N-1}]

All other entries in output are 0.

.. warning:: If the indices have duplicates, the result will be non-deterministic and the gradient of scatter_nd will not be correct!!

Examples::

  data = [2, 3, 0]
  indices = [[1, 1, 0], [0, 1, 0]]
  shape = (2, 2)
  scatter_nd(data, indices, shape) = [[0, 0], [2, 3]]

  data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
  indices = [[0, 1], [1, 1]]
  shape = (2, 2, 2, 2)
  scatter_nd(data, indices, shape) = [[[[0, 0], [0, 0]], [[1, 2], [3, 4]]],
                                      [[[0, 0], [0, 0]], [[5, 6], [7, 8]]]]

data: data
indices: indices
shape: Shape of output.
out: Output array. (optional)

### sequence-last

(sequence-last data sequence-length)
(sequence-last {:keys [data sequence-length use-sequence-length axis out], :or {use-sequence-length nil, axis nil, out nil}, :as opts})

Takes the last element of a sequence. This function takes an n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims] and returns a (n-1)-dimensional array of the form [batch_size, other_feature_dims].

Parameter sequence_length is used to handle variable-length sequences. sequence_length should be an input array of positive ints of dimension [batch_size]. To use this parameter, set use_sequence_length to True, otherwise each example in the batch is assumed to have the max sequence length.

.. note:: Alternatively, you can also use the take operator.

Example::

  x = [[[ 1., 2., 3.], [ 4., 5., 6.], [ 7., 8., 9.]],
       [[ 10., 11., 12.], [ 13., 14., 15.], [ 16., 17., 18.]],
       [[ 19., 20., 21.], [ 22., 23., 24.], [ 25., 26., 27.]]]

  // returns last sequence when sequence_length parameter is not used
  SequenceLast(x) = [[ 19., 20., 21.],
                     [ 22., 23., 24.],
                     [ 25., 26., 27.]]

  // sequence_length is used
  SequenceLast(x, sequence_length=[1,1,1], use_sequence_length=True) =
    [[ 1., 2., 3.],
     [ 4., 5., 6.],
     [ 7., 8., 9.]]

  // sequence_length is used
  SequenceLast(x, sequence_length=[1,2,3], use_sequence_length=True) =
    [[ 1., 2., 3.],
     [ 13., 14., 15.],
     [ 25., 26., 27.]]

Defined in src/operator/sequence_last.cc:L100

data: n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims] where n>2
sequence-length: vector of sequence lengths of the form [batch_size]
use-sequence-length: If set to true, this layer takes in an extra input parameter sequence_length to specify variable length sequence (optional)
axis: The sequence axis. Only values of 0 and 1 are currently supported. (optional)
out: Output array. (optional)

### sequence-mask

(sequence-mask data sequence-length)
(sequence-mask {:keys [data sequence-length use-sequence-length value axis out], :or {use-sequence-length nil, value nil, axis nil, out nil}, :as opts})

Sets all elements outside the sequence to a constant value. This function takes an n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims] and returns an array of the same shape. Parameter sequence_length is used to handle variable-length sequences.
sequence_length should be an input array of positive ints of dimension [batch_size]. To use this parameter, set use_sequence_length to True, otherwise each example in the batch is assumed to have the max sequence length and this operator works as the identity operator.

Example::

  x = [[[ 1., 2., 3.], [ 4., 5., 6.]],
       [[ 7., 8., 9.], [ 10., 11., 12.]],
       [[ 13., 14., 15.], [ 16., 17., 18.]]]

  // Batch 1
  B1 = [[ 1., 2., 3.], [ 7., 8., 9.], [ 13., 14., 15.]]

  // Batch 2
  B2 = [[ 4., 5., 6.], [ 10., 11., 12.], [ 16., 17., 18.]]

  // works as identity operator when sequence_length parameter is not used
  SequenceMask(x) = [[[ 1., 2., 3.], [ 4., 5., 6.]],
                     [[ 7., 8., 9.], [ 10., 11., 12.]],
                     [[ 13., 14., 15.], [ 16., 17., 18.]]]

  // sequence_length [1,1] means 1 of each batch will be kept
  // and other rows are masked with default mask value = 0
  [[[ 1., 2., 3.], [ 4., 5., 6.]],
   [[ 0., 0., 0.], [ 0., 0., 0.]],
   [[ 0., 0., 0.], [ 0., 0., 0.]]]

  // sequence_length [2,3] means 2 of batch B1 and 3 of batch B2 will be kept
  // and other rows are masked with value = 1
  [[[ 1., 2., 3.], [ 4., 5., 6.]],
   [[ 7., 8., 9.], [ 10., 11., 12.]],
   [[ 1., 1., 1.], [ 16., 17., 18.]]]

data: n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims] where n>2
sequence-length: vector of sequence lengths of the form [batch_size]
use-sequence-length: If set to true, this layer takes in an extra input parameter sequence_length to specify variable length sequence (optional)
value: The value to be used as a mask. (optional)
axis: The sequence axis. Only values of 0 and 1 are currently supported. (optional)
out: Output array. (optional)

### sequence-reverse

(sequence-reverse data sequence-length)
(sequence-reverse {:keys [data sequence-length use-sequence-length axis out], :or {use-sequence-length nil, axis nil, out nil}, :as opts})

Reverses the elements of each sequence. This function takes an n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims] and returns an array of the same shape. Parameter sequence_length is used to handle variable-length sequences. sequence_length should be an input array of positive ints of dimension [batch_size]. To use this parameter, set use_sequence_length to True, otherwise each example in the batch is assumed to have the max sequence length.

Example::

  x = [[[ 1., 2., 3.], [ 4., 5., 6.]],
       [[ 7., 8., 9.], [ 10., 11., 12.]],
       [[ 13., 14., 15.], [ 16., 17., 18.]]]

  // Batch 1
  B1 = [[ 1., 2., 3.], [ 7., 8., 9.], [ 13., 14., 15.]]

  // Batch 2
  B2 = [[ 4., 5., 6.], [ 10., 11., 12.], [ 16., 17., 18.]]

  // returns reverse sequence when sequence_length parameter is not used
  SequenceReverse(x) = [[[ 13., 14., 15.], [ 16., 17., 18.]],
                        [[ 7., 8., 9.], [ 10., 11., 12.]],
                        [[ 1., 2., 3.], [ 4., 5., 6.]]]

  // sequence_length [2,2] means 2 rows of
  // both batch B1 and B2 will be reversed.
  SequenceReverse(x, sequence_length=[2,2], use_sequence_length=True) =
    [[[ 7., 8., 9.], [ 10., 11., 12.]],
     [[ 1., 2., 3.], [ 4., 5., 6.]],
     [[ 13., 14., 15.], [ 16., 17., 18.]]]

  // sequence_length [2,3] means 2 of batch B1 and 3 of batch B2
  // will be reversed.
  SequenceReverse(x, sequence_length=[2,3], use_sequence_length=True) =
    [[[ 7., 8., 9.], [ 16., 17., 18.]],
     [[ 1., 2., 3.], [ 10., 11., 12.]],
     [[ 13., 14., 15.], [ 4., 5., 6.]]]

Defined in src/operator/sequence_reverse.cc:L122

data: n-dimensional input array of the form [max_sequence_length, batch_size, other dims] where n>2
sequence-length: vector of sequence lengths of the form [batch_size]
use-sequence-length: If set to true, this layer takes in an extra input parameter sequence_length to specify variable length sequence (optional)
axis: The sequence axis. Only 0 is currently supported. (optional)
out: Output array. (optional)
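A sketch of `sequence-mask` following the layout used in the examples (sequence axis first, batch second); the literal values are assumptions:

```clojure
;; 3 time steps, batch of 2, feature dim 3
(def x   (ndarray/array (vec (range 1 19)) [3 2 3]))
(def len (ndarray/array [1 2] [2]))

;; keep 1 step of batch 1 and 2 steps of batch 2; mask the rest with 0
(ndarray/sequence-mask {:data x :sequence-length len
                        :use-sequence-length true :value 0})
```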
### sgd-mom-update

(sgd-mom-update weight grad mom lr)
(sgd-mom-update {:keys [weight grad mom lr momentum wd rescale-grad clip-gradient lazy-update out], :or {momentum nil, wd nil, rescale-grad nil, clip-gradient nil, lazy-update nil, out nil}, :as opts})

Momentum update function for the Stochastic Gradient Descent (SGD) optimizer. Momentum update has better convergence rates on neural networks. Mathematically it looks like below:

.. math::

  v_1 = \alpha * \nabla J(W_0)\\
  v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\
  W_t = W_{t-1} + v_t

It updates the weights using::

  v = momentum * v - learning_rate * gradient
  weight += v

Where the parameter momentum is the decay rate of momentum estimates at each epoch.

However, if grad's storage type is row_sparse, lazy_update is True and weight's storage type is the same as momentum's storage type, only the row slices whose indices appear in grad.indices are updated (for both weight and momentum)::

  v[row] = momentum[row] * v[row] - learning_rate * gradient[row]
  weight[row] += v[row]

Defined in src/operator/optimizer_op.cc:L563

weight: Weight
grad: Gradient
mom: Momentum
lr: Learning rate
momentum: The decay rate of momentum estimates at each epoch. (optional)
wd: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
lazy-update: If true, lazy updates are applied if gradient's stype is row_sparse and both weight and momentum have the same stype (optional)
out: Output array. (optional)

### sgd-update

(sgd-update weight grad lr)
(sgd-update {:keys [weight grad lr wd rescale-grad clip-gradient lazy-update out], :or {wd nil, rescale-grad nil, clip-gradient nil, lazy-update nil, out nil}, :as opts})

Update function for the Stochastic Gradient Descent (SGD) optimizer::

  weight = weight - learning_rate * (gradient + wd * weight)

However, if gradient is of row_sparse storage type and lazy_update is True, only the row slices whose indices appear in grad.indices are updated::

  weight[row] = weight[row] - learning_rate * (gradient[row] + wd * weight[row])

Defined in src/operator/optimizer_op.cc:L522

weight: Weight
grad: Gradient
lr: Learning rate
wd: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
(optional) lazy-update: If true, lazy updates are applied if gradient's stype is row_sparse. (optional) out: Output array. (optional) ### shape-array (shape-array {:keys [data lhs-begin lhs-end rhs-begin rhs-end out], :or {lhs-begin nil, lhs-end nil, rhs-begin nil, rhs-end nil, out nil}, :as opts}) Returns a 1D int64 array containing the shape of data. Example:: shape_array([[1,2,3,4], [5,6,7,8]]) = [2,4] Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L544 data: Input Array. lhs-begin: Defaults to 0. The beginning index along which the lhs dimensions are to be reshaped. Supports negative indices. (optional) lhs-end: Defaults to None. The ending index along which the lhs dimensions are to be used for reshaping. Supports negative indices. (optional) rhs-begin: Defaults to 0. The beginning index along which the rhs dimensions are to be used for reshaping. Supports negative indices. (optional) rhs-end: Defaults to None. The ending index along which the rhs dimensions are to be used for reshaping. Supports negative indices. (optional) out: Output array. (optional) ### sigmoid (sigmoid {:keys [data out], :or {out nil}, :as opts}) Computes sigmoid of x element-wise. .. math:: y = 1 / (1 + exp(-x)) The storage type of sigmoid output is always dense Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L119 data: The input array. out: Output array. (optional) ### sign (sign {:keys [data out], :or {out nil}, :as opts}) Returns element-wise sign of the input. Example:: sign([-2, 0, 3]) = [-1, 0, 1] The storage type of sign output depends upon the input storage type: - sign(default) = default - sign(row_sparse) = row_sparse - sign(csr) = csr Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L727 data: The input array. out: Output array. (optional) ### signsgd-update (signsgd-update weight grad lr)(signsgd-update {:keys [weight grad lr wd rescale-grad clip-gradient out], :or {wd nil, rescale-grad nil, clip-gradient nil, out nil}, :as opts}) Update function for SignSGD optimizer. .. math:: g_t = \nabla J(W_{t-1})\\ W_t = W_{t-1} - \eta_t \text{sign}(g_t) weight = weight - learning_rate * sign(gradient) .. note:: - sparse ndarray not supported for this optimizer yet. Defined in src/operator/optimizer_op.cc:L61 weight: Weight grad: Gradient lr: Learning rate wd: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight. (optional) rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional) clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional) out: Output array. (optional) ### signum-update (signum-update weight grad mom lr)(signum-update {:keys [weight grad mom lr momentum wd rescale-grad clip-gradient wd-lh out], :or {momentum nil, wd nil, rescale-grad nil, clip-gradient nil, wd-lh nil, out nil}, :as opts}) SIGN momentUM (Signum) optimizer. .. math:: g_t = \nabla J(W_{t-1})\\ m_t = \beta m_{t-1} + (1 - \beta) g_t\\ W_t = W_{t-1} - \eta_t \text{sign}(m_t) state = momentum * state + (1-momentum) * gradient weight = weight - learning_rate * sign(state) Where the parameter momentum is the decay rate of momentum estimates at each epoch. .. note:: - sparse ndarray not supported for this optimizer yet. 
Defined in src/operator/optimizer_op.cc:L90

weight: Weight
grad: Gradient
mom: Momentum
lr: Learning rate
momentum: The decay rate of momentum estimates at each epoch. (optional)
wd: Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight. (optional)
rescale-grad: Rescale gradient to grad = rescale_grad*grad. (optional)
clip-gradient: Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient). (optional)
wd-lh: The amount of weight decay that does not go into gradient/momentum calculations; otherwise, weight decay is applied algorithmically only. (optional)
out: Output array. (optional)

### sin

(sin {:keys [data out], :or {out nil}, :as opts})

Computes the element-wise sine of the input array. The input should be in radians (:math:2\pi rad equals 360 degrees).

.. math:: sin([0, \pi/4, \pi/2]) = [0, 0.707, 1]

The storage type of sin output depends upon the input storage type:

- sin(default) = default
- sin(row_sparse) = row_sparse
- sin(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L46

data: The input array.
out: Output array. (optional)

### sinh

(sinh {:keys [data out], :or {out nil}, :as opts})

Returns the hyperbolic sine of the input array, computed element-wise.

.. math:: sinh(x) = 0.5\times(exp(x) - exp(-x))

The storage type of sinh output depends upon the input storage type:

- sinh(default) = default
- sinh(row_sparse) = row_sparse
- sinh(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L257

data: The input array.
out: Output array. (optional)

### size-array

(size-array {:keys [data out], :or {out nil}, :as opts})

Returns a 1D int64 array containing the size of data.

Example::

  size_array([[1,2,3,4], [5,6,7,8]]) = [8]

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L596

data: Input Array.
out: Output array. (optional)

### slice

(slice data begin end)
(slice {:keys [data begin end step out], :or {step nil, out nil}, :as opts})

Slices a region of the array.

.. note:: `crop` is deprecated. Use `slice` instead.

This function returns a sliced array between the indices given by begin and end with the corresponding step. For an input array of shape=(d_0, d_1, ..., d_n-1), a slice operation with begin=(b_0, b_1...b_m-1), end=(e_0, e_1, ..., e_m-1), and step=(s_0, s_1, ..., s_m-1), where m <= n, results in an array with the shape (|e_0-b_0|/|s_0|, ..., |e_m-1-b_m-1|/|s_m-1|, d_m, ..., d_n-1).

The resulting array's *k*-th dimension contains elements from the *k*-th dimension of the input array starting from index b_k (inclusive) with step s_k until reaching e_k (exclusive). If the *k*-th elements are None in the sequence of begin, end, and step, the following rule will be used to set default values: if s_k is None, set s_k=1. If s_k > 0, set b_k=0, e_k=d_k; else, set b_k=d_k-1, e_k=-1.

The storage type of slice output depends on storage types of inputs:

- slice(csr) = csr
- otherwise, slice generates output with default storage

.. note:: When input data storage type is csr, it only supports step=(), or step=(None,), or step=(1,) to generate a csr output. For other step parameter values, it falls back to slicing a dense tensor.
Example:: x = [[ 1., 2., 3., 4.], [ 5., 6., 7., 8.], [ 9., 10., 11., 12.]] slice(x, begin=(0,1), end=(2,4)) = [[ 2., 3., 4.], [ 6., 7., 8.]] slice(x, begin=(None, 0), end=(None, 3), step=(-1, 2)) = [[9., 11.], [5., 7.], [1., 3.]] Defined in src/operator/tensor/matrix_op.cc:L506 data: Source input begin: starting indices for the slice operation, supports negative indices. end: ending indices for the slice operation, supports negative indices. step: step for the slice operation, supports negative values. (optional) out: Output array. (optional) ### slice-axis (slice-axis data axis begin end)(slice-axis {:keys [data axis begin end out], :or {out nil}, :as opts}) Slices along a given axis. Returns an array slice along a given axis starting from the begin index to the end index. Examples:: x = [[ 1., 2., 3., 4.], [ 5., 6., 7., 8.], [ 9., 10., 11., 12.]] slice_axis(x, axis=0, begin=1, end=3) = [[ 5., 6., 7., 8.], [ 9., 10., 11., 12.]] slice_axis(x, axis=1, begin=0, end=2) = [[ 1., 2.], [ 5., 6.], [ 9., 10.]] slice_axis(x, axis=1, begin=-3, end=-1) = [[ 2., 3.], [ 6., 7.], [ 10., 11.]] Defined in src/operator/tensor/matrix_op.cc:L596 data: Source input axis: Axis along which to be sliced, supports negative indexes. begin: The beginning index along the axis to be sliced, supports negative indexes. end: The ending index along the axis to be sliced, supports negative indexes. out: Output array. (optional) ### slice-channel (slice-channel data num-outputs)(slice-channel {:keys [data num-outputs axis squeeze-axis out], :or {axis nil, squeeze-axis nil, out nil}, :as opts}) Splits an array along a particular axis into multiple sub-arrays. .. note:: SliceChannel is deprecated. Use split instead. **Note** that num_outputs should evenly divide the length of the axis along which to split the array. Example:: x = [[[ 1.] [ 2.]] [[ 3.] [ 4.]] [[ 5.] [ 6.]]] x.shape = (3, 2, 1) y = split(x, axis=1, num_outputs=2) // a list of 2 arrays with shape (3, 1, 1) y = [[[ 1.]] [[ 3.]] [[ 5.]]] [[[ 2.]] [[ 4.]] [[ 6.]]] y[0].shape = (3, 1, 1) z = split(x, axis=0, num_outputs=3) // a list of 3 arrays with shape (1, 2, 1) z = [[[ 1.] [ 2.]]] [[[ 3.] [ 4.]]] [[[ 5.] [ 6.]]] z[0].shape = (1, 2, 1) squeeze_axis=1 removes the axis with length 1 from the shapes of the output arrays. **Note** that setting squeeze_axis to 1 removes axis with length 1 only along the axis which it is split. Also squeeze_axis can be set to true only if input.shape[axis] == num_outputs. Example:: z = split(x, axis=0, num_outputs=3, squeeze_axis=1) // a list of 3 arrays with shape (2, 1) z = [[ 1.] [ 2.]] [[ 3.] [ 4.]] [[ 5.] [ 6.]] z[0].shape = (2 ,1 ) Defined in src/operator/slice_channel.cc:L107 data: The input num-outputs: Number of splits. Note that this should evenly divide the length of the axis. axis: Axis along which to split. (optional) squeeze-axis: If true, Removes the axis with length 1 from the shapes of the output arrays. **Note** that setting squeeze_axis to true removes axis with length 1 only along the axis which it is split. Also squeeze_axis can be set to true only if input.shape[axis] == num_outputs. (optional) out: Output array. (optional) ### slice-like (slice-like data shape-like)(slice-like {:keys [data shape-like axes out], :or {axes nil, out nil}, :as opts}) Slices a region of the array like the shape of another array. This function is similar to slice, however, the begin are always 0s and end of specific axes are inferred from the second input shape_like. 
Given a second shape_like input of shape=(d_0, d_1, ..., d_n-1), a slice_like operator with default empty axes performs the following operation: out = slice(input, begin=(0, 0, ..., 0), end=(d_0, d_1, ..., d_n-1)).

When axes is not empty, it is used to specify which axes are being sliced. Given a 4-d input data, a slice_like operator with axes=(0, 2, -1) will perform the following operation: out = slice(input, begin=(0, 0, 0, 0), end=(d_0, None, d_2, d_3)).

Note that the first and second inputs are allowed to have different dimensions; however, you have to make sure the specified axes do not exceed the dimension limits. For example, given input_1 with shape=(2,3,4,5) and input_2 with shape=(1,2,3), it is not allowed to use out = slice_like(a, b), because the ndim of input_1 is 4 while the ndim of input_2 is 3. The following is allowed in this situation: out = slice_like(a, b, axes=(0, 2)).

Example::

  x = [[ 1., 2., 3., 4.], [ 5., 6., 7., 8.], [ 9., 10., 11., 12.]]
  y = [[ 0., 0., 0.], [ 0., 0., 0.]]
  slice_like(x, y) = [[ 1., 2., 3.] [ 5., 6., 7.]]
  slice_like(x, y, axes=(0, 1)) = [[ 1., 2., 3.] [ 5., 6., 7.]]
  slice_like(x, y, axes=(0)) = [[ 1., 2., 3., 4.] [ 5., 6., 7., 8.]]
  slice_like(x, y, axes=(-1)) = [[ 1., 2., 3.] [ 5., 6., 7.] [ 9., 10., 11.]]

Defined in src/operator/tensor/matrix_op.cc:L665

data: Source input
shape-like: Shape like input
axes: List of axes on which input data will be sliced according to the corresponding size of the second input. By default it will slice on all axes. Negative axes are supported. (optional)
out: Output array. (optional)

### smooth-l1

(smooth-l1 data scalar)
(smooth-l1 {:keys [data scalar out], :or {out nil}, :as opts})

Calculate Smooth L1 Loss(lhs, scalar) by summing

.. math:: f(x) = \begin{cases} (\sigma x)^2/2,& \text{if } x < 1/\sigma^2 \\ |x|-0.5/\sigma^2,& \text{otherwise} \end{cases}

where :math:`x` is an element of the tensor *lhs* and :math:`\sigma` is the scalar.

Example::

  smooth_l1([1, 2, 3, 4]) = [0.5, 1.5, 2.5, 3.5]
  smooth_l1([1, 2, 3, 4], scalar=1) = [0.5, 1.5, 2.5, 3.5]

Defined in src/operator/tensor/elemwise_binary_scalar_op_extended.cc:L104

data: source input
scalar: scalar input
out: Output array. (optional)

### softmax

(softmax {:keys [data axis temperature dtype out], :or {axis nil, temperature nil, dtype nil, out nil}, :as opts})

Applies the softmax function. The resulting array contains elements in the range (0,1) and the elements along the given axis sum up to 1.

.. math:: softmax(\mathbf{z/t})_j = \frac{e^{z_j/t}}{\sum_{k=1}^K e^{z_k/t}}

for :math:`j = 1, ..., K`. t is the temperature parameter in the softmax function. By default, t equals 1.0.

Example::

  x = [[ 1. 1. 1.] [ 1. 1. 1.]]
  softmax(x, axis=0) = [[ 0.5 0.5 0.5] [ 0.5 0.5 0.5]]
  softmax(x, axis=1) = [[ 0.33333334, 0.33333334, 0.33333334], [ 0.33333334, 0.33333334, 0.33333334]]

Defined in src/operator/nn/softmax.cc:L93

data: The input array.
axis: The axis along which to compute softmax. (optional)
temperature: Temperature parameter in softmax (optional)
dtype: DType of the output in case this can't be inferred. Defaults to the same as the input's dtype if not defined (dtype=None). (optional)
out: Output array. (optional)

### softmax-activation

(softmax-activation {:keys [data mode out], :or {mode nil, out nil}, :as opts})

Applies softmax activation to input. This is intended for internal layers.

.. note:: This operator has been deprecated, please use softmax.

If mode = instance, this operator will compute a softmax for each instance in the batch. This is the default mode.
If mode = channel, this operator will compute a k-class softmax at each position of each instance, where k = num_channel. This mode can only be used when the input array has at least 3 dimensions. This can be used for fully convolutional networks, image segmentation, etc.

Example::

  >>> input_array = mx.nd.array([[3., 0.5, -0.5, 2., 7.],
  >>>                            [2., -.4, 7., 3., 0.2]])
  >>> softmax_act = mx.nd.SoftmaxActivation(input_array)
  >>> print softmax_act.asnumpy()
  [[ 1.78322066e-02 1.46375655e-03 5.38485940e-04 6.56010211e-03 9.73605454e-01]
   [ 6.56221947e-03 5.95310994e-04 9.73919690e-01 1.78379621e-02 1.08472735e-03]]

Defined in src/operator/nn/softmax_activation.cc:L59

data: The input array.
mode: Specifies how to compute the softmax. If set to instance, it computes softmax for each instance. If set to channel, it computes cross-channel softmax for each position of each instance. (optional)
out: Output array. (optional)

### softmax-cross-entropy

(softmax-cross-entropy data label)
(softmax-cross-entropy {:keys [data label out], :or {out nil}, :as opts})

Calculate cross entropy of softmax output and one-hot label.

- This operator computes the cross entropy in two steps:
  - Applies softmax function on the input array.
  - Computes and returns the cross entropy loss between the softmax output and the labels.
- The softmax function and cross entropy loss are given by:
  - Softmax Function: .. math:: \text{softmax}(x)_i = \frac{exp(x_i)}{\sum_j exp(x_j)}
  - Cross Entropy Function: .. math:: \text{CE(label, output)} = - \sum_i \text{label}_i \log(\text{output}_i)

Example::

  x = [[1, 2, 3], [11, 7, 5]]
  label = [2, 0]
  softmax(x) = [[0.09003057, 0.24472848, 0.66524094], [0.97962922, 0.01794253, 0.00242826]]
  softmax_cross_entropy(data, label) = - log(0.66524094) - log(0.97962922) = 0.4281871

Defined in src/operator/loss_binary_op.cc:L59

data: Input data
label: Input label
out: Output array. (optional)

### softmax-output

(softmax-output data label)
(softmax-output {:keys [data label grad-scale ignore-label multi-output use-ignore preserve-shape normalization out-grad smooth-alpha out], :or {use-ignore nil, normalization nil, smooth-alpha nil, grad-scale nil, ignore-label nil, preserve-shape nil, multi-output nil, out-grad nil, out nil}, :as opts})

Computes the gradient of cross entropy loss with respect to softmax output.

- This operator computes the gradient in two steps. The cross entropy loss does not actually need to be computed.
  - Applies softmax function on the input array.
  - Computes and returns the gradient of cross entropy loss w.r.t. the softmax output.
- The softmax function, cross entropy loss and gradient are given by:
  - Softmax Function: .. math:: \text{softmax}(x)_i = \frac{exp(x_i)}{\sum_j exp(x_j)}
  - Cross Entropy Function: .. math:: \text{CE(label, output)} = - \sum_i \text{label}_i \log(\text{output}_i)
  - The gradient of cross entropy loss w.r.t softmax output: .. math:: \text{gradient} = \text{output} - \text{label}
- During forward propagation, the softmax function is computed for each instance in the input array. For general *N*-D input arrays with shape :math:`(d_1, d_2, ..., d_n)`, the size is :math:`s = d_1 \cdot d_2 \cdots d_n`. We can use the parameters preserve_shape and multi_output to specify the way to compute softmax:
  - By default, preserve_shape is false.
    This operator will reshape the input array into a 2-D array with shape :math:`(d_1, \frac{s}{d_1})` and then compute the softmax function for each row in the reshaped array, and afterwards reshape it back to the original shape :math:`(d_1, d_2, ..., d_n)`.
  - If preserve_shape is true, the softmax function will be computed along the last axis (axis = -1).
  - If multi_output is true, the softmax function will be computed along the second axis (axis = 1).
- During backward propagation, the gradient of cross-entropy loss w.r.t softmax output array is computed. The provided label can be a one-hot label array or a probability label array.
  - If the parameter use_ignore is true, ignore_label can specify input instances with a particular label to be ignored during backward propagation. **This has no effect when softmax output has same shape as label**.

Example::

  data = [[1,2,3,4],[2,2,2,2],[3,3,3,3],[4,4,4,4]]
  label = [1,0,2,3]
  ignore_label = 1
  SoftmaxOutput(data=data, label=label, multi_output=true, use_ignore=true, ignore_label=ignore_label)
  ## forward softmax output
  [[ 0.0320586 0.08714432 0.23688284 0.64391428] [ 0.25 0.25 0.25 0.25 ] [ 0.25 0.25 0.25 0.25 ] [ 0.25 0.25 0.25 0.25 ]]
  ## backward gradient output
  [[ 0. 0. 0. 0. ] [-0.75 0.25 0.25 0.25] [ 0.25 0.25 -0.75 0.25] [ 0.25 0.25 0.25 -0.75]]
  ## notice that the first row is all 0 because label[0] is 1, which is equal to ignore_label.

- The parameter grad_scale can be used to rescale the gradient, which is often used to give each loss function different weights.
- This operator also supports various ways to normalize the gradient by normalization. The normalization is applied if the softmax output has a different shape than the labels. The normalization mode can be set to the following:
  - 'null': do nothing.
  - 'batch': divide the gradient by the batch size.
  - 'valid': divide the gradient by the number of instances which are not ignored.

Defined in src/operator/softmax_output.cc:L230

data: Input array.
label: Ground truth label.
grad-scale: Scales the gradient by a float factor. (optional)
ignore-label: The instances whose labels == ignore_label will be ignored during backward, if use_ignore is set to true. (optional)
multi-output: If set to true, the softmax function will be computed along axis 1. This is applied when the shape of the input array differs from the shape of the label array. (optional)
use-ignore: If set to true, the ignore_label value will not contribute to the backward gradient. (optional)
preserve-shape: If set to true, the softmax function will be computed along the last axis (-1). (optional)
normalization: Normalizes the gradient. (optional)
out-grad: Multiplies gradient with output gradient element-wise. (optional)
smooth-alpha: Constant for computing a label-smoothed version of cross-entropy for the backwards pass. This constant gets subtracted from the one-hot encoding of the gold label and distributed uniformly to all other labels. (optional)
out: Output array. (optional)

### softmin

(softmin {:keys [data axis temperature dtype out], :or {axis nil, temperature nil, dtype nil, out nil}, :as opts})

Applies the softmin function. The resulting array contains elements in the range (0,1) and the elements along the given axis sum up to 1.

.. math:: softmin(\mathbf{z/t})_j = \frac{e^{-z_j/t}}{\sum_{k=1}^K e^{-z_k/t}}

for :math:`j = 1, ..., K`. t is the temperature parameter in the softmin function. By default, t equals 1.0.

Example::

  x = [[ 1. 2. 3.] [ 3. 2. 1.]]
  softmin(x, axis=0) = [[ 0.88079703, 0.5, 0.11920292], [ 0.11920292, 0.5, 0.88079703]]
  softmin(x, axis=1) = [[ 0.66524094, 0.24472848, 0.09003057], [ 0.09003057, 0.24472848, 0.66524094]]

Defined in src/operator/nn/softmax.cc:L153

data: The input array.
axis: The axis along which to compute softmin. (optional)
temperature: Temperature parameter in softmin (optional)
dtype: DType of the output in case this can't be inferred. Defaults to the same as the input's dtype if not defined (dtype=None). (optional)
out: Output array. (optional)

### softsign

(softsign {:keys [data out], :or {out nil}, :as opts})

Computes softsign of x element-wise.

.. math:: y = x / (1 + abs(x))

The storage type of softsign output is always dense.

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L163

data: The input array.
out: Output array. (optional)

### sort

(sort {:keys [data axis is-ascend out], :or {axis nil, is-ascend nil, out nil}, :as opts})

Returns a sorted copy of an input array along the given axis.

Examples::

  x = [[ 1, 4], [ 3, 1]]
  // sorts along the last axis
  sort(x) = [[ 1., 4.], [ 1., 3.]]
  // flattens and then sorts
  sort(x) = [ 1., 1., 3., 4.]
  // sorts along the first axis
  sort(x, axis=0) = [[ 1., 1.], [ 3., 4.]]
  // in descending order
  sort(x, is_ascend=0) = [[ 4., 1.], [ 3., 1.]]

Defined in src/operator/tensor/ordering_op.cc:L127

data: The input array
axis: Axis along which to sort the input tensor. If not given, the flattened array is used. Default is -1. (optional)
is-ascend: Whether to sort in ascending or descending order. (optional)
out: Output array. (optional)

### space-to-depth

(space-to-depth data block-size)
(space-to-depth {:keys [data block-size out], :or {out nil}, :as opts})

Rearranges (permutes) blocks of spatial data into depth. Similar to the ONNX SpaceToDepth operator: https://github.com/onnx/onnx/blob/master/docs/Operators.md#SpaceToDepth

The output is a new tensor where the values from the height and width dimensions are moved to the depth dimension. The reverse of this operation is depth_to_space.

.. math:: \begin{gather*} x \prime = reshape(x, [N, C, H / block\_size, block\_size, W / block\_size, block\_size]) \\ x \prime \prime = transpose(x \prime, [0, 3, 5, 1, 2, 4]) \\ y = reshape(x \prime \prime, [N, C * (block\_size ^ 2), H / block\_size, W / block\_size]) \end{gather*}

where :math:`x` is an input tensor with default layout :math:`[N, C, H, W]` (batch, channels, height, width) and :math:`y` is the output tensor of layout :math:`[N, C * (block\_size ^ 2), H / block\_size, W / block\_size]`.

Example::

  x = [[[[0, 6, 1, 7, 2, 8], [12, 18, 13, 19, 14, 20], [3, 9, 4, 10, 5, 11], [15, 21, 16, 22, 17, 23]]]]
  space_to_depth(x, 2) = [[[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]], [[12, 13, 14], [15, 16, 17]], [[18, 19, 20], [21, 22, 23]]]]

Defined in src/operator/tensor/matrix_op.cc:L1104

data: Input ndarray
block-size: Blocks of [block_size, block_size] are moved
out: Output array. (optional)

### spatial-transformer

(spatial-transformer data loc transform-type sampler-type)
(spatial-transformer {:keys [data loc target-shape transform-type sampler-type cudnn-off out], :or {target-shape nil, cudnn-off nil, out nil}, :as opts})

Applies a spatial transformer to the input feature map.

data: Input data to the SpatialTransformerOp.
loc: localisation net; the output dim should be 6 when transform_type is affine. You should initialize the weight and bias with the identity transform.
target-shape: output shape (h, w) of the spatial transformer: (y, x) (optional)
transform-type: transformation type
sampler-type: sampling type
cudnn-off: whether to turn cudnn off (optional)
out: Output array. (optional)

### sqrt

(sqrt {:keys [data out], :or {out nil}, :as opts})

Returns the element-wise square-root value of the input.

.. math:: \textrm{sqrt}(x) = \sqrt{x}

Example:: sqrt([4, 9, 16]) = [2, 3, 4]

The storage type of sqrt output depends upon the input storage type:
- sqrt(default) = default
- sqrt(row_sparse) = row_sparse
- sqrt(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L907

data: The input array.
out: Output array. (optional)

### square

(square {:keys [data out], :or {out nil}, :as opts})

Returns the element-wise squared value of the input.

.. math:: square(x) = x^2

Example:: square([2, 3, 4]) = [4, 9, 16]

The storage type of square output depends upon the input storage type:
- square(default) = default
- square(row_sparse) = row_sparse
- square(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L883

data: The input array.
out: Output array. (optional)

### squeeze

(squeeze {:keys [data axis out], :or {axis nil, out nil}, :as opts})

Removes single-dimensional entries from the shape of an array. This behaves like numpy.squeeze when defining the output tensor shape in most cases; see the following note for the exception.

Examples::

  data = [[[0], [1], [2]]]
  squeeze(data) = [0, 1, 2]
  squeeze(data, axis=0) = [[0], [1], [2]]
  squeeze(data, axis=2) = [[0, 1, 2]]
  squeeze(data, axis=(0, 2)) = [0, 1, 2]

.. Note:: The output of this operator will keep at least one dimension not removed. For example, squeeze([[[4]]]) = [4], while in numpy.squeeze the output becomes a scalar.

data: data to squeeze
axis: Selects a subset of the single-dimensional entries in the shape. If an axis is selected with shape entry greater than one, an error is raised. (optional)
out: Output array. (optional)

### stack

(stack data num-args)
(stack {:keys [data axis num-args out], :or {axis nil, out nil}, :as opts})

Joins a sequence of arrays along a new axis. The axis parameter specifies the index of the new axis in the dimensions of the result. For example, if axis=0 it will be the first dimension, and if axis=-1 it will be the last dimension.

Examples::

  x = [1, 2]
  y = [3, 4]
  stack(x, y) = [[1, 2], [3, 4]]
  stack(x, y, axis=1) = [[1, 3], [2, 4]]

data: List of arrays to stack
axis: The axis in the result array along which the input arrays are stacked. (optional)
num-args: Number of inputs to be stacked.
out: Output array. (optional)

### sum

(sum {:keys [data axis keepdims exclude out], :or {axis nil, keepdims nil, exclude nil, out nil}, :as opts})

Computes the sum of array elements over given axes.

.. Note:: sum and sum_axis are equivalent. For an ndarray of csr storage type, summation along axis 0 and axis 1 is supported. Setting keepdims or exclude to True will cause a fallback to the dense operator.

Example::

  data = [[[1, 2], [2, 3], [1, 3]], [[1, 4], [4, 3], [5, 2]], [[7, 1], [7, 2], [7, 3]]]
  sum(data, axis=1) = [[ 4. 8.] [ 10. 9.] [ 21. 6.]]
  sum(data, axis=[1,2]) = [ 12. 19. 27.]

  data = [[1, 2, 0], [3, 0, 1], [4, 1, 0]]
  csr = cast_storage(data, 'csr')
  sum(csr, axis=0) = [ 8. 3. 1.]
  sum(csr, axis=1) = [ 3. 4. 5.]

data: The input
axis: The axis or axes along which to perform the reduction. The default, axis=(), will compute over all elements into a scalar array with shape (1,). If axis is int, a reduction is performed on a particular axis.
If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple. If exclude is true, reduction will be performed on the axes that are NOT in axis instead. Negative values mean indexing from right to left. (optional)
keepdims: If this is set to True, the reduced axes are left in the result as dimensions with size one. (optional)
exclude: Whether to perform reduction on axes that are NOT in axis instead. (optional)
out: Output array. (optional)

### svm-output

(svm-output data label)
(svm-output {:keys [data label margin regularization-coefficient use-linear out], :or {margin nil, regularization-coefficient nil, use-linear nil, out nil}, :as opts})

Computes a support vector machine based transformation of the input. This tutorial demonstrates using SVM as the output layer for classification instead of softmax: https://github.com/dmlc/mxnet/tree/master/example/svm_mnist.

data: Input data for SVM transformation.
label: Class label for the input data.
margin: The loss function penalizes outputs that lie outside this margin. Default margin is 1. (optional)
regularization-coefficient: Regularization parameter for the SVM. This balances the tradeoff between coefficient size and error. (optional)
use-linear: Whether to use the L1-SVM objective. The L2-SVM objective is used by default. (optional)
out: Output array. (optional)

### swap-axis

(swap-axis {:keys [data dim1 dim2 out], :or {dim1 nil, dim2 nil, out nil}, :as opts})

Interchanges two axes of an array.

Examples::

  x = [[1, 2, 3]]
  swapaxes(x, 0, 1) = [[ 1], [ 2], [ 3]]

  x = [[[ 0, 1], [ 2, 3]], [[ 4, 5], [ 6, 7]]] // (2,2,2) array
  swapaxes(x, 0, 2) = [[[ 0, 4], [ 2, 6]], [[ 1, 5], [ 3, 7]]]

Defined in src/operator/swapaxis.cc:L70

data: Input array.
dim1: the first axis to be swapped. (optional)
dim2: the second axis to be swapped. (optional)
out: Output array. (optional)

### take

(take a indices)
(take {:keys [a indices axis mode out], :or {axis nil, mode nil, out nil}, :as opts})

Takes elements from an input array along the given axis. This function slices the input array along a particular axis with the provided indices. Given a data tensor of rank r >= 1 and an indices tensor of rank q, gather entries of the axis dimension of data (by default the outer-most one, axis=0) indexed by indices, and concatenate them in an output tensor of rank q + (r - 1).

Examples::

  x = [4. 5. 6.]
  // Trivial case, take the second element along the first axis.
  take(x, [1]) = [ 5. ]
  // The other trivial case, axis=-1: take the third element along the first axis
  take(x, [3], axis=-1, mode='clip') = [ 6. ]

  x = [[ 1., 2.], [ 3., 4.], [ 5., 6.]]
  // In this case we will get rows 0 and 1, then 1 and 2. Along axis 0
  take(x, [[0,1],[1,2]]) = [[[ 1., 2.], [ 3., 4.]], [[ 3., 4.], [ 5., 6.]]]
  // In this case we will get rows 0 and 1, then 1 and 2 (calculated by wrapping around).
  // Along axis 1
  take(x, [[0, 3], [-1, -2]], axis=1, mode='wrap') = [[[ 1. 2.] [ 2. 1.]] [[ 3. 4.] [ 4. 3.]] [[ 5. 6.] [ 6. 5.]]]

The storage type of take output depends upon the input storage type:
- take(default, default) = default
- take(csr, default, axis=0) = csr

Defined in src/operator/tensor/indexing_op.cc:L695

a: The input array.
indices: The indices of the values to be extracted.
axis: The axis of the input array to be taken. For an input tensor of rank r, it could be in the range of [-r, r-1] (optional)
mode: Specify how out-of-bound indices behave. Default is "clip". "clip" means clip to the range.
So, if all indices mentioned are too large, they are replaced by the index that addresses the last element along an axis. "wrap" means to wrap around. "raise" means to raise an error; not supported yet. (optional)
out: Output array. (optional)

### tan

(tan {:keys [data out], :or {out nil}, :as opts})

Computes the element-wise tangent of the input array. The input should be in radians (:math:`2\pi` rad equals 360 degrees).

.. math:: tan([0, \pi/4, \pi/2]) = [0, 1, -inf]

The storage type of tan output depends upon the input storage type:
- tan(default) = default
- tan(row_sparse) = row_sparse
- tan(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L139

data: The input array.
out: Output array. (optional)

### tanh

(tanh {:keys [data out], :or {out nil}, :as opts})

Returns the hyperbolic tangent of the input array, computed element-wise.

.. math:: tanh(x) = sinh(x) / cosh(x)

The storage type of tanh output depends upon the input storage type:
- tanh(default) = default
- tanh(row_sparse) = row_sparse
- tanh(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L290

data: The input array.
out: Output array. (optional)

### tile

(tile data reps)
(tile {:keys [data reps out], :or {out nil}, :as opts})

Repeats the whole array multiple times. If reps has length *d* and the input array has dimension *n*, there are three cases:

- **n=d**. Repeat the *i*-th dimension of the input by reps[i] times::

    x = [[1, 2], [3, 4]]
    tile(x, reps=(2,3)) = [[ 1., 2., 1., 2., 1., 2.], [ 3., 4., 3., 4., 3., 4.], [ 1., 2., 1., 2., 1., 2.], [ 3., 4., 3., 4., 3., 4.]]

- **n>d**. reps is promoted to length *n* by pre-pending 1's to it. Thus for an input shape (2,3), reps=(2,) is treated as (1,2)::

    tile(x, reps=(2,)) = [[ 1., 2., 1., 2.], [ 3., 4., 3., 4.]]

- **n<d**. The input is promoted to be d-dimensional by prepending new axes. So a shape (2,2) array is promoted to (1,2,2) for 3-D replication::

    tile(x, reps=(2,2,3)) = [[[ 1., 2., 1., 2., 1., 2.], [ 3., 4., 3., 4., 3., 4.], [ 1., 2., 1., 2., 1., 2.], [ 3., 4., 3., 4., 3., 4.]], [[ 1., 2., 1., 2., 1., 2.], [ 3., 4., 3., 4., 3., 4.], [ 1., 2., 1., 2., 1., 2.], [ 3., 4., 3., 4., 3., 4.]]]

Defined in src/operator/tensor/matrix_op.cc:L857

data: Input data array
reps: The number of times to repeat the tensor a. Each dim size of reps must be a positive integer. If reps has length d, the result will have dimension of max(d, a.ndim); if a.ndim < d, a is promoted to be d-dimensional by prepending new axes. If a.ndim > d, reps is promoted to a.ndim by pre-pending 1's to it.
out: Output array. (optional)

### topk

(topk {:keys [data axis k ret-typ is-ascend dtype out], :or {axis nil, k nil, ret-typ nil, is-ascend nil, dtype nil, out nil}, :as opts})

Returns the top *k* elements in an input array along the given axis. The returned elements will be sorted.
Examples::

  x = [[ 0.3, 0.2, 0.4], [ 0.1, 0.3, 0.2]]
  // returns an index of the largest element on last axis
  topk(x) = [[ 2.], [ 1.]]
  // returns the value of top-2 largest elements on last axis
  topk(x, ret_typ='value', k=2) = [[ 0.4, 0.3], [ 0.3, 0.2]]
  // returns the value of top-2 smallest elements on last axis
  topk(x, ret_typ='value', k=2, is_ascend=1) = [[ 0.2, 0.3], [ 0.1, 0.2]]
  // returns the value of top-2 largest elements on axis 0
  topk(x, axis=0, ret_typ='value', k=2) = [[ 0.3, 0.3, 0.4], [ 0.1, 0.2, 0.2]]
  // flattens and then returns list of both values and indices
  topk(x, ret_typ='both', k=2) = [[[ 0.4, 0.3], [ 0.3, 0.2]], [[ 2., 0.], [ 1., 2.]]]

Defined in src/operator/tensor/ordering_op.cc:L64

data: The input array
axis: Axis along which to choose the top k indices. If not given, the flattened array is used. Default is -1. (optional)
k: Number of top elements to select; should always be smaller than or equal to the number of elements on the given axis. A global sort is performed if k < 1 is set. (optional)
ret-typ: The return type. "value" means to return the top k values, "indices" means to return the indices of the top k values, "mask" means to return a mask array containing 0 and 1 (1 means the top k values), and "both" means to return a list of both values and indices of the top k elements. (optional)
is-ascend: Whether to choose the k largest or k smallest elements. The top k largest elements will be chosen if set to false. (optional)
dtype: DType of the output indices when ret_typ is "indices" or "both". An error will be raised if the selected data type cannot precisely represent the indices. (optional)
out: Output array. (optional)

### transpose

(transpose {:keys [data axes out], :or {axes nil, out nil}, :as opts})

Permutes the dimensions of an array.

Examples::

  x = [[ 1, 2], [ 3, 4]]
  transpose(x) = [[ 1., 3.], [ 2., 4.]]

  x = [[[ 1., 2.], [ 3., 4.]], [[ 5., 6.], [ 7., 8.]]]
  transpose(x) = [[[ 1., 5.], [ 3., 7.]], [[ 2., 6.], [ 4., 8.]]]
  transpose(x, axes=(1,0,2)) = [[[ 1., 2.], [ 5., 6.]], [[ 3., 4.], [ 7., 8.]]]

Defined in src/operator/tensor/matrix_op.cc:L375

data: Source input
axes: Target axis order. By default the axes will be inverted. (optional)
out: Output array. (optional)

### trunc

(trunc {:keys [data out], :or {out nil}, :as opts})

Returns the element-wise truncated value of the input. The truncated value of the scalar x is the nearest integer i which is closer to zero than x is. In short, the fractional part of the signed number x is discarded.

Example:: trunc([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-2., -1., 1., 1., 2.]

The storage type of trunc output depends upon the input storage type:
- trunc(default) = default
- trunc(row_sparse) = row_sparse
- trunc(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L825

data: The input array.
out: Output array. (optional)

### up-sampling

(up-sampling data scale sample-type num-args)
(up-sampling {:keys [data scale num-filter sample-type multi-input-mode num-args workspace out], :or {num-filter nil, multi-input-mode nil, workspace nil, out nil}, :as opts})

Upsamples the given input data. Two algorithms (sample_type) are available for upsampling:

- Nearest Neighbor
- Bilinear

**Nearest Neighbor Upsampling**

Input data is expected to be NCHW.

Example::

  x = [[[[1. 1. 1.] [1. 1. 1.] [1. 1. 1.]]]]
  UpSampling(x, scale=2, sample_type='nearest') = [[[[1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1.]]]]

**Bilinear Upsampling**

Uses the deconvolution algorithm under the hood.
You need to provide both the input data and the kernel. Input data is expected to be NCHW. num_filter is expected to be the same as the number of channels.

Example::

  x = [[[[1. 1. 1.] [1. 1. 1.] [1. 1. 1.]]]]
  w = [[[[1. 1. 1. 1.] [1. 1. 1. 1.] [1. 1. 1. 1.] [1. 1. 1. 1.]]]]
  UpSampling(x, w, scale=2, sample_type='bilinear', num_filter=1) = [[[[1. 2. 2. 2. 2. 1.] [2. 4. 4. 4. 4. 2.] [2. 4. 4. 4. 4. 2.] [2. 4. 4. 4. 4. 2.] [2. 4. 4. 4. 4. 2.] [1. 2. 2. 2. 2. 1.]]]]

Defined in src/operator/nn/upsampling.cc:L173

data: Array of tensors to upsample. For bilinear upsampling, there should be 2 inputs - 1 data and 1 weight.
scale: Up sampling scale
num-filter: Input filter. Only used by bilinear sample_type. Since bilinear upsampling uses deconvolution, num_filters is set to the number of channels. (optional)
sample-type: upsampling method
multi-input-mode: How to handle multiple input. concat means concatenate upsampled images along the channel dimension. sum means add all images together; only available for nearest neighbor upsampling. (optional)
num-args: Number of inputs to be upsampled. For nearest neighbor upsampling, this can be 1-N; the size of the output will be (scale*h_0, scale*w_0) and all other inputs will be upsampled to the same size. For bilinear upsampling this must be 2; 1 input and 1 weight.
workspace: Tmp workspace for deconvolution (MB) (optional)
out: Output array. (optional)

### where

(where condition x y)
(where {:keys [condition x y out], :or {out nil}, :as opts})

Returns the elements, either from x or y, depending on the condition.

Given three ndarrays, condition, x, and y, return an ndarray with the elements from x or y, depending on whether the corresponding elements from condition are true or false. x and y must have the same shape. If condition has the same shape as x, each element in the output array is from x if the corresponding element in condition is true, and from y if false.

If condition does not have the same shape as x, it must be a 1D array whose size is the same as x's first dimension size. Each row of the output array is from x's row if the corresponding element from condition is true, and from y's row if false.

Note that all non-zero values are interpreted as True in condition.

Examples::

  x = [[1, 2], [3, 4]]
  y = [[5, 6], [7, 8]]
  cond = [[0, 1], [-1, 0]]
  where(cond, x, y) = [[5, 2], [3, 8]]

  csr_cond = cast_storage(cond, 'csr')
  where(csr_cond, x, y) = [[5, 2], [3, 8]]

Defined in src/operator/tensor/control_flow_op.cc:L57

condition: condition array
x:
y:
out: Output array. (optional)

### zeros-like

(zeros-like {:keys [data out], :or {out nil}, :as opts})

Returns an array of zeros with the same shape, type and storage type as the input array. The storage type of zeros_like output depends on the storage type of the input:

- zeros_like(row_sparse) = row_sparse
- zeros_like(csr) = csr
- zeros_like(default) = default

Examples::

  x = [[ 1., 1., 1.], [ 1., 1., 1.]]
  zeros_like(x) = [[ 0., 0., 0.], [ 0., 0., 0.]]

data: The input
out: Output array. (optional)
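To tie a few of these operators together, here is a short usage sketch. It assumes the functions above are generated into the org.apache.clojure-mxnet.ndarray namespace, as in the Clojure MXNet package; the namespace alias and the exact coercion of the begin/end arguments are assumptions, not guaranteed by this reference.

```clojure
(ns example.ops
  (:require [org.apache.clojure-mxnet.ndarray :as nd]))

;; a 3x4 ndarray matching the slice example above
(def x (nd/array [1 2 3 4 5 6 7 8 9 10 11 12] [3 4]))

;; slice rows 0..1 (exclusive end 2) and columns 1..3
(def sliced (nd/slice x [0 1] [2 4]))   ;; expected [[2 3 4] [6 7 8]]

;; element-wise sine, and a row-wise softmax via the options-map arity
(def s     (nd/sin x))
(def probs (nd/softmax {:data x :axis 1}))

;; sorted copy along the first axis
(def sorted-x (nd/sort {:data x :axis 0}))

(nd/->vec sliced)                        ;; flatten to a Clojure vector
```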
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.409523069858551, "perplexity": 20337.807648822905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00282.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/dcdss.2012.5.641
# American Institute of Mathematical Sciences

June 2012, 5(3): 641-656. doi: 10.3934/dcdss.2012.5.641

## An explicit stable numerical scheme for the $1D$ transport equation

1 Commissariat à l'Énergie Atomique (CEA), DEN/DANS/DM2S/SFME/LETR, 91191 Gif-sur-Yvette, France

Received August 2010; Revised October 2010; Published October 2011

We derive in this paper a numerical scheme in order to calculate solutions of $1D$ transport equations. This second-order scheme is based on the method of characteristics and consists of two steps: the first step concerns the approximation of the foot of the characteristic curve, whereas the second one deals with the computation of the solution at this point. The main idea of our scheme is to combine two second-order interpolation schemes so as to preserve the maximum principle. The resulting method is designed for classical solutions and is unconditionally stable.

Citation: Yohan Penel. An explicit stable numerical scheme for the $1D$ transport equation. Discrete and Continuous Dynamical Systems - S, 2012, 5(3): 641-656. doi: 10.3934/dcdss.2012.5.641
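The "foot of the characteristic curve" in the abstract refers to the classical method-of-characteristics setting. As background only (a standard summary, not the paper's actual second-order discretisation), the $1D$ transport problem reads

$$\partial_t u + a(x,t)\,\partial_x u = 0, \qquad u(x,0)=u_0(x),$$

its characteristic curves solve $\frac{dX}{dt} = a(X(t),t)$ with the end condition $X(t^{n+1}) = x_j$, and $u$ is constant along them, so advancing one time step reduces to locating the foot $X(t^n)$ of the characteristic and interpolating the previous solution there:

$$u_j^{n+1} = u^n\bigl(X(t^n)\bigr).$$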
Chorin, "A Mathematical Introduction to Fluid Mechanics," Springer-Verlag, New-York-Heidelberg, 1979. [13] S. Osher and J. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys., 79 (1988), 12-49. doi: 10.1016/0021-9991(88)90002-2. [14] Y. Penel, "Étude Théorique et Numérique de la Déformation d'une Interface Séparant deux Fluides Non-Miscibles à Bas Nombre de Mach," Ph.D Thesis, Univ. Paris 13, Available at: http://hal.archives-ouvertes.fr/tel-00547865. [15] Y. Penel, S. Dellacherie and O. Lafitte, Global solutions to the 1D Abstract Bubble Vibration model, Submitted. [16] O. Pironneau, On the transport-diffusion algorithm and its applications to the Navier-Stokes equations, Numer. Math., 38 (1981/82), 309-332. doi: 10.1007/BF01396435. show all references ##### References: [1] C. Bardos, M. Bercovier and O. Pironneau, The vortex method with finite elements, Math. Comp., 36 (1981), 119-136. doi: 10.1090/S0025-5718-1981-0595046-3. [2] F. Boyer and P. Fabrie, "Éléments d'Analyse pour l'Étude de Quelques Modèles d'Écoulements de Fluides Visqueux Incompressibles,'' Mathématiques & Applications (Berlin), 52, Springer-Verlag, Berlin, 2006. [3] J. Burgers, "A Mathematical Model Illustrating the Theory of Turbulence," edited by Richard von Mises and Theodore von Kármán, Adv. Appl. Mech., Academic Press, Inc., New York, (1948), 171-199. doi: 10.1016/S0065-2156(08)70100-5. [4] S. Dellacherie, On a diphasic low Mach number system, M2AN Math. Model. Numer. Anal., 39 (2005), 487-514. doi: 10.1051/m2an:2005020. [5] B. Després and F. Lagoutière, Contact discontinuity capturing schemes for linear advection and compressible gas dynamics, J. Sci. Comput., 16 (2001), 479-524. doi: 10.1023/A:1013298408777. [6] J. Douglas Jr. and T. Russell, Numerical methods for convection-dominated diffusion problems based on combining the method of characteristics with finite element or finite difference procedures, SIAM J. Numer. Anal., 19 (1982), 871-885. [7] J. Douglas Jr., C.-S. Huang and F. Pereira, The modified method of characteristics with adjusted advection, Numer. Math., 83 (1999), 353-369. doi: 10.1007/s002110050453. [8] G. Fourestey, "Simulation Numérique et Contrôle Optimal d'Interactions Fluide Incompressible/Structure par une Méthode de Lagrange-Galerkin d'Ordre 2,'' Ph.D Thesis, École Nationale des Ponts et Chaussées, 2002. Available at: http://hal.archives-ouvertes.fr/tel-00005675. [9] E. Godlewski and P.-A. Raviart, "Numerical Approximation of Hyperbolic Systems of Conservation Laws,'' Applied Mathematical Sciences, 118, Springer-Verlag, New York, 1996. [10] F. Holly and A. Preissmann, Accurate calculation of transport in two dimensions, J. Hydr. Div., 103 (1977), 1259-1277. [11] R. LeVeque, "Numerical Methods for Conservation Laws," Second edition, Lectures in Mathematics ETH Zürich, Birkhäuser Verlag, Basel, 1992. [12] J. Marsden and A. Chorin, "A Mathematical Introduction to Fluid Mechanics," Springer-Verlag, New-York-Heidelberg, 1979. [13] S. Osher and J. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys., 79 (1988), 12-49. doi: 10.1016/0021-9991(88)90002-2. [14] Y. Penel, "Étude Théorique et Numérique de la Déformation d'une Interface Séparant deux Fluides Non-Miscibles à Bas Nombre de Mach," Ph.D Thesis, Univ. Paris 13, Available at: http://hal.archives-ouvertes.fr/tel-00547865. [15] Y. Penel, S. Dellacherie and O. 
Lafitte, Global solutions to the 1D Abstract Bubble Vibration model, Submitted. [16] O. Pironneau, On the transport-diffusion algorithm and its applications to the Navier-Stokes equations, Numer. Math., 38 (1981/82), 309-332. doi: 10.1007/BF01396435. [1] Jisheng Kou, Huangxin Chen, Xiuhua Wang, Shuyu Sun. A linear, decoupled and positivity-preserving numerical scheme for an epidemic model with advection and diffusion. Communications on Pure and Applied Analysis, , () : -. doi: 10.3934/cpaa.2021094 [2] Abdollah Borhanifar, Maria Alessandra Ragusa, Sohrab Valizadeh. High-order numerical method for two-dimensional Riesz space fractional advection-dispersion equation. Discrete and Continuous Dynamical Systems - B, 2021, 26 (10) : 5495-5508. doi: 10.3934/dcdsb.2020355 [3] Yones Esmaeelzade Aghdam, Hamid Safdari, Yaqub Azari, Hossein Jafari, Dumitru Baleanu. Numerical investigation of space fractional order diffusion equation by the Chebyshev collocation method of the fourth kind and compact finite difference scheme. Discrete and Continuous Dynamical Systems - S, 2021, 14 (7) : 2025-2039. doi: 10.3934/dcdss.2020402 [4] Xu Yang, François Golse, Zhongyi Huang, Shi Jin. Numerical study of a domain decomposition method for a two-scale linear transport equation. Networks and Heterogeneous Media, 2006, 1 (1) : 143-166. doi: 10.3934/nhm.2006.1.143 [5] Kai Qu, Qi Dong, Chanjie Li, Feiyu Zhang. Finite element method for two-dimensional linear advection equations based on spline method. Discrete and Continuous Dynamical Systems - S, 2021, 14 (7) : 2471-2485. doi: 10.3934/dcdss.2021056 [6] Alexandre Caboussat, Roland Glowinski. A Numerical Method for a Non-Smooth Advection-Diffusion Problem Arising in Sand Mechanics. Communications on Pure and Applied Analysis, 2009, 8 (1) : 161-178. doi: 10.3934/cpaa.2009.8.161 [7] Helge Holden, Xavier Raynaud. A convergent numerical scheme for the Camassa--Holm equation based on multipeakons. Discrete and Continuous Dynamical Systems, 2006, 14 (3) : 505-523. doi: 10.3934/dcds.2006.14.505 [8] Marco Berardi, Fabio V. Difonzo. A quadrature-based scheme for numerical solutions to Kirchhoff transformed Richards' equation. Journal of Computational Dynamics, 2022, 9 (2) : 69-84. doi: 10.3934/jcd.2022001 [9] Amy Allwright, Abdon Atangana. Augmented upwind numerical schemes for a fractional advection-dispersion equation in fractured groundwater systems. Discrete and Continuous Dynamical Systems - S, 2020, 13 (3) : 443-466. doi: 10.3934/dcdss.2020025 [10] Wen Li, Song Wang, Volker Rehbock. A 2nd-order one-point numerical integration scheme for fractional ordinary differential equations. Numerical Algebra, Control and Optimization, 2017, 7 (3) : 273-287. doi: 10.3934/naco.2017018 [11] Zeyu Xia, Xiaofeng Yang. A second order accuracy in time, Fourier pseudo-spectral numerical scheme for "Good" Boussinesq equation. Discrete and Continuous Dynamical Systems - B, 2020, 25 (9) : 3749-3763. doi: 10.3934/dcdsb.2020089 [12] Mario Bukal. Well-posedness and convergence of a numerical scheme for the corrected Derrida-Lebowitz-Speer-Spohn equation using the Hellinger distance. Discrete and Continuous Dynamical Systems, 2021, 41 (7) : 3389-3414. doi: 10.3934/dcds.2021001 [13] Armando Majorana. A numerical model of the Boltzmann equation related to the discontinuous Galerkin method. Kinetic and Related Models, 2011, 4 (1) : 139-151. doi: 10.3934/krm.2011.4.139 [14] Roberto Camassa, Pao-Hsiung Chiu, Long Lee, W.-H. Sheu. 
A particle method and numerical study of a quasilinear partial differential equation. Communications on Pure and Applied Analysis, 2011, 10 (5) : 1503-1515. doi: 10.3934/cpaa.2011.10.1503 [15] Jaemin Shin, Yongho Choi, Junseok Kim. An unconditionally stable numerical method for the viscous Cahn--Hilliard equation. Discrete and Continuous Dynamical Systems - B, 2014, 19 (6) : 1737-1747. doi: 10.3934/dcdsb.2014.19.1737 [16] Mathias Dus. The discretized backstepping method: An application to a general system of $2\times 2$ linear balance laws. Mathematical Control and Related Fields, 2022  doi: 10.3934/mcrf.2022006 [17] Faranak Rabiei, Fatin Abd Hamid, Zanariah Abd Majid, Fudziah Ismail. Numerical solutions of Volterra integro-differential equations using General Linear Method. Numerical Algebra, Control and Optimization, 2019, 9 (4) : 433-444. doi: 10.3934/naco.2019042 [18] Ying Liu, Yanping Chen, Yunqing Huang, Yang Wang. Two-grid method for semiconductor device problem by mixed finite element method and characteristics finite element method. Electronic Research Archive, 2021, 29 (1) : 1859-1880. doi: 10.3934/era.2020095 [19] Li Yang, Zeng Rong, Shouming Zhou, Chunlai Mu. Uniqueness of conservative solutions to the generalized Camassa-Holm equation via characteristics. Discrete and Continuous Dynamical Systems, 2018, 38 (10) : 5205-5220. doi: 10.3934/dcds.2018230 [20] Roberto Camassa. Characteristics and the initial value problem of a completely integrable shallow water equation. Discrete and Continuous Dynamical Systems - B, 2003, 3 (1) : 115-139. doi: 10.3934/dcdsb.2003.3.115 2021 Impact Factor: 1.865
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5281133651733398, "perplexity": 5266.095296298906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104676086.90/warc/CC-MAIN-20220706182237-20220706212237-00312.warc.gz"}
https://e-schultz.dk/journal/cgv8z.php?id=2598ec-voltage-drop-calculator
# voltage drop calculator

Voltage drop means the reduction in voltage, or voltage loss, between the point of supply and the load. Current flow is based on electron movement, and moving an electron from one atom to the next consumes some energy; how much is lost is decided by the physical features of the conducting elements. Too much voltage drop may result in damage and improper function of electrical and electronic apparatus. To verify the voltage drop, Ohm's law and Kirchhoff's circuit law are used, as briefed below.

The problem with voltage drop is wasted power. For example, if you supply a 21 Ω heater from a 230 V supply through a cable with, say, 2 Ω of resistance, the current is 230 V / 23 Ω = 10 A, 20 V is dropped across the cable, and P = 20 V × 10 A = 200 W will be wasted as heat in the wire.

This free online calculator computes drop voltage and energy losses in a wire:

Step 2: Enter the diameter of the cable, choosing a unit such as AWG, mm or inch. This is used to refer to the circular cross-sectional area of the wire or conductor. By choosing the right one, you can change the units.
Step 3: Enter the length of the cable in feet or meters.
Step 5: Click the calculate button and you get the results of the voltage drop calculator online.

After your cable selection, the resistivity will be taken into account automatically; you do not need to enter the value. And by pressing the reset button, you can change all values.

For understanding the voltage drop in a DC circuit, we can take an example of a 100 ft power line; so, for 2 lines, 2 × 100 ft = 200 ft of conductor. Let the electrical resistance be 1.02 Ω/1000 ft and the current be 10 A, where I → electrical current (A) and R → resistance of the cable conductor (Ω/1000 ft). R will be calculated from the resistance per unit length, R = (1.02 Ω/1000 ft) × 200 ft = 0.204 Ω, and the value of the drop will be V = I × R = 10 A × 0.204 Ω ≈ 2.04 V.

For an AC circuit, the voltage drop is E_VD = IR cosθ + IX sinθ, where the abbreviations are the same as under the "Exact method" below.

Requirements relating to maximum permitted voltage drop: the utility limits the voltage drop at the Point of Supply to 2%, measured between the Point of Supply and the load. You can also calculate the load current with the Maximum Demand Calculator with Examples (AS/NZS 3000); the maximum demand current according to AS 3000:2007 Table C1 for one 10 A socket in a room is 10 A.

The single-phase voltage drop is

$$\Delta V_{1\phi-ac}=\dfrac{I L 2 Z_c}{1000}$$

For an 8 mm² two-core cable, with the AS/NZS 3008 impedance value Z_c = 2.232 Ω/km, a 15 A load over 30 m gives

$$\Delta V_{1\phi-ac}=\dfrac{15 \cdot 30 \cdot 2 \cdot 2.232}{1000} = 2.01\ \text{V}$$

and the percentage voltage drop is calculated as

$$\% V_{1\phi-ac}= \dfrac{2.01}{230} \cdot 100 \approx 0.87\%$$

Another example:

- Voltage: 230 VAC, 1-phase
- Load: 0.75 kW, 0.85 power factor
- Distance: 40 m
- Conductor size: 4 mm²

The resistance value in AS/NZS 3008 for a 4 mm² two-core cable is R_c = 5.61 Ω/km, from Table 35 (multi-core, circular, at 75°C). Also note that there is no specific table in AS/NZS 3008 for DC resistance.

$$I = \dfrac{750}{230 \times 0.85} = \text{3.84 A}$$

$$\Delta V_{1\phi-ac}=\dfrac{3.84 \cdot 40 \cdot 2 \cdot 5.61}{1000} = 1.72\ \text{V}$$

$$\% V_{1\phi-ac}= \dfrac{1.72}{230} \cdot 100 \approx 0.75\%$$
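As a cross-check of the single-phase formula above, here is a small sketch in Clojure (chosen to match the language of the API reference earlier in this document). The function names and argument order are illustrative, not part of any calculator's actual API.

```clojure
;; Single-phase AC voltage drop per the formula above:
;; delta-V = (I * L * 2 * Zc) / 1000, with L in metres and Zc in ohms/km.
(defn voltage-drop-1ph [current-a length-m zc-ohm-per-km]
  (/ (* current-a length-m 2 zc-ohm-per-km) 1000.0))

(defn percent-drop [dv supply-v]
  (* 100.0 (/ dv supply-v)))

;; 4 mm2 example: 0.75 kW at 230 V, 0.85 pf -> I = 3.84 A over 40 m
(let [i  (/ 750.0 (* 230 0.85))
      dv (voltage-drop-1ph i 40 5.61)]
  [dv (percent-drop dv 230)])
;; => [~1.72 ~0.75], matching the worked example above
```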
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5588986277580261, "perplexity": 1612.123920818137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545548.56/warc/CC-MAIN-20220522125835-20220522155835-00662.warc.gz"}
http://math.stackexchange.com/questions/249739/prove-that-this-topology-yields-a-t-3-space
# Prove that this topology yields a $T_3$ space.

Here is the problem: Let $(X,\tau)$ be a topological space and $A \subset X$. Let $\tau_A = \{\, U \cup (V \cap A) : U, V \in \tau \,\}$. I have to prove that if $(X, \tau)$ is $T_3$ and $A$ is a closed subset, then $(X, \tau_A)$ is $T_3$.

So I have to prove two things:

(i) $X$ is $T_1$;

(ii) for each closed subset $K$ and $x \notin K$ there exist $U, V \in \tau_A$ such that $x \in U$, $K \subset V$ and $U \cap V = \emptyset$.

To prove (i) I did: Let $p$ and $q$ be different points of $X$. Because $(X,\tau)$ is $T_3$, and so $T_1$, there exists an open $V$ in $X$ such that $p \in V$ and $q \notin V$. Now let $W = V \cup ((X \setminus A) \cap A)$, so $W = V$, and $W$ is open in $(X, \tau_A)$. Then $p \in W$ and $q \notin W$, so $(X, \tau_A)$ is $T_1$.

But I don't know how to prove (ii). Could you help me? Thank you.

-

$\newcommand{\cl}{\operatorname{cl}}$ Probably an important thing to investigate is the connection between the topologies $\tau$ and $\tau_A$.

- $\tau_A$ is finer than $\tau$. This can easily be seen by noting that if $U \subseteq X$ is $\tau$-open, then $U = U \cup ( \emptyset \cap A )$ is $\tau_A$-open.
- Since $\tau_A$ is finer than $\tau$, then $\cl_{\tau_A} ( B ) \subseteq \cl_{\tau} ( B )$ for every $B \subseteq X$. (Where $\cl_\sigma ( B )$ denotes the $\sigma$-closure of $B$.)

Next, it is probably easier to prove this using the open-neighbourhood characterisation of regularity: A $T_1$-space $Y$ is regular if given any $y \in Y$ and any open neighbourhood $W$ of $y$ there is an open $V \subseteq Y$ such that $y \in V \subseteq \cl (V) \subseteq W$.

Hint: Suppose $x \in X$ and $W$ is any $\tau_A$-open neighbourhood of $x$. Then there are $\tau$-open sets $U_1 , U_2$ such that $W = U_1 \cup ( U_2 \cap A )$. Handle the cases $x \in U_1$ and $x \in U_2 \cap A$ separately, using the regularity of $\tau$ in each part.
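One way the second case of the hint might be finished (my own completion, not part of the original answer, using that $A$ is $\tau$-closed so that $\cl_\tau(V \cap A) \subseteq \cl_\tau(V) \cap A$): if $x \in U_2 \cap A$, by regularity of $(X,\tau)$ pick a $\tau$-open $V$ with $x \in V \subseteq \cl_\tau(V) \subseteq U_2$. Then $V \cap A$ is $\tau_A$-open and

$$x \in V \cap A \subseteq \cl_{\tau_A}(V \cap A) \subseteq \cl_{\tau}(V \cap A) \subseteq \cl_{\tau}(V) \cap A \subseteq U_2 \cap A \subseteq W.$$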
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9905626773834229, "perplexity": 81.27204766190887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931008919.73/warc/CC-MAIN-20141125155648-00103-ip-10-235-23-156.ec2.internal.warc.gz"}
https://plainmath.net/90218/want-to-understand-how-a-fraction-is-sim
# Want to understand how a fraction is simplified

The fraction is used to determine the sum of a telescoping series:

$$\sum_{k=1}^{\infty}\frac{1}{(k+1)\sqrt{k}+k\sqrt{k+1}}$$

This is the solved fraction:

$$\frac{1}{(k+1)\sqrt{k}+k\sqrt{k+1}}=\frac{(k+1)\sqrt{k}-k\sqrt{k+1}}{(k+1)^{2}k-k^{2}(k+1)}=\frac{(k+1)\sqrt{k}-k\sqrt{k+1}}{k^{3}+2k^{2}+k-k^{3}-k^{2}}=\frac{(k+1)\sqrt{k}-k\sqrt{k+1}}{k(k+1)}=\frac{\sqrt{k}}{k}-\frac{\sqrt{k+1}}{k+1}=\frac{1}{\sqrt{k}}-\frac{1}{\sqrt{k+1}}$$

I want to understand what is being done in every step, mainly the last three. Thank you.

Ashlee Ramos

First equality: multiply and divide by $(k+1)\sqrt{k}-k\sqrt{k+1}$.

Second equality: expand the denominator.

Third equality: cancel obvious terms in the denominator and write $k^{2}+k=k(k+1)$.

Fourth equality: distribute along the minus in the numerator, and cancel the obvious factors $(k+1)$ in the first summand and $k$ in the second one:

$$\frac{(k+1)\sqrt{k}-k\sqrt{k+1}}{k(k+1)}=\frac{(k+1)\sqrt{k}}{k(k+1)}-\frac{k\sqrt{k+1}}{k(k+1)}=\frac{\sqrt{k}}{k}-\frac{\sqrt{k+1}}{k+1}.$$

Fifth equality: $\frac{\sqrt{k}}{k}=\frac{1}{\sqrt{k}}$, and similarly for $k+1$.

tuzkutimonq4

Another way to look at it is to first "declutter" the expression, since all those radicals obfuscate the simple structure. Let $a=\sqrt{k}$, $b=\sqrt{k+1}$; then:

$$\frac{1}{(k+1)\sqrt{k}+k\sqrt{k+1}}=\frac{1}{ab^{2}+a^{2}b}=\frac{1}{ab(a+b)}$$

Now consider that $(b+a)(b-a)=b^{2}-a^{2}=(k+1)-k=1$, so $\frac{1}{a+b}=b-a$. Then:

$$\frac{1}{ab(a+b)}=\frac{b-a}{ab}=\frac{b}{ab}-\frac{a}{ab}=\frac{1}{a}-\frac{1}{b}=\frac{1}{\sqrt{k}}-\frac{1}{\sqrt{k+1}}$$
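Neither answer carries out the final telescoping step the question is building toward; for completeness (a standard computation once the terms are in the last form above):

$$\sum_{k=1}^{n}\left(\frac{1}{\sqrt{k}}-\frac{1}{\sqrt{k+1}}\right)=1-\frac{1}{\sqrt{n+1}}\longrightarrow 1 \quad (n \to \infty),$$

so the series sums to $1$.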
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 36, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9662762880325317, "perplexity": 900.5629952917301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00231.warc.gz"}
http://bmjopen.bmj.com/content/7/2/e013155?cpetoc=
Intimate partner violence among pregnant women in Rwanda, its associated risk factors and relationship to ANC services attendance: a population-based study

1. Akashi Andrew Rurangirwa1,2, 2. Ingrid Mogren3, 3. Joseph Ntaganira1, 4. Gunilla Krantz2

1. Department of Epidemiology and Biostatistics, School of Public Health, University of Rwanda, Rwanda
2. Section of Epidemiology and Social Medicine (EPSO), Department of Public Health and Community Medicine, The Sahlgrenska Academy at University of Gothenburg, Sweden
3. Department of Clinical Sciences, Obstetrics and Gynecology, Umeå University, Sweden

Correspondence to Dr Akashi Andrew Rurangirwa; rakashi@nursph.org

Abstract

Objectives To investigate the prevalence of four forms of intimate partner violence during pregnancy in Rwandan women, associated sociodemographic and psychosocial factors and relationship to antenatal care service usage.

Design This was a cross-sectional population-based study conducted in the Northern province of Rwanda and in Kigali city.

Participants and settings A total of 921 women who gave birth within the past 13 months were included. Villages in the study area were selected using a multistage random sampling technique, and community health workers helped in identifying eligible participants. Clinical psychologists, nurses or midwives carried out face-to-face interviews using a structured questionnaire. Bivariable and multivariable logistic regression were used to assess associations.

Results The prevalence rates of physical, sexual, psychological violence and controlling behaviour during pregnancy were 10.2% (95% CI 8.3 to 12.2), 9.7% (95% CI 7.8 to 11.6), 17.0% (95% CI 14.6 to 19.4) and 20.0% (95% CI 17.4 to 22.6), respectively. Usage of antenatal care services was less common among women who reported controlling behaviour (OR 1.93, 95% CI 1.34 to 2.79). No statistically significant associations between physical, psychological and sexual violence and antenatal care usage were found. Low socioeconomic status was associated with physical violence exposure (OR 2.27, 95% CI 1.29 to 3.98). Also, young age, living in urban areas and poor social support were statistically significant in their associations with violence exposure during pregnancy.

Conclusions Intimate partner violence inquiry should be included in the standard antenatal care services package, and professionals should be trained in giving support, advice and care to those exposed. Gender-based violence is criminalised behaviour in Rwanda; existing policies and laws must be followed and awareness raised in society for preventive purposes.

• SOCIAL MEDICINE

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/

Strengths and limitations of this study

• This is the first study in Rwanda that has investigated all forms of intimate partner violence during pregnancy, its associated risk factors and its association with antenatal care service attendance.
• We had a large number of participants and a very low non-response rate.
• We used internationally recognised data collection tools that have been successfully used in similar settings.
• Owing to the sensitive nature of intimate partner violence, under-reporting of some violent events cannot be ruled out.
• Some data were collected retrospectively from respondents, which can result in recall bias.

Introduction

Intimate partner violence (IPV) refers to behaviour by an intimate partner or ex-partner that causes physical, sexual or psychological harm, including physical aggression, sexual coercion, psychological abuse and controlling behaviours.1 IPV exerted towards a pregnant woman may have the most devastating health and social consequences, both for the woman herself and for the fetus,2–4 and it may determine whether and when a pregnant woman seeks antenatal care services.5–8 Physical/sexual abuse may cause a range of health problems such as sexually transmitted infections, chronic pain and fractures, as well as stress, anxiety and depression,9 and inability to be a good parent after childbirth.6 Furthermore, violence during pregnancy has been associated with fetal growth restriction, adverse pregnancy outcomes and childhood growth impairment.10,11 Depression and stress may subsequently lead to increased levels of stress hormones during pregnancy and reduced placental circulation.10 Violence may therefore partly underlie the fetal origins of adult disease theory (fetal programming), since evidence shows that intrauterine growth restriction is associated with child and adulthood diseases.12

The overall prevalence rate of IPV (physical, sexual and/or psychological violence) during pregnancy in the developed world ranges between 10% and 20%.5,13 In African countries, the overall prevalence rates of IPV during pregnancy are some of the highest in the world: as high as 25%, 34% and 61% in Ethiopia, Zimbabwe and the Gambia, respectively.3,6,8,14 It has been suggested that IPV against women and its effects may be exacerbated in resource-limited settings, such as Rwanda and many other African countries, due to gender inequality and cultural and economic barriers that restrain women from becoming economically independent.15,16 This situation compels women to accept violence exposure from the husband/partner, and the healthcare services are most often inadequate in terms of identifying abused women and offering support. The effects of violence on pregnant women in low-income countries, such as intrauterine growth restriction, miscarriages, preterm birth and fetal death, could therefore be related to delayed, incomplete or inadequate antenatal care service attendance, partly as a result of the IPV that pregnant women encounter.8,17–19

Rwanda is a low-income country in central Africa with ∼12 million inhabitants.20 Sixty-four per cent of women and 66% of men have completed or received some primary school education, whereas 12% of women and 9% of men have no formal education.21,22 The majority of the low-educated or illiterate women live in rural areas, where the fertility rate is higher than the country average of 4.6 children per family.
Women are mainly involved in housework and small-scale agricultural activities to contribute to feeding their extended families.22 Rwanda is a patriarchal society, where IPV may be perceived as a confidential family matter and considered acceptable in order to keep the family together,23 although gender-based violence is a punishable offence in the Rwandan penal code.24 Studies investigating IPV in Rwanda show that its prevalence ranges between 16% and 50% and that it occurs in women and men, although women are more frequently and seriously affected.22,25,26 However, these studies did not give the overall prevalence of the different forms of IPV, or did not investigate IPV during pregnancy at all. Therefore, in a population-based cross-sectional study including 921 Rwandan women who had delivered ≤13 months previously, the prevalence of IPV during pregnancy, its associated risk factors and its relationship with usage of antenatal care services were investigated. This study forms part of the Maternal Health Research Programme (MaTHeR) undertaken by the University of Rwanda in collaboration with the University of Gothenburg and Umeå University in Sweden.

Methods

Study design, study population and sample size

This cross-sectional population-based study was conducted in the Northern Province and in Kigali city, the capital and largest city in Rwanda. Kigali has urban, semiurban and rural areas, whereas the Northern Province is predominantly rural. The target population was women who gave birth within the past 13 months. The sample size was calculated based on the estimated prevalence of hypertensive disorders during pregnancy (10%),27,28 as hypertension is one of the major factors to be investigated within this research programme and was the least prevalent among the study outcomes. The desired level of precision was set at 0.025, and a design effect of 1.5 was used to account for the multistage nature of the study. Adding 10% to the sample size to allow for possible non-response gave a sample size of 912 women (a worked check of this figure appears below). During data collection, data were collected for 10 additional women, and it was decided to include them in the study; thus, the sample for analysis comprised 922 women.

The selection process was based on the total population of about 2 865 000 inhabitants from 4791 villages.20 In three steps, villages (the smallest administrative entity in Rwanda), households and study participants were randomly selected in the five districts of the Northern Province and in three districts within Kigali city. First, of the 4791 villages located in the study area, it was decided to select in total 48 villages (equal to 1%). The villages were then randomly selected from the total number of villages in the study area by using the Epi Info random function. Approximately 20% of the Rwandan population lives in urban areas.22 In order to mirror the country's rural-urban divide, 20% of the villages were selected from urban areas. Second, the number of households from each village was selected based on the total number of households in each selected village (proportionate to size). With the help of the community health workers (CHWs), who keep maternal records, women who gave birth within the past 13 months were identified, and finally the women to be interviewed were randomly selected among eligible women in each household, if more than one was present. If a village contained more households with an eligible woman than required, a lottery decided which ones to include.
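As a check on the sample-size calculation above (assuming the conventional normal-approximation formula with a 95% confidence level, $$z = 1.96$$, which the text does not state explicitly), the reported figure is reproduced by

$$n = \frac{z^2\,p(1-p)}{e^2} \times \text{design effect} \times 1.1 = \frac{1.96^2 \times 0.10 \times 0.90}{0.025^2} \times 1.5 \times 1.1 \approx 553 \times 1.5 \times 1.1 \approx 912.$$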
In case of fewer eligible women in a village than envisaged in the study, the closest village was approached and the same data collection procedures were used to obtain the remaining number of eligible women. Only one woman refused the interview. The overall response rate was 99.9%.

Data collection procedures

Data collection took place between July and August 2014. A structured, paper-based, interviewer-administered questionnaire was developed, including sociodemographic and psychosocial characteristics; items related to physical and sexual violence, psychological abuse and controlling behaviour; antenatal care (ANC) attendance and procedures; pregnancy and delivery outcomes; and health economics issues. The items for investigating violence were selected from the Women's Health and Life Experiences Questionnaire, a validated questionnaire developed by the WHO for research on IPV experience.29,30 This instrument has been shown to be cross-culturally valid and has previously been successfully used for similar studies in Rwanda.26,31 The questionnaire was translated into Kinyarwanda, the Rwandan national language, and pretested; no major changes were made apart from a few adjustments in Kinyarwanda wording.

Twelve well-trained interviewers, who were clinical psychologists, nurses or midwives belonging to a pool of interviewers at the School of Public Health, University of Rwanda, were selected. Face-to-face interviews were performed, and four supervisors (the first author and three colleagues) guided the interviewers. If an eligible woman was not present at the time of interview, the team waited for her or returned later to do the interview at the earliest possible time. The supervisors ensured that all selected households were contacted, and they reviewed the filled-in questionnaires before the team left the village. The School of Public Health at the University of Rwanda was the lead implementer of the study.

Data entry was performed by four skilled personnel selected from a permanent cohort of data entry clerks from the School of Public Health, under the supervision of a data entry manager. After the primary data entry, the information from 100 questionnaires, each including the 96 variables used in this study, was re-entered to check the accuracy of the first data entry. In total, five errors were detected, which corresponds to an error rate of 0.05% (5/9600). The erroneous data were thereafter corrected. All participants included in this study gave birth to a child who was alive at least up to the date of the interview.

Variables

Antenatal care visits

The number of ANC visits was dichotomised into poor ANC services usage and adequate ANC services usage and then used as the outcome variable. The former was defined as having made ≤2 visits to ANC clinics during the course of pregnancy irrespective of the timing, whereas the latter was defined as having made ≥3 visits during pregnancy irrespective of the timing.

Intimate partner violence

IPV was measured as exposure to physical violence (six items), sexual violence (three items), psychological abuse (four items) and controlling behaviour (seven items). Women were asked to indicate whether they had been exposed to any of the violent acts during pregnancy. In order to assess the seriousness, trend and time frame of violence against women during pregnancy, the women were also asked whether they had been exposed to the same acts of violence ever in life, in the year before the current pregnancy and/or after childbirth.
Subsequently, summary measures for each of the forms of violence were constructed, mirroring the exposure in the time periods 'ever in life', 'in the year before the current pregnancy', 'during pregnancy' (our main period of interest) and 'the period after childbirth'. For each of the forms of violence, women who reported any of the violent acts were considered as exposed.

Sociodemographic and psychosocial variables

Participants' age was categorised into 15–30 years and 31–46 years age groups. The number of people in the household was described as a three-category variable (1–3 people, 4–6 people and 7 or more people); a dichotomised variable was then created in which the first two categories were combined into the reference category and seven or more people were considered the exposed. Marital status was dichotomised into married or cohabiting (reference category), while single, divorced or widowed were brought together in the exposure category. The woman's relationship with the household head was assessed as being the wife/partner of the household head, or having any other relationship with the household head (such as being the daughter, the daughter-in-law, the household head herself, another family relationship or no relationship), further dichotomised into wife/partner or any other relationship; the latter was used as the exposed category. Ever attended school was responded to with yes/no, with the latter as the exposure category. Total household monthly income was made into a three-category variable: more than 36 000 FRW (US$60), 17 501–35 999 FRW (US$30–60) and <17 500 FRW (US$30), later dichotomised into ≤17 500 FRW and ≥17 501 FRW. Social support was defined as having a family member, a relative or a friend who could lend support to the woman if any problem arose. This item was responded to with yes/no, with the latter as the exposure category.

The partner's age was categorised into ≤40 years and 41–70 years age groups. Identical techniques were then used to categorise the partner's level of education and the total household monthly income as described above for participants. A composite variable of assets in the household was used as a proxy for the socioeconomic status of the household. Assets in the household included a radio, a television set, a refrigerator, a bicycle, a motorcycle, a car, a mobile phone and a computer. This was later dichotomised into having at least one of the items or having none of the items, the latter constituting the poorest households.

Statistical analysis

The prevalence and frequency of acts of violence exposure were estimated as n, % and number of violence incidents in different time periods. McNemar's test was used to assess statistically significant differences in IPV prevalence between different periods. Associations were investigated between the different forms of IPV (predictor variable) and poor usage of antenatal care services, and further between sociodemographic and psychosocial risk factors and the different forms of IPV (outcome variable) during pregnancy, by use of bivariable and multivariable logistic regression models. Possible confounders were considered based on statistical significance in bivariable analyses and for theoretical reasons due to findings in earlier studies.
All models were therefore adjusted for the woman's and the husband's age, the number of people in the household, the woman's relationship with the household head, social support, the woman's and husband's education, occupation and family assets. Finally, the Nagelkerke R² was used to assess the fit of the final models. All measures of association are presented as ORs with their 95% CIs. All analyses were performed using the Statistical Package for the Social Sciences V.22.0 for Windows (SPSS, Armonk, New York, USA).

Ethical considerations

Participation was voluntary for all the selected women, and no remuneration was given for participating in the study. Before the interview, the interviewer explained the content of the questionnaire in detail and informed the participants of the confidentiality of their responses and of their free choice to withdraw from the study at any time during the interview or later. For the protection of the interviewed women in the households and to maintain confidentiality, only one woman in each household was interviewed. The interview was conducted in privacy between the participant and the interviewer. If the interview was interrupted by a visitor, the interviewers had been trained either to terminate the interview or to stop asking about violence and to move on to less sensitive topics, such as pregnancy complications, until privacy was guaranteed. If it was not possible for the partner/husband to leave the household at that particular time, the woman to be interviewed was revisited at another time. If an eligible woman was below 18 years of age, her parents' or guardians' consent was sought. A written and signed consent was obtained from all participants. Since IPV is a sensitive issue, which might induce strong feelings among the exposed, women were informed that those in need of any kind of assistance could receive it at a nearby health centre or hospital that had been informed in advance about the study.

Results

Sociodemographic and psychosocial data

Participants were mostly of low socioeconomic status, had not completed primary school and were engaged in non-skilled work. The mean time between childbirth and the interview was 6.7 months (SD=3.5). The majority were married or cohabiting (84.1%, n=774). Just over 20% of the participants (n=186) had poor social support. Partners' sociodemographic and psychosocial characteristics showed a similar pattern (table 1). Of all participating women, 20.4% (n=188), 13% (n=120) and 20.6% (n=190) had ever been subjected to physical, sexual and psychological violence, respectively (table 2). Data on the lifetime prevalence of controlling behaviour were not available.

Table 1 Sociodemographic and psychosocial characteristics of the study population

Table 2 Prevalence of physical, sexual and psychological violence experienced by women earlier in life, in the year before pregnancy and during pregnancy. N=921

The prevalence of IPV and controlling behaviour during pregnancy

Physical partner violence during pregnancy was reported by 10.2% (95% CI 8.3 to 12.2; n=94) of all women, psychological abuse by 17.0% (95% CI 14.6 to 19.4; n=157), sexual violence by 9.7% (95% CI 7.8 to 11.6; n=89) and controlling behaviour by 20.0% (95% CI 17.4 to 22.6; n=163).
Except for physical violence, all forms of violence increased during pregnancy compared with the year before pregnancy, but only psychological abuse showed a statistically significant increase, from 13.4% (n=123) to 17.0% (n=157) (p<0.01). A trend was also observed in that the prevalence rates of physical, sexual and psychological violence and controlling behaviour all increased after childbirth compared with during pregnancy, but these estimates did not reach statistical significance (see online supplementary table S1, tables 2 and 3 and figure 1). Figure 2 shows the overlap of the different forms of IPV during pregnancy. Just over 4% (95% CI 2.9 to 5.6; n=40) reported all three forms, and the most common overlap was observed between physical and psychological violence, 3.5% (95% CI 2.3 to 4.7; n=33).

Table 3 Partner's controlling behaviour with prevalence, summary measure and severity of control tactics. N=921

Figure 1 Prevalence rates of different forms of IPV at different life phases. N=921. IPV, intimate partner violence.

Figure 2 Prevalence rates of overlapping forms of IPV perpetrated against women during pregnancy. N=208. IPV, intimate partner violence; Phys, physical violence; Psych, psychological violence; Sex, sexual violence.

Supplementary table S1 Prevalence of physical, sexual and psychological violence experienced by women earlier in life, in the year before pregnancy and during pregnancy

Associations between IPV during pregnancy and poor antenatal care usage

In bivariable and multivariable logistic regression models, no statistically significant associations were observed between physical, sexual or psychological violence during pregnancy and poor usage of ANC services. Multivariable logistic regression models showed that reporting controlling behaviour was associated with almost twofold odds of poor usage of ANC services compared with not reporting controlling behaviour (OR 1.93, 95% CI 1.34 to 2.79; table 4). Further, we investigated possible interactions between physical, sexual and psychological violence, controlling behaviour and usage of ANC services, but no significant interactions were present (results not shown).

Table 4 Associations between different forms of violence during pregnancy and poor usage of antenatal care services

Associations between sociodemographic and psychosocial factors and IPV during pregnancy

To investigate the risk factors for physical, sexual and psychological violence and controlling behaviour during pregnancy, bivariable and multivariable logistic regression analyses were performed. Multivariable analyses showed that low socioeconomic status (measured as having none of the assets in the household) was associated with increased exposure to physical violence (OR 2.27, 95% CI 1.29 to 3.98); psychological abuse exposure was associated with having been pregnant more than once and with low socioeconomic status, with ORs of 2.11 (95% CI 1.19 to 3.75) and 2.38 (95% CI 1.47 to 3.86), respectively. Young age and poor social support were associated with exposure to sexual violence, OR 1.84 (95% CI 1.01 to 3.35) and OR 2.92 (95% CI 1.63 to 5.24), respectively. Finally, women from urban areas (Kigali city) and younger women (15–30 years) were at increased exposure to controlling behaviour, with ORs of 1.94 (95% CI 1.33 to 2.81) and 2.17 (95% CI 1.35 to 3.47), respectively (table 5).
Table 5 Associations between sociodemographic and psychosocial factors and women's exposure to different forms of IPV during pregnancy

In bivariable analyses, the partner/husband's level of education was significantly associated with physical and psychological violence exposure but lost its significance in multivariable analyses (not presented in table 5).

Discussion

This is the first retrospective study in Rwanda to investigate the prevalence of all forms of IPV, including controlling behaviour, against women during pregnancy. The study also investigated the risk factors for IPV and controlling behaviour during pregnancy and whether IPV exposure influences usage of ANC services. Analysing each form of IPV separately, the prevalence of physical violence during pregnancy was 10.2%, which is consistent with the 7–12% found in a similar study in Tanzania.4 Prevalence rates for sexual and psychological violence in our study were also similar to previously reported findings in the region.32 However, a cross-sectional study among pregnant women in Tanzania reported prevalence rates of 18% and 20% for sexual and physical violence, respectively,33 and a community-based cross-sectional study among 282 married pregnant women in Ethiopia reported even higher prevalence rates for all forms of IPV.34 The differences in results could reflect true prevalence differences, but could also be due to differences in social beliefs about what constitutes IPV, in the tools and scales used for data collection and analysis, and in sample size.

There are insufficient data on the prevalence of spousal controlling behaviour during pregnancy. The prevalence of 20% in our study is lower than the 33.1% and 42% reported in Malawi and Nigeria, respectively,35,36 but these studies investigated the partner's controlling behaviour before pregnancy, which might have led to higher rates, as pregnancy may be protective against some forms of violence exposure. However, the lower prevalence rate of controlling behaviour may also be a result of changing perceptions of the role of women in Rwandan society, owing to the government's efforts to use gender equality as one of the means to achieve social and economic development by empowering women. Gender equality policies have been institutionalised, and there is a female-majority parliament.

Results in this study indicate that physical violence decreased slightly during pregnancy, whereas both sexual violence and psychological abuse increased, but a statistically significant change was observed for psychological abuse only. Studies evaluating whether physical violence, sexual violence or psychological abuse increases or decreases during pregnancy have shown mixed results.32,37 This could be expected considering the different definitions of IPV that have been used, as well as true differences between the study populations.
However, our results are consistent with other studies showing that being pregnant is not necessarily protective against IPV, and that physical violence may decrease slightly compared with other forms of IPV, probably because of the partner's fear of hurting the unborn baby or the cultural unacceptability of hurting a pregnant woman.8,38 The reasons why psychological abuse increased significantly during pregnancy in our study are not clear, but a similar finding has been reported from Zimbabwe.3 A suggested explanation is that women whose pregnancy is mistimed or unwanted endure significantly higher levels of psychological abuse from their partners, who blame them for becoming pregnant.39 This could be a plausible explanation among our participants, who were mainly low educated and living in rural areas, where knowledge about contraceptive methods may be limited and women are less likely to use any form of family planning for convenient timing of conception. In this light, it is not surprising that the most commonly overlapping types of violence during pregnancy were psychological and physical violence. A comparable overlap has been reported in a study from Tanzania.40

We observed that the husband/partner's controlling behaviour was associated with poor ANC service attendance, which is in line with the few related findings on this topic in Africa.41 However, the lack of associations between the other forms of IPV and poor ANC services usage in this study seems counterintuitive. Although studies investigating the direct effect of single forms of IPV exposure on pregnant women's ANC services attendance are scarce, a few related studies from sub-Saharan Africa suggest that IPV exposure may be a risk factor for poor ANC services attendance.18,41,42 One suggested reason is a lack of sufficient information about ANC services; another is partners who, directly or indirectly through threats or actual violence, stop women from going to ANC clinics.42,43 This may not be true in Rwandan settings, where there is a highly successful sensitisation and education effort by CHWs to support the community on health-related issues such as ANC services attendance.44 Each village of ∼100–150 households in Rwanda has about four CHWs for this purpose.45 CHWs further encourage families to immunise their children and inform about various health matters, such as nutrition and malaria prevention.

However, the lack of associations may also be related to concealment: owing to cultural norms prevailing in Rwanda and the stigma associated with violence in the community,46 abused pregnant women may conceal that they are being abused by attending ANC clinics in order to protect their status and family image in the community. Furthermore, local evidence shows that men are ashamed of the violence they perpetrate against women,23 because of its social unacceptability and possible reprimand from the authorities; in order to hide it, they may encourage their pregnant wives to attend ANC services so that neighbours, CHWs and local administrative authorities do not discover that their wives are being abused. This is supported by results in this study showing that controlling behaviour, which may not be considered as violence, is associated with poor ANC services usage. We will further investigate this rather unexpected finding, that physical, sexual or psychological violence does not influence ANC attendance, by use of qualitative data.
Results in our study show that low socioeconomic status was associated with both physical and psychological violence during pregnancy. This is in accordance with other findings.47–49 Lower household income may be a result of lack of employment, which leads to poverty in the household, which in turn can provoke conflicts.23 Lower socioeconomic status and lower gender-equality awareness may largely explain the higher prevalence rates of IPV that have been observed in low-income countries such as Rwanda compared with high-income countries. The finding that poor social support and young age are associated with sexual violence during pregnancy is consistent with the findings of other studies.50,51 Abusive and/or controlling partners often aim to isolate the victims from family, friends and social networks, leaving pregnant women without intimate forums in which to discuss such a sensitive topic as physical/sexual violence during pregnancy in a traditional society such as Rwanda. Similarly, young women are more likely to be economically dependent and may not attempt to resist sexual violence for fear that their husbands may walk out on them. Consequently, pregnant women may treat the situation as normal and make no attempts to rectify it.

The finding that women expecting their first-born were at lower risk of psychological abuse is not surprising, and similar findings have been reported in other studies.32,47 The first child is generally a source of happiness and warmth between the couple, whereas having delivered two or more times might present more economic challenges and dependence, which can initiate or increase violence. We found that a woman's young age and living in urban areas are risk factors for husband/partner controlling behaviour during pregnancy, which is comparable to previous findings in the region and elsewhere.32,36,52 In a slowly but steadily changing Rwandan society, young women are more likely to have more social networks and outgoing activities, which may provoke partners with a tendency towards being controlling. Women living in and around Kigali are relatively more educated and/or are more likely to have paid employment than the common and traditional occupation of subsistence farming. As women start to oppose traditional gender role expectations and, to a growing extent, assume non-traditional roles, violence against them has been shown to increase.53

Methodological considerations

The strengths of this study are the large sample size, the low non-response rate and the use of internationally recognised tools for assessment of all forms of IPV, including controlling behaviour. Owing to cultural beliefs and the sensitive nature of IPV, there was a possibility of under-reporting of violent events, and information on some variables was missing, which may have resulted in less precise analyses. Nevertheless, data collection was conducted with utmost care by a team of trained and experienced medical personnel, including clinical psychologists, who were able to establish a favourable environment for discussion with the participating women. The interviewers were of the same sex and of similar age as the participants, which has been shown to improve the accuracy of reporting in interviews.54 The design of our study limits the ability to draw any causal inferences, and data from women experiencing IPV whose pregnancies were terminated earlier, for whatever reason, were not available.
Finally, the data were collected retrospectively from respondents who gave birth between 1 and 13 months before the interview, with a mean time of 6.7 months, which may have resulted in recall bias. However, the short recall period meant that this was most likely a minor problem. We believe that the findings in this study are generalisable to the entire country, as living circumstances are quite similar in the rest of the country's provinces.

Conclusion

This study has demonstrated that all forms of IPV, including controlling behaviour, against pregnant women in Rwanda are frequent. We recommend that all forms of IPV be included in the standard health assessment package of ANC services and that health service providers be regularly trained and made aware of IPV against pregnant women attending ANC services. Policies aiming at increasing ANC services attendance should be reinforced, and CHWs must be empowered and given sufficient support, as they have an important task in raising awareness of the dangers of IPV, including controlling behaviour. Existing laws and policy on gender-based violence, which criminalise such behaviour, should be enforced, with perpetrators convicted in serious cases. Primary prevention means alerting the media, institutions, organisations, communities and families to the subject, in order to create an open debate in society. Finally, more research is needed to determine the effects of IPV during pregnancy on pregnancy outcomes, women's postpartum well-being and the newborn's early childhood, adolescence and adult life.

Acknowledgments

The authors gratefully acknowledge the contribution of all participating women who welcomed us into their homes and gave their valuable time to answer our questions and share their maternal health experience with us. The authors are also grateful to the Section for Epidemiology and Social Medicine (EPSO) at the University of Gothenburg and the School of Public Health, College of Medicine and Health Sciences, University of Rwanda for all the support they provided.

Footnotes

• Additional material is available. To view please visit the journal online (http://dx.doi.org/10.1136/bmjopen-2016-013155).
• Contributors GK designed the study. GK, IM, AAR and JN developed the study questionnaire. AAR developed the study methodology, coordinated and participated in piloting and data collection activities and carried out all statistical analyses with assistance from GK. The manuscript was drafted and written by AAR with contributions from GK, IM and JN. JN also participated in data collection activities.
• Funding This study forms part of the Maternal Health Research Programme (MaTHeR) undertaken by the University of Rwanda in collaboration with the University of Gothenburg and Umeå University in Sweden. The study was made possible by financial support from the Swedish International Development Cooperation Agency (SIDA).
• Competing interests None declared.
• Ethics approval Institutional Review Board of the College of Medicine and Health Sciences, University of Rwanda and the National Institute of Statistics of Rwanda (number: 0425/2014/10/NISR).
• Provenance and peer review Not commissioned; externally peer reviewed.
• Data sharing statement No additional data are available.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24177518486976624, "perplexity": 4705.130629629137}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934803906.12/warc/CC-MAIN-20171117185611-20171117205611-00052.warc.gz"}
https://startups.mathsgee.com/35958/do-i-need-to-know-linear-algebra-for-data-science?show=35959
# Do I need to know linear algebra for data science?

Do I need to know linear algebra for data science?

Yes. Linear algebra is an essential branch of mathematics for understanding how machine-learning algorithms work on a stream of data to create insight. Everything from friend suggestions on Facebook, to song recommendations on Spotify, to transferring your selfie to a Salvador Dali-style portrait using deep transfer learning involves matrices and matrix algebra. Here are the essential topics to learn:

- Basic properties of matrices and vectors: scalar multiplication, linear transformation, transpose, conjugate, rank, determinant
- Inner and outer products, the matrix multiplication rule and various algorithms, matrix inverse
- Special matrices: square matrix, identity matrix, triangular matrix, the idea of sparse and dense matrices, unit vectors, symmetric matrix, Hermitian, skew-Hermitian and unitary matrices
- Matrix factorization concepts/LU decomposition, Gaussian/Gauss-Jordan elimination, solving the $\mathrm{Ax}=\mathrm{b}$ linear system of equations
- Vector space, basis, span, orthogonality, orthonormality, linear least squares
- Eigenvalues, eigenvectors, diagonalization, singular value decomposition

Where You Might Use It

If you have used the dimensionality reduction technique principal component analysis (PCA), then you have likely used the singular value decomposition (SVD) to achieve a compact dimensional representation of your data set with fewer parameters. All neural network algorithms use linear algebra techniques to represent and process network structures and learning operations. A short worked example follows below.

by Platinum (102,810 points)
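As a concrete illustration of the PCA/SVD point above, here is a minimal NumPy sketch (the data are randomly generated and purely hypothetical) of rank-k compression of a centred data matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))      # 100 samples, 20 features (toy data)
Xc = X - X.mean(axis=0)             # centre each column

# Thin SVD: Xc = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 3
scores = U[:, :k] * s[:k]           # PCA scores of each sample (100 x 3)
X_approx = scores @ Vt[:k]          # best rank-3 approximation of Xc

# Fraction of total variance captured by the first k components
print((s[:k] ** 2).sum() / (s ** 2).sum())
```

The same decomposition underlies linear least squares and many recommender-system methods, which is one reason the SVD is among the most useful items on the list above.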
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7232518196105957, "perplexity": 3391.214807016481}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301263.50/warc/CC-MAIN-20220119033421-20220119063421-00710.warc.gz"}
https://www.deepdyve.com/lp/ou_press/sequential-rank-agreement-methods-for-comparison-of-ranked-lists-wYdjgiQvt9
Sequential rank agreement methods for comparison of ranked lists

Summary

The comparison of alternative rankings of a set of items is a general and common task in applied statistics. Predictor variables are ranked according to magnitude of association with an outcome, prediction models rank subjects according to the personalized risk of an event, and genetic studies rank genes according to their difference in gene expression levels. We propose a sequential rank agreement measure to quantify the rank agreement among two or more ordered lists. This measure has an intuitive interpretation, it can be applied to any number of lists even if some are partially incomplete, and it provides information about the agreement along the lists. The sequential rank agreement can be evaluated analytically or be compared graphically to a permutation-based reference set in order to identify changes in the list agreements. The usefulness of this measure is illustrated using gene rankings, and using data from two Danish ovarian cancer studies where we assess the within- and between-method agreement of different statistical classification methods.

1. Introduction

Ranking of items or results is common in scientific research, and ranked lists occur naturally as the result of many statistical applications. Regression methods rank predictor variables according to the magnitude of their association with an outcome, prediction models rank subjects according to their risk of an event, and genetic studies rank genes according to their difference in gene expression levels across samples. Two common research questions are of interest when several rankings of the same items are available: i) to what extent do the lists agree on the rankings, and how does that change as we go through the lists, and ii) is it possible to identify an optimal rank until which the lists agree on the items?

A typical situation where these questions arise is in high-dimensional genomics studies such as genome-wide association studies, where several analysis methods (e.g., regression methods, lasso, and random forest) can be used to identify and rank millions of gene variants according to their association with the outcome. The ranking of each gene variant may vary from method to method, and a consensus summary of agreement of the findings is needed to determine which gene variants to investigate more closely in subsequent validation studies. To minimize expenses it is only of interest to consider gene variants that have high rankings across the different methods. Multiple ranked lists also appear in machine learning, where the stability of the ranks produced by a "black-box" technique can be evaluated by bootstrapping the data. Assessing which items are stable (i.e., have consistent rankings across bootstrap samples) will help to weed out possible fluke findings.

In this article, we introduce sequential rank agreement for measuring agreement among ranked lists. The general idea is to define agreement based on the sequence of ranks from a subset of the first $$d$$ items in each list. As agreement metric we adapt the limits of agreement known from agreement between quantitative variables (Altman and Bland, 1983), but essentially any measure of agreement could be used.
Our proposed approach allows us to compare multiple lists simultaneously, it provides a dynamic measure of agreement as a function of the depth in the lists, it places higher weight on items at the top of the list, it accommodates partially observed lists of varying lengths, and it has a natural interpretation that directly relates to the ranks. Graphical illustration of sequential rank agreement allows us to infer any changepoints, i.e., a list depth where a substantial change in the agreement of the lists occurs, but we also provide asymptotic and randomization-based graphical tools to compare the observed rank agreement to the expected agreement found in non-informative data.

In this sense, our approach is a combination and generalization of some of the ideas of Irizarry and others (2005), Carterette (2009), and Boulesteix and Slawski (2009). Carterette (2009) compares two rankings based on the distance between them as measured by a multivariate Gaussian distribution, and Boulesteix and Slawski (2009) present an overview of approaches for aggregation of ranked lists, including bootstrap and leave-one-out jack-knife approaches. Irizarry and others (2005) propose a plot based on the intersection of lists, which is a special case of our setting where the agreement metric is the overlap proportion. However, simple intersection places equal weight on all depths of the list, and therefore Fagin and others (2003) and Webber and others (2010) proposed weighted intersections which put more emphasis on the top of the lists. Specifically, Webber and others (2010) define their rank-biased overlap by weighting with a converging series to ensure that the top is weighted higher than the potentially non-informative bottom of the lists. It is possible to use the existing methods to calculate agreement of lists until a given depth, i.e., limited to the first $$d$$ items of each list. However, the interpretation may not be straightforward, especially in the case of more than two lists, and they may not accommodate partial rankings.

Very recently, Hall and Schimek (2012) proposed a method for comparing pairwise rankings and derived the asymptotic distribution of the endpoint where the two ranked lists are no longer in agreement. Their approach was based on anchoring one of the two lists and subsequently generating a sequence of 0s and 1s depending on whether the ranks in the second list were close to the rank from the anchored list. Sampath and Verducci (2013) followed up on this idea for pairwise comparison of lists but used penalties based on a truncated geometric probability instead of a 0-1 process, and they evaluated the distribution of the endpoint of agreement by computational approaches. The asymptotic distribution in the Hall and Schimek (2012) paper is based on letting the number of lists increase to infinity, which is a situation that is only relevant in special cases, whereas the simulation-based null distribution approach of Sampath and Verducci (2013) does not rely on asymptotic results to evaluate their pairwise findings.

The article is organized as follows: the next section defines sequential rank agreement for multiple ranked lists and discusses how to handle incomplete lists. In Section 3, we show the asymptotic distribution of the endpoint of agreement and discuss approaches to evaluate the results obtained from sequential rank agreement.
Finally, we apply the sequential rank agreement to two Danish ovarian cancer studies and compare our method to the method of Hall and Schimek (2012) in a small-sample simulation study before we discuss the findings and possible extensions. The GitHub repository https://github.com/tagteam/SuperRanker contains an implementation of the proposed approach and the code for the leukemia analysis presented in Section 3 and the simulations found in Section 5 (see commit ffe8302).

2. Methods

Consider a set of $$P$$ different items $$X=\{X_1,\dots,X_P\}$$ and a ranking function $$R: \{X_1,\dots,X_P\}\to \{1,\dots,P\}$$, such that $$R(X_p)$$ is the rank of item $$X_p$$. The inverse mapping $$R^{-1}$$ gives the item $$R^{-1}(r)$$ that was assigned to rank $$r\in\{1,\dots,P\}$$. An ordered list is the realization of a ranking function $$R$$ applied to the set of items $$X$$. Panels (a) and (b) of Table 1 show a schematic example of these mappings. Thus, if $$R_l^{-1}(1)=X_{34}$$, then item $$X_{34}$$ is ranked first in list $$l$$ and similarly $$R_l(X_{34})=1$$. In all that follows, we consider a fixed set of items and consider the ranking function to be a random variable. Thus, let $$R_1(X),\dots,R_L(X)$$, $$L\geq2$$, be a sample of $$L$$ independent identically distributed draws from an unknown probability distribution function $$Q$$. One aim is then to test how much $$Q$$ resembles the uniform distribution, which assigns probability $$1/P!$$ to each of the $$P!$$ different possible rankings.

Table 1 Example set of ranked lists. (a) shows the ranked lists of items for each of three lists, (b) presents the ranks obtained by each item in each of the three lists, and (c) shows the cumulative set of items up to a given depth in the three lists when $$\varepsilon=0$$ (i.e., an item is added to $$S(d)$$ whenever it appears in at least one list).

(a)
Rank  $$R^{-1}_1$$  $$R^{-1}_2$$  $$R^{-1}_3$$
1     A   A   B
2     B   C   A
3     C   D   E
4     D   B   C
5     E   E   D

(b)
Item  $$R_1$$  $$R_2$$  $$R_3$$
A     1   1   2
B     2   4   1
C     3   2   4
D     4   3   5
E     5   5   3

(c)
Depth  $$S(d)$$
1      {A, B}
2      {A, B, C}
3      {A, B, C, D, E}
4      {A, B, C, D, E}
5      {A, B, C, D, E}
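As a quick illustration of the relationship between $$R$$ and $$R^{-1}$$, the following NumPy snippet (a hypothetical encoding, not from the paper) recovers the ordered list $$R^{-1}_3$$ of Table 1(a) from the ranks $$R_3$$ of Table 1(b):

```python
import numpy as np

items = np.array(["A", "B", "C", "D", "E"])
R3 = np.array([2, 1, 4, 5, 3])        # R_3(X_p) for items A..E, Table 1(b)

# argsort inverts the ranking: it lists the item found at each rank
print(items[np.argsort(R3)])          # -> ['B' 'A' 'E' 'C' 'D'], as in Table 1(a)
```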
The agreement among the lists regarding the rank given to an item $$X_p$$ can be measured by the variance across the lists

$$A(X_p) = \mathbb{E}_Q\left[\left(R(X_p)-\mathbb{E}_Q R(X_p)\right)^2\right] = \sum_{r\in\Pi} \left(r(X_p)-\mathbb{E}_Q R(X_p)\right)^2 Q(r), \quad (2.1)$$

where $$\Pi$$ is the set of all permutations of $$X$$, $$Q$$ is a probability mass function on $$\Pi$$, and $$\mathbb{E}_Q R(X_p)=\sum_{r\in\Pi} r(X_p)Q(r)$$. The empirical counterpart is

$$\widehat{A}_L(X_p) = \frac{1}{L-1}\sum_{i=1}^L \left(R_i(X_p) - \overline{R}_L(X_p)\right)^2, \quad \overline{R}_L(X_p) = \frac{1}{L}\sum_{i=1}^L R_i(X_p). \quad (2.2)$$

For each item, the function $$\widehat{A}_L$$ has an interpretation as the expected Euclidean distance of the individual rankings from the expected ranking over the $$L$$ lists, and it corresponds to the same measure that is used to compute the limits of agreement (Altman and Bland, 1983).

For an integer $$1\le d\le P$$, we define the expected set of unique items found by merging the first $$d$$ elements across the possible lists:

$$S(d) = \left\{X_p\,;\ \left(\sum_{r\in\Pi} 1\left(r(X_p)\le d\right) Q(r)\right) > \varepsilon\right\}, \quad (2.3)$$

where $$1(\cdot)$$ denotes the indicator function, and where $$\varepsilon\in[0,1)$$ is a pre-specified constant that sets the minimum proportion of lists that an item must be present in before it is added to $$S(d)$$. When $$\varepsilon=0$$, an item is included as soon as it is present in just one list. The empirical counterpart is the set of unique items ranked less than or equal to $$d$$ in any of the $$L$$ lists:

$$\widehat{S}_{L}(d) = \left\{X_p\,;\ \left(\frac{1}{L}\sum_{l=1}^L 1\left(R_l(X_p)\le d\right)\right)> \varepsilon\right\}, \quad (2.4)$$

which is exemplified in panel (c) of Table 1. We define the sequential rank agreement as the weighted expected agreement of the items found in the set $$S(d)$$:

$$\textrm{sra}(d) = \begin{cases}\frac{1}{|S(d)|}\sum_{p \in S(d)}A(X_p) & \text{when } |S(d)|>0, \\ 0 & \text{otherwise,}\end{cases} \quad (2.5)$$

where $$|S(d)|$$ is the cardinality of the set $$S(d)$$. As stated, we are only interested in sra($$d$$) when $$|S(d)|>0$$. The empirical counterpart when $$|\widehat{S}_L(d)|>0$$ is equivalently given by

$$\widehat{\textrm{sra}}_L(d) = \frac{\sum_{p \in \widehat{S}_{L}(d)}(L-1)\widehat{A}_L(X_p)}{(L-1)\,|\widehat{S}_{L}(d)|}. \quad (2.6)$$

Values of sra close to zero when $$|S(d)|>0$$ suggest that the lists agree on the rankings, while larger values suggest disagreement. If $$|S(d)|=0$$, then no items were sufficiently frequent among the observed lists, and we can conclude that the lists do not agree above the threshold $$\varepsilon$$. The sequential rank agreement will be zero for all values of $$d$$ when the ranked lists are identical.
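To make the definitions concrete, here is a minimal Python sketch of equations (2.2)-(2.6) for complete lists, applied to the toy ranks of Table 1(b). This is an illustration only, not the authors' SuperRanker implementation:

```python
import numpy as np

# Ranks from Table 1(b): rows are items A..E, columns are lists 1..3.
ranks = np.array([
    [1, 1, 2],   # A
    [2, 4, 1],   # B
    [3, 2, 4],   # C
    [4, 3, 5],   # D
    [5, 5, 3],   # E
])

def sra(ranks, eps=0.0):
    """Empirical sequential rank agreement (2.6) for complete lists."""
    P, L = ranks.shape
    A_hat = ranks.var(axis=1, ddof=1)              # item-wise agreement, eq. (2.2)
    out = np.zeros(P)
    for d in range(1, P + 1):
        in_set = (ranks <= d).mean(axis=1) > eps   # S_hat(d), eq. (2.4)
        if in_set.any():
            out[d - 1] = A_hat[in_set].mean()      # pooled variance, eq. (2.6)
    return out

print(np.sqrt(sra(ranks)))   # pooled-SD scale: approx [1.15 1.11 1.10 1.10 1.10]
```

Plotting the square root of the curve against $$d$$ gives exactly the kind of display recommended in Section 2.1 below.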
2.1. Interpreting and applying sequential rank agreement

The sequential rank agreement is equivalent to the pooled variance of the items found in $$S(d)$$. Thus, the square root of the sequential rank agreement measures the average of the average difference in rankings among the lists for the items included until depth $$d$$. In method comparison studies, the observed agreement is compared to a pre-specified acceptable limit, and we can do similarly. For easy visualization of the rank agreement, we suggest plotting $$\sqrt{\widehat{\textrm{sra}}_L(d)}$$, corresponding to the pooled SD, against $$d$$.

As an example, consider the data by Golub (1999) (found in Dudoit and others (2002)), where 3051 gene expression values measured on 38 tumor mRNA samples were used to classify between acute lymphoblastic leukemia and acute myeloid leukemia. Several analysis methods are possible for these data, for example marginal unequal-variances two-sample $$t$$ tests, marginal logistic regression analyses, elastic net logistic regression (Friedman and others, 2010), and marginal maximal information coefficient (MIC) correlations (Reshef and others, 2011), and we want to identify a set of genes that are consistently most likely to be associated with leukemia. For the first two methods, the genes were ranked according to minimum $$p$$ value; for the elastic net logistic regression the genes were ordered by the size of the corresponding coefficients (after standardization), and for MIC the genes were ordered by absolute correlation, which resulted in the top rankings seen in Table 2. The ranked lists appear to agree that genes 2124 and 829 are among the most interesting, while the best-ranked gene from MIC, gene 378, is not found in the top 10 for two of the other methods.

Table 2 List of ranked results from the Golub data. Numbers indicate the predictor/gene for the given ranking and method. Only the top 10 ranks are shown in the table.

Ranking  Welch's $$t$$  LogReg  ElasticNet  MIC
1        2124    2124    829     378
2        896     896     2198    829
3        2600    829     2124    896
4        766     394     808     1037
5        829     766     1665    2124
6        2851    2670    1920    808
7        703     2939    1389    108
8        2386    2386    1767    515
9        2645    1834    1042    2670
10       2002    378     2600    2600
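For instance, the Welch $$t$$ ranking in the first column of Table 2 could be produced along the following lines; this is a sketch with synthetic data standing in for the actual Golub expression matrix:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
X = rng.normal(size=(38, 3051))     # synthetic stand-in for the Golub matrix
y = rng.integers(0, 2, size=38)     # synthetic ALL vs AML class labels

# Welch (unequal-variances) two-sample t test for each gene separately
_, pvals = ttest_ind(X[y == 0], X[y == 1], equal_var=False, axis=0)

welch_ranking = np.argsort(pvals)   # gene indices, smallest p value first
print(welch_ranking[:10])           # a top-10 list like a column of Table 2
```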
The sequential rank agreement curve (using $$\varepsilon=0$$, so that an item need be present in just a single list before it is included) shown in the left panel of Figure 1 shows the average distance in ranks for the genes considered among the first $$d$$ positions. Not surprisingly, the sequential rank agreement is better towards the top of the lists (smaller values on the $$y$$ axis correspond to better agreement) than towards the bottom of the lists. Figure 1 shows a substantial deterioration in agreement (higher sra) after depth 5. Thus, if we were to restrict attention to a small set of predictors, then our prime focus would be on the items found among the top-5 lists from Table 2. The depth until which we think the lists agree can be chosen either from a pre-specified threshold for an acceptable difference in rankings or from a pre-specified item set size. A changepoint analysis on the sequential rank agreement would be able to identify depths where a substantial increase/change in rank agreement occurs and would be another way to identify sets of items that share agreement among the lists if a pre-specified acceptable rank agreement threshold is not given.

Figure 1 Left panel: sequential rank agreement for four different analysis methods applied to the 3051 genes in the Golub data (black line). Right panel: corresponding sequential rank agreement for the same data but where only the top 20 ranked items are available and the ranks of the remaining items are not available. The blue and red areas (the top and bottom areas, respectively) correspond to the independent and randomized reference hypothesis areas, respectively. Note that both the $$x$$ and $$y$$ axes are shown on the log scale to "zoom in" on the top of the lists.

Generally, changes in the level of rank agreement suggest that there are sets of items that are ranked similarly in all lists while other items constitute set(s) with vastly different ranks. When the lists are likely to agree on a few top-ranked items, the sequential rank agreement curve will start low and then increase until it levels off, exactly as is seen in Figure 1.

2.2. Analysis of incomplete lists

Incomplete or partial lists are a common occurrence that arises, for example, in the case of missing data (items), when comparing top-$$d$$ list results from publications, or when some methods only rank a subset of the items.
For example, penalized regression methods such as the Lasso provide a sparse set of predictors with non-zero coefficients. There is no obvious ordering of the set of predictors whose coefficients have been shrunk to zero, and thus we end up with a partial ordering. Incomplete lists also occur if, for example, the analyst restricts attention to the ranks of items that have been found to be statistically significant.

Sequential rank agreement can be generalized to incomplete lists in the following way. Let $$\Lambda_l\subset X$$ be the subset of $$d_l$$ items that have been ranked highest in list $$l$$. The case where all lists are incomplete at the same depth $$d$$ corresponds to $$d_1=\cdots=d_L=d$$. For incomplete lists the rank function becomes $$\widetilde R_l(X_p) = \begin{cases} \{R_l(X_p)\} & \text{for }\ X_p\in \Lambda_l,\\ \{d_l+1,\dots,P\} & \text{for }\ X_p \not\in \Lambda_l \end{cases}$$ (2.7) where we only know that the rank of an unobserved item in list $$l$$ must be larger than the largest rank observed in that list.

The agreement, $$A(X_p)$$, cannot be computed directly for all predictors in the presence of incomplete lists because the exact rank for some items will be unknown. Also, recall that the rankings within a single list are not independent since each rank must appear exactly once in each list. Thus, we cannot simply assign the same number (e.g., the mean of the unassigned ranks) to the missing items since that would result in less variation of the ranks and hence less variation of the agreement, and it would artificially introduce a (downward) bias of agreement for items that are missing in multiple lists. Instead, we assign the ranks $$\{d_{l}+1,\dots,P\}$$ at random to the items that do not occur in list $$\Lambda_l$$. One realization of the $$L$$ rankings of the set $$X$$ is obtained by randomizing the missing items of each list. By randomizing a large number of times, we can compute (2.5) for each realization, and then compute the sequential rank agreement as the pointwise (for each depth) average of the rank agreements. The algorithm is described in detail in Algorithm C in the Appendix of supplementary material available at Biostatistics Online.

The proposed approach is based on two assumptions: i) that the most interesting items are found in the top of the lists and ii) that the ranks that are missing from the lists provide so little information that it is reasonable to assume that they can be represented by a random order. The first assumption is justifiable because we have already accepted that it is reasonable to rank the items in the first place. The second assumption is fair in the light of the first assumption provided that we have a “sufficiently large” part of the top of the lists available. When the two assumptions are satisfied, the interesting part of the sequential rank agreement curves is restricted to depths where the number of items without available ranks is low. Similar to fully observed lists, we generally expect the sequential rank agreement to start low and then increase, unless the lists are completely unrelated (in which case the sequential rank agreement will be constant at a high level) or the lists mostly agree on the ranking (in which case the sequential rank agreement will also be constant but at a low level). For incomplete ranked lists, we also expect a changepoint around the depth where the lists become incomplete.
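A minimal R sketch of this randomization scheme (reusing the sra_curve() sketch from Section 2.1, and assuming that the observed ranks in list $$l$$ are $$1,\dots,d_l$$) could look as follows; the function names are illustrative, not the SuperRanker implementation:

# Sequential rank agreement for incomplete lists: unranked items (NA) in
# each list receive a random permutation of the unused ranks, and the sra
# curve is averaged pointwise over B realizations.
sra_incomplete <- function(ranks, B = 100) {
  P <- nrow(ranks)
  fill_one <- function() {
    apply(ranks, 2, function(r) {
      miss <- is.na(r)
      unused <- setdiff(seq_len(P), r[!miss])   # the ranks d_l + 1, ..., P
      r[miss] <- unused[sample.int(length(unused))]
      r
    })
  }
  rowMeans(replicate(B, sra_curve(fill_one())))  # pointwise average over realizations
}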
This changepoint is an artifact stemming from the assumption that the remainder of each list can be replaced by a simple random permutation of the missing items. Note that if an item is ranked highly in a few lists but unranked in the remaining lists then it gets a poor rank agreement, since we only compare whether the lists agree on their rankings, and in this case they clearly do not.

The right-most plot of Figure 1 shows the impact of restricting the Golub data such that only top-20 lists are available instead of full lists of length 3051 (20 was chosen to resemble the list lengths that might be harvested from published manuscripts). The sequential rank agreement increases much more quickly because the incomplete lists introduce more noise in the estimation of the agreement, but it is still possible to see that the top of the list has a sequential rank agreement that is not substantially different from the full lists.

3. Evaluating sequential rank agreement results

To evaluate the sequential rank agreement values, we propose two different benchmark values corresponding to two different hypotheses. We wish to determine if we observe better agreement than would be expected if there were no relevant information available in the data. The first reference hypothesis is $$H_0:\ \text{The list rankings correspond to completely randomly permuted lists,}$$ (3.1) which not only assumes that there is no information in the data on which the rankings are based but also that the methods used to provide the rankings are completely independent. Alternatively, we can drop the assumption of independence among the methods used to generate the $$L$$ ranked lists and instead consider the null hypothesis that any association to the outcome has been removed from the data: $$\widetilde H_0:\ \text{The list rankings are based on data that contain no association to the outcome.}$$ This alternative null hypothesis addresses the fact that some ranking methods are more likely to provide similar rankings of the same data because the ranking methods focus on the same features of the data, rather than because of any information contained in the data.

3.1. Permutation-based inference

$$H_0$$ is a quite unrealistic null hypothesis, but we can easily obtain realizations from it simply by permuting the items within each list and then computing the sequential rank agreement for the permuted lists. In the fully observed case, each experiment contains $$L$$ lists of random permutations of the items in $$X$$. For the incomplete case we first permute the items $$X_1,\dots,X_P$$ and then treat the ranks $$d_l+1,\dots,P$$ in list $$l$$ as missing (i.e., each permuted list has the same number of observed rankings as was observed for list $$l$$ in the original data set). The sequential rank agreement curve from the original lists can then be compared to, say, the pointwise 95% quantiles of the rank agreements obtained under $$H_0$$.

To obtain the distribution under $$\widetilde H_0$$, the idea is to repeat the ranking procedures for unassociated data many times. For each resample, we first permute the outcome variable in the data set. This removes any association between the predictor variables and the outcome while keeping the structure in the predictors, and we apply the same methods that were used for the original data to the permuted data set to generate $$L$$ new rankings and compute the sra for the unassociated data.
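This resampling scheme can be sketched in R as follows; rank_methods and the other names are hypothetical helpers (each method is assumed to map a data set to a ranked list of the $$P$$ predictors), and sra_curve() is the earlier sketch:

# Null distribution of the sra curve under H0-tilde: permute the outcome,
# re-run each ranking method, and collect one sra curve per resample.
sra_null_tilde <- function(data, outcome, rank_methods, B = 500) {
  replicate(B, {
    perm <- data
    perm[[outcome]] <- sample(perm[[outcome]])          # break predictor-outcome association
    ranks <- sapply(rank_methods, function(f) f(perm))  # P x L matrix of ranks
    sra_curve(ranks)
  })                                                    # P x B matrix of null curves
}
# Pointwise reference band, e.g. the 2.5% and 97.5% quantiles at each depth:
# band <- apply(sra_null_tilde(df, "y", methods), 1, quantile, probs = c(0.025, 0.975))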
Note that we only permute the outcomes and thus preserve the internal structure of the predictors. This randomization approach requires that the original data are available, so it may not be possible to evaluate $$\widetilde H_0$$ in all situations. If the sequential rank agreement for the original data lies substantially below the distribution of the sequential rank agreements obtained under either $$H_0$$ or $$\widetilde H_0$$, then this suggests that the original ranked lists agree more than would be expected for data with no information.

Figure 1 shows the empirical distributions of sequential rank agreement under $$H_0$$ and $$\widetilde H_0$$, each based on $$400$$ permutations of the Golub data from Section 2.1. Figure 1 indicates that the observed sequential rank agreement for the Golub data is significantly better than what would be expected by chance for data that contain no information, since it lies below the reference area for completely random lists ($$H_0$$, corresponding to the blue area). However, if we consider $$\widetilde H_0$$ then the sequential rank agreement is just inside the red area (the lower area in the figure), and we conclude that the agreement seen in the Golub data is not significantly better than what we would expect when we remove the association between the predictors and the outcome. The incomplete data also suggest that there may be at most 1 or 2 ranked items towards the top of the lists that yield a result better than what would be expected (the right plot). Not surprisingly, the sequential rank agreement under $$\widetilde H_0$$ is lower than the sequential rank agreement under $$H_0$$ because the four methods used to rank the data ($$t$$-test, logistic regression, elastic net, and MIC) generally tend to identify the same predictors.

It is important to stress that neither $$H_0$$ nor $$\widetilde H_0$$ is related to questions regarding the association between the outcome and the predictors in the data set. Both hypotheses purely concern how the rankings agree in situations where the data contain no information for creating the rankings. It is also worth pointing out that if the lists are short (small $$P$$) and there are few lists (small $$L$$), then the number of possible permutations under the null is small, and the $$p$$ value obtained may fluctuate if the number of permutations is small. We have found that using more than 500 permutations works well even for smaller samples.

3.2. Asymptotic inference of change in agreement

In many applications, it is of interest to estimate a list depth which satisfies a changepoint criterion since that corresponds to a change in agreement among the list ranks. In particular, a changepoint provides a data-driven indicator of the depth at which the lists exhibit a change in rank agreement and is consequently an obvious choice for identifying the set of items upon which the lists agree the most. In this section, we investigate the theoretical properties of our proposed method for this specific task. As in Hall and Schimek (2012), we consider an infinite set of lists and study the asymptotic behavior for $$L\to\infty$$. The list lengths are not allowed to change with $$L$$ since the lengths are fixed in most applications. We start by showing that $$\widehat{\textrm{sra}}_L$$ is a consistent estimator of $$\textrm{sra}$$ for $$L \rightarrow \infty$$.
Theorem 3.1 Assume that $$\{R_l(X)\}_{l=1}^L$$ are independent draws from a probability distribution $$Q$$ on the set of lists $$\Pi$$. Then $$\left\|\widehat{\textrm{sra}}_L - \textrm{sra}\right\|_\infty = o_P(1)$$.

Proof. See Appendix A in the supplementary material available at Biostatistics Online. □

We now define the changepoint as the first crossing point of the sequential rank agreement and a threshold function $$q\colon\,\{1,\ldots,P\} \to \mathbb{R}_{\geq 0}$$. The values of $$q$$ could be a deterministic constant or, for example, the limits-of-agreement obtained in randomly permuted lists corresponding to the null hypothesis in (3.1). We define the superlevel set of the sequential rank agreement with respect to $$q$$ as \begin{align} \mathcal{L}(q) = \left\{d : \textrm{sra}(d) \geq q(d)\right\}. \end{align} (3.2) A changepoint $$d^\ast(q)$$ in the list agreement is then defined by the position $$d^\ast(q) = \begin{cases} \inf(\mathcal{L}(q)) & \text{ if } |\mathcal{L}(q)| > 0\\ P & \text{ if } |\mathcal{L}(q)| = 0 \end{cases}$$ (3.3) corresponding to the first list depth where the sequential rank agreement exceeds the threshold if such a position exists. Otherwise, the full list is in agreement according to $$q$$, and the changepoint is set to the full length of the lists. The empirical superlevel set is similarly defined as \begin{align} \widehat{\mathcal{L}}_L(\widehat{q}_L) = \left\{d : \widehat{\textrm{sra}}_L(d) \geq \widehat{q}_L(d)\right\} \end{align} (3.4) where the threshold function may depend on the sample size. The estimated changepoint is \begin{align} \widehat{d^\ast_L}(\widehat{q}_L) &= 1(|\widehat{\mathcal{L}}_L(\widehat{q}_L)| > 0)\inf \widehat{\mathcal{L}}_L(\widehat{q}_L) + 1(|\widehat{\mathcal{L}}_L(\widehat{q}_L)| = 0)P. \end{align} (3.5)

The consistency of the estimated changepoint, $$\widehat{d^\ast_L}(\widehat{q}_L)$$, follows from Theorem 3.1 by the following corollary.

Corollary 3.1 Let $$\widehat{q}_L$$ be a positive threshold function such that $$\left\|\widehat{q}_L - q\right\|_\infty = o_P(1)$$ for some limiting function $$q$$. Then $$\widehat{d^\ast_L}(\widehat{q}_L) \overset{P}{\longrightarrow} d^\ast(q)$$ for $$L \rightarrow \infty$$.

Proof. See Appendix B in supplementary material available at Biostatistics Online. □

Corollary 3.1 indicates that we can use the threshold function $$\widehat{q}_L$$, estimated under the null hypothesis as discussed in the previous section, as a limiting threshold function for inferring the depth $$d$$ at which the observed sequential rank agreement first crosses the null threshold, i.e., the depth until which the observed ranked lists are in better agreement than expected under the null hypothesis. In that sense, the threshold function serves the same role as the limits of agreement in method comparison studies, except that it need not be constant and can accommodate the changing number of items used to compute the sequential rank agreement at a given depth. In practice, we can compute an estimate of the threshold function under the null using the permutation approach sketched in the previous section, which makes the method relevant even in small-sample settings.
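The estimator (3.5) amounts to locating the first depth at which the empirical sra curve reaches the threshold. A minimal R sketch, with a permutation-based threshold as one possible choice of $$\widehat{q}_L$$ and sra_curve() as the earlier sketch, is:

# Changepoint estimator (3.5): the first depth where the sra curve reaches
# the (possibly depth-varying) threshold; returns P if it never does.
changepoint <- function(sra_hat, q_hat) {
  above <- which(sra_hat >= q_hat)
  if (length(above) > 0) min(above) else length(sra_hat)
}
# Example threshold: pointwise 5% quantiles of sra under H_0, obtained by
# permuting the items within each list.
# q_hat  <- apply(replicate(500, sra_curve(apply(ranks, 2, sample))), 1,
#                 quantile, probs = 0.05)
# d_star <- changepoint(sra_curve(ranks), q_hat)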
4. Application to ovarian cancer data

We now consider an application of the sequential rank agreement to two data sets consisting of MALDI-TOF (Matrix-Assisted Laser Desorption/Ionization Time-Of-Flight) mass spectra obtained from blood samples from patients with either benign or malignant ovarian tumors. The data sets are sub-samples of the Danish MALOVA and DACOVA study populations. The MALOVA study is a Danish study on ovarian cancer (Hogdall and others, 2004) where all Danish women diagnosed with an ovarian tumor and referred for surgery from the participating Departments of Gynecology were enrolled continuously from December 1994 to May 1999. For the purpose of illustration, we use a random sub-sample of $$119$$ patients: $$58$$ patients with malignant ovarian cancers as cases and $$61$$ patients with benign ovarian tumors as controls. The DACOVA study is another Danish study on ovarian cancer which covered about $$66\%$$ of the female population of Denmark (Bertelsen, 1991). The study aimed to continuously enroll all patients referred to surgery for an ovarian tumor clinically suspected to be cancer during the period from 1984 to 1990. We use a random sub-sample from the DACOVA study of $$54$$ malignant ovarian cancers and $$59$$ benign ovarian tumors/gynecologic disorders.

Each spectrum consists of $$49\,642$$ samples over a range of mass-to-charge ratios between $$800$$ and $$20\,000$$ Dalton, which we downsample to an equidistant grid of $$5000$$ points by linear interpolation. We then preprocess the downsampled spectra individually by first removing the slowly varying baseline intensity with the SNIP algorithm (Ryan and others, 1988), followed by normalization with respect to the total ion count. Finally, we standardize the $$5000$$ predictors to have column-wise zero mean and unit variance in each data set.

We use the two data sets to illustrate how the sequential rank agreement can be applied in two different scenarios. In the first scenario, we assess the agreement of four different statistical classification methods in how they rank the predictors according to their importance for distinguishing benign and malignant tumors. In the second scenario, we assess the agreement among rankings of individual predicted risks of having a malignant tumor. The first scenario is relevant in the context of biomarker discovery, and the latter is important, e.g., when ranking patients according to urgency of treatment.

Four classification methods are considered: Random Forest (Breiman, 2001) implemented in the R package randomForest (Liaw and Wiener, 2002), logistic Lasso (Tibshirani, 1996) and Ridge regression (Segerstedt, 1992), both implemented in the R package glmnet (Friedman and others, 2010), and Partial Least Squares Discriminant Analysis (PLS-DA) (Boulesteix, 2004) implemented in the R package caret (Kuhn, 2014). All four methods depend on a tuning parameter. The tuning parameter for Lasso and Ridge regression is the degree of penalization, and for PLS-DA it is the number of components (the dimensionality of the subspace). We estimate these separately for each sub-sample by a 20 times repeated 5-fold cross-validation procedure. For the Random Forest, we grow a fixed number of $$5000$$ trees and let the tuning parameter be the number of predictors randomly sampled at each split. We estimate this by a binary search with respect to minimizing the out-of-bag classification error estimate.

In both scenarios, we use the MALOVA data to train the statistical models, and in both situations the agreements are assessed with respect to perturbations of the training data in the following manner. We draw 1000 random sub-samples (without replacement) consisting of 90% of the MALOVA observations and train the four models on each sub-sample.
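For one of the methods, this perturbation scheme might be sketched as follows in R (illustrative code, not the exact analysis script; x denotes the $$n \times 5000$$ MALOVA predictor matrix and y the tumor status):

# 1000 sub-samples of 90% of the observations, each used to train a
# cross-validated logistic Lasso and to rank the 5000 predictors by
# absolute coefficient size (zero coefficients are tied and effectively
# unranked; ties are broken at random here for illustration).
library(glmnet)
set.seed(2018)
rank_lists <- replicate(1000, {
  idx  <- sample(nrow(x), size = floor(0.9 * nrow(x)))
  fit  <- cv.glmnet(x[idx, ], y[idx], family = "binomial", type.measure = "deviance")
  beta <- as.numeric(coef(fit, s = "lambda.min"))[-1]  # drop the intercept
  rank(-abs(beta), ties.method = "random")             # rank 1 = largest |coefficient|
})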
The implementation of Lasso and Ridge regression in the glmnet package offers three different cross-validated optimization criteria for the penalty parameter: total deviance, classification accuracy, and area under the receiver operating characteristic curve (ROC). We apply all three criteria to investigate their effect on the sra. Note that the Lasso models produce incomplete lists, depending on the value of the penalty parameter.

4.1. Agreement of predictor rankings

For each of the four methods, each of the 1000 models trained on the 1000 sub-samples of the MALOVA data produces a ranking of the 5000 predictors according to their importance for discriminating between the tumor types. For the Random Forest classifier, the predictors are ranked according to the Gini index, while for the logistic Lasso and Ridge regression models we order by absolute magnitude of the estimated regression coefficients. For the PLS-DA model, the importance of the predictors is based on a weighted sum of the absolute coefficients, where the weights are proportional to the reduction in the sums of squares across the components.

The top right panel of Figure 2 shows the sequential rank agreement of the estimated importance of the 5000 predictors. For clarity of presentation, we zoom in on the agreement up to list depth 600. At deeper list depths all agreement curves are approximately constant. As expected, most of the sequential rank agreement curves start low, indicating good agreement, followed by an increase until they become approximately constant. The interpretation is that, for all these classification methods, the agreement across the sub-samples is higher in the top than in the tail of the lists. The changepoints where the curves become approximately constant are the list depths where the ranks of the remaining items become close to uniformly random.

Fig. 2. Top left panel: sequential rank agreement of 1000 rankings of the predicted risks of malignant tumor. For each method the different rankings were obtained by first training models in 1000 random sub-samples of the MALOVA data and then predicting the risk of malignant tumor in the 113 DACOVA patients. Top right panel: sequential rank agreement of 1000 rankings of the 5000 predictors. The rankings were obtained from the same 1000 trained models. Bottom left panel: sequential rank agreement for Ridge regression obtained by artificially censoring predictor ranks when their absolute coefficient values are lower than the 0.1% quantile. Bottom right panel: box plots of AUC values across the $$1000$$ sub-samples with respect to the known class labels of the DACOVA data.
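For concreteness, the Gini-based ranking for a single sub-sample could be extracted as follows (a sketch under the same illustrative names as above; y must be a factor for classification):

# Train a Random Forest with 5000 trees and rank predictors by the mean
# decrease in Gini index (rank 1 = most important).
library(randomForest)
rf <- randomForest(x[idx, ], y[idx], ntree = 5000)
gini_rank <- rank(-importance(rf)[, "MeanDecreaseGini"], ties.method = "random")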
An unexpected shape of the agreement curves is seen for the Ridge models for all three tuning criteria. They all show higher disagreement in the top of the lists followed by a decrease. The reason behind this behavior is rather subtle. Looking at the distribution of the absolute values of the regression coefficients, we see that a large proportion of them are numerically very close to zero and have almost equal absolute values. This is a general feature of the Ridge models in this data set and is seen for all 1000 trained models. This implies that when predictors are ranked according to the magnitude of their coefficients, their actual order becomes more uncertain and closer to a random permutation. Since an item enters $$S(d)$$ as soon as it is ranked within the top $$d$$ of a single list, such near-ties can place items with essentially random ranks in the pooled set already at small depths, which helps explain the high disagreement at the top.

This problem can be alleviated by truncating all predictors with absolute coefficient values below a given threshold, thereby introducing an artificial incompleteness of the lists. For the Ridge models tuned with the deviance criterion, Figure 2 (bottom left) shows the sequential rank agreement where, for each of the 1000 trained models, the predictors were artificially censored when their absolute coefficient value was lower than the $$0.1\%$$ quantile of the 5000 absolute coefficient values. The curve was calculated using Algorithm 1 from the Appendix of supplementary material available at Biostatistics Online, with $$B=1000$$ and $$P=5000$$. The corresponding curve from Figure 2 (top right) is shown for comparison. Even though the number of predictors with missing ranks is very small compared to the total number of predictors, the effect on the sequential rank agreement is substantial, and with the artificial censoring the shape of the curve is as expected, starting low and then increasing.

Looking at the agreement curves for the Lasso models in Figure 2 (top right), we clearly see the effect of the sparsity-inducing penalization giving rise to incomplete lists. These curves were similarly calculated using the algorithm and $$1000$$ random permutations. Under the deviance optimization criterion the median number of non-zero coefficients was 33 (range 16–50), and for the class accuracy criterion 14 (range 4–56). These values correspond to the list depths where the agreement curves become constant as a result of the subsequent censoring.

4.2. Agreement of individual risk predictions

To assess the stability of the individual risk predictions, we apply the predictors from the DACOVA data set to each of the models. The predicted probabilities are then ranked in decreasing order such that the patients with the highest risk of a malignant tumor appear at the top of the list. Figure 2 (top left) shows the sra separately for each method, based on the 1000 risk predictions obtained from the models trained in the same $$1000$$ random sub-samples of the MALOVA data. Most curves start low and then increase, indicating higher agreement among high-risk patients. This is expected if we rank the individuals according to highest risk of disease. However, it is also expected that individuals with very low risk show high agreement. In this case, we order the patients according to (high) risk prediction, but we could essentially also have reversed the order to identify the patients that have a low risk prediction. An exception is the risk prediction agreement for the Lasso tuned with the area under the curve (AUC) criterion, which shows very low agreement among the high values of the predicted risks.
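For reference, the risk ranking of the DACOVA patients from one trained model can be obtained as follows (a sketch; x_dacova denotes the DACOVA predictor matrix and fit a cross-validated glmnet model as above):

# Predict the probability of a malignant tumor and rank the patients so
# that rank 1 corresponds to the highest predicted risk.
p_hat     <- predict(fit, newx = x_dacova, s = "lambda.min", type = "response")
risk_rank <- rank(-p_hat, ties.method = "random")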
The reason for this exception is that optimizing the penalty parameter with respect to the AUC criterion tends to favor a very high penalty value, causing only a single predictor to be selected in each of the 1000 iterations. This results in a lack of generalizability to the DACOVA data, which gives rise to the higher disagreement in the predicted risks. In the extreme case where the penalty becomes so high that none of the predictors are selected by the Lasso, the sequential rank agreement for the predicted probabilities becomes undefined since all the ranks will be tied.

Comparing the top left and right panels of Figure 2, it can further be seen that some of the methods show better agreement with respect to the predicted probabilities than for ranking the importance of the predictors, and vice versa. Ridge regression shows higher agreement across training sets for the risk predictions than PLS-DA, and PLS-DA shows higher agreement for predictor importance than Ridge regression. Lasso shows similar agreement for ranking the risk predictions as PLS-DA (except for the AUC criterion), and poorer agreement for ranking predictors. The reason for the latter is the high autocorrelation between the intensities in the mass spectra, which leads to collinearity issues in the regression models. It is well known that variable selection with the Lasso does not perform very well when the predictors are highly correlated. The collinearity does not, however, affect the agreement of the risk predictions (Figure 2, top left), since the specific variable selected within a group of highly correlated predictors is not that important when the purpose is risk prediction.

It appears that Ridge regression tuned with the AUC criterion achieves the best performance with respect to the stability of ranking the individual predicted risk probabilities. It must, however, be stressed that the sequential rank agreement in this application is only concerned with the agreement of the risk predictions across sub-samples and not with the actual accuracy of the risk predictions. Thus, we also computed the AUC values for the different models based on the DACOVA data. The distributions across the $$1000$$ sub-samples for a selection of the models are shown in the bottom right panel of Figure 2. Here, we see that PLS-DA attains the highest AUC values with a median value of $$0.70$$, while the Ridge model with the AUC criterion attains a median AUC of $$0.49$$. This implies that while Ridge regression optimized with respect to the AUC criterion achieves the best sequential rank agreement, it performs similarly to a random coin toss with respect to classifying the DACOVA patients. In practice, both concerns are of importance.

5. Simulation study: comparison of list agreements

We present results from a simulation study where we compare the small-sample properties of the sequential rank agreement to the topK method (Hall and Schimek, 2012). The purpose of the simulations was twofold: first, we want to investigate the rank agreement as it changes with the threshold $$q$$ and the number of lists $$L$$. Second, we want to compare the results from sra with topK for a realistic situation where the true underlying agreement is not governed by a simple probability distribution. Thus, we are interested in two features of the methods: the depth until which they agree and the number of unique predictors found.
To define the depth of agreement for sra, we set a constant threshold function to an integer $$q$$ and report the first crossing point, i.e., the smallest list depth where sra exceeds $$q$$ (as implemented in the sra function from our package SuperRanker using the median absolute distance argument). For topK, we use the function j0.multi which is implemented in the R package TopKLists. Specifically, we set the tuning parameter v of j0.multi to the value 6 and the window parameter d to $$q$$, and report the output parameter maxK as the depth of agreement. Thus, to make this comparison we assume that the first crossing point of sra and the result of the topK method measure the same underlying feature.

The simulations should mimic a data analysis situation where we have a single data set and where important features are identified (and ranked) using marginal $$t$$-tests. We want to use agreement to understand the stability of the observed feature selection. In each simulation run, we first generated an “original” data set with 1000 predictors and 400 observations. The predictors were drawn independently from a standard Gaussian distribution such that $$E(y_i) = \sum_{j=1}^{15}x_{ij},$$ where $$y_i$$ is the $$i$$th response and $$x_{ij}$$ is the $$j$$th predictor for the $$i$$th measurement. For each “original” data set we obtained $$L$$ ranked lists of the 1000 predictors by drawing $$L$$ bootstrap samples (with replacement) of 400 observations and then ranking the 1000 predictors according to their marginal $$t$$ test statistics. Thus, we assessed the depth of agreement among lists that are ranked with the same statistical method on bootstrap versions of the same data set. We report results from two scenarios, each based on 1000 simulated data sets:

Scenario I: Fix the number of lists $$L=8$$ and vary the threshold $$q\in\{3,4,5,6,7,8,9,10\}$$.

Scenario II: Fix the threshold $$q=5$$ and vary the number of lists, $$L\in\{3,5,10,50\}$$.

In both scenarios, we summarized the distribution of the estimated depth of agreement as well as the average number of unique predictors found in the set of predictors selected by the estimated depth of agreement. The results from Scenario I are shown in the left panel of Figure 3. The violin plots (with rectangular kernel) show the distributions of the estimated depths of agreement for both methods. As expected, the depth of agreement increased when the threshold/window for agreement increased.

Fig. 3. Left panel: simulation study showing the distribution of estimated rank agreements for sra and topK for varying thresholds and a fixed number of lists $$L=8$$. The bold numbers are the average number of unique predictors included in the set where the lists agree. Right panel: simulation results for varying number of lists and a fixed threshold of $$q=5$$.

We see that sra results in a substantially lower depth of agreement than the topK method.
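One simulation run can be sketched as follows in R; the standard Gaussian noise on the response is an assumption of this sketch (the text above only specifies the mean), and the marginal $$t$$ statistic is computed from the correlation:

# One run: 400 observations, 1000 Gaussian predictors of which the first 15
# carry signal; L bootstrap samples are each ranked by marginal |t|.
n <- 400; P <- 1000; L <- 8
x <- matrix(rnorm(n * P), n, P)
y <- rowSums(x[, 1:15]) + rnorm(n)      # Gaussian noise: an assumption here
ranks <- replicate(L, {
  b <- sample(n, replace = TRUE)
  tstat <- apply(x[b, ], 2, function(xj) {
    r <- cor(xj, y[b])
    abs(r) * sqrt((n - 2) / (1 - r^2))  # t statistic of a marginal regression
  })
  rank(-tstat, ties.method = "random")  # rank 1 = largest |t|
})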
The average numbers of unique predictors (bold numbers inside the plots), which ideally should be 15 to reflect the number of true underlying predictors, are also markedly smaller for sra and close to the true value. Even larger differences were found when we used the Euclidean distance instead of the median absolute distance for the sequential rank agreement (results not shown). The right panel of Figure 3 shows the results from Scenario II. The number of lists has little impact on the results, and again sra is more conservative than topK and as a consequence includes fewer predictors in the selected set where the lists agree. The effect sizes of the 15 predictors in the model are the same, and in practice we observe that the majority of the 15 predictors are generally picked in each sample, but that their individual rankings within the top 15 vary substantially across bootstrap samples. If the number of influential predictors is reduced, then the variance of the estimated depth and of the number of predictors found is also reduced.

6. Discussion

In this article, we address the problem of comparing ranked lists of the same items. Our proposed method can handle both the situation where the underlying data used to generate the ranked lists are available and the situation where the only available data are the ranked lists themselves. In addition, incomplete ranked lists, where only the ranks of the top $$k$$ items are known, can be accommodated as well. The proposed agreement measure can be interpreted as the average distance between an item's rank and the average rank assigned to that item across lists.

The sequential rank agreement can be used to determine the depth at which the rank agreement becomes larger than desired based on prior requirements or acceptable differences, or it can be used to visually determine when the change in agreement becomes too large. In that regard, the investigator can set prior limits on the level of agreement that is acceptable. We have shown that sra is very versatile: it can be used not only to compare ranked lists of items produced from different samples/populations but also to study the ranks obtained from different analysis methods on the same data, and to evaluate the stability of the ranks by repeatedly bootstrapping (or sub-sampling) the data and comparing the ranks obtained from models trained on the bootstrapped data.

While the sequential rank agreement is primarily an exploratory tool, we have suggested two null hypotheses that can be used to evaluate the sequential rank agreement obtained. Note that neither of the two null hypotheses is concerned with the actual “true ranking”; both are purely concerned with the consistency/stability of the rankings among the lists, and consequently we cannot determine whether the rankings are good but only whether they agree. The sequential rank agreement curve can be compared visually to the curves obtained under either of the null distributions, and pointwise $$p$$ values can be obtained for each depth by counting the number of sequential rank agreements under the null hypothesis that are less than or equal to the observed rank agreement.

Finally, we have, whenever possible, used all available ranks from the lists. We could choose to restrict attention to the ranks of items that show evidence of significance in their models. That ensures that less emphasis is put on the agreement of the non-significant items, and it would be easier to identify a change in agreement among the items deemed relevant.
Artificial censoring of this kind was successfully introduced in the Ridge regression application. We note that the sequential rank agreement is still marred by problems that generally apply to ranking of items and/or individuals. Collinearity in particular can be a huge problem when bootstrapping data or when comparing different analysis methods. For example, marginal analyses where each item is analyzed separately will assign similar ranks to two highly correlated predictors, while methods that provide a sparse solution, such as the Lasso, will rank just one of the two predictors high. Thus, in such a scenario, we would expect low agreement between the rankings from the Lasso and marginal analyses simply because of the way correlated predictors are handled. This is a general problem for ranked lists and not a shortcoming of the sequential rank agreement.

Another caveat with the way the sequential rank agreement is defined is the use of the standard deviation to measure agreement. The standard deviation is an integral part of the limits-of-agreement as discussed by Altman and Bland (1983), which is why we have followed the analogous path. However, the standard deviation is unstable when the number of observations is low, and alternatives such as the median absolute deviation may prove more stable in some situations.

In conclusion, we have introduced a method for evaluation of ranked (partial/censored) lists that can be easily interpreted and that can be applied to a large number of situations. The method presented here can be adapted further by using it to compare and classify statistical analysis methods that agree on the rankings they provide, or by using the rank agreement to optimize a hyper-parameter in, say, elastic net regularized regression, where the rank agreement is used to determine the mixing proportion between the $$L_1$$ and the $$L_2$$ penalty. Finally, the proposed method may be adapted (with some additional assumptions) to the situation where equal emphasis is put on both ends of the lists and not just on the top of the lists.

Supplementary Material

Supplementary material is available at http://biostatistics.oxfordjournals.org.

Acknowledgments

Conflict of Interest: None declared.

References

Altman D. and Bland J. M. (1983). Measurement in medicine: the analysis of method comparison studies. The Statistician 32, 307–317.

Bertelsen K. (1991). Protocol allocation and exclusion in two Danish randomised trials in ovarian cancer. British Journal of Cancer 64, 1172.

Boulesteix A.-L. (2004). PLS dimension reduction for classification with microarray data. Statistical Applications in Genetics and Molecular Biology 3, 1–30.

Boulesteix A.-L. and Slawski M. (2009). Stability and aggregation of ranked gene lists. Briefings in Bioinformatics 10, 556–568.

Breiman L. (2001). Random forests. Machine Learning 45, 5–32.

Carterette B. (2009). On rank correlation and the distance between rankings. In: Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '09. New York, NY, USA: ACM. pp. 436–443.

Dudoit S., Fridlyand J. and Speed T. P. (2002). Comparison of discrimination methods for the classification of tumors using gene expression data. Journal of the American Statistical Association 97, 77–87.
Fagin R., Kumar R. and Sivakumar D. (2003). Comparing top k lists. SIAM Journal on Discrete Mathematics 17, 134–160.

Friedman J., Hastie T. and Tibshirani R. (2010). Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software 33, 1–22.

Golub T. R. (1999). Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 286, 531–537.

Hall P. and Schimek M. G. (2012). Moderate deviation-based inference for random degeneration in paired rank lists. Journal of the American Statistical Association 107, 661–672.

Hogdall E. V., Ryan A., Kjaer S. K., Blaakaer J., Christensen L., Bock J. E., Glud E., Jacobs I. J. and Hogdall C. K. (2004). Loss of heterozygosity on the X chromosome is an independent prognostic factor in ovarian carcinoma: from the Danish “MALOVA” ovarian carcinoma study. Cancer 100, 2387–2395.

Irizarry R. A., Warren D., Spencer F., Kim I. F., Biswal S., Frank B. C., Gabrielson E., Garcia J. G., Geoghegan J., Germino G. and others. (2005). Multiple-laboratory comparison of microarray platforms. Nature Methods 2, 345–350.

Kuhn M. (2014). caret: Classification and Regression Training. R package version 6.0-24.

Liaw A. and Wiener M. (2002). Classification and regression by randomForest. R News 2, 18–22.

Reshef D. N., Reshef Y. A., Finucane H. K., Grossman S. R., McVean G., Turnbaugh P. J., Lander E. S., Mitzenmacher M. and Sabeti P. C. (2011). Detecting novel associations in large data sets. Science 334, 1518–1524.

Ryan C. G., Clayton E., Griffin W. L., Sie S. H. and Cousens D. R. (1988). SNIP, a statistics-sensitive background treatment for the quantitative analysis of PIXE spectra in geoscience applications. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 34, 396–402.

Sampath S. and Verducci J. S. (2013). Detecting the end of agreement between two long ranked lists. Statistical Analysis and Data Mining 6, 458–471.

Segerstedt B. (1992). On ordinary ridge regression in generalized linear models. Communications in Statistics-Theory and Methods 21, 2227–2246.

Tibshirani R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological) 58, 267–288.

Webber W., Moffat A. and Zobel J. (2010). A similarity measure for indefinite rankings. ACM Transactions on Information Systems 28, 20:1–20:38.

© The Author 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).

Sequential rank agreement methods for comparison of ranked lists. Biostatistics, Advance Article, June 3, 2018. Oxford University Press. ISSN 1465-4644, eISSN 1468-4357. DOI: 10.1093/biostatistics/kxy017.

Summary

The comparison of alternative rankings of a set of items is a general and common task in applied statistics. Predictor variables are ranked according to magnitude of association with an outcome, prediction models rank subjects according to the personalized risk of an event, and genetic studies rank genes according to their difference in gene expression levels. We propose a sequential rank agreement measure to quantify the rank agreement among two or more ordered lists. This measure has an intuitive interpretation, it can be applied to any number of lists even if some are partially incomplete, and it provides information about the agreement along the lists. The sequential rank agreement can be evaluated analytically or be compared graphically to a permutation-based reference set in order to identify changes in the list agreements. The usefulness of this measure is illustrated using gene rankings, and using data from two Danish ovarian cancer studies where we assess the within- and between-agreement of different statistical classification methods.

1. Introduction

Ranking of items or results is common in scientific research, and ranked lists occur naturally as the result of many statistical applications. Regression methods rank predictor variables according to the magnitude of their association with an outcome, prediction models rank subjects according to their risk of an event, and genetic studies rank genes according to their difference in gene expression levels across samples. Two common research questions are of interest when several rankings of the same items are available: i) to what extent do the lists agree on the rankings, and how does that change as we go through the lists, and ii) is it possible to identify an optimal rank until which the lists agree on the items?

A typical situation where these questions arise is in high-dimensional genomics studies, such as genome-wide association studies, where several analysis methods (e.g., regression methods, lasso, and random forest) can be used to identify and rank millions of gene variants according to their association with the outcome. The ranking of each gene variant may vary from method to method, and a consensus summary of agreement of the findings is needed to determine which gene variants to investigate more closely in subsequent validation studies. To minimize expenses, it is only of interest to consider gene variants that have a high ranking across the different methods. Multiple ranked lists also appear in machine learning, where the stability of the ranks produced by a “black-box” technique can be evaluated by bootstrapping the data.
Assessing which items are stable (i.e., have consistent rankings across bootstrap samples) will help to weed out possible fluke findings.

In this article, we introduce sequential rank agreement for measuring agreement among ranked lists. The general idea is to define agreement based on the sequence of ranks from a subset of the first $$d$$ items in each list. As agreement metric we adapt the limits of agreement known from the comparison of quantitative measurements (Altman and Bland, 1983), but essentially any measure of agreement could be used. Our proposed approach allows us to compare multiple lists simultaneously, it provides a dynamic measure of agreement as a function of the depth in the lists, it places higher weight on items at the top of the list, it accommodates partially observed lists of varying lengths, and it has a natural interpretation that directly relates to the ranks. Graphical illustration of sequential rank agreement allows us to infer any changepoints, i.e., a list depth where a substantial change in the agreement of the lists occurs, but we also provide asymptotic and randomization-based graphical tools to compare the observed rank agreement to the expected agreement found in non-informative data.

In this sense, our approach is a combination and generalization of some of the ideas of Irizarry and others (2005), Carterette (2009), and Boulesteix and Slawski (2009). Carterette (2009) compares two rankings based on the distance between them as measured by a multivariate Gaussian distribution, and Boulesteix and Slawski (2009) present an overview of approaches for the aggregation of ranked lists, including bootstrap and leave-one-out jack-knife approaches. Irizarry and others (2005) propose a plot based on the intersection of lists, which is a special case of our setting where the agreement metric is the overlap proportion. However, simple intersection places equal weights on all depths of the list, and therefore Fagin and others (2003) and Webber and others (2010) proposed weighted intersections which put more emphasis on the top of the lists. Specifically, Webber and others (2010) define their rank-biased overlap by weighting with a converging series to ensure that the top is weighted higher than the potentially non-informative bottom of the lists. It is possible to use the existing methods to calculate agreement of lists until a given depth, i.e., limited to the first $$d$$ items of each list. However, the interpretation may not be straightforward, especially in the case of more than two lists, and these methods may not accommodate partial rankings.

Very recently, Hall and Schimek (2012) proposed a method for comparing pairwise rankings and derived the asymptotic distribution of the endpoint where the two ranked lists are no longer in agreement. Their approach was based on anchoring one of the two lists and subsequently generating a sequence of 0s and 1s depending on whether the ranks in the second list were close to the ranks from the anchored list. Sampath and Verducci (2013) followed up on this idea for pairwise comparison of lists but used penalties based on a truncated geometric probability instead of a 0–1 process, and they evaluated the distribution of the endpoint of agreement by computational approaches.
The asymptotic distribution in the Hall and Schimek (2012) paper is based on letting the number of lists increase to infinity, which is a situation that is only relevant in special cases, whereas the simulation-based null distribution approach of Sampath and Verducci (2013) does not rely on asymptotic results to evaluate their pairwise findings.

The article is organized as follows: the next section defines sequential rank agreement for multiple ranked lists and discusses how to handle incomplete lists. In Section 3, we show the asymptotic distribution of the endpoint of agreement and discuss approaches to evaluate the results obtained from sequential rank agreement. Finally, we apply the sequential rank agreement to two Danish ovarian cancer studies and compare our method to the method of Hall and Schimek (2012) in a small-sample simulation study before we discuss the findings and possible extensions. The GitHub repository https://github.com/tagteam/SuperRanker contains an implementation of the proposed approach and the code for the leukemia analysis presented in Section 3 and the simulations found in Section 5 (see commit ffe8302).

2. Methods

Consider a set of $$P$$ different items $$X=\{X_1,\dots,X_P\}$$ and a ranking function $$R: \{X_1,\dots,X_P\}\to \{1,\dots,P\}$$, such that $$R(X_p)$$ is the rank of item $$X_p$$. The inverse mapping $$R^{-1}$$ gives the item $$R^{-1}(r)$$ that was assigned to rank $$r\in\{1,\dots,P\}$$. An ordered list is the realization of a ranking function $$R$$ applied to the set of items $$X$$. Panels (a) and (b) of Table 1 show a schematic example of these mappings. Thus, if $$R_l^{-1}(1)=X_{34}$$, then item $$X_{34}$$ is ranked first in list $$l$$, and similarly $$R_l(X_{34})=1$$. In all that follows, we consider a fixed set of items and consider the ranking function to be a random variable. Thus, let $$R_1(X),\dots,R_L(X)$$, $$L\geq2$$, be a sample of $$L$$ independent identically distributed draws from an unknown probability distribution function $$Q$$. One aim is then to test how much $$Q$$ resembles the uniform distribution which assigns probability $$1/P!$$ to each of the $$P!$$ different possible rankings.

Table 1. Example set of ranked lists. (a) shows the ranked lists of items for each of three lists, (b) presents the ranks obtained by each item in each of the three lists, and (c) shows the cumulative set of items up to a given depth in the three lists when $$\varepsilon=0$$ (i.e., an item is added to $$S(d)$$ whenever it appears in at least one list).

(a)
Rank   $$R^{-1}_1$$   $$R^{-1}_2$$   $$R^{-1}_3$$
1      A              A              B
2      B              C              A
3      C              D              E
4      D              B              C
5      E              E              D

(b)
Item   $$R_1$$   $$R_2$$   $$R_3$$
A      1         1         2
B      2         4         1
C      3         2         4
D      4         3         5
E      5         5         3

(c)
Depth   $$S_d$$
1       {A, B}
2       {A, B, C}
3       {A, B, C, D, E}
4       {A, B, C, D, E}
5       {A, B, C, D, E}
The agreement among the lists regarding the rank given to an item $$X_p$$ can be measured by the variance across the lists \begin{align} A(X_p) &= \mathbb{E}_Q\left[\left(R(X_p)-\mathbb{E}_Q R(X_p)\right)^2\right]\\ &= \sum_{r\in\Pi} \left(r(X_p)-\mathbb{E}_Q R(X_p)\right)^2 Q(r),\nonumber \end{align} (2.1) where $$\Pi$$ is the set of all permutations of $$X$$, $$Q$$ is a probability mass function on $$\Pi$$, and $$\mathbb{E}_Q R(X_p)=\sum_{r\in\Pi} r(X_p)Q(r)$$. The empirical counterpart is \begin{align} \widehat{A}_L(X_p) = \frac{1}{L-1}\sum_{i=1}^L (R_i(X_p) - \overline{R}_L(X_p))^2, \quad \overline{R}_L(X_p) = \frac{1}{L}\sum_{i=1}^L R_i(X_p). \end{align} (2.2) For each item, the function $$\widehat{A}_L$$ has an interpretation as the expected Euclidean distance of the individual rankings from the expected ranking over the $$L$$ lists, and it corresponds to the same measure that is used to compute the limits of agreement (Altman and Bland, 1983).

For an integer $$1\le d\le P$$, we define the expected set of unique items found by merging the first $$d$$ elements across the possible lists: \begin{align} S(d) &= \left\{X_p; \left(\sum_{r\in\Pi} 1\left(r(X_p)\le d\right) Q(r)\right) > {\varepsilon}\right\}, \end{align} (2.3) where $$1(\cdot)$$ denotes the indicator function, and where $$\varepsilon\in[0,1)$$ is a pre-specified constant that sets the minimum proportion of lists that an item must be present in before it is added to $$S(d)$$. When $$\varepsilon=0$$, an item is included as soon as it is present in just one list. The empirical counterpart is the set of unique items ranked less than or equal to $$d$$ in any of the $$L$$ lists: $$\widehat{S}_{L}(d) = \left\{X_p; \left(\frac{1}{L}\sum_{l=1}^L 1\left(R_l(X_p)\le d\right)\right)> {\varepsilon}\right\},$$ (2.4) which is exemplified in Panel (c) of Table 1.

We define the sequential rank agreement as the weighted expected agreement of the items found in the set $$S(d)$$: $$\textrm{sra}(d) = \begin{cases}\frac{1}{|S(d)|}\sum_{p \in S(d)}A(X_p) & \text{when } |S(d)|>0, \\ 0 & \text{otherwise,}\end{cases}$$ (2.5) where $$|S(d)|$$ is the cardinality of the set $$S(d)$$. As stated, we are only interested in $$\textrm{sra}(d)$$ when $$|S(d)|>0$$. The empirical counterpart when $$|S(d)|>0$$ is equivalently given by $$\widehat{\textrm{sra}}_L(d) = \frac{\sum_{\{p \in \widehat{S}_{L}(d)\}}(L-1)\widehat{A}_L(X_p)}{(L-1)|\widehat {S}_{L}(d)|}.$$ (2.6) Values of sra close to zero when $$|S(d)|>0$$ suggest that the lists agree on the rankings, while larger values suggest disagreement. If $$|S(d)|=0$$, then no items were sufficiently frequent among the observed lists, and we can conclude that the lists do not agree above the threshold $$\varepsilon$$. The sequential rank agreement will be zero for all values of $$d$$ when the ranked lists are identical.
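To make the definitions concrete, the following R sketch (an illustration, not the SuperRanker implementation) computes $$\widehat{A}_L$$ and $$\widehat{\textrm{sra}}_L(d)$$ for the three lists of Table 1 with $$\varepsilon=0$$:

# Ranks from panel (b) of Table 1: rows are items, columns are lists.
ranks <- rbind(A = c(1, 1, 2), B = c(2, 4, 1), C = c(3, 2, 4),
               D = c(4, 3, 5), E = c(5, 5, 3))
A_hat <- apply(ranks, 1, var)       # empirical agreement (2.2) per item
sra_hat <- sapply(1:5, function(d) {
  S <- apply(ranks <= d, 1, any)    # empirical S(d) for epsilon = 0, cf. (2.4)
  mean(A_hat[S])                    # empirical sra, cf. (2.6)
})
round(sra_hat, 3)
# 1.333 1.222 1.200 1.200 1.200

In this tiny example the curve is almost flat because item B, which has the largest rank variance, already enters $$S(1)$$.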
Interpreting and applying sequential rank agreement The sequential rank agreement is equivalent to the pooled variance of the items found in $$S(d)$$. Thus, the square root of the sequential rank agreement measures the average of the average difference in rankings among the lists for the items we have included until depth $$d$$. In method comparison studies, the observed agreement is compared to a pre-specified acceptable limit, and we can do similarly. For easy visualization of the rank agreement, we suggest to plot $$\sqrt{\widehat{\textrm{sra}}_L(d)}$$ corresponding to the pooled SD against $$d$$. As an example, consider the data by Golub (1999) (found in Dudoit and others (2002)) where 3051 gene expression values measured on 38 tumor mRNA samples were used to classify between acute lymphoblastic leukemia and acute myeloid leukemia. Several analysis methods are possible for these data for example marginal unequal variances two-sample $$t$$ tests, marginal logistic regression analyses, elastic net logistic regression (Friedman and others, 2010), and marginal maximum information content correlations (MIC) (Reshef and others, 2011), and we want to identify a set of genes that consistently are most likely to be associated to leukemia. For the first two methods, the genes were ranked according to minimum $$p$$ value, for logistic regression the genes were ordered by size of the corresponding coefficients (after standardization), and MIC was ordered by absolute correlation which resulted in the top rankings seen in Table 2. The ranked lists appear to agree that genes 2124 and 829 are among the most interesting while the best ranked gene from MIC, gene 378, is not found in the top 10 for two of the other methods. Table 2 List of ranked results from the Golub data. Numbers indicate the predictor/gene for the given ranking and method. Only the top 10 ranks are shown in the table. The ranked lists appear to agree that genes 2124 and 829 are among the most interesting while the highest ranked gene from MIC, gene 378, is not found in the top 10 for two of the other methods. Ranking Welsh’s $$t$$ LogReg ElasticNet MIC 1 2124 2124 829 378 2 896 896 2198 829 3 2600 829 2124 896 4 766 394 808 1037 5 829 766 1665 2124 6 2851 2670 1920 808 7 703 2939 1389 108 8 2386 2386 1767 515 9 2645 1834 1042 2670 10 2002 378 2600 2600 Ranking Welsh’s $$t$$ LogReg ElasticNet MIC 1 2124 2124 829 378 2 896 896 2198 829 3 2600 829 2124 896 4 766 394 808 1037 5 829 766 1665 2124 6 2851 2670 1920 808 7 703 2939 1389 108 8 2386 2386 1767 515 9 2645 1834 1042 2670 10 2002 378 2600 2600 Table 2 List of ranked results from the Golub data. Numbers indicate the predictor/gene for the given ranking and method. Only the top 10 ranks are shown in the table. The ranked lists appear to agree that genes 2124 and 829 are among the most interesting while the highest ranked gene from MIC, gene 378, is not found in the top 10 for two of the other methods. 
The sequential rank agreement curve (using $$\varepsilon=0$$, so that an item needs to be present in just a single list before it is included), shown in the left plot of Figure 1, shows the average distance in ranks for the genes considered among the first $$d$$ positions. Not surprisingly, the sequential rank agreement is better towards the top of the lists (smaller values on the $$y$$ axis correspond to better agreement) than towards the bottom. Figure 1 shows a substantial deterioration in agreement (higher sra) after depth 5. Thus, if we were to restrict attention to a small set of predictors, then our prime focus would be on the items found among the top-5 lists from Table 2. The depth up to which we consider the lists to agree can be chosen either from a pre-specified threshold for an acceptable difference in rankings or from a pre-specified item set size. A changepoint analysis on the sequential rank agreement would be able to identify depths where a substantial increase/change in rank agreement occurs, and would be another way to identify sets of items that share agreement among the lists if a pre-specified acceptable rank agreement threshold is not given.

Fig. 1. Left panel: sequential rank agreement for four different analysis methods applied to the 3051 genes in the Golub data (black line). Right panel: corresponding sequential rank agreement for the same data, but where only the top 20 ranked items are available and the ranks of the remaining items are not. The blue and red areas (the top and bottom areas, respectively) correspond to the independent and randomized reference hypothesis areas, respectively. Note that both the $$x$$ and $$y$$ axes are shown on the log scale to "zoom in" on the top of the lists.

Generally, changes in the level of rank agreement suggest that there are sets of items that are ranked similarly in all lists, while other items constitute set(s) with vastly different ranks. When the lists are likely to agree on a few top-ranked items, the sequential rank agreement curve will start low and then increase until it levels off, exactly as seen in Figure 1.

2.2. Analysis of incomplete lists

Incomplete or partial lists arise commonly, for example in the case of missing data (items), when comparing top-$$d$$ list results from publications, or when some methods only rank a subset of the items.
For example, penalized regression such as the Lasso provides a sparse set of predictors that have non-zero coefficients. There is no obvious ordering of the set of predictors whose coefficients have been shrunk to zero, and thus we end up with a partial ordering. Incomplete lists also occur if, for example, the analyst restricts attention to the ranks of items that have been found to be statistically significant. Sequential rank agreement can be generalized to incomplete lists in the following way. Let $$\Lambda_l\subset X$$ be the subset of $$d_l$$ items that have been ranked highest in list $$l$$. The case where all lists are incomplete at the same depth $$d$$ corresponds to $$d_1=\cdots=d_L=d$$. For incomplete lists the rank function becomes
$$\widetilde R_l(X_p) = \begin{cases} \{R_l(X_p)\} & \text{for }\ X_p\in \Lambda_l,\\ \{d_l+1,\dots,P\} & \text{for }\ X_p \not\in \Lambda_l, \end{cases}$$
(2.7)
where we only know that the rank of an unobserved item in list $$l$$ must be larger than the largest rank observed in that list. The agreement, $$A(X_p)$$, cannot be computed directly for all predictors in the presence of incomplete lists because the exact rank of some items is unknown. Also, recall that the rankings within a single list are not independent, since each rank must appear exactly once in each list. Thus, we cannot simply assign the same number (e.g., the mean of the unassigned ranks) to the missing items, since that would result in less variation of the ranks and hence less variation of the agreement, and it would artificially introduce a (downward) bias of the agreement for items that are missing in multiple lists. Instead, we randomly assign the ranks $$\{d_{l}+1,\dots,P\}$$ to the items that do not occur in list $$l$$. One realization of the $$L$$ rankings of the set $$X$$ is obtained by randomizing the missing items of each list. By randomizing a large number of times, we can compute (2.5) for each realization and then compute the sequential rank agreement as the pointwise (for each depth) average of the rank agreements. The algorithm is described in detail in Algorithm C in the Appendix of supplementary material available at Biostatistics Online.

The proposed approach is based on two assumptions: (i) that the most interesting items are found in the top of the lists, and (ii) that the ranks that are missing from the lists provide so little information that it is reasonable to assume that they can be represented by a random order. The first assumption is justifiable because we have already accepted that it is reasonable to rank the items in the first place. The second assumption is fair in the light of the first, provided that we have a "sufficiently large" part of the top of the lists available. When the two assumptions are satisfied, the interesting part of the sequential rank agreement curve is restricted to depths where the number of items without available ranks is low. As for fully observed lists, we generally expect the sequential rank agreement to start low and then increase, unless the lists are completely unrelated (in which case the sequential rank agreement will be constant at a high level) or the lists mostly agree on the ranking (in which case the sequential rank agreement will also be constant, but at a low level). For incomplete ranked lists, we also expect a changepoint around the depth where the lists become incomplete. This is an artifact stemming from the assumption that the remainder of each list can be replaced by a simple permutation of the missing items.
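A sketch of the randomized completion just described, reusing sequential_rank_agreement from the earlier sketch; the dictionary-based input format and the function name are our own (the full procedure is Algorithm C in the paper's appendix):

```python
import numpy as np

rng = np.random.default_rng(2018)

def sra_incomplete(partial_lists, P, B=1000, epsilon=0.0):
    """Sequential rank agreement for incomplete lists by randomization.

    partial_lists : list of L dicts {item: rank} with observed ranks 1..d_l;
    items are labelled 0..P-1. Missing items in list l receive a random
    permutation of the unused ranks d_l+1, ..., P, and the sra curve is
    averaged pointwise over B random completions.
    """
    L = len(partial_lists)
    curves = np.zeros((B, P))
    for b in range(B):
        ranks = np.empty((P, L), dtype=int)
        for l, observed in enumerate(partial_lists):
            for item, rank in observed.items():
                ranks[item, l] = rank
            missing = [p for p in range(P) if p not in observed]
            free = np.arange(len(observed) + 1, P + 1)
            ranks[missing, l] = rng.permutation(free)
        curves[b] = sequential_rank_agreement(ranks, epsilon)
    return curves.mean(axis=0)
```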
Note that if an item is ranked highly in a few lists but unranked in the remaining lists, then it gets poor rank agreement: we only compare whether the lists agree on their rankings, and in this case they clearly do not. The right plot of Figure 1 shows the impact of restricting the Golub data such that only top-20 lists are available instead of full lists of length 3051 (20 was chosen to resemble the list lengths that might be harvested from published manuscripts). The sequential rank agreement increases much more quickly because the incomplete lists introduce more noise in the estimation of the agreement, but it is still possible to see that the top of the list has a sequential rank agreement that is not substantially different from that of the full lists.

3. Evaluating sequential rank agreement results

To evaluate the sequential rank agreement values, we propose two different benchmark values corresponding to two different hypotheses. We wish to determine whether we observe better agreement than would be expected if there were no relevant information available in the data. The first reference hypothesis is
$$H_0\colon\ \text{the list rankings correspond to completely randomly permuted lists,}$$
(3.1)
which not only assumes that there is no information in the data on which the rankings are based, but also that the methods used to produce the rankings are completely independent. Alternatively, we can drop the assumption of independence among the methods used to generate the $$L$$ ranked lists and consider the null hypothesis that any association with the outcome has been removed from the data:
$$\widetilde H_0\colon\ \text{the list rankings are based on data that contain no association to the outcome.}$$
This alternative null hypothesis addresses the fact that some ranking methods are more likely to provide similar rankings of the same data because they focus on the same features of the data, rather than because of any information contained in the data.

3.1. Permutation-based inference

$$H_0$$ is a quite unrealistic null hypothesis, but we can easily obtain realizations from it simply by permuting the items within each list and then computing the sequential rank agreement for the permuted lists. In the fully observed case, each experiment contains $$L$$ lists of random permutations of the items in $$X$$. For the incomplete case, we first permute the items $$X_1,\dots,X_P$$ and then assign missing ranks for list $$l$$ from $$d_l+1$$ to $$P$$ (i.e., each list has the same number of observed rankings as was observed for list $$l$$ in the original data set). The sequential rank agreement curve from the original lists can then be compared to, say, the pointwise 95% quantiles of the rank agreements obtained under $$H_0$$. To obtain the distribution under $$\widetilde H_0$$, the idea is to repeat the ranking procedures many times on data without association. For each resample, we first permute the outcome variable in the data set; this removes any association between the predictor variables and the outcome while keeping the structure of the predictors. We then apply the same methods that were used for the original data to the permuted data set to generate $$L$$ new rankings and compute the sra for the unassociated data.
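The $$H_0$$ benchmark can be computed directly on the lists. A minimal sketch, again reusing sequential_rank_agreement from above (the $$\widetilde H_0$$ benchmark instead requires refitting each ranking method on outcome-permuted data, so it cannot be written generically here):

```python
import numpy as np

rng = np.random.default_rng(2019)

def h0_band(P, L, n_perm=1000, epsilon=0.0, prob=0.95):
    """Pointwise null quantiles of sra under H0: independent random lists."""
    curves = np.empty((n_perm, P))
    for b in range(n_perm):
        # Each list is an independent, uniformly random permutation of 1..P.
        ranks = np.column_stack([rng.permutation(P) + 1 for _ in range(L)])
        curves[b] = sequential_rank_agreement(ranks, epsilon)
    return np.quantile(curves, prob, axis=0)
```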
For $$\widetilde H_0$$, note that only the outcomes are permuted, so the internal structure of the predictors is preserved. This randomization approach requires that the original data are available, and as such it may not be possible to evaluate $$\widetilde H_0$$ in all situations. If the sequential rank agreement for the original data lies substantially below the distribution of the sequential rank agreements obtained under either $$H_0$$ or $$\widetilde H_0$$, then this suggests that the original ranked lists agree more than would be expected from data that carry no information.

Figure 1 shows the empirical distributions of the sequential rank agreement under $$H_0$$ and $$\widetilde H_0$$, each based on 400 permutations of the Golub data from Section 2.1. Figure 1 indicates that the observed sequential rank agreement for the Golub data is significantly better than what would be expected by chance for data that contain no information, since it lies below the reference area for purely random lists ($$H_0$$, the blue area). However, if we consider $$\widetilde H_0$$, then the sequential rank agreement is just inside the red area (the lower area in the figure), and we conclude that the agreement seen in the Golub data is not significantly better than what we would expect when the association between the predictors and the outcome is removed. The incomplete data also suggest that there may be at most 1 or 2 ranked items towards the top of the lists that yield a result better than what would be expected (the right panel). Not surprisingly, the sequential rank agreement under $$\widetilde H_0$$ is lower than the sequential rank agreement under $$H_0$$, because the four methods used to rank the data ($$t$$-test, logistic regression, elastic net, and MIC) generally tend to identify the same predictors.

It is important to stress that neither $$H_0$$ nor $$\widetilde H_0$$ is related to questions regarding the association between the outcome and the predictors in the data set. Both hypotheses purely concern how the rankings agree in situations where the data contain no information for creating the rankings. It is also worth pointing out that if the lists are short ($$P$$ low) and there are few lists ($$L$$ low), then the number of possible distinct permutations under the null is small, and the $$p$$ value obtained may fluctuate if the number of permutations is small. We have found that using more than 500 permutations works well even for smaller samples.

3.2. Asymptotic inference of change in agreement

In many applications, it is of interest to estimate a list depth which satisfies a changepoint criterion, since that corresponds to a change in agreement among the list ranks. In particular, a changepoint provides a data-driven indicator of the depth at which the lists exhibit a change in rank agreement, and would consequently be an obvious choice for identifying the set of items that the lists agree upon the most. In this section, we investigate the theoretical properties of our proposed method for this specific task. As in Hall and Schimek (2012), we consider an infinite set of lists and study the asymptotic behavior for $$L\to\infty$$. The list lengths are not allowed to change with $$L$$, since the lengths are fixed in most applications. We start by showing that $$\widehat{\textrm{sra}}_L$$ is a consistent estimator of $$\textrm{sra}$$ for $$L \rightarrow \infty$$.
Theorem 3.1 Assume that $$\{R_l(X)\}_{l=1}^L$$ are independent draws from a probability distribution $$Q$$ on the set of lists $$\Pi$$. Then $$\left\|\widehat{\textrm{sra}}_L - \textrm{sra}\right\|_\infty = o_P(1)$$.

Proof. See Appendix A in the supplementary material available at Biostatistics Online. □

We now define the changepoint as the first crossing point of the sequential rank agreement and a threshold function $$q\colon\,\{1,\ldots,P\} \to \mathbb{R}_{\geq 0}$$. The threshold $$q$$ could be a deterministic constant or, for example, the limits of agreement obtained from randomly permuted lists corresponding to the null hypothesis in (3.1). We define the superlevel set of the sequential rank agreement with respect to $$q$$ as
\begin{align} \mathcal{L}(q) = \left\{d : \textrm{sra}(d) \geq q(d)\right\}. \end{align}
(3.2)
A changepoint $$d^\ast(q)$$ in the list agreement is then defined by the position
$$d^\ast(q) = \begin{cases} \inf(\mathcal{L}(q)) & \text{ if } |\mathcal{L}(q)| > 0,\\ P & \text{ if } |\mathcal{L}(q)| = 0, \end{cases}$$
(3.3)
corresponding to the first list depth where the sequential rank agreement exceeds the threshold, if such a position exists. Otherwise, the full list is in agreement according to $$q$$, and the changepoint is set to the full length of the lists. The empirical superlevel set is similarly defined as
\begin{align} \widehat{\mathcal{L}}_L(\widehat{q}_L) = \left\{d : \widehat{\textrm{sra}}_L(d) \geq \widehat{q}_L(d)\right\}, \end{align}
(3.4)
where the threshold function may depend on the sample size. The estimated changepoint is
\begin{align} \widehat{d^\ast_L}(\widehat{q}_L) &= 1(|\widehat{\mathcal{L}}_L(\widehat{q}_L)| > 0)\inf \widehat{\mathcal{L}}_L(\widehat{q}_L) + 1(|\widehat{\mathcal{L}}_L(\widehat{q}_L)| = 0)P. \end{align}
(3.5)
The consistency of the estimated changepoint, $$\widehat{d^\ast_L}(\widehat{q}_L)$$, follows from Theorem 3.1 by the following corollary.

Corollary 3.1 Let $$\widehat{q}_L$$ be a positive threshold function such that $$\left\|\widehat{q}_L - q\right\|_\infty = o_P(1)$$ for some limiting function $$q$$. Then $$\widehat{d^\ast_L}(\widehat{q}_L) \overset{P}{\longrightarrow} d^\ast(q)$$ for $$L \rightarrow \infty$$.

Proof. See Appendix B in the supplementary material available at Biostatistics Online. □

Corollary 3.1 indicates that we can use the threshold function $$\widehat{q}_L$$ estimated under the null hypothesis, as discussed in the previous section, as a limiting threshold function for inferring the depth $$d$$ at which the observed sequential rank agreement first crosses the null threshold, i.e., the depth until which the observed ranked lists are in better agreement than expected under the null hypothesis. In that sense, the threshold function serves the same role as the limits of agreement in method comparison studies, except that the threshold function is not constant but adapts to the changing number of items used to compute the sequential rank agreement at a given depth. In practice, we can compute an estimate of the threshold function under the null using the permutation approach sketched in the previous section, which makes it relevant even for small-sample settings.
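The estimated changepoint (3.5) is simple to compute once an sra curve and a threshold function are available. A sketch (the function name is our own):

```python
import numpy as np

def changepoint(sra_curve, threshold):
    """Estimated changepoint of eq. (3.5): the first depth d at which the
    empirical sra reaches the threshold function, or P if it never does."""
    sra_curve = np.asarray(sra_curve)
    threshold = np.asarray(threshold)
    crossed = np.flatnonzero(sra_curve >= threshold)
    return int(crossed[0]) + 1 if crossed.size else len(sra_curve)
```

For a constant acceptable limit $$q$$ one would pass np.full(P, q) as the threshold; the pointwise null quantiles from the earlier permutation sketch are a natural data-driven choice of $$\widehat{q}_L$$.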
4. Application to ovarian cancer data

We now consider an application of the sequential rank agreement to two data sets consisting of MALDI-TOF (matrix-assisted laser desorption/ionization time-of-flight) mass spectra obtained from blood samples from patients with either benign or malignant ovarian tumors. The data sets are sub-samples of the Danish MALOVA and DACOVA study populations. The MALOVA study is a Danish study on ovarian cancer (Hogdall and others, 2004) in which all Danish women diagnosed with an ovarian tumor and referred for surgery from the participating Departments of Gynecology were enrolled continuously from December 1994 to May 1999. For the purpose of illustration, we use a random sub-sample of 119 patients: 58 patients with malignant ovarian cancers as cases and 61 patients with benign ovarian tumors as controls. The DACOVA study is another Danish study on ovarian cancer, covering about 66% of the female population of Denmark (Bertelsen, 1991). The study aimed to continuously enroll all patients referred to surgery for an ovarian tumor clinically suspected to be cancer during the period from 1984 to 1990. We use a random sub-sample from the DACOVA study of 54 malignant ovarian cancers and 59 benign ovarian tumors/gynecologic disorders.

Each spectrum consists of 49 642 samples over a range of mass-to-charge ratios between 800 and 20 000 Dalton, which we downsample onto an equidistant grid of 5000 points by linear interpolation. We then preprocess the downsampled spectra individually by first removing the slowly varying baseline intensity with the SNIP algorithm (Ryan and others, 1988), followed by a normalization with respect to the total ion count. Finally, we standardize the 5000 predictors to have column-wise zero mean and unit variance in each data set.

We use the two data sets to illustrate how the sequential rank agreement can be applied in two different scenarios. In the first scenario, we assess the agreement of four different statistical classification methods in how they rank the predictors according to their importance for distinguishing benign and malignant tumors. In the second scenario, we assess the agreement among rankings of individual predicted risks of having a malignant tumor. The first scenario is relevant in the context of biomarker discovery, and the second is important, for example, when ranking patients according to urgency of treatment.

Four classification methods are considered: Random Forest (Breiman, 2001) implemented in the R package randomForest (Liaw and Wiener, 2002), logistic Lasso (Tibshirani, 1996) and Ridge regression (Segerstedt, 1992), both implemented in the R package glmnet (Friedman and others, 2010), and Partial Least Squares Discriminant Analysis (PLS-DA) (Boulesteix, 2004) implemented in the R package caret (Kuhn, 2014). All four methods depend on a tuning parameter. The tuning parameter for Lasso and Ridge regression is the degree of penalization, and for PLS-DA it is the number of components (the dimensionality of the subspace). We estimate these separately for each sub-sample by 20 repetitions of 5-fold cross-validation. For the Random Forest, we grow a fixed number of 5000 trees and let the tuning parameter be the number of predictors randomly sampled at each split. We estimate this by a binary search with respect to minimizing the out-of-bag classification error estimate.

In both scenarios, we use the MALOVA data to train the statistical models, and in both situations the agreements are assessed with respect to perturbations of the training data in the following manner. We draw 1000 random sub-samples (without replacement) consisting of 90% of the MALOVA observations and train the four models on each sub-sample.
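The sub-sampling protocol translates directly into ranked lists that can be fed to the sra machinery. A sketch, where fit_importance is a hypothetical placeholder standing in for any of the four models' importance measures (Gini index, absolute coefficients, or PLS-DA weights):

```python
import numpy as np

rng = np.random.default_rng(2020)

def subsample_rankings(X, y, fit_importance, B=1000, frac=0.9):
    """Rank the P predictors in B random sub-samples drawn without replacement.

    fit_importance : callable (X, y) -> one importance value per predictor;
    larger means more important. The callable is an assumption here: in the
    paper it would wrap one of the four trained models.
    Returns a (P, B) array of ranks suitable for sequential_rank_agreement.
    """
    n, P = X.shape
    m = int(frac * n)
    ranks = np.empty((P, B), dtype=int)
    for b in range(B):
        idx = rng.choice(n, size=m, replace=False)   # 90% sub-sample
        importance = np.abs(fit_importance(X[idx], y[idx]))
        order = np.argsort(-importance)              # most important first
        ranks[order, b] = np.arange(1, P + 1)        # ordering -> ranks
    return ranks
```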
The implementation of Lasso and Ridge regression in the glmnet package offers three different cross-validated optimization criteria for the penalty parameter: total deviance, classification accuracy, and area under the receiver operating characteristic curve (ROC). We apply all three criteria to investigate their effect on the sra. Note that these models produce incomplete lists, depending on the value of the penalty parameter.

4.1. Agreement of predictor rankings

For each of the four methods, each of the 1000 models trained on the 1000 sub-samples of the MALOVA data produces a ranking of the 5000 predictors according to their importance for discriminating between the tumor types. For the Random Forest classifier, the predictors are ranked according to the Gini index, while for the logistic Lasso and Ridge regression models we order by the absolute magnitude of the estimated regression coefficients. For the PLS-DA model, the importance of the predictors is based on a weighted sum of the absolute coefficients, where the weights are proportional to the reduction in the sums of squares across the components.

The top right panel of Figure 2 shows the sequential rank agreement of the estimated importance of the 5000 predictors. For clarity of presentation, we zoom in on the agreement up to list depth 600; at deeper list depths all agreement curves are approximately constant. As expected, most of the sequential rank agreement curves start low, indicating good agreement, followed by an increase until they become approximately constant. This has the interpretation that the agreement across the sub-samples is higher in the top than in the tail of the lists for all these classification methods. The changepoints where the curves become approximately constant are the list depths where the ranks of the remaining items become close to uniformly random.

Fig. 2. Top left panel: sequential rank agreement of 1000 rankings of the predicted risks of malignant tumor. For each method, the rankings were obtained by first training models on 1000 random sub-samples of the MALOVA data and then predicting the risk of malignant tumor for the 113 DACOVA patients. Top right panel: sequential rank agreement of 1000 rankings of the 5000 predictors, obtained from the same 1000 trained models. Bottom left panel: sequential rank agreement for Ridge regression obtained by artificially censoring predictor ranks when their absolute coefficient values are lower than the 0.1% quantile. Bottom right panel: box plots of AUC values across the 1000 sub-samples with respect to the known class labels of the DACOVA data.
An unexpected shape of the agreement curves is seen for the Ridge models for all three tuning criteria: they all show higher disagreement in the top of the lists, followed by a decrease. The reason behind this behavior is rather subtle. Looking at the distribution of the absolute values of the regression coefficients, we see that a large proportion of them are numerically very close to zero and have almost equal absolute values. This is a general feature of the Ridge models in this data set and is seen for all 1000 trained models. It implies that when predictors are ranked according to the magnitude of their coefficients, their actual order becomes more uncertain and closer to a random permutation. The problem can be alleviated by truncating all predictors with absolute coefficient values below a given threshold, thereby making the lists artificially incomplete. For the Ridge models tuned with the deviance criterion, Figure 2 (bottom left) shows the sequential rank agreement where, for each of the 1000 trained models, the predictors were artificially censored when their absolute coefficient value was lower than the 0.1% quantile of the 5000 absolute coefficient values. The curve was calculated using Algorithm 1 from the Appendix of the supplementary material available at Biostatistics Online, with $$B=1000$$ and $$P=5000$$. The corresponding curve from Figure 2 (top right) is shown for comparison. Even though the number of predictors with missing ranks is very small compared to the total number of predictors, the effect on the sequential rank agreement is substantial, and with the artificial censoring the shape of the curves is as expected: starting low and then increasing.

Looking at the agreement curves for the Lasso models in Figure 2 (top right), we clearly see the effect of the sparsity-inducing penalization giving rise to incomplete lists. These curves were similarly calculated using the algorithm and 1000 random permutations. Under the deviance optimization criterion, the median number of non-zero coefficients was 33 (range 16–50), and under the classification accuracy criterion 14 (range 4–56). These values correspond to the list depths where the agreement curves become constant as a result of the subsequent censoring.
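A sketch of the artificial censoring step for a single trained model; the output partial list uses the dictionary format of the randomized-completion sketch in Section 2.2 (the function name and output format are our own choices):

```python
import numpy as np

def censor_small_coefficients(coefs, q=0.001):
    """Artificially censor near-zero coefficients for one trained model.

    Keeps only predictors whose absolute coefficient exceeds the q-quantile
    of all absolute coefficients and returns a partial list {item: rank};
    the censored predictors are left unranked and can be handled with the
    randomized-completion sketch above.
    """
    a = np.abs(coefs)
    keep = np.flatnonzero(a > np.quantile(a, q))
    order = keep[np.argsort(-a[keep])]           # ranked by |coefficient|
    return {int(item): rank for rank, item in enumerate(order, start=1)}
```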
4.2. Agreement of individual risk predictions

To assess the stability of the individual risk predictions, we apply the predictors from the DACOVA data set to each of the models. The predicted probabilities are then ranked in decreasing order, such that the patients with the highest risk of a malignant tumor appear at the top of the list. Figure 2 (top left) shows the sra separately for each method, based on the 1000 risk predictions obtained from the models trained on the same 1000 random sub-samples of the MALOVA data. Most curves start low and then increase, indicating higher agreement among high-risk patients. This is expected when we rank the individuals according to highest risk of disease. However, it is also expected that individuals with very low risk show high agreement: we order the patients according to (high) risk prediction, but we could essentially also have reversed the order to identify the patients with low risk predictions. An exception is the risk prediction agreement for the Lasso tuned with the area under the curve (AUC) criterion, which shows very low agreement among the high values of the predicted risks. The reason is that optimizing the penalty parameter with respect to the AUC criterion tends to favor a very high penalty value, causing only a single predictor to be selected in each of the 1000 iterations. This results in a lack of generalizability to the DACOVA data, which gives rise to the higher disagreement in the predicted risks. In the extreme case where the penalty becomes so high that none of the predictors is selected by the Lasso, the sequential rank agreement for the predicted probabilities becomes undefined, since all the ranks will be tied.

Comparing the top left and right panels of Figure 2, it can further be seen that some of the methods show better agreement with respect to the predicted probabilities than for ranking the importance of the predictors, and vice versa. Ridge regression shows higher agreement across training sets for the risk predictions than PLS-DA, while PLS-DA shows higher agreement for predictor importance than Ridge regression. Lasso shows similar agreement for ranking the risk predictions as PLS-DA (except for the AUC criterion), and poorer agreement for ranking predictors. The reason for the latter is the high autocorrelation between the intensities in the mass spectra, which leads to collinearity issues in the regression models. It is well known that variable selection with the Lasso does not perform very well when the predictors are highly correlated. The collinearity does, however, not affect the agreement of the risk predictions (Figure 2, top left), since for a group of highly correlated predictors it matters little which specific variable is selected when the purpose is risk prediction.

It appears that Ridge regression tuned with the AUC criterion achieves the best performance with respect to the stability of ranking the individual predicted risk probabilities. It must, however, be stressed that the sequential rank agreement in this application is only concerned with the agreement of the risk predictions across sub-samples and not with the actual accuracy of the risk predictions. Thus, we also computed the AUC values for the different models based on the DACOVA data. The distributions across the 1000 sub-samples for a selection of the models are shown in the bottom right panel of Figure 2. Here, we see that PLS-DA attains the highest AUC values with a median value of 0.70, while the Ridge model with the AUC criterion attains a median AUC of 0.49. This implies that while Ridge regression optimized with respect to the AUC criterion achieves the best sequential rank agreement, it performs similarly to a random coin toss with respect to classifying the DACOVA patients. In practice, both concerns are of importance.

5. Simulation study: comparison of list agreements

We present results from a simulation study in which we compare the small-sample properties of the sequential rank agreement to the topK method (Hall and Schimek, 2012). The purpose of the simulations was two-fold: first, we want to investigate the rank agreement as it changes with the threshold $$q$$ and the number of lists $$L$$; second, we want to compare the results from sra with topK in a realistic situation where the true, underlying agreement is not governed by a simple probability distribution. Thus, we are interested in two features of the methods: the depth until which they agree and the number of unique predictors found.
To define the depth of agreement for sra, we set a constant threshold function to an integer $$q$$ and report the first crossing point, i.e., the smallest list depth where sra exceeds $$q$$ (as implemented in the sra function from our package SuperRanker, using the median absolute distance argument). For topK, we use the function j0.multi implemented in the R package TopKLists. Specifically, we set the tuning parameter v of j0.multi to the value 6 and the window parameter d to $$q$$, and report the output parameter maxK as the depth of agreement. Thus, to make this comparison, we assume that the first crossing point of sra and the result of the topK method measure the same underlying feature.

The simulations mimic a data analysis situation where we have a single data set and where important features are identified (and ranked) using marginal $$t$$-tests. We want to use agreement to understand the stability of the observed feature selection. In each simulation run, we first generated an "original" data set with 1000 predictors and 400 observations. The predictors were drawn independently from a standard Gaussian distribution, and the responses satisfied $$E(y_i) = \sum_{j=1}^{15}x_{ij},$$ where $$y_i$$ is the $$i$$th response and $$x_{ij}$$ is the $$j$$th predictor for the $$i$$th observation. For each "original" data set, we obtained $$L$$ ranked lists of the 1000 predictors by drawing $$L$$ bootstrap samples (with replacement) of 400 observations and then ranking the 1000 predictors according to their marginal $$t$$ test statistics. Thus, we assessed the depth of agreement among lists that are ranked with the same statistical method on bootstrap versions of the same data set. We report results from two scenarios, each based on 1000 simulated data sets:

Scenario I: fix the number of lists $$L=8$$ and vary the threshold $$q\in\{3,4,5,6,7,8,9,10\}$$.
Scenario II: fix the threshold $$q=5$$ and vary the number of lists, $$L\in\{3,5,10,50\}$$.

In both scenarios, we summarized the distribution of the estimated depth of agreement as well as the average number of unique predictors found in the set of predictors selected by the estimated depth of agreement.
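A sketch of one simulation run producing the $$L$$ bootstrap rankings. The unit-variance Gaussian noise on the outcome is an assumption on our part, and the marginal correlation $$t$$ statistic is used as a simple stand-in for the marginal $$t$$-tests described above:

```python
import numpy as np

rng = np.random.default_rng(2021)

def simulate_rankings(n=400, P=1000, k=15, L=8):
    """One simulation run: L bootstrap rankings of P predictors.

    The outcome is the sum of the first k standard-Gaussian predictors
    plus unit-variance Gaussian noise (the noise level is assumed here).
    """
    X = rng.standard_normal((n, P))
    y = X[:, :k].sum(axis=1) + rng.standard_normal(n)
    ranks = np.empty((P, L), dtype=int)
    for l in range(L):
        idx = rng.integers(0, n, size=n)          # bootstrap sample
        Xc = X[idx] - X[idx].mean(axis=0)
        yc = y[idx] - y[idx].mean()
        r = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
        t = r * np.sqrt((n - 2) / (1 - r ** 2))   # marginal t statistics
        order = np.argsort(-np.abs(t))            # most significant first
        ranks[order, l] = np.arange(1, P + 1)
    return ranks
```

The resulting ranks array can be passed to sequential_rank_agreement, and the first crossing of the constant threshold $$q$$ then gives the sra depth of agreement via the changepoint sketch above.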
The results from Scenario I are shown in the left panel of Figure 3. The violin plots (with a rectangular kernel) show the distributions of the estimated depths of agreement for both methods. As expected, the depth of agreement increased when the threshold/window for agreement increased.

Fig. 3. Left panel: distribution of the estimated rank agreement depths for sra and topK for varying thresholds and a fixed number of lists, $$L=8$$. The bold numbers are the average number of unique predictors included in the set where the lists agree. Right panel: simulation results for a varying number of lists and a fixed threshold of $$q=5$$.

We see that sra results in a substantially lower depth of agreement than the topK method. Also, the average numbers of unique predictors (bold numbers inside the plots), which ideally should be 15 to reflect the number of true underlying predictors, are markedly smaller, and close to the true value, for sra. Even larger differences were found when we used the Euclidean distance instead of the median absolute distance for the sequential rank agreement (results not shown). The right panel of Figure 3 shows the results from Scenario II. The number of lists has little impact on the results, and again sra is more conservative than topK; as a consequence, sra includes fewer predictors in the selected set where the lists agree. The effect sizes of the 15 predictors in the model are all equal, and in practice we observe that the majority of the 15 predictors are picked in each sample, but their individual rankings vary substantially within the top 15 of each bootstrap sample. If the number of influential predictors is reduced, then the variance of the estimated depth and of the number of predictors found is also reduced.

6. Discussion

In this article, we address the problem of comparing ranked lists of the same items. Our proposed method can handle both the situation where the underlying data used to generate the ranked lists are available and the situation where the only available data are the ranked lists themselves. In addition, incomplete ranked lists, where only the top $$k$$ ranked items are known, can be accommodated as well. The proposed agreement measure can be interpreted as the average squared distance between an item's rank and the average rank assigned to that item across the lists. The sequential rank agreement can be used to determine the depth at which the rank agreement becomes larger than desired based on prior requirements or acceptable differences, or it can be used to visually determine when the change in agreement becomes too large. In that regard, the investigator can set prior limits on the level of agreement that is acceptable.

We have shown that sra is very versatile: it can be used not only to compare ranked lists of items produced from different samples/populations, but also to study the ranks obtained from different analysis methods on the same data, and to evaluate the stability of the ranks by bootstrapping (or sub-sampling) the data repeatedly and comparing the ranks obtained from training the models on the bootstrapped data. While the sequential rank agreement is primarily an exploratory tool, we have suggested two null hypotheses that can be used to evaluate the sequential rank agreement obtained. Note that neither of the two null hypotheses is concerned with the actual "true ranking"; both are purely concerned with the consistency/stability of the rankings among the lists, and consequently we cannot determine whether the rankings are good, only whether they agree. The sequential rank agreement curve can be compared visually to the curves obtained under either of the null distributions, and pointwise $$p$$-values can be obtained for each depth by counting the number of sequential rank agreements under the null hypothesis that are less than or equal to the observed rank agreement.

Finally, we have, whenever possible, used all available ranks from the lists. We could choose to restrict attention to the ranks of items which show evidence of significance in their models. That would ensure that less emphasis is put on the agreement of the non-significant items, and it would be easier to identify a change in agreement among the items that were deemed to be relevant.
Artificial censoring of this kind was successfully introduced in the Ridge regression application. We note that the sequential rank agreement is still marred by problems that generally apply to the ranking of items and/or individuals. Collinearity in particular can be a huge problem when bootstrapping data or when comparing different analysis methods. For example, marginal analyses where each item is analyzed separately will assign similar ranks to two highly correlated predictors, while methods that provide a sparse solution, such as the Lasso, will rank just one of the two predictors highly. Thus, in such a scenario we would expect low agreement between the rankings from the Lasso and from marginal analyses, simply because of the way correlated predictors are handled. This is a general problem for ranked lists and not a shortcoming of the sequential rank agreement. Another caveat with the way the sequential rank agreement is defined is the use of the standard deviation to measure agreement. The standard deviation is an integral part of the limits of agreement as discussed by Altman and Bland (1983), which is why we have followed the analogous path. However, the standard deviation is unstable when the number of observations is low, and alternatives such as the median absolute deviation may prove more stable in some situations.

In conclusion, we have introduced a method for the evaluation of ranked (partial/censored) lists that is easily interpreted and can be applied in a large number of situations. The method presented here can be adapted further, for example to compare and classify statistical analysis methods that agree on the rankings they provide, or to optimize a hyper-parameter by rank agreement in, say, elastic net regularized regression, where the rank agreement is used to determine the mixing proportion between the $$L_1$$ and the $$L_2$$ penalties. Finally, the proposed method may be adapted (with some additional assumptions) to the situation where equal emphasis is put on both ends of the lists and not just on the top.

Supplementary Material

Supplementary material is available at http://biostatistics.oxfordjournals.org.

Acknowledgments

Conflict of Interest: None declared.

References

Altman, D. and Bland, J. M. (1983). Measurement in medicine: the analysis of method comparison studies. The Statistician 32, 307–317.
Bertelsen, K. (1991). Protocol allocation and exclusion in two Danish randomised trials in ovarian cancer. British Journal of Cancer 64, 1172.
Boulesteix, A.-L. (2004). PLS dimension reduction for classification with microarray data. Statistical Applications in Genetics and Molecular Biology 3, 1–30.
Boulesteix, A.-L. and Slawski, M. (2009). Stability and aggregation of ranked gene lists. Briefings in Bioinformatics 10, 556–568.
Breiman, L. (2001). Random forests. Machine Learning 45, 5–32.
Carterette, B. (2009). On rank correlation and the distance between rankings. In: Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '09. New York, NY, USA: ACM, pp. 436–443.
Dudoit, S., Fridlyand, J. and Speed, T. P. (2002). Comparison of discrimination methods for the classification of tumors using gene expression data. Journal of the American Statistical Association 97, 77–87.
Fagin, R., Kumar, R. and Sivakumar, D. (2003). Comparing top k lists. SIAM Journal on Discrete Mathematics 17, 134–160.
Friedman, J., Hastie, T. and Tibshirani, R. (2010). Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software 33, 1–22.
Golub, T. R. (1999). Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 286, 531–537.
Hall, P. and Schimek, M. G. (2012). Moderate deviation-based inference for random degeneration in paired rank lists. Journal of the American Statistical Association 107, 661–672.
Hogdall, E. V., Ryan, A., Kjaer, S. K., Blaakaer, J., Christensen, L., Bock, J. E., Glud, E., Jacobs, I. J. and Hogdall, C. K. (2004). Loss of heterozygosity on the X chromosome is an independent prognostic factor in ovarian carcinoma: from the Danish "MALOVA" ovarian carcinoma study. Cancer 100, 2387–2395.
Irizarry, R. A., Warren, D., Spencer, F., Kim, I. F., Biswal, S., Frank, B. C., Gabrielson, E., Garcia, J. G., Geoghegan, J., Germino, G. and others (2005). Multiple-laboratory comparison of microarray platforms. Nature Methods 2, 345–350.
Kuhn, M. (2014). caret: Classification and Regression Training. R package version 6.0-24.
Liaw, A. and Wiener, M. (2002). Classification and regression by randomForest. R News 2, 18–22.
Reshef, D. N., Reshef, Y. A., Finucane, H. K., Grossman, S. R., McVean, G., Turnbaugh, P. J., Lander, E. S., Mitzenmacher, M. and Sabeti, P. C. (2011). Detecting novel associations in large data sets. Science 334, 1518–1524.
Ryan, C. G., Clayton, E., Griffin, W. L., Sie, S. H. and Cousens, D. R. (1988). SNIP, a statistics-sensitive background treatment for the quantitative analysis of PIXE spectra in geoscience applications. Nuclear Instruments and Methods in Physics Research Section B 34, 396–402.
Sampath, S. and Verducci, J. S. (2013). Detecting the end of agreement between two long ranked lists. Statistical Analysis and Data Mining 6, 458–471.
Segerstedt, B. (1992). On ordinary ridge regression in generalized linear models. Communications in Statistics - Theory and Methods 21, 2227–2246.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B 58, 267–288.
Webber, W., Moffat, A. and Zobel, J. (2010). A similarity measure for indefinite rankings. ACM Transactions on Information Systems 28, 20:1–20:38.

© The Author 2018. Published by Oxford University Press. All rights reserved.
Spreading speed and traveling waves for a non-local delayed reaction-diffusion system without quasi-monotonicity. Discrete & Continuous Dynamical Systems - B, 2018, 23 (10) : 4063-4085. doi: 10.3934/dcdsb.2018126 2018 Impact Factor: 1.008 ## Metrics • PDF downloads (23) • HTML views (0) • Cited by (7) ## Other articlesby authors • on AIMS • on Google Scholar [Back to Top]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6402750611305237, "perplexity": 4265.924565512866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987773711.75/warc/CC-MAIN-20191021120639-20191021144139-00124.warc.gz"}
https://infoscience.epfl.ch/record/33572
## Modifications of Ge dots on Si(001) substrates: C-predeposition and Si overgrowth Silicon has become the most important material for the semiconductor industry, due to several advantages such as good heat conductance and the high quality of its oxide. Nevertheless, for opto-electronic devices, the limitation of its indirect band gap has so far prevented a breakthrough. One way to increase the probability of radiative recombination is to exploit Heisenberg's uncertainty principle: if the carriers are confined in a quantum well, their wave functions broaden in reciprocal space, which raises the probability of recombination processes. For Si, this can be achieved by the deposition of Ge, as this material grows on a silicon surface in Stranski-Krastanov mode. After the formation of a 3-5 monolayer (ML) thick wetting layer, Ge dots form which can act as quantum wells within a Si matrix. While holes of the valence band are confined in these dots, the silicon surrounding the Ge dots is under tensile strain and therefore acts as a shallow quantum well for the electrons of the conduction band. Crucial parameters for the confinement of the carriers, and therefore for the probability of radiative recombination, are the density, size, and composition of the dots. One way to influence the density and size of Ge dots is to modify the Si surface by the predeposition of carbon. The deposition of submonolayers of carbon leads to a c(4x4) reconstruction of those parts of the surface in which the carbon is incorporated. If Ge is deposited on such modified surfaces, it starts to grow on the c(4x4)-free areas, due to the strain induced by the carbon. As a result, Ge grows directly in a three-dimensional way, and smaller dot sizes and higher densities can be achieved. As these physical values depend on the size and density of the c(4x4)-reconstructed areas, the modification of the Si surface by the predeposition of carbon was studied by Scanning Tunnelling Microscopy (STM). It was found that the deposition of between 0.11 ML and 0.2 ML of carbon gives the best compromise between density and size of c(4x4)-reconstructed areas. In addition, we studied the carbon-induced Ge dots by photoluminescence spectroscopy. The intensity of the photoluminescence signal indicates an increased probability of no-phonon-assisted recombination. Besides the size and density, the composition is, as already mentioned, of importance. We found by STM that capping the Ge dots with Si, as is necessary to embed them in a Si matrix, causes unwanted intermixing at high temperatures. That can even lead to a shape transformation from dome to hut clusters. This observation was confirmed by Energy Filtered Transmission Electron Microscopy, which shows which parts of the dots intermix most strongly. For a quantitative analysis, reciprocal space maps were measured by x-ray diffraction. The simulation of these space maps gave quantitative insight into the composition of the dots, within the limitations of the method. To prevent the intermixing, the growth temperature for the silicon cap was lowered. Afterwards no shape transformation was found by STM, but during the initial steps of overgrowth (3 ML of Si), another type of cluster appeared, whose origin has not yet been fully understood. To prove the concept of growth-temperature reduction for whole devices, two stacks of Ge dots, one overgrown at high temperature and one at low temperature, were investigated by photoluminescence.
These measurements gave a hint that lowering the overgrowth temperature not only prevents the intermixing during the initial stages of overgrowth, as investigated before, but also once the dots are completely overgrown. Kern, Klaus. Year: 2004. Publisher: Lausanne, EPFL. Other identifier: urn:nbn:ch:bel-epfl-thesis3103-3.
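Returning to the confinement argument at the start of this abstract, the uncertainty-principle reasoning can be made explicit (an added illustration, not part of the original thesis text; here L denotes the well width along the confinement direction):

$$\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2} \quad\Longrightarrow\quad \Delta k \;\sim\; \frac{1}{\Delta x} \;\sim\; \frac{1}{L},$$

so confining a carrier to a well of width L spreads its wave function over a range of order 1/L in reciprocal space; the resulting overlap of electron and hole wave functions in k-space is what raises the probability of (no-phonon) radiative recombination across silicon's indirect gap.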
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.908055305480957, "perplexity": 1590.859460241792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863811.3/warc/CC-MAIN-20180520224904-20180521004904-00551.warc.gz"}
http://mathhelpforum.com/advanced-applied-math/4366-ship.html
# Thread: A ship... 1. ## A ship... A ship that weighs 10,000 tons (mass) must displace what volume of fresh water in order to float? 2. Originally Posted by Celia A ship that weighs 10,000 tons (mass) must displace what volume of fresh water in order to float? It will displace a volume of water whose weight is equal to the weight of the ship. Now the density of fresh water is 1 metric ton per cubic metre, so a 10,000 metric ton ship will displace 10,000 cubic metres of water. RonL 1 metric ton = 1000 kg
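As an added aside (not part of the original thread), the same buoyancy calculation generalises to a one-line function. A minimal Python sketch, where the function name and the density constant are illustrative choices:

```python
# Archimedes' principle: a floating body displaces water whose weight
# equals the weight of the body. For fresh water at 1 metric ton per
# cubic metre, displaced volume in m^3 equals ship mass in tons.

FRESH_WATER_DENSITY = 1.0  # metric tons per cubic metre (1000 kg/m^3)

def displaced_volume_m3(ship_mass_tons: float) -> float:
    """Volume of fresh water a floating ship of the given mass displaces."""
    return ship_mass_tons / FRESH_WATER_DENSITY

print(displaced_volume_m3(10_000))  # -> 10000.0 cubic metres
```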
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8167926669120789, "perplexity": 3915.9734107756244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661289.41/warc/CC-MAIN-20160924173741-00288-ip-10-143-35-109.ec2.internal.warc.gz"}
https://asmedigitalcollection.asme.org/fluidsengineering/article-abstract/108/1/98/409991/Flow-Around-Two-Elliptic-Cylinders-in-Tandem?redirectedFrom=fulltext
Flow around two elliptic cylinders in tandem arrangement was experimentally investigated through measurements of the surface static pressure distribution and estimations of flow parameters such as the drag, lift and moment coefficients. The elliptic cylinders examined had an axis ratio of 1:3 and were arranged in tandem with an identical angle of attack. The angle of attack ranged from 0 to 90 deg and the nondimensional cylinder spacing l/c from 1.03 to 4.0, where l denotes the distance between the cylinder centers and c is the major axis. It has been found that the flow characteristics vary drastically with both the angle of attack and the cylinder spacing.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9283998012542725, "perplexity": 546.0604949992086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655864.19/warc/CC-MAIN-20191015032537-20191015060037-00290.warc.gz"}
https://advocatespedia.com/Armed_Forces_Tribunal_Act,_2007
# Armed Forces Tribunal Act, 2007

## Section 1
### Short Title And Commencement :-
(1) This Act may be called the Armed Forces Tribunal Act, 2007. (2) It shall come into force on such date as the Central Government may, by notification, appoint.

## Section 2
### Applicability Of The Act :-
(1) The provisions of this Act shall apply to all persons subject to the Army Act, 1950 (46 of 1950), the Navy Act, 1957 (62 of 1957) and the Air Force Act, 1950 (45 of 1950). (2) This Act shall also apply to retired personnel subject to the Army Act, 1950 (46 of 1950) or the Navy Act, 1957 (62 of 1957) or the Air Force Act, 1950 (45 of 1950), including their dependants, heirs and successors, in so far as it relates to their service matters.

## Section 3
### Definitions :-
In this Act, unless the context otherwise requires,- (a) "Administrative Member" means a member of the Tribunal who is not a Judicial Member within the meaning of clause (g); (b) "application" means an application made under sub-section (2) of section 14; (c) "appointed day" means the date with effect from which the Tribunal is established by notification under section 4; (d) "Bench" means a Bench of the Tribunal; (e) "Chairperson" means the Chairperson of the Tribunal; (f) "court martial" means a court martial held under the Army Act, 1950 (46 of 1950) or the Navy Act, 1957 (62 of 1957), including the disciplinary courts constituted under that Act, or the Air Force Act, 1950 (45 of 1950); (g) "Judicial Member" means a member of the Tribunal appointed as such under this Act, and includes the Chairperson, who possesses any of the qualifications specified in sub-section (2) of section 6; (h) "Member" means a member (whether Judicial or Administrative) of the Tribunal and includes the Chairperson; (i) "military custody" means the arrest or confinement of a person according to the usages of the service and includes naval or air force custody; (j) "notification" means a notification published in the Official Gazette; (k) "prescribed" means prescribed by rules made under this Act; (l) "President" means the President of India; (m) "rules" means the rules made under this Act; (n) "service" means the service within or outside India; (o) "service matters", in relation to the persons subject to the Army Act, 1950 (46 of 1950), the Navy Act, 1957 (62 of 1957) and the Air Force Act, 1950 (45 of 1950), mean all matters relating to the conditions of their service and shall include- (i) remuneration (including allowances), pension and other retirement benefits; (ii) tenure, including commission, appointment, enrolment, probation, confirmation, seniority, training, promotion, reversion, premature retirement, superannuation, termination of service and penal deductions; (iii) summary disposal and trials where the punishment of dismissal is awarded; (iv) any other matter, whatsoever, but shall not include matters relating to- (i) orders issued under section 18 of the Army Act, 1950 (46 of 1950), sub-section (1) of section 15 of the Navy Act, 1957 (62 of 1957) and section 18 of the Air Force Act, 1950 (45 of 1950); and (ii) transfers and postings including the change of place or unit on posting whether individually or as a part of unit, formation or ship in relation to the persons subject to the Army Act, 1950 (46 of 1950), the Navy Act, 1957 (62 of 1957) and the Air Force Act, 1950 (45 of 1950); (iii) leave of any kind; (iv) summary court martial except where the punishment is of dismissal or imprisonment for more than three months; (p) "summary disposals and
trials" means summary disposals and trials held under the Army Act, 1950 (46 of 1950) the Navy Act, 1957 (62 of 1957) and the Air Force Act, 1950; (45 of 1950) (q) "Tribunal: means the Armed Forces Tribunal established under section 4. ## Section 4 ### Establishment Of Armed Forces Tribunal :- The Central Government shall, by notification, establish a Tribunal to be known as the Armed Forces Tribunal to exercise the jurisdiction, powers and authority conferred on it by or under this Act. ## Section 5 ### Composition Of Tribunal And Benches Thereof :- (1) The Tribunal shall consist of a Chairperson, and such number of Judicial and Administrative Members as the Central Government may deem fit and, subject to the other provisions of this Act, the jurisdiction, powers and authority of the Tribunal may be exercised by Benches there of. (2) Subject to the other provisions of this Act, a Bench shall consist of one Judicial Member and one Administrative Member. ( 3 ) Notwithstanding anything contained in sub-section (1), the Chairperson- ( a ) may, in addition to discharging the functions of a Judicial Member of the Bench to which he is appointed, discharge the functions of an Administrative Member of any other Bench; (b) may transfer a Member from one Bench to another Bench; (c) may, for the purpose of securing that any case or cases, which having regard to the nature of the questions involved, requires or require, in his opinion, or under the rules made under this Act, to be decided by a Bench composed of more than two members, issue such general or special orders, as he may deem fit: Provided that every Bench constituted in pursuance of this clause shall include at least one Judicial Member and one Administrative Member. (4) Subject to the other provisions of this Act, the Benches of the Tribunal shall ordinarily sit at Delhi (which shall be known as the Principal Bench), and at such other places as the Central Government may, by notification, specify. ## Section 6 ===Qualifications For Appointment Of Chairperson And Other Members :-=== ( 1 ) A person shall not be qualified for appointment as the Chairperson unless he is a retired Judge of the Supreme Court or a retired Chief Justice of a High Court. (2) A person shall not be qualified for appointment as a Judicial Member unless he is or has been a Judge of a High Court. (3) A person shall not be qualified for appointment as an Administrative Member unless- (a) he has held or has been holding the rank of Major General or above for a total period of at least three years in the Army or equivalent rank in the Navy or the Air Force; and (b) he has served for not less than one year as Judge Advocate General in the Army or the Navy or the Air Force, and is not below the rank of Major General, Commodore and Air Commodore respectively. Explanation.- When a serving person is appointed as an Administrative Member, he shall have retired from service prior to assuming such appointment. ## Section 7 ### Appointment Of Chairperson And Other Members :- (1) Subject to the provisions of this section, the Chairperson and other Members of the Tribunal shall be appointed by the President:Provided that no appointment under this sub-section shall be made except after consultation with the Chief Justice of India. (2) The President may appoint one or more Members of the Tribunal to be the ViceChairperson, or, as the case may be, the Vice-Chairpersons, thereof. 
## Section 8
### Term Of Office :-
The Chairperson or a Member shall hold office for a term of four years from the date on which he enters upon his office and shall be eligible for re-appointment: Provided that no Chairperson shall hold office as such after he has attained,- (a) in case he has been a Judge of the Supreme Court, the age of seventy years; and (b) in case he has been the Chief Justice of a High Court, the age of sixty-five years: Provided further that no other Member shall hold office as such Member after he has attained the age of sixty-five years.

## Section 9
### Resignation And Removal :-
(1) The Chairperson or a Member may, by notice in writing under his hand addressed to the President, resign his office: Provided that the Chairperson or a Member shall, unless he is permitted by the President to relinquish his office sooner, continue to hold office until the expiry of three months from the date of receipt of such notice or until a person duly appointed as his successor enters upon his office or until the expiry of his term of office, whichever is the earliest. (2) The Chairperson or a Member shall not be removed from his office except by an order made by the President on the ground of proved misbehaviour or incapacity after an inquiry made by a sitting Judge of the Supreme Court in which such Chairperson or other Member had been informed of the charges against him and given a reasonable opportunity of being heard in respect of those charges. (3) The Central Government may, by rules, regulate the procedure for the investigation of misbehaviour or incapacity of the Chairperson or other Member referred to in sub-section (2).

## Section 10
### Salaries, Allowances And Other Terms And Conditions Of Service Of Chairperson And Other Members :-
The salaries and allowances payable to, and the other terms and conditions of service (including pension, gratuity and other retirement benefits) of, the Chairperson and other Members shall be such as may be prescribed by the Central Government: Provided that neither the salary and allowances nor the other terms and conditions of service of the Chairperson and other Members shall be varied to their disadvantage after their appointment.

## Section 11
### Prohibitions As To Holding Of Offices, Etc., By Chairperson Or Member On Ceasing To Be Such Chairperson Or Member :-
On ceasing to hold office- (a) the Chairperson shall be ineligible for further employment either under the Government of India or under the Government of a State; (b) a Member other than the Chairperson shall, subject to the provisions of this Act, be eligible for appointment as a member of any other Tribunal but not for any other employment either under the Government of India or under the Government of a State; and (c) the Chairperson or other Members shall not appear, act or plead before the Tribunal.

## Section 12
### Financial And Administrative Powers Of Chairperson :-
The Chairperson shall exercise such financial and administrative powers over the Benches as may be prescribed: Provided that the Chairperson shall have the authority to delegate such of his financial and administrative powers as he may think fit to any other Member or any officer of the Tribunal, subject to the conditions that such Member or officer shall, while exercising such delegated powers, continue to act under the direction, control and supervision of the Chairperson.
## Section 13
### Staff Of The Tribunal :-
(1) The Central Government shall determine the nature and categories of the officers and other employees required to assist the Tribunal in the discharge of its functions and provide the Tribunal with such officers and other employees as it may think fit. (2) The salaries and allowances payable to, and the other terms and conditions of service of the officers and other employees of the Tribunal shall be such as may be prescribed. (3) The officers and other employees of the Tribunal shall discharge their functions under the general superintendence of the Chairperson.

## Section 14
### Jurisdiction, Powers And Authority In Service Matters :-
(1) Save as otherwise expressly provided in this Act, the Tribunal shall exercise, on and from the appointed day, all the jurisdiction, powers and authority, exercisable immediately before that day by all courts (except the Supreme Court or a High Court exercising jurisdiction under articles 226 and 227 of the Constitution) in relation to all service matters. (2) Subject to the other provisions of this Act, a person aggrieved by an order pertaining to any service matter may make an application to the Tribunal in such form and accompanied by such documents or other evidence and on payment of such fee as may be prescribed. (3) On receipt of an application relating to service matters, the Tribunal shall, if satisfied after due inquiry, as it may deem necessary, that it is fit for adjudication by it, admit such application; but where the Tribunal is not so satisfied, it may dismiss the application after recording its reasons in writing. (4) For the purpose of adjudicating an application, the Tribunal shall have the same powers as are vested in a Civil Court under the Code of Civil Procedure, 1908 (5 of 1908), while trying a suit in respect of the following matters, namely- (a) summoning and enforcing the attendance of any person and examining him on oath; (b) requiring the discovery and production of documents; (c) receiving evidence on affidavits; (d) subject to the provisions of sections 123 and 124 of the Indian Evidence Act, 1872 (1 of 1872), requisitioning any public record or document or copy of such record or document from any office; (e) issuing commissions for the examination of witnesses or documents; (f) reviewing its decisions; (g) dismissing an application for default or deciding it ex parte; (h) setting aside any order of dismissal of any application for default or any order passed by it ex parte; and (i) any other matter which may be prescribed by the Central Government. (5) The Tribunal shall decide both questions of law and facts that may be raised before it.

## Section 15
### Jurisdiction, Powers And Authority In Matters Of Appeal Against Court Martial :-
(1) Save as otherwise expressly provided in this Act, the Tribunal shall exercise, on and from the appointed day, all the jurisdiction, powers and authority exercisable under this Act in relation to appeal against any order, decision, finding or sentence passed by a court martial or any matter connected therewith or incidental thereto. (2) Any person aggrieved by an order, decision, finding or sentence passed by a court martial may prefer an appeal in such form, manner and within such time as may be prescribed.
(3) The Tribunal shall have power to grant bail to any person accused of an offence and in military custody, with or without any conditions which it considers necessary: Provided that no accused person shall be so released if there appears reasonable ground for believing that he has been guilty of an offence punishable with death or imprisonment for life. (4) The Tribunal shall allow an appeal against conviction by a court martial where- (a) the finding of the court martial is legally not sustainable due to any reason whatsoever; or (b) the finding involves wrong decision on a question of law; or (c) there was a material irregularity in the course of the trial resulting in miscarriage of justice, but, in any other case, may dismiss the appeal where the Tribunal considers that no miscarriage of justice is likely to be caused or has actually resulted to the appellant: Provided that no order dismissing the appeal by the Tribunal shall be passed unless such order is made after recording reasons therefor in writing. (5) The Tribunal may allow an appeal against conviction, and pass appropriate order thereon. (6) Notwithstanding anything contained in the foregoing provisions of this section, the Tribunal shall have the power to- (a) substitute for the findings of the court martial, a finding of guilty for any other offence for which the offender could have been lawfully found guilty by the court martial and pass a sentence afresh for the offence specified or involved in such findings under the provisions of the Army Act, 1950 (46 of 1950) or the Navy Act, 1957 (62 of 1957) or the Air Force Act, 1950 (45 of 1950), as the case may be; or (b) if sentence is found to be excessive, illegal or unjust, the Tribunal may- (i) remit the whole or any part of the sentence, with or without conditions; (ii) mitigate the punishment awarded; (iii) commute such punishment to any lesser punishment or punishments mentioned in the Army Act, 1950 (46 of 1950), the Navy Act, 1957 (62 of 1957) and the Air Force Act, 1950 (45 of 1950), as the case may be; (c) enhance the sentence awarded by a court martial: Provided that no such sentence shall be enhanced unless the appellant has been given an opportunity of being heard; (d) release the appellant, if sentenced to imprisonment, on parole with or without conditions; (e) suspend a sentence of imprisonment; (f) pass any other order as it may think appropriate. (7) Notwithstanding any other provisions in this Act, for the purposes of this section, the Tribunal shall be deemed to be a criminal court for the purposes of sections 175, 178, 179, 180, 193, 195, 196 or 228 of the Indian Penal Code (45 of 1860) and Chapter XXVI of the Code of Criminal Procedure, 1973 (2 of 1974).

## Section 16
### Re-Trial :-
(1) Except as provided by this Act, where the conviction of a person by court martial for an offence has been quashed, he shall not be liable to be tried again for that offence by a court martial or by any other Court.
(2) The Tribunal shall have the power, on quashing a conviction, to make an order authorising the appellant to be retried by court martial, but shall only exercise this power when the appeal against conviction is allowed by reasons only of evidence received or available to be received by the Tribunal under this Act and it appears to the Tribunal that the interests of justice require that an order under this section should be made: Provided that an appellant shall not be retried under this section for an offence other than- (a) the offence for which he was convicted by the original court martial and in respect of which his appeal is allowed; (b) any offence for which he could have been convicted at the original court martial on a charge of the first-mentioned offence; (c) any offence charged in the alternative in respect of which the court martial recorded no finding in consequence of convicting him of the first-mentioned offence. (3) Where a person is to be retried under this section for an offence, if the Tribunal or the Supreme Court so directs, whether or not such person is being tried or retried on one or more of the original charges, no fresh investigation or other action shall be taken under the relevant provision of the Army Act, 1950 (46 of 1950) or the Navy Act, 1957 (62 of 1957) or the Air Force Act, 1950 (45 of 1950), as the case may be, or rules and regulations made thereunder, in relation to the said charge or charges on which he is to be retried.

## Section 17
### Powers Of The Tribunal On Appeal Under Section 15 :-
The Tribunal, while hearing and deciding an appeal under section 15, shall have the power- (a) to order production of documents or exhibits connected with the proceedings before the court martial; (b) to order the attendance of the witnesses; (c) to receive evidence; (d) to obtain reports from court martial; (e) to order reference of any question for enquiry; (f) to appoint a person with special expert knowledge to act as an assessor; and (g) to determine any question which is necessary to be determined in order to do justice in the case.

## Section 18
### Cost :-
While disposing of the application under section 14 or an appeal under section 15, the Tribunal shall have power to make such order as to costs as it may deem just.

## Section 19
### Power To Punish For Contempt :-
(1) Any person who is guilty of contempt of the Tribunal by using any insulting or threatening language, or by causing any interruption or disturbance in the proceedings of such Tribunal shall, on conviction, be liable to suffer imprisonment for a term which may extend to three years. (2) For the purposes of trying an offence under this section, the provisions of sections 14, 15, 17, 18 and 20 of the Contempt of Courts Act, 1971 (70 of 1971) shall mutatis mutandis apply, as if a reference therein to- (a) Supreme Court or High Court were a reference to the Tribunal; (b) Chief Justice were a reference to the Chairperson; (c) Judge were a reference to the Judicial or Administrative Member of the Tribunal; (d) Advocate-General were a reference to the prosecutor; and (e) Court were a reference to the Tribunal.

## Section 20
### Distribution Of Business Among The Benches :-
The Chairperson may make provisions as to the distribution of the business of the Tribunal among its Benches.
## Section 21
### Application Not To Be Admitted Unless Other Remedies Exhausted :-
(1) The Tribunal shall not ordinarily admit an application unless it is satisfied that the applicant had availed of the remedies available to him under the Army Act, 1950 (46 of 1950) or the Navy Act, 1957 (62 of 1957) or the Air Force Act, 1950 (45 of 1950), as the case may be, and respective rules and regulations made thereunder. (2) For the purposes of sub-section (1), a person shall be deemed to have availed of all the remedies available to him under the Army Act, 1950 (46 of 1950) or the Navy Act, 1957 (62 of 1957) or the Air Force Act, 1950 (45 of 1950), and respective rules and regulations- (a) if a final order has been made by the Central Government or other authority or officer or other person competent to pass such order under the said Acts, rules and regulations, rejecting any petition preferred or representation made by such person; (b) where no final order has been made by the Central Government or other authority or officer or other person competent to pass such order with regard to the petition preferred or representation made by such person, if a period of six months from the date on which such petition was preferred or representation was made has expired.

## Section 23
### Procedure And Powers Of The Tribunal :-
(1) The Tribunal shall not be bound by the procedure laid down in the Code of Civil Procedure, 1908 (5 of 1908), but shall be guided by the principles of natural justice and subject to the other provisions of this Act and any rules made thereunder, the Tribunal shall have the power to lay down and regulate its own procedure including the fixing of place and time of its inquiry and deciding whether to sit in public or in camera. (2) The Tribunal shall decide every application made to it as expeditiously as possible after a perusal of documents, affidavits and written representations and after hearing such oral arguments as may be advanced: Provided that where the Tribunal deems it necessary, for reasons to be recorded in writing, it may allow oral evidence to be adduced. (3) No adjournment shall be granted by the Tribunal without recording the reasons justifying the grant of such adjournment and cost shall be awarded, if a party requests for adjournment more than twice.

## Section 24
### Term Of Sentence And Its Effect On Appeal :-
(1) The term of any sentence passed by the Tribunal under clause (a) of sub-section (6) of section 15 of this Act shall, unless the Tribunal otherwise directs, be reckoned to commence on the day on which it would have commenced under the Army Act, 1950 (46 of 1950), the Navy Act, 1957 (62 of 1957) or the Air Force Act, 1950 (45 of 1950), as the case may be, under which the court martial against which the appeal was filed had been held. (2) Subject to the provisions of sub-section (3), any sentence passed on an appeal from the Tribunal to the Supreme Court in substitution for another sentence shall, unless the Supreme Court otherwise directs, be reckoned to commence on the day on which the original sentence would have commenced. (3) Where a person who is undergoing sentence is granted stay of the operation of the said sentence, either by suspension or otherwise, pending an appeal, the period during which he is so released due to the sentence having been so stayed, shall be excluded in computing the term for which he is so sentenced by the Tribunal or the Supreme Court, as the case may be.
## Section 25
### Right Of Applicant Or Of Appellant To Take Assistance Of A Legal Practitioner And Of Government, Etc., To Appoint Counsel :-
(1) A person making an application or preferring an appeal to the Tribunal may either appear in person or take the assistance of a legal practitioner of his choice to present his case before the Tribunal. (2) The Central Government or the competent authority, as may be prescribed, may authorise one or more legal practitioners or any of its law officers to act as counsel and every person so authorised by it may present its case with respect to any application or appeal, as the case may be, before the Tribunal.

## Section 26
### Condition As To Making Of Interim Order :-
(1) Notwithstanding anything contained in any other provisions of this Act or in any other law for the time being in force, no interim order (whether by way of injunction or stay or in any other manner) shall be made on an application or appeal, or in any proceeding relating thereto, unless- (a) copies of such application or appeal, as the case may be, and all documents in support of the plea for such interim order are furnished to the party against whom such application or appeal, as the case may be, is made or proposed to be made; and (b) opportunity of being heard is given to the other party in the matter: Provided that the Tribunal may dispense with the requirements of clauses (a) and (b) and make an interim order as an exceptional measure if it is satisfied, for reasons to be recorded in writing, that it is necessary so to do for preventing any loss being caused to the applicant or to the appellant, as the case may be. (2) Where any party against whom an interim order, whether by way of injunction or stay or in any other manner, is made on an application or appeal or in any proceeding relating thereto under sub-section (1), without- (a) furnishing to such party copies of such application or appeal, as the case may be, and all documents in support of the plea for such interim order; and (b) giving such party an opportunity of being heard, makes an application to the Tribunal for the vacation of such order and furnishes a copy of such application or appeal, as the case may be, to the party in whose favour such order has been made or the counsel of such party, the Tribunal shall dispose of the application within a period of fourteen days from the date on which it is received or from the date on which the copy of such application is so furnished, whichever is later, or where the Tribunal is closed on the last day of that period, before the expiry of the next working day; and if the application is not so disposed of, the interim order shall, on the expiry of that period, or, as the case may be, the expiry of the said next working day, stand vacated.

## Section 27
### Power Of Chairperson To Transfer Cases From One Bench To Another :-
On the application of any of the parties and after notice to the parties concerned, and after hearing such of them as he may desire to be heard, or on his own motion without such notice, the Chairperson may transfer any case pending before one Bench for disposal to any other Bench.
## Section 28
### Decision To Be By Majority :-
If the Members of a Bench differ in opinion on any point, the point shall be decided according to the opinion of the majority, if there is a majority, but if the Members are equally divided, they shall state the point or points on which they differ and make a reference to the Chairperson who shall either hear the point or points himself or refer the case for hearing on such point or points by one or more of the Members of the Tribunal and such point or points shall be decided according to the opinion of the majority of the Members of the Tribunal who have heard the case, including those who first heard it.

## Section 29
### Execution Of Order Of Tribunal :-
Subject to the other provisions of this Act and the rules made thereunder, the order of the Tribunal disposing of an application shall be final and shall not be called in question in any Court and such order shall be executed accordingly.

## Section 30
### Appeal To Supreme Court :-
(1) Subject to the provisions of section 31, an appeal shall lie to the Supreme Court against the final decision or order of the Tribunal (other than an order passed under section 19): Provided that such appeal is preferred within a period of ninety days of the said decision or order: Provided further that there shall be no appeal against an interlocutory order of the Tribunal. (2) An appeal shall lie to the Supreme Court as of right from any order or decision of the Tribunal in the exercise of its jurisdiction to punish for contempt: Provided that an appeal under this sub-section shall be filed in the Supreme Court within sixty days from the date of the order appealed against. (3) Pending any appeal under sub-section (2), the Supreme Court may order that- (a) the execution of the punishment or the order appealed against be suspended; or (b) if the appellant is in confinement, he be released on bail: Provided that where an appellant satisfies the Tribunal that he intends to prefer an appeal, the Tribunal may also exercise any of the powers conferred under clause (a) or clause (b), as the case may be.

## Section 31
### Leave To Appeal :-
(1) An appeal to the Supreme Court shall lie with the leave of the Tribunal; and such leave shall not be granted unless it is certified by the Tribunal that a point of law of general public importance is involved in the decision, or it appears to the Supreme Court that the point is one which ought to be considered by that Court. (2) An application to the Tribunal for leave to appeal to the Supreme Court shall be made within a period of thirty days beginning with the date of the decision of the Tribunal and an application to the Supreme Court for leave shall be made within a period of thirty days beginning with the date on which the application for leave is refused by the Tribunal. (3) An appeal shall be treated as pending until any application for leave to appeal is disposed of and, if leave to appeal is granted, until the appeal is disposed of; and an application for leave to appeal shall be treated as disposed of at the expiration of the time within which it might have been made, if it is not made within that time.

## Section 32
### Condonation :-
The Supreme Court may, upon an application made at any time by the appellant, extend the time within which an appeal may be preferred by him to that Court under section 30 or sub-section (2) of section 31.
## Section 33
### Exclusion Of Jurisdiction Of Civil Courts :-
On and from the date from which any jurisdiction, powers and authority becomes exercisable by the Tribunal in relation to service matters under this Act, no Civil Court shall have, or be entitled to exercise, such jurisdiction, power or authority in relation to those service matters.

## Section 34
### Transfer Of Pending Cases :-
(1) Every suit, or other proceeding pending before any court including a High Court or other authority immediately before the date of establishment of the Tribunal under this Act, being a suit or proceeding the cause of action whereon it is based is such that it would have been within the jurisdiction of the Tribunal if it had arisen after such establishment, shall stand transferred on that date to such Tribunal. (2) Where any suit, or other proceeding stands transferred from any court including a High Court or other authority to the Tribunal under sub-section (1),- (a) the court or other authority shall, as soon as may be, after such transfer, forward the records of such suit, or other proceeding to the Tribunal; (b) the Tribunal may, on receipt of such records, proceed to deal with such suit, or other proceeding, so far as may be, in the same manner as in the case of an application made under sub-section (2) of section 14, from the stage which was reached before such transfer or from any earlier stage or de novo as the Tribunal may deem fit.

## Section 35
### Provision For Filing Of Certain Appeals :-
Where any decree or order has been made or passed by any court (other than a High Court) or any other authority in any suit or proceeding before the establishment of the Tribunal, being a suit or proceeding the cause of action whereon it is based is such that it would have been, if it had arisen after such establishment, within the jurisdiction of the Tribunal, and no appeal has been preferred against such decree or order before such establishment or, if preferred, the same is pending for disposal before any court including a High Court and the time for preferring such appeal under any law for the time being in force had not expired before such establishment, such appeal shall lie to the Tribunal within ninety days from the date on which the Tribunal is established, or within ninety days from the date of receipt of the copy of such decree or order, whichever is later.

## Section 36
### Proceedings Before Tribunal To Be Judicial Proceedings :-
All proceedings before the Tribunal shall be deemed to be judicial proceedings within the meaning of sections 193, 219 and 228 of the Indian Penal Code (45 of 1860).

## Section 37
### Members And Staff Of Tribunal To Be Public Servants :-
The Chairperson, other Members and the officers and other employees provided under section 13 to the Tribunal shall be deemed to be public servants within the meaning of section 21 of the Indian Penal Code (45 of 1860).

## Section 38
### Protection Of Action Taken In Good Faith :-
No suit, prosecution or other legal proceeding shall lie against the Central Government or against the Chairperson or any other Member or any other person authorised by the Chairperson, for anything which is done in good faith or intended to be done in pursuance of this Act or any rule or order made thereunder in the discharge of official duties.
## Section 39
### Act To Have Overriding Effect :-
The provisions of this Act shall have effect notwithstanding anything inconsistent therewith contained in any other law for the time being in force or in any instrument having effect by virtue of any law other than this Act.

## Section 40
### Power To Remove Difficulties :-
(1) If any difficulty arises in giving effect to the provisions of this Act, the Central Government may, by order published in the Official Gazette, make such provisions, not inconsistent with the provisions of this Act, as appear to it to be necessary or expedient for removing the difficulty: Provided that no order shall be made under this section after the expiry of two years from the date of commencement of this Act. (2) Every order made under this section shall, as soon as may be after it is made, be laid before each House of Parliament.

## Section 41
### Power Of Central Government To Make Rules :-
(1) The Central Government may, by notification, make rules for the purposes of carrying out the provisions of this Act. (2) Without prejudice to the generality of the foregoing power, such rules may provide for all or any of the following matters, namely:- (a) the case or cases which shall be decided by a Bench composed of more than two Members under clause (c) of sub-section (3) of section 5; (b) the procedure under sub-section (3) of section 9 for the investigation of misbehaviour or incapacity of the Chairperson or other Member; (c) the salaries and allowances payable to, and the other terms and conditions of service of, the Chairperson and other Members under section 10; (d) the financial and administrative powers which the Chairperson may exercise over the Benches of the Tribunal under section 12; (e) the salaries and allowances payable to, and other terms and conditions of service of, the officers and other employees of the Tribunal under sub-section (2) of section 13; (f) the form in which an application may be made under sub-section (2) of section 14, the documents and other evidence by which such application shall be accompanied and the fee payable in respect of the filing of such application or for the service or execution of processes; (g) the other matter which may be prescribed under clause (i) of sub-section (4) of section 14; (h) the form and manner in which an appeal may be filed, the fee payable thereon and the time within which such appeal may be filed under sub-section (2) of section 15; (i) the rules subject to which the Tribunal shall have power to regulate its own procedure under sub-section (1) of section 23; (j) the competent authority who may authorise legal practitioners or law officers to act as counsel under sub-section (2) of section 25; (k) any other matter which may be prescribed or in respect of which rules are required to be made by the Central Government.

## Section 42
### Power To Make Rules Retrospectively :-
The power to make rules under section 41 shall include the power to make such rules or any of them retrospectively from a date not earlier than the date on which this Act shall come into operation, but no such retrospective effect shall be given to any such rule so as to prejudicially affect the interests of any person to whom such rule may be applicable.
## Section 43
### Laying Of Rules :-
Every rule made under this Act shall be laid, as soon as may be after it is made, before each House of Parliament while it is in session, for a total period of thirty days which may be comprised in one session or in two or more successive sessions, and if, before the expiry of the session immediately following the session or the successive sessions aforesaid, both Houses agree in making any modification in the rule or both Houses agree that the rule should not be made, the rule shall thereafter have effect only in such modified form or be of no effect, as the case may be; so, however, that any such modification or annulment shall be without prejudice to the validity of anything previously done under that rule.

<ref>https://www.courtkutchehry.com/Acts/Home</ref>
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4894289970397949, "perplexity": 3050.7107016485743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361169.72/warc/CC-MAIN-20211202054457-20211202084457-00492.warc.gz"}
https://www.beatthegmat.com/when-x-y-2-6-x-y-x-y-t300950.html
## When x/y = 2.6, (x-y)/(x+y) = ?

**Max@Math Revolution (GMAT/MBA Expert):** [GMAT math practice question]

$$When\ \frac{x}{y}=2.6,\ \frac{\left(x-y\right)}{\left(x+y\right)}=?$$

$$A.\ \frac{2}{7}\qquad B.\ \frac{3}{8}\qquad C.\ \frac{4}{9}\qquad D.\ \frac{5}{9}\qquad E.\ \frac{7}{10}$$

**Brent@GMATPrepNow (GMAT Instructor):** GIVEN: x/y = 2.6. All of the answer choices are in fraction form, so rewrite 2.6 as a fraction: x/y = 2 3/5 = 13/5. Let x = 13 and y = 5, since these values satisfy the condition that x/y = 13/5. Our goal is to find the value of (x - y)/(x + y). Plugging in x = 13 and y = 5 gives (x - y)/(x + y) = (13 - 5)/(13 + 5) = 8/18 = 4/9. Cheers, Brent

**Member reply:** Yes, that's the way I did it. There's also this way: given that x/y is specified, can (x - y)/(x + y) make direct use of it? Divide top and bottom by y to get (x/y - 1)/(x/y + 1). Plug in x/y = 2.6: (2.6 - 1)/(2.6 + 1) = 1.6/3.6 = 16/36 = 4/9.

**Max@Math Revolution (GMAT/MBA Expert):**

$$\frac{\left(x-y\right)}{\left(x+y\right)}=\frac{\frac{x}{y}-1}{\frac{x}{y}+1}=\frac{\left(2.6-1\right)}{\left(2.6+1\right)}=\frac{1.6}{3.6}=\frac{16}{36}=\frac{4}{9}$$
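Both approaches are easy to verify numerically; a small Python check (mine, not part of the original thread):

    from fractions import Fraction

    x, y = 13, 5                      # any pair with x/y = 13/5 = 2.6
    print(Fraction(x - y, x + y))     # 4/9, the plug-in approach

    r = Fraction(26, 10)              # x/y = 2.6 as an exact fraction
    print((r - 1) / (r + 1))          # 4/9 again, via the divide-by-y approach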
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43333351612091064, "perplexity": 20599.23341622903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578534596.13/warc/CC-MAIN-20190422035654-20190422061654-00197.warc.gz"}
http://www.sfb45.de/publications-1/yaus-form-of-schwarz-lemma-and-arakelov-inequality-on-moduli-spaces-of-projective-manifolds
# Yau's Form of Schwarz Lemma and Arakelov Inequality on Moduli Spaces of Projective Manifolds

Kang Zuo, Number 10, 2008
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9796717166900635, "perplexity": 8439.92119236061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806760.43/warc/CC-MAIN-20171123070158-20171123090158-00506.warc.gz"}
https://cozilikethinking.wordpress.com/2020/04/08/an-interesting-putnam-problem-on-the-pigeonhole-principle/
# An interesting Putnam problem on the Pigeonhole Principle

The following problem is contained in the book "Putnam and Beyond" by Gelca, and I saw it on stackexchange. I'm mainly recording this solution because it took me longer than usual to come up with it, as I was led down the wrong path many a time. Noting what is sufficient for a block of numbers to have a square product is the main challenge in solving this problem.

Let there be a sequence of $m$ terms, all of which belong to a set of $n$ natural numbers. Prove that if $2^n\leq m$, then there exists a block of consecutive terms, the product of which is a square number.

Let the $n$ numbers be $\{a_1,\dots,a_n\}$, and consider the function $f(k)=($ a tuple of $0$'s and $1$'s), where the $0$'s and $1$'s record the number of times $\pmod 2$ that each element $a_i$ has appeared from the $1$st to the $k$th term of the sequence. So $f(1)=$($1$ in one position, and the rest of the entries are $0$), etc.

Clearly, if $f(k)=(0,0,\dots,0)$ for some $k$, then the product of the consecutive terms from the 1st to the $k$th is a square. If no $f(k)$ is $(0,0,\dots,0)$, then there are only $2^n-1$ possible tuples, while there are at least $2^n$ values of $k$ (since $m\geq 2^n$). Hence, two of them must be equal. Suppose $f(k_1)=f(k_2)$ with $k_1<k_2$. Then every $a_i$ appears an even number of times between positions $k_1+1$ and $k_2$, so the product of the terms from the $(k_1+1)$th to the $k_2$th is a square. Hence proved.
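The argument is constructive, so it translates directly into code. Here is a small Python sketch of the parity-vector bookkeeping (the function name and the sample sequence are my own); it returns a block whose product is a perfect square, which the theorem guarantees to exist whenever $m \geq 2^n$:

    from math import isqrt

    def square_block(seq, values):
        """Return (i, j) so that the product of seq[i:j] is a perfect square,
        for a sequence of m >= 2**n terms drawn from n distinct values."""
        idx = {v: k for k, v in enumerate(values)}
        seen = {(0,) * len(values): 0}      # parity vector of the empty prefix
        parity = [0] * len(values)
        for j, term in enumerate(seq, start=1):
            parity[idx[term]] ^= 1          # f(j), appearances mod 2
            key = tuple(parity)
            if key in seen:                 # f(i) == f(j): block (i, j] works
                return seen[key], j
            seen[key] = j

    values = [2, 3, 5]
    seq = [2, 3, 3, 5, 2, 5, 3, 2]          # m = 8 = 2**3 terms
    i, j = square_block(seq, values)
    prod = 1
    for t in seq[i:j]:
        prod *= t
    assert isqrt(prod) ** 2 == prod         # the block's product is a square
    print(i, j, prod)

Including the empty prefix in `seen` handles the $f(k)=(0,\dots,0)$ case and the collision case uniformly, mirroring the two cases of the proof.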
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 27, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9497883915901184, "perplexity": 131.70258084021273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710813.48/warc/CC-MAIN-20221201121601-20221201151601-00427.warc.gz"}
https://chemistry.stackexchange.com/questions/32822/ring-expansion-in-cyclic-compounds/32827
# Ring expansion in cyclic compounds

My attempt

In the first case: $\ce{H+}$ adds to the $\ce{OH}$ group, giving us a carbocation. The carbocation thus formed is exceptionally stable due to back bonding. I wonder why it would undergo ring expansion even though strain is not a deciding factor here: the ring strain in a cyclobutane ring is ~$26.3\ \mathrm{kcal/mol}$, and that in a cyclopropane ring is ~$27.5\ \mathrm{kcal/mol}$.

In the second case: Again the $\ce{H+}$ adds to the $\ce{OH}$ group, giving us a tertiary carbocation with seven hyperconjugating structures. Why would it undergo ring expansion to give a secondary carbocation with just two hyperconjugating structures? I believe this case is driven by ring strain, as the ring strain in a five-membered ring is ~$6.2\ \mathrm{kcal/mol}$, while the ring strain in a six-membered ring is ~$0.1\ \mathrm{kcal/mol}$.

Source: Advanced Problems In Organic Chemistry, MS Chouhan, 11th edition; Chapter - Hydrocarbons (Alkenes); Question 180 in latest edition

• I would expect the carbocation after the ring contraction reaction to be trapped by some nucleophile instead of giving the elimination product. With no $sp^{2}$ carbon center involved in the product, the ring strain data will make sense in this case. Thus, resonance stabilization will overcome the ring strain. This reaction should be run somewhat under kinetic conditions with careful control, and it may not happen under general thermodynamic conditions (just heating without any nucleophile), where I would expect ring opening to happen. – Ian Fang Jun 13 '15 at 14:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3156414330005646, "perplexity": 2641.705795428054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146176.73/warc/CC-MAIN-20200225233214-20200226023214-00435.warc.gz"}
http://math.stackexchange.com/questions/130555/conditional-probability-distribution-with-gaussian-noise?answertab=oldest
# Conditional probability distribution with Gaussian noise

If I have a relationship as follows: $$Y = a X + G(0,\sigma^2),\text{ so }y = a X + \text{some Gaussian noise},$$ then the conditional probability distribution of $y$ given $x$, i.e. $P(y|x)$, is equal to a Gaussian with mean $= a X$ and variance $= \sigma^2$. I understand this intuitively: the expected value of $y$ should be $a X$, and it varies around that value with the same variance as the noise. Is there a formal proof for this? Thanks, Aly

- 1. If $Z = k + Y$, where $k$ is a constant, then the probability density of $Z$ is the same as that of $Y$ except for a shift: $f_Z(z) = f_Y(z-k)$. As a corollary: $E(Z) = k + E(Y)$ and $Var(Z) = Var(Y)$. 2. If $Z = X + Y$ and we condition on $X$ (which means that we are given the value of $X$), then $X$ can be regarded as a constant, and the above applies. More formally (but, I insist, this should not be necessary to understand the previous point), $f_{Z|X}(z|x) = f_Y(z-x)$.

I think you should write $f_Z(z)=f_Y(z-k)$ rather than $f_Z(Z)-f_Y(Z-k)$, i.e. don't use the capital $Z$ to refer to both the random variable and the argument to the density function. – Michael Hardy Apr 11 '12 at 21:06
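Not a proof, but the claim is easy to sanity-check by simulation; a minimal sketch (my own, with arbitrary values of a, sigma, and x):

    import numpy as np

    rng = np.random.default_rng(0)
    a, sigma, x = 2.0, 0.5, 3.0                   # condition on X = x
    y = a * x + rng.normal(0.0, sigma, 100_000)   # Y = aX + Gaussian noise
    print(y.mean(), a * x)      # sample mean is close to a*x
    print(y.std(), sigma)       # sample standard deviation is close to sigma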
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9949519038200378, "perplexity": 88.08014328319993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701161942.67/warc/CC-MAIN-20160205193921-00195-ip-10-236-182-209.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/55726/properly-discontinuous-action?sort=newest
Properly Discontinuous Action

When looking at definitions and theorems related to properly discontinuous actions of a group $G$ on a topological space $X$, one finds that they differ between books (Topology and Geometry - Bredon; Complex Functions - Jones; Three-Dimensional Geometry and Topology - Thurston). It would therefore be clarifying to write these definitions out separately and see which is stronger and which are equivalent. (I will name them "Type A", "Type B", ...) Let $X$ be a topological space and $G$ be a group acting on $X$.

Definition 1: The action is of "Type A" if the map $G \times X \rightarrow X \times X$, given by $(g,x)\mapsto (x,g.x)$, is proper, i.e. the inverse image of any compact set under this map is compact.

Definition 2: The action is of "Type B" if for any compact set $K\subseteq X$, $K\cap g.K=\phi$ for all but finitely many $g\in G$.

Definition 3: The action is of "Type C" if each $x\in X$ has an open neighbourhood $U$ such that $g.U\cap U=\phi$ for all but finitely many $g\in G$.

Definition 4: The action is of "Type D" if for each $x\in X$ there exists an open neighbourhood $U$ of $x$ such that $g.U\cap U\neq \phi$ for $g\in G$ implies $g.x=x$.

Definition 5: The action is of "Type E" if each $x\in X$ has a neighbourhood $U$ such that the set {$g\in G \colon g.x\in U$} is finite.

Q.1 Which types of actions imply which other types?

Q.2 If $X$ is Hausdorff, then under which type of action is the quotient $X/G$ Hausdorff?

(These questions arise when studying the action of a group on a compact Riemann surface, its quotient, whether the quotient map is branched or unbranched, etc.)

(This question may not be well suited to MO; but when reading a paper on the enumeration of equivalent coverings of a space with a given (finite) transformation group, I came across this notion, and when I looked into the details, the different definitions puzzled me.)

- Good question! Def. 1 and 2 are equivalent for $X$ a locally compact Hausdorff space. This can be found in [Lee, Smooth manifolds, p. 147] and [Koszul, Lectures on Groups of Transformations, p. 3]. Actually Def. 2 is equivalent to the following "Type B of the second kind": for any compact sets $K$ and $L$, $K\cap g\cdot L=\varnothing$ for all but finitely many $g\in G$. Under the same assumption, 1, 2 and 3 should be equivalent according to [Lee, Topological manifolds, p. 267] (at least for discrete $G$). – Daniel Pape Feb 17 '11 at 12:39
Here $G$ may be a topological group, so $G \times X$ will be a topological space. – RDK Feb 17 '11 at 16:06
As Daniel pointed out, these definitions tend to coincide for locally compact spaces and this is probably the reason for the confusion. Beyond local compactness one needs to be more careful and this is taken care of in Koszul's book. E.g. Def. 1 is the correct definition for locally compact spaces, but in general you should add the condition that the map be closed. Concerning Q2: The quotient map $X \to X/G$ is always open, and using Def. 1 the orbit equivalence relation is closed, hence the quotient is Hausdorff. – Theo Buehler Feb 17 '11 at 21:21
If $G$ is a Lie group, even a compact one, then these definitions are inequivalent; OP, do you assume $G$ to be discrete? – inkspot Feb 18 '11 at 14:47
In some actions, the group $G$ (a topological group) has to be discrete. – RDK Feb 19 '11 at 15:50

Below, locally compact spaces are assumed to be Hausdorff. The following is essentially a distillate of results from Bourbaki's Topologie Générale, Chapitres II and III. Definition.
A continuous function $f: X \to Y$ is called proper if $f$ maps closed sets to closed sets and $f^{-1}(K)$ is compact for all compact $K \subset Y$. Remark. If $X$ is Hausdorff and $Y$ is locally compact then a continuous function $f: X \to Y$ is proper if and only if $f^{-1}(K)$ is compact for all compact $K \subset Y$. Moreover, $X$ must be locally compact. To see this, cover $Y$ with open and relatively compact sets $U_{\alpha}$. Then $f^{-1}(U_{\alpha})$ is an open covering of $X$ by relatively compact sets, hence $X$ is locally compact. If $F \subset X$ is closed then $f(F)$ is closed. Indeed, if $(y_{n}) \subset f(F)$ is a net converging to $y$, then we may assume that all $y_{n}$ are in a compact neighborhood $K$ of $y$. Pick a pre-image $x_{n}$ of each $y_{n} \in f^{-1}(K)$, which is compact by assumption. If $x_{i} \to x \in f^{-1}(K)$ is a convergent subnet of $(x_{n})$ then $(f(x_{i}))$ is a subnet of $(y_{n})$, hence $f(x) = y$ by continuity and thus $y \in f(F)$. Remark. In the definition of properness it would suffice to require that $f$ is closed and $f^{-1}(y)$ is compact for all $y \in Y$, but the definition above is good enough for the present purposes. Definition. Let $G$ be a topological group acting continuously on a topological space $X$. The action is called proper if the map $\rho: G \times X \to X \times X$ given by $(g,x) \mapsto (x,gx)$ is proper. Proposition. If $G$ acts properly on $X$ then $X/G$ is Hausdorff. In particular, each orbit $Gx$ is closed. The stabilizer $G_{x}$ of each point is compact and the map $G/G_{x} \to Gx$ is a homeomorphism. Moreover, if $G$ is Hausdorff then so is $X$. Proof. Indeed, the orbit equivalence relation is the image of $\rho$, hence it is closed. Since the projection $X \to X/G$ is open, this implies that $X/G$ is Hausdorff. Since the pre-image of the point $[x]$ in $X/G$ is its orbit $Gx$, we see that orbits are closed. The stabilizer $G_{x}$ of a point $x$ is the projection of $\rho^{-1}(x,x)$ to $G$, hence it is compact. The map $G/G_{x} \to Gx$ is proper and $1$-to-$1$, hence a homeomorphism. Finally, if $G$ is Hausdorff, then $\{e\} \times X \subset G \times X$ is closed and therefore the diagonal $\Delta_{X} = \rho(\{e\} \times X)$ of $X \times X$ is closed, hence $X$ is Hausdorff. Exercise. Let $G$ be a Hausdorff topological group acting properly on a locally compact space $X$. Then $G$ and $X/G$ are both locally compact. If $X$ is compact Hausdorff then so are $G$ and $X/G$. Replace finite by compact in Type A and Type B. Then we have the following implications for a continuous action: Proper $\Longrightarrow$ Type A, the converse holds if both $G$ and $X$ are locally compact. Type A $\Longrightarrow$ Type B. Let $K \subset X$ be compact. Then $K \times K \subset X \times X$ is compact. Thus, if the action is of type A, then $\rho^{-1}(K \times K) = \{(g,x) \in G \times X\,:\,(x,gx) \in K \times K\} \subset G \times X$ is compact. The projection of this set to $G$ is compact and consists precisely of the $g \in G$ for which $K \cap gK \neq \emptyset$. Type B $\Longrightarrow$ Type A if $X$ is Hausdorff. We have to show that $\rho^{-1}(L)$ is compact for every compact $L \subset X \times X$. Let $K$ be the union of the two projections of $L$. Then $(g,x) \in \rho^{-1}(K \times K)$ is equivalent to $x \in K \cap gK$. Since $\rho^{-1}(K \times K)$ is compact and $\rho^{-1}(L)$ is a closed subset of $\rho^{-1}(K \times K)$, we have that $\rho^{-1}(L)$ is compact. Corollary. 
If $G$ and $X$ are locally compact, properness, Type A and Type B are all equivalent. Let me now show that in the locally compact setting properness is equivalent to a refinement of Type C: Proposition. Let $G$ and $X$ be locally compact and assume that $G$ acts continuously on $X$. The following are equivalent: 1. The action is proper. 2. For all $x,y \in X$ there are open neighborhoods $U_{x}, U_{y} \subset X$ of $x$ and $y$ such that $C = \{g \in G\,:\,gU_x \cap U_{y} \neq \emptyset \}$ is relatively compact. Proof. $1.$ implies $2.$ Let $K_{x}$ and $K_{y}$ be compact neighborhoods of $x$ and $y$. Then the set $\rho^{-1}(K_{x} \times K_{y})$ is compact and its projection to $G$ contains $C$ and is compact. Now let $U_{x}$ and $U_{y}$ be the interiors of $K_{x}$ and $K_{y}$. $2.$ implies $1$. Let $K \subset X \times X$ be compact. We want to show that $\rho^{-1}(K)$ is compact as well. Let $(g_{n},x_{n})$ be a universal net in $\rho^{-1}(K)$. Then $(x_{n},g_{n}x_{n})$ is a universal net in $K$ and hence converges to some $(x,y) \in K$. Let $U_{x}, U_{y}$ and $C$ be as in $2.$. Then $(x_{n},g_{n}x_{n}) \in U_{x} \times U_{y}$ eventually and thus also $(g_{n}) \subset C$ eventually. Since $(g_{n})$ is universal and $C$ is relatively compact, $(g_{n})$ converges to some $g \in G$. Hence $(g_{n},x_{n})$ converges to $(g,x) \in \rho^{-1}(K)$. Example. To see that Type C is weaker than properness, consider $A = \begin{pmatrix} 2 & 0 \\ 0 & 2^{-1} \end{pmatrix}$ and the action of $\mathbb{Z}$ on $\mathbb{R}^{2} \smallsetminus \{0\}$ given by $n \cdot x = A^{n} x$. For instance for $x = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $y = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ and all neighborhoods $U_{x} \ni x$ and $U_{y} \ni y$ the set $\{n \in \mathbb{Z}\,:\, U_{x} \cap n \cdot U_{y} \neq \emptyset \}$ is infinite. Thus this action isn't proper. On the other hand, it is easy to see that it is of Type C. Remark. The previous example shows that properness of an action is not a local property. Exercise. If the action of a locally compact group $G$ on a locally compact space $X$ is of type C and $X/G$ is Hausdorff then it is proper. To finish this discussion, it is evident that an action of type C is also of type E, hence type E is also weaker than properness. Finally, a trivial action is of type D, hence this property has nothing to do with properness. Here are some references: I've followed Bourbaki, Topologie Générale, Ch. III, in terminology, and the proofs I've given are variants of Bourbaki's. I happen to like Koszul's Lectures on groups of transformations. If you're looking for a more pedestrian approach, you can find the most important facts in Lee's Introduction to topological manifolds. - Unfortunately, Bourbaki's terminology differs from edition to edition, and from French to English. So in the newest French edition, a map $f:X\to Y$ is said to be proper if the map $f\times{\rm Id}_Z:X\times Z\to Y\times Z$ is closed. The notion coincides with the one discussed above when $X$ is Hausdorff and $Y$ is Hausdorff and locally compact. (It implies that $X$ is locally compact.) – ACL Feb 27 '11 at 23:39 Addendum: This second definition ($f$ universally closed) is equivalent to the one in your second remark : $f$ closed with compact fibres. – ACL Feb 28 '11 at 0:07 @ACL: Are you sure about the changes in terminology? I've never observed such a thing but I didn't look for it. Anyway in my edition it's defined as you say but that's equivalent to the definition I gave. Indeed, in Ch I. 
§10, No 2, Thm 1 they prove that for a continuous map $f$, the map $f \times \operatorname{id}_{Z}$ is closed for all $Z$ iff $f$ is closed and $f^{-1}(y)$ is quasi-compact. I hinted at that in my second remark above. In Prop. 6 they prove that $f^{-1}(K)$ is quasi-compact for all quasi-compact $K$ without further hypotheses. Since points are quasi-compact, the two definitions coincide. – Theo Buehler Feb 28 '11 at 0:08 Anyway, thanks for pointing out that I didn't make myself perfectly clear. I'll try to clarify some things some time tomorrow. – Theo Buehler Feb 28 '11 at 0:36

Theo Buehler did a great job relating Types A, B, C, and E. Going off of Stefan Witzel's new answer, I'd like to point out that in Munkres's Topology, in section 81 (page 505 of that link), he defines an action to be properly discontinuous if for all $x\in X$ there is a nbhd $U$ s.t. $g(U)\cap U = \emptyset$ unless $g=1$. So if we tweak Type D from the original question to conclude $g=1$ rather than just that $g$ fixes $x$, then we recover Munkres' definition, and it's no longer as trivial as Theo's answer showed the old Type D definition was. I think Munkres' definition is the one point-set topologists would use. It's nice because you don't need to assume a topology on $G$, but of course you could just put the discrete topology on it. Perhaps the other definitions are more popular in the literature on Riemann surfaces, and the difference may be because of standing hypotheses in that field, since they often care most about the case of Fuchsian groups. Certainly Munkres' definition implies Type E and Type C. Munkres also points out that the quotient map $\pi: X\rightarrow X/G$ is a covering map iff the action of $G$ is properly discontinuous. An exercise in section 81 gives: Let $X$ be locally compact Hausdorff and let $G$ act freely (i.e. fixed-point-free). Suppose that for each compact $C \subset X$ there are only finitely many $g\in G$ s.t. $C\cap g(C) \neq \emptyset$. Then the action of $G$ is properly discontinuous and $X/G$ is locally compact Hausdorff. So this tells you when Type B implies Munkres' definition. Now let's relate Munkres' definition to Type A and Theo's answer. Using Theo's various propositions and corollaries, it's not hard to see that if $X$ is a locally compact Hausdorff space and $G$ is any group (which we'll equip with the discrete topology), then Munkres' definition implies Type A. Conversely, if $X$ is locally compact then a proper action of a discrete group must be of Type B (by Theo's comment), and this implies Munkres' definition because local compactness lets us get from $g(K)\cap K = \emptyset$ to $g(U)\cap U = \emptyset$. - The definition you're attributing to Munkres is closer to what I would call "properly discontinuous and fixed-point free". I would certainly allow a properly discontinuous action to have fixed points, myself. – Dylan Thurston Jul 23 '15 at 1:46 Yes, it does seem Munkres is assuming fixed-point free in his book. I suppose he's interested in the topology of the quotient, but it does seem restrictive. Thanks for pointing this out. – David White Jul 23 '15 at 6:27

Just two small remarks: • The action is properly discontinuous if it is proper and the group is equipped with the discrete topology (compact then meaning finite; this accounts for some confusion, I guess). • I think in Definition 4 the conclusion should be $g = 1$. Then it means that the action is properly discontinuous and free (which for example Bredon calls properly discontinuous).
- Of course, if a torsion-free group acts properly discontinuously, then it acts freely. So in that case the definitions are again equivalent. – Stefan Witzel Jun 30 '11 at 21:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9704558849334717, "perplexity": 135.29019198888267}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049277286.54/warc/CC-MAIN-20160524002117-00150-ip-10-185-217-139.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-solve-x-3-5-6
# How do you solve x+3<5?

Mar 28, 2017

$x < 2$

#### Explanation:
Subtract 3 from both sides so $x$ stands alone.
$x < 5 - 3$
$x < 2$
$\left\{x | x < 2\right\}$ - set builder notation
$\left(- \infty , 2\right)$ - interval notation
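If you want to check this with a computer algebra system, here is a one-liner (my own addition, using SymPy):

    from sympy import symbols, solveset, S

    x = symbols('x')
    print(solveset(x + 3 < 5, x, domain=S.Reals))   # Interval.open(-oo, 2)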
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4466395676136017, "perplexity": 14844.940829028983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363189.92/warc/CC-MAIN-20211205130619-20211205160619-00225.warc.gz"}
http://mathhelpforum.com/new-users/227655-probability.html
# Math Help - Probability

1. ## Probability
Two dice, one red and one white, are rolled. What is the probability that the white die turns up a smaller number than the red die? Please give the answer step by step.

2. ## Re: Probability
Number of possible outcomes = 6*6 = 36.
If the white die shows 1, the red die can be 2, 3, 4, 5 or 6: 5 outcomes.
If the white die shows 2, the red die can be 3, 4, 5 or 6: 4 outcomes.
If the white die shows 3, the red die can be 4, 5 or 6: 3 outcomes.
If the white die shows 4, the red die can be 5 or 6: 2 outcomes.
If the white die shows 5, the red die can be 6: 1 outcome.
So the probability = (5+4+3+2+1)/36 = 15/36 = 5/12.

3. ## Re: Probability
Thank you so much
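A brute-force enumeration (my addition, not from the thread) confirms the count:

    from fractions import Fraction
    from itertools import product

    outcomes = list(product(range(1, 7), repeat=2))    # (white, red) rolls
    favorable = [(w, r) for w, r in outcomes if w < r]
    print(len(favorable), len(outcomes))               # 15 36
    print(Fraction(len(favorable), len(outcomes)))     # 5/12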
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8688508868217468, "perplexity": 4675.413596170061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273513.48/warc/CC-MAIN-20140728011753-00200-ip-10-146-231-18.ec2.internal.warc.gz"}
https://mathematica.stackexchange.com/questions/160571/regionplot-of-minimum-value
# RegionPlot of minimum value

I have a complicated function $f_{a,b}(t)$ of one argument ($t$) whose definition depends on two parameters $a,b$. For every point in the $(a,b)$ plane (with $a\in[a_{min}, a_{max}]$ and $b\in[b_{min}, b_{max}]$) I need to find $\min\limits_{t\in\mathbb{R}}{f_{a,b}(t)}$ and verify whether it's positive; all the points $(a,b)$ that satisfy this condition are part of the region that I want to plot. What's the easiest way to do this?

For example, taking $f_{a,b}(t)=1+at+bt^2$, we can use Minimize[1 + a t + b t^2, t] and find that the minimum value is (for $b>0$) equal to $\frac{-a^2+4b}{4b}$. Then the plot is given by RegionPlot[(-a^2 + 4 b)/(4 b) > 0, {a, amin, amax}, {b, bmin, bmax}] for the given values of amin etc. The issue is that in my case $f_{a,b}(t)$ is more complicated and the minimum must (in general) be found numerically. I have tried RegionPlot[Minimize[f, t][[1]] > 0, {a, amin, amax}, {b, bmin, bmax}], but the evaluation never completes; replacing Minimize with FindMinimum only gives me a long list of error messages. Any help is appreciated!

If your function is too complicated to handle analytically, use NMinimize. RegionPlot[First@NMinimize[f[a, b, t], t] > 0, {a, 0, 3}, {b, 0, 4}]

• This does not work actually; (for the simple function in my example) I get a long list of error messages such as NMinimize::nnum: The function value 1-0.829053 a+0.687328 b is not a number at {t} = {-0.829053}. Can you provide a minimal working example of the code you used to produce your image? – GioMott Nov 27 '17 at 20:32
• First I defined your simple example function f[a_, b_, t_] = 1 + a t + b t^2. Then the command you see above, RegionPlot[First@NMinimize[f[a, b, t], t] > 0, {a, 0, 3}, {b, 0, 4}]. This works fine. – Akku14 Nov 28 '17 at 20:48
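Outside Mathematica, the same idea (grid over the parameters, numeric minimization over t at each grid point) can be sketched in Python with SciPy playing the role of NMinimize; this is my rough analog, with the example function and plot ranges taken from the thread:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.optimize import minimize_scalar

    def f(a, b, t):
        return 1 + a * t + b * t ** 2          # the example function above

    def min_over_t(a, b):
        # numeric minimization over t, analogous to NMinimize[f[a,b,t], t]
        return minimize_scalar(lambda t: f(a, b, t)).fun

    A, B = np.meshgrid(np.linspace(0, 3, 80), np.linspace(0.1, 4, 80))
    region = np.vectorize(min_over_t)(A, B) > 0    # True where min_t f > 0

    plt.contourf(A, B, region.astype(float), levels=[-0.5, 0.5, 1.5])
    plt.xlabel("a"); plt.ylabel("b")
    plt.show()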
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7805159687995911, "perplexity": 759.2931947536845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145708.59/warc/CC-MAIN-20200222150029-20200222180029-00134.warc.gz"}
https://math.stackexchange.com/questions/56302/a-nonmeasurable-set-e-of-finite-measure-and-a-g-delta-set-g-that-contai
# A nonmeasurable set $E$ of finite measure and a $G_{\delta}$ set $G$ that contains $E$

I understand that the measurability of a set is equivalent to the existence of a $G_{\delta}$ set $G$ that contains the set and has the same outer measure. However, I do not know how to answer this question in my text:

Let $E$ be a nonmeasurable set of finite outer measure. Show that there is a $G_{\delta}$ set $G$ containing $E$ such that the outer measure of $E$ is the same as the outer measure of $G$, while the outer measure of $G\setminus E$ is greater than zero.

The theorem of Vitali states that any set of real numbers with positive outer measure contains a subset that fails to be measurable, but I do not know how to relate this theorem to the problem.

• Are you talking about the Lebesgue measure (or, in general, a complete measure)? Because then $G-E$ must have outer measure greater than zero, or otherwise it would be measurable (with measure zero), and then $E$ would be measurable since $E=G-(G-E)$. – Ofir Aug 8 '11 at 14:24

Let $$E$$ be any set with a finite outer measure $$r=\lambda^*(E)$$. From the definition of outer measure, $$r$$ is the infimum of the measures of open sets containing $$E$$. For each $$n$$ you can find $$U_n$$ open such that $$E\subseteq U_n$$ and $$r\leq \lambda(U_n)\leq r+\frac{1}{n}$$. Taking $$G=\bigcap U_n$$ we get that $$G$$ is a $$G_\delta$$ set, $$E\subseteq G$$ and therefore $$r=\lambda^*(E)\leq \lambda^*(G)=\lambda(G)$$, and for every $$n$$ we also have $$\lambda(G)\leq\lambda(U_n)\leq r+\frac{1}{n}$$, so $$\lambda(G)=r$$. If $$\lambda^*(G-E)=0$$ then $$G-E$$ would be measurable, and then $$E$$ would be measurable since $$G-(G-E)=E$$ and both $$G$$, $$G-E$$ are measurable. This shows in particular that $$\lambda^*(A)+\lambda^*(B-A)$$ can be larger than $$\lambda^*(B)$$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9809620976448059, "perplexity": 47.59460015579332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540528457.66/warc/CC-MAIN-20191210152154-20191210180154-00057.warc.gz"}
http://mathhelpforum.com/pre-calculus/213044-epsilon-2.html
1. ## Re: Epsilon
Originally Posted by Petrus: I don't really know what Definition 7 tells me to do, but the problem says I shall use Definition 7.
I agree with the definition, of course. As I described in post #11, the inequality from the definition is translated into two inequalities. One of them holds for all x > 0, and the other only starting from some N > 0. You need to solve the second inequality to find N.

2. ## Re: Epsilon
So I've got one question, if I'm correct: you will have 2 cases because of the absolute value, f(x) > 0 and f(x) < 0. What I want to say is that in case 2 I would solve it your way, and case 1 is what I just did. I'm pretty confused about what to do with these two cases.

3. ## Re: Epsilon
The problem says to find N from the definition of the limit. It must be true that $\frac{\sqrt{4x^2+1}}{x+1}<2.5$ for $x > N$ and $\frac{\sqrt{4x^2+1}}{x+1}>1.5$ for $x > N$. The graph shows that $\frac{\sqrt{4x^2+1}}{x+1}<2.5$ for all $x > 0$. Solving the second inequality, you find $N > 0$ such that $\frac{\sqrt{4x^2+1}}{x+1}>1.5$ for $x > N$. So, if $x > \max(0,N)$, we have both $x > 0$ and $x > N$ and therefore both inequalities are true. But $\max(0,N) = N$, so both of those inequalities are true for $x > N$.

4. ## Re: Epsilon
Originally Posted by emakarov: That's not what I get. The inequality is $2\sqrt{4x^2+1}>3(x+1)$, or $7x^2-18x-5>0$.
Edit: I miscalculated, ignore this.

5. ## Re: Epsilon
Hello Emakarov! If this were on an exam, would this be a good answer (I will skip the math part): By the graph or calculate we can se to right side of x intercept (0) we can se that it will be <0 so we only will get positive x intercept if we use x<0. So i calculate the x intercept when -(f(x)-L)<epsilon (0.5) and get x intercept as 2.82 and 0.2528. then i can set like 3 in function and look if its lower then epsilon (0.5) and it is.

6. ## Re: Epsilon
Originally Posted by Petrus (post #5 above).
Sorry, this is very hard to read. "we can se that it will be <0": what will be < 0? "we only will get positive x intercept": why are you talking about x-intercepts? And x-intercepts of what? The x-intercepts of the original function $f(x)=\frac{\sqrt{4x^2+1}}{x+1}$ (*) do not arise in this problem at all. "the x intercept when -(f(x)-L)<epsilon": you can't talk about the x-intercept "when" an equation holds. The concept of an x-intercept is only applicable to a function, not to an inequality. An inequality has solutions, which is a possibly infinite set of real numbers. Sometimes this set can be expressed using several inequalities of the form x > ... or x < ... . "and get x intercept as 2.82 and 0.2528": the second value should be negative, but it is not important here. "then i can set like 3 in function and look if its lower then epsilon (0.5) and it is": "like" is not appropriate in mathematical text. Which function: f(x) from (*) above or |f(x) - 2|? How can you check whether |f(x) - 2| < 0.5 for all x > 3, i.e., for infinitely many x? And why would you need to check this if you have just solved this inequality?

7. ## Re: Epsilon
Originally Posted by emakarov: Sorry, this is very hard to read. [...]
"we can se that it will be <0": what will be < 0? "we only will get positive x intercept": why are you talking about x-intercepts? And x-intercepts of what? The x-intercepts of the original function $f(x)=\frac{\sqrt{4x^2+1}}{x+1}$ (*) do not arise in this problem at all. "the x intercept when -(f(x)-L)<epsilon": you can't talk about the x-intercept "when" an equation holds. The concept of an x-intercept is only applicable to a function, not to an inequality. An inequality has solutions, which is a possibly infinite set of real numbers. Sometimes this set can be expressed using several inequalities of the form x > ... or x < ... . "and get x intercept as 2.82 and 0.2528": the second value should be negative, but it is not important here. "then i can set like 3 in function and look if its lower then epsilon (0.5) and it is": "like" is not appropriate in mathematical text. Which function: f(x) from (*) above or |f(x) - 2|? How can you check whether |f(x) - 2| < 0.5 for all x > 3, i.e., for infinitely many x? And why would you need to check this if you have just solved this inequality? ima try do my best to explain now, im not a good explainer :/ when i mean <0 then i mean with solving absolute value equation we will have positive value when -(f(x)-L) tbh idk how to describe this with words. I basicly understand not really good on this but i need to read more about this. 8. ## Re: Epsilon Originally Posted by Petrus indeed that but there is a problem.. im kinda never on pc anymore.. i use my smartphone if i get stuck :S (im mostly in school without any pc) You may have noticed how few of us are now replying to your post. I will not as long as you are so discourteous as to posting unreadable images. If I were you, I would get access to a PC or a tablet computer. I have a 10in tablet that is a fully function cell phone. It is easy to use on this site. You could use Tapatalk on your phone. OR you could learn to use the camera correctly. It appears that you are just too lazy to do that. Page 2 of 2 First 12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8580315709114075, "perplexity": 780.5003574608169}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00573-ip-10-171-10-108.ec2.internal.warc.gz"}
https://datascience.stackexchange.com/questions/32164/whats-wrong-with-my-deep-nn-of-two-hidden-layers/32170
# What's wrong with my deep NN of two hidden layers?

    batch_size = 128
    size_1 = 1024
    size_2 = 256
    size_3 = 128
    beta = 0.001

    graph = tf.Graph()
    with graph.as_default():
        tf_train_dataset = tf.placeholder(
            tf.float32, shape=(batch_size, image_size*image_size))
        tf_train_labels = tf.placeholder(
            tf.float32, shape=(batch_size, num_labels))
        tf_valid_dataset = tf.constant(valid_dataset)
        tf_test_dataset = tf.constant(test_dataset)

        # Weights and biases
        g_W1 = tf.Variable(
            tf.truncated_normal([image_size*image_size, size_1]))
        g_B1 = tf.Variable(tf.zeros([size_1]))
        g_W2 = tf.Variable(
            tf.truncated_normal([size_1, size_2]))
        g_B2 = tf.Variable(tf.zeros([size_2]))
        g_W3 = tf.Variable(
            tf.truncated_normal([size_2, num_labels]))
        g_B3 = tf.Variable(tf.zeros([num_labels]))
        # g_W4 = tf.Variable(
        #     tf.truncated_normal([size_3, num_labels]))
        # g_B4 = tf.Variable(
        #     tf.zeros([num_labels]))

        L1 = tf.nn.relu(
            tf.matmul(tf_train_dataset, g_W1) + g_B1)
        L2 = tf.nn.relu(
            tf.matmul(L1, g_W2) + g_B2)
        # L3 = tf.nn.relu(
        #     tf.matmul(L2, g_W3) + g_B3)

        dr_prob = tf.placeholder("float")
        # L1 = tf.nn.dropout(tf.nn.relu(
        #     tf.matmul(tf_train_dataset, g_W1) + g_B1), 1.0)
        # L2 = tf.nn.dropout(tf.nn.relu(
        #     tf.matmul(L1, g_W2) + g_B2), 1.0)
        # L3 = tf.nn.dropout(tf.nn.relu(
        #     tf.matmul(L2, g_W3) + g_B3), 1.0)

        logits = tf.matmul(L2, g_W3) + g_B3
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels,
                                                    logits=logits)) + \
            beta*tf.nn.l2_loss(g_W1) + \
            beta*tf.nn.l2_loss(g_W2) + \
            beta*tf.nn.l2_loss(g_W3)
            # beta*tf.nn.l2_loss(g_W4)

        # Optimizer. (The definition is missing from the post; a plain
        # gradient-descent optimizer is assumed here, consistent with the
        # answer below, which says standard SGD was used.)
        optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

        # Predictions for the training, validation, and test data.
        train_prediction = tf.nn.softmax(logits)
        L1_pred = tf.nn.relu(tf.matmul(tf_valid_dataset, g_W1) + g_B1)
        L2_pred = tf.nn.relu(tf.matmul(L1_pred, g_W2) + g_B2)
        # L3_pred = tf.nn.relu(tf.matmul(L2_pred, g_W3) + g_B3)
        valid_prediction = tf.nn.softmax(tf.matmul(L2_pred, g_W3) + g_B3)
        L1_test = tf.nn.relu(tf.matmul(tf_test_dataset, g_W1) + g_B1)
        L2_test = tf.nn.relu(tf.matmul(L1_test, g_W2) + g_B2)
        # L3_test = tf.nn.relu(tf.matmul(L2_test, g_W3) + g_B3)
        test_prediction = tf.nn.softmax(tf.matmul(L2_test, g_W3) + g_B3)

    num_steps = 3001
    with tf.Session(graph=graph) as session:
        tf.global_variables_initializer().run()
        print("Initialized")
        for step in range(num_steps):
            # Pick an offset within the training data, which has been randomized.
            # Note: we could use better randomization across epochs.
            offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
            # Generate a minibatch.
            batch_data = train_dataset[offset:(offset + batch_size), :]
            batch_labels = train_labels[offset:(offset + batch_size), :]
            # Prepare a dictionary telling the session where to feed the minibatch.
            # The key of the dictionary is the placeholder node of the graph to be fed,
            # and the value is the numpy array to feed to it.
            feed_dict = {tf_train_dataset : batch_data,
                         tf_train_labels : batch_labels,
                         dr_prob : 0.5}
            _, l, predictions = session.run(
                [optimizer, loss, train_prediction], feed_dict=feed_dict)
            if (step % 500 == 0):
                print("Minibatch loss at step %d: %f" % (step, l))
                print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
                print("Validation accuracy: %.1f%%" % accuracy(
                    valid_prediction.eval(), valid_labels))
        print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))

It's now been two days of trying to figure out what's wrong with my solution; I hope somebody can spot it. The purpose is to train a simple deep NN with two hidden layers. I have checked others' solutions and I still don't see what's wrong with my code (it's the 4th problem of the 3rd assignment of the Udacity deep learning online course). I am getting the following output:

    Initialized
    Minibatch loss at step 0: 3983.812256
    Minibatch accuracy: 8.6%
    Validation accuracy: 10.0%
    Minibatch loss at step 500: nan
    Minibatch accuracy: 9.4%
    Validation accuracy: 10.0%
    Minibatch loss at step 1000: nan
    Minibatch accuracy: 8.6%
    Validation accuracy: 10.0%
    Minibatch loss at step 1500: nan
    Minibatch accuracy: 11.7%
    Validation accuracy: 10.0%
    Minibatch loss at step 2000: nan
    Minibatch accuracy: 6.2%
    Validation accuracy: 10.0%
    Minibatch loss at step 2500: nan
    Minibatch accuracy: 10.2%
    Validation accuracy: 10.0%
    Minibatch loss at step 3000: nan
    Minibatch accuracy: 7.8%
    Validation accuracy: 10.0%
    Test accuracy: 10.0%

You didn't say in your question what you tried when debugging, but I'll try to answer.

Short answer: It looks to me like you need to choose a lower learning rate, since your loss is exploding after the first iteration.

Explanation: You are using standard Stochastic Gradient Descent to perform the optimization. It is a non-adaptive learning-rate algorithm, which means that if the learning rate is chosen too high, the loss can explode. That's why, when I run into such optimization issues with a new neural network, I like to set a very low learning rate first to ensure convergence. You could also use an adaptive optimizer such as AdaGrad or Adam, which both have TensorFlow implementations. I hope that'll solve your issue.

• Reducing the learning rate to 0.05 solves the divergence problem, that's completely right. Keeping the same code (not using an adaptive learning rate), is increasing the number of steps the only way to get better accuracy now? May 25 '18 at 16:41
• Well, not necessarily. The only thing that's sure is that if your loss diminishes, you can put the optimization issue aside. If you plot the training and validation loss, you'll see whether you have achieved convergence or not. If you have, then increasing the number of steps is useless. Otherwise, you can increase the number of steps, refine the learning rate, or try an adaptive optimizer. I hope I'm clear enough. May 25 '18 at 16:48

In addition to the reply I marked as the answer to my question (the learning rate), I would like to add the following things that I needed to change:
1. The standard deviation of my weight initialization, since I initialized my weights as truncated normal.
2. Using a function that truncates the output of my ReLU (relu6 in TensorFlow).
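A sketch of the fixes discussed above, in the same TF1 style as the question's code. The 0.05 learning rate is the value from the comments; the stddev and the Adam learning rate are illustrative choices of mine, since the thread does not give exact values:

    # smaller initial weights (stddev value is illustrative)
    g_W1 = tf.Variable(tf.truncated_normal([image_size*image_size, size_1],
                                           stddev=0.1))
    # a bounded activation, as in the self-answer
    L1 = tf.nn.relu6(tf.matmul(tf_train_dataset, g_W1) + g_B1)

    # plain SGD with the reduced learning rate from the comments...
    optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
    # ...or an adaptive optimizer, as the answer suggests
    optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)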
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5894855260848999, "perplexity": 7386.492330038482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358189.36/warc/CC-MAIN-20211127133237-20211127163237-00260.warc.gz"}
https://tioj.ck.tp.edu.tw/submissions/123796
# Score: 22 (total time 38608 ms, peak memory 114388 KiB)

| Subtask no. | Testdata Range | Constraints | Score |
|---|---|---|---|
| 1 | 0~17 | $N \leq 10$ | 9 / 9 |
| 2 | 0~27 | $N \leq 20$ | 13 / 13 |
| 3 | 0~42 | $N \leq 3000$ | 0 / 29 |
| 4 | 0~59 | No additional constraints | 0 / 49 |

# Testdata Results

| Testdata no. | Subtasks | Time (ms) | Memory (KiB) | Verdict | Score |
|---|---|---|---|---|---|
| 0 | 1 2 3 4 | 36 | 50124 | Accepted | 100 |
| 1 | 1 2 3 4 | 36 | 50280 | Accepted | 100 |
| 2 | 1 2 3 4 | 40 | 50120 | Accepted | 100 |
| 3 | 1 2 3 4 | 36 | 50112 | Accepted | 100 |
| 4 | 1 2 3 4 | 36 | 50076 | Accepted | 100 |
| 5 | 1 2 3 4 | 36 | 50048 | Accepted | 100 |
| 6 | 1 2 3 4 | 36 | 50228 | Accepted | 100 |
| 7 | 1 2 3 4 | 36 | 50192 | Accepted | 100 |
| 8 | 1 2 3 4 | 40 | 50312 | Accepted | 100 |
| 9 | 1 2 3 4 | 36 | 50128 | Accepted | 100 |
| 10 | 1 2 3 4 | 36 | 50280 | Accepted | 100 |
| 11 | 1 2 3 4 | 36 | 50308 | Accepted | 100 |
| 12 | 1 2 3 4 | 36 | 50096 | Accepted | 100 |
| 13 | 1 2 3 4 | 36 | 50308 | Accepted | 100 |
| 14 | 1 2 3 4 | 36 | 50192 | Accepted | 100 |
| 15 | 1 2 3 4 | 36 | 50036 | Accepted | 100 |
| 16 | 1 2 3 4 | 40 | 50284 | Accepted | 100 |
| 17 | 1 2 3 4 | 32 | 50128 | Accepted | 100 |
| 18 | 2 3 4 | 40 | 50100 | Accepted | 100 |
| 19 | 2 3 4 | 40 | 50148 | Accepted | 100 |
| 20 | 2 3 4 | 36 | 50284 | Accepted | 100 |
| 21 | 2 3 4 | 36 | 50300 | Accepted | 100 |
| 22 | 2 3 4 | 36 | 50112 | Accepted | 100 |
| 23 | 2 3 4 | 40 | 50224 | Accepted | 100 |
| 24 | 2 3 4 | 36 | 50252 | Accepted | 100 |
| 25 | 2 3 4 | 36 | 50256 | Accepted | 100 |
| 26 | 2 3 4 | 36 | 50164 | Accepted | 100 |
| 27 | 2 3 4 | 36 | 50308 | Accepted | 100 |
| 28 | 3 4 | 624 | 75588 | Accepted | 100 |
| 29 | 3 4 | 632 | 75964 | Accepted | 100 |
| 30 | 3 4 | 640 | 76236 | Accepted | 100 |
| 31 | 3 4 | 48 | 50296 | Wrong Answer | 0 |
| 32 | 3 4 | 648 | 75832 | Accepted | 100 |
| 33 | 3 4 | 648 | 76108 | Accepted | 100 |
| 34 | 3 4 | 632 | 75696 | Accepted | 100 |
| 35 | 3 4 | 36 | 50284 | Wrong Answer | 0 |
| 36 | 3 4 | 632 | 75656 | Accepted | 100 |
| 37 | 3 4 | 36 | 50292 | Accepted | 100 |
| 38 | 3 4 | 628 | 75460 | Accepted | 100 |
| 39 | 3 4 | 632 | 76148 | Accepted | 100 |
| 40 | 3 4 | 616 | 75012 | Accepted | 100 |
| 41 | 3 4 | 632 | 75820 | Accepted | 100 |
| 42 | 3 4 | 36 | 50348 | Accepted | 100 |
| 43 | 4 | 1912 | 95964 | Wrong Answer | 0 |
| 44 | 4 | 1828 | 95864 | Accepted | 100 |
| 45 | 4 | 1808 | 95720 | Accepted | 100 |
| 46 | 4 | 2076 | 114388 | Accepted | 100 |
| 47 | 4 | 1120 | 90208 | Accepted | 100 |
| 48 | 4 | 1800 | 95812 | Accepted | 100 |
| 49 | 4 | 1928 | 95780 | Wrong Answer | 0 |
| 50 | 4 | 2060 | 114268 | Accepted | 100 |
| 51 | 4 | 1108 | 82040 | Wrong Answer | 0 |
| 52 | 4 | 2012 | 114184 | Accepted | 100 |
| 53 | 4 | 2016 | 114136 | Accepted | 100 |
| 54 | 4 | 2040 | 114292 | Accepted | 100 |
| 55 | 4 | 2040 | 114180 | Accepted | 100 |
| 56 | 4 | 1104 | 90064 | Accepted | 100 |
| 57 | 4 | 1808 | 95848 | Accepted | 100 |
| 58 | 4 | 1812 | 95796 | Accepted | 100 |
| 59 | 4 | 1988 | 114264 | Accepted | 100 |

Submitter:
Compiler: c++14
Code Length: 1.26 KB
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.390554279088974, "perplexity": 887.3893187174571}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358180.42/warc/CC-MAIN-20211127103444-20211127133444-00164.warc.gz"}
https://eprints.utas.edu.au/2347/
Persistent Improvements in structure and permeability of a Ferrosol due to liming

Kirkham, J, Rowe, B and Doyle, RB (2006) Persistent Improvements in structure and permeability of a Ferrosol due to liming. In: ASSSI-ASPAC-ACMS National Soils Conference; Soil science solving problems, 3-7 December 2006, The University of Adelaide.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8272116780281067, "perplexity": 21793.505375611247}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886794.24/warc/CC-MAIN-20180117023532-20180117043532-00626.warc.gz"}
http://saslist.com/blog/2021/07/14/compare-computational-methods-for-least-squares-regression/
In a previous article, I discussed various ways to solve a least-squares linear regression model. I discussed the SWEEP operator (used by many SAS regression routines), the LU-based methods (SOLVE and INV in SAS/IML), and the QR decomposition (CALL QR in SAS/IML). Each method computes the estimates for the regression coefficients, b, by using the normal equations (X′X) b = X′y, where X is a design matrix for the data. This article describes a QR-based method that does not use the normal equations but works directly with the overdetermined system X b = y. It then compares the performance of the direct QR method to the computational methods that use the normal equations.

### The QR solution of an overdetermined system

As shown in the previous article, you can use the QR algorithm to solve the normal equations. However, if you search the internet for "QR algorithm and least squares," you find many articles that show how you can use the QR decomposition to directly solve the overdetermined system X b = y. How does the direct QR method compare to the methods that use the normal equations?

Recall that X is an n x m design matrix, where n > m and X is assumed to be of full rank m. For simplicity, I will ignore column pivoting. If you decompose X = Q R_L, the orthogonal matrix Q is n x n, but the matrix R_L is not square. ("L" stands for "long.") However, R_L is the vertical concatenation of a square triangular matrix and a rectangular matrix of zeros: ${\bf R_L} = \begin{bmatrix} {\bf R} \\ {\bf 0} \end{bmatrix}$ If you let Q1 be the first m columns of Q and let Q2 be the remaining (n-m) columns, you get a partitioned matrix equation: $\begin{bmatrix} {\bf Q_1} & {\bf Q_2} \end{bmatrix} \begin{bmatrix} {\bf R} \\ {\bf 0} \end{bmatrix} {\bf b} = {\bf y}$ If you multiply both sides by Q′ (the inverse of the orthogonal matrix, Q), you find that the important matrix equation to solve is ${\bf R b} = {\bf Q_1^{\prime} y}$. The vector ${\bf Q_1^{\prime} y}$ is the first m rows of the vector ${\bf Q^{\prime} y}$. The QR call in SAS/IML enables you to obtain the triangular R matrix and the vector Q′y directly from the data matrix and the observed vector. The following program uses the same design matrix as my previous article. Assuming X has rank m, the call to the QR subroutine returns the m x m triangular matrix, R, and the vector Q′y. You can then extract the first m rows of that vector and solve the triangular system, as follows:

/* Use PROC GLMSELECT to write a design matrix */
proc glmselect data=Sashelp.Class outdesign=DesignMat;
   class Sex;
   model Weight = Height Sex Height*Sex / selection=none;
run;

proc iml;
use DesignMat;
read all var {'Intercept' 'Height' 'Sex_F' 'Height_Sex_F'} into X;
read all var {'Weight'} into Y;
close;

/* The QR algorithm can work directly with the design matrix and the observed responses. */
call QR(QTy, R, piv, lindep, X, , y);   /* return Q'*y and R (and piv) */
m = ncol(X);
c = QTy[1:m];                           /* we only need the first m rows of Q'*y */
b = trisolv(1, R, c, piv);              /* solve the triangular system */
print b[L="Direct QR" F=D10.4];

This is the same least-squares solution that was found by using the normal equations in my previous article.

### Compare the performance of least-squares solutions

How does this direct method compare with the methods that use the normal equations? You can download a program that creates simulated data and runs each algorithm to estimate the least-squares regression coefficients.
The simulated data has 100,000 observations; the number of variables is chosen to be m = {10, 25, 50, 75, 100, 250, 500}. The program uses SAS/IML 15.1 on a desktop PC to time the algorithms. The results are shown below: The most obvious feature of the graph is that the "Direct QR" method that is described in this article is not as fast as the methods that use the normal equations. For 100 variables and 100,000 observations, the "Direct QR" call takes more than 12 seconds on my PC. (It's faster on a Linux server.) The graph shows that the direct method shown in this article is not competitive with the normal-equation-based algorithms when using the linear algebra routines in SAS/IML 15.1. The graph shows that the algorithms that use the normal equations are relatively faster. For the SAS/IML calls on my PC, you can compute the regression estimates for 500 variables in about 2.6 seconds. The graph has a separate line for the time required to form the normal equations (which you can think of as forming the X′X matrix). Most of the time is spent computing the normal equations; only a fraction of the time is spent actually solving the normal equations. The following table shows computations on my PC for the case of 500 variables: The table shows that it takes about 2.6 seconds to compute the X′X matrix and the vector X′y. After you form the normal equations, solving them is very fast. For this example, the SOLVE and INV methods take only a few milliseconds to solve a 500 x 500 system. The QR algorithms take 0.1–0.2 seconds longer. So, for this example, forming the normal equations accounts for more than 90% of the total time. ### SAS regression procedures These results are not the best that SAS can do. SAS/IML is a general-purpose tool. SAS regression procedures like PROC REG are optimized to compute regression estimates even faster. They also use the SWEEP operator, which is faster than the SOLVE function. For more than 20 years, SAS regression procedures have used multithreaded computations to optimize the performance of regression computations (Cohen, 2002). More recently, SAS Viya added the capability for parallel processing, which can speed up the computations even more. And, of course, they compute much more than only the coefficient estimates! They also compute standard errors, p-values, related statistics (MSE, R square, ...), diagnostic plots, and more. ### Summary This article compares several methods for obtaining least-squares regression estimates. It uses simulated data where the number of observations is much greater than the number of variables. It shows that methods that use the normal equations are faster than a "Direct QR" method, which does not use the normal equations. When you use the normal equations, most of the time is spent actually forming the normal equations. After you have done that, the time required to solve the system is relatively fast. The post Compare computational methods for least squares regression appeared first on The DO Loop.
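For readers without SAS/IML, here is a minimal NumPy/SciPy sketch (my own illustration, not the code timed in the post) of the two approaches being compared — forming the normal equations versus a direct QR solve:

import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(1)
n, m = 100_000, 100                      # many observations, fewer variables
X = rng.standard_normal((n, m))
y = X @ rng.standard_normal(m) + 0.1 * rng.standard_normal(n)

# Method 1: normal equations. Forming X'X is the expensive O(n*m^2) step;
# solving the resulting m x m system is cheap.
b_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Method 2: direct QR on the overdetermined system: solve R b = Q1' y.
Q1, R = np.linalg.qr(X, mode="reduced")  # Q1 is n x m, R is m x m upper triangular
b_qr = solve_triangular(R, Q1.T @ y)

print(np.max(np.abs(b_normal - b_qr)))   # the two estimates agree to rounding error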
{"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7996030449867249, "perplexity": 826.9139465836529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662552994.41/warc/CC-MAIN-20220523011006-20220523041006-00382.warc.gz"}
http://www.kurims.kyoto-u.ac.jp/ja/seminar/seminar-tamagawa.html
## Number Theory / Arithmetic Geometry Seminar

Chieh-Yu Chang (lecture series)
Title: On Hilbert's seventh problem and transcendence theory
Date: October 9th (Wed), 23rd (Wed) and 28th (Mon), 2013, 10:30-12:00 [change in the date] October 9th (Wed) 10:30-12:00, October 23rd (Wed) 10:30-12:00, 13:30-15:00, 2013
Room: Room 006, RIMS
Speaker: Chieh-Yu Chang (National Tsing Hua University)
Abstract: Hilbert's seventh problem is about the linear independence question for two logarithms of algebraic numbers, which was solved by Gelfond and Schneider in the 1930s. Later on, it was generalized to several logarithms of algebraic numbers by Baker in the 1960s, and generalized to general abelian logarithms of algebraic points by Wuestholz in the 1980s. The same phenomenon can also be asked about for multiple zeta values, but that question is still open. In the first talk, we will give a survey of the classical theory and report recent progress on the parallel questions for function fields in positive characteristic. Current methods and tools of transcendence theory using t-motives will be discussed in the second and third talks.
Organizer: Akio Tamagawa (RIMS, Kyoto Univ.)

Mini-Workshop on Number Theory / Arithmetic Geometry
Date: Thursday, January 31, 2013
Room: Room 206, RIMS, Kyoto University
10:00 -- 10:20 Arata Minamide (RIMS, M2): Elementary Anabelian Properties of Graphs
10:30 -- 10:50 Yang Yu (RIMS, M2): Arithmetic Fundamental Groups and Geometry of Curves over a Discrete Valuation Ring
11:00 -- 11:20 Takeshi Okada (RIMS, M2): On Finiteness of Twists of Abelian Varieties
13:00 -- 13:30 Yu Iijima (RIMS, D1): Galois Action on Mapping Class Groups
13:45 -- 14:45 Chia-Fu Yu (Academia Sinica): Density of the Ordinary Locus in the Hilbert-Siegel Moduli Spaces
Organizer: Akio Tamagawa (RIMS, Kyoto Univ.)

Title: Resolution of nonsingularities for Mumford curves
Date: December 15 (Thu), 2011, 14:15-15:45
Room: Room 206, RIMS
Speaker: Emmanuel Lepage (Institut Mathematique de Jussieu)
Abstract: Let $X$ be a hyperbolic curve over $\overline{Q}_p$. I am interested in the following property: for every semistable model $\mathcal X$ of $X$ and every closed point $x$ of the special fiber, there exists a finite covering $Y$ of $X$ such that the minimal semistable model $\mathcal Y$ of $Y$ above $\mathcal X$ has a vertical component above $x$. I will try to explain why hyperbolic Mumford curves satisfy this property. I will give anabelian applications of this to the tempered fundamental group.

Mini-Workshop ``Rational Points on Modular Curves and Shimura Curves''
Date: Monday, October 26th, 2009

13:30--14:30 Keisuke Arai (Univ. Tokyo): Points on $X_0^+(N)$ over quadratic fields (joint work with F. Momose)
Abstract: Momose (1987) studied the rational points on the modular curve $X_0^+(N)$ for a composite number $N$. He showed that the rational points on $X_0^+(N)$ consist of cusps and CM points under certain conditions on a prime divisor $p$ of $N$, but $p=37$ was excluded; 37 is peculiar because $X_0(37)$ is a hyperelliptic curve and $w_{37}$ is not the hyperelliptic involution. We show that the rational points on $X_0^+(37M)$ consist of cusps and CM points. We also show that the $K$-rational points on $X_0^+(N)$ consist of cusps and CM points for a quadratic field $K$ under certain conditions (both $p=37$ and $p\ne 37$ allowed).

14:45--15:45 Fumio Sairaiji (Hiroshima International Univ.) and Takuya Yamauchi (Osaka Prefecture Univ.):
On rational torsion points of central $\mathbb{Q}$-curves
Abstract: Let $E$ be a central $\mathbb{Q}$-curve over a polyquadratic field $k$. In this talk we give an upper bound for prime divisors $p$ of the order of the $k$-rational torsion subgroup of $E$. For example, $p$ is less than or equal to 13 if the scalar restriction of $E$ from $k$ to $\mathbb{Q}$ is of GL$_2$-type with real multiplications. Our result is a generalization of the result of Mazur on elliptic curves over $\mathbb{Q}$, and it is a sharpening of the upper bounds of Merel and Oesterlé.

16:00--17:00 Pierre Parent (Univ. Bordeaux 1): Rational points on Shimura curves
Abstract: For $B$ a rational quaternion algebra, the Shimura curve associated with $B$ (or more precisely its quotient by certain Atkin-Lehner involutions) is a moduli space, in a certain sense, for abelian surfaces with potential multiplication by $B$. Proving that those curves almost never have rational points would therefore allow a small step towards the conjecture, attributed to Coleman and Mazur, which predicts the scarcity of endomorphism algebras of abelian varieties of GL$_2$-type over $\mathbb{Q}$. We will present a method to study such rational points, developed by A. Yafaev and myself, and recently improved by F. Gillibert.

Organizers: Akio Tamagawa (RIMS, Kyoto Univ.)

Marco Boggi (lecture series)
Title: ``Profinite curve complexes and the congruence subgroup problem for the mapping class group''
Date: September 27th (Thu) and 28th (Fri), 2007
Room: Room 206, RIMS, Kyoto University
27th (Thu): 10:00--12:00 Boggi; lunch; 14:00--16:00 Boggi; 16:00-- free discussion; dinner
28th (Fri): 10:00--12:00 Boggi; lunch; 14:00--16:00 free discussion
Abstract: as attached. Contact: Makoto Matsumoto (Hiroshima Univ.), Akio Tamagawa, Shinichi Mochizuki (RIMS, Kyoto Univ.)

Title: Arithmetic from Geometry on Elliptic Curves
Date: June 2 (Fri), 2006, 16:30-17:30
Room: Room 202, RIMS
Speaker: Christopher Rasmussen (Rice Univ.)
Abstract: One of the philosophies of arithmetic geometry made popular by Grothendieck was the notion that the structure of the absolute Galois group of $\mathbf{Q}$ could be determined from geometric (or even combinatoric) data. In a related vein, one finds that the arithmetic properties of a curve are sometimes determined by its geometry. Specifically, the structure of a curve as a cover of the projective line can have arithmetic consequences for the Jacobian of the curve. We will discuss this situation in the case of elliptic curves, where this connection between arithmetic and geometry can be seen very clearly.

Title: Arithmetic Algebraic Geometry Lecture (intensive course)
Date: May 8 (Mon) - May 19, 2006
Room: see the linked page
Speaker: Kazuya Kato (Kyoto Univ., Faculty of Science)
Abstract [translated from Japanese]: The Weil conjectures, proposed by Weil in 1949, drove major advances in algebraic geometry until their final proof in the early 1970s. In particular, étale cohomology, which Grothendieck introduced with the aim of proving the Weil conjectures, has become an important tool in present-day number theory. These lectures will survey étale cohomology while retracing that history.
Comments: see the linked page for details

Title: Algebraic dynamical systems (preperiodic points, Mahler measures, equidistribution of small points)
Date: May 1 (Mon), 2006, 16:30-
Room: Room 202, RIMS
Speaker: Lucien Szpiro (City Univ. New York)
Abstract: References (available at http://math.gc.cuny.edu/faculty/szpiro/People_Faculty_Szpiro.html): joint papers with T. Tucker; joint paper with E. Ullmo and S. Zhang
Comments:

Date: April 10 (Mon), 2006, 14:00-17:00
Room: Room 202, RIMS
(14:00-15:15) Speaker: Michel Matignon (Univ. Bordeaux 1/Chuo Univ.)
Title: Wild monodromy groups and automorphism groups of curves
(16:00-16:45) Speaker: Barry Green (Univ. Stellenbosch/Chuo Univ.)
Title: Selected results on liftings of Galois covers of smooth curves from char. p to char. 0

Title: Mini-Workshop ``Arithmetic Geometry of Covers of Curves and Related Topics''
Date: September 12 (Mon), 13 (Tue), 2005
URL: to the seminar top page
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8248308897018433, "perplexity": 2023.0854441525491}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657139314.4/warc/CC-MAIN-20140914011219-00017-ip-10-234-18-248.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/how-to-find-normal-force.351912/
# How to find Normal force?

1. ### texasgirl03
1. The problem statement, all variables and given/known data
If a box that weighs 536N is pulled forward at a constant speed by a force of 150N at an angle of 37 degrees with the ground, what normal force does the box exert on the supporting surface? (I am really not sure at all where to begin) formula?

2. ### cepheid (Staff Emeritus)
Hi texasgirl03, You know the magnitude of the force pulling on the box and its angle from the horizontal. So, you can resolve this force into two components, one acting vertically, and the other acting horizontally. That's the starting point. Do you know what a free body diagram is? It would be helpful to draw one here.

3. ### texasgirl03
Okay, I understand so far... but what is the formula?

4. ### cepheid (Staff Emeritus)
If you draw a triangle with the force at 37 degrees from the horizontal, you can use basic trigonometry to get the horizontal and vertical components of that force (the trigonometric ratios, sine and cosine, are what will be used). Draw the triangle that resolves the force vector into its components and it will be clear.

5. ### texasgirl03
I do not know how to do that. That's why I am asking. I just wanted the formula.

6. ### cepheid (Staff Emeritus)
No disrespect, but drawing a right-angled triangle and applying basic trigonometry are both things that you should know how to do, and if I just give you the answer, you won't really learn much. Draw the force vector at an angle of 37 degrees from the horizontal. Now, you can see that this vector can be represented as the sum of two other component vectors, a horizontal component and a vertical component. Together these two components add up in the usual way for vector addition to produce the force vector. These three vectors form a right-angled triangle. The ratio of the side of the triangle that is opposite the angle to the hypotenuse is the sine of that angle. The ratio of the adjacent side to the hypotenuse is the cosine of the angle. That is what you need to know in order to calculate the horizontal and vertical components (but I am not going to tell you which one is which).

7. ### Deep_Blue
Hi texasgirl03, Take a look at the diagram and see if it makes the problem any clearer. Notice that there are two forces acting upwards, the normal force (N) and the y component of the 150N force (Ty), and one force pulling downwards (Fg). Find Ty and remember that, according to Newton's second law (∑F = m*a), the sum of all forces in the y direction will equal the object's mass * acceleration. See if you can figure it out from here.
Last edited: Nov 5, 2009
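A quick numeric check of the approach Deep_Blue describes (my own worked example, assuming the 150N pull points 37° above the horizontal and the box moves at constant speed, so the vertical forces balance):

import math

W = 536.0                     # weight of the box, N
T = 150.0                     # pulling force, N
theta = math.radians(37)

Ty = T * math.sin(theta)      # vertical (upward) component of the pull
Tx = T * math.cos(theta)      # horizontal component

# Constant speed => zero net vertical force: N + Ty - W = 0
N = W - Ty
print(f"Ty = {Ty:.1f} N, Tx = {Tx:.1f} N, normal force N = {N:.1f} N")
# -> Ty ≈ 90.3 N, N ≈ 445.7 N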
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.956334114074707, "perplexity": 299.2961960128829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096780.24/warc/CC-MAIN-20150627031816-00294-ip-10-179-60-89.ec2.internal.warc.gz"}
http://mathhelpforum.com/differential-equations/75361-partial-differential-equation-using-dalemberts-approach-print.html
# Partial Differential Equation using D'Alembert's approach

• February 23rd 2009, 02:02 PM
flaming
Partial Differential Equation using D'Alembert's approach
Solve by using D'Alembert's solution with the even extensions of f(x) and g(x):
$\mu_{tt} = c^2\mu_{xx}$ where $0\leq x <\infty$
$\mu(x,0) = f(x)$, $\mu_t(x,0) = g(x)$ where $0\leq x <\infty$
$\mu_x(0,t) = 0$ where $t\ge 0$
• February 23rd 2009, 03:55 PM
ThePerfectHacker
Quote: Originally Posted by flaming (the problem above)
Since we have the boundary condition $u_x(0,t)=0$, we consider even extensions. Let $f_1(x)\text{ and }g_1(x)$ be the even extensions, and we will also assume that these extensions are well-behaved enough to satisfy the hypotheses of D'Alembert's solution. Thus, we have that $u_{tt} = c^2u_{xx}$ for $(x,t) \in \mathbb{R}^2$, and the solution is given by: $u(x,t) = \frac{1}{2}[f_1(x+ct) + f_1(x-ct)] + \frac{1}{2c}\int_{x-ct}^{x+ct}g_1(\xi)\, d\xi$. Notice that $u(-x,t) = u(x,t)$, since $f_1$ and $g_1$ are even; therefore $u_x(0,t) = 0$, which is precisely what we want.
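As a spelled-out version of the last step (a one-line computation, not in the original thread), differentiating the symmetry relation gives the Neumann condition:

$$u(x,t) = u(-x,t) \;\Longrightarrow\; u_x(x,t) = -u_x(-x,t) \;\Longrightarrow\; u_x(0,t) = -u_x(0,t) = 0.$$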
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9975647330284119, "perplexity": 685.0412072085933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447559962.154/warc/CC-MAIN-20141224185919-00073-ip-10-231-17-201.ec2.internal.warc.gz"}
http://lists.freebsd.org/pipermail/freebsd-questions/2005-September/099219.html
# IE in FreeBSD?

Ted Mittelstaedt tedm at toybox.placo.com
Mon Sep 19 10:58:14 PDT 2005

>-----Original Message-----
>From: Mario Hoerich [mailto:spambox at MHoerich.de]
>Sent: Sunday, September 18, 2005 10:07 AM
>To: Ted Mittelstaedt
>Cc: jahnke at fmjassoc.com; youshi10 at u.washington.edu; freebsd-questions at freebsd.org
>Subject: Re: IE in FreeBSD?
>
># Ted Mittelstaedt:
>> # On Behalf Of Frank Jahnke
>> >filled out and saved on a FreeBSD system?
>>
>> PDF doesn't belong in complex forms that are filled out online. I use
>> PDF at my job and we use it for one use only - contracts. A contract
>> must be in paper with a human's signature on it to have any validity
>> whatsoever in a court of law, despite what you may read otherwise.
>
>In Germany, electronic signatures conforming to the conditions in
>§17 SiG ("signature law") and §15 Annex 1 SigV ("signature decree")
>are as valid as a "hard" signature and can (for example) be used for
>communication with government departments.
>
>The world doesn't end on US borders.

Sure, try suing someone for $200 in small claims court over that - the expert witness fees to verify to the court that such a signature exists and is valid will be more than the amount you're trying to get.

>> >> The Mac isn't a gateway to UNIX by any means. Apple made it easy for Mac users to
>> >> continue to be stone stupid, and the Mac users by and large chose to
>> >> stay stone stupid. Apple knows its customer base, that's for sure.
>
>*Shrug*. I'm a CS + Math student and I've used FreeBSD since 3.3
>(Linux before). I don't think I'm stone stupid.

Are you aware of what the terminology "by and large" means in that context? Perhaps not - maybe the translation to German modified the meaning? So, you're the one-in-a-thousand Mac user who's not stone stupid, an occurrence that my statement allowed to exist.

>> >I find this attitude to be very distressing, but remarkably common.
>
>Yup.
>
>> >Sure, users are not as informed as they might be, and they can do stupid
>> >things. But they use the computer as a tool to do certain tasks, and
>> >they shouldn't have to know about how the computer works to accomplish
>>
>> Yah yah yah. I hear the same thing about cars - "we shouldn't need to
>> know how a car works to drive it". Sure - sounds great.
>
>Cars != computers. With cars, failure to understand their basic
>features is likely to get people killed. I don't see that kind
>of risk with ordinary PCs. The analogy is thus pointless.
>
>You could just as well demand that anyone ever using mathematics
>knows the entire theory behind it.

Hey, you just said the analogy is pointless - then proceed to argue it? It must be a valid analogy or you wouldn't have proceeded to argue it.

>> It's like teaching mathematics in school. You can teach the kids to do
>> addition, subtraction, multiplication and division by hand, so they
>> understand what is going on,
>
>No, they don't. Mathematics in school is nothing but a "desktop"
>for real mathematics.

Why are you continuing to divert focus here? Let me restate and rephrase: "You can teach the kids to do addition, subtraction, multiplication and division by hand, so they understand what is going on with addition, subtraction, multiplication and division."

>With just school mathematics, you don't
>understand the slightest thing of what's going on, but you've
>learned how to use it.
>The above example is *very* basic (this
>is the stuff you usually learn at the very beginning of your
>first math lecture at a university), but you won't learn any
>of that in school. At least not around here.
>
>A more advanced example are integrals. You learn how to integrate,
>but you haven't got the slightest clue that an integral is really defined
>as (from the top of my head)
>
> \int f := \sum_{k=1}^{\infty} f_k
>
>where each f_k is a step function, i.e. an element of the
>vector space \mathcal{F}_{ST}(|R,|R) spanned by the elementary
>functions g_i. That is:
> f_k := \sum_{i=1}^{k} \lambda_i g_i
>with
> g_i(x) := \begin{cases}1 & x \in [a,b[ \\ 0 & otherwise\end{cases}
>
>There's a *lot* of theory behind those few lines and believe me,
>it ain't pretty or simple. However, there's no reason anyone

Baloney. Sure, someone who uses integrals every day to build or create something does not need to know the theory well enough to repeat it, or remember enough of it to understand all of it correctly. But sometime during the teaching of how to work an integral they should have been instructed by someone who really understood it and could help them form a mental image that would be an analogy of what is going on. They should have the general gist of the idea. Your attitude is reminiscent of "Pay no attention to that man behind the curtain" from the Wizard of Oz. It's elitist and snobbish - "oh, only us priests can understand it, you commoners never can, so go away and let your betters handle this".

>That's why the "desktop" school mathematics exists. So people
>who aren't interested in mathematics won't have to deal with
>its intricacies.

When I was growing up there was a LOT of stuff stuffed into my head in school that I "wasn't interested in" and was "never going to use when I grow up". I told my teachers this repeatedly. Fortunately they ignored it. I feel sorry that you must have grown up in one of the permissive schools where your teachers didn't slap that notion out of your head like they should have.

>I think this is a better analogy than yours, because in both cases
> i) the matters involved are widely considered complicated.
>
> ii) the users have to deal with "virtual" quantities, i.e. they
> can't touch them. This tends to be a problem for many people.

Snob again.

>iii) the risks involved are pretty much the same.
>
>None of this applies to cars.

Boy, you keep coming back to that cars thing, it must really be bugging you - what's wrong, haven't figured out how to invalidate it yet? The point of an analogy is to assist the reader to understand the point of an argument. I think you understand it well and you're trying to divert attention from it by focusing on the analogy itself, rather than the idea the analogy quite obviously effectively conveyed.

>> >It seems that you are arguing the BSDs (Free, Net, Open and so on)
>> >should be used only for servers (and perhaps a few other applications
>> >like embedded systems), and to leave the desktop to the Mac and Windows.
>>
>> No, you are missing the point totally. I'm arguing that the so-called
>> "desktop" isn't important.
>
>For you. There's other needs than yours and they're of no less
>importance.

No, for everyone. All the users see is application interfaces on the screen. They don't know or care if those application interfaces are generated locally or 1000 miles away and they are just seeing the screen output.
I think you, like many Mac users, are still stuck with that mental image of the Mac Big Brother commercial that played once during the Superbowl nearly 2 decades ago, and with the idea that your precious Mac might simply be nothing more than a portal to a bigger and more powerful system admined by someone else - an idea you think is unnatural.

>> The desktop needs to serve as a portal to the real applications
>> and processing, which is centralized. It is a means to an end,
>> not an end itself. The servers in the center that are doing the
>> Really Important Work are of course all FreeBSD.
>
>This doesn't exactly make sense for home PCs. I'll certainly not
>stick another machine in my single room appartment so I have a
>"server".

Rubbish. We have many customers who have employees that work at home and terminal-serve into the office, and they have all their applications on the work system, and they do this very successfully. You aren't doing

And as for basic apps like word processors and such - well, I have to remind you that you yourself already argued in a previous post that this entire discussion was about apps that are more complex than that. In other words, the rules of engagement you set up for this discussion were specifically NOT home user apps, but complex business apps in a work environment. Now you're dragging in home users, which are a different deal altogether. Recall the OP wants to run IE to deal with vendor websites that are IE-specific, and he already ruled out telling the vendors of these busted websites to fuck off (like a home user has the freedom to do) since he has to go to them for work.

>[ data on notebooks ]
>> Move the data to a central location and the notebook becomes a dumb
>> window with no data on it, and there's no need to pay attention to
>> the notebook.
>
>Not all the world's a company. And I certainly wouldn't like
>my data or applications on a "central location" not owned and
>controlled by me.

If it is truly your own data then you have a right to save it locally, as any portal would allow you to do. But most people arguing like you're doing here are actually working with data belonging to someone else. Guess what, those mails belong to your employer. All that biological data you were talking about in a prior post doesn't belong to you either. You're a home user viewing a DVD you 'bought'? Guess again, you don't own that data either. You're a home user reading the news on CNN's website? Guess again, that data doesn't belong to you either. You're running Microsoft Word on your home PC? Guess again, Microsoft owns that program, not you. You're running it under MacOS? Apple owns that operating system, not you. All the world IS a company unless you completely buck the system and install ALL open source and don't view anyone else's webpages or e-mails - hell, even this post here is copyrighted by someone else, not you. And you're arguing for putting -commercial- software on FreeBSD? Seems to me that's an argument for your applications being not owned or controlled by you.

>> It's a shame these days that people have so little respect for someone
>> else's point of view that they are more concerned with the feelings of
>> the person than the actual ideas of that person. I think you've been
>> around those government shirts too long, you've been contaminated
>> by political correctness. Tell me, do you really believe in anything
>> anymore or is everything just shades of gray to you? Sorry
>> though, I forgot the words to Kumbiya.
>> Jesus, at least call me an asshole - then I will have some hope you
>> actually believe what you're saying!
>
>Ad hominem attacks are *precisely* what implies disrespect for
>another's ideas. Besides, they usually show a notable lack of
>both self-discipline and arguments. They're not really efficient
>either.

Metadiscussion implies far more disrespect. And you haven't responded to my point anyway - which is: you've been around those government shirts too long, you've been contaminated by political correctness. It's pretty clear you really haven't thought through a consistent philosophy on this IE thing.

I will give you a ray of hope, though: there ARE consistent logical arguments for bringing commercial binaries to FreeBSD. But you MUST accept as an axiom, to make these arguments, that it will damage FreeBSD and the Open Source movement - only then do these arguments have any consistency. You cannot have it both ways - you cannot work to bring commercial binaries to an Open Source OS like FreeBSD without undermining the very system you're claiming to "help".

Your arguments are just like those of the people who argued for seamless Windows support in OS/2. They were just lying to themselves to make themselves feel better when they argued that seamless support of Microsoft Windows binaries would "help" OS/2. And you are doing the same thing today; it's no different.

Ted
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3082348704338074, "perplexity": 4248.437227182434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119655159.46/warc/CC-MAIN-20141024030055-00288-ip-10-16-133-185.ec2.internal.warc.gz"}
https://docs.deistercloud.com/content/Axional%20development%20products.15/Axional%20Studio.4/Development%20Guide.10/Languages%20reference.16/XSQL%20Script.10/Packages.40/cron/getNextExecutions.xml?embedded=true
# 1 cron.getNextExecutions

This tag is useful for printing, in a calendar, the execution plan of scheduled system tasks. It returns a list with the dates of the next executions for a provided expression used to define the scheduler. The number of dates that will be calculated is defined by the count attribute (30 by default).

<cron.getNextExecutions expression='expression' count='count'/>

Example

##### Cron expressions

| Expression | Scheduler |
|---|---|
| 0 0/5 * * * ? | CronTrigger Example 1 - an expression to create a trigger that simply fires every 5 minutes. |
| 10 0/5 * * * ? | CronTrigger Example 2 - an expression to create a trigger that fires every 5 minutes, 10 seconds after the minute (e.g. 10:00:10, 10:05:10, and so on). |
| 0 30 10-13 ? * WED,FRI | CronTrigger Example 3 - an expression to create a trigger that fires at 10:30, 11:30, 12:30, and 13:30, on every Wednesday and Friday. |
| 0 0/30 8-9 5,20 * ? | CronTrigger Example 4 - an expression to create a trigger that fires every half hour between the hours of 8 am and 10 am on the 5th and 20th of every month. Note that the trigger will NOT fire at 10:00 am, just at 8:00, 8:30, 9:00 and 9:30. |

Note that some scheduling requirements are too complicated to express with a single trigger - such as "every 5 minutes between 9:00 am and 10:00 am, and every 20 minutes between 1:00 pm and 10:00 pm". The solution in this scenario is to simply create two triggers and register both of them to run the same job.

Example

The expression generates a scheduler that runs every 15 minutes (at second 5) on every day of December. The count attribute is not defined, so the result shows the first 30 calculated dates.

<xsql-script>
    <body>
        <iterator name='next'>
            <in>
                <cron.getNextExecutions expression='5 /15 * * 12 ?' />
            </in>
            <do>
                <println><date.format format='dd-MM-yyyy hh:mm:ss'><next /></date.format></println>
            </do>
        </iterator>
    </body>
</xsql-script>

Example

The expression generates a scheduler that returns a plan to execute every day (from now on) at 10:15:05.

<xsql-script>
    <body>
        <iterator name='next'>
            <in>
                <cron.getNextExecutions expression='5 15 10 * * ?' count='10'/>
            </in>
            <do>
                <println><date.format format='dd-MM-yyyy hh:mm:ss'><next /></date.format></println>
            </do>
        </iterator>
    </body>
</xsql-script>
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7480547428131104, "perplexity": 2169.392533170154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497301.29/warc/CC-MAIN-20200330181842-20200330211842-00261.warc.gz"}
http://kitchingroup.cheme.cmu.edu/blog/2015/02/20/org-ref-meets-hydra/
## org-ref meets hydra

| categories: emacs | tags: |

I am enjoying learning about abo-abo/hydra, which is a nice package for making minibuffer menus to run commands. It is a light-weight solution that does not mess up your window too much, and it is easier to use than any home-grown solution I have made in the past. Here is a simple little example that gives me three options when I press "zz" quickly (a key-chord). I can press "c" to put in a cite link using helm, "r" to insert a ref link using helm, and "l" to insert a new label. Any other key just cancels the menu. One thing to remember ("zz"), and hints for the rest!

(require 'hydra)
(require 'key-chord)
(key-chord-mode 1)

(key-chord-define-global
 "zz"
 (defhydra org-ref-hydra ()
   "org-ref"
   ("c" org-ref-helm-insert-cite-link "cite")    ; insert a cite link via helm
   ("r" org-ref-helm-insert-ref-link "ref")      ; insert a ref link via helm
   ("l" org-ref-helm-insert-label-link "label")  ; insert a new label
   ("R" org-ref "org-ref")))

org-ref-hydra/body

Pretty nice. Check out the nice hydra interface to words.el. A simple press of "ww" gets you easy access to single key presses of all the nice words functions. What would you hydra for?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29880964756011963, "perplexity": 5485.460330052527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118740.31/warc/CC-MAIN-20170423031158-00540-ip-10-145-167-34.ec2.internal.warc.gz"}
https://docs.jina.ai/api/jina.types.document.multimodal.html
# jina.types.document.multimodal

class jina.types.document.multimodal.MultimodalDocument(document=None, chunks=None, modality_content_map=None, copy=False, **kwargs)[source]

MultimodalDocument is a data type created based on the Jina primitive data type Document. It shares the same methods and properties with Document, while it focuses on modality at the chunk level.

Warning
• It assumes that every chunk of a document belongs to a different modality.
• It assumes that every MultimodalDocument has at least two chunks.

Parameters
• document (Optional[~DocumentSourceType]) – the document to construct from. If bytes is given, then deserialize a DocumentProto; if dict is given, then parse a DocumentProto from it; if str is given, then consider it as a JSON string and parse a DocumentProto from it; finally, one can also give a DocumentProto directly, then depending on copy, it builds a view or a copy from it.
• chunks (Optional[Sequence[Document]]) – the chunks of the multimodal document to initialize with. Expected to receive a list of Document, with different modalities.
• copy (bool) – when document is given as a DocumentProto object, build a view (i.e. weak reference) from it or a deep copy from it.
• kwargs – other parameters to be set
• modality_content_map – a Python dict; the keys are the modalities and the values are the content of the Document

Warning
• Building a MultimodalDocument from modality_content_map expects you to assign Document.content as the value of the dictionary.

property is_valid
A valid MultimodalDocument should meet the following requirements:
• The document should consist of at least 2 chunks.
• The length of modalities is identical to the length of chunks.
Return type: bool

property modality_content_map
Get the mapping of modality and content. The mapping is represented as a dict: the keys are the modalities of the chunks, and the values are the corresponding content of the chunks.
Return type: Dict
Returns: the mapping of modality and content extracted from chunks.

property modalities
Get all modalities of the MultimodalDocument.
Return type: List[str]
Returns: list of modalities extracted from chunks of the document.

update_content_hash(exclude_fields=('id', 'matches', 'content_hash'))[source]
Update the content hash of the document by including chunks when computing the hash.
Return type: None
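A minimal usage sketch based only on the signature and properties documented above (the modality names and contents are illustrative; this assumes a Jina version where the class exists at the documented path):

from jina.types.document.multimodal import MultimodalDocument

# Build from a modality -> content mapping; each entry becomes one chunk.
md = MultimodalDocument(modality_content_map={
    'text': 'a black cat sitting on a sofa',   # text content
    'image': b'\x89PNG...',                    # raw image bytes (placeholder)
})

print(md.is_valid)              # True: 2 chunks, 2 distinct modalities
print(md.modalities)            # e.g. ['text', 'image']
print(md.modality_content_map)  # modality -> content, recovered from the chunks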
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7810056209564209, "perplexity": 6177.062970349277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506832.21/warc/CC-MAIN-20210116165621-20210116195621-00768.warc.gz"}
https://stke.sciencemag.org/content/2000/61/tw1
Editors' Choice: Apoptosis

# Caspases Open the Door

Science's STKE  05 Dec 2000: Vol. 2000, Issue 61, pp. tw1
DOI: 10.1126/stke.2000.61.tw1

During apoptosis, many cellular structures are dismantled through the activities of caspases; however, the nuclear membrane is not degraded. Despite having an intact nuclear membrane, some of the substrates of caspases are nuclear proteins, such as the inhibitor of the DNase CAD (ICAD), which is cleaved by caspase-3, leading to activation of CAD and DNA fragmentation and condensation. Faleiro and Lazebnik studied the breast cancer cell line MCF-7, which does not express caspase-3 and does not undergo DNA condensation during apoptosis. Expression of a green fluorescent protein-tagged version of caspase-3 (GFP-c3) restored chromatin condensation after treatment with cisplatin and resulted in the equilibration of GFP-c3 between the nuclear and cytosolic compartments. The ability of GFP-c3 to enter the nucleus during apoptosis was dependent on the activity of caspase-9, but not the catalytic activity of caspase-3. Studies with oligomers of GFP, GFP with a nuclear localization signal (GFP-NLS), and Ran suggested that caspase-9 increases the permeability of the nuclear pore, because a 140-kD GFP pentamer, the GFP-NLS, and Ran all became evenly distributed between the nucleus and cytosol in response to cisplatin treatment by a mechanism that depended on active caspase-9. However, a GFP-β-galactosidase construct was excluded from the nucleus in normal and apoptotic cells, indicating that the nuclear membrane was still intact. Although the nuclear pore target of caspase-9 was not identified, two different antibodies against the nuclear pore complex fail to recognize the pores in apoptotic cells, suggesting that the pore structure may change during apoptosis, masking the antigenic sites. The data suggest that during apoptosis, caspase-9 acts to disrupt the nuclear-cytoplasmic barrier by altering the size exclusion limits of the nuclear pore.

Faleiro, L., and Lazebnik, Y. (2000) Caspases disrupt the nuclear-cytoplasmic barrier. J. Cell Biol. 151: 951-959. [Abstract] [Full Text]
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8690652847290039, "perplexity": 12609.577933055578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989018.90/warc/CC-MAIN-20210509213453-20210510003453-00238.warc.gz"}
http://benpaulthurstonblog.blogspot.com/2012/05/estimating-square-roots.html
## Thursday, May 10, 2012

### Estimating square roots, generalized continued fraction expression for every square root

If you look at this formula:

$$\sqrt{x} \;=\; a + \frac{x - a^2}{a + \sqrt{x}}$$

You can see that this equation always holds no matter the $a$. So you can do something like the following to figure out the square root of a number; here, for example, is finding the square root of 10 using the top formula with $a = 1$:

$$\sqrt{10} \;\approx\; 1 + \cfrac{9}{2 + \cfrac{9}{2 + \cfrac{9}{2 + \cfrac{9}{2 + \cfrac{9}{2 + \cfrac{9}{2}}}}}}$$

The two sides will be exactly equal when you iterate an infinite number of times, substituting what is already on the right side in for the square root of 10 that appears on the right side. The above is 6 iterations and shows that the square root of 10 is somewhere near 3.0983..., which is close to the real value of 3.162...

Thus there is one general continued fraction expression for every square root. Normally the discussion of continued fractions explores each square root as having a different form, such as on Wikipedia: http://en.wikipedia.org/wiki/Square_root They have tables of how this looks for every different possible square root. But this idea I've had gives the same form for every square root.

#### 2 comments:

1. If A is the closest integer root to X and B = (X - A*A), then A plus (B over (2A plus (B over (2A ...)))) is another form. It has the advantage over yours that for general numbers it doesn't have to consume as many pairs in order to start emitting terms. But it has the disadvantage that you still need an integer square root to get started, whereas yours does not have this preliminary calculation. Of course, if I were to always estimate the integer square root as "1", well... you be the judge of whether our continued fractions are then the same. :)
2. I was looking for a sequence of rational numbers converging to a square root, and your example is straightforward.
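A quick numeric check of the iteration (my own sketch, using the identity written out above with $a = 1$):

# Iterate sqrt(x) = 1 + (x - 1)/(1 + sqrt(x)), replacing sqrt(x) on the right
# with the current estimate each time.
def sqrt_cf(x, iterations):
    t = 1.0
    for _ in range(iterations):
        t = 1 + (x - 1) / (1 + t)
    return t

print(sqrt_cf(10, 6))    # 3.0983..., matching the 6-iteration value in the post
print(sqrt_cf(10, 60))   # 3.16227766..., sqrt(10) to double precision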
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8932300209999084, "perplexity": 401.5352105097025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246650195.9/warc/CC-MAIN-20150417045730-00217-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.lessonplanet.com/teachers/chemical-reactions-7th-9th
# Chemical Reactions In this chemical reaction worksheet, students investigate the results of mixing calcium chloride with sodium bicarbonate. They observe the chemical and physical changes that occur, identify the properties of the chemical before and after the reaction, answer six questions about the investigation and summarize their findings.
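For reference (my addition, not part of the worksheet description), the reaction the students observe is presumably the metathesis of calcium chloride with sodium bicarbonate, which precipitates calcium carbonate and releases carbon dioxide gas:

$$\mathrm{CaCl_2 + 2\,NaHCO_3 \longrightarrow CaCO_3\!\downarrow + 2\,NaCl + H_2O + CO_2\!\uparrow}$$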
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9454684257507324, "perplexity": 2751.4513859125177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607731.0/warc/CC-MAIN-20170524020456-20170524040456-00531.warc.gz"}
https://hci.iwr.uni-heidelberg.de/biblio?page=10&amp%3Bamp%3Bf%5Bauthor%5D=3060&amp%3Bf%5Bauthor%5D=2055&s=title&o=asc&f%5Bauthor%5D=3328
# Publications

Export 25 results, sorted by Author and Title, ascending. Filters: Author is J. Klinke.

2
(1992). 2D wave number spectra of short wind waves: results from wind-wave facilities and extrapolation to the ocean. Optics of the Air-Sea Interface: Theory and Measurement. 1749 245-257

C
(2002). A closer look at short waves generated by wave interactions with adverse currents. Gas Transfer at Water Surfaces. American Geophysical Union. 127 121--128
(1992). A critical theoretical review of optical techniques for short ocean wave measurements. Optics of the Air-Sea Interface: Theory and Measurements. 1749 204--215

D
(1995). Description of the science plan for the April 1995 CoOP experiment, `gas transfer in coastal waters', performed from the research vessel New Horizon. Air-Water Gas Transfer, Selected Papers, 3rd Intern. Symp. on Air-Water Gas Transfer. AEON. 801--810

E
(2002). Effect of microscale wave breaking on air-water gas transfer. Gas Transfer at Water Surfaces. American Geophysical Union. 127 23--29
(1996). Estimating $\omega(k)$ in an unsteady, wind-generated surface wave field from the 2D complex wavelet transform of the surface slope. Proc. The Air-Sea Interface, Radio and Acoustic Sensing, Turbulence and Wave Dynamics, Marseille, 24--30 June 1993. RSMAS, University of Miami. 373--382

G
(2000). Generation of short waves by wave-current interaction. Geoscience and Remote Sensing Symposium, 2000. Proceedings. IGARSS 2000. IEEE 2000 International. 1084--1086

M
(1995). Measurements of short ocean waves during the MBL ARI West Coast Experiment. Air-Water Gas Transfer, Selected Papers, 3rd Intern. Symp. on Air-Water Gas Transfer. AEON. 165--173
(1994). Measurements of the small-scale structure of the water surface with a new optical instrument. Proc. 2nd Inter. Conf. on Air-Sea Interaction and on Meteorology and Oceanography of the Coastal Zone, Lisbon, 22--27 September 1994
(2004). Microbreaking and the enhancement of air-water transfer velocity. J. Geophys. Res. 109 C08S16
(1998). Multichannel shape from shading techniques for moving specular surfaces. ECCV 1998. Springer, Berlin. 1407 170--184
(1997). Multichannel shape from shading techniques for reconstruction of specular surfaces. Tagungsband Herbsttagung des Graduiertenkollegs "3D Bildanalyse und -synthese". H.-P. Seidel, B. Girod, H. Niemann (Hrsg.)

N
(1995). A new instrument for the optical measurement of the fine structure of the water surface in the field. IAPSO Proceedings, XXI General Assembly, Honolulu, Hawaii, August 1995, PS-10 Spatial Structure of Short Ocean Waves. 388

O
(1999). Observations of free and bound gravity-capillary waves. The Wind-Driven Air-Sea Interface, Electromagnetic and Acoustic Sensing, Wave Dynamics and Turbulent Fluxes. 87--88
(2001). Ocean wave spectra and integral properties. Wind Stress over the Ocean. Cambridge University Press. 82--123
Klinke, J (1996). Optical Measurements of Small-Scale Wind Generated Water Surface Waves in the Laboratory and the Field. Institut für Umweltphysik, Fakultät für Physik und Astronomie, Univ. Heidelberg

R
(1996). The role of active vision in exploring growth, transport, and exchange processes. Aktives Sehen in technischen und biologischen Systemen, Workshop der GI-Fachgruppe 1.0.4 Bildverstehen, Hamburg, 3--4 December 1996. infix. 4 194--202

S
(1993). Shape from shading techniques for short ocean wind waves. Imaging in Transport Processes. Begell House Publishers. 269--281. http://www.dl.begellhouse.com/references/1bb331655c289a0a,36adf33e6f249361.html
(1995). Spatial measurement of short ocean waves during the MBL-ARI West Coast Experiment. IAPSO Proceedings, XXI General Assembly, Honolulu, Hawaii, August 1995, PS-10 Spatial Structure of Short Ocean Waves. 390
(1999). A study of advection of short wind waves by long waves from surface slope images. The Wind-Driven Air-Sea Interface, Electromagnetic and Acoustic Sensing, Wave Dynamics and Turbulent Fluxes. 93--97

W
(1996). Wave number spectra of short wind waves: implications from laboratory studies. Proc. The Air-Sea Interface, Radio and Acoustic Sensing, Turbulence and Wave Dynamics, Marseille, 24--30 June 1993. RSMAS, University of Miami. 367--372
(2001). Wavenumber Spectra of Short Wind Waves: Laboratory Measurements and Interpretation. IGARSS '01, Geoscience and Remote Sensing Symposium, Sydney, NSW, Australia. 2 965-967

Z
Klinke, J (1991). Zweidimensionale Wellenzahlspektren von kleinskaligen winderzeugten Wasseroberflächenwellen [Two-dimensional wavenumber spectra of small-scale wind-generated water surface waves]. Institut für Umweltphysik, Fakultät für Physik und Astronomie, Univ. Heidelberg
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6995373964309692, "perplexity": 28883.685042456138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371876625.96/warc/CC-MAIN-20200409185507-20200409220007-00445.warc.gz"}
https://www.gamedev.net/forums/topic/291245-basic-directx-engine-in-need-of-guidance/
# Basic DirectX engine, in need of guidance

## Recommended Posts

I'm attempting to incorporate the triple-buffered xmesh tutorial (thanks :)) into the base of an engine I'm writing. However, all I get is a black screen (from the g_pDevice->Clear call). I know it gets into the RenderTiny() function, because when I change the Clear call's color, the window color changes. I just can't figure out why it won't display the mesh, or what I'm missing. Any help would be appreciated.

[Edited by - c_back on December 30, 2004 9:01:54 AM]

##### Share on other sites

It may just be me, but I don't see that you set the projection matrix... Hope that helps.

##### Share on other sites

Thanks for the reply, Axiverse :) Would it be this part that you meant:

```cpp
int SetPerspective()
{
    D3DXMatrixPerspectiveFovLH( &m_matProjection, D3DX_PI/4.0f,
        float(d3dpp.BackBufferWidth/d3dpp.BackBufferHeight),  // note: divides two ints before the cast
        1.0f, 1000.0f );
    if( FAILED( g_pDevice->SetTransform( D3DTS_PROJECTION, &m_matProjection ) ) )
    {
        MessageBox( g_hWndMain, "SetTransform() Failed", "Error", MB_OK | MB_ICONINFORMATION );
        return E_FAIL;
    }
    OutputDebugString( "Perspective set\n" );
    return true;
}
```

Or would I have to do something more?

##### Share on other sites

Oh, sorry. Well, have you tried to just render a triangle? Also I think this is wrong:

g_pDevice->SetRenderState( D3DRS_LIGHTING, 1 );

because it is converted to the DWORD 0x00000001, where if you do *(DWORD*)(&1), it is represented as 0x01000000. I'm a little rusty on my booleans, but isn't 1 false? Ahem... never mind about the boolean thing. [edit again] Anyway, if you make the background white, then with or without light you will at least get a black silhouette.

##### Share on other sites

thanks :) trying that stuff

##### Share on other sites

Sorry if I'm no help... Your problem most likely lies in the mesh, which I have no experience in whatsoever... =)

##### Share on other sites

np. Turns out it won't render a triangle either. Not sure where my problem lies /sigh. I know it loads the mesh/texture fine, though. I think I'm missing part of the view/camera setup. Anyone?

##### Share on other sites

Still working on this. If anyone has any guesses, they would be welcome.

##### Share on other sites

Your camera view seems to be way far into the Z axis. Depending on the proportions of your model, it may well be rendering, but it may not be within your camera's view frustum. Try replacing the entire RenderTiny call with a RenderTriangle call. In the RenderTriangle call, copy the render-triangle code from the DirectX tutorials. Note that the tutorial doesn't set any view, projection or world matrix, so make sure you aren't setting up any of the matrices previously in your code either. If this works, then it is almost certainly a problem with your WVP matrices. If it works, try setting the world matrix to identity and run it again. If it works (as it should), set up the projection matrix and run again. It should also work correctly, in which case your view matrix is the odd one out and needs to be set up properly. Please note that tiny.x is actually a very small model (at least it was in DirectX 8), and your parameters may need to be scaled down quite a bit too. Anyway, I'm guessing here, but I hope some of it has helped!
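For reference, here is a minimal sketch of a complete fixed-function D3D9 transform setup along the lines suggested in the last reply. It reuses the names from the snippets above (g_pDevice, d3dpp); the eye position is an arbitrary placeholder you would tune to your mesh's scale, and the float casts in the aspect ratio avoid the integer division noted in the earlier snippet.

```cpp
// Sketch: set world, view, and projection before rendering (fixed-function D3D9).
// Assumes g_pDevice (IDirect3DDevice9*) and d3dpp (D3DPRESENT_PARAMETERS) as above.
void SetupTransforms()
{
    // World: identity, so the mesh is rendered at the origin.
    D3DXMATRIX matWorld;
    D3DXMatrixIdentity( &matWorld );
    g_pDevice->SetTransform( D3DTS_WORLD, &matWorld );

    // View: camera a few units back on -Z, looking at the origin.
    // If the mesh is tiny (like tiny.x), move the eye closer.
    D3DXVECTOR3 eye( 0.0f, 0.0f, -5.0f ), at( 0.0f, 0.0f, 0.0f ), up( 0.0f, 1.0f, 0.0f );
    D3DXMATRIX matView;
    D3DXMatrixLookAtLH( &matView, &eye, &at, &up );
    g_pDevice->SetTransform( D3DTS_VIEW, &matView );

    // Projection: cast width and height to float separately to avoid integer division.
    float aspect = (float)d3dpp.BackBufferWidth / (float)d3dpp.BackBufferHeight;
    D3DXMATRIX matProj;
    D3DXMatrixPerspectiveFovLH( &matProj, D3DX_PI/4.0f, aspect, 1.0f, 1000.0f );
    g_pDevice->SetTransform( D3DTS_PROJECTION, &matProj );
}
```

Enabling the matrices one at a time, as the reply suggests, is a good way to isolate which of the three is wrong.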
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18718627095222473, "perplexity": 3143.844954274235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646952.38/warc/CC-MAIN-20180319140246-20180319160246-00595.warc.gz"}
https://eecs.engin.umich.edu/event/onset-of-fast-magnetic-reconnection-in-laboratory-and-space-plasmas/
# Onset of Fast Magnetic Reconnection in Laboratory and Space Plasmas

Professor Amitava Bhattacharjee, University of New Hampshire

The onset of fast magnetic reconnection is widely studied in laboratory experiments, in situ satellite measurements in the Earth's magnetosphere, and solar flares. These observations place strong constraints on theory, which must explain not only a fast reconnection rate but also a sudden increase in the time-derivative of the reconnection rate. We will show by means of theory and high-resolution simulations that such dynamics can be accounted for in one unifying framework by means of the Hall MHD model. The problem takes on additional complexity when it is applied to large systems, which have been the subject of considerable interest recently. Thin current sheets in systems of large size that exceed a critical value of the Lundquist number are unstable to a super-Alfvénic tearing instability, referred to as the plasmoid instability because it is a copious source of plasmoids (or magnetic islands). As a result of this instability, the system is shown to realize a fast nonlinear reconnection rate that is independent of the Lundquist number of the plasma.

Dr. Amitava Bhattacharjee is Paul Professor at the Space Science Center and the Department of Physics at the University of New Hampshire. He received his Ph.D. at Princeton University in theoretical plasma physics from the Department of Astrophysical Sciences. He and his students and postdoctoral colleagues have authored over 200 publications with broad applications to laboratory (including fusion), space, and astrophysical plasmas. He is a Fellow of the American Physical Society and the American Association for the Advancement of Science. He has recently served as Chair of the Division of Plasma Physics of the American Physical Society, and as Senior Editor of the Journal of Geophysical Research – Space Physics.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8663958311080933, "perplexity": 1978.8805677629587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710808.72/warc/CC-MAIN-20221201085558-20221201115558-00836.warc.gz"}
https://arxiv.org/abs/1701.07821
math.SG

# Title: Gromov-Witten theory via Kuranishi structures

Abstract: In this expository manuscript, we review the construction of the Gromov-Witten virtual fundamental class via FOOO's theory of Kuranishi structures for moduli spaces of pseudo-holomorphic maps defined on closed Riemann surfaces. We consider constraints coming from the ambient space and Deligne-Mumford moduli, called primary insertions, as well as intrinsic classes such as $\psi$-classes and Hodge classes.

Comments: This article is based on the original article of Fukaya-Ono and subsequent articles of Fukaya-Oh-Ohta-Ono and covers the relevant parts of their theory of Kuranishi structures that are needed to define the Gromov-Witten VFC. Proof of the gluing theorem and related estimates are not included.

Subjects: Symplectic Geometry (math.SG); Algebraic Geometry (math.AG)

Cite as: arXiv:1701.07821 [math.SG] (or arXiv:1701.07821v1 [math.SG] for this version)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7464682459831238, "perplexity": 1759.3338113473737}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320201.43/warc/CC-MAIN-20170623220935-20170624000935-00315.warc.gz"}
http://mathoverflow.net/questions/85578/when-the-adjoint-of-a-hypoelliptic-operator-hypoelliptic
# When is the adjoint of a hypoelliptic operator hypoelliptic?

Assume $M$ is a smooth manifold with a measure $\mu$ and let $L^2(M, \mu)$ be the space of all square-integrable functions on $M$. Recall that $L$ is a hypoelliptic differential operator if, for every $f \in \mathcal{D}(L)$, whenever $Lf$ is in $C^\infty(M)$ then $f$ is also in $C^\infty(M)$.

Could anyone give a reference to an example where $L$ is hypoelliptic but its adjoint w.r.t. $\mu$ is not hypoelliptic? Could this happen for Hörmander operators, where $L$ is defined as $$L = \sum_i X_i^2 + X_0$$ and the $\{X_i\}$'s are bracket generating?

I am mostly interested in the case when $\mu$ is induced by the Riemannian metric and $M$ is complete. Thanks.

Hörmander's operator $L=X_0+\sum_{1\le j\le k} X_j^2$, where the $X_j$ are real smooth vector fields such that the Lie algebra of $\{X_j\}_{0\le j\le k}$ generates the tangent space, is hypoelliptic, and so is its adjoint, since the Lie algebra condition does not change by taking adjoints. On the other hand, $\frac{\partial}{\partial t}+t\Delta_x$ is hypoelliptic whereas its adjoint $-\frac{\partial}{\partial t}+t\Delta_x$ is not hypoelliptic.

Could you explain why the operators $\frac{\partial}{\partial t} + t \Delta_x$ and $-\frac{\partial}{\partial t} + t \Delta_x$ are hypoelliptic and not, respectively? – Bob Yuncken Jul 22 at 14:13
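As a side note not present in the thread: one can at least verify which operator is the formal adjoint by a standard integration by parts with respect to the $L^2(\mathbb{R}^{n+1}, dt\,dx)$ pairing. A minimal LaTeX sketch of the computation:

```latex
% Formal adjoint of L = \partial_t + t\Delta_x with respect to the
% L^2(dt\,dx) pairing, for smooth compactly supported u, v:
\begin{aligned}
\langle Lu, v\rangle
  &= \int (\partial_t u + t\,\Delta_x u)\,\overline{v}\;dt\,dx \\
  &= \int u\,\overline{(-\partial_t v + t\,\Delta_x v)}\;dt\,dx
   = \langle u, L^{*}v\rangle,
\end{aligned}
% using \partial_t^{*} = -\partial_t, while \Delta_x and multiplication
% by t are each formally self-adjoint and commute with one another.
% Hence L^{*} = -\partial_t + t\,\Delta_x, as stated in the answer.
```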
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9572778344154358, "perplexity": 99.59174257953559}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464396.48/warc/CC-MAIN-20151124205424-00083-ip-10-71-132-137.ec2.internal.warc.gz"}
https://new.rosettacommons.org/demos/latest/public/model_missing_loop/README
The scripts and input files that accompany this demo can be found in the demos/public directory of the Rosetta weekly releases.

KEYWORDS: STRUCTURE_PREDICTION LOOPS

Authors: Roland Pache, Michal Sperber, Steven Combs, George Rosenberger

Last updated: August 2011 (RosettaCon9)

This demo shows how missing electron density spanning several consecutive residues can be modeled using the loop modeling application (loopmodel) and the Kinematic Closure (KIC) algorithm. The starting structure (1TR2_missing_density.pdb) is based on vinculin (1TR2). Five loop residues (32-36) have been removed and should be replaced by the sequence VDGKA for loop modeling (simulating missing electron density). Afterwards, this PDB structure can be used to model the loop. For this demo, the water molecules (HOH) have been removed and the structure was truncated to the first 132 residues.

## Running the demo

1. Insert the new residues into the structure file.

Open the file 1TR2_missing_density.pdb in the text editor of your choice. Find the first gap line (residue 32). Search for the first Valine residue in the file and copy all of its atoms to the new line. Repeat this step for the other residues (DGKA) and insert their coordinates below the Valine. The file should then look like 1TR2_manually_added_dummy_residues.pdb. Renumber the residues you copied from elsewhere to 32-36 and remove any accidentally inserted new lines. Save this file as 1TR2_manually_added_dummy_residues_renumbered.pdb.

2. Create the loop file.

Create a new file called 1TR2.loop, open it in your text editor, and insert the following line:

LOOP 31 37 37 0 1

This excerpt from the loopmodel documentation describes the meaning of the 6 columns in that line:

- column 1, "LOOP": literally the string LOOP, identifying this line as a loop (in the future, loop specification files may take other data)
- column 2, integer: loop start residue number
- column 3, integer: loop end residue number
- column 4, integer: cut point residue number, >= start residue, <= end residue
- column 5, float: skip rate (default: never skip, i.e. 0)
- column 6, boolean: extend loop; set to 1

For this example, we select one residue before and after the loop to have real coordinates that can be used as anchor points by the KIC loop modeling algorithm. The cut point residue number is set to the last loop residue, since it must be inside the loop. The skip rate is set to 0 for this short example (since we want to model this loop), and the extend-loop setting is set to true to idealize all bond lengths, bond angles and torsion angles of the loop residues before modeling.

3. Execution of the algorithm and definition of the flags.

Assuming that Rosetta 3.3 is installed and all paths are set correctly, open your shell and change directory to the one where the demo files are stored:

```
$> cp rosetta_inputs/* .
$> $ROSETTA3/bin/loopmodel.linuxgccrelease -s 1TR2_manually_added_dummy_residues_renumbered.pdb -loops:loop_file 1TR2.loop -loops:remodel perturb_kic -loops:refine refine_kic -ex1 -ex2 -nstruct 1 -loops:max_kic_build_attempts 100 -in:file:fullatom
```

Brief descriptions for the components of this command line:

- loopmodel.linuxgccrelease: the loopmodel application (linuxgccrelease or macosgccrelease)
- -database: path to your Rosetta 3.3 DB
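To further illustrate the column format with a second, hypothetical example (not part of this demo's inputs): a loop file for remodeling residues 50-60, with the cut point at residue 55, no skipping, and an idealized (extended) starting conformation would contain the single line:

```
LOOP 50 60 55 0 1
```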
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4214661419391632, "perplexity": 4622.111684427078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711045.18/warc/CC-MAIN-20221205200634-20221205230634-00200.warc.gz"}
http://aspandroid.blogspot.in/2013/07/step-by-step-procedure-to-decompile.html
## Wednesday, July 10, 2013

### Step 1:

Make a new folder and put the .apk file you want to decode in it. Now rename the extension of this .apk file to .zip (e.g. rename filename.apk to filename.apk.zip) and save it. Now you can get at the classes.dex file, among others. At this stage you are able to see the drawables, but not the xml and java files, so continue to step 2.

### Step 2:

Now extract this zip apk file into the same folder (or a new folder). Download dex2jar from http://code.google.com/p/dex2jar/ and extract it to the same folder (or the new folder). Open a command prompt and change directory to that folder (or the new folder). Then write `dex2jar classes.dex` and press enter. You now get a classes.dex.dex2jar file in the same folder. Then download the Java decompiler from http://java.decompiler.free.fr/?q=jdgui, double-click jd-gui, and click "Open File". Open the classes.dex.dex2jar file from that folder. You now see the class files; save all of them (click on File, then "Save All Sources" in jd-gui) under the name src. At this stage you have the java sources, but the xml files are still unreadable, so continue to step 3.

### Step 3:

Now open another new folder and go through the following (a consolidated command sketch appears at the end of this post):

1. Put in the .apk file which you want to decode
2. Download apktool and place it in the same folder
3. Download the framework-res.apk file and put it in the same folder (not every apk file needs framework-res.apk)
4. Open a command window
5. Navigate to the root directory of APKtool and type the following command: `apktool if framework-res.apk`
6. `apktool d "fname".apk` ("fname" denotes the filename which you want to decode)

Now you get a folder of decoded files inside that folder, and you can easily read the xml files as well.

### Step 4:

Now just copy the contents of both folders (in this case both new folders) into a single one and enjoy the complete source code.

Happy Coding!!
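For quick reference, here is the command-line portion of the steps above consolidated into one Windows session. The file name myapp.apk is a placeholder for your own apk, and the comments mark which step each command belongs to:

```
ren myapp.apk myapp.zip          :: step 1: rename so the zip contents can be extracted
dex2jar classes.dex              :: step 2: run on the extracted classes.dex; yields classes.dex.dex2jar for jd-gui
apktool if framework-res.apk     :: step 3: install framework resources (only needed for some apks)
apktool d myapp.apk              :: step 3: decode the apk, producing readable xml
```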
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8456855416297913, "perplexity": 7821.95004066382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011190529/warc/CC-MAIN-20140305091950-00092-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/centripical-motion-problem.46213/
# Centripetal motion problem

1. Oct 5, 2004

### strugglin-physics

A mass m = 5.100 kg is suspended from a string of length L = 1.110 m. It revolves in a horizontal circle (see Figure). The tangential speed of the mass is 2.696 m/s. What is the angle theta between the string and the vertical (in degrees)?

My first question is: what is tangential speed? Is that the $v$ in the formula $a_c = v^2/r$? But my biggest problem (I think) is that I can't figure out how to draw a FBD for this picture. Assistance is appreciated.

2. Oct 5, 2004

### Sirus

Tangential speed is the magnitude of the velocity vector of the mass, which points in a direction tangential to its circular path (that is, perpendicular to the centripetal force vector). It is the instantaneous speed of the mass. Yes, that formula is correct. Kudos for starting with a free-body diagram. Think about all the forces present. What is the force causing the mass to stay in circular motion? What other force is present? Hint: neglecting friction, only two forces are acting on the mass.

3. Oct 5, 2004

### arildno

Note that the vertical component of the tension force must balance the weight of the mass.

4. Oct 5, 2004

### strugglin-physics

Is it a drag force? F=-bv and the weight force?

5. Oct 5, 2004

### e(ho0n3

What drag force? There is no drag force in this problem.

6. Oct 5, 2004

### arildno

Why do you think it is a drag force?

7. Oct 5, 2004

### arildno

1. The tension force is directed along the string; let its magnitude be T.
2. Let $$\theta$$ be the angle you're supposed to find.
3. Hence, the vertical component of the tension force is $$T\cos\theta$$
4. This component must balance the weight of the mass, so we get from Newton's second law: $$T=\frac{mg}{\cos\theta}$$ (No accelerations in the vertical)
5. The horizontal component of the tension force must provide the centripetal acceleration of the mass. The radius R is evidently: $$R=L\sin\theta$$

Can you take it from here?

8. Oct 5, 2004

### strugglin-physics

I don't know what the second force is... I know the weight force, but what is keeping the plane up can't be a contact force because it isn't touching anything. So that leaves magnetic or electric, and I know it isn't either of those. Therefore, I'm stumped.

Sorry, totally thinking about the other problem. LOL :uhh:

Last edited: Oct 5, 2004

9. Oct 5, 2004

### arildno

The tension force is provided by the string.

10. Oct 5, 2004

### strugglin-physics

So for the vertical component we have mg/cos theta times cos theta equals mass times acceleration. Don't the thetas cancel out and leave us with mg = ma, making g = a? That doesn't help me figure out theta though. Sigh, sorry for being such a pain... we have a test tomorrow... I have a funny feeling it is not going to be a really good day.

11. Oct 5, 2004

### arildno

NO!! The vertical component of the tension force is, as I've said, $$T\cos\theta$$

Then, look at the vertical component of Newton's 2nd law: $$T\cos\theta-mg=0$$

Hence, $$T=\frac{mg}{\cos\theta}$$

EDIT: You are now done with finding the magnitude of the tension. Use this expression for the magnitude of tension in the radial component of Newton's second law (in the horizontal plane, that is).

Last edited: Oct 5, 2004

12. Oct 5, 2004

### arildno

For your information, you should get: $$\cos\theta=\sqrt{1+\left(\frac{v^{2}}{2Lg}\right)^{2}}-\frac{v^{2}}{2Lg}$$ where v is the tangential velocity.
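Plugging the thread's numbers into arildno's closed-form expression gives a concrete answer. The short script below is not from the thread; it just does the arithmetic, taking g = 9.81 m/s²:

```python
import math

# Given values from the problem statement
m = 5.100     # kg (mass; note it cancels out of the final formula)
L = 1.110     # m, string length
v = 2.696     # m/s, tangential speed
g = 9.81      # m/s^2

# cos(theta) = sqrt(1 + (v^2 / (2 L g))^2) - v^2 / (2 L g)
k = v**2 / (2 * L * g)
cos_theta = math.sqrt(1 + k**2) - k
theta = math.degrees(math.acos(cos_theta))
print(f"theta = {theta:.1f} degrees")  # roughly 44 degrees
```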
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9766080379486084, "perplexity": 1141.3497974739987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320669.83/warc/CC-MAIN-20170626032235-20170626052235-00524.warc.gz"}
https://socratic.org/questions/how-do-you-graph-3x-y-6-using-intercepts
Algebra Topics

# How do you graph 3x+y=6 using intercepts?

Sep 26, 2016

Find the intercepts and join them together.

#### Explanation:

Substitute $y = 0$ to find the x-intercept:

when $y = 0$, $3 x = 6$, so $x = 2$; the x-intercept is 2.

Substitute $x = 0$ to find the y-intercept:

when $x = 0$, $y = 6$; the y-intercept is 6.

Since this is a linear equation ($y = m x + c$), we can draw the graph by drawing the line that joins the two intercept points.

graph{3x+y=6 [-11.08, 14.58, -3.73, 9.1]}
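A few lines of Python can double-check both intercepts and the slope-intercept form used above (this is just a verification aid, not part of the original answer):

```python
# Verify the intercepts of 3x + y = 6.
def y_of(x):           # solve for y: y = 6 - 3x
    return 6 - 3 * x

assert y_of(0) == 6    # y-intercept: (0, 6)
assert y_of(2) == 0    # x-intercept: (2, 0)

# Slope-intercept form y = mx + c with m = -3, c = 6
m, c = -3, 6
assert all(y_of(x) == m * x + c for x in range(-5, 6))
print("intercepts: (2, 0) and (0, 6)")
```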
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21895968914031982, "perplexity": 8617.909250406785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573827.2/warc/CC-MAIN-20190920030357-20190920052357-00062.warc.gz"}
https://physics.stackexchange.com/questions/linked/147433?sort=hot&pagesize=30
12k views ### Will the volt, ampere, ohm or other electrical units change on May 20th, 2019? [duplicate] When watching a video by Veritasium about the SI units redefinition (5:29), a claim that the volt and unit of resistance (presumably the ohm) will change by about 1 part in 10 million caught my ... 188 views ### How can I explain what a kilogram is using Planck's constant? [duplicate] I want to understand what 1 kg represents. For example: I know that 1 second is equal to $9\ 192\ 631\ 770$ transitions from the microwave radiation that a cesium-133 atom (at $0$K) emits, if it's ... 141 views ### How is a Kibble balance used in the new definition of the kilogram, and what's the connection between the balance and Planck's constant? [duplicate] The BBC News article Kilogram gets a new definition says: How does the new system work? Electromagnets generate a force. Scrap-yards use them on cranes to lift and move large metal objects, ... 23 views ### SI redefinition of the kilogram - what is one measuring? [duplicate] I have been reading about the new SI units and specifically, want to get a better understanding of the definition of a kilogram. It was written that the kilogram will be defined in terms of Planck's ... 138k views ### What is the difference between “kinematics” and “dynamics”? I have noticed that authors in the literature sometimes divide characteristics of some phenomenon into "kinematics" and "dynamics". I first encountered this in Jackson's E&M book, where, in ... 18k views ### Why do atomic clocks only use caesium? Modern atomic clocks only use caesium atoms as oscillators. Why don't we use other atoms for this role? 5k views ### Why are scientists involved in the Avogadro Project using silicon-28 atoms instead of carbon-12? My question is, why use silicon-28 atoms to calculate the kilogram when you already have carbon-12 atoms defining the constant? Does the Avogadro Project intend to define the constant by replacing ... 10k views ### Why is the mole/“amount of substance” a dimensional quantity? According to the BIPM and Wikipedia, "amount of substance" (as measured in moles) is one of the base quantities in our system of weights and measures. Why? I get why the mole is useful as a unit. In ... 4k views ### Why is the candela a base unit of the SI? The candela is defined as The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency $540\cdot10^{12}$ hertz and that has a radiant ... 2k views ### Why do we still not have an exact (constants-based) definition for a kilogram? I read that there is an effort to define a kilogram in terms that can exactly be reproduced in a lab. Why has it taken so long to get this done? It would seem this should be fairly important. Edit: ... 1k views ### Uncertainty of permittivity of vacuum Question: The value of permittivity of vacuum, $\epsilon_0$, is given with absolutely no uncertainty in NIST Why is this the case? More details: The permeability of vacuum can be given by \mu_0=... 891 views ### Why are electrical units (specifically, electrical current) considered a base unit? Note: this is NOT a question why current is the base unit as opposed to charge—that’s because measuring $1 \ \mathrm{ A }$ through a wire is easier to measure in a lab than is $1 \ \mathrm{ C }$ in ... 1k views ### Magnetic effect on AC circuits? 
We know that when currents in two wires move parallel to each other, they attract each other and if they are moving anti-parallel to each other, they repel each other but we cannot observe this in ... 1k views ### What is a base unit in the new SI, and why is the ampere one of them? One question that comes up pretty much always in introductory electromagnetism courses is Why the base unit of electrical measurements is the ampere and not the coulomb, and the usual answer is that ... 2k views ### How is the constant of the Biot-Savart Law derived? In my A-level textbook there is no explanation regarding how the constant in the Biot-Savart law is derived! So how is the constant, $k =\frac {\mu_0}{4\pi}$ derived, and what's the intuition behind ... 242 views ### Why are there $1 / 1.602176634 \times 10^{-19}$ electrons in a coulomb? Why that exact number of electrons in one coulomb? who decided it? there is nothing wrong with the number, it just seems slightly messy. Why didn't the scientific community just settle on an easier ... 539 views ### There are plans to develop a better definition of a “second”. How does the current definition fall short? The current definition of a second is stated here and I found a presentation on the BIPM site which discusses plans to change to a "better" definition of a second. You can find the presentation here. ... 213 views ### Redefinition of everything on May 20th, 2019 [closed] A couple of issues: So after May 20th, 2019, what exactly will be the defined value of $\hbar$? What will be the defined number of elementary charges in a Coulomb? Then $\mu_0$ and $\epsilon_0$ will ... 143 views ### Does the death of Kilogram ($kg$) affect us in any means in our day to day life? [closed] Recently, the sleek cylinder of platinum-iridium metal has been discarded and the kilogram is set to be redefined along with ampere for electricity and Kelvin for temperature. Hereafter the Kilogram ... 111 views ### Why is the kilogram defined using Earth's gravity? [closed] Since there are variations of $g$ depending on location on Earth's surface, why not use a reproducible lab experiment using a vertical axis centrifugal balance, and say that one kg is defined by ... 586 views ### How to distinguish between the spectrum of an atom in motion and the one of a scaled atom? Galaxies are moving dragged by the space expansion. When atoms are in motion the doppler effect will shift the spectra of the emitted photons. The proton-to-electron mass ratio, $\frac{m_e}{m_p}$ ... 193 views ### Is absolute zero still 0 Kelvin? Following the recent decision to change the definition of SI units, I understand that Kelvin is no longer defined in terms of the number 1/273.16. Does that mean that absolute zero is no longer ... 294 views ### SI Base Unit definition of mass - obsolete? According to the formal definition of the SI Base unit of mass, the kilogram, it is stated that "The kilogram is the unit of mass; it is equal to the mass of the international prototype of the ... 402 views ### How accurately can you count electrons? I read that with modern technology they can now shoot one electron at a time. Can you tell me how accurately it is possible to count charges, how it is made and how they did this in the past? How ... 353 views ### Why must the kilogram standard be based on a kilogram mass object? Inspired by the accepted answer to a question about the Avogadro Project, why must an object used to define a new standard for the kilogram have a mass of one full kilogram? 
If a smaller mass were ... 338 views ### How explain this perturbing equation about the 43 arcseconds? The planetary orbits have been studied as ellipses but the solar system is in motion in relation to the distant stars. Their path is along the tip of a helix and the ecliptic plane is a convenient ... 111 views ### Why is the kilogram the last SI unit which is defined in terms of a physical prototype? [duplicate] All elementary SI units, except the kilogram, have been redefined depending on a physical constant. However the kilogram still depends on the International Prototype Kilogram. Why is this? Is there no ... 200 views ### How can a Lego version of a Kibble balance measure the Planck constant? As the picture shows, in a Kibble balance one can drop out the measurement uncertainty of $B$ (magnetic flux intensity) and $L$ (length of coil) by the use of two modes, force mode and ... ### What is the mass of $N_A$ atoms of carbon-12? With the recent redefinition of the kilogram, what is the mass of $N_A$ (Avogadro's constant) of carbon-12 atoms? $N_A$ was defined as exactly 6.02214076×$10^{23}$ atoms. Then how close would the ... I am specifically interested in the following constants because being a student these are some of the most common constants that I face: $R$ (universal gas constant) Stefan constant Permeability of ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8496384024620056, "perplexity": 880.3852500831889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986702077.71/warc/CC-MAIN-20191020024805-20191020052305-00165.warc.gz"}
https://math.eretrandre.org/tetrationforum/showthread.php?tid=333&pid=3738&mode=threaded
Real and complex behaviour of the base change function (was: The "cheta" function)

jaydfox
Long Time Fellow
Posts: 440
Threads: 31
Joined: Aug 2007

08/15/2009, 06:40 PM (This post was last modified: 08/15/2009, 06:49 PM by jaydfox.)

(08/15/2009, 05:36 PM) bo198214 Wrote: Can you make a picture for those that don't currently sit down with a computer algebra system computing exactly this?

A picture I can do.

Quote: Imho the $\log^{[n-2]}(-1)$ converges to the upper primary fixed point of $\exp$. So why should they come arbitrarily close to the real axis?

But Henryk, remember that, due to branching of logarithms, we must start at a point on the real line and then follow a path. Well, if we start at $\exp_{e}^{[n-2]}(x=0)$, you arrive at a rather large number. If you then try to create a simple path to $\exp_{e}^{[n-2]}(x)=-1$, and then perform n-2 logarithms, you will not get a simple path. It will loop wildly around the upper fixed point.

However, let's consider the point $\exp_{e}^{[n-3]}(x)=0+\left(2\,\mathrm{floor}(\exp_{e}^{[n-3]}(0)/2\pi)+1\right)\pi i$. This point is roughly the same distance from the origin as the [n-3] exponentiation of 0, but it has real part 0 and imaginary part equal to (2k+1)*pi, integer k. Thus, on the next exponentiation, it's -1.

Note that we can use a rather large quarter circle (or approximately so) to connect this point to the real line. If we take the logarithm of this arc, in the primary branch, we get a line segment from the real line to a complex value with roughly the same real part, but imaginary part equal to pi/2. Note that this line segment now gives us a very simple image as we iteratively perform logarithms, such that it ends up very, very close to 0, with a very small imaginary part (the higher the value of n, the closer to the real line it gets).

Now this approach only got us the "closest" singularity to whatever point on the real line we started at (0 in my example). Instead of a quarter circle, we can do a 3/4 circle, 5/4 circle, etc., and we can pick different k values in the (2k+1)*pi*i formula, to find arbitrarily many singularities, and depending on the exact path being used, we can find these singularities in arbitrarily many different branches.

I will draw up some pictures to demonstrate.
~ Jay Daniel Fox
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 5, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8076938390731812, "perplexity": 2563.3780639743873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670643.58/warc/CC-MAIN-20191121000300-20191121024300-00322.warc.gz"}
https://deepai.org/publication/fully-dynamic-maximal-independent-set-with-sublinear-in-n-update-time
# Fully Dynamic Maximal Independent Set with Sublinear in n Update Time

The first fully dynamic algorithm for maintaining a maximal independent set (MIS) with update time that is sublinear in the number of edges was presented recently by the authors of this paper [Assadi et al. STOC'18]. The algorithm is deterministic and its update time is $O(m^{3/4})$, where $m$ is the (dynamically changing) number of edges. Subsequently, Gupta and Khan and independently Du and Zhang [arXiv, April 2018] presented deterministic algorithms for dynamic MIS with update times of $O(m^{2/3})$ and $O(m^{2/3}\sqrt{\log m})$, respectively. Du and Zhang also gave a randomized algorithm with update time $O(\sqrt{m})$. Moreover, they provided some partial (conditional) hardness results hinting that update time of $m^{1/2-\epsilon}$, and in particular $n^{1-\epsilon}$ for $n$-vertex dense graphs, is a natural barrier for this problem for any constant $\epsilon > 0$, for both deterministic and randomized algorithms that satisfy a certain natural property.

In this paper, we break this natural barrier and present the first fully dynamic (randomized) algorithm for maintaining an MIS with update time that is always sublinear in the number of vertices, namely, an $O(\sqrt{n})$ expected amortized update time algorithm. We also show that a simpler variant of our algorithm can already achieve an $O(m^{1/3})$ expected amortized update time, which results in an improved performance over our $O(\sqrt{n})$ update time algorithm for sufficiently sparse graphs, and breaks the $m^{1/2}$ barrier of Du and Zhang for all values of $m$.

## 1 Introduction

The maximal independent set (MIS) problem is of utmost practical and theoretical importance, primarily since MIS algorithms provide a useful subroutine for locally breaking symmetry between multiple choices. MIS is often used in the context of graph coloring, as all vertices in an independent set can be assigned the same color.
As another example, Hopcroft and Karp [10] gave an algorithm to compute a large bipartite matching (approximating the maximum matching to within a factor arbitrarily close to 1) by finding maximal independent sets of longer and longer augmenting paths. In general, the MIS problem has natural connections to various important combinatorial optimization problems; see the celebrated papers of Luby [18] and Linial [17] for some of the most basic applications of MIS. Additional applications of MIS include leader election [6], resource allocation [24], network backbone constructions [14, 11], and sublinear-time approximation algorithms [21].

The MIS problem has been extensively studied in parallel and distributed settings, following the seminal works of [18, 2, 17]. Surprisingly however, the fundamental problem of maintaining an MIS in dynamic graphs received no attention in the literature until the pioneering PODC'16 paper of Censor-Hillel, Haramaty, and Karnin [5], who developed a randomized algorithm for this problem under the oblivious adversarial model (see the footnote below) in distributed dynamic networks. Implementing the distributed algorithm of [5] in the sequential setting requires $\Omega(\Delta)$ update time in expectation, where $\Delta$ is a fixed upper bound on the maximum degree in the graph, which may be $\Omega(n)$ even in sparse graphs. Furthermore, it is unclear whether $O(\Delta)$ time is also sufficient for this algorithm, and a naive implementation may incur an update time of $O(m)$, even in expectation, where $m$ is the (dynamically changing) number of edges; see Section 6 of [5] for further details.

Footnote: In the standard oblivious adversarial model (cf. [4], [12]), the adversary knows all the edges in the graph and their arrival order, as well as the algorithm to be used, but is not aware of the random bits used by the algorithm, and so cannot choose updates adaptively in response to the randomly guided choices of the algorithm.

We study the MIS problem in the (sequential) dynamic setting, where the underlying graph evolves over time via edge updates. A dynamic graph is a graph sequence $\mathcal{G} = (G_0, G_1, G_2, \ldots)$ on $n$ fixed vertices, where the initial graph is $G_0$ and each graph $G_t$ is obtained from the previous graph $G_{t-1}$ in the sequence by either adding or deleting a single edge.

The work of Censor-Hillel et al. [5] left the following question open: Can one dynamically maintain an MIS in time significantly lower than it takes to recompute it from scratch following every edge update? The authors of this paper [3] answered this question in the affirmative, presenting the first fully dynamic algorithm for maintaining an MIS with (amortized) update time that is sublinear in the number of edges, namely, $O(m^{3/4})$. Achieving an update time of $O(\Delta)$ is simple, and the main contribution of [3] is in further reducing the update time to $O(m^{3/4})$. Note that $O(m^{3/4})$ improves over the simple bound only for sufficiently sparse graphs.

Onak et al. [22] studied "uniformly sparse" graphs, as opposed to the work by Assadi et al. [3] that focused on unrestricted sparse graphs. The "uniform sparsity" of the graph is often measured by its arboricity [19, 20, 23]: the arboricity of a graph $G(V, E)$ is defined as $\alpha(G) = \max_{U \subseteq V,\, |U| \ge 2} \lceil |E(U)| / (|U|-1) \rceil$, where $E(U)$ is the set of edges of the subgraph induced by $U$. A dynamic graph of arboricity $\alpha$ is a dynamic graph such that all graphs $G_t$ have arboricity bounded by $\alpha$. Onak et al. [22] showed that for any dynamic $n$-vertex graph of arboricity $\alpha$, an MIS can be maintained with amortized update time $O(\alpha^2 \cdot \log^2 n)$, which reduces to $O(\log^2 n)$ in bounded arboricity graphs, such as planar graphs and more generally all minor-closed graph classes. The result of [22] improves that of [3] for all graphs with arboricity bounded by $m^{3/8-\epsilon}$, for any constant $\epsilon > 0$.
Since the arboricity of a general graph cannot exceed $O(\sqrt{m})$, this result covers much of the range of possible values for arboricity. Nonetheless, for general graphs, this update time is in fact higher than the naive $O(m)$ time needed to compute an MIS from scratch. Recently, the bound of Assadi et al. [3] for general graphs was improved to $O(m^{2/3})$ by Gupta and Khan [8] and independently to $O(m^{2/3}\sqrt{\log m})$ by Du and Zhang [7]. All the aforementioned algorithms (besides the distributed algorithm of [5]) are deterministic. Du and Zhang also presented a randomized algorithm under the oblivious adversarial model with an expected update time of $O(\sqrt{m})$; for dense graphs, this update time reduces to $O(n)$, which is no better than the simple deterministic $O(\Delta)$-update time algorithm for this problem. None of the known algorithms for dynamically maintaining an MIS achieves an update time of $o(n)$ in dense graphs. A recent result of Du and Zhang [7] partially addresses this lack of progress: they presented an "imperfect reduction" from the Online Boolean Matrix-Vector Multiplication problem to prove a conditional hardness result for the dynamic MIS problem (see, e.g., [9] for the role of this problem in proving conditional hardness results for dynamic problems). This result hints that an update time of $m^{1/2-\epsilon}$, for any constant $\epsilon > 0$, may be a natural barrier for a large class of deterministic and randomized algorithms for dynamic MIS that satisfy a certain natural property (see [7] for the exact definition of this property and more details).

This state-of-affairs, namely, the lack of progress on obtaining an update time of $o(n)$ for dynamic MIS in general on one hand, and the partial hardness result hinting that (essentially) $m^{1/2-\epsilon}$ update time might be a natural barrier for this problem for a large class (but not all) of algorithms on the other hand, raises the following fundamental question:

###### Question 1. Can one maintain a maximal independent set in a dynamically changing graph with update time that is always $o(n)$?

### 1.1 Our contribution

Our main result is a positive resolution of Question 1 in a strong sense:

###### Theorem 1. Starting from an empty graph on $n$ fixed vertices, an MIS can be maintained over any sequence of edge insertions and deletions in $\tilde{O}(\min\{\sqrt{n},\, m^{1/3}\})$ amortized update time, where $m$ denotes the dynamic number of edges, and the update time bound holds both in expectation and with high probability.

Footnote: We remark that the high probability guarantee holds when the number of updates is sufficiently large; see the formal statement of the results in later sections.

The proof of Theorem 1 is carried out in three stages. In the first stage we provide a simple randomized algorithm for maintaining an MIS with update time $\tilde{O}(n^{2/3})$; although we view this as a "warmup" result, it already resolves Question 1. In the second stage we generalize this simple algorithm to obtain an update time of $\tilde{O}(m^{1/3})$. Achieving the $\tilde{O}(\sqrt{n})$ bound is more intricate; we reach this goal by carefully building on the ideas from the $\tilde{O}(n^{2/3})$- and $\tilde{O}(m^{1/3})$-time algorithms.

Finding a maximal independent set is one of the most studied problems in distributed computing. It is thus important to provide an efficient distributed implementation of the proposed sequential dynamic algorithms.
While the underlying distributed network is subject to topological updates (particularly edge updates) as in the sequential setting, the goal in the distributed setting is quite different: optimizing the (amortized) round complexity, adjustment complexity and message complexity of the distributed algorithm (see, e.g., [5, 3] for definitions). Achieving low amortized round and adjustment complexities is typically rather simple, and so the goal is to devise a distributed algorithm whose amortized message complexity matches the update time of the proposed sequential algorithm. This goal was achieved by [3] and [8]. Similarly to [3, 8], our sequential algorithm can also be distributed, achieving an expected amortized message complexity of $\tilde{O}(\min\{\sqrt{n},\, m^{1/3}\})$, in addition to $O(1)$ amortized round and adjustment complexities, per each update. We omit the details of the distributed implementation of our algorithm as it follows more or less in a straightforward way from our sequential algorithm using the ideas in [3].

## 2 Preliminaries

#### Notation. For a graph $G(V, E)$, $n$ denotes the number of vertices in $G$ and $m$ denotes the number of edges in $G$. For a set $U \subseteq V$, we define $G[U]$ as the induced subgraph of $G$ on vertices in $U$. We further define $N_G(U)$ to be the set of vertices that are neighbors of at least one vertex in $U$ in $G$ (we may drop the subscript $G$ when it is clear from the context). For a vertex $v$, we define $\deg_G(v)$ as the degree of $v$ in $G$. Finally, $\Delta(G)$ denotes the maximum degree in $G$.

#### Greedy MIS. The maximal independent set problem admits a sequential greedy algorithm, $\mathsf{GreedyMIS}(G, \pi)$. Let $G(V, E)$ be a graph and $\pi$ be any ordering of vertices in $V$. $\mathsf{GreedyMIS}(G, \pi)$ iterates over vertices in $V$ according to the ordering $\pi$ and adds each vertex to the MIS iff none of its neighbors have already been chosen. It is immediate to verify that this algorithm indeed computes an MIS of $G$ for any ordering $\pi$. Throughout this paper, we always assume that $\pi$ is the lexicographically-first ordering of vertices and hence simply write $\mathsf{GreedyMIS}(G)$ instead of $\mathsf{GreedyMIS}(G, \pi)$.
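For concreteness, here is a minimal Python sketch of the greedy procedure just described (the paper itself gives no code; the adjacency representation, a dict mapping each vertex to a set of neighbors, is my own choice):

```python
def greedy_mis(adj, order=None):
    """GreedyMIS(G, pi): scan vertices in the given order (lexicographically
    first by default) and add a vertex iff no neighbor was already added."""
    mis = set()
    for v in (order if order is not None else sorted(adj)):
        if not any(u in mis for u in adj[v]):
            mis.add(v)
    return mis

# Example: a path 0-1-2 yields {0, 2} under the lexicographic order.
assert greedy_mis({0: {1}, 1: {0, 2}, 2: {1}}) == {0, 2}
```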
For completeness, we present a self-contained proof of this lemma here (we note that our formulation is somewhat different from that of [13] and is tailored to our application).

###### Lemma 2.2 (cf. [13, 1]).

Fix any n-vertex graph G(V, E) and a parameter p ∈ (0, 1). Let S be a collection of vertices chosen by picking each vertex in V independently with probability p. Suppose M := MIS(G[S]) and U := V \ (M ∪ N(M)). Then, with probability 1 − 1/n⁴,

Δ(G[U]) ≤ 5p^{-1}·ln n.

###### Proof.

Define τ := 5p^{-1}·ln n and fix any vertex v in the original graph G. We prove that with high probability either deg_{G[U]}(v) < τ or v ∉ U, and then take a union bound over all vertices to conclude the proof. We note that the process of computing M can be seen as iterating over the vertices of G in the lexicographically-first order, skipping a vertex if it is incident on the MIS computed so far, and otherwise picking it with probability p and, if picked, including it in M. Let v_{i_1}, …, v_{i_d} be the neighbors of v, ordered accordingly. When processing the vertex v_{i_j}, if v_{i_j} is not already incident on the MIS computed so far, the probability that we pick v_{i_j} to join M is exactly p. As such, if we encounter at least τ such vertices in this process, the probability that we do not pick any of them is at most:

∏_{j=1}^{τ} Pr(v_{i_j} is not chosen | v_{i_j} is not incident to the MIS) = (1 − p)^τ ≤ exp(−p · 5p^{-1} · ln n) = 1/n⁵.

As such, either we did not encounter τ vertices not incident to M, which implies that deg_{G[U]}(v) < τ, or we did, which implies that with probability 1 − 1/n⁵, v itself is a neighbor of some vertex in M (as by the calculation above, we would pick at least one of those τ vertices), and hence does not belong to U. Taking a union bound over all n vertices now finalizes the proof. ∎

## 3 Warmup: An Õ(n^{2/3})-Update Time Algorithm

We start with a simpler version of our algorithm as a warmup.

###### Theorem 2.

Starting from an empty graph on n vertices, a maximal independent set can be maintained via a randomized algorithm over any sequence of K edge insertions and deletions in Õ(K·n^{2/3} + n^{4/3}) time, both in expectation and with high probability (this in particular implies that when the length of the update sequence is sufficiently large, the amortized update time is Õ(n^{2/3}) with high probability).

The algorithm in Theorem 2 works in phases. Each phase starts with a preprocessing step, in which we initiate the data structures for the algorithm and in particular compute a partial MIS of the underlying graph with some useful properties (to be specified later). Next, during each phase, we have the update step, which processes the updates to the graph until a certain condition (to be defined later) is met, upon which we terminate this phase and start the next one. We now introduce each step of our algorithm during one phase.

### The Preprocessing Step

The goal of this step is to find a partial MIS of the current graph with the following (informal) properties: it should be "hard" for a non-adaptive oblivious adversary to "touch" vertices of this independent set, and maintaining an MIS in the remainder of the graph, i.e., after excluding these vertices and their neighbors from consideration, should be distinctly "easier". In the following, we prove that the sample-and-prune technique introduced in Section 2 can be used to achieve this task (we pick the exact value of the sampling probability p later, but it is approximately n^{-2/3}):

PreProcess(G, p):

1. Let S be a set chosen by picking each vertex in V with probability p independently.
2. Compute M := GreedyMIS(G[S]).
3. Return M.

Throughout this section, we use t₀ to denote the time step in which PreProcess is computed (hence G_{t₀} denotes the graph at that point).
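A direct transcription of PreProcess on top of the earlier sketch might look as follows; the names and the default p ≈ n^{-2/3} are our own illustrative choices, consistent with the Õ(n^{2/3}) target of this section but not a verbatim rendering of the paper's parameters.

```python
def preprocess(adj, p=None):
    """One phase's preprocessing: sample S, compute M = GreedyMIS(G[S]),
    and derive the initial partition used by the update algorithm.
    Assumes the helpers from the previous sketch."""
    n = max(len(adj), 1)
    if p is None:
        p = n ** (-2.0 / 3.0)               # warmup choice, p ~ n^(-2/3)
    mis, universe = sample_and_prune(adj, p)
    covered = {u for v in mis for u in adj[v]} - mis   # vertices incident on M
    return mis, covered, universe           # (M, V>=, V<) at time t0
```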
We define a partitioning of the vertices of the graph at any time t ≥ t₀:

• M: the set of vertices computed by PreProcess (and not all of S).
• V_t^{≥}: the set of vertices incident on M in the graph G_t that are not in M.
• V_t^{<}: the set of vertices neither in M nor incident to M in the graph G_t.

It is easy to see that at any time t, (M, V_t^{≥}, V_t^{<}) partitions the vertices of the graph. We emphasize that the definition of M is with respect to the time step t₀ and the graph G_{t₀}, while V_t^{≥} and V_t^{<} are defined for the graph G_t for t ≥ t₀. This means that across the time steps of a phase, the set of vertices M is fixed, but the remaining vertices may move between V_t^{≥} and V_t^{<}. We use this partitioning to define the following key time steps in the execution of the algorithm:

• t_M: the first time step in which M is no longer an MIS of G_t[S] (recall that M and S were computed with respect to G_{t₀} and not G_t).
• t_I: the first time step in which the total number of times (since t₀) that vertices have moved from V_t^{≥} to V_t^{<}, for t₀ ≤ t ≤ t_I, reaches n^{2/3}.
• t_Δ: the first time step in which Δ(G_t[V_t^{<}]) > 5p^{-1}·ln n.
• t* := min{t_M, t_I, t_Δ, t₀ + n^{4/3}}: the time step in which we terminate this phase (in other words, if any of the conditions above happens, the phase finishes and the next phase starts).

By the definition above, each phase starts at time step t₀ and ends at time step t*, and hence has length at most n^{4/3}. We say that a phase is successful iff t* = t₀ + n^{4/3}. In the following, we prove that every phase is successful with at least a constant probability (this fact will be used later to argue that the cost of the preprocessing steps can be amortized over the large number of updates between them).

###### Lemma 3.1.

Any given phase is successful, i.e., has t* = t₀ + n^{4/3}, with probability at least 1/2.

###### Proof.

The lemma is proved in the following three claims, which bound t_M, t_I, and t_Δ, respectively. All claims crucially use the fact that the adversary is non-adaptive and oblivious, and hence we can fix its updates beforehand. (Here and below, the constants hidden in the exact choices of p and the phase length are set so that each of the three bad events has probability at most 1/6.)

###### Claim 3.2. Pr(t_M ≤ t₀ + n^{4/3}) ≤ 1/6.

###### Proof.

For any t > t₀, let e_t = (u_t, v_t) denote the edge updated by the adversary at time t. We consider the randomness in S. The probability that both u_t and v_t belong to S is exactly p². For any t, define an indicator random variable X_t which is 1 iff e_t has both endpoints in S, and let X be the sum of the X_t over the first n^{4/3} updates of the phase. In order for M to no longer be an MIS of G_t[S] for some t, at least one of these updates needs to have both endpoints in S. As such, Pr(t_M ≤ t₀ + n^{4/3}) ≤ Pr(X ≥ 1) ≤ E[X] = n^{4/3}·p² ≤ 1/6, where the second inequality is by the Markov bound. ∎

###### Claim 3.3. Pr(t_I ≤ t₀ + n^{4/3}) ≤ 1/6.

###### Proof.

For any t > t₀, let e_t = (u_t, v_t) denote the edge updated by the adversary at time t. By the randomness in S, the probability that at least one endpoint of e_t belongs to S is at most 2p. For any t, define an indicator random variable Y_t which is 1 iff at least one of u_t or v_t belongs to S, and let Y be the sum of the Y_t over the first n^{4/3} updates of the phase. The only way a vertex moves from V^{≥} to V^{<} is that an edge incident on this vertex with its other endpoint in M is deleted (and this vertex has no other edge to M either). For this to happen n^{2/3} times (as in the definition of t_I), we need at least n^{2/3} updates in this range with at least one endpoint in S (recall that M ⊆ S). As such, Pr(t_I ≤ t₀ + n^{4/3}) ≤ Pr(Y ≥ n^{2/3}) ≤ E[Y]/n^{2/3} = 2p·n^{4/3}/n^{2/3} ≤ 1/6, where the second inequality is by the Markov bound. ∎

###### Claim 3.4. Pr(t_Δ ≤ t₀ + n^{4/3}) ≤ 1/6.

###### Proof.

Fix the graphs G_t for t₀ ≤ t ≤ t₀ + n^{4/3}. Recall that S is a subset of the vertices, each chosen independently with probability p. Moreover, for t < t_M we have that M is an MIS of G_t[S], and hence V_t^{<} is indeed equal to the set U of Lemma 2.2 with respect to G_t (in addition to being so at time t₀). As such, by Lemma 2.2, with this choice of p, for any graph G_t, with probability 1 − 1/n⁴ we have Δ(G_t[V_t^{<}]) ≤ 5p^{-1}·ln n. Taking a union bound over these n^{4/3} graphs finalizes the proof, as the total failure probability n^{4/3}/n⁴ is far below 1/6. ∎

By applying a union bound to Claims 3.2, 3.3, and 3.4, the probability that t* < t₀ + n^{4/3} is at most 1/2, finalizing the proof of Lemma 3.1. ∎

We conclude this section with the following straightforward lemma.

###### Lemma 3.5.

PreProcess(G, p) takes O(n + m) time, where m denotes the number of edges of G_{t₀}.
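Before turning to the update algorithm, note that the three stopping conditions translate into simple counters. The following bookkeeping sketch is our own illustration (names and interface invented for exposition), showing what a phase needs to track in order to detect t_M, t_I, and t_Δ.

```python
class PhaseMonitor:
    """Tracks the termination conditions of one phase.
    threshold_moves ~ n^(2/3), max_len ~ n^(4/3), max_deg ~ 5 ln(n) / p."""
    def __init__(self, threshold_moves, max_len, max_deg):
        self.moves = 0                  # vertices moved from V>= to V<
        self.steps = 0                  # updates processed in this phase
        self.threshold_moves = threshold_moves
        self.max_len = max_len
        self.max_deg = max_deg

    def record_update(self, broke_sampled_mis, moved_to_low, low_part_max_deg):
        """Call once per edge update; returns True if the phase must end."""
        self.steps += 1
        self.moves += moved_to_low
        return (broke_sampled_mis                       # t_M reached
                or self.moves >= self.threshold_moves   # t_I reached
                or low_part_max_deg > self.max_deg      # t_Delta reached
                or self.steps >= self.max_len)          # phase length cap
```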
### The Update Algorithm

We now describe the update process during each phase. As argued before, each phase spans the time steps between t₀ and t*, where the latter is at most min{t_M, t_I, t_Δ}. As such, by the definition of these time steps, we have the following invariant.

###### Invariant 1.

At any time step t inside one phase:

(a) M is an MIS of the graph G_t[M ∪ V_t^{≥}];
(b) Δ(G_t[V_t^{<}]) ≤ 5p^{-1}·ln n.

Moreover, throughout the phase, at most n^{2/3} vertices are moved from V^{≥} to V^{<}.

We note that the first property holds simply because t ≤ t_M, so M is still an independent set, and as every vertex of V_t^{≥} is by definition incident on M, M is also an MIS of G_t[M ∪ V_t^{≥}]. The second property is by the definition of t_Δ, and the last one is by the definition of t_I.

Our update algorithm simply maintains the graph G_t[V_t^{<}] at all times and runs the basic deterministic algorithm of Lemma 2.1 on it to maintain an MIS I_t of G_t[V_t^{<}]. The full MIS maintained by the dynamic algorithm is then M ∪ I_t. We now describe the update algorithm in more detail. For any vertex, we maintain whether it currently belongs to M, V^{≥}, or V^{<}. Additionally, for any vertex in V^{≥}, we maintain a list of its neighbors in M. Finally, we also maintain the graph G_t[V_t^{<}], which involves storing, for each vertex of V_t^{<}, the set of all of its neighbors in V_t^{<}. Note that both edges and vertices (as opposed to only edges) may be inserted to or deleted from G_t[V_t^{<}] by the algorithm (and as such, we crucially use the fact that the algorithm in Lemma 2.1 can process vertex-updates as well).

Fix a time t and let e = (u, v) be the updated edge. We consider the following cases:

• Case 1. Updates that cannot impact the partitioning of vertices:
  • Case 1-a. Both u and v belong to M. This update means that t = t_M, as the graph G_t[S] is updated, and hence this update concludes this phase (and is processed in the next phase).
  • Case 1-b. Both u and v belong to V^{≥}. There is nothing to do in this case.
  • Case 1-c. Both u and v belong to V^{<}. We need to update the edge in the graph G_t[V_t^{<}] and pass this edge-update to the algorithm of Lemma 2.1 running on G_t[V_t^{<}].
  • Case 1-d. u belongs to V^{≥} and v belongs to V^{<} (or vice versa). There is nothing to do in this case.
• Case 2. Updates that can (potentially) change the partitioning of vertices:
  • Case 2-a. u is in M and v is in V^{≥} (or vice versa). If e is inserted, the partitioning remains the same and there is nothing to do except for updating the list of M-neighbors of v. However, if e is deleted, it might be that v needs to be removed from V^{≥} and inserted into V^{<} instead (if it is no longer incident on M). If so, we iterate over all neighbors of v and find the ones which are in V^{<}. We then insert v with all these incident edges into G_t[V_t^{<}] and pass this vertex-update to the algorithm of Lemma 2.1 on G_t[V_t^{<}].
  • Case 2-b. u is in M and v is in V^{<} (or vice versa). If e is deleted, the partitioning remains the same and there is nothing to do. However, if e is inserted, v needs to leave V^{<} and join V^{≥} (as it is now incident on M). If so, we delete v with all its incident edges in G_t[V_t^{<}] from G_t[V_t^{<}] and run the algorithm of Lemma 2.1 to process this vertex-update in G_t[V_t^{<}].

The cases above cover all possible updates. By the correctness of the deterministic algorithm in Lemma 2.1, I_t is a valid MIS of G_t[V_t^{<}]. Since all vertices in V_t^{≥} are incident to some vertex in M, it is immediate to verify that M ∪ I_t is an MIS of the graph G_t for any time step t by Invariant 1 (a code rendering of this case analysis follows below).
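The sketch below renders the case analysis in Python. The class LowPartMIS is a simplified stand-in for the counter-based algorithm of Lemma 2.1 (it resolves a newly created conflict by evicting one endpoint and letting neighbors re-settle, which is one valid policy, not necessarily the paper's); all names are illustrative.

```python
class LowPartMIS:
    """Stand-in for Lemma 2.1: each vertex keeps a counter of its
    MIS-neighbors and joins/leaves the MIS based on it."""
    def __init__(self):
        self.adj, self.in_mis, self.cnt = {}, {}, {}

    def _settle(self, v):
        # v joins the MIS iff it has no MIS-neighbor; propagate counters.
        if not self.in_mis[v] and self.cnt[v] == 0:
            self.in_mis[v] = True
            for w in self.adj[v]:
                self.cnt[w] += 1

    def _evict(self, v):
        self.in_mis[v] = False
        for w in self.adj[v]:
            self.cnt[w] -= 1
        for w in self.adj[v]:
            self._settle(w)

    def insert_vertex(self, v, nbrs):
        self.adj[v] = set(nbrs)
        for w in nbrs:
            self.adj[w].add(v)
        self.in_mis[v] = False
        self.cnt[v] = sum(self.in_mis[w] for w in nbrs)
        self._settle(v)

    def delete_vertex(self, v):
        was_in = self.in_mis.pop(v)
        for w in self.adj.pop(v):
            self.adj[w].discard(v)
            if was_in:
                self.cnt[w] -= 1
                self._settle(w)
        del self.cnt[v]

    def update_edge(self, kind, u, v):
        if kind == 'ins':
            self.adj[u].add(v); self.adj[v].add(u)
            self.cnt[u] += self.in_mis[v]; self.cnt[v] += self.in_mis[u]
            if self.in_mis[u] and self.in_mis[v]:
                self._evict(v)              # break the new conflict
        else:
            self.adj[u].discard(v); self.adj[v].discard(u)
            self.cnt[u] -= self.in_mis[v]; self.cnt[v] -= self.in_mis[u]
            self._settle(u); self._settle(v)

def process_update(kind, u, v, part, adj, low):
    """Route one edge update (kind in {'ins','del'}) through the cases;
    part[x] in {'M','HI','LO'} encodes M, V>= and V<.
    Returns True iff the phase must terminate (Case 1-a)."""
    if kind == 'ins':
        adj[u].add(v); adj[v].add(u)
    else:
        adj[u].discard(v); adj[v].discard(u)
    kinds = {part[u], part[v]}
    if kinds == {'M'}:                                  # Case 1-a
        return True
    if kinds == {'LO'}:                                 # Case 1-c
        low.update_edge(kind, u, v)
    elif kinds == {'M', 'HI'}:                          # Case 2-a
        x = u if part[u] == 'HI' else v
        if kind == 'del' and all(part[w] != 'M' for w in adj[x]):
            part[x] = 'LO'                              # x lost its last M-neighbor
            low.insert_vertex(x, [w for w in adj[x] if part[w] == 'LO'])
    elif kinds == {'M', 'LO'} and kind == 'ins':        # Case 2-b
        x = u if part[u] == 'LO' else v
        low.delete_vertex(x)
        part[x] = 'HI'
    # Cases 1-b and 1-d require no action.
    return False
```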
It only remains to analyze the running time of the update algorithm.

###### Lemma 3.6.

Let T denote the number of updates in a particular phase. The update algorithm maintains an MIS of the graph in this phase in Õ(T·n^{2/3} + n^{5/3}) time.

###### Proof.

The cost of bookkeeping the data structures in the update algorithm is O(1) per update. The two main time-consuming steps are hence maintaining an MIS in the graph G_t[V_t^{<}] and maintaining the graph G_t[V_t^{<}] itself. The former task, by Lemma 2.1, requires O(m₀ + K·Δ) time in total, where m₀ is the initial number of edges of G_{t₀}[V_{t₀}^{<}], K ≤ T + n^{2/3} is the number of updates passed to that algorithm, and Δ = Δ(G[V^{<}]), which by Invariant 1 is Õ(n^{2/3}). Hence, this part takes Õ(T·n^{2/3} + n^{5/3}) time in total. For the latter task, performing edge updates (in Case 1-c) can be done in O(1) time per update. Performing vertex-deletion updates (in Case 2-b) can also be done in Õ(n^{2/3}) time per update, as we only need to iterate over the neighbors of the updated vertex in G_t[V_t^{<}]. However, performing the vertex-insertion updates (in Case 2-a) requires iterating over all neighbors of the inserted vertex (in G_t, not only in G_t[V_t^{<}]) and hence takes O(n) time. Nevertheless, by Invariant 1, the total number of such vertex-updates is at most n^{2/3}, and hence their total running time is O(n^{5/3}). ∎

### Proof of Theorem 2

We are now ready to prove Theorem 2. The correctness of the algorithm immediately follows from Lemma 3.6; hence, it only remains to bound the amortized update time of the algorithm. Fix a sequence of K updates, and let P₁, …, P_ℓ denote the different phases of the algorithm over this sequence (i.e., each P_i corresponds to the updates inside one phase). The time spent by the overall algorithm in each phase is O(n²) in the preprocessing step (by Lemma 3.5) and Õ(|P_i|·n^{2/3} + n^{5/3}) in the update steps (by Lemma 3.6). As such, the total running time is Õ(K·n^{2/3} + ℓ·n²) (since the |P_i| sum to K). So to finalize the proof, we only need to bound the number of phases ℓ, which is done in the following two lemmas.

###### Lemma 3.7.

E[ℓ] = O(K/n^{4/3} + 1) (the randomness is taken over the coin tosses of PreProcess).

###### Proof.

Recall that a phase is called successful iff t* = t₀ + n^{4/3}. The probability that any phase is successful is at least 1/2 by Lemma 3.1. Moreover, since the randomness of PreProcess is independent between any two phases, the event that a phase is successful is independent of all previous phases (unless there are no updates left, in which case this is going to be the last phase). Notice that any successful phase includes n^{4/3} updates, and hence we can have at most K/n^{4/3} successful phases (even if we assume short phases include no updates). Consider the following randomized process: we have a coin which has at least a 1/2 chance of landing heads; how many times in expectation do we need to toss this coin (independently) to see K/n^{4/3} heads? It is immediate to verify that E[ℓ] is at most this number. It is also a standard fact that the expected number of coin tosses in this process is O(K/n^{4/3} + 1). Hence E[ℓ] = O(K/n^{4/3} + 1). ∎

By Lemma 3.7, the expected running time of the algorithm is Õ(K·n^{2/3} + (K/n^{4/3} + 1)·n²) = Õ(K·n^{2/3} + n²); the discussion of short update sequences below improves the additive term and proves the bound on the expected amortized update time in Theorem 2. We now prove the high probability bound on the running time.

###### Lemma 3.8.

With probability 1 − exp(−Ω(K/n^{4/3})), ℓ = O(K/n^{4/3}) (the randomness is taken over the coin tosses of PreProcess).

###### Proof.

Recall the coin-tossing process described in the proof of Lemma 3.7. Consider the event that among the first Θ(K/n^{4/3}) coin tosses, there are at most K/n^{4/3} heads. The probability of this event is at most exp(−Ω(K/n^{4/3})) by a simple application of the Chernoff bound. On the other hand, the probability of this event is at least the probability that among the first Θ(K/n^{4/3}) phases of the algorithm, there are at most K/n^{4/3} successful phases. This concludes the proof, as we cannot have more than K/n^{4/3} successful phases among K updates (each successful phase "consumes" n^{4/3} updates). ∎

By the choice of parameters, if K ≥ n^{4/3}, then by Lemma 3.8 the running time of the algorithm is Õ(K·n^{2/3}) with high probability, finalizing the proof of this part as well. If however K < n^{4/3}, we only need one successful phase to process all the updates. In this case, since every phase is successful with constant probability, with high probability we only need to consider O(log n) phases before we are done. Moreover, note that when the number of updates is at most n^{4/3}, the total number of edges in the graph is also only O(n^{4/3}) (recall that we start from an empty graph), and the preprocessing time is O(n^{4/3}) per phase, as opposed to O(n²). This means that the total running time in this case is at most Õ(n^{4/3}) (for preprocessing) plus Õ(K·n^{2/3}) (time spent inside the phases). This concludes the proof of Theorem 2. ∎
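As an aside, the coin-tossing process used in Lemmas 3.7 and 3.8 (toss a coin with success probability at least 1/2 until h heads appear) is easy to sanity-check numerically. The snippet below is purely illustrative.

```python
import random

def tosses_until_heads(h, p_success=0.5):
    """Number of independent coin tosses needed to see h heads."""
    tosses = heads = 0
    while heads < h:
        tosses += 1
        heads += random.random() < p_success
    return tosses

# With h = K / n^(4/3) successful phases needed, the number of phases behaves
# like this process: mean ~ h / p_success = 2h, and tightly concentrated.
trials = [tosses_until_heads(1000) for _ in range(100)]
print(sum(trials) / len(trials))   # ~ 2000 in expectation
```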
## 4 An Improved Õ(m^{1/3})-Update Time Algorithm

We now show that one can alter the algorithm in Theorem 2 to obtain improved performance for sparser graphs. Formally:

###### Theorem 3.

Starting from an empty graph, a maximal independent set can be maintained via a randomized algorithm over any sequence of edge insertions and deletions in Õ(m^{1/3}) amortized update time, both in expectation and with high probability, where m denotes the dynamic number of edges.

The following lemma is a somewhat weaker-looking version of Theorem 3. However, we prove next that this lemma is all we need to prove Theorem 3.

###### Lemma 4.1.

Starting with any arbitrary graph on m̄ edges, a maximal independent set can be maintained via a randomized algorithm over any sequence of edge insertions and deletions in Õ(m̄^{1/3}) amortized update time, in expectation and with high probability, as long as the number of edges in the graph remains within a factor of 2 of m̄.

We first prove that this lemma implies Theorem 3. The proof of this part is standard (see, e.g., [3]) and is only provided for completeness.

###### Proof of Theorem 3.

For simplicity, we define m̄ := 1 in the case of empty graphs. The idea is to run the algorithm of Lemma 4.1 until the number of edges deviates from m̄ by a factor of more than 2, upon which we terminate the algorithm and restart the process with the new value of m̄. As the total number of updates between two consecutive restarts is Ω(m̄), we can apply Lemma 4.1 and obtain a bound of Õ(m̄^{1/3}) on the expected amortized update time of each run. Moreover, we can "charge" the time needed to restart the process to the updates happening in that run and obtain the final bound. ∎
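This restart-on-density-change wrapper is a standard doubling trick. A sketch under our own naming conventions, abstracting the inner algorithm behind a small factory interface, could look as follows.

```python
class DensityResetWrapper:
    """Runs an inner dynamic-MIS algorithm tuned for ~m_bar edges and
    restarts it from scratch whenever the edge count leaves
    [m_bar / 2, 2 * m_bar]. The O(m_bar) restart cost is charged to the
    Omega(m_bar) updates that must occur between restarts."""
    def __init__(self, inner_factory):
        self.inner_factory = inner_factory    # builds the inner algorithm
        self.edges = set()
        self._restart()

    def _restart(self):
        self.m_bar = max(len(self.edges), 1)
        self.inner = self.inner_factory(set(self.edges))

    def update(self, kind, u, v):
        if kind == 'ins':
            self.edges.add(frozenset((u, v)))
        else:
            self.edges.discard(frozenset((u, v)))
        m = len(self.edges)
        if not (self.m_bar / 2 <= m <= 2 * self.m_bar):
            self._restart()                   # density changed by factor > 2
        else:
            self.inner.update(kind, u, v)
```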
The rest of this section is devoted to the proof of Lemma 4.1. The algorithm in Lemma 4.1 is similar to the one in Theorem 2 and in particular again executes multiple phases, each starting with the same preprocessing step (although with a different choice of parameters), followed by the update algorithm throughout the phase. We now describe the preprocessing step and the update algorithm inside each phase. Recall that throughout this proof, m̄ denotes a 2-approximation to the number of edges in the graph.

### The Preprocessing Step

Let t₀ again denote the first time step in this phase. The preprocessing step of the new algorithm is exactly as before, running PreProcess(G_{t₀}, p) for p approximately m̄^{-1/3} (this value of p is different from the one in Section 3, which was approximately n^{-2/3}). We define the partitioning of the vertices as before. However, we change the stopping criteria of the phase and the definition of the time steps as follows:

• t_M: the first time step in which M is no longer an MIS of G_t[S] (recall that M and S were computed with respect to G_{t₀} and not G_t).
• t_I: the first time step in which the total number of times (since t₀) that vertices have moved from V_t^{≥} to V_t^{<}, for t₀ ≤ t ≤ t_I, reaches m̄^{1/3}.
• t_Δ: the first time step in which Δ(G_t[V_t^{<}]) > 5p^{-1}·ln n.
• t* := min{t_M, t_I, t_Δ, t₀ + m̄^{2/3}}: the time step in which we terminate this phase.

We again say that a phase is successful if t* = t₀ + m̄^{2/3}, i.e., we process m̄^{2/3} updates in the phase before terminating. Similarly to Lemma 3.1, we prove that each phase is successful with at least a constant probability.

###### Lemma 4.2.

Any given phase is successful with probability at least 1/2.

###### Proof.

The proof is quite similar to that of Lemma 3.1 and is based on the fact that the adversary is non-adaptive and oblivious.

###### Claim 4.3. Pr(t_M ≤ t₀ + m̄^{2/3}) ≤ 1/6.

###### Proof.

The proof is identical to that of Claim 3.2, substituting the new values of p and the phase length. ∎

###### Claim 4.4. Pr(t_I ≤ t₀ + m̄^{2/3}) ≤ 1/6.

###### Proof.

Again, the proof is identical to that of Claim 3.3, substituting the new values of p, the phase length, and the bound m̄^{1/3} on the number of moves. ∎

###### Claim 4.5. Pr(t_Δ ≤ t₀ + m̄^{2/3}) ≤ 1/6.

###### Proof.

Fix the graphs G_t for t₀ ≤ t ≤ t₀ + m̄^{2/3}, and note that each G_t has at most O(m̄) vertices with non-zero degree (as the number of edges in G_t is at most 2m̄); we can ignore vertices with degree zero, as they do not affect the following calculation. By Lemma 2.2, with this choice of p, for any graph G_t (with at most O(m̄) non-isolated vertices), with probability 1 − 1/poly(m̄) we have Δ(G_t[V_t^{<}]) ≤ 5p^{-1}·ln n. Taking a union bound over these m̄^{2/3} graphs finalizes the proof. ∎

By applying a union bound to Claims 4.3, 4.4, and 4.5, the probability that t* < t₀ + m̄^{2/3} is at most 1/2, finalizing the proof of Lemma 4.2. ∎

We conclude this section by noting that, by Lemma 3.5, the preprocessing step of this algorithm takes O(n + m̄) time. However, a simple trick reduces the running time to only O(m̄), as follows.

###### Lemma 4.6.

The preprocessing step of the new algorithm can be implemented in O(m̄) time.

###### Proof.

Initially, there are at most O(m̄) vertices with non-zero degree in the preprocessing step. Hence, instead of picking the set S from all of V, we only pick it from the vertices with non-zero degree, which can be done in O(m̄) time. Later in the algorithm, whenever a so-far isolated vertex v is given an edge in this phase, we toss a coin and decide to add v to S with probability p, which can be done in O(1) time. We then process this update as before, as if this new vertex had always belonged to S. It is immediate to verify that this does not change any part of the algorithm. ∎

### The Update Algorithm

We now describe the new update algorithm. Firstly, similarly to Invariant 1 in the previous section, here also by the definition of each phase we have:

###### Invariant 2.

At any time step t inside one phase:

(a) M is an MIS of the graph G_t[M ∪ V_t^{≥}];
(b) Δ(G_t[V_t^{<}]) ≤ 5p^{-1}·ln n = Õ(m̄^{1/3}).

Moreover, throughout the phase, at most m̄^{1/3} vertices are moved from V^{≥} to V^{<}.

The update algorithm is similar to the one in the previous section: we maintain the graph G_t[V_t^{<}] and use the algorithm of Lemma 2.1 to maintain an MIS in G_t[V_t^{<}]. The main difference is in how we maintain the graph G_t[V_t^{<}] (the rest is exactly as before). In order to do this, we present a simple data structure.

#### The Data Structure.

As before, we maintain the list of all neighbors of each vertex, as well as which of the sets M, V^{≥}, or V^{<} each vertex belongs to. Clearly, this information can be updated in O(1) time per update. In addition to the partition (M, V^{≥}, V^{<}), we also partition the vertices based on their degree in the original graph at the beginning of the phase, i.e., in G_{t₀}. Specifically, we define V_high to be the set of vertices with degree at least m̄^{2/3} in G_{t₀} and V_low to be the remaining vertices. Note that this partitioning is defined with respect to the graph G_{t₀} and does not change throughout the phase. We have the following simple claim.

###### Claim 4.7.

Throughout one phase:

1. |V_high| = O(m̄^{1/3}).
2. For any vertex v ∈ V_low and any graph G_t for t₀ ≤ t ≤ t*, the degree of v in G_t is O(m̄^{2/3}).

###### Proof.

The first part holds simply because each vertex in V_high has degree at least m̄^{2/3} and the total number of edges is at most 2m̄. The second part holds because the total number of updates inside a phase is at most m̄^{2/3} by the definition of t*, and hence even if they are all incident on a single vertex of V_low, the degree of that vertex is at most m̄^{2/3} + m̄^{2/3} = O(m̄^{2/3}), finalizing the proof. ∎

Finally, for any vertex in V_high, we maintain a list of all of its neighbors in V^{<}, as follows: whenever a vertex moves between V^{≥} and V^{<}, it iterates over all vertices in V_high and informs them of this update. This way, vertices in V_high are always aware of their neighborhood in V^{<}. The remaining vertices have a relatively small degree, and hence whenever needed, we can simply iterate over all their neighbors and find the ones in V^{<} (see the sketch below).
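A sketch of this degree-split bookkeeping, with illustrative names and the m̄^{2/3} threshold of Claim 4.7:

```python
class DegreeSplitIndex:
    """Keeps, for each high-degree vertex, an explicit set of its
    neighbors currently in V<; low-degree vertices just scan their
    (small) neighborhoods on demand."""
    def __init__(self, adj, m_bar, part):
        thr = m_bar ** (2.0 / 3.0)
        self.adj, self.part = adj, part        # part[v] in {'M','HI','LO'}
        self.high = {v for v in adj if len(adj[v]) >= thr}
        self.low_nbrs = {v: {u for u in adj[v] if part[u] == 'LO'}
                         for v in self.high}

    def notify_move(self, v, now_low):
        # Called whenever v moves between V>= and V<: O(|V_high|) work.
        for h in self.high:
            if v in self.adj[h]:
                (self.low_nbrs[h].add if now_low
                 else self.low_nbrs[h].discard)(v)

    def neighbors_in_low(self, v):
        # O(1)-size lookup for high vertices; O(m_bar^(2/3)) scan otherwise.
        if v in self.high:
            return set(self.low_nbrs[v])
        return {u for u in self.adj[v] if self.part[u] == 'LO'}
```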
As a result of this, we have the following invariant.

###### Invariant 3.

At any time step t inside one phase, after updating the data structure:

1. For either endpoint of the updated edge, we can find the list of all of its neighbors that belong to V_t^{<} in O(m̄^{2/3}) time.
2. Updating the data structure after each update takes O(m̄^{1/3}) time.

###### Proof.

For vertices in V_high, we have maintained the list of their V^{<}-neighbors explicitly, and hence we can directly return this list. For vertices in V_low, we can simply iterate over their O(m̄^{2/3}) neighbors (by Claim 4.7), check which ones belong to V^{<}, and create the list in O(m̄^{2/3}) time. Finally, the update time is O(m̄^{1/3}), as there are only O(m̄^{1/3}) vertices in V_high (by Claim 4.7), and each vertex only needs to inform these vertices upon an update. ∎

#### Processing Each Update.

We process each update exactly as in the previous section, with the difference that we use Invariant 3 for maintaining the graph G_t[V_t^{<}]. To be more specific, in Case 2-a, where a vertex may be inserted into V^{<}, we use the list from Invariant 3 to find all neighbors of this vertex in V^{<}, and then pass this vertex-update to the algorithm of Lemma 2.1 on G_t[V_t^{<}]. The remaining cases are handled exactly as before. The correctness of the algorithm follows as before, so we only analyze the running time of the update algorithm.

###### Lemma 4.8.

Fix any phase and let T denote the number of updates inside this phase. The update algorithm maintains an MIS of the input graph (deterministically) in Õ(T·m̄^{1/3} + m̄) time.

###### Proof.

By Invariant 3, updating the data structure takes O(T·m̄^{1/3}) time overall. Maintaining the MIS in the graph G_t[V_t^{<}] requires Õ(m̄ + T·m̄^{1/3}) time by Lemma 2.1, as Δ(G[V^{<}]) = Õ(m̄^{1/3}) by Invariant 2. Finally, by Invariant 3, we can find the V^{<}-neighbors of any updated vertex in O(m̄^{2/3}) time. Since the total number of times we need to find these neighbors is at most m̄^{1/3} by Invariant 2 (as we only need this operation when a vertex moves from V^{≥} to V^{<}), the total time needed for this part is O(m̄), finalizing the proof. ∎

### Proof of Lemma 4.1

The correctness of the algorithm immediately follows from Lemma 4.8; hence, it only remains to bound the amortized update time of the algorithm. Fix a sequence of K updates, and let P₁, …, P_ℓ denote the different phases of the algorithm over this sequence (i.e., each P_i corresponds to the updates inside one phase). The time spent by the overall algorithm in each phase is O(m̄) in the preprocessing step (by Lemma 4.6) and Õ(|P_i|·m̄^{1/3} + m̄) in the update steps (by Lemma 4.8). As such, the total running time is Õ(K·m̄^{1/3} + ℓ·m̄) (since the |P_i| sum to K). So to finalize the proof, we only need to bound the number of phases ℓ, which we do in the following lemma.

###### Lemma 4.9.

E[ℓ] = O(K/m̄^{2/3} + 1) (the randomness is taken over the coin tosses of PreProcess).

###### Proof.

Recall that a phase is called successful iff t* = t₀ + m̄^{2/3}. The probability that any phase is successful is at least 1/2 by Lemma 4.2. Moreover, since the randomness of PreProcess is independent between any two phases, the event that a phase is successful is independent of all previous phases (unless there are no updates left, in which case this is going to be the last phase). Notice that any successful phase includes m̄^{2/3} updates, and hence we can have at most K/m̄^{2/3} successful phases (even if we assume the other phases include no updates). Consider the following randomized process: we have a coin which has at least a 1/2 chance of landing heads; how many times in expectation do we need to toss this coin (independently) to see K/m̄^{2/3} heads? It is immediate to verify that E[ℓ] is at most this number. It is also a standard fact that the expected number of coin tosses in this process is O(K/m̄^{2/3} + 1). Hence E[ℓ] = O(K/m̄^{2/3} + 1). ∎

By Lemma 4.9, the expected running time of the algorithm is Õ(K·m̄^{1/3} + m̄), concluding the proof of the expectation bound in Lemma 4.1.
The extension to the high probability result is now exactly the same as in Lemma 3.8, as the number of updates handled by a single run is Ω(m̄), which is much larger than the phase length m̄^{2/3}. This concludes the proof of Lemma 4.1. ∎

## 5 Main Algorithm: An Õ(√n)-Update Time Algorithm

We now present our main algorithm for maintaining an MIS in a dynamic graph with Õ(√n) expected amortized update time.

###### Theorem 4.

Starting from an empty graph on n vertices, a maximal independent set can be maintained via a randomized algorithm over any sequence of edge insertions and deletions in Õ(√n) amortized update time, in expectation and with high probability.

The improvement in Theorem 4 over our previous algorithm in Theorem 2 is obtained by using a nested collection of phases instead of just one phase. Let L denote the number of levels (to be fixed later). We maintain L subgraphs of the input graph at any time step of the algorithm, referred to as level graphs. For any level ℓ ∈ [L], we compute and maintain the subgraph G^ℓ at level ℓ in a level-ℓ phase. A phase, as before, consists of a preprocessing step, followed by update steps during the phase, and a termination criterion for the phase. Moreover, the phases across different levels are nested, in the sense that a level-1 phase consists of multiple level-2 phases, a level-2 phase contains multiple level-3 phases, and so on. We now describe our algorithm in more detail, starting with the nested family of level graphs.

### Level Graphs

Our approach is based on computing and maintaining a collection of graphs G¹, …, G^L, referred to as level graphs, which are subgraphs of G, and a collection of independent sets M¹, …, M^L. We maintain the following main invariant in our algorithm (we prove the different parts of this invariant in this and the next two sections).

###### Invariant 4 (Main Invariant).

At any time step t and for any ℓ ∈ [L]:

1. M¹ ∪ ⋯ ∪ M^ℓ is a maximal independent set of G¹ ∪ ⋯ ∪ G^ℓ.
2. The maximum degree of G^ℓ is suitably bounded (for parameters to be determined later).
3. G^ℓ is maintained explicitly by the algorithm, with adjacency-list access for every vertex.

We start by defining the three main collections of vertex sets M^ℓ, V_t^{≥,ℓ}, and V_t^{<,ℓ} used in our algorithm (when clear from the context, or irrelevant, we may drop the subscript t from these sets). For simplicity of notation, we also define M⁰ := ∅ and V_t^{<,0} := V for all t. We design these sets carefully in the next section to satisfy the properties below.

###### Proposition 5.1.

At any time step t:

1. The sets M¹, …, M^L are all pairwise disjoint.
2. The sets V_t^{<,0}, V_t^{<,1}, …, V_t^{<,L} are nested, i.e., V_t^{<,0} ⊇ V_t^{<,1} ⊇ ⋯ ⊇ V_t^{<,L}.
3. For any fixed ℓ, the sets M¹ ∪ ⋯ ∪ M^ℓ, V_t^{≥,ℓ}, and V_t^{<,ℓ} partition V.

For any ℓ ∈ [L], the level-ℓ graph G^ℓ is defined as the induced subgraph of G_t on V_t^{<,ℓ-1}, i.e., G^ℓ := G_t[V_t^{<,ℓ-1}]. Moreover, M^ℓ is chosen carefully from the graph G^ℓ, so that in particular M^ℓ ⊆ V_t^{<,ℓ-1} and M^ℓ is an MIS of a suitable subgraph of G^ℓ. We further have:

###### Proposition 5.2.

At any time step t:

1. For any ℓ ∈ [L], the independent set M^ℓ is an MIS within the level-ℓ graph G^ℓ.
2. Every vertex of V_t^{≥,ℓ} is incident to some vertex of M¹ ∪ ⋯ ∪ M^ℓ, and no vertex of M^ℓ has a neighbor in M¹ ∪ ⋯ ∪ M^{ℓ-1}.

Before we move on from this section, we show that Propositions 5.1 and 5.2 imply Part (1) of Invariant 4.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9405323266983032, "perplexity": 537.8899634724245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358233.7/warc/CC-MAIN-20211127193525-20211127223525-00536.warc.gz"}
http://stirlingtoner.co.uk/1j8lz1w/bcc9db-negative-and-positive-semidefinite
A quadratic form over a real vector space V is definite if it has the same sign (always positive or always negative) for every nonzero vector of V; accordingly, it is called positive-definite or negative-definite. An indefinite quadratic form takes both positive and negative values and is called an isotropic quadratic form. Quadratic forms correspond one-to-one to symmetric bilinear forms over the same space, and a quadratic form can be written in terms of matrices as $Q(x) = x^{\mathsf T} A x$.

Definition: Let $A$ be an $n \times n$ symmetric matrix. Then:
a) $A$ is said to be positive definite if $x^{\mathsf T} A x > 0$ for all $x \neq 0$;
b) $A$ is negative definite if $x^{\mathsf T} A x < 0$ for all $x \neq 0$, and positive (negative) semidefinite if $x^{\mathsf T} A x \ge 0$ (respectively $\le 0$) for all $x$;
c) $A$ is said to be indefinite if it is neither positive semidefinite nor negative semidefinite.

It is useful to think of positive definite matrices as analogous to positive numbers and positive semidefinite matrices as analogous to nonnegative numbers. Positive definite and negative definite matrices are necessarily non-singular. A Hermitian matrix is positive definite, negative definite, positive semidefinite, or negative semidefinite if and only if all of its eigenvalues are positive, negative, non-negative, or non-positive, respectively. If a real or complex matrix is positive definite, then all of its principal minors are positive. A matrix $A$ is positive semidefinite iff it can be written as $A = R^{\mathsf T} R$ for some possibly rectangular matrix $R$ with independent columns; when $A$ is positive definite, $R$ can be taken upper triangular, giving the Cholesky factorization $A = Y^{\mathsf T} Y$.

The square of the Euclidean norm in n-dimensional space, the most commonly used measure of distance, $\|x\|^2 = x_1^2 + \cdots + x_n^2$, is itself a positive definite quadratic form. The bivariate quadratic form $Q(x_1, x_2) = c_1 x_1^2 + c_2 x_2^2 + 2 c_3 x_1 x_2$ appears in the context of conic sections centered on the origin: it is definite when $c_1 c_2 - c_3^2 > 0$ (positive-definite if moreover $c_1 > 0$, negative-definite if $c_1 < 0$) and degenerate when $c_1 c_2 - c_3^2 = 0$.

These notions lend themselves readily to optimization problems. If $f'(x) = 0$ and the Hessian $H(x)$ is positive definite, then $f$ has a strict local minimum at $x$ (the matrix equivalent of "concave up"); if $H(x)$ is negative definite, $f$ has a strict local maximum at $x$; and if $H(x)$ has both positive and negative eigenvalues, $x$ is a saddle point. If a quadratic form is augmented with linear terms, as $q(x) = x^{\mathsf T} A x + b^{\mathsf T} x$ where $b$ is an $n \times 1$ vector of constants, then the first-order conditions for a maximum or minimum are found by setting the matrix derivative to the zero vector (assuming $A$ is nonsingular), and if $A$ is positive-definite the second-order conditions for a minimum are met at this point. Correlation matrices have to be positive semidefinite, and there are a number of ways to adjust an estimated matrix so that it is. The positive semidefinite matrices with non-negative entries form a subset of all non-negative matrices, called the doubly non-negative matrices; for such an $A \in M_{n \times n}(\mathbb{R})$ with $n \ge 2$ and $f(x) = x^\alpha$ applied entrywise, if $\alpha \ge n - 2$ then $f(A)$ is positive semidefinite.

References: Rajendra Bhatia, Positive Definite Matrices, Princeton University Press, Princeton, NJ, USA, 2007. Nicholas J. Higham, Computing a nearest symmetric positive semidefinite matrix, Linear Algebra Appl. 103:103–118, 1988.
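The eigenvalue characterization above gives a simple numerical test. The following check uses numpy; the tolerance handling is our own choice, not part of the definitions.

```python
import numpy as np

def classify_definiteness(A, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)            # real eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    if np.all(w <= tol):
        return "negative semidefinite"
    return "indefinite"

print(classify_definiteness(np.array([[2.0, -1.0], [-1.0, 2.0]])))  # positive definite
print(classify_definiteness(np.array([[0.0, 1.0], [1.0, 0.0]])))    # indefinite
```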
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9708465337753296, "perplexity": 586.528263156224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488560777.97/warc/CC-MAIN-20210624233218-20210625023218-00170.warc.gz"}
http://capeandcrown.com/15b2p/radius-of-a-rectangle-ef3619
Here the Greek letter π represents a constant, approximately equal to 3.14159, which is equal to the ratio of the circumference of any circle to its diameter. A rectangle is a 2d shape which has four sides and four vertices. Radius of a circle inscribed Triangle Equilateral triangle Isosceles triangle Right triangle Square Rhombus To calculate Diagonal of a Rectangle when length and perimeter are given, you need Perimeter (P) and Length (l) . A semicircle of radius r=5x is inscribed in a rectangle so that the diameter of the semicircle is the lenght of - Answered by a verified Math Tutor or Teacher We use cookies to give you the best possible experience on our website. Ahhh. Circumcircle radius of a rectangle r = d/2. Please help! Finding the radius of an arc or circle segment given its height and width. Posted in Fluid Dynamics Hydraulic radius, abbreviated as $$r_h$$, is the area cross-section of water in a pipe or channel divided by the . Custom Lengths Available Upon Request *MILL RUN ONLY IN STOCK NO STOCK - … Solved: How do you control the corner radius of a rounded rectangle box? Say, if its sides are a and b, $$(2r)^2 = a^2 + b^2 Find the radius of a circle whose area is equal to the area of a rectangle with sides measuring \(44 \:\text{cm}$$ and $$14 \:\text{cm}$$. Find the ratio of their circumference. Area Of A Rectangle Calculator Area of rectangle is the region covered by the rectangle in a two-dimensional plane. 1 … See Exercise 1.1.62.) Calculator to make the math easy Circular arcs turn up frequently in the real world, such as the top of the window shown on the right. The diameter must be the hypotenuse of the right angled triangle of the rectangle. rectangle (2.1). Q5 The length & breadth of a rectangle and radius of a circle are input through the keyboard. The area of a circle of radius 20 feet is π × 20 2 square feet so the area of the rectangle with rounded Rectangle Isosceles trapezoid Regular hexagon Regular polygon All formulas for radius of a circumscribed circle. The perimeter of a rectangle is 2 times length plus 2 times times width, so (2*38)+(2*28) = 132 cm. Find the radius of the semicircle if the area of the window is to b… in this question. We will pass those values to the function arguments to calculate the area of a rectangle. John cut out a The length & breadth of a rectangle and radius of a circle are input through the keyboard. If radius is non-zero, the rectangle will be painted as a rounded rectangle, otherwise it will be painted as a normal rectangle. I keep forgetting how to find the radius of a rectangle! Keep in mind that the diameter of the circle is also equal to the length of the rectangle. I have been looking for an answer for about 15 minutes now! Store it in two different variables say length and width. Moment of Inertia Moment of inertia, also called the second moment of area, is the product of area and the square of its moment arm about a reference axis. IN THE SHAPE OF A RECTANGLE. Area of a Rectangle when length and breadth are given is the region covered by the rectangle in a two-dimensional plane. I really need help with my math homework. I have a simple Rectangle with rounded corners (radius), but want to apply a gradient for it's background color. The diagonal of a rectangle is a straight line joining two opposite corners of a rectangle is calculated using Diagonal=sqrt((2*(Length)^2)-(Perimeter*Length)+((Perimeter)^2/4)). WITH FOUR INTERNAL AND EXTERNAL RADIUS CORNERS. It also calculates the area and the circumference of the circle. 
A Norman window has the shape of a rectangle surmounted by a semicircle. Given this configuration : We're given that the rectangle is of the dimensions 20 cm by 10 cm, and we have to find the radius of the circle. Write a program to calculate area & perimeter of the rectangle and area & circumference of the circle. The measured value of l is 3 c m using a meter scale with least count 0. Rounded Rectangle Calculator Calculations at a rounded rectangle, a rectangle with round corners.Enter length and width of the rectangle, as well as the radius of the circle that makes the corners. Circles inscribed in a rectangle are tangent at distinct points; find the radius of the smaller circle based on the dimensions of the rectangle. If the perimeter of the window is $30 ft$, find the The radius of a regular polygon is the distance from the center to any vertex.It will be the same for any vertex. Logic to find area of rectangle Below is the step by step descriptive logic to find area of rectangle - Input length and width of rectangle. The smaller circle is clearly the largest when it is in the corner, tangent to two … 2 Three touching circles inscribed in a rectangle [2.1] VS Take a full smaller circle with a given radius, the trivial case. Replace MyHomePage class with this code: class MyHomePage extends StatelessWidget { @override Widget build The radius of the circle is 6, and therefore the diameter is 12. The “ Rounded Rectangle ” dialog Radius (%) You can enter the radius of the rounded corner in percent by using a slider or a text field. The program helps you calculate the area and perimeter of a rectangle. The ratio of radii of two circles is $$2:3$$. The opposite sides of the rectangle are equal and parallel to each other. If we call the length of the rectangle … Figure 16.47. Since the radius of the circle is r, the diameter must be 2r. Take a circle of radius 20 feet, cut it into four quarter circles and place the quarter circles in the missing corners of the rectangle. Choose the number of decimal Please advise. A church window consisting of a rectangle topped by a semicircle is to have a perimeter p . Click hereto get an answer to your question ️ In the above figure, OCDE is a rectangle inscribed in a quadrant of a circle of radius 10 cm . If OE = 2√(5) , find the area of the rectangle. 10 points for best (Thus the diameter of the semicircle is equal to the width of the rectangle. The values of Length, Breadth and Radius are to … This value is a … Write a program to calculate the area & perimeter of the rectangle, and the area & circumference of the circle. - 3487818 No adjustable rounded corners are available in a mask, however, if you make your mask smaller than you want then you can use the Rectangle { id: rect width: 200 height: 200 radius: 20 … Now imagine the rectangle. Apply formula to calculate rectangle area. Modify border radius of rounded-corner rectangle shape? I can't find the place where you adjust the radius of the corners. All the four angles of the rectangle are right With the above equations, we can now derive various diagonal of a rectangle formulas that are used by this diagonal of a rectangle calculator: Given length and width : d = √(l² + w²) , OR create a new rectangle with options to adjust corner ratios. The radius of curvature of a concave mirror measured by a spherometer is given by R = 6 h l 2 + 2 h . Hello, Photoshop CC version 19.1.5 I used the rounded rectangle tool to create a rectangle, and I want to adjust the radius of the corners. 
Steps to Reproduce: This bug occurred on 1.26.0-1.0.pre-dev as well as on 1.22.5-stable. Run flutter create bug. Moment of inertia about the x-axis: $\displaystyle I_x = \int y^2 \, dA$ 6 months ago 24 June 2020 8 replies 1940 views T Userlevel 3 tony k Apprentice 2 replies: Since the radius is fixed, at larger sizes it becomes too pronounced for my taste. The same radius is used by all 4 corners; I know there is a way to adjust the radius of the corners, or to create a new rectangle with options to adjust corner ratios. The place where you adjust the radius is in the widget code: class MyHomePage extends StatelessWidget { @override Widget ... } I have been looking for an answer for about 15 minutes now; 10 points for the best answer.
Hydraulic Radius of a Rectangular Channel. Written by Jerry Ratzlaff on 19 February 2018.
A rectangle is a 2D shape which has four sides and four vertices; opposite sides of a rectangle are equal and parallel to each other. The area of a rectangle is the region covered by the rectangle in a two-dimensional plane. The radius of a rectangle is the radius of the polygon's circumcircle, which is the circle that passes through every vertex. In this role it is the distance from the center to any vertex, and it will be the same for any vertex: it is the hypotenuse of the right-angled triangle whose legs are half the length and half the width of the rectangle. This is often used to find the radius of an arch, i.e. to calculate the radius of an arc or circle segment given its height and width. In geometry, the area enclosed by a circle of radius r is πr², and the circumference of the circle is 2·π·radius.
Related exercises collected in the thread:
1) Find the dimensions of a rectangle with perimeter 100 feet whose area is as large as possible.
2) A church window consists of a rectangle topped by a semicircle, where the diameter of the semicircle is equal to the width of the rectangle; so if the radius of the semicircle is r, the width must be 2r. The window is to have a perimeter P; to find the radius of the semicircle you need the perimeter (P) and the length (l).
3) The ratio of the radii of two circles is \(2:3\). Or take a full smaller circle with a given radius, the trivial case.
4) The length and breadth of a rectangle and the radius of a circle are input through the keyboard; write a program to calculate the area and perimeter of the rectangle and the area and circumference of the circle, storing the inputs in two different variables, say length and breadth. A Python version prints the results with print("\n Area of a Rectangle is: %.2f" %Area) and print(" Perimeter of Rectangle is: %.2f" %Perimeter); this Python program allows the user to enter the width and height of a rectangle (a completed, runnable version follows below).
5) The measured value of l is 3 cm using a meter scale with least count 0…
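The Python fragments above belong to a simple area/perimeter program; here is a minimal runnable sketch that also adds the circumradius formula described in the thread (the names `width`, `height`, and `circumradius` are our own, not from the original post):

```python
import math

# Read the rectangle's dimensions from the user.
width = float(input("Please enter the width of a rectangle: "))
height = float(input("Please enter the height of a rectangle: "))

# Area and perimeter of the rectangle.
area = width * height
perimeter = 2 * (width + height)

# "Radius" of the rectangle: the radius of its circumscribed circle,
# i.e. half the diagonal (the distance from the center to any vertex).
circumradius = math.sqrt(width ** 2 + height ** 2) / 2

print("\n Area of a Rectangle is: %.2f" % area)
print(" Perimeter of Rectangle is: %.2f" % perimeter)
print(" Circumradius of Rectangle is: %.2f" % circumradius)
```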
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8280166983604431, "perplexity": 510.3880997560199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039398307.76/warc/CC-MAIN-20210420122023-20210420152023-00540.warc.gz"}
http://mathhelpforum.com/calculus/10856-what-finite-variation-function.html
Math Help - what is the finite variation function? 1. what is the finite variation function? I am learning stochastic processes for my research. However, I am confused with the whole continuous - discontinuous thing. __________________________________________________ _________________ Also there is a small derivation: if we define a discontinuous part gd of g as gd(t) and the continuous part gc of g by gc(t) = g(t) - gd(t), it is clear gd only changes by jumps. What is meant by this? Thank you for any help, this forum is a gold mine. 2. Originally Posted by chogo Also there is a small derivation: if we define a discontinuous part gd of g as gd(t) and the continuous part gc of g by gc(t) = g(t) - gd(t), it is clear gd only changes by jumps. What is meant by this? It means you can decompose a function with jump discontinuities (provided it is reasonably well behaved) into a continuous function and a piecewise constant function that includes all of the jumps. For example, consider the function: $ f(x)=\left\{ \begin{array}{cc} x^2, & x<0 \\ 1+x, & x \ge 0 \end{array} \right. $ Then we have a continuous function: $ fc(x)=\left\{ \begin{array}{cc} x^2, & x<0 \\ x, & x \ge 0 \end{array} \right. $ and a discontinuous function $ fd(x)=\left\{ \begin{array}{cc} 0, & x<0 \\ 1, & x \ge 0 \end{array} \right. $ such that: $f(x)=fc(x)+fd(x)$ RonL 3. Originally Posted by chogo
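A quick numerical check of the decomposition in the reply; a sketch (not from the thread) using the example f above:

```python
import numpy as np

def f(x):
    # Function with a jump of size 1 at x = 0.
    return np.where(x < 0, x**2, 1 + x)

def fd(x):
    # Piecewise-constant part: carries the jump at x = 0.
    return np.where(x < 0, 0.0, 1.0)

def fc(x):
    # Continuous remainder: the original function minus its jumps.
    return f(x) - fd(x)

x = np.linspace(-2.0, 2.0, 9)
assert np.allclose(f(x), fc(x) + fd(x))  # f = fc + fd everywhere
print(np.column_stack([x, fc(x), fd(x)]))
```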
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9684084057807922, "perplexity": 2013.5024931270273}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701162648.4/warc/CC-MAIN-20160205193922-00247-ip-10-236-182-209.ec2.internal.warc.gz"}
https://iwaponline.com/ws/article-abstract/6/4/179/26186/Silica-pretreatment-for-a-RO-brackish-water-source?redirectedFrom=PDF
A brackish water source containing a high magnesium concentration (333 mg/L as CaCO3) intended for reverse osmosis (RO) was studied for silica scaling. The threshold limit for RO recovery and the required silica removal were first determined by a removal–saturation–recovery curve. Three different ratios of lime/soda ash combination were used to test the efficiency of silica removal as well as the impact of calcium and magnesium. Higher pH was more effective for silica removal due to electrostatic attraction, since H3SiO4− and H2SiO42− were dominant when pH >9.9. The precipitation of Mg(OH)2(s) contributed more to silica removal than that of CaCO3(s); ratios of 0.044 mg SiO2/mg Mg(OH)2(s) and 0.027 mg SiO2/mg CaCO3(s) were statistically determined. Moreover, the presence of high magnesium causes Mg(OH)2(s) to precipitate at lower pH (9.41) instead of forming forsterite (Mg2SiO4(s)), which typically occurs at pH >12.3. Accordingly, no forsterite was observed, as verified by X-ray diffraction (XRD) analysis. Consequently, adsorption is more dominant than chemical reaction in this study. Silica removal was also enhanced by coagulation. With the addition of coagulant (PACl), the highest silica removal was achieved at pH 10 but decreased above pH 10, because the amphoteric character of aluminum hydroxide reduces its electrostatic attraction to silica at higher pH. Since both softening and coagulation were effective for silica removal, seven decision-making criteria were developed to compare the pros and cons of these two processes. This content is only available as a PDF.
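The reported removal ratios allow a quick back-of-the-envelope estimate; a sketch with assumed (hypothetical) precipitate doses, not figures from the study:

```python
# Statistically determined ratios from the abstract (mg SiO2 removed per mg precipitate).
RATIO_MG_OH_2 = 0.044  # mg SiO2 / mg Mg(OH)2(s)
RATIO_CACO3 = 0.027    # mg SiO2 / mg CaCO3(s)

# Hypothetical softening result: 200 mg/L Mg(OH)2(s) and 300 mg/L CaCO3(s) precipitated.
mg_oh_2_mg_per_L = 200.0
caco3_mg_per_L = 300.0

silica_removed = RATIO_MG_OH_2 * mg_oh_2_mg_per_L + RATIO_CACO3 * caco3_mg_per_L
print(f"Estimated silica removal: {silica_removed:.1f} mg/L as SiO2")  # ~16.9 mg/L
```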
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8675728440284729, "perplexity": 5259.727867830106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488519735.70/warc/CC-MAIN-20210622190124-20210622220124-00561.warc.gz"}
https://www.ideals.illinois.edu/handle/2142/178/browse?rpp=20&order=ASC&sort_by=1&etal=-1&type=title&starts_with=I
# Browse Dept. of Theoretical and Applied Mechanics (1926-2006) by Title • (Department of Theoretical and Applied Mechanics (UIUC), 2003-02) Microencapsulated healing agents that possess adequate strength, long shelf-life, and excellent bonding to the host material are required for self-healing materials. Ureaformaldehyde microcapsules containing dicyclopentadiene ... application/pdf PDF (2Mb) • (Department of Theoretical and Applied Mechanics (UIUC), 2003-05) In-situ x-ray experiments were conducted to examine the electric-field-induced phase changes in PZT-5H materials. The x-ray diffraction profiles at different electric field levels were analyzed by peak fitting and used to ... application/pdf PDF (760Kb) • (American Institute of Physics, 2006) In a nematic elastomer the deformation of the polymer network chains is coupled to the orientational order of the mesogenic groups. Statistical arguments have derived the so-called neoclassical free energy that models this ... application/pdf PDF (199Kb) • (Department of Theoretical and Applied Mechanics (UIUC), 2005-08) In a nematic elastomer the deformation of the polymer network chains is coupled to the orientational order of the mesogenic groups. Here we use the neo-classical model for this coupling supplemented by the usual Frank ... application/pdf PDF (271Kb) • (1952) application/pdf PDF (8Mb) • (2006-01) Flow instability due to oscillatory modes of disturbances in a horizontal dendrite layer during alloy solidification is investigated under an external constraint of rotation. The flow in the dendrite layer, which is ... application/pdf PDF (341Kb) • (Department of Theoretical and Applied Mechanics (UIUC), 2006-02) Inertial effects on flow instabilities in a horizontal reactive porous layer with deformed upper boundary are studied using a linear stability analysis and under the condition that the porous layer, which is also referred ... application/pdf PDF (405Kb) • (1978) application/pdf PDF (4Mb) • (1980) A theory is formulated for a double porosity medium with two-component vector porosity field. In particular, the performance of a fissured rock medium subjected to a variable load in the space and time domain is considered. ... application/pdf PDF (3Mb) • (1982) The influence of the competing effects of dilatant hardening and diffusive softening on the stability of saturated rock is investigated. The analysis considers an infinite slab of fluid-infiltrated rock which contains a ... application/pdf PDF (3Mb) • (Department of Theoretical and Applied Mechanics (UIUC), 2002-10) Adhesively bonded aluminum joints have been increasingly used in automotive industry because of their structural and functional advantages. Interfacial debonding in these joints has become a major concern limiting their ... application/pdf PDF (614Kb) • (1961) application/pdf PDF (2Mb) • (1968) application/pdf PDF (2Mb) • (1966) application/pdf PDF (2Mb) • (1977) application/pdf PDF (2Mb) • (Department of Theoretical and Applied Mechanics (UIUC), 2001-12) Evaluation of crack tip driving force for interfacial cracks between piezoelectric actuators and elastic substrates is crucial to successful applications of smart materials and smart structures. Here the behavior of an ... application/pdf PDF (267Kb) • (1965) application/pdf PDF (8Mb) • (1979) application/pdf PDF (2Mb) • (1993) Numerical studies of the ensemble average, or coherent, wave in a two-dimensional random medium have been undertaken for the steady-state and transient cases. 
The medium consisted of a tensioned mesh with a uniform ... application/pdf PDF (8Mb) • (1978) application/pdf PDF (3Mb)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8401471376419067, "perplexity": 5265.572612916248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462898.92/warc/CC-MAIN-20150226074102-00181-ip-10-28-5-156.ec2.internal.warc.gz"}
https://indico.ihep.ac.cn/event/7639/
1. If you are a new user, please register to get an Indico account through https://login.ihep.ac.cn/registIndico.jsp. Any questions, please email us at helpdesk@ihep.ac.cn or call 88236855. 2. The name of any uploaded file should consist of English letters or numbers, and must not contain any Chinese or special characters. 3. If you need to create a conference in the "Conferences, Workshops and Events" zone, please email us at helpdesk@ihep.ac.cn. # Inelastic X-ray scattering 2018: instrumentation and applications (IXS2018) from to (Asia/Shanghai) at Main Building ( A419 ) Main bldg., RmA419, Yuquan Road, Shijingshan District, Beijing, China Description Resonant and non-resonant inelastic X-ray scattering are important tools to study various elementary excitations in condensed matter. Over the past few decades, RIXS and XRS have been developed into relatively mature techniques at third-generation synchrotron radiation sources around the world. Since the HEPS project has been approved recently, it is a good opportunity to introduce these two techniques to China with a brilliant perspective. IXS 2018 will bring together researchers from all over the world working in the field of RIXS and XRS. It will provide a workshop for scientists to present new discoveries and research related to resonant and non-resonant inelastic X-ray scattering, and opportunities to exchange information about the latest technical developments and applications. We also hope to get advice from experts and users on our HXHERS beamline. Invited speakers: Dr. Yong Cai, BNL, NSLS-II, confirmed; Dr. Hasan Yavas, SLAC, LCLS, confirmed; Dr. Yang Ding, HPSTAR, confirmed; Prof. Simo Huotari, University of Helsinki, confirmed; Prof. Jiawang Hong, BIT, confirmed; Prof. Xuerong Liu, ShanghaiTech University, confirmed; Prof. Tsu-Chien Weng, HPSTAR, confirmed. Local Organizing Committee: Wei Xu (Chair), Zhiying Guo (Co-chair), Juncai Dong, Meijuan Yu Material: contact information Email: xuw@ihep.ac.cn, zyguo@ihep.ac.cn Telephone: +86-10-88235156 • Thursday, May 10, 2018 • 09:00 - 09:15 Welcome remark 15' Speaker: Dr. Qing Qin (IHEP) • 09:15 - 09:30 The Status of the High Energy Photon Source (HEPS) 15' Speaker: Tao Ye (I) • 09:30 - 10:00 Inelastic X-ray Scattering program at HEPS - partial program on the Hard X-ray High Energy Resolution Spectroscopy (HXHERS) beamline 30' Speaker: Dr. Wei Xu (IHEP) • 10:00 - 10:15 Meeting photo and coffee break • 10:15 - 10:45 High Energy Resolution Inelastic X-ray Scattering and Applications in Materials Science 30' Speaker: Dr. Yong Cai • 10:45 - 11:15 Source considerations for high-resolution and high-throughput inelastic x-ray scattering 30' Speaker: Dr. Hasan Yavas • 11:15 - 11:45 Hard x-ray RIXS applications on iridates, and others 30' Speaker: Dr. Xuerong Liu • 11:45 - 13:30 Box Lunch • 13:30 - 14:00 Design concept of the RIXS end-station at the High Energy Photon Source 30' Speaker: Dr. Juncai Dong (IHEP) • 14:00 - 14:40 Valence-electron excitations in condensed matter 40' Speaker: Prof. Simo Huotari • 14:40 - 15:20 Anharmonic phonons in energy materials: inelastic x-ray scattering and first-principles simulations 40' Speaker: Prof.
Jiawang Hong • 15:20 - 15:30 Coffee Break • 15:30 - 16:10 Beamline layout & plans: comments & suggestions • 15:30 - 17:30 Beamline Discussion 2h0' User requirements and comments • 16:10 - 16:50 Discussion: User Requirements for the beamline and spectrometer; Sample environment: high pressure, cryogenic or in situ • 16:50 - 17:30 Discussion: Design for the RIXS spectrometer • Friday, May 11, 2018 • 09:00 - 09:40 X-Ray Raman Spectroscopy 40' Speaker: Prof. Simo Huotari • 09:40 - 10:20 High-Pressure Inelastic X-ray Scattering Science and its Prospects 40' Speaker: Prof. Yang Ding • 10:20 - 10:40 X-Ray Raman Scattering and analyzer 20' Speaker: Dr. Zhiying Guo • 10:40 - 10:50 Coffee break • 10:50 - 11:20 Discussion: Design for the IXS spectrometer • 11:20 - 12:00 Discussion: Analyzer & HRM: R&D efforts and collaborations • 12:00 - 13:00 Lunch
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1991690695285797, "perplexity": 14957.804641224468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104576719.83/warc/CC-MAIN-20220705113756-20220705143756-00193.warc.gz"}
http://lists.w3.org/Archives/Public/www-math/2012Oct/0010.html
# Firefox 16 Release Notes From: Frédéric WANG <fred.wang@free.fr> Date: Tue, 09 Oct 2012 23:20:04 +0200 Message-ID: <50749504.9000801@free.fr> To: "www-math@w3.org" <www-math@w3.org> Dear all, I'm glad to announce that Firefox 16 is now available. Below are some notes for the release, aurora and beta channels. There haven't been a lot of new features for these three versions but I expect that the work started this summer by various contributors will finally be integrated in subsequent versions. * Firefox 16: - Change the default value for attributes mo@lspace/rspace to "thickmathspace". - Selection attribute on maction is now considered by default. * Firefox 17: - Update parsing for mtable@align * Firefox 18: - Fix a bug when lquote/rquote are updated dynamically As usual, you can get a complete list of changes here: https://wiki.mozilla.org/MathML:Home_Page#Last_bugs_fixed -- Frédéric Wang maths-informatique-jeux.com/blog/frederic Received on Tuesday, 9 October 2012 21:18:33 UTC This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:27:45 UTC
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.448253333568573, "perplexity": 16587.41633048848}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988840.31/warc/CC-MAIN-20150728002308-00205-ip-10-236-191-2.ec2.internal.warc.gz"}
http://link.springer.com/article/10.1023%2FB%3AHIGH.0000046717.11717.06
, Volume 48, Issue 4, pp 461-482 # The research assessment exercise in English universities, 2001 ## Abstract At intervals of 3–4 years, research quality in English universities has been externally reviewed 5 times over the past 16 years. Assessment is based on peer review of material submitted by universities to 70 separate subject panels. The principal component is information on research output, usually publications, from all academic staff identified as "research active". Research quality is rated on a numerical (1–5*), criteria-based scale. Ratings in all subject areas and across all universities have increased, to give an average rating in 2001 corresponding to a level of "attainable national excellence". Between universities there are significant variations. In the prestigious Loxbridge group, where almost all academic staff are research-active, 90% of subject areas achieved ratings at level 5 in 2001; in contrast, in the New universities, where only 40% of academic staff is research-active, level 5 was achieved in 7% of subject areas. A combination of high research quality and high-cost research (medicine, science, engineering) concentrated in the Old universities is similarly evident in the distribution of research funding. Income from both research subsidy and research grants and contracts is divided: Old universities, 94% (Loxbridge, 35%); New universities, 6%. High institutional costs of the assessment process, particularly for areas of low-cost research, and increasing concern about the inadequacies of the rating system and the failure of its direct link to funding suggest that substantial revision will be needed for future assessment exercises.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19998563826084137, "perplexity": 18241.19929238866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-molecular-science-5th-edition/chapter-15-additional-aqueous-equilibria-questions-for-review-and-thought-topical-questions-page-693b/50e
## Chemistry: The Molecular Science (5th Edition) $1FeCO_3(s) \rightleftharpoons 1Fe^{2+}(aq) + 1C{O_3}^{2-}(aq)$ $K_{sp} (FeCO_3) = [Fe^{2+}] [C{O_3}^{2-}]$ - Iron (II) $(Fe^{2+})$ carbonate $(C{O_3}^{2-})$: $FeCO_3$ 1. Write the dissociation equation for this salt: - Identify the ions of the salt: $(Fe^{2+})$ and $(C{O_3}^{2-})$; these are the products, and the reactant is the solid salt. $FeCO_3(s) \rightleftharpoons Fe^{2+}(aq) + C{O_3}^{2-}(aq)$ - Balance the equation: $1FeCO_3(s) \rightleftharpoons 1Fe^{2+}(aq) + 1C{O_3}^{2-}(aq)$ 2. Now, write the $K_{sp}$ expression. - Multiply the concentrations of the ions; - The equilibrium coefficients represent the exponents of these concentrations: $K_{sp} (FeCO_3) = [Fe^{2+}]^1 \times [C{O_3}^{2-}]^1$
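Given this expression, the molar solubility follows directly; a sketch using an assumed Ksp value for illustration (an order-of-magnitude literature figure, not taken from the textbook):

```python
import math

# Assumed solubility product for FeCO3 (order-of-magnitude literature value).
K_sp = 3.1e-11

# For a 1:1 salt, [Fe2+] = [CO3 2-] = s at equilibrium, so K_sp = s**2.
s = math.sqrt(K_sp)
print(f"Molar solubility of FeCO3: {s:.2e} mol/L")  # ~5.6e-6 mol/L
```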
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9289501309394836, "perplexity": 2456.7986910663635}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662595559.80/warc/CC-MAIN-20220526004200-20220526034200-00613.warc.gz"}
https://jp.arxiv.org/list/hep-th/2201?skip=100&show=25
# High Energy Physics - Theory ## Authors and titles for Jan 2022, skipping first 100 [ total of 517 entries: 1-25 | 26-50 | 51-75 | 76-100 | 101-125 | 126-150 | 151-175 | 176-200 | ... | 501-517 ] [ showing 25 entries per page: fewer | more | all ] [101] Title: Exact result in N=4 SYM theory: Generalised double-logarithmic equation Authors: V.N. Velizhanin Comments: 36 pages, 1 figure, 2 ancillary files, some details and references added Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph) [102] Title: D-Instanton Superpotential In String Theory Authors: Manki Kim Comments: v4. 34 pages. Discussion extended. Results unchanged. Matches the Jhep version Subjects: High Energy Physics - Theory (hep-th); Algebraic Geometry (math.AG) [103] Title: 3-Manifolds and VOA Characters Comments: 85 pages, 3 figures, 6 tables Subjects: High Energy Physics - Theory (hep-th); Geometric Topology (math.GT); Quantum Algebra (math.QA); Representation Theory (math.RT) [104] Title: Towards a Dark Sector Model from String Theory Subjects: High Energy Physics - Theory (hep-th); Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph) [105] Title: Four-derivative Corrections to Minimal Gauged Supergravity in Five Dimensions Subjects: High Energy Physics - Theory (hep-th) [106] Title: Translation in momentum space and minimal length Authors: P. Valtancoli Subjects: High Energy Physics - Theory (hep-th) [107] Title: A new scalar electrodynamics for fracton gauge theory Subjects: High Energy Physics - Theory (hep-th); Soft Condensed Matter (cond-mat.soft); Strongly Correlated Electrons (cond-mat.str-el); General Relativity and Quantum Cosmology (gr-qc) [108] Title: Type IIB parabolic ($p,q$)-strings from M2-branes with fluxes Subjects: High Energy Physics - Theory (hep-th) [109] Title: Five-branes wrapped on topological disks from 7D N=2 gauged supergravity Comments: 54 pages, 19 figures, typos corrected, references added, some details moved to an appendix Journal-ref: Phys. Rev. D105 (2022) 066010 Subjects: High Energy Physics - Theory (hep-th) [110] Title: Mapping SYK to the Sky Subjects: High Energy Physics - Theory (hep-th) [111] Title: On the Differential Representation and Color-Kinematics Duality of AdS Boundary Correlators Journal-ref: JHEP 05 (2022) 026 Subjects: High Energy Physics - Theory (hep-th) [112] Title: Harnessing S-Duality in $\mathcal{N}=4$ SYM & Supergravity as $SL(2,\mathbb{Z})$-Averaged Strings Comments: 77 pages + refs. v2: added refs, fixed typos, some reorganization/clarifications in Sections 11 and 12. v3: added refs, minor changes Subjects: High Energy Physics - Theory (hep-th) [113] Title: Derivative Interactions during Inflation: A Systematic Approach Subjects: High Energy Physics - Theory (hep-th) [114] Title: On-shell Correlators and Color-Kinematics Duality in Curved Symmetric Spacetimes Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph) [115] Title: Thermodynamic properties of Bardeen black holes in dRGT massive gravity Journal-ref: Can. J. Phys. (2022) Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc) [116] Title: Confinement/Deconfinement temperature for a rotating quark-gluon plasma Comments: 13 pages, 3 figures. More discussions and clarifications about the thermodynamics of the model and other aspects included, one new appendix and new references added. 
Version accepted in Phys. Rev. D Subjects: High Energy Physics - Theory (hep-th) [117] Title: The S-Matrix of 2D Type 0B String Theory Part 1: Perturbation Theory Revisited Comments: 30 pages, 4 figures, 1 .nb Mathematica notebook attachment; typo in semiclassical action of N=1 Liouville corrected Subjects: High Energy Physics - Theory (hep-th) [118] Title: Categories of quantum liquids III Authors: Liang Kong, Hao Zheng Subjects: High Energy Physics - Theory (hep-th); Strongly Correlated Electrons (cond-mat.str-el); Mathematical Physics (math-ph); Operator Algebras (math.OA) [119] Title: Some Rigorous Results on Symmetry Breakings in Gauge QFT Authors: Franco Strocchi Subjects: High Energy Physics - Theory (hep-th) [120] Title: Replica Symmetry Breaking for the Integrable Two-Site Sachdev-Ye-Kitaev Model Comments: To be submitted to the Dyson memorial volume of the Journal of Mathematical Physics Subjects: High Energy Physics - Theory (hep-th); Disordered Systems and Neural Networks (cond-mat.dis-nn); Nuclear Theory (nucl-th) [121] Title: Entanglement entropy of gravitational edge modes Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc) [122] Title: Alternating current conductivity and superconducting properties of the holographic effective theory Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc) [123] Title: Center vortex and confinement in Yang-Mills theory and QCD with anomaly-preserving compactifications Comments: 58 pages, 5 figures (v2) minor changes Journal-ref: Prog Theor Exp Phys (2022) Subjects: High Energy Physics - Theory (hep-th) [124] Title: Bulk Gauge Fields and Holographic RG from Exact RG Comments: 68 pages, 3 figures. 40 pages excluding appendices and references Subjects: High Energy Physics - Theory (hep-th); Statistical Mechanics (cond-mat.stat-mech); General Relativity and Quantum Cosmology (gr-qc) [125] Title: Bosonization duality in 2+1 dimensions and critical current correlation functions in Chern-Simons $U(1)\times U(1)$ Abelian Higgs model
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5125449299812317, "perplexity": 11206.951253262938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00001.warc.gz"}
https://rd.springer.com/chapter/10.1007%2F978-3-030-33607-3_31
# Representation Learning of Knowledge Graphs with Multi-scale Capsule Network • Jingwei Cheng • Zhi Yang • Jinming Dang • Chunguang Pan • Fu Zhang Conference paper Part of the Lecture Notes in Computer Science book series (LNCS, volume 11871) ## Abstract Representation learning of knowledge graphs has gained wide attention in the field of natural language processing. Most existing knowledge representation models for knowledge graphs embed triples into a continuous low-dimensional vector space through a simple linear transformation. In spite of high computational efficiency, the fitting ability of these models is suboptimal. In this paper, we propose a multi-scale capsule network to model relations between embedding vectors from a deep perspective. We use convolution kernels with different window sizes in the convolutional layer inside a capsule network to extract semantic features of entities and relations in triples. These semantic features are then represented as a continuous vector through a routing algorithm in the capsule layer. The modulus of this vector is used as a confidence score for the correctness of a triple. Experiments show that the proposed model obtains better performance than state-of-the-art embedding models for the task of knowledge graph completion over two benchmarks, WN18RR and FB15k-237. ## Keywords Representation learning Capsule network Multi-scale Dynamic routing Knowledge graph completion ## References 1. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: Advances in Neural Information Processing Systems, pp. 2787–2795 (2013) 2. Cai, L., Wang, W.Y.: KBGAN: adversarial learning for knowledge graph embeddings (2017) 3. Dettmers, T., Minervini, P., Stenetorp, P., Riedel, S.: Convolutional 2D knowledge graph embeddings. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018) 4. Ji, G., He, S., Xu, L., Liu, K., Zhao, J.: Knowledge graph embedding via dynamic mapping matrix. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (vol. 1: Long Papers), pp. 687–696 (2015) 5. Ji, G., Liu, K., He, S., Zhao, J.: Knowledge graph completion with adaptive sparse transfer matrix. In: Thirtieth AAAI Conference on Artificial Intelligence (2016) 6. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) 7. Lin, Y., Liu, Z., Sun, M., Liu, Y., Zhu, X.: Learning entity and relation embeddings for knowledge graph completion. In: Twenty-Ninth AAAI Conference on Artificial Intelligence (2015) 8. Nguyen, D.Q., Nguyen, T.D., Nguyen, D.Q., Phung, D.: A novel embedding model for knowledge base completion based on convolutional neural network (2018) 9. Nguyen, D.Q., Vu, T., Nguyen, T.D., Nguyen, D.Q., Phung, D.: A capsule network-based embedding model for knowledge graph completion and search personalization. arXiv preprint arXiv:1808.04122 (2018) 10. Nguyen, D.Q.: An overview of embedding models of entities and relationships for knowledge base completion (2018) 11. Pinter, Y., Eisenstein, J.: Predicting semantic relations using global graph properties (2018) 12. Sabour, S., Frosst, N., Hinton, G.E.: Dynamic routing between capsules.
In: Advances in Neural Information Processing Systems, pp. 3856–3866 (2017) 13. Toutanova, K., Chen, D.: Observed versus latent features for knowledge base and text inference. In: Workshop on Continuous Vector Space Models & Their Compositionality (2015) 14. Trouillon, T., Welbl, J., Riedel, S., Gaussier, R., Bouchard, G.: Complex embeddings for simple link prediction (2016) 15. Wang, Z., Zhang, J., Feng, J., Chen, Z.: Knowledge graph embedding by translating on hyperplanes. In: Twenty-Eighth AAAI Conference on Artificial Intelligence (2014) 16. Yang, B., Yih, W.T., He, X., Gao, J., Deng, L.: Embedding entities and relations for learning and inference in knowledge bases (2014) © Springer Nature Switzerland AG 2019 ## Authors and Affiliations Jingwei Cheng (email author), Zhi Yang, Jinming Dang, Chunguang Pan, Fu Zhang: College of Computer Science and Engineering, Northeastern University, Shenyang, China
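The embedding models cited above score a triple (h, r, t) with a simple linear transformation, which is the baseline the capsule model improves on; a minimal sketch of TransE-style scoring (reference 1), with toy random embeddings standing in for trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # embedding dimension (toy choice)

# Toy embedding tables: 100 entities, 10 relations.
entity_emb = rng.normal(size=(100, dim))
relation_emb = rng.normal(size=(10, dim))

def transe_score(h: int, r: int, t: int) -> float:
    # TransE: a correct triple should satisfy h + r ≈ t,
    # so a *lower* distance means a more plausible triple.
    diff = entity_emb[h] + relation_emb[r] - entity_emb[t]
    return float(np.linalg.norm(diff))

print(transe_score(h=3, r=1, t=7))
```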
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8148760199546814, "perplexity": 15015.227668890451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670948.64/warc/CC-MAIN-20191121180800-20191121204800-00323.warc.gz"}
https://electricityrates.com/how-to-compare/energy-choice-blog/states-highest-average-electricity-bill/
Advertiser Disclosure: At ElectricityRates.com, our number one goal is to help you make better energy decisions. We adhere to strict editorial guidelines; however, this post may include references to products offered by our partners. A lot of people complain about their state's electricity rate. In states like Hawaii, Alaska, and most of the Northeast, they have a good reason to complain, since they have the highest rates in the US. However, having a high electricity rate doesn't necessarily mean you are going to be stuck with a huge electricity bill at the end of the month. Some states are stuck with a high average electricity rate but tend to use less energy, resulting in a lower electricity bill than a state with a low average rate but high energy usage. It made us wonder: which states have the highest average electricity bill? Below is a ranking of all 50 states by average monthly electricity bill. ## Top 10 Highest Average Electricity Bill ### 1. South Carolina (Average Monthly Electricity Bill = \$146) South Carolina shouldn't have the highest bills in the country because their average rate is only 12.78¢/kWh (20th highest). However, residents tend to consume more energy than most states at 1,155 kWh per customer (6th highest). ### 2. Alabama (Average Monthly Electricity Bill = \$146) At only 12.61¢/kWh, Alabama doesn't rank in the top 20 for highest electricity rates. However, the state tends to use more energy than most of the country at 1,214 kWh per customer (3rd highest). ### 3. Connecticut (Average Monthly Electricity Bill = \$142) Connecticut ranks 15th in lowest energy consumption in the US. The state does have the 3rd highest average rate in the country at 20.31¢/kWh. ### 4. Maryland (Average Monthly Electricity Bill = \$142) Maryland ranks in the top half for both average electricity rates (13.99¢/kWh) and average monthly usage (995 kWh per customer). ### 5. Hawaii (Average Monthly Electricity Bill = \$139) Hawaiians have the highest electricity rates in the country at 29.50¢/kWh. However, they consume the lowest amount of electricity on average at just 505 kWh per customer. ### 6. Georgia (Average Monthly Electricity Bill = \$131) Georgia has some of the better average electricity rates in the country at just 11.80¢/kWh (20th lowest). However, they use more energy than most states at 1,138 kWh per customer (7th highest). ### 7. Tennessee (Average Monthly Electricity Bill = \$129) Tennessee has the 7th lowest electricity rates in the country at just 10.65¢/kWh. But residents consume the 2nd highest amount of electricity in the US at 1,238 kWh per customer. ### 8. Virginia (Average Monthly Electricity Bill = \$127) Again, Virginia ranks in the lower half for average electricity rate in the US at 11.67¢/kWh, but they are a top 10 consumer of electricity at 1,120 kWh per customer on average. ### 9. Texas (Average Monthly Electricity Bill = \$127) Texans consume about 1,156 kWh per customer (5th highest) but have the 13th lowest average rate at 11.18¢/kWh. ### 10. Delaware (Average Monthly Electricity Bill = \$127) Delaware has some of the higher electricity rates in the US at 13.44¢/kWh. They also rank in the top half for energy consumption at 947 kWh per customer (24th highest). ## Top 10 Lowest Average Electricity Bill ### 50. New Mexico (Average Monthly Electricity Bill = \$76) New Mexico doesn't have extremely low electricity rates. In fact, they have the 19th highest average rate in the country at 12.92¢/kWh.
However, residents consume only 631 kWh per customer, which is the 10th lowest in the US. This results in the lowest average electricity bill in the US. ### 49. Utah (Average Monthly Electricity Bill = \$83) Utah residents enjoy low electricity rates, having the 10th lowest in the country at 11.04¢/kWh. They are also in the bottom half of energy usage in the US at 750 kWh per customer. This results in the 2nd lowest average electricity bill in the country. ### 48. Colorado (Average Monthly Electricity Bill = \$84) Colorado is in the middle of the pack with an average rate of 12.13¢/kWh. However, they are in the bottom half of energy usage at just 694 kWh per customer. ### 47. Maine (Average Monthly Electricity Bill = \$86) Maine residents have the 10th highest average electricity rates at 15.96¢/kWh. However, they have the 2nd lowest energy consumption in the US at only 546 kWh per customer. ### 46. Montana (Average Monthly Electricity Bill = \$89) Montana is in the bottom half for both average electricity rate (11.11¢/kWh) and average usage (813 kWh per customer). ### 45. Washington (Average Monthly Electricity Bill = \$91) Washington residents enjoy the 2nd lowest average electricity rates in the country at 9.60¢/kWh. The state is the 22nd highest energy consumer at 955 kWh per customer. ### 44. Illinois (Average Monthly Electricity Bill = \$92) Illinois has the 16th highest average electricity rates in the US at 12.70¢/kWh. However, they are in the bottom half of energy usage at 733 kWh per customer (16th lowest). ### 43. Wyoming (Average Monthly Electricity Bill = \$95) Wyoming residents enjoy lower than average electricity rates at 11.41¢/kWh and are also in the bottom half of energy consumption at 850 kWh per customer (21st lowest). ### 42. Idaho (Average Monthly Electricity Bill = \$95) Idaho has the 3rd lowest average electricity rate in the states at 10.11¢/kWh. However, they are in the top half of energy consumption at 953 kWh per customer (23rd highest). ### 41. California (Average Monthly Electricity Bill = \$95) With an average rate of 18.24¢/kWh, Californians have the 7th highest rates in the country. However, residents have the 3rd lowest energy consumption at 547 kWh per customer. ## The Remaining States 11. West Virginia (\$126 Average Electricity Bill) 12. Mississippi (\$126 Average Electricity Bill) 13. Arizona (\$125 Average Electricity Bill) 14. Florida (\$123 Average Electricity Bill) 15. North Carolina (\$121 Average Electricity Bill) 17. Kentucky (\$118 Average Electricity Bill) 18. Kansas (\$117 Average Electricity Bill) 19. Pennsylvania (\$117 Average Electricity Bill) 20. Missouri (\$117 Average Electricity Bill) 21. Louisiana (\$116 Average Electricity Bill) 22. Indiana (\$115 Average Electricity Bill) 23. Massachusetts (\$114 Average Electricity Bill) 24. South Dakota (\$113 Average Electricity Bill) 25. Ohio (\$111 Average Electricity Bill) 26. Oklahoma (\$111 Average Electricity Bill) 27. New Hampshire (\$111 Average Electricity Bill) 28. Rhode Island (\$109 Average Electricity Bill) 29. New Jersey (\$109 Average Electricity Bill) 30. Arkansas (\$107 Average Electricity Bill) 31. North Dakota (\$106 Average Electricity Bill) 34. New York (\$105 Average Electricity Bill) 35. Iowa (\$103 Average Electricity Bill) 36. Michigan (\$102 Average Electricity Bill) 37. Minnesota (\$97 Average Electricity Bill) 38. Oregon (\$97 Average Electricity Bill) 39. Wisconsin (\$96 Average Electricity Bill) 40. Vermont (\$95 Average Electricity Bill) Source: USA Today
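The bill figures are consistent with rate × usage; a quick sanity check in Python, using the numbers from the South Carolina entry above:

```python
# South Carolina: 12.78 cents/kWh average rate, 1,155 kWh average monthly usage.
rate_cents_per_kwh = 12.78
usage_kwh = 1155

bill_dollars = rate_cents_per_kwh / 100 * usage_kwh
print(f"Estimated monthly bill: ${bill_dollars:.2f}")  # ~$147.61, close to the reported ~$146
```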
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8677013516426086, "perplexity": 13171.000429341879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943747.51/warc/CC-MAIN-20230321225117-20230322015117-00153.warc.gz"}
https://en.m.wikipedia.org/wiki/Rand_index
# Rand index The Rand index[1] or Rand measure (named after William M. Rand) in statistics, and in particular in data clustering, is a measure of the similarity between two data clusterings. A form of the Rand index may be defined that is adjusted for the chance grouping of elements; this is the adjusted Rand index. From a mathematical standpoint, the Rand index is related to accuracy, but it is applicable even when class labels are not used. Example clusterings for a dataset with the kMeans (left) and Mean shift (right) algorithms. The calculated Adjusted Rand index for these two clusterings is ${\displaystyle ARI\approx 0.94}$ ## Rand index ### Definition Given a set of ${\displaystyle n}$  elements ${\displaystyle S=\{o_{1},\ldots ,o_{n}\}}$  and two partitions of ${\displaystyle S}$  to compare, ${\displaystyle X=\{X_{1},\ldots ,X_{r}\}}$ , a partition of S into r subsets, and ${\displaystyle Y=\{Y_{1},\ldots ,Y_{s}\}}$ , a partition of S into s subsets, define the following: • ${\displaystyle a}$ , the number of pairs of elements in ${\displaystyle S}$  that are in the same subset in ${\displaystyle X}$  and in the same subset in ${\displaystyle Y}$ • ${\displaystyle b}$ , the number of pairs of elements in ${\displaystyle S}$  that are in different subsets in ${\displaystyle X}$  and in different subsets in ${\displaystyle Y}$ • ${\displaystyle c}$ , the number of pairs of elements in ${\displaystyle S}$  that are in the same subset in ${\displaystyle X}$  and in different subsets in ${\displaystyle Y}$ • ${\displaystyle d}$ , the number of pairs of elements in ${\displaystyle S}$  that are in different subsets in ${\displaystyle X}$  and in the same subset in ${\displaystyle Y}$ The Rand index, ${\displaystyle R}$ , is:[1][2] ${\displaystyle R={\frac {a+b}{a+b+c+d}}={\frac {a+b}{n \choose 2}}}$ Intuitively, ${\displaystyle a+b}$  can be considered as the number of agreements between ${\displaystyle X}$  and ${\displaystyle Y}$  and ${\displaystyle c+d}$  as the number of disagreements between ${\displaystyle X}$  and ${\displaystyle Y}$ . Since the denominator is the total number of pairs, the Rand index represents the frequency of occurrence of agreements over the total pairs, or the probability that ${\displaystyle X}$  and ${\displaystyle Y}$  will agree on a randomly chosen pair. ${\displaystyle {n \choose 2}}$  is calculated as ${\displaystyle n(n-1)/2}$ . Similarly, one can also view the Rand index as a measure of the percentage of correct decisions made by the algorithm. It can be computed using the following formula: ${\displaystyle RI={\frac {TP+TN}{TP+FP+FN+TN}}}$ where ${\displaystyle TP}$  is the number of true positives, ${\displaystyle TN}$  is the number of true negatives, ${\displaystyle FP}$  is the number of false positives, and ${\displaystyle FN}$  is the number of false negatives. ### Properties The Rand index has a value between 0 and 1, with 0 indicating that the two data clusterings do not agree on any pair of points and 1 indicating that the data clusterings are exactly the same.
In mathematical terms, a, b, c, d are defined as follows: • ${\displaystyle a=|S^{*}|}$ , where ${\displaystyle S^{*}=\{(o_{i},o_{j})\mid o_{i},o_{j}\in X_{k},o_{i},o_{j}\in Y_{l}\}}$ • ${\displaystyle b=|S^{*}|}$ , where ${\displaystyle S^{*}=\{(o_{i},o_{j})\mid o_{i}\in X_{k_{1}},o_{j}\in X_{k_{2}},o_{i}\in Y_{l_{1}},o_{j}\in Y_{l_{2}}\}}$ • ${\displaystyle c=|S^{*}|}$ , where ${\displaystyle S^{*}=\{(o_{i},o_{j})\mid o_{i},o_{j}\in X_{k},o_{i}\in Y_{l_{1}},o_{j}\in Y_{l_{2}}\}}$ • ${\displaystyle d=|S^{*}|}$ , where ${\displaystyle S^{*}=\{(o_{i},o_{j})\mid o_{i}\in X_{k_{1}},o_{j}\in X_{k_{2}},o_{i},o_{j}\in Y_{l}\}}$ for some ${\displaystyle 1\leq i,j\leq n,i\neq j,1\leq k,k_{1},k_{2}\leq r,k_{1}\neq k_{2},1\leq l,l_{1},l_{2}\leq s,l_{1}\neq l_{2}}$ ### Relationship with classification accuracy The Rand index can also be viewed through the prism of binary classification accuracy over the pairs of elements in ${\displaystyle S}$ . The two class labels are "${\displaystyle o_{i}}$  and ${\displaystyle o_{j}}$  are in the same subset in ${\displaystyle X}$  and ${\displaystyle Y}$ " and "${\displaystyle o_{i}}$  and ${\displaystyle o_{j}}$  are in different subsets in ${\displaystyle X}$  and ${\displaystyle Y}$ ". In that setting, ${\displaystyle a}$  is the number of pairs correctly labeled as belonging to the same subset (true positives), and ${\displaystyle b}$  is the number of pairs correctly labeled as belonging to different subsets (true negatives). The adjusted Rand index is the corrected-for-chance version of the Rand index.[1][2][3] Such a correction for chance establishes a baseline by using the expected similarity of all pair-wise comparisons between clusterings specified by a random model. Traditionally, the Rand Index was corrected using the Permutation Model for clusterings (the number and size of clusters within a clustering are fixed, and all random clusterings are generated by shuffling the elements between the fixed clusters). However, the premises of the permutation model are frequently violated; in many clustering scenarios, either the number of clusters or the size distribution of those clusters vary drastically. For example, consider that in K-means the number of clusters is fixed by the practitioner, but the sizes of those clusters are inferred from the data. Variations of the adjusted Rand Index account for different models of random clusterings.[4] Though the Rand Index may only yield a value between 0 and +1, the adjusted Rand index can yield negative values if the index is less than the expected index.[5] ### The contingency table Given a set S of n elements, and two groupings or partitions (e.g. clusterings) of these elements, namely ${\displaystyle X=\{X_{1},X_{2},\ldots ,X_{r}\}}$  and ${\displaystyle Y=\{Y_{1},Y_{2},\ldots ,Y_{s}\}}$ , the overlap between X and Y can be summarized in a contingency table ${\displaystyle \left[n_{ij}\right]}$  where each entry ${\displaystyle n_{ij}}$  denotes the number of objects in common between ${\displaystyle X_{i}}$  and ${\displaystyle Y_{j}}$  : ${\displaystyle n_{ij}=|X_{i}\cap Y_{j}|}$ . 
${\displaystyle {\begin{array}{c|cccc|c}{{} \atop X}\!\diagdown \!^{Y}&Y_{1}&Y_{2}&\cdots &Y_{s}&{\text{sums}}\\\hline X_{1}&n_{11}&n_{12}&\cdots &n_{1s}&a_{1}\\X_{2}&n_{21}&n_{22}&\cdots &n_{2s}&a_{2}\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\X_{r}&n_{r1}&n_{r2}&\cdots &n_{rs}&a_{r}\\\hline {\text{sums}}&b_{1}&b_{2}&\cdots &b_{s}&\end{array}}}$ ### Definition The original Adjusted Rand Index using the Permutation Model is ${\displaystyle ARI={\frac {\left.\sum _{ij}{\binom {n_{ij}}{2}}-\left[\sum _{i}{\binom {a_{i}}{2}}\sum _{j}{\binom {b_{j}}{2}}\right]\right/{\binom {n}{2}}}{\left.{\frac {1}{2}}\left[\sum _{i}{\binom {a_{i}}{2}}+\sum _{j}{\binom {b_{j}}{2}}\right]-\left[\sum _{i}{\binom {a_{i}}{2}}\sum _{j}{\binom {b_{j}}{2}}\right]\right/{\binom {n}{2}}}}}$ where ${\displaystyle n_{ij},a_{i},b_{j}}$  are values from the contingency table.
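A small self-contained illustration of the pair-counting definitions above, done by brute force (if scikit-learn is available, its `adjusted_rand_score` implements the ARI formula):

```python
from itertools import combinations
from math import comb

def rand_index(X, Y):
    # X and Y are cluster labels for the same n elements.
    n = len(X)
    a = b = 0
    for i, j in combinations(range(n), 2):
        same_x = X[i] == X[j]
        same_y = Y[i] == Y[j]
        if same_x and same_y:
            a += 1  # together in both clusterings
        elif not same_x and not same_y:
            b += 1  # apart in both clusterings
    return (a + b) / comb(n, 2)

X = [0, 0, 1, 1, 2, 2]
Y = [0, 0, 1, 2, 2, 2]
print(rand_index(X, Y))  # 0.8 -> 12 agreeing pairs out of 15
```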
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 69, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9089115858078003, "perplexity": 715.3229717857324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703509973.34/warc/CC-MAIN-20210117051021-20210117081021-00415.warc.gz"}
https://arxiv.org/abs/1805.00008
### Current browse context: cond-mat.quant-gas # Title: Dynamical critical scaling of long-range interacting quantum magnets Abstract: Slow variations (quenches) of the magnetic field across the paramagnetic-ferromagnetic phase transition of spin systems produce heat. In systems with short-range interactions the heat exhibits universal power-law scaling as a function of the quench rate, known as Kibble-Zurek scaling. In this work we analyze slow quenches of the magnetic field in the Lipkin-Meshkov-Glick (LMG) model, which describes fully connected quantum spins. We analytically determine the quantum contribution to the residual heat as a function of the quench rate $\delta$ by means of a Holstein-Primakoff expansion about the mean-field value. Unlike in the case of short-range interactions, scaling laws in the LMG model are only found for a ramp ending at the critical point. If instead the ramp is symmetric, as in the typical Kibble-Zurek scenario, after crossing the critical point the system tends to reabsorb the defects formed during the first part of the ramp: the number of excitations exhibits a crossover behavior as a function of $\delta$ and tends to a constant in the thermodynamic limit. Previous, and seemingly contradictory, theoretical studies are identified as specific limits of this dynamics. Our results can be tested on several experimental platforms, including quantum gases and trapped ions. Comments: 10 pages, 5 figures, new version improves figure 1 quality and expands the discussion of the results Subjects: Quantum Gases (cond-mat.quant-gas); Quantum Physics (quant-ph) Cite as: arXiv:1805.00008 [cond-mat.quant-gas] (or arXiv:1805.00008v3 [cond-mat.quant-gas] for this version) ## Submission history From: Nicolo Defenu Dr. [view email] [v1] Mon, 30 Apr 2018 13:29:38 UTC (856 KB) [v2] Wed, 2 May 2018 12:15:46 UTC (734 KB) [v3] Fri, 14 Sep 2018 12:29:59 UTC (666 KB)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6309135556221008, "perplexity": 2080.9467490979046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823817.62/warc/CC-MAIN-20181212091014-20181212112514-00296.warc.gz"}
https://www.groundai.com/project/bimodal-morphologies-of-massive-galaxies-at-the-core-of-a-protocluster-at-z309-and-the-strong-size-growth-of-a-brightest-cluster-galaxy/
Bimodal morphologies of massive galaxies at the core of a protocluster at z=3.09 and the strong size growth of a brightest cluster galaxy

Abstract

We present near-infrared high-resolution imaging of an extremely dense group of galaxies at the core of the protocluster at z=3.09 in the SSA22 field, taken with the adaptive optics system AO188 and the Infrared Camera and Spectrograph (IRCS) on the Subaru Telescope. The wide morphological variety of these galaxies suggests that they are undergoing dramatic evolution. One of the two quiescent galaxies (QGs), the most massive one in the group, is a compact elliptical with a small effective radius. It supports the two-phase formation scenario for present-day giant ellipticals, in which a massive compact elliptical forms first and then grows in size and stellar mass through a series of mergers. Since this object is a plausible progenitor of a brightest cluster galaxy (BCG) of one of the most massive clusters today, it requires strong growth in size and in stellar mass (about four times). The other QG hosts an AGN(s) and is fitted with a model composed of a nuclear component and a Sérsic model. It shows a spatially extended [OIII]λ5007 emission line compared to the continuum emission, plausible evidence of outflows. The massive star-forming galaxies (SFGs) in the group are two to three times larger than field SFGs at similar redshift. Although we obtained a K′-band image deeper than the previous one, we found no candidate new members. This implies a physical deficiency of low-mass galaxies and/or poor detection completeness for them owing to their diffuse morphologies.

keywords: galaxies: formation — galaxies: evolution — galaxies: distances and redshifts — galaxies: clusters: general

1 Introduction

There is a well-established morphology-density relation in the current Universe, where elliptical and S0 galaxies dominate rich cluster cores while spiral galaxies are dominant in general fields (Dressler, 1980). Galaxy morphology is tightly related to other galaxy properties: massive early-type galaxies (ETGs) are generally dominated by old stars and have low star formation activity and gas content. The physical mechanisms that connect morphological transformation and the shutdown of star formation with environment are still open questions. In mature clusters in the current Universe, harassment (Moore et al., 1996), strangulation (Larson, Tinsley & Caldwell, 1980) and ram-pressure stripping (Gunn & Gott, 1972) can play important roles in quenching star formation and transforming morphologies; on the other hand, red sequences had already appeared in protoclusters (Kodama et al., 2007; Zirm et al., 2008; Uchimoto et al., 2008, 2012; Kubo et al., 2013) when galaxy clusters had not yet been fully virialized. Massive quiescent galaxies (QGs) out to high redshift are now found by deep multi-wavelength surveys in general fields (e.g., Ilbert et al. 2013; Muzzin et al. 2013; Man et al. 2016). Generally, they are remarkably compact compared to massive ETGs today (e.g., Daddi et al. 2005; Trujillo et al. 2006; Toft et al. 2007; van Dokkum et al. 2008; Damjanov et al. 2009; van Dokkum et al. 2010; van der Wel et al. 2014). van Dokkum et al. (2010) and Patel et al. (2013) argue that such compact QGs are plausible progenitors of massive ETGs by comparing samples of massive galaxies selected at constant cumulative number density across redshift.
Dissipative processes such as gas-rich major mergers (Cox et al., 2006; Naab et al., 2007; Wuyts et al., 2010; Bournaud et al., 2011) and/or in-streaming gas driven by violent disc instabilities (Dekel, Sari & Ceverino, 2009; Ceverino et al., 2015) are proposed as formation scenarios for such compact elliptical galaxies. A series of dry minor mergers can then increase the sizes of compact QGs effectively (e.g., Naab, Johansson & Ostriker 2009). Based on high-resolution cosmological numerical simulations, Oser et al. (2010) proposed the two-phase formation scenario, in which rapid in-situ gas accretion and violent star formation first form a compact spheroid, which then grows through mergers with galaxies formed outside its virial radius. One of the major uncertainties in previous studies is the traceability of the progenitors of massive ETGs. Protoclusters are suitable targets for studying the progenitors of the galaxies that dominate rich cluster cores today. The strong size growth of massive ETGs is supported by many studies of QGs in protoclusters (Zirm, Toft & Tanaka, 2012; Cooper et al., 2012; Papovich et al., 2012; Lotz et al., 2013; Newman et al., 2014; Andreon, Dong & Raichoor, 2016). But it is still unclear how their evolution relates to environment at earlier times. To address this question, we need to study protoclusters at the epoch when the morphology-density relation first arises. The SSA22 protocluster at z=3.09 is a rare density peak of galaxies, first found from the overdensity of Lyman break galaxies (LBGs; Steidel et al. 1998) and later well characterized as the core high-density region of a superstructure by wide-field (1.38 deg² in the SSA22) narrow-band surveys of Lyα emitters (LAEs) (Hayashino et al., 2004; Yamada et al., 2012). The velocity dispersion of the protocluster core and the cluster mass measured from the velocity dispersion or overdensity (Kubo et al., 2015) make this protocluster a plausible progenitor of the core of one of the most massive clusters today. In Kubo et al. (2013), we reported that there is an overdensity of massive galaxies, ranging from active SFGs to massive QGs, in the SSA22 protocluster. This suggests that a red sequence has just begun appearing in this protocluster. Uchimoto et al. (2012) discovered dense groups of massive galaxies as the counterparts of Lyα Blobs (LABs) and sub-mm galaxies (SMGs) in the SSA22 protocluster. Some of them were spectroscopically confirmed as plausibly physically associated groups in Kubo et al. (2016). Similarly, using the abundance matching technique in a wide-field survey, Vulcani et al. (2016) reported massive galaxies surrounded by many companions at high redshift as the progenitors of ultra-massive galaxies today. These systems are likely to be hierarchical multiple mergers in the early phases of the formation histories of massive ETGs, as predicted by high-resolution cosmological numerical simulations in the ΛCDM Universe (e.g., Meza et al. 2003; Naab et al. 2007), and are thus excellent laboratories for the morphological evolution of massive ETGs. We here present deep and high-resolution imaging of an extremely dense group of galaxies at the core of the SSA22 protocluster, called the SSA22-AzTEC14 group, in the K′ band, using the InfraRed Camera and Spectrograph (IRCS; Kobayashi et al. 2000) and the Adaptive Optics system AO188 (Hayano et al., 2010) on the Subaru Telescope.
Since the F160W band of the Hubble Space Telescope (HST) is too blue to probe the rest-frame optical stellar morphologies of galaxies at this redshift, AO-assisted K′-band imaging (probing the rest-frame optical) with a 10-m class ground-based telescope is the best option for our targets. We describe the observation in Section 2. In Section 3, we report the morphologies of galaxies in the AzTEC14 group. We discuss the environmental dependence of the morphologies of massive QGs and SFGs, and the behavior of faint galaxies in the group, in Section 4. In this paper, a flat ΛCDM cosmology is assumed; in this cosmology, 1 arcsec corresponds to roughly 7.7 physical kpc at z=3.09. We adopt the Chabrier (2003) Initial Mass Function (IMF). The AB magnitude system is used throughout this paper.

2 Observation

Our target is an extremely dense group of galaxies found at the core of the SSA22 protocluster at z=3.09, called the SSA22-AzTEC14 group (Kubo et al., 2016). Fig. 1 shows its combined image in the IRCS-AO K′ (red) and HST F814W (blue) bands. The object IDs are the same as those in Kubo et al. (2016) and Umehata et al. (2017). The AzTEC14 group was first discovered by Uchimoto et al. (2012) as a rare overdensity of distant red galaxies (DRGs) at the position of a bright 1.1 mm source found in the ASTE/AzTEC 1.1 mm survey of the SSA22 field (Tamura et al., 2009; Umehata et al., 2014). In Kubo et al. (2016), we spectroscopically confirmed that seven galaxies belong to one group at z=3.09. Note that there is a large redshift uncertainty for Az14-K15c, as its redshift was measured from the Balmer/4000 Å breaks of its continuum spectrum. Five of the members are massive and five of them are classified as DRGs. Comparing the AzTEC14 group with galaxy formation models based on the Millennium simulation (Springel et al., 2005), we found that this group has properties similar to those of a dense group of galaxies at high redshift that evolves into a brightest cluster galaxy (BCG) of one of the most massive clusters in the current Universe (Kubo et al., 2016). Moreover, we carried out deep sub-mm observations of this region with the Atacama Large Millimeter/submillimeter Array (ALMA), with a typical rms level of 60 μJy beam⁻¹ (green contours in Fig. 1; Umehata et al. 2015; Umehata et al. 2017). The 1.1 mm fluxes of the five sub-mm sources detected in the AzTEC14 group are at the mJy level. ADF22.4 in Fig. 1 is newly confirmed to be at the group redshift by detection of the redshifted CO(9-8) emission line (Umehata et al., 2017) and [CII] 158 μm (Hayatsu et al. submitted). Thus, eight galaxies are now confirmed as a dense group at z=3.09. Our high-resolution near-infrared (NIR) imaging observation of the AzTEC14 group was conducted on 24 July 2015 using the IRCS and AO188 on the Subaru Telescope (S15A-059; PI Mariko Kubo). The IRCS was used in the 52 mas plate scale mode, with a 54 arcsec field of view, and with the K′-band filter. The AO188 was operated in the laser guide star AO (LGSAO) mode. The tip-tilt guide star (TTGS) for the LGSAO operation is a star at (R.A., Dec) = (22:17:35.78, +00:19:16.3), offset from the targets. The exposure time was 2.8 hours in total. We reduced the data using IRAF data reduction tasks, following the data reduction manual for the IRCS (footnote 3). The individual frames were combined after masking bad pixels, flat fielding, sky subtraction and estimating the dither offsets in the standard manner.
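For reference, the angular-to-physical scale quoted in Section 1 can be reproduced with astropy; this is a minimal sketch assuming standard flat ΛCDM parameters (H0 = 70 km s⁻¹ Mpc⁻¹, Ωm = 0.3; the paper's exact values are not preserved in this copy and may differ slightly):

```python
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # assumed parameters
z = 3.09
kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).value / 60.0
print(f"1 arcsec = {kpc_per_arcsec:.2f} proper kpc at z = {z}")  # ~7.7 kpc
```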
The zero-point magnitude of our IRCS-AO K′-band image is calibrated to that of our MOIRCS Ks-band image. Thanks to the good observing conditions, the AO worked well despite the use of a faint and distant TTGS. The FWHM of the point spread function (PSF) at the PSF reference star (the dashed white circle in Fig. 1) is much smaller than that in our previous MOIRCS imaging of this field. The 5σ limiting magnitudes measured in small-diameter apertures on our IRCS-AO K′-band image are deeper than that measured on our MOIRCS Ks-band image. However, for sufficiently extended galaxies, the detection completeness on our IRCS-AO K′-band image is lower than that on our MOIRCS Ks-band image. We also use the archival F814W-band image taken with the Advanced Camera for Surveys (ACS) on the HST (PID 9760; PI Roberto Abraham).

3 Results

Figure 2 shows the IRCS-AO K′-band, HST/ACS F814W-band and MOIRCS Ks-band images of the galaxies in the AzTEC14 group. We also show the images at the position of ADF22.10, though it has not yet been spectroscopically confirmed as a group member. The red contours on the K′-band stamps show the regions detected above 2σ per pixel on the IRCS-AO K′-band image. The green contours show the isophotal areas of the 1.1 mm sources, as in Fig. 1. The bottom-right rows show the stacks of Az14-K15b, d, e, f and C50, the members classified as SFGs in § 3.1. (In the previous paper we erroneously referred to C50 as MD048.) Before stacking, the image centres are aligned to the centroids of the objects on the IRCS-AO K′-band image, or on the MOIRCS Ks-band image if they are not detected on the IRCS-AO image. The K-band images are median-combined after matching their scales based on the total magnitudes measured on our MOIRCS Ks-band images. The F814W-band images are combined without any scaling, since many of the group members are hard to identify individually on the F814W-band images. It is interesting that such a wide variety of galaxies is observed within such a small volume. Even if we focus only on the most massive galaxies, they show a wide variety of morphologies, similar to the dense cluster recently reported by Wang et al. (2016). Such extremely dense groups at high redshift are interesting laboratories, perhaps caught just at the transition epoch of morphologies. Many of the members are hardly detected on the F814W-band image, likely owing to their red colors. Therefore, AO-assisted high-resolution K′-band imaging is essential to study the morphologies of such red galaxies at this redshift. Interestingly, some galaxies are more clearly detected on our MOIRCS Ks-band image (K15b and K15e in Fig. 2), even though the point-source sensitivity of our IRCS-AO K′-band image is better. We test the dependence of detection completeness on source morphology in § 3.3.3.

3.1 Morphologies and stellar populations

First, we classify the members into QGs and SFGs from their rest-frame colors and SEDs. Fig. 3 shows the rest-frame UVJ color diagram (Williams et al., 2009). Aperture-corrected photometry of each galaxy is performed in the same way as in Kubo et al. (2013), but here we subtract the spectroscopically measured Hβ and [OIII] emission line fluxes from the K-band fluxes.
In Kubo et al. (2016), we presented the observed and best-fit model SEDs, including the 3.6, 4.5, 5.8 & 8.0 μm bands, obtained by fitting the observed fluxes with the stellar population synthesis models of Bruzual & Charlot (2003), adopting the Chabrier (2003) Initial Mass Function. The two brightest members, Az14-K15a and K15c, satisfy the rest-frame color criterion for QGs and also have SEDs well characterized as those of QGs (Kubo et al., 2016). The other galaxies are classified as SFGs. C50 appears to satisfy the QG color criterion, but there is a large uncertainty in its rest-frame color; judging from its overall SED shown in Kubo et al. (2016), C50 has a blue color like a young SFG. Note that the colors of Az14-K15d and K15e suffer from deblending with adjacent sources at 4.5 μm. They could satisfy the QG color criterion, but they are too faint to break the QG/SFG degeneracy by SED fitting with our current data. Following previous studies, we evaluate morphological properties using GALFIT (Peng et al., 2002, 2010). GALFIT fits two-dimensional analytical functions, convolved with a PSF, to observed galaxy images. As the PSF reference we use a star at (R.A., Dec) = (22:17:36.608, +00:18:22.52), which is 54 and 9 arcsec from the TTGS and LGS, respectively (the dashed white circle in Fig. 1). We fit Sérsic models (Sérsic, 1968), allowing the effective radius and Sérsic index n to vary. Fits are performed within two-arcsec-square regions around each object. Sky background values are estimated in areas offset from each object before the Sérsic model fits. As initial guesses we input the total magnitudes measured with SExtractor (Bertin & Arnouts, 1996) and typical morphological parameters for galaxies. We summarize the morphological properties obtained with GALFIT in Table 1. Since there are large uncertainties in the GALFIT morphological parameters except for those of the brightest member, we additionally compare the observed central-to-total flux ratios with those of models in Fig. 4. We take the 1 kpc radius aperture fluxes measured on the IRCS-AO K′-band image as the central fluxes. As total fluxes we use the Kron fluxes measured on the MOIRCS Ks-band image with SExtractor, since a large fraction (up to 90%) of the total flux of the galaxies in the AzTEC14 group measured on our MOIRCS Ks-band image lies below the surface brightness detection limit of our IRCS-AO K′-band image. We show the 2σ ranges of the model flux ratios, measured in the same way on mock galaxy images in both bands, from a thousand iterations each. The two brightest members, Az14-K15a and Az14-K15c, have flux ratios similar to those of compact objects. On the other hand, except for the faintest one, the SFGs have flux ratios lower than those of compact models, i.e., they are more extended than typical SFGs at this redshift. These results support the GALFIT-based morphological analysis described below.

Quiescent galaxies

Figure 5 shows the size-stellar mass distribution of galaxies in the AzTEC14 group. QGs are shown with black filled circles. We also plot the size-stellar mass relations of SFGs and QGs from van der Wel et al. (2014, hereafter vdW14) and Shibuya, Ouchi & Harikane (2015, hereafter S15), both obtained using the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS; Grogin et al. 2011; Koekemoer et al. 2011) and 3D-HST (Brammer et al., 2012) data.
The comparison with other studies is done in a consistent way: we compare circularized effective radii and stellar masses measured adopting the Chabrier IMF; the galaxies in vdW14 and S15 are selected based on spectroscopic and photometric redshifts and classified by the rest-frame color criterion; and the morphological parameters in vdW14 and S15 are mainly evaluated using HST F125W & F160W-band images. In S15, SFGs are studied using rest-frame UV data, but they report that the morphological K-correction for them is small. The best-fit Sérsic indices and effective radii of the QGs Az14-K15a and Az14-K15c are listed in Table 1. Fig. 6 shows the observed, model and residual images of Az14-K15a and Az14-K15c obtained with GALFIT. We show the radial profiles of Az14-K15a and K15c in the central and left panels of Fig. 7. Az14-K15c is well characterized as a massive compact elliptical, similar to the QGs found at high redshift (e.g., Daddi et al. 2005; Trujillo et al. 2006, 2007; Toft et al. 2007; van Dokkum et al. 2008, 2010; Damjanov et al. 2009), but there is a matter of concern for Az14-K15a: it hosts an X-ray AGN detected with Chandra (Lehmer et al., 2009), which can sharpen its radial profile. Since it is hard to spatially resolve an AGN from a high-redshift galaxy with current instruments, here we deal with the AGN component of Az14-K15a by assuming that the AGN-to-stellar flux ratio is comparable to the emission-line-to-continuum flux ratio in the K′ band. Fig. 8 shows the spectrum and spatial extent of the [OIII] of Az14-K15a obtained in Kubo et al. (2015). It shows double peaks in the spectral direction but is a point-like source in the spatial direction, and the line widths and fluxes of the shorter- and longer-wavelength peaks differ from each other. Together, the Hβ and [OIII] emission line fluxes contribute about 30% of the K′-band flux. We therefore re-fit Az14-K15a with a model composed of a point source and a Sérsic model with a fixed flux ratio, assuming that all of the emission line flux redshifted into the K′ band originates in the AGN(s) (the central point source) and all of the continuum emission comes from the stellar component (the Sérsic model). Note that the influence of such a nuclear component is negligible for Az14-K15c: no signature of an AGN is detected, and the upper limit on the contamination of the [OIII] to its K′-band flux is less than 1% from our spectroscopic observations in Kubo et al. (2015). The Sérsic index and effective radius of the best-fit Sérsic component of the double-component fit are also listed in Table 1. The green thin and thick long-dashed lines in the central panel of Fig. 7 show the best-fit point source and Sérsic models, and the sum of the two. The model and residual images of the single- and double-component fits are shown in the second and third panels of Fig. 6. Both fits look very similar, but on large scales the observed radial profile is reproduced better by the double-component model. From the above, we find that one of the two QGs in the AzTEC14 group is as compact as field QGs at similar redshift, while the other is as large as a giant elliptical today, though with large uncertainty in the latter case.
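To make the modeling concrete, here is a minimal sketch of the kind of two-dimensional Sérsic fit that GALFIT performs, using astropy's Sersic2D on mock data. It is a toy stand-in, not the analysis used in the paper: in particular it omits the PSF convolution that GALFIT applies, which is essential for real AO data.

```python
import numpy as np
from astropy.modeling.models import Sersic2D
from astropy.modeling.fitting import LevMarLSQFitter

rng = np.random.default_rng(1)
y, x = np.mgrid[0:64, 0:64]

# Mock "observed" galaxy: an n = 4 spheroid plus Gaussian sky noise.
truth = Sersic2D(amplitude=1.0, r_eff=6.0, n=4.0, x_0=32, y_0=32,
                 ellip=0.2, theta=0.5)
image = truth(x, y) + 0.05 * rng.standard_normal((64, 64))

# Fit a Sersic model to recover n and r_eff (no PSF convolution here).
init = Sersic2D(amplitude=0.5, r_eff=4.0, n=2.0, x_0=31.0, y_0=31.0)
fitted = LevMarLSQFitter()(init, x, y, image)
print(f"n = {fitted.n.value:.2f}, r_eff = {fitted.r_eff.value:.2f} pix")
```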
It has been reported that QGs in protoclusters have sizes larger than those in general fields, implying accelerated size growth in overdense regions, possibly through enhanced merger rates (Zirm, Toft & Tanaka, 2012; Cooper et al., 2012; Papovich et al., 2012; Lotz et al., 2013; Newman et al., 2014; Andreon, Dong & Raichoor, 2016). First, our results challenge the reliability of past studies that argued for compact morphologies of QGs at high redshift without NIR spectroscopic follow-up. In the case of the SSA22 protocluster, five (about a third) of the candidate QGs selected in Kubo et al. (2013) are now spectroscopically confirmed as protocluster members in Kubo et al. (2015). Except for Az14-K15c, they are X-ray detected and were confirmed by detecting redshifted [OIII] or Lyα emission lines plausibly originating in AGNs, similar to Az14-K15a. Other studies also report that AGNs are frequently seen among massive QGs at high redshift (Olsen et al., 2013; Marsan et al., 2016). On the other hand, some massive compact ellipticals at high redshift are certainly genuine, like Az14-K15c here and objects confirmed by deep spectroscopic observations in other studies (e.g., van Dokkum et al. 2008). Further morphological studies that properly treat AGN components are required to establish when giant ellipticals appeared in clusters of galaxies. It should be noted that the shorter-wavelength [OIII] emission line peak of Az14-K15a has a wing on the upper side with respect to the centre (Fig. 8). Since no such component is detected in either continuum image, this wing component should have a large [OIII] equivalent width, like Lyα Blobs. The wing component is much more extended than the typical size of galaxies at this redshift. We discuss the origin of these extended and double-peaked [OIII] emission lines in § 4.2.

Star forming galaxies

The SFGs, except for Az14-K15b and ADF22.4, are shown with black filled squares in Fig. 5; Az14-K15b is too diffuse to obtain a reasonable fit, and ADF22.4 is not significantly detected on the IRCS-AO K′-band image. The stack of the SFGs is shown with the black open square. Except for the lowest stellar mass one, they tend to be larger than normal SFGs at the same redshift. In addition, the region of Az14-K15b detected over 2σ per pixel is quite extended. Although these SFGs are too faint to constrain their morphological parameters robustly, it is interesting that the massive SFGs all lie above the size-stellar mass relation of field galaxies. The simple photometric analysis in Fig. 4 also supports this tendency. The right panel of Fig. 7 shows the radial profile of the stack of the SFGs in the AzTEC14 group, compared with stacks of model galaxies spanning the range of typical galaxies at this redshift. We simulate stacked images by stacking five model galaxies with the same magnitude distribution as the SFGs in the AzTEC14 group and with morphological parameters randomly scattered over typical ranges of effective radius, Sérsic index, axis ratio and position angle. The black dashed line and gray shaded region show the median and the 2σ range around the median of the radial profiles of the model stacks. The radial profile of the stack of the SFGs in the AzTEC14 group is flatter than those of the model stacks of typical galaxies, i.e., the observed radial profile cannot be reproduced unless the sample is dominated by galaxies larger than typical. From the above, we conclude that the massive SFGs in the AzTEC14 group are on average larger than typical SFGs at the same redshift.
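The stacking test above rests on azimuthally averaged radial profiles. A minimal sketch of the bookkeeping, with placeholder arrays standing in for the real, magnitude-scaled cutouts:

```python
import numpy as np

def radial_profile(image, center):
    """Azimuthally averaged profile in 1-pixel-wide annuli."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1]).astype(int)
    flux = np.bincount(r.ravel(), weights=image.ravel())
    npix = np.bincount(r.ravel())
    return flux / npix  # mean flux per annulus

# Median-stack five centered cutouts (random noise here as a placeholder).
cutouts = [np.random.default_rng(i).standard_normal((41, 41)) for i in range(5)]
stack = np.median(cutouts, axis=0)
profile = radial_profile(stack, center=(20, 20))
```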
The difference in the size-stellar mass relation between the SFGs in the AzTEC14 group and those in general fields likely originates in sample bias, since the SFGs in the AzTEC14 group are classified as DRGs, i.e., rare massive dusty starburst galaxies. The size-stellar mass distribution of our sample is similar to that of massive Hα emitters (HAEs) (Tadaki et al., 2014). In that study, low-mass HAEs lie on the size-stellar mass relation of normal SFGs at similar redshift, while massive HAEs follow a different relation. They searched for HAEs in general fields using a narrow-band filter, but the strong spatial clustering of HAEs implies that they are plausible progenitors of massive ETGs in clusters or groups, similar to the SFGs in the AzTEC14 group. The environmental dependence of the morphologies of LBGs is discussed in many studies, while the galaxies studied here are biased toward DRGs, which do not often overlap with LBGs (Kubo et al., 2013). We find no significant difference between C50, classified as an LBG, and field galaxies, similar to Peter et al. (2007) and Overzier et al. (2008), while Hine et al. (2016) reported an enhanced merger fraction among the LBGs in the SSA22 protocluster. These studies used rest-frame UV images; further rest-frame optical observations may also be needed to discuss the environmental dependence of galaxy morphologies. On the other hand, Peter et al. (2007) reported that DRGs span a relatively wide range of morphological parameters and include more high-multiplicity and compact objects than LBGs. The environmental dependence of galaxy morphologies is still controversial, but at least we can argue that DRGs, i.e., strongly clustered massive galaxies and plausible progenitors of massive ETGs today, have morphologies different from LBGs.

Sub-mm sources

Ikarashi et al. (2015) reported small median circularized sizes for bright SMGs, comparable to the effective radii of compact ellipticals at high redshift. On the other hand, Rujopakarn et al. (2016) reported that relatively faint sub-mm sources have median sub-mm sizes comparable to those of their stellar components. There are five moderately luminous sub-mm sources identified with ALMA in the AzTEC14 group, and three of them are spectroscopically confirmed at z=3.09. We show the stamps of four of the five sub-mm sources (K15b=ADF22.16, K15e=ADF22.11, ADF22.4 and ADF22.10) in Fig. 2, while no significant counterpart is detected for the remaining one, ADF22.17, in any band. The green contours in Fig. 2 show the isophotal contours of the 1.1 mm sources detected above 3σ per beam. The blue solid lines on Az14-K15b in Fig. 2 show the slit used in our MOIRCS spectroscopy. As the alignment accuracy of MOIRCS is better than 0.1 arcsec rms (footnote 4), we may have confirmed the stellar component to the north of the sub-mm source and/or the sub-mm source itself. The bright object detected near ADF22.4 is a spectroscopically confirmed galaxy at a different redshift (Kubo et al., 2015), and ADF22.4 may be lensed by this object. Although the K′-band counterparts of the sub-mm sources are too faint to constrain robust morphological parameters with GALFIT, there are several clues suggesting that they have spatially extended stellar components: Az14-K15b is clearly extended.
A large fraction of the total flux of Az14-K15e is lost on our IRCS-AO K′-band image, and possible counterparts of ADF22.4 and ADF22.10 are detected on our MOIRCS Ks-band image but not on our IRCS-AO K′-band image. The influence of morphology on detection completeness and on the measured total flux values on our IRCS-AO K′-band image is described in Appendix C. Their large sizes can cause such poor detections on our IRCS-AO K′-band image. Umehata et al. (2017) report the deconvolved angular sizes (FWHM) of ADF22.4 and of the other sub-mm sources in the SSA22 protocluster, though the spatial resolution and signal-to-noise ratio are not sufficient to show whether they are remarkably compact or not. At this point, it is also not clear whether there is a segregation between the sub-mm and rest-frame optical morphologies.

3.2 Deficiency of low mass galaxies

The stellar mass function is one of the key properties characterizing a group of galaxies, and deep K-band images are useful for constraining the stellar mass function of galaxies at this redshift. In our previous study with MOIRCS (Kubo et al., 2016), we showed that the stellar mass function of the AzTEC14 group is consistent, above the completeness limit, with those of proto-BCG groups predicted from the galaxy formation models based on the Millennium simulation (Springel et al., 2005; De Lucia & Blaizot, 2007; Guo et al., 2011). Given the empirical size-stellar mass relation in general fields, galaxies with stellar masses below that completeness limit typically have sizes smaller than 1 kpc. If so, our new IRCS-AO K′-band image could give further constraints on the stellar mass function of the AzTEC14 group: the completeness limit goes deeper, and most such compact low-mass galaxies are expected to be detectable on our IRCS-AO K′-band image. New members should therefore be detected if the stellar mass function of the AzTEC14 group continues to be consistent with the above cosmological numerical simulations. However, no additional member is detected in our IRCS-AO K′-band image. The detection completeness of galaxies depends on both colors and morphologies. The influence of colors as red as those of the known members is already included in the above stellar mass completeness limit. Given that many of the galaxies in the AzTEC14 group have sizes larger than typical galaxies, we cannot ignore the influence of source morphology on detection completeness. The MOIRCS Ks-band image may also be affected by source morphology, while our previous study ignored such an effect. We discuss the stellar mass function of the AzTEC14 group in § 4.5.

3.3 Fitting errors and detection completeness

In this section, we test the influence of PSF variation, the reproducibility of morphological parameters with GALFIT, and the detection completeness on our IRCS-AO K′-band and MOIRCS Ks-band images.

PSF variation

Since the performance of an AO system depends on the separations of the targets from the LGS and TTGS, we need to consider the influence of PSF variation when estimating morphological properties. The separation between the LGS and the PSF reference star is 9 arcsec, while the separations between the LGS and our targets span a wider range. According to the documented performance of the AO188 (footnote 5), the FWHM of the PSF at Az14-K15d can be somewhat smaller than that at the position of the PSF reference star, those at Az14-K15a, K15c and K15e can be slightly smaller, and that at Az14-K15f can be about 10% larger. The degradation of performance owing to the separation from the TTGS is similar for all our targets.
To see the influence of PSF variation, we compare the GALFIT results obtained using different PSF reference stars. Besides the PSF reference star adopted in this study, three stars were observed simultaneously, but the FWHMs of the PSFs measured from them are larger. The separations of these stars from the TTGS and LGS range from 44 to 75 arcsec and from 15 to 23 arcsec, respectively. We compare the Sérsic indices and effective radii estimated with these stars and with the PSF reference star adopted in this study in Fig. 9. The influence of PSF variation on the estimated effective radii is small, except for the galaxy with the largest effective radius in the group. On the other hand, the estimated Sérsic indices vary greatly with the PSF reference star used. Again, at Az14-K15c, the brightest member and one of the key objects of this study, the PSF is expected to be slightly smaller than at the adopted PSF reference star. In Fig. 9, using PSF references with different FWHMs changes the estimated Sérsic index and effective radius of Az14-K15c by only 0.5 and 0.2 kpc, respectively. Thus the results for Az14-K15c are not likely to be strongly affected by PSF variation, though ideally we should also test PSF references with PSF sizes smaller than that of the adopted reference star.

Performance of GALFIT on our IRCS-AO K′-band image

Next, we test the performance of GALFIT on our IRCS-AO K′-band image. To test the reproducibility of the morphological parameters, we generate mock galaxy images by producing model galaxy images convolved with the observed PSF profiles using GALFIT and placing them on blank fields of the observed image to add the sky fluctuation; we then re-run GALFIT. We show the deviations of the measured values from the input values in Fig. 11. Briefly, for models representing typical compact elliptical galaxies, the 1σ rms errors of the re-estimated effective radii and Sérsic indices are small; for typical late-type galaxies they are larger, and for large late-type models they are larger still. For faint, extended objects, the measured values are significantly underestimated. The Sérsic indices of faint objects and/or objects with large Sérsic indices suffer from large errors. According to Table 1 and Figs. 10 and 11, we can obtain reliable morphological parameters for Az14-K15c, but the other members may suffer from large errors. Az14-K15a is comparably bright, but as described above, its K′-band flux is boosted by an AGN component. Some parameters of Az14-K15d and Az14-K15f could be trusted, though none is reliable for Az14-K15e. There are possible additional uncertainties originating in substructures such as AGNs and the giant clumps frequently seen among massive galaxies at high redshift (e.g., Elmegreen & Elmegreen 2006; Genzel et al. 2006, 2008; S15; Shibuya et al. 2016). S15 reported that the fraction of clumpy galaxies increases with redshift. Multiple giant clumps are not clearly identified in our targets, possibly because of the low signal-to-noise (S/N) ratio compared to previous studies with the HST, but the different morphologies of Az14-K15b and C50 at different wavelengths imply complex structures.

Detection completeness

Here we test the dependence of detection completeness on source morphology by generating mock galaxy images in the way described in § 3.3.2 and extracting them with SExtractor. We extract sources detected above a per-pixel threshold over contiguous areas appropriate to the pixel scales of the IRCS-AO K′-band and MOIRCS Ks-band images, respectively.
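A simplified stand-in for this injection-recovery test, sketched with photutils segmentation instead of SExtractor (the function and its arguments are hypothetical illustrations, not the paper's actual pipeline):

```python
import numpy as np
from astropy.convolution import convolve
from astropy.modeling.models import Sersic2D
from photutils.segmentation import detect_sources

def completeness(image, psf, total_flux, r_eff, n, threshold, nsim=100):
    """Fraction of injected Sersic models recovered by segmentation.

    `psf` is a small, normalized, odd-sized kernel stamp; `threshold`
    is the per-pixel detection threshold (scalar or array).
    """
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    recovered = 0
    for _ in range(nsim):
        x0 = np.random.uniform(30, nx - 30)
        y0 = np.random.uniform(30, ny - 30)
        model = Sersic2D(amplitude=1.0, r_eff=r_eff, n=n,
                         x_0=x0, y_0=y0)(xx, yy)
        model *= total_flux / model.sum()   # scale to the requested flux
        sim = image + convolve(model, psf)  # inject the PSF-convolved model
        seg = detect_sources(sim, threshold, npixels=9)
        if seg is not None and seg.data[int(y0), int(x0)] != 0:
            recovered += 1
    return recovered / nsim
```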
We show the detection completeness for the tested Sérsic indices and effective radii in Figs. 12 & 13. Fig. 14 shows the accuracy of the measured total magnitudes. The detection completeness on our IRCS-AO K′-band image drops sharply as source sizes increase. Sufficiently bright objects are almost completely detectable on our IRCS-AO K′-band image, but their total magnitudes can be significantly underestimated. The detection completeness on our MOIRCS Ks-band image is less sharply, but still noticeably, affected by source morphology.

4 Discussion

4.1 Dense group of galaxies at high redshift

There are many dense groups of massive galaxies in the SSA22 protocluster, which are plausible evidence of the formation of giant elliptical galaxies via hierarchical multiple mergers. Such galaxy groups are reported in other studies. A dense cluster (CL J1001) found by Wang et al. (2016) is similar to the AzTEC14 group in many respects. The CL J1001 cluster has a collapsed halo and contains 11 massive galaxies within 80 kpc of the cluster centre (stellar masses quoted with the Salpeter IMF correspond to smaller values with the Chabrier IMF). They found no object comparable to the CL J1001 cluster in the Millennium simulation, while the AzTEC14 group has only one comparable group at each snapshot. The CL J1001 cluster and the AzTEC14 group are also similar in their overdensities of DRGs, QGs, sub-mm sources and AGNs. The QGs and many SFGs in the CL J1001 cluster are compact, while no massive compact SFG is seen in the AzTEC14 group. Several studies have reported compact SFGs and sub-mm galaxies at high redshift (e.g., Simpson et al. 2015; Ikarashi et al. 2015; Tadaki et al. 2015) that could evolve into compact QGs simply by quenching their star formation. The absence of compact SFGs in the AzTEC14 group may be owing to the short duration of the compact SFG phase. We note that a larger morphological K-correction may be required in Wang et al. (2016): they applied the morphological K-correction based on vdW14, but their sample is biased toward red galaxies rarely seen in general fields, like the galaxies in the AzTEC14 group, which show very different morphologies in the rest-frame UV and optical (Fig. 2). Careful subtraction of nuclear components with strong [OII] emission lines may also be required, since many galaxies in the CL J1001 cluster show signatures of AGNs. It is beyond the scope of this paper, but it is surprising that such rare density peaks, hardly seen in the volumes of current large cosmological numerical simulations, are being discovered by field surveys with limited volumes. Larger-volume cosmological numerical simulations and wide-field surveys of such dense groups are required to test the consistency of their simulated and observed properties.

4.2 Extended and double peaked [OIII]λ5007 emission lines from a quiescent galaxy

Since Az14-K15a is a young QG (strictly speaking, a post-starburst galaxy whose SED indicates that burst-like star formation ended recently), footprints of its quenching process can still be observed. It is interesting that Az14-K15a is an AGN showing double-peaked and spatially extended [OIII]λ5007 emission lines. Plausible origins of double-peaked emission lines from an AGN(s) are dual AGNs, outflows and/or a rotating narrow line region (e.g., Müller-Sánchez et al. 2011, 2015). Since the two peaks have different line widths, the rotating narrow line region scenario is ruled out.
It is hard to spatially resolve dual AGNs at this redshift with current instruments, but if the double-peaked [OIII]λ5007 emission lines originate in dual AGNs, this is direct evidence of a major merger of galaxies (or giant stellar clumps) each hosting a supermassive black hole. Similarly, objects with double-peaked CO emission lines are seen in a protocluster (Tadaki et al., 2014) and in the dense compact cluster of Wang et al. (2016), suggesting frequent gas-rich mergers of galaxies in protocluster environments. A spatially extended metal emission line region is more likely produced by outflows than by inflows of pristine gas. Thus the [OIII]λ5007 of Az14-K15a likely originates in outflows, or in a combination of outflows and dual AGNs. It is very interesting to find a plausible signature of outflows from an AGN(s) in a QG in a protocluster at this early epoch. Similarly, [OIII] ([OII]) Blobs have been found at other redshifts (Brammer et al., 2013; Yuma et al., 2013; Harikane et al., 2014). Further deep, spatially resolved spectroscopy of Az14-K15a and other post-starburst galaxies in protoclusters may help us understand how AGN activity relates to the quenching of galaxies.

4.3 Evolution scenario of a BCG

The discovery of a massive compact elliptical, Az14-K15c, in such a proto-BCG group supports the two-phase formation scenario of giant elliptical galaxies, in which massive compact ellipticals form first and then grow in size and stellar mass through a series of mergers (e.g., Oser et al. 2010); this scenario has also been supported by many observational studies (e.g., van Dokkum et al. 2010; Morishita et al. 2015; Zirm, Toft & Tanaka 2012; Cooper et al. 2012; Papovich et al. 2012; Lotz et al. 2013; Newman et al. 2014; Andreon, Dong & Raichoor 2016). It is known that BCGs lie on a size-stellar mass relation above that of non-BCGs (Bernardi et al., 2007; Bernardi, 2009; Zhao, Aragón-Salamanca & Conselice, 2015). Several simulations (e.g., Laporte et al. 2013; Shankar et al. 2015) and observations (e.g., Lidman et al. 2013; Burke & Collins 2013; Zhao, Aragón-Salamanca & Conselice 2015) argue that BCGs double their stellar masses over cosmic time, while little growth of BCGs is reported in, e.g., Collins et al. (2009) and Stott et al. (2010, 2011). Recently, Zhang et al. (2016) reported a redshift-dependent BCG-cluster mass relation. Az14-K15c needs to grow strongly in size and by about four times in stellar mass to evolve into a BCG hosted in one of the most massive clusters today, while substantial size growth is in any case typically expected for compact QGs at high redshift (e.g., Toft et al. 2007; van Dokkum et al. 2008; Damjanov et al. 2009; van Dokkum et al. 2010; van der Wel et al. 2014). We roughly estimate the size and stellar mass growth of Az14-K15c assuming that all the group members will merge into this object. Adopting the virial theorem and following Naab, Johansson & Ostriker (2009), the size growth of an object by mergers can be written as $R_{g,f}/R_{g,i} = (1+\eta)^{2}/(1+\eta\epsilon)$, where $R_{g,f}$ and $R_{g,i}$ are the final and initial gravitational radii, and $\eta$ and $\epsilon$ are the ratios of the masses and of the mean square speeds of the stars, respectively, between the accreted and initial objects. Here we assume that the velocity dispersion of each group member is similar to those of compact QGs at high redshift, 200 to 500 km s⁻¹ for galaxies in this stellar mass range, extrapolated from van Dokkum, Kriek & Franx (2009) and Bezanson et al. (2009). The above formalism holds in the case of dry mergers.
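A small numerical illustration of this virial size-growth formula (the η and ε values are examples, not measurements of the group members):

```python
def merger_size_growth(eta, eps):
    """R_gf / R_gi = (1 + eta)^2 / (1 + eta * eps), following
    Naab, Johansson & Ostriker (2009)."""
    return (1 + eta) ** 2 / (1 + eta * eps)

# Doubling the mass in one equal-mass dry merger doubles the size...
print(merger_size_growth(eta=1.0, eps=1.0))  # 2.0
# ...while accreting the same mass via low-dispersion minor mergers
# (eps -> 0) grows the size about four times.
print(merger_size_growth(eta=1.0, eps=0.0))  # 4.0
```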
Note that if the massive SFGs in the group are gas rich when they merge, the size growth of Az14-K15c by mergers can be suppressed (Welker et al., 2015). All the galaxies in the AzTEC14 group can merge into one massive galaxy within a few Gyr, according to numerical simulations of compact groups (e.g., Barnes 1989; Bode, Cohn & Lugger 1993) with dynamical timescales similar to that of the AzTEC14 group. If we simply use the observed stellar masses of the members, the stellar mass of the final product is about double the initial value of Az14-K15c, with a corresponding size growth (Case A in Fig. 5). This is consistent with the simulations and observations that predict continuing strong size growth at later times. If the SFGs in the AzTEC14 group remain on or above the star formation main sequence before merging, they can double their stellar masses and exhaust large fractions of their gas. In that case, the stellar mass of the final product is four times the initial value, with a correspondingly larger size (Case B in Fig. 5). If Az14-K15c follows the Case B scenario and/or there is substantial extra accretion of galaxies, it could reach present-day BCG properties early. The contribution of satellite mergers to the evolution of a compact elliptical was discussed using a deep HST image by Morishita & Ichikawa (2016). They argue that, to reproduce the observed and simulated size growth of massive ETGs, not only the merging of satellites but also in situ star formation within them is required. In Case B, the net stellar mass increase from mergers is a substantial fraction of the descendant galaxy, similar to the results of Morishita & Ichikawa (2016). Note that further size and stellar mass growth via satellites is expected for Az14-K15c, since some of the members of the AzTEC14 group are sub-mm galaxies, which may grow in stellar mass more rapidly than galaxies on the star formation main sequence; much more accretion of satellites is expected at the core of a protocluster; and the stellar mass completeness limit of our observations is not as deep as that of Morishita & Ichikawa (2016). We note that some of the mergers expected in the AzTEC14 group can have stellar mass ratios categorized as major mergers. Multiple major mergers can form the slow rotators frequently seen among the most massive ETGs like BCGs, while minor and major binary mergers result in fast rotators (Moody et al., 2014).

4.4 Nascent red sequence galaxies

Not only compact QGs but also SFGs in protoclusters are plausible progenitors of massive ETGs today. According to the model predictions, most of the members of the AzTEC14 group plausibly merge into one BCG by the current Universe, but they still inform us about how stars formed in galaxies at early times. In particular, the massive SFGs in the AzTEC14 group are mostly classified as DRGs, known to show strong clustering (Quadri et al., 2007; Ichikawa et al., 2007), i.e., to preferentially inhabit environments that evolve into clusters or groups. One plausible explanation for the large sizes of massive SFGs classified as DRGs is a difference in halo masses before their dark matter halos are incorporated into one massive halo. The sizes and rotational velocities of galaxy discs follow the sizes and circular velocities of their host dark matter halos, and more strongly clustered galaxies are hosted in more massive dark matter halos.
Based on the clustering analysis, Quadri et al. (2007) reported halo masses for photo-z- or spec-z-selected galaxies and for DRGs, with DRGs being hosted by the more massive halos. According to Behroozi, Wechsler & Conroy (2013), the mean stellar-to-halo mass ratio peaks at a characteristic halo mass; above this peak, a large increase in halo mass raises the stellar mass inside by only about three times. Thus it is no wonder that the halo masses of galaxies at a given stellar mass range widely. Both the size and circular velocity of a halo are proportional to the cube root of the halo mass, as approximated by the spherical collapse model (Eq. 2 of Mo, Mao & White 1998). Thus DRGs would have disc sizes (and rotational velocities) up to twice those of normal SFGs, consistent with the observed size difference of the massive SFGs in the AzTEC14 group. The sizes and stellar masses of the massive SFGs in the AzTEC14 group are comparable to those of massive ETGs in the local Universe (Fig. 5). It is interesting to find a post-starburst galaxy, Az14-K15a, composed of an AGN component and a flat stellar component. Size and stellar mass growth via mergers may be less important for such galaxies to evolve into local massive ETGs. Even though they have late-type morphologies at this point, they can evolve into fast-rotating ETGs by exhausting their gas (e.g., Khochfar et al. 2011). On the other hand, the remarkable compactness of Az14-K15c supports the two-phase formation scenario, in which massive compact ellipticals form first and evolve into local massive ETGs through many later mergers (Oser et al., 2010). Gas-rich major mergers (Cox et al., 2006; Naab et al., 2007; Wuyts et al., 2010; Bournaud et al., 2011), and inflowing gas and wet mergers of inflowing clumps via disc instabilities (Dekel, Sari & Ceverino, 2009; Ceverino et al., 2015), can form massive compact ellipticals from large discy SFGs. Further studies of the relation between morphologies, stellar populations and AGN activity in protoclusters, including such large red SFGs, are needed to understand how massive SFGs transformed into the massive ETGs of today.

4.5 Stellar mass function and deficiency of low mass galaxies: is the AzTEC14 group a more massive group?

Given that many galaxies in the AzTEC14 group have large sizes, source morphology is likely to severely affect the detection completeness not only on our IRCS-AO K′-band image but also on our MOIRCS Ks-band image. This suggests that the stellar mass function obtained with MOIRCS in our previous study is less complete than we expected: the stellar mass completeness limit can be about twice as high, and underestimated total fluxes can reduce the stellar mass estimates. The AzTEC14 group may therefore be richer than we claimed in our previous study. Unfortunately, there may be no such massive group in current large-volume cosmological numerical simulations, since the comparison groups found in our previous study are already the richest groups in the Millennium simulation. Note that Hatch et al. (2009) reported that the stellar mass function around the QSO at the centre of another protocluster, also a plausible progenitor of a BCG, is consistent with the galaxy formation models based on the Millennium simulation. But their results may be less affected by galaxy morphologies, since their targets are at a lower redshift than ours, i.e., less affected by cosmological surface brightness dimming, and they used the HST, whose Strehl ratio is typically higher than that of the AO188.
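As a back-of-the-envelope aside, bolometric surface brightness dims as (1+z)^4; the lower comparison redshift is not preserved in this copy, so z = 2.2 below is purely a placeholder:

```python
# (1+z)^4 bolometric surface brightness dimming relative to z = 0.
for z in (2.2, 3.09):
    print(f"z = {z}: dimming factor ~ {(1 + z) ** 4:.0f}")
```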
Much deeper imaging observations are required to measure the stellar mass function robustly and to show whether the observed deficiency of faint galaxies originates in diffuse morphologies and/or an actual deficiency. Suppression of the formation of low-mass galaxies by reionization (Bullock, Kravtsov & Weinberg, 2000) and supernova feedback (Benson et al., 2002, 2003) is predicted, but the strength of such effects is still an open question. The large stellar mass already in place in the group implies that it had a star formation density higher than that in general fields in the past, i.e., a strong UV radiation field heating the gas of low-mass halos. But the characteristic halo mass at which a halo loses half of its baryons to reionization or supernova feedback is too low to cause the deficiency of faint galaxies observed in the AzTEC14 group. On the other hand, supernova feedback can effectively transport angular momentum from the inner to the outer radii of galaxies and extend their sizes (Benson et al., 2002). It can also happen that gravitational heating prevents the cooling of low-mass sub-halos, since the AzTEC14 group may be (partly) virialized given its halo mass (Kubo et al., 2016).

5 Conclusion

We conducted deep, high-resolution imaging of an extremely dense group of galaxies, the AzTEC14 group, at the core of the protocluster at z=3.09 in the SSA22 field, using AO188/IRCS on the Subaru Telescope, to study the morphological evolution of massive ETGs. The wide morphological variety of the members implies that the present-day morphology-density relation had only just begun to form. We confirm that one of the two QGs in the group, the most massive member, is a compact QG. This supports the two-phase formation scenario of giant elliptical galaxies, in which massive compact ellipticals form first and then grow in size and stellar mass through a series of mergers. To form a local BCG-like object as early as is sometimes observed, in situ star formation in the group members may be important. The other QG in the group is fitted with a model composed of a nuclear component and a not-so-compact Sérsic model, and shows double-peaked and spatially extended [OIII]λ5007 emission lines. Finding evidence of outflows from an AGN(s) in a young QG is a key result. The massive SFGs in the group have stellar masses and sizes comparable to those of local massive ETGs. Even if massive SFGs later become compact spheroids through gas-rich major mergers, their large stellar masses imply the importance of star formation before violent morphological evolution. Although we obtained an image more sensitive to typical field galaxies than our previous MOIRCS Ks-band image, no candidate new group members are detected. This implies that there is an actual deficiency of low-mass galaxies and/or that they are too diffuse to be detected on our IRCS-AO K′-band image. Moreover, given the morphological trends of the AzTEC14 group found in this study, our previous estimate of the stellar mass function of the AzTEC14 group with MOIRCS is likely to be less complete than we expected. We argue for more careful treatment of diffuse red galaxies at high redshift, which are hardly detected with the HST but may play important roles in massive galaxy formation. Deeper and wider imaging surveys at longer near-infrared wavelengths with large telescopes are needed to study such red, diffuse and faint galaxies.
Careful subtraction of AGN components from compact QGs and SFGs is also important to evaluate the size evolution history of massive ETGs correctly.

Acknowledgments

This study is based on data collected at the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. We would like to thank the Subaru Telescope staff for their help and support with the observations. Our study owes a great deal to the archival Subaru Suprime-Cam data (Matsuda et al. 2004), the Spitzer IRAC & MIPS data taken in Webb et al. (2009), and the Chandra data taken in Lehmer et al. (2009). We also thank the AzTEC/ASTE observers of the SSA22 field for providing the updated source catalog. This work was supported by the Global COE Program "Weaving Science Web beyond Particle-Matter Hierarchy", MEXT, Japan. YM acknowledges support from JSPS KAKENHI Grant Number 20647268. This work was partially supported by JSPS Grants-in-Aid for Scientific Research No. 26400217. HU is supported by the ALMA Japan Research Grant of NAOJ Chile Observatory, NAOJ-ALMA-0071, 0131, 140, and 0152. HU is supported by a JSPS Grant-in-Aid for Research Activity Start-up (16H06713). This paper makes use of the following ALMA data: ADS/JAO.ALMA#2013.1.00162.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.

Appendix A Influence of PSF variation

In this work, we perform two-dimensional fits of galaxies adopting the observed image of the star closest to the image centre as the PSF reference. Since the performance of an AO system is not uniform across the whole image, we need to consider the influence of PSF variation when evaluating morphological properties. According to the documented performance of the AO188, the FWHM of the PSF at our targets can differ somewhat from that at the star adopted as the PSF reference in this study. To test the influence of PSF variation, we compare two-dimensional fits performed with different stars in our image as PSF references. The (R.A., Dec) and the separations from the TTGS and LGS of these stars are as follows: (22:17:37.644, +0:18:06.71), 75 and 15 arcsec; (22:17:36.457, +0:18:34.00), 44 and 18 arcsec; (22:17:35.648, +0:18:24.29), 52 and 23 arcsec. Fig. 9 shows the morphological properties evaluated by adopting different PSF references: the effective radii and Sérsic indices measured with each alternative star are shown against those estimated with the PSF reference star adopted in this study. The influence of PSF variation on the estimated effective radii is small, except for the galaxy with the largest effective radius in the group. On the other hand, the estimated Sérsic indices vary greatly with the choice of PSF reference.

Appendix B Reproducibility of morphological parameters

Here we test the reproducibility of morphological properties with GALFIT. We generate mock galaxy images by producing model galaxy images convolved with the observed PSF profiles using GALFIT and placing them on blank fields of the observed image to add the sky fluctuation; we then re-run GALFIT. We test Sérsic models spanning ranges of Sérsic index, effective radius and total magnitude. We performed a thousand simulations for each model and examine the deviations of the re-estimated values from the inputs.
Fig. 10 compares the initial inputs with the mean effective radii, Sérsic indices and total magnitudes measured on the simulated images, and Fig. 11 compares the initial inputs with the standard deviations of the measured values about them. Fitting errors grow for fainter models and for models with larger effective radii and Sérsic indices. For models representing typical compact elliptical galaxies, the 1σ rms errors of the re-estimated parameters are small; for typical late-type galaxies they are larger, and for large late-type models they are larger still.

Appendix C Detection Completeness

We test the dependence of detection completeness on galaxy morphology on both our IRCS-AO K′-band and MOIRCS Ks-band images by generating mock galaxy images in the way described in Appendix B and extracting them with SExtractor (Bertin & Arnouts, 1996). For the MOIRCS Ks-band image, the PSF convolved with the model galaxies is generated from stars observed simultaneously with our targets (within the field of view of a single MOIRCS detector) using the PSF task of IRAF. We extract sources detected above a per-pixel threshold over contiguous areas appropriate to the pixel scales of our IRCS-AO K′-band and MOIRCS Ks-band images, respectively. Figures 12 and 13 show the detection completeness for the tested models on our IRCS-AO K′-band and MOIRCS Ks-band images, respectively. Fig. 14 shows the means of the measured total magnitudes on both images. The impact of object morphology on detection completeness is stronger for models with low Sérsic indices on the IRCS-AO K′-band image. In addition, the detection completeness declines as object sizes increase, and there is significant underestimation of total magnitudes depending on size. Object morphology affects the detection completeness on our MOIRCS Ks-band image less strongly, but noticeably. Sufficiently bright objects are almost completely detectable on our IRCS-AO K′-band image, but their total fluxes are likely to be significantly underestimated.

Footnotes

3. http://www.subarutelescope.org/Observing/DataReduction/Cookbooks/IRCSimg_2010jan05.pdf
4. http://subarutelescope.org/Observing/Instruments/MOIRCS/spec_mos.html
5. http://www.naoj.org/Observing/Instruments/AO/performance.html

References

1. Andreon S., Dong H., Raichoor A., 2016, A&A, 593, A2
2. Barnes J. E., 1989, Natur, 338, 123
3. Behroozi P. S., Wechsler R. H., Conroy C., 2013, ApJ, 770, 57
4. Benson A. J., Bower R. G., Frenk C. S., Lacey C. G., Baugh C. M., Cole S., 2003, ApJ, 599, 38
5. Benson A. J., Frenk C. S., Lacey C. G., Baugh C. M., Cole S., 2002, MNRAS, 333, 177
6. Benson A. J., Lacey C. G., Baugh C. M., Cole S., Frenk C. S., 2002, MNRAS, 333, 156
7. Bernardi M., 2009, MNRAS, 395, 1491
8. Bernardi M., Hyde J. B., Sheth R. K., Miller C. J., Nichol R. C., 2007, AJ, 133, 1741
9. Bertin E., Arnouts S., 1996, A&AS, 117, 393
10. Bett P., Eke V., Frenk C. S., Jenkins A., Helly J., Navarro J., 2007, MNRAS, 376, 215
11. Bezanson R., van Dokkum P. G., Tal T., Marchesini D., Kriek M., Franx M., Coppi P., 2009, ApJ, 697, 1290
12. Biggs A. D., Ivison R. J., 2008, MNRAS, 385, 893
13. Bode P. W., Cohn H. N., Lugger P. M., 1993, ApJ, 416, 17
14. Bournaud F., et al., 2011, ApJ, 730, 4
15. Brammer G. B., et al., 2012, ApJS, 200, 13
16. Brammer G. B., van Dokkum P. G., Illingworth G. D., Bouwens R. J., Labbé I., Franx M., Momcheva I., Oesch P. A., 2013, ApJ, 765, L2
17. Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
18. Bullock J. S., Kravtsov A. V., Weinberg D. H., 2000, ApJ, 539, 517
19. Burke C., Collins C. A., 2013, MNRAS, 434, 2856
20. Ceverino D., Dekel A., Tweed D., Primack J., 2015, MNRAS, 447, 3291
21. Chabrier G., 2003, PASP, 115, 763
22. Collins C. A., et al., 2009, Natur, 458, 603
23. Contini E., De Lucia G., Hatch N., Borgani S., Kang X., 2016, MNRAS, 456, 1924
24. Cooper M. C., et al., 2012, MNRAS, 419, 3018
25. Cox T. J., Dutta S. N., Di Matteo T., Hernquist L., Hopkins P. F., Robertson B., Springel V., 2006, ApJ, 650, 791
26. Daddi E., et al., 2005, ApJ, 626, 680
27. Damjanov I., et al., 2009, ApJ, 695, 101
28. De Lucia G., Blaizot J., 2007, MNRAS, 375, 2
29. Dekel A., Sari R., Ceverino D., 2009, ApJ, 703, 785
30. Dressler A., 1980, ApJ, 236, 351
31. Elmegreen B. G., Elmegreen D. M., 2006, ApJ, 650, 644
32. Genzel R., et al., 2008, ApJ, 687, 59
33. Genzel R., et al., 2006, Natur, 442, 786
34. Grogin N. A., et al., 2011, ApJS, 197, 35
35. Gunn J. E., Gott J. R., III, 1972, ApJ, 176, 1
36. Guo Q., et al., 2011, MNRAS, 413, 101
37. Harikane Y., Ouchi M., Yuma S., Rauch M., Nakajima K., Ono Y., 2014, ApJ, 794, 129
38. Hatch N. A., Overzier R. A., Kurk J. D., Miley G. K., Röttgering H. J. A., Zirm A. W., 2009, MNRAS, 395, 114
39. Hayano Y., et al., 2010, SPIE, 7736, 77360N
40. Hayashino T., et al., 2004, AJ, 128, 2073
41. Hine N. K., Geach J. E., Alexander D. M., Lehmer B. D., Chapman S. C., Matsuda Y., 2016, MNRAS, 455, 2363
42. Ichikawa T., et al., 2007, PASJ, 59, 1081
43. Ikarashi S., et al., 2015, ApJ, 810, 133
44. Ilbert O., et al., 2013, A&A, 556, A55
45. Khochfar S., et al., 2011, MNRAS, 417, 845
46. Kobayashi N., et al., 2000, SPIE, 4008, 1056
47. Kodama T., Tanaka I., Kajisawa M., Kurk J., Venemans B., De Breuck C., Vernet J., Lidman C., 2007, MNRAS, 377, 1717
48. Koekemoer A. M., et al., 2011, ApJS, 197, 36
49. Kubo M., et al., 2013, ApJ, 778, 170
50. Kubo M., Yamada T., Ichikawa T., Kajisawa M., Matsuda Y., Tanaka I., Umehata H., 2016, MNRAS, 455, 3333
51. Kubo M., Yamada T., Ichikawa T., Kajisawa M., Matsuda Y., Tanaka I., 2015, ApJ, 799, 38
52. Laporte C. F. P., White S. D. M., Naab T., Gao L., 2013, MNRAS, 435, 901
53. Larson R. B., Tinsley B. M., Caldwell C. N., 1980, ApJ, 237, 692
54. Lehmer B. D., et al., 2009, MNRAS, 400, 299
55. Lidman C., et al., 2013, MNRAS, 433, 825
56. Lotz J. M., et al., 2013, ApJ, 773, 154
57. Müller-Sánchez F., Comerford J. M., Nevin R., Barrows R. S., Cooper M. C., Greene J. E., 2015, ApJ, 813, 103
58. Müller-Sánchez F., Prieto M. A., Hicks E. K. S., Vives-Arias H., Davies R. I., Malkan M., Tacconi L. J., Genzel R., 2011, ApJ, 739, 69
59. Man A. W. S., et al., 2016, ApJ, 820, 11
60. Marsan Z. C., Marchesini D., Bedregal A. G., Brammer G. B., Geier S., Labbe I., Muzzin A., Stefanon M., 2016, arXiv:1606.05350
61. Matsuda Y., et al., 2004, AJ, 128, 569
62. Meza A., Navarro J. F., Steinmetz M., Eke V. R., 2003, ApJ, 590, 619
63. Mo H. J., Mao S., White S. D. M., 1998, MNRAS, 295, 319
64. Moody C. E., Romanowsky A. J., Cox T. J., Novak G. S., Primack J. R., 2014, MNRAS, 444, 1475
65. Moore B., Katz N., Lake G., Dressler A., Oemler A., 1996, Natur, 379, 613
66. Morishita T., Ichikawa T., 2016, ApJ, 816, 87
67. Morishita T., Ichikawa T., Noguchi M., Akiyama M., Patel S. G., Kajisawa M., Obata T., 2015, ApJ, 805, 34
68. Muzzin A., et al., 2013, ApJ, 777, 18
69. Naab T., Johansson P. H., Ostriker J. P., 2009, ApJ, 699, L178
70. Naab T., Johansson P. H., Ostriker J. P., Efstathiou G., 2007, ApJ, 658, 710
71. Newman A. B., Ellis R. S., Andreon S., Treu T., Raichoor A., Trinchieri G., 2014, ApJ, 788, 51
72. Olsen K. P., Rasmussen J., Toft S., Zirm A. W., 2013, ApJ, 764, 4
73. Oser L., Ostriker J. P., Naab T., Johansson P. H., Burkert A., 2010, ApJ, 725, 2312
74. Overzier R. A., et al., 2008, ApJ, 673, 143
75. Papovich C., et al., 2012, ApJ, 750, 93
76. Patel S. G., et al., 2013, ApJ, 778, 115
77. Peng C. Y., Ho L. C., Impey C. D., Rix H.-W., 2010, AJ, 139, 2097
78. Peng C. Y., Ho L. C., Impey C. D., Rix H.-W., 2002, AJ, 124, 266
79. Peter A. H. G., Shapley A. E., Law D. R., Steidel C. C., Erb D. K., Reddy N. A., Pettini M., 2007, ApJ, 668, 23
80. Quadri R., et al., 2007, ApJ, 654, 138
81. Rujopakarn W., et al., 2016, arXiv:1607.07710
82. Sersic J. L., 1968, Atlas de Galaxias Australes
83. Shankar F., et al., 2015, ApJ, 802, 73
84. Shibuya T., Ouchi M., Harikane Y., 2015, ApJS, 219, 15
85. Shibuya T., Ouchi M., Kubo M., Harikane Y., 2016, ApJ, 821, 72
86. Simpson J. M., et al., 2015, ApJ, 799, 81
87. Springel V., et al., 2005, Natur, 435, 629
88. Steidel C. C., Adelberger K. L., Dickinson M., Giavalisco M., Pettini M., Kellogg M., 1998, ApJ, 492, 428
89. Steidel C. C., Adelberger K. L., Shapley A. E., Pettini M., Dickinson M., Giavalisco M., 2003, ApJ, 592, 728
90. Stott J. P., Collins C. A., Burke C., Hamilton-Morris V., Smith G. P., 2011, MNRAS, 414, 445
91. Stott J. P., et al., 2010, ApJ, 718, 23
92. Tadaki K.-i., et al., 2014, ApJ, 788, L23
93. Tadaki K.-i., Kodama T., Tanaka I., Hayashi M., Koyama Y., Shimakawa R., 2014, ApJ, 780, 77
94. Tadaki K.-i., et al., 2015, ApJ, 811, L3
95. Tamura Y., et al., 2009, Natur, 459, 61
96. Toft S., et al., 2007, ApJ, 671, 285
97. Trujillo I., et al., 2006, MNRAS, 373, L36
98. Trujillo I., Conselice C. J., Bundy K., Cooper M. C., Eisenhardt P., Ellis R. S., 2007, MNRAS, 382, 109
99. Trujillo I., et al., 2006, ApJ, 650, 18
100. Uchimoto Y. K., et al., 2008, ASPC, 399, 373
101. Uchimoto Y. K., et al., 2012, ApJ, 750, 116
102. Uchimoto Y. K., et al., 2008, PASJ, 60, 683
103. Umehata H., et al., 2014, MNRAS, 440, 3462
104. Umehata H., et al., 2015, ApJ, 815, L8
105. Umehata H., et al., 2017, ApJ, 835, 98
106. van der Wel A., et al., 2014, ApJ, 788, 28
107. van Dokkum P. G., et al., 2008, ApJ, 677, L5
108. van Dokkum P. G., Kriek M., Franx M., 2009, Natur, 460, 717
109. van Dokkum P. G., et al., 2010, ApJ, 709, 1018
110. Vulcani B., et al., 2016, ApJ, 816, 86
111. Wang T., et al., 2016, ApJ, 828, 56
112. Williams R. J., Quadri R. F., Franx M., van Dokkum P., Labbé I., 2009, ApJ, 691, 1879
113. Webb T. M. A., Yamada T., Huang J.-S., Ashby M. L. N., Matsuda Y., Egami E., Gonzalez M., Hayashino T., 2009, ApJ, 692, 1561
114. Welker C., Dubois Y., Devriendt J., Pichon C., Kaviraj S., Peirani S., 2015, arXiv:1502.05053
115. Wuyts S., Cox T. J., Hayward C. C., Franx M., Hernquist L., Hopkins P. F., Jonsson P., van Dokkum P. G., 2010, ApJ, 722, 1666
116. Yamada T., Nakamura Y., Matsuda Y., Hayashino T., Yamauchi R., Morimoto N., Kousai K., Umemura M., 2012, AJ, 143, 79
117. Yuma S., et al., 2013, ApJ, 779, 53
118. Zhang Y., et al., 2016, ApJ, 816, 98
119. Zhao D., Aragón-Salamanca A., Conselice C. J., 2015, MNRAS, 453, 4444
120. Zirm A. W., et al., 2008, ApJ, 680, 224
121. Zirm A. W., Toft S., Tanaka M., 2012, ApJ, 744, 181
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9040553569793701, "perplexity": 4938.51455737082}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997335.70/warc/CC-MAIN-20190615202724-20190615224724-00181.warc.gz"}
http://www.english-subtitles.pro/movies/1995-the-usual-suspects.html
# Subtitles: The Usual Suspects

#### Overview

Following a truck hijack in New York, five conmen are arrested and brought together for questioning. As none of them is guilty, they plan a revenge operation against the police. The operation goes well, but then the influence of a legendary mastermind criminal called Keyser Söze is felt. It becomes clear that each one of them has wronged Söze at some point and must pay him back now. The payback job leaves 27 men dead in a boat explosion, but the real question arises now: who actually is Keyser Söze? (Overview from themoviedb.org)

The Usual Suspects · released 1995-08-16 · tt0114814 · USA, Germany · English, Hungarian, Spanish, French · 106 min · Won 2 Oscars; another 34 wins & 10 nominations.

### Subtitles

File names:
- The.Usual.Suspects.1995.BDRip.X264-TLF.(ENG).srt
- The.Usual.Suspects.1995.DVD5.720p.BluRay.x264-hV.ENG.srt
- The.Usual.Suspects.1995.1080p.BluRay.x264.AC3-ETRG.srt
- The Usual Suspects (1995).x264.544p-[1280, 544]@23.976fps.(DTS-6ch).(1h 46mn).eng.srt
- The Usual Suspects CD1.ENG.srt
- The Usual Suspects (1995) ENG.sub
- The Usual Suspects 1994 720p BRRip AC3 x264 MacGuffin.eng.srt
- The.Usual.Suspects.1995.x264.DTS.2AUDIO-WAF.Eng.srt
- The.Usual.Suspects.1995.720p.BRRip.XviD.AC3-ViSiON.srt
- (1995) The Usual Suspects EN.srt
- The.Usual.Suspects.1995.DVD5.720p.BluRay.x264-hV.EN.srt
- the.usual.suspects.dvdrip.xvid.cd1-cultxvid.srt
- The Usual Suspects (1995) 25 ftp.srt
- Usual Suspects, The (1995).ShareReactor.srt
- The.Usual.Suspects.1995.720p.BluRay.DTS.x264-ESiR.srt
- Usual Suspects.en.srt
- The.Usual.Suspects.1995.1080p.BluRay.DTS.x264-CyTSuNee.English.srt
- The Usual Suspects[1995]DvDrip[Eng]-Stealthmaster.srt
- The Usual Suspects (1995).srt
- The_Usual_Suspects.(1995).DVDRip.XviD-NewMov.English.srt
- The Usual Suspects (1995).m1080p.BluRay.AC3-x264.srt
- The Usual Suspects (1995) [HDRrip.XviD.AC3-TLF-CD1].srt
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9424206018447876, "perplexity": 28512.01469563988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321553.70/warc/CC-MAIN-20170627203405-20170627223405-00175.warc.gz"}
https://export.arxiv.org/abs/1511.00672
astro-ph.CO

# Bounds on very low reheating scenarios after Planck

Abstract: We consider the case of very low reheating scenarios ($T_{\rm RH}\sim\mathcal{O}({\rm MeV})$) with a better calculation of the production of the relic neutrino background (with three-flavor oscillations). At 95% confidence level, a lower bound on the reheating temperature $T_{\rm RH}>4.1$ MeV is obtained from Big Bang Nucleosynthesis, while $T_{\rm RH}>4.3$ MeV from Planck data for very light ($\sum m_i = 0.06$ eV) neutrinos. If neutrino masses are allowed to vary, Planck data yield $T_{\rm RH}>4.7$ MeV, the most stringent bound on the reheating temperature to date. Neutrino masses as large as 1 eV are possible for very low reheating temperatures.

Comments: 9 pages, 9 figures
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Phenomenology (hep-ph)
Journal reference: Phys. Rev. D 92, 123534 (2015)
DOI: 10.1103/PhysRevD.92.123534
Report number: IFIC/15-70
Cite as: arXiv:1511.00672 [astro-ph.CO] (or arXiv:1511.00672v1 [astro-ph.CO] for this version)

## Submission history

From: Massimiliano Lattanzi
[v1] Mon, 2 Nov 2015 20:49:57 GMT (348kb,D)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6960797309875488, "perplexity": 5214.570031118324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588216.48/warc/CC-MAIN-20211027150823-20211027180823-00659.warc.gz"}
https://www.physicsforums.com/threads/launching-a-balloon.323852/
# Launching a balloon

1. Jul 7, 2009 ### PeterPumpkin

The following came from a discussion about launching a balloon. Suppose you have a heavy coiled rope of total length L and constant linear density MU. You take hold of one end of the rope and pull it vertically up with a force F(t), so that the tip of the rope moves at a constant velocity v. What is F(t), assuming the rope is so long that some of it remains coiled on the ground?

In drawing our free-body diagram, two questions arise:
1) Can you define our system to be just the vertical length of rope (of length y(t))? See figures 1 & 2.
2) If so, how do we calculate the force that the coiled portion exerts on the vertical section, F(coil on rope)? There must be a force, otherwise the coiled portion wouldn't unwind.

2. Jul 7, 2009 ### Staff: Mentor

Is this a homework question? Do you have any work to show us? As a hint, though, it may be useful to delete the velocity from the free-body diagrams and consider the difference between the two static scenarios...

3. Jul 7, 2009 ### PeterPumpkin

No. It is not a homework question!

4. Jul 7, 2009 ### rcgldr

You'd know the rate of mass flow transitioning from not moving to moving upwards at the fixed velocity. There's a period of time where the speed of the unwinding rope is faster than the upward velocity, because it also has a horizontal component. The unknown is the amount of time it takes for each section of the coiled rope to transition into upwards movement; specifically, the relationship between acceleration and time of the rope as it unwinds. As time goes on, the mass of the rope moving vertically increases, while the mass of the rope sections in transition remains nearly constant, so the limit of this is simply the weight of the suspended rope as it moves at constant speed. You could simplify this by having the rope unwind from a drum at constant speed, eliminating the transitional acceleration of the rope.

5. Jul 8, 2009 ### PeterPumpkin

What I was wondering about was that the system is not fixed. I.e., can we define our system to be the vertical length of rope even though the length of rope increases?

6. Jul 8, 2009 ### rcgldr

Using a drum that the rope unwinds from approximates this, as the rope no longer experiences any linear acceleration as it spools off the drum, and if the drum is friction-free, it adds no load to the system. The effective diameter decreases as layers of rope are peeled off, but this can be ignored to simplify the problem.

7. Jul 11, 2009 ### PeterPumpkin

Does this mean we could apply Newton's second law to the vertical section of the rope ("our system") as:

F(t) - m(t)g - Force(coil on rope) = ma

where we take Force(coil on rope) = 0 and a = 0 (as the rope rises at constant vertical velocity)?

8. Jul 11, 2009 ### rcgldr

Yes, in which case F(t) = m(t)g.

9. Jul 11, 2009 ### PeterPumpkin

OK. What if we took the WHOLE rope as our system? After all, we are free to define our system as we wish. For the sake of simplicity, go back to the original posting where the rope was coiled on the table. Then the rope is subject to a normal force of N(t) = (L - y(t))*MU*g, where MU is the linear density of the heavy rope and L - y(t) is the length of the coiled segment. Surely Newton's second law must apply to our system:

m*a = F(t) - y(t)*MU*g + N(t)

Can we do this?
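A small numerical sketch of the two cases discussed above. The values of MU and V are hypothetical, and the coil-case momentum-flux term MU*V**2 is the standard textbook variable-mass correction for hoisting a chain off the ground, not something derived in this thread:

```python
G = 9.81   # gravitational acceleration, m/s^2
MU = 0.5   # linear density, kg/m (hypothetical value)
V = 2.0    # hoisting speed of the tip, m/s (hypothetical value)

def force_drum(t):
    """Drum idealization: no transitional acceleration, so F = m(t) g."""
    return MU * V * t * G                 # m(t) = MU * V * t

def force_coil(t):
    """Coil on the ground: the suspended weight plus the rate of
    momentum delivered to rope just being set into motion."""
    return MU * V * t * G + MU * V**2

for t in (0.5, 1.0, 2.0):                 # seconds after the start
    print(f"t={t}s  drum: {force_drum(t):.2f} N  coil: {force_coil(t):.2f} N")
```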
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9354032278060913, "perplexity": 1153.1568738792337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886120194.50/warc/CC-MAIN-20170823113414-20170823133414-00005.warc.gz"}
https://www.ms.u-tokyo.ac.jp/journal/abstract_e/jms100207_e.html
## Brauer Groups and Tate-Shafarevich Groups

J. Math. Sci. Univ. Tokyo Vol. 10 (2003), No. 2, pp. 391-419.

Gonzalez-Aviles, Cristian D.

Let $X_K$ be a proper, smooth and geometrically connected curve over a global field $K$. In this paper we generalize a formula of Milne relating the order of the Tate-Shafarevich group of the Jacobian of $X_K$ to the order of the Brauer group of a proper regular model of $X_K$. We thereby partially answer a question of Grothendieck.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8817700147628784, "perplexity": 293.7300848223101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103341778.23/warc/CC-MAIN-20220627195131-20220627225131-00268.warc.gz"}
https://brilliant.org/problems/yay-for-2014-7/
# Yay for 2014! #7

Geometry Level 4

Let $$ABDC$$ be a rectangle, as shown above, such that $$AB = 20$$ and $$AC = 14.$$ Points $$E$$ and $$F$$ are located in the interior of $$ABDC$$ such that the triangles $$AEC$$ and $$BFD$$ are equilateral. The area of the intersection of these triangles can be represented by $\frac{a\sqrt{3}}{b}- c,$ where $$a, b,$$ and $$c$$ are positive integers with $$\gcd(a, b) = 1.$$ Find $$a+b+c.$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6446778178215027, "perplexity": 103.84388688046798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591497.58/warc/CC-MAIN-20180720041611-20180720061611-00507.warc.gz"}
https://www.houseofmath.com/drill/fractions/simplify-a-fraction
# Simplify a Fraction

Simplifying a fraction is the opposite of expanding a fraction. You will get a lot of use out of this technique because of how handy it is, so be sure you master it!

Rule

### Simplifying Fractions

You simplify a fraction with numerator $a$ and denominator $b$ by dividing both by a common factor $c$:

$\frac{a}{b}=\frac{a\div c}{b\div c}$

Example 1

Simplify the fraction $\frac{4}{6}$ by 2:

$\frac{4}{6}=\frac{4\div 2}{6\div 2}=\frac{2}{3}$

### Simplifying a Fraction by Canceling

Here you will learn to simplify a fraction in a very nifty way. It's also fun, because the calculation becomes much easier!

When simplifying a fraction by canceling, first factorize the numerator and denominator as much as possible. Then, cancel equal factors in the numerator and denominator. You can only cancel one factor against another factor. The reason you can do this is that the two factors you cancel out just become 1 when you divide them by one another. Remember that any number multiplied by 1 is equal to itself.

Example 2

Simplify the fraction $\frac{3}{6}$ by canceling:

$\frac{3}{6}=\frac{3\times 1}{2\times 3}=\frac{1}{2}$

because

$\frac{3}{6}=\frac{3\times 1}{2\times 3}=\frac{3}{3}\times\frac{1}{2}=1\times\frac{1}{2}=\frac{1}{2}$
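The same rule can be mechanized in a short sketch (the function name simplify is ours, chosen for illustration): dividing numerator and denominator by their greatest common divisor reduces the fraction fully in one step.

```python
from math import gcd

def simplify(a, b):
    """Reduce a/b by dividing out the greatest common factor."""
    c = gcd(a, b)
    return a // c, b // c

print(simplify(4, 6))   # (2, 3), as in Example 1
print(simplify(3, 6))   # (1, 2), as in Example 2
```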
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8951663970947266, "perplexity": 421.52137149386084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710733.87/warc/CC-MAIN-20221130060525-20221130090525-00604.warc.gz"}
https://komal.elte.hu/feladat?a=honap&h=200910&t=inf&l=en
Mathematical and Physical Journal for High Schools, issued by the MATFUND Foundation

# KöMaL Problems in Informatics, October 2009

Please read the rules of the competition.

## Problems with sign 'I'

Deadline expired on November 10, 2009.

I. 220. Due to the varying hardness of the rocks, a tunnel boring machine could not drive a horizontal tunnel exactly as planned. After the tunnel was excavated, the relative heights of the floor and ceiling of the tunnel were measured at each meter, compared to the originally planned base level. (Therefore, the base level has relative height 0 cm.) In the first line of the file magassag.txt, downloadable from our webpage, you will find the number of measurements N (3 ≤ N ≤ 1000); then, in the following N lines, the relative heights (in centimeters, between -100 and 600) of the floor and ceiling of the tunnel are given, compared to the base level.

95
0 511
-1 508
-3 507
0 510
1 511
...

You can interpret the data in the example as follows:
* data for 95 measurement points are stored;
* at the first measurement point, the boring machine achieved the exact lower height, and it created the upper height of the tunnel 511 cm above the base level;
* at the third measurement point, for example, the machine dug 3 cm below the base level and created the tunnel ceiling 507 cm above the base level.

You should create a program alagut to solve the following tasks. (If a task requires displaying some data, the number of the actual task, e.g. Task #3, should also be written on the screen.)

1. Read the data from the file magassag.txt and solve the following questions. If the file cannot be read, use the first 10 lines of the example as input to solve the tasks.

2. Display on the screen the minimal inner height of the tunnel (understood to be the ceiling height minus the floor height, both relative to the base level).

3. In order to make the tunnel more regular, first the ceiling is made smoother. Beginning with the second measurement point and ending with the last but one, each ceiling height is averaged with the previous and succeeding ceiling heights. This average of 3 measurements is then rounded up to the nearest whole cm. If the average is higher than the actual ceiling height at that point, then the ceiling is excavated further, up to the average height. (If the average is not higher than the actual ceiling height, nothing happens.) Excavated soil drops down vertically to the floor (exactly at the actual measurement point) and raises the floor level (by exactly the same number of centimeters). When averaging ceiling heights at the next measurement point, this possibly modified new value will be used. You should display how many cm were excavated from the ceiling (provided that this number is greater than zero) at each measurement point, using a format like "At measurement point #3, 1 cm".

4. Express in centimeters how much soil surplus or deficit is present altogether on the floor of the tunnel, relative to the base level.

5. In the next phase a road grader goes through the tunnel (in the original direction). This machine removes soil above the base level. The machine also stores the excavated soil and takes it away. If the machine has enough stored soil in its containers, it spreads soil evenly where the floor level is too low. If the machine carries no more soil, it leaves lower regions intact and continues with grading at the next measurement point, if necessary.
The road grader is empty when it starts. You should write into the file talaj.txt the floor level at each measurement point after the machine has gone through the tunnel once.

6. Display on the screen the beginning and ending measurement points of the lowest floor level (after the earthwork in the previous steps is done). If there is more than one pit with the same deepest depth, display only the first one. (The boundary of a pit is the first and last negative value.) If there are no negative values at all, display "The floor is smooth."

The source code (i220.pas, i220.cpp, ...) together with a short documentation (i220.txt, i220.pdf, ...) -- also describing which developer environment to use for compiling, and giving a brief description of your solution -- should be submitted in a compressed file (i220.zip).

(10 points) solution (in Hungarian), statistics

I. 221. Peter and Paul are jealous twins. They very carefully share everything until both of them get exactly the same amount. This year they were given 25 birthday presents with total value less than 1000 EUR. You have to figure out whether these presents can be shared among the twins such that both get the same value. You should solve this task with a spreadsheet application. Cells B3:Z3 contain the values of the presents, as integers. Your answer should appear in cell A1. You should not use macros or user-defined functions.

The spreadsheet (i221.xls, i221.ods, ...) together with a short documentation (i221.txt, i221.pdf, ...) -- also describing the name and version number of the spreadsheet application, and giving a brief description of your solution -- should be submitted in a compressed file (i221.zip).

(10 points) solution (in Hungarian), statistics

I. 222. Draw the topology of the computer network of your school. The figure should contain at least one computer room, machines available for students, some computers in a classroom and in a lab, further some routers, switches and other network devices, and finally the Internet connection. The figure should contain the main characteristics of these devices in the network, but the figure should not be overcrowded. Data for this exercise can be obtained from your school system administrator or teacher of computer science. To solve the task, you should use the freely downloadable program Dia, http://live.gnome.org/Dia.

You should submit the network topology (i222.dia) and also a PDF version of this file (i222.pdf) in a compressed file (i222.zip), also containing the version number of the program and the operating system.

(10 points) solution (in Hungarian), statistics

## Problems with sign 'S'

Deadline expired on November 10, 2009.

S. 47. Prepare your program to evaluate the cells of a spreadsheet, given some simple formulae and certain numerical values. The 25 columns of the worksheet are denoted by capital letters of the English alphabet from A to Y; further, there are 100 rows numbered from 1 to 100. Each cell can contain an integer, a real number (given by simple operations), the four basic operations, or cell references. The expressions are syntactically correct; they may contain absolute, relative, or mixed cell references. However, cells contain neither exponentials nor parentheses. The input file of your program is given as the first argument on the command line. Only cells with nonzero values should be read (all other cells contain 0). Each line of the input file describes either a single cell or a group of cells in a rectangular domain.
The name of the output file is given as the second command-line argument. The output file should contain the evaluated cells: each value should be an integer or a real number rounded to 2 decimal places. The output file should contain only the nonzero values of the evaluated spreadsheet, in CSV format, with lines and values separated by semicolons.

In the first example, be01.txt is a sample input, while ki01.csv is the corresponding output. In the input file, two cells have nonzero values (one integer and one real number). In a third cell we have also computed their arithmetic mean, and added 20, then 0. The third row of the output file, as well as the first cell of the fourth row, is empty. In the second example, the formula in cells A2:A5 with a relative reference adds 1 to the value of the cell to the left of the actual cell in the same row, so this formula literally appears in the top left cell of the domain. The same applies to the formulae in B2:E5 with mixed references. The third example contains some circular references. These values cannot be computed and should be denoted by #Kör. Division by zero should be indicated by #NulOszt.

The source code and project files of your solution (without the .exe file or any other auxiliary files generated by the compiler) together with a brief documentation of your solution should be submitted in a compressed file (s47.zip).

(10 points) solution (in Hungarian), statistics
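To get a feel for the evaluation logic S. 47 asks for, here is a minimal sketch. It is not the contest's I/O format: cells hold at most one binary operation, operands are numeric literals or plain references, and ranges, relative references, rounding, and error-value propagation are all omitted.

```python
import re

# Toy worksheet: C1 and C2 reference each other (a circular pair).
CELLS = {"A1": "3", "A2": "A1+1", "B1": "A2*2", "C1": "C2+1", "C2": "C1+1"}

CIRCULAR = "#Kör"       # circular-reference marker from the statement
DIV_ZERO = "#NulOszt"   # division-by-zero marker from the statement
REF = re.compile(r"[A-Y][0-9]+$")

def evaluate(name, cache, visiting):
    """Memoized recursive evaluation with cycle detection."""
    if name in cache:
        return cache[name]
    if name in visiting:                  # re-entered an unfinished cell
        return CIRCULAR
    visiting.add(name)
    expr = CELLS.get(name, "0")           # unlisted cells count as 0
    m = re.match(r"(.+?)([-+*/])(.+)$", expr)
    if m:
        a = resolve(m.group(1), cache, visiting)
        b = resolve(m.group(3), cache, visiting)
        value = apply_op(a, m.group(2), b)
    else:
        value = float(expr)
    visiting.discard(name)
    cache[name] = value
    return value

def resolve(token, cache, visiting):
    return evaluate(token, cache, visiting) if REF.match(token) else float(token)

def apply_op(a, op, b):
    if CIRCULAR in (a, b):                # a cycle poisons the result
        return CIRCULAR
    if op == "+": return a + b
    if op == "-": return a - b
    if op == "*": return a * b
    return DIV_ZERO if b == 0 else a / b

cache = {}
for cell in sorted(CELLS):
    print(cell, evaluate(cell, cache, set()))
```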
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6848319172859192, "perplexity": 1119.2072077281953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107911027.72/warc/CC-MAIN-20201030153002-20201030183002-00325.warc.gz"}
http://mathhelpforum.com/calculus/4589-arc-length-print.html
# Arc Length

• July 31st 2006, 01:33 PM
c_323_h
Arc Length
Given $y=f(x)$, the formula for the arc length $L$ over the interval $[a,b]$ is: $L = \int_a^b \sqrt{1+[f'(x)]^2}\, dx$ or $\int_a^b \sqrt{1+(\frac{dy}{dx})^2}\,dx$. With the formula above, find the arc length of the following functions over the given intervals.
1. $y=1+6x^{3/2}, [0,1]$ Answer: $\frac{2}{243}(82\sqrt{82}-1)$
2. $y=\frac{x^5}{6}+\frac{1}{10x^3}, [1,2]$ Answer: $\frac{1261}{240}$
I found the derivatives of the functions, $9\sqrt{x}$ and $\frac{5x^4}{6}-\frac{3}{10x^4}$, respectively, and plugged them in, simplified, substituted and converted limits if I had to, but still don't get the correct answer. Could someone tell me what I'm doing wrong?
• July 31st 2006, 01:41 PM
ThePerfectHacker
Quote: Originally Posted by c_323_h
1. $y=1+6x^{3/2}, [1,2]$ Answer: $\frac{43}{3}$
You have $y=1+6x^{3/2}$. Thus $y'=6(3/2)x^{1/2}$. Simplify: $y'=9x^{1/2}$. Thus $\sqrt{1+[f'(x)]^2}=\sqrt{1+[9x^{1/2}]^2}=\sqrt{1+81x}$. Thus $\int_1^2 \sqrt{1+81x}\,dx$. Can you integrate that? Or do you need the quickest integrator in the west to help you?
• July 31st 2006, 01:50 PM
c_323_h
What I tried: let $u=1+81x$, then $du=81\,dx$. Convert limits: when $x=0, u=1$ and when $x=1, u=82$. So $\frac{1}{81}\int_1^{82}\sqrt{u}\,du$, giving $\frac{1}{81}\left(\frac{2u^{3/2}}{3}\right) \bigg|_1^{82}$. Substitute $u$ back in. Evaluate at the limits using the Fundamental Theorem of Calculus. Is this correct?
• July 31st 2006, 02:38 PM
Soroban
Hello, c_323_h! What you've done is correct . . .
Quote: Let $u=1+81x$, then $du=81\,dx$. Convert limits: when $x=0, u=1$ and when $x=1, u=82$.
So: $\frac{1}{81}\int_1^{82}\!\!\sqrt{u}\:du \;= \;\frac{1}{81}\left(\frac{2u^{3/2}}{3}\right) \bigg|_1^{82}\quad\Rightarrow\quad\frac{2}{243}u^{\frac{3}{2}}\bigg|^{82}_1$
Evaluate: $\frac{2}{243}\left(82^{\frac{3}{2}} - 1^{\frac{3}{2}}\right) \;= \;\frac{2}{243}\left(82\sqrt{82} - 1\right)$
And is there a typo in #2? Is there a plus between the two fractions?
• July 31st 2006, 02:48 PM
c_323_h
Quote: Originally Posted by Soroban
Yup, it's supposed to be a plus, and the answers are indeed switched. I'll edit my post... also, the answer to the second question is $\frac{1261}{240}$ and not $\frac{46}{3}$.
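As a quick sanity check on both answers, here is a small numerical sketch using midpoint-rule quadrature (the step count is an arbitrary choice):

```python
from math import sqrt

def arc_length(fprime, a, b, n=100_000):
    """Composite midpoint rule for L = integral of sqrt(1 + f'(x)^2)."""
    h = (b - a) / n
    return sum(sqrt(1 + fprime(a + (i + 0.5) * h) ** 2) for i in range(n)) * h

# 1. y = 1 + 6 x^(3/2) on [0,1], with y' = 9 sqrt(x)
L1 = arc_length(lambda x: 9 * sqrt(x), 0.0, 1.0)
print(L1, 2 / 243 * (82 * sqrt(82) - 1))   # both approximately 6.1034

# 2. y = x^5/6 + 1/(10 x^3) on [1,2], with y' = 5x^4/6 - 3/(10 x^4)
L2 = arc_length(lambda x: 5 * x**4 / 6 - 3 / (10 * x**4), 1.0, 2.0)
print(L2, 1261 / 240)                       # both approximately 5.2542
```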
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 38, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9691459536552429, "perplexity": 1562.0056338434904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246652114.13/warc/CC-MAIN-20150417045732-00274-ip-10-235-10-82.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/296292/fields-of-rationality-as-a-notion-of-automorphic-size
Fields of rationality as a notion of automorphic size

I want to interpret the degree of the field of rationality of an automorphic form as a notion of size, analogous to the conductor, and this question is about the possible obstructions to doing so. The results I know are limited to the case of Hecke cusp forms; I hence recall them and raise the natural questions and analogues for Maass forms and automorphic representations.

1. The case of Hecke cusp forms. Let $f$ be a cuspidal Hecke eigenform. Its field of rationality $\mathbf{Q}(f)$ is the extension of $\mathbf{Q}$ generated by its Fourier coefficients, that is to say $$\mathbf{Q}(f) = \mathbf{Q}(a_1(f), a_2(f), a_3(f), \ldots). \qquad (1)$$ It is known that the degree of the field of rationality grows with the level (in the sense that the proportion of $f$ with field of rationality of bounded degree converges to zero as the level grows).

2. The case of automorphic representations. I am wondering what can be said about an analogous notion for automorphic representations. Let $\pi_v$ be an admissible representation of $GL(2, F_v)$ for a local field $F_v$; its field of rationality $\mathbf{Q}(\pi_v)$ is defined as the fixed field of $$\{ \sigma \in \mathrm{Aut}(\mathbf{C}) \ : \ {}^\sigma \pi_v \simeq \pi_v \}.$$ For an automorphic representation $\pi$ of $GL(2, \mathbf{A})$, decomposed by Flath's theorem as $\pi = \otimes_v \pi_v$, the field of rationality of $\pi$ is the compositum $$\mathbf{Q}(\pi) = \prod_v \mathbf{Q}(\pi_v). \qquad (2)$$

3. The case of Maass forms. These two notions agree in the case of Hecke cusp forms. I wonder what can be said in the case of Maass forms: a field of rationality can be defined by the coefficients as in (1), and also by the attached representation as in (2).

(A) Do the two notions agree in the case of Maass forms?

4. A kind of automorphic size. As already stated, in the case of cusp forms the degree of the field of rationality essentially grows with the level (see Serre, Shin-Templier, and Binder for references). This endows $$d(\pi) = [\mathbf{Q}(\pi) : \mathbf{Q}]$$ with a size flavor. However, here are some natural questions in this direction:

(B) Is there any result in the weight aspect?
(C) Is $d(\pi)$ always finite?
(D) Is there any infinite family with constant degree of field of rationality? What can be said about such families?

I apologize for these maybe loosely related questions; it appeared to me that they are relevant in this spirit of "size", and I wished to ask them together.

Maeda's conjecture implies that all cuspidal eigenforms in $S_k(1)$ are conjugate, so $\bar d(\pi)$ should be $\dim S_k(1)$, where $\bar d$ denotes the degree of the Galois closure. In particular, the answer to (D) should be no for level 1, but I don't think anyone knows how to prove such a statement. See also this question: modular eigenforms with integral coefficients [Maeda's Conjecture]

A generalization of Maeda's conjecture was proposed by Tsaknias for more general level. For instance, say $N$ is squarefree. Then if you look at newforms of level $N$, you can distinguish Galois orbits by looking at Atkin-Lehner sign patterns (of which there are $2^m$, where $m$ is the number of primes dividing $N$). The conjecture is that for $k$ large all forms with the same Atkin-Lehner sign pattern are conjugate.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9469213485717773, "perplexity": 112.76296534526992}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347409171.27/warc/CC-MAIN-20200530102741-20200530132741-00006.warc.gz"}
http://math.stackexchange.com/questions/243911/meaning-of-no-explicit-time-dependence?answertab=oldest
# Meaning of no explicit time dependence

What does "no explicit time dependence" mean in this context? A symmetry of the KdV equation is given by $$\tilde x=x, \quad \tilde t=t+\epsilon, \quad \tilde u =u$$ as there is no explicit time dependence in the KdV equation.

In physics, the notion of explicit time dependence denotes equations where the time parameter $t$ occurs "freely" and not only inside a derivative $\frac{d}{dt}$. So an equation of the form $v(x)=a_0t+v_0$ is explicitly time dependent, but $a(x)=a_0$ is not.

Thank you, Dominik. Just to confirm my understanding of your explanation: the partial derivative with respect to $t$ does not count as explicitly time-dependent, only an actual factor of $t$ in the equation does? – Henry Nov 24 '12 at 19:59
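For concreteness, in one common normalization (an assumption here, since the thread never writes the equation out), the KdV equation reads

$$u_t + 6\,u\,u_x + u_{xxx} = 0.$$

Every occurrence of $t$ sits inside a derivative, so substituting $\tilde t = t + \epsilon$ leaves the equation unchanged: if $u(x,t)$ is a solution, then so is $u(x, t+\epsilon)$.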
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9186952710151672, "perplexity": 220.44853532230218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121934081.85/warc/CC-MAIN-20150124175214-00204-ip-10-180-212-252.ec2.internal.warc.gz"}
http://scitation.aip.org/content/aip/journal/jcp/137/23/10.1063/1.4771658
Potential energy surface and rovibrational energy levels of the H2-CS van der Waals complex

Affiliations:
1 Université de Bordeaux, ISM, UMR CNRS 5255, 33405 Talence, France
2 Department of Physics, Universidad de Matanzas, Matanzas 40100, Cuba
3 Université Pierre et Marie Curie, LPMAA, UMR CNRS 7092, 75252 Paris, France
4 Observatoire de Paris, LUTH, UMR CNRS 8102, 92195 Meudon, France
a) Electronic mail: t.stoecklin@ism.u-bordeaux1.fr

J. Chem. Phys. 137, 234301 (2012). DOI: 10.1063/1.4771658

## Figures

FIG. 1. Set of body-fixed coordinates used to describe the diatom-diatom system. The azimuthal angle φ is undefined when θ1 or θ2 is equal to 0° or 180°.
FIG. 2. Contour plot of the rigid rotor PES for θ1 = 0°. The contour lines are equally spaced by 10 cm−1.
FIG. 3. Contour plot of the rigid rotor PES for θ2 = 0°. The contour lines are equally spaced by 10 cm−1.
FIG. 4. Contour plot of the rigid rotor PES for θ2 = 180°. The contour lines are equally spaced by 10 cm−1.
FIG. 5. Contour plot of the rigid rotor PES for φ = 0° and R relaxed. The contour lines are equally spaced by 10 cm−1. The optimised values of R span the range [7.10, 9.77] a_o.
FIG. 6. Cross second virial coefficient calculated with the present PES.

## Tables

Table I. Calculated rovibrational bound states of pH2-CS for J ≤ 2. For each state, we report the energy in cm−1, the total rotational quantum number J, the parity ɛ, the CS rotational quantum number j2, the orbital quantum number L, and the percentage weight (w) of the leading basis set function. For some states, several basis functions need to be given in order to distinguish them from lower states with the same J and ɛ.
Table II. Calculated energies (cm−1) of the rovibrational bound states of oH2-CS for J ≤ 2, along with the associated quantum numbers (J, ɛ).
Table III. Energy spacing (cm−1) between pH2-CS bound states associated with two successive values of j2 for L = 0.
Table IV. Energy spacing (cm−1) between para bound levels associated with successive values of L. B_rel is the rotational constant of the two-body system H2 + CS calculated from the spacing.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.908668041229248, "perplexity": 3775.2885767329767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/357672/density-of-sum-of-two-independent-uniform-random-variables-on-0-1/357842
# Density of sum of two independent uniform random variables on $[0,1]$

I am trying to understand an example from my textbook. Let's say $Z = X + Y$, where $X$ and $Y$ are independent uniform random variables with range $[0,1]$. Then the PDF is $$f(z) = \begin{cases} z & \text{for } 0 < z < 1 \\ 2-z & \text{for } 1 \le z < 2 \\ 0 & \text{otherwise.} \end{cases}$$ How can this be derived? Thanks

• There are a couple of ways. Have you done convolutions? Not the best way in my opinion, but certainly useful elsewhere. And there is a straightforward geometric approach. Apr 11, 2013 at 0:49
• What would be the bounds if I were to use convolutions? Also, I don't quite understand the geometric approach. Can you direct me to an example? Apr 11, 2013 at 1:19
• For convolution, you want $\int_{-\infty}^\infty f_Y(z-x)f_X(x)\,dx$. So since the density is $0$ outside $(0,1)$, we need $0\le z-x\le 1$, or equivalently $x\le z\le x+1$. For $z\le 1$, the first bound is the one to use. For $1\lt z\le 2$, it is the second. Apr 11, 2013 at 1:33
• I sort of gave the bounds. For $0\le z\le 1$, integrate from $x=0$ to $x=z$. For $1\lt z\le 2$, integrate from $x=z-1$ to $x=1$. Will maybe write up an answer. Apr 11, 2013 at 2:19
• The basic method is to my mind not convolution, but finding the cdf and differentiating. So draw the square on which the joint density lives. The probability that $Z\le z$ is the probability that $(X,Y)$ lands in the part of the square "below" the line $x+y=z$. Imagine drawing lines $x+y=z$ for various $z$. The geometry of the "part below" changes at $z=1$. For $z\lt 1$ the part below is a triangle. For $1\lt z\lt 2$ it is the part above that is a triangle. Dec 15, 2014 at 5:36

## 5 Answers

If we want to use a convolution, let $f_X$ be the full density function of $X$, and let $f_Y$ be the full density function of $Y$. Let $Z=X+Y$. Then $$f_Z(z)=\int_{-\infty}^\infty f_X(x)f_Y(z-x)\,dx.$$ Now let us apply this general formula to our particular case. We will have $f_Z(z)=0$ for $z\lt 0$, and also for $z\ge 2$. Now we deal with the interval from $0$ to $2$. It is useful to break this down into two cases: (i) $0\lt z\le 1$ and (ii) $1\lt z\lt 2$.

(i) The product $f_X(x)f_Y(z-x)$ is $1$ in some places, and $0$ elsewhere. We want to make sure we avoid calling it $1$ when it is $0$. In order to have $f_Y(z-x)=1$, we need $z-x\ge 0$, that is, $x\le z$. So for (i), we will be integrating from $x=0$ to $x=z$. And easily $$\int_0^z 1\,dx=z.$$ Thus $f_Z(z)=z$ for $0\lt z\le 1$.

(ii) Suppose that $1\lt z\lt 2$. In order for $f_Y(z-x)$ to be $1$, we need $z-x\le 1$, that is, we need $x\ge z-1$. So for (ii) we integrate from $z-1$ to $1$. And easily $$\int_{z-1}^1 1\,dx=2-z.$$ Thus $f_Z(z)=2-z$ for $1\lt z\lt 2$.

Another way: (Sketch) We can go after the cdf $F_Z(z)$ of $Z$, and then differentiate. So we need to find $\Pr(Z\le z)$. For a few fixed $z$ values, draw the lines with equation $x+y=z$ on an $x$-$y$ plot. Draw the square $S$ with corners $(0,0)$, $(1,0)$, $(1,1)$, and $(0,1)$. Then $\Pr(Z\le z)$ is the area of the part of $S$ that is "below" the line $x+y=z$. That area can be calculated using basic geometry. For example, when $z$ is $2$, the whole square is under the line, so $\Pr=1$. There is a switch in basic shape at $z=1$.

• Thank you very much for writing this up! This really helped me fully understand the concept of convolution. Apr 11, 2013 at 3:45
• Does the second method of calculating areas only work in this case since we are using uniform distributions? May 30, 2016 at 18:16
• @F.Webber: In the form that I used it, yes, we are reduced to finding area because of uniformity. But first going after the cdf is a general procedure.
In the non-uniform case, we are finding an integral. The geometry is still useful in determining the bounds of the integration. May 30, 2016 at 18:41
• So, in general, once we have determined such an area, the function we have to integrate in that region would be the joint probability density function? May 30, 2016 at 18:46
• @F.Webber: Yes, for many problems a double integral, but integration in $n$-dimensional space also comes up. May 30, 2016 at 20:52

Here's why we need to break the convolution into cases. The integral we seek to evaluate for each $z$ is $$f_Z(z):= \int_{-\infty}^\infty f(x)f(z-x)\,dx.\tag1$$ (On the RHS of (1) I'm writing $f$ instead of $f_X$ and $f_Y$ since $X$ and $Y$ have the same density.) Here the density $f$ is the uniform density $f(x)$, which equals $1$ for $0<x<1$, and is zero otherwise. The integrand $f(x)f(z-x)$ will therefore have value either $1$ or $0$. Specifically, the integrand is $1$ when $$0<x<1 \quad\text{and}\quad 0<z-x<1,\tag2$$ and equals zero otherwise. To evaluate (1), which is an integral over $x$ (with $z$ held constant), we need to find the range of $x$-values where the conditions listed in (2) are satisfied. How does this range depend on $z$? Plotting the region defined by (2) in the $(x,z)$ plane (figure omitted), it's clear how the limits of integration on $x$ depend on the value of $z$:

1. When $0<z<1$, the limits run from $x=0$ to $x=z$, so $$f_Z(z)=\int_0^z 1\,dx=z.$$
2. When $1<z<2$, the limits run from $x=z-1$ to $x=1$, so $$f_Z(z)=\int_{z-1}^1 1\,dx=2-z.$$
3. When $z<0$ or $z>2$, the integrand is zero, so $f_Z(z)=0$.

• Very nice explanation! Thank you! Apr 30, 2019 at 20:16

Following the hint of jay-sun, consider this idea: $f_X (z-y) = 1$ if and only if $0 \le z-y \le 1$. So we get $z-1 \le y \le z$. However, since $z \in [0, 2]$, the range of $y$ may not lie inside $[0, 1]$ as required for $f_X (z-y) = 1$, and the value $1$ is a good splitting point, because $z-1 \in [-1, 1]$.

Consider (i): if $z-1 \le 0$, then $-1 \le z-1 \le 0$, that is, $z \in [0, 1]$, and we get the range $y \in [0, z]$ since $z \in [0, 1]$. So $$\int_{-\infty}^{\infty}f_X(z-y)\,dy = \int_0^{z} 1 \,dy=z \quad \text{if } z \in [0, 1].$$

Consider (ii): if $z-1 \ge 0$, that is, $z \in [1, 2]$, we get the range $y \in [z-1, 1]$, and $$\int_{-\infty}^{\infty}f_X(z-y)\,dy = \int_{z-1}^{1} 1 \,dy = 2-z \quad \text{if } z \in [1, 2].$$

To sum up, consider clipping the range in order to get $f_X (z-y) = 1$.

The purpose of this answer is to show how a direct application of convolution may lead to the desired result. I take the following results from Cohn, Measure Theory.

Definition of convolution. Let $\nu_1$ and $\nu_2$ be finite measures on $(\mathbb{R}^d,\mathscr{B}(\mathbb{R}^d))$; then their convolution $\nu_1\ast\nu_2$ is defined by $$\nu_1 \ast\nu_2(A) = \nu_1 \times\nu_2(\{(x_1,x_2) : x_1+x_2 \in A\}).$$

Proposition 10.1.12. Let $\nu_1$ and $\nu_2$ be probability measures on $(\mathbb{R}^d,\mathscr{B}(\mathbb{R}^d))$. $\vdots$ (c) If $\nu_1$ and $\nu_2$ are absolutely continuous, with densities $f$ and $g$, then $\nu_1\ast\nu_2$ is absolutely continuous with density $$x \mapsto \int f(x-y)g(y)\,\lambda(dy).$$

Let $I$ denote the unit interval $[0,1]$, and $U(I)$ the uniform distribution on $I$. Then the density function corresponding to $U(I)$ is $\chi_I$, the indicator function for $I$.
If $X$ and $Y$ are independent random variables whose distributions are given by $U(I)$, then the density of their sum is given by the convolution of their distributions. I.e., if $f_X$ denotes the density for the random variable $X$, then $$f_{X+Y}(x) = \int f_X(x-y)f_Y(y)\,\lambda(dy) = \int \chi_I(x-y)\chi_I(y)\, dy.$$ The indicator function of $y$ alone restricts the integration range, so that $$\int \chi_I(x-y)\chi_I(y)\,dy = \int_0^1 \chi_I(x-y)\, dy.$$ The expression $\chi_I(x-y)$ is $0$ if $x-y < 0$ or $x-y > 1$: $$\chi_I(x-y) = \begin{cases} 1 & x-1 \leq y \leq x \\ 0 & \text{otherwise.} \end{cases}$$ This further restricts the range of the integral, which can be rewritten as $$\int_{\max(0,x-1)}^{\min(1,x)} 1\, dy = \min(1,x) - \max(0,x-1).$$ The density is $0$ if $x < 0$ or $x > 2$. This fact is hidden in our final expression because we've expressed our indicator functions through the bounds of the integral, but it can be recovered by including another indicator function. The PDF as described in the original question follows by considering the relevant cases.

Simple approach for those who don't know convolution. First we need to find the range of possibilities for the sum.

• The minimum will occur when both numbers are at their minimum, so min = 0.
• The maximum will occur when both numbers are at their maximum, so max = 2.
• The most likely outcome (the mode) is when both numbers equal their mean, so mode = 1.

These three are enough to specify a triangular distribution. We need to make sure that the area under the pdf is 1, which means the height $h$ of the pdf at the mode satisfies $$\frac{1}{2}\cdot 2\cdot h = 1,$$ which gives $h=1$. All you need now is to find the equations of the two lines that go from

1. (0,0) to (1,1)
2. (1,1) to (2,0)

Give a shout if anything is not clear.

• Your answer doesn't explain: 1. Why the most likely outcome is when both random variables equal their mean. 2. Why the three points are enough to specify a triangular distribution. Feb 26, 2021 at 1:10
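Either derivation can also be sanity-checked numerically. A quick Monte Carlo sketch (sample size and bin count are arbitrary choices):

```python
import random

# Sample Z = X + Y and compare a histogram with the triangular pdf.
N, BINS = 200_000, 20
width = 2.0 / BINS
counts = [0] * BINS

for _ in range(N):
    z = random.random() + random.random()
    counts[min(int(z / width), BINS - 1)] += 1

def pdf(z):
    """The density derived above: z on [0,1), 2-z on [1,2), else 0."""
    return z if 0 <= z < 1 else (2 - z if z < 2 else 0.0)

for i in range(0, BINS, 4):   # print every 4th bin
    mid = (i + 0.5) * width
    print(f"z = {mid:.2f}: empirical {counts[i] / (N * width):.3f}, exact {pdf(mid):.3f}")
```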
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 102, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9779058694839478, "perplexity": 308.47185857007327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571284.54/warc/CC-MAIN-20220811103305-20220811133305-00548.warc.gz"}