QUESTION # Find the area of the triangle whose vertices are (2, 0), (1, 2) and (-1, 6). What do you observe?

Hint: The formula for the area of a triangle with vertices A $\left( {{x}_{1}},{{y}_{1}} \right)$, B $\left( {{x}_{2}},{{y}_{2}} \right)$ and C $\left( {{x}_{3}},{{y}_{3}} \right)$ is

$Area=\dfrac{1}{2}\left[ {{x}_{1}}({{y}_{2}}-{{y}_{3}})+{{x}_{2}}({{y}_{3}}-{{y}_{1}})+{{x}_{3}}({{y}_{1}}-{{y}_{2}}) \right]$

If this area is zero, then the points A($x_1$, $y_1$), B($x_2$, $y_2$) and C($x_3$, $y_3$) are collinear.

Complete Step-by-Step solution: As mentioned in the question, we have to find the area of the triangle whose coordinates are given. Substituting the coordinates A $\left( 2,0 \right)$, B $\left( 1,2 \right)$ and C $\left( -1,6 \right)$ into the formula from the hint, we get

\begin{align} & Area=\dfrac{1}{2}\left[ {{x}_{1}}({{y}_{2}}-{{y}_{3}})+{{x}_{2}}({{y}_{3}}-{{y}_{1}})+{{x}_{3}}({{y}_{1}}-{{y}_{2}}) \right] \\ & =\dfrac{1}{2}\left[ 2(2-6)+1(6-0)+(-1)(0-2) \right] \\ & =\dfrac{1}{2}\left[ -8+6+2 \right] \\ & =0 \\ \end{align}

Since the area of this triangle is zero, we can use the fact stated in the hint: the points are collinear. Hence, the points are collinear.
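The computation above can be double-checked with a short script (the helper name `triangle_area` is ours, for illustration):

```python
def triangle_area(p1, p2, p3):
    # Area = 1/2 |x1(y2 - y3) + x2(y3 - y1) + x3(y1 - y2)|
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return 0.5 * abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

area = triangle_area((2, 0), (1, 2), (-1, 6))
print(area)  # 0.0 -> the three points are collinear
```

A zero result from this formula is exactly the collinearity test described in the hint.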
Note: Students can go wrong here if they do not know the formula for the area of a triangle from its vertices, as given in the hint. The area of a triangle with vertices A $\left( {{x}_{1}},{{y}_{1}} \right)$, B $\left( {{x}_{2}},{{y}_{2}} \right)$ and C $\left( {{x}_{3}},{{y}_{3}} \right)$ is $Area=\dfrac{1}{2}\left[ {{x}_{1}}({{y}_{2}}-{{y}_{3}})+{{x}_{2}}({{y}_{3}}-{{y}_{1}})+{{x}_{3}}({{y}_{1}}-{{y}_{2}}) \right]$
{}
"Have to" vs. "want to" — which is better? Considering the question from a brain-science perspective. [2015/06/22 23:15] http://logicanabehavior.seesaa.net/article/421141375.html

On the analytical ability needed for English essay writing, from a neuroscientist's point of view. [2015/02/17 23:50] http://fanblogs.jp/logicskj/archive/58/0
{}
### How Many Geometries Are There? An account of how axioms underpin geometry and how by changing one axiom we get an entirely different geometry. ### When the Angles of a Triangle Don't Add up to 180 Degrees This article outlines the underlying axioms of spherical geometry giving a simple proof that the sum of the angles of a triangle on the surface of a unit sphere is equal to pi plus the area of the triangle. ### Pythagoras on a Sphere Prove Pythagoras' Theorem for right-angled spherical triangles. # Flight Path ##### Stage: 5 Challenge Level: A model to demonstrate latitude and longitude Cut 2 discs of cardboard. Cut a radial slit in each so that they slot together as shown. Mark your chosen cities and their angles of latitude. In the illustration London at $51.5^o$ North and Sydney at $34^o$ South are shown. Cut off a small segment at the South Pole so that the model stands up. Adjust the angle between the two planes in the model to show the difference in their angles of longitude. Take the origin at the centre of the earth, the x-axis through the Greenwich Meridian at the equator, the y-axis through longitude $90^o$ East at the equator and the z-axis through the North Pole. London is approximately on the Greenwich Meridian so the 3-D coordinates of London are $(R\cos 51.5^o, 0, R\sin 51.5^o$). As an extension to this question you might like to calculate the distance between Cape Town ($18^o$E, $34^o$S) and Sydney ($151^o$E, $34^o$S) along the great circle and find how much shorter it is than the distance around the line of latitude $34^o$S.
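The extension at the end of Flight Path can be sketched numerically, using the 3-D coordinate convention given in the text (x through the Greenwich Meridian, y through 90°E, z through the North Pole). The mean Earth radius of 6371 km is our assumption; the problem leaves R symbolic.

```python
from math import radians, cos, sin, acos

R_EARTH = 6371.0  # assumed mean Earth radius in km

def unit_vector(lat_deg, lon_deg):
    # Point on the unit sphere with the axes chosen as in the text
    lat, lon = radians(lat_deg), radians(lon_deg)
    return (cos(lat) * cos(lon), cos(lat) * sin(lon), sin(lat))

def great_circle_km(a, b):
    # Central angle from the dot product of the two unit vectors
    dot = sum(x * y for x, y in zip(unit_vector(*a), unit_vector(*b)))
    return R_EARTH * acos(max(-1.0, min(1.0, dot)))

cape_town = (-34.0, 18.0)   # 34 S, 18 E
sydney = (-34.0, 151.0)     # 34 S, 151 E

gc = great_circle_km(cape_town, sydney)
# Distance along the 34 S latitude circle: radius R*cos(34 deg), angle 133 deg
along_lat = R_EARTH * cos(radians(34.0)) * radians(151.0 - 18.0)
print(round(gc), round(along_lat))
```

With these assumptions the great-circle route comes out roughly 1,250 km shorter than travelling around the line of latitude $34^o$S.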
{}
# plot.owin

##### Plot a Spatial Window

Plot a two-dimensional window of observation for a spatial point pattern.

Keywords: hplot, spatial

##### Usage

# S3 method for owin
plot(x, main, add=FALSE, …, box, edge=0.04, hatch=FALSE, hatchargs=list(), invert=FALSE, do.plot=TRUE, claim.title.space=FALSE)

##### Arguments

x: The window to be plotted. An object of class owin, or data which can be converted into this format by as.owin().

main: text to be displayed as a title above the plot.

add: logical flag; if TRUE, draw the window in the current plot; if FALSE, generate a new plot.

…: extra arguments controlling the appearance of the plot. These arguments are passed to polygon if x is a polygonal or rectangular window, or passed to image.default if x is a binary mask. See Details.

box: logical flag; if TRUE, plot the enclosing rectangular box.

edge: nonnegative number; the plotting region will have coordinate limits that are 1 + edge times as large as the limits of the rectangular box that encloses the pattern.

type: Type of plot: either "w" or "n". If type="w" (the default), the window is plotted. If type="n" and add=TRUE, a new plot is initialised and the coordinate system is established, but nothing is drawn.

show.all: Logical value indicating whether to plot everything including the main title.

hatch: logical flag; if TRUE, the interior of the window will be shaded by texture, such as a grid of parallel lines.

hatchargs: List of arguments passed to add.texture to control the texture shading when hatch=TRUE.

invert: logical flag; when the window is a binary pixel mask, the mask colours will be inverted if invert=TRUE.

do.plot: Logical value indicating whether to actually perform the plot.

claim.title.space: Logical value indicating whether extra space for the main title should be allocated when declaring the plot dimensions. Should be set to FALSE under normal conditions.

##### Details

This is the plot method for the class owin.
The action is to plot the boundary of the window on the current plot device, using equal scales on the x and y axes.

If the window x is of type "rectangle" or "polygonal", the boundary of the window is plotted as a polygon or series of polygons. If x is of type "mask", the discrete raster approximation of the window is displayed as a binary image (white inside the window, black outside).

Graphical parameters controlling the display (e.g. setting the colours) may be passed directly via the ... arguments, or indirectly reset using spatstat.options.

When x is of type "rectangle" or "polygonal", it is plotted by the R function polygon. To control the appearance (colour, fill density, line density, etc.) of the polygon plot, determine the required argument of polygon and pass it through .... For example, to paint the interior of the polygon in red, use the argument col="red". To draw the polygon edges in green, use border="green". To suppress the drawing of polygon edges, use border=NA.

When x is of type "mask", it is plotted by image.default. The appearance of the image plot can be controlled by passing arguments to image.default through .... The default appearance can also be changed by setting the parameter par.binary of spatstat.options.

To zoom in (to view only a subset of the window at higher magnification), use the graphical arguments xlim and ylim to specify the desired rectangular field of view. (The actual field of view may be larger, depending on the graphics device.)

##### Value

none.

##### Notes on Filled Polygons with Holes

The function polygon can only handle polygons without holes. To plot polygons with holes in a solid colour, we have implemented two workarounds.

polypath function: The first workaround uses the relatively new function polypath, which does have the capability to handle polygons with holes. However, not all graphics devices support polypath. The older devices xfig and pictex do not support polypath.
(On a Windows system, the default graphics device is windows.)

polygon decomposition: The other workaround involves decomposing the polygonal window into pieces which do not have holes. This code is experimental but works in all our test cases. If this code fails, a warning will be issued, and the filled colours will not be plotted.

##### Cairo graphics on a Linux system

Linux systems support the graphics device X11(type="cairo") (see X11) provided the external library cairo is installed on the computer. See www.cairographics.org for instructions on obtaining and installing cairo. After installing cairo, one needs to re-install R from source so that it has cairo capabilities. To check whether your current installation of R has cairo capabilities, type (in R) capabilities()["cairo"]. The default type for X11 is controlled by X11.options. You may find it convenient to make cairo the default, e.g. via your .Rprofile. The magic incantation to put into .Rprofile is

setHook(packageEvent("graphics", "onLoad"),
        function(...) grDevices::X11.options(type="cairo"))

##### See Also

owin.object, plot.ppp, polygon, image.default, spatstat.options

##### Examples

# rectangular window
plot(Window(nztrees))
abline(v=148, lty=2)

# polygonal window
w <- Window(demopat)
plot(w)
plot(w, col="red", border="green", lwd=2)
plot(w, hatch=TRUE, lwd=2)
{}
# How do you simplify 4^-5/4^0 and write it using only positive exponents?

May 16, 2017

$\frac{1}{4^{5}}$

#### Explanation:

Since the 0th power of any nonzero number is 1, you can simplify the expression to $\frac{4^{-5}}{1}$, which means $\frac{1}{4^{5}}$.
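As a quick numerical check (using Fraction so the result stays exact):

```python
from fractions import Fraction

# 4^-5 / 4^0: the denominator 4^0 is 1, so the value is 1/4^5
value = Fraction(4) ** -5 / Fraction(4) ** 0
print(value)  # 1/1024
```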
{}
# The average age of 7 boys is 20 years. If the average age of the first six boys is $$19 \dfrac{1}{2}$$ years, then how many years old is the 7th boy?

This question was previously asked in Official Soldier GD Paper: [Bihar Regt Centre, Danapur Cantt] - 28 March 2021 View all Indian Army GD Papers >

1. 20 years 2. 23 years 3. 21 years 4. 18 years

Option 2 : 23 years

## Detailed Solution

Given: The average age of 7 boys is 20 years. The average age of the first six boys is $$19\dfrac{1}{2}$$ years.

Concept used: Average = Sum of observations / Number of observations

Calculation: Sum of ages of the 7 boys ⇒ 20 × 7 = 140 Sum of ages of the first six boys ⇒ 6 × 39/2 = 117 Age of the 7th boy ⇒ 140 - 117 = 23 years.

∴ The 7th boy is 23 years old.
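The calculation can be verified in a couple of lines:

```python
# Total of all 7 ages minus total of the first 6 gives the 7th boy's age
total_seven = 7 * 20    # 140
total_six = 6 * 19.5    # 117.0
seventh = total_seven - total_six
print(seventh)  # 23.0
```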
{}
# WW Scattering Parameters via Pseudoscalar Phase Shifts

### Abstract

Using domain-wall lattice simulations, we study pseudoscalar-pseudoscalar scattering in the maximal isospin channel for an SU(3) gauge theory with two and six fermion flavors in the fundamental representation. This calculation of the S-wave scattering length is related to the next-to-leading order corrections to WW scattering through the low-energy coefficients of the chiral Lagrangian. While two- and six-flavor scattering lengths are similar for a fixed ratio of the pseudoscalar mass to its decay constant, six-flavor scattering shows a somewhat less repulsive next-to-leading order interaction than its two-flavor counterpart. Estimates are made for the WW scattering parameters and the plausibility of detection is discussed.

Type: Publication (Phys. Rev.)
{}
# Can I use the 2D physics engine in a 3D game (or vice versa) in Unity?

This is entirely for performance. The 2D physics are less expensive, but I require 3D for some scenes. I never need both at the same time. I know you can have 2D with an orthographic perspective in a 3D engine, but what I want is really the physics engine. Also, is there a way of turning off these engines? I've made most collisions from scratch and am only using the engines for some raycasts at the beginning and for some collider/rigidbody casts in many (though not all) frames. (If I understand correctly, these are calculated by the physics engine in each FixedUpdate().)

• Did you try it? Where did you run into trouble that we can help you solve? Nov 22, 2019 at 3:29

• I tried implementing 3D rigidbodies and colliders in a 2D project, and it works, so now I don't know if Unity swaps the engines depending on the type of physics you are using. I thought that it had two different engines for each type, so I don't know if not adding 3D stuff makes the difference. How can I reduce the unnecessary calculations from that extra dimension in the 2D scenes? Nov 22, 2019 at 7:32

Yes, you can use both at the same time, though you can only have 2D physics interactions within the 2D physics engine. The models are separate from the logic of the physics engines, so your objects can even have 3D models but 2D physics. Your raycasts can be separate from the physics engine too.

You can just turn the 2D rigidbodies on or off; that will enable/disable physics for certain objects. If you want all 3D or 2D physics to be off, you can do that too if you have Unity 2017 or later.

The main problem you're having is probably moving the game objects that have physics enabled on them. Even just rotating an effector will have a performance impact. In my game, I usually had 10+ effectors and I was mistakenly rotating the whole game object when I just wanted to rotate the model (a sprite, because it was 2D) of the game object.
So, my game would constantly re-calculate the physics for each effector on screen, and this had a huge performance impact: after 20 objects or so, it ran at about 1 frame per second on my phone. Once I separated the physics and the sprites of the game objects, the game had no problem running 50+ objects at 30 FPS on my phone.

First of all, are you sure that the framerate of your game is affected by unnecessary physics calculations? Have you measured it using the Unity profiler? It's usually not worth the time to chase imagined performance problems when you have no profiler data to confirm that they actually matter, and no profiler data to compare against in order to check whether your improvements actually improved things. But let's assume that you did profile and did find out that 3D physics calculations are indeed a relevant performance eater.

Can I use the 2D physics engine in a 3D game? Certainly. The 2D physics engine works very similarly to the 3D physics engine, except that it neither considers nor changes the z-coordinate of game objects. You can have a *Collider2D and a Rigidbody2D on an object which also has a MeshRenderer and is rendered by an angled perspective camera.

Also, is there a way of turning off these engines? In order for a game object to be handled by the physics engine, it needs a rigidbody (2D or regular). So if you don't want to use the built-in physics, just don't add rigidbodies to your game objects. When there are no objects in your scene with rigidbodies, the physics engine is technically still on, but it won't do anything and its resource consumption will be negligible. However, keep in mind that the physics engine is also responsible for calling the OnTrigger* and OnCollision* events, and will only call them when at least one of the objects involved has a rigidbody.
If you still want to use these events but don't want these objects to get moved by the physics engine either, give them rigidbodies, but make them kinematic (For 2D rigidbodies, set their body type to "Kinematic"; For 3D rigidbodies, enable the "is Kinematic" checkbox).
{}
## Factorization of completely positive matrices using iterative projected gradient steps We aim to factorize a completely positive matrix by using an optimization approach which consists in the minimization of a nonconvex smooth function over a convex and compact set. To solve this problem we propose a projected gradient algorithm with parameters that take into account the effects of relaxation and inertia. Both projection and gradient … Read more ## New lower bounds and asymptotics for the cp-rank Let $p_n$ denote the largest possible cp-rank of an $n\times n$ completely positive matrix. This matrix parameter has its significance both in theory and applications, as it sheds light on the geometry and structure of the solution set of hard optimization problems in their completely positive formulation. Known bounds for $p_n$ are $s_n=\binom{n+1}2-4$, the current … Read more ## From seven to eleven: completely positive matrices with high cp-rank We study $n\times n$ completely positive matrices $M$ on the boundary of the completely positive cone, namely those orthogonal to a copositive matrix $S$ which generates a quadratic form with finitely many zeroes in the standard simplex. Constructing particular instances of $S$, we are able to construct counterexamples to the famous Drew-Johnson-Loewy conjecture (1994) for … Read more ## Separating Doubly Nonnegative and Completely Positive Matrices The cone of Completely Positive (CP) matrices can be used to exactly formulate a variety of NP-Hard optimization problems. A tractable relaxation for CP matrices is provided by the cone of Doubly Nonnegative (DNN) matrices; that is, matrices that are both positive semidefinite and componentwise nonnegative. 
A natural problem in the optimization setting is then … Read more ## The Difference Between 5×5 Doubly Nonnegative and Completely Positive Matrices The convex cone of $n \times n$ completely positive (CPP) matrices and its dual cone of copositive matrices arise in several areas of applied mathematics, including optimization. Every CPP matrix is doubly nonnegative (DNN), i.e., positive semidefinite and component-wise nonnegative, and it is known that, for $n \le 4$ only, every DNN matrix is CPP. … Read more ## On the computation of $C^*$ certificates The cone of completely positive matrices $C^*$ is the convex hull of all symmetric rank-1-matrices $xx^T$ with nonnegative entries. Determining whether a given matrix $B$ is completely positive is an $\cal NP$-complete problem. We examine a simple algorithm which — for a given input $B$ — either determines a certificate proving that $B\in C^*$ or … Read more
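The projected-gradient idea behind the first abstract can be illustrated with a minimal sketch of our own: plain projected gradient descent (without the relaxation and inertia parameters the paper adds) for minimizing $\|B - XX^T\|_F^2$ over entrywise-nonnegative $X$, on a small matrix that is completely positive by construction. This is a toy illustration under those assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# A completely positive matrix by construction: B = A A^T with A >= 0
A = np.array([[1.0, 0.0], [1.0, 1.0]])
B = A @ A.T

def loss(X):
    # Squared Frobenius distance between X X^T and the target B
    return np.linalg.norm(X @ X.T - B, "fro") ** 2

# Plain projected gradient: step along -grad(f), then project onto X >= 0.
# Gradient of ||X X^T - B||_F^2 (B symmetric) is 4 (X X^T - B) X.
X = rng.random((2, 2))
step = 0.01
initial = loss(X)
for _ in range(5000):
    grad = 4.0 * (X @ X.T - B) @ X
    X = np.maximum(X - step * grad, 0.0)  # projection onto nonnegative orthant

print("final loss:", loss(X))
```

Because the objective is nonconvex, convergence to a global factorization is not guaranteed in general; the relaxation and inertia terms studied in the paper are aimed precisely at improving this basic scheme.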
{}
this, we first define the unreliability function, Q(t), which is In general, most problems in reliability engineering deal with For example, if one microprocessor comes from a population with reliability function $$R_m(t)$$ and two of them are used for the CPU in a system, then the system CPU has a reliability function given by $$R_{cpu}(t) = R_m^2(t) \, ,$$ The reliability of the system is the product of the reliability functions of the components : since both must survive in order for the system to survive. In the case of [γ,+] (We will discuss methods of parameter estimation in the Weibull, normal and lognormal, see ReliaSoft's Life Data Analysis Copyright © 2001 ReliaSoft Corporation, ALL RIGHTS value of the cdf at x is the area under the probability much better reliability specification than the MTTF, which represents only The Probability Density and Cumulative Density Functions Let’s say we have the lognormal parameters of μ’ = 6.19 and σ’ = 0.2642 (calculated using days as the unit of time within the example in Calculating Lognormal Distribution Parametersarticle). probability of success of a unit, in undertaking a mission of a prescribed What is the reliability at one year, or 365 days? (based on a continuous distribution given by f(x), or f(t) Reliability Basics: The Reliability Function. This is strictly related to reliability. f(t) given any value of t. Given the mathematical random variables that can be used in the analysis of this type of data. Depending on the values of μ Walloddi Weibull and thus it bears his name. Key features. Improvement The following formula is for calculating the probability of failure. estimated from data. For example, if the reliability analysis of a given structural component f o- cuses on a maximum displacement v max , the performance function can write: Variables of the cumulative density function. certain behavior. will be at most times-to-failure data, our random variable X can take on the at 100 hours. 
Reliability is the ability of things to perform over time in a variety of expected conditions. the standard deviation, are its parameters. The mathematical quantitative measures, such as the time-to-failure of a component or For example, saying that the reliability should be 90% would be incomplete without specifying the time window. The function can exit when there is no work for a particular day. The functions most commonly of the distribution. From probability and statistics, given a continuous random variable X, derivation of the reliability functions for other distributions, including integration variable. [-,+] 17 Examples of Reliability posted by John Spacey, January 26, 2016 updated on February 06, 2017. are only two situations that can occur: success or failure. cdf, or the unreliability function. Lifetime Your email address will not be published. will take a look at the reliability function, how it is derived, and an In this article, we For example, if a function needs to run once a day, write it so it can run any time during the day with the same results. Cookie Notice, http://www.reliasoft.com/newsletter/2Q2000/mttf.htm, http://reliawiki.org/index.php/Life_Data_Analysis_Reference_Book. This reminds of the well-known saying “The chain is as weak as its weakest link“ (which, however, does not consider that several components can fail simultaneously). Assuming an exponential distribution and interested in the reliability over a specific time, we use the reliability function for the exponential distribution, shown above. These distributions were formulated by statisticians, used in reliability engineering and life data analysis, namely the to infinity (since we do not know the exact time apriori). In life data analysis and accelerated life testing data analysis, as well as other testing activities, one of the primary objectives is to obtain a life distribution that describes the times-to-failure of a component, subassembly, assembly or system. 
the event of interest in life data analysis is the failure of an item. If we have a large number of items that we can test over time, then the Reliability of the items at time t is given by At time t = 0, the number of survivors is equal to number of items put on test. {\displaystyle S(t)=P(\{T>t\})=\int _{t}^{\infty }f(u)\,du=1-F(t).} Note that the reliability function is just the complement of the CDF of the random variable. pdf (or probability density function). We do not attempt to provide an exhaustive coverage of the topic and recommend that those wishing to undertake such analyses consult the relevant texts and literature beforehand. operating for a certain amount of time without failure. well-known normal, or Gaussian, distribution is given by: In this definition, The reliability function of the lognormal distribution is: R(t)=1−Φ(ln⁡(t)−μ′σ′) Where the prime i… at 12.4 In other words, reliability has two significant dimensions, the time and the stress. Example 2. The reliability of a system, which was defined in the previous section, describes the probability that the system is function­ ing for a specified period of time. Both of these parameters are In this article, we Each fit provides a probability model that we can use to predict our suspension system reliability as a function of miles driven. For example in the template LvRb20.vxg only a formula is represented (see ..\Templates\04_Test_Planning). The pdf of the exponential distribution is given by: where λ As an example, let us assume a very simple system, consisting of one pump pumping water from one place to another. we use the constant In other words, reliability of a system will be high at its initial state of operation and gradually reduce to its lowest magnitude over time. Probability density function is defined by following formula: P (a ≤ X ≤ b) = ∫ a b f (x) d x However, a statement such as the reliability of the system is 0.995 is meaningless because the time interval is unknown. 
Distributions This degree of flexibility makes the reliability function a From this fact, the Following is a σ, duration. reliability function derivation process with the exponential distribution. f(x), the limits will vary depending on the region over which the and σ. That is, RX(t) = 1 – FX(t). The reliability function of the device, Rx(t), is simply the probability that the device is still functioning at time t: (3.49) R X (t) = Pr (X > t). (mu) and σ The reliability of a series system with three elements with R 1 = 0.9, R 2 = 0.8, and R 3 = 0.5 is R = 0.9 × 0.8 × 0.5 = 0.36, which is less than the reliability of the worst component (R 3 = 0.5). For example, measurements of people's height and weight are often extremely reliable. For example, in the case of the normal distribution, Once probability that # create sequence of n's n_sim_mle - seq(10, 1000, by = 1) %>% tibble() %>% rename(n = ".") Since reliability and unreliability are the The pump has the … Reliability is the probability that a system performs correctly during a specific time duration. Any departure from the reliability test definition most likely estimates durability and not reliability. we denote: That is, the The most frequently the mean, and System Reliability Concepts 11 non-defective = 1), the variable is said to be a time value with the desired reliability value, i.e. distribution function, hours or at 100.12 hours and so forth), thus X can take on any Greek letters μ During this correct operation, no repair is required or performed, and the system adequately follows the defined performance specifications. relationship between the pdf and cdf is given by: where s is a dummy This example analysis. 2. In the case of Various kinds of reliability coefficients, with values ranging between 0.00 (much error) and 1.00 (no error), are usually used to indicate the amount of error in the scores." value in this range. 
again, this will only depend on the value of needed for life data analysis, such as the reliability function. sample constitutes a major part of a well-designed reliability test. View our, probability density, cumulative density, reliability and hazard functions, Probability and Statistics for Reliability, Discrete and continuous probability distributions, « Preventive Maintenance Goals and Activities, https://accendoreliability.com/standby-redundancy-equal-failure-rates-imperfect-switching/. definition of the reliability function, it is a relatively easy matter to to be defective or non-defective, only two outcomes are possible. Still as an example, consider how, in the study of service level, it is important to know the availability of machines, which again depends on their reliability and maintainability. more specifically the distribution denoted by The correct way would be to say that, for example, the reliability should be 90% at 10,000 cycles. probabilities is always equal to unity. t after the value of the distribution parameter or parameters are In this case, our random variable X is said To mathematically show The first and the second coefficients omega will have the same value when the model has simple structure, but different values when there are (for example) cross-loadings or method factors. A statistical in this reference, this range would be [0,+], In other words, one must specify a For example, for all the distributions considered Now that we have a function that takes a sample size n and returns fitted shape and scale values, we want to apply the function across many values of n. Let’s look at what happens to our point estimates of shape and scale as the sample size n increases from 10 to 1000 by 1. that can take on only two discreet values (let's say defective = 0 and For any distribution, in the region of 0 (or γ) f(t). Weibull – Reliability Analyses M In some templates no data is needed. 
# The reliability function

Reliability is the probability that a system performs correctly under specified conditions for a specific time duration; during correct operation, no repair is required or performed. A well-designed reliability test is one in which the chances of catching unexpected interruptions are maximized, since such interruptions may include risks that do not occur often but represent a high impact when they do. Maintenance can be regularly scheduled to prevent equipment from entering its wear-out phase.

This article deals almost exclusively with continuous random variables. A continuous random variable X is fully described by its pdf (probability density function); the total area under the pdf is always equal to unity, and the variable can take on any value in its range (a failure can occur at 100 hours, at 100.12 hours, and so forth). The most frequently used function in life data analysis and reliability engineering is the reliability function, which is related to the cdf (cumulative distribution function) by R(t) = 1 - F(t): the probability of the failure event not having happened by time t. Reliability is therefore two-dimensional, in that every reliability value has an associated time value, for example "the reliability should be 90% at 10,000 cycles". (For more on reliability specifications, see http://www.reliasoft.com/newsletter/2Q2000/mttf.htm.)

Different distributions exist, such as the normal, exponential and Weibull distributions. Some distributions tend to better represent life data than others, which is why they are often referred to as lifetime distributions; their parameters are estimated from the data. The Weibull distribution was formulated by Waloddi Weibull and thus bears his name. Depending on the values of its parameters, its density f(t) takes on different shapes, which is why the distribution is used to evaluate reliability across diverse applications, including vacuum tubes, capacitors, ball bearings, relays, and material strengths. The exponential distribution follows the exponential failure law, with the failure rate λ (lambda) as its sole parameter. As an applied example, Weibull fits for a suspension design with a damping ratio of 0.5 for the front and rear suspension let us predict how the damping ratio affects the suspension system reliability as a function of miles driven.
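As a hedged illustration of the reliability function R(t) = 1 - F(t), here is a minimal Python sketch for the Weibull case. The closed-form survival function exp(-(t/η)^β) is standard for the Weibull distribution, but the shape value β = 1.5 is an assumption chosen for the example; only the "90% at 10,000 cycles" requirement is taken from the text.

```python
import math

# Sketch: Weibull reliability (survival) function R(t) = 1 - F(t) = exp(-(t/eta)**beta).
# beta = 1.5 is an illustrative assumption, not a value from the text.
def weibull_reliability(t, beta, eta):
    return math.exp(-((t / eta) ** beta))

beta = 1.5                                  # assumed shape parameter
t0, r0 = 10_000, 0.90                       # the stated requirement: 90% at 10,000 cycles
eta = t0 / (-math.log(r0)) ** (1 / beta)    # scale chosen to meet the target exactly

print(weibull_reliability(t0, beta, eta))       # 0.9 by construction
print(weibull_reliability(2 * t0, beta, eta))   # predicted reliability at 20,000 cycles
```

Swapping in other fitted (β, η) pairs predicts reliability at any mileage or cycle count of interest.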
# ALGEBRAIC COMBINATORICS

Box splines, tensor product multiplicities and the volume function

Algebraic Combinatorics, Volume 4 (2021) no. 3, pp. 435-464.

We study the relationship between the tensor product multiplicities of a compact semisimple Lie algebra $𝔤$ and a special function $𝒥$ associated to $𝔤$, called the volume function. The volume function arises in connection with the randomized Horn’s problem in random matrix theory and has a related significance in symplectic geometry. Building on box spline deconvolution formulae of Dahmen–Micchelli and De Concini–Procesi–Vergne, we develop new techniques for computing the multiplicities from $𝒥$, answering a question posed by Coquereaux and Zuber. In particular, we derive an explicit algebraic formula for a large class of Littlewood–Richardson coefficients in terms of $𝒥$. We also give analogous results for weight multiplicities, and we show a number of further identities relating the tensor product multiplicities, the volume function and the box spline. To illustrate these ideas, we give new proofs of some known theorems.

DOI: https://doi.org/10.5802/alco.164

Classification: 05E10, 14C17, 53D20

Keywords: Box splines, tensor product multiplicities, Littlewood–Richardson coefficients, Horn’s problem, volume function, Berenstein–Zelevinsky polytopes.

@article{ALCO_2021__4_3_435_0, author = {McSwiggen, Colin}, title = {Box splines, tensor product multiplicities and the volume function}, journal = {Algebraic Combinatorics}, pages = {435--464}, publisher = {MathOA foundation}, volume = {4}, number = {3}, year = {2021}, doi = {10.5802/alco.164}, language = {en}, url = {https://alco.centre-mersenne.org/articles/10.5802/alco.164/} }

McSwiggen, Colin. Box splines, tensor product multiplicities and the volume function. Algebraic Combinatorics, Volume 4 (2021) no. 3, pp. 435-464. doi: 10.5802/alco.164.
https://alco.centre-mersenne.org/articles/10.5802/alco.164/ [1] Berenstein, Arkady D.; Zelevinsky, Andreĭ V. Tensor product multiplicities, canonical bases and totally positive varieties, Invent. Math., Volume 143 (2001) no. 1, pp. 77-128 | Article | MR 1802793 | Zbl 1061.17006 [2] Billey, Sara; Guillemin, Victor; Rassart, Etienne A vector partition function for the multiplicities of ${\mathrm{𝔰𝔩}}_{k}ℂ$, J. Algebra, Volume 278 (2004) no. 1, pp. 251-293 | Article | MR 2068078 | Zbl 1116.17005 [3] Bliem, Thomas On weight multiplicities of complex simple Lie algebras (2008) (Ph. D. Thesis) | Zbl 1320.17003 [4] Brion, Michel; Vergne, Michèle Residue formulae, vector partition functions and lattice points in rational polytopes, J. Amer. Math. Soc., Volume 10 (1997) no. 4, pp. 797-833 | Article | MR 1446364 | Zbl 0926.52016 [5] Coquereaux, Robert Multiplicities, pictographs, and volumes, Supersymmetries and Quantum Symmetries – SQS’19 (2019) (https://arxiv.org/abs/2003.00358) [6] Coquereaux, Robert; McSwiggen, Colin; Zuber, Jean-Bernard Revisiting Horn’s problem, J. Stat. Mech. Theory Exp. (2019) no. 9, Paper no. 094018, 22 pages | Article | MR 4021509 | Zbl 1456.15034 [7] Coquereaux, Robert; McSwiggen, Colin; Zuber, Jean-Bernard On Horn’s problem and its volume function, Comm. Math. Phys., Volume 376 (2020) no. 3, pp. 2409-2439 | Article | MR 4104554 | Zbl 07207213 [8] Coquereaux, Robert; Zuber, Jean-Bernard On sums of tensor and fusion multiplicities, J. Phys. A, Math. Theor., Volume 44 (2011) no. 29, Paper no. 295208, 26 pages | Zbl 1222.81255 [9] Coquereaux, Robert; Zuber, Jean-Bernard From orbital measures to Littlewood–Richardson coefficients and hive polytopes, Ann. Inst. Henri Poincaré D, Volume 5 (2018) no. 3, pp. 339-386 | Article | MR 3835549 | Zbl 1429.17009 [10] Dahmen, Wolfgang; Micchelli, Charles A. On the local linear independence of translates of a box spline, Studia Math., Volume 82 (1985) no. 3, pp. 
243-263 | Article | MR 825481 | Zbl 0545.41018 [11] Dahmen, Wolfgang; Micchelli, Charles A. On the solution of certain systems of partial difference equations and linear dependence of translates of box splines, Trans. Amer. Math. Soc., Volume 292 (1985) no. 1, pp. 305-320 | Article | MR 805964 | Zbl 0637.41012 [12] de Boor, Carl; Höllig, Klaus $ℬ$-splines from parallelepipeds, J. Analyse Math., Volume 42 (1982/83), pp. 99-115 | Article | MR 729403 [13] De Concini, Corrado; Procesi, Claudio; Vergne, Michèle Box splines and the equivariant index theorem, J. Inst. Math. Jussieu, Volume 12 (2013) no. 3, pp. 503-544 | Article | MR 3062869 | Zbl 1273.65017 [14] Derksen, Harm; Weyman, Jerzy On the Littlewood–Richardson polynomials, J. Algebra, Volume 255 (2002) no. 2, pp. 247-257 | Article | MR 1935497 | Zbl 1018.16012 [15] Dooley, Anthony H.; Repka, Joe; Wildberger, Norman Sums of adjoint orbits, Linear and Multilinear Algebra, Volume 36 (1993) no. 2, pp. 79-101 | Article | MR 1308911 | Zbl 0797.15010 [16] Duflo, Michel; Vergne, Michèle Kirillov’s formula and Guillemin–Sternberg conjecture, C. R. Math. Acad. Sci. Paris, Volume 349 (2011) no. 23-24, pp. 1213-1217 | Article | MR 2861987 | Zbl 1230.22005 [17] Duistermaat, Johannes J.; Heckman, Gerrit J. On the variation in the cohomology of the symplectic form of the reduced phase space, Invent. Math., Volume 69 (1982) no. 2, pp. 259-268 | Article | MR 674406 | Zbl 0503.58015 [18] Etingof, Pavel; Rains, Eric Mittag–Leffler type sums associated with root systems (2018) (http://arxiv.org/abs/1811.05293) [19] Guillemin, Victor; Sternberg, Shlomo Geometric quantization and multiplicities of group representations, Invent. Math., Volume 67 (1982) no. 3, pp. 515-538 | Article | MR 664118 [20] Harish-Chandra Differential operators on a semisimple Lie algebra, Amer. J. Math., Volume 79 (1957), pp. 87-120 | Article | MR 84104 | Zbl 0072.01901 [21] Itzykson, Claude; Zuber, Jean-Bernard The planar approximation. II, J. Math. 
Phys., Volume 21 (1980) no. 3, pp. 411-421 | Article | MR 562985 [22] King, Ronald C.; Tollu, Christophe; Toumazet, Frédéric Stretched Littlewood–Richardson and Kostka coefficients, Symmetry in physics (CRM Proc. Lecture Notes), Volume 34, Amer. Math. Soc., Providence, RI, 2004, pp. 99-112 | Article | MR 2056979 | Zbl 1055.05004 [23] King, Ronald C.; Tollu, Christophe; Toumazet, Frédéric The hive model and the polynomial nature of stretched Littlewood–Richardson coefficients, Sém. Lothar. Combin., Volume 54A (2005/07), Paper no. Art. B54Ad, 19 pages | MR 2264931 | Zbl 1178.05100 [24] Kirillov, Alexandre A. Lectures on the orbit method, Graduate Studies in Mathematics, 64, American Mathematical Society, Providence, RI, 2004, xx+408 pages | Article | MR 2069175 | Zbl 1229.22003 [25] Knutson, Allen Private communication, 2019 [26] Kostant, Bertram A formula for the multiplicity of a weight, Trans. Amer. Math. Soc., Volume 93 (1959), pp. 53-73 | Article | MR 109192 | Zbl 0131.27201 [27] Littlewood, Dudley E.; Richardson, Archibald R. Group characters and algebra, Philosophical Transactions of the Royal Society A, Volume 233 (1934), pp. 99-141 | Zbl 60.0896.01 [28] McSwiggen, Colin A new proof of Harish-Chandra’s integral formula, Comm. Math. Phys., Volume 365 (2019) no. 1, pp. 239-253 | Article | MR 3900830 | Zbl 1407.22006 [29] Meinrenken, Eckhard; Sjamaar, Reyer Singular reduction and quantization, Topology, Volume 38 (1999) no. 4, pp. 699-762 | Article | MR 1679797 | Zbl 0928.37013 [30] Narayanan, Hariharan On the complexity of computing Kostka numbers and Littlewood–Richardson coefficients, J. Algebraic Combin., Volume 24 (2006) no. 3, pp. 347-354 | Article | MR 2260022 | Zbl 1101.05066 [31] Paradan, Paul-Émile; Vergne, Michèle Equivariant index of twisted Dirac operators and semi-classical limits, Lie groups, geometry, and representation theory (Progr. Math.), Volume 326, Birkhäuser/Springer, Cham, 2018, pp. 
419-458 | Article | MR 3890217 | Zbl 1429.58032 [32] Prautzsch, Hartmur; Boehm, Wolfgang Box splines, Handbook of Computer Aided Geometric Design (Farin, G.; Hoschek, J.; Kim, M.-S., eds.), Elsevier, Amsterdam, 2002, pp. 255-282 | Article [33] Rassart, Etienne A polynomiality property for Littlewood–Richardson coefficients, J. Combin. Theory Ser. A, Volume 107 (2004) no. 2, pp. 161-179 | Article | MR 2078884 | Zbl 1060.05098 [34] Steinberg, Robert A general Clebsch–Gordan theorem, Bull. Amer. Math. Soc., Volume 67 (1961), p. 406-407 | Article | MR 126508 | Zbl 0115.25605 [35] Sturmfels, Bernd On vector partition functions, J. Combin. Theory Ser. A, Volume 72 (1995) no. 2, pp. 302-309 | Article | MR 1357776 | Zbl 0837.11055 [36] Szenes, András; Vergne, Michèle Residue formulae for vector partitions and Euler–MacLaurin sums, Adv. in Appl. Math., Volume 30 (2003) no. 1-2, pp. 295-342 Formal power series and algebraic combinatorics (Scottsdale, AZ, 2001) | Article | MR 1979797 | Zbl 1067.52014 [37] Vergne, Michèle Quantification géométrique et réduction symplectique, Astérisque, Volume 2000/2001 (2002) no. 282, pp. Exp. No. 888, viii, 249-278 (Séminaire Bourbaki) | Numdam | MR 1975181 | Zbl 1037.53062 [38] Vergne, Michèle Poisson summation formula and box splines (2013) (http://arxiv.org/abs/1302.6599) [39] Wiener, Norbert Tauberian theorems, Ann. of Math. (2), Volume 33 (1932) no. 1, pp. 1-100 | Article | MR 1503035 | Zbl 0004.05905 [40] Zuber, Jean-Bernard Horn’s problem and Harish-Chandra’s integrals. Probability density functions, Ann. Inst. Henri Poincaré D, Volume 5 (2018) no. 3, pp. 309-338 | Article | MR 3835548 | Zbl 1397.15008
# Math Help - completing the square help

1. ## completing the square help

By completing the square, sketch the curve $y = 2x^2 - 3x - 1$. Show the intercepts on the axes and the minimum point.

2. Originally Posted by crashuk
completing the square, $y=2x^2-3x-1$

I assume you want to put this equation of a parabola in vertex form by completing the square.

$y=2x^2-3x-1$

First, factor out the 2 from the x terms:

$y=2\left(x^2-\frac{3}{2}x\right)-1$

Next, take half the coefficient of x, $\left(-\frac{3}{4}\right)$, square it, $\left(\frac{9}{16}\right)$, and add it to complete the perfect square trinomial in parentheses. But you really added twice that amount, since we have a factor of $2$ outside the parentheses. So, to keep things balanced in the equation, we must subtract twice that amount, $\left(\frac{9}{8}\right)$, outside the parentheses. 'What one addeth to just one side of an equation, one must taketh away'.

$y=2\left(x^2-\frac{3}{2}x+\frac{9}{16}\right)-1-\frac{9}{8}$

$y=2\left(x-\frac{3}{4}\right)^2-\frac{17}{8}$

The minimum point is the vertex $\left(\frac{3}{4},\,-\frac{17}{8}\right)$; the minimum value of $y$ is $-\frac{17}{8}$.

The y-intercept can be found from the original function by setting x = 0 and solving for y. The x-intercepts (zeros of the function) are found by setting the original quadratic equal to 0 and using the quadratic formula to find the values for x.

3. Originally Posted by masters
(solution quoted above; snipped)

So I take it it's the same procedure for anything else if you want to complete the square. Thanks for the info. Do you know any good websites for someone just starting off in algebra?

4. Originally Posted by crashuk
(question quoted above; snipped)

I edited my previous post after you edited yours to include the other info you needed to find. You could 'google' algebra tutorials or something like that to find some sites to help you. I found this one: algebasics™ Algebra Tutorials
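As a quick numerical sanity check of the vertex form (not part of the original thread), a few lines of Python confirm that both expressions agree and that the minimum value is -17/8:

```python
# Check that y = 2x^2 - 3x - 1 equals the completed-square form 2(x - 3/4)^2 - 17/8
def original(x):
    return 2 * x**2 - 3 * x - 1

def vertex_form(x):
    return 2 * (x - 0.75) ** 2 - 17 / 8

for x in [-2, -0.5, 0, 0.75, 3]:
    assert abs(original(x) - vertex_form(x)) < 1e-12

# The minimum point is the vertex: x = 3/4, y = -17/8
print(original(0.75))  # → -2.125
```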
# Maximum clique size of an edge disjoint union

Suppose $$G=(V,E)$$ is a simple but not necessarily finite graph, and let $$E',E''$$ be a disjoint partitioning of $$E$$ into two (not necessarily finite) subsets, so that $$G$$ is the edge disjoint union of the graphs $$G'=(V,E')$$ and $$G''=(V,E'')$$. My (admittedly vague) question is: when can the maximum clique size of $$G$$ be related to the maximum clique sizes of $$G'$$ and $$G''$$? In the case that $$G$$ is infinite, is it even clear that the maximum clique sizes of $$G'$$ and $$G''$$ both being finite implies that the maximum clique size of $$G$$ is finite?

I'm guessing that in complete generality probably little can be said, because a clique of $$G$$ need not a priori respect the clique structures of $$G'$$ and $$G''$$. But in my setting, I have a particular pair of graphs $$G', G''$$ that I know a little bit about: both are infinite and locally infinite, both have finite maximum cliques (and I know these sizes explicitly), both have finite chromatic number, they're bi-Lipschitz equivalent to each other, etc. So I'd also be happy with results that assume extra structure (but not finiteness) on $$G,G',G''$$. Thanks for reading; any ideas would be greatly appreciated.

This is a rather general question, and I am certainly not in a position to give an exhaustive answer, but here, at least, is an answer to your particular question as to whether the clique sizes of $G'$ and $G''$ being finite implies that the clique size of $G$ is finite. The answer is yes, by the basic Ramsey's theorem. If you had arbitrarily large cliques in $G$, then you could partition the edges of an arbitrarily large complete graph into two sets with the clique sizes of the subgraphs induced by these sets being bounded, in direct contradiction with the aforementioned theorem.

• basil, Ramsey's theorem will also give you reasonable bounds on the clique number of $G$.
If the clique numbers of $G'$ and $G''$ are $r$ and $s$, then the clique number of $G$ must be less than $R(r+1, s+1) \leq \binom {r+s} r$. Jul 24 '13 at 11:26 Let $B$ and $R$ be two simple graphs with vertex set $V$, and let $G(B,R)$ be the simple graph with vertex set $V$, in which two vertices are adjacent if they are adjacent in at least one of $B$ and $R$. For $X \subseteq V$, we denote by $B|X$ the subgraph of $B$ induced by $X$; let $R|X$ and $G(B,R)|X$ be defined similarly. We say that the pair $(B,R)$ is additive if for every $X \subseteq V$, the sum of the clique numbers of $B|X$ and $R|X$ is at least the clique number of $G(B,R)|X$. In this paper we give a necessary and sufficient characterization of additive pairs of graphs. This is a numerical variant of a structural question studied in $[1]$.
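To make the bound in the comment concrete, here is a small Python sketch (the helper name `clique_bound` is mine) of the Erdős–Szekeres binomial upper bound on the Ramsey number:

```python
from math import comb

# If G' and G'' have clique numbers r and s, the clique number of their edge
# union G is < R(r+1, s+1) <= C(r+s, r) (the Erdos–Szekeres bound).
def clique_bound(r, s):
    return comb(r + s, r)

print(clique_bound(2, 2))  # → 6: here the bound is tight, since R(3,3) = 6
print(clique_bound(3, 3))  # → 20, while the true Ramsey number R(4,4) is 18
```

The bound is loose in general, but it is explicit and easy to compute, unlike the exact Ramsey numbers.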
# abs

Computes the absolute values of the input elements. The absolute value of a complex number is equal to its magnitude, or the square root of the product of the number and its complex conjugate.

## Syntax

c = abs(a)

- a: Scalar or array of any dimension of numeric or Boolean values.
- c: Absolute values of the elements in a. c is an array of the same size as a of double precision numbers.

## Example

A = [1.3+2i, 4.2-3.7i; -2.9-1.6i, -5]
C = abs(A)

## Where This Node Can Run

- Desktop OS: Windows
- FPGA: DAQExpress does not support FPGA devices
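For readers without the node available, the same computation can be sketched in plain Python, whose built-in `abs` implements the definition quoted above (this is an analogue, not the documented node):

```python
import math

# Elementwise absolute value of the example matrix, using Python complex numbers
A = [[1.3 + 2j, 4.2 - 3.7j],
     [-2.9 - 1.6j, -5 + 0j]]
C = [[abs(z) for z in row] for row in A]

# |z| equals sqrt(z * conj(z)), the definition quoted in the doc:
for row in A:
    for z in row:
        assert math.isclose(abs(z), math.sqrt((z * z.conjugate()).real))

print(C[1][1])  # → 5.0
```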
# 6.1: Early History of the Periodic Table

For a fiction book, you search by author, since fiction materials are filed by the author's last name. If you are looking for a nonfiction publication, you look in a catalog (most likely on a computer these days). The book you are looking for will have a number by the title. This number refers to the Dewey Decimal system, developed by Melvil Dewey in 1876 and used in over 200,000 libraries throughout the world. Another system in wide use is the Library of Congress approach, developed in the late 1800s and early 1900s to organize the materials in the federal Library of Congress; it is one of the most widely used ways to organize libraries in the world. Both approaches organize information so that people can easily find what they are looking for. Chemistry information also needs to be organized, so that we can see patterns of properties in elements.

### Early Attempts to Organize Elements

By the year 1700, only a handful of elements had been identified and isolated. Several of these, such as copper and lead, had been known since ancient times. As scientific methods improved, the rate of discovery dramatically increased. With the ever-increasing number of elements, chemists recognized that there might be some kind of systematic way to organize the elements. The question was: how?

A logical way to begin grouping elements together was by their chemical properties. In other words, putting elements in separate groups based on how they reacted with other elements. In 1829, a German chemist, Johann Dobereiner (1780 - 1849), organized various sets of three elements into groups called triads. One such triad was lithium, sodium, and potassium. Triads were based on both physical as well as chemical properties. Dobereiner found that the atomic masses of these three elements, as well as those of other triads, formed a pattern.
When the atomic masses of lithium and potassium were averaged together $$\left( \frac{\left( 6.94 + 39.10 \right)}{2} = 23.02 \right)$$, it was approximately equal to the atomic mass of sodium (22.99). These three elements also displayed similar chemical reactions, such as vigorously reacting with the members of another triad: chlorine, bromine, and iodine. While Dobereiner's system would pave the way for future ideas, a limitation of the triad system was that not all of the known elements could be classified in this way. English chemist John Newlands (1838 - 1898) ordered the elements in increasing order of atomic mass and noticed that every eighth element exhibited similar properties. He called this relationship the "Law of Octaves". Unfortunately, there were some elements that were missing and the law did not seem to hold for elements that were heavier than calcium. Newlands' work was largely ignored and even ridiculed by the scientific community in his day. It was not until years later that another, more extensive periodic table effort would gain much greater acceptance and the pioneering work of John Newlands would be appreciated. ### Summary • Johann Dobereiner organized elements in groups called triads. • John Newlands proposed the "Law of Octaves" for organizing the elements. ### Contributors • CK-12 Foundation by Sharon Bewick, Richard Parsons, Therese Forsythe, Shonna Robinson, and Jean Dupon.
# Monica has a piece of canvas whose area is 551 m².

Question: Monica has a piece of canvas whose area is $551 \mathrm{~m}^{2}$. She uses it to have a conical tent made, with a base radius of $7 \mathrm{~m}$. Assuming that the stitching margins and the wastage incurred while cutting amount to approximately $1 \mathrm{~m}^{2}$, find the volume of the tent that can be made with it.

Solution:

It is given that:
Area of the canvas $=551 \mathrm{~m}^{2}$
Area that is wasted $=1 \mathrm{~m}^{2}$
Radius of the tent $r = 7 \mathrm{~m}$; volume of the tent $V = ?$

Therefore the area available for making the tent $=(551-1) \mathrm{~m}^{2}=550 \mathrm{~m}^{2}$.

Curved surface area of the tent $=550 \mathrm{~m}^{2}$

$\Rightarrow \pi r l=550$, and taking $\pi = \frac{22}{7}$, $\frac{22}{7} \cdot 7 \cdot l = 22\,l = 550$

⟹ l = 550/22 = 25 m

Slant height (l) = 25 m. We know that

$l^{2}=r^{2}+h^{2}$

$25^{2}=7^{2}+h^{2}$

$\Rightarrow h^{2}=625-49=576$

$\Rightarrow h = 24 \mathrm{~m}$

The height of the tent is 24 m. Now, the volume of a cone $=1 / 3\, \pi r^{2} h$

$=\frac{1}{3} \cdot \frac{22}{7} \cdot 7^{2} \cdot 24=1232 \mathrm{~m}^{3}$

Therefore the volume of the conical tent is $1232 \mathrm{~m}^{3}$.
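A quick numerical check of the computation (a sketch; π is taken as 22/7, as in the solution):

```python
from math import sqrt

# Numeric check of the tent computation (pi = 22/7, as in the solution)
pi = 22 / 7
area = 551 - 1              # canvas left after ~1 m^2 of margins and wastage
r = 7
l = area / (pi * r)         # slant height, from pi * r * l = 550
h = sqrt(l**2 - r**2)       # Pythagoras: l^2 = r^2 + h^2
V = pi * r**2 * h / 3       # volume of a cone
print(round(l), round(h), round(V))  # → 25 24 1232
```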
# Sweet32: Can a single 8-byte-block session ID be exploited by this vulnerability?

I'm currently developing an authentication scheme. The scheme will use 64-bit encryption to encrypt its session IDs. Here is how the authentication system will work: for each session, a 32-bit static session ID (one for each user, randomized for each login) and a 32-bit per-request session ID (one for each link on the website, randomized for each new request) will be concatenated. This will be encrypted with DES or 3DES for n rounds, with different keys for each round. Also, I will randomize the order, so sometimes the static ID is put first and sometimes the per-request ID is put first. These session IDs will also be stored in a database on the server side.

The reason I need to encrypt the session, instead of just randomizing the whole session ID and sending it verbatim, is that I want the ability to invalidate ALL per-request sessions tied to the static session ID if any attempt is made to submit a session that is expired or otherwise invalid. E.g., even if sessions are deleted from the database because they are expired, I want to find every related session still in the database (having the same static session ID) and be able to delete those as well whenever an expired session ID is submitted.

Now you may wonder why the session IDs are so short. This is because some of these session IDs will be used as one-time passwords that are entered manually. In hex, this results in 16 characters to be entered, and longer blocks would of course mean tedious typing for the users.

Now to the question: will this scheme be vulnerable to Sweet32? From what I understand of the Sweet32 papers, ciphers operating on a single block without CBC or a similar mode should be safe, right? Or have I misunderstood something about the Sweet32 vulnerability?

Yes, Sweet32 applies to ciphers with block sizes of 64 bits in CBC mode.
The idea is to collect enough ciphertext to find a collision (which, due to the birthday problem, will take around $2^{32}$ blocks), in other words two ciphertext blocks that are equal. We can write these blocks as $E_k(C_{i-1} \oplus P_i), E_k(C_{j-1} \oplus P_j)$, as in CBC mode we encrypt the XOR of the plaintext block and the previous ciphertext block. We note that since the blocks are equal, it must hold that $C_{i-1} \oplus P_i = C_{j-1} \oplus P_j$. We know what the ciphertext is, so if we write the equality as $C_{i-1} \oplus C_{j-1} = P_i \oplus P_j$, we know the left-hand side, and recovering the plaintext blocks becomes equivalent to solving a two-time pad.
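The identity can be illustrated with a toy 16-bit block "cipher": a random permutation stands in for DES, and all names here are mine. The tiny block size only makes the collision appear sooner (after roughly 2^8 blocks instead of 2^32), which is the whole point of Sweet32:

```python
import random

random.seed(1)

# Toy 16-bit block "cipher": a fixed random permutation stands in for DES
N = 1 << 16
perm = list(range(N))
random.shuffle(perm)
E = perm.__getitem__

# CBC-encrypt a stream of random plaintext blocks
iv = random.randrange(N)
plain = [random.randrange(N) for _ in range(2000)]
cipher = []
prev = iv
for p in plain:
    c = E(prev ^ p)          # CBC: encrypt plaintext XOR previous ciphertext
    cipher.append(c)
    prev = c

# Find a colliding pair of ciphertext blocks (expected after ~2^8 blocks here)
seen = {}
pair = None
for j, c in enumerate(cipher):
    if c in seen:
        pair = (seen[c], j)
        break
    seen[c] = j

assert pair is not None
i, j = pair
prev_i = cipher[i - 1] if i else iv
prev_j = cipher[j - 1] if j else iv
# Sweet32's key identity: C_{i-1} xor C_{j-1} == P_i xor P_j
assert prev_i ^ prev_j == plain[i] ^ plain[j]
```

Because E is a permutation, equal ciphertext blocks force equal cipher inputs, which is exactly the XOR relation above; with a real 64-bit cipher the attacker needs around 2^32 blocks rather than 2^8.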
# pruning a special graph

You are given a very special graph. The vertices of the graph come in three columns: left, center, and right. The edges connect vertices from the left to vertices in the center, and from the center to the right; there are no left-to-right edges. The graph also has the property that it does not contain any “closed diamonds”, meaning there are never two routes from a vertex on the left to a vertex on the right. See sample graphs.

You are asked to prune the graph following these rules. For each left vertex, remove all but one of its outgoing edges. When you remove a (left, center) edge, then also remove the corresponding center vertex. When you remove a center vertex, then also remove all of its (center, right) edges and the corresponding right vertices. Is it possible in this way to remove all of the right vertices? If so, give an example. If not, prove it. (I believe it is not possible, and I'm looking for a proof.)

• Aren't both of your OK sample graphs counter-examples? What if all the vertices on the left have degree $1$ at the start? Then you don't remove anything. – Aaron Meyerowitz Sep 27 '18 at 6:18
• "very special" is over-valued. – Claude Chaunier Sep 29 '18 at 11:01

I guess the question is whether there is any graph that allows removing all its right vertices. There are plenty; for example, take the complete bipartite graph $$K_{2,2}$$ on the left and center, augmented with two vertices on the right and two parallel edges between the center and the right. Your rules allow removing the two parallel edges in $$K_{2,2}$$, thus removing all center and right vertices.
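The answer's example can be sketched in a few lines of Python (the vertex names l1, l2, c1, c2, r1, r2 are mine) to check that the pruning removes both right vertices:

```python
# Encoding of the answer's construction: K_{2,2} between left and center,
# plus one right neighbour per center vertex.
left_to_center = {"l1": {"c1", "c2"}, "l2": {"c1", "c2"}}
center_to_right = {"c1": {"r1"}, "c2": {"r2"}}

# Pruning choice: l1 keeps its edge to c1, l2 keeps its edge to c2
keep = {"l1": "c1", "l2": "c2"}

removed_centers = set()
for l, cs in left_to_center.items():
    removed_centers |= cs - {keep[l]}        # drop all but one outgoing edge per left vertex

removed_rights = set()
for c in removed_centers:                    # removing a center removes its right neighbours
    removed_rights |= center_to_right[c]

print(sorted(removed_rights))  # → ['r1', 'r2'], i.e. both right vertices are gone
```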
# B.2. pico

## B.2.1. Running

% pico -w filename

## B.2.2. Using

• Cursor keys should work to move the cursor.
• Type to insert text under the cursor.
• The menu bar has ^X commands listed. This means hold down CTRL and press the letter involved, e.g. CTRL-W to search for text.
• CTRL-O to save.

## B.2.3. Exiting

Follow the menu bar if you are in the midst of a command. Use CTRL-X from the main menu.

## B.2.4. Gotchas

Line wraps are automatically inserted unless the -w flag is given on the command line. This often causes problems when strings are wrapped in the middle of code and similar.

## B.2.5. Help

CTRL-G from the main menu, or just read the menu bar.
# Homework Help: Commutation of differential operators

1. Aug 24, 2012 ### magicfountain

1. The problem statement, all variables and given/known data
Evaluate the commutator $\left[\frac{d}{dx},x\right]$

2. Relevant equations
$\left[A,B\right]=AB-BA$

3. The attempt at a solution
$\left[\frac{d}{dx},x\right]=1-x\frac{d}{dx}$
I don't know how to figure out $x\frac{d}{dx}$.

2. Aug 24, 2012 ### conquest

Try using a test function here, i.e. calculate [d/dx, x]f(x), and when you are done make sure you get some expression (...)f(x) again; then what is between the brackets is your commutator.

3. Aug 24, 2012 ### magicfountain

I actually just figured this out. Can somebody please check this one I did for mistakes?

[A,B]=[(d/dx + x),(d/dx - x)]
(AB-BA)f = (d/dx + x)(d/dx - x)f - (d/dx - x)(d/dx + x)f
= (d/dx + x)(df/dx - xf) - (d/dx - x)(df/dx + xf)
= (d^2f/dx^2 - f - x df/dx) + (x df/dx - x^2 f) - (d^2f/dx^2 + f + x df/dx) + (x df/dx + x^2 f)
= -2f
--> [(d/dx + x),(d/dx - x)] = -2

4. Aug 24, 2012 ### HallsofIvy

If f is any function then (d/dx)(xf) = x df/dx + f, while x(d/dx)f = x df/dx. The difference is f, so the commutator [d/dx, x] is the identity operator.

5. Aug 24, 2012 ### vela Staff Emeritus

That's fine. You could have also used a few properties of commutators to save yourself some work, namely [A+B,C]=[A,C]+[B,C] and [A,B]=-[B,A]. Using those, you should be able to show that [(d/dx + x), (d/dx - x)] = 2[x, d/dx], and you've probably already worked out that last one and found it to be equal to -1.
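The calculation in post 3 can be checked symbolically with SymPy (assuming SymPy is available); the operators are applied to a generic test function, exactly as conquest suggested:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# Operators from the thread, acting on a test function g
A = lambda g: g.diff(x) + x * g   # A = d/dx + x
B = lambda g: g.diff(x) - x * g   # B = d/dx - x

comm = sp.expand(A(B(f)) - B(A(f)))
print(comm)                        # equals -2*f(x), so [(d/dx + x), (d/dx - x)] = -2

# HallsofIvy's observation: [d/dx, x] acts as the identity on any f
basic = sp.expand((x * f).diff(x) - x * f.diff(x))
print(basic)                       # equals f(x)
```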
## Skater

A skater of mass 70 kg stands on the glassy ice. He puts himself in motion by firing a ball of mass 3 kg horizontally at a speed of 8 m s-1. How far will the skater move after firing the ball? The coefficient of friction between the ice and the skates is f = 0.02.

• #### Notation

M = 70 kg Mass of the skater
m = 3 kg Mass of the ball
v1 = 8 m s-1 Speed of the ball
f = 0.02 Coefficient of friction between ice and skates
sz = ? (m) Distance between the initial and the final position of the skater

• #### Analysis

At first we will find the velocity of the skater after firing the ball. For solving that we will use the Conservation of Momentum. Then we will find the stopping distance of the skater when he is affected by the constant frictional force. We can use the Law of Conservation of Energy or Newton's laws.

• #### Hint 1 – The Momentum of the System the skater + the ball

What do we know about the system the skater + the ball before and immediately after firing the ball? Draw a picture of the situation. In what directions will the skater and the ball move?

• #### Hint 2 – Motion of the skater, work of the frictional force

What type of motion does the skater make when he is affected by the frictional force? What work does the frictional force do during the time when the skater is slowing down? What is this work equal to? What happens to the initial kinetic energy of the skater?

• #### Numerical solution

We know:
$m\,=\,3\,\mathrm{kg}$
$M\,=\,70\,\mathrm{kg}$
$v_1\,=\,8\,\mathrm{m\,s^{-1}}$
$f\,=\,0.02$
$g\,=\,9.81\,\mathrm{m\,s^{-2}}$

We look for: $s_z\,=\,?$

$s_z\,=\,\frac{m^2v_1^2}{2fM^2g}$
$s_z\,=\,\frac{3^2{\cdot}8^2}{2{\cdot}0.02{\cdot}70^2{\cdot}9.81}\,\mathrm{m}$
$s_z\,=\,0.3\,\mathrm{m}$

The skater covers a distance of $$s_z\,=\,\frac{m^2v_1^2}{2fM^2g}\,=\,0.3\,\mathrm{m}.$$

• #### Total Solution

At first we will find the velocity of the skater after firing the ball. For solving that we will use the Conservation of Momentum.
Then we will find the stopping distance of the skater when he is affected by the constant frictional force. We can use the Law of Conservation of Energy or Newton's laws.

$$m$$…mass of the ball
$$M$$…mass of the skater
$$\vec{v_1}$$…velocity vector of the ball after firing the ball
$$\vec{v_2}$$…velocity vector of the skater after firing the ball

The initial momentum of the system the skater + the ball was zero. According to the Conservation of Momentum, the final momentum of the system after firing the ball has to be zero too. After firing the ball the skater will move in the opposite direction to the ball.

$\vec{p_1}+\vec{p_2}\,=\,0$

$$\vec{p_1}$$…momentum vector of the ball after firing the ball
$$\vec{p_2}$$…momentum vector of the skater after firing the ball

$m\vec{v_1}+M\vec{v_2}\,=\,0$

Since the two momentum vectors point in opposite directions, we can write the scalar equation:

$mv_{1}-Mv_{2}\,=\,0$

The speed of the skater after firing the ball is:

$v_{2}\,=\,\frac{mv_{1}}{M}\tag{1}$

The constant frictional force acts on the skater against the direction of his motion.

$F_t\,=\,Mgf\tag{2}$

$$F_t$$…frictional force
$$f$$…coefficient of friction
$$g$$…gravity of Earth

That is why the skater slows down with uniform deceleration. The frictional force exerted on the skater while he is slowing down does the work:

$W\,=\,F_ts_z$

$$W$$…work of the frictional force
$$s_z$$…distance required to bring the skater to a halt

This work is equal to the initial kinetic energy of the skater.

$W\,=\,E_k$

$$E_k$$…initial kinetic energy of the skater

$F_ts_z\,=\,\frac{1}{2}Mv_2^2$

$Mgfs_z\,=\,\frac{1}{2}Mv_2^2$

$s_z\,=\,\frac{Mv_2^2}{2Mfg}\,=\,\frac{v_2^2}{2fg}$

We substitute the speed v2 from equation (1):

$s_z\,=\,\frac{m^2v_1^2}{2fM^2g}$

$s_z\,=\,\frac{3^2{\cdot}8^2}{2{\cdot}0.02{\cdot}70^2{\cdot}9.81}\,\mathrm{m}$

$s_z\,=\,0.3\,\mathrm{m}$

During the slowing down, the initial kinetic energy of the skater is converted into internal energy of the skates and the ice; both are slightly heated.
Answer: The skater covers a distance of $$s_z\,=\,0.3\,\mathrm{m}$$.

Note: The distance can also be found using Newton’s second law and the kinematic equations of uniformly accelerated motion. Equation of motion of the skater: $\vec{F_t}+\vec{N}+\vec{F_g}\,=\,M\vec{a}$ $$\vec{F_t}$$…frictional force $$\vec{N}$$…normal force of the ice acting on the skater $$\vec{F_g}$$…gravitational force $$M$$…mass of the skater $$\vec{a}$$…acceleration of the skater Scalar equations: $-F_t\,=\,Ma$ $N-F_g\,=\,0$ Let’s express the acceleration a of the skater: $a\,=\,\frac{-F_t}{M}\,=\,-gf$ Integrating the acceleration of the skater, we find the dependence of the velocity on the time after firing the ball: $v\,=\,\int{a\mathrm{d}t}\,=\, -gft + K\tag{3}$ We find the constant K from the initial conditions. At time t = 0 s the skater had the velocity v = v2, therefore: $v_2 \,=\, 0 + K$ $v \,=\, v_2-gft$ Integrating the velocity of the skater, we find the dependence of the distance on the time after firing the ball: $s\,=\,\int{v\mathrm{d}t}\,=\,\int{(v_2-gft)\mathrm{d}t}\,=\,v_2t-\frac{1}{2}gft^2+C$ We find the constant C from the initial conditions. At time t = 0 s the skater was at the zero position s = 0 m, therefore: $0\,=\,0+C$ $s\,=\,v_2t-\frac{1}{2}gft^2$ At the time $t_z$ of the final position the velocity is zero (v = 0): $0\,=\,v_2-gft_z$ From this we express the time $t_z$: $t_z\,=\,\frac{v_2}{gf}$ We find the distance covered by the skater by inserting this time into the equation for the distance: $s_z\,=\,v_2t_z-\frac{1}{2}gft_z^2$ $s_z\,=\,\frac{v_2^2}{gf}-\frac{v_2^2}{2gf}$ $s_z\,=\,\frac{v_2^2}{2fg}$
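The two steps of the solution — momentum conservation for the recoil speed, then the work–energy theorem for the stopping distance — can be checked numerically. A minimal Python sketch (variable names are ours; the values come from the problem statement):

```python
# Numerical check of the skater problem.
M = 70.0    # mass of the skater (kg)
m = 3.0     # mass of the ball (kg)
v1 = 8.0    # speed of the ball (m/s)
f = 0.02    # coefficient of friction
g = 9.81    # gravitational acceleration (m/s^2)

# Conservation of momentum: M*v2 = m*v1
v2 = m * v1 / M

# Friction work equals initial kinetic energy: M*g*f*s_z = (1/2)*M*v2^2
s_z = v2**2 / (2 * f * g)

print(round(v2, 3))   # recoil speed of the skater in m/s
print(round(s_z, 2))  # stopping distance in metres, ~0.3 m
```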
# Tagged: unitary matrix

## Problem 585

Consider the Hermitian matrix $A=\begin{bmatrix} 1 & i\\ -i& 1 \end{bmatrix}.$

(a) Find the eigenvalues of $A$.
(b) For each eigenvalue of $A$, find the eigenvectors.
(c) Diagonalize the Hermitian matrix $A$ by a unitary matrix. Namely, find a diagonal matrix $D$ and a unitary matrix $U$ such that $U^{-1}AU=D$.

## Problem 269

Let $A$ be a real skew-symmetric matrix, that is, $A^{\trans}=-A$. Then prove the following statements.
(a) Each eigenvalue of the real skew-symmetric matrix $A$ is either $0$ or a purely imaginary number.
(b) The rank of $A$ is even.

## Problem 29

A complex matrix is called unitary if $\overline{A}^{\trans} A=I$. The inner product $(\mathbf{x}, \mathbf{y})$ of complex vectors $\mathbf{x}$ and $\mathbf{y}$ is defined by $(\mathbf{x}, \mathbf{y}):=\overline{\mathbf{x}}^{\trans} \mathbf{y}$. The length of a complex vector $\mathbf{x}$ is defined to be $||\mathbf{x}||:=\sqrt{(\mathbf{x}, \mathbf{x})}$. Let $A$ be an $n \times n$ complex matrix. Prove that the following statements are equivalent.
(a) The matrix $A$ is unitary.
(b) $||A \mathbf{x}||=|| \mathbf{x}||$ for any $n$-dimensional complex vector $\mathbf{x}$.
(c) $(A\mathbf{x}, A\mathbf{y})=(\mathbf{x}, \mathbf{y})$ for any $n$-dimensional complex vectors $\mathbf{x}, \mathbf{y}$.
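As a numerical sanity check of Problem 585 (not a substitute for the proof), one can diagonalize $A$ with NumPy: `eigh` handles Hermitian matrices and returns an orthonormal eigenvector matrix $U$, which is unitary, so $U^{-1}=\overline{U}^{\trans}$.

```python
# Numerical check: diagonalize the Hermitian matrix of Problem 585.
import numpy as np

A = np.array([[1, 1j],
              [-1j, 1]])

evals, U = np.linalg.eigh(A)      # eigenvalues in ascending order
D = U.conj().T @ A @ U            # U unitary => U^{-1} = conjugate transpose

print(evals.real)                                 # eigenvalues 0 and 2
print(np.allclose(U.conj().T @ U, np.eye(2)))     # True: U is unitary
print(np.allclose(D, np.diag(evals)))             # True: A is diagonalized
```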
# Maximum Likelihood estimation (MLE)

In the likelihood function example, we had $\theta=(p=0.7)$ (we usually write $\theta = (0.7)$, but I added $p$ for you). We tested all the values of $p$ from 0 to 1, and we can clearly see a maximum at $p=0.6$, giving us $\hat{\theta} = (0.6)$. We call this value the maximum likelihood estimate (MLE). This is a mix of statistics and optimization, because we have to maximize the likelihood function.

Steps

• optimize the function, starting from the first parameter

If you are working on paper, you have to take the derivative of the likelihood function and solve for $\hat\theta$ (from my understanding, that is). Check the optimization course if needed.

## Note

In my school, we are

• generating a sample $Bern(0.7)$
• then the first parameter used in the optimization is $p=0.7$

That's cheating... You are not supposed to know the real parameters, otherwise you would not have to look for them, but well... If you don't want to do that, I managed to find some parameter values in the example section. For instance, the first parameter for a Bernoulli distribution could be

• the number of successes in the sample
• divided by the size of the sample

And now you have your first value without cheating.

## Bernoulli MLE (1)

We found in the previous example that for $(1,0,1,1,0)$, the likelihood function was $L(x,p)=p^3 * (1-p)^2$. Since $p \in [0,1]$, we can write

# test with values from 0 to 1 (0, 0.1, ..., 1)
for(p in seq(from = 0, to = 1, length = 11)) print(paste("L(x,", p, ")=", p^3 * (1-p)^2))
# [1] "L(x, 0 )= 0"
# [1] "L(x, 0.1 )= 0.00081"
# [1] "L(x, 0.2 )= 0.00512"
# [1] "L(x, 0.3 )= 0.01323"
# [1] "L(x, 0.4 )= 0.02304"
# [1] "L(x, 0.5 )= 0.03125"
# [1] "L(x, 0.6 )= 0.03456"
# [1] "L(x, 0.7 )= 0.03087"
# [1] "L(x, 0.8 )= 0.02048"
# [1] "L(x, 0.9 )= 0.00729"
# [1] "L(x, 1 )= 0"

It seems that the maximum likelihood estimate is around $p=0.6$. The real value was $p=0.5$, so it's not that bad.
The sample is small ($n=5$), so an error (be it small like here or not) was to be expected. Of course, this is not very accurate since we are only testing $11$ values, so we should do something more accurate, like what you will see below or in the optimization course.

## Bernoulli MLE (2)

In R, you may use optim or optimize. I'm enjoying optim. For the L_bern function we created, the maximum likelihood estimate would be evaluated like this

# first value of theta-hat
# we are "cheating" and using the p we used
# to generate the sample (p=0.7)
first <- p
r <- optim(
# the function and the first value of theta
fn = L_bern,
par = first,
# our function L_bern is taking x,
# this is our sample
x = x,
## --- this is not used everytime ---
# we are using upper/lower, we need
# to add method = 'L-BFGS-B'
method = 'L-BFGS-B',
# p is within [0.0,1.0]
lower = 0.0,
upper = 1.0
)
# the value of theta is in r$par

The result changes a lot: sometimes you get a value close to $0.7$ and sometimes not. If you increase $n$, say $n=25$ or $n=100$, then you will see that the value converges to $0.7$.
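For comparison, here is the same computation as a Python sketch (just an alternative to the R workflow in the text): the closed-form estimator (successes divided by sample size) next to the same 11-value grid search.

```python
# Bernoulli MLE for the sample from the example, two ways.
x = [1, 0, 1, 1, 0]            # the sample from Bernoulli MLE (1)

# closed-form estimate: the derivative of log L(x, p) = 3*log p + 2*log(1-p)
# vanishes at p = (number of successes) / n
p_hat = sum(x) / len(x)

def likelihood(p):
    return p**3 * (1 - p)**2

# grid search over the same 11 values tested in the R snippet
grid = [i / 10 for i in range(11)]
p_grid = max(grid, key=likelihood)

print(p_hat)    # 0.6
print(p_grid)   # 0.6
```

Both agree at $\hat p = 0.6$, as expected from the grid output above.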
# An Analysis of the Effect of Nuclear Power Generation on Electricity Prices

• Jung, Sukwan (Pusan National University, Department of Economics) ;
• Lim, Nara (Pusan National University, Department of Economics) ;
• Won, DooHwan (Pusan National University, Department of Economics)

• Accepted : 2015.12.18
• Published : 2015.12.31

#### Abstract

Nuclear power generation is a major power source, accounting for more than 30% of domestic electricity generation, and the electricity market needs to secure the stability of its base load. This study analyzes the relationship between nuclear power generation and the wholesale electricity price (SMP: System Marginal Price) in Korea. To this end, we apply an ARDL (Autoregressive Distributed Lag) approach and a Granger causality test. We find that, in terms of total effects, nuclear power supply has a positive relationship with the SMP, while nuclear capacity has a negative relationship with the SMP. There is unidirectional Granger causality from nuclear power supply to the SMP, but not in the reverse direction. Nuclear power is closely related to the SMP and provides useful information for decision making.

#### Acknowledgement

Supported by : Pusan National University
# Effects of grain growth mechanisms on the extinction curve and the metal depletion in the interstellar medium

Hiroyuki Hirashita and Nikolai V. Voshchinnikov

Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan
Sobolev Astronomical Institute, St. Petersburg University, Universitetskii prosp., 28, St. Petersburg 198504, Russia
E-mail: hirashita@asiaa.sinica.edu.tw

2013 October 17

###### Abstract

Dust grains grow in interstellar clouds (especially in molecular clouds) by accretion and coagulation. Here we model and test these processes by examining the consistency with the observed variation of the extinction curves in the Milky Way. We find that, if we simply use the parameters used in previous studies, the model fails to explain the flattening of the far-UV extinction curve for large $R_V$ (flatness of the optical extinction curve) and the existence of the carbon bump even in flat extinction curves. This discrepancy is resolved by adopting a ‘tuned’ model, in which coagulation of carbonaceous dust is less efficient (by a factor of 2) and that of silicate is more efficient, with the coagulation threshold removed. The tuned model is also consistent with the relation between the silicon depletion (an indicator of accretion) and $R_V$ if the duration of accretion and coagulation is  Myr, where $n_{\rm H}$ is the number density of hydrogen nuclei in the cloud. We also examine the relations between each of the extinction curve features (UV slope, far-UV curvature, and carbon bump strength) and $R_V$. The correlation between the UV slope and $R_V$, which is the strongest among the three correlations, is well reproduced by the tuned model. For the far-UV curvature and the carbon bump strength, the observational data are located between the tuned model and the original model without tuning, implying that the large scatters in the observational data can be explained by the sensitive response to the coagulation efficiency.
The overall success of the tuned model indicates that accretion and coagulation are promising mechanisms for producing the variation of extinction curves in the Milky Way, although we do not exclude the possibility that other dust-processing mechanisms also change extinction curves.

###### keywords: dust, extinction — galaxies: evolution — galaxies: ISM — ISM: clouds — ISM: evolution — turbulence

## 1 Introduction

The evolution of interstellar dust is one of the most important problems in clarifying galaxy evolution. The essential features of dust properties are the grain size distribution and the grain materials. Dust grains modify the spectral energy distribution of galaxies by reprocessing stellar radiation into the far-infrared region in a way dependent on the extinction and the emission cross-section (e.g. Désert, Boulanger, & Puget, 1990), both of which are mainly determined by the grain size distribution and the grain materials. Not only radiative processes but also interstellar chemical reactions are affected by dust properties: in particular, the formation of molecular hydrogen predominantly occurs on dust grains if the interstellar medium (ISM) is enriched with dust (e.g. Cazaux & Tielens, 2004; Yamasawa et al., 2011). These effects of dust grains underline the importance of clarifying the dust enrichment and the evolution of dust properties in galaxies. The evolution of dust in a galaxy is governed by various processes (e.g. Asano et al., 2013a). Dust grains are supplied by stellar sources such as supernovae (SNe) and asymptotic giant branch (AGB) stars (e.g. Bianchi & Schneider, 2007; Nozawa et al., 2007; Yasuda & Kozasa, 2012).
In the Milky Way ISM, the time-scale of dust destruction by SN shocks is  yr (Jones, Tielens, & Hollenbach 1996; but see Jones & Nuth 2011), which is significantly shorter than the time-scale of dust supply from stellar sources ( Gyr) (McKee, 1989). Therefore, it has been argued that dust grains grow in the ISM by the accretion of metals (we call the elements composing grains ‘metals’; this process is simply called accretion in this paper) to explain the dust abundance in the ISM in various types of galaxies (Dwek, 1998; Zhukovska, Gail, & Trieloff, 2008; Draine, 2009; Pipino et al., 2011; Valiante et al., 2011; Inoue, 2011; Asano et al., 2013b). The most efficient sites of grain growth are molecular clouds, where the typical number density of hydrogen molecules is  cm (Hirashita, 2000). Although accretion is suggested to govern the dust abundance, direct evidence of accretion is still scarce. Larger depletion of metal elements in cold clouds than in the warm medium (Savage & Sembach, 1996) may indicate grain growth by accretion in clouds. Voshchinnikov & Henning (2010, hereafter VH10) show that the metals are more depleted on to the dust as the optical extinction curve becomes flatter. The depletion is an indicator of what fraction of the metals has accreted on to the dust, while the flatness of the extinction curve reflects the size of the dust grains. Thus, their result indicates that the accretion of metals on to dust grains and the growth of grain size occur at the same sites. Dust grains also grow through coagulation, i.e. grain–grain sticking (e.g. Chokshi, Tielens, & Hollenbach, 1993). A deficit of very small grains contributing to the 60-$\mu$m emission is observed around a typical density in molecular clouds ( cm), and is interpreted as a consequence of coagulation (Stepnik et al., 2003). Like accretion, coagulation occurs efficiently in the dense medium. Thus, it is reasonable to consider that accretion and coagulation take place at the same time.
Although accretion and coagulation can simultaneously occur, they may have different influences on observational quantities. While accretion changes both the total dust mass and the grain sizes, coagulation only changes the grain sizes, conserving the total dust mass. In the above, we have mentioned the observed relation between depletion and extinction curve by VH10. In interpreting this relation, we should note that the depletion is only affected by accretion while the extinction curve is influenced by both accretion and coagulation. Therefore, there is a possibility of disentangling the effects of accretion and coagulation by interpreting the relation between depletion and extinction curve. The shape of extinction curve may be capable of discriminating the two grain growth mechanisms, accretion and coagulation (Cardelli, Clayton, & Mathis, 1989; O’Donnell & Mathis, 1997). Coagulation makes the grain sizes larger and the extinction curve flatter, while accretion increases the extinction itself. Hirashita (2012, hereafter H12) points out a possibility that accretion steepens the extinction curve at ultraviolet (UV) and optical wavelengths. Thus, accretion and coagulation, although these two processes occur simultaneously in the dense ISM, can have different impacts on the extinction curve. The above arguments are mainly based on qualitative observational evidence or purely theoretical arguments on each of the two processes (accretion and coagulation). Thus, in this paper, we solve the evolution of extinction curve by accretion and coagulation simultaneously and compare with some quantitative indicators that characterize major features of extinction curve. We also use the metal depletion as an indicator of accretion, to isolate the effect of accretion. 
As a consequence of this study, we can address whether the observed variation of extinction curves is consistent with the grain growth mechanisms and test the widely accepted hypothesis that accretion and coagulation are really occurring in the ISM. We concentrate on the Milky Way ISM, for which detailed data are available for the extinction curves. This paper is organized as follows. We explain the models used to calculate the effects of accretion and coagulation on the extinction curve in Section 2. We show the calculation results and compare them with the observational data of Milky Way extinction curves and depletion in Section 3. The models are also tested against a large sample of Milky Way extinction curves in Section 4. After discussing some implications of our results in Section 5, we conclude in Section 6.

## 2 Models

We consider the time evolution of the grain size distribution by the following two processes: the accretion of metals and the growth by coagulation in an interstellar cloud. These two processes are simply called accretion and coagulation in this paper. Our aim is to investigate whether dust grains processed by these mechanisms can explain the variation of extinction curves in various lines of sight. We only treat grains refractory enough to survive after the dispersal of the cloud, and do not consider volatile grains such as water ice. We assume that the grains are spherical with a constant material density $s$ dependent on the grain species, so that the grain mass $m$ and the grain radius $a$ are related as

$m=\frac{4}{3}\pi a^3 s. \tag{1}$

We define the grain size distribution such that $n(a,t)\,\mathrm{d}a$ is the number density of grains whose radii are between $a$ and $a+\mathrm{d}a$ at time $t$. In our numerical scheme, we use the number density of grains with mass between $m$ and $m+\mathrm{d}m$, $\tilde{n}(m,t)\,\mathrm{d}m$, which is related to $n(a,t)$ by $\tilde{n}(m,t)\,\mathrm{d}m=n(a,t)\,\mathrm{d}a$. For convenience, we also define the dust mass density, $\rho_{\rm d}(t)$:

$\rho_{\rm d}(t)=\int_0^\infty\frac{4}{3}\pi a^3 s\,n(a,t)\,\mathrm{d}a. \tag{2}$

The time evolution of the grain size distribution by accretion and coagulation is calculated based on the formulation by H12.
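Equation (2) can be illustrated with a small numerical sketch. The power-law size distribution below is a generic MRN-like toy example (not the Weingartner & Draine distribution actually used in the paper), chosen because its integral has a closed form to compare against.

```python
# Dust mass density (equation 2) for a toy power law n(a) = C * a^{-3.5}.
import math

s = 3.3          # material density (g cm^-3), silicate-like
C = 1.0e-25      # arbitrary normalization
a_min, a_max = 1e-7, 2.5e-5   # grain radii in cm (0.001-0.25 micron)

def n(a):
    return C * a**-3.5

# midpoint-rule integral of (4/3) pi a^3 s n(a) da
N = 100_000
da = (a_max - a_min) / N
rho_num = sum((4 / 3) * math.pi * (a_min + (k + 0.5) * da)**3 * s
              * n(a_min + (k + 0.5) * da) * da for k in range(N))

# analytic: the integrand reduces to (4/3) pi s C a^{-0.5},
# whose integral is (4/3) pi s C * 2 * (sqrt(a_max) - sqrt(a_min))
rho_ana = (4 / 3) * math.pi * s * C * 2 * (math.sqrt(a_max) - math.sqrt(a_min))

print(abs(rho_num / rho_ana - 1) < 1e-3)   # the two estimates agree
```

The $a^{-3.5}$ slope makes the dust mass dominated by the largest grains, while (as the text notes for accretion) the surface area is dominated by the smallest ones.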
We consider silicate and carbonaceous dust as grain species. To avoid the complexity of compound species, we treat these two grain species separately. We neglect the effect of the Coulomb interaction on the cross-section (i.e. the cross-section of a grain for accretion and coagulation is simply estimated by the geometric one). For ionized particles, the effect of electric polarization could raise the cross-section, especially for small grains (e.g. Draine & Sutin, 1987). Therefore, neglecting the Coulomb interaction may lead to an underestimate of grain growth by accretion. However, the ionization degree in dense clouds is extremely low (Yan, Lazarian & Draine, 2004), which means that almost all the metal atoms colliding with the dust grains are neutral. For grain–grain collisions, the effect of charging is of minor significance, since the kinetic energy of the grains is much larger than the Coulomb energy at the grain surface. We briefly review the formulation by H12 in the following. Accretion and coagulation, described separately in the following subsections, are solved simultaneously in the calculation.

### 2.1 Accretion

The equation for the time evolution of the grain size distribution by accretion is written as (see H12 for the derivation)

$\frac{\partial\sigma}{\partial t}+\dot{\mu}\frac{\partial\sigma}{\partial\mu}=\frac{1}{3}\dot{\mu}\sigma, \tag{3}$

where , , and . The time evolution of the logarithmic grain mass can be written as

$\dot{\mu}=\frac{3\xi_{\rm X}(t)}{\tau(m)}, \tag{4}$

where $\tau$ is the growth time-scale of the grain radius as a function of grain mass (given by equation 5; note that $a$ and $m$ are related by equation 1), and $\xi_{\rm X}(t)$ is the gas-phase fraction of the key element X (X = Si and C for silicate and carbonaceous dust, respectively) as a function of time ( is the number density of key species X in the gas phase as a function of time, and  is the number density of element X in both gas and dust phases).
The growth time-scale is evaluated as

$\tau(a)\equiv\frac{a f_{\rm X} s}{n_{\rm X,tot}\,m_{\rm X}\,S_{\rm acc}}\left(\frac{k_{\rm B}T_{\rm gas}}{2\pi m_{\rm X}}\right)^{-1/2}, \tag{5}$

where $m_{\rm X}$ is the atomic mass of X, $S_{\rm acc}$ is the sticking probability for accretion, $f_{\rm X}$ is the mass fraction of the key species in the dust, $k_{\rm B}$ is the Boltzmann constant, and $T_{\rm gas}$ is the gas temperature. The evolution of $\xi_{\rm X}(t)$ is calculated by (6). We solve equations (3)–(6) to obtain the time evolution of $\tilde{n}(m,t)$, which is translated into $n(a,t)$. To clarify the time-scale of accretion, we give numerical estimates of $\tau$. With the values of the parameters given in Table 1, $\tau$ in equation (5) can be estimated as

$\tau=\ldots\left(\frac{a}{0.1\,\micron}\right)\left(\frac{Z}{Z_\odot}\right)^{-1}\left(\frac{n_{\rm H}}{10^3\,{\rm cm}^{-3}}\right)^{-1}\left(\frac{T_{\rm gas}}{10\,{\rm K}}\right)^{-1/2}\left(\frac{S_{\rm acc}}{0.3}\right)^{-1}\,{\rm yr} \tag{7}$

for silicate, and

$\tau=0.993\times10^{8}\left(\frac{a}{0.1\,\micron}\right)\left(\frac{Z}{Z_\odot}\right)^{-1}\left(\frac{n_{\rm H}}{10^3\,{\rm cm}^{-3}}\right)^{-1}\left(\frac{T_{\rm gas}}{10\,{\rm K}}\right)^{-1/2}\left(\frac{S_{\rm acc}}{0.3}\right)^{-1}\,{\rm yr} \tag{8}$

for carbonaceous dust ($n_{\rm H}$ is the number density of hydrogen nuclei). We adopt the same values as those in H12 for the following quantities: $Z=Z_\odot$, $n_{\rm H}=10^3$ cm$^{-3}$, $T_{\rm gas}=10$ K, and $S_{\rm acc}=0.3$. Note that the same result is obtained with the same value of $n_{\rm H}t$; in particular, the density may vary over a wide range, so it is worth noting that the time-scales of accretion and coagulation both scale with $n_{\rm H}^{-1}$ (see also Section 5.2).

### 2.2 Coagulation

We solve a discretized coagulation equation used in Hirashita & Yan (2009). We consider thermal (Brownian) motion and turbulent motion as a function of grain mass (or radius, which is related to the mass by equation 1) (see H12 for details). The size-dependent velocity dispersion of the grains is denoted as $v(a)$. Coagulation is assumed to occur if the relative velocity is below the coagulation threshold given by Hirashita & Yan (2009) (see Chokshi, Tielens, & Hollenbach 1993; Dominik & Tielens 1997; Yan, Lazarian & Draine 2004) unless otherwise stated. We modify the treatment of relative velocities following Hirashita & Li (2013): in considering a collision between grains with sizes $a_1$ and $a_2$, we estimate the relative velocity, $v_{12}$, by

$v_{12}=\sqrt{v(a_1)^2+v(a_2)^2-2v(a_1)v(a_2)\cos\theta}, \tag{9}$

where $\theta$ is the angle between the two grain velocities, and $\cos\theta$ is randomly chosen between $-1$ and 1 in each time-step.
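Looking back at the accretion time-scale, equation (8) is easy to evaluate for other parameter choices; the small Python helper below (ours, not from the paper) makes the scalings explicit for carbonaceous dust.

```python
# Accretion time-scale scaling of equation (8), carbonaceous dust.
def tau_carbon_yr(a_um=0.1, Z_rel=1.0, nH=1e3, Tgas=10.0, Sacc=0.3):
    """Accretion time-scale in years; defaults are the fiducial values."""
    return (0.993e8 * (a_um / 0.1) * Z_rel**-1 * (nH / 1e3)**-1
            * (Tgas / 10.0)**-0.5 * (Sacc / 0.3)**-1)

# fiducial case: ~1e8 yr for a 0.1-micron grain
print(f"{tau_carbon_yr():.3e}")

# the smallest grains (a ~ 0.001 micron) grow two orders of magnitude faster,
# which is why accretion acts on the small-size end first (Section 3.1.1)
print(f"{tau_carbon_yr(a_um=1e-3):.3e}")
```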
We also assume that the sticking coefficient for coagulation is 1 unless otherwise stated; we also examine models (the tuned model described later) in which it is less than 1.

### 2.3 Initial conditions

We adopt an initial grain size distribution that fits the mean Milky Way extinction curve. Weingartner & Draine (2001) assume the following functional form for the grain size distributions of silicate and carbonaceous dust:

$\frac{1}{n_{\rm H}}\frac{{\rm d}n_i}{{\rm d}a}=\frac{C_i}{a}\left(\frac{a}{a_{{\rm t},i}}\right)^{\alpha_i}F(a;\beta_i,a_{{\rm t},i})\times\begin{cases}1 & (3.5\,{\rm\AA}<a<a_{{\rm t},i});\\ \exp\left[-\left((a-a_{{\rm t},i})/a_{{\rm c},i}\right)^3\right] & (a>a_{{\rm t},i}),\end{cases} \tag{10}$

where the index $i$ specifies the grain species (silicate or carbonaceous dust; the latter is assumed to be graphite) and the term

$F(a;\beta_i,a_{{\rm t},i})\equiv\begin{cases}1+\beta_i a/a_{{\rm t},i} & (\beta_i\ge0);\\ \left(1-\beta_i a/a_{{\rm t},i}\right)^{-1} & (\beta_i<0)\end{cases} \tag{11}$

provides curvature. We omit the log-normal term for small carbonaceous dust (i.e.  in Weingartner & Draine 2001), which is dominated by polycyclic aromatic hydrocarbons (PAHs), since it is not clear how PAHs contribute to accretion and coagulation to form macroscopic grains. Note that Weingartner & Draine (2001) favour  based on the observed strong PAH emission from the diffuse ISM (Li & Draine, 2001). If PAHs contribute to forming macroscopic grains through accretion and coagulation, our estimate is a conservative one for the effect of accretion. However, after coagulation becomes dominant at  Myr, the existence of PAHs does not affect the results as long as PAHs make a negligible contribution to the total dust mass, which is the case in Weingartner & Draine (2001)’s grain size distributions. The size distribution has five parameters for each grain species. We adopt the solution for  and  (see equation 15 for the definition of $R_V$) in Weingartner & Draine (2001) to fix these parameters. The adopted values are listed in Table 2. The initial value of $\xi_{\rm X}$ is given for each dust species as follows. The dust mass density at $t=0$, $\rho_{\rm d}(0)$, is given for silicate and carbonaceous dust separately by equation (2) with the initial grain size distribution (equation 10).
As derived in H12, $\rho_{\rm d}(0)$ is related to the initial condition for $\xi_{\rm X}$ as

$\rho_{\rm d}(0)=\frac{m_{\rm X}}{f_{\rm X}}\left[1-\xi_{\rm X}(0)\right]\left(\frac{Z}{Z_\odot}\right)\left(\frac{\rm X}{\rm H}\right)_\odot n_{\rm H}, \tag{12}$

where $Z$ is the metallicity (we apply $Z=Z_\odot$ throughout this paper), and $({\rm X/H})_\odot$ is the solar abundance (the ratio of the number of X nuclei to that of hydrogen nuclei at the solar metallicity). With the solar abundances of Si and C (Table 1), we obtain a positive $\xi_{\rm X}(0)$ for carbonaceous dust, while $\xi_{\rm X}(0)$ is negative for silicate. This means that too much silicon is used to explain the Milky Way extinction curve by Weingartner & Draine (2001). As they mentioned, since there is still an uncertainty in the interstellar elemental abundances, we do not try to adjust the silicon and carbon abundances. In fact, for the purpose of investigating the effects of accretion and coagulation on the extinction curve, fine-tuning of the elemental abundances is not essential. Thus, we assume a typical depletion of silicon in the ISM (VH10), while for carbon we adopt the value expected from the interstellar carbon abundance.

### 2.4 Calculation of extinction curves

Extinction curves are calculated by adopting the method described by Weingartner & Draine (2001). The extinction at wavelength $\lambda$ in units of magnitude ($A_\lambda$), normalized to the column density of hydrogen nuclei ($N_{\rm H}$), is written as

$\frac{A_\lambda(t)}{N_{\rm H}}=\frac{2.5\log{\rm e}}{n_{\rm H}}\int_0^\infty{\rm d}a\,n(a,t)\,\pi a^2 Q_{\rm ext}(a,\lambda), \tag{13}$

where $Q_{\rm ext}(a,\lambda)$ is the extinction efficiency factor, which is evaluated by using the Mie theory (Bohren & Huffman, 1983). For the details of the optical constants for silicate and carbonaceous dust, see Weingartner & Draine (2001). The colour excess at wavelength $\lambda$ is defined as

$E(\lambda-V)\equiv A_\lambda-A_V, \tag{14}$

where the wavelength at the $V$ band is 0.55 µm. The steepness of the extinction curve is often quantified by $R_V$:

$R_V\equiv\frac{A_V}{E(B-V)}, \tag{15}$

where the wavelength at the $B$ band is 0.44 µm. Since a flat extinction curve decreases the difference between $A_B$ and $A_V$ [i.e. $E(B-V)$], $R_V$ increases as the extinction curve becomes flatter.
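Equation (15) in code form, with illustrative numbers (not taken from the paper), shows why a flatter curve yields a larger $R_V$:

```python
# R_V = A_V / E(B-V): a flatter curve brings A_B closer to A_V,
# so E(B-V) shrinks and R_V grows. Magnitudes below are made up.
def R_V(A_B, A_V):
    return A_V / (A_B - A_V)

steep = R_V(A_B=1.32, A_V=1.0)   # E(B-V) = 0.32 -> R_V ~ 3.1 (Milky Way mean)
flat = R_V(A_B=1.20, A_V=1.0)    # E(B-V) = 0.20 -> R_V = 5.0

print(round(steep, 1), round(flat, 1))
print(flat > steep)              # flatter extinction curve -> larger R_V
```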
To extract some features of the UV extinction curves, we apply a parametric fit to the calculated extinction curves according to Fitzpatrick & Massa (2007). They adopt the following form for  Å:

$E(\lambda-V)/E(B-V)=\begin{cases}c_1+c_2x+c_3D(x,x_0,\gamma) & (x\le c_5);\\ c_1+c_2x+c_3D(x,x_0,\gamma)+c_4(x-c_5)^2 & (x>c_5),\end{cases} \tag{16}$

where $x\equiv1/\lambda$, and

$D(x,x_0,\gamma)=\frac{x^2}{(x^2-x_0^2)^2+x^2\gamma^2} \tag{17}$

is introduced to express the 2175-Å bump (carbon bump). There are seven parameters, $(c_1,c_2,c_3,c_4,c_5,x_0,\gamma)$. To stabilize the fitting, we fix $x_0$, $\gamma$, and $c_5$ by adopting the values fitting the mean extinction curve in the Milky Way (Fitzpatrick & Massa, 2007). The ranges of these parameters are narrow compared with the other parameters. We apply a least-squares fit to the theoretically predicted extinction curves to derive $(c_1,\ldots,c_4)$. It is observationally suggested that there is a link between the UV extinction and $R_V$ (Cardelli et al., 1989), although the scatter is large (Fitzpatrick & Massa, 2007). The carbon bump strength also has a weak correlation with $R_V$, in such a way that the bump is smaller for a flatter extinction curve (larger $R_V$) (Cardelli et al., 1989; Fitzpatrick & Massa, 2007). We examine whether these relations are consistent with accretion and coagulation. To estimate the bump strength, we introduce the following two quantities:  and  (note that  and  are simply determined by  after fixing ). If an extinction curve is steep in the optical, it tends to be steep also in the UV. This is reflected in the correlation between $c_2$ and $R_V$ ($c_2$ is the slope of the UV extinction curve on both sides of the carbon bump). Following Fitzpatrick & Massa (2007), we also define a far-UV (FUV) curvature as .

### 2.5 Data

We test the change of grain size distribution by accretion and coagulation through the comparison of our models with the observational extinction and depletion data compiled by VH10 for stars located in the Scorpius–Ophiuchus region, since the data cover a wide range in $R_V$. There are 7 stars (HD 143275, HD 144217, HD 144470, HD 147165, HD 147888, HD 147933, and HD 148184): for these stars, the Mg or Si abundance in the line of sight has been measured.
The depletion derived from the metal abundance measurements is crucial for separating the effect of accretion. Among the 7 stars, extinction curves are available for 4 stars (HD 144470, HD 147165, HD 147888, and HD 147933) (Voshchinnikov, 2012). We also use the larger sample compiled by Fitzpatrick & Massa (2007), although it does not include metal abundance data. They compiled UV-to-near-infrared extinction data for 328 Galactic stars. In particular, some important parameters that characterize the extinction curve (, , , and ) are available, so that we can check whether the change of the extinction curve by accretion and coagulation is consistent with the observed trend or range of these parameters for various $R_V$.

## 3 Results

### 3.1 Extinction curves

#### 3.1.1 General behaviour of theoretical predictions

We show the evolution of the grain size distribution in Fig. 1 for silicate and carbonaceous dust. To show the mass distribution per logarithmic size, we multiply $n(a,t)$ by $a^4$. The details of how accretion and coagulation contribute to the grain size distribution have already been discussed in H12. We briefly overview the behaviour. Since the growth rate of the grain radius by accretion is independent of $a$ (H12), the impact of grain growth is significant at small grain sizes. Moreover, almost all gas-phase metals accrete selectively on to small grains because the contribution to the grain surface is the largest at the smallest sizes (see also Weingartner & Draine, 1999). Since the accretion time-scale for the smallest grains () is short, the effect of accretion appears first, enhancing the grain abundance at the smallest sizes. Accretion stops after the gas-phase elements composing the dust are used up (∼10–30 Myr). After accretion stops, the evolution of the grain size distribution is driven by coagulation. Coagulation occurs in a bottom-up manner, because small grains dominate the total grain surface.
Moreover, the grains at the largest sizes remain intact because their velocities are larger than the coagulation threshold. Based on the grain size distributions shown above, we calculate the evolution of the extinction curve. In Fig. 2a, we show the extinction curves (A_λ/N_H, calculated by equation 13) at various epochs. To show that the initial condition reproduces the mean extinction curve, we also plot the averaged extinction curve taken from Pei (1992) (its shape is almost identical to that of Fitzpatrick & Massa (2007)'s mean extinction curve). We also show in Fig. 2b the normalized colour excess, E(λ−V)/E(B−V). In the bottom panels of those two figures, we present all the curves divided by their initial values to indicate how the steepness of the extinction curve changes as a function of time. In Fig. 2a, we observe that the extinction increases at all wavelengths at early times because accretion increases the dust abundance. The extinction normalized to the value in the V band increases toward shorter wavelengths during this accretion phase, indicating that accretion steepens the UV extinction curve. This is because the UV extinction responds more sensitively than the optical–near-infrared extinction to the increase of small grains by accretion (see also H12). After that, coagulation makes the grains larger (Fig. 1), decreasing the UV extinction and flattening the extinction curve. The optical–near-infrared extinction continues to increase even after accretion stops, because the increase of grain sizes by coagulation increases the grain opacity in the optical and near-infrared. The carbon (2175-Å) bump, which is produced by small carbonaceous grains (Draine & Lee 1984), becomes less prominent as coagulation proceeds, because the small carbonaceous grains responsible for the bump are depleted by coagulation. The normalized colour excesses shown in Fig. 2b naturally follow the same behaviour as in Fig. 2a. From the bottom panel in Fig.
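The flattening of the curve by coagulation can be illustrated with a toy opacity model. This is not the Mie calculation behind equation (13); the Q_ext interpolation and the grain sizes below are illustrative assumptions. The toy captures only the differential effect: growing the grains lowers the UV opacity far more than the near-infrared one (in the full calculation the optical–near-infrared extinction actually increases).

```python
import numpy as np

def q_ext_toy(a, lam):
    """Toy extinction efficiency: Q ~ a/lam for small grains, Q -> 2 for large."""
    x = 2 * np.pi * a / lam
    return 2 * x / (1 + x)

def opacity(a, lam):
    """Extinction per unit dust mass for single-size grains (arbitrary units)."""
    return q_ext_toy(a, lam) / a    # sigma/m ~ a^2 Q / a^3

lam_uv, lam_nir = 0.15, 1.0         # wavelengths in micron
a_small, a_big = 0.01, 0.1          # sizes before/after coagulation (micron)

# coagulation at fixed dust mass (a_small -> a_big) suppresses the UV
# opacity much more strongly than the near-infrared opacity
uv_drop = opacity(a_small, lam_uv) / opacity(a_big, lam_uv)
nir_drop = opacity(a_small, lam_nir) / opacity(a_big, lam_nir)
```

Since uv_drop exceeds nir_drop, the ratio of UV to near-infrared extinction falls, i.e. the curve flattens, which is the qualitative behaviour described above.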
2b, we observe that the ratio to the initial value is above unity at wavelengths shorter than 0.44 μm while accretion dominates, and that it falls below unity after coagulation takes place significantly. This behaviour confirms that accretion first makes the extinction curve steep and coagulation then modifies it in the opposite way (see also H12). The increase of R_V by coagulation is also reflected in the increase of the absolute value of the intercept c_1.

#### 3.1.2 Comparison with observed extinction curves

For our sample (Section 2.5), extinction curves are available for four stars; these are shown in Fig. 3. Even for large R_V, the carbon bump around 0.22 μm is clear and the UV extinction rises with a positive curvature. However, our theoretical predictions shown in Fig. 2 indicate that at large R_V (i.e. after significant coagulation), the carbon bump disappears. This implies that coagulation is in reality not as efficient as assumed in the model for carbonaceous dust. There is another discrepancy: the observed UV extinction tends to decrease significantly as R_V increases, while in the model it does not decrease significantly even after significant coagulation, except at the carbon bump. This is because silicate stops coagulating at a size that is still too small to affect the UV opacity. In summary, the observational data indicate (i) that carbonaceous dust should be less efficient in coagulation than assumed, and (ii) that silicate should grow beyond the size at which growth is limited by the coagulation threshold velocity. Therefore, we also try models with a reduced coagulation efficiency for carbonaceous dust and no coagulation threshold for silicate dust, as a tuned model described in the next subsection.

### 3.2 Tuned model

Based on the comparisons with the observational data above, we propose the following tuning of the model. Since the coagulation of carbonaceous dust proves to be too efficient, we decrease the coagulation efficiency by a factor of 2, adopting an efficiency of 0.5 for carbonaceous dust, while we keep 1 for silicate.
For silicate, we get rid of the coagulation threshold; that is, silicate grains are assumed to coagulate whenever they collide. Indeed, if we consider the fluffiness of coagulated grains, such structures can efficiently absorb the collision energy, and large grains can form after sticking (Ormel et al., 2009). We do not change the accretion efficiency for either silicate or carbonaceous dust, to minimize the fine-tuning. We call this the 'tuned model', while the original model, with a coagulation efficiency of 1 and the coagulation threshold, is called the 'original model without tuning'. As shown later, such a minimal tuning of the original model is enough to explain the variation of the main extinction curve features. We do not tune accretion, since the final result is only sensitive to the relative efficiencies of accretion and coagulation (so we only tune coagulation to avoid the degeneracy). The evolution of the grain size distribution for the tuned model is shown in Fig. 4. For silicate, compared with the distributions in the original model without tuning (Fig. 1), the grains grow further, even beyond 0.2 μm at late times. This is simply because we removed the coagulation threshold, so that the grains can coagulate at any velocity. For carbonaceous dust, the tuning slightly increases the abundance of the small grains that can contribute to the carbon bump, although the change is not as drastic as for silicate. Nevertheless, we will show later that such a small change has a significant impact on the strength of the carbon bump. Fig. 5 shows the extinction curves for the tuned model. The UV extinction decreases at later epochs, once coagulation dominates. This is consistent with the observed trend that the UV extinction decreases as R_V increases (Fig. 3). This decrease is particularly driven by the coagulation of silicate grains. The tuned model also keeps the bump strong even at later epochs, which is also consistent with the observed extinction curves.
The more prominent carbon bump in the tuned model than in the original model without tuning is due to the slower coagulation of carbonaceous dust. The variation of the observed UV extinction curves in Fig. 3, compared with that of the theoretical ones in Fig. 5, can thus be interpreted as flattening by coagulation.

### 3.3 Relation between depletion and R_V

VH10 use the depletion to quantify the fraction of an element X condensed into dust grains. In particular, they show that the depletion correlates with the shape of the extinction curve. As mentioned in Section 2.4, we adopt R_V as a representative quantity for the extinction curve slope. VH10 focus on silicate. Since we adopted Si as the key element of silicate, we concentrate on the relation between the Si depletion and R_V (note that R_V is affected by carbonaceous dust as well as silicate). In Fig. 6, we compare the relations calculated by the models with the observational data (we also plot the observed depletions of Fe and Mg as references). First, we discuss the original model without tuning, shown by the dashed line in Fig. 6. The slight decrease of R_V at early times is due to accretion. Accretion increases the depletion rapidly within ~30 Myr. After that, coagulation increases R_V as the extinction curve becomes flat. As a result, the depletion increases (while R_V decreases slightly) before R_V increases. This behaviour on the depletion–R_V diagram does not explain the observed positive correlation between these two quantities in Fig. 6. The discrepancy implies that accretion proceeds too quickly compared with coagulation in the original model without tuning. Next, we examine the tuned model. Note that the tuning is meant to reproduce the observed trend in the extinction curves, so it is not obvious whether it reproduces the observed relation between depletion and R_V. As is clear from Fig. 6, the tuned model is consistent with the observed trend. This is because of the more efficient coagulation of silicate, which pushes R_V to larger values at earlier epochs.
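The depletion used above, i.e. the fraction of an element condensed into grains, can be written down in a small sketch. The function and the numbers passed to it are hypothetical placeholders, not values from VH10.

```python
def depletion(logN_gas, logN_H, logXH_ref):
    """Fraction of element X locked in dust: 1 - N_gas(X) / (N_H * (X/H)_ref).

    All inputs are base-10 logarithms of column densities (N_gas, N_H) and of
    the reference (e.g. solar) abundance ratio (X/H)_ref.
    """
    gas_fraction = 10**(logN_gas - logN_H - logXH_ref)
    return 1.0 - gas_fraction

# hypothetical example: 10% of the element left in the gas -> depletion 0.9
d = depletion(15.0, 21.0, -5.0)
```

A depletion near 1 means nearly all of the element is in grains; accretion moves this number upward, which is the quantity tracked against R_V in Fig. 6.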
Therefore, the tuned model reproduces the observed relation between depletion and R_V.

## 4 Correlations among Various Extinction Properties

Now we discuss the models in the context of the largest sample of Milky Way extinction curves, taken by Fitzpatrick & Massa (2007). Cardelli et al. (1989) suggested that R_V correlates with the UV slope and the carbon bump strength. More recently, Fitzpatrick & Massa (2007) showed that the correlation is not significant for the bulk of the sample, but that there is a slight trend if data points with larger R_V values are included (see also below). We examine whether these weak correlations reflect the evolution of the grain size distribution by accretion and coagulation. More precisely, since the correlations are weak, we check whether the models are consistent with the observed range of those parameters. As mentioned in Section 2.4, we examine the UV slope c_2, the FUV curvature, and the two bump-strength measures as characteristic quantities of the extinction curve. In Fig. 7, we show the relation between each of these parameters and R_V. For the observational data, the stars at larger distances (called the distant sample) and at smaller distances (called the nearby sample) are shown with different symbols. For all those parameters, the dispersion is smaller for the distant sample than for the nearby sample. This indicates that the peculiarity of each individual line of sight is reflected more in the nearby sample, while it is 'averaged out' in the distant sample. Even for the distant sample, some trends remain: c_2 and the carbon bump strength both correlate negatively with R_V, although the correlations are weak. The FUV curvature has a weak positive trend with R_V at large R_V, although there also seems to be an overall negative trend if we include the data with smaller R_V. Because of the large variety among individual lines of sight and the weakness of the trends, we only use the relations in Fig. 7 as a reference for our models and do not attempt a fit to the data.
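The averaging effect described above, i.e. a longer sightline smoothing out line-of-sight peculiarities so that the underlying correlation emerges from smaller scatter, can be illustrated with synthetic numbers (these are stand-ins, not the Fitzpatrick & Massa sample):

```python
import numpy as np

rng = np.random.default_rng(1)

# a weak negative trend of UV slope with R_V, with line-of-sight scatter
# that is assumed larger for the nearby sample than for the distant one
rv = rng.uniform(2.5, 5.5, 200)
slope_distant = -0.3 * rv + rng.normal(0.0, 0.1, 200)
slope_nearby = -0.3 * rv + rng.normal(0.0, 0.4, 200)

# residual dispersion about the common trend
disp_distant = np.std(slope_distant + 0.3 * rv)
disp_nearby = np.std(slope_nearby + 0.3 * rv)

# the same underlying trend yields a stronger correlation where the
# line-of-sight scatter is smaller
r_distant = np.corrcoef(rv, slope_distant)[0, 1]
r_nearby = np.corrcoef(rv, slope_nearby)[0, 1]
```

This is the statistical sense in which the distant sample, despite having the same underlying physics, shows the tighter (though still weak) trends used as a reference for the models.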
Although the initial condition may vary, we simply adopt the same models used above (the original model without tuning and the tuned model, with the initial condition described in Section 2.3) in order to focus on the trends produced by accretion and coagulation. For the UV slope c_2, the original model without tuning predicts an increase (i.e. a steepening of the UV extinction curve) for increasing R_V, although, as shown in Fig. 2, coagulation tends to flatten the overall UV extinction curve. As mentioned in Section 2.4, c_2 reflects the slope around the carbon bump. In the original model without tuning, the extinction decreases around the bump, while it changes little at the wavelengths dominated by silicate. The decrease around the bump is due to the flattening of the carbonaceous extinction curve. On the other hand, the silicate extinction changes insignificantly, since the silicate abundance at the sizes where the effect on the UV slope is largest is little affected by coagulation (their velocities are larger than the coagulation threshold), so the extinction at the silicate-dominated wavelengths does not change much. Thus, the extinction curve appears to steepen in terms of c_2. The tuned model better reproduces the decreasing trend of c_2 for increasing R_V. For the FUV curvature, the original model without tuning reproduces the observed trend of a decreasing curvature for increasing R_V (Fig. 7b). However, after 100 Myr the curvature becomes negative, which reflects a convex shape in the FUV; no Milky Way extinction curve shows a negative FUV curvature. The tuned model lies closer to the observational data points. Also for the bump strengths, the original model without tuning can explain the trend of a less prominent bump for a larger R_V. However, the bump weakens too much: Fig. 2 shows that the bump eventually disappears after 100 Myr, which is inconsistent with the fact that an extinction curve without the carbon (2175-Å) bump is never observed in the Milky Way. The tuned model keeps the bump strong, as we observe in Fig. 5.
The more prominent carbon bump in the tuned model than in the original model without tuning is due to the slower coagulation of carbonaceous dust. The enhancement of the bump by accretion at early epochs, which may reproduce the upward scatter in the bump strengths, is also worth noting. However, this enhancement produces too large bump strengths at small R_V, although it is interesting that the data points there are again reproduced by the tuned model. The implication is that the scatter in the observational data can reflect the strong dependence on the coagulation efficiency, which may be changed by the detailed material properties of the dust. We emphasize that only a factor-of-two variation in the coagulation efficiency (0.5 and 1) easily covers the observed range of bump strengths.

## 5 Discussion

### 5.1 Implications of the tuned model

In the tuned model, the coagulation threshold of silicate is removed, since the halting of coagulation at small sizes prevents the extinction curve from becoming flat enough to explain the observed UV extinction curves for large R_V. The coagulation efficiency of carbonaceous dust is reduced (only) by a factor of 2 (i.e. we adopted 0.5) to keep a sufficient abundance of the small carbonaceous grains contributing to the carbon bump. These changes are already enough to reproduce the observational trend in the shape of the extinction curve for increasing R_V. The UV slope, the FUV curvature and the carbon bump strength have weak correlations with R_V. The original model without tuning and the tuned model predict very different tracks in the relations between R_V and each of those quantities. This implies that the large scatter of those quantities in the observational data is due to their sensitive response to the material properties (in our models, the coagulation efficiency).

### 5.2 Lifetime of molecular clouds

In this paper, we adopted a duration of accretion and coagulation up to 300 Myr. This may be long compared with the lifetime of molecular clouds (e.g. Lada, Lombardi, & Alves, 2010), but Koda et al.
(2009) suggest the possibility that molecular clouds are sustained over the rotation time-scale in a spiral galaxy. Thus, it is worth investigating such a long duration, a few hundred Myr, for accretion and coagulation in molecular clouds. The duration is also degenerate with the gas density, as mentioned in Section 2, in such a way that the same value of the density–time product returns the same result. For example, a higher gas density reaches the same evolutionary stage at a proportionally earlier time. However, the tracks on the diagrams shown in Figs. 6 and 7 do not change even if we change the gas density. In other words, our predictions for the theoretical tracks in these diagrams are robust against variations of the gas density.

## 6 Conclusion

We have examined the effects of two major growth mechanisms of dust grains, accretion and coagulation, on the extinction curve. First, we examined the evolution of the extinction curve using the prescriptions for accretion and coagulation with commonly used material parameters. The observed Milky Way extinction curves are not reproduced in the following two respects: (i) the extinction normalized to the hydrogen column density (A_λ/N_H) does not decrease in the UV in the model, even for large R_V; (ii) the carbon bump is too small after significant coagulation. These two discrepancies are resolved by the following prescriptions (tuning): more efficient coagulation for silicate, by removing the coagulation threshold velocity, and less efficient coagulation for carbonaceous dust, by a factor of 2. This 'tuned' model also reproduces the trend between depletion and R_V, which implies that the time-scale of coagulation relative to that of accretion is appropriate. On the other hand, the original model without tuning fails to reproduce the observed trend between depletion and R_V, since too-rapid accretion relative to coagulation increases the depletion even before coagulation increases R_V. We have also examined the relation between R_V and each of the following extinction curve features: the UV slope, the FUV curvature, and the carbon bump strength.
The correlation between the UV slope and R_V, which is the strongest among those three correlations, is well reproduced by the tuned model. For the FUV curvature and the carbon bump, the observational data lie between the original model without tuning and the tuned model, which implies that the scatter in the observational data can be attributed to a sensitive dependence on the dust properties (the coagulation efficiency in our models). This also means that the variation of extinction curves in the Milky Way can be interpreted through the variation of the grain size distribution driven by accretion and coagulation. For a more complete understanding, other dust processing mechanisms, such as shattering, should be included.

## Acknowledgments

HH thanks the support from NSC grant NSC102-2119-M-001-006-MY3, and NVV acknowledges the support from grant RFBR 13-02-00138a.

## References

• Asano et al. (2013a) Asano, R., Takeuchi, T. T., Hirashita, H., & Nozawa, T. 2013a, MNRAS, 432, 637
• Asano et al. (2013b) Asano, R., Takeuchi, T. T., Hirashita, H., & Inoue, A. K. 2013b, Earth Planets Space, 65, 213
• Bianchi & Schneider (2007) Bianchi, R., & Schneider, R. 2007, MNRAS, 378, 973
• Bohren & Huffman (1983) Bohren, C. F., & Huffman, D. R. 1983, Absorption and Scattering of Light by Small Particles. Wiley, New York
• Cardelli et al. (1989) Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245
• Cazaux & Tielens (2004) Cazaux, S., & Tielens, A. G. G. M. 2004, ApJ, 604, 222
• Chokshi, Tielens, & Hollenbach (1993) Chokshi, A., Tielens, A. G. G. M., & Hollenbach, D. 1993, ApJ, 407, 806
• Cox (2000) Cox, A. N. 2000, Allen's Astrophysical Quantities, 4th ed., Springer, New York
• Désert, Boulanger, & Puget (1990) Désert, F.-X., Boulanger, F., & Puget, J. L. 1990, A&A, 237, 215
• Dominik & Tielens (1997) Dominik, C., & Tielens, A. G. G. M. 1997, ApJ, 480, 647
• Draine (2009) Draine, B. T. 2009, in Henning Th., Grün E., Steinacker J., eds, ASP Conf. Ser.
414, Cosmic Dust – Near and Far. Astron. Soc. Pac., San Francisco, p. 453
• Draine & Lee (1984) Draine, B. T., & Lee, H. M. 1984, ApJ, 285, 89
• Draine & Sutin (1987) Draine, B. T., & Sutin, B. 1987, ApJ, 320, 803
• Dwek (1998) Dwek, E. 1998, ApJ, 501, 643
• Fitzpatrick & Massa (2007) Fitzpatrick, E. L., & Massa, D. 2007, ApJ, 663, 320
• Hirashita (2000) Hirashita, H. 2000, PASJ, 52, 585
• Hirashita (2012) Hirashita, H. 2012, MNRAS, 422, 1263 (H12)
• Hirashita & Li (2013) Hirashita, H., & Li, Z.-Y. 2013, MNRAS, 434, L70
• Hirashita & Yan (2009) Hirashita, H., & Yan, H. 2009, MNRAS, 394, 1061
• Inoue (2011) Inoue, A. K. 2011, Earth, Planets Space, 63, 1027
• Jones & Nuth (2011) Jones, A. P., & Nuth, J. A., III 2011, A&A, 530, A44
• Jones et al. (1996) Jones, A. P., Tielens, A. G. G. M., & Hollenbach, D. J. 1996, ApJ, 469, 740
• Jones et al. (1994) Jones, A. P., Tielens, A. G. G. M., Hollenbach, D. J., & McKee, C. F. 1994, ApJ, 433, 797
• Koda et al. (2009) Koda, J., et al. 2009, ApJ, 700, L132
• Lada, Lombardi, & Alves (2010) Lada, C. J., Lombardi, M., & Alves, J. F. 2010, ApJ, 724, 687
• Li & Draine (2001) Li, A., & Draine, B. T. 2001, ApJ, 554, 778
• McKee (1989) McKee, C. F. 1989, in Allamandola, L. J., & Tielens, A. G. G. M., eds, IAU Symp. 135, Interstellar Dust, Kluwer, Dordrecht, p. 431
• Nozawa et al. (2007) Nozawa, T., Kozasa, T., Habe, A., Dwek, E., Umeda, H., Tominaga, N., Maeda, K., & Nomoto, K. 2007, ApJ, 666, 955
• O'Donnell & Mathis (1997) O'Donnell, J. E., & Mathis, J. S. 1997, ApJ, 479, 806
• Ormel et al. (2009) Ormel, C. W., Paszun, D., Dominik, C., & Tielens, A. G. G. M. 2009, A&A, 502, 845
• Pei (1992) Pei, Y. C. 1992, ApJ, 395, 130
• Pipino et al. (2011) Pipino, A., Fan, X. L., Matteucci, F., Calura, F., Silva, L., Granato, G., & Maiolino, R. 2011, A&A, 525, A61
• Savage & Sembach (1996) Savage, B. D., & Sembach, K. R. 1996, ARA&A, 34, 279
• Stepnik et al. (2003) Stepnik, B., et al. 2003, A&A, 398, 551
• Valiante et al.
(2011) Valiante, R., Schneider, R., Salvadori, S., & Bianchi, S. 2011, MNRAS, 416, 1916
• Voshchinnikov (2012) Voshchinnikov, N. V. 2012, Journal of Quantitative Spectroscopy & Radiative Transfer, 113, 2334
• Voshchinnikov & Henning (2010) Voshchinnikov, N. V., & Henning, Th. 2010, A&A, 517, A45 (VH10)
• Weingartner & Draine (1999) Weingartner, J. C., & Draine, B. T. 1999, ApJ, 517, 292
• Weingartner & Draine (2001) Weingartner, J. C., & Draine, B. T. 2001, ApJ, 548, 296
• Yamasawa et al. (2011) Yamasawa, D., Habe, A., Kozasa, T., Nozawa, T., Hirashita, H., Umeda, H., & Nomoto, K. 2011, ApJ, 735, 44
• Yan, Lazarian & Draine (2004) Yan, H., Lazarian, A., & Draine, B. T. 2004, ApJ, 616, 895
• Yasuda & Kozasa (2012) Yasuda, Y., & Kozasa, T. 2012, ApJ, 745, 159
• Zhukovska, Gail, & Trieloff (2008) Zhukovska, S., Gail, H.-P., & Trieloff, M. 2008, A&A, 479, 453
Search by Topic: resources tagged with Area, similar to Blue and White (83 results).

- Blue and White (Stage 3): Identical squares of side one unit contain some circles shaded blue. In which of the four examples is the shaded area greatest?
- Perimeter Possibilities (Stage 3): I'm thinking of a rectangle with an area of 24. What could its perimeter be?
- Can They Be Equal? (Stage 3): Can you find rectangles where the value of the area is the same as the value of the perimeter?
- Inscribed in a Circle (Stage 3): The area of a square inscribed in a circle with a unit radius is, satisfyingly, 2. What is the area of a regular hexagon inscribed in a circle with a unit radius?
- Bull's Eye (Stage 3): What fractions of the largest circle are the two shaded regions?
- F'arc'tion (Stage 3): At the corner of the cube circular arcs are drawn and the area enclosed shaded. What fraction of the surface area of the cube is shaded? Try working out the answer without recourse to pencil and. . . .
- The Pillar of Chios (Stage 3): Semicircles are drawn on the sides of a rectangle ABCD. A circle passing through points ABCD carves out four crescent-shaped regions. Prove that the sum of the areas of the four crescents is equal to. . . .
- Tilted Squares (Stage 3): It's easy to work out the areas of most squares that we meet, but what if they were tilted?
- Pick's Theorem (Stage 3): Polygons drawn on square dotty paper have dots on their perimeter (p) and often internal (i) ones as well. Find a relationship between p, i and the area of the polygons.
- Hallway Borders (Stage 3): A hallway floor is tiled and each tile is one foot square. Given that the number of tiles around the perimeter is EXACTLY half the total number of tiles, find the possible dimensions of the hallway.
- Changing Areas, Changing Perimeters (Stage 3): How can you change the area of a shape but keep its perimeter the same? How can you change the perimeter but keep the area the same?
- Semi-square (Stage 4): What is the ratio of the area of a square inscribed in a semicircle to the area of the square inscribed in the entire circle?
- The Pi Are Square (Stage 3): A circle with a radius of 2.2 centimetres is drawn touching the sides of a square. What area of the square is NOT covered by the circle?
- Appearing Square (Stage 3): Make an eight by eight square, the layout is the same as a chessboard. You can print out and use the square below. What is the area of the square? Divide the square in the way shown by the red dashed. . . .
- Curvy Areas (Stage 4): Have a go at creating these images based on circles. What do you notice about the areas of the different sections?
- Shear Magic (Stage 3): What are the areas of these triangles? What do you notice? Can you generalise to other "families" of triangles?
- An Unusual Shape (Stage 3): Can you maximise the area available to a grazing goat?
- Square Areas (Stage 3): Can you work out the area of the inner square and give an explanation of how you did it?
- Poly-puzzle (Stage 3): This rectangle is cut into five pieces which fit exactly into a triangular outline and also into a square outline where the triangle, the rectangle and the square have equal areas.
- Kite (Stage 3): Derive a formula for finding the area of any kite.
- Great Squares (Stages 2 and 3): Investigate how this pattern of squares continues. You could measure lengths, areas and angles.
- Squaring the Circle (Stage 3): Bluey-green, white and transparent squares with a few odd bits of shapes around the perimeter. But, how many squares are there of each type in the complete circle? Study the picture and make. . . .
- Growing Rectangles (Stage 3): What happens to the area and volume of 2D and 3D shapes when you enlarge them?
- Covering Cups (Stage 3): What is the shape and dimensions of a box that will contain six cups and have as small a surface area as possible?
- Kissing Triangles (Stage 3): Determine the total shaded area of the 'kissing triangles'.
- Making Rectangles (Stages 2 and 3): A task which depends on members of the group noticing the needs of others and responding.
- Partly Circles (Stage 4): What is the same and what is different about these circle questions? What connections can you make?
- Towers (Stage 3): A tower of squares is built inside a right angled isosceles triangle. The largest square stands on the hypotenuse. What fraction of the area of the triangle is covered by the series of squares?
- Isosceles (Stage 3): Prove that a triangle with sides of length 5, 5 and 6 has the same area as a triangle with sides of length 5, 5 and 8. Find other pairs of non-congruent isosceles triangles which have equal areas.
- Semi-detached (Stage 4): A square of area 40 square cms is inscribed in a semicircle. Find the area of the square that could be inscribed in a circle of the same radius.
- Extending Great Squares (Stages 2 and 3): Explore one of these five pictures.
- Carpet Cuts (Stage 3): You have a 12 by 9 foot carpet with an 8 by 1 foot hole exactly in the middle. Cut the carpet into two pieces to make a 10 by 10 foot square carpet.
- Lying and Cheating (Stage 3): Follow the instructions and you can take a rectangle, cut it into 4 pieces, discard two small triangles, put together the remaining two pieces and end up with a rectangle the same size. Try it!
- Square Pegs (Stage 3): Which is a better fit, a square peg in a round hole or a round peg in a square hole?
- Compare Areas (Stage 4): Which has the greatest area, a circle or a square inscribed in an isosceles, right angle triangle?
- Equilateral Areas (Stage 4): ABC and DEF are equilateral triangles of side 3 and 4 respectively. Construct an equilateral triangle whose area is the sum of the areas of ABC and DEF.
- Pie Cuts (Stage 3): Investigate the different ways of cutting a perfectly circular pie into equal pieces using exactly 3 cuts. The cuts have to be along chords of the circle (which might be diameters).
- Tiling Into Slanted Rectangles (Stages 2 and 3): A follow-up activity to Tiles in the Garden.
- Warmsnug Double Glazing (Stage 3): How have "Warmsnug" arrived at the prices shown on their windows? Which window has been given an incorrect price?
- Cylinder Cutting (Stages 2 and 3): An activity for high-attaining learners which involves making a new cylinder from a cardboard tube.
- Areas of Parallelograms (Stage 4): Can you find the area of a parallelogram defined by two vectors?
- Exploration Versus Calculation (Stages 1, 2 and 3): This article, written for teachers, discusses the merits of different kinds of resources: those which involve exploration and those which centre on calculation.
- Six Discs (Stage 4): Six circular discs are packed in different-shaped boxes so that the discs touch their neighbours and the sides of the box. Can you put the boxes in order according to the areas of their bases?
- Isosceles Triangles (Stage 3): Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
- Salinon (Stage 4): This shape comprises four semi-circles. What is the relationship between the area of the shaded region and the area of the circle on AB as diameter?
- Fence It (Stage 3): If you have only 40 metres of fencing available, what is the maximum area of land you can fence off?
- Rati-o (Stage 3): Points P, Q, R and S each divide the sides AB, BC, CD and DA respectively in the ratio of 2 : 1. Join the points. What is the area of the parallelogram PQRS in relation to the original rectangle?
- Trapezium Four (Stage 4): The diagonals of a trapezium divide it into four parts. Can you create a trapezium where three of those parts are equal in area?
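The Pick's Theorem task above asks for a relationship between boundary dots p, interior dots i and the area: A = i + p/2 - 1. It can be checked numerically; the polygon and helper functions below are our own illustration, not part of the task.

```python
from math import gcd

def area(poly):
    """Polygon area via the shoelace formula."""
    s = 0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

def boundary_points(poly):
    """Lattice points on the perimeter: gcd(|dx|, |dy|) per edge."""
    return sum(gcd(abs(x2 - x1), abs(y2 - y1))
               for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))

def on_edge(px, py, x1, y1, x2, y2):
    cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
    return (cross == 0 and min(x1, x2) <= px <= max(x1, x2)
            and min(y1, y2) <= py <= max(y1, y2))

def interior_points(poly):
    """Brute-force ray casting over the bounding box, skipping the boundary."""
    edges = list(zip(poly, poly[1:] + poly[:1]))
    xs = [x for x, _ in poly]
    ys = [y for _, y in poly]
    count = 0
    for px in range(min(xs), max(xs) + 1):
        for py in range(min(ys), max(ys) + 1):
            if any(on_edge(px, py, x1, y1, x2, y2)
                   for (x1, y1), (x2, y2) in edges):
                continue
            inside = False
            for (x1, y1), (x2, y2) in edges:
                if (y1 > py) != (y2 > py):
                    xcross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                    if px < xcross:
                        inside = not inside
            count += inside
    return count

# a tilted square on the lattice (compare the Tilted Squares task):
# area 5, with 4 boundary and 4 interior lattice points, so 4 + 4/2 - 1 = 5
square = [(0, 0), (2, 1), (1, 3), (-1, 2)]
```

Running the helpers on other lattice polygons is a quick way to gather evidence for the p, i, A relationship the task asks you to find.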
# The role of surface air temperature over east Asia in the early and late Indian Summer Monsoon onset over Kerala

## Abstract

This study investigates the physical mechanism for the late Indian Summer Monsoon Onset over Kerala (MOK). 14 early and 9 late onset years are selected using the criterion that the onset occurs 5 days or more before or after the normal onset date (1st June, according to the India Meteorological Department). We then perform composite analyses of the May monthly means and of the daily evolution during early and late onset years to examine the differences in monsoon circulation features prior to the MOK. We find that the advection of Surface Air Temperature (SAT) from northern to southern China and the eastern Tibetan Plateau (TP) plays an important role in modulating the MOK processes. In the late onset years, more of the low-level jet (LLJ) from the Bay of Bengal (BOB) diverts towards east Asia before the onset, owing to an extension of low sea level pressure and high SAT over east Asia (the eastern TP and east-central China). This strengthens the low-level convergence and upper-level divergence over the eastern TP and southern China. As a result, a significant amount of moisture from the BOB is transported towards the eastern TP and southern China. Thereby, a comparatively weaker LLJ and a deficit in low-level moisture supply over the eastern BOB play the key roles in modulating the MOK processes.

## Introduction

The Indian Summer Monsoon (ISM) onset is one of the most significant and anticipated meteorological phenomena, marking the commencement of the rainy season in India. During the ISM season (June-July-August-September) India receives about 80% of its total annual precipitation1.
The ISM onset represents significant transitions in the large-scale atmospheric and oceanic circulations in the Indo-Pacific region, and any uncertainty or drastic shift in the ISM onset can have a huge impact on agriculture and food security in the Indian subcontinent. The monsoon onset is associated with a sudden and sustained increase in daily rainfall over Kerala, a steady increase in convection over the southeast Arabian Sea (AS), and a strong cross-equatorial low-level jet2 (LLJ). It also increases the moisture convergence over the AS and is involved in the development of a low pressure area stretching from the southeast AS to the Gujarat coast, known as the 'monsoon onset vortex'3. It is well known that the land-sea thermal contrast plays a major role in modulating the ISM onset. Yanai et al.4 suggested that the Asian Summer Monsoon (ASM) onset is an interaction between the Plateau-induced circulation and the circulation associated with the northward-migrating rainband4. Li and Yanai (1996) argued that the ASM onset coincides with the reversal of the meridional temperature gradient in the upper troposphere over the southern Tibetan Plateau (TP). In the pre-monsoon season (March-May), the upper-tropospheric temperature (200–500 hPa) in the TP region increases due to the release of sensible heat from the land surface, and this abrupt warming increases the meridional temperature gradient in the TP region (as the regions south and north of the TP remain relatively cooler during this time)5. Sea surface temperature (SST) anomalies over the Indian and Pacific Oceans also influence the ISM onset. During January to May, as the centre of the tropical warm pool shifts from the south-west Pacific to the North Indian Ocean (NIO)6,7,8, the warm pool creates favourable conditions for Monsoon Onset over Kerala (MOK) by building up moisture over the NIO and the surrounding Pacific Ocean9,10.
However, Sabeerali et al.11 demonstrated that warmer Indian Ocean SST is closely linked with delayed monsoon onset. Sabeerali et al.11 and Pradhan et al.12 also indicated that warmer SSTs over the central Pacific, eastern Pacific and Indian Ocean are highly related to late monsoon onset through decreased convection and latent heat release over the Indian landmass11,12. This is associated with the descending (anti-cyclonic circulation) and ascending (cyclonic circulation) branches of the Walker circulation over India and the central Pacific, respectively. The normal ISM onset date, as determined by the India Meteorological Department (IMD), is 1st June, with an interannual variability of 8–9 days. Changes in the onset date have a significant impact on the socio-economy of the Indian subcontinent through the variability of the seasonal mean monsoon rainfall13. In the historical record, the earliest and the most delayed onsets occurred on 11th May 1918 and 18th June 1972, respectively. Joseph et al.2 showed that delays in MOK are strongly associated with El Niño. They argued that in the pre-monsoon season there are warm SST anomalies to the south of the equatorial Indian and Pacific Oceans and cold SST anomalies in the northern tropical and subtropical oceans. The difference in SST induces variability in monsoon onset by affecting the movement of the Inter Tropical Convergence Zone (ITCZ)3. Preenu et al.14 studied the variability of MOK dates from 1870–2014 and their results are consistent with Joseph et al.2. They discussed that the large-scale SST anomalies are similar to the canonical El Niño and Southern Oscillation (ENSO) pattern and are associated with the inter-annual variability of the MOK3,14. Zhou and Murtugudde (2014) reported that strong northward-propagating intra-seasonal variability (ISV) may trigger an early onset by transporting moisture and momentum flux northward, which may trigger more atmospheric instability and convection in the tropics15.
The northward propagation of ISVs depends on the vertical wind shear, and the strength of the ISV is enhanced when the shear turns easterly in the northern hemisphere16. Sahana et al.17 showed a shift in ISM onset during 1976/1977 and reported a prominent delay in ISM onset dates based on the Hydrologic Onset and Withdrawal Index (HOWI). They argued that the delays in onset after the 1976/77 shift are due to a delay in the development of the easterly vertical shear. This reduces the northward propagation of ISVs during May-June and therefore limits the moisture supply from the Indian Ocean to the AS17. Mao and Wu (2007) indicated that an early (late) Bay of Bengal (BOB) monsoon onset is associated with less (more) TP snow accumulation during the preceding winter18. Years of high (low) Eurasian snow amounts in winter/spring are followed by deficient (excess) ISM rainfall19. It has also been shown that excess summer monsoonal rainfall over Asia corresponds to low April snow depth over the Eurasian region20. Further, Panda et al.21 explained that west Eurasian snow depth anomalies drive significant changes in the mid-latitude circulation, which in turn lead to weak/strong monsoon circulation during deficient/excess ISM rainfall respectively21. Although several researchers have investigated the ISM and its onset variability, the huge complexity of the large-scale evolution of the summer monsoon circulation and its ocean-atmosphere coupling still demands more robust investigation of the dynamical meteorological drivers controlling MOK variability. In order to find the differences between the large-scale circulation characteristics associated with early and late MOKs and their evolution, this study selects 14 early and 9 late onset years from 1979 to 2017. To carry out the investigation, several key meteorological variables are systematically analyzed.
## Results

### Mean composite atmospheric circulation in May

To examine the remote influences on the variability of MOK, composite analyses for the month of May are employed for the early and late onset years. The monthly mean composites of Outgoing Longwave Radiation (OLR) and total cloud cover for the early onset, late onset and their difference (early minus late onset) are shown in Fig. 1. The OLR field (Fig. 1a–c) shows that a band of convection (low OLR values) develops over the eastern BOB and TP during MOK. During the early onset, the convection bands over the AS and eastern BOB are stronger than during the late onset (Fig. 1c). Along the band of convection (from the southern AS to the eastern BOB), the entire region remains significantly cloudy in the early onset (Fig. 1d). The composite difference in total cloud cover is significant over almost the entire AS region (Fig. 1f). The composite mean winds at 850 hPa and 200 hPa for the early and late onset years are shown in Fig. 2. The cross-equatorial LLJ over the AS and BOB is more intense in the early onset years, while in the late onset years the 850 hPa wind over south-east Asia (SEA), the South China Sea (SCS), southern China and the eastern TP is stronger (Fig. 2a–c). The largest difference, of about 4.4 m s−1, is noticed over the southern AS (Fig. 2c). On the other hand, during the late onset the Subtropical Jet stream (STJ) at 200 hPa is stronger over the east of the TP and China as compared with the early onset (Fig. 2d–f). The South Asian High (SAH) is centered over the SEA landmass for both categories, but a prominent ridge forms over the eastern TP in the STJ during the late onset. To examine the changes in SST, composites of SST during the early onset, the late onset and their difference are shown in Supplementary Fig. S1a–c.
During the late onset, warmer SSTs dominate the NIO region, especially the BOB and AS, and colder SSTs prevail in the extreme South Indian Ocean (SIO) adjacent to the Mascarene High (MH, ~30°–35°S, Supplementary Fig. S1b). These features are in agreement with Sabeerali et al.11, who argued that delayed monsoon onset is linked with warmer Indian Ocean SST. The composite difference of the SSTs between the early and late onset shows a significant decrease in SST in the NIO region during the early onset years (Supplementary Fig. S1c). Furthermore, we also investigate the composite pattern of Sea Level Pressure (SLP) for the early and late onset, shown in Supplementary Fig. S1d–f. In general, during the pre-monsoon season, the strong land-sea thermal contrast builds up an intense low pressure over the northern Indian subcontinent, while high pressure forms over the SIO (especially around Madagascar). A region of significantly lower SLP over the AS and the Indian subcontinent persists in the early onset, while during the late onset this band of lower SLP extends significantly towards southern China (Supplementary Fig. S1f). Also, the strength of the MH appears to be enhanced over the SIO during the early onset.

### Daily composite evolution

To further examine the differences in the mean daily evolution prior to the early and late MOK, we investigate the composite daily means of OLR, omega (vertical velocity), winds (850 hPa, 200 hPa), horizontal moisture convergence flux, precipitation, SST, SLP, Surface Air Temperature (SAT, i.e. air temperature at 1000 hPa) and temperature advection at 850 hPa. For the daily evolution, we start the analysis 10 days prior to the onset dates and examine days 10, 8, 6, 4 and 2 (hereafter Day -10, Day -8, Day -6, Day -4 and Day -2, respectively) before the corresponding onset dates.
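The day-relative compositing just described can be sketched in code. Below is a minimal illustration (not the authors' implementation), assuming daily fields are accessible by calendar date as 2-D arrays; the dict-based field store and function names are assumptions for the sketch:

```python
import numpy as np
from datetime import date, timedelta

# Fixed leads before onset examined in the text: Day -10, -8, -6, -4, -2.
LEADS = (10, 8, 6, 4, 2)

def day_relative_composite(field_by_date, onset_dates):
    """For each lead, sample the field 'lead' days before each year's onset
    date and average across years. Returns {lead: composite 2-D array}."""
    comp = {}
    for lead in LEADS:
        samples = [field_by_date[d - timedelta(days=lead)] for d in onset_dates]
        comp[lead] = np.mean(samples, axis=0)
    return comp
```

Averaging in this day-relative frame across the 14 early (or 9 late) onset years is what aligns each year on its own onset date, rather than on the calendar.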
Figure 3 shows the composite OLR for the early onset, the late onset and their difference. At Day -10, active convection centers are located over the eastern BOB, the SEA landmass, the TP and southern China. Comparatively low OLR appears during the early onset years over the western BOB and eastern TP, while in the late onset more convective activity is confined over the eastern BOB (Fig. 3k). These active convective centers strengthen during both the early and late onset at Day -8 (Fig. 3b,g), but the composite difference shows that during the early onset years convection grows stronger over the eastern TP, southern China, and south-eastern to north-eastern India, except over the eastern BOB, where more convective activity is confined during the late onset (Fig. 3f). The convection is strongest at Day -8, and from Day -6 to -2 the convective activity gradually weakens and confines mainly to the eastern BOB and TP for both the early and late onset (Fig. 3c,h,m,d,i,n,e,j,o). In the daily evolution of OLR, the convective centers are more active in the early onset over the southern AS, along the path of the LLJ and over the TP. Moreover, it is worth mentioning that during the entire late-onset evolution period, the center of convection remains more active over the eastern BOB than in the early onset. The evolution of vertical velocity (omega at 925 hPa) is presented in Supplementary Fig. S2. Across the entire evolution window (Day -10 to -2), more continuous updrafts (negative omega) can be seen over western and south-east India, the BOB and the TP during the early onset (Supplementary Fig. S2a–e). A sharp contrast in updrafts is observed during the late onset period: except over some parts of eastern India and the eastern TP, downdrafts are prominent elsewhere (Supplementary Fig. S2f–j). During the early onset, deep convection over the BOB plays a key role in enhancing the updraft.
However, it is also noted that significant changes in omega (Day -10, -8, -2) are found over eastern India extending up to the eastern TP and southern China (Supplementary Fig. S2k–o), which explains the stronger updrafts there in the late onset. The wind at 850 hPa is shown in Fig. 4, which presents an intense cross-equatorial LLJ over the BOB during the early onset for the entire evolution period (Fig. 4a–j). The maximum wind speed is observed at Day -6 for both the early and late onset (Fig. 4c,h). Overall, the wind speed shows a gradual weakening from Day -10 to -2. The major difference between the two categories lies over the SEA and SCS (Fig. 4k–o), where, across the entire evolution, winds blowing from the BOB towards the SEA, SCS, eastern TP and southern China are stronger during the late onset. This strong wind may therefore strengthen the large-scale moisture convergence over the eastern TP and southern China during the late onset. The composite mean wind at 200 hPa is shown in Fig. 5. The STJ over the TP and the SAH centered over the SEA landmass are the two prominent features of the upper-air maps in Fig. 5. The strongest STJ is found at Day -8 for both the early and late onset (Fig. 5b,g); however, over the entire evolution period the STJ is consistently stronger during the early onset than during the late onset. During the late onset, a prominent ridge forms over the east of the TP and China, accompanied by a stronger SAH (upper-air divergence, Fig. 5f–j). The center of the SAH lies consistently over the north-eastern BOB and adjoining Myanmar during the early onset, while during the late onset the center moves towards southern China due to the stronger low-level convergence over the eastern TP and southern China.
Therefore, during the late onset years, the strengthening of this convergence zone over the eastern TP brings more moisture to converge over the TP and the adjacent region, as clearly shown in the moisture convergence plot (Supplementary Fig. S3). It is known that the intensity of the SAH depends on the distribution of heterogeneous diabatic heating over the TP and the ASM area (Wu and Liu22; Wu et al.23). During the entire evolution period, for both the early and late onset, a significant amount of moisture converges over the TP. However, in the early onset, more moisture converges over India, especially its eastern and north-eastern parts, where the difference is significant at Day -8, -6 and -4 (Supplementary Fig. S3l,m,n). The precipitation during the ISM onset is primarily confined over the eastern BOB, adjacent Myanmar and along the path of the cross-equatorial LLJ over the NIO (Supplementary Fig. S4). It is clear that precipitation over the southern BOB is considerably higher during the early onset (Supplementary Fig. S4k–o). During the late onset, the region of higher precipitation over the north-eastern BOB collocates with the region of higher convective activity, and the higher precipitation over the eastern TP to southern China is also collocated with the stronger convergence zone during the entire evolution period. The above composite analyses therefore reveal distinctive circulation features of the early and late onset years. The late onset is associated with more convection over the eastern BOB, significant updrafts over the eastern TP and southern China, and stronger low-level convergence and upper-air divergence (SAH) over the eastern TP and southern China. The synoptic features of the late onset also include stronger 850 hPa winds blowing from the BOB to the eastern TP and southern China, and higher moisture convergence and precipitation over the eastern TP.
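The moisture convergence fields discussed here follow the horizontal moisture flux convergence defined in the Data and Methods section (Eq. 2). A hedged numerical sketch on a regular latitude-longitude grid is shown below; the grid handling, finite differencing via `numpy.gradient` and variable names are assumptions, not the authors' code:

```python
import numpy as np

R_EARTH = 6.371e6  # mean Earth radius in metres

def moisture_flux_convergence(q, u, v, lat, lon):
    """MFC = -u dq/dx - v dq/dy - q (du/dx + dv/dy) on a lat-lon grid.
    q: specific humidity (kg/kg); u, v: zonal/meridional wind (m/s);
    all 2-D arrays dimensioned [lat, lon]; lat, lon in degrees."""
    lat_r, lon_r = np.deg2rad(lat), np.deg2rad(lon)
    coslat = np.cos(lat_r)[:, None]
    # Metric factors on the sphere: dx = R*cos(lat)*dlon, dy = R*dlat
    dq_dx = np.gradient(q, lon_r, axis=1) / (R_EARTH * coslat)
    dq_dy = np.gradient(q, lat_r, axis=0) / R_EARTH
    du_dx = np.gradient(u, lon_r, axis=1) / (R_EARTH * coslat)
    dv_dy = np.gradient(v, lat_r, axis=0) / R_EARTH
    advection = -u * dq_dx - v * dq_dy       # horizontal advection of q
    convergence = -q * (du_dx + dv_dy)       # q times horizontal mass convergence
    return advection + convergence
```

Positive values of the returned field mark regions where moisture piles up, such as the eastern TP and southern China in the late-onset composites.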
By contrast, the early onset features include deep convection over the southern BOB and southern AS, a stronger cross-equatorial LLJ over the tropical Indian Ocean, more updrafts over southern and eastern India, the BOB, the TP and SEA, more moisture convergence over India, the AS and the BOB, and higher precipitation over the BOB, which are usually regarded as the normal meteorological features of a normal MOK. These changes in the mean circulation during the late onset warrant further investigation, and we therefore analyze the daily evolution of SST, SLP and SAT. Figure 6 shows the composite mean daily evolution of SST for the early onset, the late onset and their difference. The NIO becomes warmer, while the SIO and the region adjacent to the MH remain consistently colder, during the entire evolution period. The SST in the SIO becomes significantly colder during the late onset in comparison with the early onset (Fig. 6f–j). In the NIO region, although the difference is not significant during the early onset years, the ocean remains warmer along the path of the LLJ, i.e. from the southern AS to the BOB. It is worth mentioning that the late onset is associated with significantly higher SST over the North Pacific Ocean (~30°N) during its entire evolution, which may trigger a stationary wave train reaching into central Asia and thus may generate a warm temperature anomaly over China and the adjacent region. During the late onset, the lower SST over the SIO is reflected in the composite SLP analysis (Fig. 7). The significant changes in SLP between the early and late onset years are associated with lower SLP over the eastern TP to southern China and higher SLP over the western MH during the late onset years (Fig. 7k–o). The largest difference between the early and late onset is found at Day -8 (Fig. 7b,g,l). In the late onset, the colder SIO is well collocated with the region of higher SLP.
Thus, the anomalous changes of SLP over the East Asian landmass play a major role in the onset processes. To investigate these anomalous SLP changes, we analyze the SAT in Fig. 8, which shows a significant change across the evolution period over the eastern TP to central and southern China (Fig. 8k–o). Although the Indian landmass, AS and BOB experience warmer SAT during the early onset, the difference remains insignificant. By contrast, the SAT is significantly higher over the eastern TP to the central and southern Chinese landmass during the late onset (Fig. 8f–j). In the difference plot, two centers of high temperature are prominent: one over the eastern TP, and another that moves from northern China to south-eastern China during the evolution period. The SAT signal advects gradually from northern China (at Day -10) towards the eastern and southern TP (as seen at Day -8, -6 and -4), and at Day -2 it advects further eastward and towards the north-eastern part of India (Fig. 8o; see also the temperature advection plot, Supplementary Fig. S5). From the above analysis, it can clearly be seen that the advection of SAT from the northern Chinese landmass to the southern and eastern TP plays the key driving role in controlling and modulating the MOK process and its variability.

### Analysis of an extreme early and late onset

To confirm the robustness of our study, we also perform a similar analysis for the earliest (12th May 1999) and latest (11th June 2012) MOK years in the list given in Table 1. The May monthly mean composites of OLR and total cloud cover for the earliest onset, the latest onset and their differences are shown in Supplementary Fig. S6. During the earliest onset, the convection bands were located over the AS, the Western Ghats region, the eastern BOB, the southern TP and Myanmar, while the convective activity was mostly confined over the south-eastern BOB during the latest onset (Supplementary Fig. S6b).
It is clear that much stronger convection occurred over the AS along the Western Ghats region during the earliest onset as compared with the latest onset (Supplementary Fig. S6c). Similarly, the southern AS region was cloudier in the earliest onset (Supplementary Fig. S6f). The cross-equatorial LLJ over the AS and BOB was stronger in the earliest onset year, while during the latest onset year the 850 hPa wind strengthened over the SEA, SCS and eastern TP (Supplementary Fig. S7a–c). On the other hand, the SAH was centered over Myanmar, adjacent to the north-eastern BOB, during the earliest onset year, while in the latest onset year the center of the SAH moved towards south-eastern China and the STJ was relatively stronger over the eastern Chinese landmass, forming a ridge (Supplementary Fig. S7d–f). We also investigate the daily evolution of OLR, winds at 850 hPa and 200 hPa, SST, SLP and SAT prior to the earliest and latest onset years. In OLR, the convection was mostly confined over the southern AS and the head BOB (along the coast of the Western Ghats) from Day -6 onward during the earliest onset, while the band of convection was stationed over the eastern BOB, the eastern equatorial Indian Ocean and the SCS during the entire period of the latest onset event (Supplementary Fig. S8). The core of the LLJ (wind at 850 hPa) is seen mostly over the AS to the southern BOB, moving towards the SCS, during the latest onset evolution, whereas the LLJ strengthened towards the head BOB during the earliest onset event (Supplementary Fig. S9). It is clear that the LLJ was stronger over the SCS and AS during the latest onset year (Supplementary Fig. S9k–o). In the wind at 200 hPa, the STJ was more intense during the earliest onset, and eastern China was dominated by the strongest STJ (Supplementary Fig. S10). The strength of the STJ weakened gradually from Day -10 to -2 prior to onset in both cases.
The SAH was initially centered over the landmass adjacent to the BOB at Day -10, then slowly moved eastward and stationed over southern China during the latest onset case. On the other hand, during the earliest onset, the SAH, initially centered over the SEA, moved slightly westward and helped the convection grow over the BOB. The pattern of SST evolution is similar to that in the earlier composite SST analysis (Supplementary Fig. S11), with warmer SST prevailing in the North Pacific and the SIO during the latest and earliest onset years, respectively. The pattern of SLP evolution was also approximately similar to the earlier composite analyses: the Indian landmass was dominated by lower SLP, and the MH at 30°S strengthened during the entire period of the earliest onset case (Supplementary Fig. S12). By contrast, during the entire period of the latest onset, the band of low SLP over India extended up to north-eastern China and the north-western Pacific Ocean, and the MH moved slightly westward (Supplementary Fig. S12f–j). Finally, higher SAT is seen over China and India (only at Day -10 and -6) during the latest onset year, while during the earliest onset year the Indian Ocean was warmer at Day -8, -4 and -2 (Supplementary Fig. S13). Overall, features similar to those described above are observed in the extreme-event analyses.

### Daily evolution of MOKs without ENSO years

As already discussed in the introduction, delays in MOK are strongly associated with El Niño2. The difference in SST during ENSO induces variability in monsoon onset by affecting the movement of the ITCZ3. Therefore, to examine the pattern of onset evolution during non-ENSO years, we analyze a new set of MOK years, excluding those associated with El Niño and La Niña.
Out of the 14 early MOKs, 5 and 9 onsets are associated with La Niña and El Niño (over the Niño-3.4 region) years, respectively (Table 1, Supplementary Fig. S14). It is also noted that, since 1979, all the La Niña years are associated with early MOKs. Here, we only discuss the wind at 850 hPa and 200 hPa, SLP and SAT. The wind at 850 hPa is shown in Supplementary Fig. S15, which again reflects an intense cross-equatorial LLJ over the BOB and equatorial Indian Ocean during the early onset evolution period, while during the late onset evolution the LLJ moves towards the SEA, SCS, eastern TP and southern China. As observed before, the wind at 200 hPa also shows that the STJ is consistently stronger during the early onset as compared with the late onset (Supplementary Fig. S16). During the late onset, a prominent ridge forms over the east of the TP and China, accompanied by a stronger SAH centered over southern China. Lower SLP over the eastern TP to eastern China and higher SLP over the western MH are associated with the late onset years, which in turn play a major role in the onset processes (Supplementary Fig. S17). Finally, the evolution of SAT presents a significant change over the eastern TP to central and southern China during the late onset years (Supplementary Fig. S18). The SAT is significantly higher over the eastern TP to the central and southern Chinese landmass during the late onset years (Supplementary Fig. S18f–j). Therefore, as already observed above, the SAT advects gradually from northern China (at Day -10) towards the eastern TP and China during the late onset years. These similar circulation patterns during the non-ENSO years thus further support the key role of the SAT over East Asia in modulating the onset.

### Potential factors responsible for late MOKs

The late onset of the monsoon is one of the most undesired events in the context of the ISM system.
Several authors have tried to explain the possible reasons behind the early and late onsets (mentioned in the introduction). It is well known that, due to the strong land-sea thermal contrast during the pre-monsoon season, intense low and high SLP build up over the Indian subcontinent and around Madagascar (in the form of the MH), respectively. These intense thermal and pressure gradients between the SIO and the Indian subcontinent strengthen the monsoonal jet (cross-equatorial LLJ) with an abundant moisture influx from the warm Indian Ocean to the Indian subcontinent, which eventually triggers deep convection and heavy rainfall over ocean and land. Here, based on the composite analyses, it is observed that the late onset is clearly linked with surface warming over northern and southern China and the eastern TP, which may be attributable to the warmer North Pacific. During the late onset years, the extension of low SLP over East Asia (the eastern TP, south-east-central China, the SCS and the Sea of Japan), due to anomalously high SAT over the eastern TP, southern China and the adjacent region, strengthens the south-westerly LLJ from the BOB towards East Asia. The SAH is probably modulated by this anomalous warming: it moves north-eastward and intensifies over southern China, whereas during the BOB monsoon onset the center of the SAH normally moves north-westward. The surface warming also strengthens the wind convergence over the eastern TP and southern China and therefore increases the moisture influx from the BOB. The lack of sufficient moisture over the BOB and peninsular India, weaker updrafts over the ISM region and a weaker LLJ over the NIO are the potential key factors behind late MOKs. The background atmospheric conditions for the early and late MOK and the role of East Asia are shown in the schematic diagram (Supplementary Fig. S19).
## Summary and Conclusion

In this study, we investigate the physical mechanism and probable meteorological drivers behind the late MOK process. To carry out the investigation, we select 14 early and 9 late onset years, defining years with onset 5 days or more before or after the normal onset date as 'early' and 'late' onset years, respectively. The 1st of June is chosen as the normal MOK date, as determined by the IMD, and the early and late onset dates are tabulated in Table 1. We then perform May composite analyses for the early and late onset years. Next, to examine the differences in daily evolution between early and late MOK, composite mean daily evolutions (from 10 to 2 days prior to the onset) of several meteorological variables are investigated. To test the robustness of the study, we perform a comparative analysis of one extreme early and one extreme late MOK (12th May 1999 and 11th June 2012, chosen as the earliest and latest MOKs respectively). Also, to examine the pattern of onset evolution during non-ENSO years, we analyze a new set of MOK years, excluding those associated with El Niño and La Niña. The results can be summarized as follows: 1. The composite mean May OLR suggests that during MOK the convection (low OLR values) grows over the eastern BOB and southern TP, but during the early onset the convection over the AS and eastern BOB is stronger than during the late onset. In the daily evolution, the convective centers grow more in the early onset over the southern AS, along the path of the LLJ and over the TP, while during the late onset's entire evolution period the center of convection consistently remains more active over the eastern BOB as compared with the early onset evolution. 2.
Throughout the daily evolution period, a consistent updraft can be seen over the Indian subcontinent, except for a band of downdrafts over northern India, during the early onset, whereas during the late onset the updrafts shift towards the east of India, the eastern TP and southern China. 3. In the May composite, the cross-equatorial LLJ over the AS and BOB is stronger in the early onset, while in the late onset stronger 850 hPa winds are observed over the SEA, SCS and eastern TP. Also, the STJ over the eastern TP is more intense in the early onset, but the upper-air anticyclonic system associated with the STJ is stronger, with a ridge forming over eastern China, during the late onset. In the daily evolution, the LLJ flows significantly from the BOB towards the SEA, SCS, southern China and eastern TP, thus strengthening the convergence zone over southern China and the eastern TP during the late onset. The strongest STJ is found at Day -8 for both the early and late onset. The center of the SAH lies consistently over the north-eastern BOB and adjoining Myanmar during the early onset, while in the late onset the SAH center moves towards the southern Chinese landmass due to the strong low-level convergence there. 4. The strengthening of this convergence zone over the eastern TP helps attract more moisture from the BOB during the late onset. During the whole onset evolution, a significant amount of moisture converges over the TP for both the early and late onset due to the presence of this stronger convergence zone. However, the early onset consistently shows more moisture convergence over India, especially over eastern and north-eastern India. In the daily evolution of precipitation, the precipitation over the southern BOB is considerably higher during the early onset, except at Day -4.
The region of higher precipitation over the north-eastern BOB collocates with the higher convective activity, and the consistently higher precipitation over the eastern TP to southern China (significant at Day -4) is also collocated with the strong convergence zone during the entire evolution of the late onset. 5. Based on the May composite analysis, the late MOK is associated with warmer SST in the NIO, especially in the BOB and AS, and colder SST in the extreme SIO near the MH (~30°–35°S). Similar SST features are also seen during the daily evolution, i.e. warm SST in the NIO and cold SST in the extreme SIO near the MH, while the SST at the MH remains significantly lower during the late onset years for the entire evolution period. Also, the late onset is associated with significantly warmer SST over the North Pacific Ocean (~30°N), which may trigger a stationary wave train reaching into central Asia and thus generate a warm anomaly over China and the adjacent region. 6. The May composite SLP shows comparatively lower values during the early onset over the AS and west-central-eastern India and higher values over the eastern TP and north-west China, and the strength of the MH is enhanced (insignificantly) over the central extreme SIO during the late onset years. In the SLP daily evolution, the significant changes between the early and late onset are associated with lower SLP over the eastern TP to the central and southern Chinese landmass, and higher SLP centered west of 30°S, over the western MH region, during the late onset. The largest significant difference is found at Day -8, with intense low SLP over the eastern TP to southern China and higher SLP over the western MH during the late onset. 7. To investigate the anomalous changes of SLP, we also address the SAT. The SAT increases significantly in the late onset years, stretching from northern China to the eastern TP.
Two centers of higher temperature can be seen in the difference plot: one near the eastern TP and another moving from central China to south-eastern China during the evolution. The SAT signal advects gradually from northern China (at Day -10) towards the eastern and southern TP (as seen at Day -8, -6 and -4), and at Day -2 it advects further towards eastern and north-eastern India. The SAH is probably modulated by this anomalous warming and thus moves north-eastward rather than in the normal north-westward direction. From the above analysis, it is clear that SAT advection from the northern Chinese landmass to the south and east of the TP plays a driving role in modulating the MOK processes. The extension of low SLP over East Asia (the eastern TP, south-east-central China, the SCS and the Sea of Japan), due to anomalously high SAT over the eastern TP and the adjacent region, draws the south-westerly LLJ from the BOB towards East Asia during the late onset and hence strengthens the moisture convergence over the eastern TP. Therefore, the lack of moisture over the BOB and India's eastern landmass, together with a comparatively weaker LLJ over the eastern BOB, are the probable causes behind the late ISM onset. In a nutshell, the anomalously high SAT over the eastern TP extending up to central and eastern China produces anomalously low SLP there, resulting in a diversion of moisture transport from the BOB towards East Asia, which delays the MOK. Thereby, the southward SAT advection from northern China towards the TP, the north-eastward movement of the SAH and the warming over the North Pacific Ocean are some of the key factors in the mechanism and variability of the MOK, which should be taken into consideration in ISM research and in understanding its onset variability. It is well known that the ISM is associated with a meridional land-sea thermal contrast reinforced by the thermal effects of the elevated TP.
Zhang (2001) already mentioned that strong (weak) water vapor transport from the Indian monsoon region is accompanied by less (more) water vapor transport over East Asia, leading to less (more) rainfall over the middle and lower reaches of the Yangtze River valley24.

## Data and Methods

Several monthly and daily data sets are employed in this study. Daily interpolated SST data from the National Oceanic and Atmospheric Administration (NOAA)25 are used. We also use the daily mean interpolated OLR data from NOAA26; daily and monthly means of SLP, wind fields, air temperature, and specific humidity, and the daily mean omega, from the National Centers for Environmental Prediction (NCEP)-National Center for Atmospheric Research (NCAR) reanalysis data set27; the monthly mean Hadley Centre SST data set28; and the European Centre for Medium-Range Weather Forecasts Interim Reanalysis (ERA-Interim) monthly total cloud cover data29. The MOK dates from 1979 to 2013 are taken from the list of onset dates based on the Western Circulation Index (WCI) by Ordonez et al.30. The onset dates for the years 2009, 2014, 2015, 2016 and 2017, which are not included in Ordonez et al.30, are taken from the MOK dates given by the IMD. The normal date of the ISM onset, or MOK, is taken as 1st June based on the IMD. The early onset years (hereafter 'early onset') are the years when onset occurs at least 5 days before 1st June, and the late onset years (hereafter 'late onset') are the years when onset is delayed by at least 5 days10. Accordingly, during 1979 to 2017, 14 early onset and 9 late onset years are selected, as given in Table 1. To find the difference and evolution patterns prior to the MOK onset date for early and late onset years, composite analyses of the key meteorological variables for the month of May and for the 10-day daily evolution prior to onset are employed.
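The composite-difference approach described here can be sketched in a few lines. The following is a minimal, hedged illustration with synthetic data: the function and variable names are our own, the year lists are placeholders (the actual 14 early and 9 late onset years are listed in Table 1), and significance would be judged by comparing |t| with the two-tailed critical value for the pooled degrees of freedom.

```python
import numpy as np

def composite_difference(field, years, early_years, late_years):
    """Early-minus-late composite difference and pointwise Student's t statistic.

    field : array of shape (n_years, ny, nx) holding one May-mean variable
            (e.g., SLP); `years` is the matching list of calendar years.
    """
    years = np.asarray(years)
    early = field[np.isin(years, early_years)]
    late = field[np.isin(years, late_years)]
    n1, n2 = len(early), len(late)
    diff = early.mean(axis=0) - late.mean(axis=0)
    # Pooled-variance two-sample t statistic (equal-variance form);
    # compare |t| with the two-tailed critical value for n1+n2-2 dof.
    sp2 = ((n1 - 1) * early.var(axis=0, ddof=1) +
           (n2 - 1) * late.var(axis=0, ddof=1)) / (n1 + n2 - 2)
    t = diff / np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    return diff, t

# Synthetic demo: 39 "years" (1979-2017) of a variable on a tiny 3 x 4 grid.
rng = np.random.default_rng(0)
years = np.arange(1979, 2018)
field = rng.normal(1010.0, 2.0, size=(39, 3, 4))
diff, t = composite_difference(field, years,
                               early_years=[1984, 1990, 1999, 2004, 2009],
                               late_years=[1983, 1995, 1997, 2003])
print(diff.shape, t.shape)
```

The same routine, applied lag by lag, gives the Day -10 to Day 0 evolution composites.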
The significance of all the differences is tested with a two-tailed t-test. We calculate the moisture convergence and the temperature advection, and the moisture budget is estimated by applying the mass continuity equation and the conservation of water vapor in pressure (p) coordinates:

$$\frac{\partial q}{\partial t}+\nabla \cdot (q{V}_{h})+\frac{\partial }{\partial p}(q\omega )=E-P$$ (1)

where Vh = (u, v); u and v represent the zonal and meridional components of the horizontal wind. Equation (1) represents the moisture budget for an air parcel, where the terms represent the local rate of change of q, the horizontal moisture flux divergence, the vertical moisture flux divergence, and the evaporation and precipitation rates (source and sink terms), respectively. The horizontal moisture flux convergence (MFC), which is simply the negative of the horizontal moisture flux divergence, can be written as:

$$MFC=-u\frac{\partial q}{\partial x}-v\frac{\partial q}{\partial y}-q\left(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}\right)$$ (2)

In Eq. (2), the first two terms are the advection terms, which represent the horizontal advection of specific humidity, and the last term is the convergence term, the product of specific humidity and horizontal mass convergence. Temperature advection is also employed in this study; on a constant pressure surface it can be written as:

$$\frac{\partial T}{\partial t}=-\left(u\frac{\partial T}{\partial x}+v\frac{\partial T}{\partial y}\right)$$ (3)

where T is the air temperature. All the datasets span 1979 to 2017 except SST, precipitation and moisture convergence: the daily mean SST data cover 1981 to 2016, whereas the precipitation and moisture convergence data cover 1979 to 2015 due to the non-availability of data.

## References

1. Jain, S. K. & Kumar, V. Trend analysis of rainfall and temperature data for India. Curr. Sci. 102, 37–49 (2012).
2. Joseph, P. V., Eischeid, J. K. & Pyle, R. J.
Interannual variability of the onset of the Indian summer monsoon and its association with atmospheric features, El Niño and sea surface temperature anomalies. J. Clim. 7, 81–105 (1994).
3. Krishnamurti, T. N. Summer monsoon experiment: a review. Mon. Wea. Rev. 113, 1590–1626 (1985).
4. Yanai, M., Li, C. & Song, Z. Seasonal heating of the Tibetan Plateau and its effects on the evolution of the Asian summer monsoon. J. Meteorol. Soc. Japan 70, 189–221, https://doi.org/10.2151/jmsj1965.70.1B_319 (1992).
5. Li, C. & Yanai, M. The onset and interannual variability of the Asian summer monsoon in relation to land-sea thermal contrast. J. Clim. 9, 358–375, https://doi.org/10.1175/1520-0442(1996)009<0358:TOAIVO>2.0.CO;2 (1996).
6. Joseph, P. V. Monsoon variability in relation to equatorial trough activity over India and West Pacific Oceans. Mausam 41, 291–296 (1990a).
7. Joseph, P. V. Warm pool over the Indian Ocean and monsoon onset. Trop. Ocean-Atmos. Newslett. 53, 1–5 (1990b).
8. Joseph, P. V. Role of ocean in the variability of Indian summer monsoon rainfall. Surv. Geophys. 35, 723–738 (2014).
9. Pearce, R. P. & Mohanty, U. C. Onsets of the Asian summer monsoon 1979–1982. J. Atmos. Sci. 41, 1620–1639 (1984).
10. Joseph, P. V., Sooraj, K. P. & Rajan, C. K. The summer monsoon onset process over south Asia and an objective method for the date of monsoon onset over Kerala. Int. J. Climatol. 26, 1871–1893 (2006).
11. Sabeerali, C. T., Rao, S. A., Ajayamohan, R. S. & Murtugudde, R. On the relationship between Indian summer monsoon withdrawal and Indo-Pacific SST anomalies before and after 1976/1977 climate shift. Clim. Dyn. 39, 841–859, https://doi.org/10.1007/s00382-011-1269-9 (2012).
12. Pradhan, M. et al. Prediction of Indian summer monsoon onset variability: a season in advance. Sci. Rep., https://doi.org/10.1038/s41598-017-12594-y (2017).
13. Raju, P. V. S., Mohanty, U. C. & Bhatla, R.
Inter-annual variability of onset of the summer monsoon over India and its prediction. Nat. Hazards 42, 287–300, https://doi.org/10.1007/s11069-006-9089-7 (2007).
14. Preenu, P. N., Joseph, P. V. & Dineshkumar, P. K. Variability of the date of monsoon onset over Kerala (India) of the period 1870–2014 and its relation to sea surface temperature. J. Earth Syst. Sci. 126, 76, https://doi.org/10.1007/s12040-017-0852-9 (2017).
15. Zhou, L. & Murtugudde, R. Impact of northward-propagating intraseasonal variability on the onset of Indian summer monsoon. J. Clim. 27, 126–139 (2014).
16. Jiang, X., Li, T. & Wang, B. Structures and mechanisms of the northward propagating boreal summer intraseasonal oscillation. J. Clim. 17, 1022–1039 (2004).
17. Sahana, A. S., Ghosh, S., Ganguly, A. & Murtugudde, R. Shift in Indian summer monsoon onset during 1976/1977. Environ. Res. Lett. 10, 054006, https://doi.org/10.1088/1748-9326/10/5/054006 (2015).
18. Mao, J. & Wu, G. Interannual variability in the onset of the summer monsoon over the Eastern Bay of Bengal. Theor. Appl. Climatol. 89, 155–170 (2007).
19. Dash, S. K., Singh, G. P., Shekhar, M. S. & Vernekar, A. D. Response of the Indian summer monsoon circulation and rainfall to seasonal snow depth anomaly over Eurasia. Clim. Dyn. 24, 1–10 (2005).
20. Dash, S. K., Sarthi, P. P. & Panda, S. K. A study on the effect of Eurasian snow on the summer monsoon circulation and rainfall using a spectral GCM. Int. J. Climatol. 26, 1017–1025 (2006).
21. Panda, S. K. et al. Investigation of the snow-monsoon relationship in a warming atmosphere using Hadley Centre climate model. Glob. Planet. Change 147, 125–136 (2016).
22. Wu, G. X. & Liu, Y. Summertime quadruplet heating pattern in the subtropics and associated atmospheric circulation. Geophys. Res. Lett. 30, 1201, https://doi.org/10.1029/2002GL016209 (2003).
23. Wu, G. X. et al.
The influence of the mechanical and thermal forcing of the Tibetan Plateau on the Asian climate. J. Hydrometeor. 8, 770–789 (2007).
24. Zhang, R. H. Relations of water vapor transport from Indian monsoon with that over East Asia and the summer rainfall in China. Adv. Atmos. Sci. 18, 1005–1017 (2001).
25. Smith, T. M. et al. Improvements to NOAA's historical merged land-ocean surface temperature analysis (1880–2006). J. Clim. 21, 2283–2293 (2008).
26. Liebmann, B. & Smith, C. A. Description of a complete (interpolated) outgoing longwave radiation dataset. Bull. Am. Meteor. Soc. 77, 1275–1277 (1996).
27. Kalnay, E. et al. The NCEP/NCAR 40-year reanalysis project. Bull. Am. Meteorol. Soc. 77, 437–471 (1996).
28. Rayner, N. A. et al. Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century. J. Geophys. Res. 108(D14), 4407, https://doi.org/10.1029/2002JD002670 (2003).
29. Simmons, A., Uppala, S., Dee, D. & Kobayashi, S. ERA-Interim: new ECMWF reanalysis products from 1989 onwards. ECMWF Newsletter 110, 25–35 (2007).
30. Ordonez, P., Gallego, D., Ribera, P., Pena-Ortiz, C. & Garcia-Herrera, R. Tracking the Indian summer monsoon onset back to the preinstrument period. J. Clim. 29, 8115–8127 (2016).
31. The NCAR Command Language (Version 6.5.0) [Software]. Boulder, Colorado: UCAR/NCAR/CISL/TDD, https://doi.org/10.5065/D6WD3XH5 (2019).

## Acknowledgements

The authors highly acknowledge the scientific discussions and exchanges with scientists at the Centre for Monsoon System Research, Institute of Atmospheric Physics. The authors also thank the anonymous reviewers for their valuable suggestions. We thank NCAR for making the NCAR Command Language available31.
This work is supported jointly by the Ministry of Science and Technology of China (2016YFA0600604), the Key Research Program of Frontier Sciences, CAS (QYZDY-SSW-DQC024), and the National Natural Science Foundation of China (Grant Nos. 41750110484 and 41675061).

## Author information

D.C. and D.N. designed this study, and C.W. gave valuable suggestions during this research. All authors contributed ideas in developing this research, discussed the results and wrote this paper. Correspondence to Debashis Nath.

## Ethics declarations

### Competing Interests

The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
# Electronvolt

(Redirected from Electron volt)

"meV", "keV", "MeV", "GeV", "TeV" and "PeV" redirect here. For other uses, see MEV, KEV, GEV, TEV and PEV.

In physics, the electronvolt[1][2] (symbol eV; also written electron volt) is a unit of energy equal to approximately 160 zeptojoules (symbol zJ) or 1.6×10⁻¹⁹ joules (symbol J). By definition, it is the amount of energy gained (or lost) by the charge of a single electron moved across an electric potential difference of one volt. Thus it is 1 volt (1 joule per coulomb, 1 J/C) multiplied by the elementary charge (e, or 1.602176565(35)×10⁻¹⁹ C). Therefore, one electronvolt is equal to 1.602176565(35)×10⁻¹⁹ J.[3] Historically, the electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator science, because a particle with charge q has an energy E = qV after passing through the potential V; if q is quoted in integer units of the elementary charge and the terminal bias in volts, one gets an energy in eV.

[Figure: photon frequency vs. energy per particle in electronvolts. The energy of a photon varies only with its frequency, related by the speed-of-light constant. This contrasts with a massive particle, whose energy depends on its velocity and rest mass.[4][5][6]]
The electronvolt is not an SI unit, and its definition is empirical (unlike the litre, the light-year and other such non-SI units), thus its value in SI units must be obtained experimentally.[7] Like the elementary charge on which it is based, it is not an independent quantity but is equal to 1 J/C · √(2hα/μ₀c₀). It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics. It is commonly used with the metric prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa- (meV, keV, MeV, GeV, TeV, PeV and EeV respectively). Thus meV stands for milli-electronvolt. In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion electronvolts; it is equivalent to the GeV.

Measurement | Unit | SI value of unit
--- | --- | ---
Energy | eV | 1.602176565(35)×10⁻¹⁹ J
Mass | eV/c² | 1.782662×10⁻³⁶ kg
Momentum | eV/c | 5.344286×10⁻²⁸ kg⋅m/s
Temperature | eV/k_B | 11604.505(20) K
Time | ħ/eV | 6.582119×10⁻¹⁶ s
Distance | ħc/eV | 1.97327×10⁻⁷ m

## Mass

By mass–energy equivalence, the electronvolt is also a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c², where c is the speed of light in vacuum (from E = mc²). It is common to simply express mass in terms of "eV" as a unit of mass, effectively using a system of natural units with c set to 1.[8] The mass equivalent of 1 eV/c² is

$1\; \text{eV}/c^{2} = \frac{(1.60217646 \times 10^{-19} \; \text{C}) \cdot 1 \; \text{V}}{(2.99792458 \times 10^{8}\; \text{m}/\text{s})^2} = 1.783 \times 10^{-36}\; \text{kg}.$

For example, an electron and a positron, each with a mass of 0.511 MeV/c², can annihilate to yield 1.022 MeV of energy. The proton has a mass of 0.938 GeV/c². In general, the masses of all hadrons are of the order of 1 GeV/c², which makes the GeV (gigaelectronvolt) a convenient unit of mass for particle physics: 1 GeV/c² = 1.783×10⁻²⁷ kg.
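The conversions in the table above can be sketched in a few lines. This is a minimal illustration using the constant values quoted in this article (the 2010-era CODATA figures); the function names are our own.

```python
# eV <-> SI conversions built from the constants quoted in the text.
E_CHARGE = 1.602176565e-19  # elementary charge, C (value used in this article)
C_LIGHT = 2.99792458e8      # speed of light in vacuum, m/s
K_B = 1.3806505e-23         # Boltzmann constant, J/K (value used in this article)

def ev_to_joule(e_ev: float) -> float:
    """Energy in joules for an energy given in eV."""
    return e_ev * E_CHARGE

def ev_to_kg(e_ev: float) -> float:
    """Mass equivalent of an energy in eV, i.e. eV/c^2 expressed in kg."""
    return e_ev * E_CHARGE / C_LIGHT**2

def ev_to_momentum(e_ev: float) -> float:
    """Momentum equivalent, i.e. eV/c expressed in kg*m/s."""
    return e_ev * E_CHARGE / C_LIGHT

def ev_to_kelvin(e_ev: float) -> float:
    """Temperature equivalent, i.e. eV/k_B expressed in K."""
    return e_ev * E_CHARGE / K_B

print(ev_to_kg(1.0))        # ~1.783e-36 kg, matching the table
print(ev_to_kg(1e9))        # 1 GeV/c^2 -> ~1.783e-27 kg
print(ev_to_momentum(1e9))  # 1 GeV/c -> ~5.344e-19 kg*m/s
```

The same helpers reproduce the temperature row of the table: `ev_to_kelvin(1.0)` gives about 11604.5 K per eV.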
The atomic mass unit, 1 gram divided by the Avogadro number, is almost the mass of a hydrogen atom, which is mostly the mass of the proton. To convert to megaelectronvolts, use the formula:[3]

1 amu = 931.4941 MeV/c² = 0.9314941 GeV/c².

## Momentum

In high-energy physics, the electronvolt is often used as a unit of momentum. A potential difference of 1 volt causes an electron to gain an amount of energy of 1 eV. This gives rise to the use of eV (and keV, MeV, GeV or TeV) as units of momentum, for the energy supplied results in acceleration of the particle. The dimensions of momentum units are LMT⁻¹; the dimensions of energy units are L²MT⁻². Dividing units of energy (such as eV) by a fundamental constant that has units of velocity (LT⁻¹) therefore converts energy units into momentum units. In the field of high-energy particle physics, the fundamental velocity unit is the speed of light in vacuum c. Thus, dividing energy in eV by the speed of light, one can describe the momentum of an electron in units of eV/c.[9][10] The fundamental velocity constant c is often dropped from the units of momentum by defining units of length such that the value of c is unity. For example, if the momentum p of an electron is said to be 1 GeV, then the conversion to MKS units can be achieved by:

$p = 1\; \text{GeV}/c = \frac{(1 \times 10^{9}) \cdot (1.60217646 \times 10^{-19} \; \text{C}) \cdot (1\; \text{V})}{2.99792458 \times 10^{8}\; \text{m}/\text{s}} = 5.344286 \times 10^{-19}\; \text{kg} \cdot \text{m}/\text{s}.$

## Distance

In particle physics, a system of "natural units" in which the speed of light in vacuum c and the reduced Planck constant ħ are dimensionless and equal to unity is widely used: c = ħ = 1. In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units; see mass–energy equivalence).
In particular, particle scattering lengths are often presented in units of inverse particle masses. Outside this system of units, the conversion factors between electronvolt, second, and nanometre are the following:[3]

$\hbar = {{h}\over{2\pi}} = 1.054\ 571\ 726(47)\times 10^{-34}\ \mbox{J s} = 6.582\ 119\ 28(15)\times 10^{-16}\ \mbox{eV s}.$

The above relations also allow expressing the mean lifetime τ of an unstable particle (in seconds) in terms of its decay width Γ (in eV) via Γ = ħ/τ. For example, the B⁰ meson has a lifetime of 1.530(9) picoseconds, a mean decay length of cτ = 459.7 µm, and a decay width of (4.302±0.025)×10⁻⁴ eV. Conversely, the tiny meson mass differences responsible for meson oscillations are often expressed in the more convenient inverse picoseconds.

## Temperature

In certain fields, such as plasma physics, it is convenient to use the electronvolt as a unit of temperature. The conversion to the kelvin scale is defined by using k_B, the Boltzmann constant:

${1 \over k_{\text{B}}} = {1.602\,176\,53(14) \times 10^{-19} \text{ J/eV} \over 1.380\,6505(24) \times 10^{-23} \text{ J/K}} = 11\,604.505(20) \text{ K/eV}.$

For example, a typical magnetic confinement fusion plasma is 15 keV, or 170 megakelvin. As an approximation: k_BT is about 0.025 eV (≈ 290 K / 11604 K/eV) at a temperature of 20 °C.

## Properties

[Figure: energy of photons in the visible spectrum; graph of wavelength (nm) versus energy (eV).]

The energy E, frequency ν, and wavelength λ of a photon are related by

$E=h\nu=\frac{hc}{\lambda} =\frac{(4.135\,667\,516\times 10^{-15}\,\mbox{eV}\,\mbox{s})(299\,792\,458\,\mbox{m/s})}{\lambda}$

where h is the Planck constant and c is the speed of light. This reduces to

$E\mbox{(eV)}=4.135\,667\,516\,\mbox{feV}\,\mbox{s}\cdot\nu\ \mbox{(PHz)} =\frac{1\,239.841\,93\,\mbox{eV}\,\mbox{nm}}{\lambda\ \mbox{(nm)}}.$[11]

A photon with a wavelength of 532 nm (green light) would have an energy of approximately 2.33 eV.
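The photon relation above reduces to a single constant, hc ≈ 1239.84193 eV·nm, which makes the wavelength–energy conversion a one-liner in either direction (a minimal sketch; the function names are our own):

```python
# Photon energy relation E(eV) = 1239.84193 eV*nm / lambda(nm), from hc above.
HC_EV_NM = 1239.84193  # h*c in eV*nm, as quoted in the text

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a wavelength given in nanometres."""
    return HC_EV_NM / wavelength_nm

def photon_wavelength_nm(energy_ev: float) -> float:
    """Inverse relation: wavelength in nm for a photon energy in eV."""
    return HC_EV_NM / energy_ev

print(round(photon_energy_ev(532.0), 2))  # green light -> ~2.33 eV
print(round(photon_wavelength_nm(1.0)))   # 1 eV -> ~1240 nm (infrared)
```

Because the relation is a pure reciprocal, the two helpers are exact inverses of each other.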
Similarly, 1 eV would correspond to an infrared photon of wavelength 1240 nm or frequency 241.8 THz.

## Scattering experiments

In a low-energy nuclear scattering experiment, it is conventional to refer to the nuclear recoil energy in units of eVr, keVr, etc. This distinguishes the nuclear recoil energy from the "electron equivalent" recoil energy (eVee, keVee, etc.) measured by scintillation light. For example, the yield of a phototube is measured in phe/keVee (photoelectrons per keV electron-equivalent energy). The relationship between eV, eVr, and eVee depends on the medium the scattering takes place in, and must be established empirically for each material.

## Energy comparisons

• 5.25×10³² eV: total energy released from a 20 kt nuclear fission device
• 1.22×10²⁸ eV: the energy at the Planck scale
• 1×10²⁵ eV: the approximate grand unification energy
• ~624 EeV (6.24×10²⁰ eV): energy consumed by a single 100-watt light bulb in one second (100 W = 100 J/s ≈ 6.24×10²⁰ eV/s)
• 300 EeV (3×10²⁰ eV = ~50 J):[12] the so-called Oh-My-God particle (the most energetic cosmic-ray particle ever observed)
• 2 PeV: two petaelectronvolts, the most energetic neutrino detected by the IceCube neutrino telescope in Antarctica[13]
• 14 TeV: the designed proton collision energy at the Large Hadron Collider (operated at about half of this energy since 30 March 2010; reached 13 TeV in May 2015)
• 1 TeV: a trillion electronvolts, or 1.602×10⁻⁷ J, about the kinetic energy of a flying mosquito[14]
• 125.1±0.2 GeV: the energy corresponding to the mass of the Higgs boson, as measured by two separate detectors at the LHC to a certainty better than 5 sigma[15]
• 210 MeV: the average energy released in fission of one Pu-239 atom
• 200 MeV: the average energy released in nuclear fission of one U-235 atom
• 17.6 MeV: the average energy released in the fusion of deuterium and tritium to form He-4; this is 0.41 PJ per kilogram of product
• 1 MeV (1.602×10⁻¹³ J): about twice the rest
energy of an electron
• 13.6 eV: the energy required to ionize atomic hydrogen; molecular bond energies are on the order of 1 eV to 10 eV per bond
• 1.6 eV to 3.4 eV: the photon energy of visible light
• 25 meV: the thermal energy k_BT at room temperature; one air molecule has an average kinetic energy of 38 meV
• 230 µeV: the thermal energy k_BT of the cosmic microwave background

## Notes and references

1. ^ IUPAC Gold Book, p. 75
2. ^ SI brochure, Sec. 4.1, Table 7
3. ^ a b c
4. ^ What is Light? UC Davis lecture slides
5. ^ Elert, Glenn. "The Electromagnetic Spectrum", The Physics Hypertextbook. Retrieved 2010-10-16.
6. ^ "Definition of frequency bands". Vlf.it. Retrieved 2010-10-16.
7. ^
8. ^ Barrow, J. D. "Natural units before Planck". Quarterly Journal of the Royal Astronomical Society 24 (1983): 24.
9. ^ "Units in particle physics". Associate Teacher Institute Toolkit. Fermilab. 22 March 2002. Retrieved 13 February 2011.
10. ^ "Special Relativity". Virtual Visitor Center. SLAC. 15 June 2009. Retrieved 13 February 2011.
11. ^ "CODATA Value: Planck constant in eV s". Retrieved 30 March 2015.
12. ^ Open Questions in Physics. German Electron-Synchrotron (DESY), a research centre of the Helmholtz Association. Updated March 2006 by JCB. Original by John Baez.
13. ^
14. ^ Glossary, CMS Collaboration, CERN
15. ^ ATLAS; CMS (26 March 2015). "Combined Measurement of the Higgs Boson Mass in pp Collisions at √s = 7 and 8 TeV with the ATLAS and CMS Experiments". arXiv:1503.07589.
## Introduction

Due to the compelling requirement of device miniaturization, the synthesis of nanoscopic structures and their macroscopic integration into large-scale arrays are fundamental to modern and future devices in the fields of optics1, electronics2, telecommunications3, biology4, energy conversion/storage5,6, and stimuli-responsive materials7. It is known that nanostructures are subject to physical and chemical property variation as a function of their geometry and composition8, and arrayed assemblies of these nanostructures exhibit collective behaviors in their responses, in terms of coupling within the same set and synergy between different sets9,10. Therefore, to tailor the overall properties of a nanostructure array (and hence to foster devices based on this nano-array), it is highly desirable to achieve well-defined nanostructuring that is capable of precise control over the structural parameters of a nanostructure array. As such, six capabilities could be considered indispensable for an efficient well-defined nanostructuring technique: i) the ability to assemble nanostructures into a large-scale array with cost-effective processes; ii) reliable size controllability; iii) in-plane and iv) out-of-plane shape designability of the nanostructures; v) alterability of the spatial configuration of nano-arrays; and vi) compatibility of different sets of arrayed nanostructures, with tunable shape and configuration of each set. Various nanostructuring techniques have been developed, such as photo/electron-beam lithography, self-assembly, nanoimprinting, and template-based techniques11, as well as material growth control12, yet almost none of these techniques fulfills all six capabilities of well-defined nanostructuring.
The template-based technique, especially that using anodic aluminum oxide (AAO) nanoporous templates, has attracted great attention for nanostructuring13, because it fully satisfies the aforementioned first and second capabilities: i) it enables the integration of millions of nanostructures into large-scale arrays in a cost-effective way14; ii) AAO pores, along with the replicated nanostructure arrays, are well controlled in size15,16. However, the other four capabilities (iii–vi) are missing for AAO templates. Different from self-ordered two-step anodization, which only generates circular-shaped pores with a close-packed honeycomb (i.e., trigonal) arrangement, artificially nano-engineering the Al-foil surface by hard imprint lithography can guide anodization to form pores with desired structural parameters (e.g., arrangement and interpore spacing) that highly depend on the surface topography of the imprinting stamps17. To date, only a handful of in-plane pore shapes are available (e.g., circular, triangular, square, and diamond)18,19,20,21. Although the out-of-plane pore size is tunable along the pore axis (e.g., by tuning the anodization current or voltage), the constituent in-plane shapes remain the same from top to bottom (i.e., a circular shape with different diameters)22,23. The spatial configurations of pores are quite limited (e.g., the trigonal, tetragonal, and hexagonal arrangements, and their combinations with identical interpore spacing)18,19,20,21, which are governed by the linear relationship between pore spacing and anodization voltage (AV)24 and greatly restrict pore-shape diversification, considering the determinative dependence of pore shape on spatial configuration19.
Binary-pore AAO templates enable two sets of nanostructures in one matrix25; however, only a few combinations are achievable due to the poor selectivity in pore shape (i.e., the first-set anodized pores only have a square shape) and spatial configuration (i.e., all sets of pores are restricted to a tetragonal arrangement). Therefore, to realize well-defined nanostructuring based on template-based techniques, breakthroughs must be made beyond the functional limitations of the existing AAO templates, especially to endow the templates with both broad selectivity (in pore shape, pore spatial configuration, and pore combination) and high controllability (of each parameter). Here, we realize designable AAO templates with state-of-the-art controllability over the in-plane and out-of-plane pore shape, the spatial configuration of the pore arrangement, and the pore combination. By intentionally introducing unequal aluminum anodization rates and further modulating the anodization rate difference at different AVs, the in-plane pore shape of the designable template can be continuously altered from polygons (e.g., triangle and square) with internally-bent (i.e., concave) walls, to polygons with non-bent (i.e., straight) walls, and then to polygons with externally-bent (i.e., convex) walls (see Supplementary Fig. 1 for a schematic illustration of the shape-different pores), as compared to a conventional template with a single pore shape in a specific spatial configuration. These pore shapes can be integrated into one pore along the axial direction by sequential anodization at different AVs, forming out-of-plane multi-segment pores with different shapes (not only sizes) for every segment. The designable templates are also capable of mixing diverse spacing-different pores in one matrix, successfully pushing the selection of spatial configuration beyond the aforementioned identical-spacing limitation.
Furthermore, a large series of different-set pore combinations, where each set of pores possesses a designable in-plane/out-of-plane shape and spatial configuration, are constructed. Importantly, the structural controllability of the designable templates is fully transferable to their nanostructure counterparts, yielding a large variety of (size-, shape-, spatial configuration-, and combination-) well-defined nanostructures (nanoparticles, nanotubes, nanowires, and nanomeshes), which enable device performance optimization, as demonstrated in three proof-of-concept applications.

## Results

### Designing principle of in-plane pore shape

The in-plane pore shape designability of the AAO templates originates from a scenario of designing unequal aluminum anodization rates at different AVs. Here, we select potentiostatic anodization for the structural control of pores in consideration of the linear relationship between the AV and the pore parameters (e.g., interpore distance and pore diameter)17. Given that nanoimprinting the Al-foil surface with an appropriate texture can guide the initiation of pores19, we aim to generate unequal aluminum anodization rates by introducing unevenly profiled, four-leaf clover-like nanoconcaves onto the surface (see the layout of the COMSOL simulation in Supplementary Fig. 2). The fabrication process of a proof-of-principle template is schematically depicted in Fig. 1c, which generally includes two sequential procedures: imprinting and anodization. Previous reports point out that plastic flow is damped in thin oxide films26 and that electric-field-assisted oxide dissolution is primarily responsible for the formation of incipient pores27. Therefore, we simulate electric field (EF) maps at the early stage of anodization to elucidate how one can control the pore shape. Here, an arbitrary spacing of 400 nm is set for the preset nanoconcave array.
Following the linear spacing-AV relation, the appropriate AV should be 160 V and, accordingly, an aqueous solution of 0.4 M H3PO4 is used as the anodization electrolyte to form nanoporous structures24. Evidently, high-EF sites are located at the bottoms of the nanoconcaves (Fig. 1a) and at eight spots on the walls (Fig. 1b). According to the field-assisted dissolution theory, in which aluminum anodization preferentially proceeds at high-EF sites17, anodization here will proceed not only at the bottoms of the nanoconcaves but also on the walls, with axial anodization driving pore elongation and radial anodization dictating shape evolution. As shown in Fig. 1b, apart from the eight stronger-EF spots, each nanoconcave has weaker EF at four humps, leading to an uneven EF distribution and consequently unequal radial anodization rates. As previously reported, acid anions are driven into the oxide layer during anodization, and the number of incorporated acid anions is positively related to the volume expansion17,28. Therefore, more anions from the electrolyte (PO4³⁻) will be incorporated into the eight spots because of the higher electric fields (Fig. 1b), leading to a larger volume expansion than that of the four humps. As the AV increases, the absolute EF difference between the eight spots and the four humps becomes larger (Supplementary Fig. 3), which consequently causes a larger volume expansion difference. Based on these results, it is predicted that the larger volume expansion difference at higher AVs should lead to smoother and more externally-bent walls and, vice versa, sharper and more internally-bent ones at lower AVs (see Supplementary Fig. 4 for the pore shape evolution). Because stress starts to be generated with the formation of oxide29, especially considering the uneven anion incorporation (and volume expansion) in our case, it is believed that electric-field-assisted dissolution and stress-driven oxide flow work together to prolong the pores at the steady state of anodization19,30.
This prediction of AV-dependent pore shape designability is fully confirmed by the following experimental results.

### Designing in-plane pore shape by adjusting anodization voltage

A purposely designed Ni imprint stamp decorated with four-leaf clover-like nanopillars on its surface (Supplementary Fig. 5) was used to predetermine sites on the aluminum surface (Supplementary Fig. 6). Anodization was then performed at four different AVs: at 120 V, the anodized pores were cross-shaped with internally-bent walls (Fig. 2a and Supplementary Fig. 7a); at 140 V, the internally-bent amplitude was suppressed, forming star-like pores (Fig. 2b); at 160 V, the pore walls became straight without bending, achieving square-shaped pores (Fig. 2c); and at 200 V, the pores presented a circular shape, analogous to a square shape with its walls externally bent (Fig. 2d and Supplementary Fig. 7b). To discern the crucial role of inhomogeneous radial anodization rates (i.e., an uneven EF distribution) in shape designability, we fabricated circular nanoconcaves for reference by using a Ni circular-pillar stamp (Supplementary Fig. 8) and achieved a homogeneous radial anodization rate (as evidenced by the even EF distribution) on the side-walls of the nanoconcaves (Supplementary Fig. 9d). Not surprisingly, the pore shape could not be altered no matter what AV was applied (Supplementary Fig. 10). To test the applicability of the pore shape designability to other arrangements, we also made a hexagonal array of three-leaf clover-like nanoconcaves (Supplementary Figs. 11, 12). It was found that, also guided by an uneven EF distribution (Supplementary Fig. 13d), the anodized pores followed a shape-designing trend similar to that of the tetragonal array. For example, the triangular pore shape without bending obtained at 140 V (Fig. 2g and Supplementary Fig. 14a) was changed to a wall-internally-bent triangular shape at a lower AV of 120 V (Fig.
2f), and further evolved to a more strongly wall-internally-bent triangular shape when the AV was decreased to 100 V (Fig. 2e and Supplementary Fig. 14b); when the AV was increased to 155 V, the pores demonstrated a wall-externally-bent shape (Fig. 2h). To explore the appropriate AV values, we subsequently performed anodization over a broader AV range. We found that, for a specific arrangement, the AV adjustable for designing pore shape is limited to a certain range (denoted the appropriate AV range), outside of which (i.e., in the too-low and too-high AV ranges) the arrangement predetermined by the nanoconcaves is broken (Supplementary Fig. 15). The three AV ranges are separated by two threshold values (Supplementary Fig. 16), empirically observed to be Va and $$\sqrt{3}$$Va, where Va = 0.4 (V nm−1) × Lh (nm) and Lh is the interpore spacing of the hexagonal array (i.e., 400/$$\sqrt{3}$$ nm). In other words, the AV thresholds can be derived from the linear spacing-AV relation, i.e., AV (V) = spacing (nm) × 0.4 (V nm−1)24, applied to the two spacings (e.g., Lh and $$\sqrt{3}$$Lh) of the emerging arrays. This in-plane shape-designable technique is applicable to large-scale fabrication, as evidenced by a template with a 2.5-cm-diameter area (Supplementary Fig. 17). The green color spreading over the whole anodized area (left image of Fig. 2i) implies high structural uniformity of the pore array, which is verified by a large-area 40-µm-wide scanning electron microscope (SEM) image (right part of Fig. 2i) and a nearly monodisperse pore size distribution (Supplementary Fig. 18). Here, for each pore arrangement, we selected only four AVs and obtained four pore shapes. Of course, the remaining values within the appropriate AV range are also selectable for tuning pore shape, so more pore shapes should be achievable.
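The threshold arithmetic for the hexagonal array follows directly from the stated relation; a short sketch (assuming the quoted 0.4 V nm−1 slope; variable names are ours):

```python
import math

SLOPE = 0.4               # V/nm, linear spacing-AV relation (ref. 24)
L_h = 400 / math.sqrt(3)  # interpore spacing of the hexagonal array, nm

V_a = SLOPE * L_h             # lower threshold, about 92.4 V
V_upper = math.sqrt(3) * V_a  # upper threshold, 0.4 x 400 nm = 160 V

# The AVs applied experimentally to this array (100-155 V) fall inside the range.
print(round(V_a, 1), round(V_upper, 1))
```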
In other words, one stamp can lead to many shape-different templates simply by controlling the AV, thus avoiding the frequent fabrication of masks/stamps with different shapes required in conventional lithographic techniques.

### Designing out-of-plane pore shape with sequential anodization voltages

To further advance the pore-shape designability, we also made efforts toward tuning the out-of-plane shape (i.e., forming multiple segments with a distinct shape for each segment). Using the tetragonal template as an example, Fig. 3 demonstrates how to design the pore shape in the axial direction by multi-step anodization with different AVs. Because template I in Fig. 3 was anodized at a single AV of 120 V, its cross-shaped pores remain identical from top to bottom. When two AVs of 120 and 200 V were sequentially applied to prepare template II, its pores contain two segments: the first segment at the top has a cross shape while the second at the bottom has a circular shape. Although the first segment was exposed to the higher AV during the second anodization step, its cross shape remained unchanged, indicating that the shape in each segment is independent. This independence imparts more diverse axial pore-shape designability, as reflected by template III, in which each pore has three segments with star, square, and circular shapes corresponding to AVs of 140, 160, and 200 V, respectively. In addition to adjusting from low to high AVs, we can also perform multi-step anodization in the opposite direction, for example, by further anodizing to obtain template IV (from 200 to 160 V) and template V (from 200 to 160 and then 140 V). Accordingly, template IV with four segments per pore (star + square + circular + square) and template V with five segments (star + square + circular + square + star) were fabricated.
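The segment bookkeeping described above can be sketched as a simple lookup (the AV-to-shape mapping is taken from the shapes reported for this tetragonal array; the function name is illustrative):

```python
# In-plane shape observed at each AV for the tetragonal 400-nm array (Figs. 2-3).
SHAPE_AT_AV = {120: "cross", 140: "star", 160: "square", 200: "circular"}

def axial_segments(av_sequence):
    """Expected top-to-bottom segment shapes for a multi-step AV sequence;
    each segment keeps its shape during the later anodization steps."""
    return [SHAPE_AT_AV[av] for av in av_sequence]

# Template V: 140 -> 160 -> 200 -> 160 -> 140 V gives five segments.
print(axial_segments([140, 160, 200, 160, 140]))
# ['star', 'square', 'circular', 'square', 'star']
```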
It is foreseeable that templates with more selections of out-of-plane pore shapes, as well as the resultant one-dimensional nanostructures of multiple shape-different segments (see examples in Supplementary Fig. 19 and Fig. 6e–h), are achievable by selecting different series of AVs within the appropriate AV range.

### Mixing spacing-different arrangements for designing pore shape

Given that pore shapes depend strongly on the spatial configuration of neighboring pores (see Supplementary Fig. 20 for details)19,21, and considering that only three equilateral polygons tile a plane without gaps and that only a limited number of combinations of equilateral polygons exist (corresponding to the conventional linear spacing-AV relation24; see Supplementary Fig. 21), it is imperative to mix arrangements of different spacings to achieve more spatial configurations and thus more pore shapes. Considering the preceding observation that the AVs for a specific arrangement are limited to an appropriate AV range (Supplementary Fig. 16), we hypothesized that the AVs for a spacing-different mixture arrangement should lie in the intersection of several appropriate AV ranges, so as to simultaneously prevent the occurrence of new pores and the disappearance of existing pores for every constituent arrangement. To test this hypothesis, we mixed the above tetragonal arrangement with a spacing of Lt (i.e., 400 nm) with a larger-spacing ($$\sqrt{2}$$Lt) tetragonal arrangement (Supplementary Fig. 22, a–c). In theory, the appropriate AV ranges for the two arrangements are (Vb/$$\sqrt{2}$$, $$\sqrt{2}$$Vb) and (Vb, 2Vb) (see Supplementary Fig. 22, d–f for details), where Vb = 0.4 (V nm−1) × Lt (nm)24. The intersectional AVs therefore run from Vb (160 V) to $$\sqrt{2}$$Vb (~226 V). When an AV of 190 V within the intersection was applied, both arrangements were maintained (Fig.
4a); in contrast, at an AV of 158 V below the intersection, the long-spacing arrangement disappeared, with new pores occurring at the central sites (Supplementary Fig. 23). Regarding the octagonal arrangement, whose appropriate AV range runs from $$\sqrt{2.5}$$Vb to $$\sqrt{5}$$Vb (see Supplementary Fig. 24), the two arrangements have no overlapping AVs. Accordingly, a large circular pore (red) formed spontaneously at the center of the octagonal arrangement under a 190-V AV (Fig. 4b). The mixture arrangement is scalable, as evidenced by Fig. 4g, in which two spacing-different arrangements alternately tile the whole plane. All experimental results match the prediction, thus validating this hypothesis. Within spacing-different mixture arrangements, pores with additional shapes were realizable and could be further tuned by adjusting the AV. For example, besides the circular pores (green) in the basic arrangement, elliptical pores (cyan) were observed in the accessory arrangements (Fig. 4a, b). Smith et al. explained that this unique shape with higher structural asymmetry stems from the competition of biaxial compressive stress between neighboring arrangements21, as further evidenced by the different anodization rates under the nonuniform EF distribution therein (Supplementary Fig. 25). Likewise (see Supplementary Fig. 26), three-pointed star-shaped pores (blue, Fig. 4c) were realized in the centered tetragonal arrangement at a 140-V AV. The accessory arrangements can also be inserted in the form of lines (Fig. 4d), areas (Fig. 4e), and even complex patterns (Fig. 4f), achieving other shapes such as isosceles-triangular (yellow), rectangular (orange), and semi-circular (purple). Similarly, these new shapes are changeable by adjusting the AV. For example, pores sharing the same spatial configuration (Supplementary Fig. 27) varied from a non-bent (yellow, Fig. 4d) to a wall-externally-bent (yellow, Fig. 4f) triangular shape as the AV increased from 190 to 210 V.
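The interval logic behind these mixing experiments can be sketched as follows (the range formulas are those quoted in the text; the helper function is ours):

```python
import math

V_b = 0.4 * 400  # = 160 V for the L_t = 400 nm tetragonal array (ref. 24)

# Appropriate AV ranges quoted in the text, in volts.
base_tetragonal   = (V_b / math.sqrt(2), math.sqrt(2) * V_b)  # ~(113, 226)
larger_tetragonal = (V_b, 2 * V_b)                            # (160, 320)
octagonal         = (math.sqrt(2.5) * V_b, math.sqrt(5) * V_b)  # ~(253, 358)

def intersect(r1, r2):
    """Intersection of two open intervals, or None if they do not overlap."""
    lo, hi = max(r1[0], r2[0]), min(r1[1], r2[1])
    return (lo, hi) if lo < hi else None

both = intersect(base_tetragonal, larger_tetragonal)
print(both)                                   # ~(160.0, 226.3): 190 V lies inside
print(intersect(base_tetragonal, octagonal))  # None: no AV preserves both
```

At 190 V both tetragonal arrangements survive, 158 V falls below the intersection, and the octagonal arrangement shares no AV with the base array, matching the observations above.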
The successful mixing of spacing-different arrangements by appropriately choosing AVs diversifies the spatial configuration of AAO pores, thus giving rise to pores with designable shapes that are otherwise unobtainable in conventional spacing-identical configurations.

### Combining different sets of pores with independent shape designability

In addition to the well-defined structural control of one set of pores, we also generalized the designability of in-plane/out-of-plane pore shape and spacing-different spatial configuration to pore combinations. Figure 5b, d shows that, by a wet-chemical etching step (Supplementary Fig. 28)25, a 2nd set of pores can always be generated in a specific configuration and reshaped into four shapes regardless of the AV-induced variation in the oxide. By assembling the anodized 1st-set pores (green) in Fig. 5a, c and the etched 2nd-set pores (cyan) in Fig. 5b, d into one matrix, we achieved 2 × 3 × 4 pore-shape combinations, as summarized in Fig. 5aibj (i = 1–3, j = 1–4) and Fig. 5cmdn (m = 1–3, n = 1–4). These pore combinations exhibit large-area uniformity as well (Supplementary Figs. 29, 30). Besides the in-plane shape-designable pore combinations, we can achieve out-of-plane shape-designable pore combinations as well (Fig. 5e). The out-of-plane shape designability of the 2nd-set pores (outlined by the cyan dashed line) arises from the fact that pore walls formed at different AVs have inhomogeneous compositions and thus etching rates (Supplementary Fig. 31)22. Taking this concept a step further, we realized different-set multi-shape pore combinations within spacing-different mixture arrangements (Fig. 5f, g). For example, starting from the mixture array with points of the long-spacing arrangement (Fig. 4a), wet-chemical etching gave rise to two types of pores with the same four-edged cross shape but dissimilar orientations (cyan and pink), producing a two-set 4-shape template.
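The 2 × 3 × 4 counting above can be made explicit with a short enumeration (the labels are placeholders, not the actual shape names):

```python
from itertools import product

# Two template families (Fig. 5a-b and 5c-d), three anodized 1st-set shapes,
# and four etched 2nd-set shapes; labels are illustrative placeholders.
families = ["a-b", "c-d"]
first_set = ["shape_i1", "shape_i2", "shape_i3"]
second_set = ["shape_j1", "shape_j2", "shape_j3", "shape_j4"]

combinations = list(product(families, first_set, second_set))
print(len(combinations))  # 24, i.e. the 2 x 3 x 4 combinations of Fig. 5
```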
Similarly, by etching the mixture array with areas of the long-spacing arrangement (Fig. 4b), pores of differently-oriented four-edged cross (cyan and purple) and three-edged cross (red) shapes were obtained, leading to two-set 6-shape templates together with the 1st-set pores of circular (green), rectangular (blue), and isosceles-triangular (orange) shapes (Fig. 5g). In general, the total number of structural selections for templates with one-set pores and pore combinations can be estimated by the following equation: $$f(x,y,z)=\mathop{\sum }\limits_{i=1}^{x}({y}_{i}+{z}_{i}+{y}_{i}\times {z}_{i})$$ (1) where i is the serial number of a specific spatial configuration of pores; x is the number of spatial configurations for the 1st-set pores, which in principle must satisfy the criterion that all constituent arrangements share an intersectional AV range; y is the number of in-plane and out-of-plane shapes of the 1st-set pores, determined by the adjustable AV range and the AV sequence; and z is the number of 2nd-set pore shapes controlled by wet-chemical etching. This equation provides a design blueprint for AAO templates. Following this blueprint, AAO templates with broad selectivity and high controllability can be purposely diversified.

### Fabrication and application of well-defined nanostructures replicated from designable templates

Templates with designability of in-plane/out-of-plane pore shape, spatial configuration, and pore combination are the key to realizing well-defined nanostructures. To illustrate that various well-established material-synthesis techniques (e.g., atomic layer deposition, physical vapor deposition, and electrodeposition) are also applicable to designable templates31,32,33,34, we fabricated 11 nanostructure arrays (see Supplementary Figs. 32–39) with the assistance of three typical templates with 1st-set circular-shaped pores (Figs. 1d and 5a1), 2nd-set four-edged cross-shaped pores (Fig.
5b3), and the corresponding two-set pore combination (Fig. 5a1b3). Notably, designable templates are also compatible with other template-based synthetic techniques such as on-wire lithography and coaxial lithography35,36,37, leading to many other well-defined nanostructures. The number of well-defined nanostructures stemming from designable templates can be described by: $$f(x,y,z,a,b,c)=\mathop{\sum }\limits_{i=1}^{x}({a}_{i}\times {y}_{i}+{b}_{i}\times {z}_{i}+{c}_{i}\times ({y}_{i}\times {z}_{i}))$$ (2) where a, b, and c are the numbers of nanostructures replicated from the 1st-set pores, the 2nd-set pores, and the two-set combinations, respectively (e.g., a, b, and c in Supplementary Fig. 32 are 3, 3, and 5); i, x, y, and z have the same definitions as in Eq. (1). These well-defined nanostructures with a high degree of structural designability could give rise to unique properties. Furthermore, external stimuli (e.g., capillary force, light, magnetic field, and heat) can adjust these properties dynamically7. Such promising features should favor device applications. Here we show three envisaged applications of nanostructure arrays prepared from the designable templates. The first application is to optimize surface-enhanced Raman spectroscopy (SERS) using five hexagonal arrays of in-plane shape-different Ag nanoparticles (Fig. 6a and Supplementary Fig. 40). Figure 6d shows Raman spectra of Rhodamine 6G molecules chemisorbed on the Ag nanoparticles. Compared to the nanoparticles with an externally-bent shape (S1), the nanoparticles with a non-bent triangular shape (S2) demonstrated a noticeable enhancement in Raman peak intensity. The highest intensity was achieved for the nanoparticles with an internally-bent shape (S3); for example, the peak intensity at 1650 cm−1 increased approximately 6.6-fold relative to that of S1. With a further increase of the internally-bent amplitude (S4 and S5), the SERS intensities diminished.
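Equations (1) and (2) translate directly into code; a sketch using the example replication factors a = 3, b = 3, c = 5 from Supplementary Fig. 32 (the single-configuration case with y = z = 1 is an assumed illustration, not a value from the paper):

```python
def template_selections(y, z):
    """Eq. (1): per configuration i, count the 1st-set-only (y_i) and 2nd-set-only
    (z_i) templates plus the two-set combinations (y_i * z_i)."""
    return sum(yi + zi + yi * zi for yi, zi in zip(y, z))

def nanostructure_count(y, z, a, b, c):
    """Eq. (2): nanostructures replicated from each class of template,
    weighted by the replication routes a_i, b_i, c_i."""
    return sum(ai * yi + bi * zi + ci * yi * zi
               for yi, zi, ai, bi, ci in zip(y, z, a, b, c))

print(template_selections([1], [1]))                 # 3 template classes
print(nanostructure_count([1], [1], [3], [3], [5]))  # 11, matching the 11 arrays
```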
Finite-difference time-domain (FDTD) simulations show that hot spots with strong EFs, stemming from the plasmonic resonance effect38, are situated around the vertices of the triangles (Fig. 6b). In particular, the trend of the maximum EF values at the hot spots across the five samples is consistent with the SERS intensity variation (Fig. 6c), implying that the EFs enhanced by the designable nanoparticle shapes are the determining factor of the SERS performance. Next, we investigated out-of-plane shape-designable Au nanowires as broadband light absorbers. We fabricated five samples (S6–S10) comprising one to five shapes, respectively, as schematically represented in the insets of Fig. 6h. A nanowire of S10 is shown in Fig. 6e, comprising five different shapes along the axial direction. An overall observation from the optical photographs is the gradual color variation from red to black (Fig. 6f), indicating that the reflection of light impinging on the Au nanowires was increasingly suppressed as the number of shapes per nanowire increased. Light absorption efficiencies in the visible range increase monotonically from S6 to S10 (Fig. 6h), in good agreement with the visual observation. Figure 6g shows that the strong EFs induced by plasmonic resonance at short wavelengths (e.g., 500 and 540 nm) are situated at the top parts of the nanowires, that is, in the vicinity of the circular, superelliptic, and square segments. Under long-wavelength illumination, the region of EF enhancement moves down along the axis (e.g., at 580 and 750 nm) and finally concentrates around the star-like and cross-shaped segments (e.g., at 900 nm). Thus, multiple plasmonic modes are excited within different segments at various wavelengths and work complementarily to achieve strong light absorption over a broad spectral range. Finally, we exploited the independent control over the two-set TiO2-nanotubes/Au-nanowires combination to promote photoelectrocatalysis.
Circular (S11), square (S12), and star (S13) shapes were fabricated to optimize the TiO2 nanotubes (Fig. 6i), which yielded different H2 generation rates (Fig. 6k). The discrepancy is primarily ascribed to the variation in light-trapping capability (Fig. 6j, l). Then, two shapes, four-void circular (S14) and eight-edged cross (S15), were fabricated to optimize the Au nanowires (Fig. 6i and Supplementary Fig. 41). Reliable H2 generation was detected for the combinations under visible light, especially for S15 with 4.2 ± 0.7 μmol h−1 cm−2 (Fig. 6k). This long-wavelength response arises from plasmonic hot-electron injection from the Au nanowires into the TiO2 nanotubes39, as identified by three absorption peaks at 600, 680, and 740 nm (Fig. 6l). Figure 6j shows that the EF enhancement tends to concentrate at the voids of the Au nanowires, which accounts for the superior performance of S15 over S14, since S15 possesses twice as many voids. With both materials optimized, S15 achieved an H2 generation rate of 17.5 ± 1.5 μmol h−1 cm−2 under AM 1.5 G illumination, an approximately 3.5-fold enhancement relative to S11.

## Discussion

This work advances the functionalities of AAO templates in controlling the in-plane/out-of-plane shape, spatial configuration, and combination of pores. We developed a strategy to control the in-plane shapes of AAO pores through the intentional design of unequal anodization rates at different AVs and realized, for the first time, pores with designable in-plane shapes simply by adjusting the AV within specific ranges, thus reducing the high cost of mask/stamp fabrication required by conventional lithography for fabricating shape-different nanostructures, especially at small feature sizes (e.g., <400 nm). Moreover, the fabricated arrays of out-of-plane shape-designable pores (and the resultant nanostructures), which combine different in-plane shapes along the axial direction, are not easily obtainable by other techniques.
Unlike conventional AAO templates, in which all pores feature identical spacings in order to satisfy the linear spacing-AV relation21, we found that pores of different spacings can be mixed into one matrix as long as the constituent pore arrangements share an intersectional AV range, highly enriching the spatial configurations (and resultant shapes) of the pores. The designability of in-plane/out-of-plane shape and spacing-different arrangements is applicable to each set of pores in different-set templates, leading to a large series of designable pore combinations. We believe that the broad selectivity and high controllability of AAO templates will pave the way towards well-defined nanostructuring over a library of nanostructure arrays, which may be attractive to a wide spectrum of technological applications.

## Methods

### Atomic layer deposition

Atomic layer deposition (ALD) was performed with a Picosun SUNALE R150 ALD system. For TiO2 growth, the deposition was carried out at 300 °C with a growth rate of about 0.05 nm per cycle. A typical growth cycle comprises 0.1 s pulsing of titanium(IV) chloride, 5 s purging with nitrogen, 0.1 s pulsing of deionized (DI) water, and 5 s purging with nitrogen. For SnO2 growth, the deposition was carried out at 250 °C with a growth rate of about 0.03 nm per cycle. A typical growth cycle involves 0.5 s pulsing of tin(IV) chloride, 10 s purging with nitrogen, 0.5 s pulsing of DI water, and 10 s purging with nitrogen. For metallic Pt, the deposition was carried out at 300 °C with a typical growth cycle of 1.5 s pulsing of Pt(MeCp)Me3, 30 s filling with nitrogen, 20 s purging with nitrogen, 1.5 s pulsing of oxygen, 30 s filling with nitrogen, and 20 s purging with nitrogen.

### Electrochemical deposition

A two-electrode configuration was used for all electrochemical depositions, with an electrochemical workstation (BioLogic).
Ni deposition was performed in an aqueous solution (0.12 M NiCl2, 0.12 M NiSO4, and 0.5 M H3BO3) at a constant current density of 2 mA/cm2 with a Ni plate as the counter electrode. Au and Ag deposition were conducted at a constant current density of 1 mA/cm2 with a Pt foil as the counter electrode in the respective plating solutions.

### Material etching

The unanodized aluminum foil was etched in a mixture solution of 3.4 g CuCl2, 100 ml HCl, and 100 ml DI water at room temperature. The anodic aluminum oxide (AAO) template was wet-chemically etched in 0.5 M H3PO4 or 0.1 M NaOH solutions. To keep the etching rate constant, the wet-chemical etching of the AAO template was always conducted at a constant temperature of 30 °C. Dry etching of the AAO template was performed with an argon ion milling system (Gatan PECS) at an accelerating voltage of 4.5 kV, a rotation frequency of 2 Hz, and a tilt angle of 60°.

### Fabrication of Ni imprint stamps with clover-like nanopillars

To obtain a Ni stamp with a tetragonal array of four-leaf clover-like nanopillars, a commercially available Si mold patterned with circular nanoconcaves on its surface, with 400-nm spacing in a tetragonal arrangement, was used. First, the Si mold was cleaned in a mixture solution (H2O2:H2SO4, 1:3) at 100 °C for 1 h. After rinsing in DI water, the Si mold was treated with 3-aminopropyltriethoxysilane (1.0 vol% in CH3CH2OH) at 60 °C for 1.5 h. Then, a gold layer (about 15–25 nm thick) was evaporated onto the nanopatterned surface of the Si mold by a physical vapor deposition (PVD) system, serving as a conductive layer for the subsequent electrochemical deposition. Afterwards, a thick Ni layer was electrodeposited at a constant current density of 2 mA/cm2. After the electrochemical deposition, the Ni foil was dried with airflow and peeled off from the Si mold. Accordingly, a Ni foil carrying a tetragonal array of circular nanopillars was obtained.
The as-obtained circular nanopillars were then utilized for patterning an aluminum foil surface by mechanical impressing under a pressure of 10 kN cm−2 (Supplementary Fig. 5a1). The imprinted area was then anodized in 0.4 M H3PO4 solution at an AV of 160 V and 10 °C for 10 min, leading to a porous AAO template (Supplementary Fig. 5a2). Afterward, polymethyl methacrylate (PMMA) was dripped onto the anodized area and dried naturally to form a supporting layer. After wet-chemically removing the unanodized aluminum foil (Supplementary Fig. 5a3), the exposed AAO template was selectively etched in 0.5 M H3PO4 solution for 90 min and then covered with a gold layer (about 15–20 nm thick) by PVD (Supplementary Fig. 5a4). With this conductive layer as the working electrode, Ni electrodeposition was performed to form a thick layer (Supplementary Fig. 5a5). After dissolving the supporting PMMA layer in dimethyl sulfoxide solution at 80 °C for 2 h and completely removing the AAO template in the H3PO4 solution for 5 h, a Ni stamp with a tetragonal array of four-leaf clover-like nanopillars was obtained (Supplementary Fig. 5a6). The as-fabricated nanopillars are shown in Supplementary Fig. 5b, c. Similarly, we fabricated another Ni imprint stamp with a hexagonal array of three-leaf clover-like nanopillars (Supplementary Fig. 11). The fabrication process (Supplementary Fig. 11a) was nearly the same as that for the tetragonal-array stamp, except that it started with a Si mold patterned with a trigonal array of circular nanoconcaves. Finally, a Ni stamp with a hexagonal array of three-leaf clover-like nanopillars was constructed, as shown in Supplementary Fig. 11b, c.
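As a rough sanity check on the Ni electrodeposition steps above, Faraday's law gives the ideal plating rate at the 2 mA/cm2 used here (100% current efficiency and bulk Ni density are assumptions of this sketch, not values from the paper):

```python
# Ideal Ni electroplating rate from Faraday's law; assumes 100% current
# efficiency and bulk Ni density (both assumptions of this sketch).
FARADAY = 96485.0   # C/mol
M_NI = 58.69        # g/mol, molar mass of Ni
N_ELECTRONS = 2     # Ni2+ + 2e- -> Ni
RHO_NI = 8.91       # g/cm^3, bulk Ni density

def ni_rate_um_per_h(j_mA_cm2):
    """Ideal Ni film growth rate (um/h) at a given current density (mA/cm^2)."""
    mass_rate = (j_mA_cm2 / 1000.0) * M_NI / (N_ELECTRONS * FARADAY)  # g/(s cm^2)
    return mass_rate / RHO_NI * 1e4 * 3600  # cm/s -> um/h

print(round(ni_rate_um_per_h(2.0), 2))  # about 2.46 um/h at 100% efficiency
```

Real deposits grow more slowly when the current efficiency is below unity, but the estimate shows why hours-long plating is needed for a free-standing, mechanically robust stamp.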
### Anodization of aluminum

To predetermine nanoconcaves on the aluminum foils for guiding pore evolution during anodization, the Ni stamps with arrayed nanopillars were used to pattern electro-polished aluminum foils in an imprinting system at a constant pressure of 10 kN cm−2 for 3 min. Aluminum foils patterned with nanoconcaves in a hexagonal arrangement with 400/$$\sqrt{3}$$ nm spacing were always anodized in 0.4 M H3PO4 solution regardless of the applied AV. For aluminum foils patterned with nanoconcaves in a tetragonal arrangement with 400 nm spacing, the anodization electrolyte was selected on the basis of the AV. This is because higher AVs accelerate Al anodization (i.e., produce a higher anodization current) and result in heat accumulation (and a temperature increase), which causes electrolytic breakdown of the AAO template and prevents the formation of pores with high aspect ratios. Therefore, for AVs below 180 V, anodization was conducted in 0.4 M H3PO4 solution, whereas beyond 180 V, a mixture of 3 ml H3PO4, 300 ml ethylene glycol, and 600 ml DI water was used as the anodization electrolyte. The presence of ethylene glycol and the reduced H3PO4 concentration effectively mitigate electrolytic breakdown of the AAO template at high AVs.

### Synthesis of nanostructure arrays with AAO templates

To construct Ag nanoparticles with different in-plane shapes (S1–S5 in Fig. 6a–d), the lab-made Ni stamp with a hexagonal array of three-leaf clover-like nanopillars was used to pattern the aluminum surface. At different AVs (e.g., 150, 140, 120, 100, and 80 V), the nanoconcaves inheriting the geometrical features of the Ni three-leaf clover-like nanopillars evolved into pores of different shapes during anodization. After anodization, a 50-nm-thick Ag layer was evaporated onto the anodized area by PVD, serving as a conductive layer. Then, Ag electrodeposition was performed to form a thick substrate.
After wet-chemically removing the unanodized aluminum foil and the AAO template, arrays of in-plane shape-different Ag nanoparticles were obtained. For fabricating out-of-plane shape-different Au nanowires (S6–S10 in Fig. 6e–h), the Ni stamp with tetragonally arranged four-leaf clover-like nanopillars was used to pattern the aluminum foils. On the imprinted area, sequential anodization was conducted at different AVs to tune the out-of-plane pore shape. Taking the Au nanowires combining five shapes (S10) as an example, a five-step anodization at 120 V for 20 min → 140 V for 7.5 min → 160 V for 5 min → 180 V for 15 min → 200 V for 10 min was performed to fabricate five-shape combined pores. Then, a 5-nm-thick TiO2 layer was deposited along the pore walls by ALD to tune the dielectric environment of the subsequent Au nanowires and, meanwhile, to endow the high-aspect-ratio Au nanowires with mechanical robustness. To form a supporting substrate for the Au nanowires, we deposited a metallic layer (5-nm-thick Ti and 20-nm-thick Au) onto the template, serving as the working electrode in the following Au and Ni electrodeposition. After completing the Au and Ni electrodeposition, the unanodized aluminum foil on the backside was wet-chemically removed, followed by ion-milling off the pore barrier. Once the Au plating solution could penetrate through the pores and reach the conductive substrate, Au electrodeposition was carried out, and the resultant Au nanowires replicated the structural features of the pores. Finally, an array of Au nanowires combining five shapes in the axial direction was obtained after wet-chemically dissolving the AAO template in NaOH solution. In the same way, we fabricated pores combining one, two, three, and four shapes, as well as the resulting Au nanowire arrays.
Taking the different pore-elongation rates at different AVs into consideration, the anodization parameters for the other four samples were 200 V for 50 min (S6), 180 V for 38 min → 200 V for 25 min (S7), 160 V for 8 min → 180 V for 25 min → 200 V for 17 min (S8), and 140 V for 9 min → 160 V for 6 min → 180 V for 19 min → 200 V for 12.5 min (S9), so as to maintain the same length for all Au nanowires. For constructing TiO2 nanotubes and TiO2-nanotubes/Au-nanowires combinations (S11–S15 in Fig. 6i–l), we used the Ni stamp with a tetragonal array of four-leaf clover-like nanopillars to pattern the aluminum foils. Under AVs of 140, 160, and 200 V, the 1st-set pores evolved into star, square, and circular shapes, respectively. A 40-nm-thick TiO2 layer was then grown along the pore walls by ALD to serve as the semiconductor photocatalyst. After ion-milling off the TiO2 layer on the topmost surface, a metallic layer (5-nm-thick Ti and 20-nm-thick Au) was evaporated onto the top surface by PVD, followed by Ni electrodeposition to form a supporting substrate. Afterwards, the unanodized aluminum foil and the AAO template were wet-chemically etched, leading to TiO2 nanotubes with different shapes (S11–S13). For obtaining TiO2-nanotubes/Au-nanowires arrays, the 2nd-set pores were opened at the fourfold junction sites of neighboring 1st-set pores using NaOH solution for 20 min. The opened 2nd-set pores were then isotropically etched in NaOH solution to form a circular shape truncated with four voids, or selectively etched in H3PO4 solution to form an eight-edged cross shape. With the exposed conductive substrate as the working electrode, Au electrodeposition was conducted in the 2nd-set pores to form shape-different Au nanowires, which interfaced with the TiO2 nanotubes in the 1st-set pores. Finally, TiO2-nanotubes/Au-nanowires arrays were obtained after dissolving the AAO template in NaOH solution (S14 and S15).
### SEM and FIB cutting

All SEM analyses of structural morphology and all nanoscale cutting in this work were performed with a focused ion beam scanning electron microscope (ZEISS).

### SERS measurement

Samples for surface-enhanced Raman spectroscopy (SERS) measurements were prepared by immersing the arrays of in-plane shape-different Ag nanoparticles in 10−6 M Rhodamine 6G (R6G) aqueous solution. The immersion was carried out for 4 h to ensure that sufficient R6G molecules were chemisorbed on the surface of the Ag nanoparticles, especially at the hot spots under electromagnetic excitation (namely, the vertices of the triangular nanoparticles). Before measurement, all samples were dipped in DI water for 30 s and dried with nitrogen flow. SERS spectra with three main peaks at 1363, 1508, and 1650 cm−1 were collected with an NTEGRA Spectra system (NT-MDT) equipped with a 532-nm laser source. The measured data were baseline-corrected to extract the SERS spectra from the broad background.

### Optical measurement

A UV–vis–NIR spectrometer (Varian Cary 5000) was used to characterize the light-trapping performance. All optical measurements were performed with an illumination spot size of ~20 mm2 at normal incidence without polarization. Because all samples are opaque, the experimental light absorption spectra in this work were calculated as 100% − R%, where R% is the light reflection efficiency.

### Photoelectrocatalytic measurement

The hydrogen production was measured with a gas chromatography analyzer (Agilent Micro-GC). To facilitate the redox reaction, Pt nanoparticles were formed on the surface of the TiO2 nanotubes by ALD. The photocatalysts, TiO2 nanotubes and Au-nanowires/TiO2-nanotubes combinations, were immersed in a 1:4 v/v methanol/DI water mixture. Argon-flow purging was maintained for 1 h to eliminate dissolved oxygen from the aqueous solution.
During the measurements, the photocatalytic samples were illuminated with a simulated AM 1.5 G solar spectrum or with visible light through a 420-nm long-pass filter. The hydrogen production rates for all samples were normalized to the corresponding active areas.

### COMSOL simulation

To characterize the electric field distribution in the aluminum foils at the initial stage of anodization, we used the AC/DC module (simple resistor) of COMSOL Multiphysics (version 5.1). The geometrical features of the surface-patterned aluminum foils used in the simulations were set according to the experimental observations. Quadruple and sextuple unit cells with periodic boundary conditions were used for the calculations, corresponding to the experimental nanoconcaves of tetragonal and hexagonal arrangements, respectively. A voltage difference was applied between the top surface and the bottom of the aluminum foil to simulate the external AV.

### FDTD simulation

Optical simulations were performed with a commercial software package (FDTD Solutions, Lumerical Solutions). Three-dimensional layouts were used for all numerical calculations. A plane-wave light source propagating in the −z direction illuminated the nanostructure arrays, which were situated on a substrate parallel to the xy plane. Periodic boundary conditions were set along the x and y directions, and perfectly matched layers truncated the z axis. The geometries of the nanostructure arrays in the simulations were measured from the corresponding SEM images. The electric field maps were recorded by two-dimensional frequency-domain field profile monitors. To calculate the light absorption efficiency (A%), two two-dimensional power monitors were used: one was placed at the bottom of the nanostructure array to measure the transmitted power (T%), and the other was located above the light source to measure the reflected power (R%). The light absorption spectra were obtained as 100% − T% − R%.
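The two absorption bookkeeping rules used in the Methods (experimental A% = 100% − R% for opaque samples, and simulated A% = 100% − T% − R% from the two power monitors) can be sketched as follows (function names and sample values are illustrative):

```python
# Experimental rule: opaque samples transmit nothing, so A% = 100% - R%.
def absorption_experiment(R_percent):
    return [100.0 - r for r in R_percent]

# FDTD rule: subtract both the transmission and reflection monitor readings.
def absorption_fdtd(T_percent, R_percent):
    return [100.0 - t - r for t, r in zip(T_percent, R_percent)]

print(absorption_experiment([12.0, 5.0]))        # [88.0, 95.0]
print(absorption_fdtd([3.0, 1.0], [12.0, 5.0]))  # [85.0, 94.0]
```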
The optical constants of Ag and Au used in the simulations were taken from Johnson and Christy [40], and the optical data reported in Siefke's paper [41] were used for modeling TiO2.
All Questions 1k views Can I select a large random prime using this procedure? Say I want a random 1024-bit prime $p$. The obviously-correct way to do this is select a random 1024-bit number and test its primality with the usual well-known tests. But suppose instead that I do ... 441 views Building a hard to factor number without knowing its factorization It is possible to find an efficient algorithm for constructing a provably hard to factor number $N$, together with a witness that shows that it is indeed hard to factor. EDIT, since it was not clear: ... 1k views Is there a feasible method by which NIST ECC curves over prime fields could be intentionally rigged? The NIST elliptic curves P-192, P-224, P-256, P-384, and P-521, prescribed in FIPS 186-4 appendix D.1.2, are generated according to a well defined process, but using an arbitrary random-looking seed ... 397 views Why have hashes when you have MACs? It would seem to a naive eye that if you have a MAC, you have a hash function: use a key that all the parties know (such as all-bits-zero). A potential application would be a resource-constrained ... 1k views Is RSA padding needed for single recipient, one-time, unique random message? I want a way to encrypt files using this process: http://crypto.stackexchange.com/a/15 . That is: generate a random password, use that to AES-encrypt a file, and use an RSA public key to encrypt the ... 2k views Sending KCV (key check value) with cipher text I was wondering why it is not more common to send the KCV of a secret key together with the cipher text. I see many systems that send cipher text and properly prepend the IV to e.g. a CBC mode ... 4k views In RSA, why is it important choosing e so that it is coprime to φ(n)? When choosing the public exponent e, it is stressed that $e$ must be coprime to $\phi(n)$, i.e. $\gcd(\phi(n), e) = 1$. I know that a common choice is to have $e = 3$ (which requires a good padding ... 
646 views PBKDF vs HKDF for pretty long key I'm developing a messenger application with encrypted chats. In the first version of the app I've used PBKDF2 (10000 iterations, SHA1, random salt) to extend a short user password and generate keys ... 2k views Can Elgamal be made additively homomorphic and how could it be used for E-voting? Elgamal is a cryptosystem that is homomorphic over multiplication. How can I convert it to an additive homomorphic cryptosystem? How can I use this additive homomorphic Elgamal cryptosystem for ... 577 views Block cipher fixed points (plaintext equal to ciphertext) A block cipher is a bijective map from the set of possible plaintexts to the set of ciphertexts, which are the same size and might as well be considered the same thing: $\theta: S\to S$. In this there ... 805 views Trying to better understand the failure of the Index Calculus for ECDLP So I'm going to give you guys my understanding and then if you would be so kind as to tell me where I'm off the mark (hopefully I'm not completely wrong). So basically the index calculus for the ... 675 views Cracking an RSA with no padding and very small e I have a project wherein I have to crack a given cipher text encrypted using RSA and have been given N and e. Can someone suggest an RSA attack using a very small exponent e(here e=3) and no padding? 2k views Why are RSA key sizes almost always a power of two? I know that other bit sizes are possible, e.g. this HTTPS server seems to have a 9000 bit key https://www.ssllabs.com/ssltest/analyze.html?d=qqq.gg, but it's very rare that one sees a key not of size ... 2k views RSA with modulus product of many primes I would like to ask what happens if we build an RSA system with modulus a product of more than 2 primes, for example let $n=p_{1}p_{2}...p_{L}$. I know only the classical RSA system with $n=pq$ with ... 
897 views Hill cipher, unknown letter value I've been struggling on this problem for a while now: the Hill cipher is well-known to be vulnerable to known-plaintext attack due to its linearity. Given a key matrix $K$ of size $n\times n$, one ... 289 views Formula for the number of expected collisions Say we have a hash function that produces $n$ bit outputs. From the birthday problem, we know that after around $\sqrt{2^n}$ different inputs to the hash function, we can expect a collision. Say instead that ... 2k views What is “Implicit Authentication”? What is “Implicit Authentication” in the context of authentication methods? I searched the Web but could not find any article that describes this. If anyone can describe it, that would be a great ... 343 views Building a combined encryption scheme from two encryption schemes that's secure if at least one of them is secure Any thoughts on how this can be done? Let $\Pi_1 = (\mathrm{Gen}_1, \mathrm{Enc}_1, \mathrm{Dec}_1)$ and $\Pi_2 = (\mathrm{Gen}_2, \mathrm{Enc}_2, \mathrm{Dec}_2)$ be two encryption schemes for ... 639 views Certificateless cryptography While reading "Certificateless Public Key Cryptography" by authors Sattam S. Al-Riyami and Kenneth G. Paterson, they have considered generation of private keys by a Key Generation Center (KGC). If the ... 726 views Making counter (CTR) mode robust against application state loss Counter (CTR) mode, which is a block cipher mode of operation, has some desirable qualities (no padding, parallel encryption and decryption), but at the cost of failing badly when non-unique counter ... 2k views Approach towards anonymous e-voting I want to implement an internet-based e-voting system. Voters shall be able to cast their vote for one out of n possible candidates. Each candidate has his own ballot-box kept by and at a trustworthy ... 
603 views Generating a cryptographically secure, many-time use, symmetric encryption key I need to generate a 256 bit encryption key described by the adjectives in the title. Currently I intend to create the key using this RNG. Is this a secure manner of creating the key, given that it ... 2k views Hill cipher cryptanalysis - known plaintext known key size Hello, I want to know how to go about this problem. I know the plaintext "abcdef" and the ciphertext. The key size is 2. I really can't figure out how to find the key for decrypting and encrypting. 356 views Proof of correctness of a homomorphic ElGamal sum Let's suppose we are using the exponential ElGamal as a public-key encryption scheme, so that we encrypt $g^m$ instead of $m$, for some generator $g$. Let $x$ be the private key, and $h=g^x$ be the ... 415 views Does CBC encryption of a hash provide authenticity? Given a message $M$ and a cryptographic hash function $H$, let $f(M) = E_K(M || H(M))$ where $E_K$ is AES-128-CBC encryption with PKCS#5 padding. Take $H = \textrm{SHA-256}$ if it matters. In other ... 447 views Attacks against El Gamal private key El Gamal encryption involves picking $(p,g,b)$ which is our public key. We compute $b = a^x \bmod p$. Here, $x$ is the private key which we don't know. What are some efficient and strong algorithms ... 276 views Factors of RSA modulus In the article A Method for Obtaining Digital Signatures and Public-Key Cryptosystems, the original RSA article, it is mentioned that Miller has shown that n (the modulus) can be factored using any ... 331 views Homomorphic Encryption: how does the equality test on ciphertexts work? Let's suppose we have an asymmetric crypto-system $H$ which is homomorphic with respect to some function $F$. Alice encrypts a message $m$ with her private key $e$ in the crypto-system $H$ and ... 11k views How can one securely generate an asymmetric key pair from a short passphrase? 
Background info: I am planning on making a filehost with which one can encrypt and upload files. To protect the data against any form of hacking, I'd like not to know the encryption key ($K$) used for ... 39k views Difference between stream cipher and block cipher A typical stream cipher encrypts plaintext one byte at a time, although a stream cipher may be designed to operate on one bit at a time or on units larger than a byte at a time. A block cipher ... 13k views Why is public-key encryption so much less efficient than secret-key encryption? I'm currently reading Cryptography Engineering. After giving a high level explanation of the difference between secret-key encryption and public-key encryption, the book says: So why do we bother ... 3k views What is the post-quantum cryptography alternative to Diffie-Hellman? Post-quantum cryptography concentrates on cryptographic algorithms that remain secure in the face of large scale quantum computers. In general, the main focus seems to be on public-key encryption ... 7k views What is the relation between RSA & Fermat's little theorem? I came across this while refreshing my cryptography brain cells. From the RSA algorithm I understand that it somehow depends on the fact that, given a large number (A) it is computationally ... 7k views Is there a simple hash function that one can compute without a computer? I am looking for a hash function that is computable by hand (in reasonable time). The function should be at least a little bit secure: There should be no trivial way to find a collision (by hand). For ... 17k views What is the difference between CBC and GCM mode? I am trying to learn more about GCM mode and how it differs from CBC. I already know that GCM provides a MAC which is used for message authentication. From what I have read, and seen code ... 9k views What are preimage resistance and collision resistance, and how can the lack thereof be exploited? 
What is "preimage resistance", and how can the lack thereof be exploited? How is this different from collision resistance, and are there any known preimage attacks that would be considered feasible? 1k views What are the differences between proofs based on simulation and proofs based on games? what are the main pros and cons of proving the "security" of a crypto scheme under simulation proofs instead of game based proofs? 8k views Is Diffie-Hellman mathematically the same as RSA? Is the Diffie-Hellman key exchange the same as RSA? Diffie Hellman allows key exchange on a observed wire – but so can RSA. Alice and Bob want to exchange a key – Big brother is watching everything. ... 3k views Is sharing the modulus for multiple RSA key pairs secure? In the public-key system RSA scheme, each user holds beyond a public modulus $m$ a public exponent, $e$, and a private exponent, $d$. Suppose that Bob's private exponent is learned by other users. ... 5k views For what kind of usage should we prefer using Diffie-Hellman in order to exchange keys instead of ElGamal, and most important why should we use one or the other? I do not see a clear difference ... 4k views Does the generator size matter in Diffie-Hellman? For the Diffie-Hellman protocol I've heard that the generator 3 is as safe as any other generator. Yet, 32-bit or 256-bit exponents are sometimes used as generators. What is the benefit of using ... 2k views Why does SHA-1 have 80 rounds? Why does SHA-1 algorithm have exactly 80 rounds? Is it to reduce collisions? If yes, then why do SHA-2 and SHA-3 have a lower number of rounds? 11k views Why is TLS susceptible to protocol downgrade attacks? A recent blog post from Ivan Ristić (expert extraordinaire on all things SSL) says: all major browsers are susceptible to protocol downgrade attacks; an active MITM can simulate failure conditions ... 
10k views PBKDF2 and salt I want to ask some questions about the PBKDF2 function and generally about the password-based derivation functions. Actually we use the derivation function together with the salt to provide ... 14k views How to solve MixColumns I can't really understand MixColumns in Advanced Encryption Standard, can anyone help me how to do this? I found some topic in the internet about MixColumns, but I still have a lot of question to ... 8k views How does a chosen plaintext attack on RSA work? How can one run a chosen plaintext attack on RSA? If I can send some plaintexts and get the ciphertexts, how can I find a relation between them which helps me to crack another ciphertext? 640 views Finding roots in $\mathbb{Z}_p$ Is there an efficient way (algorihtm) to compute the solutions of the congruence $x^n\equiv a \pmod p$? $n\in \mathbb{N}$ $a\in\mathbb{Z}_p$ $p$ is a large prime number Note that: By efficient I ... 2k views How does HOTP keep in sync? My understanding of HOTP is that each password is unique and based on a counter. $$PASSWORD = HOTP_1(K,C)$$ Where $C$ is an incremental counter. What I wish to know, is how you keep the client ...
## Introducing a Data Blending API (Support) in Cube.js

Sometimes in our daily data visualization, we need to merge several similar data sources so that we can manipulate everything as one solid bunch of data. For example, we may have an omnichannel shop where online and offline sales are stored in two tables. Or, we may have similar data sources that have only a single common dimension: time. How can we calculate summary metrics for a period? Joining by time is the wrong way because we can't apply granularity to get the summary data correctly. Furthermore, how can we find seasonal patterns from summarized metrics? And how can we get and process data synchronously to track correlations between channels? Well, the new data blending functionality in version 0.20.0 of Cube.js takes care of all these cases. If you're not familiar with Cube.js yet, please have a look at this guide. It will show you how to set up the database, start a Cube.js server, and get information about data schemas and analytical cubes. Please keep in mind that we used a different dataset here:

```shell
$ curl http://cube.dev/downloads/ecom2-dump.sql > ecom2-dump.sql
$ createdb ecom
$ psql --dbname ecom -f ecom2-dump.sql
```

Now let's dive into the metrics for an example shop and visualize sales by channel and as a summary. Here is the full source and live demo of the example. I used React to implement this example, but querying in Cube.js works the same way as in Angular, Vue, and vanilla JS.
Our schema has two cubes. Orders.js:

```javascript
cube(`Orders`, {
  sql: `SELECT * FROM public.orders`,

  measures: {
    count: {
      type: `count`,
    },
  },

  dimensions: {
    id: {
      sql: `id`,
      type: `number`,
      primaryKey: true,
    },
    createdAt: {
      sql: `created_at`,
      type: `time`,
    },
  },
});
```

and OrdersOffline.js:

```javascript
cube(`OrdersOffline`, {
  sql: `SELECT * FROM public.orders_offline`,

  measures: {
    count: {
      type: `count`,
    },
  },

  dimensions: {
    id: {
      sql: `id`,
      type: `number`,
      primaryKey: true,
    },
    createdAt: {
      sql: `created_at`,
      type: `time`,
    },
  },
});
```

The existence of at least one time dimension in each cube is a core requirement for merging the data properly. In other words, the data are suitable for blending only if you can present the data on a timeline. Sales statistics or two lists of users that both have an account created date are appropriate datasets for data blending. However, two lists of countries with only a population value can't be united this way.

A Special Query Format for Data Blending

A simple and minimalistic approach is to apply data blending to a query object when we retrieve data from our frontend application. The schema and backend don't need to be changed.

```javascript
const { resultSet } = useCubeQuery([
  {
    measures: ['Orders.count'],
    timeDimensions: [
      {
        dimension: 'Orders.createdAt',
        dateRange: ['2022-01-01', '2022-12-31'],
        granularity: 'month',
      },
    ],
  },
  {
    measures: ['OrdersOffline.count'],
    timeDimensions: [
      {
        dimension: 'OrdersOffline.createdAt',
        dateRange: ['2022-01-01', '2022-12-31'],
        granularity: 'month',
      },
    ],
  },
]);
```

The blended data is an array of query objects, so we just combine regular Cube.js query objects into an array with a defined dateRange and granularity. As a result, Cube.js returns an array of regular resultSet objects. But what if we want to do calculations over blended data sources or create custom metrics? For example, how can we define ratios calculated using data from two sources? How can we apply formulas that depend on data from multiple sources? In this case, we can use another data blending function.
We start by setting up a new cube.

Data Blending Implementation within a Schema

Let's create AllSales.js inside the schema folder:

```javascript
cube(`AllSales`, {
  sql: `
    select id, created_at, 'OrdersOffline' row_type from ${OrdersOffline.sql()}
    UNION ALL
    select id, created_at, 'Orders' row_type from ${Orders.sql()}
  `,

  measures: {
    count: {
      sql: `id`,
      type: `count`,
    },
    onlineRevenue: {
      type: `count`,
      filters: [{ sql: `${CUBE}.row_type = 'Orders'` }],
    },
    offlineRevenue: {
      type: `count`,
      filters: [{ sql: `${CUBE}.row_type = 'OrdersOffline'` }],
    },
    onlineRevenuePercentage: {
      sql: `(${onlineRevenue} / NULLIF(${onlineRevenue} + ${offlineRevenue} + 0.0, 0)) * 100`,
      type: `number`,
    },
    offlineRevenuePercentage: {
      sql: `(${offlineRevenue} / NULLIF(${onlineRevenue} + ${offlineRevenue} + 0.0, 0)) * 100`,
      type: `number`,
    },
    commonPercentage: {
      sql: `${onlineRevenuePercentage} + ${offlineRevenuePercentage}`,
      type: `number`,
    },
  },

  dimensions: {
    createdAt: {
      sql: `created_at`,
      type: `time`,
    },
    revenueType: {
      sql: `row_type`,
      type: `string`,
    },
  },
});
```

Here we've applied a UNION statement to blend data from two tables, but it's possible to combine even more. Using this approach, we can easily define and combine values from several blended data sources. We can even use calculated values and SQL formulas. We can retrieve data from frontend applications and process the results in the usual way:

```javascript
const { resultSet: result } = useCubeQuery({
  measures: [
    'AllSales.onlineRevenuePercentage',
    'AllSales.offlineRevenuePercentage',
    'AllSales.commonPercentage',
  ],
  timeDimensions: [
    {
      dimension: 'AllSales.createdAt',
      dateRange: ['2022-01-01', '2022-12-31'],
      granularity: 'month',
    },
  ],
});
```

Conclusion

If we need to visualize data from several sources and apply time granularity to the data, then with data blending, we need to write less code and we can simplify the application logic. We looked at two ways to implement data blending: We retrieved data as an array of query objects from a frontend application. 
This is simple to do and the schema doesn't need to be modified. We can even merge data from several databases. Furthermore, we can retrieve and process independent data synchronously so that we can visualize it on a timeline. We also blended data by defining a special cube in a schema. This approach allows us to apply aggregate functions to all sources simultaneously, and we can define calculated values. We hope this tutorial will help you to write less code and build more creative visualizations. If you have any questions or feedback, or you want to share your projects, please use our Slack channel or mention us on Twitter. Also, don't forget to sign up for our monthly newsletter to get more information about Cube.js updates and releases. Originally published at https://cube.dev on October 1, 2020.

## Time Series Forecasting: KNN vs. ARIMA

It is always hard to find a proper model to forecast time series data. One of the reasons is that models that use time-series data are often exposed to serial correlation. In this article, we will compare k nearest neighbor (KNN) regression, which is a supervised machine learning method, with a more classical stochastic process, the autoregressive integrated moving average (ARIMA). For forecasting, we will use the monthly prices of refined gold futures (XAUTRY) for one gram in Turkish lira traded on BIST (Istanbul Stock Exchange). We created the data frame starting from 2013. You can download the relevant Excel file from here.

```r
# Building the time series data
library(readxl)
df_xautry <- read_excel("xau_try.xlsx")
xautry_ts <- ts(df_xautry$price, start = c(2013, 1), frequency = 12)
```

KNN Regression

We are going to use the tsfknn package, which can be used to forecast time series in the R programming language.
The KNN regression process consists of instance, features, and targets components. Below is an example to understand the components and the process.

```r
library(tsfknn)
pred <- knn_forecasting(xautry_ts, h = 6, lags = 1:12, k = 3)
autoplot(pred, highlight = "neighbors", faceting = TRUE)
```

The lags parameter indicates the lagged values of the time series data. The lagged values are used as features or explanatory variables. In this example, because our time series data is monthly, we set the lags parameter to 1:12. The last 12 observations of the data build the instance, which is shown by purple points on the graph. This instance is used as a reference vector to find the features that are the closest vectors to it. The relevant distance metric is calculated by the Euclidean formula as shown below: $\sqrt {\displaystyle\sum_{x=1}^{n} (f_{x}^i-q_x)^2 }$ Here $q_x$ denotes the instance and $f_{x}^i$ the features, which are ranked in order by the distance metric. The k parameter determines the number of closest feature vectors, which are called the k nearest neighbors. The nearest_neighbors function shows the instance, the k nearest neighbors, and the targets.
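The distance ranking and target averaging described above can be sketched in plain Python (a toy helper of our own, not the tsfknn implementation): build every lagged feature vector, rank the vectors by Euclidean distance to the instance, and average the h-step targets of the k nearest ones (the MIMO strategy).

```python
import math

def knn_forecast(series, lags, k, h):
    """Forecast the next h values by averaging the h-step targets of the
    k lag-vectors closest to the instance (MIMO strategy sketch)."""
    # The instance: the last `lags` observations of the series.
    instance = series[-lags:]
    candidates = []
    # Every window of `lags` values followed by h target values is a candidate.
    for i in range(len(series) - lags - h + 1):
        features = series[i:i + lags]
        targets = series[i + lags:i + lags + h]
        dist = math.sqrt(sum((f - q) ** 2 for f, q in zip(features, instance)))
        candidates.append((dist, targets))
    # Keep the k feature vectors closest to the instance ...
    neighbors = sorted(candidates, key=lambda c: c[0])[:k]
    # ... and average their targets component-wise to get the forecast.
    return [sum(t[j] for _, t in neighbors) / k for j in range(h)]

series = [float(x) for x in range(1, 25)]  # a toy uptrend, like the gold series
print(knn_forecast(series, lags=12, k=3, h=2))
```

On a strict uptrend the nearest neighbors are the most recent windows, so the forecast is simply the average of the last few observed continuations, which is why KNN lags behind a trending series.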
```r
nearest_neighbors(pred)
#$instance
#Lag 12 Lag 11 Lag 10  Lag 9  Lag 8  Lag 7  Lag 6  Lag 5  Lag 4  Lag 3  Lag 2
#272.79 277.55 272.91 291.12 306.76 322.53 345.28 382.02 384.06 389.36 448.28
# Lag 1
#462.59
#
#$nneighbors
#  Lag 12 Lag 11 Lag 10  Lag 9  Lag 8  Lag 7  Lag 6  Lag 5  Lag 4  Lag 3  Lag 2
#1 240.87 245.78 248.24 260.94 258.68 288.16 272.79 277.55 272.91 291.12 306.76
#2 225.74 240.87 245.78 248.24 260.94 258.68 288.16 272.79 277.55 272.91 291.12
#3 223.97 225.74 240.87 245.78 248.24 260.94 258.68 288.16 272.79 277.55 272.91
#   Lag 1     H1     H2     H3     H4     H5     H6
#1 322.53 345.28 382.02 384.06 389.36 448.28 462.59
#2 306.76 322.53 345.28 382.02 384.06 389.36 448.28
#3 291.12 306.76 322.53 345.28 382.02 384.06 389.36
```

Targets are the time-series values that come right after the nearest neighbors, and their number is the value of the h parameter. The targets of the nearest neighbors are averaged to forecast the future h periods: $\displaystyle\sum_{i=1}^k \frac {t^i} {k}$ As you can see from the above plotting, features or targets might overlap the instance. This is because the time series data has no seasonality and is in a specific uptrend. The process we have described so far is called the MIMO (multiple-input-multiple-output) strategy, which is the default forecasting strategy used with KNN.

Decomposing and analyzing the time series data

Before we build the model, we first analyze whether the time series data shows seasonality. The decomposition analysis is used to calculate the strength of seasonality, as shown below:

```r
# Seasonality and trend measurements
library(fpp2)
fit <- stl(xautry_ts, s.window = "periodic", t.window = 13, robust = TRUE)
seasonality <- fit %>% seasonal()
trend <- fit %>% trendcycle()
remain <- fit %>% remainder()

# Trend strength
1 - var(remain)/var(trend + remain)
#[1] 0.990609

# Seasonality strength
1 - var(remain)/var(seasonality + remain)
#[1] 0.2624522
```

The stl function is a decomposing time series method.
STL is short for seasonal and trend decomposition using loess, where loess is a method for estimating nonlinear relationships. The t.window (trend window) is the number of consecutive observations used for estimating the trend and should be an odd number. The s.window (seasonal window) is the number of consecutive years used to estimate each value in the seasonal component; in this example, it is set to 'periodic' so that the seasonal component is the same for all years. The robust parameter is set to TRUE, which means that outliers won't affect the estimates of the trend and seasonal components. When we examine the results from the above code chunk, we see a strong uptrend with a trend strength of 0.99 and a weak seasonality strength of 0.26; any value less than 0.4 is accepted as a negligible seasonal effect. Because of that, we will prefer the non-seasonal ARIMA model.

Non-seasonal ARIMA

This model combines differencing with autoregression and a moving average. Let's explain each part of the model. Differencing: First of all, we have to explain stationary data. If the data contain no informative pattern such as trend or seasonality (in other words, if they are white noise), the data are stationary. A white noise time series has no autocorrelation at all. Differencing is a simple arithmetic operation that takes the difference between two consecutive observations to make the data stationary. $y'_t=y_t-y_{t-1}$ The above equation shows the first differences, that is, the difference at lag 1. Sometimes, the first difference is not enough to obtain stationary data; hence, we might have to difference the time series data one more time (second-order differencing). In autoregressive models, the target variable is a linear combination of its own lagged values. This means the explanatory variables of the target variable are past values of that same variable. The AR(p) notation denotes the autoregressive model of order p, and $\boldsymbol\epsilon_t$ denotes the white noise.
$y_t=c + \phi_1 y_{t-1}+ \phi_2y_{t-2}+...+\phi_py_{t-p}+\epsilon_t$

Moving average models, unlike autoregressive models, use past error (white noise) values as predictor variables. The MA(q) notation denotes the moving average model of order q.

$y_t=c + \epsilon_t + \theta_1 \epsilon_{t-1}+ \theta_2\epsilon_{t-2}+...+\theta_q\epsilon_{t-q}$

If we integrate differencing with autoregression and the moving average model, we obtain the non-seasonal ARIMA model, which is short for autoregressive integrated moving average. $y_t'$ is the differenced data, and we must remember that it may have been differenced once or twice (first- or second-order differencing). The explanatory variables are both lagged values of $y_t$ and past forecast errors. This is denoted as ARIMA(p,d,q), where p is the order of the autoregressive part; d, the degree of differencing; and q, the order of the moving average part.

Modeling with non-seasonal ARIMA

Before we model the data, we first split it into training and test sets so we can calculate the accuracy of the ARIMA model.

```r
# Splitting the time series into training and test data
test <- window(xautry_ts, start = c(2019, 3))
train <- window(xautry_ts, end = c(2019, 2))

# ARIMA modeling
library(fpp2)
fit_arima <- auto.arima(train, seasonal = FALSE,
                        stepwise = FALSE, approximation = FALSE)
fit_arima
#Series: train
#ARIMA(0,1,2) with drift
#
#Coefficients:
#          ma1      ma2   drift
#      -0.1539  -0.2407  1.8378
#s.e.   0.1129   0.1063  0.6554
#
#sigma^2 estimated as 86.5:  log likelihood=-264.93
#AIC=537.85   AICc=538.44   BIC=547.01
```

As seen in the code chunk above, the stepwise = FALSE and approximation = FALSE parameters are used to widen the search across all possible models. The drift component indicates the constant c, which is the average change in the historical data. From the results above, we can see that there is no autoregressive part in the model, but there is a second-order moving average with first differencing.
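To make the differencing step concrete, here is a small sketch in plain Python (our own toy helper, unrelated to the R code): first differencing turns a linear trend into a constant series, i.e. stationary, and differencing twice reduces it to zeros.

```python
def difference(series, order=1):
    """Apply differencing `order` times: y'_t = y_t - y_(t-1)."""
    for _ in range(order):
        series = [cur - prev for prev, cur in zip(series, series[1:])]
    return series

# A deterministic linear uptrend: first differences are constant,
# second differences are all zero.
trend = [5, 7, 9, 11, 13]
print(difference(trend))           # [2, 2, 2, 2]
print(difference(trend, order=2))  # [0, 0, 0]
```

This is the "I" (integrated, d) part of ARIMA(p,d,q); the fitted ARIMA(0,1,2) above applies exactly one such differencing pass before the MA(2) terms.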
Modeling with KNN

```r
# Modeling and forecasting
library(tsfknn)
pred <- knn_forecasting(xautry_ts, h = 18, lags = 1:12, k = 3)

# Forecasting plot for KNN
autoplot(pred, highlight = "neighbors", faceting = TRUE)
```

Forecasting and accuracy comparison between the models

```r
# ARIMA accuracy
f_arima <- fit_arima %>% forecast(h = 18) %>% accuracy(test)
f_arima[, c("RMSE", "MAE", "MAPE")]
#                  RMSE       MAE      MAPE
#Training set  9.045488  5.529203  4.283023
#Test set     94.788638 74.322505 20.878096
```

For forecasting accuracy, we take the results for the test set shown above.

```r
# Forecasting plot for ARIMA
fit_arima %>% forecast(h = 18) %>% autoplot() + autolayer(test)

# KNN accuracy
ro <- rolling_origin(pred, h = 18, rolling = FALSE)
ro$global_accu
#     RMSE       MAE      MAPE
#137.12465 129.77352  40.22795
```

The rolling_origin function is used to evaluate the accuracy based on a rolling origin. The rolling parameter is set to FALSE, which makes the last 18 observations the test set and the remainder the training set, just as we did for the ARIMA modeling before. The test set would not be a constant vector if we had set the rolling parameter to its default value of TRUE. For example, with h = 6 and rolling set to TRUE, the test set changes dynamically from 6 down to 1 observation, and the sets eventually build a matrix, not a constant vector.

```r
# Accuracy plot for KNN
plot(ro)
```

When we compare the results of the accuracy measurements like RMSE or MAPE, we can easily see that the ARIMA model is much better than the KNN model for our non-seasonal time series data. The original article is here.

## Post-scripting to Deal with Complex SQL Queries

SQL (or the stored procedure) can handle most database computations. If the computations are complex or hard to handle in SQL, we use another programming language to read the data out of the database and manipulate it. 
Such a programming language handles the data reading and manipulation with a simple script, so we call the process post-SQL scripting. The scenarios that SQL is not good at handling include complex set-based operations, order-based operations, associative operations, and multi-step computations. Due to SQL's incomplete set orientation and lack of an explicit set data type, it is almost impossible to reuse the intermediate sets generated during a computation, and the forced aggregation after each grouping operation makes it impossible to use the post-grouping subsets. SQL is also weak at order-based computations, like inter-row (inter-group) computations and ranking operations; it generates temporary sequence numbers using JOINs or subqueries, making programs hard to write and slow to compute. Record reference is another of SQL's incapabilities. The language uses a subquery or a JOIN statement to express the relationship, and the code becomes ridiculously complicated when there are multiple data levels or when self-joins are needed. SQL doesn't foster multi-step coding, either. Programmers have to write very long queries containing layers of subqueries. Though stored procedures can alleviate the problem, they are not always available: DBAs have strict rules about the privileges for using stored procedures, old and small databases don't support them, and it's inconvenient to debug a stored procedure, which makes it unsuitable for procedural computation. There are other scenarios that require post-SQL scripting as well: migrating an algorithm between different database products or between a database and a non-relational data store, reading from or writing to a file rather than the database, and mixed computations across multiple databases, for example. All these external database computations need post-SQL scripting. The most important role of a post-SQL script is to achieve the complex computations that SQL is not good at.
Ideally, the scripting tool would also handle data from various sources (files and non-relational databases, for example), cope with a relatively large amount of data, and deliver satisfactory performance. But the basic requirement is that it should be able to perform database reads and writes conveniently. The popular post-SQL scripting tools are Java, Python pandas, and esProc. Now let's look at and examine their scripting abilities.

JAVA

High-level languages, such as C++ and Java, are theoretically almighty and thus able to manage the computations that SQL struggles with. Java supports generic types and has relatively comprehensive set orientation to handle complex set-based computations. A Java array has intrinsic sequence numbers, which makes order-based computations convenient. Java can express a relationship using object references and so handles join operations well. It also supports procedural syntax, including branches and loops, to achieve complex multi-step computations. Unfortunately, Java lacks class libraries for structured computations: even the simplest structured computations must be hard-coded, and even the most basic data types must be created manually. That makes Java code lengthy and complicated. Here's an example of an order-based computation: get the longest run of consecutively rising trading days for a stock. The database table AAPL stores the stock's price information in fields including the transaction date and closing price. In an intuitive way, we loop through the records ordered by date: if the closing price of the current record is greater than that of the previous one, we add 1 to the count of consecutive rising days (initially 0); otherwise, we compare the count with the current largest count (initially 0), keep the larger value as the new largest count, and reset the count to 0. We repeat the process until the loop is over, and the largest count at that point is the final desired result.
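The intuitive loop described above can be sketched in a few lines (shown here in Python for brevity; the function and column names are our own illustration, not the article's code):

```python
def longest_rising_streak(closing_prices):
    """Length of the longest run of consecutively rising closing prices.
    `closing_prices` is assumed ordered by transaction date."""
    longest = 0   # largest number of consecutively rising days seen so far
    current = 0   # current run of consecutively rising days
    for prev, cur in zip(closing_prices, closing_prices[1:]):
        if cur > prev:
            current += 1                    # price rose: extend the streak
            longest = max(longest, current)
        else:
            current = 0                     # price fell or stayed flat: reset
    return longest

print(longest_rising_streak([10, 11, 12, 11, 12, 13, 14, 13]))  # → 3
```

Note that the largest count is updated inside the loop rather than only on resets, so a streak that runs to the end of the data is not missed.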
SQL can't implement this intuitive algorithm because it doesn't support order-based computations directly. But it has its own tricks. It groups the stock records ordered by date in this way: put records whose closing prices rise consecutively in one group, that is, group the current record with the previous one if its closing price rises and start a new group if it falls; then count the members in every group and find the largest count, which is the largest number of consecutively rising days. Examples of such SQL code can be found here. ## Red Herren: SEC Commissioner shovels climate nonsense at NYTimes readers SEC Commissioner Allison Herren Lee has an op-ed in today's New York Times cheerleading for enhanced corporate disclosure of climate risks. She has no idea what she's talking about. Below is my line-by-line take-down. And don't forget about my SEC petition to stop climate lying. As the climate crisis intensifies, ['Intensified'? How is that? … After more than 10 years of being involved with Data Science Central, initially as the founder, and most recently through its acquisition by TechTarget, I have decided to pursue new interests. TechTarget now has a great team taking care of DSC, and I will still be involved as a consultant to make sure that everything continues to run smoothly and that the quality standards are maintained and even enhanced. In my new role, I will become a contributor and write articles for DSC, so I will still be visible, even more than before as I find more time to write articles. I am happy to announce that TechTarget has just brought on Kurt Cagle as the new Community Manager for DSC. Kurt is a former DSC blogger who also lives in WA, two miles away from my place where DSC was created (so it will be easy to meet in person to explain all the tricks), and he will take over many of my responsibilities, including interactions with the community.
As for me, I will be spending more needed time in my new restaurant opening next month, Paris Restaurant in Anacortes, WA. My upcoming articles will be original, in the same style, aimed at explaining sometimes advanced, new, original ML concepts and recipes in layman's terms to a large range of analytics professionals: data scientists, executives, decision makers, and consumers of analytic products. I wish all the best to Kurt and TechTarget. I am very happy to have worked with the stellar TechTarget team after the acquisition, and to continue to work with former DSC colleagues who were hired by TechTarget and are still there and happy today. This relationship will continue to grow in the foreseeable future, with TechTarget's strong commitment to the DSC community. Best, Vincent ## GUIs and the Future of Work We'll see more data work done in GUIs, via drag-and-drop and point-and-click interfaces. ## Weekly Digest, September 28 Previous editions can be found here. The contribution flagged with a + is our selection for the picture of the week. Announcement / Featured Resources and Technical Contributions / Featured Articles / Picture of the Week (source: article flagged with a +). To make sure you keep getting these emails, please whitelist us. To subscribe, click here. ## 6 Best AI-Based Apps in 2020: A Brief Introduction Artificial Intelligence (AI) is one of the best-recognized and most visible technologies in this age of science and technology. It allows tasks to be accomplished in less time, more reliably and consistently. This innovative technology enables you to simplify your activities, sharpen your thinking, and speed up processes; used in the proper direction, AI will essentially assist you in making money. AI is being used to develop both mobile and web-based applications. In the past years, AI-controlled solutions and software for iOS and Android have been increasingly emerging in the market.
Excessive use of smartphones is a crucial factor in the development of robust AI applications. Below are six of the most successful AI-based apps available in the market for iOS and Android: • Cortana: Cortana is a virtual assistant developed by Microsoft, and currently one of the most successful AI-based apps. Using Cortana, you can keep your calendar and schedule up to date, create and maintain lists, set reminders and alarms, find facts, definitions and information, and much more. It can respond to questions using data from the Bing search engine: for example, weather and traffic conditions, sports scores, and biographies. • Wysa: Wysa is an AI-driven chatbot that is incredibly friendly when you're far from everyone, letting you start a conversation with confidence even when there is no one to talk to. Wysa also helps you to address a variety of issues, including depression, anxiety, sleep, problems regarding the LGBTQ+ community, and much more. • ELSA: ELSA is an AI-powered application that works as a digital English tutor. Users can learn to speak English more fluently without much of a push using this application. ELSA utilizes patented speech technology with Machine Learning (ML) and Artificial Intelligence (AI) to identify people's pronunciation errors. ELSA has a variety of features designed to bring users to a solid level of spoken English; its voice recognition technology lets you practice against standard American pronunciation. • FaceApp: FaceApp is an incredibly well-known AI-driven photo-morphing application for the Android and iOS platforms. It helps transform ordinary photos into amazing ones. With the help of this smart application, you can change your overall appearance, such as adding a smile or a haircut, to give yourself a unique look.
• Google Allo: With this app, users can send messages without typing: the user simply tells Google Allo what to write in a message. With it, you can send unlimited messages free of charge to anyone. Google Allo also has a browser interface and runs on Google Chrome, Mozilla Firefox and Opera. • Meitu: Meitu is one of the most magnificent beauty-filter applications and can make your selfies more gorgeous. This application modifies your photos using its built-in advanced AI algorithms; you can also add text and blur effects. Many users are attracted to this application as it has a variety of makeup options that allow them to hide acne, trim the face, firm the skin, widen the eyes, and remove dark circles. Conclusion: In this cutting-edge world, AI is turning into a helpful tool for an enormous number of companies. A few major companies have begun using AI-based software, and a gigantic number of small firms have likewise begun to put resources into AI to improve their competitiveness through effectiveness, innovation, and profitability. Remember, after finalizing the idea you will be working on, the next step is to connect with a good mobile app development company that will design and develop the app. You also have to keep up with the latest market trends and analyze future trends. ## Common Technical Characteristics of Some Major Stock Market Corrections In this blog, I will be covering some charts based on the technical data of three major stock market corrections: the Crash of 1987, the Dot-com Correction of 2000, and the Great Crash of 1929. By technical data, I mean that the charts are produced using stock market prices.  More than a month ago, I wrote about opening my first stock trading account in 1987; this is the same year I was hired for my first regular full-time job.  I worked as a store clerk on Bay Street, which is arguably the Canadian equivalent of Wall Street.  I considered it a reasonably enjoyable existence.
But then came the stock market crash . . . I recently introduced a group of risk-based market metrics that support a trading method I call "Skipjack", named after the tuna.  A chart of a good skipjack might actually resemble a tuna. While the simulations running on an application that I call Simtactics suggest that the methodology is often and perhaps generally applicable, sometimes it is quite challenging to beat the market on a total-return basis.  This caused me to use the system on large amounts of market data to ascertain the reasons.  However, I needed a market player more consistent than myself in order to remove irregularities extending from my unique personal qualities. I worked on an autopilot feature for Simtactics that eventually led to the Levi-Tate Group of Market Metrics.  Yes, I own the domain names. The Levi-Tate metrics make it possible to produce a type of topographical or algomorphological chart that I call a Thunderbird Chart.  A form of this chart that I call an Advantage Chart follows the extent to which the autopilot is able to beat the market.  I found that the chart changes significantly over time.  I therefore took time slices that are about equidistant in terms of their periodicity.  At this point, it became possible to study how the array or gradient of algorithmic combinations responds to stock market conditions over time.  This leads me to the special charts on this blog that allow me to examine the progression of the algomorphology in a truly developmental sense.  At this level, I confess the charts are a bit complicated both to explain and to present.  A 12x12 grid containing 144 combinations from 1 to 56 results in 144 rows per column of time slice.  Rather than dwell on the difficulties of visualizing the exact details, please consider taking my word for it in relation to the general details that I am about to present. On the chart below, there is a 0 on the y-axis.
Above the 0, the Simtactics trigger package was able to beat the market using a particular combination.  Below 0, the application's apparatus failed to beat the market.  The chart below makes use of data from the Dow Jones Industrial Average one year before the Crash of 1987.  Notice that a fair number of algorithmic pairs are above 0, which means that they successfully beat the market.  As I mentioned earlier, it isn't unusual for these metrics to beat the market.  We know, however, that quite a horrific crash occurred in 1987.  I will therefore check out the chart immediately before the crash. This being a method of technical analysis, it is strictly based on pricing.  I do quite a lot with the pricing, of course. (Technical analysis also allows for the use of volume.) Below is the chart immediately before the crash.  Sometimes when a crash occurs, the exact dates are unclear.  I know in a sense it should be clear, like an automobile crash.  But in the case of the stock market, the crash might develop over a period of time.  In any event, many people associate October 16, 1987 with the date of the crash.  The chart below stops at October 15, 1987: this is because my objective is to study the technical characteristics before the crash to determine if they can predict what is about to occur.  Notice how this chart is quite different from 1986: most of the combinations led to outcomes below the 0.  I point out also that the pattern began to splinter after October 5, 1987 (before the crash).  The takeaway here is that before a major correction, the algorithmic outcomes generally fall below 0.  More than 90 percent of the algorithmic pairs failed to beat the market, yet many of Simtactics' algorithmic traders were becoming noticeably more successful by October 15, 1987. I added the next chart just to simplify my message, since I realize that the one immediately above presents information in a fairly complicated manner.
The bar chart below shows that the algorithmic failure rate was extremely high for some time before the Crash of 1987.  But just before the crash, the artificial traders were becoming much more successful at beating the market.  I suggest, actually, that a "normal" condition of risk diversity was being restored to an otherwise highly polarized market.  There are cases of humans remaining in dangerous, toxic, and unhealthy conditions by rationalizing away the perceived risks.  After all, this type of insulation is possible using alcohol or drugs.  But certainly there are social mechanisms that can lead people to hold beyond their natural risk tolerance levels.  Another idea that I think has some merit is how survivorship might reward the risk tolerant, or condition those whose tolerance levels are adaptive or elastic.  Many people invest by proxy these days, that is to say, through professional portfolio managers who for their part might make use of algorithmic traders.  There is therefore a level of dissociation built into the system these days separating many investors from their investments.  But as I mentioned in my previous blog, this simply means that human stress perceptions have been replaced by algorithmic risk perceptions.  I believe that humans are capable of much more insulated thinking than professionally designed algorithms. This next chart is from the dot-com correction.  For those who do not recall what caused this correction: essentially, many investors kept trading up the value of the stocks.  It is difficult to determine the extent to which a stock might be overpriced in a new market.  The chart below shows that for the most part, my algorithmic pairs could not beat the market, until of course the market collapsed in a drunken stupor.  The algorithms started to gain steam all at the same time, to the right of the chart.
So there is some evidence that a widespread failure rate is itself an indication of a market that seems more likely to encounter a large correction.  However, within this context of failure, as the chart below shows, there are noticeable systematic developments that appear to almost suggest a reversal in performance en route to the crash. Again I provide a bar chart as simplification.  It shows a very high failure rate persisting before the market buckled.  By the way, for those interested, skipjacking remains possible even if the autopilot appears to be failing.  But I would say that there are fewer clear opportunities for the player to win in a relative sense; certainly it is much more difficult to beat the market.  On the other hand, those that assume the market position are destined to obtain the market return, both positive and negative. No study of stock market crashes would be complete without considering the crash associated with the Great Depression.  I had my fingers crossed, and there it is again!  Nearly all of the algorithmic pairs performed below market immediately before the crash.  The Crash of 1929 is one of those slow-motion horror movies.  The dates on the x-axis indicate my purely technical perspective on when the crash occurred.  Those that know their data really well might point out that my algorithmic pairs seem to start advancing well before the crash.  Exactly.  The sudden increase in effectiveness seems to provide some warning. This leads me to my final chart, which isn't about a crash, given that most of the lines are above 0.  It is actually from trading activity last Friday for a particular ETF.  While most of the lines are beating the market, consider the suspicious downward pattern to the right of the chart.  By the way, this downward motion does not necessarily mean that the price is declining.  It means that the algorithms are becoming less successful than the market.
This poses an interesting question: would it be wise to take a defensive position? Whatever strategy is considered, it would probably be worthwhile to watch for the splintering behaviour that seems to precede stock market crashes. ## GPT3 and AGI: Beyond the dichotomy – part two This blog continues from GPT3 and AGI: Beyond the dichotomy – part one. GPT3 and AGI Let's first clarify what AGI should look like. Consider the movie 'Terminator'. When the Arnold Schwarzenegger character comes to Earth, he is fully functional. To do so, he must be aware of the context. In other words, AGI should be able to operate in any context. Such an entity does not exist, and nor is GPT3 such an entity. But GPT3 does have the capacity to respond 'AGI-like' to a much wider set of contexts than traditional AI systems. GPT3 has many things going for it: • Unsupervised learning is the future. • Linguistic capabilities distinguish humans. • But language is much more than encoding information. At a social level, language involves joint attention to environment, expectations and patterns. • Attention serves as a foundation for social trust. • Hence, AGI needs a linguistic basis, but that needs attention, and attention needs context. So GPT-3 – linguistic – attention – context could lead to AGI-like behaviour. Does AGI need to be conscious as we know it, or would access consciousness suffice? In this context, a recent paper, A Roadmap for Artificial General Intelligence: Intelligence, Knowledge, and Consciousness by Garrett Mindt and Carlos Montemayor, argues that: • integrated information in the form of attention suffices for AGI; • AGI must be understood in terms of epistemic agency (epistemic = relating to knowledge or the study of knowledge); and • epistemic agency necessitates access consciousness.
• Access consciousness: acquiring knowledge for action, decision-making, and thought, without necessarily being conscious. Therefore, the proposal goes, AGI necessitates: • selective attention for accessing information relevant to action, decision-making and memory; • but not necessarily consciousness as we know it. This line of thinking leads to many questions: • Is consciousness necessary for AGI? • If so, should that consciousness be the same as human consciousness? • Intelligence is typically understood in terms of problem-solving. Problem solving by definition leads to a specialized mode of evaluation. Such tests are easy to formulate but check for compartmentalized competency (which cannot be called intelligence). They also do not allow intelligence to 'spill over' from one domain to another, as it does in human intelligence. • Intelligence needs information to be processed in a contextually relevant way. • Can we use epistemic agency through attention as the distinctive mark of general intelligence, even without consciousness (as per Garrett Mindt and Carlos Montemayor)? • In this model, AGI is based on joint attention to preferences in a context-sensitive way. • Would AI be a peer or subservient in the joint attention model? Finally, let us consider the question of spillover of intelligence. In my view, that is another characteristic of AGI. It's not easy to quantify, because tests are currently specific to problem types. A recent example of spillover of intelligence is Facebook AI supposedly inventing its own secret language. The media would have you believe that groups of AGI are secretly plotting to take over humanity. But the reality is a bit more mundane, as explained below. The truth behind Facebook AI inventing a new language: In a nutshell, the system was using reinforcement learning. Facebook was trying to create a robot that could negotiate. To do this, Facebook let two instances of the robot negotiate with each other, and learn from each other.
The only measure of their success was how well they transacted objects. The only rule to follow was to put words on the screen. As long as they were optimizing the goal (negotiating) and understood each other, it did not matter that the language was accurate (or indeed was English). Hence the news about 'inventing a new language'. But to me, the real question is: does it represent intelligence spillover? Much of future AI could be in that direction. To Conclude We are left with some key questions: • Does AGI need consciousness or access consciousness? • What is the role of language in intelligence? • GPT3 has reopened the discussion, but hype and dichotomy remain (neither helps, because hype misdirects discussion and dichotomy shuts down discussion). • Does the 'Bitter lesson' apply? If so, what are its implications? • Will AGI see a take-off point like Google Translate did? • What is the future of bias reduction beyond what we see today? • Can bias reduction improve human insight and hence improve joint attention? • GPT-3 – linguistic – attention – context: if context is the key, what other ways can there be to include context? • Does problem solving compartmentalize intelligence? • Are we comfortable with the 'spillover' of intelligence in AI?
– like in the Facebook experiment
References:
https://towardsdatascience.com/gpt-3-the-first-artificial-general-intelligence-b8d9b38557a1
https://www.gwern.net/GPT-3
http://haggstrom.blogspot.com/2020/06/is-gpt-3-one-more-step-towards.html
https://nordicapis.com/on-gpt-3-openai-and-apis/
https://www.datasciencecentral.com/profiles/blogs/what-is-s-driving-the-innovation-in-nlp-and-gpt-3
https://bdtechtalks.com/2020/08/17/openai-gpt-3-commercial-ai/
https://aidevelopmenthub.com/joscha-bach-on-gpt-3-achieving-agi-machine-understanding-and-lots-more-artificial/
https://medium.com/@ztalib/gpt-3-and-the-future-of-agi-8cef8dc1e0a1
https://www.everestgrp.com/2020-08-gpt-3-accelerates-ai-progress-but-the-path-to-agi-is-going-to-be-bumpy-blog-.html
https://www.theverge.com/21346343/gpt-3-explainer-openai-examples-errors-agi-potential
https://www.theguardian.com/commentisfree/2020/aug/01/gpt-3-an-ai-game-changer-or-an-environmental-disaster
http://dailynous.com/2020/07/30/philosophers-gpt-3/
https://marginalrevolution.com/marginalrevolution/2020/07/gpt-3-etc.html
https://analyticsindiamag.com/gpt-3-is-great-but-not-without-shortcomings/
https://www.3mhisinsideangle.com/blog-post/ai-talk-gpt-3-mega-language-model/
https://venturebeat.com/2020/06/01/ai-machine-learning-openai-gpt-3-size-isnt-everything/
https://discourse.numenta.org/t/gpt3-or-agi/7805
https://futureofintelligence.com/2020/06/30/is-this-agi/
https://www.quora.com/Can-we-achieve-AGI-by-improving-GPT-3
https://bmk.sh/2020/08/17/Building-AGI-Using-Language-Models/
https://news.ycombinator.com/item?id=23891226
# image of a function #### Vali ##### Member I have the following function: f(x) = -2(x+1)/(x-1)^2, where x is from [0,1)U(1,3]. I need to find the image of the function, which is (-infinity, -2]. I tried using the derivative, but when I solve f'(x)=0 I obtain x=-3, which is not in the domain. I don't know how to continue. #### Olinguito ##### Well-known member I presume you got $$f'(x)\ =\ \frac{2(x+3)}{(x-1)^3}$$ which is correct. Notice that $f'(x)<0$ when $x\in[0,\,1)$ and $f'(x)>0$ when $x\in(1,\,3]$. So $f(x)$ is decreasing on $[0,\,1)$ and increasing on $(1,\,3]$. Also, $f(0)=f(3)=-2$ and $f(x)\to-\infty$ as $x$ tends to $1$ from either side. From these, you should be able to piece together the range of $f(x)$. Last edited: Thanks a lot!
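As a quick numerical sanity check (not a proof) of Olinguito's analysis, we can sample $f$ on both intervals and confirm that the values never exceed $-2$ and blow up toward $-\infty$ near $x=1$; the sampling step here is an arbitrary choice:

```python
# Sample f(x) = -2(x+1)/(x-1)^2 over [0, 1) and (1, 3] and inspect the values.
def f(x):
    return -2 * (x + 1) / (x - 1) ** 2

xs = [i / 1000 for i in range(0, 1000)]          # [0, 1) in steps of 0.001
xs += [1 + i / 1000 for i in range(1, 2001)]     # (1, 3] in steps of 0.001
values = [f(x) for x in xs]

print(max(values))  # -> -2.0, attained at the endpoints x = 0 and x = 3
```

The maximum $-2$ at both endpoints, together with $f(x)\to-\infty$ near $x=1$, matches the stated image $(-\infty,-2]$.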
Solving simultaneous congruences Trying to figure out how to solve linear congruence by following through the sample solution to the following problem: $x \equiv 3$ (mod $7$) $x \equiv 2$ (mod $5$) $x \equiv 1$ (mod $3$) Let: $n_1$ = 7 $n_2 = 5$ $n_3 = 3$ $N = n_1 \cdot n_2 \cdot n_3 = 105$ $m_1 = \frac{N}{n_1} = 15$ $m_2 = \frac{N}{n_2} = 21$ $m_3 = \frac{N}{n_3} = 35$ $gcd(m_1,n_1)$ = $gcd(15,7) = 1 = 15 \times 1 - 7 \times 2$ so $y_1 = 1$ and $x_1 = 15$ $gcd(m_2,n_2)$ = $gcd(21,5) = 1 = 21 \times 1 - 5 \times 4$ so $y_2 = 1$ and $x_2 = 21$ $gcd(m_3,n_3)$ = $gcd(35,3) = 1 = -35 \times 1 + 3 \times 12$ so $y_3 = -1$ and $x_3 = -35$ I understand up to this point, but the next line I don't get: So $x = 15 \times 3 + 21 \times 2 - 35 \times 1 \equiv 52$ (mod $105$) Where is the $\times 3$, $\times 2$, $\times 1$ from? Is it just because there are 3 terms, so it starts from 3 then 2 then 1? And where is the 52 coming from? - The $3$, $2$, $1$ come from the right-hand sides of the system of congruences you started with. If we had wanted $x\equiv 4\pmod{7}$, $x\equiv 1\pmod{5}$, $x\equiv 0\pmod{3}$, we would use $4$, $1$, $0$ instead of $3$, $2$, $1$. –  André Nicolas Nov 5 '11 at 18:23 Ah, of course. I didn't notice. thanks. what about the 52? –  Arvin Nov 5 '11 at 18:59 That's easier. $15\times 3+21\times2 +(-35)\times 1=52$. In general we might have gotten a number not in the interval $[0,104]$, and we might want to reduce mod $105$ to get it in that interval, though strictly speaking that is not necessary. –  André Nicolas Nov 5 '11 at 19:05 Although Bill Cook's answer is completely, 100% correct (and based on the proof of the Chinese Remainder Theorem), one can also work with the congruences successively; we know from the CRT that a solution exists. Starting from $x\equiv 3\pmod{7}$, this means that $x$ is of the form $x=7k+3$ for some integer $k$. 
Substituting into the second congruence, we get $$7k+3 \equiv 2\pmod{5}.$$ Since $7\equiv 2\pmod{5}$, and $3\equiv -2\pmod{5}$, this is equivalent to $2k-2\equiv 2\pmod{5}$. Dividing through by $2$ (which we can do since $\gcd(2,5)=1$) we get $k-1\equiv 1\pmod{5}$, or $k\equiv 2\pmod{5}$. That is, $k$ is of the form $k=5r+2$ for some integer $r$. Plugging this into $x=7k+3$ we have $x=7(5r+2)+3 = 35r + 17$. Then plugging this into the third (and last) congruence, we get $$35r + 17\equiv 1\pmod{3}.$$ Now, $35\equiv -1\pmod{3}$, $17\equiv -1\pmod{3}$, so this is the same as $-r-1\equiv 1\pmod{3}$, or $r\equiv -2\equiv 1\pmod{3}$. That is, $r$ is of the form $3t+1$. Plugging that into $x$ we get $$x = 35r+17 = 35(3t+1)+17 = 105t + 52,$$ so that means that $x$ is of the form $x=52+105t$ for some integer $t$; that is, $x\equiv 52\pmod{105}$ is the unique solution modulo $3\times5\times 7$. - Thanks Arturo, I think your answer is a little bit clearer! –  Arvin Nov 6 '11 at 2:04 @Arvin: It is, in fact, very close to Bill Dubuque's, only less clever. –  Arturo Magidin Nov 6 '11 at 2:54 The $3,2,1$ are from the right-hand side of your congruences. We know that $x=3$ is a solution to the first congruence, but this doesn't work as a solution to the next two congruences. So Chinese remaindering tells you to compute $(3\cdot 5)^{-1}=15^{-1}$ mod $7$. You find that this is $1$ (since $15(1)+(-2)7=1$). Thus $x=3\cdot 15\cdot 1$ (which is $\equiv 3 \cdot 15 \cdot 15^{-1} \equiv 3 \pmod 7$) is still a solution of $x \equiv 3 \;\mathrm{mod}\; 7$, since $15\cdot 1 \equiv 1 \;\mathrm{mod}\;7$. What is special about this solution? Well, $x=3\cdot 15\cdot 1$ is also congruent to $0$ modulo $3$ and $5$ (that's why we were messing with $(3\cdot 5)^{-1}=15^{-1}$). Next, $x=2$ solves $x\equiv 2\;\mathrm{mod}\;5$, but again does not solve the other two congruences. So you compute $(3\cdot 7)^{-1}=21^{-1}$ mod $5$. You find that this is $1$ (since $21(1)+5(-4)=1$).
Thus $x=2\cdot 21\cdot 1$ is still a solution of $x \equiv 2\;\mathrm{mod}\; 5$ while it is also congruent to $0$ modulo $3$ and $7$. So now we've found a solution to the second congruence which doesn't interfere with the first and last congruences. Finally, $x=1$ solves the third congruence but not the first two. So you compute $(5 \cdot 7)^{-1}=35^{-1}$ mod $3$. You find that this is $-1$ (since $35(-1)+3(12)=1$). Thus $x=1\cdot 35 \cdot (-1)$ is still a solution of $x\equiv 1\;\mathrm{mod}\;3$ while it is congruent to $0$ modulo $5$ and $7$. So now each congruence has a solution which doesn't interfere with the other congruences. Thus adding the solutions together will solve all three at the same time. Therefore, $x = 3\cdot 15\cdot 1 + 2\cdot 21\cdot 1 + 1\cdot 35 \cdot (-1) = 45+42-35=52$ is a solution to all three congruences. Since $105$ ($=3\cdot 5 \cdot 7$) is $0$ modulo $3$, $5$, and $7$, adding multiples of $105$ will still leave us with a solution to all three congruences. - @Arvin: It might be worthwhile to view things in a more or less identical but more structural way. The number $15$ is congruent to $1$, $0$, $0$ modulo $7$, $5$, $3$; the number $21$ is congruent to $0$, $1$, $0$ modulo $7$, $5$, $3$; the number $-35$ is congruent to $0$, $0$, $1$. (I think $-35$ is kind of silly, would have preferred $70$.) So to get $x\equiv a, b, c$ modulo $7$, $5$, $3$, take the linear combination $a(15)+b(21)+c(-35)$.
–  André Nicolas Nov 5 '11 at 19:35 HINT $\rm\quad y\: :=\: x-2 \equiv 0\pmod 5\$ so $\rm\: y = 5\:j\:.\$ Therefore $\rm\ mod\ 3\!:\ {-}1\equiv y = 5\:j \equiv -j\ \Rightarrow\ j\equiv 1\ \Rightarrow\ y = 5\:(1+3\:k) = 5 + 15\:k$ $\rm\ mod\ 7\!:\ \ \ \ 1 \equiv y = 5 + 15\:k\equiv -2 + k\ \Rightarrow\ k\equiv 3\ \Rightarrow\ y = 5+15\:(3+7\:n) = 50 + 105\:n$ NOTE $\$ Such exploitation of the "law of small numbers" often yields quicker solutions than rote application of algorithms (Chinese Remainder Theorem, extended Euclidean algorithm, Hermite-Smith normal forms, etc.).
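The standard CRT construction used in the answers above can be checked mechanically. Here is a minimal Python sketch (the helper name is mine; `pow(m, -1, n)` computes a modular inverse and requires Python 3.8+): for each modulus $n_i$, take $m_i = N/n_i$, find $y_i$ with $m_i y_i \equiv 1 \pmod{n_i}$, and sum $a_i m_i y_i$.

```python
# Minimal Chinese Remainder Theorem sketch for pairwise-coprime moduli.
def crt(residues, moduli):
    N = 1
    for n in moduli:
        N *= n
    x = 0
    for a, n in zip(residues, moduli):
        m = N // n
        y = pow(m, -1, n)   # modular inverse of m modulo n (Python 3.8+)
        x += a * m * y
    return x % N

# x ≡ 3 (mod 7), x ≡ 2 (mod 5), x ≡ 1 (mod 3)
print(crt([3, 2, 1], [7, 5, 3]))  # -> 52
```

Each term $a_i m_i y_i$ is congruent to $a_i$ modulo $n_i$ and to $0$ modulo the other moduli, which is exactly the "doesn't interfere" property Bill Cook describes.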
# Why is electric current dangerous to humans? [closed] How does a strong electric current harm our body? A strong electric current will possess a great charge. But how does that charge injure us? - ## closed as off-topic by jinawee, Brandon Enright, DavePhD, Kyle Oman, Kyle Kanos Jun 10 '14 at 0:24 • This question does not appear to be about physics within the scope defined in the help center. If this question can be reworded to fit the rules in the help center, please edit the question. This question appears to be off-topic because it is about physiology (and could be answered with a Google search). –  jinawee Jun 9 '14 at 19:33 High-power electric currents cause burns due to Ohmic heating: your body resists the flow of the current, and so the current flow deposits heat. The burns may be internal as well as external. High-voltage current arcs (sparks) can turn air into a conducting plasma, which allows the arc to continue to heat the air, which allows more current to flow, and so on. This can produce an explosion called an arc blast, which may involve spraying molten metal. The threshold for this is surprisingly low: it's a slight hazard when throwing a 440 V circuit breaker. Alternating currents can interfere with neuron-to-neuron signaling, causing nasty things like cardiac fibrillation, where the heart muscles lose synchronization and the heart wiggles instead of pumping. By an unfortunate coincidence, the frequencies we use for power distribution (50–60 Hz) are especially efficient at stopping hearts, even at fairly low currents. - @Harry: yes, heat. The water in the body has a lot of ions in it, so it is a mildly good conductor. The electricity will flow from one cell to another, and they will be heated in the process: $P = I^2 R$. If the current is high enough the cells will literally boil. –  rodrigo Jun 9 '14 at 14:47
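To make the Ohmic-heating point concrete, here is a rough back-of-the-envelope sketch. The voltage and body-resistance figures are assumed round numbers for illustration only (real skin resistance varies enormously); this is not a safety calculation:

```python
# Illustrative Ohm's-law arithmetic for the P = I^2 * R point above.
V = 230.0            # assumed mains voltage (V)
R_dry = 100_000.0    # assumed dry-skin body resistance (ohms)
R_wet = 1_000.0      # assumed wet-skin body resistance (ohms)

for label, R in [("dry skin", R_dry), ("wet skin", R_wet)]:
    I = V / R        # current through the body (A), by Ohm's law
    P = I ** 2 * R   # heat deposited in the body (W), P = I^2 * R
    print(f"{label}: I = {I * 1000:.1f} mA, P = {P:.1f} W")
```

With these assumed figures, wet skin passes roughly a hundred times the current of dry skin and dissipates tens of watts in the body, which is why wet contact is so much more dangerous.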
# The energy flux of sunlight reaching the surface of the earth is $1.388 \times 10^3 \;\mathrm{W/m^2}$. How many photons (nearly) per square metre are incident on the earth per second? Assume that the photons in the sunlight have an average wavelength of $550 \;nm$. $\begin{array}{1 1} 3.84 \times 10^{21} photons /m^2 \\ 3.84 \times 10^{20} photons /m^2 \\ 38.4 \times 10^{21} photons /m^2 \\ 38.4 \times 10^{21} photons /m^2 \end{array}$
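The standard approach: the energy per photon is $E = hc/\lambda$, and the photon flux is the energy flux divided by $E$. A short numeric check, with constants rounded to the precision typical of such textbook problems:

```python
# Photon flux = (energy flux) / (energy per photon), with E = h*c/lambda.
h = 6.626e-34     # Planck's constant (J s)
c = 3.0e8         # speed of light (m/s)
lam = 550e-9      # average wavelength (m)
flux = 1.388e3    # solar energy flux at the surface (W/m^2)

E = h * c / lam   # energy per photon (J), about 3.61e-19
n = flux / E      # photons per square metre per second
print(f"{n:.2e}")  # -> 3.84e+21
```

So the answer is $3.84 \times 10^{21}$ photons per square metre per second, the first option.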
## Algebraic Geometry – A Historical Sketch I So, I WAS going to give a classical proof of a classical algebraic geometry theorem: that there are exactly twenty-seven lines on any smooth cubic surface. I wanted to avoid using the machinery of divisors and all the attached technicality, but the proof I came up with was rather nasty and needs a lot of lemmas, almost all of which are technical. So instead, I’m going to talk about my understanding of the development of the subject. I’m going to assume that virtually nothing is known by the reader, so that even the less mathematical readers can follow along. Like most mathematical histories, we should start by looking to ancient Greece. The Greeks looked seriously at conic sections, and were, in particular, fascinated by ellipses. The Greeks knew the full real classification of conic sections from about 200 BC: a point, a pair of intersecting lines, a parabola, a hyperbola, and an ellipse. The next really big advancement came with Descartes, who brought us what is now called the Cartesian coordinate system. The simple idea of assigning to each point in the plane two numbers turned out to be extraordinarily powerful. With it, we gained a fairly straightforward way of manipulating $n$-dimensional space, as the points could be thought of as just $n$-tuples of numbers. In particular, it was now possible to give algebraic characterizations of many geometric objects. For instance, conic sections turned out to be the solutions to polynomials of the form $ax^2+bxy+cy^2+dx+ey+f=0$, with the coefficients determining which one you got (or whether you got the empty set by accident, which can also be thought of as a conic). Simultaneously with Descartes, the notion of complex number began to arise from algebra. The first significant appearance is in the case of quadratic polynomials $ax^2+bx+c$, which can be solved by a well-known formula.
Sometimes, the number under the root is negative, and this hints that complex numbers should come in. However, mathematicians of the time were still somewhat leery of even the negative numbers, and these quadratics had no REAL solutions, so it was just taken as a sign of nonexistence. Alas, the cubic polynomial was a harder nut to crack. Eventually, Tartaglia was able to work out a formula that worked for many cubics. But when he applied it to solving the simple cubic $x^3-x=0$, he suddenly had square roots of negative numbers! And yet, this polynomial has three distinct real solutions. Worse, Cardano then managed to solve the general cubic and had a student who solved the quartic, and both solutions required the extensive use of complex numbers. This brought humanity into contact with its first algebraically closed field, though this had not yet been completely understood. Yet again at the same time, Desargues developed the notion of projective space. The idea here is that, in art, when you have two parallel lines in perspective, it looks like they are going to intersect, but not until they are infinitely far away. Also, in Euclidean geometry, any two lines intersect…unless, of course, they run parallel to each other. Exceptions are abhorrent in mathematics, because they make results far less elegant. They are something to be tolerated if they cannot be fixed. Well, projective space fixed this, or more precisely, the projective plane does. The intuitive notion is just that you add a point in each direction where lines parallel to each other intersect. To be rigorous, you call the projective plane the set of all lines in three-dimensional space that pass through the origin. This works out because if you identify all the lines with their intersections with a plane one unit above the $xy$-plane, you get another copy of the plane, but there are lines left over, and they become the points at infinity.
This construction works out for higher dimensions too, which is one of the reasons it’s a good way to think about it. If you think that all of this indicates that the late 16th and early 17th centuries were exciting for algebraic geometry, you’d be quite correct. This period of time essentially started the subject, because algebra and geometry had never before been more closely woven together. For one thing, looking in the projective plane, it turns out that the extra points you get at infinity make the ellipse, the hyperbola, and the parabola all into the same conic! Though this still leaves the empty set, the single point, a pair of lines, and even just a single line as degenerate versions of the quadratic polynomial in two variables. Combining the notion of complex numbers with that of projective space, however, can solve this. Instead of looking at $\mathbb{R}^n$, the set of $n$-tuples of real numbers, and at real lines through the origin, we look at $\mathbb{C}^n$ and at complex lines. This gives us complex projective space. In complex projective space, the empty conic and the point become examples of the others, and we can break down conics into precisely three types: the smooth conic, a pair of lines, and a double line, and this is the full classification that we use today. The impression you should come away with from the example of conics is that complex projective space makes things a lot nicer. As a quick example of this, let’s look at the complex projective plane. Now, one of the difficulties of projective space is that it increases the number of variables that need to be kept track of, because a function on $\mathbb{C}\mathbb{P}^n$, the $n$-dimensional complex projective space, is the same as a homogeneous function on $\mathbb{C}^{n+1}$. All homogeneous really means is that the function is constant on lines through the origin.
So now our plane curves are given by polynomials in three variables, $f(x,y,z)$, and the homogeneity condition for polynomials says that all the terms are of the same constant degree, which is called $\deg f$. Now, we take two plane curves, $f(x,y,z)$ and $g(x,y,z)$. First things first, we require that $f$ and $g$ are irreducible polynomials, that is, they cannot be factored (really, all we must do is require that they have no factors in common). What this does is make sure that there is no curve in the plane on which they both vanish. So now we look at the intersection of the curves $f(x,y,z)=0$ and $g(x,y,z)=0$. The intersection is given by the common solutions to both polynomials, and a fairly simple thing in algebraic geometry is to show that two such plane curves intersect in a finite number of points. The question remaining is: how many? Intersection theory is rather important, but I don’t have time to go into the details here, so I’ll just say that there is a notion of intersection number that tells you when you should count a point of intersection more than once; when the curves intersect transversely, that is, when their tangent vectors at the point span the plane, the intersection number is one. So Bézout’s theorem states that the number of points (with multiplicity taken into account via the intersection number) in the intersection of these two curves will always be $(\deg f)(\deg g)$. This theorem is nifty in other ways, and to see one of them check out this post at the Everything Seminar. This post has gotten rather long already, so I think I’ll stop there, and pick up where I left off some future week by describing the development in the late 19th century and into the 20th century.
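As a quick sanity check of Bézout’s count, take a circle ($\deg f = 2$) and a hyperbola ($\deg g = 2$). In this particular example all $2 \times 2 = 4$ intersection points happen to be real and visible in the affine plane (none escape to infinity or collide into higher multiplicity), so plain floating-point arithmetic suffices to exhibit them:

```python
import math

# f = x^2 + y^2 - 4  (circle, degree 2)
# g = x*y - 1        (hyperbola, degree 2)
# Substituting y = 1/x into f gives x^4 - 4x^2 + 1 = 0, so x^2 = 2 ± sqrt(3).
xs = []
for s in (+1, -1):
    r = math.sqrt(2 + s * math.sqrt(3))
    xs += [r, -r]
pts = [(x, 1 / x) for x in xs]

# Bezout predicts deg f * deg g = 4 intersection points, and each really
# lies on both curves:
assert len(pts) == 4
for x, y in pts:
    assert abs(x * x + y * y - 4) < 1e-12
    assert abs(x * y - 1) < 1e-12
```

In general, of course, some of the four points would be complex or at infinity, which is exactly why the theorem is stated over the complex projective plane.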
{}
# Momentum conserved and energy isn't conserved? 1. Oct 6, 2005 ### asdf60 Suppose a box whose mass m(t) increases at a constant rate moves with constant velocity v. The force necessary to keep this object at constant velocity is F = dp/dt = dm/dt*v. Now we try to find the work done by the force from time 0 to time t. The force is constant, so W = F*d = F*v*t = dm/dt *v*v*t = (m(t)-m(0))*v^2 (since dm/dt is constant, t*dm/dt = m(t)-m(0)). Now if we calculate the change in kinetic energy from time 0 to time t, it should be the same as the work done, if energy is conserved. So ΔKE = .5*m(t)*v^2 - .5*m(0)*v^2 = .5 (m(t)-m(0))*v^2. It's not equal to the work done! Am I making a stupid mistake, or is energy indeed not conserved in this system? 2. Oct 6, 2005 ### mukundpa I suppose that the mass added is initially at rest. Is it? Is there any other force which is making the mass dm move with velocity v? If you put an object on a slowly moving platform, how does it start moving? 3. Oct 6, 2005 ### asdf60 The added mass is initially at rest. The force that is making dm move is just the normal force between m and dm. In a way I can see how energy isn't conserved, since this could just be a series of inelastic collisions. But is it really the case that energy is not conserved? 4. Oct 6, 2005 ### mukundpa If the force between them is only normal, then how does dm start moving forward? 5. Oct 7, 2005 ### Andrew Mason Your calculations are right. Why they are different is quite a good question. I think you have to look at the mechanism by which the mass increases: inelastic collisions. When two masses collide and stick together, there is always a loss of energy.
If you consider the addition of mass by a series of inelastic collisions of an infinitesimal billiard ball of mass m being dropped from rest, colliding with, and sticking to another such ball that is moving at speed v (and then both being accelerated together back up to speed v) you will be able to see why the loss occurs: The inelastic collision conserves momentum so the speed of the two balls reduces to 1/2 v. This results in a loss of $\frac{1}{2}mv^2 - \frac{1}{2}(2m)(\frac{1}{2}v)^2 = \frac{1}{4}mv^2$ or half the kinetic energy of the first ball. So for every ball that is added to the system (thereby acquiring speed v and kinetic energy $KE=\frac{1}{2}mv^2$) the system loses exactly half that energy. This energy is simply lost and has to be replaced by energy supplied to the system (when the added ball and the ball it collided with are accelerated from 1/2 v to v). AM Last edited: Oct 7, 2005 6. Oct 7, 2005 ### asdf60 Thanks for the confirmation Andrew. Just as an aside, it also seems to me that the work energy theorem is almost never true in the case of systems which involve mass flow (this is because when integrating F dx, we assume constant mass). 7. Oct 7, 2005 ### Tide I think you're a little too eager to toss away energy conservation. The work done in accelerating an element of added mass is certainly $F \Delta x$. However, you must be a bit more careful in writing the force. Start with $F =v \frac {\Delta m}{\Delta t}$ so $F \Delta x = v \frac {\Delta m}{\Delta t} \Delta x$. Now, $\Delta x = \bar v \Delta t$ where we use the average speed of the element of mass being added. That mass is not instantaneously brought up to speed! If $\Delta t$ is very small then $\bar v = \frac {1}{2}v$ so that $F \Delta x = \frac {1}{2} \Delta m v^2$. Energy is conserved! :) 8. Oct 7, 2005 ### Andrew Mason There is no question that the element of mass dm acquires $\frac{1}{2}dmv^2$ of kinetic energy.
It is just that the force pushing the system inputs twice that energy: $$dE = Pdt = Fvdt = \frac{dp}{dt}vdt = dp v = dm v^2$$ AM 9. Oct 7, 2005 ### Tide I don't think so. The force provides exactly $dE = F dx = \frac {1}{2} v^2 dm$ of energy since, by Newton, the force on the element of mass is the same as the force on the box. 10. Oct 7, 2005 ### asdf60 Tide, I'm not totally understanding your argument. Can you spell it out a bit more? I think I'd have to side with what Andrew is saying. Just by noting the fact that when two objects collide and stick together, energy is necessarily not conserved, you can see that energy can't be conserved in this case. 11. Oct 7, 2005 ### Tide asdf, I'm not looking for anyone to take sides. I am just not ready to give up energy conservation quite so quickly. The argument Andrew is making is essentially that the added mass achieves its final speed instantaneously. It doesn't. Basically, the energy imparted to the newly added mass is $F \Delta x = \frac {1}{2} \Delta m \times v^2$. Andrew argues that the force is also expressible as the rate of change of momentum $\frac {\Delta p}{\Delta t}$ so $F \Delta x = \frac {\Delta m}{\Delta t} v \Delta x$. He simply wants to replace $\Delta x$ with $v \Delta t$ but that assumes the new mass moves at the same speed throughout the entire acceleration process. I argue that $\Delta x = \bar v \Delta t$ over the increment of time using the average speed of the new mass which is $v/2$, i.e. half the final speed if the acceleration is uniform over small $\Delta t$. Therefore, $F \Delta x = \frac {\Delta m}{\Delta t} v \times \frac {1}{2} \times v \Delta t$. This simplifies to $F \Delta x = \frac {1}{2} (\Delta m) v^2$ as expected. With regard to the force applied to the "big mass," we again have to start with $\Delta E = F \Delta x$ for the amount of work done and Andrew wants to express this in terms of power times time.
Let's do that: $\Delta E = F v \Delta t = F \frac {\Delta x}{\Delta t} \Delta t$ which is just $\Delta E = \frac {1}{2} \Delta m v^2$. Last edited: Oct 7, 2005 12. Oct 8, 2005 ### Andrew Mason If energy is conserved, then there is no loss of energy despite the fact that this process seems to involve a series of inelastic collisions. An inelastic collision does not necessarily have to dissipate energy - it can store it in a spring, for example. So you could have this system adding mass and kinetic energy at the same rate that power is input, thus conserving energy, but also storing additional energy in the spring. That sounds like an energy creation machine. AM 13. Oct 8, 2005 ### Tide Andrew & ASDF, The trouble is that insufficient information is provided in the original statement of the problem. Consider the situation from the perspective of the box. It is stationary and stuff is passing by at speed v. Let the box accumulate mass at a fixed rate and again ask the question how much force is needed to keep the box stationary. Clearly, the force has to be $F = \frac {dm}{dt} v$. Is energy conserved? Work has to be done to decelerate the matter stream and some agent has to act over some distance to achieve that. And we will find that some force times some distance does indeed perform work at a rate of $\frac {1}{2} \frac {dm}{dt} v^2$. However, the situation becomes a little cloudy when we try to do a global energy balance in the manner in which it was done in the original problem. We can try to calculate $F\,dx$ but that is zero in this frame of reference! Our force keeps the box from accelerating but all the work is done on the matter stream. Andrew, you were on the right track thinking about elastic vs. inelastic collisions but we need to go a little further. Let's run the box problem in reverse! Say I have a rocket attached to a wall that will prevent it from moving.
I ignite the fuel and mass leaves the rocket at some rate (we can take it to be constant for our purposes) with some exhaust velocity v. How much force does the wall exert on the rocket to prevent it from moving? It's the same as before: $F = \frac {dm}{dt} v$ Again, the wall does no work since the rocket does not move! There is a hidden agent in this problem and that is the internal chemical energy of the rocket fuel. In the end, energy is conserved by taking full account of ALL the energy (internal energy as well as kinetic) but the original problem provides no detail. ASDF, I assume that the original problem you had to solve asked only for the force required to keep the box moving at constant speed. We all agree with your solution! You are to be applauded for going beyond that and asking the question of whether energy is conserved. Just remember that energy is always conserved but you have to do a full accounting! 14. Oct 8, 2005 ### Staff: Mentor Here's how I understand the problem. I visualize a lump of clay (mass m; speed v) moving along while small bits of clay (mass dm; speed 0) are continually dropped onto the larger mass at the rate dm/dt. To see how mechanical energy cannot be conserved, view the situation from a reference frame moving along with the clay mass. In that frame the clay mass is at rest. Bits of clay (mass dm; speed v) are being fired into the larger mass. Since an applied force (v dm/dt) prevents the clay mass from moving, the KE of the clay bits is completely transformed into internal energy. Returning to the original frame, we see that the average work done by the applied force in time dt is $F dx = v \ dm/dt \ v dt = v^2 \ dm$, which equals the sum of the increase in internal energy ($1/2 \ dm \ v^2$) plus the increase in the KE of the incoming clay mass ($1/2 \ dm \ v^2$). So, as Tide says, energy balance is maintained. 
Note that in applying Newton's second law to the incoming clay bit (a deformable body), the average speed of the center of mass of that clay bit is $v/2$. Thus the force (F) times the displacement of the center of mass ($v/2 \ dt = dx/2$) equals the change in the KE of the clay bit $1/2 \ dm \ v^2$. But this is not equal to the real work done on the clay bit (in the conservation of energy sense); the real work equals the force (F) times the displacement of the point of application of that force (dx = v dt). (The quantity force x distance used in the center of mass equation, $F_{net} \Delta X_{cm} = \Delta(1/2 M V_{cm}^2)$, is sometimes called pseudowork to distinguish it from the real work.) Last edited: Oct 8, 2005 15. Oct 8, 2005 ### arildno The original exercise is complete nonsense and cannot be solved. This is because information has not been given as to how momentum is added to the system. If, for example, the mass increase is done by throwing sticky balls at the box with a velocity component PARALLEL to the box's velocity, so that the collision retards the ball down to the box's velocity, then you would need a restraining force to be applied in order to keep the box moving with constant velocity. If, on the other hand, the sticky balls attach to the box and initially had the SAME HORIZONTAL VELOCITY AS THE BOX, then no horizontal force needs to be applied to keep the box moving at a constant rate. To use the equation F=dm/dt*v is, in general, false. The silly equation isn't even a Galilean invariant.:grumpy: Last edited: Oct 8, 2005 16. Oct 8, 2005 ### hungrypieeater If there is to be an equilibrium situation, i.e. constant velocity, then you must have no net force. This means there must be a retarding force. This must be provided by either friction or air resistance/whatever. The energy loss occurs by deforming the air around the box or inducing a current, generating heat, etc. 17.
Oct 8, 2005 ### Staff: Mentor Assuming that the added mass has a speed of zero (with respect to the ground), of course a force is required to keep the mass moving at constant speed. That was mentioned in the opening post. I wouldn't call it a retarding force; it must be an active force, not merely friction or air resistance. (Unless a strong wind is blowing it along, of course.) I'm a bit puzzled by the title of the thread, since neither momentum nor (mechanical) energy is conserved. 18. Oct 8, 2005 ### uart Yes, clearly the original problem was too vague about how the mass was added, but from the equations asdf60 used it was clear that he assumed the added mass had no initial momentum, so let's just discuss that particular problem, OK? In my opinion Andrew correctly explained the situation in his first reply. Last edited: Oct 8, 2005 19. Oct 8, 2005 ### uart Well that's exactly the point, and exactly why conservation of momentum holds in an inelastic collision while conservation of KE does not. You're the one making the very point that the velocity of the added mass (assumed to be initially zero) does not increase to "v" instantaneously but it happens over some small time period during which time the average speed of the added mass is less than v (for simplicity let's say v/2). "the force on the element of mass is the same as the force on the box". Yes, and the change in momentum is $$\Delta p =F \Delta t$$ while the change in energy is $$\Delta W = F \Delta x$$. Now which is the same for both the added mass and the original moving mass during the acceleration of the accrued mass? Is it the time t or is it the distance x? Bingo, you got it: the time is the same for both but the distance traveled is different, as the velocities are different while the added mass is being brought up to speed!
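The bookkeeping argued over above can be checked with a small discrete simulation, under the thread's assumption that mass is added from rest in tiny sticky chunks while an external force restores the speed $v$ after each chunk. The force ends up supplying $\Delta m\,v^2$ of work, of which exactly half appears as kinetic energy and half is dissipated in the inelastic collisions:

```python
# Discrete model: box of mass M moving at speed v; each step a chunk dm at
# rest sticks to it (momentum-conserving inelastic collision), then an
# external force re-accelerates the combined mass back up to v.
v = 2.0
M = 1.0
dm = 1e-5
steps = 100_000                 # total added mass dM = steps*dm = 1.0
work = heat = 0.0
for _ in range(steps):
    vp = M * v / (M + dm)       # speed just after the sticky collision
    heat += 0.5 * M * v**2 - 0.5 * (M + dm) * vp**2   # KE lost in collision
    work += 0.5 * (M + dm) * (v**2 - vp**2)           # work to restore speed v
    M += dm
dM = steps * dm

# work -> dM*v^2, heat -> 0.5*dM*v^2, and work - heat is exactly the
# system's KE gain 0.5*dM*v^2, as in posts 8, 14 and 19 above.
print(work, heat, work - heat)
```

In the limit of small chunks this reproduces the continuum result: power input $v^2\,dm/dt$, kinetic-energy gain at half that rate, the rest becoming internal energy.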
{}
# quantum efficiency: Recently Published Documents ## TOTAL DOCUMENTS 4377 (FIVE YEARS 1066) ## H-INDEX 130 (FIVE YEARS 37) 2022 ◽ Vol 25 ◽ pp. 100596 Author(s): Minh-Thuan Pham ◽ Truc-Mai T. Nguyen ◽ Dai-Phat Bui ◽ Ya-Fen Wang ◽ Hong-Huy Tran ◽ ... Keyword(s): 2022 ◽ pp. 2101926 Author(s): Salvatore Moschetto ◽ Emilia Benvenuti ◽ Hakan Usta ◽ Resul Ozdemir ◽ Antonio Facchetti ◽ ... Keyword(s): 2022 ◽ Author(s): Jia-Wei Wang ◽ Xian Zhang ◽ Michael Karnahl ◽ Zhi-Mei Luo ◽ Zizi Li ◽ ... Keyword(s): Abstract The utilization of a fully noble-metal-free system for photocatalytic CO2 reduction remains a fundamental challenge, demanding the precise design of photosensitizers and catalysts, as well as the exploitation of their intermolecular interactions to facilitate electron delivery. Herein, we have implemented triple modulations of the catalyst, the photosensitizer and the coordinative interaction between them for high-performance light-driven CO2 reduction. In this study, heteroleptic copper and cobalt phthalocyanine complexes were selected as photosensitizers and catalysts, respectively. An over ten-fold improvement in the light-driven reduction of CO2 to CO is achieved for catalysts with appended electron-withdrawing substituents, which provide optimal CO-desorption ability. In addition, pyridine substituents were introduced at the backbone of the phenanthroline moiety of the Cu(I) photosensitizers and the effect of their axial coordinative interaction with the catalyst was tested. The combined results of 1H NMR titration experiments, steady-state/transient photoluminescence, and transient absorption spectroscopy corroborate the coordinative interaction and the reductive quenching pathway in photocatalysis. It has been found that the catalytic performances of the coordinatively interacting systems are, unexpectedly, the reverse of those with the pyridine-free Cu(I) photosensitizers.
Moreover, the latter system enables a very high quantum efficiency of up to 63.5% at 425 nm with a high selectivity exceeding 99% for CO2-to-CO conversion. As determined by time-resolved X-ray absorption spectroscopy and DFT calculations, the replacement of phenyl by pyridyl groups in the Cu(I) photosensitizer favors a stronger flattening and a larger torsional angle change of the overall excited-state geometry upon photoexcitation, which explains the decreased lifetime of the triplet excited state. Our work promotes systematic multi-pathway optimization of the catalyst, the photosensitizer and their interactions for advanced CO2 photoreduction. Author(s): Dongsheng Yuan ◽ Encarnación G. Víllora ◽ Takumi Kato ◽ Daisuke Nakauchi ◽ Takayuki YANAGIDA ◽ ... Keyword(s): Abstract Ce:LaB3O6 (LBO) glass, whose constituents are abundant elements and whose fabrication is easy and cheap, is found to be a promising thermoluminescence (TL) dosimeter. This is achieved by CeF3 doping and melting under a reducing atmosphere, with an optimum concentration of 0.1% (quantum efficiency = 66%). The corresponding Ce interatomic distance is ~ 4 nm, below which concentration quenching occurs via Ce dipole-dipole interaction, as elucidated experimentally via Dexter’s theory. Ce:LBO exhibits a good dose resolution, with a linear dependence covering five orders of magnitude in both irradiation dose and TL response. Furthermore, it can be cyclically irradiated and read without degradation. 2022 ◽ Author(s): Matej Kurtulik ◽ Michal Shimanovich ◽ Rafi Weill ◽ Assaf Manor ◽ Michael Shustov ◽ ... Keyword(s): Abstract Planck’s law of thermal radiation depends on the temperature, $$T$$, and the emissivity, $$\epsilon$$, which is the coupling of heat to radiation, depending on both phonon-electron nonradiative interactions and electron-photon radiative interactions. In contrast, absorptivity, $$\alpha$$, depends only on the electron-photon radiative interactions.
At thermodynamic equilibrium, nonradiative interactions are balanced, resulting in Kirchhoff’s law of thermal radiation, $$\epsilon =\alpha$$. Out of equilibrium, quantum efficiency (QE) describes the statistics of photon emission, which, like emissivity, depends on both radiative and nonradiative interactions. The past generalized Planck equation extends Kirchhoff’s law out of equilibrium by scaling the emissivity with the pump-dependent chemical potential $$\mu$$, obscuring the relations between the body properties. Here we theoretically and experimentally demonstrate a prime equation relating these properties in the form $$\epsilon =\alpha \left(1-QE\right)$$. At equilibrium, these relations reduce to Kirchhoff’s law. Our work lays out the evolution of non-thermal emission with temperature, which is critical for the development of lighting and energy devices. Author(s): Shangwei Feng ◽ Yujuan Ma ◽ Shuaiqi Wang ◽ Shanshan Gao ◽ Qiuqin Huang ◽ ... Keyword(s): 2022 ◽ Author(s): Shangwei Feng ◽ Yujuan Ma ◽ Shuaiqi Wang ◽ Shanshan Gao ◽ Qiuqin Huang ◽ ... Keyword(s): 2022 ◽ Author(s): Yang Shen ◽ Xingjun Xue ◽ Andrew Jones ◽ Yiwei Peng ◽ Junyi Gao ◽ ... Keyword(s): 2022 ◽ Vol 13 (1) ◽ Author(s): Yumin Zhang ◽ Jianhong Zhao ◽ Hui Wang ◽ Bin Xiao ◽ Wen Zhang ◽ ... Keyword(s): Abstract Anchoring single-atom catalysts offers a desirable pathway to efficiency maximization and cost saving for photocatalytic hydrogen evolution. However, the single-atom loading amount is below 0.5% in most reported work, owing to agglomeration at higher loading concentrations. In this work, highly dispersed copper single-atoms at a large loading amount (>1 wt%) were achieved on TiO2, exhibiting an H2 evolution rate of 101.7 mmol g−1 h−1 under simulated solar light irradiation, higher than that of other reported photocatalysts, in addition to excellent stability, as demonstrated after storage for 380 days.
More importantly, it exhibits an apparent quantum efficiency of 56% at 365 nm, a significant breakthrough in this field. The highly dispersed and large amount of Cu single-atoms incorporation on TiO2 enables the efficient electron transfer via Cu2+-Cu+ process. The present approach paves the way to design advanced materials for remarkable photocatalytic activity and durability. Author(s): Xiaoxiao Xu ◽ Ke Xiao ◽ Guozhi Hou ◽ Yu Zhu ◽ Ting Zhu ◽ ... Keyword(s): Two composite layers are used to enhance the efficiency of Si-based near-infrared perovskite light-emitting devices, which are produced in ambient air, and the external quantum efficiency increased to 7.5%.
{}
Conference paper Open Access # Is Popularity an Indicator of Software Security? Siavvas, Miltiadis; Jankovic, Marija; Kehagias, Dionysios; Tzovaras, Dimitrios ### DataCite XML Export <?xml version='1.0' encoding='utf-8'?> <identifier identifierType="URL">https://zenodo.org/record/3379596</identifier> <creators> <creator> <creatorName>Siavvas, Miltiadis</creatorName> <givenName>Miltiadis</givenName> <familyName>Siavvas</familyName> <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-3251-8723</nameIdentifier> <affiliation>Imperial College London, London, SW7 2AZ, United Kingdom</affiliation> </creator> <creator> <creatorName>Jankovic, Marija</creatorName> <givenName>Marija</givenName> <familyName>Jankovic</familyName> <affiliation>Centre for Research and Technology Hellas, Thessaloniki, Greece</affiliation> </creator> <creator> <creatorName>Kehagias, Dionysios</creatorName> <givenName>Dionysios</givenName> <familyName>Kehagias</familyName> <affiliation>Centre for Research and Technology Hellas, Thessaloniki, Greece</affiliation> </creator> <creator> <creatorName>Tzovaras, Dimitrios</creatorName> <givenName>Dimitrios</givenName> <familyName>Tzovaras</familyName> <affiliation>Centre for Research and Technology Hellas, Thessaloniki, Greece</affiliation> </creator> </creators> <titles> <title>Is Popularity an Indicator of Software Security?</title> </titles> <publisher>Zenodo</publisher> <publicationYear>2019</publicationYear> <subjects> <subject>software security</subject> <subject>vulnerability assessment</subject> <subject>vulnerability prediction</subject> <subject>popularity</subject> <subject>static analysis</subject> </subjects> <dates> <date dateType="Issued">2019-05-09</date> </dates> <language>en</language> <resourceType resourceTypeGeneral="ConferencePaper"/> <alternateIdentifiers> <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3379596</alternateIdentifier> </alternateIdentifiers> <relatedIdentifiers> <relatedIdentifier relatedIdentifierType="DOI"
relationType="IsIdenticalTo">10.1109/IS.2018.8710484</relatedIdentifier> </relatedIdentifiers> <version>1.0</version> <rightsList> <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights> </rightsList> <descriptions> <description descriptionType="Abstract">&lt;p&gt;Although numerous research attempts can be found in the related literature focusing on the ability of software-related factors (e.g. software metrics) to indicate the existence of vulnerabilities in software applications, none of them have demonstrated perfect results. In addition, none of the existing studies have focused on the popularity of software products, which is an important characteristic of open-source software applications and libraries. To this end, in this paper, the ability of popularity (i.e. utilization) to indicate the existence of vulnerabilities and, in turn, to highlight the internal security level of software products is investigated. For this purpose, a relatively large software repository based on well-known libraries retrieved from the Maven Repository was constructed and its security was analyzed using a widely-used open-source static code analyzer. Correlation analysis was employed in order to examine whether a statistically significant correlation exists between the security and popularity of the selected software products. The preliminary results of the analysis suggest that popularity may not constitute a reliable indicator of the security level of software products. 
To the best of our knowledge, this is the first study that examines the relationship between the popularity of software products and their security level.&lt;/p&gt;</description> </descriptions> <fundingReferences> <fundingReference> <funderName>European Commission</funderName> <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier> <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/780572/">780572</awardNumber> <awardTitle>Software Development toolKit for Energy optimization and technical Debt elimination</awardTitle> </fundingReference> </fundingReferences> </resource>
{}
Volume of region between two functions? 1. Nov 15, 2004 Khan86 The two functions are z=2x^2-4 and z=4-2y^2, and I'm supposed to find the volume between the two by integrating two ways: one with respect to z first, and the other with respect to z last (x and y don't have a set order). When integrating with respect to z first, I had the limits such that z ranged from 2x^2-4 to 4-2y^2, and since x and y only depend on z and z has been covered already, I thought that x and y both ranged from -2 to 2. When I integrated with respect to z last, I had x range from -(z/2+2)^(1/2) to (z/2+2)^(1/2) and y range from -(2-z/2)^(1/2) to (2-z/2)^(1/2), with z ranging from -4 to 4. The problem is that I got two different answers, although I feel more confident in my answer from integrating the second way, z last (128/3). Could anyone help me with where I went wrong? 2. Nov 15, 2004 NateTG You're confusing me. I don't understand: You mention two one-dimensional functions, and you ask for the volume rather than area between them. You write $$z=2x^2-4$$ as if $$z$$ is a dependent variable, but then you claim to integrate with respect to $$z$$. If you're talking about the volume bounded by $$f(x,y)=2x^2-4$$ and $$g(x,y)=4-2y^2$$, I'm not sure that you can use a rectangular bounding region: If I simplify $$2x^2-4=4-2y^2$$, I get $$x^2+y^2=2^2$$ which is a circle of radius 2. Perhaps you could look into using slices? 3. Nov 15, 2004 Khan86 I'll try and describe the three dimensional region more clearly. z1 = 2x^2 - 4 is a parabola in the x-z plane that runs all through the y-axis, if that makes sense. It is a three dimensional surface that happens to be independent of y, so it is the same parabola for all y-values. The same goes for z2 = 4 - 2y^2, except it lies in the y-z plane, meaning that the two surfaces are perpendicular to each other. The 3d space between z1 and z2 is a volume, bordered on top by z2 and on bottom by z1.
The surfaces aren't two-dimensional; they are just independent of the third variable, giving the same two-dimensional graph for all values of the variable not in the equation. Does that make any more sense? The problem I'm having is finding the bounds for dzdxdy and dxdydz. Last edited: Nov 15, 2004 4. Nov 16, 2004 Leo32 In order to find the bounds, try to picture how that volume looks in space. Integration bounds are there where both surfaces cut into each other, i.e., share the same z value for a given x/y point. Take it from there. Greetz, Leo 5. Nov 16, 2004 NateTG I mentioned in my post that the 'shadow' that this region casts into the xy plane is a circle. It's also relatively easy to identify that the z range is $$(-4, 4)$$.
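Both integration orders can be checked numerically. A crude midpoint-rule sketch in Python gives the same value either way, the exact answer being $16\pi \approx 50.27$ (so the 128/3 above points to an arithmetic slip rather than a wrong setup for the second order):

```python
import math

# Order dz first: V = double integral of [(4 - 2y^2) - (2x^2 - 4)] dA
# over the shadow region x^2 + y^2 <= 4.
n = 400
h = 4.0 / n                       # grid spacing on [-2, 2]^2
V1 = 0.0
for i in range(n):
    x = -2 + (i + 0.5) * h
    for j in range(n):
        y = -2 + (j + 0.5) * h
        top, bottom = 4 - 2 * y * y, 2 * x * x - 4
        if bottom <= top:          # equivalent to x^2 + y^2 <= 4
            V1 += top - bottom
V1 *= h * h

# Order dz last: at height z the cross-section is the rectangle
# |x| <= sqrt((z+4)/2), |y| <= sqrt((4-z)/2), whose area simplifies
# to 2*sqrt(16 - z^2).
m = 200_000
hz = 8.0 / m
V2 = sum(2 * math.sqrt(16 - z * z) * hz
         for z in (-4 + (k + 0.5) * hz for k in range(m)))

print(V1, V2, 16 * math.pi)       # all three agree to ~2 decimal places
```

The dz-last integrand $2\sqrt{16-z^2}$ also makes the answer visible by hand: it is twice the area of a semicircle of radius 4, i.e. $16\pi$.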
{}
# Math Help - why these expression are the same.. 1. ## why these expression are the same.. on the bottom of this page we have an expression which represents all solutions http://i45.tinypic.com/140hqtx.jpg but later in here http://i45.tinypic.com/2nrfswm.jpg they present another expression and say that its the same as the integral before(all the solutions) why they are the same?? 2. Originally Posted by transgalactic on the bottom of this page we have an expression which represents all solutions http://i45.tinypic.com/140hqtx.jpg but later in here http://i45.tinypic.com/2nrfswm.jpg they present another expression and say that its the same as the integral before(all the solutions) why they are the same?? He's just putting $\psi(x)=\int\limits_{-\infty}^\infty A(k)e^{ikx}dk$ in the second expression, that's all. Tonio 3. ahh thanks
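To see concretely that such superpositions are well-defined, one can pick a specific amplitude (a hypothetical choice, not taken from the linked pages): for $A(k)=e^{-k^2/2}/\sqrt{2\pi}$, the integral $\psi(x)=\int_{-\infty}^{\infty}A(k)e^{ikx}\,dk$ has the closed form $e^{-x^2/2}$, and a plain Riemann sum reproduces it:

```python
import cmath
import math

# psi(x) = integral of A(k) * exp(i k x) dk, approximated on k in [-10, 10];
# the Gaussian tail beyond |k| = 10 is negligible (~1e-22).
dk = 0.005
x = 1.0
psi = sum(math.exp(-k * k / 2) / math.sqrt(2 * math.pi) * cmath.exp(1j * k * x)
          for k in (i * dk for i in range(-2000, 2001))) * dk

# Closed form for this particular A(k): psi(x) = exp(-x^2 / 2)
print(psi, math.exp(-x * x / 2))
```

The numerical sum matches $e^{-1/2} \approx 0.6065$ to many digits, with vanishing imaginary part, illustrating that the two ways of writing the general solution describe the same function.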
43 (read as "to-the-fourth-three") is a positive integer equal to $$3^{3^{3^3}} = 3^{7,625,597,484,987} \approx 1.258014 \times 10^{3,638,334,640,024}$$. It has 3,638,334,640,025 digits. The superscript represents tetration.
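The digit count and leading digits follow from the approximation in the same sentence, and can be checked with logarithms; a small Python sketch (not part of the wiki page):

```python
import math

# 3^^4 = 3^(3^27) = 3^7625597484987; far too large to construct directly,
# so use base-10 logarithms to find its size
exponent = 3**27                       # 7,625,597,484,987
log10_val = exponent * math.log10(3)
digits = math.floor(log10_val) + 1
mantissa = 10 ** (log10_val - math.floor(log10_val))
print(digits)      # 3638334640025
print(mantissa)    # ~1.258, matching 1.258014 x 10^3,638,334,640,024
```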
# Top Algebra Errors Made by Calculus Students by Thomas L. Scofield Assistant Professor of Mathematics Calvin College Sept. 5, 2003 Preface, for the reasons behind this document ## Linear Function Behavior (LFB) Lines through the origin are peculiar in that they have an expression of the form f(x) = mx, where m is a constant (the slope). This formula makes possible the following "additive" property: f(a + b) = m(a + b) = ma + mb = f(a) + f(b). For the particular choices of a = 2 and b = 3, we would have f(5) = f(2) + f(3). Despite students' tendencies to treat every function as additive, other functions just do not have this property. Typical mistakes made include writing sqrt(x + y) = sqrt(x) + sqrt(y), (x + y)^2 = x^2 + y^2, and 1/(x + y) = 1/x + 1/y, none of which is valid in general. ## Cancelling Everything in Sight (CES) Seeing a complicated fraction become less ugly as elements are cancelled from both the numerator and denominator can be something of an enjoyable experience. One's first exposure to this magical process usually comes in grade school when reducing fractions, such as 6/8 = (3 · 2)/(4 · 2) = 3/4. High school algebra classes build upon this, showing us that we may also cancel expressions involving variables, as in x(x + 1)/(x(x + 2)) = (x + 1)/(x + 2). What some students do not notice is that these cancellations are only performed once the numerator and denominator are factored. Factoring a numerator (or denominator) turns it into an expression which is, at its top level, held together by multiplication. For instance, factoring turns x^2 + 3x into x(x + 3), a product of the factors x and x + 3. To be sure that one performs valid cancellations only, it is necessary to • be patient, making sure to factor numerator and denominator first, and cancelling only those factors common to both, and • accept that many times no factorization is possible, at least none that leads to a common, cancellable factor. With this in mind, cancellations such as those below may only be labelled instances of someone "cancelling everything in sight", with no attention given to the discussion above, and having no validity whatsoever.
Any attempt to simplify the original fraction (rational expression) should start with factoring, at which stage we see that there is no matching factor between the factors of the numerator and those of the denominator. Factoring, in this case, did not lead to any cancelling, as is often the case. ## Confusing Negative and Fractional Exponents (CNFE) Students can make a variety of mistakes when it comes to working with exponents. Two of the most common are Multiplying Exponents that should be Added (MEA), and Adding Exponents that should be Multiplied (AEM). This section does not deal with either of these, but rather with a problem that some students have applying two basic rules about exponents, the ones concerning reciprocals and roots. Specifically, these are 1/a^n = a^(-n) and a^(1/n) = the nth root of a, respectively, where the understanding is that a square root (sqrt(a)) is to be taken as a power (a^(1/2)). The first of these says that a factor of the denominator (see the discussion on CES) raised to a power (be it positive or negative) may be written as a factor to the opposite power of the numerator (i.e., a power a^n in the denominator becomes a^(-n) in the numerator, and a power a^(-n) becomes a^n). The only change is to the sign of the exponent. An example of a valid application of this rule is 3/x^2 = 3x^(-2). The second rule shows how to write a root as a power, which can be especially helpful in calculus when a derivative is desired. Things like the cube root of x become easy to differentiate once rewritten as x^(1/3). Some students seem to confuse these two rules. The main errors seem to come from students trying to reciprocate the wrong thing or from students putting a minus in when none is required. ## Multiplication Ignoring Powers (MIP) Another law of exponents frequently misunderstood by students is a^n · b^n = (ab)^n. This means that such statements as 2^3 · 5^3 = 10^3 are correct. But many students ignore the significance of having identical powers in these multiplications.
They make statements like x^2 · y^3 = (xy)^5 or x^2 · y^3 = (xy)^6, all of which are incorrect. ## Equation Properties for Expressions (EPE) Early on in one's high school algebra courses one learns several properties of equality -- namely • Addition/Subtraction Property of Equality: One may add/subtract the same quantity to/from both sides of a given equation, and the solutions of the resulting equation will be the same as those of the original (given) one. • Multiplication/Division Property of Equality: One may multiply/divide both sides of a given equation by the same nonzero quantity, and the solutions of the resulting equation will be the same as those of the original (given) one. Notice that both of these properties pre-suppose that we start with an equation, usually one we are supposed to solve (say, for x). These properties are helpful in achieving that goal, as in: Solve 3x - 1 = 5: Add 1 to both sides: 3x = 6. Divide both sides by 3: x = 2. Or, solve (4x + 5)/(x^2 + 1) = 9/(x^2 + 1): Multiply both sides by the never-zero quantity x^2 + 1: 4x + 5 = 9. Subtract 5 from both sides: 4x = 4. Divide both sides by 4: x = 1. In contrast, these are not, generally speaking, properties one uses when trying to simplify an expression. (There are exceptions to this, such as in the simplifying of certain integrals using integration by parts, but these are relatively rare.) Students asked to find the derivative of a function f may find it easier to work with a rescaled function g -- say, f with a messy constant coefficient cleared away -- but they shouldn't be under any illusions that f and g are the same functions, nor that they have derivatives that are equal. If one is simplifying an expression, it may be tempting to multiply by some quantity q to clear a compound fraction, but, of course, multiplying by q changed the expression. One must both multiply and divide by q (equivalent to saying that we're multiplying by q/q = 1) if the expression is going to remain the same (but hopefully simplified). Another example is in simplifying a difference quotient (f(x + h) - f(x))/h: any multiplication (say, by a conjugate) must be matched by a division, and the resulting expression often cannot be further simplified.
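As a quick sketch of the kind of valid simplification the EPE discussion contrasts with, the difference quotient of f(x) = x^2 (my example; the document's own worked example is not shown in this copy) simplifies by cancelling a genuine common factor h, which is legitimate because h is never zero in a difference quotient:

```python
def diff_quotient(x, h):
    # ((x+h)^2 - x^2)/h, the raw difference quotient for f(x) = x^2
    return ((x + h)**2 - x**2) / h

# algebra: ((x+h)^2 - x^2)/h = (2xh + h^2)/h = 2x + h, cancelling the
# common factor h from a factored numerator, not "cancelling in sight"
for x, h in [(1.0, 0.5), (3.0, 0.01)]:
    assert abs(diff_quotient(x, h) - (2*x + h)) < 1e-9
print("simplification checks out")
```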
## Multiplication Without Parentheses (MWP) The discussion here necessarily must begin with an appeal to the order of algebraic operations (OO). These are rules of hierarchy as to which operations to perform 1st, 2nd, etc. when an algebraic expression requires more than one operation be performed. There are three levels of hierarchy: 1. powers, 2. multiplication and division, and 3. addition and subtraction. When faced with an expression that has both an addition and a multiplication in it, the order of operations dictates that the multiplication be performed first: 3 + 4 · 5 = 3 + 20 = 23. The levels above do not give the whole story, however. For instance, what if an expression has both an addition and a subtraction, operations which appear at the same level? The answer here is that operations appearing on the same level are always performed left-to-right: 10 - 4 + 2 = 6 + 2 = 8. Also, one may use parentheses to override these rules. Things in parentheses are performed before things outside of those parentheses, starting from the inside and working out. So while 3 + 4 · 5 = 23, we have (3 + 4) · 5 = 35. This order of operations applies to expressions involving variables as well. Thus 3x + 5 means "multiply x by 3, then add 5". In this light, acceptable notation for the product of two expressions like x + 2 and x - 3 is (x + 2)(x - 3), not, as so many students write, x + 2 · x - 3. ## Frivolous Parentheses (FP) There really isn't an error, per se, with using too many parentheses. Nevertheless, students who consistently employ more parentheses than needed are demonstrating as much of a lack of understanding of the order of algebraic operations as those who use too few. Expressions such as (((x)) + (2)) employ parentheses that accomplish nothing at all. ## Undo Multiplication with Division (UMD); also, Undo Addition with Subtraction (UAS) The properties of equality that were mentioned earlier are, by some students, implemented incorrectly even when the situation calls for their use.
For instance, when solving an equation like 3x + 7 = 13, two steps are called for: subtract 7 from both sides to get 3x = 6, and divide both sides by 3 to get x = 2. Notice that, in the expression 3x + 7, the order of operations dictates that the multiplication by 3 comes before the addition of 7, and the "undoings" of these processes -- the subtraction of 7 and the division by 3 -- were carried out in reverse order. That is not to say that we could not have undone things in a different order, but students who do so often make the following error. Dividing by 3, they often neglect the fact that all terms on both sides are to be divided by 3. In other words, after dividing by 3 they write x + 7 = 13/3. They are too set on the idea that they will be subtracting 7 from both sides to realize that, having divided by 3 first, it is not 7, but rather 7/3, which must be subtracted, giving the same answer as before. One other note is in order here. If parentheses appear in an equation such as 3(x - 2) = 12, then the order of operations is preempted (the subtraction within the parentheses comes before the multiplication by 3). In solving for x, we may of course distribute the 3, thereby eliminating the parentheses and making the problem appear like the last one discussed. Even fewer steps are required if one just "undoes" the multiplication and subtraction in their opposite order: divide both sides by 3 to get x - 2 = 4, and then add 2 to both sides to get x = 6. Let us now return to the equation 3x + 7 = 13 and investigate the more telling errors that gave the titles UMD and UAS to this section. Some students recognize the need for two steps (like those carried out when this equation was being considered above) to isolate x, but have little feel for which operations will achieve this. For instance, realizing that, like the 13 on the right-hand side of the equation, 7 is a non-x term, a student may write 3x = 13/7, misunderstanding that she has subtracted 7 on the left side, but divided by 7 on the other side. The original equation and the new one no longer have the same solutions as a result. The same student may then recognize that she needs to move the 3 over to the other side.
Since the 3 is multiplied by the x, she should "undo" this by dividing both sides by 3. But she may (wrongly) write x = 13/7 - 3, having divided on the left but subtracted on the right. Again, the solution is different from the one that solved the original equation 3x + 7 = 13, namely x = 2. Worse still is when a student thinks he can solve 3x + 7 = 13 in one step (that is, take care both of the multiplication by 3 and the addition of 7 via one operation). Such a student may write something like x = 13/3 - 7. Again, the answer this student gets, -8/3, is different from the correct one, x = 2. ## Misunderstood Relationship between Roots and Zeros (MRRZ) Much of one's mathematical experience prior to the calculus is spent in solving equations. There are the kind of equations, known as identities, where every number in the domain is a solution. The equation x + x = 2x is such an identity. It is not this, but the other type of equation, known as a conditional equality, that one learns to solve, precisely because solutions, also known as roots, of conditional equalities are not everywhere to be found. Often there are very few numbers, perhaps even none at all, which make a conditional equality true. Another fact about conditional equalities is that comparatively few of them may be solved exactly. Leaps in technology have made it commonplace for students, with the purchase of a handheld calculator, to have at their fingertips powerful graphing capability and numerical methods for finding approximate solutions to many, perhaps even most conditional equalities. This does not mean that one should forego learning the algebraic techniques which lead to exact solutions, thinking that deftness in pushing the right pair of buttons is an appropriate substitute for the thinking processes such algebraic methods introduce. Still, there is added value in the knowledge one gets by investigating graphical methods.
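The original handout's worked equation is not legible in this copy; using 3x + 7 = 13 as a stand-in, a quick check shows that undoing operations in the right order solves the equation while the "one-step" error does not:

```python
# stand-in equation (my choice, not necessarily the handout's): 3x + 7 = 13
def satisfies(x):
    return 3 * x + 7 == 13

correct = (13 - 7) / 3        # undo the +7 first, then the *3
wrong_order = 13 / 3 - 7      # the "one-step" error: divide, then subtract
print(satisfies(correct), correct)
print(satisfies(wrong_order), wrong_order)
```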
By these methods one comes to think of the solutions of, say, f(x) = g(x) as the x-values of points of intersection between the graphs of the two functions y = f(x) and y = g(x). As another example, solutions of the equation x^2 - 5x = -6 would be found at points of intersection between the graphs of y = x^2 - 5x and y = -6. It is in solving equations like this latter one that students become confused. What some students do is the following: factor the left side as x(x - 5), set each factor equal to zero, and conclude x = 0 or x = 5. In mathematical terms, the student who does these steps has found the zeros of x^2 - 5x; that is, the values of x which make the output of x^2 - 5x be zero. There are several ways to see that this work is wrong. One way to see it is that, in the equation, we want values of x whose output value is -6, not zero. Another angle which reveals the errors is the one that notes that, while there are a lot of pairs of numbers which may be multiplied to give (-6) -- (-1) and 6, 12 and (-1/2), 55 and (-6/55) are three such pairs -- one thing which we can say for certain is that neither of the numbers in the pair is zero, which is quite counter to the idea of setting the factors equal to zero. (Of course, neither is it enough to set the factors equal to (-6), as in x = -6 or x - 5 = -6, since it is not enough for either one of these conditions to hold by itself; that is, if x = -6 then we would need the other factor to be equal to 1 in order for their product to be -6, and clearly these things cannot occur at the same time.) In summary, the error occurs in finding the zeros of a function and taking these to be the roots of the equation, when the two concepts do not coincide. There is a simple way to make them coincide. We simply make one side zero (using a valid algebraic step, of course). The zeros of x^2 - 5x + 6 are the numbers which make x^2 - 5x + 6 equal zero, and that is exactly what we want in a solution of the equation x^2 - 5x + 6 = 0, so the two concepts coincide. Why do students mess this up?
The most likely answer is that many are looking to do as little work as possible, and bringing the -6 over makes factoring a more difficult job (it is harder to factor x^2 - 5x + 6 than to factor x^2 - 5x); of course, the quadratic formula is an option for this case. What may help to avoid this confusion is remembering this graphical interpretation of what one is doing (still applied to the example above): • Solutions of an equation like x^2 - 5x = -6 correspond to points of intersection between the two sides, considered as functions, of the equation (i.e., the function y = x^2 - 5x and the function, in this case a constant one, y = -6). • If the two functions are combined into one function on one side of the equation, there is still a second function, the zero function, that remains on the other side. Now we have in place of the old problem a new one (but entirely equivalent) of finding the solutions that correspond to points of intersection between the new left-hand side (in this case y = x^2 - 5x + 6) and the new right-hand side (here zero). • When our combining of terms has left one side of the equation zero (which, when considered as a function, has the x-axis as its graph), one may solve the equation by finding the zeros of the nonzero side of the equation. ## Multiplication Not Distributive (MND) In precalculus/algebra we become familiar with the distributive laws that address interactions between multiplication and addition/subtraction.
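The handout's own equation is illegible in this copy; the sketch below uses x^2 - 5x = -6 as a stand-in, chosen to be consistent with the (-6) factor pairs discussed above. It checks numerically that the zeros of the unshifted left side are not solutions, while the roots obtained after making one side zero are:

```python
def lhs(x):
    # left side of the stand-in equation x^2 - 5x = -6
    return x**2 - 5*x

zeros_of_lhs = [0, 5]   # what "set each factor of x(x - 5) to zero" produces
print([lhs(x) for x in zeros_of_lhs])   # outputs are 0, not -6: not solutions

# correct route: x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3) = 0
solutions = [2, 3]
print([lhs(x) for x in solutions])      # both outputs equal -6: solutions
```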
Specifically, these laws say a(b + c) = ab + ac and a(b - c) = ab - ac. We use these laws all the time, both in expanding, as in 3(x + 5) = 3x + 15, and in factoring, as in 2x + 2y = 2(x + y). We even use it (although we don't often think about it this way and usually don't include the middle step below) when combining like terms, as in 3x + 5x = (3 + 5)x = 8x. The problem is when students misinterpret these laws, thinking they also say something about interactions between more than one multiplication; that is, they "invent" for themselves a law that looks something like: a(b · c) = (ab) · (ac). This clearly is false, as most would see if these were all numbers -- few (though I cannot go so far as to say no one) would assert, say, that 2(3 · 5) = 6 · 10. But when the objects involved are expressions involving variables, the error is frequently made, such as in this case: 3(xy) written as (3x)(3y), or x(yz) written as (xy)(xz). ## Poor Use of Mathematical Language (PUML) A prerequisite skill to writing good mathematics is the ability to write well in one's native tongue. People who cannot write a complete English sentence should take remediation in English composition before reading on. What may surprise some students is that good writing using mathematical symbols (even in the write-up of homework problems) consists of using complete sentences, setting up one's ideas clearly and then following through on the details, much as one expects from a good English essay. The language and symbols of mathematics are used just like regular English words and phrases to express ideas, albeit ideas which one would often struggle to express by any other means. Nobody studies mathematical writing as a subject. Your mathematics professor(s) got to be good writers of mathematics, if good they be, by reading papers and books by other mathematicians, not by reading a treatise such as this one. If a book on good mathematical writing does exist (and there are probably a number of such books), it will say much more than I say here. I will only describe the most common example of poor mathematical writing I see when grading students' work: Using Equals as a Conjunction (UEC).
The word "equals" has a very specific meaning. It requires two objects, and it asserts that these two objects are the same. In mathematics, the two objects are usually quantities, like the mathematical expression 3x + 5, or the number 7. Even within this tight definition, mathematical equations, as I mentioned earlier, come in two varieties: identities and conditional equations. A conditional equation is one such as the equation 3x + 5 = 7, which is true only for particular values of x (in this case one particular value). In algebra courses one often sees conditional equalities in homework problems accompanied by the instruction "Solve the equation". There are some quantities that are the same regardless of the value of the variable. A familiar example is the identity sin^2(x) + cos^2(x) = 1, which is true no matter what real value x takes. These two types of equations encompass the two most common (and only?) valid ways to use an equals (=) sign. Consider the typical calculus problem of evaluating a limit. What we are given here is not an equation, but an expression. If we begin writing a series of equalities to simplify/evaluate this expression, we will want them to be identities. The original expression, along with each of the ensuing expressions, as it turns out, are all equal to the same number. In contrast, suppose we begin with a (conditional) equation like 3x + 5 = 7, which we are asked to solve. If a student who understands very well the discussion of UAS and UMD (found earlier in this piece) makes a mistake, she is most likely to do so writing something like (UEC): (1) 3x + 5 = 7 = 3x = 2 = x = 2/3. Such a string of equalities asserts three things: 1. that 3x + 5 = 7, 2. that 7 = 3x (equivalently, that 3x = 2), 3. and that 2 = x = 2/3, i.e., that 2 = 2/3. (i) and (ii) are conditional equations in their own right, but it should be clear that they do not have the same solutions as the original equation (nor does (i) have the same solution as (ii)). And (iii) has no solution at all, for it is never true.
What I am really saying is that the string of equations (1) is really three equations, and there is no common solution between them (and, even if there had been, such a solution would have no relevance to the original problem, that of solving 3x + 5 = 7). The student most likely never intended to assert these three equations in place of the original; she simply began writing out her ideas, and used an equals sign to join them together whenever it seemed some sort of conjunction was required. The student who writes (1) actually appears to have some facility with the techniques for solving linear equations, but lacks the ability to put her ideas onto paper in a meaningful fashion. One good way to express the solution of the previous equation is (2) 3x + 5 = 7 ⇒ 3x = 2 ⇒ x = 2/3. The symbol ⇒ can be translated here as "which implies". Yes, (2) is more writing than (1), much in the same way the complete sentence "I am taking the train to Chicago this weekend" requires more writing than the three words "weekend, Chicago, train". A more favorable comparison is between (2) and the same ideas expressed in English words: If the sum of three times x and five is seven, then subtracting five from both sides and dividing by three yields the value of two-thirds for x, or, perhaps more literally, Assuming that the sum of three times x and five is seven, this implies that three times x is two, and that x is two-thirds.
# SPLANE ## Purpose: Displays a Pole-Zero plot of an S-transform. ## Syntax: SPLANE(b, a) b - A series. The numerator (i.e. zero) coefficients in descending powers of s. a - A series. The denominator (i.e. pole) coefficients in descending powers of s. ## Alternate Syntax: SPLANE(z, p, g) z - A series. The zeros of the S-transform. p - A series. The poles of the S-transform. g - A scalar. The gain of the system. ## Alternate Syntax: SPLANE(c) c - A series. The system coefficients in cascaded biquad form. If c contains 2 columns, the coefficients are assumed to be in direct form, where the first column is b and the second column is a. ## Returns: An XY series, where each pole is plotted as a light red "x" and each zero is plotted as a black "o". ## Example: W1: splane({1}, {1, 0.5}) Displays the s plane with a single pole at s = -0.5. The input is given as a system function in descending terms of s. ## Example: W1: splane({0}, {-0.5}, 1) Displays the same plot as above. The input is given in terms of zeros and poles. ## Example: z = roots({1, 2, 0}); p = roots({1, 0.7, 0.1}); splane(z, p, 1); Displays two real poles and two zeros in the current Window. This result is identical to: splane({1, 2, 0}, {1, 0.7, 0.1}) ## Remarks: For splane(b, a), the input series represent the terms of the rational polynomial H(s) = b(s) / a(s), where s = jω is the complex frequency, N is the number of numerator terms and M is the number of denominator terms. For splane(z, p, g), the gain term must be present, but it does not affect the resulting plot. For splane(c), the input c is assumed to be a single column of coefficients in cascaded bi-quad form, where G is the system gain and bk and ak are the filter coefficients for the kth stage. This is the output format of analog IIR filters designed by DADiSP/Filters and processed by the CASCADE function. If c contains 2 columns, the coefficients are assumed to be in direct form, where the first column is b and the second column is a.
The aspect ratio of the window is set to square so that the unit circle appears circular. Each pole is plotted as a light red "x" and each zero as a black "o". The unit circle is displayed as a solid circle in the current series color. Axis lines are also drawn through the origin. Multiple zeros and poles are labeled with a multiplicity number to the upper right of the symbol. See ZPLANE for a Pole-Zero plot of a discrete system.
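DADiSP's ROOTS and SPLANE are not available outside that environment, but the pole and zero locations quoted in the third example can be checked by hand with the quadratic formula; a Python sketch (not DADiSP code):

```python
import math

def quad_roots(a, b, c):
    # real roots of a*s^2 + b*s + c, coefficients in descending powers of s,
    # matching the coefficient order SPLANE expects
    d = math.sqrt(b * b - 4 * a * c)
    return sorted([(-b - d) / (2 * a), (-b + d) / (2 * a)])

zeros = quad_roots(1, 2, 0)      # numerator {1, 2, 0}   -> s = -2 and s = 0
poles = quad_roots(1, 0.7, 0.1)  # denominator {1, 0.7, 0.1} -> s = -0.5 and s = -0.2
print(zeros, poles)
```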
# 84000000/4 x 10^12 84000000/4 x 10^12 $$\large \frac{84 \cdot 10^{6}}{4} \cdot 10^{12}=21 \cdot 10^{18}$$ $$\large \frac{84 \cdot 10^6}{4 \cdot 10^{12}}=21 \cdot 10^{-6}=0.000\ 021$$ What is correct? Please use brackets. ! asinus Mar 11, 2017 #1 Many of us on the forum (I think including YOU) had this discussion a few months ago and decided this representation was rather ambiguous, but most agreed it is [(8.4 x 10^7)/4] x 10^12, so your first solution is the one considered correct by most of us and most calculators too... use brackets to be clear! ElectricPavlov Mar 11, 2017
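Python's left-to-right evaluation of same-precedence operators mirrors the reading most calculators use, so the two interpretations can be compared directly (my sketch, not from the thread):

```python
a = 84000000 / 4 * 10**12      # left-to-right: (84000000/4) * 10^12
b = 84000000 / (4 * 10**12)    # if the 10^12 is meant to sit in the denominator
print(a)   # 2.1e+19, i.e. 21 * 10^18
print(b)   # 2.1e-05, i.e. 0.000021
```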
Question: edgeR design matrix 0 19 months ago by cafelumiere1220 (United States) wrote: So I have the following samples for differential expression analysis and I'm hoping to see if my design matrix makes sense. There are cell samples from three different donors, each of which went through 2 different cell culturing processes and 5 different treatments. The goal is to look at the differences between different treatments and also between different processes as well. Samples that went through process A have data for all 5 treatments, while samples that went through process B only have data for 2 of the 5 treatments. Is the design matrix here the right construction? Thanks a lot! sampleInfo <- read_csv(<samplemanifest_csvfile>, col_names=TRUE) Donor <- factor(sampleInfo$Donor) Treatment <- factor(sampleInfo$Treatment) Process <- factor(sampleInfo$Process) design <- model.matrix(~0+Treatment+Process+Donor)
Donor  Process  Treatment
P01    A        1
P01    A        2
P01    A        3
P01    A        4
P01    A        5
P02    A        1
P02    A        2
P02    A        3
P02    A        4
P02    A        5
P03    A        1
P03    A        2
P03    A        3
P03    A        4
P03    A        5
P01    B        2
P01    B        5
P02    B        2
P02    B        5
P03    B        2
P03    B        5
modified 19 months ago by Gordon Smyth • written 19 months ago by cafelumiere1220 Answer: edgeR design matrix 0 19 months ago by Gordon Smyth Walter and Eliza Hall Institute of Medical Research, Melbourne, Australia Gordon Smyth wrote: Well, you are asking a biological question rather than a computing question. Personally, I think it is unlikely that Process and Treatment have additive effects. It would be more usual to assume that they might interact.
The most usual limma analysis for this type of experiment would allow general interactions between Treatment and Process: ProcTreat <- paste(sampleInfo$Process, sampleInfo$Treatment, sep=".") design <- model.matrix(~0+ProcTreat+Donor) Note that I have given you almost exactly the same advice before for a slightly different experiment, see: edgeR design matrix and contrasts: how to make contrast between groups that aren't shown in the design matrix columns? Thank you very much! Yes, I was actually reading your previous answer earlier and thought about using what you suggested here (similar to before as well). The only thing, though, is that the scientist also wanted to look at differences "between processes". So I thought maybe I should make the design matrix in a way that lets me form a contrast that directly analyzes the differences between Process A and Process B... thus making the design matrix: model.matrix(~0+Treatment+Process+Donor). - Does this mean that with this contrast (Process A - Process B) I'm not separating treatments but looking at all the treatments together? - If I use model.matrix(~0+ProcTreat+Donor), kind of following the question above, would you think it is more correct to look at differences between processes within the same treatment? On a side note, I see that most of the variability here actually came from different donors. Thanks very much again. Yes, it is generally more meaningful to compare Processes for the same Treatment. Comparing Process B to Process A using your old model was confounding differences between processes with differences between Treatments, because Treatment 1 was not used with Process B. There were other problems as well. There is no such thing as "separating treatments". You can't compare processes as if treatments didn't exist.
# Partial derivative 1. Mar 19, 2009 ### ak123456 1. The problem statement, all variables and given/known data What is wrong with the following argument? Suppose that w=f(x,y) and y=x^2. By the chain rule (for partial derivatives), Dw/Dx=(Dw/Dx)(Dx/Dx)+(Dw/Dy)(Dy/Dx)=Dw/Dx+2x(Dw/Dy), hence 2x(Dw/Dy)=0, and so Dw/Dy=0. 2. Relevant equations 3. The attempt at a solution I think the argument is not right because we cannot write Dx/Dx, and w has a relationship with x and y, so we don't need Dy/Dx and Dx/Dx to get x; we can do it directly. Is there any counterexample? Last edited: Mar 19, 2009 2. Mar 19, 2009 ### Staff: Mentor You can't write dw/dx because f is a function of two variables. The derivatives you have to work with are $$\frac{\partial w}{\partial x}$$ and $$\frac{\partial w}{\partial y}$$ I don't see that it makes any difference that y happens to be equal to x^2. If both x and y were functions of a third variable, say t, then you could talk about dw/dt, but to get it you would still need both partial derivatives and would need to use the chain rule. 3. Mar 19, 2009 ### ak123456 All of them are partial derivatives; I don't know how to type the symbol for the partial derivative. 4. Mar 19, 2009 ### Staff: Mentor I can't see any justification for expanding $$\frac{\partial w}{\partial x}$$ the way you did. Certainly $$\frac{\partial w}{\partial x} = \frac{\partial w}{\partial x} \frac{\partial x}{\partial x}$$ since the latter partial derivative is 1, but I don't see any way that you can add all the other stuff (i.e., the other partials). 5. Mar 19, 2009 ### ak123456 The problem is that I have to prove the argument is false. 6. Mar 19, 2009 ### Staff: Mentor Come up with a counterexample for which the statement isn't true. Pick a function of two variables f(x, y), such as f(x, y) = 3x + 2y, where y = x^2. 7. Mar 19, 2009
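Working out the suggested counterexample f(x, y) = 3x + 2y with y = x^2 numerically shows where the argument breaks: the total derivative of f(x, x^2) and the partial derivative with y held fixed are different objects, so they cannot be cancelled against each other (a sketch; the thread stops before carrying this out):

```python
def f(x, y):
    # the mentor's suggested counterexample: f(x, y) = 3x + 2y, with y = x^2
    return 3 * x + 2 * y

def total_derivative(x, h=1e-6):
    # d/dx of f(x, x^2): both arguments vary with x
    return (f(x + h, (x + h)**2) - f(x - h, (x - h)**2)) / (2 * h)

def partial_x(x, h=1e-6):
    # partial f / partial x: hold y fixed at x^2
    y = x**2
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

x = 1.0
print(total_derivative(x))   # ~7, which is 3 + 2x * 2 = 3 + 4x at x = 1
print(partial_x(x))          # ~3
# the gap is 2x * (partial f / partial y) = 4, so partial f / partial y = 2, not 0
```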
A wave is a disturbance that moves through a medium from particle to particle. It consists of crests and troughs and has an amplitude, a frequency and a wavelength. Ocean tides give us a familiar visualization. The wave formula is given by V = f $\times$ $\lambda$, where v is the wave speed, f is the frequency and $\lambda$ is the wavelength. ## Wave Solved Examples Below are some solved examples on waves you can go through: Question 1: A water wave has a frequency of 330 Hz and a wavelength of 4 m. Calculate its velocity. Solution: Given: frequency of wave f = 330 Hz, wavelength $\lambda$ = 4 m. The velocity of the wave is given by v = f $\lambda$: v = 330 Hz $\times$ 4 m = 1320 m/s = 1.3 km/s. Therefore, the velocity of the water wave is 1.3 km/s. Question 2: A sound wave moves with a speed of 320 m/s and has a wavelength of 30 m. Calculate its frequency. Solution: Given: wave speed v = 320 m/s, wavelength $\lambda$ = 30 m. The frequency of the wave is given by f = $\frac{v}{\lambda}$ = $\frac{320}{30}$ = 10.67 Hz. Therefore, the frequency of the wave is 10.67 Hz.
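Both worked examples use the same relation v = f * lambda, rearranged; a short Python sketch of the two calculations:

```python
def wave_speed(f, lam):
    # v = f * lambda
    return f * lam

def wave_frequency(v, lam):
    # f = v / lambda, the same relation rearranged
    return v / lam

print(wave_speed(330, 4))        # 1320 (m/s), as in Question 1
print(wave_frequency(320, 30))   # ~10.67 (Hz), as in Question 2
```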
# In the Given Figure, AC is the Diameter of the Circle with Centre O. CD and BE Are Parallel. Angle ∠AOB = 80° and ∠ACE = 10°. Calculate: Angle BEC - Mathematics #### Question In the given figure, AC is the diameter of the circle with centre O. CD and BE are parallel. Angle ∠AOB = 80° and ∠ACE = 10°. Calculate: Angle BEC. #### Solution ∠BOC = 180° - 80° = 100° (angles on a straight line). And ∠BOC = 2∠BEC (the angle at the centre is double the angle at the circumference subtended by the same chord), so ∠BEC = (100°)/2 = 50°.
# Half-life of a first-order reaction The half-life of a first-order reaction under a given set of reaction conditions is a constant. If two reactions have the same order, the faster reaction will have a shorter half-life, and the slower reaction will have a longer half-life. The half-life of a first-order reaction is found by substituting [A] = [A]₀/2 (to indicate a half-life) into Equation 14.5.1, which gives $$\ln \dfrac{[A]_0}{[A]_0/2} = \ln 2 = kt_{1/2}$$ The natural logarithm of 2 (to three decimal places) is 0.693. Solution: We can calculate the half-life of the reaction using Equation 14.5.3: $$t_{1/2}=\dfrac{0.693}{k}=\dfrac{0.693}{1.5 \times 10^{-3}\; \text{min}^{-1}}=4.6 \times 10^{2}\;min$$ Thus it takes almost 8 h for half of the cisplatin to hydrolyze.
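The arithmetic of the cisplatin example can be reproduced in a couple of lines, assuming the first-order rate constant k = 1.5 x 10^-3 per minute that the quoted result 4.6 x 10^2 min implies:

```python
import math

k = 1.5e-3                     # assumed rate constant, min^-1
t_half = math.log(2) / k       # t_1/2 = ln(2)/k = 0.693/k for first order
print(t_half)                  # ~462 min
print(t_half / 60)             # ~7.7 h, i.e. "almost 8 h"
```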
# Matrix Exponential for defective eigenvalues I am trying to compute the matrix exponential ($$e^{At}$$) of the following matrix: $$\begin{bmatrix}1 &0&0&0\\1&0&0&0\\1&0&0&0\\1&0&0&0\\\end{bmatrix}$$ The eigenvalues were 0 and 1, but they were defective so I was having difficulty finding the diagonal matrix. I also tried to check if this was a nilpotent matrix, but it was not. I do know that $$A^n=A, \quad n=1,2,\ldots$$ I could use this to expand $$e^{At}=I+At+{A^2t^2\over 2!}+...$$ but I am not sure where to go from there. Since all positive powers of A are the same as $$A$$, $$e^{At}=I+At+{A^2t^2\over 2!}+...= I+ (e^t-1)A$$ Actually, $$A^2=A$$ (just multiply it out) and therefore $$A^n=A^{n-1}=A^{n-2}=\cdots=A$$ for all $$n$$, meaning that $$e^{At}=I+At+A\frac{t^2}{2!}+\cdots=I+A(e^t-1).$$
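The closed form can be checked against a truncated power series; a self-contained Python sketch (my verification, not from the thread):

```python
import math

A = [[1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]
I = [[float(i == j) for j in range(4)] for i in range(4)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def expm_series(t, terms=30):
    # e^{At} = sum over n of (A^n t^n)/n!, truncated
    result = [row[:] for row in I]
    P = [row[:] for row in I]          # running power A^n
    for n in range(1, terms):
        P = matmul(P, A)
        c = t**n / math.factorial(n)
        for i in range(4):
            for j in range(4):
                result[i][j] += c * P[i][j]
    return result

t = 0.7
closed = [[I[i][j] + (math.exp(t) - 1) * A[i][j] for j in range(4)]
          for i in range(4)]
series = expm_series(t)
err = max(abs(series[i][j] - closed[i][j]) for i in range(4) for j in range(4))
print(err)   # tiny, confirming e^{At} = I + (e^t - 1)A for this idempotent A
```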
Four fundamental subspaces. Orthogonality I am having trouble understanding how to implement the four fundamental subspaces. I have read about the subject and understand the meaning but do not understand how to implement it when asked in a question. Can someone help me better grasp this? An example question would be: For each matrix, accurately sketch the four fundamental subspaces on two R^2 plots so that the orthogonal pairs of subspaces are plotted together. This is the grid picture:

A = |1 2|    B = |1 0|
    |3 6|        |3 0|

Thanks! - OK, so start by finding the four fundamental subspaces. What are they? - So I'm looking for the column space, null space, row space and left nullspace, correct? –  user081608 Oct 25 '13 at 5:22 Yes. So, let's start with the column space. What is it for $A$? –  Robert Israel Oct 25 '13 at 6:46 Would the new matrix be rows [1 0] and [0 0]? –  user081608 Oct 25 '13 at 17:05 So we get C(A) = 1, C(A^T) = 1, Nullspace = x2[01], left Null = 1 –  user081608 Oct 25 '13 at 17:19 Column space is by definition the linear span of the columns. In the case of $A$, the second column is $2$ times the first, so the column space is the span of $\pmatrix{1\cr 3\cr}$. –  Robert Israel Oct 27 '13 at 2:03
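Carrying the hint through for A (a rank-1 matrix, so each subspace here is a line in R^2), the orthogonal pairings that go on the same plot can be verified with dot products; a sketch completing the exchange:

```python
# basis vectors for the four fundamental subspaces of A = [[1, 2], [3, 6]]
col_space  = (1, 3)    # span of the first column; the second is 2x the first
row_space  = (1, 2)    # span of the first row; the second row is 3x the first
null_space = (-2, 1)   # solves x + 2y = 0, i.e. A x = 0
left_null  = (-3, 1)   # solves y1 + 3*y2 = 0, i.e. A^T y = 0

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# the orthogonal pairs that share an R^2 plot:
print(dot(row_space, null_space))   # 0: row space is perpendicular to null space
print(dot(col_space, left_null))    # 0: column space is perpendicular to left null space
```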
***
## ERoseM 2 years ago

Find f prime (x) if f(x) = (cosx)/(1+secx)

1. lgbasallote have you tried quotient rule?
2. ERoseM Yes, I did and then I don't know how to distribute? I know I have to use it, it just doesn't fully come out.
3. calculusfunctions Show us how far you got with the quotient rule. Show all the steps you have so far. Can you do that?
4. ERoseM Yes, I get (drawing)
5. calculusfunctions Correct! So far! Now what do you think the next step should be to simplify?
6. calculusfunctions You're only going to simplify the numerator. Never expand the denominator. Do you understand?
7. ERoseM Yes. Should I distribute the -sinx to (1+secx)?
8. calculusfunctions Go ahead and apply the distributive property to the numerator. In other words, expand.
9. calculusfunctions Yes! Go ahead. Show me the next step.
10. ERoseM (drawing) I shouldn't distribute the cosx, correct? Because tanx and secx are being multiplied together?
11. calculusfunctions Yes, expand the entire numerator. Meaning I shouldn't see any more parentheses in the numerator. Go ahead!
12. calculusfunctions After that change each sec x to 1/cos x. Understood? Go ahead.
13. calculusfunctions I mean change each sec x to 1/cos x only in the numerator. Never expand the denominator, like I said earlier. Do you understand?
14. ERoseM Yes. I am just confused on what I should do to the -cosx(tanxsecx)
15. calculusfunctions $-\cos x(\tan x \sec x)=-(\cos x)(\tan x)(\frac{ 1 }{ \cos x })$ Agreed?
16. calculusfunctions So do exactly that. Go ahead.
17. calculusfunctions Now show me the step in its entirety.
18. ERoseM Okay.
19. ERoseM (drawing)
20. 
calculusfunctions $f \prime(x)=\frac{ -\sin x(1+\sec x)-\cos x(\sec x \tan x) }{(1+\sec x)^{2} }$ $f \prime(x)=\frac{ -\sin x -\sin x \sec x -\cos x \sec x \tan x }{ (1+\sec x)^{2} }$ $f \prime(x)=\frac{ -\sin x -\sin x(\frac{ 1 }{ \cos x })-\cos x(\frac{ 1 }{ \cos x })\tan x }{ (1+\sec x)^{2} }$ Take a moment to look at this; the first couple of steps are exactly what you did. Do you understand what I did in the third step? Now first tell me what you think you should do.
21. calculusfunctions You are correct in that you should change the sin x/cos x to tan x. Go ahead and show me the next step.
22. ERoseM Can you multiply sinx and put it over cosx to get tangent on the left too?
23. calculusfunctions Of course! $\sin x(\frac{ 1 }{ \cos x })=\frac{ \sin x }{ \cos x }=\tan x$
24. ERoseM (drawing)
25. calculusfunctions Have more confidence in yourself, would ya! You're doing just fine. Now continue with confidence. And even if you're wrong, so what? Being wrong is part of learning and part of life. So never be afraid to ask questions even if you think you're wrong. Now could you please show me the remaining steps, with confidence!
26. calculusfunctions Not quite. There is an error in the numerator but it just seems to be a careless one. Retrace your steps to see if you can spot the error. Otherwise show me your previous step so that I can point it out to you.
27. ERoseM (drawing) Thank you for all your help
28. calculusfunctions No! Now look back carefully at the third step of the partial solution I posted above. Is sin x multiplied by tan x? I don't think so.
29. calculusfunctions The simplified answer should be $f \prime(x)=\frac{ -\sin x -2\tan x }{ (1+\sec x)^{2} }$
30. ERoseM (drawing)
31. calculusfunctions So you understand where you went wrong?
32. calculusfunctions Do you understand how to do this now?
33. ERoseM Yes, I think so, it's just such a long process that I feel like I make mistakes so easily.
34. 
ERoseM Thank you so much for your help, thanks for following the process with me!
35. calculusfunctions By the way, if the question doesn't ask you to simplify then it is not wrong to even leave your derivative unsimplified. However, every teacher is different so you may want to consult with yours. I prefer students to simplify to a certain degree unless I specifically ask not to simplify. But as I said, every teacher is different.
36. calculusfunctions My pleasure. If you'd like any more help, let me know.
37. ERoseM Thank you so much!!
38. calculusfunctions Welcome anytime!
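For anyone who wants to double-check the final answer without redoing the algebra, here is a quick numerical spot-check (my own, not from the thread) that the claimed derivative matches a central-difference approximation:

```python
import math

# Check that d/dx [cos x / (1 + sec x)] = (-sin x - 2 tan x) / (1 + sec x)^2.
def f(x):
    return math.cos(x) / (1.0 + 1.0 / math.cos(x))

def fprime_claimed(x):
    sec = 1.0 / math.cos(x)
    return (-math.sin(x) - 2.0 * math.tan(x)) / (1.0 + sec) ** 2

x, h = 0.5, 1e-6
central_diff = (f(x + h) - f(x - h)) / (2.0 * h)   # numerical derivative
assert abs(central_diff - fprime_claimed(x)) < 1e-5
```

Checking one or two sample points this way is a cheap habit for catching the kind of careless sign/term slips discussed above.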
***
# Divergence theorem

In H. M. Schey, Div, Grad, Curl, and All That, II-25:

Use the divergence theorem to show that $$\int\int_S \hat{\mathbf{n}}\,dS=0,$$ where $$S$$ is a closed surface and $$\hat{\mathbf{n}}$$ the unit vector normal to the surface $$S$$.

How should I understand the l.h.s.? Coordinate-wise? And the r.h.s. is not 0 but the zero vector?

## Answers and Replies

The divergence is the total flux through the region. It is not a vector. Your calculation of divergence integrates the inner product of the flow with the unit normal to the surface. This inner product is a scalar.

> The divergence is the total flux through the region. It is not a vector. Your calculation of divergence integrates the inner product of the flow with the unit normal to the surface. This inner product is a scalar.

Right. However, on the l.h.s. we have a vector $$\hat{\mathbf{n}}$$ and not an inner product, which would be a scalar. I can integrate only vector-variable scalar-valued functions on a surface and not a vector-valued function. So what does it mean?

If the surface is closed, intuitively, the unit normals should "sum" to the 0 vector (generalizing the fact that on a "nice" closed curve, summing the normals should lead to no displacement). This should then be proven properly.

> Right. However, on the l.h.s. we have a vector $$\hat{\mathbf{n}}$$ and not an inner product, which would be a scalar. I can integrate only vector-variable scalar-valued functions on a surface and not a vector-valued function. So what does it mean?

The integral of the unit normal is just a vector, the result of a Riemann sum of vectors. But this is not the divergence theorem.

> If the surface is closed, intuitively, the unit normals should "sum" to the 0 vector (generalizing the fact that on a "nice" closed curve, summing the normals should lead to no displacement). This should then be proven properly.

I see. Yes, indeed, the answer -intuitively- is the 0 vector.
It means $$\int\int_S \hat{\mathbf{n}}\,dS=\left( \int\int_S n_1\,dS,\int\int_S n_2\,dS,\int\int_S n_3\,dS\right),$$ where $$\hat{\mathbf{n}}=(n_1,n_2,n_3)$$. Now $$n_1=\left\langle \mathbf{i},\hat{\mathbf{n}} \right\rangle$$ and here we can use the divergence theorem to obtain $$\int\int_S n_1\,dS=\int\int_S \left\langle \mathbf{i},\hat{\mathbf{n}} \right\rangle \,dS=\int_V \operatorname{div} \mathbf{i}\, dV=0,$$ because $$\operatorname{div} \mathbf{i}=0$$. Nice result. Thanks for your help.

Last edited:

> I see. Yes, indeed, the answer -intuitively- is the 0 vector.

> It means $$\int\int_S \hat{\mathbf{n}}\,dS=\dots$$

No. Your three integrals are integrals of vectors, not inner products.

> No. Your three integrals are integrals of vectors, not inner products.

Sorry, I absolutely don't understand what you think. Please give more details, for example point out the wrong step in the calculations. Thanks.

I'm just saying that n1, n2 and n3 are inner products and thus are scalars. Originally you wanted to integrate a vector. Splitting into components still requires keeping the basis vectors in the integral. I don't think you did this. Instead I think you substituted scalars. Unless I don't understand your post.

> I'm just saying that n1, n2 and n3 are inner products and thus are scalars.

Agree.

> Originally you wanted to integrate a vector.

Agree.

> Splitting into components still requires keeping the basis vectors in the integral.

Of course.

> I don't think you did this.
Certainly, a similar calculation gives $$\int\int_S n_2\,dS=\int\int_S \left\langle \mathbf{j},\hat{\mathbf{n}} \right\rangle \,dS=\int_V \operatorname{div} \mathbf{j}\, dV=0,$$ and $$\int\int_S n_3\,dS=\int\int_S \left\langle \mathbf{k},\hat{\mathbf{n}} \right\rangle \,dS=\int_V \operatorname{div} \mathbf{k}\, dV=0.$$

> Instead I think you substituted scalars. Unless I don't understand your post.

The final result, the value of the original integral, is $$\mathbf{0}$$.
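For what it's worth, the identity is easy to confirm numerically. The following sketch (mine, not from the thread) integrates the unit normal over the unit sphere with a midpoint rule in spherical coordinates, where $\hat{\mathbf{n}}=\hat{r}$ and $dS=\sin\theta\,d\theta\,d\varphi$:

```python
import numpy as np

# Midpoint-rule check that the surface integral of the unit normal
# over the unit sphere is the zero vector.
n_theta, n_phi = 200, 400
theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
dth, dph = np.pi / n_theta, 2.0 * np.pi / n_phi

T, P = np.meshgrid(theta, phi, indexing="ij")
nx = np.sin(T) * np.cos(P)          # components of n = r-hat
ny = np.sin(T) * np.sin(P)
nz = np.cos(T)
w = np.sin(T) * dth * dph           # surface element dS

total = np.array([(nx * w).sum(), (ny * w).sum(), (nz * w).sum()])
assert np.allclose(total, 0.0, atol=1e-8)          # the zero vector
assert abs(w.sum() - 4.0 * np.pi) < 1e-2           # sanity: area = 4*pi
```

The component-wise cancellation seen in the sums is exactly the symmetry that the divergence-theorem argument above captures analytically.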
***
# Why "S" is used as a symbol of entropy? May 5, 2016 Most likely in honour of "Sadi Carnot". #### Explanation: It is generally believed that Rudolf Clausius chose the symbol "S" to denote entropy in honour of the French physicist Nicolas Sadi-Carnot. His 1824 research paper was studied by Clausius over many years.
***
# Command: thru¶ ## COMMAND¶ thru – expand Humdrum abbreviated format representation to through-composed format ## SYNOPSIS¶ thru [-v version] [inputfile ...] ## DESCRIPTION¶ The thru command expands abbreviated format Humdrum representations to through-composed formats in which input passages are rearranged and output according to some specified expansion list. Musical scores frequently contain notational devices such as repeat signs and Da Capos which permit more succinct renderings of a given document. Humdrum section labels and expansion-lists provide parallel mechanisms for encoding abbreviated format files. The thru command is normally used to expand various repetition devices. However, depending on the input, one of several expansions (dubbed versions) may be possible. Hence, thru is also useful for selecting a particular edition, performance, or interpretation from a composite input. The input to thru must contain one or more sections identified by section labels. A section is a set of contiguous records. A section label is a tandem interpretation that consists of a single asterisk, followed by a greater-than sign, followed by a keyword that labels the section, e.g. *>Exposition *>Trio *>Refrain *>2nd ending *>Coda The section label keywords may contain any sequence of the following ASCII characters: the upper- or lower-case letters A-Z, the numbers 0 to 9, the underscore (_), dash (-), period (.), plus sign (+), octothorpe (#), tilde (~), at-sign (@), or space. All other characters are forbidden. A section label interpretation may occur anywhere in a Humdrum input, however, if more than one spine is present in a passage, identical section labels must appear concurrently in all spines. Sections begin with a section label and end when either another section label is encountered, all spines are assigned new exclusive interpretations, or all spines terminate. 
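As a rough illustration (my own sketch, not part of the Humdrum Toolkit), the legal section-label character set described above can be expressed as a simple pattern:

```python
import re

# Validate a section-label line against the legal character set listed above:
# letters, digits, underscore, dash, period, plus, #, ~, @, and space.
LABEL = re.compile(r"^\*>[A-Za-z0-9_\-.+#~@ ]+$")

assert LABEL.match("*>2nd ending")
assert LABEL.match("*>Coda")
assert not LABEL.match("*>bad/label")   # '/' is not in the legal set
```

A pattern like this is handy when pre-checking encodings before handing them to thru.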
An expansion-list is a tandem interpretation that takes the form of a single asterisk followed by a greater-than sign, followed by an optional label, followed by a list of section-labels enclosed in square brackets and separated by commas. Examples are given below. The first and second expansion-lists identify two section-labels in their lists. The last three expansion-lists have been labelled ‘long’, ‘1955’ and ‘Czerny_edition’ respectively.

*>[section1,2nd ending]
*>long[Exposition,Exposition]
*>1955[Aria]
*>Czerny_edition[refrain]

The thru command outputs each section in the order specified in the expansion list. If more than one expansion list is present in a file, then the desired version is indicated on the command line via the -v option. (See EXAMPLES.)

## OPTIONS¶

The thru command provides the following options:

-h displays a help screen summarizing the command syntax
-v version expand the encoding according to expansion-list label version

Options are specified in the command line.

## EXAMPLES¶

The following examples illustrate the operation of the thru command. Consider the following simple file:

**example	**example
*>[A,B,A,C]	*>[A,B,A,C]
*>A	*>A
data-A	data-A
*>B	*>B
data-B	data-B
*>C	*>C
data-C	data-C
*-	*-

This example contains just three data records – each of which has been labelled with its own section label. The file contains a single unlabelled expansion list which indicates that the ‘A’ section should be repeated between the ‘B’ and ‘C’ sections. The following command:

thru inputfile

would produce the following “through-composed” output:

**example	**example
*thru	*thru
*>A	*>A
data-A	data-A
*>B	*>B
data-B	data-B
*>A	*>A
data-A	data-A
*>C	*>C
data-C	data-C
*-	*-

Notice that the expansion-list record has been eliminated from the output.
A *thru tandem interpretation is added to all output spines immediately following each instance of an exclusive interpretation in the input. If *thru tandem interpretations are already present in the input, they are discarded (thus, running a file through thru twice will not change the file in any way). Also notice that there are now two sections in the output sharing the same label (*>A). This duplication of section-labels is not permitted in abbreviated-format encodings.

Consider the following more complex example. Imagine a Da Capo work in which a conventional performance proceeds as follows: a first section (‘A’) is performed twice, followed by first and second endings – labelled ‘B’ and ‘C’ respectively. A subsequent section ensues (‘D’), followed by a return to the first section (‘A’). This first section is played just once, immediately followed by a final coda section (‘E’). Imagine also a hypothetical performance of this work in which Murray Perahia makes three changes: Perahia repeats the ‘D’ section, he repeats the ‘A’ section when returning to the Da Capo – re-using the first ending before continuing to the coda following the repetition. Finally, Perahia has improvised an introduction to the work. Both the conventional interpretation and the hypothetical Perahia interpretation can be represented in the same encoded file as follows.

**example
*>Perahia[X,A,B,A,C,D,D,A,B,A,E]
*>Conventional[A,B,A,C,D,A,E]
*>X
data-X
*>A
data-A
*>B
!! 1st ending
data-B
*>C
!! 2nd ending
data-C
*>D
data-D
*>E
!! Coda
data-E
*-

The hypothetical Perahia version can be recreated by invoking the command:

thru -v Perahia inputfile

Alternatively, the “conventional” interpretation of the Da Capo structure could be produced by the command:

thru -v Conventional inputfile

In each case, the thru command will expand the input file according to the designated label for the expansion-lists. Note that there is no limit to the number of expansion-lists that may appear in a Humdrum file.
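In pseudocode terms, the expansion performed by thru amounts to playing back labelled sections in expansion-list order. The following Python sketch (mine, not the Toolkit's actual awk implementation) illustrates the idea on the simple A/B/A/C example above:

```python
# Minimal model of thru's expansion step: sections is a mapping from
# section label to its data records; expansion is the ordered label list
# taken from the selected expansion-list.
def thru(sections, expansion):
    out = []
    for label in expansion:
        out.append("*>" + label)      # re-emit the section label
        out.extend(sections[label])   # then the section's records
    return out

sections = {"A": ["data-A"], "B": ["data-B"], "C": ["data-C"]}
result = thru(sections, ["A", "B", "A", "C"])
assert result == ["*>A", "data-A", "*>B", "data-B",
                  "*>A", "data-A", "*>C", "data-C"]
```

The real command additionally handles multiple spines, inserts the *thru tandem interpretation, and removes the expansion-list records, as described above.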
## PORTABILITY¶

DOS 2.0 and up, with the MKS Toolkit. OS/2 with the MKS Toolkit. UNIX systems supporting the Korn shell or Bourne shell command interpreters, and revised awk (1985).

## SEE ALSO¶

strophe (4), yank (4)

## WARNINGS¶

Humdrum output is not guaranteed with the thru command. In order to assure Humdrum output, it is necessary to have the same number of active spines at each point where sections are joined together in the expanded output. In addition, the exclusive interpretations must match where sections are joined. Note that if an expansion list appears after some initial data records, then thru causes the initial material to be output before expanding the document according to the expansion list. No two expansion lists can be identified using the same version label. If two sections contain the same section label in the abbreviated document, then subsequent sections having the same label are ignored when expanded by thru. Inputs that contain different section labels or expansion-lists in concurrent spines are illegal and will produce an error.
***
## Chapter Chosen

Photosynthesis in Higher Plants

What led to the evolution of C4 plants or the C4 pathway of photosynthesis? Describe in detail.

The C4 pathway is an adaptation to tolerate strong light, high temperature, high quantity of oxygen, etc. The evolution of C4 plants from C3 plants can be described as follows:
1. C3 plants undergo photorespiration in strong light or at high temperature, which is a wasteful process in which no energy is produced.
2. In C3 plants RuBP carboxylase gets denatured in strong light, but in C4 plants it does not.
3. C4 plants have Kranz anatomy. In this case mesophyll cells have grana thylakoids and bundle sheath cells have agranal thylakoids present in the stroma of the chloroplast.
4. In C4 plants, the light reaction occurs in mesophyll cells and RuBisCO is found in bundle sheath cells, in which CO2 fixation occurs. Thus, in C4 plants RuBisCO remains protected from sunlight and is also protected from oxygenation, because in bundle sheath cells only the dark reaction occurs.

Thus, to avoid photorespiration, oxygenation of RuBisCO, etc., C4 plants evolved from C3 plants. Examples of C4 plants are maize, sugar cane, grasses, etc.

Name the first stable compound formed in the C3 cycle.
3-phosphoglyceric acid (PGA)

Where does CO2 assimilation occur in C4 plants?
In bundle sheath cells.

What are grana?
Grana are the stacks of thylakoids embedded in the stroma of a chloroplast. They are the sites of the light reaction of photosynthesis.

What is stroma?
Stroma is the matrix or cytoplasm of the chloroplast where the dark reaction of photosynthesis takes place.

Why is RuBisCO not affected by O2 in C4 plants?
O2 is formed in mesophyll cells while RuBisCO is present in bundle sheath cells.
Therefore RuBisCO does not come in contact with O2 and is thus not affected by it.
***
# Tag Info

1 vote

### Savitzky-Golay filtering (not smoothing) in real time

I found the answer from playing around. They are easily computed with scipy.savgol_coeffs e.g. ...

### Filtering out correlated data from dropouts

If the dropouts are truly time coincident, can't you just apply a linear discriminator to the bottom plot there? Figure out where the threshold is; is it a vertical line at 70.355? Or a somewhat ...
• 323

1 vote
Accepted

### Butterworth filter cutoff attenuation is not exactly 0.707 (-3 dB)

The filter needs time to settle down. This settling process altered the beginning of the time-domain data and created the small difference. I took the second half of the time-domain filtered data and got a ratio ...
• 11

1 vote

### Butterworth filter cutoff attenuation is not exactly 0.707 (-3 dB)

What you see there are margin issues. By not applying a window function to your signal before the FFT, you effectively convolve your spectrum with an $\text{si}$ function, which leads to artifacts ...
• 1,619

Top 50 recent answers are included
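On the real-time Savitzky-Golay point: the key idea is to evaluate the local polynomial fit at the newest sample of a trailing window rather than at its center. Here is a numpy-only sketch of that (my own, mirroring what `scipy.signal.savgol_coeffs(..., pos=...)` computes; the function name is mine):

```python
import numpy as np

# Causal Savitzky-Golay weights: least-squares fit a polynomial to the
# last `window` samples and evaluate it at the newest sample (x = 0).
def causal_savgol_coeffs(window, polyorder):
    x = np.arange(-(window - 1), 1)                 # oldest ... newest (0)
    V = np.vander(x, polyorder + 1, increasing=True)
    # The fitted value at x = 0 is the constant term of the LS polynomial,
    # so the weights are the first row of the pseudoinverse.
    return np.linalg.pinv(V)[0]                     # weights, oldest-first

c = causal_savgol_coeffs(7, 2)

# Defining property: a degree-2 filter reproduces any quadratic signal
# exactly at the newest point, with zero lag in the polynomial sense.
t = np.arange(50, dtype=float)
y = 0.3 * t**2 - 2.0 * t + 5.0
smoothed_last = c @ y[-7:]
assert np.isclose(smoothed_last, y[-1])
```

In a streaming loop you would dot `c` with the most recent `window` samples each time a new sample arrives.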
***
# zbMATH — the first resource for mathematics

Nonlocal conjugate type boundary value problems of higher order.
(English) Zbl 1181.34025
In this interesting paper, the author studies the nonlocal boundary value problem
$$u^{(n)}(t)+g(t)f(t,u(t))=0,\quad t\in(0,1),$$
$$u^{(k)}(0)=0,\ 0\le k\le n-2,\qquad u(1)=\alpha[u],$$
where $\alpha[\cdot]$ is a linear functional on $C[0,1]$ given by a Riemann-Stieltjes integral, namely
$$\alpha[u]=\int_0^1 u(s)\,dA(s),$$
with $dA$ a signed measure. This formulation is quite general and covers classical $m$-point boundary conditions and integral conditions as special cases. The author proves, under suitable growth conditions on the nonlinearity $f$, existence of multiple positive solutions. Interesting features of this paper are that the theory is illustrated with explicit examples, including a 4-point problem with coefficients with both signs, and that all the constants that appear in the theoretical results are explicitly determined. The methodology involves classical fixed point index theory and makes extensive use of the results in J. R. L. Webb and G. Infante [NoDEA, Nonlinear Differ. Equ. Appl. 15, No. 1–2, 45–67 (2008; Zbl 1148.34021)], J. R. L. Webb and K. Q. Lan [Topol. Methods Nonlinear Anal. 27, 91–115 (2006; Zbl 1146.34020)] and J. R. L. Webb and G. Infante [J. Lond. Math. Soc., II. Ser. 74, No. 3, 673–693 (2006; Zbl 1115.34028)].

##### MSC:
34B10 Nonlocal and multipoint boundary value problems for ODE
34B18 Positive solutions of nonlinear boundary value problems for ODE
34B27 Green functions
47N20 Applications of operator theory to differential and integral equations

##### Keywords:
positive solutions; fixed point index; cone
***
# How to Handle frame rates and synchronizing screen repaints

I would first off say sorry if the title is worded incorrectly. Okay, now let me give the scenario. I'm creating a 2-player fighting game. An average battle will include a map (moving/still) and 2 characters (which are rendered by redrawing a varying amount of sprites one after the other). Now at the moment I have a single game loop limiting me to a set number of frames per second (using Java):

```java
Timer timer = new Timer(0, new AbstractAction() {
    @Override
    public void actionPerformed(ActionEvent e) {
        long beginTime;  // the time when the cycle began
        long timeDiff;   // the time it took for the cycle to execute
        int sleepTime;   // ms to sleep (< 0 if we're behind)
        int fps = 1000 / 40;

        beginTime = System.nanoTime() / 1000000;
        // execute loop to update, check collisions and draw
        gameLoop();
        // calculate how long the cycle took
        timeDiff = System.nanoTime() / 1000000 - beginTime;
        // calculate sleep time
        sleepTime = fps - (int) (timeDiff);
        if (sleepTime > 0) { // if sleepTime > 0 we're OK
            ((Timer) e.getSource()).setDelay(sleepTime);
        }
    }
});
timer.start();
```

In gameLoop() characters are drawn to the screen (a character holds an array of images which consists of its current sprites); every gameLoop() call will change the character's current sprite to the next and loop if the end is reached. But as you can imagine, if a sprite animation is only 3 images in length, then calling gameLoop() 40 times will cause the character's movement to be drawn 40/3 ≈ 13 times. This causes a few minor anomalies in the sprites for some characters. So my question is: how would I go about delivering a set amount of frames per second when I have 2 characters on screen with a varying amount of sprites?

It may not be a direct answer to your question, but it is my advice ;) Do not move your sprites based on the number of frames, but do it based on wall-clock time. Why?
Oh well, it is very easy for a game to have a variable frame-rate, so imagine your character is jumping from platform to platform, and the PC lags for some reason. If his movement is based on frame-rate, he could slow down in mid-jump and die :( Prepare to be murdered by the player the next day. So, the idea behind time-based game logic is: no matter the FPS the game is currently running at, the results will be the same for the user, with the only variation being how "janky" the screen looks :P This post (Link!!!) may help you implement time-based game logic o/ Hope it was helpful and sorry if it was not the answer you seek... Edit: There is a question in this site's FAQ regarding this, a very interesting read: When should I use a fixed or variable time step? • +1 Thank you this sounds like a great idea, and here I was thinking about synchronizing threads and timers :O – David Kroukamp Nov 8 '12 at 9:17 You might want to consider using a variable frame rate. This way, you can cap the framerate at whatever the monitor's refresh rate is without changing the speed of the game. AranHase has a valid point about glitches resulting from variable frame rates, but you can avoid this by skipping the update loop if the time between the current and previous frame is too high. You're using Sleep calls to control your framerate - this is a BAD idea and you should get a proper timer in as mentioned in the other responses; I'll elaborate a bit on the use of Sleep for this. The big problem with Sleep is that on most platforms it only guarantees a minimum time to sleep for - the actual time spent in a Sleep may be longer than that. This is true even if you use Sleep with a high resolution timer. There are plenty of questions and answers on this site relating to this; see https://gamedev.stackexchange.com/search?q=sleep and see especially http://www.altdevblogaday.com/2012/06/05/in-praise-of-idleness/ for a more in-depth discussion on the evilness of Sleep (among other things).
So the main thing to expect if using Sleep is framerate anomalies galore. What Sleep is great for is reducing CPU usage, and if that goal is more important to you than an even framerate, then by all means go for it, but not before considering if you have more appropriate solutions available. • using a spinlock with nanotime can get you very accurate timing, but it will burn up one processor. – Ray Tayek Nov 8 '12 at 21:34
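Following the time-based advice above, here is a minimal sketch (class and method names are mine, not from the question) of choosing a sprite image from elapsed wall-clock time rather than from the number of gameLoop() calls:

```java
// Sketch: pick the sprite image from elapsed wall-clock time,
// so animation speed no longer depends on how often gameLoop() runs.
class SpriteAnimation {
    private final int frameCount;         // number of images in the sprite
    private final double frameDurationMs; // how long each image is shown

    SpriteAnimation(int frameCount, double frameDurationMs) {
        this.frameCount = frameCount;
        this.frameDurationMs = frameDurationMs;
    }

    // Index of the image to draw, given ms since the animation started.
    int frameAt(double elapsedMs) {
        return (int) (elapsedMs / frameDurationMs) % frameCount;
    }
}
```

Each call to gameLoop() would then compute the elapsed milliseconds from System.nanoTime() and ask every character's animation which image to draw; a 3-image sprite shown for 100 ms per image cycles at the same visible speed whether the loop manages 40 or 60 iterations per second.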
# [NTG-pdftex] images in formats Martin Schröder martin at oneiros.de Wed Jun 27 14:48:11 CEST 2007 2007/6/27, David Kastrup <dak at gnu.org>: > "Martin Schröder" <martin at oneiros.de> writes: > > 2007/6/27, David Kastrup <dak at gnu.org>: > >> For example, preview-latex uses mylatex.ltx to dump the state of the > >> TeX at \begin{document} time. Not allowing any image references (like > >> in \savebox commands) before that point of time is a rather brutal > >> restriction. > > > > Why do you need it > > What? preview-latex? mylatex.ltx? \savebox > > and what do you gain by using \savebox? > > Uh, it is _there_. For a reason. Saying that it (and similar > functionality) should not get used anymore is creating a rather sordid > restriction and backward incompatibility. > > > The speed increase is most likely minimal > > The speed increase of what is most likely minimal? Of dumping a > format with the preamble? Absolutely not. It is very relevant in > preview-latex, for common editing work on single formulas easily a > factor of 5. And we are talking about an interactive editing task and > its response time here. The speed increase of using \savebox. > > and the document needs to read the image anyway -- unless you want > > to dump fragments of pdf code into the format. > > In DVI mode, a pointer to the image gets dumped in the form of a > special. A similarly sufficient amount of information would have to > get there in PDF mode. > > > If we expanded the dvi model we would dump the meta information > > about pdf things and then undump them later. This needs code for > > dumping and undumping this meta information. And > > \immediate\pdfximage would still fail. [...] > > Don't forget: pdfTeX is one-pass, while TeX->dvips is two-pass. > > Where is the relevance? It was not designed to make two-pass-like solutions (which we are discussing here) work; instead, the two passes are intermingled. 
The "sufficient amount of information" aka "meta information" is most likely not in the code in one piece and there is currently no way to dump and undump that. And a proper solution would not only handle \pdfximage, but also \pdfobj, \pdfxform, ... Best Martin
# Installing from source

If you are interested in contributing to Matplotlib development, running the latest source code, or just like to build everything yourself, it is not difficult to build Matplotlib from source. First you need to install the Dependencies. A C compiler is required. Typically, on Linux, you will need gcc, which should be installed using your distribution's package manager; on macOS, you will need Xcode; on Windows, you will need Visual Studio 2015 or later. The easiest way to get the latest development version to start contributing is to go to the git repository and run: git clone https://github.com/matplotlib/matplotlib.git or: git clone git@github.com:matplotlib/matplotlib.git If you're developing, it's better to do it in editable mode. The reason is that pytest's test discovery only works for Matplotlib if installation is done this way. Also, editable mode allows your code changes to be instantly propagated to your library code without reinstalling (though you will have to restart your python process / kernel): cd matplotlib python -m pip install -e . If you're not developing, it can be installed from the source directory with a plain pip call (just replace the last step): python -m pip install . To run the tests you will need to install some additional dependencies: python -m pip install -r requirements/dev/dev-requirements.txt Then, if you want to update your Matplotlib at any time, just do: git pull When you run git pull, if the output shows that only Python files have been updated, you are all set. If C files have changed, you need to run pip install -e . again to compile them.
# Rate of convergence to stationarity of the system $M/M/N/N+R$

Erik A. van Doorn

Research output: Book/Report › Report › Professional

### Abstract

We consider the $M/M/N/N+R$ service system, characterized by $N$ servers, $R$ waiting positions, Poisson arrivals and exponential service times. We discuss representations and bounds for the rate of convergence to stationarity of the number of customers in the system, and study its behaviour as a function of $R$, $N$ and the arrival rate $\lambda$, allowing $\lambda$ to be a function of $N$.

Original language: Undefined
Place of publication: Enschede
Publisher: University of Twente, Department of Applied Mathematics
Number of pages: 18
Publication status: Published - Jul 2010

### Publication series

Name: Memorandum / Department of Applied Mathematics
Publisher: University of Twente, Department of Applied Mathematics
No.: 1923
ISSN: 1874-4850

### Keywords

• EWI-18166
• MSC-90B22
• METIS-270921
• MSC-60K25
• IR-72428
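For orientation, the number-in-system process described in the abstract is a finite birth-death chain on states $0,\dots,N+R$, and the stationary distribution to which the report's convergence rate refers follows from detailed balance. A minimal sketch (the function name is my own):

```python
def mmnnr_stationary(lam, mu, N, R):
    """Stationary distribution of the M/M/N/N+R queue.

    States k = 0..N+R count customers in the system; arrivals occur at
    rate lam (while k < N+R) and services complete at rate min(k, N)*mu.
    Detailed balance gives pi_k proportional to prod lam / (min(j, N)*mu).
    """
    weights = [1.0]
    for k in range(1, N + R + 1):
        weights.append(weights[-1] * lam / (min(k, N) * mu))
    total = sum(weights)
    return [w / total for w in weights]
```

For example, with one server and no waiting room (N = 1, R = 0) this reduces to the two-state Erlang loss system.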
Grouping positive & negative reviews and counting the words

Can somebody help me make the code below smarter and free of redundancy? The code does exactly what I want, but I believe there are many redundant loops and arrays.

from collections import Counter
import csv
import re

with open("/Users/max/train.csv", 'r') as file:
    reviews = list(csv.reader(file))

    def get_texts(reviews, score):
        texts = []
        texts.append([r[0].lower() for r in reviews if r[1] == str(score)]) ;
        return texts

    def getWordListAndCounts(text):
        words = []
        for t in text:
            for tt in t:
                for ttt in (re.split("\s+", str(tt))):
                    words.append(str(ttt))
        return Counter(words)

    negative_text_list = get_texts(reviews, -1)
    positive_text_list = get_texts(reviews, 1)
    negative_wordlist_counts = getWordListAndCounts(negative_text_list)
    positive_wordlist_counts = getWordListAndCounts(positive_text_list)

    print("Negative text sample: {0}".format(negative_text_list))
    print("Negative word counts: {0}".format(negative_wordlist_counts))
    print("Positive text sample: {0}".format(positive_text_list))
    print("Positive word counts: {0}".format(positive_wordlist_counts))

Sample contents in train.csv are as below.

i like google,1
Apple is cute,1
Microsoft booo,-1

Your code is actually really good. But:

• getWordListAndCounts should be get_word_list_and_counts. This is because Python has a style guide called PEP8.
• You use a ; but for no reason. It's best to never use it.
• In get_texts you do [].append([...]). This can be: [[...]].
• In get_texts you have r; it would be better to use review, as this makes it easier to read. The same goes for all the ts in getWordListAndCounts.
• You can make getWordListAndCounts a list/generator comprehension.
• You seem to mix spaces and tabs. This leads to highly breakable code. (Look at your question)
• Don't leave trailing whitespace after code. It's really annoying, and violates PEP8.

Overall I would say your code is pretty decent.
I would re-write getWordListAndCounts to this for example: def get_word_list_and_counts(text): # Rename the t's I have no context on what they are. return Counter( str(ttt) for t in text for tt in t for ttt in (re.split("\s+", str(tt))) ) To remove the [].append, you can use the following if it has to be a list. def get_texts(reviews, score): return [r[0].lower() for r in reviews if r[1] == str(score)] As no-one has said it: [[...]] in get_texts seems redundant, adds complexity, and makes the algorithm worse in memory, from $O(1)$ with generator comprehensions, to $O(n)$. The way you call the code also shows us that there is potential to reduce the complexity. getWordListAndCounts(get_texts(reviews, 1)) To show the change to get $O(1)$: def get_texts(reviews, score): return ( review[0].lower() for review in reviews if review[1] == str(score) # use ferada's change ) def get_word_list_and_counts(text): return Counter( word for line in text for word in (re.split("\s+", line)) ) From the above you can clearly see it could be a single function, if you wish, which also removes one of the fors in get_word_list_and_counts. As one function it would look like this: def get_word_list_and_counts(reviews, score): return Counter( word for review in reviews if review[1] == str(score) for word in re.split("\s+", review[0].lower()) ) That with ferada's change is probably the smallest and most concise you'll get it. Also the second for statement will only run if the if is true.
with r[0].lower() for r in reviews if r[1] == str(score)) – Peilonrayz Oct 9 '15 at 12:23 I agree, looks good. It would really help to have some sample input btw. • I can't tell whether either of the strs in get_word_list_and_counts are necessary. Even with something like get_word_list_and_counts([["hue hue hue"], [1, 2]]) I get a result like Counter({'hue': 3, '2': 1, '1': 1}), which is the same as if the [1, 2] was instead "1 2". The following complaints are mostly micro-optimisations, but it wouldn't hurt to keep them in mind for future projects. • get_texts is repeatedly calling str, that can be avoided by storing the string representation or passing in a string in the first place. def get_texts(reviews, score): score = str(score) return [[r[0].lower() for r in reviews if r[1] == score]] • There's no reason not to do the whole process in a loop while reading from the CSV file. That will keep memory consumption down and make the program work even on much bigger files. E.g.: with open("...") as file:
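For reference, the single-function version proposed in the first answer can be run directly against the sample rows from train.csv quoted in the question:

```python
import re
from collections import Counter

def get_word_list_and_counts(reviews, score):
    # Final form from the answer above: filter rows by score, then
    # count whitespace-separated words of the lower-cased review text.
    return Counter(
        word
        for review in reviews
        if review[1] == str(score)
        for word in re.split(r"\s+", review[0].lower())
    )

# The sample rows from train.csv, as csv.reader would yield them.
reviews = [["i like google", "1"], ["Apple is cute", "1"], ["Microsoft booo", "-1"]]
positive = get_word_list_and_counts(reviews, 1)
negative = get_word_list_and_counts(reviews, -1)
```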
# Chapter 1, Problem 2BLC

### Marketing: An Introduction (13th E...

13th Edition
Gary Armstrong + 1 other
ISBN: 9780134149530

Summary

Introduction

To discuss: The way J Airways manages its relationship with customers and its management strategy.

Introduction: Customer relationship management is one of the most important concepts of marketing in recent trends. It builds a smooth relationship with customers by delivering customer satisfaction and value.

## Explanation
# Square feet to square meters

This free conversion calculator will convert square feet to square meters, i.e. sq ft to m^2. It performs correct conversion between the two measurement scales. For example, for an area of 15 square feet: $A_{m^2} = 15 \times 0.09290304 = 1.3935456$.
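Since 1 ft = 0.3048 m exactly, the conversion factor is $(0.3048)^2 = 0.09290304$. A minimal sketch of the calculation:

```python
SQFT_TO_M2 = 0.3048 ** 2  # exact by definition: 1 ft = 0.3048 m

def sqft_to_m2(area_sqft):
    """Convert an area in square feet to square metres."""
    return area_sqft * SQFT_TO_M2

def m2_to_sqft(area_m2):
    """Inverse conversion, square metres to square feet."""
    return area_m2 / SQFT_TO_M2
```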
# Determining ratio of competing fragmentation processes in mass spectrometry

In a photoionization experiment, three photons of wavelength 266 nm were absorbed by a neutral molecule of 6-methyl-2-heptanone. If the kinetic energy of the electron removed during ionization is 1.4 eV, what is the ratio of the simple-cleavage to the rearrangement-produced fragment ions? The following plot describes the fragmentation of the molecular ion of 6-methyl-2-heptanone. The IE of this compound is 9.10 eV.

My intuition would just be to use the value of 9.10 eV provided in order to look at the rates of cleavage to rearrangement in order to determine ratios. I'm not sure how the 1.4 eV and the 266 nm photons come into play.

The total energy absorbed from the three photons is $$E_\mathrm{total} = 3 \cdot \frac{hc}{\lambda} = \pu{13.98 eV}$$ and the internal energy left in the ion after ionization and electron ejection is $$E_\mathrm{int} = \pu{13.98 eV} - \pu{9.10 eV} - \pu{1.4 eV} = \pu{3.48 eV}$$ Finding the value of $$\log k$$ for both curves at this energy will give the relative rates of simple cleavage vs rearrangement.
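The energy bookkeeping in the answer can be verified directly (physical constants from CODATA; variable names are mine):

```python
H_EV_S = 4.135667696e-15    # Planck constant h, in eV*s
C_NM_PER_S = 2.99792458e17  # speed of light c, in nm/s

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c/lambda, in eV."""
    return H_EV_S * C_NM_PER_S / wavelength_nm

e_absorbed = 3 * photon_energy_ev(266.0)  # three 266 nm photons
e_internal = e_absorbed - 9.10 - 1.4      # minus IE and electron KE
```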
# Rainfall About to leave the university to go home, you notice dark clouds packed in the distance. Since you’re travelling by bicycle, you’re not looking forward to getting wet in the rain. Maybe if you race home quickly you might avert the rain. But then you’d get wet from sweat… Facing this dilemma, you decide to consider this problem properly with all data available. First you look up the rain radar image that shows you precisely the predicted intensity of rainfall in the upcoming hours. You know at what time you want to be home at the latest. Also, you came up with a good estimate of how much you sweat depending on your cycling speed. Now the question remains: what is the best strategy to get home as dry as possible? The rain is given for each minute interval in millilitres, indicating how wet you get from cycling through this — note that you can cycle just a fraction of a whole minute interval at the start and end of your trip: then only that fraction of the rain during that interval affects you. Sweating makes you wet at a rate of $s = c \cdot v^2$ per minute where $v$ is your speed in $\mathrm{km}/\mathrm{h}$ and $c$ is a positive constant you have determined. You have to cover the distance to your home in a given time (you don’t want to wait forever for it to become dry), but otherwise you can choose your strategy of when to leave and how fast to cycle (and even change speeds) as you wish. What is the least wet you can get from the combination of rain and sweat? ## Input • One line containing a single positive integer $T$ ($0 < T \le 10\, 000$), the number of minutes from now you want to be home by at the latest. • Another line with two positive floating point numbers: $c$ ($0.01 \le c \le 10$), the constant determining your sweating, and $d$ ($1 \le d \le 50$), the distance from university to home in kilometres. 
• $T$ more lines, where each line contains an integer $r_i$ ($0 \le r_i \le 100$), the number of millilitres of rain during the $i$-th minute interval (zero-based).

## Output

On a single line print a floating point number: the number of millilitres of rain and sweat you get wet from when optimally planning your cycle home. Your answer should be correct up to an absolute or relative precision of $10^{-6}$.

Sample Input 1:
5
0.1 2.0
0
0
0
0
0

Sample Output 1:
288

Sample Input 2:
30
0.01 2
0
0
0
100
100
100
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1

Sample Output 2:
24

CPU time limit: 3 seconds
Memory limit: 1024 MB
Difficulty: 7.2 (hard)
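As a sanity check of the sweat model against Sample 1: with no rain at all, riding for the whole $T$ minutes at the constant speed needed to cover $d$ is optimal (by convexity of $v^2$), and a small sketch reproduces the expected output of 288:

```python
def sweat_at_constant_speed(c, d_km, t_min):
    """Total sweat accumulated at the constant speed v (in km/h)
    needed to cover d_km in t_min minutes, at c*v^2 per minute."""
    v = d_km / (t_min / 60.0)  # km/h
    return c * v * v * t_min
```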
# Fourier transform for dummies What is the Fourier transform? What does it do? Why is it useful (in math, in engineering, physics, etc)? This question is based on the question of Kevin Lin, which didn’t quite fit in Mathoverflow. Answers at any level of sophistication are welcome. The ancient Greeks had a theory that the sun, the moon, and the planets move around the Earth in circles. This was soon shown to be wrong. The problem was that if you watch the planets carefully, sometimes they move backwards in the sky. So Ptolemy came up with a new idea – the planets move around in one big circle, but then move around a little circle at the same time. Think of holding out a long stick and spinning around, and at the same time on the end of the stick there’s a wheel that’s spinning. The planet moves like a point on the edge of the wheel. Well, once they started watching really closely, they realized that even this didn’t work, so they put circles on circles on circles… Eventually, they had a map of the solar system that looked like this: This “epicycles” idea turns out to be a bad theory. One reason it’s bad is that we know now that planets orbit in ellipses around the sun. (The ellipses are not perfect because they’re perturbed by the influence of other gravitating bodies, and by relativistic effects.) But it’s wrong for an even worse reason than that, as illustrated in this wonderful youtube video. In the video, by adding up enough circles, they made a planet trace out Homer Simpson’s face. It turns out we can make any orbit at all by adding up enough circles, as long as we get to vary their sizes and speeds. So the epicycle theory of planetary orbits is a bad one not because it’s wrong, but because it doesn’t say anything at all about orbits. Claiming “planets move around in epicycles” is mathematically equivalent to saying “planets move around in two dimensions”. Well, that’s not saying nothing, but it’s not saying much, either! 
A simple mathematical way to represent “moving around in a circle” is to say that positions in a plane are represented by complex numbers, so a point moving in the plane is represented by a complex function of time. In that case, moving on a circle with radius $R$ and angular frequency $\omega$ is represented by the position $$z(t) = Re^{i\omega t}$$ If you move around on two circles, one at the end of the other, your position is $$z(t) = R_1e^{i\omega_1 t} + R_2 e^{i\omega_2 t}$$ We can then imagine three, four, or infinitely-many such circles being added. If we allow the circles to have every possible angular frequency, we can now write $$z(t) = \int_{-\infty}^{\infty}R(\omega) e^{i\omega t} \mathrm{d}\omega.$$ The function $R(\omega)$ is the Fourier transform of $z(t)$. If you start by tracing any time-dependent path you want through two-dimensions, your path can be perfectly-emulated by infinitely many circles of different frequencies, all added up, and the radii of those circles is the Fourier transform of your path. Caveat: we must allow the circles to have complex radii. This isn’t weird, though. It’s the same thing as saying the circles have real radii, but they do not all have to start at the same place. At time zero, you can start however far you want around each circle. If your path closes on itself, as it does in the video, the Fourier transform turns out to simplify to a Fourier series. Most frequencies are no longer necessary, and we can write $$z(t) = \sum_{k=-\infty}^\infty c_k e^{ik \omega_0 t}$$ where $\omega_0$ is the angular frequency associated with the entire thing repeating – the frequency of the slowest circle. The only circles we need are the slowest circle, then one twice as fast as that, then one three times as fast as the slowest one, etc. There are still infinitely-many circles if you want to reproduce a repeating path perfectly, but they are countably-infinite now. 
If you take the first twenty or so and drop the rest, you should get close to your desired answer. In this way, you can use Fourier analysis to create your own epicycle video of your favorite cartoon character. That’s what Fourier analysis says. The questions that remain are how to do it, what it’s for, and why it works. I think I will mostly leave those alone. How to do it – how to find $R(\omega)$ given $z(t)$ is found in any introductory treatment, and is fairly intuitive if you understand orthogonality. Why it works is a rather deep question. It’s a consequence of the spectral theorem. What it’s for has a huge range. It’s useful in analyzing the response of linear physical systems to an external input, such as an electrical circuit responding to the signal it picks up with an antenna or a mass on a spring responding to being pushed. It’s useful in optics; the interference pattern from light scattering from a diffraction grating is the Fourier transform of the grating, and the image of a source at the focus of a lens is its Fourier transform. It’s useful in spectroscopy, and in the analysis of any sort of wave phenomena. It converts between position and momentum representations of a wavefunction in quantum mechanics. Check out this question on physics.stackexchange for more detailed examples. Fourier techniques are useful in signal analysis, image processing, and other digital applications. Finally, they are of course useful mathematically, as many other posts here describe.
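For a closed path, the circle radii $c_k$ and the truncated reconstruction can be sketched in plain Python (function names are mine); this is the discrete analogue of the Fourier series above, computed from $N$ uniform samples of one period:

```python
import cmath

def fourier_coeffs(z, K):
    """Radii c_k of the circles for |k| <= K, estimated from N uniform
    samples z[0..N-1] of one period of the closed path."""
    N = len(z)
    return {k: sum(z[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                   for n in range(N)) / N
            for k in range(-K, K + 1)}

def reconstruct(coeffs, N):
    """Sample the truncated series sum_k c_k e^{i k w0 t} at N points."""
    return [sum(c * cmath.exp(2j * cmath.pi * k * n / N)
                for k, c in coeffs.items())
            for n in range(N)]
```

A path that is itself a single circle comes back exactly from the $k = 1$ term alone; Homer Simpson's face needs rather more terms.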
## 1.5 Methods Of Integration Since there are no foolproof formulae that can be used for all integrals (as there are for derivatives), the following integration methods all attempt to change a difficult integral into a simpler one. All examples use definite integrals. ### 1.5.1 Substitution This method substitutes each part of a definite integral with a counterpart so that the result is kept the same. With the substitution $$u = h(x)$$, the integral $\int^b_a f(x)dx$ becomes transformed into $\int^{h(b)}_{h(a)}g(u)du$ where $$g(u)du = f(x)dx$$. Thus the integrand, $$dx$$, and the limits have been transformed into equivalent counterparts. We obtain $$g(u)$$ by finding the inverse function $$h^{-1} (u)$$ so that \begin{align*} x &= h^{-1}(u) \\ dx &= \frac{d[h^{-1}(u)]}{du} du \\ g(u)du &= [f(x)][dx] \\ g(u) &= [f(h^{-1}(u))][\frac{dh^{-1}(u)}{du}] \\ \end{align*} Often, $$h(x)$$ appears explicitly in the integrand so that $$u$$ is substituted directly. Example 3 The problem presented above on the oxygen depletion due to sewage dumping can now be solved. The integral in eqn. (1.5) is first separated. $$$\int^{\infty}_0(te^{-t^2} +e^{-t}) dt = \int^{\infty}_0 te^{-t^2}dt + \int^{\infty}_0 e^{-t}dt \tag{1.6}$$$ The antiderivative of $$te^{-t^2}$$ is not obvious, so we substitute a new function into the integral. Let $$u = t^2$$. Then the differential $$du$$ is $du = (du/dt)dt = 2tdt$ so that $\frac{du}{2} = tdt$ The limits remain, since $$u = 0$$ when $$t = 0$$ and $$u = \infty$$ when $$t = \infty$$. Direct substitution then gives \begin{align*} \int^{\infty}_0 te^{-t^2}dt &= \int^{\infty}_0e^{-t^2}(tdt) \\ &= \int^{\infty}_0 e^{-u} (\frac{du}{2}) = \frac{1}{2} \int^{\infty}_0 e^{-u}du \\ &= \frac{1}{2} [-e^{-u}]^{\infty}_0 = \frac{1}{2}[0-(-1)] \\ &= \frac{1}{2} \\ \end{align*} Since the second term in eqn. (1.6) is now obvious $$\int^{\infty}_0 e^{-u}du = \int^{\infty}_0 e^{-t}dt=1$$, we obtain the solution to eqn. 
(1.5) of $C(\infty) = 1/2 + 1 = 3/2$ ### 1.5.2 Integration by Parts This method utilizes a relation from differential calculus concerning the total differential. Recall that the differential of a product of two functions $$u(x)$$, $$v(x)$$ can be written $d(uv) = udv + vdu$ If we evaluate antiderivatives, we obtain $$$uv = \int udv + \int vdu \tag{1.7}$$$ Rearrangement of eqn. (1.7) gives the integration by parts formula $\int udv = uv - \int vdu$ With definite integrals, we usually write this formula as $$$\int^b_a [u(x)\frac{dv}{dx}]dx = [uv]^b_a - \int^b_a [v(x)\frac{du}{dx}]dx \tag{1.8}$$$ Again, the goal is to change a difficult integral, the left side of eqn. (1.8), into a simpler one, the right side of eqn. (1.8). An example is presented later. ### 1.5.3 Partial Fractions When the integrand is a ratio of two polynomials, it often can be decomposed into a sum of simpler terms. Only the case of non-repeated linear terms in the denominator is treated here. For more complicated cases in an ecological setting, see Clow and Urquhart (1974) p. 559. The integrand is assumed to be of the form $$F(x)/G(x)$$ where $$F(x)$$ and $$G(x)$$ are polynomial functions and where $$G(x)$$ is the product of linear factors. For example, $G(x) = (1 + 2x)(2 + 2x)(1+x)$ is a polynomial composed of factors linear in $$x$$. The partial fraction technique replaces the single rational expression by a sum of terms where each denominator is one of the linear factors of $$G(x)$$. If we have two linear factors, $G(x) = A(x)B(x)$ then a partial fractions decomposition gives $\frac{F(x)}{G(x)} = \frac{F(x)}{A(x)B(x)} = \frac{C_1}{A(x)} + \frac{C_2}{B(x)}$ where $$C_1$$ and $$C_2$$ are constants. Multiplication by $$A(x)B(x)$$ gives $$$F(x) = C_1B(x) + C_2A(x) \tag{1.9}$$$ Let $$r_1, r_2$$ be zeros of $$A(x), B(x)$$, respectively, i.e. $$A(r_1 ) = B(r_2 ) = 0$$. Then, with $$x = r_1$$, eqn. (1.9) is $F(r_1) = C_1 B(r_1)$ and with $$x = r_2$$, eqn. 
(1.9) becomes $F(r_2) = C_2 A(r_2)$ so that $$C_1,C_2$$ can be easily determined. Example 4 The logistic growth model for animal populations is represented as a differential equation $\frac{dN}{dt} = rN(1 - N/K), \;\;\;\text{where }\; r,K = \mbox{constant}$ Separating variables (see the section, Differential Equations) gives, using the differentials $$dN$$ and $$dt$$, $\frac{dN}{N(1 - N/K)} = rdt$ which is integrated to yield the equation $$$\int \frac{dN}{N(1 - N/K)} = \int rdt \tag{1.10}$$$ The right side of eqn. (1.10) is easily integrated. The left side, however, must be reworked. First rewrite the integrand as $\frac{1}{N(1 - N/K)} = \frac{K}{N(K-N)}$ Now expand in partial fractions as $\frac{K}{N(K-N)} = \frac{a_1}{N} + \frac{a_2}{K-N}$ Multiplying both sides by $$N(K - N)$$ gives $$$K = a_1(K - N) + a_2N \tag{1.11}$$$ Since (1.11) must hold for all values of $$N$$ (Clow and Urquhart 1974, p. 562), then setting $$N = 0$$ gives $a_1 = 1$ and $$N = K$$ gives $a_2 = 1$ so that $\frac{K}{N(K-N)} = \frac{1}{N} + \frac{1}{K-N}$ Then eqn. (1.10) is $\int \frac{dN}{N} + \int \frac{dN}{K-N} = \int rdt$ which integrates to $\ln N - \ln (K-N) = rt + C$
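Both worked examples can be sanity-checked numerically. A minimal sketch using composite Simpson's rule (the interval $[0, 40]$ and step count are arbitrary choices that comfortably capture the improper integral of Example 3):

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3

# Example 3: the integrand decays fast, so [0, 40] approximates [0, inf).
c_inf = simpson(lambda t: t * math.exp(-t * t) + math.exp(-t), 0.0, 40.0)

# Example 4: the partial-fraction identity K/(N(K-N)) = 1/N + 1/(K-N).
K, N = 7.0, 2.5
lhs = K / (N * (K - N))
rhs = 1 / N + 1 / (K - N)
```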
# Number Comparison Tolerance Name: NUMBER_COMPARISON_TOLERANCE Type: double Default Value: 1.e-6 Range: $$[0, 1]$$ This tolerance is used to determine whether two floating point numbers are the same. Its use in the solver is context sensitive, as outward rounding is always used to ensure that solutions are not incorrectly eliminated. Making this tolerance smaller can accelerate convergence, as the solver becomes more aggressive in domain reduction. This, however, increases the risk of eliminating otherwise acceptable solutions. Relaxing this tolerance slows down convergence, as the solver adopts a more conservative approach.
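As an illustration only (this is not the solver's actual implementation), the kind of comparison such a tolerance governs might look like:

```python
TOL = 1e-6  # the default NUMBER_COMPARISON_TOLERANCE value

def numbers_equal(a, b, tol=TOL):
    """Treat a and b as the same number if they differ by at most tol,
    scaled by their magnitude (so it acts absolutely near zero)."""
    return abs(a - b) <= tol * max(1.0, abs(a), abs(b))
```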
# Different “end of study” times for different cohorts - Cox PH model in survival analysis

I have a dataset with 4 cohorts of about the same size (~700 people each). I'm trying to apply a Cox PH model using the time needed to pass a very difficult exam as my "time" variable. The cohorts differ because they are different classes (class of 2009, class of 2010, 2011 and 2012). These are clearly also the times they enter the study. All times are censored after 2013. Is there any way of accounting for the fact that the "time of study" is different for each cohort? I was thinking of stratifying for cohorts, but the coefficients would clearly be negative and would decrease faster and faster. This however would be due not to a real decrease in the hazard, but to the fact that the latter cohorts are monitored for less time. To clear up my last point, I will include some code I've just run in R.

cox2 <- coxph(m1 ~ as.factor(Cohort), data = data)
summary(cox2)
# Call:
# coxph(formula = m1 ~ as.factor(Cohort), data = data)
# n= 1865, number of events= 621
# (63237 observations deleted due to missingness)
#
# coef exp(coef) se(coef) z Pr(>|z|)
# as.factor(Cohort)2010 -0.23715 0.78887 0.09601 -2.470 0.01351 *
# as.factor(Cohort)2011 -0.35811 0.69899 0.12345 -2.901 0.00372 **
# as.factor(Cohort)2012 -0.76813 0.46388 0.17111 -4.489 7.15e-06 ***
# ---
# Signif. codes: 0 ‘***' 0.001 ‘**' 0.01 ‘*' 0.05 ‘.' 0.1 ‘ ' 1
#
# exp(coef) exp(-coef) lower .95 upper .95
# as.factor(Cohort)2010 0.7889 1.268 0.6536 0.9522
# as.factor(Cohort)2011 0.6990 1.431 0.5488 0.8903
# as.factor(Cohort)2012 0.4639 2.156 0.3317 0.6487
#
# Concordance= 0.568 (se = 0.012 )
# Rsquare= 0.013 (max possible= 0.99 )
# Likelihood ratio test= 24.5 on 3 df, p=1.968e-05
# Wald test = 23.31 on 3 df, p=3.482e-05
# Score (logrank) test = 23.85 on 3 df, p=2.683e-05

As I expected, the coefficients are all negative and decreasing rapidly. 
I'm sure that this is due not to a real decrease in hazard, but to the fact that the follow-up time for each cohort decreases as the class covariate increases. • Cox proportional hazard model is commonly used for medical studies (e.g. spc.univ-lyon1.fr/lecture-critique/TP/travail_sur_article/…). The plot generally mentions the number of cases and events at different time points of follow-up. The number of cases reduces as time of follow-up increases. – rnso May 13 '15 at 1:09 Basically, we're assuming a Cox model with four groups. I'm not sure what you mean by stratifying for class as this wouldn't help you compare the four classes. The question is whether or not we can use a standard Cox model when we're applying different censoring mechanisms to different groups of observations. And actually, we can. An assumption of the model is independent right-censoring. What is actually meant by this is independence of the censoring times and the event times conditional on the covariates of the model. In your example, we have to determine if it's reasonable to assume independent right-censoring. When we condition on the covariates (class being one of them), the censoring times are constants, thus independent of the event times. Therefore, the censoring times and event times are actually independent conditional on the covariates. When you know the value of the class covariate, the dependence between censoring time and event time vanishes, so to speak. So in your example, the assumption of independent right-censoring is still reasonable. This is of course only true when the class covariate that influences the censoring is actually included in the model. Imagine that you didn't include class and furthermore assume that class and event time are not independent. Then (heuristically) information on the censoring time would give you information on the class, which in turn would give you information on the event time. 
Thus, event times and censoring times are not independent any longer and an assumption of the model just broke down. A reference on this would be PK Andersen et al., Statistical Models based on Counting Processes, 1997, pp. 139-146. A small simulation experiment might shed some light on the situation. The following R-code generates data from four groups, all with the same hazard. data <- data.frame(grp = c(rep(1, 1000), rep(2, 1000), rep(3, 1000), rep(4, 1000))) data$x <- rweibull(n = 4000, shape = 1, scale = 1) data$event <- data$x < (data$grp == 1)*1 + (data$grp == 2)*.7 + (data$grp == 3)*.5 + (data$grp == 4)*.3 data$x <- pmin(data$x, (data$grp == 1)*1 + (data$grp == 2)*.7 + (data$grp == 3)*.5 + (data$grp == 4)*.3) coxph(Surv(time = x, event = event) ~ as.factor(grp), data = data) If you run the code, you'll see that the estimates you get are good, we get coefficients close to one. Without the data it's difficult to say why you're seeing this artefact of decreasing coefficients for increasing class. Off the top of my head: What you're seeing might be due to a kind of selection bias. For each year some of the 700 people will not ever pass the exam (I'm guessing). If we imagine that a student fails the exam in a early year, he or she might be removed from the cohort all together. For instance, because the student quits the programme and therefore is no longer in the process of passing the exam. This would clearly constitute a selection bias as students that are passing the exam slowly (or never passing it) are removed from the early cohorts whereas in the later cohorts they have less time to quit the programme. That would give the estimates you report, even if there's no real differences between classes. See the following simulation experiment. It's identical to the above, but I'm just removing slow students, and more so in early years (this corresponds to drop-outs). 
    library(survival)  # for coxph() and Surv()

    data <- data.frame(grp = c(rep(1, 1000), rep(2, 1000), rep(3, 1000), rep(4, 1000)))
    data$x <- rweibull(n = 4000, shape = 1, scale = 1)
    data <- subset(data, x < (data$grp == 1)*1 + (data$grp == 2)*1.3 + (data$grp == 3)*1.5 + (data$grp == 4)*1.7)
    data$event <- data$x < (data$grp == 1)*1 + (data$grp == 2)*.7 + (data$grp == 3)*.5 + (data$grp == 4)*.3
    data$x <- pmin(data$x, (data$grp == 1)*1 + (data$grp == 2)*.7 + (data$grp == 3)*.5 + (data$grp == 4)*.3)

    coxph(Surv(time = x, event = event) ~ as.factor(grp), data = data)

It goes without saying that this is just a guess at what could be happening with your data. Assuming that the negative coefficients are an artefact of some sort, it's not caused by the censoring mechanism, though. • Just to be clear: you are saying that if the times of censoring for different classes are not all the same, then we should not assume that the censoring mechanism is independent of the event time? May 13 '15 at 11:54 • I've edited the answer; hope the explanation is clearer now. I'm saying that if we included class as a covariate we can still assume independent right-censoring - the explanation is found above. – swmo May 13 '15 at 12:17 • Thanks @swmo. I've included a picture of a model I've run with R to clarify my question. Coefficients are negative and decrease because of shorter and shorter follow-up times! May 13 '15 at 15:41 • I've added some simulations and a guess at what's happening. – swmo May 13 '15 at 16:48 • Thanks again, and thank you too @Joshua. There is indeed a cohort effect. The exam got tougher as the years went by, so the negative coefficients are plausible; I was just scared that the censoring mechanism could in some way inflate them. May 13 '15 at 17:14 In survival analysis, the "time of study" isn't very relevant. What is relevant is how long the participant was in the study. As @rnso alludes to and provides a link for, in medical studies it is extremely uncommon for all the participants to start at the same time.
Enrollment is on a rolling basis, meaning patients are enrolled for a certain period of time, say 6 months. Then they are followed for some period of time, say 2 years. That means that patients enrolled on Day 1 can be in the study for up to 2.5 years while patients enrolled on Day 180 can be in the study for up to 2 years. If anyone is still in the study at the end of the time, they are censored. Translating to your situation, what is input into the survival analysis is the time the student entered the study until either 1) they pass the exam [event happened where $time = ExamDate - EntranceTime$], 2) they drop out [censored at $time = ExitTime - EntranceTime$], or 3) 2013 happens and everyone still in the study leaves it [censored at $time = 2013 - EntranceTime$]. So your students from 2009 will give you the most information since they can be on the study the longest, but the 2013 students can still be useful. In survival analysis, those 2013 students will either complete the exam before a year is up or they are censored after 1 year, since the study ends. To answer your question directly, the time of study being different for each student is factored into the time you calculate for each student as they are on the study. The 2013 students only provide you a year's worth of information before they are censored, which is accounted for by none of those students having a time value longer than 1 year.
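The selection-bias mechanism simulated in R earlier in the thread can also be illustrated without fitting a Cox model at all. The sketch below is my own construction (the group censoring and drop-out thresholds are taken from the R code, everything else is assumed): it estimates each group's hazard as events per unit time at risk, which for exponential data under independent censoring should sit near the true value of 1 in every group. After removing "slow" students from the early cohorts, the estimates instead decrease across groups, mimicking the reported artefact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same hazard (Exp(1)) in all four groups, per-group censoring times,
# and early-cohort drop-out of "slow" students, as in the R simulation.
censor = {1: 1.0, 2: 0.7, 3: 0.5, 4: 0.3}   # shorter follow-up for later cohorts
keep   = {1: 1.0, 2: 1.3, 3: 1.5, 4: 1.7}   # early cohorts lose slow students

hazards = {}
for grp in (1, 2, 3, 4):
    x = rng.exponential(1.0, size=100_000)  # true event times, hazard = 1
    x = x[x < keep[grp]]                    # selection: drop-outs removed
    event = x < censor[grp]                 # event observed before censoring?
    t = np.minimum(x, censor[grp])          # observed (possibly censored) time
    hazards[grp] = event.sum() / t.sum()    # events per unit time at risk

# hazards[1] > hazards[2] > hazards[3] > hazards[4], even though the
# true hazard is 1 in every group: the artefact, not the censoring.
```

Without the selection step (`x = x[x < keep[grp]]`), all four estimates come out near 1, which is the point the R `coxph` runs make as well.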
{}
# Revision history

### OpenCV : warpPerspective on whole image

I'm detecting markers on images captured by my iPad. Because I want to calculate translations and rotations between them, I want to change the perspective of these images so that it looks like I'm capturing them from directly above the markers. Right now I'm using:

    points2D.push_back(cv::Point2f(-6, -6));
    points2D.push_back(cv::Point2f(6, -6));
    points2D.push_back(cv::Point2f(6, 6));
    points2D.push_back(cv::Point2f(-6, 6));

    Mat perspectiveMat = cv::getPerspectiveTransform(points2D, imagePoints);
    cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(_image->cols, _image->rows));

This gives me the results below (look at the right-bottom corner for the result of warpPerspective): as you can probably see, the result image contains the recognized marker in its top-left corner. My problem is that I want to capture the whole image (without cropping) so I could detect other markers on that image later. How can I do that? Maybe I should use the rotation/translation vectors from the solvePnP function?
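One common approach (an assumption on my part, not something stated in the question) is to map the four image corners through the same homography, then compose a translation so that the whole warped image lands at non-negative coordinates. The bookkeeping can be sketched with numpy alone; `H` here is a placeholder 3x3 perspective matrix standing in for the output of `getPerspectiveTransform`:

```python
import numpy as np

def warp_bounds(H, width, height):
    """Map the image corners through homography H and return a translation
    matrix T plus the output canvas size needed to hold the full warp."""
    corners = np.array([[0, 0, 1], [width, 0, 1],
                        [width, height, 1], [0, height, 1]], float).T
    mapped = H @ corners
    mapped = mapped[:2] / mapped[2]            # perspective divide
    x_min, y_min = mapped.min(axis=1)
    x_max, y_max = mapped.max(axis=1)
    T = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], float)
    out_size = (int(np.ceil(x_max - x_min)), int(np.ceil(y_max - y_min)))
    return T, out_size

# Hypothetical example: a pure translation that moves content off-canvas.
H = np.array([[1.0, 0.0, -10.0],
              [0.0, 1.0, -20.0],
              [0.0, 0.0, 1.0]])
T, out_size = warp_bounds(H, 100, 50)   # T undoes the shift; canvas stays 100x50
```

One would then call `cv::warpPerspective` with `T * H` and `out_size` instead of the original matrix and input image size, so nothing is cropped.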
{}
# Regular vs Simple pole What is the difference between a regular and a simple pole? ### Example $f(z)$ - regular For example, the function $f(z)=\frac{z}{z^2+1}$ has a regular point at infinity. I can show this by finding the residue: $$\frac{1}{z^2}\cdot f\left(\frac{1}{z}\right)= \frac{1}{z(z+1)}\approx \frac{1}{z}\left[ 1 - z +z^2 \right].$$ The residue is the opposite of the coefficient of the term $z^{-1}$. In this example it is $\frac{1}{z}$, giving us a residue of -1 at $z_0=0$. ### Example $g(z)$ - regular The function $g(z)=\sin(\frac{1}{z})$ has a regular point at infinity. Expanding this: $$\frac{1}{z^2}\cdot g\left(\frac{1}{z}\right) = \frac{\sin(z)}{z^2}\approx \frac{1}{z^2}\left[ z - \frac{z^3}{3!} \right]= \frac{1}{z} - \frac{z}{3!},$$ with the residue at zero being -1 again. ### Example $h(z)$ - simple pole The function $h(z)=\frac{4z^3+2z +3}{z^2}$ has a simple pole at infinity. I do not need the Laurent series for this because it is already in a simple form, i.e. already a series: $$\frac{1}{z^2}\cdot h\left(\frac{1}{z}\right)= \frac{4}{z^3}+\frac{2}{z} + 3$$ ## Question Why do $f(z), g(z)$ have a regular pole and $h(z)$ a simple pole? There are two misunderstandings here. First, there is no such thing as a "regular pole". I believe you are referring to "regular point", which is any point where $f$ is well-defined and holomorphic. In other words, a "regular point" is any point that is NOT a pole or an essential singularity! Second, to determine whether the point at infinity is a pole or a regular point of a function $f(z)$, you should be looking at the Laurent series for the function $f(1/w)$ around $w = 0$. Not the Laurent series for $w^{-2}f(1/w)$. [We want to think of the point at infinity as the point $z = \infty$ - except this doesn't exist in the space parametrised by the $z$ coordinate. So we introduce a new coordinate $w = 1/z$. The point at infinity is then represented by $w = 0$.]
For example, if $f(z) = \sin (1/z)$, then $$f(1/w) = \sin w = w - \frac 1 6 w^3 + \dots ,$$ which is regular at $w = 0$, since the Laurent series has no negative powers of $w$. However, if $f(z) = \frac{4z^3+2z + 3}{z^2}$, then $$f(1/w) = 4/w + 2w + 3w^2,$$ which has a simple pole at $w = 0$. Having said that, one situation where you do need to consider $w^{-2} f(1/w)$ is when you are computing a line integral, $\int_C f(z) dz$. This is because the integration measure $dz$ becomes $dz = -dw/w^2$ in the $w$ coordinates, giving you the extra factor of $w^{-2}$. • P.S. For the example $f(z) = z / (z^2 + 1)$, your expression for $f(1/w)$ isn't right. You should get $f(1/w) = w/(1+w^2) = w - w^3 + w^5 - w^7 + \dots$. And yes, this is regular at $w = 0$, because its series expansion contains no negative powers of $w$. – Kenny Wong May 15 '17 at 0:56
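As a worked supplement (my own addition, using the standard residue-at-infinity convention), the $w^{-2}$ factor is exactly what computes the residue at infinity, and for the first example it reproduces the value $-1$:

```latex
% Convention: the residue at infinity is minus the residue at w = 0
% of w^{-2} f(1/w).  For f(z) = z/(z^2+1):
\operatorname*{Res}_{z=\infty} f
  = -\operatorname*{Res}_{w=0}\,\frac{1}{w^{2}}\,f\!\left(\frac{1}{w}\right)
  = -\operatorname*{Res}_{w=0}\,\frac{1}{w\,(1+w^{2})}
  = -\operatorname*{Res}_{w=0}\left(\frac{1}{w}-w+w^{3}-\cdots\right)
  = -1.
```

Consistently, the finite poles at $z = \pm i$ each carry residue $1/2$, and the sum of all residues on the Riemann sphere, $\tfrac12 + \tfrac12 + (-1)$, vanishes as it must.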
{}
# An oscillating block of mass 250 g takes 0.15 sec to move between the endpoints of motion, which are 40 cm apart. What is the frequency of the motion? What is the amplitude of the motion? What is the force constant of the spring? Solution: Here it is given, mass of block (m) = 250 gm = 250 x 10-3 kg Time to move between the endpoints (t) = 0.15 sec Distance between the endpoints = 40 cm = 40 x 10-2 m Frequency (f) = ? Force constant (k) = ? We know, a. Moving from one endpoint to the other covers half an oscillation, so the period is T = 2t = 2 x 0.15 = 0.30 s, and f = 1/T = 1/0.30 = 3.33 Hz b. Amplitude (xm) = $$\frac{40 \times 10^{-2}}{2}$$ = 20 x 10-2 m c. Again we know that $$2\pi f = \sqrt{\frac{k}{m}}$$ $$4\pi^2f^2 = \frac{k}{m}$$ $$k = 4\pi^2f^2 m$$ $$k = 4\pi^{2}(3.33)^2 \times 250 \times 10^{-3}$$ k = 109.7 Nm-1 Hence, the required frequency, amplitude and force constant are 3.33 Hz, 20 x 10-2 m and 109.7 Nm-1 respectively.
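The arithmetic above can be checked in a few lines; the key step is that endpoint-to-endpoint travel is half a period:

```python
import math

# Numerical check of the SHM solution above.
m = 0.250            # kg
t_half = 0.15        # s, time to move between the endpoints (half a period)
d = 0.40             # m, distance between the endpoints

T = 2 * t_half       # full period
f = 1 / T            # frequency, Hz
A = d / 2            # amplitude, m
k = 4 * math.pi**2 * f**2 * m   # from omega = 2*pi*f = sqrt(k/m)

# f ≈ 3.33 Hz, A = 0.20 m, k ≈ 109.7 N/m
```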
{}
# Ground expression # Encyclopedia In mathematical logic, a ground term of a formal system is a term that does not contain any variables at all, and a closed term is a term that has no free variables. In first-order logic all closed terms are ground terms, but in lambda calculus the closed term λx.x (λy. y) is not a ground term. Similarly, a ground formula is a formula that does not contain any variables, and a closed formula or sentence is a formula that has no free variables. In first-order logic with identity, the sentence $\forall$ x (x=x) is not a ground formula. A ground expression is a ground term or ground formula. ## Examples Consider the following expressions from first-order logic over a signature containing the constant symbols 0 and 1 for the numbers 0 and 1, a unary function symbol s for the successor function and a binary function symbol + for addition. • s(0), s(s(0)), s(s(s(0))), ... are ground terms; • 0+1, 0+1+1, ... are ground terms; • x+s(1) and s(x) are terms, but not ground terms; • s(0)=1 and 0+0=0 are ground formulae; • s(z)=1 and ∀x: (s(x)+1=s(s(x))) are expressions, but are not ground expressions. Ground expressions are necessarily closed. The last example, ∀x: (s(x)+1=s(s(x))), shows that a closed expression is not necessarily a ground expression. So this formula is a closed formula, but not a ground formula, because it contains a logical variable, even though that variable is not free. ## Formal definition What follows is a formal definition for first-order languages. Let a first-order language be given, with C the set of constant symbols, V the set of (individual) variables, F the set of functional operators, and P the set of predicate symbols. ### Ground terms Ground terms are terms that contain no variables.
They may be defined by logical recursion (formula-recursion): 1. Elements of C are ground terms; 2. If f ∈ F is an n-ary function symbol and α1, α2, ..., αn are ground terms, then f(α1, α2, ..., αn) is a ground term. 3. Every ground term can be given by a finite application of the above two rules (there are no other ground terms; in particular, predicates cannot be ground terms). Roughly speaking, the Herbrand universe is the set of all ground terms. ### Ground atom A ground predicate or ground atom is an atomic formula all of whose terms are ground terms. That is, if p ∈ P is an n-ary predicate symbol and α1, α2, ..., αn are ground terms, then p(α1, α2, ..., αn) is a ground predicate or ground atom. Roughly speaking, the Herbrand base is the set of all ground atoms, while a Herbrand interpretation assigns a truth value to each ground atom in the base. ### Ground formula A ground formula or ground clause is a formula all of whose arguments are ground atoms. Ground formulae may be defined by syntactic recursion as follows: 1. A ground atom is a ground formula; that is, if p ∈ P is an n-ary predicate symbol and α1, α2, ..., αn are ground terms, then p(α1, α2, ..., αn) is a ground formula (and is a ground atom); 2. If p and q are ground formulae, then ¬(p), (p)∨(q), (p)∧(q), (p)→(q), i.e. formulas composed with logical connectives, are ground formulae, too. 3. If p is a ground formula and q results from p by deleting or inserting parentheses in such a way that q is still well-formed and equivalent to p, then q is a ground formula. 4. All ground formulae can be obtained by applying these three rules.
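The recursion for ground terms is direct to render executable. The encoding below is my own, not the article's: a term is a constant name, a variable name, or a tuple `(function_symbol, arg1, ..., argn)`.

```python
# Minimal ground-term checker following rules 1-2 above.
CONSTANTS = {"0", "1"}

def is_ground(term):
    """True iff the term contains no variables."""
    if isinstance(term, str):
        return term in CONSTANTS      # a bare string is ground only if it is a constant
    _symbol, *args = term             # compound term: f(args...)
    return all(is_ground(arg) for arg in args)

assert is_ground(("s", ("s", "0")))            # s(s(0)) is a ground term
assert is_ground(("+", "0", "1"))              # 0 + 1 is a ground term
assert not is_ground(("+", "x", ("s", "1")))   # x + s(1) contains a variable
```

A ground atom would be checked the same way, by applying `is_ground` to every argument of the predicate.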
{}
# Calculate: cos(-1)^{-1} ## Expression: ${\cos\left({-1}\right)}^{-1}$ Simplify the expression using the fact that cosine is an even function, $\cos\left({-t}\right)=\cos\left({t}\right)$: ${\cos\left({1}\right)}^{-1}$ Any expression raised to the power of $-1$ equals its reciprocal: $\frac{ 1 }{ \cos\left({1}\right) }$ Use $\frac{ 1 }{ \cos\left({t}\right) }=\sec\left({t}\right)$ to transform the expression: \begin{align*}&\sec\left({1}\right) \\&\approx1.85082\end{align*}
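The numeric value is easy to confirm (the argument is in radians):

```python
import math

# cos is even, so cos(-1)^(-1) = 1/cos(1) = sec(1).
value = math.cos(-1) ** -1

# value ≈ 1.85082
```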
{}
# Latex Error Environment Split Undefined

The error message looks like this:

    ! LaTeX Error: Environment split undefined.
    See the LaTeX manual or LaTeX Companion for explanation.
    Type H for immediate help.
    ...
    l.53 \end{split}

and is triggered by code such as:

    $$\begin{split}
    TCU & = \frac{\text{purchasing cost} + \text{setup cost} + \text{holding cost}}{\text{time}} \\
        & = \frac{cy + K + h\frac{y}{2}t_0}{t_0}
    \end{split}$$

(The original code also misspells \frac as \fraq, which produces a separate "Undefined control sequence" error.) The split environment, like aligned (whose absence gives the analogous "LaTeX Error: Environment aligned undefined."), is provided by the amsmath package, so the fix is to load it in the preamble with \usepackage{amsmath}. Loading amssymb or amsthm alone is not enough; those packages do not define these display environments:

• I am using \usepackage{amssymb} \usepackage{amsthm} – George Jul 7 '14 at 12:10
• @George I posted the self-contained code example above so that you could see one ...
• I copied and pasted their example exactly as is, but I still get errors; does it have to do with the package I am using? – George Jul 7 '14 at 12:45
• Thank you for your time – George Jul 8 '14 at 6:34

See also http://tex.stackexchange.com/questions/188358/split-equation-in-multiple-lines and http://tex.stackexchange.com/questions/144933/environment-aligned-undefined-when-trying-to-write-a-system-of-equations

## Latex Split Equation Over Two Lines

A related question (http://tex.stackexchange.com/questions/285571/undefined-control-sequence-error-endsplit): I would rather use equation-split than align, because align numbers (for reference) every single line of equations, while split gives the whole block a single number (and thus looks better). It's not the end of the world, but the split-equation combination is just quicker and easier (if I can manage to define a command for it). Here's what I'm doing:

    \documentclass[a4paper,12pt]{article}
    \usepackage[brazil]{babel}
    \usepackage[utf8]{inputenc}
    \usepackage{amsmath}
    \newenvironment{example}{$$\begin{split}}{\end{split}$$}
    \begin{document}
    \begin{example}oi\end{example}
    \end{document}

And the error message I'm getting is:

    ERROR: LaTeX Error: \begin{split} on input line 8 ended by \end{example}.

The amsmath display environments read their own body before typesetting it, so they cannot be opened in the begin-code of a \newenvironment and closed in its end-code. The environ package works around this by capturing the body as \BODY:

    \documentclass{article}
    \usepackage{amsmath,environ}
    \NewEnviron{example}{%
      $$\begin{split}
        \BODY
      \end{split}$$
    }
    \begin{document}
    \begin{example}
    a &= b + c \\
      &= d
    \end{example}
    \end{document}

He should have given this answer; with his package it works like a charm.

• Next time when I post a problem I will do as you say. – user124471 Mar 13 '14 at 17:57

In regards to Juan's answer: declaring a new environment using equation and aligned does work, but I recommend writing out the full \begin{aligned} ... \end{aligned} using auto-completion, like with TeXworks or TeXShop. (\begin{align}\begin{split} seems to be just as good in this context where we aren't ...)

The end block is a bit more tricky, as special processing occurs at the end of an environment. With \ignorespacesafterend, LaTeX will issue an \ignorespaces after the special 'end' processing has occurred. Including \ignorespaces in the new environment definition seems to be weirdly behaved: it seems to include a single space (smaller than an indent) in the text which follows the equation.
{}
Tag Info 41 More or less. While the ISS is below the satellites used for TV transmissions, it is passing by so fast that the coverage will be highly intermittent, meaning that you would be able to watch a channel for only a couple of minutes, have blackouts over the oceans, and repeat. Other notable differences would be: normal satellite receivers are "fixed": The ... 34 What would you notice first? Satellite navigation: immediately (depending on how often you use satnav) TV: immediately (depending on how often you use TV). Even if you don't use satellite TV yourself, your TV provider may receive some of its channels via satellite within a day or so: weather forecasts no longer include satellite imagery, and accuracy goes ... 25 You can't replace the Internet like that, since the Internet is also a collection of protocols for routing and addressing, and transmitting users' data. You can try swapping fiber optic links for LEO sat comm ones, but throughput will invariably suffer and the connection will be less reliable. Satellites' transponders are the bottleneck - instead of a hierarchy of ... 25 Nodal precession doesn't matter for a plane of satellites like this; they will rotate around in unison, so the coverage will remain the same. Okay, so why the unusual dual-inclination constellation? The inclination of a satellite band tends to let you know what latitude it will work best at. A 0 degree satellite works best at the equator, a 90 degree one at the poles. ... 25 Orbits at the altitude of GEO are stable for very long times (millions of years). There is no significant decay of the orbital height due to some kind of drag, so the risk of these satellites interfering with working ones is close to zero. On the other hand, there are good reasons to store them above the belt and not below: The region below is used for ... 25 First you lock on to energy at (or near) the expected frequency. That's carrier lock. Then you start to look for patterns in how the phase changes.
The transmitter is coding groups of bits as phase-change "symbols", and you want to find the time-pattern of those: symbol lock. But those are not yet bits because the coding works in blocks of bits. Once you ... 23 http://en.wikipedia.org/wiki/Lagrangian_point A satellite orbiting at the L1 point or L2 point would be in constant contact with the moon. Unfortunately, neither of these is particularly useful for keeping in contact with a lunar mission. The L1 point is between the earth and the moon, and would offer no advantages over just putting the receiver on earth.... 23 No, quantum mechanics cannot be used to transmit information faster than light. This is a common misconception based on misunderstanding how quantum mechanics works. Go here to read more about it. Technologies like quantum communication are valuable for other reasons though, but they still end up transmitting information at the speed of light, but in a ... 22 Yes, they could theoretically communicate with each other over the DSN; however, in practice this will not happen (as it has no current uses). The number of functions that Curiosity can perform autonomously is very limited and predetermined. It usually involves some sort of deterministic operation such as moving a rover arm or performing a drill sample (... 20 If I understand your edited question, then no. While the Earth's J2 (oblateness) produces enough torque to rotate a Sun-synchronous orbit once a year, it does not produce enough torque to rotate a "Moon-synchronous orbit" once a month. So there is no such orbit. I am not clear on what the utility of such an orbit would be, even if it did exist. If you're ... 20 Actually, it makes a lot of sense to raise the orbit of end-of-life geostationary satellites: Coming from Earth you have to cross through a lower orbit to transfer from low earth orbit to a geostationary orbit, but you don't have to go farther out than that (some transfer orbits do, but it's not a requirement).
That means that a higher orbit has less risk of ... 20 No. Communication latency (the time between sending a bit and receiving it on the other end) between Earth and Mars probes is limited almost entirely by the speed of light, as they are radio waves on a direct path in vacuum. (There are also plans for optical communications, but as far as I know no Martian orbiter has yet been launched with that capacity. It ... 19 It's not really about tidal effects of the Earth but about irregularities of lunar gravity (Mascons) necessitating large fuel expenditures for maintaining an orbit around the Moon. Taken from: http://boingboing.net/2013/06/23/lunar-gravity-maps.html The solution (first proposed in 1966 by R. Farquhar) is to place a comms relay into a halo orbit around the ... 18 There's already two good answers that say a lot of what I wanted to say, and I will refrain from repeating any of their content here. I do think it is useful to add one more item of insight though. You say: "any quantum switch in this universe can be flipped instantaneously from elsewhere." When people talk about a "quantum switch flipping ... 17 Such satellites are in a geosynchronous orbit (GSO), orbiting at an orbital altitude where orbital period matches Earth's rotation on its axis. Their orbital speed is roughly 3 km/s at mean orbital altitude of 35,786 km above the Earth's surface:    Orbital speeds ($v_o \approx \sqrt{\mu/r}$) at mean altitudes above the Earth's surface (blue) and ... 17 As of today (October 2016), Iridium is the only public satcomm provider using a constellation of low earth orbit satellites with high inclination orbits to provide a service with global coverage. You can compare coverage easily on this website. (this image is from here) Inmarsat and Thuraya in contrast use geostationary satellites, Thuraya uses spot beams ... 17 The experiment has been done, with a single, geosynchronous, satellite, Galaxy IV, in 1998. 
The immediate effect I noticed in the San Francisco Bay Area was loss of the live NPR feed. I did not check other networks. Second effect: my Chevron Texaco credit card could not be used. The Wikipedia article https://en.wikipedia.org/wiki/Galaxy_IV mentions NPR, ... 16 @Antzi's answer is right, but I'll add some context as a supplement. While Doppler (mentioned there) might or might not be an issue for an off-the-shelf commercial satellite TV box (I don't know), it could probably be fixed with a mod that NASA could easily manage. Several answers to Do astronauts get Netflix on ISS? indicate that there is access to "new ... 15 It is possible in several ways. Let's get the dumb ones out of the way first. As I commented, you can orbit the moon itself. Technically correct, since anything orbiting the moon is also orbiting the Earth. You can make your satellite big enough so that it always has a line-of-sight. A giant pole somewhat longer than the Earth's diameter can be arranged to ... 15 The US side of the ISS has a number of antennas to support its rather complicated comm system. Most visible are the two Ku-band High Gain Antennas, which are 6 foot diameter azimuth/elevation gimbaled dish antennas mounted on the Z1 truss segment (near the center of the ISS truss). These antennas are sometimes referred to as SGANTs (Space-to-Ground ANTennas)... 15 They are definitely not identical. Tundra is geosynchronous; period = 1 day. The eccentricity allows it to spend most of the time over a region of Earth off the equator, something not possible for geostationary (sit always over one point on equator) or circular geosynchronous orbits (sinusoidal function with slow-down at both extremes). Tundra is intended ... 15 It seems that the company ExoAnalytic Solutions regularly observes high-orbiting satellites (MEO, HEO, and GEO), using the data to provide tracking, ensure they are at the right spot, and provide data in the event of an emergency.
They have 200 telescopes dedicated to the effort, which seems to indicate that they can't observe every satellite all the time, ... 14 Satellites in LEO travel at around 7000 m/s. This means that a 1 m sized LEO satellite will take around 150 us (microseconds) to cross a given line between a GEO satellite and a ground antenna. So it is a very short event, and all satellite communications are protected against error bursts (especially from interference) by using FEC (forward error ... 14 Starlink and 5G don't have that much to do with each other--they compete for different customers. Starlink systems currently require a large receiver the size of a pizza box to uplink to satellites. It is unlikely that this will be miniaturized to fit into mobile devices within the next decade or so. 5G is a short-range, high-bandwidth technology designed ... 13 First of all, let's figure out how many satellites per launch vehicle. The estimate of the mass of these satellites is 386 kg. The mass for a launch of a Falcon 9 is 5500 kg. That means one could launch 13-14 satellites per rocket. I suspect 12 is more realistic, because part of the payload mass is the structure to mount all of the satellites, and that is ... 13 I wrote code that flew on 3 spacecraft that went to Mars, one to the Moon, one to a comet and back, and a few Earth-orbiting satellites, the last of which was about 10 yrs ago. All of them used C. It's not the only language out there, of course, but it's popular because the perception is that code can be made smaller and faster using C, without the overhead ... 13 They have to spread their signal. Also, it requires a really big dish to not spread your signal at all. Imagine that there was no spreading at all. The dish would have to be pointed exactly at the satellite, to the margin of the size of the dish. That is far beyond the technology of today! DSS has its dishes pointed to 0.1 degrees. At the distance of ... 12
Assuming you wanted to get to GEO from a circular, 700 km altitude orbit you'd first need to do a burn of about 2.24 km/sec along the orbital velocity direction to raise the apogee of the transfer orbit (the half-ellipse): The spacecraft wouldn't normally do it itself, the launcher upper stage would typically inject you into ... 12 SpaceX has stated a goal that every launch going forward will attempt to land. The expectation initially is not 100% success, but an attempt at the very least. Large GTO payloads do not leave enough fuel reserve for a Return to Launch Site (RTLS) at Landing Zone 1. Instead they will use the ASDS (Autonomous Spaceport Drone Ship) as far downrange as needed ... 11 While power requirements are higher than for regular GSM service, they are not as high as one might think. Current satellite telephones use handsets of the size of 2000-era mobile phones and are able to transmit 15 kBit/s to geosynchronous satellites (the Thuraya system). These satellites are more than 30 times farther from Earth than the planned SpaceX ... Only top voted, non community-wiki answers of a minimum length are eligible
{}
# Why does this reconstruction produce a phase shift from the original signal? After an FFT of a signal is done, it is plotted as in the image below, with the original signal on the first subplot. Using the magnitude and phase data of each frequency, the signal is reconstructed, and the result is as in the image below. The reconstructed signal is good, except that it is shifted 4 milliseconds earlier than the original signal. I have also done this with another signal, and the 4 millisecond shift is there too. Why does this happen? Additional question: I used this for harmonic analysis of a 50 Hz fundamental frequency (the electric power system frequency). I had a slightly hard time determining which frequency to use for each harmonic (50*n Hz) because, from the FFT result, the phase angle is θ at (50*n∓0.001) Hz but suddenly becomes θ±180° at (50*n±0.001) Hz. For example, at 149.999 Hz the phase angle is 223.3° but at 150.001 Hz the phase angle is suddenly 403.3°. Is this difficulty in determining which angle of each frequency to use for reconstruction normal? By the way, this is the Matlab code I use to do the FFT and the plotting.

    Fs = 1/(8e-6);    % Sampling frequency
    T = 1/Fs;         % Sample time
    L = 2^21;         % Length of signal
    t = (0:L-1)*T;    % Time vector
    SampleNoBallast;  % The original signal (defines y)

    NFFT = 2^nextpow2(L);             % Next power of 2 from length of y
    Y = fft(y,NFFT)/L;                % The FFT, producing complex values
    f = Fs/2*linspace(0,1,NFFT/2+1);  % Frequencies to plot
    P = rad2deg(unwrap(angle(Y)));    % Phase in degrees from the complex FFT values

    % Plotting the original signal
    subplot(3,1,1);
    plot(t(1:12500),y(1:12500))
    title('Input current of Opple 18 W T8 LED without ballast')
    ylabel('current (mA)')
    xlabel('time (s)')
    grid on
    set(gca,'ButtonDownFcn','selectmoveresize');

    % Plot single-sided amplitude spectrum.
    % Plotting the magnitude of each frequency
    subplot(3,1,2);
    plot(f(1:17000),2*abs(Y(1:17000)))
    title('Hasil FFT')
    xlabel('Frekuensi (Hz)')
    ylabel('Amplitudo (mA)')
    set(gca,'ButtonDownFcn','selectmoveresize');

    % Plotting the phase of each frequency
    subplot(3,1,3);
    plot(f(1:17000),P(1:17000))
    title('Sudut Komponen Harmonik')
    xlabel('Frekuensi (Hz)')
    ylabel('Sudut (derajat)')
    set(gca,'ButtonDownFcn','selectmoveresize');

• Welcome to DSP.SE! How did you generate the reconstructed signal? If it was just using ifft then there is something very wrong. If you're doing some modification of the original FFT data or using something other than ifft to reconstruct it, then we need to see that to be able to answer your question. – Peter K. Mar 18 '16 at 18:53
• See my answer here about why your phase angle flips at slightly different frequencies, and thus the proper way to measure the phase of an arbitrary frequency spectrum: dsp.stackexchange.com/questions/29509/… – hotpaw2 Mar 18 '16 at 19:32
• Thank you for the welcome. I reconstruct the signal using an electronic simulation program, LTspice, by putting together 10 current sources, one for each frequency (50 Hz, 150 Hz, 250 Hz, ..., 950 Hz). The original FFT data consist of 12500 points, but to give more resolution to the FFT result, I duplicate them 168 times to make the length slightly more than 2^21. – Qelopin Uacel Mar 19 '16 at 0:20
• I tried using ifft and the result is perfect, but if I try to use LTspice for the reconstruction, there's the phase shift. – Qelopin Uacel Mar 19 '16 at 2:03
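The phase question is easy to check in NumPy. A minimal sketch with assumed toy parameters (not the questioner's data): when the analyzed frequency falls exactly on an FFT bin, the amplitude and phase read from that bin reconstruct the component with no time shift at all, so any shift points at an inconsistency between the bin frequency/phase actually read off and the sources set up in the simulator.

```python
import numpy as np

# Toy parameters (assumptions for the sketch): one 50 Hz component, bin-aligned.
fs = 12500.0                      # sampling frequency, Hz
n = 2500                          # 2500 samples = exactly 10 cycles of 50 Hz
t = np.arange(n) / fs
phase_true = np.deg2rad(30.0)
y = 2.0 * np.cos(2 * np.pi * 50.0 * t + phase_true)

Y = np.fft.fft(y) / n             # normalized FFT
k = int(round(50.0 * n / fs))     # index of the 50 Hz bin (here k = 10)
amp = 2.0 * np.abs(Y[k])          # single-sided amplitude at that bin
phase = np.angle(Y[k])            # phase at that bin, in radians

# Rebuild the component from the measured pair; it lands exactly on the original.
y_rec = amp * np.cos(2 * np.pi * 50.0 * t + phase)
print(np.max(np.abs(y - y_rec)))  # essentially zero: no time shift
```

If a harmonic sits even slightly off its bin (as can happen after the duplicate-168-times padding), the measured phase picks up exactly the kind of near-±180° flips described in the question, and feeding such phases to cosine current sources could well show up as a constant time offset in the simulator.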
1 ### JEE Main 2018 (Online) 16th April Morning Slot

In a complexometric titration of a metal ion with a ligand, M (metal ion) + L (ligand) $\to$ C (complex), the end point is estimated spectrophotometrically (through light absorption). If M and C do not absorb light and only L absorbs, then the titration plot of absorbed light (A) versus volume of ligand L (V) would look like:

A B C D

2 ### JEE Main 2019 (Online) 9th January Morning Slot

The correct match between Item-I and Item-II is:

| Item-I (drug) | Item-II (test) |
| --- | --- |
| (A) Chloroxylenol | (P) Carbylamine test |
| (B) Norethindrone | (Q) Sodium hydrogen carbonate test |
| (C) Sulphapyridine | (R) Ferric chloride test |
| (D) Penicillin | (S) Baeyer's test |

A  A $\to$ R;  B $\to$ P;  C $\to$ S;  D $\to$ Q
B  A $\to$ Q;  B $\to$ S;  C $\to$ P;  D $\to$ R
C  A $\to$ R;  B $\to$ S;  C $\to$ P;  D $\to$ Q
D  A $\to$ Q;  B $\to$ P;  C $\to$ S;  D $\to$ R

## Explanation

(1) To react with FeCl3, there should be an $-$OH group attached to the benzene ring.
(2) For the carbylamine test there should be an $-$NH2 group in the compound.
(3) NaHCO3 reacts when there is a $-$COOH group in the compound.
(4) For Baeyer's test there should be a double or triple bond between two carbons.

3 ### JEE Main 2019 (Online) 10th January Morning Slot

If dichloromethane (DCM) and water (H2O) are used for differential extraction, which one of the following statements is correct?

A DCM and H2O would stay as upper and lower layers respectively in the separating funnel (S.F.)
B DCM and H2O will make a turbid/colloidal mixture
C DCM and H2O would stay as lower and upper layers respectively in the separating funnel (S.F.)
D DCM and H2O will be clearly miscible

## Explanation

Density of DCM = 1.39 g/mL and density of H2O = 1.0 g/mL. As the density of DCM is higher than that of H2O, DCM will stay in the lower layer of the separating funnel.
4 ### JEE Main 2019 (Online) 10th January Evening Slot

The correct match between Item 'I' and Item 'II' is:

| Item 'I' (compound) | Item 'II' (reagent) |
| --- | --- |
| (A) Lysine | (P) 1-naphthol |
| (B) Furfural | (Q) Ninhydrin |
| (C) Benzyl alcohol | (R) KMnO4 |
| (D) Styrene | (S) Ceric ammonium nitrate |

A  (A) $\to$ (Q);  (B) $\to$ (R);  (C) $\to$ (S);  (D) $\to$ (P)
B  (A) $\to$ (Q);  (B) $\to$ (P);  (C) $\to$ (S);  (D) $\to$ (R)
C  (A) $\to$ (R);  (B) $\to$ (P);  (C) $\to$ (Q);  (D) $\to$ (S)
D  (A) $\to$ (Q);  (B) $\to$ (P);  (C) $\to$ (R);  (D) $\to$ (S)

## Explanation

Lysine is an amino acid, so it gives the ninhydrin test. Furfural has an aldehyde group, which is why it reacts with 1-naphthol to give a purple colouration. Benzyl alcohol reacts with ceric ammonium nitrate to give a pink colouration. Styrene reacts with KMnO4 and gives benzoic acid and brown MnO2.
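The functional-group reasoning used in the explanations above amounts to a lookup from group to characteristic test. A small illustrative sketch (the dictionary structure and names are mine, not part of the exam):

```python
# Characteristic test implied by each functional group (from the explanations above).
group_to_test = {
    "phenolic -OH": "ferric chloride test",
    "-NH2": "carbylamine test",
    "-COOH": "sodium hydrogen carbonate test",
    "C=C / C#C": "Baeyer's test",
}

# Hypothetical drug -> dominant functional group, following the explanation's logic.
drug_to_group = {
    "Chloroxylenol": "phenolic -OH",
    "Sulphapyridine": "-NH2",
    "Penicillin": "-COOH",
    "Norethindrone": "C=C / C#C",
}

matching = {drug: group_to_test[group] for drug, group in drug_to_group.items()}
for drug, test in matching.items():
    print(f"{drug}: {test}")
```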
## Saturday, June 29, 2013 ### Dobrodošli u EU, Hrvatska It means "Welcome to the EU, Croatia", just to be sure. In two days, Croatia will become the 28th member state of the European Union. I think it is positive news for Europe, and Croatia's parameters justify its membership because they don't differ too much from those of Slovenia, so far the only post-Yugoslav EU member state. Vinetou and Old Shatterhand, the two most famous Americans, on a lake in the Plitvice Lake Region, Croatia The European Parliament has already welcomed Croatia as well: by creating a Facebook web page with a huge coat-of-arms of the puppet fascist state of Croatia during World War II. ;-) Nice. This shows how much understanding the EU has for its citizens and nations. Croatia was a part of Yugoslavia (under one name or another) from 1918 to the early 1990s, when the federal state violently decayed. Before that, it belonged to the Roman Empire, Austria-Hungary, and other empires. Whenever I had to think about their wars against the Serbs, I just couldn't fully understand the tension, as they speak the same language, really. Now I learned that it's not just the religion – Croatia is mostly Roman Catholic. Their ethnic (I really mean biological, genetic) origin may actually be Iranian, not Slavic (Serbs are surely Slavs), and it can make some difference. If you think about contemporary Iran, you may find it genuinely ironic when you realize that the more Iranian part of the Yugoslav nation is the most Roman Catholic one, the most Western one, in a sense. Croatia is a top tourist destination for us. Almost every Czech has been vacationing there at least once. I have been there about 4 times. The Croatian economy is based on services, including tourism. The GDP per capita is \$18,000 or so, about 3/4 of the Czech figure. Their current president and prime minister are left-wing.
The economy isn't doing too well, with unemployment near 20%, an annual GDP drop of about 2%, and so on (I would probably blame many things on their excessively high salaries), but these are temporary details. I am confident that in the medium and long run, it is pretty much a healthy economy. Their currency is the kuna, which means "marten" (both in Croatian and Czech), because in the Middle Ages, marten pelts were used as banknotes. One euro is about 7.5 kunas. So one kuna is still over 3 korunas, which isn't too tiny, and it means that they still actively use the fractional coins. 1/100 of a kuna is called a lipa (in Czech: lípa; it's actually our national tree), which means a linden (lime) tree. I think that the Croats are friendly. Financially, they probably prefer German tourists, who spend somewhat more money than the Czechs, but the times when a Czech tourist would be assumed to be a nomad who eats his own sandwich in the middle of a road and spends nothing belong to the past. The Czech and Croato-Serbian languages sound similar to one another; they use similar roots. However, sometimes the similarity is deceiving. The meanings of similar words may be slightly different or completely different. For example, "plivat" means "to spit" in Czech but "to swim" in Croatian (to swim in Czech is "plavat"). Also, perhaps more seriously, "cura" [tsura] is a girl, a babe, a neutral or positive word for a young woman in the Croatian language. The closely related word "coura" means a "slut" in Czech. The Czech tourists and their Croatian hosts are usually aware of these basic traps and they make fun of them.
If you think about the diversification of the meaning of the word "c[o]ura" that had to occur in the past, it probably means that there had to be a moment in one of the two countries' histories when it was either normal for a young girl to be a slut or when being a slut was viewed as a positive thing by the society (at least the part of the society that had power over the mutation of the meaning of similar key words). Otherwise the meaning of the word couldn't have drifted in this way. Do you agree? ;-) Czechia's T-Mobile, a telephone service operator, produced about a dozen TV commercials over the most recent year that play games with various Croatian words known to Czechs. For example, there's a town (and/or name) Zadar in Croatia that sounds like "zadara", a Czech word meaning "for free". So a group of ski jumpers wants to "call Zadara" (call Mr Zadar or call for free) and they do lots of crazy things. The Yugoslav languages sound a bit funny to us. The more Southern related languages may sound funnier to the people in the North – something that the Poles, who find the Czech language immensely entertaining, know very well. I don't claim that Polish doesn't sound funny to us, but there's some sense in which more Southern languages sound more similar to child-speak. The Croatian landscape is pretty. Croatia controls most of the Adriatic Sea's coastline that belonged to Yugoslavia. There are the Plitvice lakes and other cool things, too. Last night, they were airing the last "Vinetou and Old Shatterhand" movie from the series shot by Germany and others in 1968 or so. Some of the picturesque scenes claiming to be North America (including the image at the top) were shot in the Plitvice Lake Region of Croatia. The movies are based on novels about Native Americans written by Karl May, a German author. As kids, we would probably name Vinetou (an Apache) and Old Shatterhand (his pale-skinned "brother") to be the two most famous Americans.
I have always been amazed that almost no one knew these two guys in the U.S.! ;-) The EU membership may tame some problems of Croatia that result from their location in the Balkans and from the hot temperament associated with the nations that live there. On the other hand, the increase of diversity and common sense that Croatia (and its 4.3 million citizens) brings may dilute some ultracentralization, Soviet-like tendencies in the EU and countries like mine may be gaining a new natural ally in various questions that matter. Slovenia hasn't really played this role because this nation seems to be obsessed with the right behavior that the EU overlords will like (I mean that the Slovenes have been a bit like swots). I am sure that the Croats are less opportunist and submissive. Dobrodošli u EU, Hrvatska. Croatia is great but there can only be one country that wins the most important discipline. Czechia has safely won the Bloomberg tournament searching for the most vice-prone nation in the world. With 64.5 points, it was well ahead of Slovenia at 58.8 and others such as Australia, Armenia, Bulgaria, and Spain. The subdisciplines were alcohol, drug use, cigarette consumption, and losses in gambling over GDP. Too bad that other disciplines such as the divorce rate, out-of-wedlock births, the working womanhours of prostitutes per capita, and so on weren't included because our hegemony would be even more striking. ;-) More seriously, I am surely not a typical representative of my nation in those things but I do view this high score as correlated with the atheism and the "true freedom" that exists in Czechia. Whether you find it a blasphemy or not, smoking, drinking, sniffing, and gambling are usually not lethal and they even increase the sense that "life is worth living" for those who do such things. I think that in most countries beneath us, such ideas are taboo. On the opposite side of the table, you find Zambia. Would you prefer to live in Czechia or Zambia? 
Paradoxically enough, the drug-civil-war-stricken Mexico was among the 5 least vice-prone countries, too.

1. Croatia will also bring continuity in the EU by bringing 300,000 unemployed into the community ;).
2. C'mon Lumo, cura is related to our "dcera" I guess ;)
3. So when Miloslav Vlk, the archbishop emeritus of Prague, spends his holidays in Croatia, does he choose Vrh on the island of Krk? Who stole the Czech and Croat vowels, anyway? I suspect the Italians ;) Yes, welcome to the EU, dear crustaceans. But don't join the euro and don't hijack any airplanes.
4. Right, Shannon, it's near 20% over there, maybe more than you say. But I don't believe that the EU is so close that the unemployed people are shared by the whole EU. They're still being paid and taken care of by the individual nation states, right?
6. Right, unless they change country and move to the most unemployed-friendly one ;-)
7. Hi Lubos, on an unrelated topic: perhaps you've already noticed that John Cook has published his paper: Quantifying the consensus on anthropogenic global warming in the scientific literature. DOI: 10.1088/1748-9326/8/2/024024 I was actually surprised that the results show much less hysteria among people than I had expected...
8. Hvala!
9. They probably have Indo-European roots. Kore was a young girl in ancient Greek (there are statues called Kore: http://en.wikipedia.org/wiki/Kore_%28sculpture%29), and means daughter in modern Greek. Kouros was a young man in ancient Greek (there are statues: http://en.wikipedia.org/wiki/Kouros).
10. Dear Petar, I understand that the Serbs have suffered and Croats have been far from saints. But your presentation of the history is lopsided. Croats were at most small disciples of the Germans - and even Germans have been forgiven. In fact, they're one of the most popular large nations in most contemporary polls. Croats have been suppressed in various ways as well. For example, Yugoslavia was arguably controlled by the Serbs.
Can't you just draw a thick line behind the history? It's not a good idea to revive all the past wars all the time. In particular, I don't think that a person should be "prosecuted in The Hague" just for saying that an ethnic problem will be solved when the percentage of a population drops below 3%.
11. You're welcome, Sašo. I just decided to find when Croats began to use diacritics including letters like Š. It's here: http://en.wikipedia.org/wiki/Gaj%27s_Latin_alphabet Gaj's 1835 alphabet was based on Jan Hus' Czech alphabet proposed in Orthographia Bohemica, 1406 or 1412. Some other nations using the Latin alphabet clearly need more than 420 years to appreciate how good an idea this actually is.
12. These are such intriguing similarities, but in physics speak, Kore-dcera looks like a 1.5-sigma signal to me. ;-) I wouldn't bet much that this theory is actually valid. Kouros for a young man actually counts as counter-evidence, because dcera/coura are all staunchly feminine; it's really the point of the words.
13. I meant the roots of the original comment: "cura", and the "coura". That has higher standard deviations.
14. Nevertheless, dear Lubos, it is not easy to bury old feuds; they keep appearing in the newer generations. For example, you must know of the enmity between Greece and Turkey over the centuries. You might not know that Greece has been a staunch supporter of Turkey's entry into the EU, because it is true, as you say, that a line has to be drawn at some point and coexistence of neighbors is for the good of both nations. But in our news there were two disturbing actions by the present Turkish government: they have started reversing the museum status of old churches which with the Ottoman occupation had become mosques, and turning them again into active mosques. The Aghia Sophia is an example where Muslim liturgies have restarted, and another church, also of historical interest, in another city.
As an agnostic I worry most about what will happen to the iconography, which had been partly destroyed and then covered up with Islamic doodles, and which since 1922 (the peace treaty year) was slowly being revealed in all its glory. Islam forbids images in mosques :(. Of course Greeks, mainly following their Orthodox faith, will be upset. All the national reconciliation from watching Turkish serials on our Greek TV channels (they are cheaper in this economic crisis than making new Greek ones) goes down the drain.
15. Dear Anna, the vitriol in the historical relationships surely matters, but so do the nations' current attitudes. Germans of various kinds have ruled the Czech lands and kept us as second-class citizens for a big portion of it, especially in 1939-1945 when they also caused the death of 300,000 Czechoslovaks, but except for a part of the very old generation, no one hates Germans. In the same way, the mainstream Germans don't hate Czechs, although they could also invent reasons, like the expulsion of Germans after the war. Russians occupied us in 1968, ending hopes for freedom. Yet, we don't hate them. There are positive relationships, and this holds on both sides, too. Slovakia was eating lots of subsidies in Czechoslovakia; we turned them from an underdeveloped agricultural province - in the Kingdom of Hungary - to a modern industrial country. They were not grateful at all, screaming that they were oppressed, and one may invent all such things. But Czechs obviously find Slovaks the most lovable foreigners, and vice versa. The divorce was a velvet divorce. It's also about your desire to build a future that doesn't repeat the vitriol in the past. It's about recognizing that the status quo we inherited is an acceptable approximation to a fair compromise and that attempts to "refine" this approximation are bound to create more losses than benefits. This is an observation that must be made not only about the other nation, your former/current foe, but also about your own nation.
So obviously the argument that we don't want to piss off Germany is an argument, not the final or omnipotent argument, but an argument nevertheless. The same thing holds in the opposite direction, despite the different flavor of the possible negative sentiments. So some Czechs are criticizing other Czechs for pissing off Germans in ways that don't bring anything good at all; and the same holds for Germans criticized by other Germans for pissing off Czechs in ways that are counterproductive for the total. If this reasoning isn't widespread in Greece and Turkey at all, it's no surprise that the relationships continue to be bad. You must simply draw a thick line behind the history and, even more importantly, try and enjoy the positive - economic etc. - interactions. It's really the mutual economic dependence - mostly but not only trade and investments - that creates the incentives to nurture the relationships.
16. Plitvice Lakes is one of the world's most beautiful places
17. Lubos, thanks for the nice and informative article about my homeland. Incidentally, the whole Iranian connection has been proven to be genetically bogus - Croatians are, unsurprisingly, a combination of people who have been in the region perhaps 10000 years, and a significant, but not completely predominant, Slavic component. Also, Czech girls on the Croatian coast are well renowned for their beauty :-) Did not know about the cura/coura dichotomy, as personally I unfortunately had no close contact with these beauties :-(
18. Don't know the original roots, but kurva in Croatian is a prostitute....
Operators from certain Banach spaces to Banach spaces of cotype $q\ge 2$
Commun. Korean Math. Soc. 2002, Vol. 17, No. 1, 53-56. Printed March 1, 2002.
Chong-Man Cho, Hanyang University

Abstract: Suppose $\{X_n\}_{n=1}^\infty$ is a sequence of finite dimensional Banach spaces and suppose that $X$ is either a closed subspace of $(\sum_{n=1}^\infty X_n)_{c_0}$ or a closed subspace of $(\sum_{n=1}^\infty X_n)_p$ with $p > 2$. We show that every bounded linear operator from $X$ to a Banach space $Y$ of cotype $q$ $(2 \le q < p)$ is compact.

Keywords: compact operator, cotype $q$, $\ell_p$-sum, Rademacher functions
MSC numbers: 46B28, 46B20
# mongolia vertical cylindrical tank oil volume

Description: Single wall vertical tank fuel, refuelling tanks. Vertical single-wall cylindrical tanks manufactured by using shell plates and dished ends made of carbon steel quality S235JR according to UNI EN 10025. Tanks are equipped with a containment basin designed to hold at least 50% or 100% of the vol ...

### How is the volume of a rectangular tank calculated?
The filled volume of a rectangular tank is just a shorter height with the same length and width; the new height is the fill height, f. The volume of an oval tank is calculated by finding the area, A, of the end, which is the shape of a stadium, and multiplying it by the length, l. (Tank Volume Calculator)

### Tank Size Calculator: work out an oil tank's volume
Make sure you're calculating the correct volume for your tank by selecting one of the below options: horizontal cylindrical tanks, vertical cylindrical tanks, or rectangle tanks. The conversion bar allows you to quickly convert the litres into barrels, gallons, ft³ and m³.

### What is the formula for cylinder tank volume?
Cylinder tank formula: tank volume = π × r² × l, where r = radius (diameter ÷ 2) and l = length (or height for a vertical tank). (Tank Volume Calculator - Inch Calculator)

### What is the volume of a horizontal cylinder tank?
The total volume of a cylinder-shaped tank is the area, A, of the circular end times the length, l.
A = π r², where r is the radius, which is equal to 1/2 the diameter, or d/2. (Tank Volume Calculator)

### How to calculate the volumes of partially full tanks
3.2. Hemispherical head in a vertical cylindrical tank (Fig. 5, hemispherical head in vertical tank [10]). The partial volume of fluid in the lower head is given by eq. (12), and the partial volume at the upper head by eq. (13). The depth (z) of this type of head is equivalent to 0.5 of the

### Tank Volume Calculator for Ten Various Tank Shapes (Jan 14, 2020)
Therefore the formula for a vertical cylinder tank's volume looks like: V_vertical_cylinder = π × radius² × height = π × (diameter/2)² × height. If we want to calculate the filled volume, we need to find the volume of a "shorter" cylinder - it's that easy! V_vertical_cylinder = π × radius² × filled = π × (diameter/2)² × filled.

### Calculating Tank Volume: saving time, increasing accuracy
By Dan Jones, Ph.D., P.E. Calculating fluid volume in a horizontal or vertical cylindrical or elliptical tank can be complicated, depending on fluid height and the shape of the heads (ends) of a horizontal tank or the bottom of a vertical tank.

### Cylinder, tube tank calculator: surface area, volume
Now it is a simple matter to find the volume of the cylinder tank, as it provides estimation of the total and filled volumes of the water tank, oil tank, and reservoirs like a tube tank, and others of horizontal or vertical cylindrical shape. Sometimes it is also needed to estimate the surface size of your tank.

### Cylindrical Tanks (McMaster-Carr)
Choose from our selection of cylindrical tanks, including tanks, round plastic batch cans, and more. In stock and ready to ship.
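The vertical-cylinder formulas above (total and filled volume as π × (d/2)² times the height or the fill level) can be sketched directly; the example dimensions below are made up for illustration:

```python
import math

def vertical_cylinder_volume(diameter: float, level: float) -> float:
    """Volume of liquid in a vertical cylindrical tank: V = pi * (d/2)^2 * level."""
    radius = diameter / 2.0
    return math.pi * radius ** 2 * level

# Illustrative tank: 2 m diameter, 3 m tall, filled to 1.5 m (assumed numbers).
total = vertical_cylinder_volume(2.0, 3.0)    # full tank
filled = vertical_cylinder_volume(2.0, 1.5)   # the "shorter cylinder" at the fill level

print(f"total:  {total:.3f} m^3")   # ~9.425 m^3
print(f"filled: {filled:.3f} m^3")  # ~4.712 m^3, i.e. about 4712 litres
```

For a vertical cylinder the filled volume is linear in the fill level, which is why a dipstick chart for this shape is just a straight line.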
### Cylindrical Tanks and Plastic Vertical Storage Tanks
The vertical cylindrical storage tanks are offered in capacities from 10 gallons to 12,500 gallons. They are manufactured from medium- or high-density polyethylene with UV inhibitors, and conform to the requirements of NSF/ANSI Standard 61. Please contact us with your vertical tank needs. We also offer accessories for complete cylindrical tank solutions, such as drip trays and ...

### Generation of Calibration Charts for Horizontal Cylindrical Tanks
... the required volume of horizontal cylindrical tanks. Generation of any tank capacity table starts from a skeleton chart and consideration of different correction factors. Some of these correction factors are: 1. effect of temperature; 2. effect of shell plate thickness; 3. effect of tilt; 4. effect of butt strap (usually applicable to vertical ...

### Horizontal Cylindrical Tank Volume and Level Calculator
Volume calculation on a partially filled cylindrical tank: some theory, and using the theory. Use this calculator for computing the volume of partially-filled horizontal cylinder-shaped tanks. With horizontal cylinders, volume changes are not linear and in fact are rather complex, as the theory above shows.
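The non-linear fill behaviour comes from the circular-segment area of the liquid cross-section. A minimal sketch of the standard segment formula (the dimensions are illustrative; this is not the calculator page's own code):

```python
import math

def horizontal_cylinder_volume(diameter: float, length: float, depth: float) -> float:
    """Liquid volume in a horizontal cylindrical tank with flat ends.

    Uses the circular-segment area
        A = r^2 * acos((r - h)/r) - (r - h) * sqrt(2*r*h - h^2),
    multiplied by the tank length; h is the liquid depth, 0 <= h <= 2r.
    """
    r = diameter / 2.0
    h = depth
    area = r ** 2 * math.acos((r - h) / r) - (r - h) * math.sqrt(2 * r * h - h ** 2)
    return area * length

full = horizontal_cylinder_volume(2.0, 5.0, 2.0)   # completely full
half = horizontal_cylinder_volume(2.0, 5.0, 1.0)   # filled to the axis
print(f"full: {full:.3f} m^3, half: {half:.3f} m^3")  # half is exactly 50% of full
```

Note that filling to a quarter of the diameter gives less than a quarter of the volume, which is exactly the non-linearity the calculator pages warn about.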
Fortunately you have this tool to do the work for you.

### How to Classify Oil Tanks?
Vertical fixed roof oil tanks, which are composed of a fixed tank roof and a vertical cylindrical tank wall, are mainly used for storage of non-volatile oil, such as diesel oil and similar oils. The most commonly used volumes of fixed roof oil tanks range from 1,000 m³ to 10,000 m³.

### KotMF Dipstick Calculator: Measure Liquid Volume by Height
Determine volume by liquid level height: how to make a diptube to measure your cylindrical tank, kettle, or fermenter. Units of measure are inches (yielding US gallons) or centimeters (yielding liters), for horizontal or vertical cylinder orientation.

### Liquid Volume of Horizontal Tank with Dished Ends (Jan 18, 2013)
I am the Head of Department for quality control & environment. We calculated our horizontal tank volume with a torispherical head; as per our knowledge it is giving a spurious reading. Tank diameter is 2400 mm, LOS is 3000 mm, and the torispherical head length from center is 472 mm. The internal lining material is natural rubber with thickness 4.3 mm (three layers).

### Oil tank size calculator: replacement oil tank, how much
An oil tank capacity calculator, designed to help you know how big your existing oil tank is. Enter the cylindrical tank dimensions (direction, diameter and length in mm, and content level in mm); calculation results are approximate.

### Oil/Water Separators Sizing Guide - Highland Tank
Highland Tank OWS have a high-oil capacity equal to 43% of the static vessel volume. This capacity is considered the maximum operational level.
Once this volume is exceeded, effluent quality may degrade. Highland Tank OWS have a spill capacity equal to 80% of the static vessel volume. This capacity considers containment of a pure oil spill only.

### Online calculator: Oval Tank Volume Calculator
Find the volume of oval tanks in cubic inches and gallons, given depth (front to back), height, and length. (Pete Mazz, 2015-03-29)

### Pulse Tank Level Monitor (TLM)
Vertical tank maximum height: 30 ft; vertical tank maximum volume: 999,999 gal; warranty: two years; weight: 1.25 lb; battery chemistry: alkaline; battery size: AA; cylindrical tank maximum height: 9.14 m; cylindrical tank maximum length: 9.14 m; cylindrical tank maximum volume: 999,999 L; for use with oil, waste oil, ATF, antifreeze.

### Secondary Containment Calculation Worksheets, Oil Spills (Apr 11, 2017)
Worksheets to help qualified facilities calculate secondary containment. If you are the owner or operator of a Spill Prevention, Control, and Countermeasure (SPCC) qualified facility, you need to ensure that you have adequate secondary containment to prevent oil spills from reaching navigable water.
This appendix provides supplementary information for (grade)Some results are removed in response to a notice of local law requirement. For more information, please see here.mongolia vertical cylindrical tank fire technology - Oil mongolia vertical cylindrical tank oil volume(steel)mongolia vertical cylindrical tank fire technology. mongolia vertical cylindrical tank oil volume in MATEC Web of Conferences 255(13 14):02009 January 2019 [tank]Journal of PhysicsConference Series, Volume 1425, 2019 [steel]The interface zone of the wall and bottom of the vertical cylindrical steel tank (VST) is the most loaded part of the structure, mongolia vertical cylindrical tank oil volume ### TANK VOLUME CALCULATOR [How to Calculate Tank Vertical Cylindrical Tank. The total volume of a vertical cylindrical tank is calculated by using the formula $$Volume = \pi × Radius^2 × Height$$ Where $$Radius = {Diameter \over 2}$$ Rectangular Tank. The total volume of a rectangular tank is calculated by using the formula $$Volume = Length × Width × Height$$ Horizontal Capsule Tank(grade)Tank Size Calculator Work out an Oil Tank's Volume mongolia vertical cylindrical tank oil volume(steel)Make sure youre calculating the correct volume for your tank by selecting one of the below options. Horizontal Cylindrical Tanks Vertical Cylindrical Tanks Rectangle Tanks The conversion bar allows you to quickly convert the litres into barrels, gallons, ft 3 and m 3.(grade)Tank Volume Calculator - Horizontal Elliptical - Metric(steel)Tank Volume Vertical Cylindrical Tank Volume Rectangular Tank Volume Directory Inch Inch Inch Inch Horizontal Elliptical Tank Volume, Dip Chart and Fill Times - Metric. Hemispherical Ends Ellipsoidal Ends - adj Flat Ends Side ### Tank Volume Calculator - Inch Calculator When you have the tank dimensions and the appropriate formula to solve for volume, simply enter the dimensions into the formula and solve. 
For example, lets find the volume of a cylinder tank that is 36 in diameter and 72 long. radius = 36 ÷ 2. radius = 18. tank volume = × 182 × 72. tank volume = 73,287 cu in.(grade)Tank Volume Calculator - Oil Tanks(steel)In the case of the vertical cylindrical tank, you need to perform the same type of measurement. However, since the tank is standing upright rather than lying on its side, you would replace the total length of the tank by the total height of the tank. Thus, the final calculation becomes the following V (tank) = r2h.(grade)Tank Volume Calculator - Tank Capacitiy Calculator(steel)Tank volume calculator online - calculate the capacity of a tank in gallons, litres, cubic meters, cubic feet, etc. Tank capacity calculator for on oil tank, water tank, etc. supporting 10 different tank shapes. Quick and easy tank volume and tank capacity calculation (a.k.a. tank size). Servers as a liquid volume calculator with output in US gallons, UK gallons, BBL (US Oil), and ### User rating 4.6/5Images of Mongolia Vertical Cylindrical Tank Oil Volume imagesPeople also askWhat is the formula for vertical cylindrical tank?What is the formula for vertical cylindrical tank?Vertical Cylindrical Tank The total volume of a vertical cylindrical tank is calculated by using the formula $$Volume = pi × Radius^2 × Height$$TANK VOLUME CALCULATOR [How to Calculate Tank Capacity mongolia vertical cylindrical tank oil volume(grade)VERTICAL CYLINDER CALCULATOR - 1728(steel)If you want to do calculations for a horizontal cylinder, then go to this link Horizontal Cylinder Calculator. Example Inputting tank height = 12, liquid level = 3 and tank diameter = 6, then clicking "Inches" will display the total tank volume in cubic inches and US Gallons and will also show the volume at the 3 inch level. If the tank is filled with water, this calculator also displays (grade)VOLUME LEVEL DETERMINATION IN FUEL TANKS (steel)volume than a cylindrical cap of the same depth and length L=r. 
Finally let us look at a tank in the shape of an axisymmetric cone r=f(z). Here z is the vertical symmetry axis , r the radial coordinate, and the tank is assumed to extend from z=a to z=b where b>a. Here the volume of the filled tank is Vc= ### Volume of a Partially Filled Cylindrical Tank mongolia vertical cylindrical tank oil volume This calculator will tell you the volume of a horizontal paritally filled cylindrical tank, or a vertical one.This will also generate a dip chart/table for the tank with the given dimensions you specify. Output is in gallons or liters. If you have a complex tank, or need something special created please don't hesitate contact us for more info.(grade)afghanistan vertical cylindrical tank - Oil storage tank mongolia vertical cylindrical tank oil volume(steel)I want to calculate the tank volume in cubic feet and . Tank Volume Calculator - Oil TanksIn the case of the vertical cylindrical tank, you need to perform the same type of measurement. However, since the tank is standing upright rather than lying on its side, you would replace the total length of the tank by the total height of the tank.(grade)drawings + specs - Highland Tank(steel)Locations Manheim, PA 17545 717.664.0600 Watervliet, NY 12189 518.273.0801 Lebanon, PA 17042 717.664.0602 Greensboro, NC 27407 336.218.0801 Friedens, PA 15541 mongolia vertical cylindrical tank oil volume ### drawings + specs - Highland Tank Locations Manheim, PA 17545 717.664.0600 Watervliet, NY 12189 518.273.0801 Lebanon, PA 17042 717.664.0602 Greensboro, NC 27407 336.218.0801 Friedens, PA 15541 mongolia vertical cylindrical tank oil volume(grade)mongolia vertical cylindrical tank building volume - Oil mongolia vertical cylindrical tank oil volume(steel)Single wall vertical Tank Fuel, Refuelling Tanks. Vertical single wall cylindrical tanks manufactured by using shell plates and dished ends made of carbon steel quality S235JR according to UNI EN 10025. 
Tanks are equipped with containment basin designed to hold at least 50% or 100% of the volume of fuel the tank is designed to contain.[tank]Investigation of the (grade)tank volume formula Automation & Control Engineering (steel)Oct 21, 2006Volume (assuming the dimensions given were for a cylindrical tank) is ((pi*D^2)/4) * H) = ((3.14 * 10*10) / 4) * 15) * 7.4805 gal/cu-ft = 8808.3 gal If the tank has rounded ends, then the volume will be somewhat more - this formula assumes flat ends, like an oil drum. ## Hot Products ### Inspection & Control we have a comprehensive set of inspection and control tools for quality control, materials analysis, mechanical properties,ultrasonic testing, magnetic particle inspection, etc.. universal tester Impact test X-ray test MTS 300kN Electromechanical Universal Testing Machine ### Our Services To ensure better cooperation,our factory provides the following services to the buyers: 1. OEM&ODM:As the special samples and drawings. 2. The small order is acceptable. 3. Professional services 4. Comprehensive and professional after-sale services We are a supplier of bulletproof steel material & machinery products. We have extensive experience in the manufacture of bulletproof steel plate, other Bulletproof peripheral products. Welcome to sending drawings and inquiry. ### OEM According to the drawings or samples, customize steel products you need. ### ODM According to the specific requirements, design steel products for you, and generate the production.
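The formulas above translate directly into a short script. The function names are illustrative, and the units are whatever you feed in (cubic inches here, matching the worked example):

```python
import math

def vertical_cylinder_volume(diameter: float, height: float) -> float:
    """Volume = pi * (diameter / 2)^2 * height."""
    return math.pi * (diameter / 2) ** 2 * height

def rectangular_volume(length: float, width: float, height: float) -> float:
    """Volume = length * width * height."""
    return length * width * height

# Worked example from the text: a cylinder 36 in across and 72 in long.
v = vertical_cylinder_volume(36, 72)
print(round(v))           # 73287 cubic inches, matching the example
print(round(v / 231, 1))  # 317.3 US gallons (231 cubic inches per gallon)
```

The 231 in³/gal figure is the definition of the US gallon; the 7.4805 gal/cu-ft used in the forum example is the same conversion expressed per cubic foot (1728/231).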
1. ## Cauchy-Frobenius-Burnside Theorem

Use the Cauchy-Frobenius-Burnside Theorem to determine the number of graphs of order 5.

Can someone please assist with this question?

2. ## Re: Cauchy-Frobenius-Burnside Theorem

The setup seems like it must be this:

Let $V = \{v_1, v_2, v_3, v_4, v_5\}, G = Sym(V), X = \{ (x_1, x_2) \in V \times V | x_1 \ne x_2 \}$

Then G induces an action on the power set of X, $\mathcal{P}(X)$. Each subset of X corresponds to some graph with vertices V. Each permutation of the vertices (obviously $G \simeq S_5$) represents a graph isomorphism, and vice versa, so that the orbit under G of a subset A of X (i.e. of a graph) is the set of graphs isomorphic to A.

There's a complication to consider in that $(v_1, v_2)$ and $(v_2, v_1)$ represent the same edge, so we could make them equivalent, and so consider only X/~, and let G act on $\mathcal{P}(X/\sim)$. That does seem like a sensible way to proceed, as otherwise we'll be wrongly treating $A_1$ and $A_2$ as different graphs, where $A_1 = \{ (v_1, v_2) \}, A_2= \{ (v_1, v_2), (v_2, v_1) \}$. I haven't thought it out, but that seems like how it should be handled. Without the equivalence classes, we'd be considering digraphs, not graphs.

That sets it up to apply the CFB Lemma. So the problem becomes computing $\mathcal{P}(X/\sim)^g$ when $g \in G$. I haven't thought about that, but maybe this will get you started.

3. ## Re: Cauchy-Frobenius-Burnside Theorem

Thanks for your response! Can I ask another question? If we let the set of all automorphisms of G under the operation of function composition be the automorphism group of G, which we denote T(G), or simply T, how would you prove: A graph G has fixing number 1 if and only if G contains an orbit of cardinality |T|. Help with either implication would be great if you can.

4. ## Re: Cauchy-Frobenius-Burnside Theorem

I don't recall a definition for "fixing number".
I found this online: "The fixing number of a graph G is the smallest cardinality of a set of vertices S such that only the trivial automorphism of G fixes every vertex in S." It makes the statement work, so I'll assume that's the meaning you intend.

For my own benefit, I'm going to reason through what "fixing number" "means", especially for fixing number 1:

1) If S is the set whose cardinality makes the fixing number, then any non-identity automorphism of G moves at least one vertex of S.

2) If the fixing number is 1, then S = {v}, so any non-identity automorphism of G moves v.

2') Thus fixing number 1 means there exists v in V(G) such that every non-identity automorphism g of G satisfies g(v) not= v.

2'') And so fixing number 1 means there exists v in V(G) such that: if g is an automorphism of G, then g(v) = v iff g is the identity.

2''') And so fixing number 1 means there exists v in V(G) such that the stabilizer of v in the automorphism group of G is trivial.

3) Special case: what if G has no non-identity automorphisms? Then every orbit has cardinality 1, and the fixing number is 1.

With that in mind, I'll do one direction, and leave the other to you. They're very similar.

Suppose orbit $\mathcal{O}$ has cardinality $|T|$. Let $v \in \mathcal{O}, \text{ and so have } \mathcal{O} = Tv = \{ g(v) | g \in T \}$. Since $|T| = |\mathcal{O}| = |\{ g(v) | g \in T \}|$, we must have that the images of v under T's action are all distinct. Thus, since id(v) = v, no other element of T can fix v. Thus S = {v} satisfies the definition for fixing number, and so the fixing number is 1.

5. ## Re: Cauchy-Frobenius-Burnside Theorem

That's awesome, thanks! For the reverse implication, would the following be the idea:

Assume the fixing number is 1, so the fixing set is S = {v} for some vertex v of G. Then the only automorphism that keeps v fixed is the identity, and for any other automorphism g we know that g(v) does not equal v.
Also, if we take any two distinct automorphisms g_1 and g_2 from T and assume g_1(v) = g_2(v), it follows that g_1^{-1}g_2(v) = v; but since S = {v} satisfies the definition for fixing number, g_1^{-1}g_2 must be the identity, so g_1 = g_2, a contradiction. Hence distinct automorphisms send v to distinct vertices, so the orbit of v has cardinality |T|, and the result follows. If there is only one automorphism g other than the identity, then we still have that g(v) cannot equal v, and the result follows. As you said, if G has no non-identity automorphisms, then every orbit has cardinality 1, and the fixing number is 1.
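Returning to the original counting question, the CFB/Burnside computation can be checked by brute force: $S_5$ acts on the 10 unordered vertex pairs, and a graph is fixed by a permutation exactly when its edge set is a union of cycles of the induced edge permutation, so each permutation fixes $2^c$ graphs, where $c$ is its number of edge cycles. A quick sketch (unoptimized; it enumerates all $n!$ permutations):

```python
from itertools import combinations, permutations
from math import factorial

def count_graphs(n):
    """Non-isomorphic simple graphs on n vertices, by the CFB/Burnside lemma."""
    edges = list(combinations(range(n), 2))
    total = 0
    for perm in permutations(range(n)):
        # Count cycles of the permutation that perm induces on unordered
        # pairs; a graph is fixed by perm iff it is a union of such cycles.
        seen, cycles = set(), 0
        for e in edges:
            if e not in seen:
                cycles += 1
                cur = e
                while cur not in seen:
                    seen.add(cur)
                    u, v = cur
                    cur = tuple(sorted((perm[u], perm[v])))
        total += 2 ** cycles
    return total // factorial(n)  # average number of fixed graphs

print(count_graphs(5))  # 34
```

This prints 34, the known number of non-isomorphic simple graphs on 5 vertices (the sequence runs 1, 2, 4, 11, 34 for n = 1..5).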
# multiBigwigSummary

Given typically two or more bigWig files, multiBigwigSummary computes the average scores for each of the files in every genomic region. This analysis is performed for the entire genome by running the program in bins mode, or for certain user selected regions in BED-file mode. Most commonly, the default output of multiBigwigSummary (a compressed numpy array, .npz) is used by other tools such as plotCorrelation or plotPCA for visualization and diagnostic purposes.

Note that using a single bigWig file is only recommended if you want to produce a bedGraph file (i.e., with the --outRawCounts option; the default output file cannot be used by ANY deepTools program if only a single file was supplied!).

Detailed help for each sub-command is available by typing:

multiBigwigSummary bins -h
multiBigwigSummary BED-file -h

usage: multiBigwigSummary [-h] [--version] ...

## Named Arguments

--version show program's version number and exit

## commands

Possible choices: bins, BED-file

## Sub-commands:

### bins

The average score is based on equally sized bins (10 kilobases by default), which consecutively cover the entire genome. The only exception is the last bin of a chromosome, which is often smaller. The output of this mode is commonly used to assess the overall similarity of different bigWig files.

multiBigwigSummary bins -b file1.bw file2.bw -out results.npz

#### Required arguments

--bwfiles, -b List of bigWig files, separated by spaces.

--outFileName, -out File name to save the compressed matrix file (npz format) needed by the "plotHeatmap" and "plotProfile" tools.

#### Optional arguments

--labels, -l User defined labels instead of default labels from file names. Multiple labels have to be separated by spaces, e.g., --labels sample1 sample2 sample3

--chromosomesToSkip List of chromosomes that you do not want to be included. Useful to remove "random" or "extra" chr.

--binSize, -bs Size (in bases) of the windows sampled from the genome.
--distanceBetweenBins, -n By default, multiBigwigSummary considers adjacent bins of the specified --binSize. However, to reduce the computation time, a larger distance between bins can be given. Larger distances result in fewer considered bins.

--version show program's version number and exit

--region, -r Region of the genome to limit the operation to - this is useful when testing parameters to reduce the computing time. The format is chr:start:end, for example --region chr10 or --region chr10:456700:891000.

--blackListFileName, -bl A BED or GTF file containing regions that should be excluded from all analyses. Currently this works by rejecting genomic chunks that happen to overlap an entry. Consequently, for BAM files, if a read partially overlaps a blacklisted region or a fragment spans over it, then the read/fragment might still be considered. Please note that you should adjust the effective genome size, if relevant.

--numberOfProcessors, -p Number of processors to use. Type "max/2" to use half the maximum number of processors or "max" to use all available processors.

--verbose, -v Set to see processing messages.

#### Output optional options

--outRawCounts Save average scores per region for each bigWig file to a single tab-delimited file.

#### deepBlue arguments

Options used only for remote bedgraph/wig files hosted on deepBlue

--deepBlueURL For remote bedgraph/wiggle files hosted on deepBlue, this specifies the server URL. The default is "http://deepblue.mpi-inf.mpg.de/xmlrpc", which should not be changed without good reason.

--userKey For remote bedgraph/wiggle files hosted on deepBlue, this specifies the user key to use for access. The default is "anonymous_key", which suffices for public datasets. If you need access to a restricted access/private dataset, then request a key from deepBlue and specify it here.

--deepBlueTempDir If specified, temporary files from preloading datasets from deepBlue will be written here (note, this directory must exist).
If not specified, wherever temporary files would normally be written on your system is used.

--deepBlueKeepTemp If specified, temporary bigWig files from preloading deepBlue datasets are not deleted. A message will be printed noting where these files are and what sample they correspond to. These can then be used if you wish to analyse the same sample with the same regions again.

### BED-file

The user provides a BED file that contains all regions that should be considered for the analysis. A common use is to compare scores (e.g. ChIP-seq scores) between different samples over a set of pre-defined peak regions.

multiBigwigSummary -b file1.bw file2.bw -out results.npz --BED selection.bed

#### Required arguments

--bwfiles, -b List of bigWig files, separated by spaces.

--outFileName, -out File name to save the compressed matrix file (npz format) needed by the "plotHeatmap" and "plotProfile" tools.

--BED Limits the analysis to the regions specified in this file.

#### Optional arguments

--labels, -l User defined labels instead of default labels from file names. Multiple labels have to be separated by spaces, e.g., --labels sample1 sample2 sample3

--chromosomesToSkip List of chromosomes that you do not want to be included. Useful to remove "random" or "extra" chr.

--version show program's version number and exit

--region, -r Region of the genome to limit the operation to - this is useful when testing parameters to reduce the computing time. The format is chr:start:end, for example --region chr10 or --region chr10:456700:891000.

--blackListFileName, -bl A BED or GTF file containing regions that should be excluded from all analyses. Currently this works by rejecting genomic chunks that happen to overlap an entry. Consequently, for BAM files, if a read partially overlaps a blacklisted region or a fragment spans over it, then the read/fragment might still be considered. Please note that you should adjust the effective genome size, if relevant.
--numberOfProcessors, -p Number of processors to use. Type "max/2" to use half the maximum number of processors or "max" to use all available processors.

--verbose, -v Set to see processing messages.

#### Output optional options

--outRawCounts Save average scores per region for each bigWig file to a single tab-delimited file.

#### GTF/BED12 options

--metagene When either a BED12 or GTF file is used to provide regions, perform the computation on the merged exons, rather than using the genomic interval defined by the 5-prime and 3-prime most transcript bounds (i.e., columns 2 and 3 of a BED file). If a BED3 or BED6 file is used as input, then columns 2 and 3 are used as an exon.

--transcriptID When a GTF file is used to provide regions, only entries with this value as their feature (column 2) will be processed as transcripts.

--exonID When a GTF file is used to provide regions, only entries with this value as their feature (column 2) will be processed as exons. CDS would be another common value for this.

--transcript_id_designator Each region has an ID (e.g., ACTB) assigned to it, which for BED files is either column 4 (if it exists) or the interval bounds. For GTF files this is instead stored in the last column as a key:value pair (e.g., as 'transcript_id "ACTB"', for a key of transcript_id and a value of ACTB). In some cases it can be convenient to use a different identifier. To do so, set this to the desired key.

#### deepBlue arguments

Options used only for remote bedgraph/wig files hosted on deepBlue

--deepBlueURL For remote bedgraph/wiggle files hosted on deepBlue, this specifies the server URL. The default is "http://deepblue.mpi-inf.mpg.de/xmlrpc", which should not be changed without good reason.

--userKey For remote bedgraph/wiggle files hosted on deepBlue, this specifies the user key to use for access. The default is "anonymous_key", which suffices for public datasets.
If you need access to a restricted access/private dataset, then request a key from deepBlue and specify it here.

--deepBlueTempDir If specified, temporary files from preloading datasets from deepBlue will be written here (note, this directory must exist). If not specified, wherever temporary files would normally be written on your system is used.

--deepBlueKeepTemp If specified, temporary bigWig files from preloading deepBlue datasets are not deleted. A message will be printed noting where these files are and what sample they correspond to. These can then be used if you wish to analyse the same sample with the same regions again.

example usage:

multiBigwigSummary bins -b file1.bw file2.bw -out results.npz
multiBigwigSummary BED-file -b file1.bw file2.bw -out results.npz --BED selection.bed

## Example

In the following example, the average values for our test ENCODE ChIP-Seq datasets are computed for consecutive genome bins (default size: 10kb) by using the bins mode.

$ deepTools2.0/bin/multiBigwigSummary bins \
-b testFiles/H3K4Me1.bigWig testFiles/H3K4Me3.bigWig testFiles/H3K27Me3.bigWig testFiles/Input.bigWig \
--labels H3K4me1 H3K4me3 H3K27me3 input \
-out scores_per_bin.npz --outRawCounts scores_per_bin.tab

$ head scores_per_bin.tab
#'chr' 'start' 'end' 'H3K4me1' 'H3K4me3' 'H3K27me3' 'input'
19 0 10000 0.0 0.0 0.0 0.0
19 10000 20000 0.0 0.0 0.0 0.0
19 20000 30000 0.0 0.0 0.0 0.0
19 30000 40000 0.0 0.0 0.0 0.0
19 40000 50000 0.0 0.0 0.0 0.0
19 50000 60000 0.0221538461538 0.0 0.00482142857143 0.0522717391304
19 60000 70000 4.27391282051 1.625 0.634116071429 1.29124347826
19 70000 80000 13.0891675214 24.65 1.8180625 2.80073695652
19 80000 90000 1.74591965812 0.29 4.35576785714 0.92987826087

To compute the average values for a set of genes, use the BED-file mode.
$ deepTools2.0/bin/multiBigwigSummary BED-file \
--bwfiles testFiles/*bigWig \
--BED testFiles/genes.bed \
--labels H3K27me3 H3K4me1 H3K4me3 HeK9me3 input \
-out scores_per_transcript.npz --outRawCounts scores_per_transcript.tab

$ head scores_per_transcript.tab
#'chr' 'start' 'end' 'H3K27me3' 'H3K4me1' 'H3K4me3' 'HeK9me3' 'input'
19 60104 70951 0.663422099099 4.37103606574 14.9609108509 0.596631607217 1.34274297191
19 60950 70966 0.714223982699 4.54650763906 16.2336261981 0.62173674295 1.41719308888
19 62114 70944 0.747578769617 4.84009060023 18.2951302378 0.648723472352 1.51324474371
19 63820 70951 0.781816722009 5.30500631048 22.5579862572 0.682862029229 1.55490104062
19 65057 66382 0.528301886792 5.45886792453 0.523018867925 0.555471698113 1.97056603774
19 65821 66416 0.411764705882 3.0 0.636974789916 0.168067226891 1.67226890756
19 65821 70945 0.844600775761 4.79176424668 31.1346604215 0.693073728066 1.47911787666
19 66319 66492 0.774566473988 1.59537572254 0.0 0.0 0.578034682081
19 66345 71535 0.877430197151 5.49036608863 43.978805395 0.746026011561 1.43545279383

The default output of multiBigwigSummary (a compressed numpy array: *.npz) can be visualized using plotCorrelation or plotPCA. The optional output (--outRawCounts) is a simple tab-delimited file that can be used with any other program. The first three columns define the region of the genome for which the scores were summarized.

## multiBigwigSummary in Galaxy

Below is the screenshot showing how to use multiBigwigSummary on the deeptools galaxy.
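The .npz file can also be inspected directly with numpy. The key names "matrix" and "labels" below are an assumption about the archive layout (check `data.files` on your own output); the snippet writes a tiny stand-in file first so it is self-contained:

```python
import numpy as np

# Write a small stand-in for the multiBigwigSummary output so the loading
# pattern can be demonstrated without running deepTools; real files have
# one row per bin/region and one column per bigWig file.
scores = np.array([[0.0, 0.0],
                   [4.27391282051, 1.625],
                   [13.0891675214, 24.65]])
np.savez("scores_per_bin_demo.npz",
         matrix=scores, labels=np.array(["H3K4me1", "H3K4me3"]))

data = np.load("scores_per_bin_demo.npz", allow_pickle=True)
matrix = data["matrix"]                    # (n_regions, n_samples) score matrix
labels = [str(l) for l in data["labels"]]  # sample labels in column order
print(matrix.shape, labels)                # (3, 2) ['H3K4me1', 'H3K4me3']
```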
# How do I find the delta Z at the midpoint of the plane?

#### laminatedevildoll

Suppose that there is a rectangular plane in coordinates x, y, with z pointing out of the page. There are corners A, B, C, D. How do I find the delta Z at the midpoint of the plane? I know that the equation of a plane is A + Bx + Cy = D

Thank you

#### EnumaElish

Homework Helper

laminatedevildoll said: Suppose that there is a rectangular plane in coordinates x, y, with z pointing out of the page. There are corners A, B, C, D. How do I find the delta Z at the midpoint of the plane? I know that the equation of a plane is A + Bx + Cy = D Thank you

What does Delta Z stand for? Can you give a definition? Is it like a derivative? Also, the plane equation does not include z, is that correct? Why are there two separate constants, A and D? What makes me curious is that if the equation didn't include a Z term, it would have been more economical to write it as A + Bx + Cy = 0; or one might write it as Bx + Cy = F (such that F = D - A in your notation). I might be able to help if you give a little more explanation and/or redefine the problem as a line in XY coordinates (Cartesian 2-dimensional plane) instead of a plane in XYZ coordinates (Cartesian 3-dimensional space). Does this make sense? Since your plane equation does not involve the z coordinate, I am going to assume that the equivalent formula for the line does not include the y coordinate: A - Bx = D. (Again, I could write this as Bx = F, but I am giving you the benefit of the doubt.) Assuming that this is the correct equivalent formula for a line, how would you define delta Y (the equivalent of delta Z in your case) at the midpoint of this line?

Last edited:

#### laminatedevildoll

EnumaElish said: What does Delta Z stand for? Can you give a definition? Is it like a derivative? Also, the plane equation does not include z, is that correct? Why are there two separate constants, A and D?
What makes me curious is that if the equation didn't include a Z term, it would have been more economical to write it as A + Bx + Cy = 0; or one might write it as Bx + Cy = F (such that F = D - A in your notation). How would you restate this problem on the Cartesian plane, a line replacing the plane? Since your plane equation does not involve the z coordinate, I am going to assume that the equivalent formula for the line does not include the y coordinate: Bx = D. How would you define delta Y (the equivalent of delta Z in your case) at the midpoint of the line? I might be able to help if you give a little more explanation and/or redefine the problem as a line in the XY coordinates (Cartesian 2-dimensional plane) instead of a plane in the XYZ coordinates (Cartesian 3-dimensional space). Does this make sense?

Sorry, I should've been clearer. This is a 3-D problem. There is a triangle in a Cartesian plane, with sides A, B, C. Suppose I want to find z(x,y) (the midpoint of the rectangular plane with respect to x, y); I have to write the following equations...

delta z - this is the midpoint of the rectangle
z(x,y) = A + Bx + Cy
z_a(x,y) = A + Bx_a + Cy_a (delta Z with respect to side 'A', et cetera)
z_b(x,y) = A + Bx_b + Cy_b
z_c(x,y) = A + Bx_c + Cy_c

Now is it true that I have to solve the above equations and plug the results into z(x,y) to determine the midpoint of the rectangle, the delta z?

#### EnumaElish

Homework Helper

The 3 equations will give you constants A, B, C assuming you have numbers (data) for x_i, y_i, z_i for i = a, b, c (that is, delta z with respect to each side). If you take these solutions (for ex., let's say you find A = 2, B = 0.5, C = -3) and plug them into the eq. for z(x,y) then you will have a general (or generic) formula that would represent the relationship between the (x, y) coordinates and the z coordinate.

#### EnumaElish

Homework Helper

myself said: ...
If you take these solutions (for ex., let's say you find A = 2, B = 0.5, C = -3) and plug them into the eq. for z(x,y) then you will have a general (or generic) formula that would represent the relationship between the (x, y) coordinates and the z coordinate. ...

I should've said, "then you will have a general (or generic) formula that would represent the relationship between the (x, y) coordinates and the z coordinate ON THE OBJECT THAT THOSE 3 EQUATIONS ARE DESCRIBING." So if the 3 eq's describe a triangle, then the generic eq. would also describe the same triangle in generic terms.

#### laminatedevildoll

EnumaElish said: The 3 equations will give you constants A, B, C assuming you have numbers (data) for x_i, y_i, z_i for i = a, b, c (that is, delta z with respect to each side). If you take these solutions (for ex., let's say you find A = 2, B = 0.5, C = -3) and plug them into the eq. for z(x,y) then you will have a general (or generic) formula that would represent the relationship between the (x, y) coordinates and the z coordinate.

Well, if I am writing a computer program, then I don't have number constants. Instead, I have to plug in and form equations in order to find the coefficients, I guess. Is there any way to calculate the slope of each side (right, left, top and bottom) to determine delta z? Can I relate delta z to the slopes in any way? I am still experimenting a bit.

#### EnumaElish

Homework Helper

I hear you, but when it is time to run the program, will the user (running the prog.) have those inputs? Is each of these points an input to the prog.? Just trying to understand the context. Is this somehow related to the warped plane question that has been posted under a different thread?

#### laminatedevildoll

EnumaElish said: I hear you, but when it is time to run the program, will the user (running the prog.) have those inputs? Is each of these points an input to the prog.? Just trying to understand the context.

Yes...
the user will enter some coordinates.

EnumaElish said: Is this somehow related to the warped plane question that has been posted under a different thread?

Yes, I suppose so. However, the question about the warped plane is behind me now. I have moved on to much better things.

#### EnumaElish

Homework Helper

I have a 3-D triangle, and the edges are a, b, c. ... My equations are
$$\Delta Z$$ = A + Bx + Cy
$$\Delta Z_a$$ = A + Bx_a + Cy_a
$$\Delta Z_b$$ = A + Bx_b + Cy_b
$$\Delta Z_c$$ = A + Bx_c + Cy_c
In order to solve for $$\Delta Z$$, how do I use the above equations? Do I have to add them (equations 2, 3, 4) all up and substitute in A for the first equation? To find the coefficients, do I just solve for A, B, C after I know what $$\Delta Z$$ is? ...

Yes, solve for A, B, C after knowing what $$\Delta Z_i$$ is, for i = a, b, c.

1st specific eq gives you A = z_a - Bx_a - Cy_a
Write 2nd eq as z_b = z_a - Bx_a - Cy_a + Bx_b + Cy_b
z_b - z_a = B(x_b - x_a) + C(y_b - y_a)
then B = ( (z_b - z_a) - C(y_b - y_a) )/(x_b - x_a)
Write 3rd eq as z_c = z_a - Bx_a - Cy_a + Bx_c + Cy_c
z_c - z_a = B(x_c - x_a) + C(y_c - y_a)
z_c - z_a = (x_c - x_a)( (z_b - z_a) - C(y_b - y_a) )/(x_b - x_a) + C(y_c - y_a)
Solve for C. Replace C in B. Replace B and C in A. You're golden.

#### EnumaElish

Homework Helper

z_c - z_a = ((x_c - x_a)(z_b - z_a))/(x_b - x_a) + C[(y_c - y_a) - (x_c - x_a)(y_b - y_a)/(x_b - x_a)]
which yields the following solution for C:
C* = [ (z_c - z_a) - ((x_c - x_a)(z_b - z_a))/(x_b - x_a) ] / [(y_c - y_a) - (x_c - x_a)(y_b - y_a)/(x_b - x_a)]
B* = ( (z_b - z_a) - C*(y_b - y_a) )/(x_b - x_a)
A* = z_a - B*x_a - C*y_a
z = A* + B*x + C*y.

Last edited:

#### laminatedevildoll

EnumaElish said: Yes, solve for A, B, C after knowing what $$\Delta Z_i$$ is, for i = a, b, c.
1st specific eq gives you A = z_a - Bx_a - Cy_a
Write 2nd eq as z_b = z_a - Bx_a - Cy_a + Bx_b + Cy_b
z_b - z_a = B(x_b - x_a) + C(y_b - y_a)
then B = ( (z_b - z_a) - C(y_b - y_a) )/(x_b - x_a)
Write 3rd eq as z_c = z_a - Bx_a - Cy_a + Bx_c + Cy_c
z_c - z_a = B(x_c - x_a) + C(y_c - y_a)
z_c - z_a = (x_c - x_a)( (z_b - z_a) - C(y_b - y_a) )/(x_b - x_a) + C(y_c - y_a)
Solve for C. Replace C in B. Replace B and C in A. You're golden.

I see. I appreciate your help. I initially had some equations like that, except I had a 4 thrown in somewhere. It's really strange how I have forgotten the easier concepts in math. Old age, I presume.

In truth, earlier I mentioned that the warped problem was behind me, but it's still on my mind. After posting that rather naive message, I figured out that if I took the average slope of each side of the plane, the "warpedness" would flatten out. If this sounds drastically wrong, then I'd appreciate it if you said so. A simple yes or no would be fine.

Last edited:

#### EnumaElish

Homework Helper

It doesn't sound drastically wrong, but it will surely depend on at which point the average is being calculated. The average slope may depend on how "lengthy" the flat portion is relative to the warped portion (as well as how warped it is). I think the important step is to program this in its simplest conceivable form, and then see that the program works as intended and expected. You can always improve it later. So go ahead and program away. That's what I'd do in your shoes right now, to the best of my understanding and imagination.

Last edited:

#### EnumaElish

Homework Helper

l.e.d., I understand you're familiar with the concept of simulation. Here is a thought that may be useful in calculating your average flat plane as a best approx. to a warped plane. I will first describe this in terms of a warped line in 2-d. Suppose a curve looks like a flat line everywhere except at either end, where it is curved up or down.
Suppose one wishes to fit a flat line to represent a best approx. to the warped line. I could start using Excel. The first step is to simulate the flat part of the actual line (the fault line, as you noted in another post). Using the "RAND()" function I can generate 10,000 "X" points and then project them onto the y coordinate simply by y = A + Bx (you may have to experiment in Excel to come up with the "right" A and B values, but you could accomplish this by graphing y vs. x for different A and B values and then choosing the one you find most appropriate). The next step is to simulate the curved (warped) end of the fault. Here again I would generate 10,000 X values using RAND(). Then experiment with a polynomial form, such as y = a + b_1 x + b_2 x^2 + ... + b_k x^k, and choose the right parameters (a and the b's) by experimenting and graphing them in Excel. Then I'd append (splice) the set of data points for the flat part to the set of data points for the curved part. Now that I have x and y values both for the flat part and the curved part(s), I would run a regression in Excel where I would specify "y values" as the "Y variable" (left hand side, explained, dependent variable) and "x values" as the "X variable" (right hand side, explaining, independent variable). Then run the regression, and let Excel determine the best linear fit to your nonlinear fault. A sample regression line is $y=\theta_0 + \theta_1 x + \varepsilon$ where epsilon is the statistical error term of the regression. You can implement all this using any statistical package, and in fact any computer language, including Fortran. But visuals in Excel are easier to obtain. OTOH, Fortran is much more suitable for a simulation project because it's fast like nothing else. So if your ideal sample size in a simulation is 10,000,000 rather than 10,000, you should use Fortran. But there's no harm in trying it out in Excel first using 10,000, because then you'll have a feeling of control.
Finally, to apply this to 3-d, you need to generate two data vectors, x and y; then project each onto the z coordinate. E.g. for the flat part, you'll have z = A + By + Cx. For the warped parts you might have z = a + b_{10} y + b_{01} x + b_{11} xy + b_{20} y^2 + b_{02} x^2 + b_{21} y^2 x + b_{12} y x^2 + b_{22} x^2 y^2 + ... Once you determine your a and b's, you can then splice this part to the flat part and run a multiple regression with left hand side variable z and right hand side variables x and y (as in $z=\theta_0+ \theta_1 x + \theta_2 y + \varepsilon$ where epsilon is the statistical error term of the regression). If you already have data points describing a fault (e.g. obtained from geo. surveys, etc.) then you can skip the data generation step and the trial-and-error process to visually determine the best a and b's; you can directly go and run your linear regression to determine the best linear (flat) approximation to a nonlinear object (warped plane or warped line). I hope this is useful.
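Both computations discussed in this thread can be sketched with numpy: solving exactly for A, B, C from three known points, and fitting the best flat plane to a warped surface by least squares (the 3-d analogue of the Excel regression described above). The specific numbers are made up for illustration:

```python
import numpy as np

# --- Exact plane z = A + B*x + C*y through three points (made-up data) ---
pts = np.array([[0.0, 0.0, 1.0],    # (x_a, y_a, z_a)
                [1.0, 0.0, 3.0],    # (x_b, y_b, z_b)
                [0.0, 1.0, 2.0]])   # (x_c, y_c, z_c)
M = np.column_stack([np.ones(3), pts[:, 0], pts[:, 1]])
A, B, C = np.linalg.solve(M, pts[:, 2])   # same result the hand derivation gives
z_mid = A + B * 0.5 + C * 0.5             # z at any (x, y), e.g. (0.5, 0.5)
print(A, B, C, z_mid)                     # approximately 1.0 2.0 1.0 2.5

# --- Best flat plane z ~ t0 + t1*x + t2*y for a warped surface (regression) ---
rng = np.random.default_rng(0)
x, y = rng.random(2000), rng.random(2000)
z = 2.0 + 0.5 * x - 3.0 * y + 0.1 * x * y      # mildly "warped" sample surface
X = np.column_stack([np.ones_like(x), x, y])
theta, *_ = np.linalg.lstsq(X, z, rcond=None)  # least-squares flat approximation
```

`theta` plays the role of $(\theta_0, \theta_1, \theta_2)$ in the regression notation; the small xy warp shifts the fitted slopes slightly away from 0.5 and -3.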
Enter the torque and displacement of an engine into the calculator to determine the brake mean effective pressure (BMEP).

## BMEP Formula

The following formula is used to calculate brake mean effective pressure:

BMEP = 150.8 * T / D

• Where BMEP is the brake mean effective pressure (psi)
• T is the torque (lb-ft)
• D is the displacement (CID – cubic inch displacement)

To calculate the brake mean effective pressure, divide the torque by the displacement, then multiply the result by 150.8.

## BMEP Definition

BMEP is an acronym that stands for brake mean effective pressure. The brake mean effective pressure is the constant pressure that, if applied to the pistons of an engine throughout the entire power stroke, would produce the engine's measured (brake) power output.

## Example Problem

How to calculate BMEP?

1. First, determine the total torque the motor can output. For this example, the motor being analyzed can output a total of 55 pound-feet (lb-ft) of torque.
2. Next, determine the displacement of the engine. In this case, the engine displacement is measured to be 10 cubic inches (10 in^3).
3. Finally, calculate the brake mean effective pressure. Using the formula above, the BMEP is found to be:

BMEP = 150.8 * T / D
BMEP = 150.8 * 55 / 10
BMEP = 829.4 psi
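The formula translates directly into a one-line function. Note the constant 150.8 is 4π × 12, i.e. the four-stroke cycle factor (4π) combined with the lb-ft to lb-in torque conversion, so this applies to four-stroke engines:

```python
def bmep_psi(torque_lb_ft: float, displacement_cid: float) -> float:
    """Brake mean effective pressure of a four-stroke engine, in psi.

    150.8 = 4 * pi * 12: the four-stroke cycle factor times the
    conversion from lb-ft of torque to lb-in.
    """
    return 150.8 * torque_lb_ft / displacement_cid

print(round(bmep_psi(55, 10), 1))  # 829.4, matching the worked example
```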
{}
# Are free monads also zippily applicative?

I think I've come up with a "zippy" `Applicative` instance for `Free`.

```
data FreeMonad f a = Free (f (FreeMonad f a))
                   | Return a

instance Functor f => Functor (FreeMonad f) where
  fmap f (Return x) = Return (f x)
  fmap f (Free xs)  = Free (fmap (fmap f) xs)

instance Applicative f => Applicative (FreeMonad f) where
  pure = Return
  Return f <*> xs       = fmap f xs
  fs       <*> Return x = fmap ($x) fs
  Free fs  <*> Free xs  = Free $ liftA2 (<*>) fs xs
```

It's sort of a zip-longest strategy. For example, using `data Pair r = Pair r r` as the functor (so `FreeMonad Pair` is an externally labelled binary tree):

```
+---+---+       +---+---+        +-----+-----+
|       |       |       |        |           |
+--+--+   h <*> x   +--+--+  --> +--+--+   +--+--+
|     |             |     |      |     |   |     |
f     g             y     z      f x  g x  h y  h z
```

I haven't seen anyone mention this instance before. Does it break any `Applicative` laws? (It doesn't agree with the `Monad` instance of course, which is "substitutey" rather than "zippy".)

From the definition of `Applicative`: If `f` is also a `Monad`, it should satisfy

• `pure` = `return`
• `(<*>)` = `ap`
• `(*>)` = `(>>)`

So this implementation would break the applicative laws that say it must agree with the `Monad` instance. That said, there's no reason you couldn't have a newtype wrapper for `FreeMonad` that didn't have a monad instance, but did have the above applicative instance:

```
newtype Zip f a = Zip { runZip :: FreeMonad f a }
  deriving Functor

instance Applicative f => Applicative (Zip f) where
  -- ...
```
{}
# Hamilton's equations for a simple pendulum

I don't get how to use Hamilton's equations in mechanics. For example, let's take the simple pendulum with $$H=\frac{p^2}{2mR^2}+mgR(1-\cos\theta)$$ Now Hamilton's equations will be: $$\dot p=-mgR\sin\theta$$ $$\dot\theta=\frac{p}{mR^2}$$ I know one of the points of the Hamiltonian formalism is to get first-order differential equations instead of the second-order ones that the Lagrangian formalism gives you, but how can I proceed from here without just differentiating the $\dot\theta$ equation again with respect to time and substituting $\dot p$, which gives the same equation that I get with the Lagrangian formulation? Or is that the way to do it? And how could I get the path of the system in phase space with those equations? - Generally both formulations (Lagrangian and Hamiltonian) are equivalent, but in your case, if $\theta$ is small, you have a simplified equation for $p$ and you can use a solution ansatz like $e^{i\omega t}$ for both $p$ and $\theta$. To draw a path in the phase space, you have to solve the equations and/or manage to express $p(\theta)$ or $\theta(p)$. Not obligatorily: you can solve $\dot{p}\propto \theta$ and $\dot{\theta}\propto p$ by the ansatz. – Vladimir Kalitvianski Jan 22 at 16:48 To add to what Vladimir said, if you consider a system with $n$ generalized coordinates (in this case $n=1$ since your system is described by the coordinate $\theta$), then you will obtain $2n$ first order differential equations, $n$ for the coordinates and $n$ for their corresponding canonical momenta. To get the path in phase space, you can do what Vladimir suggests; this will allow you to obtain $\theta(t)$ and $p(t)$, and then you simply plot these functions as a parametric curve on the $\theta$-$p$ plane. Cheers! – joshphysics Jan 22 at 16:52
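Another route to the phase-space path is to step the two first-order equations numerically, without ever forming the second-order equation. This is a sketch with arbitrary values for $m$, $R$ and the initial angle, using a simple symplectic Euler integrator:

```python
import numpy as np

# Integrate Hamilton's equations for the pendulum and record the
# phase-space path (theta, p).  m, R and theta(0) are illustrative.
m, R, g = 1.0, 1.0, 9.81
dt, steps = 1e-3, 10_000          # 10 seconds of simulated time

theta, p = 0.3, 0.0               # start displaced, at rest
path = []
for _ in range(steps):
    # Semi-implicit (symplectic) Euler on
    #   p_dot     = -m g R sin(theta)
    #   theta_dot =  p / (m R^2)
    p += -m * g * R * np.sin(theta) * dt
    theta += p / (m * R**2) * dt
    path.append((theta, p))
path = np.array(path)
# Plotting path[:, 0] against path[:, 1] (e.g. with matplotlib) shows
# the closed orbit in the (theta, p) plane; H is conserved along it.
```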
{}
Median (of a triangle) A straight line (or the segment of it contained in the triangle) which joins a vertex of the triangle with the midpoint of the opposite side. The three medians of a triangle intersect at one point, called the centre of gravity, the centroid or the barycentre of the triangle. This point divides each median into two parts in the ratio $2:1$, the longer part being the one adjacent to the vertex. The centroid lies on the Euler line.
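The $2:1$ division can be checked numerically for an arbitrary sample triangle:

```python
import math

# Sample triangle; any non-degenerate vertices would do.
A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 6.0)

centroid = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
mid_BC = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)  # midpoint opposite A

d_vertex = math.dist(A, centroid)    # vertex to centroid
d_mid = math.dist(centroid, mid_BC)  # centroid to midpoint
print(d_vertex / d_mid)              # 2 : 1 ratio
```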
{}
Prototyping of Superhydrophobic Surfaces from Structure-Tunable Micropillar Arrays Using Visible Light Photocuring A new approach is reported to fabricate micropillar arrays on transparent surfaces by employing the light-induced self-writing technique. A periodic array of microscale optical beams is transmitted through a thin film of photo-crosslinking acrylate resin. Each beam undergoes self-lensing associated with photopolymerization-induced changes in the refractive index of the medium, which counters the beam's natural tendency to diverge over space. As a result, a microscale pillar grows along each beam's propagation path. Concurrent, parallel self-writing of micropillars leads to the prototyping of micropillar-based arrays, with the capability to precisely vary the pillar diameter and inter-spacing. The arrays are spray-coated with a thin layer of polytetrafluoroethylene (PTFE) nanoparticles to create large-area superhydrophobic surfaces with water contact angles greater than 150° and low contact angle hysteresis. High transparency is achieved over the entire range of micropillar arrays explored. The arrays are also mechanically durable and robust against abrasion. This is a scalable, straightforward approach toward structure-tunable micropillar arrays for functional surfaces and anti-wetting applications. NSF-PAR ID: 10094182. Page Range or eLocation-ID: 1801150. ISSN: 1438-1656. Hemiwicking is the phenomenon whereby a liquid wets a textured surface beyond its intrinsic wetting length due to capillary action and imbibition. In this work, we derive a simple analytical model for hemiwicking in micropillar arrays.
The model is based on the combined effects of capillary action, dictated by interfacial and intermolecular pressure gradients within the curved liquid meniscus, and fluid drag from the pillars at ultra-low Reynolds numbers $(10^{-7} \lesssim \mathrm{Re} \lesssim 10^{-3})$. Fluid drag is conceptualized via a critical Reynolds number $\mathrm{Re} = \frac{v_0 x_0}{\nu}$, where $v_0$ corresponds to the maximum wetting speed on a flat, dry surface and $x_0$ is the extension length of the liquid meniscus that drives the bulk fluid toward the adsorbed thin-film region. The model is validated with wicking experiments on different hemiwicking surfaces in conjunction with $v_0$ and $x_0$ measurements using water $(v_0 \approx 2\ \mathrm{m/s},\ 25\ \mu\mathrm{m} \lesssim x_0 \lesssim 28\ \mu\mathrm{m})$, viscous FC-70 $(v_0 \approx 0.3\ \mathrm{m/s},\ 18.6\ \mu\mathrm{m} \lesssim x_0 \lesssim 38.6\ \mu\mathrm{m})$, and lower-viscosity ethanol $(v_0 \approx 1.2\ \mathrm{m/s},\ 11.8\ \mu\mathrm{m} \lesssim x_0 \lesssim 33.3\ \mu\mathrm{m})$.
{}
# Cylindrical radiators on spacecraft?

Having your radiators tactically protected by the armor shape of your space warship is nice, but having regular one- or double-sided radiators is still a vulnerability. Other concepts such as liquid droplet and electrostatic radiators seem to be too complex and require much more power. I was thinking of radiators similar to the graphite spikes seen on the fusion torch cages in Attack Vector: Tactical. They would stick out of the body of the warship at an angle that allows them to be blocked from oncoming fire by the frontal armor. They would have a radiating surface wrapping around the entire circumference, with working-fluid pipes running through the interior. How feasible is this? What are your ideas for their design (approximates for length, diameter, and materials)? How would I calculate how much heat they would dissipate? Assume it is about 1-2 GW of waste heat from a 200 MW nuclear reactor. Strategically drawn red circle pointing out the radiator spikes on a warship from AV:T • Earthly power plants have an overall yield of about 30%. Dumping 1 GW out of 1.2 GW is about half that yield. – L.Dutch Apr 18, 2020 at 7:01 • See Thermal radiation/Radiative power. Apply a scientific calculator to compute what area you need based on the temperature. Apr 18, 2020 at 8:50 • Just FYI, the spikes on that design are 'secondary' radiators meant to get rid of a minor amount of waste heat involved in the magnetic containment of the fusion drive. The main radiators are a conventional type, retracted in combat. Here's the same class, with radiators extended: projectrho.com/public_html/rocket/images/basicdesign/… Apr 18, 2020 at 20:34 The equation for radiant power is $$P = \epsilon \sigma A T^4$$ Graphite starts to sublimate at 3,700 kelvin. Let's assume there's no factor of safety, but a typical one for aerospace is 1.5, which would bring the design operating temperature down to about 2,500 kelvin.
Unfortunately, as others pointed out, your heat pump is the limiting factor, at a much lower radiator temperature of 600 kelvin. $$\epsilon$$ is our emissivity, from 0 to 1. Let's assume it's an ideal radiator at 1. $$\sigma$$ is a constant, $$5.67 \times 10^{-8}$$. Assuming you had some futuristic better-than-perfect heat pump, and could utilize the radiators fully to their material limit, to radiate 2 GW you need 188 square meters of radiator at failure (the rods start vaporizing) or 903 square meters at the factor of safety. More realistically, with a maximum temperature of 600 kelvin at the radiators, you need roughly 270 thousand square meters of radiator to radiate the same 2 gigawatts. For those rods to be effective, they can't be radiating back into your hull. So, about half the radiating arc of cylinders shielded from fire by the front of the hull is unavailable. You also don't want the hive pictured, because the radiators would be interfering with each other. If you have two sets of 4 offset from one another vertically down the hull, they'd each need to be 225 square meters. Easily doable with 100 meter spikes about 2 meters in diameter. For a more realistic forest with a 600 K limit at the radiator, the same configuration would need spikes hundreds of meters in diameter. • Ah, but you cannot bring the temperature of the coolant anywhere near 2,500 K. You are pumping heat uphill; even if you use an ideal heat pump you still want a coefficient of performance greater than 1, otherwise you expend more energy pumping it than you pump into the coolant. Assuming you want to keep the interior of the spacecraft at around 300 K, you won't be able to increase the temperature of the coolant above 600 K or some 330° C. OK, you are rejecting the heat of the cool side of a thermal power plant, say at 100° C; but still, you'll be limited to maybe 700 K or 430° C. Apr 18, 2020 at 17:47 • @AlexP Didn't think about that.
My assumptions: Graphite has an emissivity of 0.70-0.98, so I'll assume it is around 0.90 and the radiating surface is at 3000 K. In another comment, I actually said the radiating area should be 188 m² at a 2-meter diameter and 20-meter length. Plugging that into P = εσAT⁴, it seems that this would radiate 777 MW. Get two of those on top and bottom, and you get 1554 MW or 1.55 GW (minus some if they are angled). Apr 18, 2020 at 18:04 • @zertofi: Unfortunately, one cannot escape the tyranny of Monsieur Sadi Carnot. There is no way to bring the coolant to such a high temperature. (If the coefficient of performance of the heat pump drops below 1, you expend more than 1 joule for each joule of heat you pump into the coolant; and you cannot do this, because the difference will then accumulate as heat, defeating the purpose of the cooling system.) Apr 18, 2020 at 18:55 • @AlexP So am I better off using some type of droplet or standard radiator? Apr 18, 2020 at 19:07 • @b.Lorenz Yes, of course, the habitat needs radiators; even the high-power electronics need mini radiators (such as the laser generator). The cylindrical radiators would be for reactor use only. Apr 19, 2020 at 15:59 Spikes are terrible radiators. I am perplexed as to how those spikes could serve as heat radiators. Spikes have a small surface area. They look more like static dischargers; I could see how accumulated static charge could be problematic for a spacecraft, but I don't think static dischargers would work without an atmosphere. If you need radiators and are in a situation where they might get torn up or blasted off, make them dispensable. For radiating away heat there are two key things: surface area (more = more surface to radiate from) and reflectivity (if you absorb incoming rays, that counters some of your outgoing rays). Metal foil is a fine choice for this, and gold foil can be made very thin. I envision the radiator as something like mylar - plastic-backed metal foil.
It can be rolled out in a long banner with the hot end inside and cold end outside. If things are getting violent, roll it back. If holes get shot in it, it still works. If some gets torn off, oh well. There is a lot left on the roll. You can go get the lost bits later and tape them back together. A spacecraft trailing a long golden banner might not look as badass as one that looks like a medieval weapon. But you could put ads on the banner. Maybe ads for tough looking products, like personal injury lawyers. • I never thought those spike radiators would be great, especially looking at their placement on the cage (the efficiency of them would be so low because of the interreflection!) What I was planning was a 2-meter diameter cylinder that is 20 meters in length. The radiating surface would have an area of about 188 m². Apr 18, 2020 at 17:24
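The Stefan–Boltzmann arithmetic in the answers above can be checked with a short script. This recomputes the required one-sided radiator area directly from $P = \epsilon \sigma A T^4$, with the ideal emissivity of 1 assumed in the main answer:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(power_w: float, temp_k: float,
                  emissivity: float = 1.0) -> float:
    """Ideal one-sided area needed to radiate power_w at temp_k."""
    return power_w / (emissivity * SIGMA * temp_k**4)

P = 2e9  # 2 GW of waste heat, as in the question
for T in (3700, 2500, 600):  # failure limit, safety factor, heat-pump limit
    print(f"{T} K: {radiator_area(P, T):,.0f} m^2")
```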
{}
### Exploring and Graphing Univariate Data

AP Statistics covers the same core content as a college-level statistics course, just in somewhat less depth. It consists of four main parts. The first part concerns how data are obtained; the main methods include censuses, sample surveys, observational studies, and designed experiments.

Related areas of statistics include:

• Statistical Inference
• Statistical Computing
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science

## Describing Distributions

• The organization of data into graphical displays is essential to understanding statistics. This chapter discusses how to describe distributions and various types of graphs used for organizing univariate data. The types of graphs include modified boxplots, histograms, stem-and-leaf plots, bar graphs, dotplots, and pie charts. Students in AP Statistics should have a clear understanding of what a variable is and the types of variables that are encountered.
• A variable is a characteristic of an individual and can take on different values for different individuals. Two types of variables are discussed in this chapter: categorical variables and quantitative variables. Categorical variable: places an individual into a category or group. Quantitative variable: takes on a numerical value. Variables may take on different values; the pattern of variation of a variable is its distribution. The distribution of a variable tells us what values the variable takes and how often it takes each value.
• When describing distributions, it's important to describe what you see in the graph: address the shape, center, and spread of the distribution in the context of the problem.
• When describing shape, focus on the main features of the distribution. Is the graph approximately symmetric, skewed left, or skewed right?
## Standard Deviation

$$s=\sqrt{2.5} \approx 1.5811$$

It's probably more important to understand the concept of what standard deviation means than to be able to calculate it by hand. Our trusty calculators or computer software can handle the calculation for us. Understanding what the number means is what's most important. It's worth noting that most calculators will give two values for standard deviation. One is used when dealing with a population, and the other is used when dealing with a sample. The TI-83/84 calculator shows the population standard deviation as $\sigma_x$ and the sample standard deviation as $s_x$. A population is all individuals of interest, and a sample is just part of a population. We'll discuss the concept of population and different types of samples in later chapters.

• It's also important to address any outliers that might be present in the distribution. Outliers are values that fall outside the overall pattern of the distribution. It is important to be able to identify potential outliers in a distribution, but we also want to determine whether or not a value is mathematically an outlier.
• Example 4: Consider Data Set B, which consists of test scores from a college statistics course: 98, 36, 67, 85, 79, 100, 88, 85, 60, 69, 93, 58, 65, 89, 88, 71, 79, 85, 73, 87, 81, 77, 76, 75, 76, 73

1. Arrange the data in ascending order: 36, 58, 60, 65, 67, 69, 71, 73, 73, 75, 76, 76, 77, 79, 79, 81, 85, 85, 85, 87, 88, 88, 89, 93, 98, 100. Find the median (average of the two middle numbers): 78.
2. Find the median of the first half of the numbers. This is the first quartile, $Q_1$: 71.
3. Find the median of the second half of the numbers, the third quartile, $Q_3$: 87.
4. Find the interquartile range (IQR): $IQR=Q_3-Q_1=87-71=16$.
5. Multiply the IQR by 1.5: $16 \times 1.5=24$.
6. Add this number to $Q_3$ and subtract this number from $Q_1$: $87+24=111$ and $71-24=47$.
7.
Any number smaller than 47 or larger than 111 would be considered an outlier. Therefore, 36 is the only outlier in this set.

## Modified Boxplots

Modified boxplots are extremely useful in AP Statistics. A modified boxplot is ideal when you are interested in checking a distribution for outliers or skewness, which will be essential in later chapters. To construct a modified boxplot, we use the five-number summary. The box of the modified boxplot consists of $Q_1$, $M$, and $Q_3$. Outliers are marked as separate points. The tails of the plot consist of either the smallest and largest numbers or the smallest and largest numbers that are not considered outliers by the mathematical criterion discussed earlier. Outliers appear as separate dots or asterisks. Modified boxplots can be constructed with ease using a graphing calculator or computer software. Be sure to use the modified boxplot instead of the regular boxplot, since we are usually interested in knowing whether outliers are present. Side-by-side boxplots can be used to make visual comparisons between two or more distributions. Figure 1.4 displays the test scores from Data Set B. Notice that the test score of 36 (which is an outlier) is represented using a separate point. Histograms are also useful for displaying distributions when the variable of interest is numeric (Figure 1.5). When the variable is categorical, the graph is called a bar chart or bar graph. The bars of the histogram should be touching and should be of equal width. The heights of the bars represent the frequency or relative frequency. As with modified boxplots, histograms can be easily constructed using the TI-83/84 graphing calculator or computer software. With some minor adjustments to the window of the graphing calculator, we can easily transfer the histogram from calculator to paper. We often use the ZoomStat function of the TI-83/84 graphing calculator to create histograms.
ZoomStat will fit the data to the screen of the graphing calculator and often creates bars with non-integer dimensions. In order to create histograms that have integer dimensions, we must make adjustments to the window of the graphing calculator. Once these adjustments have been made, we can then easily copy the calculator histogram onto paper. Histograms are especially useful in finding the shape of a distribution. To find the center of the histogram, as measured by the median, find the line that would divide the histogram into two equal parts. To find the mean of the distribution, locate the balancing point of the histogram.
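The outlier procedure from Example 4 can be sketched in a few lines of Python, mirroring the by-hand steps rather than calling a statistics library:

```python
# Five-number summary and 1.5*IQR outlier fences for Data Set B.
scores = [98, 36, 67, 85, 79, 100, 88, 85, 60, 69, 93, 58, 65, 89, 88,
          71, 79, 85, 73, 87, 81, 77, 76, 75, 76, 73]

def median(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

data = sorted(scores)
half = len(data) // 2
m = median(data)              # 78
q1 = median(data[:half])      # 71 (median of the lower half)
q3 = median(data[half:])      # 87 (median of the upper half)
iqr = q3 - q1                 # 16
low_fence = q1 - 1.5 * iqr    # 47
high_fence = q3 + 1.5 * iqr   # 111
outliers = [x for x in data if x < low_fence or x > high_fence]
print(outliers)  # [36]
```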
{}
# Three Dimensional Geometry: Learn Terms, Concepts of Lines, Planes and Angles using Formulas

Any point represented in three-dimensional space has an x-coordinate, a y-coordinate and a z-coordinate. For example, if A is a point in a three-dimensional space, then A = (x, y, z). Calculation of the distance between points or lines and the measurement of angles play a crucial role in mathematics. Through this article, discover the concepts of planes, point-to-point and point-to-line distance calculation, as well as angle measurement and several related ideas. So let's get started.

## Three Dimensional Geometry

In 3D geometry, a coordinate system relates to the process of identifying the position or location of a point in space. Everything in our environment has a three-dimensional shape; even a horizontal piece of paper has some thickness if we look at it sideways.

## Concepts related to Three Dimensional Geometry

So, let's start to learn 3D concepts by understanding lines, planes, and angles.

### Direction Cosines

If a line forms angles α, β, γ in the positive direction with the X-axis, Y-axis and Z-axis respectively, then cos α, cos β, and cos γ are called its direction cosines. The direction cosines are commonly denoted as l, m, n; i.e. l = cos α, m = cos β and n = cos γ. The angles α, β, γ are known as direction angles.
Some important points on direction cosines are mentioned below:

• If l, m, n represent the direction cosines of a line then $$l^2+m^2+n^2=1$$
• Direction cosines of the X-axis are given as 1, 0, 0
• Direction cosines of the Y-axis are given as 0, 1, 0
• Direction cosines of the Z-axis are given as 0, 0, 1
• If a, b and c represent the direction ratios of a line then: $$l=\frac{a}{\sqrt{a^2+b^2+c^2}},\ m=\frac{b}{\sqrt{a^2+b^2+c^2}}\ \text{and}\ n=\frac{c}{\sqrt{a^2+b^2+c^2}}$$

### Direction Ratios

Three numbers a, b, c proportional to the direction cosines l, m, n of a line are known as the direction ratios of the line. Thus a, b, c are the direction ratios of a line provided: $$\frac{l}{a}=\frac{m}{b}=\frac{n}{c}$$

### Distance Formula

The distance between two points A (x1, y1, z1) and B (x2, y2, z2) in three-dimensional space is given by: $$AB=\sqrt{\left(x_2-x_1\right)^2+\left(y_2-y_1\right)^2+\left(z_2-z_1\right)^2}$$

### Section Formula

If A (x1, y1, z1) and C (x2, y2, z2) are two points in space, let B be a point on the line through A and C such that:

• It divides AC internally in the ratio m:n. Then the coordinates of B are: $$\left(\frac{mx_2+nx_1}{m+n},\ \frac{my_2+ny_1}{m+n},\ \frac{mz_2+nz_1}{m+n}\right)$$
• It divides AC externally in the ratio m:n (m ≠ n). Then the coordinates of B are: $$\left(\frac{mx_2-nx_1}{m-n},\ \frac{my_2-ny_1}{m-n},\ \frac{mz_2-nz_1}{m-n}\right)$$

Next, let us proceed to the topic of lines, covering distance and angle measurement.
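As a quick check, the distance and internal-section formulas above can be coded directly (the sample points are arbitrary):

```python
import math

def distance(p, q):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

def section_point(a, c, m, n):
    """Point dividing segment AC internally in the ratio m:n."""
    return tuple((m * ci + n * ai) / (m + n) for ai, ci in zip(a, c))

A, C = (1, 2, 3), (4, 8, 6)
B = section_point(A, C, 2, 1)   # divides AC in ratio 2:1 from A
print(distance(A, C))           # sqrt(9 + 36 + 9) = sqrt(54)
print(B)                        # (3.0, 6.0, 5.0)
```

A sanity check: since B divides AC in the ratio 2:1, the distance AB should be exactly twice BC.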
## Lines in 3D Geometry

In 3D geometry, a line is a straight one-dimensional figure having no thickness, extending endlessly in both directions, and a line segment is a part of a line with two determined endpoints. Some important points regarding lines:

• Two lines with direction cosines l1, m1, n1 and l2, m2, n2 are perpendicular if and only if l1 ⋅ l2 + m1 ⋅ m2 + n1 ⋅ n2 = 0
• The same two lines are parallel if and only if: $$\frac{l_1}{l_2}=\frac{m_1}{m_2}=\frac{n_1}{n_2}$$
• Two lines with direction ratios a1, b1, c1 and a2, b2, c2 are perpendicular if and only if a1 ⋅ a2 + b1 ⋅ b2 + c1 ⋅ c2 = 0
• The same two lines are parallel if and only if: $$\frac{a_1}{a_2}=\frac{b_1}{b_2}=\frac{c_1}{c_2}$$

### Equations Regarding Lines

• If a, b, c are the direction ratios of a line passing through the point (x1, y1, z1), then the equation of the line is given by: $$\frac{x-x_1}{a}=\frac{y-y_1}{b}=\frac{z-z_1}{c}$$
• If l, m, n are the direction cosines of a line passing through the point (x1, y1, z1), then the equation of the line is given by: $$\frac{x-x_1}{l}=\frac{y-y_1}{m}=\frac{z-z_1}{n}$$
• The equation of a line passing through the two points (x1, y1, z1) and (x2, y2, z2) is given by: $$\frac{x-x_1}{x_2-x_1}=\frac{y-y_1}{y_2-y_1}=\frac{z-z_1}{z_2-z_1}$$
• The equation of a line passing through the point (x1, y1, z1) and parallel to a line having direction ratios a, b, c is given by: $$\frac{x-x_1}{a}=\frac{y-y_1}{b}=\frac{z-z_1}{c}$$

### Angle Between Two Lines

The angle θ between two lines whose direction cosines are l1, m1, n1 and l2, m2, n2 is given by: cos θ = l1 ⋅ l2 + m1 ⋅ m2 + n1 ⋅ n2. The angle θ between two lines whose direction ratios are proportional to a1, b1, c1 and a2, b2, c2 respectively is given by: $$\cos\theta=
\left|\frac{\left(a_1a_2+b_1b_2+c_1c_2\right)}{\sqrt{a_1^2+b_1^2+c_1^2}\sqrt{a_2^2+b_2^2+c_2^2}}\right|$$

Moving on to the next important concept, planes. We will cover the equation of a plane in different forms, as well as distance measurement from a point.

## Plane in 3D Geometry

Starting with the equation of a plane in different forms:

• The general equation ax + by + cz + d = 0 represents a plane, where a, b and c are constants, not all zero.
• The equation of a plane passing through the origin is given by ax + by + cz = 0.
• The equation of a plane passing through the point (x1, y1, z1), with normal having direction ratios a, b, c, is given by: $$a\left(x-x_1\right)+b\left(y-y_1\right)+c\left(z-z_1\right)=0$$
• If l, m, n are the direction cosines of the normal to the plane and p is the perpendicular distance of the plane from the origin, then the equation of the plane is given by lx + my + nz = p.
• If a, b and c are the intercepts on the X-, Y- and Z-axes respectively, then the equation of the plane is: $$\frac{x}{a}+\frac{y}{b}+\frac{z}{c}=1$$
• Equation of the XY-plane: z = 0.
• Equation of the YZ-plane: x = 0.
• Equation of the XZ-plane: y = 0.
• The equation of a plane parallel to the YZ-plane at a distance p is x = p.
• The equation of a plane parallel to the ZX-plane at a distance p is y = p.
• The equation of a plane parallel to the XY-plane at a distance p is z = p.
• The equation of a plane passing through the three points A(x1, y1, z1), B (x2, y2, z2), and C (x3, y3, z3) is given by: $$\left|\begin{matrix}x-x_1&y-y_1&z-z_1\\ x_2-x_1&y_2-y_1&z_2-z_1\\ x_3-x_1&y_3-y_1&z_3-z_1\end{matrix}\right|=0$$
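The angle formula for direction ratios can be checked numerically; the direction ratios below are chosen arbitrarily:

```python
import math

def angle_between_lines(dr1, dr2):
    """Angle (radians) between lines with direction ratios dr1, dr2."""
    dot = sum(a * b for a, b in zip(dr1, dr2))
    n1 = math.sqrt(sum(a * a for a in dr1))
    n2 = math.sqrt(sum(b * b for b in dr2))
    # The absolute value gives the acute angle between the lines.
    return math.acos(abs(dot) / (n1 * n2))

# Perpendicular example: (1, 0, 0) and (0, 1, 0) -> 90 degrees.
print(math.degrees(angle_between_lines((1, 0, 0), (0, 1, 0))))
# (1, 1, 0) and (1, 0, 0) -> 45 degrees.
print(math.degrees(angle_between_lines((1, 1, 0), (1, 0, 0))))
```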
### Distance of a Plane from a Point

The perpendicular distance of the plane ax + by + cz + d = 0 from a point P (x1, y1, z1) is given by: $$\left|\frac{ax_1+by_1+cz_1+d}{\sqrt{a^2+b^2+c^2}}\right|$$

### Distance Between Two Parallel Planes

The distance between two parallel planes ax + by + cz + d1 = 0 and ax + by + cz + d2 = 0 is given by: $$\left|\frac{d_1-d_2}{\sqrt{a^2+b^2+c^2}}\right|$$

### Intersection of Two Planes

If a1x + b1y + c1z + d1 = 0 and a2x + b2y + c2z + d2 = 0 represent two different planes, then the equation of a plane passing through their intersection is given by: (a1x + b1y + c1z + d1) + λ (a2x + b2y + c2z + d2) = 0, where λ is a scalar.

## Three Dimensional Geometry FAQs

Q.1 What are the coordinates involved in three-dimensional geometry?
Ans.1 There are 3 coordinates involved in three-dimensional geometry, namely the x-coordinate, y-coordinate and z-coordinate.
Q.2 What are direction cosines?
Ans.2 If a line forms angles α, β, γ in the positive direction with the X-axis, Y-axis and Z-axis respectively, then cos α, cos β, and cos γ are called its direction cosines. The direction cosines are commonly denoted as l, m, n.
Q.3 What are direction ratios?
Ans.3 Three numbers a, b, c proportional to the direction cosines l, m, n of a line are known as the direction ratios of the line.
Q.4 What are the various types of lines?
Ans.4 The types of lines comprise horizontal lines, vertical lines, parallel lines (two straight lines that never meet or intersect), perpendicular lines (two lines that intersect each other at an angle of 90 degrees), skew lines and coplanar lines.
Q.5 What is the general equation of a plane?
Ans.5 The general equation ax + by + cz + d = 0 represents a plane, where a, b and c are constants, not all zero.
Q.6 What is the equation of a plane passing through the origin?
Ans.6 The equation of a plane passing through the origin is given by ax + by + cz = 0.
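The point-to-plane distance formula from the section above can be sketched as follows (the plane and point are chosen to give a clean answer):

```python
import math

def point_plane_distance(a, b, c, d, point):
    """Perpendicular distance from point to the plane ax+by+cz+d = 0."""
    x1, y1, z1 = point
    return abs(a * x1 + b * y1 + c * z1 + d) / math.sqrt(a**2 + b**2 + c**2)

# Plane 2x + 3y - 6z + 7 = 0 and point (1, 1, 1):
# |2 + 3 - 6 + 7| / sqrt(4 + 9 + 36) = 6 / 7
print(point_plane_distance(2, 3, -6, 7, (1, 1, 1)))
```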
# In which cases is the categorical cross-entropy better than the mean squared error?

In my code, I usually use the mean squared error (MSE), but the TensorFlow tutorials always use the categorical cross-entropy (CCE). Is the CCE loss function better than MSE? Or is it better only in certain cases?

As a rule of thumb, mean squared error (MSE) is more appropriate for regression problems, that is, problems where the output is a numerical value (i.e. a floating-point number or, in general, a real number). However, in principle, you can use MSE for classification problems too (even though that may not be a good idea). MSE can be preceded by the sigmoid function, which outputs a number $$p \in [0, 1]$$ that can be interpreted as the probability of the input belonging to one of the classes, so the probability of the input belonging to the other class is $$1 - p$$.
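To make the difference concrete, here is a small sketch in plain Python (no TensorFlow) comparing the two losses for a binary prediction when the true class is 1. Note how cross-entropy penalizes a confidently wrong prediction far more sharply than MSE, whose penalty is bounded by 1:

```python
import math

def mse(y_true, p):
    # Squared error between the true label and the predicted probability
    return (y_true - p) ** 2

def binary_cross_entropy(y_true, p):
    # -[y*log(p) + (1-y)*log(1-p)]
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# Compare a slightly wrong vs. a confidently wrong prediction for class 1
for p in (0.4, 0.01):
    print(f"p={p}: MSE={mse(1, p):.4f}, cross-entropy={binary_cross_entropy(1, p):.4f}")
```

For p = 0.01 the MSE is about 0.98 (it can never exceed 1), while the cross-entropy is about 4.6 and grows without bound as p approaches 0, which gives the classifier a much stronger gradient signal to correct confident mistakes.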
We’ve looked at some tips for writing cleaner code, now let’s take a slightly higher-level view of our application and discuss how we can keep our code flexible and more receptive to change. This is not intended as an authoritative or exhaustive list by any means, but rather a collection of small, actionable tips that you can use to immediately improve your codebase.

## Respect the application hierarchy

Let’s first discuss the foundational idea behind several of the tips we’ll discuss today: Every application has a hierarchy, and this hierarchy needs to be respected. This means that dependencies should only flow down the hierarchy, never up or laterally. Additionally, we generally want to go down only one level for any dependencies. This keeps our code properly decoupled and makes it easy to use various objects around the application wherever we need them, since each object is able to work in isolation and has no dependencies on the state or the structure of the application beyond its realm of focus.

As an example of this principle, let’s consider the simplified hierarchy of a small game:

```
Game
|-- Level
|   |-- Geometry
|   |-- Items
|   |   |-- Coin 1
|   |   |   |-- Sprite
|   |   |   |-- Physics body
|   |   |-- Coin 2
|   |       |-- Sprite
|   |       |-- Physics body
|   |-- Level exit
|       |-- Physics body detector
|-- Player
    |-- Animation container
    |   |-- Various animations
    |-- Physics body
```

If our Player object is completely isolated and only concerned with what it contains, internally handling everything it needs to function properly, then we can put our Player wherever we want: other levels, bonus game modes where the game structure is completely different, or even as an interactive way of selecting levels from a menu. But if the Player always assumes that it will have a sibling in the tree named Items, that immediately breaks this flexibility, as we now have to have an Items object at the same level as the Player in all locations, whether or not we need it.
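To make that concrete, here is a small, hypothetical Python sketch (engine-agnostic; all class and method names are ours) contrasting a self-contained Player with one that assumes an Items sibling exists in the tree:

```python
class SelfContainedPlayer:
    """Knows only about its own state; can be dropped into any scene."""
    def __init__(self):
        self.coins = 0

    def collect_coin(self, value):
        # The world notifies the player of what happened;
        # no crawling around the tree is needed.
        self.coins += value

class CoupledPlayer:
    """Fragile: assumes a sibling named 'Items' always exists."""
    def __init__(self, parent):
        self.parent = parent
        self.coins = 0

    def collect_all_coins(self):
        # Breaks in any scene (menus, bonus modes) without an Items sibling,
        # and locks us into the Coin implementation as well.
        for coin in self.parent.get_node("Items").children:
            self.coins += coin.value
```

The first version works unchanged in a level, a menu, or a bonus mode; the second drags the whole Items subtree along wherever the player goes.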
Additionally, if we start crawling around the tree, we also have the potential to lock ourselves into the implementation of our sibling and parent dependencies. How much flexibility does our game have if we always assume that there is an Items sibling with Coin objects as direct children, each with a function called play_sfx()? This has nothing to do with the Player specifically, but if we change or break this dependency, our Player code is now broken as well.

So that establishes why we don’t want to assume structure above or next to our object in the tree, but why is going too far down a problem? For the same reason: assumption of structure and separation of concerns. If our Player needs to change animations, it can tell the animation container to do so, but it shouldn’t try to manipulate individual frames of a specific animation, as that is beyond its scope. This separation means we can change our implementation later without breaking the player code. Maybe we originally used individual images to stitch into an animation and later switch to a spritesheet for animation. The Player, which is only concerned with the overall, high-level operation of our character, really doesn’t need to know or care how the animation is made, only that it has an object it can talk to for requesting animation changes.

## Keep data and behavior together

This one is largely based around the rule “Tell, Don’t Ask” from The Pragmatic Programmer, with a bit thrown in from Martin Fowler’s discussion of this rule. Generally speaking, we want to keep data and the logic operating on those data together, rather than ask an object for its data so we can operate on them separately.
So, revisiting the example above about the Player’s relationship to the animation container, the Player should tell the animation container to change animations, not manually access the data within the animation container and make the required changes itself, as that moves the logic of changing animations outside of the animation-oriented code and into an object that really shouldn’t concern itself with how the transition occurs.

#### Bad - Separating data and logic

```gdscript
# player.gd
var animation_list = animation_container.get_animation_list()
var new_animation = animation_list.find_animation('walk')
animation_container.current_animation = new_animation
```

#### Good - Data and logic stay together

```gdscript
animation_container.play_animation('walk')
```

Creating a function on the animation container that the Player calls keeps our animation-related code where it belongs and ensures we can reason about the state of the application at any time, as there are no external effects to be worried about. For example, if an animation isn’t playing properly, we know that the broken code must be contained within the animation container, not sitting in a random function elsewhere in our application. This also keeps our code reusable and flexible since all the functionality we may need is contained in our animation object. Enemies, NPCs, and the Player can all use one consistent interface to manage animations.

This rule may seem a bit obvious a lot of the time, but violations can easily sneak their way into your application when you really just need to make one small change this one time.

## Use components

The previous rules really lead right into this one.
Rather than creating large monolithic objects to get all the functionality you need, or going overboard with inheritance, use components to compartmentalize smaller, self-sufficient bits of code, and compose objects out of those components (component here does not necessarily refer to an entity component system, but simply the concept of creating objects that are responsible for specialized areas of knowledge).

For instance, there are probably a lot of objects in your game that have a health bar: the player, enemies, breakable crates, and so on. Rather than managing separate variables and methods on each object to handle health, you could create a health component, or even an entire stats component, that houses and standardizes the data and functions you need. Now, everything that needs a health bar can use this component, making it a lot easier to manage, bugfix, and expand over time.

Components can massively improve the organization and reusability of your code, and are a technique that a lot of people could do well to reach for more often. Sure, there’s some more overhead in creating the components and wiring up communication if multiple components need to be aware of each other, but even just taking the easy wins, like adding a status effects component or a health component, can go a long way in helping you clean up your codebase.

Health bars are a great use of components. Why write separate variables and functions for each individual object when you can create a health or stats component and place it everywhere you need it?

## Use dependency injection

Dependency injection is about supplying dependencies at runtime, rather than compile time, and is a great technique to make your code reusable and flexible. Sure, it’s easy to say that there’s a hierarchy to follow and that certain design patterns can help you maintain that hierarchy, but there are times when you do still need a reference to something outside of your direct descendants.
State machines come to mind, for instance, since they’re generally owned by an object higher in the hierarchy than them, but they need a reference to that object so that they can control it. If we hardcode this reference, we’ll have to duplicate our state machine for every object, which is a pain in the butt and breaks the principle of keeping our code DRY. With dependency injection, we can instead say our state will control an object of type X, such as a RigidBody, and write our code around that type, but without any specific references to the objects that will hold this state. At runtime, a parent object can pass a reference of itself to the state and it will now control that specific object.

```
// By setting the state owner in our object initialization,
// this state can be reused on multiple objects
class State {
    var owner: Rigidbody;

    function init(stateOwner): void {
        this.owner = stateOwner;
    }
}
```

That’s dependency injection in a nutshell. Hopefully you can see just how useful it can be in a lot of different situations, including the previously mentioned technique of compartmentalizing functionality into components. With components and dependency injection, we can write generic code to control movement, health, AI, and more that can flexibly operate on different objects as needed.

In Elemechs, callback functions and related dependencies are passed to individual mechs via dependency injection so that the player and enemy mechs can flexibly work in multiple levels. “Combatants” passes data to “enemies” and “players”, and then those nodes pass the dependencies on to the individual mechs.

## Learn, and use, design patterns

Look, I get it. Design patterns aren’t the sexiest thing in the world to study, especially if you don’t feel a need for them right away. I put off learning about state machines until I had hit my mental limit of writing if statements and storing booleans to figure out what the hell was going on in my code.
But they are important to understand, especially some of the bigger and more universal ones. Why bother reinventing the wheel, and probably doing a worse job of it, when a lot of the common issues software developers run into have already been solved?

But I’m not going to leave it at just telling you to go study software architecture (though that’s not the worst idea in the world if you’ve got ambitions for systems-driven games…), so here’s what I’d prioritize:

1. State machines: I’ve talked about state machines multiple times already, with one theory post and two Godot-oriented posts discussing them, which is way more attention than I’ve given any other single subject on this site, and I could probably squeeze out at least one more post if I felt compelled to. Let that be a signal of just how strongly I feel you should be familiar with this pattern. Whether you prefer to use enums or object-oriented code, learn state machines. Even the simplest of games can usually benefit from them.
2. The observer pattern: State machines may be number one on this list, but the observer pattern is a very close second since it has huge implications for making your code more flexible. With native support in most engines and programming languages, meaning you don’t even have to put a lot of time or effort into implementing it, this is arguably the biggest bang for your buck when studying design patterns.
3. The mediator pattern: I’ve not covered this one yet, but it’s coming. I’ll just leave it at this: if the observer pattern is too one-way for you, this is the pattern you want to study.

Once you’re comfortable with these patterns, you can start diving into the specific issues you’re running into, or even just doing a full read of Game Programming Patterns and seeing what sticks out to you.

## Take an iterative approach

That’s largely it for today, but I want to leave you with one more important tip: it’s ok to be iterative with improving your code style.
Don’t try to learn every design pattern right away and spend weeks on your next project making your code as perfect as possible. Too much theory and not enough practice can burn you out or just lead to not really understanding the why and how. Plus, a game with mediocre code that ships is better than a perfectly coded game that never does.

It’s ok to work until you hit a pain point and then research how to fix it. It can even be worth doing it “wrong” on a small project, seeing what the final product looked like and what your own lessons learned were, and then researching how to do better in the future. Remember, both game and software development are entire career paths; it takes time to learn how to tackle the problems you run into.
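To tie a few of these ideas together, here is a minimal, engine-agnostic Python sketch (all names are illustrative, not from any specific framework) of a state machine whose states receive their owner through dependency injection, so the same states can drive the player, enemies, or NPCs alike:

```python
class State:
    def __init__(self, owner):
        # The owner is injected at runtime, so these state classes
        # never hardcode a reference to any particular object.
        self.owner = owner

    def update(self):
        pass

class Idle(State):
    def update(self):
        if self.owner.speed > 0:
            self.owner.change_state(Walk)

class Walk(State):
    def update(self):
        if self.owner.speed == 0:
            self.owner.change_state(Idle)

class Actor:
    """Any object with a speed attribute can own these states."""
    def __init__(self):
        self.speed = 0
        self.state = Idle(self)

    def change_state(self, state_cls):
        # Inject ourselves into the newly constructed state.
        self.state = state_cls(self)

    def update(self):
        self.state.update()

actor = Actor()
actor.speed = 5
actor.update()  # Idle sees speed > 0 and switches to Walk
print(type(actor.state).__name__)  # Walk
```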
# Shellsort

Shellsort with gaps 23, 10, 4, 1 in action.

- Class: Sorting algorithm
- Data structure: Array
- Worst-case performance: O(n²)
- Best-case performance: O(n log n)
- Average performance: depends on gap sequence
- Worst-case space complexity: O(n) total, O(1) auxiliary

Shellsort, also known as Shell sort or Shell's method, is an in-place comparison sort. It can either be seen as a generalization of sorting by exchange (bubble sort) or sorting by insertion (insertion sort).[1] The method starts by sorting elements far apart from each other and progressively reducing the gap between them. Starting with far apart elements can move some out-of-place elements into position faster than a simple nearest neighbor exchange. Donald Shell published the first version of this sort in 1959.[2][3] The running time of Shellsort is heavily dependent on the gap sequence it uses. For many practical variants, determining their time complexity remains an open problem.

## Description

Shellsort is a generalization of insertion sort that allows the exchange of items that are far apart. The idea is to arrange the list of elements so that, starting anywhere, considering every hth element gives a sorted list. Such a list is said to be h-sorted. Equivalently, it can be thought of as h interleaved lists, each individually sorted.[4] Beginning with large values of h, this rearrangement allows elements to move long distances in the original list, reducing large amounts of disorder quickly, and leaving less work for smaller h-sort steps to do.[5] If the file is then k-sorted for some smaller integer k, then the file remains h-sorted. Following this idea for a decreasing sequence of h values ending in 1 is guaranteed to leave a sorted list in the end.[4]

An example run of Shellsort with gaps 5, 3 and 1 is shown below.
$\begin{array}{rcccccccccccc}
&a_1&a_2&a_3&a_4&a_5&a_6&a_7&a_8&a_9&a_{10}&a_{11}&a_{12}\\
\hbox{input data:} & 62& 83& 18& 53& 07& 17& 95& 86& 47& 69& 25& 28\\
\hbox{after 5-sorting:} & 17& 28& 18& 47& 07& 25& 83& 86& 53& 69& 62& 95\\
\hbox{after 3-sorting:} & 17& 07& 18& 47& 28& 25& 69& 62& 53& 83& 86& 95\\
\hbox{after 1-sorting:} & 07& 17& 18& 25& 28& 47& 53& 62& 69& 83& 86& 95\\
\end{array}$

The first pass, 5-sorting, performs insertion sort on separate subarrays (a1, a6, a11), (a2, a7, a12), (a3, a8), (a4, a9), (a5, a10). For instance, it changes the subarray (a1, a6, a11) from (62, 17, 25) to (17, 25, 62). The next pass, 3-sorting, performs insertion sort on the subarrays (a1, a4, a7, a10), (a2, a5, a8, a11), (a3, a6, a9, a12). The last pass, 1-sorting, is an ordinary insertion sort of the entire array (a1,..., a12).

As the example illustrates, the subarrays that Shellsort operates on are initially short; later they are longer but almost ordered. In both cases insertion sort works efficiently.

Shellsort is unstable: it may change the relative order of elements with equal values. It has "natural" behavior, in that it executes faster when the input is partially sorted.

## Pseudocode

Using Marcin Ciura's gap sequence, with an inner insertion sort.

```
# Sort an array a[0...n-1].
gaps = [701, 301, 132, 57, 23, 10, 4, 1]

foreach (gap in gaps)
{
    # Do an insertion sort for each gap size.
    for (i = gap; i < n; i += 1)
    {
        temp = a[i]
        for (j = i; j >= gap and a[j - gap] > temp; j -= gap)
        {
            a[j] = a[j - gap]
        }
        a[j] = temp
    }
}
```

## Gap sequences

The question of deciding which gap sequence to use is difficult. Every gap sequence that contains 1 yields a correct sort; however, the properties of thus obtained versions of Shellsort may be very different. The table below compares most proposed gap sequences published so far. Some of them have decreasing elements that depend on the size of the sorted array (N).
Others are increasing infinite sequences, whose elements less than N should be used in reverse order.

| General term (k ≥ 1) | Concrete gaps | Worst-case time complexity | Author and year of publication |
|---|---|---|---|
| $\lfloor N / 2^k \rfloor$ | $\left\lfloor\frac{N}{2}\right\rfloor, \left\lfloor\frac{N}{4}\right\rfloor, \ldots, 1$ | $\Theta(N^2)$ [when $N = 2^p$] | Shell, 1959[2] |
| $2 \lfloor N / 2^{k+1} \rfloor + 1$ | $2 \left\lfloor\frac{N}{4}\right\rfloor + 1, \ldots, 3, 1$ | $\Theta(N^{3/2})$ | Frank & Lazarus, 1960[6] |
| $2^k - 1$ | $1, 3, 7, 15, 31, 63, \ldots$ | $\Theta(N^{3/2})$ | Hibbard, 1963[7] |
| $2^k + 1$, prefixed with 1 | $1, 3, 5, 9, 17, 33, 65, \ldots$ | $\Theta(N^{3/2})$ | Papernov & Stasevich, 1965[8] |
| successive numbers of the form $2^p 3^q$ | $1, 2, 3, 4, 6, 8, 9, 12, \ldots$ | $\Theta(N \log^2 N)$ | Pratt, 1971[9] |
| $(3^k - 1) / 2$, not greater than $\lceil N / 3 \rceil$ | $1, 4, 13, 40, 121, \ldots$ | $\Theta(N^{3/2})$ | Knuth, 1973[1] |
| $\prod\limits_{\substack{0\le q < r \\ q\ne(r^2+r)/2-k}} a_q$, where $r = \left\lfloor \sqrt{2k+\sqrt{2k}} \right\rfloor$, $a_q=\min\{n\in\mathbb{N}\colon n\ge(5/2)^{q+1}, \forall p\colon 0\le p < q \Rightarrow \gcd(a_p, n) = 1\}$ | $1, 3, 7, 21, 48, 112, \ldots$ | $O(N e^{\sqrt{8\ln(5/2)\ln N}})$ | Incerpi & Sedgewick, 1985[10] |
| $4^k + 3\cdot2^{k-1} + 1$, prefixed with 1 | $1, 8, 23, 77, 281, \ldots$ | $O(N^{4/3})$ | Sedgewick, 1986[4] |
| $9(4^{k-1}-2^{k-1})+1,\ 4^{k+1}-6\cdot2^k+1$ | $1, 5, 19, 41, 109, \ldots$ | $O(N^{4/3})$ | Sedgewick, 1986[4] |
| $h_k = \max\left\{\left\lfloor 5h_{k-1}/11 \right\rfloor, 1\right\}, h_0 = N$ | $\left\lfloor \frac{5N}{11} \right\rfloor, \left\lfloor \frac{5}{11}\left\lfloor \frac{5N}{11} \right\rfloor\right\rfloor, \ldots, 1$ | ? | Gonnet & Baeza-Yates, 1991[11] |
| $\left\lceil \frac{9^k-4^k}{5\cdot4^{k-1}} \right\rceil$ | $1, 4, 9, 20, 46, 103, \ldots$ | ? | Tokuda, 1992[12] |
| unknown | $1, 4, 10, 23, 57, 132, 301, 701$ | ? | Ciura, 2001[13] |

When the binary representation of N contains many consecutive zeroes, Shellsort using Shell's original gap sequence makes $\Theta(N^2)$ comparisons in the worst case.
For instance, this case occurs for N equal to a power of two when elements greater and smaller than the median occupy odd and even positions respectively, since they are compared only in the last pass.

Although it has higher complexity than the $O(N\log N)$ that is optimal for comparison sorts, Pratt's version lends itself to sorting networks and has the same asymptotic gate complexity as Batcher's bitonic sorter.

Gonnet and Baeza-Yates observed that Shellsort makes the fewest comparisons on average when the ratios of successive gaps are roughly equal to 2.2.[11] This is why their sequence with ratio 2.2 and Tokuda's sequence with ratio 2.25 prove efficient. However, it is not known why this is so. Sedgewick recommends using gaps that have low greatest common divisors or are pairwise coprime.[14]

With respect to the average number of comparisons, the best known gap sequences are 1, 4, 10, 23, 57, 132, 301, 701 and similar, with gaps found experimentally. Optimal gaps beyond 701 remain unknown, but good results can be obtained by extending the above sequence according to the recursive formula $h_k = \lfloor 2.25 h_{k-1} \rfloor$. Tokuda's sequence, defined by the simple formula $h_k = \lceil h'_k \rceil$, where $h'_k = 2.25 h'_{k-1} + 1$, $h'_1 = 1$, can be recommended for practical applications.

## Computational complexity

The following property holds: after $h_2$-sorting of any $h_1$-sorted array, the array remains $h_1$-sorted.[15] Every $h_1$-sorted and $h_2$-sorted array is also $(a_1 h_1 + a_2 h_2)$-sorted, for any nonnegative integers $a_1$ and $a_2$. The worst-case complexity of Shellsort is therefore connected with the Frobenius problem: for given integers $h_1, \ldots, h_n$ with $\gcd = 1$, the Frobenius number $g(h_1, \ldots, h_n)$ is the greatest integer that cannot be represented as $a_1 h_1 + \ldots + a_n h_n$ with nonnegative integers $a_1, \ldots, a_n$.
Using known formulae for Frobenius numbers, we can determine the worst-case complexity of Shellsort for several classes of gap sequences.[16] Proven results are shown in the above table.

With respect to the average number of operations, none of the proven results concerns a practical gap sequence. For gaps that are powers of two, Espelid computed this average as $0.5349N\sqrt{N}-0.4387N-0.097\sqrt{N}+O(1)$.[17] Knuth determined the average complexity of sorting an N-element array with two gaps (h, 1) to be $2N^2/h + \sqrt{\pi N^3 h}$.[1] It follows that a two-pass Shellsort with $h = \Theta(N^{1/3})$ makes on average $O(N^{5/3})$ comparisons. Yao found the average complexity of a three-pass Shellsort.[18] His result was refined by Janson and Knuth:[19] the average number of comparisons made during a Shellsort with three gaps (ch, cg, 1), where h and g are coprime, is $\frac{N^2}{4ch}+O(N)$ in the first pass, $\frac{1}{8g}\sqrt{\frac{\pi}{ch}}(h-1)N^{3/2}+O(hN)$ in the second pass and $\psi(h, g)N + \frac{1}{8}\sqrt{\frac{\pi}{c}}(c-1)N^{3/2}+O((c-1)gh^{1/2}N) + O(c^2g^3h^2)$ in the third pass. ψ(h, g) in the last formula is a complicated function asymptotically equal to $\sqrt{\frac{\pi h}{128}}g + O(g^{-1/2}h^{1/2}) + O(gh^{-1/2})$. In particular, when $h = \Theta(N^{7/15})$ and $g = \Theta(h^{1/5})$, the average time of sorting is $O(N^{23/15})$.

Based on experiments, it is conjectured that Shellsort with Hibbard's and Knuth's gap sequences runs in $O(N^{5/4})$ average time,[1] and that Gonnet and Baeza-Yates's sequence requires on average $0.41N\ln N(\ln\ln N+1/6)$ element moves.[11] Approximations of the average number of operations formerly put forward for other sequences fail when sorted arrays contain millions of elements.

The graph below shows the average number of element comparisons in various variants of Shellsort, divided by the theoretical lower bound, i.e. $\log_2 N!$, where the sequence 1, 4, 10, 23, 57, 132, 301, 701 has been extended according to the formula $h_k = \lfloor 2.25 h_{k-1}\rfloor$.
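The extension rule just mentioned, continuing Ciura's sequence with $h_k = \lfloor 2.25 h_{k-1} \rfloor$, combines naturally with the gapped insertion sort from the pseudocode earlier. A runnable Python sketch:

```python
def extended_ciura_gaps(n):
    """Ciura's experimentally found gaps, extended by h_k = floor(2.25 * h_{k-1})
    until they reach the array size n; returned largest-first for Shellsort."""
    gaps = [1, 4, 10, 23, 57, 132, 301, 701]
    while gaps[-1] * 2.25 < n:
        gaps.append(int(gaps[-1] * 2.25))
    return gaps[::-1]

def shellsort(a):
    """In-place Shellsort over the extended gap sequence."""
    n = len(a)
    for gap in extended_ciura_gaps(n):
        # Gapped insertion sort for this gap size.
        for i in range(gap, n):
            temp = a[i]
            j = i
            while j >= gap and a[j - gap] > temp:
                a[j] = a[j - gap]
                j -= gap
            a[j] = temp
    return a

print(extended_ciura_gaps(10000))  # [7983, 3548, 1577, 701, 301, 132, 57, 23, 10, 4, 1]
print(shellsort([62, 83, 18, 53, 7, 17, 95, 86, 47, 69, 25, 28]))
```

Gaps larger than the array length are harmless: their inner loop simply does no work, so the same gap list serves arrays of any size.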
Applying the theory of Kolmogorov complexity, Jiang, Li, and Vitányi proved the following lower bounds for the order of the average number of operations in an m-pass Shellsort: $\Omega(mN^{1+1/m})$ when $m \le \log_2 N$ and $\Omega(mN)$ when $m > \log_2 N$.[20] Therefore Shellsort has prospects of running in an average time that asymptotically grows like $N\log N$ only when using gap sequences whose number of gaps grows in proportion to the logarithm of the array size. It is, however, unknown whether Shellsort can reach this asymptotic order of average-case complexity, which is optimal for comparison sorts. The worst-case complexity of any version of Shellsort is of higher order: Plaxton, Poonen, and Suel showed that it grows at least as rapidly as $\Omega(N(\log N/\log\log N)^2)$.[21]

## Applications

Shellsort is now rarely used in serious applications. It performs more operations and has a higher cache miss ratio than quicksort. However, since it can be implemented using little code and does not use the call stack, some implementations of the qsort function in the C standard library targeted at embedded systems use it instead of quicksort. Shellsort is, for example, used in the uClibc library.[22] For similar reasons, an implementation of Shellsort is present in the Linux kernel.[23] Shellsort can also serve as a sub-algorithm of introspective sort, to sort short subarrays and to prevent a pathological slowdown when the recursion depth exceeds a given limit. This principle is employed, for instance, in the bzip2 compressor.[24]

## References

1. ^ a b c d Knuth, Donald E. (1997). "Shell's method". The Art of Computer Programming. Volume 3: Sorting and Searching (2nd ed.). Reading, Massachusetts: Addison-Wesley. pp. 83–95. ISBN 0-201-89685-0.
2. ^ a b Shell, D.L. (1959). "A High-Speed Sorting Procedure". Communications of the ACM 2 (7): 30–32. doi:10.1145/368370.368387.
3.
^ Some older textbooks and references call this the "Shell-Metzner" sort after Marlene Metzner Norton, but according to Metzner, "I had nothing to do with the sort, and my name should never have been attached to it." See "Shell sort". National Institute of Standards and Technology. Retrieved 2007-07-17.
4. ^ a b c d Sedgewick, Robert (1998). Algorithms in C 1 (3rd ed.). Addison-Wesley. pp. 273–281. ISBN 0-201-31452-5.
5. ^ Kernighan, Brian W.; Ritchie, Dennis M. (1996). The C Programming Language (2nd ed.). Prentice Hall. p. 62. ISBN 7-302-02412-X.
6. ^ Frank, R.M.; Lazarus, R.B. (1960). "A High-Speed Sorting Procedure". Communications of the ACM 3 (1): 20–22. doi:10.1145/366947.366957.
7. ^ Hibbard, Thomas N. (1963). "An Empirical Study of Minimal Storage Sorting". Communications of the ACM 6 (5): 206–213. doi:10.1145/366552.366557.
8. ^ Papernov, A.A.; Stasevich, G.V. (1965). "A Method of Information Sorting in Computer Memories". Problems of Information Transmission 1 (3): 63–75.
9. ^ Pratt, Vaughan Ronald (1979). Shellsort and Sorting Networks (Outstanding Dissertations in the Computer Sciences). Garland. ISBN 0-8240-4406-1.
10. ^ Incerpi, Janet; Sedgewick, Robert (1985). "Improved Upper Bounds on Shellsort". Journal of Computer and System Sciences 31 (2): 210–224.
11. ^ a b c Gonnet, Gaston H.; Baeza-Yates, Ricardo (1991). "Shellsort". Handbook of Algorithms and Data Structures: In Pascal and C (2nd ed.). Reading, Massachusetts: Addison-Wesley. pp. 161–163. ISBN 0-201-41607-7.
12. ^ Tokuda, Naoyuki (1992). "An Improved Shellsort". In van Leeuwen, Jan. Proceedings of the IFIP 12th World Computer Congress on Algorithms, Software, Architecture. Amsterdam: North-Holland Publishing Co. pp. 449–457. ISBN 0-444-89747-X.
13. ^ Ciura, Marcin (2001). "Best Increments for the Average Case of Shellsort". In Freiwalds, Rusins. Proceedings of the 13th International Symposium on Fundamentals of Computation Theory. London: Springer-Verlag. pp. 106–117. ISBN 3-540-42487-3.
14.
^ Sedgewick, Robert (1998). "Shellsort". Algorithms in C++, Parts 1-4: Fundamentals, Data Structure, Sorting, Searching. Reading, Massachusetts: Addison-Wesley. pp. 285–292. ISBN 0-201-35088-2.
15. ^ Gale, David; Karp, Richard M. (1972). "A Phenomenon in the Theory of Sorting". Journal of Computer and System Sciences 6 (2): 103–115. doi:10.1016/S0022-0000(72)80016-3.
16. ^ Selmer, Ernst S. (1989). "On Shellsort and the Frobenius Problem". BIT Numerical Mathematics 29 (1): 37–40. doi:10.1007/BF01932703.
17. ^ Espelid, Terje O. (1973). "Analysis of a Shellsort Algorithm". BIT Numerical Mathematics 13 (4): 394–400. doi:10.1007/BF01933401.
18. ^ Yao, Andrew Chi-Chih (1980). "An Analysis of (h, k, 1)-Shellsort". Journal of Algorithms 1 (1): 14–50. doi:10.1016/0196-6774(80)90003-6.
19. ^ Janson, Svante; Knuth, Donald E. (1997). "Shellsort with Three Increments". Random Structures and Algorithms 10 (1-2): 125–142. arXiv:cs/9608105. doi:10.1002/(SICI)1098-2418(199701/03)10:1/2<125::AID-RSA6>3.0.CO;2-X. CiteSeerX: 10.1.1.54.9911.
20. ^ Jiang, Tao; Li, Ming; Vitányi, Paul (2000). "A Lower Bound on the Average-Case Complexity of Shellsort". Journal of the ACM 47 (5): 905–911. doi:10.1145/355483.355488. CiteSeerX: 10.1.1.6.6508.
21. ^ Plaxton, C. Greg; Poonen, Bjorn; Suel, Torsten (1992). "Improved Lower Bounds for Shellsort". Annual Symposium on Foundations of Computer Science 33: 226–235. doi:10.1109/SFCS.1992.267769. CiteSeerX: 10.1.1.43.1393.
22. ^ Manuel Novoa III. "libc/stdlib/stdlib.c". Retrieved 2011-03-30.
23. ^ "kernel/groups.c". Retrieved 2012-05-05.
24. ^ Julian Seward. "bzip2/blocksort.c". Retrieved 2011-03-30.

## Bibliography

• Knuth, Donald E. (1997). "Shell's method". The Art of Computer Programming. Volume 3: Sorting and Searching (2nd ed.). Reading, Massachusetts: Addison-Wesley. pp. 83–95. ISBN 0-201-89685-0.
• Analysis of Shellsort and Related Algorithms, Robert Sedgewick, Fourth European Symposium on Algorithms, Barcelona, September 1996.
# How to assign a word for each section that appears in the header of each page?

My request is a little bit strange, but I really need your help. Basically, I want to use an equivalent of \leftmark (which puts the chapter in the header) for a new command. For example, let's say I want to write the following:

```latex
\documentclass[a4paper,10pt,twoside]{article}
...
\usepackage{fancyhdr} % Needed to define custom headers/footers
\pagestyle{fancy} % Enables the custom headers/footers
\fancyfoot{}
\fancyhead[LE]{\thepage $\mid$ here the category}
\fancyhead[RO]{here the category $\mid$ \thepage}

\begin{document}

\aNewCommand{Politics}
\section{Obama...}
my text here...

\aNewCommand{Sport}
\section{The World cup...}
my text here etc

\end{document}
```

I want the words Politics and Sport to be written only in the header (not somewhere in the page). The categories do not need to be enumerated. If possible, I want to avoid the use of \chapter{...}. I hope you see what I mean. Thanks for reading.

You can use something like

```latex
\renewcommand\sectionmark[1]{}
\newcommand\Category[1]{\markboth{\MakeUppercase{#1}}{}}
```

Code:

```latex
\documentclass[a4paper,10pt,twoside]{article}
\usepackage{fancyhdr} % Needed to define custom headers/footers
\pagestyle{fancy} % Enables the custom headers/footers
\fancyhf{}
\fancyhead[LE]{\thepage~$\mid$~\leftmark}
\fancyhead[RO]{\leftmark~$\mid$~\thepage}
\renewcommand\sectionmark[1]{}
\newcommand\Category[1]{\markboth{\MakeUppercase{#1}}{}}
\usepackage{blindtext}% for dummy text
\begin{document}
\Category{Politics}
\section{Obama...}
\subsection{Test}
\Blindtext
\Category{Sport}
\section{The World cup...}
\Blindtext
\end{document}
```

• Thank you very much. That was exactly what I was looking for. Jun 27 '15 at 16:59