url
stringlengths
14
2.42k
text
stringlengths
100
1.02M
date
stringlengths
19
19
metadata
stringlengths
1.06k
1.1k
https://pdglive.lbl.gov/DataBlock.action?node=S023AI&home=BXXX030
# ${{\boldsymbol \Xi}^{0}}$ DECAY PARAMETERS See the Note on Baryon Decay Parameters'' in the neutron Listings. # $\boldsymbol g_{2}(0)/\boldsymbol f_{1}$(0)) FOR ${{\boldsymbol \Xi}^{0}}$ $\rightarrow$ ${{\boldsymbol \Sigma}^{+}}{{\boldsymbol e}^{-}}{{\overline{\boldsymbol \nu}}_{{e}}}$ INSPIRE search VALUE EVTS DOCUMENT ID TECN  COMMENT $-1.7$ ${}^{+2.1}_{-2.0}$ $\pm0.5$ 487 1 2001 I KTEV ${{\mathit p}}$ nucleus, 800 GeV 1  ALAVI-HARATI 2001I thus assumes that $\mathit g_{2}$ = 0 in calculating $\mathit g_{1}/\mathit f_{1}$, above. References: ALAVI-HARATI 2001I PRL 87 132001 First Measurement of Form Factors of the Decay ${{\mathit \Xi}^{0}}$ $\rightarrow$ ${{\mathit \Sigma}^{+}}{{\mathit e}^{-}}{{\overline{\mathit \nu}}_{{e}}}$
2020-09-19 12:18:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9897653460502625, "perplexity": 10300.936798177052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191780.21/warc/CC-MAIN-20200919110805-20200919140805-00100.warc.gz"}
https://search.r-project.org/CRAN/refmans/cat.dt/html/probab_NRM.html
probab_NRM {cat.dt} R Documentation Item response NRM probabilities Description Computes the probabilities of picking every possible response of an specified item from the item bank for all evaluated ability levels using the Nominal Response Model Usage probab_NRM(item_par, nres) Arguments item_par vector containing the item parameters. Odd components are the alpha parameters and even are the beta parameters nres number of possible item responses Value A matrix of response probabilities. Rows represent evaluated ability levels and columns represent responses
2022-08-14 13:31:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7327183485031128, "perplexity": 10016.14484707768}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572033.91/warc/CC-MAIN-20220814113403-20220814143403-00055.warc.gz"}
https://www.spp2026.de/members-guests/18-member-pages/dr-steffen-kionke
# Members & Guests ## Jun.-Prof. Dr. Steffen Kionke Juniorprofessor für Algebra FernUniversität in Hagen E-mail: steffen.kionke(at)fernuni-hagen.de Telephone: +49 2331 987-2558 Homepage: https://www.fernuni-hagen.de/juniorprofe… ## Project 58Profinite perspectives on l2-cohomology ## Publications within SPP2026 This article explores the interplay between the finite quotients of finitely generated residually finite groups and the concept of amenability. We construct a finitely generated, residually finite, amenable group A and an uncountable family of finitely generated, residually finite non-amenable groups all of which are profinitely isomorphic to A. All of these groups are branch groups. Moreover, picking up  Grothendieck's problem, the group A embeds in these groups such that the inclusion induces an isomorphism of profinite completions. In addition, we review the concept of uniform amenability, a strengthening of amenability introduced in the 70's, and we prove that uniform amenability indeed is detectable from the profinite completion. Related project(s): 58Profinite perspectives on l2-cohomology By arithmeticity and superrigidity, a commensurability class of lattices in a higher rank Lie group is defined by a unique algebraic group over a unique number subfield of $$\mathbb{R}$$ or $$\mathbb{C}$$. We prove an adelic version of superrigidity which implies that two such commensurability classes define the same profinite commensurability class if and only if the algebraic groups are adelically isomorphic. We discuss noteworthy consequences on profinite rigidity questions. Related project(s): 58Profinite perspectives on l2-cohomology We investigate which higher rank simple Lie groups admit profinitely but not abstractly commensurable lattices.  We show that no such examples exist for the complex forms of type $$E_8$$, $$F_4$$, and $$G_2$$.  In contrast, there are arbitrarily many such examples in all other higher rank Lie groups, except possibly $$\mathrm{SL}_{2n+1}(\mathbb{R})$$, $$\mathrm{SL}_{2n+1}(\mathbb{C})$$, $$\mathrm{SL}_n(\mathbb{H})$$, or groups of type~$$E_6$$. Related project(s): 58Profinite perspectives on l2-cohomology We define and study generalizations of simplicial volume over arbitrary seminormed rings with a focus on p-adic simplicial volumes. We investigate the dependence on the prime and establish homology bounds in terms of p-adic simplicial volumes. As the main examples, we compute the weightless and p-adic simplicial volumes of surfaces. This is based on an alternative way to calculate classical simplicial volume of surfaces without hyperbolic straightening and shows that surfaces satisfy mod p and p-adic approximation of simplicial volume. Journal Glasgow Math. Journal Link to preprint version Link to published version Related project(s): 58Profinite perspectives on l2-cohomology We prove that the sign of the Euler characteristic of arithmetic groups with CSP is determined by the profinite completion.  In contrast, we construct examples showing that this is not true for the Euler characteristic itself and that the sign of the Euler characteristic is not profinite among general residually finite groups of type F.  Our methods imply similar results for L2-torsion as well as a strong profiniteness statement for Novikov--Shubin invariants. We explain how the construction of the real numbers using quasimorphisms can be transformed into a general method to construct the completion of a field with respect to an absolute value. Journal P-Adic Numbers Ultrametric Anal. Appl. Volume 11 Pages 335 - 337 Link to preprint version Link to published version Related project(s): 18Analytic L2-invariants of non-positively curved spaces We define a variant of Benjamini-Schramm convergence for finite simplicial complexes with the action of a fixed finite group G which leads to the notion of random rooted simplicial G-complexes. For every random rooted simplicial G-complex we define a corresponding 2-homology and the 2-multiplicity of an irreducible representation of G in the homology. The 2-multiplicities generalize the 2-Betti numbers and we show that they are continuous on the space of sofic random rooted simplicial G-complexes. In addition, we study induction of random rooted complexes and discuss the effect on 2-multiplicities. Pages 20 Link to preprint version Related project(s): 18Analytic L2-invariants of non-positively curved spaces The purpose of this article is to define and study new invariants of topological spaces: the p-adic Betti numbers and the p-adic torsion. These invariants take values in the p-adic numbers and are constructed from a virtual pro-p completion of the fundamental group. The key result of the article is an approximation theorem which shows that the p-adic invariants are limits of their classical analogues. This is reminiscent of Lück's approximation theorem for L2-Betti numbers. After an investigation of basic properties and examples we discuss the p-adic analog of the Atiyah conjecture: When do the p-adic Betti numbers take integer values? We establish this property for a class of spaces and discuss applications to cohomology growth. In this note we refine examples by Aka from arithmetic to S-arithmetic groups to show that the vanishing of the i-th ℓ²-Betti number is not a profinite invariant for all i≥2. Related project(s): 18Analytic L2-invariants of non-positively curved spaces Given an S-arithmetic group, we ask how much information on the ambient algebraic group, number field of definition, and set of places S is encoded in the commensurability class of the profinite completion. As a first step, we show that the profinite commensurability class of an S-arithmetic group with CSP determines the number field up to arithmetical equivalence and the places in S above unramified primes. We include some applications to profiniteness questions of group invariants.
2021-12-07 02:27:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6131625175476074, "perplexity": 803.5268573915727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363332.1/warc/CC-MAIN-20211207014802-20211207044802-00524.warc.gz"}
https://math.iitm.ac.in/event/view/253
## Department of Mathematics Indian Institute Of Technology Madras , Chennai ### Stokes' Theorem on Smooth Manifolds #### Abstract : Since classical Stokes' formula involves integrals, surfaces with boundary, orientations, we’ll discuss about what is the notion orientations and integration on smooth manifolds and what is meant by boundary of a manifold. We’ll discuss the theorem and finally we’ll try to deduce known Stokes' formula and Green’s theorem from the Stokes' theorem. Key Speaker Sudeep Poddar Place NAC 522 Start Time 3:00 PM Finish Time 4:00 PM
2019-10-22 04:06:40
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.873055636882782, "perplexity": 3750.828413703947}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987798619.84/warc/CC-MAIN-20191022030805-20191022054305-00216.warc.gz"}
https://www.hpmuseum.org/forum/printthread.php?tid=13513
Calculating infinite series of roots - Printable Version +- HP Forums (https://www.hpmuseum.org/forum) +-- Forum: HP Calculators (and very old HP Computers) (/forum-3.html) +--- Forum: HP Prime (/forum-5.html) +--- Thread: Calculating infinite series of roots (/thread-13513.html) Calculating infinite series of roots - KeithB - 08-26-2019 04:58 PM https://skullsinthestars.com/2019/08/18/a-curious-mathematical-identity/ Is there a way to calculate this easily on the Prime? i.e. showing that (2^.5)^(2^.5)^(2^.5)... = 2? RE: Calculating infinite series of roots - Helge Gabert - 08-29-2019 04:56 PM There is no symbol for repeated exponentiation on the calc, as opposed to the repeated summation and product operator, so I'm not sure if you can show that symbolically with the limit function. But, of course, you can always write a program and show that numerically, you get very close to 2. For example, with a REPEAT ... UNTIL loop, using approx(sqrt(2)), and setting your tolerance at 1E-12, you get to 2 after 71 iterations, after which the result doesn't change any more. RE: Calculating infinite series of roots - ijabbott - 08-30-2019 11:57 PM A "power tower" of $$n$$ $$x$$'s $$\underbrace{x^{x^{x^{\dots^x}}}}_n$$ is also known as the $$n$$th tetration of $$x$$. The operation is called "tetration" because it is the fourth in a series of operations: addition, multiplication, exponentiation, tetration. There are other terms for the same thing. There are also several notations, but one such notation is $$^{n}x$$. Now we come on to the "infinite tetration" of $$x$$, or $$^{\infty}x$$, which is $$\lim\limits_{n\to\infty}{^{n}x}$$. For certain real values of $$x$$ such as $$x=\sqrt{2}$$, this infinite tetration converges (to $$2$$ in this case). In fact it only converges for real $$x$$ in the interval $$\big[e^{-e}, e^\frac{1}{e}\big]$$ (approx. $$[0.066, 1.445]$$). The limit of convergence can evaluated using the Lambert W function: $^{\infty}x = \lim_{n\to\infty}{^{n}x} = \frac{W(-\ln(x))}{-\ln(x)} \big|_{e^{-e} \le x \le e^\frac{1}{e}}$ where the Lambert W function is defined by $$W(z{e^z})=z$$, or by $$z_0 = W(z_0)e^{W(z_0)}$$. Unfortunately, the Lambert W function is not yet built-in on the HP Prime, although it is available in Xcas/Giac, so maybe later.... RE: Calculating infinite series of roots - Helge Gabert - 08-31-2019 12:59 AM (08-30-2019 11:57 PM)ijabbott Wrote:  In fact it only converges for real $$x$$ in the interval $$\big[e^{-e}, e^\frac{1}{e}\big]$$ (approx. $$[0.066, 1.445]$$). Yes, very true, and that was shown by Euler (back in the days . . . ). Support for the Lambert W function would be nice, if there is ever an update again! RE: Calculating infinite series of roots - Stevetuc - 08-31-2019 04:07 PM (08-31-2019 12:59 AM)Helge Gabert Wrote:   (08-30-2019 11:57 PM)ijabbott Wrote:  In fact it only converges for real $$x$$ in the interval $$\big[e^{-e}, e^\frac{1}{e}\big]$$ (approx. $$[0.066, 1.445]$$). Yes, very true, and that was shown by Euler (back in the days . . . ). Support for the LambertW function would be nice, if there is ever an update again! This cas program uses fsolve to calc Lambert fn Code: #cas lmb(z):= fsolve(equal(w*e^w-z,0),w) #end Graph from wikipedia If z <0 lmb(z) returns both principal and negative branch solutions : lmb(-0.5/e) returns [−2.67834699002,−0.231960952987] lmb(-1/e) returns −0.999999842597 (negative and principal branch converge at -1 when z=-1/e) lmb(1) returns 0.56714329041 lmb(2) returns 0.852605502014 RE: Calculating infinite series of roots - ijabbott - 08-31-2019 07:13 PM (08-30-2019 11:57 PM)ijabbott Wrote:  The limit of convergence can evaluated using the Lambert W function: $^{\infty}x = \lim_{n\to\infty}{^{n}x} = \frac{W(-\ln(x))}{-\ln(x)} \big|_{e^{-e} \le x \le e^\frac{1}{e}}$ where the Lambert W function is defined by $$W(z{e^z})=z$$, or by $$z_0 = W(z_0)e^{W(z_0)}$$. I've just noticed that this evaluates to $$\frac{0}{0}$$ for $$x=1$$. Hmm... more work needed? Also when $$1 \lt x \le e^\frac{1}{e}$$, then letting $$z=-\ln(x)$$, $$-\frac{1}{e} \le z \lt 0$$, and there are two branches of the $$W$$ function in this interval. I guess it uses the upper branch?
2020-10-30 05:32:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.949267566204071, "perplexity": 1245.156970826969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107907213.64/warc/CC-MAIN-20201030033658-20201030063658-00291.warc.gz"}
https://dsp.stackexchange.com/tags/local-features/hot?filter=all
# Tag Info ## Hot answers tagged local-features 22 An interest point (key point, salient point) detector is an algorithm that chooses points from an image based on some criterion. Typically, an interest point is a local maximum of some function, such as a "cornerness" metric. A descriptor is a vector of values, which somehow describes the image patch around an interest point. It could be as simple as the ... 11 I will try to avoid math, because math and "how to do it" tutorials can be easily found. So, I start by pointing out one VERY important thing: One does not compute Harris for a single pixel, but for a vicinity (a patch of image) around that pixel! Let $I(i)_{xx}, I(i)_{xy} ...$ be your derivatives for a point $i_0$, then, $H = \left[ \begin{array}{cc} \... 9 I would rather look into KAZE / AKAZE, which perform equally good with significant speed-up. The deformation cases are also tolerated. OpenCV has recently obtained an implementation through GSoC 2014. You can find it here. Its OpenCV tutorial is also present here. 9 I think it is kind'a similar to soft and hard thresholding using in wavelet de-noising. Have you come across this topic? pywt has already an in-built function for this purpose. Please take a closer look at this code and try to play with it: import pywt import matplotlib.pyplot as plt import numpy as np ts = [2, 56, 3, 22, 3, 4, 56, 7, 8, 9, 44, 23, 1, 4, 6,... 9 Image keypoints are a key feature in many Image and Video processing softwares, both industrial and academic. The principle behind is always the same: detect some meaningful points in some images; [optional] compute a stable description of the image part surrounding each keypoint; match keypoints from an image (the template) to another (the query). Now, ... 8 Here is what I did for a client (What you are asking is the same). Assuming that you have access to certain type of a pattern on the image (or the center of the hole), you could always detect the template to obtain the location of a possible unwarp: Note that in the transformed image, two region of interests are defined and the region within which we would ... 8 Some Features: Mean. Variance. Skewness. Kurtosis. Dominant 3 frequencies in the DFT. Energy of the 3 dominant frequencies. Max Value. Min Value. Median. Usually I'd compute them in running windows. Another great information is the Histogram of the Derivative. Or just all the above of the Derivative. 7 The 1D gabor filter has the following form in the frequency domain: $$G_{b(\sigma,\omega_0)}(\omega) = \text{exp}\left(-\frac{\sigma^2}{2}(\omega - \omega_0)^2\right)$$ The 1D log-gabor filter is: $$G_{l(\sigma,\omega_0)}(\omega) = \text{exp}\left(-\frac{\ln^2(\omega/\omega_0)}{2\ln^2(\sigma)}\right)$$ Log-gabor filters are used because they have 0 DC ... 5 In the robot navigation problem, the localization problem refers to the real time estimation of its position and orientation under various backgrounds. This is usually achieved by some natural landmark selection (laser points, camera views, etc.), and the features in the image (corners, tiny lines with different orientations, etc.). So the localizability ... 5 There are two different concepts: If you think as your signal as a single random variable$X$that is emitting values, then what you want is to calculate the Entropy of the random variable http://en.wikipedia.org/wiki/Entropy_estimation If you are considering the entire random signal or stochastic process, then you have to estimate the autocorrelation ... 5 In addition to the features mentioned so far I would like to mention measures of complexity such as: Shannon Entropy LZ Complexity Fractal Dimension There are also Fourier Descriptors (as hinted by Drazick already) and their equivalent in Wavelet Analysis and of course simple histogram bins which would return how frequently each gear is engaged en route. ... 4 Most likely your images look different from the ones in the lectures because of scaling. Note that the result of the convolution with a Laplacian filter will have positive and negative values. What the resulting image looks like depends on the data type of the array, and on the range to which the values are scaled. For example, if you store your filtered ... 3 For a quantized or digital signal, you can get a upper bound on an estimate of information complexity or randomness by attempting to compress the data and/or the data's spectrum using a large variety of compression algorithms. 3 The$\sigma_I$determines the scale level at which the Harris corners are computed. Coarser scales (higher values of$\sigma_I$) correspond to larger corners. The$\sigma_D$is effectively the window size, over which the derivatives are summed to generate the entries of the matrix. If$\sigma_D$is too small, then the detector will be seriously affected by ... 3 I don't know if you are familiar with statistical signal processing and therefore will write my answer assuming that you are not. Everything I explain here is much better presented in any book about statistics. I would recommend Kay's book about detection theory. I first summarize your question by reformulating the 2 points you made, first in comprehensive ... 3 Haralick's primal topograhic sketch is the answer to that. Check-out the peak section of : Haralick R., et al. - The Topographic Primal Sketch If you also look at the notation and Hessian parts, you will grasp how to implement peak finding (local-max) as a convolution operator. Regarding your comments below: Of course you get multiple peaks, but ... 3 As Conrad pointed out, a correlator is probably your best bet. The correlation of a signal with itself (also known as its self-similarity) is larger than its correlation with any other signal (except for a constant factor related to the signals' energy). In your case, you would implement two correlators, one for Signal 1 and one for Signal 2. Then, you'd ... 3 The generalisation of the concept of an analytic signal is not straight forward. I'm quite certain however that looking for such a generalisation with quarternions (or even octonions) will not turn out fruitful. Those generalise complex numbers primarily algebraically, attempting to preserve as much of the field structure as possible, and not so much as a ... 3 Unless mentioned otherwise withing the context the classic interpretation of Second Derivative Gaussian Filter is indeed (a) in your question: $$L \left( x, y, \theta \right) = \cos \left( \theta \right) {g}_{xx} \left( x, y \right) + \sin \left( \theta \right) {g}_{yy} \left( x, y \right)$$ 2 I think a better way to understand what PCA does is to understand what is a good feature. Suppose you are classifying obese people from non-obese people. A good feature (let's call it$f_1$) to use for example might be "body mass index (BMI)" for each person. Another good feature (called$f_2$) to use might be "weight". A third feature$f_3\$ to use would be ... 2 Ok, it sounds like you are trying to do eigenfaces, right? In that case, you have to think of your face images as points in a very high-dimensional space. For example, if your images are 32x32, then the space has 32 * 32 = 1024 dimensions. Operating in so many dimensions is very difficult, because distances between points become almost meaningless. With ... 2 You could take a look at recent publications by Segvic et al, I know they have been working on the problems of traffic sign detection. The basic idea was to use the Viola-Jones framework for object detection, which was later improved by adding some temporal and spatial constraints. If I remember correctly, they achieved a nearly 100% recall rate with just 2 ... 2 Do you know what signs you are looking for? If yes, maybe you could do template matching (e.g. a normalized cross-correlation, available in matlab). It won't work great when signals are getting closer, since the perspective projection will change their appearance, but it should work for mid-range detection. You can limit your template-matching search to the ... 2 The original question was well posed, while the edit made it wrong. Let's clarify things first: the term scale normalized derivative was introduced (to my knowledge) in Mikolajczyk, K. and Schmid, C. 2001. Indexing based on scale invariant interest points. In Proceedings of the 8th International Conference on Computer Vision, Vancouver, Canada, pp. ... 2 Make use of perceptual hashes. It is very fast to compute and is very lightweight, both in terms of memory and cpu consumption. They are represented by simple long integers and can be indexed using many types of data structures such as VP Trees: http://www.phash.org/ If that doesn't work, you can extract SURF features, quantize them into visual words using ... 2 Well, that's a great answer by @sansuiso, so I'll just concentrate on various possible uses of detected keypoints, and describe some examples for you. There are certainly more uses, the ones listed are just based on what I came in touch with until now. Content based image retrieval (CBIR) You treat the features (the feature vectors you get after applying ... 2 Log-gabor are filters defined similarly as gabor filters in the sense that their envelope consist in a Gaussian in Fourier space. This is advantageous because this makes them optimal with respect to the compromise between localization (in space) and detection (of the mean frequency). The difference is that log-gabor (as their name implies) are defined in ... 2 Hello I will be brief and I hope you understand, due to the shape of your signal I think it is best treated with wavelet transform base HAAR, the reason for using this transform is that it will give a representation in time and frequency where you can get the relevant information of the signal, now an important parameter is that you use the base HAAR (there ... 2 The mean and standard deviation are two measurements of a distribution. Others you could also use are higher order moments like 'skewness' (how skewed the distribution is) and 'kurtosis' (how 'peaky' the distribution is). However, what I would try is a histogram of the values for each of the channels. For example, if you used a histogram with 16 bins, you ... 2 Regarding LAB, it is a good way if you are interested in the differences as humans perceive them. About texture, I would suggest taking a look at some proprietary texture descriptors: Gray level co-occurrence matrix. Response to wavelets Only top voted, non community-wiki answers of a minimum length are eligible
2021-01-27 05:07:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8888081908226013, "perplexity": 705.6149499947903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704820894.84/warc/CC-MAIN-20210127024104-20210127054104-00374.warc.gz"}
http://openstudy.com/updates/50084b67e4b020a2b3bda3e1
• anonymous Find a power series representative of the function and determine it's interval of convergence. $g(x)=\frac{1}{(1-x)^2}$ Mathematics Looking for something else? Not the answer you are looking for? Search for more explanations.
2017-03-29 23:31:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4251747131347656, "perplexity": 1324.2791732386243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191405.12/warc/CC-MAIN-20170322212951-00629-ip-10-233-31-227.ec2.internal.warc.gz"}
http://www.singaporemathguru.com/question/primary-4-problem-sums-word-problems-fractions-just-a-taste-of-what-s-to-come-fractions-exercise-12-1789
### Primary 4 Problem Sums/Word Problems - Try FREE Score : (Single Attempt) #### Question John had 300 rubber bands. He gave 2/5 of the rubber bands to Sally. After that, he gave 1/3 of the remaining rubber bands to Mary. How many rubber bands did John have left? The correct answer is : 120
2018-11-18 01:37:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6081463098526001, "perplexity": 12634.940864268085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743960.44/warc/CC-MAIN-20181118011216-20181118033216-00094.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=553370
MathSciNet bibliographic data MR553370 14G30 (14M10) Vitulli, Marie A. Complete intersections in ${\bf C}\sp{n}$${\bf C}\sp{n}$ and $R\sp{2n}$$R\sp{2n}$. Proc. Amer. Math. Soc. 78 (1980), no. 3, 331–336. Article For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
2016-05-31 03:22:37
{"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9990610480308533, "perplexity": 6240.440864091816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051165777.47/warc/CC-MAIN-20160524005245-00004-ip-10-185-217-139.ec2.internal.warc.gz"}
https://zbmath.org/?q=an:0564.47009
# zbMATH — the first resource for mathematics Inequalities of Bernstein-Jackson-type and the degree of compactness of operators in Banach spaces. (English) Zbl 0564.47009 The paper deals with covering problems and the degree of compactness of operators. The main part is devoted to relationships between entropy moduli and Kolmogorov (resp. Gelfand and approximation) numbers for operators which may be interpreted as counterparts to the classical Bernstein-Jackson inequalities for functions. Certain quantifications of results in the Riesz-Schauder-Theory are given. Finally, the largest distance between ”the degree of approximation” and the ”degree of compactness” of integral operators in C[0,1] generated by smooth kernels is determined. For illustrating of the quantifications we treat some eigenvalue and compactness problems of nuclear operators and operators of Hille-Tamarkin-type. ##### MSC: 47B10 Linear operators belonging to operator ideals (nuclear, $$p$$-summing, in the Schatten-von Neumann classes, etc.) 47L30 Abstract operator algebras on Hilbert spaces 47B38 Linear operators on function spaces (general) Full Text: ##### References: [1] B. CARL, Entropy numbers, s-numbers, and eigenvalue problems, J. Funct. Anal., 41 (1981), 290-306. · Zbl 0466.41008 [2] B. CARL, On a characterization of operators from lq into a Banach space of type p with some applications to eigenvalue problems, J. Funct. Anal., 48 (1982), 394-407. · Zbl 0509.47017 [3] B. CARL, Entropy numbers of r-nuclear operators between lp spaces, Studia Math., 77 (1983), 155-162. · Zbl 0563.47013 [4] B. CARL, On the degree of compactness of operators acting from function spaces into Banach spaces of type q, (Jena 1982). · Zbl 0507.47015 [5] B. CARL, H. TRIEBEL, Inequalities between eigenvalues, entropy numbers and related quantities of compact operators in Banach spaces, Math. Ann., 251 (1980), 129-133. · Zbl 0465.47019 [6] X. FERNIQUE, Régularité des trajectoires des fonctions aléatoires gaussiennes, Lecture Notes Math., 480 (1975), 1-96. · Zbl 0331.60025 [7] T. FIGIEL, J. LINDENSTRAUSS, V.D. MILMAN, The dimensions of almost spherical sections of convex bodies, Acta Math., 139 (1977), 53-94. · Zbl 0375.52002 [8] E.D. GLUSKIN, On some finite dimensional problems of the theory of diameters, Vestnik Leningr. Univ., 13 (1981), 5-10. · Zbl 0482.41018 [9] E.D. GLUSKIN, Norms of random matrices and diameters of finite dimensional sets, Math. Sbornic, 120 (1983), 180-189. · Zbl 0528.46015 [10] U. HAAGERUP, The best constants in the Khintchine inequality, Proc. Intern. Conf. “Operator algebras, ideals, ...”, Teubner Texte Math., pp. 69-79, Leipzig, 1978. · Zbl 0411.41006 [11] S. HEINRICH, Optimal approximation of integral operators, in preparation. [12] J. HOFFMANN-JØRGENSEN, Sums of independent Banach space valued random variables, Studia Math., 52 (1974), 159-186. · Zbl 0265.60005 [13] R.A. HUNT, On L (p, q) spaces, Enseign. Math., 12 (1966), 249-276. · Zbl 0181.40301 [14] W.B. JOHNSON, G. SCHECHTMAN, Embedding lmp into ln1, Acta Math., 149 (1982), 71-85. · Zbl 0522.46015 [15] B.S. KASHIN, Sections of some finite dimensional sets and classes of smooth functions, Izv. ANSSR, ser. mat., 41 (1977), 334-351, (Russian). [16] T. KÜHN, Entropy numbers of r-nuclear operators in Banach spaces of type, Studia Math., (to appear). · Zbl 0574.47018 [17] J. LINDENSTRAUSS, L. TZAFRIRI, Classical Banach spaces, Lect. Notes Math., 338, Berlin - Heidelberg - New York, 1973. · Zbl 0259.46011 [18] G.G. LORENTZ, Approximation of functions, Academic Press, New York/Toronto/London, 1966. · Zbl 0153.38901 [19] E. MAKAI Jr., J. ZEMANEK, Geometrical means of eigenvalues, J. Operator Theory, 7 (1982), 173-178. · Zbl 0483.47018 [20] M. MARCUS, G. PISIER, Characterizations of almost surely continuous p-stable random Fourier series and strongly stationary processes (to appear). · Zbl 0547.60047 [21] B. MAUREY, G. PISIER, Séries de variables aléatoires vectorielles indépendantes et propriétés géométriques des espaces de Banach, Studia Math., 58 (1976), 45-90. · Zbl 0344.47014 [22] B.S. MITJAGIN, A. PEŁCZYNSKI, Nuclear operators and approximative dimension, Proc. ICM, (1966), 366-372. · Zbl 0191.41704 [23] A. PIETSCH, Operator ideals, Berlin, 1978. · Zbl 0399.47039 [24] A. PIETSCH, Weyl numbers and eigenvalues of operators in Banach spaces, Math. Ann., 47 (1980), 149-168. · Zbl 0428.47027 [25] G. PISIER, Remarques sur un résultat non public de B. Maurey, Sem. d’Analyse Fonctionnelle 1980/1981, Exp. V. · Zbl 0491.46017 [26] G. PISIER, On the dimension of the lnp-subspaces of Banach spaces, for 1 ≤ p < 2, Trans. AMS, 276 (1983), 201-211. · Zbl 0509.46016 [27] F. RIESZ, Über lineare funktionalgleichungen, Acta Math., 41 (1918), 71-98. · JFM 46.0635.01 [28] J. SCHAUDER, Über lineare, vollstetige funktionaloperationen, Studia Math., 2 (1930), 1-6. · JFM 56.0353.02 [29] C. SCHUTT, Entropy numbers of diagonal operators between symmetric Banach spaces, J. Approx. Theory (to appear). [30] J.S. SZARCK, On kashin’s almost Euclidean orthogonal decomposition of l1n, Bull. Acad. Polon. Sci., 26 (1978). · Zbl 0395.46015 [31] A.F. TIMAN, Approximation theory of functions of real variables, Moscow, 1960. [32] A. ZYGMUND, Trigonometric series, Cambridge, 1968. · Zbl 0157.38204 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2021-10-23 07:14:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8463999032974243, "perplexity": 3955.4420859162883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585653.49/warc/CC-MAIN-20211023064718-20211023094718-00086.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/amc.2017031
Article Contents Article Contents # Fast algebraic immunity of Boolean functions • Since 1970, Boolean functions have been the focus of a lot of attention in cryptography. An important topic in symmetric ciphers concerns the cryptographic properties of Boolean functions and constructions of Boolean functions with good cryptographic properties, that is, good resistance to known attacks. An important progress in cryptanalysis areas made in 2003 was the introduction by Courtois and Meier of algebraic attacks and fast algebraic attacks which are very powerful analysis concepts and can be applied to almost all cryptographic algorithms. To study the resistance against algebraic attacks, the notion of algebraic immunity has been introduced. In this paper, we use a parameter introduced by Liu and al., called fast algebraic immunity, as a tool to measure the resistance of a cryptosystem (involving Boolean functions) to fast algebraic attacks. We prove an upper bound on the fast algebraic immunity. Using our upper bound, we establish the weakness of trace inverse functions against fast algebraic attacks confirming a recent result of Feng and Gong. Mathematics Subject Classification: 12K10, 94A60. Citation: • C. Carlet, Boolean functions for cryptography and error correcting codes, in Boolean Models and Methods in Mathematics, Computer Science, and Engineering (eds. Y. Crama and P. L. Hammer), Cambridge Univ. Press, 2010,257-397. doi: 10.1017/CBO9780511780448. C. Carlet and K. Feng, An infinite class of balanced functions with optimal algebraic immunity, good immunity to fast algebraic attacks and good nonlinearity, in Adv. Crypt. -ASIACRYPT 2008, Springer, 2008,425-440. doi: 10.1007/978-3-540-89255-7_26. C. Carlet  and  D. Tang , Enhanced Boolean functions suitable for the filter model of pseudo-random generator, Des. Codes Crypt., 76 (2015) , 571-587.  doi: 10.1007/s10623-014-9978-9. N. Courtois, Fast algebraic attacks on stream ciphers with linear feedback, Advances in Cryptology-CRYPTO 2003, Springer, 2003,177-194. doi: 10.1007/978-3-540-45146-4_11. N. Courtois and W. Meier, Algebraic attacks on stream ciphers with linear feedback, in Advances in Cryptology, Springer, 2002,346-359. doi: 10.1007/3-540-39200-9_21. Y. Du, F. Zhang and M. Liu, On the resistance of Boolean functions against fast algebraic attacks, in ICISC 2011, Springer, 2012,261-274. doi: 10.1007/978-3-642-31912-9_18. X. Feng and G. Gong, On algebraic immunity of trace inverse functions over finite fields with characteristic two, Cryptology ePrint Archive: Report 2013/585. M. Liu , D. Lin  and  D. Pei , Fast algebraic attacks and decomposition of symmetric Boolean functions, IEEE Trans. Inf. Theory, 57 (2011) , 4817-4821.  doi: 10.1109/TIT.2011.2145690. W. Meier, E. Pasalic and C. Carlet, Algebraic attacks and decomposition of Boolean functions, in Eurocrypt 2004, Springer, 2004,474-491. doi: 10.1007/978-3-540-24676-3_28. Y. Nawaz, G. Gong and K. C. Gupta, Upper bounds on algebraic immunity of Boolean power functions, in 13th Int. Workshop Fast Softw. Encrypt., Springer, 2006,375-389. K. Nyberg, Differentially uniform mappings for cryptography, in Eurocrypt 1993, Springer, 1994, 55-64. doi: 10.1007/3-540-48285-7_6. E. Pasalic, Almost fully optimized infinite classes of Boolean functions resistant to (fast) algebraic cryptanalysis, in ICISC 2008, Springer, 2008,399-414. doi: 10.1007/978-3-642-00730-9_25. C. Shannon , Communication theory of secrecy systems, Bell Syst. Techn. J., 28 (1949) , 656-715.  doi: 10.1002/j.1538-7305.1949.tb00928.x.
2023-03-29 02:50:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.521317183971405, "perplexity": 3915.5605955741207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948932.75/warc/CC-MAIN-20230329023546-20230329053546-00356.warc.gz"}
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/25%3A_Bose-Einstein_and_Fermi-Dirac_Statistics/25.02%3A_Fermi-Dirac_Statistics_and_the_Fermi-Dirac_Distribution_Function
# 25.2: Fermi-Dirac Statistics and the Fermi-Dirac Distribution Function Let us consider the total probability sum for a system of particles that follows Fermi-Dirac statistics. As before, we let $${\epsilon }_1$$, $${\epsilon }_2$$,, $${\epsilon }_i$$,. be the energies of the successive energy levels. We let $$g_1$$, $$g_2$$,, $$g_i$$,. be the degeneracies of these levels. We let $$N_1$$, $$N_2$$,, $$N_i$$,. be the number of particles in all of the degenerate quantum states of a given energy level. The probability of finding a particle in a quantum state depends on the number of particles in the system; we have $$\rho \left(N_i,{\epsilon }_i\right)$$ rather than $$\rho \left({\epsilon }_i\right)$$. Consequently, we cannot generate the total probability sum by expanding an equation like $1={\left(P_1+P_2+\dots +P_i+\dots \right)}^N.$ However, we continue to assume: 1. A finite subset of the population sets available to the system accounts for nearly all of the probability when the system is held in a constant-temperature environment. 2. Essentially the same finite subset of population sets accounts for nearly all of the probability when the system is isolated. 3. All of the microstates that have a given energy have the same probability. We let this probability be $${\rho }^{FD}_{MS,N,E}$$. As before, the total probability sum will be of the form $1=\sum_{\{N_i\}}{W^{FD}\left(N_i,{\epsilon }_i\right)}{\rho }^{FD}_{MS,N,E}$ Each such term reflects the fact that there are $$W^{FD}\left(N_i,{\epsilon }_i\right)$$ ways to put $$N_1$$ particles in the $$g_1$$ quantum states of energy level $${\epsilon }_1$$, and $$N_2$$ particles in the $$g_2$$ quantum states of energy level $${\epsilon }_2$$, and, in general, $$N_i$$ particles in the $$g_i$$ quantum states of energy level $${\epsilon }_i$$. Unlike Boltzmann statistics, however, the probabilities are different for successive particles, so the coefficient $$W^{FD}$$ is different from the polynomial coefficient, or thermodynamic probability, $$W$$. Instead, we must discover the number of ways to put $$N_i$$ indistinguishable particles into the $$g_i$$-fold degenerate quantum states of energy $${\epsilon }_i$$ when a given quantum state can contain at most one particle. These conditions can be satisfied only if $$g_i\ge N_i$$. If we put $$N_i$$ of the particles into quantum states of energy $${\epsilon }_i$$, there are 1. $$g_i$$ ways to place the first particle, but only 2. $$g_i-1$$ ways to place the second, and 3. $$g_i-2$$ ways to place the third, and 4. $$g_i-\left(N_i-1\right)$$ ways to place the last one of the $$N_i$$ particles. This means that there are $\left(g_i\right)\left(g_i-1\right)\left(g_i-2\right)\dots \left(g_i-\left(N_i+1\right)\right)=$ $=\frac{\left(g_i\right)\left(g_i-1\right)\left(g_i-2\right)\dots \left(g_i-\left(N_i+1\right)\right)\left(g_i-N_i\right)\dots \left(1\right)}{\left(g_i-N_i\right)!}=\frac{g_i!}{\left(g_i-N_i\right)!}$ ways to place the $$N_i$$ particles. Because the particles cannot be distinguished from one another, we must exclude assignments which differ only by the way that the $$N_i$$ particles are permuted. To do so, we must divide by $$N_i!$$. The number of ways to put $$N_i$$ indistinguishable particles into $$g_i$$ quantum states with no more than one particle in a quantum state is $\frac{g_i!}{\left(g_i-N_i\right)!N_i!}$ The number of ways to put indistinguishable Fermi-Dirac particles of the population set $$\{N_1\mathrm{,\ }N_2\mathrm{,\dots ,\ }N_i\mathrm{,\dots }\}$$ into the available energy states is $W^{FD}\left(N_i,g_i\right)=\left[\frac{g_1!}{\left(g_1-N_1\right)!N_1!}\right]\times \left[\frac{g_2!}{\left(g_2-N_2\right)!N_2!}\right]\times \dots \times \left[\frac{g_i!}{\left(g_i-N_i\right)!N_i!}\right]\times \dots =\prod^{\infty }_{i=1}{\left[\frac{g_i!}{\left(g_i-N_i\right)!N_i!}\right]}$ so that the total probability sum for a Fermi-Dirac system becomes $1=\sum_{\{N_j\}}{\prod^{\infty }_{i=1}{\left[\frac{g_i!}{\left(g_i-N_i\right)!N_i!}\right]}{\left[{\rho }^{FD}\left({\epsilon }_i\right)\right]}^{N_i}}$ To find the Fermi-Dirac distribution function, we seek the population set $$\{N_1\mathrm{,\ }N_2\mathrm{,\dots ,\ }N_i\mathrm{,\dots }\}$$ for which $$W^{FD}$$ is a maximum, subject to the constraints $N=\sum^{\infty }_{i=1}{N_i}$ and $E=\sum^{\infty }_{i=1}{N_i}{\epsilon }_i$ The mnemonic function becomes $F^{FD}_{mn}=\sum^{\infty }_{i=1}{ \ln g_i!\ } -\sum^{\infty }_{i=1}{\left[\left(g_i-N_i\right){ \ln \left(g_i-N_i\right)\ }-\left(g_i-N_i\right)\right]}-\sum^{\infty }_{i=1}{\left[N_i{ \ln N_i-N_i\ }\right]+\alpha \left[N-\sum^{\infty }_{i=1}{N_i}\right]} +\ \beta \left[E-\sum^{\infty }_{i=1}{N_i}{\epsilon }_i\right]$ We seek the $$N^{\textrm{⦁}}_i$$ for which $$F^{FD}_{mn}$$ is an extremum; that is, the $$N^{\textrm{⦁}}_i$$ satisfying \begin{align*} 0&=\frac{\partial F^{FD}_{mn}}{\partial N_i}=\frac{g_i-N^{\textrm{⦁}}_i}{g_i-N^{\textrm{⦁}}_i}+{ \ln \left(g_i-N^{\textrm{⦁}}_i\right)\ }-1-\frac{N^{\textrm{⦁}}_i}{N^{\textrm{⦁}}_i}-{ \ln N^{\textrm{⦁}}_i\ }+1-\alpha -\beta {\epsilon }_i \\[4pt] &={ \ln \left(g_i-N^{\textrm{⦁}}_i\right)\ }-{ \ln N^{\textrm{⦁}}_i\ }-\alpha -\beta {\epsilon }_i \end{align*} Solving for $$N^{\textrm{⦁}}_i$$, we find $N^{\textrm{⦁}}_i=\frac{g_ie^{-\alpha }e^{-\beta {\epsilon }_i}}{1+e^{-\alpha }e^{-\beta {\epsilon }_i}}$ or, equivalently, $\frac{N^{\textrm{⦁}}_i}{g_i}=\frac{1}{1+e^{\alpha }e^{\beta {\epsilon }_i}}$ If $$1\gg e^{-\alpha }e^{-\beta {\epsilon }_i}$$ (or $$1\ll e^{\alpha }e^{\beta {\epsilon }_i}$$), the Fermi-Dirac distribution function reduces to the Boltzmann distribution function. It is easy to see that this is the case. From $N^{\textrm{⦁}}_i=\frac{g_ie^{-\alpha }e^{-\beta {\epsilon }_i}}{1+e^{-\alpha }e^{-\beta {\epsilon }_i}}\approx g_ie^{-\alpha }e^{-\beta {\epsilon }_i}$ and $$N=\sum^{\infty }_{i=1}{N^{\textrm{⦁}}_i}$$, we have $N=e^{-\alpha }\sum^{\infty }_{i=1}{g_i}e^{-\beta {\epsilon }_i}=e^{-\alpha }z$ It follows that $$e^{\alpha }={z}/{N}$$. With $$\beta ={1}/{kT}$$, we recognize that $${N^{\textrm{⦁}}_i}/{N}$$ is the Boltzmann distribution. For occupied energy levels, $$e^{-\beta {\epsilon }_i}=e^{-\epsilon_i}/{kT}\approx 1$$; otherwise, $$e^{-\beta \epsilon_i}=e^{-\epsilon_i/kT}<1$$. This means that the Fermi-Dirac distribution simplifies to the Boltzmann distribution whenever $$1\gg e^{-\alpha }$$. We can illustrate that this is typically the case by considering the partition function for an ideal gas. Using the translational partition function for one mole of a monatomic ideal gas from Section 24.3, we have \begin{align*} e^{\alpha } &=\frac{z_t}{N}=\left[\frac{2\pi mkT}{h^2}\right]^{3/2} \frac{\overline{V}}{\overline{N}} \\[4pt] &=\left[\frac{2\pi mkT}{h^2}\right]^{3/2} \frac{kT}{P^0} \end{align*} For an ideal gas of molecular weight $$40$$ at $$300$$ K and $$1$$ bar, we find $$e^{\alpha }=1.02\times {10}^7$$ and $$e^{-\alpha }=9.77\times {10}^{-8}$$. Clearly, the condition we assume in demonstrating that the Fermi-Dirac distribution simplifies to the Boltzmann distribution is satisfied by molecular gases at ordinary temperatures. The value of $$e^{\alpha }$$ decreases as the temperature and the molecular weight decrease. To find $$e^{\alpha }\approx 1$$ for a molecular gas, it is necessary to consider very low temperatures. Nevertheless, the Fermi-Dirac distribution has important applications. The behavior of electrons in a conductor can be modeled on the assumption that the electrons behave as a Fermi-Dirac gas whose energy levels are described by a particle-in-a-box model. This page titled 25.2: Fermi-Dirac Statistics and the Fermi-Dirac Distribution Function is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2022-08-11 03:23:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9492146968841553, "perplexity": 299.98641848480173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571232.43/warc/CC-MAIN-20220811012302-20220811042302-00122.warc.gz"}
http://physics.stackexchange.com/questions/11781/show-that-the-electric-field-e-in-the-system-at-rest-is-e-fracq4-pi-epsil
# Show that the electric field E in the system at rest is $E=\frac{Q}{4 \pi \epsilon_0 \sqrt{(x^2+y^2+z^2)^3}} (x,y,z)$ [closed] A body point charge $Q$ moves in relation to the reference system $\Sigma$ according to the law of motion $x(t)=v_0 t$, $y(t)=0$, $z(t)=0$. - Hello Patty24 and welcome to this site. Please reconsider en.wikipedia.org/wiki/Electrostatics#The_electric_field and some of the following paragraphs, e.g. concerning the electrostatic potential. Greets –  Robert Filter Jul 1 '11 at 16:53 No one is so kind to help me? .. Perhaps I was not very clear in explaining the problem. –  Patty24 Jul 1 '11 at 16:55 ..perhaps. Also, are you sure this shouldn't have had a special-relativity tag instead of GR? –  qftme Jul 1 '11 at 17:05 thanks!good idea :) –  Patty24 Jul 1 '11 at 17:12 Hi Patty! We have some rules about how homework questions need to be posed on this site: most importantly, you need to show us your work and ask about the specific issue that is giving you trouble. Don't just ask us to do your work for you. See this meta question for details. If you edit your post to make it a better homework question, I'll be happy to reopen it. –  David Z Jul 1 '11 at 17:35
2013-12-06 10:18:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7645484805107117, "perplexity": 973.36149045743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163051244/warc/CC-MAIN-20131204131731-00072-ip-10-33-133-15.ec2.internal.warc.gz"}
http://mathhelpforum.com/geometry/125113-triangle-circle.html
1. ## Triangle in circle?! Triangle ABC has angle B 45 degrees, side AB 9 and side BC $6\sqrt2$. The area of ABC is: And since side AC is: ... the circle containing triangle ABC has the radius: Would be great to get some insight on this. I'm completely lost as to what to do. 2. Originally Posted by davidman Triangle ABC has angle B 45 degrees, side AB 9 and side BC $6\sqrt2$. The area of ABC is: And since side AC is: ... the circle containing triangle ABC has the radius: Would be great to get some insight on this. I'm completely lost as to what to do. Use the formula $A = \frac{1}{2}ab\,\sin{C}$, where $a$ and $b$ are two sides of the triangle and $C$ is the angle between them. 3. $A=\frac{1}{2}absinC=\frac{1}{2}\times9\times6\sqrt {2}sin45$ and looking at triangles with special angles 45 and 90 tells me that $sin45=\frac{1}{\sqrt{2}}\:\:\therefore$ $27\sqrt{2}\times\frac{1}{\sqrt{2}}=27$ how to get side AC? I only know that one angle... 4. Originally Posted by davidman $A=\frac{1}{2}absinC=\frac{1}{2}\times9\times6\sqrt {2}sin45$ and looking at triangles with special angles 45 and 90 tells me that $sin45=\frac{1}{\sqrt{2}}\:\:\therefore$ $27\sqrt{2}\times\frac{1}{\sqrt{2}}=27$ how to get side AC? I only know that one angle... Now to get side $c$, use the Cosine Rule. $c^2 = a^2 + b^2 - 2ab\,\cos{C}$. Also, since you found the area of the triangle before, you need to write $= 27\,\textrm{units}^2$ 5. $AC^2=AB^2+BC^2-2(AB\times{BC})cosB=9^2+(6\sqrt{2})^2-2(9\times{6}\sqrt{2})cos45=$ $81+36\times{2}-2\times{9}\times{6}\sqrt{2}\times{\frac{1}{\sqrt{2 }}}=45$ $AC=\sqrt{45}=\sqrt{9\times5}=3\sqrt{5}$ ok, managed with that somehow... about the circle though, is there a rule for that? 6. Originally Posted by davidman Triangle ABC has angle B 45 degrees, side AB 9 and side BC $6\sqrt2$. The area of ABC is: And since side AC is: ... the circle containing triangle ABC has the radius: Would be great to get some insight on this. I'm completely lost as to what to do. Are you saying that the vertices A, B, C need to lie on the circle? 7. Yes, like wasn't sure what it's called in English, but I guess circumscribed circle of a triangle. 8. OK I don't know if there's an easier way, but here goes: I placed the length of 9 units along the $x$ axis of a set of cartestian axes, beginning at the origin. So that means that two of the vertices lie at $(0,0)$ and $(9, 0)$. There is also an angle of $45^\circ$ made with the $x$ axis, and a length of $6\sqrt{2}$ units. Using some trigonometry, I know that the final vertex must have co-ordinate $(x, y) = (6\sqrt{2}\,\cos{45^\circ}, 6\sqrt{2}\,\sin{45^\circ})$ $(x, y) = (6, 6)$. So now you have the three vertices as $(x, y)$ co-ordinates. Substitute the three co-ordinates into the general equation for the circle $(x - h)^2 + (y - k)^2 = r^2$. You will end up with three equations in three unknowns that you will need to solve simultaneously. One of them ( $r$) is the radius of the circle. 9. Originally Posted by Wikipedia @ http://en.wikipedia.org/wiki/Law_of_sines#Relation_to_the_circumcircle In the equation the common value of the three fractions is actually the diameter of the triangle's circumcircle. hence; $\frac{a}{sin\:A}=2R$ or in my case; $\frac{AC}{sin\:B}=\frac{3\sqrt{5}}{\frac{1}{\sqrt{ 2}}}=\frac{3\sqrt{10}}{1}$ $R=\frac{3\sqrt{10}}{2}$ so I guess wikipedia helped save the day this time.
2017-07-24 12:50:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 33, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9258915781974792, "perplexity": 530.4095444308688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424876.76/warc/CC-MAIN-20170724122255-20170724142255-00266.warc.gz"}
https://keplerlounge.com/physics/2021/03/21/quantum-randomness.html
## Motivation: According to Feynman, the double-slit experiment contains the central mystery of Quantum Mechanics. However, in spite of numerous attempts, the quantum randomness observed in the double-slit experiment has remained a mystery until this day. That said, a potentially effective approach originally proposed by John Wheeler consists in using information theory to reformulate Quantum Mechanics. An information-theoretic perspective requires interpreting light as a signal that carries information and not as a material particle or wave. Given the cosmological properties of light, this signal would operate within an event horizon. This interpretation is consistent with other clues. We don’t live in an observer-independent universe as any cosmological model must account for the existence of billions of observers. Moreover, the scientific method itself is not observer-independent. ## Reflections on the nature of light: If we are to interpret light as a signal, I believe cosmological natural selection is a useful meta-framework. In particular, if we assume that conscious life ultimately allows an exponential increase in cosmic progeny via technological singularities, we may make two remarks. A constant velocity of signal propagation allows decentralised coordination of cosmological morphogenesis. Unbounded signal propagation on the other hand would significantly diminish the potential diversity of cosmic forms as it would allow global coordination of cosmic events. The implicit assumption here is that electromagnetic waves are a crucial means of communication for technologically advanced civilisations. While the theory of cosmological natural selection is not inconsistent with the anthropic principle, it allows cosmologists to derive the principle from familiar Darwinian arguments. But, how do we account for the quantum randomness observed in the double-slit experiment? ## The observer-dependence theorems, or observer-dependence in the scientific method: In order to make sense of the quantum randomness that is observed in the double-slit experiment, it is important to note that the scientific method is an algorithmic method that may be implemented by a Turing Machine and that scientists ultimately discover the relations between things and not the things themselves. Moreover, these relations are exactly those which lie within the Turing limit. These insights motivate the use of algorithmic information theory in order to understand the epistemic limits of the scientific method. Given the family of probabilistic models $$P_M$$, the Minimum Description Length of a dataset $$X$$ of $$N$$ samples from a discrete probability distribution $$P_X$$ relative to the optimal model $$\Omega \in P_M$$ is given by the First Observer Dependence Theorem: $$\mathbb{E}[K(X)] = H(X|\Omega) + H(\Omega)$$ where $$H(\Omega)$$ is the inextricable epistemic uncertainty of $$\Omega$$ concerning its own operation and $$H(X|\Omega)$$ is the inextricable epistemic uncertainty of $$\Omega$$ relative to $$P_X$$. In (1) I used the fact that $$\Omega$$ is a probabilistic program so it makes sense to compute the expected Kolmogorov Complexity, as well as the fact that the expected Kolmogorov Complexity of a random variable equals the Minimum Description Length of that variable [1]. I also implicitly assumed that ergodic assumptions are satisfied, which is the case in the regime of repeatable scientific experiments. If $$\Omega$$ is identified with what physicists call an observer i.e. a system that collects data $$X$$ and tests hypotheses concerning the probabilistic structure of $$X$$ in order to discover $$P_X$$ then we find that relative to this observer, incompressibility(in terms of memory requirements), algorithmic randomness and incompleteness(as defined) are all equivalent to $$\mathbb{E}[K(X)]$$. We also find that the inextricable epistemic uncertainty of $$\Omega$$ relative to $$P_X$$ is bounded by: $$\mathbb{E}[K(X)] \geq H(\Omega)$$ This suggests that perceived randomness is not observer independent. More generally it suggests that the scientific method is not observer independent and it provides an information-theoretic formulation of Planck’s claim that: Science can’t solve the ultimate mystery of nature. And that is because, in the last analysis, we ourselves are a part of the mystery that we are trying to solve.-Planck Now, given the observer-dependence theorems and the fact that the most fundamental physical observations are expressions of fundamental relations between humans and their environment we are now ready to analyse the double-slit experiment. ## Observer-dependence in the double-slit experiment: Assuming that light is a signal that transmits information to a detector, we may interpret the double-slit experiment as a communication channel where the detector corresponds to an observer. This is consistent with John Wheeler’s proposal that every quantum observation corresponds to a bit of information. Furthermore, if we consider that each bit of information corresponds to a yes/no question what are the precise questions asked in the double-slit experiment? 1. With a photo-sensitive screen, the question is ‘Where did the quantum object land?’ 2. On the other hand, if you have a detector behind each slit the question is ‘Which slit did the quantum go through?’ It is worth noting that the second question yields a bi-modal distribution as it contains two possibilities whereas the first question implies a much more complex multi-modal distribution. However, there remains one more mystery. If the double-slit experiment is observer-dependent and there is an ensemble of observers, how is it that all these observations are consistent? This must be because the entire universe is identifiable with a single wave function. As for how a particular branch of the multiverse is chosen at any instant, that is a mystery that leaves something to the imagination. In fact, epistemological limit is precisely the origin of quantum randomness in the double-slit experiment. Given the observer-dependence theorems, there is good reason to believe that we shall never know. Not unless we figure out how to engineer computers that go beyond the Turing limit. ## Discussion: While this article demonstrates that Cosmological Natural Selection and the observer-dependence theorems address the central mystery of quantum mechanics, I would like to add that these two theories are related via the Physical Church-Turing thesis. One important motivation for exploring this connection is that it is the scientific basis for speculations that we may be living in a computer simulation of a civilisation that mastered the principles of black-hole engineering. On this front there are two complementary approaches. One approach involves the analysis of physical constraints on black-hole computers. Another interesting approach involves the consideration of limit-computable mathematical objects such as prime formulas that are not computable by Turing Machines. ## References: 1. Feynman. The Feynman Lectures in Physics. 1963. 2. Wheeler. INFORMATION, PHYSICS, QUANTUM: THE SEARCH FOR LINKS. 1989. 3. Aidan Rocke (https://cstheory.stackexchange.com/users/47594/aidan-rocke), Understanding the Physical Church-Turing thesis and its implications, URL (version: 2021-02-22): https://cstheory.stackexchange.com/q/48450 4. L. Smolin, Did the Universe Evolve? Class. Quantum Grav. 9 (1992a) 173–191 5. Jeffrey M Shainline. Does cosmological natural selection select for technology? Institute of Physics. 2020.
2021-09-25 12:27:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.7229716181755066, "perplexity": 538.3555693766192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057622.15/warc/CC-MAIN-20210925112158-20210925142158-00190.warc.gz"}
https://daxpy.xyz/notes/attribute-selection-in-decision-trees/
# Attribute Selection in Decision Trees For constructing a new node in decision tree, choosing which attribute to partition the data on is important. Choosing a less desirable attribute to split the data on may result in lower performance. Lets look into a few important measures which helps us find the best attribute. ## Information Gain Information Gain is defined as amount of information gained about a random variable (outcome) from observing another (attribute). We can quantify information gain as difference in entropy when random variable is observed. \begin{aligned} IG(T,A) &= H(T) - H(T|A) \\ H(T) &= -\sum_{c\in C}^{}p_c\log_2 p_c \\ H(T|A) &= \sum_{a \in A}p_a H(T_a) \end{aligned} Here $H(T)$ is the entropy of set $T$ and $T_a = \{t\in T: t_{A} = a\}$ is its subset of items with attribute $A=a$. Also, $p_a = \frac{\left|T_a\right|}{|T|}$. ## GINI Impurity GINI Impurity is a measure of how often a randomly chosen element from a set would be incorrectly labeled if it was randomly labeled according to the distribution of an attribute in the set. Let say we partition the input set $T$ according to the values of attribute $A$ such that $T = \bigcup_{a\in A} T_a$. The split would be ideal if each of the partitions would have only a single class (different subset can have same class). GINI impurity quantifies having multiple classes in same partition. \begin{aligned} G(T_a) &= \sum_{c\in C}p_{a,c}\left(\textstyle\sum_{k \neq c} p_{a,k}\right) \\ &=\sum_{c\in C}p_{a,c}(1-p_{a,c}) \\ &= 1 - \sum_{c\in C}p_{a,c}^2 \end{aligned} Overall GINI Impurity score of partitioning $T$ according to $A$ is \begin{aligned} G(A) &= \sum_{a\in A}p_{a}G(T_a) \quad \\ &= \sum_{a\in A} p_{a} \left(1- \sum_{c\in C}p_{a,c}^{2}\right) \end{aligned} $p_a$ fraction of elements which has attribute $a$ $p_{a,c}$ fraction of elements in class $c$ and has attribute $a$ ## Variance Reduction Variance reduction is used when target variable is continuous (tree is a regression tree). If the set $T$ is being partitioned into $T_L$ and $T_R$, the reduction in variance is given by \begin{aligned} V(T) = Var(T) &- \left(\frac{|T_L|}{|T|}Var(T_L)+\frac{|T_R|}{|T|}Var(T_R)\right) \end{aligned} For calculating the best split point, the standard variance calculation formula would require recalculation of mean repeatedly. But we can compute variance without explicitly calculating mean as $$Var(S) = \frac{1}{|S|^2}\sum_{i,j\in S}(y_i-y_j)^2$$ tags: machine-learning
2022-09-25 05:12:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999570846557617, "perplexity": 1479.2221776540644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00371.warc.gz"}
https://strimas.com/smoothr/reference/smooth_densify.html
This function adds additional vertices to lines or polygons via linear interpolation, always while keeping the original vertices. Each line segment will be split into equal length sub-segments. This densification algorithm treats all vertices as Euclidean points, i.e. new points will not fall on a great circle between existing vertices, rather they'll be along a straight line. smooth_densify(x, wrap = FALSE, n = 10L, max_distance) ## Arguments x numeric matrix; matrix of coordinates. logical; whether the coordinates should be wrapped at the ends, as for polygons and closed lines, to ensure a smooth edge. integer; number of times to split each line segment. Ignored if max_distance is specified. numeric; the maximum distance between vertices in the resulting matrix. This is the Euclidean distance and not the great circle distance. ## Value A matrix with the coordinates of the densified curve. ## Details This function works on matrices of points and is generally not called directly. Instead, use smooth() with method = "densify" to apply this smoothing algorithm to spatial features. ## Examples # smooth_densify works on matrices of coordinates # use the matrix of coordinates defining a line as an example m <- jagged_lines$geometry[[2]][] m_dense <- smooth_densify(m, n = 5) class(m) #> [1] "matrix" "array" class(m_dense) #> [1] "matrix" "array" plot(m, type = "b", pch = 19, cex = 1.5, axes = FALSE, xlab = NA, ylab = NA) points(m_dense, col = "red", pch = 19, cex = 0.5) # max_distance can be used to ensure vertices are at most a given dist apart m_md <- smooth_densify(m, max_distance = 0.05) plot(m, type = "b", pch = 19, cex = 1.5, axes = FALSE, xlab = NA, ylab = NA) points(m_md, col = "red", pch = 19, cex = 0.5) # smooth is a wrapper for smooth_densify that works on spatial features library(sf) l <- jagged_lines$geometry[[2]] l_dense <- smooth(l, method = "densify", n = 2) class(l) #> [1] "XY" "LINESTRING" "sfg" class(l_dense) #> [1] "XY" "LINESTRING" "sfg" plot(l, lwd = 5) plot(l_dense, col = "red", lwd = 2, lty = 2, add = TRUE) plot(l_dense %>% st_cast("MULTIPOINT"), col = "red", pch = 19,
2021-09-28 09:49:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32957780361175537, "perplexity": 3787.8316312866978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060677.55/warc/CC-MAIN-20210928092646-20210928122646-00128.warc.gz"}
https://tex.stackexchange.com/questions/524455/how-can-i-make-spaces-appear-with-guidelines-with-this-penmanship-font
# How can I make spaces appear with guidelines with this penmanship font? I'm using a special font family for penmanship called ZNuscript. Some of the versions of this font include ruled lines that help to guide children in forming their letters. It displays just fine for any character other than a space. If I write a sentence in LibreOffice using the font, the spaces appear as normal: But when I use the same font in Overleaf using the XeLaTeX compiler, the spaces appear without guidelines: I have tried using underscores, but in this font, they are almost double the width of a regular space. Please have a look at this relevant Overleaf project to see the font files and code. • You'll need an active space character which will likely break your document. – Henri Menke Jan 16 '20 at 5:33 You can do it with an active space character, but this has some issues as you can see in the output. \documentclass{article} \usepackage{fontspec} \newfontface\tracingfont[Path=fonts/]{ZNuscriptDottedGuidedNL} \begingroup \catcode\ =13 \gdef\installactivespace{\catcode\ =13\def {\char"20\hskip0pt\relax}}% \endgroup \newcommand\tracing{\begingroup\installactivespace\dotracing} \newcommand\dotracing[1]{\tracingfont#1\endgroup} \begin{document} \huge \tracing{Here are some spaces and if the line is too long it breaks but leaves spaces behind.} \end{document} • Thank you for this. For my purposes, sentences using these guidelines would never extend beyond one line as I'm using them for writing worksheets. Is that the only issue? Also, could you explain a bit more about the relevant pieces of your code? – cbunn Jan 16 '20 at 6:03
2021-06-20 19:17:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7781287431716919, "perplexity": 1407.2616000676564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488253106.51/warc/CC-MAIN-20210620175043-20210620205043-00611.warc.gz"}
https://math.stackexchange.com/questions/2937306/area-of-irregular-similar-hexagons
# Area of irregular similar hexagons [closed] Irregular Hexagons $$A$$ and $$B$$ are geometrically similar. The shortest sides are $$4$$ inches and $$3$$ inches, respectively. If the area of hexagon $$A$$ is $$48in^2$$, what is the area of hexagon $$B$$? I know the answer is $$27 in^2$$, but how do you get that? ## closed as off-topic by Saad, Leucippus, Xander Henderson, Chris Custer, Ahmad BazziOct 1 '18 at 5:44 This question appears to be off-topic. The users who voted to close gave this specific reason: • "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Saad, Leucippus, Xander Henderson, Chris Custer, Ahmad Bazzi If this question can be reworded to fit the rules in the help center, please edit the question. Because if corresponding linear dimensions of $$A$$ and $$B$$ are in $$4:3$$ ratio, then their areas are in $$4^2:3^2$$ ratio. The areas of similar figures vary in proportion to the square of the proportion by which the lengths are stretched. For example, a square with one and one half times the side length has area $$(3/2)^2$$ as large.
2019-08-24 11:19:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7738158702850342, "perplexity": 735.3728586213398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027320734.85/warc/CC-MAIN-20190824105853-20190824131853-00239.warc.gz"}
https://www.doubtnut.com/qa-hindi/603146657
Login HomeHindiकक्षा 12MathsChapterअवकल समीकरण अवकल समीकरण {xy^(3)(1+ cos x )... # अवकल समीकरण {xy^(3)(1+ cos x ) - y } dx + xdy = 0 का हल है उत्तर Step by step solution by experts to help you in doubt clearance & scoring excellent marks in exams. Updated On: 18-1-2021 Apne doubts clear karein ab Whatsapp par bhi. Try it now. CLICK HERE Get Answer to any question, just click a photo and upload the photo and get the answer completely free, UPLOAD PHOTO AND GET THE ANSWER NOW! Watch 1000+ concepts & tricky questions explained! Click here to get PDF DOWNLOAD for all questions and answers of this chapter - Class 12 अवकल समीकरण Click here to get PDF DOWNLOAD for all questions and answers of this Book - Class 12 MATHS लिखित उत्तर x^3/3-x^(2) sinx + x cos x - 2 sin x + C x^3/3+ x^(2) sinx + 2x cos x - 2 sin x + C x^3/3-2x^(2) sinx + x cos x - sin x + C उपरोक्त में से कोई नहीं Answer : B
2022-01-26 17:35:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27038300037384033, "perplexity": 10456.703021050042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304959.80/warc/CC-MAIN-20220126162115-20220126192115-00648.warc.gz"}
http://www.math.md/en/publications/basm/issues/y1993-n3/
RO  EN ## BASM n.3 (13), 1993 Research Paper The problem of determination of multiregular tilings of space. (Russian) Zamorzaeva Elizaveta pp.3-10 Isogonal strata tessellations of the Lobachevsky space. (Russian) Makarov P. V. pp.11-14 Totally symmetric $n$-ary loops. (Russian) Ursu L. A. pp.15-25 Conditions of completeness in a family of extensions of a duality-intuitionistic logic. (Russian) Gkhaliekh Ya. N. pp.26-34 Analysis of the condition ${\germ A}(k)$ for four-dimensional systems of differential equations. II. (Russian) Bronshtejn I. U., Kopanskij A. Ya. pp.35-40 Solvability of linear parabolic boundary value problems in anisotropic spaces ${\cal H}\sb p\sp s, (s\in R)$ of Bessel potentials. (Russian) Zhitarashu N. V., Rudi I. I. pp.41-48 The uniform estimates for solutions of nonstrictly hyperbolic equation of second order with large parameter. (English)Perjan A. V. pp.49-53 Conditions of presence of four integral straight lines for a cubic differential system in the case of a center or focus. (Russian)Kozima D. V., Shubeh A. S. pp.54-62 Numerical investigation of the process of dynamic action on a layered obstacle. (Russian) Rîbachin Boris, Secrieru Grigore pp.63-70 Impact of a viscoelastic rod upon a heated obstacle. (Russian) Naval I. K. pp.71-75 Exact shifts of the constraints in methods of centres. (English) Zaporojan D. pp.76-82 Impact of an elastic rod of a finite length upon a compliant obstacle. I. (Russian) Cheban V. G. pp.83-90 Some results for a priority discipline DD with non-zero switching. (English) Mişcoi Gheorghe pp.91-94 Multiplicative system of functions corresponding to the group $C\sb 2 \times \Gamma$. (English) Kolesnik Alexander pp.95-98
2022-01-18 19:00:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6708443760871887, "perplexity": 2182.335828985601}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300997.67/warc/CC-MAIN-20220118182855-20220118212855-00477.warc.gz"}
https://www.intel.com/content/www/us/en/developer/articles/guide/intel-software-guard-extensions-data-center-attestation-primitives-quick-install-guide.html
# Intel® Software Guard Extensions Data Center Attestation Primitives (Intel® SGX DCAP): A Quick Install Guide Published: 09/23/2020 Recent Revisions 2/8/2021 Install the provisioning tool from Debian packages until a standalone version is available for Ubuntu* Linux* 20.04 2/3/2021 Move to Ubuntu Linux 20.04 64-bit as the base OS ## Introduction Intel® Software Guard Extensions Data Center Attestation Primitives (Intel® SGX DCAP) is Intel's solution for deploying Intel SGX attestation services into data centers. Because every data center environment has unique needs that require customization, it is distributed as a collection of components rather than a complete, turn-key solution. To help developers, data centers, and cloud service providers (CSP's) get started with Intel SGX DCAP, this article steps through the process of creating a minimal, but complete, Intel SGX DCAP environment. It produces a working infrastructure that is suitable for software developers who are creating Intel SGX solutions, and does so in a manner that mimics a real-world deployment to help data center administrators develop system deployment and management procedures. For an introduction to Intel SGX DCAP, see the following white papers: And the following videos from Intel® Network Builders University (free registration required): ## The Minimum Environment An Intel SGX DCAP Environment consists of three, fundamental components: 1. A subscription to the Intel Provisioning Certification Service (Intel PCS), which provides you with the API keys needed to query the service for Intel SGX attestation collateral. 2. A data center caching service, which acts as a caching proxy for the Intel PCS. 3. An Intel SGX enabled platform, which must be provisioned before it can execute runtime Intel SGX workloads.​ The goal of this guide is to produce a simplified, but not trivialized, environment with the following requirements: • The caching service shall be run on a discreet system. Specifically, it shall not be a system that is used to run Intel SGX workloads. • The system that hosts the caching service shall not require Intel SGX. • The Intel SGX enabled platform shall be provisioned and then re-imaged with a fresh OS. This guide will also assume that the Intel SGX enabled platform can be provisioned on a production network. This may not be a valid assumption in a data center’s production environment, but it removes unnecessary complexity from a development and testing setup. ## Intel SGX DCAP Installation Procedure At a high level, the steps to produce the minimum Intel SGX DCAP environment are: 1. Subscribe to the Intel PCS for ECDSA Attestation and obtain the required API keys. 2. Set up Intel's reference caching service, the Provisioning Certification Caching Service (PCCS). 3. Provision the Intel SGX enabled platform for Intel SGX workloads. 4. Verify the provisioning data. 5. Reimage the Intel SGX enabled platform. 6. Load the Intel SGX runtime stack onto the reimaged system. The last two steps are intended to simulate real-world environments, where system provisioning might occur on a restricted network prior to deployment into production. They also ensure that the runtime system does not retain any of the provisioning components. ### Subscribe to the Intel PCS Each subscription to the Intel PCS Service for ECDSA Attestation issues two API keys: a primary key and a secondary key. Either one can be used. The point of issuing two keys is to provide continuity of service in the event the active key needs to be regenerated. To subscribe to the service, browse to the Intel SGX Software Services page. If you have an account, sign in using the "Sign In" link in the banner. If you don't have an account, click "Sign Up" to register. Once you are logged in, return the Intel SGX Software Services page. Under the "Intel Provisioning Certification Service for ECDSA Attestation" header, click on the link titled "Intel® SGX provisioning certification service". From there, scroll down to the "Get PCK Certificate/s" API and click on the "Subscribe" button. You'll be taken to a subscription summary page. Confirm your choices by clicking on the "Add subscription" button. This will load a subscription summary page for your account. Scroll down to the "Intel® Software Guard Extensions Provisioning Certification Service subscription" section and click on the "Show" links to reveal your API keys. ### Set up the Intel PCCS Now that you have your API keys, you can set up the Intel PCCS as your caching service. The following steps assume you are starting from a fresh installation of Ubuntu Linux 20.04 LTS. The PCCS package has a dependency on Node®.js version 14 which is not part of the 20.04 LTS distribution, so the first step is to fetch the Node.js setup script. $curl -o setup.sh -sL https://deb.nodesource.com/setup_14.x$ chmod a+x setup.sh $sudo ./setup.sh Once the script completes, you can add the Node.js package using apt. $ sudo apt-get -y install nodejs We'll use Intel's Debian packages for the Intel SGX DCAP installation, but to do that we need to add the repository to the list of sources for apt, and add the key: $sqlitebrowser /opt/intel/sgx-dcap-pccs/pckcache.db Use the “Browse Data” tab to examine the tables. They should all be empty. ### Provisioning a system Now that the PCCS is up and running, it’s time to provision an Intel SGX enabled platform. This guide assumes you have already enabled Intel SGX in the system BIOS, and imaged it with a fresh installation of Ubuntu Server 20.04 LTS. Before we can install the Intel SGX driver, it’s necessary to add the Dynamic Kernel Module Support package: $ sudo apt install dkms Once that’s done, you can fetch the driver for Intel SGX DCAP and install it. $https://download.01.org/intel-sgx/sgx-dcap/1.9/linux/distro/ubuntu20.04-server/sgx_linux_x64_driver_1.36.2.bin$ chmod 755 sgx_linux_x64_driver_1.36.2.bin $sudo ./sgx_linux_x64_driver_1.36.2.bin Verify that the driver loaded by checking the kernel log: $ dmesg | grep sgx [ 245.139702] intel_sgx: module verification failed: signature and/or required key missing - tainting kernel [ 245.142415] intel_sgx: EPC section 0x2000c00000-0x207f7fffff [ 245.154474] intel_sgx: EPC section 0x4000c00000-0x407fffffff [ 245.167151] intel_sgx: Intel SGX DCAP Driver v1.36.2 The messages show that the driver successfully loaded and assigned memory to the Enclave Page Cache (EPC). You can ignore the warnings: they stem from the fact that this is an out-of-tree kernel driver. Now we can install the provisioning tools. In a production environment, the provisioning binary and its supporting libraries might be incorporated into a minimal kernel image that is loaded via a PXE boot, but for testing purposes we’ll assume a production network and load the Intel SGX support packages directly from Intel’s repository. The provisioning tool is distributed as a standalone package since it won’t typically be installed on a live system, but there is no release for Ubuntu 20.04 at the current time. Until one is ready, we'll need to install it using the Debian packages. This will also pull in a number of other dependencies that would not normally be present in a production provisioning image, but it's fine for testing purposes: $echo 'deb [arch=amd64] https://download.01.org/intel-sgx/sgx_repo/ubuntu focal main' | sudo tee /etc/apt/sources.list.d/intel-sgx.list > /dev/null$ wget -O - https://download.01.org/intel-sgx/sgx_repo/ubuntu/intel-sgx-deb.key | sudo apt-key add - $sudo apt update$ sudo apt install sgx-pck-id-retrieval-tool Now the provisioning tool needs to be configured so that it knows how to contact the caching service. Edit the configuration file /opt/intel/sgx-pck-id-retrieval-tool/network_setting.conf and make the following changes: • Change the PCCS_URL to match your caching service’s location. • Uncomment the user_token parameter, and set it to the user password you created when configuring the PCCS • Set the proxy_type to fit your environment (most likely this will be “direct”) • Ensure USE_SECURE_CERT is set to “FALSE” since we’re using a self-signed certificate for testing purposes. In a production environment, this should be set to “TRUE”. Save your changes and run the provisioning tool: $PCKIDRetrievalTool Intel(R) Software Guard Extensions PCK Cert ID Retrieval Tool Version 1.9.100.3 ERROR: readUEFIVar: failed to open uefi variable /sys/firmware/efi/efivars/SgxRegistrationServerRequest-304e0796-d515-4698-ac6e-e76cb1a71c28 ,error: No such file or directory ERROR: getRequestType: SgxRegistrationServerRequest UEFI variable was not found. Warning: platform manifest is not available or current platform is not multi-package platform. Warning: Couldn't collect platform manifest. If you are using single-package platform, you can ignore this warning. the data has been sent to the cache service successfully and pckid_retrieval.csv has been generated successfully! When running on a single socket system you can ignore the warnings about the platform manifest and the UEFI variable for the Registration Server. The last line is the most important as it tells us the system has been successfully provisioned. ### Verify the provisioned system We can verify the provisioning by returning to the SQLite database using sqlitebrowser. There should be data present in every table. The screen shot below shows the contents of the table fmspc_tcbs. Note the tcbinfo data structure on the right. ### Runtime configuration Now that the Intel SGX enabled platform has been provisioned, it can be loaded with a production OS image. It can, in fact, be reimaged multiple times: as long as the Intel SGX Trusted Compute Base (TCB) doesn’t change, it will not need to be reprovisioned. Again, we’ll assume a fresh install of Ubuntu Server 20.04 LTS as the starting point. Install the driver: $ sudo apt install dkms $wget https://download.01.org/intel-sgx/sgx-dcap/1.9/linux/distro/ubuntu20.04-server/sgx_linux_x64_driver_1.36.2.bin$ chmod 755 ./sgx_linux_x64_driver_1.36.2.bin $sudo ./sgx_linux_x64_driver_1.36.2.bin Update the apt repositories: $ echo 'deb [arch=amd64] https://download.01.org/intel-sgx/sgx_repo/ubuntu focal main' | sudo tee /etc/apt/sources.list.d/intel-sgx.list > /dev/null $wget -O - https://download.01.org/intel-sgx/sgx_repo/ubuntu/intel-sgx-deb.key | sudo apt-key add -$ sudo apt update Now you can install the Intel SGX runtime components, and Intel’s reference quoting library and quote provider library. $sudo apt-get install libsgx-urts libsgx-dcap-ql libsgx-dcap-default-qpl This step will also install Intel’s Architectural Enclave Service Manager which needs to be configured for ECDSA-based attestations. Edit the configuration file, /etc/aesmd.conf, and uncomment this line: default quoting type = ecdsa_256 Since we’re using ECDSA quotes, AESM has no need to connect to the internet and you can leave the proxy settings alone. Restart AESM for the changes to take effect: $ systemctl restart aesmd Next, we need to configure the quote provider library to connect to PCCS to obtain the attestation collateral. Edit the configuration file, /etc/sgx_default_qcnl.conf, and make the following changes: • Set the PCCS_URL parameter to the location of our PCCS server • Set USE_SECURE_CERT to “FALSE” since we’re using a self-signed certificate for testing purposes. Again, in a production environment, this should be set to “TRUE”.
2022-09-24 17:14:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17252928018569946, "perplexity": 9785.438501490193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00044.warc.gz"}
https://mailman.ntg.nl/pipermail/ntg-context/2008/034474.html
# [NTG-context] Strange behaviour with tikz Mojca Miklavec mojca.miklavec.lists at gmail.com Thu Sep 11 10:05:47 CEST 2008 On Wed, Sep 10, 2008 at 7:44 PM, Eric DÉTREZ wrote: > Hello again > > I don't understand a strange thing. > > Patterns in tikz become black in some case. > > Here is a minimal example : > *************************************** > \usemodule[tikz] > \usetikzlibrary[patterns] > \starttext > > blabla > > \chapter {blabla} > > \starttikzpicture > \draw [pattern=north west lines](0,0) rectangle +(1,2); > \stoptikzpicture > > \stoptext > ************************************* > > Without any text before the chapter or without the chapter command I > see the patterns. > With these commands I just see a black rectangle. > > Can I get my patterns back ? What version of ConTeXt and TikZ are you using? I first tried your example with TeX Live 2008, and it looked like a ConTeXt-related problem in TikZ. The patterns only worked on the first page. But then I checked with the latest version of both ConTeXt and TikZ (http://minimals.contextgarden.net/current/modules/t-tikz/ - you can fetch it with rsync) rsync -av rsync://contextgarden.net/minimals/current/modules/t-tikz/ place-on-your-computer And it worked fine. Mojca
2017-01-19 06:37:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9847118258476257, "perplexity": 14499.095336953253}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00569-ip-10-171-10-70.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/314803/time-taken-by-projectile-on-an-inclined-plane
# Time taken by projectile on an inclined plane? A particle is projected up an inclined plane of base angle $\beta$ with the horizontal with an initial velocity $V$. The particle collides elastically with the incline and rebounds vertically. If the particle reaches back the point of projection after time $\large T = \dfrac {aV}{g \sqrt{1+b\sin^2\beta}}$ then enter the value of $a + b$ Note: $g$ denotes the acceleration due to gravity. Well first of all I am confused over the word "vertical". Vertical as in making an angle of $90^\circ$ with the plane or the horizontal?? Confused over this, I proceeded in two cases :- CASE I From the plane So when a particle is projected up an inclined plane, the time taken by it to complete it projectile motion is : $T$ = $\frac{2u\sin(\alpha - \beta)}{g\cos\beta}$ ...... (i) Where $\alpha$ is the angle the projectile makes with the horizontal(the smaller angle) and $\beta$ is the angle of the inclined plane. When it is projected down an inclined plane :- $T$ = $\frac{2u\sin(\alpha + \beta)}{g\cos\beta}$ ............. (ii) Here, $\alpha$ and $\beta$ have the same meaning, angle from the horizontal and angle of the inclined plane respectively. Now, from what is given in the question, we know that the range of the projectile will be the same in both cases. And we have to find the sum of the times to go up and down the plane. When it is being projected down the plane, $\alpha + \beta$ is 90. Now since the projectile reaches back the point of projection, the ranges up and down the inclined plane should be equal. The formulas for these are : Up the plane : $\frac{u^2}{g\cos\beta}[\sin(2\alpha - \beta) - \sin\beta]$ ......... (iii) Down the plane : $\frac{u^2}{g\cos\beta}[\sin(2\alpha + \beta) + \sin\beta]$ ............... (iv) (Alpha and beta have the same meanings) so (iii) = (iv) and $\alpha + \beta$ is 90. Using this, we get :- $\sin(2\alpha - \beta) = 3\sin\beta$ ................... (v) Now the total time would be (i) + (ii), which involves the variable \alpha. So I need to eliminate the variable $\alpha$. I could do that using (v), but that would involve writing $\cos2\alpha$ and $\sin2\alpha$ ONLY in terms of $\sin\alpha$ and then using that value of $\sin\alpha$ in order to eliminate $\alpha$ from (i) + (ii). However, I dont think the solution to be that long/untidy and am like 99% sure that there must be a shorter/neater way. CASE II From the horizontal I pretty much do the same steps as in case I. In this case I get from equating the ranges : - $\sin(2\alpha - \beta) = \sin\beta$ which gives me $\alpha = \beta$. This not only makes the time up the plane to be 0, but also makes the time down the plane to be $\frac{2u}{g}$, which is the same for a freely falling body/1D. Also, in this case I think the particle will fall back to it's starting position, thus never getting back to its original position. So I am like 80% sure that this is the wrong case, but I havent just got the gist of projectile motion along an inclined plane, so I think that I may be wrong. Also, the word "elastic" means that no energy is lost right, so the particle rebounds with the same velocity with which it struck. But I think I vaguely remember reading somewhere that it's also got to to do something about making equal angles with the normal or something like that. My best guess is that this might be used somewhere and thus providing us with an easier relation between $\alpha$ and $\beta$. Either this, or the worst case scenario that I have COMPLETELY misunderstood the problem. PLS correct me if I am wrong anywhere in this explanation. I tried my level best to keep in compliance with the homework policy. Pls be kind enough to point out any mistakes I might have made n this regard. Thank you I think this is a good question to test if you think like a physicist. One thing physicist do is to analyze a problem by looking at limiting cases. Here there are two interesting cases: one where the angle of the ramp is zero, so that the ramp is actually just a flat level surface, the other is where the ramp becomes almost vertical (I would say the ramp actually being vertical doesn't make sense, but you can still take the limit). Think about what the trajectory looks like when the angle is zero. Think what the trajectory becomes as you make the ramp more and more vertical. Then it should become clear what the times are in these limiting cases. Now if you trust the formula they gave you, you should have enough information from these two cases to determine the two quantities they want. • It is not too hard to solve for the trajectory fully and verify the formula you are given is correct. I might edit this answer and add the full solution later. Certainly you should be worried if you can't solve this problem the hard way. – Brian Moths Mar 3 '17 at 16:02 "Vertically" means vertically upwards, in the direction of $\vec{-g}$. So case II applies. This means that after reaching its highest point C (vertically above the point of collision B) the particle retraces its trajectory back to the point of projection A. (See diagram below.) An elastic collision means that there is no loss in KE. If the object rebounded from is fixed, then the angles of incidence and reflection are equal. This is the diagram you should have drawn. If, as you suggest, the particle were launched at $90^{\circ}$ to the horizontal then it would start at B, rise to C, fall back to B, rebound and be projected to A. • From this, I did after reading this, I think that the particle was projected with $\alpha = 90^\circ$. Pls point whether I am in the right direction or not. The particle was initially projected at 90 degrees, hit the incline, and then rebounded with the same 90 degreess(which is how i understood "This means that the particle retraces its trajectory back to the point of projection." – The East Wind Feb 25 '17 at 20:53 • You think the particle was projected vertically upwards from the projection point? Make a sketch. – sammy gerbil Feb 25 '17 at 21:03 • as i said i may be wrong. I am not at all sure anout this one. Making a sketch doesnt help much. Pls elabortae further. – The East Wind Feb 25 '17 at 21:07 • It has been a week now. I am still not able to solve this problem. I suggest that you please post a full solution. – The East Wind Mar 3 '17 at 10:52
2021-01-23 02:11:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7614992260932922, "perplexity": 253.0221046561392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531702.36/warc/CC-MAIN-20210123001629-20210123031629-00487.warc.gz"}
https://www.dias.ie/ga/2011/06/15/2007-23-probing-the-fuzzy-sphere-regularization-in-simulations-of-the-3d-phi-4/?option=com_contact&view=contact&id=14&Itemid=32
Dublin Institute for Advanced Studies 00353 (0) 16140100 DIAS-STP-07-23 # Probing the fuzzy sphere regularization in simulations of the 3d $\phi ^4$ ### J. Medina, W. Bietenholz & Denjoe O’Connor This preprint is available on Arxiv.org This preprint is not available for download on our website. To obtain copies of any of the preprints in the archives, please contact us and specify the preprint number(s), author(s), title(s) and number of copies wanted.
2021-05-08 01:08:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5057712197303772, "perplexity": 6208.622819877953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988831.77/warc/CC-MAIN-20210508001259-20210508031259-00253.warc.gz"}
https://www.esaral.com/q/which-one-of-the-following-23143
# Which one of the following Question: Which one of the following is correct structure for cytosine ? Correct Option: , 3 Solution: The correct structure of cytosine
2023-02-02 08:14:54
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9028798341751099, "perplexity": 7215.417079934261}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499967.46/warc/CC-MAIN-20230202070522-20230202100522-00049.warc.gz"}
https://people.smp.uq.edu.au/MatthewDavis/matts_arXiv/mailings/0053.html
# Matt's arXiv selection bumper holiday edition: fortnight ending 29th December 2006. From: Matthew Davis <mdavis_at_physics.uq.edu.au> Date: Tue, 2 Jan 2007 16:12:24 +1000 (EST) The following message was sent to the matts_arxiv list by Matthew Davis <mdavis_at_physics.uq.edu.au> Hi everyone, I had some time off in the last week and so didn't get to the weekly email. To make up for it, today's email contains two weeks worth of abstracts for your reading pleasure. A total of 36 new abstracts: ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612376 Date: Thu, 14 Dec 2006 21:14:41 GMT (59kb) Title: Expansion of a mesoscopic Fermi system from a harmonic trap Authors: Pavel Nagornykh and Victor Galitski Categories: cond-mat.mes-hall cond-mat.supr-con Comments: 4 pages, 1 color figure Subj-class: Mesoscopic Systems and Quantum Hall Effect; Superconductivity \\ We study quantum dynamics of an atomic Fermi system with a finite number of particles, N, after it is released from a harmonic trapping potential. We consider two different initial states: The Fermi sea state and the paired state described by the projection of the grand-canonical BCS wave function to the subspace with a fixed number of particles. In the former case, we derive exact and simple analytic expressions for the dynamics of particle density and density-density correlation functions, taking into account the level quantization and possible anisotropy of the trap. In the latter case of a paired state, we obtain analytic expressions for the density and its correlators in the leading order with respect to the ratio of the trap frequency and the superconducting gap (the ratio assumed small). We discuss several dynamic features, such as time evolution of the peak due to pair correlations, which may be used to distinguish between the Fermi sea and the paired state. \\ ( http://arXiv.org/abs/cond-mat/0612376 , 59kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612388 Date: Fri, 15 Dec 2006 10:22:45 GMT (19kb) Title: BEC in a star-comb graph Authors: F. P. Mancini, P. Sodano and A. Trombettoni Categories: cond-mat.stat-mech Comments: Proceedings of the "Eleventh Training Course in the Physics of Correlated Electron Systems and High-Tc Superconductors", Vietri sul Mare, Italy, Oct 2006 Subj-class: Statistical Mechanics \\ We investigate the properties of free bosons hopping on a star-comb network, discussing the single-particle spectrum and the main thermodynamic equilibrium properties: Bose-Einstein critical temperature, fraction of condensate, and spatial boson distribution. We find an enhancement of the critical temperature with respect to other inhomogeneous networks. \\ ( http://arXiv.org/abs/cond-mat/0612388 , 19kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0612128 Date: Fri, 15 Dec 2006 09:54:06 GMT (83kb) Title: Entangling two atoms in spatially separated cavities through both photon emission and absorption processes Authors: Peng Peng, Fu-li Li Comments: 12 pages, 4 figures \\ We consider a system consisting of a $\Lambda$-type atom and a V-type atom, which are individually trapped in two spatially separated cavities that are connected by an optical fibre. We show that an extremely entangled state of the two atoms can be deterministically generated through both photon emission of the $\Lambda$-type atom and photon absorption of the V-type atom in an ideal situation. The influence of various decoherence processes such as spontaneous emission and photon loss on the fidelity of the entangled state is also investigated. We find that the effect of photon leakage out of the fibre on the fidelity can be greatly diminished in some special cases. As regards the effect of spontaneous emission and photon loss from the cavities, we find that the present scheme with a fidelity higher than 0.98 may be realized under current experiment conditions. \\ ( http://arXiv.org/abs/quant-ph/0612128 , 83kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0612129 Date: Fri, 15 Dec 2006 11:09:36 GMT (111kb) Title: Two-photon states generated from a continuous wave source Authors: Anne E. B. Nielsen and Klaus Molmer Categories: quant-ph Comments: 10 pages, 9 figures \\ Conditional preparation of two-photon states from a continuous wave non-degenerate optical parametric oscillator is investigated. We derive the phase space Wigner function for the output state conditioned on two nearby photo detection events, and we maximize its overlap with a two-photon state by varying the temporal output state mode function. In the low intensity limit, we generalize to n-photon state production. We find a simple expression for the conditional output, and from this we determine the optimal output state mode function and n-photon state fidelity. \\ ( http://arXiv.org/abs/quant-ph/0612129 , 111kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612415 Date: Fri, 15 Dec 2006 21:00:07 GMT (513kb) Title: Hard-core bosons on optical superlattices: Dynamics and relaxation in the superfluid and insulating regimes Authors: Marcos Rigol, Alejandro Muramatsu, Maxim Olshanii Categories: cond-mat.other cond-mat.stat-mech Comments: 14 pages, 17 figures, published version Subj-class: Other; Statistical Mechanics Journal-ref: Phys. Rev. A 74, 053616 (2006) DOI: 10.1103/PhysRevA.74.053616 \\ We study the ground-state properties and nonequilibrium dynamics of hard-core bosons confined in one-dimensional lattices in the presence of an additional periodic potential (superlattice) and a harmonic trap. The dynamics is analyzed after a sudden switch-on or switch-off of the superlattice potential, which can bring the system into insulating or superfluid phases, respectively. A collapse and revival of the zero-momentum peak can be seen in the first case. We study in detail the relaxation of these integrable systems towards equilibrium. We show how after relaxation time averages of physical observables, like the momentum distribution function, can be predicted by means of a generalization of the Gibbs distribution. \\ ( http://arXiv.org/abs/cond-mat/0612415 , 513kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612417 Date: Mon, 18 Dec 2006 18:32:59 GMT (17kb) Title: Dimer states in atomic mixtures Authors: K. Ziegler Categories: cond-mat.stat-mech Comments: 4 pages, 1 figure Subj-class: Statistical Mechanics \\ A mixture of heavy atoms in a Mott state and light spin-1/2 fermionic atoms is studied in an optical lattice. Scattering processes excite the heavy atoms and generate an attraction between the light atoms. An effective Hamiltonian is derived that describes tunneling of single fermions, tunneling of fermionic pairs and an exchange of fermionic spins. An eigenstate of the last two processes is found in form of an alternating dimer state. This state has a global symmetry with respect to a discrete rotation of all dimers. Fluctuations due to rotations of dimer clusters may lead to a transition to a dimer liquid in frustrated optical lattices. \\ ( http://arXiv.org/abs/cond-mat/0612417 , 17kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612436 Date: Sun, 17 Dec 2006 09:03:36 GMT (15kb) Title: Complex Envelope Soliton in Bose-Einstein Condensate with Time Dependent Scattering Length Authors: Ayan Khan, Rajneesh Atre and Prasanta K. Panigrahi Categories: cond-mat.other Comments: 3 pages, 2 eps figures, To appear in the proceedings of Condensed Matter Days - 2006 Subj-class: Other \\ We elaborate on a general method to find complex envelope solitons in a cigar shaped Bose-Einstein condensate in a trap. The procedure incorporates time dependent scattering length, oscillator frequency and loss/gain. A variety of time dependencies of the above parameters, akin to the ones occurring in the experiments can be tackled. \\ ( http://arXiv.org/abs/cond-mat/0612436 , 15kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612460 Date: Mon, 18 Dec 2006 16:20:44 GMT (24kb) Title: Shear viscosity and damping for a Fermi gas in the unitarity limit Authors: G. M. Bruun, H. Smith Categories: cond-mat.stat-mech Comments: 4 pages, 3 figures Subj-class: Statistical Mechanics \\ We calculate the shear viscosity of a two-component Fermi gas in the normal phase as a universal function of temperature in the unitarity limit. Using a microscopic Kubo approach, the importance of strong-coupling effects such as the emergence of the pseudogap is examined. We demonstrate how recent experimental results on the damping of collective modes in the unitarity limit can be understood in terms of viscous damping when the gas is in the collisional hydrodynamic regime. \\ ( http://arXiv.org/abs/cond-mat/0612460 , 24kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612464 Date: Mon, 18 Dec 2006 18:08:32 GMT (12kb) Title: Two-body correlations and the superfluid fraction for nonuniform systems Authors: Wayne M. Saslow, Davide E. Galli, and Luciano Reatto Categories: cond-mat.stat-mech Subj-class: Statistical Mechanics \\ We extend the one-body phase function upper bound on the superfluid fraction in a periodic solid (a spatially ordered supersolid) to include two-body phase correlations. The one-body current density is no longer proportional to the gradient of the one-body phase times the one-body density, but rather it depends also on two-body correlation functions. The equations that simultaneously determine the one-body and two-body phase functions require a knowledge of one-, two-, and three-body correlation functions. The approach can also be extended to disordered solids. Fluids, with two-body densities and two-body phase functions that are translationally invariant, cannot take advantage of this additional degree of freedom to lower their energy. \\ ( http://arXiv.org/abs/cond-mat/0612464 , 12kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612465 Date: Mon, 18 Dec 2006 19:41:04 GMT (133kb) Title: Reconstruction of the finite size canonical ensemble from incomplete micro-canonical data Authors: P. H. Lundow, K. Markstr\"om Categories: cond-mat.stat-mech Subj-class: Statistical Mechanics \\ In this paper we discuss how partial knowledge of the density of states for a model can be used to give good approximations of the energy distributions in a given temperature range. From these distributions one can then obtain the statistical moments corresponding to eg the internal energy and the specific heat. These questions have gained interest apropos of several recent methods for estimating the density of states of spin models. As a worked example we finally apply these methods to the 3-state Potts model for cubic lattices of linear order up to 128. We give estimates of eg latent heat and critical temperature, as well as the microcanonical properties of interest. \\ ( http://arXiv.org/abs/cond-mat/0612465 , 133kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0612138 Date: Sat, 16 Dec 2006 03:38:17 GMT (362kb) Title: An ultrahigh Finesse Fabry-Perot superconducting resonator as a photon box for cavity-QED experiments Authors: Stefan Kuhr, S\'{e}bastien Gleyzes (LKB - Lhomond), Christine Guerlin (LKB - Lhomond), Julien Bernu (LKB - Lhomond), Ulrich Busk Hoff (LKB - Lhomond), Samuel Del\'{e}glise (LKB - Lhomond), Michel Brune (LKB - Lhomond), Jean-Michel Raimond (LKB - Lhomond), Serge Haroche (LKB - Lhomond), Stefano Osnaghi (LKB - Lhomond), E. Jacques (DAPNIA), P. Bosland (DAPNIA), B. Visentin (DAPNIA) Categories: quant-ph Proxy: ccsd hal-00120654 \\ We have built a microwave Fabry-Perot resonator made of diamond-machined copper mirrors coated with superconducting niobium. Its damping time (Tc = 130 ms at 51 GHz and 0.8 K) corresponds to a finesse of 4.6 e9, the highest ever reached for a Fabry-Perot in any frequency range. We have tested this resonator by sending across it two circular Rydberg atoms, the first emitting a photon and the second absorbing it after a delay of 1/10 s. This long storage time photon box opens novel perspectives for quantum information. It can be used to perform sequences of hundreds of gate operations on individual atomic qubits. A set-up with one or two photon boxes can store mesoscopic fields made of hundreds of photons for decoherence and non-locality studies. \\ ( http://arXiv.org/abs/quant-ph/0612138 , 362kb) ------------------------------------------------------------------------------ \\ Paper: physics/0612163 Date: Sat, 16 Dec 2006 12:36:49 GMT (501kb) Title: A supersonic beam of cold lithium hydride molecules Authors: S. K. Tokunaga, J. O. Stack, J. J. Hudson, B. E. Sauer, E. A. Hinds and M. R. Tarbutt Categories: physics.atom-ph Comments: 8 pages, 6 figures Subj-class: Atomic Physics \\ We have developed a source of cold LiH molecules for Stark deceleration and trapping experiments. Lithium metal is ablated from a solid target into a supersonically expanding carrier gas. The translational, rotational and vibrational temperatures are 0.9(0.1) K, 5.9(0.5) K and 468(17) K respectively. Although they have not reached thermal equilibrium with the carrier gas, we estimate that 90% of the LiH molecules are in the ground state, X^{1} \Sigma^{+} (v=0, J=0). With a single 7 ns ablation pulse, the number of molecules in the ground state is 4.5(1.8)*10^{7} molecules per steradian. A second, delayed, ablation pulse produces another LiH beam in a different part of the same gas pulse, thereby almost doubling the signal. A long pulse, lasting 150 microseconds, can make the beam up to 15 times more intense. \\ ( http://arXiv.org/abs/physics/0612163 , 501kb) ------------------------------------------------------------------------------ \\ Paper: physics/0612162 Date: Sat, 16 Dec 2006 11:36:27 GMT (166kb) Title: Tasting edge effects Authors: Lyderic Bocquet Categories: physics.ed-ph physics.gen-ph Comments: to appear in American Journal of Physics Subj-class: Physics Education; General Physics \\ We show that the baking of potato wedges constitutes a crunchy example of edge effects, which are usually demonstrated in electrostatics. A simple model of the diffusive transport of water vapor around the potato wedges shows that the water vapor flux diverges at the sharp edges in analogy with its electrostatic counterpart. This increased evaporation at the edges leads to the crispy taste of these parts of the potatoes. \\ ( http://arXiv.org/abs/physics/0612162 , 166kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612479 Date: Tue, 19 Dec 2006 10:30:46 GMT (116kb) Title: Fractional Quantum Hall States in Fast Rotating Bose Gases Authors: A. Lakhoua, T. Masson, J.C. Wallet Categories: cond-mat.mes-hall Subj-class: Mesoscopic Systems and Quantum Hall Effect \\ We use a Chern Simons Landau-Ginzburg (CSLG) framework related to hierarchies of composite bosons to describe 2D harmonically trapped fast rotating Bose gases in Fractional Quantum Hall Effect (FQHE) states. The predicted values for $\nu$ (ratio of particle to vortex numbers) are $\nu$$=$${{p}\over{q}}$ ($p$, $q$ are any integers) with even product $pq$, including numerically favored values previously found and predicting a richer set of values. We show that those values can be understood from a bosonic analog of the law of the corresponding states relevant to the electronic FQHE. A tentative global phase diagram for the bosonic system for $\nu$$<$1 is also proposed. \\ ( http://arXiv.org/abs/cond-mat/0612479 , 116kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612496 Date: Tue, 19 Dec 2006 18:10:49 GMT (42kb) Title: Imbalanced Fermi-Fermi mixtures in optical lattices Authors: M. Iskin and C. A. R. Sa de Melo Categories: cond-mat.supr-con cond-mat.other Comments: 4 pages with 3 figures Subj-class: Superconductivity; Other \\ The ground state phase diagram of imbalanced Fermi-Fermi mixtures in optical lattices is analyzed as a function of interaction strength, population imbalance, filling fraction and tunneling parameters. It is shown that population imbalanced Fermi-Fermi mixtures reduce to strongly interacting Bose-Fermi mixtures in the molecular limit, in sharp contrast with homogenous systems where the resulting Bose-Fermi mixture is weakly interacting. Furthermore, insulating phases are found in optical lattices of Fermi-Fermi mixtures in addition to the standard phase-separated or coexisting superfluid/excess-fermion phases found in homogeneous systems. The insulating states can be a molecular Bose-Mott insulator (BMI), a Fermi-Pauli insulator (FPI), phase-separated BMI/FPI mixture or a Bose-Fermi checkerboard (BFC). \\ ( http://arXiv.org/abs/cond-mat/0612496 , 42kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612498 Date: Tue, 19 Dec 2006 18:53:06 GMT (101kb) Title: Metastable states of a gas of dipolar bosons in a 2D optical lattice Authors: C. Menotti, C. Trefzger, and M.Lewenstein Categories: cond-mat.other cond-mat.dis-nn Comments: 4 pages, 4 figures Subj-class: Other; Disordered Systems and Neural Networks \\ We investigate the physics of dipolar bosons in a two dimensional optical lattice. It is known that due to the long-range character of dipole-dipole interaction, the ground state phase diagram of a gas of dipolar bosons in an optical lattice presents novel quantum phases, like checkerboard and supersolid phases. In this paper, we consider the properties of the system beyond its ground state, finding that it is characterised by a multitude of almost degenerate metastable states, often competing with the ground state. This makes dipolar bosons in a lattice similar to a disordered system and opens possibilities of using them for quantum memories. \\ ( http://arXiv.org/abs/cond-mat/0612498 , 101kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612499 Date: Tue, 19 Dec 2006 18:34:06 GMT (254kb) Title: What makes a crystal supersolid ? Authors: Nikolay Prokof'ev Categories: cond-mat.dis-nn cond-mat.stat-mech Comments: Perspective to appear in Advances in Physics, 25 pages, 7 figures Subj-class: Disordered Systems and Neural Networks; Statistical Mechanics \\ For nearly half a century the supersolid phase of matter has remained mysterious, not only eluding experimental observation, but also generating a great deal of controversy among theorists. Recent discovery of what is interpreted as a non-classical moment of inertia at low temperature in solid He-4 has elicited much excitement as a possible first observation of a supersolid phase. In the two years following the discovery, however, more puzzles than answers have been provided to the fundamental issue of whether the supersolid phase exists, in helium or any other naturally occurring condensed matter system. Presently, there is no established theoretical framework to understand the body of experimental data on He-4. Different microscopic mechanisms that have been suggested to underlie superfluidity in a perfect quantum crystal do not seem viable for \he4, for which a wealth of experimental and theoretical evidence points to an insulating crystalline ground state. This perspective addresses some of the outstanding problems with the interpretation of recent experimental observations of the apparent superfluid response in He-4 (seen now by several groups) and discusses various scenarios alternative to the homogeneous supersolid phase, such as superfluidity induced by extended defects of the crystalline structure which include grain boundaries, dislocations, anisotropic stresses, etc. Can a metastable superfluid "glassy" phase exist, and can it be relevant to some of the experimental observations ? One of the most interesting and unsolved fundamental questions is what interatomic potentials, given the freedom to design one, can support an ideal supersolid phase in continuous space, and can they be found in Nature. \\ ( http://arXiv.org/abs/cond-mat/0612499 , 254kb) ------------------------------------------------------------------------------ \\ Paper: physics/0612182 Date: Tue, 19 Dec 2006 09:38:53 GMT (239kb) Title: Dimension of holes and high-temperature condensate in Bose--Einstein statistics Authors: V. P. Maslov Categories: physics.gen-ph Comments: 14 pages, 3 figures Subj-class: General Physics \\ We introduce the notion of weight for the lattice dimension and the notion of topological dimension -- hole dimension. The condensate in Bose-holes exists in the case when temperature in not low. \\ ( http://arXiv.org/abs/physics/0612182 , 239kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612505 Date: Tue, 19 Dec 2006 22:13:17 GMT (921kb) Title: Supersolidity is Dirty Authors: Jiansheng Wu and Philip Phillips Categories: cond-mat.dis-nn cond-mat.supr-con Comments: 4 pages, 2 .eps figures Subj-class: Disordered Systems and Neural Networks; Superconductivity \\ A microscopic model for the supersolid phase in $^4$He is given. On a grain boundary, atom motion is well described by a disordered Bose-Hubbard model. We argue that the clean system is a commensurate Mott insulator but in the presence of disorder, a supersolid state obtains. At work is the disorder-induced closing of the Mott gap. We find that the transition temperature to the supersolid state is an increasing function of disorder as is seen experimentally. We also find that a glassy phase mediated by disorder possesses a period shift, though it lacks superflow. This latter observation is relevant to solid hydrogen in which a period shift is observed without superflow. \\ ( http://arXiv.org/abs/cond-mat/0612505 , 921kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612508 Date: Wed, 20 Dec 2006 01:59:34 GMT (257kb) Title: Using modified Gaussian distribution to study the physical properties of one and two-component ultracold atoms Authors: C. C. Huang and W. C. Wu Categories: cond-mat.other cond-mat.stat-mech Comments: 7 pages, 7 figures, accepted for publication in Phys. Rev. A Subj-class: Other; Statistical Mechanics \\ Gaussian distribution is commonly used as a good approximation to study the trapped one-component Bose-condensed atoms with relatively small nonlinear effect. It is not adequate in dealing with the one-component system of large nonlinear effect, nor the two-component system where phase separation exists. We propose a modified Gaussian distribution which is more effective when dealing with the one-component system with relatively large nonlinear terms as well as the two-component system. The modified Gaussian is also used to study the breathing modes of the two-component system, which shows a drastic change in the mode dispersion at the occurrence of the phase separation. The results obtained are in agreement with other numerical results. \\ ( http://arXiv.org/abs/cond-mat/0612508 , 257kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612522 Date: Wed, 20 Dec 2006 13:03:54 GMT (24kb) Title: Mott-insulator phases of non-locally coupled 1D dipolar Bose gases Authors: A. Arguelles and L. Santos Categories: cond-mat.other Comments: 4 pages, 4 eps figures Subj-class: Other \\ We analyze the Mott-insulator phases of dipolar bosonic gases placed in neighboring but unconnected 1D traps. Whereas for short-range interactions the 1D systems are independent, the non-local dipole-dipole interaction induces a direct Mott-insulator to pair-superfluid transition which significantly modifies the boundaries of the lowest Mott-insulator phases. The lowest boundary of the lowest Mott regions becomes progressively constant as a function of the hopping rate, eventually inverting its slope, leading to a re-entrant configuration which is retained in 2D. We discuss the consequences of this effect on the spatial Mott-insulator plateaux in experiments with additional harmonic confinement, showing that anti-intuitively the plateaux may become wider for increasing hopping. Our results are also applicable to non-dipolar boson-boson mixtures. \\ ( http://arXiv.org/abs/cond-mat/0612522 , 24kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612524 Date: Wed, 20 Dec 2006 13:24:00 GMT (70kb) Title: Confinement effects on the stimulated dissociation of molecular BECs Authors: I. Tikhonenkov and A. Vardi Categories: cond-mat.other Comments: 4 pages, 4 figures Subj-class: Other \\ We show that a molecular BEC in a trap is stabilized against stimulated dissociation if the trap size is smaller than the resonance healing length $(\hbar^2/2mg\sqrt{n})^{1/2}$. The condensate shape determines the critical atom-molecule coupling frequency. We discuss an experiment for triggering dissociation by a sudden change of coupling or trap parameters. This effect demonstrates one of the unique collective features of 'superchemistry' in that the yield of a chemical reaction depends critically on the size and shape of the reaction vessel. \\ ( http://arXiv.org/abs/cond-mat/0612524 , 70kb) ------------------------------------------------------------------------------ \\ Paper (*cross-listing*): nucl-th/0612086 Date: Tue, 19 Dec 2006 14:43:10 GMT (141kb) Title: Effective Field Theory for Dilute Fermions with Pairing Authors: R.J. Furnstahl, H.-W. Hammer, S.J. Puglia Categories: cond-mat.supr-con Comments: 31 pages, 10 figures Report-no: HISKP-TH-06-40 Subj-class: Nuclear Theory; Superconductivity \\ Effective field theory (EFT) methods for a uniform system of fermions with short-range, natural interactions are extended to include pairing correlations, as part of a program to develop a systematic Kohn-Sham density functional theory (DFT) for medium and heavy nuclei. An effective action formalism for local composite operators leads to a free-energy functional that includes pairing by applying an inversion method order-by-order in the EFT expansion. A consistent renormalization scheme is demonstrated for the uniform system through next-to-leading order, which includes induced-interaction corrections to pairing. \\ ( http://arXiv.org/abs/nucl-th/0612086 , 141kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612544 Date: Thu, 21 Dec 2006 12:14:42 GMT (166kb) Title: Ground states of hard-core bosons in one dimensional periodic potentials Authors: Yuan Lin and Biao Wu Categories: cond-mat.other Comments: 5 pages, 5 figures Subj-class: Other \\ With Girardeau's Fermi-Bose mapping, we find the exact ground states of hard-core bosons residing in a one dimensional periodic potential. The analysis of these ground states shows that when the number of bosons $N$ is commensurate with the number of wells $M$ in the periodic potential, the boson system is a Mott insulator whose energy gap, however, is given by the single-particle band gap of the periodic potential; when $N$ is not commensurate with $M$, the system is a metal (not a superfluid). The Kronig-Penney potential is used to illustrate our results. \\ ( http://arXiv.org/abs/cond-mat/0612544 , 166kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612565 Date: Thu, 21 Dec 2006 19:11:01 GMT (155kb) Title: Novel quantum phases of dipolar Bose gases in optical lattices Authors: S. Yi, T. Li, C. P. Sun Categories: cond-mat.other Comments: 4 pages, 4 figures Subj-class: Other \\ We investigate the quantum phases of dipolar Bose gases loaded into a two dimension square and three dimension cubic optical lattices. We show that the long-range and anisotropic nature of the dipole-dipole interaction induces a rich variety of quantum phases, including the supersolid and striped supersolid phases in 2D lattices, and the layered supersolid phase in 3D lattices. \\ ( http://arXiv.org/abs/cond-mat/0612565 , 155kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0612180 Date: Thu, 21 Dec 2006 15:10:30 GMT (891kb) Title: Designing spin-1 lattice models using polar molecules Authors: Gavin K. Brennen, Andrea Micheli, and Peter Zoller Comments: 24 pages, 5 figures \\ We describe how to design a large class of always on spin-1 interactions between polar molecules trapped in an optical lattice. The spin degrees of freedom correspond to the hyperfine levels of a ro-vibrational ground state molecule. Interactions are induced using a microwave field to mix ground states in one hyperfine manifold with the spin entangled dipole-dipole coupled excited states. Using multiple fields anistropic models in one, two, or three dimensions, can be built with tunable spatial range. An illustrative example in one dimension is the generalized Haldane model, which at a specific parameter has a gapped valence bond solid ground state. The interaction strengths are large compared to decoherence rates and should allow for probing the rich phase structure of strongly correlated systems, including dimerized and gapped phases. \\ ( http://arXiv.org/abs/quant-ph/0612180 , 891kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612567 Date: Thu, 21 Dec 2006 21:53:20 GMT (50kb) Title: Sound propagation in a Fermi gas near a Feshbach resonance Authors: J. Joseph, B. Clancy, L. Luo, J. Kinast, A. Turlapov, and J. E. Thomas Categories: cond-mat.other Comments: 4 pages, 5 figures Subj-class: Other \\ Sound waves are observed and studied in an optically trapped degenerate Fermi gas of spin-up and spin-down atoms with magnetically tunable interactions. Measurements are made throughout the crossover region, from a weakly-interacting Fermi gas through the resonant Fermi superfluid regime to a Bose condensate of dimer molecules. The measured sound velocities test the equation of state and confirm the universal hypothesis. \\ ( http://arXiv.org/abs/cond-mat/0612567 , 50kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612572 Date: Fri, 22 Dec 2006 15:10:02 GMT (31kb) Title: Stable and unstable regimes in Bose-Fermi mixture with attraction between components Authors: A.M. Belemuk, S.-T. Chui, V. N. Ryzhov Categories: cond-mat.stat-mech cond-mat.dis-nn Comments: 7 pages, 5 figures Subj-class: Statistical Mechanics; Disordered Systems and Neural Networks \\ A collapse of the trapped boson- fermion mixture with the attraction between bosons and fermions is investigated in the framework of the effective Hamiltonian for the Bose system. The properties of the $^{87}$Rb and $^{40}$K mixture are analyzed quantitatively at $T= 0$. We find numerically solutions of modified Gross- Pitaevskii equation which continuously go from stable to unstable branch. We discuss the relation of the onset of collapse with macroscopic properties of the system. A comparison with the case of a Bose condensate of atomic $^7Li$ system is given. \\ ( http://arXiv.org/abs/cond-mat/0612572 , 31kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612590 Date: Fri, 22 Dec 2006 14:45:11 GMT (103kb) Title: Ground-State Fidelity and Bipartite Entanglement in the Bose-Hubbard Model Authors: Pierfrancesco Buonsante and Alessandro Vezzani Categories: cond-mat.other Comments: 7 pages, 5 figures (endfloats used due to problems with figures and latex. Sorry about that) Subj-class: Other \\ We analyze the quantum phase transition in the Bose-Hubbard model borrowing two tools from Quantum Information Theory, i.e. the ground-state fidelity and entanglement measures. We consider systems at unitary filling comprising up to 50 sites and show for the first time that a finite-size scaling analysis in terms of these quantities provides excellent estimates for the quantum critical point. We conclude that fidelity is particularly suited for revealing a quantum phase transition and pinning down the critical point thereof, while entanglement measures give a deeper insight into the mechanisms governing the transition. \\ ( http://arXiv.org/abs/cond-mat/0612590 , 103kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612592 Date: Fri, 22 Dec 2006 15:36:10 GMT (36kb) Title: Wigner Crystallization in Fast Rotating 2D Dipolar Fermi Gases Authors: M.A. Baranov, H. Fehrmann, and M. Lewenstein Categories: cond-mat.mes-hall Comments: 4 pages, 3 figures Subj-class: Mesoscopic Systems and Quantum Hall Effect \\ We study the competition between the Wigner crystal and the Laughlin liquid states in an ultracold quasi two-dimensional rapidly rotating polarized dipolar fermionic gas, and find that below a critical filling factor, the Wigner crystal has a lower energy. We examine the corresponding quantum crystal melting transition for different confinements of the gas in the third direction. Our analysis of the phonon spectra of the Wigner crystal with the account of phonon-phonon interactions also shows the stability of the Wigner crystal for sufficiently low filling factors (\nu <1/7). \\ ( http://arXiv.org/abs/cond-mat/0612592 , 36kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612601 Date: Fri, 22 Dec 2006 19:57:24 GMT (200kb) Title: Universality Constraints on Three-Body Recombination for Cold Atoms: from 4He to 133Cs Authors: Eric Braaten, Daekyoung Kang, Lucas Platter Categories: cond-mat.other Comments: 17 pages, 6 eps figures Subj-class: Other \\ For atoms with large scattering length, the dependence of the 3-body recombination rate on the collision energy is determined by the scattering length and the Efimov 3-body parameters and can be expressed in terms of universal functions of a single scaling variable. We use published results on the 3-body recombination rate for 4He atoms to constrain the universal functions. We then use those universal functions to calculate the 3-body recombination rate for other atoms with large scattering length at nonzero temperature. The constraints from the 4He results are strong if the scattering length is near the minimum of the 3-body recombination rate at threshold. We apply our results to 133Cs atoms with a large positive scattering length, and compare them with experimental results from the Innsbruck group. \\ ( http://arXiv.org/abs/cond-mat/0612601 , 200kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0612189 Date: Fri, 22 Dec 2006 01:45:04 GMT (329kb) Title: Semiclassical limits to the linewidth of an atom laser Authors: Mattias Johnsson, Simon Haine, Joseph Hope, Nick Robins, Cristina Figl, Matthew Jeppesen, Julien Dugu\'{e}, John Close Categories: quant-ph \\ We investigate the linewidth of a quasi-continuous atom laser within a semiclassical framework. In the high flux regime, the lasing mode can exhibit a number of undesirable features such as density fluctuations. We show that the output therefore has a complicated structure that can be somewhat simplified using Raman outcoupling methods and energy-momentum selection rules. In the weak outcoupling limit, we find that the linewidth of an atom laser is instantaneously Fourier limited, but, due to the energy chirp' associated with the draining of a condensate, the long-term linewidth of an atom laser is equivalent to the chemical potential of the condensate source. We show that correctly sweeping the outcoupling frequency can recover the Fourier-limited linewidth. \\ ( http://arXiv.org/abs/quant-ph/0612189 , 329kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0612191 Date: Fri, 22 Dec 2006 03:21:43 GMT (50kb) Title: Multimode quantum limits to the linewidth of an atom laser Authors: Mattias T. Johnsson and Joseph J. Hope \\ The linewidth of an atom laser can be limited by excitation of higher energy modes in the source Bose-Einstein condensate, energy shifts in that condensate due to the atomic interactions, or phase diffusion of the lasing mode due to those interactions. The first two are effects that can be described with a semiclassical model, and have been studied in detail for both pumped and unpumped atom lasers. The third is a purely quantum statistical effect, and has been studied only in zero dimensional models. We examine an unpumped atom laser in one dimension using a quantum field theory using stochastic methods based on the truncated Wigner approach. This allows spatial and statistical effects to be examined simultaneously, and the linewidth limit for unpumped atom lasers is quantified in various limits. \\ ( http://arXiv.org/abs/quant-ph/0612191 , 50kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0612200 Date: Fri, 22 Dec 2006 17:40:50 GMT (254kb) Title: Coherent optical detection of highly excited Rydberg states using electromagnetically induced transparency Authors: A. K. Mohapatra, T. R. Jackson, C. S. Adams Categories: quant-ph Comments: Submitted to Physical Review Letters \\ We observe electromagnetically induced transparency (EIT) on the 5s to 5p transition in a room temperature rubidium vapour cell by coupling the 5p state to a Rydberg state (ns or nd with n=26 to 124). We demonstrate that the narrow line-width of the EIT resonance (2 MHz) allows precise measurement of the d state fine structure splitting, and together with the sensitivity of the Rydberg state to electric fields, we are able to detect transient electric fields produced by the dynamics of charges within the cell. Coherent coupling of Rydberg states via EIT could also be used for cross-phase modulation and photon entanglement. \\ ( http://arXiv.org/abs/quant-ph/0612200 , 254kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612613 Date: Sun, 24 Dec 2006 10:13:14 GMT (304kb) Title: Basic theory tools for degenerate Fermi gases Authors: Yvan Castin (LKB - Lhomond) Categories: cond-mat.other Comments: 60 pages, Proceedings of the Enrico Fermi Varenna School on Fermi gases (2006) Proxy: ccsd hal-00122049 Subj-class: Other \\ This is an introductory lecture to the theory of degenerate Fermi gases, in the context of present experiments on atomic Fermi gases. In part one, some properties of the ideal Fermi gas are presented, including a discussion of the fluctuations of the number of fermions in a given spatial zone in 1D, 2D and 3D. In part two, two-body aspects of the interaction potential are discussed and several possible models for the interaction are analyzed, including the two-channel model for the Feshbach resonance. In part three, basic predictions of zero temperature BCS theory are presented, including a derivation of superfluid hydrodynamic equations from time dependent BCS theory. \\ ( http://arXiv.org/abs/cond-mat/0612613 , 304kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612616 Date: Sun, 24 Dec 2006 16:52:04 GMT (65kb) Title: Role of excited states in the splitting dynamics of interacting Bose-Einstein condensates when rumping-up a barrier Authors: Alexej I. Streltsov, Ofir E. Alon and Lorenz S. Cederbaum Categories: cond-mat.other Comments: 11 pages, 3 figures Subj-class: Other \\ An essentially-exact approach to compute the wavefunction in the time-dependent many-boson Schr\"odinger equation is derived and employed to study accurately the process of splitting a trapped condensate when rumping-up a barrier such that a double-well is formed. We follow the role played by many-body excited states during the splitting process. Among others, a 'counter-intuitive' regime is found in which the evolution of the condensate when the barrier is rumped-up sufficiently slow {\it is not} to the ground-state which is a fragmented condensate, but to a low-lying excited-state which is a coherent condensate. Experimental implications are discussed. \\ ( http://arXiv.org/abs/cond-mat/0612616 , 65kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0612201 Date: Sun, 24 Dec 2006 00:10:55 GMT (267kb) Title: Decoherence-induced geometric phase in a multilevel atomic system Authors: Shubhrangshu Dasgupta and Daniel A. Lidar (USC) Categories: quant-ph Comments: 9 pages, 12 figures \\ We consider the STIRAP process in a three-level atom. Viewed as a closed system, no geometric phase is acquired. But in the presence of spontaneous emission and/or collisional relaxation we show numerically that a non-vanishing, purely real, geometric phase is acquired during STIRAP, whose magnitude grows with the decay rates. Rather than viewing this decoherence-induced geometric phase as a nuisance, it can be considered an example of "beneficial decoherence": the environment provides a mechanism for the generation of geometric phases which would otherwise require an extra experimental control knob. \\ ( http://arXiv.org/abs/quant-ph/0612201 , 267kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0612213 Date: Tue, 26 Dec 2006 21:42:12 GMT (889kb) Title: Control of the geometric phase and pseudo-spin dynamics on coupled Bose-Einstein condensates Authors: E.I. Duzzioni, L. Sanz, S.S. Mizrahi and M.H.Y. Moussa Categories: quant-ph Comments: 1 tex file and 11 figures in pdf format \\ We describe the behavior of two coupled Bose-Einstein condensates in time-dependent (TD) trap potentials and TD Rabi (or tunneling) frequency, using the two-mode approach. Starting from Bloch states, we succeed to get analytical solutions for the TD Schroedinger equation and present a detailed analysis of the relative and geometric phases acquired by the wave function of the condensates, as well as their population imbalance. We also establish a connection between the geometric phases and constants of motion which characterize the dynamic of the system. Besides analyzing the affects of temporality on condensates that differs by hyperfine degrees of freedom (internal Josephson effect), we also do present a brief discussion of a one specie condensate in a double-well potential (external Josephson effect). \\ ( http://arXiv.org/abs/quant-ph/0612213 , 889kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0612214 Date: Wed, 27 Dec 2006 07:35:06 GMT (502kb) Title: Controllable Majorana transition in spinor Bose-Einstein condensates Authors: Lin Xia, Xu Xu, Fan Yang, Wei Xiong, Juntao Li, Qianli Ma, Xiaoji Zhou, Hong Guo and Xuzong Chen Categories: quant-ph Comments: 5 pages, 5 figures \\ Controllable Majorana transition in spinor BEC system has been realized by altering the rotation frequency of the magnetic fleld's direction. The population of spinor states can be conveniently manipulated by adjusting the turn-off time of the trap coils in experiment, which provides a new tool to manipulate quantum states. Using the Majorana transition process on pulsed atom laser, multicomponent spinor atom laser is generated. We demonstrate that the experiment results are agreed with the theoretical predication. \\ ( http://arXiv.org/abs/quant-ph/0612214 , 502kb) ------------------------------------------------------------------------------ \\ Paper: physics/0612233 Date: Sat, 23 Dec 2006 21:26:24 GMT (86kb) Title: Enhancement of Rydberg atom interactions using ac Stark shifts Authors: P. Bohlouli-Zanjani, J. A. Petrus, J. D. D. Martin Categories: physics.atom-ph Subj-class: Atomic Physics \\ The ac Stark effect was used to induce resonant energy transfer between translationally cold Rydberg atoms. The Rb Rydberg atoms were obtained by laser excitation of cold atoms from a magneto-optical trap. Using two-photon microwave spectroscopy of the 49s_{1/2}-50s_{1/2} transition, the field strength of a 28.5 GHz dressing field was calibrated. When this dressing field was set at specific field strengths, the two-atom dipole-dipole process 43d_{5/2} + 43d_{5/2} -> 45p_{3/2} + 41f was dramatically enhanced, due to induced degeneracy of the initial and final states. A series of observed resonant field strengths correspond to different magnetic sublevel possibilities for the initial and final states. These agree well with calculated resonance fields based on a perturbative ac Stark shift formula. This method for enhancing interactions is complementary to dc electric-field-induced resonant energy transfer, but has more flexibility due to the possibility of varying the applied frequency. \\ ( http://arXiv.org/abs/physics/0612233 , 86kb) ------------------------------------------------------------------------------ \\ Paper: physics/0612235 Date: Sun, 24 Dec 2006 07:33:33 GMT (84kb) Title: Rubidium lifetime in a dark magneto-optical trap Authors: O.I.Permyakova, A.V.Yakovlev, P.L.Chapovsky Categories: physics.atom-ph Comments: 12 pages, 4 figures Subj-class: Atomic Physics \\ Measurements of rubidium lifetime in a dark magneto-optical trap (DMOT) was performed at various populations of the bright and dark hyperfine states of the trapped atoms. The rubidium lifetime in the trap appeared to be shorter if the atom spent more time in the bright state. A simple explanation of this effect is based on the increase of the cross-section of rubidium collisions with the surrounding warm atoms upon rubidium electronic excitation. \\ ( http://arXiv.org/abs/physics/0612235 , 84kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612659 Date: Thu, 28 Dec 2006 07:23:15 GMT (65kb) Title: Super-shell structure in harmonically trapped fermionic gases and its semi-classical interpretation Authors: M. Ogren, Y. Yu, S. Aberg, S. M. Reimann, M. Brack Categories: cond-mat.other Comments: Final version of procedings for the 'Nilsson conference' Subj-class: Other Journal-ref: Phys. Scr. T125 (2006) 37-40 \\ It was recently shown in self-consistent Hartree-Fock calculations that a harmonically trapped dilute gas of fermionic atoms with a repulsive two-body interaction exhibits a pronounced {\it super-shell} structure: the shell fillings due to the spherical harmonic trapping potential are modulated by a beat mode. This changes the magic numbers'' occurring between the beat nodes by half a period. The length and amplitude of the beating mode depends on the strength of the interaction. We give a qualitative interpretation of the beat structure in terms of a semiclassical trace formula that uniformly describes the symmetry breaking U(3) $\to$ SO(3) in a 3D harmonic oscillator potential perturbed by an anharmonic term $\propto r^4$ with arbitrary strength. We show that at low Fermi energies (or particle numbers), the beating gross-shell structure of this system is dominated solely by the two-fold degenerate circular and (diametrically) pendulating orbits. \\ ( http://arXiv.org/abs/cond-mat/0612659 , 65kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612664 Date: Thu, 28 Dec 2006 18:22:12 GMT (70kb) Title: Vortex quantum creation and winding number scaling in a quenched spinor Bose gas Authors: Michael Uhlmann, Ralf Sch\"utzhold, and Uwe R. Fischer Categories: cond-mat.other Comments: 4 pages of RevTex4, 2 figures Subj-class: Other \\ Motivated by a recent experiment, we study non-equilibrium quantum phenomena taking place in the quench of a spinor Bose-Einstein condensate through the zero-temperature phase transition separating the paramagnetic and ferromagnetic phases. We derive the typical spin domain structure (correlations of the effective magnetization) created by the quench arising due to spin-mode fluctuations, and establish a sample-size scaling law for the creation of spin vortices, which are topological defects in the transverse magnetization. \\ ( http://arXiv.org/abs/cond-mat/0612664 , 70kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0612670 Date: Thu, 28 Dec 2006 19:42:23 GMT (28kb) Title: Anderson Localization of Expanding Bose-Einstein Condensates in Random Potentials Authors: Laurent Sanchez-Palencia (LCFIO), David Cl\'{e}ment (LCFIO), Pierre Lugan (LCFIO), Philippe Bouyer (LCFIO), Georgy V. Shlyapnikov (LPTMS), Alain Aspect (LCFIO) Categories: cond-mat.other Comments: 4 pages, 2 figures Proxy: ccsd hal-00122278 Subj-class: Other \\ We show that the expansion of an initially confined interacting 1D Bose-Einstein condensate can exhibit Anderson localization in a weak random potential. For speckle potentials used in quantum gases, the Fourier transform of the correlation function has a finite support and in 1D there is a mobility edge $\k_m=1/\sigma_R$, where $\sigma_R$ is the correlation length of the disorder. Then, for the initial healing length of the expanding condensate $\xi_{ini}>\sigma_R$ the localization is exponential, and for $\xi_{ini}<\sigma_R$ it changes to algebraic. \\ ( http://arXiv.org/abs/cond-mat/0612670 , 28kb) ------------------------------------------------------------------------------ \\ Paper (*cross-listing*): gr-qc/0407075 Date: Tue, 20 Jul 2004 00:31:46 GMT (16kb) Title: Gravitational Vacuum Condensate Stars Authors: Pawel O. Mazur, Emil Mottola (University of South Carolina, Los Alamos National Laboratory) Categories: gr-qc hep-ph hep-th quant-ph Comments: 17 pages, LaTeX file Journal-ref: Proc. Nat. Acad. Sci. 101 (2004) 9545-9550 DOI: 10.1073/pnas.0402717101 \\ A new final state of gravitational collapse is proposed. By extending the concept of Bose-Einstein condensation to gravitational systems, a cold, dark, compact object with an interior de Sitter condensate $p_{_V} = -\rho_{_V}$ and an exterior Schwarzschild geometry of arbitrary total mass $M$ is constructed. These are separated by a shell with a small but finite proper thickness $\ell$ of fluid with equation of state $p=+\rho$, replacing both the Schwarzschild and de Sitter classical horizons. The new solution has no singularities, no event horizons, and a global time. Its entropy is maximized under small fluctuations and is given by the standard hydrodynamic entropy of the thin shell, which is of order $k_{_B}\ell Mc/\hbar$, instead of the Bekenstein-Hawking entropy formula, $S_{_{BH}}= 4\pi k_{_B} G M^2/\hbar c$. Hence unlike black holes, the new solution is thermodynamically stable and has no information paradox. \\ ( http://arXiv.org/abs/gr-qc/0407075 , 16kb) ------------------------------------------------------------------------------ The replacements: ------------------------------------------------------------------------------ \\ Paper: quant-ph/0610029 replaced with revised version Thu, 14 Dec 2006 22:55:23 GMT (881kb) Title: Cavity QED determination of atomic number statistics in optical lattices Authors: W. Chen, D. Meiser, and P. Meystre Categories: quant-ph cond-mat.other physics.atom-ph Comments: 10 pages revtex, 13 figures Subj-class: Quantum Physics; Other; Atomic Physics \\ ( http://arXiv.org/abs/quant-ph/0610029 , 881kb) ------------------------------------------------------------------------------ \\ Paper: physics/0608021 replaced with revised version Fri, 15 Dec 2006 15:36:01 GMT (231kb) Title: From Optical Lattice Clocks to the Measurement of Forces in the Casimir Regime Authors: Peter Wolf (SYRTE, BIPM), Pierre Lemonde (SYRTE), Astrid Lambrecht (LKB - Jussieu), Sebastien Bize (SYRTE), Arnaud Landragin (SYRTE), Andre Clairon (SYRTE) Categories: physics.atom-ph Comments: revised and extended version Proxy: ccsd ccsd-00088469 Subj-class: Atomic Physics \\ ( http://arXiv.org/abs/physics/0608021 , 231kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0606706 replaced with revised version Mon, 18 Dec 2006 20:19:21 GMT (229kb) Title: Superfluidity and excitations at unitarity Authors: Dean Lee (North Carolina State University) Categories: cond-mat.stat-mech cond-mat.supr-con Comments: 40 pages, 19 figures, revised version includes new data on two-particle density correlations Subj-class: Statistical Mechanics; Superconductivity \\ ( http://arXiv.org/abs/cond-mat/0606706 , 229kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0609390 replaced with revised version Sun, 17 Dec 2006 18:13:10 GMT (85kb) Title: Precision Measurements of Collective Oscillations in the BEC-BCS Crossover Authors: A. Altmeyer, S. Riedl, C. Kohstall, M. Wright, R. Geursen, M. Bartenstein, C. Chin, J. Hecker Denschlag, R. Grimm Categories: cond-mat.other cond-mat.supr-con Comments: improved discussion of small ellipticity and anharmonicity corrections Subj-class: Other; Superconductivity \\ ( http://arXiv.org/abs/cond-mat/0609390 , 85kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0512149 replaced with revised version Mon, 18 Dec 2006 03:57:15 GMT (18kb) Title: Quantum state transfer from light to molecules via coherent two-color photo-association in an atomic Bose-Einstein condensate Authors: Hui Jing and Ming-Sheng Zhan Comments: 1 figure, accepted by Eur.Phys.J.D on Dec.15,2006 \\ ( http://arXiv.org/abs/quant-ph/0512149 , 18kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0608656 replaced with revised version Wed, 20 Dec 2006 01:55:38 GMT (306kb) Title: Dynamical vortex phases in a Bose-Einstein condensate driven by a rotating optical lattice Authors: Kenichi Kasamatsu, Makoto Tsubota Categories: cond-mat.other Comments: 4 pages, 3 figures Subj-class: Other Journal-ref: Phys. Rev. Lett. 97, 240404 (2006) \\ ( http://arXiv.org/abs/cond-mat/0608656 , 306kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603273 replaced with revised version Thu, 21 Dec 2006 18:38:39 GMT (581kb) Title: Ultracold atomic Bose and Fermi spinor gases in optical lattices Authors: K. Eckert, L. Zawitkowski, M.J. Leskinen, A. Sanpera, M.Lewenstein Categories: cond-mat.other Comments: 15 pages, 5 figures; a completely new and substantially expanded version with several errors corrected Subj-class: Other \\ ( http://arXiv.org/abs/cond-mat/0603273 , 581kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0610033 replaced with revised version Thu, 21 Dec 2006 12:57:23 GMT (8kb) Title: Macroscopic realism, wave-particle duality and the superposition principle Authors: N. L. Chuprikov Comments: Latex, 8 pages; the article is thoroughly rewritten \\ ( http://arXiv.org/abs/quant-ph/0610033 , 8kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0606322 replaced with revised version Sat, 23 Dec 2006 06:42:52 GMT (750kb) Title: Finite Temperature Phase Diagram of a Two-Component Fermi Gas with Density Imbalance Authors: Lianyi He, Meng Jin and Pengfei Zhuang Categories: cond-mat.supr-con cond-mat.other cond-mat.stat-mech physics.atom-ph Comments: Final published version Subj-class: Superconductivity; Other; Statistical Mechanics; Atomic Physics Journal-ref: Phys. Rev. B 74, 214516 (2006) DOI: 10.1103/PhysRevB.74.214516 \\ ( http://arXiv.org/abs/cond-mat/0606322 , 750kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0608472 replaced with revised version Sun, 24 Dec 2006 09:47:33 GMT (23kb) Title: Conserving Gapless Mean-Field Theory of a Multi-Component Bose-Einstein Condensate Authors: Yoshiyuki Kondo, Takafumi Kita Categories: cond-mat.stat-mech Comments: 8 pages, 7 figures Subj-class: Statistical Mechanics \\ ( http://arXiv.org/abs/cond-mat/0608472 , 23kb) ------------------------------------------------------------------------------ \\ Paper: physics/0612182 replaced with revised version Fri, 22 Dec 2006 11:16:03 GMT (239kb) Title: Dimension of holes and high-temperature condensate in Bose--Einstein statistics Authors: V. P. Maslov Categories: physics.gen-ph Comments: 14 pages, 3 figures; minor correction on p.11 Subj-class: General Physics \\ ( http://arXiv.org/abs/physics/0612182 , 239kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0607405 replaced with revised version Thu, 28 Dec 2006 20:54:51 GMT (513kb) Title: Density fingerprint of giant vortices in Fermi gases near a Feshbach resonance Authors: Hui Hu and Xia-Ji Liu Categories: cond-mat.supr-con cond-mat.str-el Comments: 4 pages and 5 figures, fig. 5 is changed. Editorially approved for publication in Physical Review A (as a Rapid Communication) Subj-class: Superconductivity; Strongly Correlated Electrons \\ ( http://arXiv.org/abs/cond-mat/0607405 , 513kb) ------------------------------------------------------------------------------ -- ========================================================================= Dr M. J. Davis, Senior Lecturer in Physics School of Physical Sciences, email: mdavis_at_physics.uq.edu.au University of Queensland, ph : +61 7 334 69824 Brisbane, QLD 4072, fax : +61 7 336 51242 Australia. http://www.physics.uq.edu.au/people/mdavis/ ========================================================================= Matt's arXiv selection: weekly summary of cold-atom papers from arXiv.org http://www.physics.uq.edu.au/people/mdavis/matts_arXiv/ ========================================================================= Legal stuff: Unless stated otherwise, this e-mail represents only the views of the sender and not the views of the University of Queensland ========================================================================= ` Received on Tue Jan 02 2007 - 16:12:24 EST This archive was generated by hypermail 2.2.0 : Thu May 08 2008 - 11:51:41 EST
2020-04-03 04:56:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6192246675491333, "perplexity": 12386.776313096409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510287.30/warc/CC-MAIN-20200403030659-20200403060659-00009.warc.gz"}
https://www.komal.hu/feladat?a=feladat&f=K379&l=en
Mathematical and Physical Journal for High Schools Issued by the MATFUND Foundation Already signed up? New to KöMaL? # Problem K. 379. (September 2013) K. 379. Kate sewed a button on her coat. The button has four holes in it as shown in the figure (the holes form the four vertices of a square). As the thread is pulled through the holes again and again, it produces various patterns, as viewed from the front. One such pattern is shown in the figure. How many different patterns may result, provided that at least two holes need to be used to fix the button to the coat? (6 pont) Deadline expired on October 10, 2013. Sorry, the solution is available only in Hungarian. Google translation Megoldás. Bármelyik két lyuk vagy össze van kötve cérnával, vagy nem. Ez minden lyuk-párra nézve 2 lehetőség. Mivel összesen $\displaystyle \binom42=6$ lehetséges lyuk-páros van, ez $\displaystyle 2^6=64$ lehetőség lenne, azonban ebből ki kell vonni azt az egy esetet, amikor semelyik két lyuk nincs összekötve. Tehát 63 mintát láthatunk a varrás után. ### Statistics: 247 students sent a solution. 6 points: 125 students. 5 points: 8 students. 4 points: 11 students. 3 points: 5 students. 2 points: 9 students. 1 point: 42 students. 0 point: 41 students. Unfair, not evaluated: 6 solutionss. Problems in Mathematics of KöMaL, September 2013
2021-04-11 09:04:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4137841761112213, "perplexity": 12652.098991383096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061820.19/warc/CC-MAIN-20210411085610-20210411115610-00323.warc.gz"}
http://cjcp.ustc.edu.cn/html/hxwlxb_en/2018/4/cjcp1804068.htm
Chinese Journal of Chemical Physics  2018, Vol. 31 Issue (4): 421-432 #### The article information Zhi-hao Gong, Zhou-fei Tang, Jian-shu Cao, Jianlan Wu Optimal Initialization of a Quantum System for an Efficient Coherent Energy Transfer Chinese Journal of Chemical Physics, 2018, 31(4): 421-432 http://dx.doi.org/10.1063/1674-0068/31/cjcp1804068 ### Article history Accepted on: June 21, 2018 Optimal Initialization of a Quantum System for an Efficient Coherent Energy Transfer Zhi-hao Gonga, Zhou-fei Tanga, Jian-shu Caob, Jianlan Wua Dated: Received on April 14, 2018; Accepted on June 21, 2018 a. Department of Physics, Zhejiang University, Hangzhou 310027, China; b. Department of Chemistry, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA *Author to whom correspondence should be addressed. Jianlan Wu, E-mail:jianlanwu@zju.edu.cn Abstract: For an energy transfer network, the irreversible depletion of excited electron energy occurs through either an efficient flow into an outer energy sink or an inefficient decay. With a small decay rate, the energy transfer efficiency is quantitatively reflected by the average life time of excitation energy before being trapped in the sink where the decay process is omitted. In the weak dissipation regime, the trapping time is analyzed within the exciton population subspace based on the secular Redfield equation. The requirement of the noise-enhanced energy transfer is obtained, where the trapping time follows an exact or approximate 1/Γ-scaling of the dissipation strength Γ. On the opposite side, optimal initial system states are conceptually constructed to suppress the 1/Γ-scaling of the trapping time and maximize the coherent transfer efficiency. Our theory is numerically testified in four models, including a biased two-site system, a symmetric three-site branching system, a homogeneous onedimensional chain, and an 8-chromophore FMO protein complex. Key words: Noise-enhanced energy transfer    Trapping-free subspace    Optimal initialization    Quantum dissipation Ⅰ. INTRODUCTION Optimizing system- and environment-related parameters is a fundamental question in quantum dynamics and thermodynamics, appearing in contexts of energy and charge transfer [1], quantum heat engine [2, 3], optimal control [4, 5], and many other problems. For example, the transfer efficiency of electronic excitation energy in biological photosynthetic protein complexes is maximized ($\sim$100%) at intermediate environmental parameters around the physiological condition. This optimization behavior can be observed in terms of the reorganization energy, the temporal-spatial correlation of bath, the static disorder, and many other parameters [6-22]. The environment surrounding a quantum system induces quantum dissipation [23, 24], which plays an interesting role of adjusting quantum transport processes such as energy transfer. In the strong dissipation limit, the conventional hopping kinetics predicts a small transfer rate and a low efficiency. In the opposite limit of a long-lasting quantum coherence, a delocalized eigenstate (exciton) leads to an instantaneous long-range transfer, but energy oscillates within the quantum system before being irreversibly absorbed by an outer energy sink. The transfer efficiency stays at a low level. As the dissipation strength is gradually applied to disrupt coherence of the quantum system, the transfer efficiency is significantly enhanced until reaching a maximum value at an intermediate dissipation strength. The phenomenon that the efficiency increases with the dissipation strength is termed with different names such as the noise-enhanced energy transfer (NEET) and environment-assisted quantum transport (ENAQT) [6-22]. Taking a biased two-site system as a simple example, we can interpret the NEET using the Förster resonance energy transfer (FRET) theory [25], where the energy transfer rate (more accurately the time integration of the rate kernel) is proportional to the spectral overlap between donor emission and acceptor absorption. The environmental noise broadens two lineshapes, which subsequently increases the spectral overlap and the transfer rate. A similar phenomenon is observed in the classical Kramer's theory, where the friction accelerates the diffusion in the energy space and also leads to the increase of the reaction rate in the weak damping regime [26]. For a general multi-site quantum network, the NEET has been interpreted with the concepts of the invariant subspace [9] and the trapping-free subspace [15]. In the eigen basis representation, the trapping-free subspace consists of excitons free of the irreversible trapping process (orthogonal to the trapping operator) and the assistance of the environmental noise can break this orthogonality for the enhancement of efficiency. In our previous study [15], the trapping-free subspace is demonstrated in a highly-symmetric dendrimer system, and a more comprehensive construction is required. As a contradiction, the time integration of the rate kernel for an unbiased two-site system approaches the infinity in the complete coherent limit and the transfer efficiency is maximized accordingly. Another example is a homogeneous one-dimensional (1D) chain, which is the simplest polymer model [27]. The coherent transfer efficiency can also be maximized by a ballistic quantum motion. It may seem that such systems disobey the NEET behavior. Instead, we take a different angle to think that the maximized coherent transfer efficiency is induced by a finely tuned initial system state. In other words, the NEET is a universal behavior of a quantum network with an irreversible population depletion but the optimization on the system initialization can suppress the NEET behavior, which is also universal. In this paper, we will perform a mathematical analysis to reveal this conceptual point and verify it in various model systems. In this work, we briefly review the theoretical modelling of a quantum dynamic network with an irreversible energy trapping process. The trapping time is approximated by its partial value within the exciton population subspace. Following a rational polynomial expression, the partial trapping time is analyzed to derive the constraints of the NEET as well as the optimal initial system states to suppress the $1/\Gamma$-scaling. Four model energy transfer networks are also numerically inspected in detail to verify our theory. Ⅱ. QUANTUM DISSIPATIVE DYNAMICS WITH AN IRREVERSIBLE TRAPPING PROCESS For an open quantum system with an irreversible population depletion process, e.g. the trapping process, the time evolution of the system reduced density matrix (RDM) $\rho_{\rm{S}}(t)$ is formally given by [11] $\begin{eqnarray} \dot{\rho}_{\rm{S}}(t) = -{\cal L}_{\rm{S}}\rho_{\rm{S}}(t) -{\cal L}_{\rm{t}}[\rho_{\rm{S}}(t)]-{\cal L}_{\rm{d}}\left[\rho_{\rm{S}}(t)\right] \label{eq_01} \end{eqnarray}$ (1) The three superoperators, $\{{\cal L}_{\rm{S}}, {\cal L}_{\rm{t}}, {\cal L}_{\rm{d}}\}$, refer to the dynamic processes of system quantum oscillation, irreversible trapping (yielding an efficient work), and environment-induced dissipation, respectively. The system Liouville superoperator, ${\cal L}_{{\rm{S}}}$=$i[H_{\rm{S}}, \cdots]$, arises from the commutator of the system Hamiltonian $H_{\rm{S}}$. Notice that an extra unit imaginary number is included in ${\cal L}_{{\rm{S}}}$, while the definition without this number is also widely used in literature [28]. Throughout this paper, the reduced Planck constant is set as unity ($\hbar$=1). In a 'local' basis set $\{|n=1, \cdots, N\rangle\}$, the system Hamiltonian is non-diagonal, expanded as $H^{\rm{L}}_{\rm{S}}$=$\displaystyle\sum_{m, n} H_{mn} |m\rangle\langle n|$. For an energy transfer network with a single excitation, which is the main focus of this paper, $|n\rangle$ represents a quantum state of the excitation localized at site $n$ [29]. Next we build the eigen basis set $\{|\varepsilon_i(i=1, \cdots, N)\rangle\}$ to diagonalize the system Hamiltonian as $H^{\rm{E}}_{\rm{S}}$=$\displaystyle\sum_i \varepsilon_i |\varepsilon_i\rangle\langle \varepsilon_i|$, where $\varepsilon_i$ is the $i$-th eigen energy. In this paper, the two basis representations are distinguished explicitly by the two superscripts, 'L' and 'E'. For simplicity, a non-Hermitian Hamiltonian $H_{\rm{t}}$ is employed to phenomenologically describe the irreversible trapping process, yielding ${\cal L}_{\rm{t}}[\rho(t)]$=$i (H_{\rm{t}}\rho(t)-\rho(t) H^+_{\rm{t}})$. If a trapping rate $k_{{\rm{t}}; n}$ is defined incoherently at each local site $n$, the trapping Hamiltonian is written as $H^{\rm{L}}_{\rm{t}}$=$\displaystyle\sum_n -i(k_{{\rm{t}}; n}/2)|n\rangle\langle n|$. In the local basis representation, the trapping superoperator ${\cal L}_{\rm{t}}$ is simplified to be [11] $\begin{eqnarray} {\cal L}^{\rm{L}}_{{\rm{t}}; mn, m' n'} = \delta_{m', m}\delta_{n', n}\frac{k_{{\rm{t}};m}+k_{{\rm{t}}; n}}{2} \label{eq_02} \end{eqnarray}$ (2) The trapping rates can be similarly assigned to delocalized excitons (eigenstates) if necessary [19]. The dissipation of a quantum system induced by an interaction between the system and the surrounding environment is reflected by the phenomena of population re-distribution and decoherence [23, 24]. In a microscopic description, we introduce the total Hamiltonian, $H_{\rm{tot}}$=$H_{\rm{S}}$+$H_{\rm{B}}$+$H_{{\rm{S}}{\rm{B}}}$, by excluding the non-Hermitian trapping Hamiltonian $H_\rm{t}$. Here $H_{\rm{B}}$ is the Hamiltonian of the bare bath and $H_{{\rm{S}}{\rm{B}}}$ is the system-bath interaction. In the local basis representation, $H_{{\rm{S}}{\rm{B}}}$ is assumed to follow the form of $H^{\rm{L}}_{{\rm{S}}{\rm{B}}}$=$\displaystyle\sum_n |n\rangle\langle n| B_n$ with $B_n$ being the bath operator. The initial condition of a system-bath factorized state, $\rho_{\rm{tot}}(0)$=$\rho_{\rm{S}}(0)\rho^{\rm{eq}}_{\rm{B}}$ with $\rho^{\rm{eq}}_{\rm{B}}$$\propto$$\exp(-\beta H_{\rm{B}})$, is also assumed. In the simplest case, the environment is modelled as a classical white noise. The quantum dissipation is described by the Haken-Strobl-Reineker (HSR) model as [30, 31] $\begin{eqnarray} {\cal L}^{\rm{L}}_{{\rm{d}}; mn, m' n'} = (1-\delta_{m, n})\delta_{m', m}\delta_{n', n}\Gamma \label{eq_03} \end{eqnarray}$ (3) where $\Gamma$ is a dephasing rate. In a formal manner, the environment-induced dissipation can be expressed exactly by the Nakajima-Zwanzig projection operator technique [32, 33]. With respect to the reference Hamiltonian, $H_0$=$H_{\rm{S}}$+$H_{\rm{B}}$, the system-bath interaction $H_{{\rm{S}}{\rm{B}}}$ is treated as a perturbation term. In the interaction picture of $H_0$, we perform two unitary transformations, $H_{{\rm{S}}{\rm{B}}}(t)$=$\exp(iH_0t)H_{{\rm{S}}{\rm{B}}}\exp(-iH_0t)$ and $\rho^{({\rm{I}})}_{\rm{S}}(t)$=$\exp(i H_0 t) \rho_{\rm{S}}(t) \exp(-i H_0 t)$. The quantum dissipation is formulated as [24] \begin{align} & {{\mathcal{L}}_{\text{d}}}[\rho _{\text{S}}^{(\text{I})}(t)]\to -\int_{0}^{t}{\text{d}}\tau {\mathcal{P}}{{\mathcal{L}}_{\text{SB}}}(t){{\mathcal{T}}_{+}}\left[ {{\text{e}}^{-\int_{\tau }^{t}{\text{d}}{\tau }'{\mathcal{Q}}{{\mathcal{L}}_{\text{SB}}}({\tau }')}} \right]\cdot \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\mathcal{Q}}{{\mathcal{L}}_{\text{SB}}}(\tau ){\mathcal{P}}\rho _{\text{tot}}^{(\text{I})}(\tau ) \\ \end{align} (4) where ${\cal L}_{{\rm{S}}{\rm{B}}}(t)$=$i[H_{{\rm{S}}{\rm{B}}}(t), \cdots]$ is a Liouville superoperator, and ${\cal P}$=$\rho^{\rm{eq}}_{\rm{B}}\}{\rm{Tr}}_{\rm{B}}\{$ and ${\cal Q}$=${\cal I}$$-$${\cal P}$ are two orthogonal projection operators. The practical difficulty of Eq.(4) is caused by the ambiguous definition of ${\cal Q}$. On the second order perturbation of $H_{{\rm{S}}{\rm{B}}}$, Eq.(4) is simplified to be [24] \begin{align} & {{\mathcal{L}}_{\text{d}}}[\rho _{\text{S}}^{(\text{I})}(t)]\approx -\int_{0}^{t}{\text{d}}\tau \text{T}{{\text{r}}_{\text{B}}}\{{{\mathcal{L}}_{\text{SB}}}(t){{\mathcal{L}}_{\text{SB}}}(\tau )\rho _{\text{S}}^{(\text{I})}(\tau )\rho _{\text{B}}^{\text{eq}}\} \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\int_{0}^{t}{\text{d}}\tau \text{T}{{\text{r}}_{\text{B}}}\{[{{H}_{\text{SB}}}(t),[{{H}_{\text{SB}}}(\tau ),\rho _{\text{S}}^{(\text{I})}(\tau )\rho _{\text{B}}^{\text{eq}}]]\} \\ \end{align} (5) which is a good approximation in the limit of weak dissipation. Furthermore, the Born-Markov approximation and the random phase approximation are applied. In the Schrödinger picture and the eigen basis representation, Eq.(5) is simplified to be $\begin{eqnarray} {\cal L}_{\rm{d}}[\rho^{\rm{E}}_{\rm{S}}(t)] &\approx& \sum\limits_{i, j} R_{ii, jj}\rho^{\rm{E}}_{{\rm{S}}; jj}(t)+\sum\limits_{i\neq j} R_{ij, ij} \rho^{\rm{E}}_{{\rm{S}}; ij}(t)\quad \label{eq_06} \end{eqnarray}$ (6) where $R_{ii, jj}$ and $R_{ij, ij}$ are the Redfield tensors. Eq.(6) is named the secular Redfield equation [29, 34], which can be transformed into the form of the Lindblad equation [24, 25]. Under a condition that the environment is a Gaussian noise (e.g., the bosonic bath), the projection operator ${\cal Q}$ can be explicitly expanded over a series of auxiliary dynamic elements [36-54]. A complete basis set, $\boldsymbol \sigma$=$\{\sigma_0(t)$=$\rho_{\rm{S}}(t), \sigma_1(t), \cdots\}$, is constructed, where $\sigma_h(t)$ is the set of dynamic variables on the $h$-th order of the hierarchic expansion. In a shorthand notation, quantum dissipation is described exactly by the hierarchical equations of motion (HEOM) [36-54], $\begin{eqnarray} \dot{\boldsymbol\sigma}(t) = - {\cal W} \boldsymbol\sigma(t) \label{eq_07} \end{eqnarray}$ (7) where the transition rate matrix ${\cal W}$ is block tridiagonal, ${\cal W}_{h, h'}$=${\cal W}_{h, h} \delta_{h', h}$+${\cal W}_{h, h\pm1}\delta_{h', h\pm 1}$. For the factorized initial state, $\rho_{\rm{tot}}(0)$=$\rho_{\rm{S}}(0)\rho^{\rm{eq}}_{\rm{B}}$, all the high order auxiliary dynamic elements vanish initially, $\sigma_{h\ge 1}(0)$=0, so that the block matrix inversion leads to a time-convolution form, $\begin{eqnarray} {\cal L}_{\rm{d}}[\rho_{\rm{S}}(t)] =\int_0^t \textrm{d}\tau {\cal L}_{\rm{d}}(t-\tau)\rho_{\rm{S}}(\tau) \label{eq_08} \end{eqnarray}$ (8) The dissipation kernel ${\cal L}_{\rm{d}}(t)$ is obtained by the inverse Laplace transform (LT$^{-1}$) of a continued fraction expression [55], $\begin{eqnarray} {\cal L}_{\rm{d}}(t) = \mathit{\rm{LT}}^{-1}\bigg[{\cal W}_{0, 1}\cdot\nonumber\\ &&\frac{1}{z+{\cal W}_{1, 1}+{\cal W}_{1, 2}\frac{1}{z+{\cal W}_{2, 2}+\cdots}{\cal W}_{2, 1} }{\cal W}_{1, 0} \bigg] \label{eq_09} \end{eqnarray}$ (9) In general, the equation of motion in Eq.(1) can be formally solved in the Laplace $z$-space, yielding $\begin{eqnarray} \tilde{\rho}_{\rm{S}}(z) = \left[z+{\cal L}_{{\rm{s}}}+{\cal L}_{\rm{t}}+\tilde{{\cal L}}_{\rm{d}}(z)\right]^{-1}\rho_{\rm{S}}(0) \label{eq_10} \end{eqnarray}$ (10) where $\tilde{\rho}_{\rm{S}}(z)$ and $\tilde{{\cal L}}_{\rm{d}}(z)$ are the Laplace transforms of $\rho_{\rm{S}}(t)$ and ${\cal L}_{\rm{d}}(t)$, respectively. A summary of various quantum dissipation methods can be found in a recent review [1]. For an irreversible quantum dynamic network, a key quantity is the trapping time $\langle t\rangle$, which is the sum of the survival time at each site, i.e., $\langle t\rangle$=$\displaystyle\sum_n \int_0^\infty \rho^{\rm{L}}_{{\rm{S}}; nn}(t) \textrm{d}t$, with $\rho^{\rm{L}}_{{\rm{S}}; nn}(t)$ being the population of site $n$ at time $t$ [11]. Eq.(10) allows us to calculate the trapping time using \begin{align} & \langle t\rangle =\text{T}{{\text{r}}_{\text{S}}}\{{{{\tilde{\rho }}}_{\text{S}}}(z=0)\} \\ & \ \ \ \ =\text{T}{{\text{r}}_{\text{S}}}\left\{ {{\left[ {{\mathcal{L}}_{\text{S}}}+{{\mathcal{L}}_{\text{t}}}+{{\widetilde{\mathcal{L}}}_{\text{d}}}(z=0) \right]}^{-1}}{{\rho }_{\text{S}}}(0) \right\} \\ \end{align} (11) Since $\langle t\rangle$ only depends on the dissipation superoperator at $z$=0, the abbreviation, ${\cal L}_{\rm{d}}$$\equiv$$\tilde{{\cal L}}_{\rm{d}}$($z$=0), is introduced. For an energy transfer network, inefficient energy loss as heat or light can be phenomenologically described by an additional decay process characterized by a decay rate $k_{\mathit{\rm{decay}}}$. The energy transfer efficiency is defined as the cumulated population flow to the energy trapping process, i.e., $q$=$\displaystyle\int_0^\infty {\cal L}_{{\rm{t}}} \rho_{\rm{S}}(t) \textrm{d}t$. If the decay rate $k_{\mathit{\rm{decay}}}$ is much smaller than the trapping rate $k_{\rm{t}}$, the energy transfer efficiency is approximated as [11] $\begin{eqnarray} q\approx \frac{1}{1+k_\mathit{\rm{decay}} \langle t\rangle} \label{eq_12} \end{eqnarray}$ (12) In a real system, $k_{\rm{t}}$ is of the order of ps$^{-1}$ while $k_{\mathit{\rm{decay}}}$ is of the order of ns$^{-1}$. Thus, the trapping time $\langle t\rangle$ in Eq.(11) fully determines the transfer efficiency $q$, i.e., a maximum value of $q$ always corresponds to a minimum value of $\langle t\rangle$. Ⅲ. TRAPPING TIME IN THE WEAK DISSIPATION LIMIT In the HSR model [30, 31], the dissipation strength from the classical white noise is purely determined by the dephasing rate $\Gamma$. When the bath is an ensemble of harmonic oscillators, the system-bath interaction is described by the spectral density $J(\omega)$, where $\omega$ is the frequency of a harmonic oscillator. The reorganization energy, $\lambda$=$\displaystyle\int_0^\infty \textrm{d}\omega J(\omega)/\omega$, quantitatively represents the average dissipation strength [23]. At high temperatures, the bosonic bath can be qualitatively mapped onto the HSR model with a linear dependence between $\Gamma$ and $\lambda$ [12, 15]. For conciseness, we use the symbol $\Gamma$ in this section to denote the dissipation strength of a general bath. A. Noise-enhanced energy transfer (NEET) In the weak dissipation limit ($\Gamma$$\rightarrow0), the dissipation superoperator follows a Taylor expansion, {\cal L}_{\rm{d}}=\ell_{\rm{d}}\Gamma+O(\Gamma^2), where the reduced \Gamma-independent superoperator is approximated as \begin{eqnarray} \ell^{\rm{E}}_{{\rm{d}}; ij, kl} \approx \ell^{\rm{E}}_{{\rm{d}}; ii, kk}\delta_{i, j}\delta_{k, l}+\ell^{\rm{E}}_{{\rm{d}}; ij(\neq i), ij}\delta_{k, i}\delta_{l, j} \label{eq_13} \end{eqnarray} (13) in the eigen basis representation. Eq.(13) is obtained from the secular Redfield equation in Eq.(6). In the Liouville space, we partition the RDM into the exciton population and coherence subspaces [56], given by \rho^{\rm{E}}_{\rm{P}}=\displaystyle\sum_i \rho^{\rm{E}}_{{\rm{S}}; ii} |\varepsilon_i\rangle\langle \varepsilon_i| and \rho^{\rm{E}}_{\rm{C}}=\displaystyle\sum_{i\neq j}\rho^{\rm{E}}_{{\rm{S}}; ij}|\varepsilon_i\rangle\langle \varepsilon_j|. The two subscripts, 'P' and 'C', denote the subspaces of population and coherence, respectively. In the eigen basis representation, the three superoperators are written in block matrix forms as \begin{eqnarray} {\cal L}_{\rm{S}}^{\rm{E}} \hspace{-0.12cm}&=&\hspace{-0.12cm} \left( {\begin{array}{*{20}{c}} 0&0\\ 0&{\cal L}_{\rm{S;C}}^{\rm{E}} \end{array}} \right)\nonumber \\ {\cal L}_{\rm{d}}^{\rm{E}} \hspace{-0.12cm}&=&\hspace{-0.12cm}\left( {\begin{array}{*{20}{c}} \ell_{\rm{d;P}}^{\rm{E}}&0\\ 0&\ell_{\rm{d;C}}^{\rm{E}} \end{array}} \right)\\ {\cal L}_{\rm{t}}^{\rm{E}} \hspace{-0.12cm}&=&\hspace{-0.12cm} \left( {\begin{array}{*{20}{c}} {\cal L}_{\rm{t;P}}^{\rm{E}} &{\cal L}_{\rm{t;PC}}^{\rm{E}}\\ {\cal L}_{\rm{t;CP}}^{\rm{E}}&\ell_{\rm{t;C}}^{\rm{E}} \end{array}} \right)\nonumber \end{eqnarray} (14) In general, the trapping process (\simps) is usually slower than a typical quantum oscillation of excited electrons (\simfs). Through the block matrix inversion of \left[{\cal L}_{\rm{S}}^{\rm{E}} +L_{\rm{d}}^{\rm{E}}+L_{\rm{t}}^{\rm{E}}\right]^{-1} in Eq.(11), we can straightforwardly estimate that the average trapping time \langle t\rangle is reliably approximated by its partial value, \begin{eqnarray} \langle t\rangle^{\rm{E}}_{\rm{P}} = {\rm{Tr}}_{\rm{S}}\left\{\left[\Gamma\ell^{\rm{E}}_{{\rm{d}}; {\rm{P}}} + {\cal L}^{\rm{E}}_{{\rm{t}}; {\rm{P}}} \right]^{-1} \rho^{\rm{E}}_{\rm{P}}(0) \right\} \label{eq_15} \end{eqnarray} (15) where the matrices in the trace {\rm{Tr}}_{\rm{S}} are restricted to the exciton population subspace. Despite the fact that only dissipation and trapping appear in Eq.(15), the system Hamiltonian enters implicitly through the unitary transformation from the local to eigen basis set. Notice that the NEET may appear in a fast trapping process due to a strong influence of hopping kinetics, which is however beyond the scope of this paper. For a finite N-site energy transfer network, \ell^{\rm{E}}_{{\rm{d}}; {\rm{P}}} and {\cal L}^{\rm{E}}_{{\rm{t}}; {\rm{P}}} are two N$$\times$$N matrices. Through a standard matrix inversion for [\Gamma\ell^{\rm{E}}_{{\rm{d}}; {\rm{P}}}+{\cal L}^{\rm{E}}_{{\rm{t}}; {\rm{P}}}]^{-1} [57], the partial trapping time \langle t\rangle^{\rm{E}}_{\rm{P}} in Eq.(15) can be expressed as a rational polynomial, \begin{eqnarray} \langle t \rangle^{\rm{E}}_{\rm{P}} &=& \frac{\displaystyle\sum\limits_{k=0}^{N-1}a_k \Gamma^{k}}{\displaystyle\sum\limits_{k=0}^{N-1}b_k \Gamma^{k}}= t_0+\sum\limits_{k=1}^{N-1}\frac{t_k}{\Gamma+\Gamma_k} \label{eq_16} \end{eqnarray} (16) where all the four parameter sets, \{a_k, b_k, t_k, \Gamma_k (k=0, \cdots, N$$-$1)$\}$ are functions of $H_{\rm{S}}$, $\ell^E_{{\rm{d}}; {\rm{P}}}$, and ${\cal L}^{\rm{E}}_{{\rm{t}}; {\rm{P}}}$. The two sets of $\{a_k, t_k\}$ vary with the initial exciton population $\rho^{\rm{E}}_{\rm{P}}(0)$, while the other two sets of $\{b_k, \Gamma_k\}$ are independent of $\rho^{\rm{E}}_{\rm{P}}(0)$. The constraint, $b_N$$\propto$$\mathit{\rm{Det}}[\ell^{\rm{E}}_{{\rm{d}}; {\rm{P}}}]$=0, always holds due to the fact that quantum dissipation conserves the total population. Another constraint, $b_0$$\propto$$\mathit{\rm{Det}}[{\cal L}^{\rm{E}}_{{\rm{t}}; {\rm{P}}}]$, can be also obtained. Here the symbol $\mathit{\rm{Det}}[\cdots]$ denotes the matrix determinant. The set of $\{\Gamma_k\}$ is composed of the roots of $\begin{eqnarray} \sum\limits_{k=0}^{N-1}b_k \Gamma^{k}=0 \label{eq_17} \end{eqnarray}$ (17) To include a weak contribution of the exciton coherence, we add a linear $\Gamma$-term for the total trapping time, giving $\begin{eqnarray} \langle t\rangle\approx \langle t\rangle^{\rm{E}}_{\rm{P}}+\delta t_0+f_{\mathit{\rm{hop}}}\Gamma \label{eq_18} \end{eqnarray}$ (18) where the two positive parameters, $\delta t_0$ and $f_{\mathit{\rm{hop}}}$, depend on the three superoperators and the initial system state. The correction term $\delta t_0$ is usually much smaller than $t_0$ and can be neglected. This linear $\Gamma$-term is the reminiscent of incoherent hopping kinetics in the weak dissipation limit [15]. The total trapping time in the weak dissipation limit is expanded as $\begin{eqnarray} \langle t \rangle &\approx& \left(t_0+\delta t_0+\sum\limits_{k=1}^{N-1} \frac{t_k}{\Gamma_k}\right) +\nonumber\\ &&\left(f_{\mathit{\rm{hop}}}-\sum\limits_{k=1}^{N-1}\frac{t_k}{\Gamma^2_k}\right) \Gamma +O(\Gamma^2) \label{eq_19} \end{eqnarray}$ (19) As a result, the condition $\begin{eqnarray} \sum\limits_{k=1}^{N-1}t_k/\Gamma^2_k>f_{\mathit{\rm{hop}}} \label{eq_20} \end{eqnarray}$ (20) is a general requirement of the NEET that $\langle t\rangle$ decreases with $\Gamma$ (i.e. efficiency $q$ increases with $\Gamma$) in the weak dissipation regime. Under the opposite condition, $\langle t\rangle$ increases with $\Gamma$ and the efficiency is maximized by the coherent energy transfer without dissipation. However, one or more zero roots in Eq.(17) dramatically change the $\Gamma$-dependence of the trapping time. For example, the partial trapping time $\langle t\rangle^{\rm{E}}_{\rm{P}}$ with $\Gamma_1$=0 is changed to $\begin{eqnarray} \langle t \rangle^{\rm{E}}_{\rm{P}} = \left(t_0+\sum\limits_{k=2}^{N-1}\frac{t_k}{\Gamma_k}\right)+ \frac{t_1}{\Gamma}+O(\Gamma) \label{eq_21} \end{eqnarray}$ (21) where $t_1$ is nonnegative for a physical trapping time. The condition necessary for the exact $1/\Gamma$-scaling in Eq.(21) is $\begin{eqnarray} b_0 \propto \mathit{\rm{Det}}\left[{\cal L}^{\rm{E}}_{{\rm{t}}; {\rm{P}}}\right] = 0 \label{eq_22} \end{eqnarray}$ (22) which is our definition of the rigorous trapping-free subspace $\Phi_\perp$ [15]. For the incoherent trapping process defined in Eq.(2), ${\cal L}^{\rm{E}}_{{\rm{t}}; {\rm{P}}}$ is a diagonal matrix satisfying ${\cal L}^{\rm{E}}_{{\rm{t}}; ii, jj}$=${\cal L}^{\rm{E}}_{{\rm{t}}; ii}\delta_{i, j}$. The matrix determinant becomes $\mathit{\rm{Det}}\left[{\cal L}^{\rm{E}}_{{\rm{t}}; {\rm{P}}} \right]$=$\Pi_{i=1}^N {\cal L}^{\rm{E}}_{{\rm{t}}; ii}$ with ${\cal L}^{\rm{E}}_{{\rm{t}}; ii}$=$\displaystyle\sum_{n=1}^N \langle \varepsilon_i|n\rangle^2 k_{{\rm{t}}; n}$. The trapping-free subspace $\Phi_\perp$ composed of $N_{\rm{t}}$ ($<$$N) trapping-free excitons ({\cal L}^{\rm{E}}_{{\rm{t}}; ii}=0) requires the following two conditions to be satisfied simultaneously: (ⅰ) at least one site is not connected to the energy sink, giving k_{{\rm{t}}; n}=0; (ⅱ) each exciton state in \Phi_\perp is a linear combination of these trapping-free sites, giving \langle \varepsilon_i|n'\rangle=0 for |\varepsilon_i\rangle\in\Phi_\perp and k_{{\rm{t}}; n'}$$\neq$0. The rigorous solution of $b_0$=0 implies a high symmetry in the system Hamiltonian, which is often not the case in a general quantum network. Instead, we consider a weaker constraint: the characteristic dissipation strengths $\{\Gamma_{k}\}$ are well separated by their magnitudes, satisfying 0$\leftarrow$$|\Gamma_1|, \cdots, |\Gamma_{N_{\rm{t}}}|$$\ll$$|\Gamma_{N_{\rm{t}}+1}|, \cdots, |\Gamma_N|. One possible solution for this constraint is \begin{eqnarray} |b_0|\ll |b_{N-1}| \Gamma_\textrm{c}^{N-1} \label{eq_23} \end{eqnarray} (23) where \Gamma_\textrm{c} is a critical dissipation strength to balance the units of b_0 and b_{N-1}. Roughly speaking, \Gamma_\textrm{c} can be considered as the boundary of weak dissipation. The exciton states with the small trapping rates ({\cal L}^{\rm{E}}_{{\rm{t}}; ii}$$\rightarrow$0) form an approximate trapping-free subspace $\Phi_\perp$. In the weak dissipation regime, the partial trapping time $\langle t\rangle^{\rm{E}}_{\rm{P}}$ becomes $\begin{eqnarray} \begin{array}{l} \left\langle t \right\rangle _{\rm{P}}^{\rm{E}} \approx \left[{{t_0} + \displaystyle\sum\limits_{k = {N_{\rm{t}}} + 1}^{N-1} {\frac{{{t_k}}}{{{\Gamma _k}}}} } \right] + \displaystyle\sum\limits_{k = 1}^{{N_{\rm{t}}}} {\frac{{{t_k}}}{{\Gamma + {\Gamma _k}}}} \\ \hspace{3cm}\textrm{for}\ \Gamma\ll|\Gamma_{k(>N_{\rm{t}})}|;\\ \left\langle t \right\rangle _{\rm{P}}^{\rm{E}} \approx \left[{{t_0} + \displaystyle\sum\limits_{k = {N_{\rm{t}}} + 1}^{N-1} {\frac{{{t_k}}}{{{\Gamma _k}}}} } \right] + \displaystyle\frac{{\displaystyle\sum\limits_{k = 1}^{{N_{\rm{t}}}} {{t_k}} }}{\Gamma }\\ \hspace{3cm}\textrm{for}\ |\Gamma_{k(N_{\rm{t}})}| \end{array} \label{eq_24} \end{eqnarray}$ (24) To achieve the approximate $1/\Gamma$-scaling in Eq.(24), the two sets of coefficients, $\{t_{k(\le N_{\rm{t}})}\}$ and $\{t_{k(>N_{\rm{t}})}\}$, are needed to be comparable in magnitude. B. The optimal initial system state for the disappearance of the $1/\Gamma$-scaling In Section Ⅲ.A, we formulate the exact and approximate $1/\Gamma$-scalings of the trapping time in Eqs. (21) and (24), respectively. On the opposite side, the $1/\Gamma$-scaling can disappear by an appropriate initial system state, $\rho^{\rm{E}}_{\rm{P}}(0)$. For a rigorous trapping-free subspace $\Phi_\perp$, a straightforward way is to prepare zero population in $\Phi_\perp$, i.e., $\rho^{\rm{E}}_{ii}(0)$=0 for $|\varepsilon_i\rangle$$\in$$\Phi_\perp$ [15]. For a general network, we apply a stronger requirement, $\begin{eqnarray} a_0:a_1:\cdots:a_{N-1} = b_0:b_1:\cdots:b_{N-1} \label{eq_25} \end{eqnarray}$ (25) from which all the $\Gamma$-dependent terms vanish in Eq.(16). The partial trapping time $\langle t\rangle^{\rm{E}}_{\rm{P}}$ becomes a constant and the total trapping time $\langle t\rangle$ increases linearly with the dissipation strength $\Gamma$, i.e., $\langle t\rangle$$\approx$$(t_0$+$\delta t_0$)+$f_{\mathit{\rm{hop}}} \Gamma$+$O(\Gamma^2)$. The energy transfer efficiency is globally or locally maximized in the coherent regime (\Gamma\rightarrow0). Next we provide a mathematical procedure of optimizing the initial system state. In the subspace of exciton population, the partial reduced dissipation superoperator \ell^{\rm{E}}_{{\rm{d}}; {\rm{P}}} is diagonalized as follows, \begin{eqnarray} {\cal D}={\cal V}^{-1}\ell^{\rm{E}}_{{\rm{d}}; {\rm{P}}}{\cal V} = \left( {\begin{array}{*{20}{c}} 0&{}&{}\\ {}&{{\Lambda _2}}&{}\\ {}&{}& \ddots \end{array}} \right) \label{eq_26} \end{eqnarray} (26) where \Lambda_i are the eigen dissipation rates reduced by the dissipation strength \Gamma. The zero eigen rate, \Lambda_1=0, appears due to the condition that system population is conserved by dissipation. The matrix \cal V is not necessarily unitary since \ell^{\rm{E}}_{{\rm{d}}; {\rm{P}}} can be non-Hermitian. The same transformation is applied to the trapping superoperator and the initial RDM, leading to {\cal T}={\cal V}^{-1}{\cal L}^{\rm{E}}_{{\rm{t}}; {\rm{P}}}{\cal V} and \varrho(0)={\cal V}^{-1}\rho^{\rm{E}}_{\rm{P}}(0). The partial trapping time \langle t\rangle^{\rm{E}}_{\rm{P}} in Eq.(15) is rewritten as \begin{eqnarray} \langle t\rangle^{\rm{E}}_{\rm{P}} = \mathit{\rm{Tr}}_{\rm{S}}\left\{\mathcal V(\Gamma{\cal D}+{\cal T})^{-1}\varrho(0)\right\} \label{eq_27} \end{eqnarray} (27) After tedious but straightforward steps, we obtain b_0={\rm{Det}}[{\cal T}] and b_{N-1}=\displaystyle\prod_{i=2}^{N} \Lambda_{i} [{\cal T}]_{11}. Eq.(23) for the approximate 1/\Gamma-scaling of the NEET becomes \begin{eqnarray} |\mathit{\rm{Det}}\left[{\cal T}\right]| \ll \prod\limits_{i=2}^{N} |\Lambda_{i} [{\cal T}]_{11}| \Gamma_c^{N-1} \label{eq_28} \end{eqnarray} (28) which implies an upper limit of the trapping rates. With respect to the zero eigen rate of \Lambda_1=0, the optimal condition in Eq.(25) to maximize the coherent energy transfer efficiency is explicitly given by \begin{align} & {{[\varrho (0)]}_{1}}:{{[\varrho (0)]}_{2}}:\cdots :{{[\varrho (0)]}_{N}}= \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {{[T]}_{11}}:{{[T]}_{21}}:\cdots :{{[T]}_{N1}} \\ \end{align} (29) Since the initial exciton coherence varies freely, an infinite possibility of the initial system states (either pure or mixed) can satisfy Eq.(29). If the reduced dissipation superoperator \ell^{\rm{E}}_{{\rm{d}}; {\rm{P}}} is Hermitian as in the HSR model, the eigenvector associated with \Lambda_1=0 is the identity operator, {\cal I}=\displaystyle\sum_{i=1}^N |\varepsilon_i\rangle\langle \varepsilon_i|. The trapping time is equivalent to the survival time along this special eigenvector. The matrix \cal T can be block diagonal as required by the symmetry of the system Hamiltonian H_{\rm{S}}. The calculation of the trapping time is reduced inside an M (\leN)-dimensional subspace including the eigenvector $\cal I$. Consequently, Eqs. (28) and (29) are modified by replacing $\{N, \varrho(0), {\cal T}\}$ with $\{M, \varrho_M(0), {\cal T}_M\}$, where $\varrho_M(0)$ and ${\cal T}_M$ are defined in the $M$-dimensional subspace. The optimal requirement of $\rho_{\rm{S}}(0)$ to maximize the energy transfer efficiency becomes more flexible. Ⅳ. NEET AND OPTIMAL SYSTEM INITIALIZATION IN MODEL SYSTEMS In Section Ⅲ, we have derived the requirements of the rigorous and approximate $1/\Gamma$-scaling for the NEET. In the same framework, the initial system state to suppress the $1/\Gamma$-scaling is theoretically obtained, which optimizes the coherent energy transfer efficiency in the weak dissipation regime. In this section, we verify our theory in four model systems of energy transfer, as shown in FIG. 1. FIG. 1 The four energy transfer networks studied in this work. (a) a biased two-site system, (b) a symmetric three-site branching system, (c) a homogeneous 1D chain, and (d) an 8-chromophore FMO monomer. The trapping site of each system is highlighted in red color A. Biased two-site system The first example is a biased two-site system (see FIG. 1(a)). The system Hamiltonian is defined in the local basis as $H^{\rm{L}}_{\rm{S}}$=$\Delta |1\rangle\langle 1|$+$J(|1\rangle\langle 2|$+$|2\rangle\langle 1|)$, where $\Delta$ is the site energy detuning and $J$ is the site-site coupling. The HSR model in Eq.(3) is applied to model the quantum dissipation. An incoherent trapping process is assumed with the trapping rate $k_{\rm{t}}$ at site 2. The analytical expression of the trapping time for this model was shown previously [11]. Here we apply the theoretical procedure developed in Section Ⅲ to analyze the NEET and determine the optimal initial system states. In the exciton population subspace, the two relevant superoperators are explicitly written as $\begin{eqnarray} \begin{array}{l} \ell^{\rm{E}}_{{\rm{d}}; {\rm{P}}} = \displaystyle\frac{\sin^22\theta }{2}\left( {\begin{array}{*{20}{c}} 1&{{\rm{ - }}1}\\ {{\rm{ - }}1}&1 \end{array}} \right)\\ {\cal L}^{\rm{E}}_{{\rm{t}}; {\rm{P}}} = k_{\rm{t}}\left( {\begin{array}{*{20}{c}} {\sin {\theta ^2}}&0\\ \theta &{\cos {\theta ^2}} \end{array}} \right)\\ \end{array} \label{eq_30} \end{eqnarray}$ (30) with $\theta$=$-\arctan(2J/\Delta)/2$. Following Eq.(28), we determine a condition of $k_{\rm{t}}$$\ll$$2 \Gamma_\textrm{c}$ for the NEET. For a weak site-site coupling, an intermediate dephasing rate around the energy detuning, $\Gamma_\textrm{c}$$\approx$$|\Delta|$, separates the weak and strong dissipation regimes. To achieve an apparent $1/\Gamma$-scaling, we propose a scenario of a slow trapping process $k_{\rm{t}}$$\ll$$|\Delta|$ accompanied with a large energy mismatch $|J|$$\ll$$|\Delta|$, which is consistent with the result in Ref.[11]. The approximate trapping-free subspace $\Phi_\perp$ is composed of a single exciton state, $|\varepsilon_1\rangle$=$\cos\theta|1\rangle$+$\sin\theta|2\rangle$ with $\theta$$\rightarrow0. The total trapping time is explicitly given by \begin{eqnarray} \langle t\rangle = t_0+ \frac{t_1}{\Gamma+k_{\rm{t}}/2} +\delta t_0 + f_{\mathit{\rm{hop}}} \Gamma \label{eq_31} \end{eqnarray} (31) where the parameters, \{t_0, \delta t_0, t_1, f_{\mathit{\rm{hop}}}\}, depend on the initial RDM. The first two terms on the right hand side of Eq.(31) arise from the partial trapping time \langle t\rangle^{\rm{E}}_{\rm{P}}. As shown in FIG. 2(a), the partial trapping time \langle t\rangle^{\rm{E}}_{\rm{P}} excellently describes the total trapping time \langle t\rangle in the weak dissipation regime for an example inital RDM, \rho^{\rm{L}}_{{\rm{S}}}(0)=|1\rangle\langle 1|. With the increase of energy mismatch, the approximate 1/\Gamma-scaling of \langle t\rangle^{\rm{E}}_{\rm{P}} gradually becomes exact. FIG. 2 The trapping time vs. the dephasing rate for a biased two-site sytem (J=1 and k_{\rm{t}}=0.1). (a) The results of the initial system state at \rho^{\rm{L}}_{\rm{S}}(0)=|1\rangle\langle 1|. Subtracted by t_0=2/k_{\rm{t}}, the circles are the total trapping time \langle t\rangle defined in Eq.(11); the solid lines are the partial trapping time \langle t\rangle^{\rm{E}}_{\rm{P}} defined in Eq.(15); the dashed lines are the 1/\Gamma-scaling of (\Delta^2/2J^2)\Gamma^{-1}. The data in black, red, and blue colors refer to \Delta=2, 5, 10, respectively. (b) With \Delta=2, the solid and dashed lines are the results of the initial system states at \phi^{\rm{L}}_{{\rm{S}}}(0)=(|1\rangle$$-$$|2\rangle)/\sqrt{2} and \rho^{\rm{L}}_{{\rm{S}}}(0)=(|1\rangle\langle1|+|2\rangle\langle2|)/2, respectively To suppress the 1/\Gamma-scaling in the trapping time and achieve an optimized coherent energy transfer, we solve Eq.(29) and determine an requirement, \begin{eqnarray} \rho^{\rm{L}}_{{\rm{S}}; 11}(0)+\frac{2J}{\Delta}\mathit{\rm{Re}} \{\rho^{\rm{L}}_{{\rm{S}}; 12}(0)\}=0 \label{eq_32} \end{eqnarray} (32) in the local basis representation. A trivial solution of Eq.(32) is \rho^{\rm{L}}_{\rm{S}}(0)=|2\rangle\langle 2|, which means that the system is prepared initially at the trapping site 2. Instead, an infinite number of nontrivial optimal initial system states exist, satisfying Eq.(32). One example is a quantum pure state, \phi^{\rm{L}}_{\rm{S}}(0)$$\propto$$2J|1\rangle$$-$$\Delta|2\rangle, and the calculation of \langle t\rangle for this initial RDM is shown in FIG. 2(b). In the coherent limit (\Gamma$$\rightarrow$0), the trapping time from $\phi^{\rm{L}}_{{\rm{S}}}(0)$ is very close to the global minimum result from $\rho^{\rm{L}}(0)$=$|2\rangle\langle 2|$. As a comparison, we construct a quantum mixed state, $\rho^{\rm{L}}_{\rm{S}}(0)$$\propto$$4|J|^2|1\rangle\langle 1|$+$|\Delta|^2|2\rangle\langle 2|$. By violating the requirement of the site-site coherence in Eq.(32), the coherent trapping time $\langle t\rangle_{\Gamma=0}$ is increased significantly and $\langle t\rangle$ is minimized at an intermediate $\Gamma$ (see FIG. 2(b)). B. Symmetric three-site branching system The second example is a symmetric three-site branching system (see FIG. 1(b)). The system Hamiltonian is given by $H^{\rm{L}}_{\rm{S}}$=$J(|1\rangle\langle 2|$+$|2\rangle\langle 1|$+$|2\rangle\langle 3|$+$|3\rangle\langle 2|)$, in the local basis representation. The dissipation is approximated by the HSR model, while the trapping rate $k_{\rm{t}}$ is assigned to the middle site 2. The explicit expression of the trapping time was also provided previously [11]. Following the procedure in Section Ⅲ, we obtain dissipation and trapping superoperators, $\begin{eqnarray} \begin{array}{l} \ell^{\rm{E}}_{{\rm{d}}; {\rm{P}}}= \displaystyle\frac{1}{8} \left( {\begin{array}{*{20}{c}} 4&{ - 2}&{ - 2}\\ { - 2}&5&{ - 3}\\ { - 2}&{ - 3}&5 \end{array}} \right)\\ [2ex] {\cal L}^{\rm{E}}_{{\rm{t}}; {\rm{P}}} = \displaystyle\frac{k_{\rm{t}}}{2}\left( {\begin{array}{*{20}{c}} 0&0&0\\ 0&1&0\\ 0&0&1 \end{array}} \right)\\ \end{array} \label{eq_33} \end{eqnarray}$ (33) in the exciton population subspace. The first exciton state, $|\varepsilon_1\rangle$=$(|1\rangle -|3\rangle)/\sqrt{2}$, is orthogonal to the trapping site 2 and composes a rigorous one-element trapping-free subspace $\Phi_\perp$ with ${\cal L}^{\rm{E}}_{{\rm{t}}; 11}$=0 [15]. FIG. 3(a) demonstrates that the total trapping time $\langle t\rangle$ follows the exact $1/\Gamma$-scaling for a non-optimal initial system state, $\rho^{\rm{L}}_{{\rm{S}}}(0)$=$(|1\rangle\langle 1|$+$|3\rangle\langle 3|)/2$. FIG. 3 The trapping time vs. the dephasing rate for a symmetric three-site model ($J$=1). (a) The results of the initial system state at $\rho^{\rm{L}}_{\rm{S}}(0)$=$(|1\rangle\langle 1|$+$|3\rangle\langle 3|)/2$. The circles and solid lines are the total trapping time $\langle t\rangle$ and the partial trapping time $\langle t\rangle^{\rm{E}}_{\rm{P}}$ (i.e., the exact $1/\Gamma$-scaling), respectively. The data in black and red colors refer to $k_{\rm{t}}$=0.1 and 1.0, respectively. (b) The results of an optimal initial system state at $\phi^{\rm{L}}_{\rm{S}}(0)$=$(|1\rangle$+$|3\rangle)/\sqrt{2}$. The solid and dashed lines refer to $k_{\rm{t}}$=0.1 and 1.0, respectively As discussed in Section Ⅲ.B, a straightforward way to optimize the coherent energy transfer is to avoid the initial population at the trapping-free subspace, i.e., $\rho^{\rm{E}}_{{\rm{S}}; 11}(0)$=0. Alternatively, we apply the mathematical procedure in Section Ⅲ.B to solve the optimization requirement in Eq.(29), which leads to $\begin{eqnarray} \rho^{\rm{L}}_{11}(0)+\rho^{\rm{L}}_{33}(0) = 2 \mathit{\rm{Re}}\{\rho^{\rm{L}}_{13}(0) \} \label{eq_34} \end{eqnarray}$ (34) in the local basis representation. Eq.(34) is identical to the condition of zero population in the trapping-free subspace. Here we design an optimal initial system state, $\phi^{\rm{L}}_{\rm{S}}(0)$=$(|1\rangle+|3\rangle)/\sqrt{2}$, which shares the same site population as in the above non-optimal state but differs in the site-site coherence. The trapping time is changed to a linearly increasing function of $\Gamma$ (see FIG. 3(b)), which indicates the relevance of an appropriate site-site coherence in quantum optimization. The underlying symmetry argument shows that our analysis is not limited to the simple HSR model, but valid in general environments, as demonstrated in our previous study of a tree-like dendrimer system [15]. C. Homogeneous 1D chain systems The third example is a homogeneous 1D $N$-site chain (see FIG. 1(c)). In the local basis representation, the system Hamiltonian is written as $\begin{eqnarray} H^{\rm{L}}_{\rm{S}} = \sum\limits_{n=1}^{N-1} J (|n\rangle\langle n+1|+|n+1\rangle\langle n|) \label{eq_35} \end{eqnarray}$ (35) with the nearest neighboring interaction. The dissipation is simulated by the HSR model, while the trapping process is defined by an irreversible rate $k_{\rm{t}}$ at the end site $N$. Following the procedure in Section Ⅲ.B, we numerically calculate Eq.(28) and determine an upper trapping rate limit, $k_{\rm{t}}$$\ll$$N \Gamma_\textrm{c}(N)$, where the critical dephasing rate roughly follows $\Gamma_\textrm{c}(N)$$\sim$$N^{-1}J$. Together, we reach a slow trapping rate of $k_{\rm{t}}$$\ll$$J$, which can lead to the NEET. The participation coefficient of the trapping site in each exciton state decreases with the increased chain size $N$ so that the trapping rate of each exicton decreases accordingly, i.e. ${\cal L}^{\rm{E}}_{{\rm{t}}; ii}$($i$=1, $\cdots, N)$$\rightarrow0 for N$$\rightarrow$$\infty. The NEET is thus expected to be more pronounced with a long spatial extension (N$$\gg$1). To extract the approximate $1/\Gamma$-scaling, we prepare the initial population at the third site, $\rho^{\rm{L}}_{\rm{S}}(0)$=$|3\rangle\langle 3|$. The usual condition, $\rho^{\rm{L}}_{\rm{S}}(0)$=$|1\rangle\langle 1|$, is actually an optimal initial state and will be discussed later in this subsection. With a small trapping rate, $k_{\rm{t}}$=$0.1J$, we numerically calculate the total trapping time $\langle t\rangle$ and its partial value $\langle t\rangle^{\rm{E}}_{\rm{P}}$ for the chain sizes of 5$\le$$N$$\le$40. FIG.~4(a) shows that $\langle t\rangle^{\rm{E}}_{\rm{P}}$ agrees excellently with $\langle t\rangle$ in the weak dissipation limit. Furthermore, the partial trapping time $\langle t\rangle^{\rm{E}}_{\rm{P}}$ can be fitted by $\begin{eqnarray} \langle t\rangle^{\rm{E}}_{\rm{P}} \approx \frac{N}{k_\textrm{t}}+\frac{t_1}{\Gamma+\Gamma_1}+\frac{t_2}{\Gamma+\Gamma_2} \label{eq_36} \end{eqnarray}$ (36) over a broad range of $\Gamma$. As shown in FIG. 4(a), the difference between the exact and fitting results of $\langle t\rangle^{\rm{E}}_{\rm{P}}$ cannot be seen with naked eyes. The two characteristic dephasing rates, $\Gamma_1$ and $\Gamma_2$, monotonically decrease with the chain size $N$, as shown in FIG. 4(b). In the condition of $\max(\Gamma_1, \Gamma_2)$$\ll$$\Gamma$$\ll$$|\Gamma_{k(>2)}|$, we reach the exact $1/\Gamma$-scaling, $\begin{eqnarray} \langle t\rangle^{\rm{E}}_{\rm{P}} \approx \frac{N}{k_{\rm{t}}}+\frac{t_1+t_2}{\Gamma} \label{eq_37} \end{eqnarray}$ (37) FIG. 4 (a) The trapping time subtracted by $N/k_{\rm{t}}$ vs. the dephasing rate for the 1D $N$-site chain. The initial system state is at $\rho^{\rm{L}}_{\rm{S}}(0)$=$|3\rangle\langle 3|$. The circles and solid lines are the calculated results of $\langle t\rangle$ and $\langle t\rangle^{\rm{E}}_{\rm{P}}$. The dashed lines are the fitting results of Eq.(36). The dotted-dashed line is the exact $1/\Gamma$-scaling for $N$=40. The data in black, red, blue and green colors refer to $N$=5, 10, 20, and 40, respectively. (b) The fitting parameters of $\Gamma_1$ and $\Gamma_2$ in Eq.(36) vs. the chain size $N$. The solid lines with circles and diamonds are the results of $\Gamma_1$ and $\Gamma_2$, respectively. Here the parameters are $J$=1 and $k_{\rm{t}}$=0.1 which is confirmed for the result of $N$=40 in FIG. 4(a). Next we calculate the optimal initial system state of the coherent energy transfer. Without tedious details, Eq.(29) is transformed into an $M$-equation array ($k$=1, 2, $\cdots$, $M$), $\begin{eqnarray} \sum\limits_{i=2}^{N-1}x^k_{i}\rho^{\rm{L}}_{i}(0)+\sum\limits_{i=1}^{N-2}\sum\limits_{j=1}^{(N-i)/2} 2 x^k_{i, i+2j}\mathit{\rm{Re}}\rho^{\rm{L}}_{i, i+2j}(0)=0\quad \label{eq_38} \end{eqnarray}$ (38) with $M$=$(N$$-1)/2 for an odd N and M=N/2$$-$1 for an even $N$. For conciseness, we will not provide the explicit forms of the coefficients, $\{x^k_{i}, x^k_{i, i+2j}\}$. However, there are several important properties to be noticed: $\{x^k_{i}, x^k_{i, i+2j}\}$ are independent of the site-site coupling $J$ and the trapping rate $k_{\rm{t}}$; the coefficients obey a mirror symmetry, $x^k_{i}$=$x^k_{N+1-i}$ and $x^k_{i, i+2j}$=$x^k_{N+1-i-2j, N+1-i}$, together with another constraint, $x^k_{i}$=$x^k_{i-1, i+1}$. Interestingly, a unique initial state satisfying Eq.(38) for an arbitrary chain size $N$ is $\rho^{\rm{L}}_{\rm{S}}(0)$=$|1\rangle\langle1|$, where all the population is initially prepared at the starting site. This preference of the coherent energy transfer is consistent with the ballistic quantum diffusion in the infinite homogeneous 1D chain [58, 59]. On the other hand, Eq.(38) allows an infinite number of $N$-dependent solutions (quantum pure and mixed states). Here we present an example, $\begin{eqnarray} \rho^{\rm{L}}_{\rm{S}}(0) &=& \frac{1}{2(N+1)}\bigg[3|1\rangle\langle 1|+3|N\rangle\langle N|+2\sum\limits_{i=2}^{N-1}|i\rangle\langle i| -\nonumber\\ &&\sum\limits_{i=1}^{N-2}\left(|i\rangle\langle i+2|+|i+2\rangle\langle i| \right) \bigg] \label{eq_39} \end{eqnarray}$ (39) As shown in FIG. 5, the total trapping time $\langle t\rangle$ follows the linear $\Gamma$-function exactly for both optimal initial system states. FIG. 5 The trapping time vs. the dephasing rate for the 1D $N$-site chain, with the two optimal initial system states satisfying Eq.(38). The solid and dashed lines are the results of $\rho^{\rm{L}}_{\rm{S}}(0)$=$|1\rangle\langle 1|$ and the initial state defined in Eq.(39). The data in black and blue colors refer to $N$=20 and 40, respectively. Here the parameters are $J$=1 and $k_{\rm{t}}$=0.1 By comparing the results of the non-optimal and optimal initial system states in FIGs. 4 and 5, we observe that the trapping time of $\rho^{\rm{L}}_{\rm{S}}(0)$=$|3\rangle\langle 3|$ in the 20-site chain is larger than that of $\rho^{\rm{L}}_{\rm{S}}(0)$=$|1\rangle\langle 1|$ in the 40-site chain for $\Gamma$$<$$10^{-3}$. In contradiction to a classical diffusion picture, the energy can be transferred faster in the quantum coherent limit even when the distance is doubled. The basic mechanism of this phenomenon lies on the the mirror symmetry in the initial system state compatible with dissipation and trapping. D. Eight-site FMO model Our final example is the 8-chromophore Fenna-Matthews-Olson (FMO) protein complex (see FIG. 1(d)), which is an important light-harvesting system in green sulfur bacteria [60, 61]. The effective Hamiltonian of an FMO monomer is taken from Refs. [13, 62]. The influence of the bosonic bath is simulated by a Debye spectral density, $\begin{eqnarray} J(\omega) = \frac{2\lambda}{\pi} \frac{\omega\omega_\textrm{D}}{\omega^2+\omega^2_\textrm{D}} \label{eq_40} \end{eqnarray}$ (40) where $\lambda$ is the reorganization energy and $\omega_\textrm{D}$ is the Debye frequency. The trap process is assigned to bacteriochlorophyll (BChl) 3 with a typical trapping rate $k_{\rm{t}}$=1 ps$^{-1}$. To be consistent with the physiological condition, we set the Debye frequency at $\omega^{-1}_\textrm{D}$=50 fs and temperature at $T$=300 K. For the non-symmetric FMO system, the upper limit of the trapping rate for the NEET is estimated by Eq.(28) as $k_{\rm{t}}$$\ll$$4000~\mathit{\rm{ps}}^{-1}$, where the critical dissipation strength is approximated as $\lambda_\textrm{c}$$\approx50 cm^{-1}. As the realistic trapping rate (\sim1 ps^{-1}) is much smaller than this upper limit, we expect a strong NEET behavior. The intrinsic mechanism is that the excitons (eigen states) in the FMO system are highly localized due to both the large energy mismatch and the long spatial extension. The eigen states almost orthogonal to the trapping site, BChl 3, compose the approximate trapping-free subspace. As a demonstration, we consider a natural initial condition of \rho^{\rm{L}}_{\rm{S}}(0)=|8\rangle\langle 8| [13, 62] and apply the HEOM [36-54] to calculate the trapping time. FIG. 6(a) demonstrates that the total trapping time \langle t\rangle decreases significantly by five orders of magnitude as the reorganization energy increases from 0 cm^{-1} to 1 cm^{-1}. In the weak dissipation regime, the partial trapping time \langle t\rangle^{\rm{E}}_{\rm{P}} from the secular Redfield equation provides an accurate estimation of the total trapping time \langle t\rangle from the HEOM. The approximate 1/\Gamma-scaling, \begin{eqnarray} \langle t\rangle^{\rm{E}}_{\rm{P}} \approx t_0 + \frac{t_1}{\lambda+\lambda_1}+\frac{t_2}{\lambda+\lambda_2}+\frac{t_3}{\lambda+\lambda_3} \label{eq_41} \end{eqnarray} (41) FIG. 6 The trapping time vs. the reorganization energy in the 8-site FMO (k_t=1 ps^{-1}). (a) The initial system state is at \rho^{\rm{L}}_{\rm{S}}(0)=|8\rangle\langle 8|. The circles and solid line are the results of \langle t\rangle from the HEOM and \langle t\rangle^{\rm{E}}_{\rm{P}} from the secular Redfield equation. The dashed and dotted-dashed lines are the results of Eqs. (41) and (42). (b) The solid and dashed lines are the results of the initial system state in Eq.(43) with and without site-site coherence can reliably describe the partial trapping time over a broad range of the dissipation strength (\lambda$$ <$100 cm$^{-1}$). Here the three characteristic reorganization energies are $\lambda_1$=7.55$\times$10$^{-5}$ cm$^{-1}$, $\lambda_2$=5.23$\times$10$^{-3}$ cm$^{-1}$, and $\lambda_3$=0.858 cm$^{-1}$. Eq.(41) is consistent with the fact that four sites form a major energy transfer pathway, BChls 8$\rightarrow$(1, 2)$\rightarrow$3 [13, 55, 62-64]. With $\lambda_1, \lambda_2$$\ll$$\lambda_3$, the exact $1/\Gamma$-scaling, $\begin{eqnarray} \langle t\rangle^{\rm{E}}_{\rm{P}} \sim \left(t_0+\frac{t_3}{\lambda_3}\right) +\frac{t_1+t_2}{\lambda} \label{eq_42} \end{eqnarray}$ (42) is extracted in the weak dissipation regime of 0.01 cm$^{-1}$$<$$\lambda$$< 0.2 cm^{-1}. On the opposite side, the optimal initial system state in Eq.(29) is determined by the secular Redfield equation, from which the efficiency of the coherent energy transfer (\lambda$$\rightarrow$0) is optimized. For simplicity, we only present one example, $\begin{eqnarray*} \rho^{\rm{L}}_{\rm{S}}(0) = \left( {\begin{array}{*{20}{c}} 0.0081 & 0.0146 &-0.0444 &-0.0127 &-0.0020 &-0.0013 &-0.0015 &-0.0010 \\ 0.0146 & 0.0303 &-0.1252 &-0.0394 &-0.0058 &-0.0042 &-0.0059 &-0.0018 \\ -0.0444 &-0.1252 & 0.8378 & 0.2675 & 0.0392 & 0.0259 & 0.0420 & 0.0045 \\ -0.0127 &-0.0394 & 0.2675 & 0.1065 & 0.0184 & 0.0061 & 0.0242 & 0.0016 \\ -0.0020 &-0.0058 & 0.0392 & 0.0184 & 0.0043 &-0.0009 & 0.0055 & 0.0002 \\ -0.0013 &-0.0042 & 0.0259 & 0.0061 &-0.0009 & 0.0037 &-0.0011 & 0.0002 \\ -0.0015 &-0.0059 & 0.0420 & 0.0242 & 0.0055 &-0.0011 & 0.0091 & 0.0003 \\ -0.0010 &-0.0018 & 0.0045 & 0.0016 & 0.0002 & 0.0002 & 0.0003 & 0.0001 \end{array}} \right) \end{eqnarray*}$ (43) The trapping time $\langle t\rangle$ under this artificially designed optimal initial state $\rho^{\rm{L}}_{\rm{S}}(0)$ is then calculated using the HEOM. As shown in FIG. 6(b), $\langle t\rangle$ in the coherent limit ($\lambda$$\rightarrow0) is minimized to \langle t\rangle=2.64 ps, significantly smaller than that from \rho^{\rm{L}}_{\rm{S}}(0)=|8\rangle\langle 8|. As the reorganization energy \lambda is increased from 0 to 100 cm^{-1}, the trapping time is increased very weakly to \langle t\rangle=2.77 ps, resisting the influence of dynamic disorders. To illustrate the relevance of an appropriate site-site coherence in the optimal state, we remove the off-diagonal elements in Eq.(43) and \langle t\rangle is changed to a decreasing function of \lambda (see FIG. 6(b)). In fact, the NEET behavior can be observed even with all the initial population localized at the trapping site, BChl 3. Thus, the optimized coherent energy transfer requires an appropriate initial site-site coherence constrained by Eq.(29). Ⅴ. SUMMARY In this paper, we extend our previous studies of efficiency optimization [12, 14, 15] to probing the mechanism of the noise-enhanced energy transfer (NEET) and engineering the initial system state to maximize the efficiency of coherent energy transfer from a conceptual point of view. In the weak dissipation limit, the trapping time \langle t\rangle of an energy transfer network with a slow trapping process can be reliably approximated by the survival time \langle t\rangle^{\rm{E}}_{\rm{P}} of an exciton spanning only inside its population subspace. Following a simple but accurate description of quantum dissipation from the secular Redfield equation, we express \langle t\rangle^{\rm{E}}_{\rm{P}} in a rational polynomial of the dissipation strength \Gamma and formulate the general requirement of the NEET. If the determinant of the trapping superoperator is zero in the exciton population subspace (\mathit{\rm{Det}}[{\cal L}^{\rm{E}}_{{\rm{t}}; {\rm{P}}}]=0), a rigorous trapping-free subspace \Phi_\perp is formed and the trapping time follows the exact 1/\Gamma-scaling, \langle t\rangle$$\sim$$1/\Gamma, in the limit of \Gamma$$\rightarrow$0. Under a weaker condition that $\mathit{\rm{Det}}[{\cal L}^{\rm{E}}_{{\rm{t}}; {\rm{P}}}]$ is nonzero but sufficiently small, the trapping time can be still approximated as $\langle t\rangle$$\sim$$A$+$B/\Gamma$ over a certain range of dissipation strengths. In contrast, the initial system state can be tuned finely to suppress the $1/\Gamma$-scaling of the NEET, globally or locally maximizing the transfer efficiency in the coherent limit ($\Gamma$$\rightarrow$0). A mathematical procedure is proposed by us to fulfill the optimization constraint in Eq.(29), showing an infinite possibility of solutions. Our theoretical predictions of the $1/\Gamma$-scaling and the optimal initial system states are verified in the four examples of the energy transfer networks: a biased two-site system, a symmetric three-site branching system, a homogeneous 1D chain, and an 8-chromophore FMO monomer. We observe that the NEET can always appear approximately due to a slow trapping process or exactly due to a symmetric system Hamiltonian compatible with trapping. The approximate $1/\Gamma$-scaling induced by the former condition becomes more pronounced with a larger energy mismatch and a longer spatial extension. For each model system, the optimal initial system states to maximize the coherent transfer efficiency are successfully obtained following our procedure. The studies in this paper confirm that the NEET is a universal behavior in the energy transfer network, which can be applied to other irreversible quantum dynamic systems such as a quantum heat engine. Here we focus on the dephasing rate and reorganization energy, but the concept of the dissipation strength can be extended to other system- and bath-related parameters such as temperature. Our calculation of the optimal initial system states shows that quantum (site-site) coherence to accelerate energy transfer must be tuned finely to be compatible with dissipation and trapping. Although such a fine tuning is difficult to be achieved in natural systems, it is experimentally accessible in precisely-controlled artificial quantum devices, which will be an interesting problem to be explored in the future [65]. Ⅵ. ACKNOWLEDGEMENTS The work reported was supported by the National Natural Science Foundation of China (No.21573195) and the Ministry of Science and Technology of China (MOST-2014CB921203). Reference [1] A. Chenu, and G. D. Scholes, Annu. Rev. Phys. Chem. 66 , 69 (2015). DOI:10.1146/annurev-physchem-040214-121713 [2] R. Kosloff, and A. Levy, Annu. Rev. Phys. Chem. 65 , 365 (2014). DOI:10.1146/annurev-physchem-040513-103724 [3] D. Z. Xu, and J. S. Cao, Front. Phys. 11 , 110308 (2016). DOI:10.1007/s11467-016-0540-2 [4] P. Brumer, and M. Shapiro, Ann. Rev. Phys. Chem. 43 , 257 (1992). DOI:10.1146/annurev.pc.43.100192.001353 [5] S. J. Glaser, U. Boscain, T. Calarco, C. P. Koch, W. Köckenberger, R. Kosloff, I. Kuprov, B. Luy, S. Schirmer, T. Schulte-Herbrüggen, D. Sugny, and F. K. Wilhelm, Eur. Phys. J. D 69 , 279 (2015). DOI:10.1140/epjd/e2015-60464-1 [6] K. M. Gaab, and C. J. Bardeen, J. Chem. Phys. 121 , 7813 (2004). DOI:10.1063/1.1786922 [7] A. Olaya-Castro, C. F. Lee, F. F. Olsen, and N. F. Johnson, Phys. Rev. B 78 , 085115 (2008). DOI:10.1103/PhysRevB.78.085115 [8] B. M. Plenio, and F. S. Huelga, New J. Phys. 10 , 113019 (2008). DOI:10.1088/1367-2630/10/11/113019 [9] F. Caruso, A. W. Chin, A. Datta, S. F. Huelga, and M. B. Plenio, J. Chem. Phys. 131 , 105106 (2009). DOI:10.1063/1.3223548 [10] F. Caruso, New J. Phys. 16 , 055015 (2014). DOI:10.1088/1367-2630/16/5/055015 [11] J. S. Cao, and R. J. Silbey, J. Phys. Chem. A 113 , 13825 (2009). [12] J. L. Wu, F. Liu, Y. Shen, J. S. Cao, and R. J. Silbey, New J. Phys. 12 , 105012 (2010). DOI:10.1088/1367-2630/12/10/105012 [13] J. Moix, J. L. Wu, P. F. Huo, D. Coker, and J. S. Cao, J. Phys. Chem. Lett. 2 , 3045 (2011). DOI:10.1021/jz201259v [14] J. L. Wu, F. Liu, J. Ma, R. J. Silbey, and J. S. Cao, J. Chem. Phys. 137 , 174111 (2012). DOI:10.1063/1.4762839 [15] J. L. Wu, R. J. Silbey, and J. S. Cao, Phys. Rev. Lett. 110 , 200402 (2013). DOI:10.1103/PhysRevLett.110.200402 [16] M. Mohseni, P. Rebentrost, S. Lloyd, and A. AspuruGuzik, J. Chem. Phys. 129 , 174106 (2008). DOI:10.1063/1.3002335 [17] P. Rebentrost, M. Mohseni, I. Kassal, S. Lloyd, and A. Aspuru-Guzik, New J. Phys. 11 , 033003 (2009). DOI:10.1088/1367-2630/11/3/033003 [18] I. Kassal, and A. Aspuru-Guzik, New J. Phys. 14 , 053041 (2012). DOI:10.1088/1367-2630/14/5/053041 [19] R. de J. Léon-Montiel, I. Kassal, and J. P. Torres, J. Phys. Chem. B 118 , 10588 (2014). DOI:10.1021/jp505179h [20] M. Mohseni, A. Shabani, S. Lloyd, and H. Rabitz, J. Phys. Chem. 140 , 035102 (2014). DOI:10.1063/1.4856795 [21] L. Mühlbacher, and U. Kleinekathöfer, J. Phys. Chem. B 116 , 3900 (2012). DOI:10.1021/jp301444q [22] E. K. Irish, R. Gomez-Bombarelli, and B. W. Lovett, Phys. Rev. A 90 , 012510 (2014). DOI:10.1103/PhysRevA.90.012510 [23] U. Weiss, Quantum Dissipative Systems, New Jersey: World Scientific Publishing Company, (2008). [24] H. Breuer and F. Petruccione, The Theory of Open Quantum Systems, New York: Oxford University Press, (2002). [25] T. Förster, Ann. Phys. (Leipzig) 437 , 55 (1948). DOI:10.1002/(ISSN)1521-3889 [26] P. Hanggi, P. Talkner, and M. Borkovec, Rev. Mod. Phys. 62 , 251 (1990). DOI:10.1103/RevModPhys.62.251 [27] B. J. Schwartz, Annu. Rev. Phys. Chem. 54 , 141 (2003). DOI:10.1146/annurev.physchem.54.011002.103811 [28] S. Mukamel, Principles of Nonlinear Optical Spectroscopy, New York: Oxford University Press, (1995). [29] V. May and K. Oliver, Charge and Energy Transfer Dynamics in Molecular Systems, Weinheim: Wiley-VCH, (2004). [30] H. Haken, and P. Reineker, Z. Phys. 249 , 253 (1972). DOI:10.1007/BF01400230 [31] H. Haken, and G. Strobl, Z. Phys. 262 , 135 (1973). DOI:10.1007/BF01399723 [32] S. Nakajima, Prog. Theor. Phys. 20 , 948 (1958). DOI:10.1143/PTP.20.948 [33] R. Zwanzig, J. Chem. Phys. 33 , 1338 (1960). DOI:10.1063/1.1731409 [34] A. G. Redfield, IBM J. Res. Dev. 19 , 1 (1957). [35] G. Lindblad, Commun. Math. Phys. 48 , 119 (1976). DOI:10.1007/BF01608499 [36] Y. Tanimura, and R. Kubo, J. Phys. Soc. Jpn. 58 , 101 (1989). DOI:10.1143/JPSJ.58.101 [37] A. Ishizaki, and Y. Tanimura, J. Phys. Soc. Jpn. 74 , 3131 (2005). DOI:10.1143/JPSJ.74.3131 [38] Y. Tanimura, J. Chem. Phys. 141 , 044114 (2014). DOI:10.1063/1.4890441 [39] Y. Tanimura, J. Chem. Phys. 142 , 144110 (2015). DOI:10.1063/1.4916647 [40] Y. A. Yan, F. Yang, Y. Liu, and J. S. Shao, Chem. Phys. Lett. 395 , 216 (2004). DOI:10.1016/j.cplett.2004.07.036 [41] J. S. Shao, Chem. Phys. 322 , 187 (2006). DOI:10.1016/j.chemphys.2005.08.007 [42] Y. Zhou, and J. S. Shao, J. Chem. Phys. 128 , 034106 (2008). DOI:10.1063/1.2818095 [43] R. X. Xu, P. Cui, X. Q. Li, Y. Mo, and Y. J. Yan, J. Chem. Phys. 122 , 041103 (2005). DOI:10.1063/1.1850899 [44] J. Hu, M. Luo, F. Jiang, R. X. Xu, and Y. J. Yan, J. Chem. Phys. 134 , 244106 (2011). DOI:10.1063/1.3602466 [45] Y. J. Yan, J. Chem. Phys. 140 , 054105 (2014). DOI:10.1063/1.4863379 [46] R. X. Xu, Y. Liu, H. D. Zhang, and Y. J. Yan, Chin. J. Chem. Phys. 30 , 395 (2017). DOI:10.1063/1674-0068/30/cjcp1706123 [47] J. M. Moix, and J. S. Cao, J. Chem. Phys. 139 , 134106 (2013). DOI:10.1063/1.4822043 [48] C. Y. Hsieh, and J. S. Cao, J. Chem. Phys. 148 , 014103 (2018). DOI:10.1063/1.5018725 [49] C. Y. Hsieh, and J. S. Cao, J. Chem. Phys. 148 , 014104 (2018). DOI:10.1063/1.5018726 [50] Q. Shi, L. P. Chen, G. J. Nan, R. X. Xu, and Y. J. Yan, J. Chem. Phys. 130 , 084105 (2009). DOI:10.1063/1.3077918 [51] H. Liu, L. L. Zhu, S. M. Bai, and Q. Shi, J. Chem. Phys. 140 , 134106 (2014). DOI:10.1063/1.4870035 [52] Z. F. Tang, X. L. Ouyang, Z. H. Gong, H. B. Wang, and J. L. Wu, J. Chem. Phys. 143 , 224112 (2015). DOI:10.1063/1.4936924 [53] C. R. Duan, Z. F. Tang, J. S. Cao, and J. L. Wu, Phys. Rev. B 95 , 214308 (2017). DOI:10.1103/PhysRevB.95.214308 [54] C. R. Duan, Q. L. Wang, Z. F. Tang, and J. L. Wu, J. Chem. Phys. 147 , 164112 (2017). DOI:10.1063/1.4997669 [55] J. L. Wu, Z. F. Tang, Z. H. Gong, J. S. Cao, and S. Mukamel, J. Phys. Chem. Lett. 6 , 1240 (2015). DOI:10.1021/acs.jpclett.5b00227 [56] The terms of 'coherent' and 'incoherent' usually refer to the local basis representation. The evolutions in the exciton population and coherence subspaces correspond to the coherent and incoherent motions of local sites, respectively. [57] J. L. Wu, and J. S. Cao, Adv. Chem. Phys. 146 , 329 (2011). [58] P. W. Anderson, Phys. Rev. 109 , 1492 (1958). DOI:10.1103/PhysRev.109.1492 [59] J. M. Moix, M. Khasin, and J. S. Cao, New J. Phys. 15 , 085010 (2013). DOI:10.1088/1367-2630/15/8/085010 [60] R. E. Fenna, and B. W. Matthews, Nature (London) 258 , 573 (1975). DOI:10.1038/258573a0 [61] D. E. Tronrud, J. Z. Wen, L. Gay, and R. E. Blankenship, Photosynth. Res. 100 , 79 (2009). DOI:10.1007/s11120-009-9430-6 [62] M. Schmidt am Busch, F. Müh, M. E. A. Madjet, and T. Renger, J. Phys. Chem. Lett. 2 , 93 (2011). DOI:10.1021/jz101541b [63] J. Adolphs, and T. Renger, Biophys. J. 91 , 2778 (2006). DOI:10.1529/biophysj.105.079483 [64] A. Ishizaki, and G. R. Fleming, Proc. Natl. Acad. Sci. USA 106 , 17255 (2009). DOI:10.1073/pnas.0908989106 [65] A. Potočnik, A. Bargerbos, F. A. Y. N. Schröder, S. A. Khan, M. C. Collodo, S. Gasparinetti, Y. Salathé, C. Creatore, C. Eichler, H. E. Türeci, A. W. Chin, and A. Wallraff, Nat. Commun. 9 , 904 (2018). DOI:10.1038/s41467-018-03312-x a. 浙江大学物理系, 杭州 310027; b. 麻省理工学院化学系, 马萨诸塞州, 剑桥 02139
2019-08-17 12:45:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.886713445186615, "perplexity": 1913.1734851276265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313259.30/warc/CC-MAIN-20190817123129-20190817145129-00087.warc.gz"}
https://chat.stackexchange.com/transcript/41/2021/12/1
7:15 AM @samcarter Needs documentation explaining how to use it @samcarter No, I'd leave it: Till always uses multiple dots, and it works OK 2 hours later… 8:54 AM @UlrikeFischer You have your interface :) 9:07 AM @JosephWright I tried yesterday night but got only errors and was to tired to figure out if you missed something or if the installation was wrong. But what I already saw: Is the naming "KeysOptions" or "KeysOption" ?? And imho SetKeysXX is not only for "inside packages" @UlrikeFischer I never said I'd checked it works :) @UlrikeFischer I fixed a few things this morning, so at least the code loads: I'll probably add some further checks today @JosephWright well there is a certain probability that if the \Declare-command is completely undefined that I messed up the installation ;-) I will try in a few minutes, but have to finish some other stuff first. @UlrikeFischer Sure: the test suite is now passing so I've not actively broken the traditional mechanism. I'll perhaps have a few minutes to look at one or two tests before a meeting @JosephWright oh and I wondered if the doc should mention that "normal" l3keys definitions can be used too in case some more complicated stuff is wanted. 9:19 AM @UlrikeFischer One of the things I wondered about, links to which properties should be available where 9:52 AM I did not know that one can blame the duolingo bloke for captchas: ted.com/talks/luis_von_ahn_massive_scale_online_collaboration 10:08 AM @UlrikeFischer I started to write some notes last night and also queried the double plural, it reads OK as DeclareKeysAsOptions but that's a bit long, otherwise each option is one key so DeclareKeyOption or possibly just DeclareKey (but I'll mail later if I get chance) (@JosephWright) @DavidCarlisle We will see what the consensus is :) @PauloCereda A South America model: youtu.be/yi6co_r1PfU?t=51 (turn on captions to get the texts in English) 11:08 AM @samcarter OOH A GIANT ARARA @PauloCereda yes, is this product placement for your cool automatisation tool? :) @samcarter royalties @PauloCereda :D @samcarter <3 1 hour later… 12:20 PM @MarcelKrüger I looked at the space-in-font-name question. The following seems to work, but do you think that luatex will complain somewhere if spaces are no longer removed? \documentclass{article} \usepackage{fontspec} \ExplSyntaxOn \sys_if_engine_luatex:T { \cs_set:Nn \__fontspec_sanitise_fontname:Nn { \tl_set:Nx #1 {#2} %\tl_remove_all:Nn #1 {~} \clist_map_inline:Nn \l__fontspec_extensions_clist { \tl_if_in:NnT #1 {##1} { \tl_remove_once:Nn #1 {##1} \tl_set:Nn \l__fontspec_extension_tl {##1} \clist_map_break: } } } 12:38 PM @UlrikeFischer you might? want to trim spaces from the ends, leaving interior ones? otherwise \setmainfont{ Arial } uses font " Arial :mode=node;script=latn;language=dflt;+tlig;"! with a space before the : which seems to work OK but .. @UlrikeFischer If the name contains a space, \fontname surrounds it with quotes. I'm not sure if that breaks something. Actually fontspec is already testing that for us: It adds a space when colors are used, so after this change all fonts will behave like fonts with color set behave currently. IIRC the only issue is with microtype which sometimes did something odd with them, but I'm not sure if that's already fixed. @DavidCarlisle For font names that shouldn't matter since we normalize them anyway, but filenames might break. So it's probably a good idea to trim them. 1:21 PM @DavidCarlisle, @UlrikeFischer I've now got a working test for the option code: the obvious bugs are fixed \documentclass{article} \begin{filecontents}{mypkg.sty} \DeclareKeysOptions{ general-option-C .store = \my@C } \ProcessKeysOptions \newcommand\mypkgtest{% \begingroup \edef\x{\my@A:\my@B:\my@C}% \show\x \endgroup } \end{filecontents} \input{test2e} \START \AUTHOR{Joseph Wright} \mypkgtest @UlrikeFischer ^^^ :) 1:55 PM @JosephWright I get an error at begin document, and preamble keys seems to be executed later anyway: \documentclass{article} \begin{filecontents}[overwrite]{mypkg.sty} \DeclareKeysOptions{ general-option-C .store = \my@C, preamble-option-D .store = \my@D, preamble-option-D .usage = preamble } \ProcessKeysOptions%{module} \newcommand\mypkgtest{% \begingroup \edef\x{\my@C:\my@D}% \show\x \endgroup } \end{filecontents} \usepackage[general-option-C= pack,preamble-option-D = pack]{mypkg} \mypkgtest \SetKeysOptions[mypkg]{general-option-C = preamble,preamble-option-D=preamble} \mypkgtest 2:11 PM @UlrikeFischer I'll take a look @UlrikeFischer What error did you see? @JosephWright this one: ! LaTeX3 Error: The key 'mypkg/mypkg.code:n' is unknown and is being ignored. @UlrikeFischer It's OK, I've worked it out :) @JosephWright I remarked that \ProcessKeysOptions{module} gives not an error. Do you look for an "optional" mandatory argument? @UlrikeFischer Yes: the l3keys2e version takes a mandatory argument, and we have to manage a transition [it's one of my queries :)] @JosephWright yes I knew about the argument, I checked the docu ;-). That's why I tested if it explodes or not. 2:19 PM @UlrikeFischer I have a low-level check: my thinking was that if there is a braced argument, it's a 'classical' set up and I leave things to the option-clash code, if there's no brace group, the package is opting-in to the new mech. and all options are keyvals @JosephWright I think it would be simpler to use different names (as I just commented in mail I'd change \ProcessKeysOptions name anyway) then you don't have to worry about l3keys2e compatibility any more than kvoptions compat or any of the other kv option handlers. @DavidCarlisle It's a draft PR for a reason ;) @JosephWright yes it's good to see plausible looking examples even if only to object to them:-) @DavidCarlisle I think \SetKeys would be fine (and curiously it seems not to be in use yet). @JosephWright actually I suppose the model of a general key setting and an option processer handling a specified key family is in fact exactly the l3keys2e model 2:33 PM @UlrikeFischer Fixed with a test @UlrikeFischer Yes, that is odd: I guess we can take it :) @DavidCarlisle Yup :) It's only a question of what 2e interface you want, and how far it goes (see team list) @JosephWright \DeclareKeys seems to be free too. @UlrikeFischer Yup @UlrikeFischer Everyone has left us with the obvious names @DavidCarlisle, @UlrikeFischer I suspect we have a plan: need to wait for FMi and (probably) Chris ... @JosephWright It remains the question for the "process command". Like David I don't quite like the two s. What about \ProcessOptionkeys or \ProcessOptionKeys? 2:51 PM @UlrikeFischer Except were are definitely processing options not an option @JosephWright I think key is an adjective here so ProcessKeyOptions ? or ProcessKeyFamilyAsPackageOptionsBasedOnExplThreeKeys 6 \catcodes @AlanMunn pesky umlauts get everywhere @DavidCarlisle Yes, that would work @DavidCarlisle Meöw 2:59 PM @AlanMunn H̤m̤m̤ @DavidCarlisle \ProcessZZZZ? @UlrikeFischer yes but drop the Process prefix 2 hours later… 4:53 PM @DavidCarlisle a cat-astrophe surely? 3 hours later… 7:49 PM just learned that github has a citation feature: docs.github.com/en/repositories/… 2 1 hour later… 9:06 PM 9:29 PM @DavidCarlisle :) @DavidCarlisle 'Bug in longtable'? ;) @JosephWright no one would believe that @DavidCarlisle Enjoy
2022-01-29 00:37:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.74417644739151, "perplexity": 4967.90137952161}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299894.32/warc/CC-MAIN-20220129002459-20220129032459-00677.warc.gz"}
https://www.physicsforums.com/threads/natural-convection-question.225587/
# Natural convection question 1. Homework Statement What is the heat transfer from a 60W electric light bulb at 127C to the stagnant air in a room at 27C. Approximate the bulb to a 50mm diameter sphere. What percentage of the power is lost by free convection? 2. Homework Equations Nu=2 + 0.6(Gr^1/4)(Pr^1/3) Gr=(g$$\beta$$$$\theta$$d^3)/$$\nu$$^2 Nu=hd/k 3. The Attempt at a Solution I started by working out the Grasshof no. with $$\nu$$ at 27C= 1.568x10^-5m^2/s, $$\beta$$=1/T = 1/27, $$\theta$$ = 100 to be Gr = 1.8x10^7. Pr = 0.707 therefore Nu = 21.702, using the relationship above. substitute this into Nu = hd/k and h= 1.139x10^-5kW/m^2K the area of a sphere = 4(pi)r^2= 7.854x10^-3 so finally Q=hAdT left me with 8.945x10^-3W which is 0.014% of the power being lost by free convection. this appears to be out by a factor of 100, but that could just be a coincidence, meaning that im totally wrong. Can anyone assist? ## Answers and Replies Related Introductory Physics Homework Help News on Phys.org Mapes Science Advisor Homework Helper Gold Member Check your calculation of $\beta$, $T$ should be in Kelvins. And what value of $k$ are you using? Shouldn't you have $h\approx\mathrm{Nu}$?
2019-12-07 08:27:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5222024321556091, "perplexity": 2275.5379195330106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540497022.38/warc/CC-MAIN-20191207082632-20191207110632-00507.warc.gz"}
https://quantnet.com/threads/bonds.47662/
# Bonds #### Tsunayoshi Let two U.S Treasury Bonds: Long Term Bond (LTB) pays $1000 at the end of ten years. Short Term Bond (STB) pays$1000 at the end of one year. 1) Effective interest rates are equal to 2% / year. Which bond will sell for a lower price? $$LTB : \frac{1000}{(1+0,02)^{10}}= 820,35 \\ STB : \frac{1000}{(1+0,02)}= 980,39$$ So it's LTB. 2) If there is a surprise interest rate cut of 0.5% for all, which bond will have a larger percentage price drop? Why would the price drop? isn't it the other way around? I don't understand it 3) Would stock prices be affected by this surprise interest cut ? If yes, in which direction ? Last edited: #### Qui-Gon 2 must be stated wrong given rates and prices move inversely to one another (except for things like MBS C-strips because rate cuts can increase prepayments which causes decrease in price). In any case, duration is an increasing function, in absolute value, of maturity, so all things the same the price change in absolute value of the LTB would be greater than that of the STB if rates change (either up or down). For 3, the rate cut pushes bond prices up and spurs more borrowing which would cause net inflow into stocks leading to higher stock prices (this is admittedly very simplified). Replies 3 Views 684 Replies 1 Views 731 Replies 4 Views 830 Replies 8 Views 2K Replies 1 Views 687
2022-12-10 03:08:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4000052213668823, "perplexity": 2538.3349042853433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711637.64/warc/CC-MAIN-20221210005738-20221210035738-00502.warc.gz"}
http://openstudy.com/updates/55d6887be4b02e69bf5c561e
## anonymous one year ago If sine of x equals 1 over 2, what is cos(x) and tan(x)? Explain your steps in complete sentences. 1. anonymous $sinx = \frac{ 1 }{ 2 }$ as such correct? 2. anonymous @iambatman yes 3. anonymous Notice the ratio for sinx is $\sin(x) = \frac{ \text{opposite} }{ \text{hypotenuse} }$ so we can make a right triangle |dw:1440123129207:dw| we can use pythagorean theorem to find the adjacent side, can you do that please? 4. anonymous |dw:1440123246771:dw| 5. anonymous @iambatman 6. anonymous $a^2+b^2 = c^2 \implies 1^2+b^2=2^2$$b^2 = 2^2 - 1^2 \implies b = \sqrt{4-1} = \sqrt{3}$ yup perfect! 7. anonymous Now since we have adjacent side we should be able to find tanx and cosx now :)$cosx = \frac{ \text{adjacent} }{ \text{hypotenuse} }$ $tanx = \frac{ \text{opposite} }{ \text{adjcanet} }$ 8. anonymous So far so good? 9. anonymous 10. anonymous Exactly! And tanx will be? :) 11. anonymous 12. anonymous Perfect :) So we have $\cos(x) = \frac{ \sqrt{3} }{ 2 }$ and $\tan(x) = \frac{ 1 }{ \sqrt{3} }$ as you mentioned! 13. anonymous
2017-01-23 07:00:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.930302083492279, "perplexity": 5105.0210742514155}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00250-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.ademcetinkaya.com/2023/03/btbt-bit-digital-inc-ordinary-shares.html
Outlook: Bit Digital Inc. Ordinary Shares is assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy : Sell Time series to forecast n: 13 Mar 2023 for (n+4 weeks) Methodology : Modular Neural Network (Financial Sentiment Analysis) ## Abstract Bit Digital Inc. Ordinary Shares prediction model is evaluated with Modular Neural Network (Financial Sentiment Analysis) and Polynomial Regression1,2,3,4 and it is concluded that the BTBT stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Sell ## Key Points 1. Why do we need predictive models? 2. Prediction Modeling 3. Probability Distribution ## BTBT Target Price Prediction Modeling Methodology We consider Bit Digital Inc. Ordinary Shares Decision Process with Modular Neural Network (Financial Sentiment Analysis) where A is the set of discrete actions of BTBT stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Polynomial Regression)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Modular Neural Network (Financial Sentiment Analysis)) X S(n):→ (n+4 weeks) $∑ i = 1 n s i$ n:Time series to forecast p:Price signals of BTBT stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## BTBT Stock Forecast (Buy or Sell) for (n+4 weeks) Sample Set: Neural Network Stock/Index: BTBT Bit Digital Inc. Ordinary Shares Time series to forecast n: 13 Mar 2023 for (n+4 weeks) According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Sell X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for Bit Digital Inc. Ordinary Shares 1. Sales that occur for other reasons, such as sales made to manage credit concentration risk (without an increase in the assets' credit risk), may also be consistent with a business model whose objective is to hold financial assets in order to collect contractual cash flows. In particular, such sales may be consistent with a business model whose objective is to hold financial assets in order to collect contractual cash flows if those sales are infrequent (even if significant in value) or insignificant in value both individually and in aggregate (even if frequent). If more than an infrequent number of such sales are made out of a portfolio and those sales are more than insignificant in value (either individually or in aggregate), the entity needs to assess whether and how such sales are consistent with an objective of collecting contractual cash flows. Whether a third party imposes the requirement to sell the financial assets, or that activity is at the entity's discretion, is not relevant to this assessment. An increase in the frequency or value of sales in a particular period is not necessarily inconsistent with an objective to hold financial assets in order to collect contractual cash flows, if an entity can explain the reasons for those sales and demonstrate why those sales do not reflect a change in the entity's business model. In addition, sales may be consistent with the objective of holding financial assets in order to collect contractual cash flows if the sales are made close to the maturity of the financial assets and the proceeds from the sales approximate the collection of the remaining contractual cash flows. 2. However, the designation of the hedging relationship using the same hedge ratio as that resulting from the quantities of the hedged item and the hedging instrument that the entity actually uses shall not reflect an imbalance between the weightings of the hedged item and the hedging instrument that would in turn create hedge ineffectiveness (irrespective of whether recognised or not) that could result in an accounting outcome that would be inconsistent with the purpose of hedge accounting. Hence, for the purpose of designating a hedging relationship, an entity must adjust the hedge ratio that results from the quantities of the hedged item and the hedging instrument that the entity actually uses if that is needed to avoid such an imbalance 3. If a variable-rate financial liability bears interest of (for example) three-month LIBOR minus 20 basis points (with a floor at zero basis points), an entity can designate as the hedged item the change in the cash flows of that entire liability (ie three-month LIBOR minus 20 basis points—including the floor) that is attributable to changes in LIBOR. Hence, as long as the three-month LIBOR forward curve for the remaining life of that liability does not fall below 20 basis points, the hedged item has the same cash flow variability as a liability that bears interest at three-month LIBOR with a zero or positive spread. However, if the three-month LIBOR forward curve for the remaining life of that liability (or a part of it) falls below 20 basis points, the hedged item has a lower cash flow variability than a liability that bears interest at threemonth LIBOR with a zero or positive spread. 4. For example, an entity may use this condition to designate financial liabilities as at fair value through profit or loss if it meets the principle in paragraph 4.2.2(b) and the entity has financial assets and financial liabilities that share one or more risks and those risks are managed and evaluated on a fair value basis in accordance with a documented policy of asset and liability management. An example could be an entity that has issued 'structured products' containing multiple embedded derivatives and manages the resulting risks on a fair value basis using a mix of derivative and non-derivative financial instruments *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions Bit Digital Inc. Ordinary Shares is assigned short-term Ba1 & long-term Ba1 estimated rating. Bit Digital Inc. Ordinary Shares prediction model is evaluated with Modular Neural Network (Financial Sentiment Analysis) and Polynomial Regression1,2,3,4 and it is concluded that the BTBT stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Sell ### BTBT Bit Digital Inc. Ordinary Shares Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementB3B2 Balance SheetBa3C Leverage RatiosBa3B3 Cash FlowB2Baa2 Rates of Return and ProfitabilityBaa2Caa2 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 81 out of 100 with 789 signals. ## References 1. Athey S, Imbens GW. 2017a. The econometrics of randomized experiments. In Handbook of Economic Field Experiments, Vol. 1, ed. E Duflo, A Banerjee, pp. 73–140. Amsterdam: Elsevier 2. Matzkin RL. 1994. Restrictions of economic theory in nonparametric methods. In Handbook of Econometrics, Vol. 4, ed. R Engle, D McFadden, pp. 2523–58. Amsterdam: Elsevier 3. Morris CN. 1983. Parametric empirical Bayes inference: theory and applications. J. Am. Stat. Assoc. 78:47–55 4. M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Ma- chine Learning, Proceedings of the Eleventh International Conference, Rutgers University, New Brunswick, NJ, USA, July 10-13, 1994, pages 157–163, 1994 5. Jiang N, Li L. 2016. Doubly robust off-policy value evaluation for reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, pp. 652–61. La Jolla, CA: Int. Mach. Learn. Soc. 6. Bessler, D. A. S. W. Fuller (1993), "Cointegration between U.S. wheat markets," Journal of Regional Science, 33, 481–501. 7. G. Theocharous and A. Hallak. Lifetime value marketing using reinforcement learning. RLDM 2013, page 19, 2013 Frequently Asked QuestionsQ: What is the prediction methodology for BTBT stock? A: BTBT stock prediction methodology: We evaluate the prediction models Modular Neural Network (Financial Sentiment Analysis) and Polynomial Regression Q: Is BTBT stock a buy or sell? A: The dominant strategy among neural network is to Sell BTBT Stock. Q: Is Bit Digital Inc. Ordinary Shares stock a good investment? A: The consensus rating for Bit Digital Inc. Ordinary Shares is Sell and is assigned short-term Ba1 & long-term Ba1 estimated rating. Q: What is the consensus rating of BTBT stock? A: The consensus rating for BTBT is Sell. Q: What is the prediction period for BTBT stock? A: The prediction period for BTBT is (n+4 weeks)
2023-03-23 08:41:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5054847598075867, "perplexity": 5058.354777923671}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945030.59/warc/CC-MAIN-20230323065609-20230323095609-00402.warc.gz"}
https://la.mathworks.com/help/control/ref/lti.evalfr.html
# evalfr Evaluate frequency response at given frequency ## Syntax ```frsp = evalfr(sys,f) ``` ## Description `frsp = evalfr(sys,f) ` evaluates the transfer function of the TF, SS, or ZPK model `sys` at the complex number `f`. For state-space models with data (ABCD), the result is H(f) = D + C(fI – A)–1B `evalfr` is a simplified version of `freqresp` meant for quick evaluation of the response at a single point. Use `freqresp` to compute the frequency response over a set of frequencies. ## Examples collapse all Create the following discrete-time transfer function. `$H\left(z\right)=\frac{z-1}{{z}^{2}+z+1}$` `H = tf([1 -1],[1 1 1],-1);` Evaluate the transfer function at `z = 1+j`. ```z = 1+j; evalfr(H,z)``` ```ans = 0.2308 + 0.1538i ``` Create the following continuous-time transfer function model: `$H\left(s\right)=\frac{1}{{s}^{2}+2s+1}$` `sys = idtf(1,[1 2 1]);` Evaluate the transfer function at frequency 0.1 rad/second. ```w = 0.1; s = j*w; evalfr(sys,s)``` ```ans = 0.9705 - 0.1961i ``` Alternatively, use the `freqresp` command. `freqresp(sys,w)` ```ans = 0.9705 - 0.1961i ``` ## Limitations The response is not finite when `f` is a pole of `sys`.
2021-08-01 11:10:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8825350403785706, "perplexity": 1944.2130437670926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154175.76/warc/CC-MAIN-20210801092716-20210801122716-00499.warc.gz"}
http://clay6.com/qa/36984/if-f-and-g-are-differentiable-function-in-0-1-satisfying-f-0-2-g-1-g-0-0-an
Browse Questions # If $f$ and $g$ are differentiable function in $[0, 1]$ satisfying $f(0)=2 = g(1),g(0)= 0$ and $f(1)=6$, then for some $c\in [0,1]$ $\begin{array}{1 1}(A)\;2f'(c)=g'(c)\\(B)\;2f'(c)=3g'(c)\\(C)\;f'(c)=g'(c)\\(D)\;f'(c)=2g'(c)\end{array}$ Can you answer this question? $f'(c)=2g'(c)$ Hence (D) is the correct answer. answered Apr 7, 2014
2017-02-25 21:07:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.982943594455719, "perplexity": 602.6722931705409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00469-ip-10-171-10-108.ec2.internal.warc.gz"}
http://www.cl.eps.manchester.ac.uk/medialand/maths/archived-events/seminars/www.mims.manchester.ac.uk/events/seminars/pure-postgraduate.html
Please note, this is archived content and is no longer being updated. It is provided for historical records. Links on the pages may be broken or redirect to our current site. Archived material You are here: MIMS > events > seminars > pure postgraduate MIMS The Pure Postgraduate Seminar Series provides an informal environment for pure maths postgraduates to present mathematics, either from their research or just a topic of interest. If you would like to give a talk or have any comments or suggestions as to the organisation of the seminars please contact Matthew Taylor or Nic Clarke. Every week, a reminder will be sent to all pure postgraduates. If you are not a pure postgraduate and would also like to be sent a reminder then please e-mail us to be added to the list. The seminars are held in Frank Adams 1 in the Alan Turing Building, on Fridays from 4pm to 5pm. We will have tea, coffee and biscuits before the seminar at 3:45pm on the Atrium bridge. Afterwards we usually go to Sandbar. You are currently looking at the Spring 2014 schedule. For the Autumn 2013 seminar timetable, please click here. ## Upcoming seminars • 17th January 2014 The Weyl Algebra Andrew Davies Abstract (click to view) Some past attendees may recall previous seminars in which Sian has spoken about the quantum plane, a simple enough looking ring that still has many interesting properties. In this seminar I will show that the Weyl algebra (another easy-to-define ring) also displays some very curious behaviour. Among other things, I will speak about its relations to Lie theory, which illustrate some useful techniques when working with noncommutative algebras in general. • 24th January 2014 Local cohomology Nic Clarke Abstract (click to view) In the early 1960's Grothendieck, in a series of seminars, introduced the idea of local cohomology. Given a ring R, an ideal I and an R-module M, I will show how to construct the Cech complex and compute the local cohomology of M with support in I. There will be plenty of examples. From the definition it will be apparent that the local cohomology inherits the R-module structure. However, the behaviour of local cohomology modules remains quite mysterious and is an area of active research. I will highlight some of the important structure theorems and will describe applications to finding the minimal number of defining equations of an affine/projective variety. If I have time at the end I will explain how the algebraic perspective ties in with Grothendieck's original definition and try to give some intuition as to why local cohomology is 'local'. • 31st January 2014 An intro to the theory of monads Sam Dean Abstract (click to view) Throughout the ages, various interpretations of the words "algebraic theory" have arisen. These have given rise to a field of mathematical logic called universal algebra, which studies these different interpretations and how they are related. In this talk, I will discuss just two of these. First, I will discuss how model theorists might interpret the words "algebraic theory", and then go on to what a category theorist is likely to mean by "algebraic theory": a monad. I’ll discuss how these approaches compare, in particular showing that the model theoretic method fits into the theory of monads. I’ll also use the theory of monads to show how you can PROVE that topology is not algebra. • 7th February 2014 Transfer Operators Anthony Chiu Abstract (click to view) An iterated function system (IFS) is a fixed set of maps that are applied at random in a sensible manner. How can we study the typical behaviour of an IFS? A powerful approach is to use a transfer operator, a tool that encodes the IFS in a way that is easier to deal with. Although some of the definitions and theorems may look like messy functional analysis, I will use a simple diagram to explain the details in a much easier way. I will also give an idea of how different kinds of transfer operators can be used together to prove certain properties of the IFS. • 14th February 2014 Linearizing certain Boolean algebras Amit Kuber Abstract (click to view) Boolean algebras are quintessential objects in almost all areas of mathematics. A (freely generated) Boolean algebra can be analyzed using a family of integer valued valuations (i.e., finitely additive measures). I will describe the construction of such valuations using localization and discuss techniques combining (semi-)lattice theory and simplicial homology. By the end of the talk, you will certainly see the hidden geometry in these algebraic objects! • 21st February 2014 The Ping-Pong Lemma Jamie Phillips Abstract (click to view) “Freedom is like drink. If you take any at all, you might as well take enough to make you happy for a while.” – Finley Peter Dunne. Free objects in a category (whatever they may be) are the most basic objects in mathematics. A paradigm is the theory of free groups. They arose naturally through the study of the geometry of hyperbolic groups but their fundamental role in group theory was recognised by Nielson (who named them), Dehn, and others. We'll begin with a crash course in free group theory before proving The Ping-Pong Lemma, a statement which ensures that several elements in a group acting on a set freely generate a free subgroup of that group. We’ll see examples of it in action and reformulations of the result in other areas of pure mathematics. Time permitting, I’ll also discuss the role the lemma plays in the proof of Tits Alternative, an important theorem about the structure of finitely generated linear groups. The group theoretic ideas involved are fairly elementary so the talk should be approachable to all. • 28th February 2014 Making Clocks with Maths Tom Withers Abstract (click to view) Sundials are usually a simple object where pointing a rod south produces a shadow which we can draw a clock around and use to tell the time. They might look pretty in the garden, but they are very difficult to use practically. I'm going to construct an argument showing the existence of a sundial which can arbitrarily accurately give the time using a digital clock face, the only moving part being the sun. We will need ideas from dimension theory and fractal geometry. There hasn't been a dimension theory talk yet this year, so, I'll start from scratch and look at the basics and why we bother with it. Then I'll do some easy examples of constructing fractal sets using iterated function systems and the properties of projections of these fractal sets; eventually arguing for the existence of our clock, hopefully just before the sun goes down. • 7th March 2014 Introduction to amalgamating structures Alex Antao Abstract (click to view) Amalgamation is a way of gluing together overlapping structures to form a structure which is similar to all the orginal ones. A little more precisely, let $\mathscr{K}$ be a class of structures. Then $\mathscr{K}$ is said to have the amalgamation property (AP) if any $\mathcal{B}, \mathscr{C} \in \mathscr{K}$ with a common substructure $\mathcal{A}$ (not necessarily in $\mathscr{K}$) can be embedded in some structure $\mathcal{D} \in \mathscr{K}$, such that the embeddings agree when restricted to $\mathcal{A}$. This notion of amalgamation was introduced by Roland Fraïssé in the 1950s. Subsequently variants of amalgamation have been formulated, some of which will be considered in the talk, given enough time. Such results are vital tools in model theory for constructing new structures with desirable properties or classifying existing structures by looking at how they are "built up" around substructures. The talk will contain mathematical examples (and maybe non-mathematical pseudo-examples) of amalgamation in action. Unlike Sandbar's finest whiskey, it should also be fairly light on proof. Note: There will be some model theory, but the necessary "evils" will either be casually defined or described intuitively in the talk ad hoc. In other words, everyone is welcome!* * Though if you think this all looks as dour as Gordon Brown at a funeral, probably stay away. • 14th March 2014 The Maths Behind Bitcoin Matthew Taylor Abstract (click to view) As something of a computer geek, I've been asked several times in the past to explain what Bitcoin is. There's no easy answer, but you can think of Bitcoin as a digital currency. You can buy things with it and sell things for it just like with real money; Bitcoin has "value" because enough people believe it's worth something. That, however, is for an economics talk. We're not economists. When your currency is entirely digital, a number of problems arise. How do you regulate the creation of new Bitcoins? How do you verify who's sending what to where? And how do you protect people's money? The answer to all of these is public-key cryptography, a mathematical endeavour which underpins a surprising amount of the entire system. In this talk, I'll be giving a brief overview of what a "difficult" problem is in cryptographic terms - and how the Bitcoin system works - followed by a look at hash functions, Bitcoin transactions and what "mining" actually is. • 21st March 2014 Introduction to supermanifolds Matthew Peddie Abstract (click to view) Supermathematics became useful in physical theories when supersymmetric models were used to relate two types of elementary particle, the boson and the fermion, which have a different quantum nature. These supersymmetric models provide a link between classical and quantum physics and provide a much simpler way to analyse a quantum field theory that possesses supersymmetry. Despite mounting arguments and a lack of evidence, suggesting that supersymmetry may just be fundamentally wrong, the maths is still there! In this we introduce the supermanifold, the space where our theories are set, and with time permitting, look at how we might integrate over this new space with the Berezin integral. This should be widely accessible with absolutely no understanding of physics needed. • 28th March 2014 Simple Lie algebras and their maximal subalgebras Tom Purslow Abstract (click to view) In 1894 Cartan produced a classification of finite dimensional simple Lie algebras over the complex numbers, sadly it has been 120 years and the classification of simple Lie algebras over fields of positive characteristic is still incomplete. But that’s not a problem we will be solving this Friday! Instead we will be looking at some of the tools Dynkin used in his classification of simple Lie algebras. Then we’ll ask the question of whether this helps us in characteristic p and see some of the differences that happen and if there is time we will look at maximal subalgebras in certain exceptional Lie algebras in fields of positive characteristic. Like David Moyes we will haplessly attempt to reuse some of the building blocks laid out for us by former legends, but unlike David Moyes we will have some success! • 2nd May 2014 Hilbert's 17th Problem Laura Phillips Abstract (click to view) Hilbert's 17th problem asks whether a polynomial in $n$ variables over $\mathbb{R}$ that is non-negative on all of $\mathbb{R}^n$ can be written as the sum of squares of rational functions. In this seminar I'll present a (positive) solution to the problem and describe the wider context that it sits in. If there's time I'll talk about some of the more quantitative aspects of the problem and some variants. • 9th May 2014 Metric spaces aren't torsors David Wilding Abstract (click to view) But (you might reasonably ask) was there any danger of them being torsors in the first place? Actually, you are probably just wondering what a 'torsor' is. A torsor is essentially a group, except we have "forgotten" which element is the identity element. Given a torsor for the additive group of real numbers, we can define a distance function on the torsor that satisfies all but one of the metric space axioms. This suggests that torsors generalise metric spaces, but it turns out that a torsor can never be a metric space. Instead, metric spaces and torsors have a common generalisation, which I will describe. • 16th May 2014 Vector fields and Lie algebras Matthew Peddie Abstract (click to view) The structure of a Lie algebra can be described by a weight +1 homological vector field on the space of shifted parity considered as a supermanifold. We give some background and then show how this is done. • 23rd May 2014 Fermat's Last Theorem, 20th Anniversary Goran Malic Abstract (click to view) This year will mark the 20th anniversary of Andrew Wiles' proof of Fermat's Last Theorem (FLT). As it is very well known within the maths community, in June 1993 Wiles presented what he thought was a proof of FLT. However, in August of the same year, during the review process, Nick Katz alerted Wiles of a flaw in his argument. But all was not lost; in September 1994 Wiles corrected the proof and in October 1994 submitted two papers to the Princeton Annals of Mathematics (which were published in May 1995), and the rest is, as they say, history. • 30th May 2014 TBD Lyndsey Clark Abstract (click to view) An abstract. • 6th June 2014 TBD Abstract (click to view) An abstract. • 13th June 2014 TBD Office 2.121 Abstract (click to view) An abstract. • 20th June 2014 TBD Simon Baker Abstract (click to view) An abstract.
2018-06-25 11:39:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6289687752723694, "perplexity": 782.7642218583271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867666.97/warc/CC-MAIN-20180625111632-20180625131632-00252.warc.gz"}
https://zbmath.org/?q=an:06301701
## Isomonodromic differential equations and differential categories.(English)Zbl 1332.12011 Summary: We study isomonodromicity of systems of parameterized linear differential equations and related conjugacy properties of linear differential algebraic groups by means of differential categories. We prove that isomonodromicity is equivalent to isomonodromicity with respect to each parameter separately under a filtered-linearly closed assumption on the field of functions of parameters. Our result implies that one does not need to solve any nonlinear differential equations to test isomonodromicity anymore. This result cannot be further strengthened by weakening the requirement on the parameters as we show by giving a counterexample. Also, we show that isomonodromicity is equivalent to conjugacy to constants of the associated parameterized differential Galois group, extending a result of P. Cassidy and M. Singer, which we also prove categorically. We illustrate our main results by a series of examples, using, in particular, a relation between the Gauss-Manin connection and parameterized differential Galois groups. ### MSC: 12H20 Abstract differential equations 13N10 Commutative rings of differential operators and their modules 20G05 Representation theory for linear algebraic groups 34M56 Isomonodromic deformations for ordinary differential equations in the complex domain 37K20 Relations of infinite-dimensional Hamiltonian and Lagrangian dynamical systems with algebraic geometry, complex analysis, and special functions 58A12 de Rham theory in global analysis Full Text: ### References: [1] Arreche, C., Computing the differential Galois group of a one-parameter family of second order linear differential equations, (2012) [2] Arreche, C., A Galois-theoretic proof of the differential transcendence of the incomplete gamma function, J. Algebra, 389, 119-127, (2013) · Zbl 1320.34121 [3] Bélair, L.; Macintyre, A.; Scanlon, T., Model theory of the Frobenius on the Witt vectors, Am. J. Math., 129, 665-721, (2007) · Zbl 1121.03043 [4] Besser, A., Heidelberg lectures on Coleman integration, (The Arithmetic of Fundamental Groups, PIA 2010, Contributions in Mathematical and Computational Sciences, vol. 2, (2012), Springer), 3-52 · Zbl 1315.14033 [5] Bessonov, M.; Ovchinnikov, A.; Shapiro, M., Integrability conditions for parameterized linear difference equations, (Proceedings of the 38th International Symposium on International Symposium on Symbolic and Algebraic Computation, ISSAC 2013, (2013), ACM Press New York), 45-52 · Zbl 1360.12003 [6] Cassidy, P., Differential algebraic groups, Am. J. Math., 94, 891-954, (1972) · Zbl 0258.14013 [7] Cassidy, P., The differential rational representation algebra on a linear differential algebraic group, J. Algebra, 37, 223-238, (1975) · Zbl 0318.12105 [8] Cassidy, P.; Singer, M., Galois theory of parametrized differential equations and linear differential algebraic group, (IRMA Lectures in Mathematics and Theoretical Physics, vol. 9, (2007), European Mathematical Society), 113-157 [9] Cassidy, P.; Singer, M., A Jordan-Hölder theorem for differential algebraic groups, J. Algebra, 328, 190-217, (2011) · Zbl 1234.12003 [10] Chen, S.; Kauers, M.; Singer, M., Telescopers for rational and algebraic functions via residues, (Proceedings of the 37th International Symposium on Symbolic and Algebraic Computation, ISSAC 2012, (2012), ACM Press New York), 130-137 · Zbl 1323.68592 [11] Clemens, C. H., A scrapbook of complex curve theory, (2002), American Mathematical Society [12] Demazure, M., Schémas en groupes réductifs, Bull. Soc. Math. Fr., 93, 369-413, (1965) · Zbl 0163.27402 [13] Dreyfus, T., The Kovacic’s algorithm for parameterized differential Galois theory, Proc. Am. Math. Soc., (2014), in press [14] Gillet, H., Differential algebra—a scheme theory approach, (Differential Algebra and Related Topics, Newark, NJ, 2000, (2002), World Sci. Publ. River Edge, NJ), 95-123 · Zbl 1051.13011 [15] Gillet, H.; Gorchinskiy, S.; Ovchinnikov, A., Parameterized Picard-Vessiot extensions and Atiyah extensions, Adv. Math., 238, 322-411, (2013) · Zbl 1328.12010 [16] Hardouin, C., Hypertranscendance et groupes de Galois aux différences, (2006) [17] Hardouin, C., Hypertranscendance des systèmes aux différences diagonaux, Compos. Math., 144, 565-581, (2008) · Zbl 1183.39005 [18] Hardouin, C.; Singer, M., Differential Galois theory of linear difference equations, Math. Ann., 342, 333-377, (2008) · Zbl 1163.12002 [19] Hardouin, C.; di Vizio, L., Algebraic and differential generic Galois groups for q-difference equations, (2010) [20] Hardouin, C.; di Vizio, L., Courbures, groupes de Galois génériques et D-groupoïdes de Galois d’un système aux D-différences, C. R. Math. Acad. Sci., 348, 951-954, (2010) · Zbl 1245.12006 [21] Hardouin, C.; di Vizio, L., Descent for differential Galois theory of difference equations. confluence and q-dependency, Pac. J. Math., 256, 79-104, (2012) · Zbl 1258.12004 [22] Jimbo, M.; Miwa, T., Deformation of linear ordinary differential equations, Proc. Jpn. Acad., Ser. A, Math. Sci., 56, 143-148, (1980) · Zbl 0453.34007 [23] Jimbo, M.; Miwa, T.; Ueno, K., Monodromy preserving deformation of linear ordinary differential equations with rational coefficients. I. general theory and τ-function, Physica D, 2, 306-352, (1981) · Zbl 1194.34167 [24] Johnson, J.; Reinhart, G.; Rubel, L., Some counterexamples to separation of variables, J. Differ. Equ., 121, 42-66, (1995) · Zbl 0835.35005 [25] M. Kamensky, Model theory and the Tannakian formalism, Trans. Am. Math. Soc. (2015), in press, http://arxiv.org/abs/0908.0604. · Zbl 1375.03033 [26] Kamensky, M., Tannakian formalism over fields with operators, Int. Math. Res. Not., 361, 163-171, (2012) [27] Kolchin, E., Differential algebra and algebraic groups, (1973), Academic Press New York · Zbl 0264.12102 [28] Kolchin, E., Differential algebraic groups, (1985), Academic Press New York · Zbl 0556.12006 [29] Landesman, P., Generalized differential Galois theory, Trans. Am. Math. Soc., 360, 4441-4495, (2008) · Zbl 1151.12004 [30] Lang, S., Algebra, (2002), Springer New York · Zbl 0984.00001 [31] Magid, A., Lectures on differential Galois theory, (1994), American Mathematical Society Providence, RI · Zbl 0855.12001 [32] Magid, A., The Picard-Vessiot antiderivative closure, J. Algebra, 244, 1-18, (2001) · Zbl 1049.12006 [33] Malgrange, B., Sur LES déformations isomonodromiques. I. singularités irrégulierès, Prog. Math., 37, 401-426, (1983) · Zbl 0528.32018 [34] Malgrange, B., Sur LES déformations isomonodromiques. II. singularités régulierès, Prog. Math., 37, 427-438, (1983) · Zbl 0528.32017 [35] Minchenko, A.; Ovchinnikov, A., Zariski closures of reductive linear differential algebraic groups, Adv. Math., 227, 1195-1224, (2011) · Zbl 1215.12009 [36] Minchenko, A.; Ovchinnikov, A., Extensions of differential representations of $$\operatorname{SL}_2$$ and tori, J. Inst. Math. Jussieu, 12, 199-224, (2013) · Zbl 1295.12008 [37] Minchenko, A.; Ovchinnikov, A.; Singer, M. F., Reductive linear differential algebraic groups and the Galois groups of parameterized linear differential equations, (2013) [38] Minchenko, A.; Ovchinnikov, A.; Singer, M. F., Unipotent differential algebraic groups as parameterized differential Galois groups, J. Inst. Math. Jussieu, 13, (2014), in press · Zbl 1364.12005 [39] Mitschi, C.; Singer, M., Monodromy groups of parameterized linear differential equations with regular singularities, Bull. Lond. Math. Soc., 44, 913-930, (2012) · Zbl 1254.34124 [40] Mitschi, C.; Singer, M., Projective isomonodromy and Galois groups, Proc. Am. Math. Soc., 141, 605-617, (2013) · Zbl 1268.34187 [41] Ovchinnikov, A., Tannakian approach to linear differential algebraic groups, Transform. Groups, 13, 413-446, (2008) · Zbl 1231.20045 [42] Ovchinnikov, A., Differential Tannakian categories, J. Algebra, 321, 3043-3062, (2009) · Zbl 1173.18003 [43] Ovchinnikov, A., Tannakian categories, linear differential algebraic groups, and parametrized linear differential equations, Transform. Groups, 14, 195-223, (2009) · Zbl 1229.18008 [44] Ovchinnikov, A., Difference integrability conditions for parameterized linear difference and differential equations, (2013) · Zbl 1360.12003 [45] Positsel’skii, L., Nonhomogeneous quadratic duality and curvature, Funct. Anal. Appl., 27, 197-204, (1993) · Zbl 0826.16041 [46] van der Put, M.; Singer, M., Galois theory of linear differential equations, (2003), Springer Berlin · Zbl 1036.12008 [47] Sabbah, C., The work of andrey bolibrukh on isomonodromic deformations, (IRMA Lectures in Mathematics and Theoretical Physics, vol. 9, (2007), European Mathematical Society), 9-25 · Zbl 1356.32003 [48] Scanlon, T., A model complete theory of valued D-fields, J. Symb. Log., 65, 1758-1784, (2000) · Zbl 0977.03021 [49] Scanlon, T., Model theory of valued D-fields, (May 1997), Harvard University, PhD thesis [50] Seidenberg, A., Abstract differential algebra and the analytic case, Proc. Am. Math. Soc., 9, 159-164, (1958) · Zbl 0186.07502 [51] Sibuya, Y., Linear differential equations in the complex domain: problems of analytic continuation, vol. 82, (1990), American Mathematical Society Providence, RI [52] Singer, M., Linear algebraic groups as parameterized Picard-Vessiot Galois groups, J. Algebra, 373, 153-161, (2013) · Zbl 1296.12003 [53] Sit, W., Differential algebraic subgroups of $$\operatorname{SL}(2)$$ and strong normality in simple extensions, Am. J. Math., 97, 627-698, (1975) · Zbl 0343.20028 [54] Springer, T. A., Invariant theory, (1977), Springer-Verlag Berlin, New York · Zbl 0346.20020 [55] Stalder, N., Scalar extension of abelian and Tannakian categories, (2008) [56] Umemura, H., Invitation to Galois theory, (IRMA Lectures in Mathematics and Theoretical Physics, vol. 9, (2007), European Mathematical Society), 269-289 · Zbl 1356.12006 [57] Wibmer, M., Existence of ∂-parameterized Picard-Vessiot extensions over fields with algebraically closed constants, J. Algebra, 361, 163-171, (2012) · Zbl 1280.12003 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2022-05-24 03:14:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7652141451835632, "perplexity": 4647.8186858810905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562410.53/warc/CC-MAIN-20220524014636-20220524044636-00229.warc.gz"}
https://electronics.stackexchange.com/questions/274454/problem-about-nyquest-signalling-theorem
# Problem about Nyquest signalling theorem? A sinusoidal signal with amplitude 20 V and bandwidth 35 kHz is sampled at 2.5 times its Nyquist rate and then quantized using a 128-level quantizer. Determine: i) The resulting data bit rate? The Bit rate= 2Wm=2(35K)(7)(2.5)=1.225M bit/s ii) The Signal to Quantization Noise Ratio (SQNR) in dBs of the resulting sampled signal assuming the sinusoidal signal amplitude covers all levels of the quantizer? I know the power of the signal , But how to get the noise power.? iii) The minimum required baseband channel bandwidth needed for transmitting the digital bits assuming that Manchester line coding is used. The required bandwidth is 2R= 2(1.225M) Can someone help me solving this problem ? • A sine wave has a band width of zero theoretically. It may have a frequency of 35 kHz of course. – Andy aka Dec 12 '16 at 11:21
2019-07-20 12:04:54
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8023319244384766, "perplexity": 2072.1572817173824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526508.29/warc/CC-MAIN-20190720111631-20190720133631-00530.warc.gz"}
https://jp.mathworks.com/help/physmod/sps/ref/smac8c.html
# SM AC8C Discrete-time or continuous-time synchronous machine AC8C excitation system including an automatic voltage regulator and an exciter • Library: • Simscape / Electrical / Control / SM Control ## Description The SM AC8C block implements a synchronous machine type AC8C excitation system model in conformance with IEEE 421.5-2016[1]. Use this block to model the control and regulation of the field voltage of a synchronous machine that operates as a generator using an AC rotating exciter. You can switch between continuous and discrete implementations of the block by using the Sample time (-1 for inherited) parameter. To configure the integrator for continuous time, set the Sample time (-1 for inherited) property to `0`. To configure the integrator for discrete time, set the Sample time (-1 for inherited) property to a positive, nonzero value, or to `-1` to inherit the sample time from an upstream block. The SM AC8C block is made up of five major components: • The Current Compensator modifies the measured terminal voltage as a function of the terminal current. • The Voltage Measurement Transducer simulates the dynamics of a terminal voltage transducer using a low-pass filter. • The Excitation Control Elements component compares the voltage transducer output with a terminal voltage reference to produce a voltage error. This voltage error is then passed through a voltage regulator to produce the exciter field voltage. • The AC Rotating Exciter models the AC rotating exciter, which produces a field voltage that is applied to the controlled synchronous machine. The block also feeds the exciter field current (which is given the standard symbol VFE) back to the excitation system. • The Power Source models the dependency of the power source for the controlled rectifier from the terminal voltage. This diagram shows the overall structure of the AC8C excitation system model: In the diagram: • VT and IT are the measured terminal voltage and current of the synchronous machine. • VC1 is the current-compensated terminal voltage. • VC is the filtered, current-compensated terminal voltage. • VREF is the reference terminal voltage. • VS is the power system stabilizer voltage. • SW1 is the user-selected power source switch for the controlled rectifier. • VB is the exciter field voltage. • EFE and VFE are the exciter field voltage and current, respectively. • EFD and IFD are the field voltage and current, respectively. The following sections describe each of the major parts of the block in detail. ### Current Compensator and Voltage Measurement Transducer The current compensator is modeled as: `${V}_{C1}={V}_{T}+{I}_{T}\sqrt{{R}_{C}^{2}+{X}_{C}^{2}},$` where: • RC is the load compensation resistance. • XC is the load compensation reactance. The voltage measurement transducer is implemented as a Low-Pass Filter block with time constant TR. Refer to the documentation for this block for the discrete and continuous implementations. ### Excitation Control Elements This diagram illustrates the overall structure of the excitation control elements: In the diagram: • The Summation Point Logic subsystem models the summation point input location for the overexcitation limiter (OEL), underexcitation limiter (UEL), stator current limiter (SCL), and the power switch selector (V_S) voltages. For more information about using limiters with this block, see Field Current Limiters. • There are two Take-over Logic subsystems. They model the take-over point input location for the OEL, UEL, SCL and PSS voltages. For more information about using limiters with this block, see Field Current Limiters. • The PID_R subsystem models a PID controller that functions as a control structure for the automatic voltage regulator. The minimum and maximum anti-windup saturation limits for the block are VPIDmin and VPIDmax, respectively. • The Low-Pass Filter block models the major dynamics of the voltage regulator. Here, KA is the regulator gain and TA is the major time constant of the regulator. The minimum and maximum anti-windup saturation limits for the block are VRmin and VRmax, respectively. • The Logical switch 1 parameter controls the origin of the power source for the controlled rectifier. The voltage regulator command signal VR is multiplied by the exciter field voltage, VB. For more information about the user-selected logical switch for the power source of the controlled rectifier, see Power Source. ### Field Current Limiters You can use various field current limiters to modify the output of the voltage regulator under unsafe operating conditions: • Use an overexcitation limiter to prevent overheating of the field winding due to excessive field current demand. • Use an underexcitation limiter to boost field excitation when it is too low, which risks desynchronization. • Use a stator current limiter to prevent overheating of the stator windings due to excessive current. Attach the output of any of these limiters at one of these points: • The summation point as part of the automatic voltage regulator (AVR) feedback loop • The take-over point to override the usual behavior of the AVR If you are using the stator current limiter at the summation point, use the single input VSCLsum. If you are using the stator current limiter at the take-over point, use both the overexcitation input, VOELscl, and the underexcitation input, VUELscl. ### AC Rotating Exciter This diagram illustrates the overall structure of the AC rotating exciter: In the diagram: • The exciter field current VFE is modeled as the summation of three signals: • The nonlinear function Vx models the saturation of the exciter output voltage. • The proportional term KE models the linear relationship between exciter output voltage and the exciter field current. • The demagnetizing effect of the load current on the exciter output voltage is modeled using the demagnetization constant KD in the feedback loop. • The Integrator with variable limits subsystem integrates the difference between EFE and VFE to generate the exciter alternator output voltage VE. TE is the time constant for this process. • The nonlinear function FEX models the exciter output voltage drop from the rectifier regulation. This function depends on the constant KC, which itself is a function of commutating reactance. • The parameters VEmin and VFEmax model the lower and upper limits of the rotating exciter. ### Power Source It is possible to use different power source representations for the controlled rectifier by selecting the relevant option in the Logical switch 1 parameter. The power source for the controlled rectifier can be either derived from the terminal voltage (```Position A: power source derived from terminal voltage```) or it can be independent of the terminal voltage (```Position B: power source independent from the terminal conditions```). ## Ports ### Input expand all Voltage regulator reference set point, in per-unit representation, specified as a scalar. Data Types: `single` | `double` Input from the power system stabilizer, in per-unit representation, specified as a scalar. Data Types: `single` | `double` Terminal voltage magnitude in per-unit representation, specified as a scalar. Data Types: `single` | `double` Terminal current magnitude in per-unit representation, specified as a scalar. Data Types: `single` | `double` Input from the overexcitation limiter, in per-unit representation, specified as a scalar. #### Dependencies • To ignore the input from the overexcitation limiter, set Alternate OEL input locations (V_OEL) to `Unused`. • To use the input from the overexcitation limiter at the summation point, set Alternate OEL input locations (V_OEL) to ```Summation point```. • To use the input from the overexcitation limiter at the take-over point, set Alternate OEL input locations (V_OEL) to `Take-over`. Data Types: `single` | `double` Input from the underexcitation limiter, in per-unit representation, specified as a scalar. #### Dependencies • To ignore the input from the underexcitation limiter, set Alternate UEL input locations (V_UEL) to `Unused`. • To use the input from the underexcitation limiter at the summation point, set Alternate UEL input locations (V_UEL) to ```Summation point```. • To use the input from the underexcitation limiter at the take-over point, set Alternate UEL input locations (V_UEL) to `Take-over`. Data Types: `single` | `double` Input from the stator current limiter when using the summation point, in per-unit representation, specified as a scalar. #### Dependencies • To ignore the input from the stator current limiter, set Alternate SCL input locations (V_SCL) to `Unused`. • To use the input from the stator current limiter at the summation point, set Alternate SCL input locations (V_SCL) to ```Summation point```. Data Types: `single` | `double` Input from the stator current limiter to prevent field overexcitation when using the take-over point, in per-unit representation, specified as a scalar. #### Dependencies • To ignore the input from the stator current limiter, set Alternate SCL input locations (V_SCL) to `Unused`. • To use the input from the stator current limiter at the take-over point, set Alternate SCL input locations (V_SCL) to `Take-over`. Data Types: `single` | `double` Input from the stator current limiter to prevent field underexcitation when using the take-over point, in per-unit representation, specified as a scalar. #### Dependencies • To ignore the input from the stator current limiter, set Alternate SCL input locations (V_SCL) to `Unused`. • To use the input from the stator current limiter at the take-over point, set Alternate SCL input locations (V_SCL) to `Take-over`. Data Types: `single` | `double` Measured per-unit field current of the synchronous machine, specified as a scalar. Data Types: `single` | `double` ### Output expand all Per-unit field voltage to apply to the field circuit of the synchronous machine, returned as a scalar. Data Types: `single` | `double` ## Parameters expand all ### General Initial per-unit voltage to apply to the field circuit of the synchronous machine. Initial per-unit voltage to apply to the terminal. Initial per-unit voltage to apply to the terminal. Time between consecutive block executions. During execution, the block produces outputs and, if appropriate, updates its internal state. For more information, see What Is Sample Time? and Specify Sample Time. For inherited discrete-time operation, specify `-1`. For discrete-time operation, specify a positive integer. For continuous-time operation, specify `0`. If this block is in a masked subsystem, or other variant subsystem that allows you to switch between continuous operation and discrete operation, promote the sample time parameter. Promoting the sample time parameter ensures correct switching between the continuous and discrete implementations of the block. For more information, see Promote Parameter to Mask. ### Pre-Control Resistance used in the current compensation system. Set this parameter and Reactance component of load compensation, X_C (pu) to `0` to disable current compensation. Reactance used in the current compensation system. Set this parameter and Resistive component of load compensation, R_C (pu) to `0` to disable current compensation. Equivalent time constant for the voltage transducer filtering. ### Control Per-unit proportional gain of the voltage regulator. Per-unit integral gain of the voltage regulator. Derivative gain of the voltage regulator. Equivalent lag time constant for the derivative channel of the PID controller. Maximum admissible per-unit output of the PID regulator. Minimum admissible per-unit output of the PID regulator. Gain associated with the rectifier. Time constant of the rectifier. Maximum per-unit output voltage of the regulator. Minimum per-unit output voltage of the regulator. Location of the power system stabilizer input. Location of the overexcitation limiter input. Location of the underexcitation limiter input. Location of the stator current limiter input: • If you select `Summation point`, use the V_SCLsum input port. • If you select any of the `Take-over` options, use the V_OELscl and V_UELscl input ports. ### Exciter Proportional constant for the exciter field. Time constant for the exciter field. Demagnetization factor related to the exciter alternator reactances. Exciter output voltage for the first saturation factor. Saturation factor for the first exciter. Exciter output voltage for the second saturation factor. Saturation factor for the second exciter. Maximum per-unit field current limit reference. Minimum per-unit exciter voltage output. Per-unit potential circuit gain coefficient. #### Dependencies To enable this parameter, set Logical switch 1 to ```Position A: power source derived from terminal voltage```. Potential circuit phase angle, in degrees. #### Dependencies To enable this parameter, set Logical switch 1 to ```Position A: power source derived from terminal voltage```. Per-unit potential circuit current gain coefficient. #### Dependencies To enable this parameter, set Logical switch 1 to ```Position A: power source derived from terminal voltage```. Per-unit reactance associated with the potential source. #### Dependencies To enable this parameter, set Logical switch 1 to ```Position A: power source derived from terminal voltage```. Per-unit loading factor of the rectifier that is proportional to the commutating reactance. Position of logical switch 1. Maximum per-unit available field voltage for the exciter. ## References [1] IEEE Recommended Practice for Excitation System Models for Power System Stability Studies. IEEE Std 421.5-2016. Piscataway, NJ: IEEE-SA, 2016.
2021-11-30 02:43:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7004634141921997, "perplexity": 7170.435908211105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358903.73/warc/CC-MAIN-20211130015517-20211130045517-00298.warc.gz"}
http://manufacturingscience.asmedigitalcollection.asme.org/article.aspx?articleid=2728662
0 Review Article # Double-Sided Incremental Forming: A ReviewOPEN ACCESS [+] Author and Article Information Wenxuan Peng Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, NG7 2RD, UK e-mail: wenxuan.peng@nottingham.ac.uk Hengan Ou Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, NG7 2RD, UK e-mail: h.ou@nottingham.ac.uk Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, NG7 2RD, UK e-mail: a.a.becker@nottingham.ac.uk 1Corresponding author. Manuscript received March 12, 2018; final manuscript received December 12, 2018; published online March 27, 2019. Assoc. Editor: Gracious Ngaile. J. Manuf. Sci. Eng 141(5), 050802 (Mar 26, 2019) (12 pages) Paper No: MANU-18-1148; doi: 10.1115/1.4043173 History: Received March 12, 2018; Accepted December 13, 2018 ## Abstract Incremental sheet-forming (ISF) processes have been developed rapidly in the past two decades. Its high flexibility and easy operability have a significant appeal for industrial applications, and substantial progress has been made in fundamental understanding and demonstration of practical implementation. However, there are a number of obstacles including achievable accuracy and instability in material deformation, which are considered as a main contributing factor for preventing the ISF process to be widely used in industry. As a variant of the general ISF process, double-sided incremental forming (DSIF) uses an additional supporting tool in the opposite side of the workpiece, maintains the flexibility, and at the same time improves the material deformation stability and reduces material thinning. In recent years, there has been increased research interest in looking into DSIF-specific material deformation mechanisms and investigation. This paper aims to provide a technical review of the DSIF process as benchmarked with single-point incremental forming (SPIF). It starts with a brief overview of the current state of the art of both SPIF and DSIF. This is followed by a comparative study between SPIF and DSIF with the key research challenges identified. This leads to a recommendation of future directions for DSIF focused research. <> ## Introduction ###### Incremental Sheet Forming. Incremental forming may be referred to a number of sheet- and bulk-forming processes such as spinning or rolling that enable workpiece deformation in a gradual or incremental manner. Incremental sheet forming (ISF), also known as single-point incremental forming (SPIF) initially reported by Emmens et al. [1] and Mason and Appleton in 1984 [2], is specifically designated to processes that use particular tools, driven by CNC machines or industrial robots, moving along a predefined tool path to form a clamped metal sheet into the desired shape. Figure 1 shows a schematic of the ISF process, indicating three key elements, i.e., (i) a simple hemispherical forming tool; (ii) blank holder, and (iii) backing plate and partial die [3]. The sheet is clamped on the periphery and the trajectory of the tool is combined by a succession of spiral contours, which determines the geometry and accuracy of the formed products. The main advantages of ISF include its “dieless” feature, easy adaptation, and simple structure of the tool system as compared with many conventional sheet-forming processes, which require a complex press and dedicated tool system. The flexibility and low setup cost allow ISF to be implemented to manufacture low-volume, high-value, and even customized parts [4]. For instance, a limited number of replacement parts have been made by Honda with Amino through the ISF process [1]. Nevertheless, the ISF technology is generally considered to have considerable potential in aerospace, automotive, medical, and other industrial fields. In recent years, many variants such as two-point incremental forming (TPIF) and double-sided incremental forming (DSIF) have been proposed. As shown in Fig. 2, the classification of the variants of ISF processes is based on the method of applying a supporting tool. The SPIF uses only one tool traveling along a predefined toolpath on one side of the forming sheet, while TPIF uses an extra full or partial die on the other side of the workpiece to increase forming stability. The DSIF process replaces the die with a flexible supporting tool with a specified trajectory. ###### Double-Sided Incremental Forming. DSIF was initially studied by Maidagan et al. [5] and Meier et al. in 2007 [6]. It aims to apply a backup force for the better control of the localized deformation and improvement of the formability and accuracy. In the DSIF process, the tool on the top side of the workpiece is generally named as the master tool mainly generating localized deformation, while the tool on the bottom side is named as the supporting tool or slave tool offering a supporting force. The toolpath of the supporting tool is derived from the trajectory of the top tool incorporated with the wall angle and thickness at the contact point. In DSIF, two variants can be classified by the tool moving direction in the radial orientation, i.e., conventional DSIF with the out-to-in toolpath (Fig. 3(a)) and accumulative double-sided incremental forming (ADSIF) with the in-to-out toolpath (Fig. 3(b)). In the conventional DSIF process, the tools move continuously from the maximum fringe of the workpiece toward the center. The tool steps along the negative Z direction with an incremental depth ΔZ and forms the material in the horizontal X–Y plane at the current Z depth. In ADSIF, the tools start from the minimum annulus of the clamped sheet and simultaneously move down by an incremental depth ΔZ. Then, both the master and slave tools travel on the X–Y plane while keeping at a constant Z = −ΔZ. Generally, ADSIF is regarded as a specific toolpath strategy of DSIF. However, it is worthwhile to discuss and compare ADSIF with SPIF and DSIF due to its enhanced formability, accuracy, and unique deformation mechanism [7]. Consequently, the purpose of this review is to introduce the current development of DSIF, including comparison with SPIF in terms of toolpath design, formability, and material deformation mechanisms. This is followed by the identified research gaps in the field of DSIF and recommendations for further research. ## Comparison Between SPIF and DSIF As a mainstream in the ISF field, extensive studies have been carried out on the aspects of SPIF concerning accuracy, toolpath strategy, deformation, and failure mechanisms. There are many published reviews of SPIF that concentrate on its history, the improvements, and limitations compared with the conventional sheet-forming processes, also the deformation and failure mechanisms [1,811]. The advantages of SPIF include high flexibility, low setup cost, and improved formability [8,9]. However, the SPIF process is also limited by the slow-forming speed, insufficient accuracy, rough surface finishing, and small achievable wall angle [9]. A number of reviews focus on specific aspects of ISF processes such as the effects of the process parameters [12], hardware [13], asymmetric incremental sheet forming [14], toolpath strategies [15], and FEA research [11]. Future prospects were also suggested in terms of the mechanism studies, prediction by FE simulations, and the extension of small-scale production and industrial applications [11]. While notable advances have been made in SPIF processes, specific limitations of SPIF have also been identified by various studies through experimental testing or theoretical analysis. The localized deformation under ISF process results in the high dependence of the geometrical accuracy on the stability of the deformation occurring around the contact region [16]. The springback effect and the artificial dynamic oscillations in contact area are also reported as the main resource of error [7,8]. The optimization of toolpath and the application of the supporting tool are commonly used to overcome the drawbacks of SPIF. In terms of the optimization of toolpath, the starting point, direction of tool movement, and the distance between two adjacent contour lines are optimized and tested to be effective [17,18]. Techniques like FEA and closed-loop feedback control system are combined with the toolpath design and successfully enhance the geometrical accuracy and homogeneity of the parts produced [19,20]. In terms of the additional support, the TPIF process has been proven that the partial die helps to reduce the springback upon the unloading in SPIF, and thereby the formability and accuracy are improved [3,21,22]. DSIF also brings comparative benefits by using an additional tool support, instead of the partial die in the TPIF process, to achieve improved formability and accuracy to the SPIF process. DSIF was studied by Meier et al. [6] as a truly “die-less” TPIF method and was tested and compared with SPIF through a robotic system. A simply truncated cone was produced separately by SPIF and DSIF processes. The thickness deviation results show that the cone fabricated by DSIF has more homogeneous error dispersion than using SPIF. According to the comparison between SPIF and DSIF from published papers [23,24], the additional supporting tool can effectively reduce the geometrical deviation. The die-less feature makes DSIF an economical solution to resolve the current problems in SPIF. A full comparative investigation using both experiment and FE simulation was conducted by Lasunon and Knight [25] looking into the stress and strain condition, thickness, and part profile. A shallow square-sided pyramid with 38 mm dimension for open width, 4.6 mm for depth was chosen as the desired shape in both experimental and analytical trials. The material was AA5052 alloy and two-wall angles were tested to evaluate the performance of both processes to the distinct angles. It was observed from contour plots (Figs. 4(a) and 4(b)) that the peak value of equivalent strain in SPIF was higher than that of DSIF when forming the same shape; on the aspect of thickness (Figs. 4(c) and 4(d)), the sheet thinning from SPIF process was greater than that by DSIF, and the thinning concentrates in the corner in SPIF when it happened in the wall area in DSIF [25]. Therefore, the failure is more likely to occur in the wall region in DSIF instead of the corners from SPIF. The contrast of minimum thickness values indicates an advantage of DSIF process that can be utilized to produce parts with sharper edges and steeper wall angle as compared with the SPIF process. The deformation instability, which limits the accuracy and formability of the SPIF process is improved by the implementation of the support tool in DSIF. However, the support tool was found to lose contact with the workpiece at an early stage of the process in both experimental test and FE simulation conducted by Maidagan et al. [5], suggesting the process degenerated to SPIF. This fact confirms that the control of the support tool to avoid loss of contact due to the unexpected material thinning is a key challenge in DSIF research. Thus, the prediction of sheet thickness distribution is an important question to maintain the improved formability in DSIF, which is very different from the SPIF process. In both SPIF and DSIF processes, the sheet is clamped and formed by the movement of specific tools. Most of the advantages of SPIF are inherited by DSIF, such as high flexibility and no requirement of dies. DSIF shows the potential to surmount the formability limits and low geometric accuracy in SPIF [25,26]. The value of plastic strains and the extent of sheet thinning concentration in SPIF are higher than that in DSIF, which means that the possibility of failure occurrence in DSIF is less than that of SPIF. Therefore, the aforementioned studies show a clear advantage of the DSIF over SPIF in a number of key attributes under general ISF-processing conditions. ## Formability in DSIF ###### Definition. The formability in the sheet-forming process is used to describe the ability to enable sheet deformation without material failure. Similar to the SPIF process, the formability of DSIF can be described by the maximum achievable wall angle αmax according to the sine law as shown in Eq. (1). Display Formula (1) $tf=tisin(90deg−α)$ where tf and ti are the final and initial wall thickness, respectively, and α is the wall angle (Fig. 5) [27]. Using the sine law, the wall thickness can be uniquely determined by the wall angle α. Considering the potential squeezing effect brought by the support tool, another parameter called squeeze factor s, suggested by Malhotra et al. [23], was used to estimate the extent of an artificial squeezing effect applied on the thickness in the contact region as defined in Eq. (2), where d is the current thickness of squeezed wall: Display Formula (2) $d=s⋅tf$ The squeeze factor s demonstrates the ratio of the final formed thickness to the thickness predicted by the sine law. It has various influences but cannot indicate to formability of the processes. According to the sine law, the sheet thickness decreases and the chance of failure rises when the wall angle increases. Thus, the maximum achievable wall angle αmax is regarded as the most important indicator for the formability of the DSIF process and has been widely implemented in a comparative study [24] and a parametric investigation [28,29]. The indicators of formability in ISF can also be used in DSIF, especially for analytical studies. Lu et al. [30] used the stress triaxiality, which is previously used to evaluate the contribution of deformation mode combined with stretching, squeezing, and shearing in the ISF processes, as an indicator of the enhanced formability of the DSIF process. Based on the traditional forming limit diagram (FLD) [10], Allwood and Shouler [31] proposed a generalized forming limit diagram (GFLD), which took the normal stresses and through-thickness shear into consideration to match the nature of the deformation mechanisms of DISF. The general methods for the evaluation of process formability can be used to support the analytical research of DSIF [7], although a specific procedure or approach for adaptation and implementation is still to be fully researched and validated. ###### Influential Factors in the Formability of DSIF. The DSIF process inherits the localized deformation feature of the general ISF processes and hence the dependence of formability on process parameters such as tool rotations, feed rate, and step size [12]. However, the use of the support tool and its interactions with the workpiece and the master tool requires further research to the formability of the DSIF process. Lu et al. [30] implemented a series of experiments with dedicated support tool mounted on a pneumatic actuator in order to investigate the effect of tool shift and compressive stress in DSIF. The effect of tool shift and compression were estimated via checking the deformation of predrilled holes in the through-thickness direction (Fig. 6). Based on the observation of results, the stretching in meridional direction is considered to have a dominant effect in the deformation of DSIF. The other factors are the compression in the radial direction and slight through-thickness shear in the tool movement direction. The compressive stress shows a positive effect on the formability. However, excessive supporting forces generating too much squeezing pressure reduce the formability in DSIF. Similarly, the tool shift has a positive effect on the enhancement of formability and would change the position of fracture occurrence. This approach was successfully used to form pure titanium cranioplasty plate for medical applications [32]. Ren et al. [28] processed a parametric study using FE simulation to investigate the influential factors on formability in the ADSIF process. As shown in Fig. 7(a), the tool gap Tg and the position angle θ were chosen as the variables, and the stable wall angle was measured as the criterion of formability (Fig. 7(b)). It is found that the dominant influential factor is switched between Tg and θ depending on the extent of the squeeze effect. When squeezing is the dominant deformation mode during the process (Tg ≤ 0.7), Tg has the leading effect on the stable angle; when bending is the main deformation mode (Tg ≥ 1), θ is the dominant factor instead of Tg; when the process is in the competing region (0.7 < Tg < 1), there are implicit influential factors of process formability. This result may be partially explained by the force transfer due to the changes of Tg. When Tg < 1, the forming force increases dramatically with the Tg decreasing. Also, θ affects the bending instead of squeezing and makes the transition from the current wall angle into a stable wall angle. Therefore, keeping the process under the competing or bending-dominant region (Tg > 0.7) and increasing the position angle θ is recommended to control the squeezing effect, as well as the forming force, in an appropriate range to reach the maximum formability in ADSIF forming. However, this conclusion may not be applicable to the conventional DSIF process due to significant differences between the ADSIF and DSIF processes, which are discussed in Sec. 5 on deformation mechanisms. ###### Electrical-Assisted Heating in DSIF. Several trials were reported for various materials being tested in ISF including composite and high-strength materials. Preliminary studies for composite materials focused on the formability and failure mechanics of polymers [33] or metal-polymer/foam/fiber [3436] by the SPIF process. Davarpanah et al. [37] presented a series of experiments for thermoplastics using SPIF, conversational DSIF and ADSIF processes to give a comprehensive assessment. The results showed an agreement with previous tests using a metal sheet that the DSIF process postpones or avoids the occurrence of fracture on the polymer sheet as well. The scanning electron microscopy (SEM) images demonstrated less void growth in the parts of polymer made by DSIF, which leads to a discussion that the heat generated by the additional tool due to frictional heat effect causes the formability improvement, instead of the compression effect when forming sheet metals. For hard-to-form materials, heating is a generally used solution to increase the formability of materials in the ISF processes. There are various methods developed for heating-assisted ISF including mounting heating equipment (e.g., hot air blowers [38], heater band [39], and laser generator [40]), friction stir heat [41,42], and electric heating [43,44]. Xu et al. [45] investigated electrically assisted DSIF in forming an AZ31B magnesium alloy. Two modes of electrical connection were discussed (Fig. 8(a)). An air cylinder and rolling-ball tool were used to eliminate loss of contact and electrical discharging phenomenon. In contrast to the electrical SPIF process, the formability, geometric accuracy, and surface finishing were improved (Fig. 8(b)). Valoppi et al. [46] conducted an electrically assisted ADSIF process to form a Ti6Al4V alloy. The electric current was applied directly to two forming tools and the value of electric current was the main variable to control the temperature during the process. In the results, the formability improvement was apparent as almost all specimens fabricated with the electrically assisted ADSIF could achieve a larger depth than the normal ADSIF process without cracks. However, the best-achieved depth was obtained in the 50 A case. Then, the negative effect appeared when the applied current value increased above 50 A. The reason could be the decrease of forming force and the release of the internal stresses due to the thermal effects during the process. A specific advantage of heat-assisted DSIF over SPIF is potentially the local heating effect by tool rotation friction or direct resistance heating from both sides of the workpiece to allow more even heating and greater controllability under ISF deformation conditions. Toolpath design and optimization are the mainstreams of DSIF research. The key questions to be solved through toolpath design are insufficient geometrical accuracy and loss of contact due to the unpredictable material thinning. The basic principle of toolpath design is the sine law, which is consistent with conventional ISF. The toolpaths can be classified into two options: traditional out-to-in DSIF and in-to-out ADSIF strategies. Several prediction and compensation strategies are applied based on DSIF and ADSIF processes. The advantages and limitations of each strategy are discussed. ###### Accumulative-DSIF Strategy. ADSIF requires a novel toolpath that uses two tools forming the sheet via an initially single incremental depth and horizontal in-plane motion. ADSIF has two main differences from the conventional DSIF strategy: (i) the tools are moving from minimum annulus to the fringe and (ii) after the first step, tools are positioned at the Z = −ΔZ level without any movement in vertical axis throughout the process. A significant advantage of ADSIF is that two tools maintain contact with the forming sheet and mechanically avoid the loss of contact throughout the whole process. The constant contact leads to the improved formability and geometrical accuracy in the ADSIF process as compared with the conventional DSIF and SPIF processes [47]. The ADSIF concept was originally introduced and tested by Malhotra et al. [24]. Two cones with wall angles of 40 deg and 50 deg were formed to test the forming limit and precision with the reference parts made by SPIF and DSIF for comparison. As a result, the smaller geometric deviations and thickness inconsistency were observed from parts produced by ADSIF instead of SPIF or conventional DSIF (Fig. 9). When forming a cone with 50 deg draw angle, fracture occurs in the processes except ADSIF (Fig. 9). On the other hand, a small incremental depth is required in the ADSIF process to sustain the high accuracy and results in considerable time for completion. Additionally, the achievable wall angle is hard to exceed much more than 50 deg [48]. Although the long forming time and the limitation of forming wall angles in ADSIF prevent it to be a full replacement of conventional DSIF, the improved formability and quality by ADSIF still attract attention for further studies. Toolpath optimization and combination with out-to-in DSIF toolpath are verified to be beneficial for the enhancement of formability and profile accuracy [47]. Other numerical analyses reveal the unique deformation mechanism in ADSIF of enhanced shearing and bending effect in contrast to dominant stretching and bending effect in conventional DSIF processes [7,29]. In summary, the ADSIF is a noteworthy strategy and has a considerable potential of applications. ###### Compensation-Based Toolpath. Toolpath compensation is one strategy to accurately predict the deflection during the process and counteract the springback effect through the optimization of tools trajectory. Meier et al. [49] presented experiments of model-based and sensor-based strategies in robotic forming. An adjustment vector, derived from the deviation obtained by simulation results and CAD model (Fig. 10), was defined as a key parameter for toolpath revision. In terms of the sensor-based method, actual geometry is obtained through a 3D surface scanner and projected onto the reference CAD model in the direction of the adjustment vector with the aim of plotting counter points of the next forming run. For the model-based strategy, the process is simulated through an FE model with adjustment made based on the geometric errors obtained from the simulation result. Both strategies were verified to be able to reduce the deviations with results showing a significant reduction of deviation from ±1 mm to ±0.25 mm using the sensor-based strategy, and 0.7 mm in z-direction with 1 mm in the y-direction using the model-based strategy. To improve the performance of the model-based approach, the springback effect should be taken into account in the FE model. Rakesh et al. [50] developed another toolpath strategy that takes the deformation of tools into consideration. The adjustment vector was made up by sheet and tool deflection separately calculated by the predicted forming forces (Fig. 11(a)). The geometry obtained in the first iteration of FE simulation was compared with the desired shape to predict the deflections and to generate the compensated toolpath (Fig. 11(b)). In order to optimize the forming process of varying wall angle or asymmetric component, the desired profile was divided by geometric features to predict forming forces more precisely (Fig. 11(c)). Based on the results of several tests with different shapes and features, it can be confirmed that the deflection-compensated toolpath strategy can enhance the geometrical accuracy of DSIF as the measured maximum error is less than 0.5 mm. The maximum deviation can maintain under 0.6 mm in the fringe region after component being trimmed. Another approach to compensate for tool deflection was proposed by Ren et al. [51], who developed a closed-loop algorithm to eliminate the inaccuracy from the tool deformation and the system error from motor drivers or controllers. A model coupled with the stiffness of the forming tools, machine, and the sheet metal under squeezing was established to derive the system response from force to displacement. Therefore, the loss of contact can be avoided, and the state of stresses is under control by monitoring the reaction force on the supporting tool. In the tests, the produced asymmetric and complex parts proved the effectiveness of this algorithm toward maintaining the contact between tools and sheet as well as the enhancement on the formability and geometric accuracy. Wang et al. [52] proposed a novel strategy to reduce the springback in the DSIF process with a dedicated pneumatic supporting tool. The position between the master and slave tools was relatively rotated as shown in Fig. 12. With the constant supporting force, the “reverse-bending” or “squeezing” effect can be produced in the local forming region according to the rotation angle. The error in both major and minor axis on the semi-ellipsoid cone part was measured and compared to demonstrate the restraint effort from different strategies to the springback effect. The numerical and experimental results showed an agreement that both the squeezing and reverse-bending strategies can reduce the offset due to springback effect, while the reverse-bending strategy can achieve more accurate geometry than the squeezing strategy. Comparing the compensation strategies conducted by Rakesh et al. [50] and Meier et al. [53], the method considering the compensations of both sheet and tools shows advantages of higher accuracy and the capability of adaption for complex features. This may be attributed to the accurate force prediction (within 50 N of deviation compared with the experimentally measured forming forces) and reasonable arrangement of toolpaths. Simplified algorithms with the consideration of tool machine stiffness [51] and “squeezing” and “reverse-bending” strategies [52] can be implemented in DSIF processes for improved accuracy and inhibition of the springback without requiring for predicted forming forces or deflections. ###### Feature-Based Toolpath Strategy. The feature-based approach was initially proposed by Lu et al. [18] for SPIF that was based on the local characteristics of profile and design-specific toolpath with the aim of enhancing the accuracy. The final toolpath utilizes an interpolation technique from Malhotra et al. [54]. This strategy was further developed to be adaptive to the DSIF process by Lingam et al. [55] and Ndip-Agbor et al. [56]. The additional support tool provides the suitability for more forming features and sequences. The initial step is the establishment of the component model and reading features from the desired surfaces to generate silhouettes. These silhouettes or saddle points are matched to the characteristics of features to recognize the geometrical features correctly. Then, a proper forming sequence of features is selected in checking the corresponding effects between adjacent feature forming steps and the algorithm calculates the rectifying parameters. Finally, the DSIF toolpath is generated using helical trajectory incorporating deflection compensation. Improvements in surface finish, geometric features, and accuracy were confirmed by experiments. Maximum deviation to the ideal profile can be controlled under 0.4 mm when the correct forming sequence and strategy were applied. Moser et al. [57] developed another analogous method corresponding to the in-plane curvature effect. It was observed that the region of loss of contact was affected by the tool motion direction (Fig. 13(a)). The forming force on the slave tool in the Z-direction also shows a drop after forming a turning point at the corner that indicates the occurrence of loss of contact (Fig. 13(a)). The toolpath was divided into four parts according to the contour protrusion and tool moving direction (Fig. 13(b)). Then, the compensation was applied to the region with insufficient support. As a result, the uniformity of thickness distribution was improved and the possibility of failure was reduced. Zhang et al. [47] proposed a mixed strategy that combined DSIF and ADSIF methods. ADSIF was used to form the main body of the part to avoid the 0loss of contact and to achieve a better geometry, whereas DSIF was used to form the finer details on the part in the next step. As a result, the mixed strategy can form the part with larger forming depth than that implemented by only using the conventional DSIF process and achieve better details than that obtained from only the ADSIF method (Fig. 14). However, it was also found that the fracture occurred during the DSIF process when forming parts of a large wall angle. The forming limits of both the DSIF and the ADSIF need to be investigated for further development of combined forming strategies. In conclusion, the feature-based strategy is a useful method to optimize the toolpath with different geometrical features. The feature-based strategy can be generalized in the form of detailed guidelines of toolpath design in the DSIF process, similar to the study presented in SPIF [15]. Two essential issues to be addressed are the separation of the contour model and the implementation of compensation. The feature-based and compensation-based strategies are compatible with each other and can be incorporated together for further enhancement of formability and accuracy, which is considered as a main research direction for the DSIF toolpath optimization. There is a clear need for further study on the precision of feature recognition and corresponding optimization methods. ## Deformation Mechanisms of DISF The deformation modes and fracture mechanisms are important research topics in the ISF field. The deformation mechanisms in SPIF are generally the combinations of bending, stretching, and shearing depending on the forming conditions [31]. An appreciable amount of studies also focuses on the local deformation around the contact point and concludes the significance of the plain strain stretching, bending-under-tension (BUT), and through-thickness shear to the improvement of material formability [10,5860]. In this area, FEA helps to gain in-depth understanding of the strain and stress development as well as the fracture. On the other hand, many damage models have been developed based on classic fracture theories, including Gurson–Tvergaard–Needleman model and Lemaitre continuum damage mechanics model to provide an insight in fracture mechanisms [6164]. Similar studies are making progress in the DSIF field based on the methods such as FEA [7], parametric research [28,29], or the stress analysis amended from the previous SPIF studies [30]. The clear understanding for the deformation mechanisms in various ISF processes helps to learn the reasons for the disparities and the ways for further improvement of formability and geometry accuracy in SPIF, DSIF, and ADSIF [30]. ###### Deformation Mechanisms in DSIF. The deformation mechanisms in DSIF have been discussed in early comparative studies between SPIF and DSIF and toolpath research. A significant difference being revealed by FE method is that the maximum equivalent plastic strain in the DSIF process is less than that in SPIF and the failure caused by material over-thinning is more likely to occur in the wall region in DSIF, while it happens more in the corners in SPIF [25]. Malhotra et al. [23] found that the squeezing effect of tools leads to the higher plastic strains concentrating in a small area around the contact point of the tools and workpiece. In experiments, this compressive effect results in high strain hardening of the formed part. This phenomenon was explained by a comprehensive investigation of mechanism in DSIF presented by Lu et al. [30]. Based on the similarity between the deformation mechanics in SPIF and DSIF, a membrane method was implemented in the stress analysis where the shear effect was only considered to occur in the tangential direction. The stress triaxiality was selected as an indicator of formability. As a result, the drop of stress triaxiality with the increase of equivalent strain in DSIF (Fig. 15(a)) was found, whereas in SPIF the stress triaxiality underwent a stationary development throughout the process (Fig. 15(b)). It was concluded that the drop of stress triaxiality produced by the support tool in DSIF deferred the occurrence of failure and helped achieve a greater wall angle than in SPIF. Compared with the previous mechanism studies in SPIF, the additional squeezing and shear effects were the main differences. However, the distinct effects of compression and shear need to be further investigated. Valoppi et al. [65] investigated the fracture in the E-DSIF forming process with Ti-6Al-4V sheet. Three electric current intensities (50 A, 87.5 A, and 100 A) were applied on both the master and support tools to investigate the sheet fracture under different temperatures. The E-DSIF process was conducted until the sheet fracture occurred and the fracture surface was checked by using SEM. To investigate the material failure behavior, the SEM results from the tested samples were compared with samples from uniaxial tensile and pure shear tests under corresponding temperature cases. The achievable forming depth was increased with the rise in current intensity. It was observed that the fracture began from the outer surface, then propagated along the tool movement direction and ended when the laceration formed. The fracture can be classified as mode I, also named as tearing by the SEM analysis. Combined with the analysis by Lu et al. [30], the fracture in E-DSIF is more likely to happen in the formed region during the progressive thinning of the sheet, instead of the compressive area contacted with both the forming tool and the supporting tool. The reason can be that the material ductility in the compressive area is increased due to local heating and the compressive pressure, which postpones the fracture because of the drop of stress triaxiality. ADSIF, as a specific toolpath strategy of DSIF, exhibits a degree of peculiarity and different patterns of deformation mechanisms. This was initially revealed through FEM by Smith et al. [7]. An eight-layer model of linear brick elements with reduced integration is used for both SPIF and ADSIF analysis (Fig. 16(a)). A radial mesh strategy was applied in order to capture the material behavior in the directions parallel and perpendicular to the tool movement. Moreover, four sections along the forming depth were created (Fig. 16(b)) to investigate the changes of strain components and hydrostatic pressure throughout the forming process. With the comparison of SPIF, a general view of the deformation of ADSIF can be analyzed. In summary, the dominant deformation modes in ADSIF are local bending around the contact point between the tools and forming sheet, combined with a squeezing effect as well as a shear effect perpendicular and parallel to the tool moving direction. In contrast to SPIF, there are significant but consistent differences between the plastic strains as observed on the inner and outer surfaces in ADSIF, which may be attributed to bending and high through-thickness shear effect. The support tool results in increased compressive pressure and shear. Compared with the DSIF process, ADSIF is characterized by high through-thickness shear and the lack of downward pressure because the counteracting effect from material bunching reduces the stretching effect. Ren et al. [28] presented a parametric study using a simplified FE model to improve the simulation efficiency without losing too much accuracy in wall angle prediction (Fig. 17(a)). A clamped narrow strip model was implemented to simulate a part of ADSIF process, which consumed 10–15 min for simulation instead of 7–10 days with a full scale of ADSIF model. The relative tool position, described as the horizontal distance D and vertical distance S, are used as research variables (Fig. 17(b)). To compensate for the accuracy loss due to the simplified model, the Latin Hypercube Sampling method is applied to find the relationship between the values of D, S, and the achievable wall angle. The prediction via the simplified model was very close to the experimental results in a certain range of angles and the deviation was controlled within 5 deg. The results show that the combination of the values of D and S is not unique when forming a specific wall angle [29,66]. A similar parametric study was also reported by Ren et al. [28] as the tool gap and tool position angle were chosen as parameters. The stable wall angle in the forming region was used as a representative indicator of the whole formed part. The entire ADSIF model with an advanced mesh density of eight layers of solid elements was executed using ls-dyna. Thus, the accuracy of FE prediction was significantly improved at the cost of long simulation time. The conclusion agrees with the deduction of Lu et al. [30] that the excessive squeezing brings a negative effect on formability. ## Discussion and Conclusions The DSIF process has been shown to be able to achieve improved formability and geometric accuracy as compared with the SPIF process. The additional tool provides extra supporting force and increases the deformation stability while the benefit of high flexibility is kept and the low setup cost feature is maintained as compared with the TPIF process. Most of the published research focus on the development of DSIF toolpath strategies and the evaluation of deformation mechanics. As a variant of DSIF, ADSIF shows distinctive characteristics in forming accuracy and deformation mechanics. This review presents an overview of the current state of the art and the most important areas of DSIF focused research as summarized. ###### Formability. The DSIF process exhibits a better formability than the SPIF process. The additional supporting force stabilizes local deformation and homogenizes the thickness distribution of the formed part. In terms of stress analysis, the sudden drop of stress triaxiality indicates that the possibility of failure in DSIF is lower than that in SPIF when forming the same parts. However, the unpredicted material thinning could lead to the loss of contact of the support tool with the workpiece and degenerate DSIF into a SPIF process. Thus, the avoidance of losing contact is an important prerequisite to maintain the formability of DSIF. The maximum achievable wall angle αmax can be used to describe the formability of the DISF process. Similar to other ISF processes, the thickness reduction results in failure and prevents the part from achieving larger wall angles. Other indicators such as stress triaxiality are introduced from previous ISF research in terms of stress analysis and fracture evaluation. In DSIF, the wall thickness is also controlled by the squeeze factor s, which indicates the extent of the squeezing between the tools and workpiece. The squeezing effect controlled by the supporting force affects the shear and compression and is a main affecting factor to the formability in both DSIF and ADSIF. The squeezing effect improves the formability, although excessive contact pressure over deforms the workpiece and reduces the formability. The formability in DSIF is also studied by varying tool gaps and relative angles of the master and support tools. In general, the DSIF process formability is mainly dependent on the relative position between tools including the distance and angle. Alternatively, the interaction between tools can also be presented by the value of supporting force and relative position of tools. The performance of DSIF in composite materials and high-performance alloys is investigated in the preliminary stage. For polymers, the formability improvement of DISF is confirmed as effective. For hard-to-form materials such as titanium alloys, local heating can improve the formability of the material. In the DSIF process, electric currents can be applied through the sheet and two forming tools, which provide an effective means for electrically assisted heating over SPIF processes. ###### Toolpath Strategies in DSIF. The toolpath strategies of DSIF can be achieved via the ADSIF, close-loop iteration, compensation, and geometrical feature-based methods. The ADSIF process is based on an in-to-out toolpath strategy that forms the workpiece from the inner annulus traveling outward to the fringe and can maintain the contact between the tools and workpiece throughout the process. This leads to improved forming accuracy and formability. The drawbacks of ADSIF include limited forming wall angle, long forming time due to the small incremental depth, and high compressive pressure throughout the process. Unlike the other toolpath strategies, reversing the forming direction also causes the change of the deformation mechanisms, which requires more in-depth investigation. The compensation-based strategy can be divided into the prediction of tool or sheet deflections, which forms the basis for developing corresponding compensation strategies to toolpath generation. The performance of compensation-based toolpath is highly dependent on the accuracy of the prediction of forming forces and geometrical deviation as well as the algorithm of calculating the adjustment to the toolpath. The compensation-based strategy can significantly improve the forming accuracy. However, the prediction through FE simulation and experimental test is either a computing intensive process or requires iterative forming test and measurement steps. Feature-based toolpath implements an algorithm to recognize the geometric features and adjust the density of toolpath contours. The feature recognition can be also used to improve the quality and efficiency of deflection prediction in the compensation-based strategy. In this case, the correct feature partition and the order of the forming sequence for features are of significance. ###### Deformation Mechanisms in DSIF. The deformation mode of DSIF is generally a combination of stretching in the meridional direction, compression in the radial direction, and slight through-thickness shear in the tool movement direction. Compared with the SPIF process, the additional support tool provides an extra compressive effect, which reduces through-thickness shear in the tool-moving direction. The drop of stress triaxiality postpones the fracture in the double-contacted area in the DSIF process. Thus, the material failure is more likely to occur in the deformed region, starting from the outer side of the sheet and propagating in the circumferential direction due to material thinning and tool movement. In the E-DSIF process, the increase of material ductility by local heating in the contact area is another factor to defer fracture. The deformation mechanisms in ADSIF are mainly associated with local bending of the sheet around the contact point between the workpiece and the master tool, accompanied with a squeezing effect due to the support tool as well as shearing perpendicular and parallel to the tool moving direction. The history of equivalent strain shows that there is a constant strain discrepancy between the inner and outer surfaces in ADSIF instead of the artificial dynamic oscillations and uncontrolled nature of deformation in SPIF. In contrast to DSIF, ADSIF has high through-thickness shear and low downward pressure, as well as material bunching that counteracts the downward deformation of the sheet. ## Recommendation for Future Work • The improvement of accuracy and formability is still a priority in DSIF research. The combination of feature-based and compensation strategy significantly improves the forming accuracy and has the potential for further enhancement. The closed-loop algorithms allow the adjustment during the forming process based on data obtained from sensors of forming forces and potentially tool positions. For example, a pneumatic-supported support tool with an adjustable supporting force can be used to prevent loss of contact throughout the process and to provide precise control of local stress state of sheet deformation. This means it is possible to design specific DSIF toolpath either aiming for improved formability or geometrical accuracy or the combination of both. • In terms of ADSIF, there is a need for quantitative evaluation of the forming limit. It would be useful to test the capability of forming components with wall angles over 60 deg in the ADSIF process. • New indicators to formability, such as forming limit diagrams, can be introduced from previous ISF research since the stress triaxiality has been adapted to the DSIF process. • FE simulation is still a powerful tool to investigate the deformation mechanics of the DSIF process accompanying with challenges. Reduction of computing time with sufficient accuracy by using adaptive mesh strategy would be beneficial to the detailed understanding of DSIF deformation mechanisms. There is a lack of work on the fracture prediction for both DSIF and ADSIF processes. Considering the material deformation under DSIF, the damage models modified with through-thickness shear may be able to predict the fracture occurrence precisely. • In terms of deformation mechanisms, the transition of deformation modes due to the change of the squeezing factor and forming wall angle may be a research focus. Fracture in DSIF processing is assumed as stretching and tearing starting from the outer surface. However, there is a lack of established failure theory to DSIF and ADSIF process without loss of contact. • The forming forces and thickness distribution are important in ISF research. There is a necessity to look into the effect of the squeezing between the tools and workpiece on the deflection of tools. The excessive deflection of the master tool may not only cause the geometrical inaccuracy, but also the rotation of the relative position between the master and support tools, which could lead to the change of local deformation mode. The large forming force encounters a problem with the rigidity of the DSIF machine. The design of a DSIF machine in terms of controllability and stiffness needs to be investigated. • Electric-assisted DSIF has great potential to expend the range of sheet materials and can be implemented in the DSIF process for such as titanium alloys, or composite materials. However, the current local heating method (Fig. 8) faces problem of insulation. A possible solution is to isolate direct resistance heating on both tools to provide a local heating effect. The feasibility and efficiency of this scheme require verification by experimental testing. The capability and performance of using non-metal material, i.e., polyether ether ketone (PEEK) material in heat-assisted DSIF process are waiting for detailed assessment. The difference between the mechanics of formability improvement, such as compressive effect for metal workpiece or heating effect to void generation for polymers, can be a focus of future investigation. • Although it is a less researched area in SPIF, design rules for DSIF toolpath generation with the consideration of process parameters and tool dimensions and arrangements could have a positive impact to not only fundamental understanding but also practical implementation of DSIF. Specific areas of interest include the toolpath design in the transition area, i.e., the sheet bending at the beginning of the forming process and the effect of tool parameter, e.g., tool radius and relative position to workpiece deformation and fracture under DSIF conditions. ## Acknowledgements This work was supported by the Engineering and Physical Sciences Research Council (grant number EP/L02084X/1). ## Nomenclature • d = current thickness of the squeezed wall • s = squeeze factor—the ratio between the practical and desired tool gap • D = forming depth • tf = the desired thickness of the formed part • t0 = the initial thickness of the work piece accumulative double-sided incremental forming • ASIF = asymmetric incremental sheet forming • BUT = bending under tension • DOST = drop of stress triaxiality • DSIF = double-sided incremental forming • FE = finite element • FLC = forming limit curve • FLD = forming limit diagram • GFLD = generalized forming limit diagram • ISF = incremental sheet forming • MaxR = the outer radius of the desired forming truncated cone • PEEK = polyether ether ketone • SPIF = single-point incremental forming • Step depth/Δz = the distance the tool moves down after completing an entire closed loop in toolpath • TPIF = two-point incremental forming • α = wall angle ## References Emmens, W. C., Sebastiani, G., and van den Boogaard, A. H., 2010, “The Technology of Incremental Sheet Forming—A Brief Review of the History,” J. Mater. Process. Technol., 210(8), pp. 981–997. Mason, B., and Applton, E., 1984, “Sheet Metal Forming for Small Batches Using Sacrificial Tooling,” 3rd International Conference on Rotary Metalworking Processes (ROMP 3), Kyoto, Japan, Sept. Silva, M. B., and Martins, P. A. F., 2012, “Two-Point Incremental Forming With Partial Die: Theory and Experimentation,” J. Mater. Eng. Perform., 22(4), pp. 1018–1027. Hirt, G., Bambach, M., Bleck, W., Prahl, U., and Stollenwerk, J., 2015, “The Development of Incremental Sheet Forming From Flexible Forming to Fully Integrated Production of Sheet Metal Parts,” Advances in Production Technology, Springer, New York, pp. 117–129. Maidagan, E., Zettler, J., Bambach, M., Rodríguez, P. P., and Hirt, G., 2007, “A New Incremental Sheet Forming Process Based on a Flexible Supporting Die System,” Key Eng. Mater., 344, pp. 607–614. Meier, H., Smukala, V., Dewald, O., and Zhang, J., 2007, “Two Point Incremental Forming With Two Moving Forming Tools,” Key Eng. Mater., 344, pp. 599–605. Smith, J., Malhotra, R., Liu, W. K., and Cao, J., 2013, “Deformation Mechanics in Single-Point and Accumulative Double-Sided Incremental Forming,” Int. J. Adv. Manuf. Technol., 69(5–8), pp. 1185–1201. Hagan, E., and Jeswiet, J., 2003, “A Review of Conventional and Modern Single-Point Sheet Metal Forming Methods,” Proc. Inst. Mech. Eng., Part B, 217(2), pp. 213–225. Micari, F., Ambrogio, G., and Filice, L., 2007, “Shape and Dimensional Accuracy in Single Point Incremental Forming: State of the art and Future Trends,” J. Mater. Process. Technol., 191(1–3), pp. 390–395. Emmens, W. C., and van den Boogaard, A. H., 2009, “Incremental Forming by Continuous Bending under Tension—An Experimental Investigation,” J. Mater. Process. Technol., 209(14), pp. 5456–5463. Duflou, J. R., Habraken, A.-M., Cao, J., Malhotra, R., Bambach, M., Adams, D., Vanhove, H., Mohammadi, A., and Jeswiet, J., 2018, “Single Point Incremental Forming: State-of-the-Art and Prospects,” Int. J. Mater. Form. 11(6), pp. 743–773. Gatea, S., Ou, H. G., and McCartney, G., 2016, “Review on the Influence of Process Parameters in Incremental Sheet Forming,” Int. J. Adv. Manuf. Technol., 87(1–4), pp. 479–499. Behera, A. K., de Sousa, R. A., Ingarao, G., and Oleksik, V., 2017, “Single Point Incremental Forming: An Assessment of the Progress and Technology Trends From 2005 to 2015,” J. Manuf. Process., 27, pp. 37–62. Jeswiet, J., Micari, F., Hirt, G., Bramley, A., Duflou, J., and Allwood, J., 2005, “Asymmetric Single Point Incremental Forming of Sheet Metal,” CIRP Ann., 54(2), pp. 88–114. Afonso, D., Alves de Sousa, R., and Torcato, R., 2017, “Integration of Design Rules and Process Modelling Within SPIF Technology—A Review on the Industrial Dissemination of Single Point Incremental Forming,” Int. J. Adv. Manuf. Technol., 94(9–12), pp. 4387–4399. Fang, Y., Lu, B., Chen, J., Xu, D. K., and Ou, H., 2014, “Analytical and Experimental Investigations on Deformation Mechanism and Fracture Behavior in Single Point Incremental Forming,” J. Mater. Process. Technol., 214(8), pp. 1503–1515. Yamashita, M., Gotoh, M., and Atsumi, S.-Y., 2008, “Numerical Simulation of Incremental Forming of Sheet Metal,” J. Mater. Process. Technol., 199(1–3), pp. 163–172. Lu, B., Chen, J., Ou, H., and Cao, J., 2013, “Feature-Based Tool Path Generation Approach for Incremental Sheet Forming Process,” J. Mater. Process. Technol., 213(7), pp. 1221–1233. Allwood, J. M., Music, O., Raithathna, A., and Duncan, S. R., 2009, “Closed-Loop Feedback Control of Product Properties in Flexible Metal Forming Processes With Mobile Tools,” CIRP Ann., 58(1), pp. 287–290. Azaouzi, M., and Lebaal, N., 2012, “Tool Path Optimization for Single Point Incremental Sheet Forming Using Response Surface Method,” Simul. Model. Pract. Theory, 24, pp. 49–58. Matsubara, S., 2001, “A Computer Numerically Controlled Dieless Incremental Forming of a Sheet Metal,” Proc. Inst. Mech. Eng., Part B, 215(7), pp. 959–966. Hirt, G., Ames, J., and Bambach, M., 2006, “Basic Investigation Into the Characteristics of Dies and Support Tools Used in CNC-Incremental Sheet Forming,” Proceedings of the International Deep Drawing Research Group Conference, Leca do Balio, Portugal, June 19–21. Malhotra, R., Cao, J., Ren, F., Kiridena, V., Cedric Xia, Z., and Reddy, N. V., 2011, “Improvement of Geometric Accuracy in Incremental Forming by Using a Squeezing Toolpath Strategy With Two Forming Tools,” ASME J. Manuf. Sci. Eng., 133(6), p. 061019. Malhotra, R., Cao, J., Beltran, M., Xu, D., Magargee, J., Kiridena, V., and Xia, Z. C., 2012, “Accumulative-DSIF Strategy for Enhancing Process Capabilities in Incremental Forming,” CIRP Ann., 61(1), pp. 251–254. Lasunon, O., and Knight, W. A., 2007, “Comparative Investigation of Single-Point and Double-Point Incremental Sheet Metal Forming Processes,” Proc. Inst. Mech. Eng., Part B, 221(12), pp. 1725–1732. Wu, J. H., and Wang, Q. C., 2014, “Comparison of the Geometric Accuracy by DSIF Toolpath With SPIF Toolpath,” Appl. Mech. Mater., 494–495, pp. 497–501. Sortais, H. C., Kobayashi, S., and Thomsen, E., 1962, Mechanics of Conventional Spinning, University of California, Berkeley. Ren, H., Moser, N., Zhang, Z., Ndip-Agbor, E., Smith, J., Ehmann, K. F., and Cao, J., 2015, “Effects of Tool Positions in Accumulated Double-Sided Incremental Forming on Part Geometry,” ASME J. Manuf. Sci. Eng., 137(5), p. 051008. Ndip-Agbor, E., Smith, J., Ren, H., Jiang, Z., Xu, J., Moser, N., Chen, W., Xia, Z. C., and Cao, J., 2015, “Optimization of Relative Tool Position in Accumulative Double Sided Incremental Forming Using Finite Element Analysis and Model Bias Correction,” Int. J. Mater. Form., 9(3), pp. 371–382. Lu, B., Fang, Y., Xu, D. K., Chen, J., Ai, S., Long, H., Ou, H., and Cao, J., 2015, “Investigation of Material Deformation Mechanism in Double Side Incremental Sheet Forming,” Int. J. Mach. Tools Manuf., 93, pp. 37–48. Allwood, J. M., and Shouler, D. R., 2009, “Generalised Forming Limit Diagrams Showing Increased Forming Limits With Non-Planar Stress States,” Int. J. Plast., 25(7), pp. 1207–1230. Lu, B., Xu, D. K., Liu, R. Z., Ou, H., Long, H., Chen, J., 2015, “Cranial Reconstruction Using Double Side Incremental Forming,” Key Eng. Mater., 639, pp. 535–542. Davarpanah, M. A., Mirkouei, A., Yu, X., Malhotra, R., and Pilla, S., 2015, “Effects of Incremental Depth and Tool Rotation on Failure Modes and Microstructural Properties in Single Point Incremental Forming of Polymers,” J. Mater. Process. Technol., 222, pp. 287–300. Davarpanah, M. A., and Malhotra, R., 2018, “Formability and Failure Modes in Single Point Incremental Forming of Metal-Polymer Laminates,” Procedia Manuf., 26, pp. 343–348. Jackson, K. P., Allwood, J. M., and Landert, M., 2008, “Incremental Forming of Sandwich Panels,” J. Mater. Process. Technol., 204(1–3), pp. 290–303. Fiorotto, M., Sorgente, M., and Lucchetta, G., 2010, “Preliminary Studies on Single Point Incremental Forming for Composite Materials,” Int. J. Mater. Form., 3(S1), pp. 951–954. Davarpanah, M. A., Zhang, Z., Bansal, S., Cao, J., and Malhotra, R., 2016, “Preliminary Investigations on Double Sided Incremental Forming of Thermoplastics,” Manuf. Lett., 8, pp. 21–26. Ji, Y. H., and Park, J. J., 2008, “Formability of Magnesium AZ31 Sheet in the Incremental Forming at Warm Temperature,” J. Mater. Process. Technol., 201(1–3), pp. 354–358. Ambrogio, G., Filice, L., and Manco, G. L., 2008, “Warm Incremental Forming of Magnesium Alloy AZ31,” CIRP Ann., 57(1), pp. 257–260. Duflou, J. R., Callebaut, B., Verbert, J., and De Baerdemaeker, H., 2007, “Laser Assisted Incremental Forming: Formability and Accuracy Improvement,” CIRP Ann., 56(1), pp. 273–276. Otsu, M., Yasunaga, M., Matsuda, M., and Takashima, K., 2014, “Friction Stir Incremental Forming of A2017 Aluminum Sheets,” Procedia Eng., 81, pp. 2318–2323. Xu, D., Wu, W., Malhotra, R., Chen, J., Lu, B., and Cao, J., 2013, “Mechanism Investigation for the Influence of Tool Rotation and Laser Surface Texturing (LST) on Formability in Single Point Incremental Forming,” Int. J. Mach. Tools Manuf., 73, pp. 37–46. Göttmann, A., Bailly, D., Bergweiler, G., Bambach, M., Stollenwerk, J., Hirt, G., and Loosen, P., 2012, “A Novel Approach for Temperature Control in ISF Supported by Laser and Resistance Heating,” Int. J. Adv. Manuf. Technol., 67(9–12), pp. 2195–2205. Palumbo, G., and Brandizzi, M., 2012, “Experimental Investigations on the Single Point Incremental Forming of a Titanium Alloy Component Combining Static Heating With High Tool Rotation Speed,” Mater. Des., 40, pp. 43–51. Xu, D. K., Lu, B., Cao, T. T., Zhang, H., Chen, J., Long, H., and Cao, J., 2016, “Enhancement of Process Capabilities in Electrically-Assisted Double Sided Incremental Forming,” Mater. Des., 92, pp. 268–280. Valoppi, B., Sánchez Egea, A. J., Zhang, Z., González Rojas, H. A., Ghiotti, A., Bruschi, S., and Cao, J., 2016, “A Hybrid Mixed Double-Sided Incremental Forming Method for Forming Ti6Al4V Alloy,” CIRP Ann., 65(1), pp. 309–312. Zhang, Z. X., Ren, H. Q., Xu, R., Moser, N., Smith, J., Ndip-Agbor, E., Malhotra, R., Xia, Z. C., Ehmann, K. F., and Cao, J., 2015, “A Mixed Double-Sided Incremental Forming Toolpath Strategy for Improved Geometric Accuracy,” ASME J. Manuf. Sci. Eng., 137(5), p. 051007. Moser, N., Pritchet, D., Ren, H., Ehmann, K. F., and Cao, J., 2016, “An Efficient and General Finite Element Model for Double-Sided Incremental Forming,” ASME J. Manuf. Sci. Eng., 138(9), p. 091007. Meier, H., Buff, B., Laurischkat, R., and Smukala, V., 2009, “Increasing the Part Accuracy in Dieless Robot-Based Incremental Sheet Metal Forming,” CIRP Ann., 58(1), pp. 233–238. Rakesh, L., Amit, S., and Reddy, N. V., 2016, “Deflection Compensations for Tool Path to Enhance Accuracy During Double-Sided Incremental Forming,” ASME J. Manuf. Sci. Eng., 138(9), p. 091008. Ren, H., Li, F., Moser, N., Leem, D., Li, T., Ehmann, K., and Cao, J., 2018, “General Contact Force Control Algorithm in Double-Sided Incremental Forming,” CIRP Ann., 67(1), pp. 381–384. Wang, H., Zhang, R., Zhang, H., Hu, Q., and Chen, J., 2018, “Novel Strategies to Reduce the Springback for Double-Sided Incremental Forming,” Int. J. Adv. Manuf. Technol., 96(1–4), pp. 973–979. Meier, H., Magnus, C., and Smukala, V., 2011, “Impact of Superimposed Pressure on Dieless Incremental Sheet Metal Forming With Two Moving Tools,” CIRP Ann., 60(1), pp. 327–330. Malhotra, R., Reddy, N., and Cao, J., 2010, “Automatic 3D Spiral Toolpath Generation for Single Point Incremental Forming,” ASME J. Manuf. Sci. Eng., 132(6), p. 061003. Lingam, R., Prakash, O., Belk, J. H., and Reddy, N. V., 2016, “Automatic Feature Recognition and Tool Path Strategies for Enhancing Accuracy in Double Sided Incremental Forming,” Int. J. Adv. Manuf. Technol., 88(5–8), pp. 1639–1655. Ndip-Agbor, E., Ehmann, K., and Cao, J., 2017, “Automated Flexible Forming Strategy for Geometries With Multiple Features in Double-Sided Incremental Forming,” ASME J. Manuf. Sci. Eng., 140(3), p. 031004. Moser, N., Zhang, Z., Ren, H., Zhang, H., Shi, Y., Ndip-Agbor, E. E., Lu, B., Chen, J., Ehmann, K. F., and Cao, J., 2016, “Effective Forming Strategy for Double-Sided Incremental Forming Considering In-Plane Curvature and Tool Direction,” CIRP Ann., 65(1), pp. 265–268. Martins, P. A. F., Bay, N., Skjoedt, M., and Silva, M. B., 2008, “Theory of Single Point Incremental Forming,” CIRP Ann., 57(1), pp. 247–252. Lu, B., Fang, Y., Xu, D. K., Chen, J., Ou, H., Moser, N. H., and Cao, J., 2014, “Mechanism Investigation of Friction-Related Effects in Single Point Incremental Forming Using a Developed Oblique Roller-Ball Tool,” Int. J. Mach. Tools Manuf., 85, pp. 14–29. Eyckens, P., Belkassem, B., Henrard, C., Gu, J., Sol, H., Habraken, A. M., Duflou, J. R., Van Bael, A., and Van Houtte, P., 2011, “Strain Evolution in the Single Point Incremental Forming Process: Digital Image Correlation Measurement and Finite Element Prediction,” Int. J. Mater. Form., 4(1), pp. 55–71. Malhotra, R., Xue, L., Belytschko, T., and Cao, J., 2012, “Mechanics of Fracture in Single Point Incremental Forming,” J. Mater. Process. Technol., 212(7), pp. 1573–1590. Gatea, S., Ou, H. G., Lu, B., and McCartney, G., 2017, “Modelling of Ductile Fracture in Single Point Incremental Forming Using a Modified GTN Model,” Eng. Fract. Mech., 186, pp. 59–79. Gatea, S., Xu, D., Ou, H., and McCartney, G., 2017, “Evaluation of Formability and Fracture of Pure Titanium in Incremental Sheet Forming,” Int. J. Adv. Manuf. Technol., 95(1–4), pp. 625–641. Wang, C., Daniel, W. J. T., Lu, H., Liu, S., and Meehan, P. A., 2017, “FEM Investigation of Ductile Fracture Prediction in Two-Point Incremental Sheet Metal Forming Process,” International Conference on the Technology of Plasticity, Procedia Engineering, Cambridge, UK, Sept. 17–22, Vol 207, pp. 836–841. Valoppi, B., Zhang, Z., Deng, M., Ghiotti, A., Bruschi, S., Ehmann, K. F., and Cao, J., 2017, “On the Fracture Characterization in Double-Sided Incremental Forming of Ti6Al4 V Sheets at Elevated Temperatures,” Procedia Manuf., 10, pp. 407–416. Ndip-Agbor, E. E., Smith, J., Xu, R., Malhotra, R., and Cao, J., 2013, “Effect of Relative Tool Position on the Geometric Accuracy of Accumulative DSIF,” AIP Conf. Proc., 1567(1), pp. 828–831. Topics: Deformation View article in PDF format. ## References Emmens, W. C., Sebastiani, G., and van den Boogaard, A. H., 2010, “The Technology of Incremental Sheet Forming—A Brief Review of the History,” J. Mater. Process. Technol., 210(8), pp. 981–997. Mason, B., and Applton, E., 1984, “Sheet Metal Forming for Small Batches Using Sacrificial Tooling,” 3rd International Conference on Rotary Metalworking Processes (ROMP 3), Kyoto, Japan, Sept. Silva, M. B., and Martins, P. A. F., 2012, “Two-Point Incremental Forming With Partial Die: Theory and Experimentation,” J. Mater. Eng. Perform., 22(4), pp. 1018–1027. Hirt, G., Bambach, M., Bleck, W., Prahl, U., and Stollenwerk, J., 2015, “The Development of Incremental Sheet Forming From Flexible Forming to Fully Integrated Production of Sheet Metal Parts,” Advances in Production Technology, Springer, New York, pp. 117–129. Maidagan, E., Zettler, J., Bambach, M., Rodríguez, P. P., and Hirt, G., 2007, “A New Incremental Sheet Forming Process Based on a Flexible Supporting Die System,” Key Eng. Mater., 344, pp. 607–614. Meier, H., Smukala, V., Dewald, O., and Zhang, J., 2007, “Two Point Incremental Forming With Two Moving Forming Tools,” Key Eng. Mater., 344, pp. 599–605. Smith, J., Malhotra, R., Liu, W. K., and Cao, J., 2013, “Deformation Mechanics in Single-Point and Accumulative Double-Sided Incremental Forming,” Int. J. Adv. Manuf. Technol., 69(5–8), pp. 1185–1201. Hagan, E., and Jeswiet, J., 2003, “A Review of Conventional and Modern Single-Point Sheet Metal Forming Methods,” Proc. Inst. Mech. Eng., Part B, 217(2), pp. 213–225. Micari, F., Ambrogio, G., and Filice, L., 2007, “Shape and Dimensional Accuracy in Single Point Incremental Forming: State of the art and Future Trends,” J. Mater. Process. Technol., 191(1–3), pp. 390–395. Emmens, W. C., and van den Boogaard, A. H., 2009, “Incremental Forming by Continuous Bending under Tension—An Experimental Investigation,” J. Mater. Process. Technol., 209(14), pp. 5456–5463. Duflou, J. R., Habraken, A.-M., Cao, J., Malhotra, R., Bambach, M., Adams, D., Vanhove, H., Mohammadi, A., and Jeswiet, J., 2018, “Single Point Incremental Forming: State-of-the-Art and Prospects,” Int. J. Mater. Form. 11(6), pp. 743–773. Gatea, S., Ou, H. G., and McCartney, G., 2016, “Review on the Influence of Process Parameters in Incremental Sheet Forming,” Int. J. Adv. Manuf. Technol., 87(1–4), pp. 479–499. Behera, A. K., de Sousa, R. A., Ingarao, G., and Oleksik, V., 2017, “Single Point Incremental Forming: An Assessment of the Progress and Technology Trends From 2005 to 2015,” J. Manuf. Process., 27, pp. 37–62. Jeswiet, J., Micari, F., Hirt, G., Bramley, A., Duflou, J., and Allwood, J., 2005, “Asymmetric Single Point Incremental Forming of Sheet Metal,” CIRP Ann., 54(2), pp. 88–114. Afonso, D., Alves de Sousa, R., and Torcato, R., 2017, “Integration of Design Rules and Process Modelling Within SPIF Technology—A Review on the Industrial Dissemination of Single Point Incremental Forming,” Int. J. Adv. Manuf. Technol., 94(9–12), pp. 4387–4399. Fang, Y., Lu, B., Chen, J., Xu, D. K., and Ou, H., 2014, “Analytical and Experimental Investigations on Deformation Mechanism and Fracture Behavior in Single Point Incremental Forming,” J. Mater. Process. Technol., 214(8), pp. 1503–1515. Yamashita, M., Gotoh, M., and Atsumi, S.-Y., 2008, “Numerical Simulation of Incremental Forming of Sheet Metal,” J. Mater. Process. Technol., 199(1–3), pp. 163–172. Lu, B., Chen, J., Ou, H., and Cao, J., 2013, “Feature-Based Tool Path Generation Approach for Incremental Sheet Forming Process,” J. Mater. Process. Technol., 213(7), pp. 1221–1233. Allwood, J. M., Music, O., Raithathna, A., and Duncan, S. R., 2009, “Closed-Loop Feedback Control of Product Properties in Flexible Metal Forming Processes With Mobile Tools,” CIRP Ann., 58(1), pp. 287–290. Azaouzi, M., and Lebaal, N., 2012, “Tool Path Optimization for Single Point Incremental Sheet Forming Using Response Surface Method,” Simul. Model. Pract. Theory, 24, pp. 49–58. Matsubara, S., 2001, “A Computer Numerically Controlled Dieless Incremental Forming of a Sheet Metal,” Proc. Inst. Mech. Eng., Part B, 215(7), pp. 959–966. Hirt, G., Ames, J., and Bambach, M., 2006, “Basic Investigation Into the Characteristics of Dies and Support Tools Used in CNC-Incremental Sheet Forming,” Proceedings of the International Deep Drawing Research Group Conference, Leca do Balio, Portugal, June 19–21. Malhotra, R., Cao, J., Ren, F., Kiridena, V., Cedric Xia, Z., and Reddy, N. V., 2011, “Improvement of Geometric Accuracy in Incremental Forming by Using a Squeezing Toolpath Strategy With Two Forming Tools,” ASME J. Manuf. Sci. Eng., 133(6), p. 061019. Malhotra, R., Cao, J., Beltran, M., Xu, D., Magargee, J., Kiridena, V., and Xia, Z. C., 2012, “Accumulative-DSIF Strategy for Enhancing Process Capabilities in Incremental Forming,” CIRP Ann., 61(1), pp. 251–254. Lasunon, O., and Knight, W. A., 2007, “Comparative Investigation of Single-Point and Double-Point Incremental Sheet Metal Forming Processes,” Proc. Inst. Mech. Eng., Part B, 221(12), pp. 1725–1732. Wu, J. H., and Wang, Q. C., 2014, “Comparison of the Geometric Accuracy by DSIF Toolpath With SPIF Toolpath,” Appl. Mech. Mater., 494–495, pp. 497–501. Sortais, H. C., Kobayashi, S., and Thomsen, E., 1962, Mechanics of Conventional Spinning, University of California, Berkeley. Ren, H., Moser, N., Zhang, Z., Ndip-Agbor, E., Smith, J., Ehmann, K. F., and Cao, J., 2015, “Effects of Tool Positions in Accumulated Double-Sided Incremental Forming on Part Geometry,” ASME J. Manuf. Sci. Eng., 137(5), p. 051008. Ndip-Agbor, E., Smith, J., Ren, H., Jiang, Z., Xu, J., Moser, N., Chen, W., Xia, Z. C., and Cao, J., 2015, “Optimization of Relative Tool Position in Accumulative Double Sided Incremental Forming Using Finite Element Analysis and Model Bias Correction,” Int. J. Mater. Form., 9(3), pp. 371–382. Lu, B., Fang, Y., Xu, D. K., Chen, J., Ai, S., Long, H., Ou, H., and Cao, J., 2015, “Investigation of Material Deformation Mechanism in Double Side Incremental Sheet Forming,” Int. J. Mach. Tools Manuf., 93, pp. 37–48. Allwood, J. M., and Shouler, D. R., 2009, “Generalised Forming Limit Diagrams Showing Increased Forming Limits With Non-Planar Stress States,” Int. J. Plast., 25(7), pp. 1207–1230. Lu, B., Xu, D. K., Liu, R. Z., Ou, H., Long, H., Chen, J., 2015, “Cranial Reconstruction Using Double Side Incremental Forming,” Key Eng. Mater., 639, pp. 535–542. Davarpanah, M. A., Mirkouei, A., Yu, X., Malhotra, R., and Pilla, S., 2015, “Effects of Incremental Depth and Tool Rotation on Failure Modes and Microstructural Properties in Single Point Incremental Forming of Polymers,” J. Mater. Process. Technol., 222, pp. 287–300. Davarpanah, M. A., and Malhotra, R., 2018, “Formability and Failure Modes in Single Point Incremental Forming of Metal-Polymer Laminates,” Procedia Manuf., 26, pp. 343–348. Jackson, K. P., Allwood, J. M., and Landert, M., 2008, “Incremental Forming of Sandwich Panels,” J. Mater. Process. Technol., 204(1–3), pp. 290–303. Fiorotto, M., Sorgente, M., and Lucchetta, G., 2010, “Preliminary Studies on Single Point Incremental Forming for Composite Materials,” Int. J. Mater. Form., 3(S1), pp. 951–954. Davarpanah, M. A., Zhang, Z., Bansal, S., Cao, J., and Malhotra, R., 2016, “Preliminary Investigations on Double Sided Incremental Forming of Thermoplastics,” Manuf. Lett., 8, pp. 21–26. Ji, Y. H., and Park, J. J., 2008, “Formability of Magnesium AZ31 Sheet in the Incremental Forming at Warm Temperature,” J. Mater. Process. Technol., 201(1–3), pp. 354–358. Ambrogio, G., Filice, L., and Manco, G. L., 2008, “Warm Incremental Forming of Magnesium Alloy AZ31,” CIRP Ann., 57(1), pp. 257–260. Duflou, J. R., Callebaut, B., Verbert, J., and De Baerdemaeker, H., 2007, “Laser Assisted Incremental Forming: Formability and Accuracy Improvement,” CIRP Ann., 56(1), pp. 273–276. Otsu, M., Yasunaga, M., Matsuda, M., and Takashima, K., 2014, “Friction Stir Incremental Forming of A2017 Aluminum Sheets,” Procedia Eng., 81, pp. 2318–2323. Xu, D., Wu, W., Malhotra, R., Chen, J., Lu, B., and Cao, J., 2013, “Mechanism Investigation for the Influence of Tool Rotation and Laser Surface Texturing (LST) on Formability in Single Point Incremental Forming,” Int. J. Mach. Tools Manuf., 73, pp. 37–46. Göttmann, A., Bailly, D., Bergweiler, G., Bambach, M., Stollenwerk, J., Hirt, G., and Loosen, P., 2012, “A Novel Approach for Temperature Control in ISF Supported by Laser and Resistance Heating,” Int. J. Adv. Manuf. Technol., 67(9–12), pp. 2195–2205. Palumbo, G., and Brandizzi, M., 2012, “Experimental Investigations on the Single Point Incremental Forming of a Titanium Alloy Component Combining Static Heating With High Tool Rotation Speed,” Mater. Des., 40, pp. 43–51. Xu, D. K., Lu, B., Cao, T. T., Zhang, H., Chen, J., Long, H., and Cao, J., 2016, “Enhancement of Process Capabilities in Electrically-Assisted Double Sided Incremental Forming,” Mater. Des., 92, pp. 268–280. Valoppi, B., Sánchez Egea, A. J., Zhang, Z., González Rojas, H. A., Ghiotti, A., Bruschi, S., and Cao, J., 2016, “A Hybrid Mixed Double-Sided Incremental Forming Method for Forming Ti6Al4V Alloy,” CIRP Ann., 65(1), pp. 309–312. Zhang, Z. X., Ren, H. Q., Xu, R., Moser, N., Smith, J., Ndip-Agbor, E., Malhotra, R., Xia, Z. C., Ehmann, K. F., and Cao, J., 2015, “A Mixed Double-Sided Incremental Forming Toolpath Strategy for Improved Geometric Accuracy,” ASME J. Manuf. Sci. Eng., 137(5), p. 051007. Moser, N., Pritchet, D., Ren, H., Ehmann, K. F., and Cao, J., 2016, “An Efficient and General Finite Element Model for Double-Sided Incremental Forming,” ASME J. Manuf. Sci. Eng., 138(9), p. 091007. Meier, H., Buff, B., Laurischkat, R., and Smukala, V., 2009, “Increasing the Part Accuracy in Dieless Robot-Based Incremental Sheet Metal Forming,” CIRP Ann., 58(1), pp. 233–238. Rakesh, L., Amit, S., and Reddy, N. V., 2016, “Deflection Compensations for Tool Path to Enhance Accuracy During Double-Sided Incremental Forming,” ASME J. Manuf. Sci. Eng., 138(9), p. 091008. Ren, H., Li, F., Moser, N., Leem, D., Li, T., Ehmann, K., and Cao, J., 2018, “General Contact Force Control Algorithm in Double-Sided Incremental Forming,” CIRP Ann., 67(1), pp. 381–384. Wang, H., Zhang, R., Zhang, H., Hu, Q., and Chen, J., 2018, “Novel Strategies to Reduce the Springback for Double-Sided Incremental Forming,” Int. J. Adv. Manuf. Technol., 96(1–4), pp. 973–979. Meier, H., Magnus, C., and Smukala, V., 2011, “Impact of Superimposed Pressure on Dieless Incremental Sheet Metal Forming With Two Moving Tools,” CIRP Ann., 60(1), pp. 327–330. Malhotra, R., Reddy, N., and Cao, J., 2010, “Automatic 3D Spiral Toolpath Generation for Single Point Incremental Forming,” ASME J. Manuf. Sci. Eng., 132(6), p. 061003. Lingam, R., Prakash, O., Belk, J. H., and Reddy, N. V., 2016, “Automatic Feature Recognition and Tool Path Strategies for Enhancing Accuracy in Double Sided Incremental Forming,” Int. J. Adv. Manuf. Technol., 88(5–8), pp. 1639–1655. Ndip-Agbor, E., Ehmann, K., and Cao, J., 2017, “Automated Flexible Forming Strategy for Geometries With Multiple Features in Double-Sided Incremental Forming,” ASME J. Manuf. Sci. Eng., 140(3), p. 031004. Moser, N., Zhang, Z., Ren, H., Zhang, H., Shi, Y., Ndip-Agbor, E. E., Lu, B., Chen, J., Ehmann, K. F., and Cao, J., 2016, “Effective Forming Strategy for Double-Sided Incremental Forming Considering In-Plane Curvature and Tool Direction,” CIRP Ann., 65(1), pp. 265–268. Martins, P. A. F., Bay, N., Skjoedt, M., and Silva, M. B., 2008, “Theory of Single Point Incremental Forming,” CIRP Ann., 57(1), pp. 247–252. Lu, B., Fang, Y., Xu, D. K., Chen, J., Ou, H., Moser, N. H., and Cao, J., 2014, “Mechanism Investigation of Friction-Related Effects in Single Point Incremental Forming Using a Developed Oblique Roller-Ball Tool,” Int. J. Mach. Tools Manuf., 85, pp. 14–29. Eyckens, P., Belkassem, B., Henrard, C., Gu, J., Sol, H., Habraken, A. M., Duflou, J. R., Van Bael, A., and Van Houtte, P., 2011, “Strain Evolution in the Single Point Incremental Forming Process: Digital Image Correlation Measurement and Finite Element Prediction,” Int. J. Mater. Form., 4(1), pp. 55–71. Malhotra, R., Xue, L., Belytschko, T., and Cao, J., 2012, “Mechanics of Fracture in Single Point Incremental Forming,” J. Mater. Process. Technol., 212(7), pp. 1573–1590. Gatea, S., Ou, H. G., Lu, B., and McCartney, G., 2017, “Modelling of Ductile Fracture in Single Point Incremental Forming Using a Modified GTN Model,” Eng. Fract. Mech., 186, pp. 59–79. Gatea, S., Xu, D., Ou, H., and McCartney, G., 2017, “Evaluation of Formability and Fracture of Pure Titanium in Incremental Sheet Forming,” Int. J. Adv. Manuf. Technol., 95(1–4), pp. 625–641. Wang, C., Daniel, W. J. T., Lu, H., Liu, S., and Meehan, P. A., 2017, “FEM Investigation of Ductile Fracture Prediction in Two-Point Incremental Sheet Metal Forming Process,” International Conference on the Technology of Plasticity, Procedia Engineering, Cambridge, UK, Sept. 17–22, Vol 207, pp. 836–841. Valoppi, B., Zhang, Z., Deng, M., Ghiotti, A., Bruschi, S., Ehmann, K. F., and Cao, J., 2017, “On the Fracture Characterization in Double-Sided Incremental Forming of Ti6Al4 V Sheets at Elevated Temperatures,” Procedia Manuf., 10, pp. 407–416. Ndip-Agbor, E. E., Smith, J., Xu, R., Malhotra, R., and Cao, J., 2013, “Effect of Relative Tool Position on the Geometric Accuracy of Accumulative DSIF,” AIP Conf. Proc., 1567(1), pp. 828–831. ## Figures Fig. 1 Illustration of incremental sheet forming (reproduced from Ref. [1] with permission from Elsevier © 2010) Fig. 2 Schematics of (a) SPIF, (b) TPIF, and (c) DSIF Fig. 3 Schematics of (a) conventional DSIF toolpath and (b) accumulative DSIF toolpath [7] (reprinted with permission from Springer © 2013) Fig. 4 Equivalent plastic strain contour plot of (a) SPIF and (b) DSIF; thickness distribution in (c) SPIF and (d) DSIF process of 45 deg pyramids [25] (reprinted with permission from SAGE © 2007) Fig. 5 An illustration of the sin law and squeeze factor in DSIF Fig. 6 Material deformation via different supporting forces and tool shift: (a) 240 N without tool shift, (b) 240 N with tool shift, (c) 480 N without tool shift, and (d) 480 N with tool shift [30] (reprinted with permission from Elsevier © 2015) Fig. 7 (a) Schematic of toolpath variables and (b) definition of stable angle [28] (reprinted with permission from ASME © 2015) Fig. 8 (a) Two electrical connection modes in DSIF and (b) improved geometry accuracy in electrically assisted DSIF [45] (reprinted with permission from Elsevier © 2016) Fig. 9 Comparison of geometries of components made by ADSIF, DSIF, and SPIF with wall angles (a) 40 deg and (b) 50 deg [24] (reprinted with permission from Elsevier © 2012) Fig. 10 Schematic of applying the correction through the reference CAD model [49] (reprinted with permission from Springer © 2013) Fig. 11 (a) Toolpath and sheet deflection, (b) steps of how compensation applied, and (c) partition of profile based on features [50] (reprinted with permission from ASME © 2016) Fig. 12 Illustration of squeezing and reverse-bending strategies for anti-springback purpose [52] (reprinted with permission from Springer © 2018) Fig. 13 (a) Contour plot of Z-forces predicted via FE simulation and (b) classification of tool movement based on tool movement and contour [57] (reprinted with permission from Elsevier © 2016) Fig. 14 The desired and final product fabricated via the mixed toolpath strategy [47] (reprinted with permission from ASME © 2015) Fig. 15 Comparison plots of equivalent strain versus stress triaxiality in (a) DSIF and (b) SPIF [30] (reprinted with permission from Elsevier © 2015) Fig. 16 (a) FE model for ADSIF and (b) deformation sections for different forming stages [7] (reprinted with permission from Springer © 2013) Fig. 17 Schematics of (a) simplified ADSIF model and (b) variables D and S in ADSIF [29,66] (reprinted with permission from Springer © 2015 and AIP Publishing © 2013) ## Errata Some tools below are only available to our subscribers or users with an online account. ### Related Content Customize your page view by dragging and repositioning the boxes below. Related Journal Articles Related Proceedings Articles Related eBook Content Topic Collections
2019-04-24 12:49:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4029349386692047, "perplexity": 7870.267480292307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578641278.80/warc/CC-MAIN-20190424114453-20190424140453-00084.warc.gz"}
https://plainmath.net/70571/i-am-trying-to-take-the-laplace-transfor
# I am trying to take the laplace transform of \cos(t)u(t-\pi). I am trying to take the laplace transform of $\mathrm{cos}\left(t\right)u\left(t-\pi \right)$. Is it valid for me to treat it as $\left(\left(\mathrm{cos}\left(t\right)+\pi \right)-\pi \right)u\left(t-\pi \right)$ and treat $\mathrm{cos}\left(t\right)-\pi$ as f(t) and use the 2nd shifting property, or is this not the correct procedure? You can still ask an expert for help ## Want to know more about Laplace transform? • Questions are typically answered in as fast as 30 minutes Solve your problem for the price of one coffee • Math expert for every subject • Pay only if we can solve it utloverej Here is one approach: $\mathcal{L}\left(\mathrm{cos}\left(t\right)\right)=\frac{s}{{s}^{2}+1}$ $\mathcal{L}\mathrm{cos}\left(t\right)u\left(t-\pi \right)={e}^{-\pi s}\mathcal{L}\left(\mathrm{cos}\left(t-\pi \right)\right)={e}^{-\pi s}\mathcal{L}\left(-\mathrm{cos}\left(t\right)\right)=-{e}^{-\pi s}\frac{s}{{s}^{2}+1}$ Recall from the sum formula: $\mathrm{cos}\left(t-\pi \right)=\mathrm{cos}t\mathrm{cos}\pi +\mathrm{sin}t\mathrm{sin}\pi =-\mathrm{cos}t$ ###### Not exactly what you’re looking for? Note that $\mathrm{cos}\left(t\right)=\mathrm{cos}\left(\left(t-\pi \right)+\pi \right)=-\mathrm{cos}\left(t-\pi \right)$ Because Hence, $\mathcal{L}\left\{\mathrm{cos}\left(t\right)u\left(t-\pi \right)\right\}=-\mathcal{L}\left\{\mathrm{cos}\left(t-\pi \right)u\left(t-\pi \right)\right\}=\dots$
2022-05-25 04:03:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 21, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6909658312797546, "perplexity": 531.0299881100657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662578939.73/warc/CC-MAIN-20220525023952-20220525053952-00428.warc.gz"}
https://datascience.stackexchange.com/questions/44180/data-splitting-for-a-binary-classification-model
Data splitting for a binary classification model I'm trying to build a binary classification model that will tell who's going to buy the product and who's not. I've heard that splitting a dataset into two different subsets is a common way when you prepare an input data. [ ================ Training Data 80% ================= ] [ ==== Test Set 20% ==== ] Is it just mindlessly splitting a chunk of dataset by some amount of proportion like above? Is it that simple? Imagine I have this simple dataset below. UserId,UserName,AppId,Purchased 1,Lianne,1,1 1,Lianne,2,1 1,Lianne,3,1 1,Lianne,4,1 1,Lianne,5,1 1,Lianne,6,0 1,Lianne,7,0 1,Lianne,8,0 1,Lianne,9,0 1,Lianne,10,0 As the common recommended way, I splitted it into two groups. // Training Data Set 1,Lianne,1,1 1,Lianne,2,1 1,Lianne,3,1 1,Lianne,4,1 1,Lianne,5,1 1,Lianne,6,0 1,Lianne,7,0 1,Lianne,8,0 // Test Set 1,Lianne,9,0 1,Lianne,10,0 Would this work? well it seemed not and it turned out it actually didn't. The model was wrong about predicting on the appId of 6,7,8,9. It thought the user number one would buy them with a slightly high chance. The metrics look like... • TP : 5 • FP : 4 • FN : 1 • Accuracy : 0.5 • Auc : NaN • F1Score : NaN • Precision : 0 • Negative Precision : 1 • Negative Recall : 0.5 To make a proper model, what my test dataset should look like on this sample training data? Assuming that the data set posted is just an illustrative example (and therefore so small): The problem is that your test data has a very different distribution regarding the dependent variable compared to your training data (in your example split: it does not contain any examples of class 1). When splitting the data into train and test sets you need to include some randomness to fix this. However, in such a small data set that might still lead to very different empirical distributions regarding the training and test data. What you can do is to apply a split which keeps the distribution of the target variable the same for the training and test data (i.e. both sets will have the same share of examples with y==1 and y==0). Scikit Learn offers the parameter stratify for this (I copied your data to a CSV file): import pandas as pd from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import train_test_split raw_data = pd.read_csv('binary classification examples') data = pd.get_dummies(raw_data) X = data[["UserId", "UserName_Lianne", "AppId"]] y = data[["Purchased"]] X_train, X_test, y_train, y_test = train_test_split( X, y, stratify=y, test_size=0.2) With stratify=y I have forced the split to keep the same share of each class of y in the train and test split: >>> y_train Out[40]: Purchased 7 0 3 1 0 1 5 0 1 1 9 0 8 0 4 1 >>> y_test Out[41]: Purchased 2 1 6 0 As you can see both, the training and the test data, now contain 50% items with y==0 and y==1. And with this data a DecisionTreeClassifier can easily classify the training and test data correctly: model = DecisionTreeClassifier() model.fit(X_train, y_train) print("Train score: {}\tTest score: {}".format( model.score(X_train, y_train), model.score(X_test, y_test))) Gives the following scores: Train score: 1.0 Test score: 1.0 My 2 cents: the number of records in the data set used here is very small. If we have a look into the data set we can see that the target variable split is exactly 50:50 which means the probability is half. Its like flipping a coin to get heads or tail. The training set contains a known output and the model learns on this data in order to be generalized to other data later on. The dependent variables and the independent variable should be in splatted and then do a train test fit. You can use the library from scikit learn as well from sklearn.model_selection import train_test_split if this is how your data looks like it would be a good idea to split users into training examples and test examples, train examples would contain users with info about all related apps, and in test data you would give your model about 80% info about user and model would have to fill covered 20%, in some cases you have to split data in problem speciffic way
2022-01-22 17:50:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2780609726905823, "perplexity": 1669.3099472295874}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303868.98/warc/CC-MAIN-20220122164421-20220122194421-00563.warc.gz"}
https://www.studysmarter.us/textbooks/math/essential-calculus-early-transcendentals-2nd/partial-derivatives/q15e-sketch-the-graph-of-the-function-fleft-xy-right-y2-1/
Suggested languages for you: Americas Europe Q15E Expert-verified Found in: Page 623 Essential Calculus: Early Transcendentals Book edition 2nd Author(s) James Stewart Pages 830 pages ISBN 9781133112280 Sketch the graph of the function $$f\left( {x,y} \right) = {y^2} + 1$$ The graph of the function See the step by step solution Step-2: Consideration Given the function $$f\left( {x,y} \right) = {y^2} + 1$$ Step-3: Observation The domain of the function can be found through graph of the function
2023-03-24 22:05:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6083086133003235, "perplexity": 6739.870994308019}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00269.warc.gz"}
https://www.biostars.org/p/9514918/
[ATACseqQC] NA values in sigs makes heatmap not completed 1 0 Entering edit mode 8 months ago Hello all, I'm using ATACseqQC package for analyzing my post-sorted bam file of ATAC-seq (from mice). I found my heatmap was only half shown whatever I tried. Please see the code below. # for mouse library(ATACseqQC) library(ChIPpeakAnno) library(MotifDb) library(GenomicAlignments) library(Rsamtools) library(GenomicScores) library(BSgenome.Mmusculus.UCSC.mm10) library(TxDb.Mmusculus.UCSC.mm10.knownGene) mm10_gscore <- getGScores("phastCons60way.UCSC.mm10") txs <- txs[seqnames(txs) %in% c("chr1", "chr2", "chr3", "chr4", "chr5", "chr6", "chr7", "chr8", "chr9", "chr10", "chr11", "chr12", "chr13", "chr14", "chr15", "chr16", "chr17", "chr18", "chr19", "chrX", "chrY")] genome <- Mmusculus objs <- splitGAlignmentsByCut(obj=gal1, txs=txs, genome=genome, outPath="D:/HYJ/splited_Cracd_KO1/") bamfiles <- file.path("D:/HYJ/splited_Cracd_KO1/", c("NucleosomeFree.bam", "mononucleosome.bam", "dinucleosome.bam", "trinucleosome.bam")) TSS <- promoters(txs, upstream=0, downstream=1) TSS <- unique(TSS) librarySize <- estLibSize(bamfiles) librarySize D:/HYJ/splited_Cracd_KO1//NucleosomeFree.bam D:/HYJ/splited_Cracd_KO1//mononucleosome.bam 7552685 1484090 D:/HYJ/splited_Cracd_KO1//dinucleosome.bam D:/HYJ/splited_Cracd_KO1//trinucleosome.bam 1497497 0 NTILE <- 101 dws <- ups <- 1010 sigs <- enrichedFragments(gal=objs[c("NucleosomeFree", "mononucleosome", "dinucleosome", "trinucleosome")], TSS=TSS, librarySize=librarySize, upstream = ups, downstream = dws, TSS.filter=0.5, seqlev = paste0("chr", c(1:19, "X", "Y")), n.tile = NTILE) sigs.log2 <- lapply(sigs, function(.ele) log2(.ele+1)) featureAlignedHeatmap(cvglists=sigs.log2, feature.gr=reCenterPeaks(peaks=TSS, width=ups+dws), upstream = ups, downstream = dws, zeroAt=0.5, n.tile=NTILE) I think the reason may be the NA values in sigs. And the NA values may be caused by the not well-aligned bam files because there is no conservation=mm10_gscore in objs <- splitGAlignmentsByCut(obj=gal1, txs=txs, genome=genome, outPath="D:/HYJ/splited_Cracd_KO1/"). However, if I add conservation=mm10_gscore, the splitGAlignmentsByCut() will generate Error: subscript contains invalid names. sigs \$NucleosomeFree [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [1,] NA NA NA NA NA NA NA NA NA NA NA [2,] 1.5850972 1.5850972 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 [3,] 0.0000000 0.0000000 0.000000 0.000000 0.000000 0.000000 0.000000 2.451784 2.451784 2.451784 2.451784 [4,] 0.0000000 0.0000000 0.000000 0.000000 0.000000 1.585097 1.585097 1.585097 1.585097 1.585097 0.000000 [5,] 0.0000000 1.2258919 1.225892 1.225892 1.225892 1.225892 0.000000 0.000000 0.000000 0.000000 0.000000 [6,] NA NA NA NA NA NA NA NA NA NA NA [7,] NA NA NA NA NA NA NA NA NA NA NA [8,] 0.0000000 0.0000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 [9,] NA NA NA NA NA NA NA NA NA NA NA It is appreciated if anyone could share some opinions or tips on this issue. Thanks! Best, YJ ATACseqQC ATAC-seq • 400 views 0 Entering edit mode 8 months ago dottercp • 0 Hi, I ran into a similar problem also triggering a Error: subscript contains invalid names error. What I found out is that the problem was with including the Y chromosome in the list as the (very shallow) data only contained nucleosome-free fragments for this chromosome which lead to the error since it tried to select chromosomes that weren't in the split data. I'm not sure if this helps in your situation but if it runs properly with chrY excluded at least you know where the error comes from. Edit: Another problem leading to the same error I encountered has to do with the caching when using getGScores("phastCons60way.UCSC.mm10"). The directory specified in the object (data_dirpath) did not exist which led to non of the scores being actually available when using the object (all NA values) - this led to problems in splitGAlignmentsByCut leading to the same error Error: subscript contains invalid names but at a different point of the code. After creating the directory getGScores downloaded the data again and the script ran through without error. 0 Entering edit mode hi dottercp the developer has pushed a patch to solve the NA value BUG and the invalid names BUG. check these https://github.com/jianhong/ATACseqQC/issues/49 and https://github.com/jianhong/ATACseqQC/issues/48. Now what I'm suffering is that I cannot plot the completed heatmap with red signals. details see here (https://github.com/jianhong/ATACseqQC/issues/50). Appreciate it if you have any ideas about this issue. Thanks! Best, YJ
2022-12-04 09:53:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18024039268493652, "perplexity": 9934.468022264942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710968.29/warc/CC-MAIN-20221204072040-20221204102040-00118.warc.gz"}
https://www.tutorialspoint.com/how-to-create-boxplot-using-mean-and-standard-deviation-in-r
# How to create boxplot using mean and standard deviation in R? R ProgrammingServer Side ProgrammingProgramming The main statistical parameters that are used to create a boxplot are mean and standard deviation but in general, the boxplot is created with the whole data instead of these values. If we don’t have whole data but mean and standard deviation are available then the boxplot can be created by finding all the limits of a boxplot using mean as a measure of central tendency. ## Example Consider the below data frame: Live Demo > df<-data.frame(mean=c(24,25,27,24),sd=c(1.1,2.1,1.5,1.8),Category=as.factor(c("A","B","C","D"))) > df ## Output mean sd Category 1 24 1.1 A 2 25 2.1 B 3 27 1.5 C 4 24 1.8 D Loading ggplot2 package and creating the boxplot of each category in df: ## Example > library(ggplot2) > ggplot(df,aes(x=Category))+geom_boxplot(aes(lower=mean-sd,upper=mean+sd,middle=mean,ymin=mean-3*sd,ymax=mean+3*sd),stat="identity") ## Output: Published on 19-Nov-2020 08:01:21
2021-09-25 09:07:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4220465123653412, "perplexity": 2805.3835495841377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057615.3/warc/CC-MAIN-20210925082018-20210925112018-00632.warc.gz"}
http://math.stackexchange.com/questions/382760/composition-of-two-axis-angle-rotations
Composition of two axis-angle rotations Please note that I am not referring to Euler angles of the form (α,β,γ). I am referring to the axis-angle representation, in which a unit vector indicates the direction axis of a rotation and a scalar the magnitude of the rotation. Let $(\hat{n_1},\theta_1)$ refer to the first rotation and $(\hat{n_2},\theta_2)$ refer to the second rotation. What is the value of the first rotation followed by the second rotation, in axis-angle representation? I understand that the composition of two rotations represented by quaternions $q_1$ and $q_2$ is equal to their product $q_2q_1$. Is there a way to find the composition of axis-angle rotations (without having to convert them to quaternions, multiply them, and convert them back to axis-angle) in a similar manner? Is there a simplified formula for this operation? - I do not believe there is without passing through some alternate representation (quaternion, matrix, ...). This is one of the known disadvantages of axis-angle compared to the others, while an advantage is the triviality of inversion (simply negate the angle or the axis). - The quaternion procedure is probably the simplest, easiest to implement, and most computationally economical way to go. In practice you would likely be doing all of this in a computer anyhow, and computing the product of two quaternions (in the big scheme of things) is not much harder than two real numbers, or two complex numbers. I think the multiplication is more computatationally efficient than multiplying two $3\times 3$ matrices, at least. Actually, if you sit down and work the quaternion solution, you can probably work out a formula completely in terms of the coordinates of the $n_i$ and the angles $\theta_i$. It would be monstrous, but it would be totally in terms of your data (and maybe inverse trigonometric functions.) - Look at the following link: Axis–angle representation - Sorry I didn't check the link in the post, but I did in Wikipedia. If you read the details there is a formula on how to rotate a vector give the axis-angle, you compose it twice and get the desired formula. –  Heberto del Rio May 6 '13 at 11:04 Well I should delete my comment, but it just seemed odd it was the same link. Also I think he wants to start with one axis-angle rotation, rotate in space to get result, rotate that result by the second angle-axis rotation, and finally put the overall map back into the form of an angle-axis rotation, so a kind of "inverse" at the end, going from the result of the two maps back into angle-axis form. I'd guess the final result wouldn't be simple only in terms of the two angle-axis rotations composed. –  coffeemath May 6 '13 at 17:14 That is correct. What would be the explicit axis-angle representation of two axis-angle rotations combined, without having to apply the first rotation to a vector and then the second, using Rodrigues' rotation formula? –  user1667423 May 6 '13 at 18:19
2015-08-29 21:40:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7603469491004944, "perplexity": 338.6406299152251}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064538.31/warc/CC-MAIN-20150827025424-00018-ip-10-171-96-226.ec2.internal.warc.gz"}
https://encyclopediaofmath.org/wiki/Semi-ring
# Semi-ring 2010 Mathematics Subject Classification: Primary: 16Y60 [MSN][ZBL] A non-empty set $S$ with two associative binary operations $+$ and $\cdot$, satisfying the distributive laws $$(a+b) \cdot c = a\cdot c + b \cdot c$$ and $$a \cdot (b+c) = a\cdot b + a\cdot c \ .$$ In most cases one also assumes that the addition is commutative and that there exists a zero element $0$ such that $a + 0 = a$ for every $a \in S$. The most important classes of semi-rings are rings and distributive lattices. If there is a multiplicative unit element 1, the two classes are combined by the condition $$\forall x \, \exists y \ x+y=1 \ .$$ The non-negative integers with the usual operations provide an example of a semi-ring that does not satisfy this condition. The term "exotic" semi-rings has been used to describe subsets of the real numbers with $\min$ or $\max$ as ${+}$ and addition as ${\star}$. These are thus idempotent semi-rings. Examples include the tropical semiring on $\mathbf{N} \cup \{\infty\}$ with operations ${\min},\, +$. An additive zero in a semiring $S$ is an element $a$ such that $a+x = x+a = x$ for all $x$; a multiplicative zero is an element $m$ such that $m \cdot x = x \cdot m = m$ for all $x$. A double zero is an element which is both an additive zero and a multiplicative zero. If the additive semigroup of a semiring $S$ is commutative and satisfies the cancellative property $a + c = b + c \Rightarrow a = b$ for all $c$, then the additive semigroup embeds in its Grothendieck group $R$ and the multiplication $\cdot$ extends to $R$, giving it a ring structure: the Grothendieck ring of $S$. The Grothendieck ring of a finite group $G$ over a field $K$ is the ring constructed in this way from the semiring of isomorphism classes of modules over the group ring $K[G]$ with direct sum and tensor product as the operations. #### References • K. Glazek, A Guide to the Literature on Semirings and their Applications in Mathematics and Information Sciences: With Complete Bibliography Springer (2013) ISBN 9401599645 • U. Hebisch, H.J. Weinert, Semirings: Algebraic Theory and Applications in Computer Science World Scientific (1998) ISBN 9814495697 Zbl 0934.16046 • Serge Lang Algebra (3rd rev. ed.) Graduate Texts in Mathematics 211 Springer (2002) Zbl 0984.00001 How to Cite This Entry: Semi-ring. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Semi-ring&oldid=37689 This article was adapted from an original article by L.A. Skornyakov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
2020-08-07 20:48:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8911756277084351, "perplexity": 276.80284745317095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737225.57/warc/CC-MAIN-20200807202502-20200807232502-00314.warc.gz"}
https://www.andlearning.org/average-deviation-formula/
Home » Math Formulas » Average Deviation Formula # Average Deviation Formula with Problem Solution & Solved Example The average deviation is generally used by statisticians to calculate the dispersion among measures in a given population. For example, if you are given a complete set of scores then it could be calculated by computing mean and check their specific distance between each score an don’t forget to consider the Mean either the score is above or below the mean. The other name for the same concept is average absolute deviation. The formula to calculate the average deviation in mathematics is given below – $\LARGE Average\:Deviation=\sum_{i=1}^{n}\left | x-\overline{x} \right |$ Where, x, represents the observation. $bar{x}$, represents the mean. n, represents the number of observation. If you wanted to get the complete understanding of the average deviation concept then you should know the term absolute deviation first. This is the distance between each value in the data set and their mean or median calculations. Absolute deviation usage is not so common with standard deviation but it is highly similar and used to measure the spread. There are particular situations when two different data sets with variable spreads produce the exactly same absolute deviation. However, this is possible standard deviation is also the same for different data sets. If you will consider real-life scenarios then absolute deviations are always more accurate and suitable to use as compared to the standard deviations. Not only they are accurate in real-life situations but simpler to calculate as well. You must be wondering how can you calculate the average deviation in mathematic. For this purpose, you just have to replace the mean value with the median values. That’s all for the day! Don’t forget to share the experience how did you calculate the average deviations in your case.
2020-01-19 16:04:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.843620777130127, "perplexity": 374.5709839998095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594662.6/warc/CC-MAIN-20200119151736-20200119175736-00304.warc.gz"}
https://www.sanfoundry.com/mathematics-questions-answers-properties-determinants/
# Mathematics Questions and Answers – Properties of Determinants « » This set of Mathematics Multiple Choice Questions & Answers (MCQs) focuses on “Properties of Determinants”. 1. Which of the following is not a property of determinant? a) The value of determinant changes if all of its rows and columns are interchanged b) The value of determinant changes if any two rows or columns are interchanged c) The value of determinant is zero if any two rows and columns are identical d) The value of determinant gets multiplied by k, if each element of row or column is multiplied by k Explanation: The value of determinant remains unchanged if all of its rows and columns are interchanged i.e. |A|=|A’|, where A is a square matrix and A’ is the transpose of the matrix A. 2. Find the determinant of the matrix A=$$\begin{bmatrix}1&x&y\\1&x&-y\\1&-x^2&y^2\end{bmatrix}$$. a) (x+1) b) -2xy(x+1) c) xy(x+1) d) 2xy(x+1) Explanation: Given that, A=$$\begin{bmatrix}1&x&y\\1&x&-y\\1&-x^2&y^2\end{bmatrix}$$ Δ=$$\begin{vmatrix}1&x&y\\1&x&-y\\1 &-x^2&y^2 \end{vmatrix}$$ Taking x common C2 and y common from C3, we get Δ=xy$$\begin{vmatrix}1&1&1\\1&1&-1\\1&-x&y\end{vmatrix}$$ Expanding along R1, we get Δ=xy{1(y-x)-1(y+1)+1(-x-1)} Δ=xy(y-x-y-1-x-1) Δ=xy(-2x-2)=-2xy(x+1). 3. Evaluate $$\begin{vmatrix}x^2&x^3&x^4\\x&y&z\\x^2&x^3&x^4 \end{vmatrix}$$. a) 0 b) 1 c) xyz d) x2 yz3 Explanation: Δ=$$\begin{vmatrix}x^2&x^3&x^4\\x&y&z\\x^2&x^3&x^4 \end{vmatrix}$$ If the elements of any two rows or columns are identical, then the value of determinant is zero. Here, the elements of row 1 and row 3 are identical. Hence, its determinant is 0. 4. Evaluate $$\begin{vmatrix}cos⁡θ&-cos⁡θ&1\\sin^2⁡θ&cos^2⁡θ&1\\sin⁡θ&-sin⁡θ&1\end{vmatrix}$$. a) sin⁡θ+cos2⁡θ b) -sin⁡θ-cos2⁡⁡θ c) -sin⁡θ+cos2⁡⁡θ d) sin⁡θ-cos2⁡⁡θ Explanation: Δ=$$\begin{vmatrix}cos⁡θ&-cos⁡θ&1\\sin^2⁡θ&cos^2⁡θ&1\\sin⁡θ&-sin⁡θ&1\end{vmatrix}$$ Applying C1→C1+C2 Δ=$$\begin{vmatrix}cos⁡θ-cos⁡θ&-cos⁡θ&1\\sin^2⁡θ+cos^2⁡θ&cos^2⁡θ&1\\sinθ-sin⁡θ&-sin⁡θ&1\end{vmatrix}$$=$$\begin{vmatrix}0&-cos⁡θ&1\\1&cos^2⁡θ&1\\0&-sin⁡θ&1\end{vmatrix}$$ Expanding along C1, we get 0-1(cos2⁡⁡θ+sinθ)=sin⁡θ-cos2⁡⁡θ. 5. Evaluate $$\begin{vmatrix}b-c&b&c\\a&c-a&c\\a&b&a-b\end{vmatrix}$$. a) 2abc b) 2a{(b-c)(c-a+b)} c) 2b{(a-c)(a+b+c)} d) 2c{(b-c)(a-c+b)} Explanation: Δ=$$\begin{vmatrix}b-c&b&c\\a&c-a&c\\a&b&a-b\end{vmatrix}$$ Applying C2→C2-C3 Δ=$$\begin{vmatrix}b-c&b-c&c\\a&-a&c\\a&-a&a-b\end{vmatrix}$$ Applying C1→C1-C2 Δ=$$\begin{vmatrix}0&b-c&c\\2a&-a&c\\2a&-a&a-b\end{vmatrix}$$ Applying R2→R2-R3 Δ=$$\begin{vmatrix}0&b-c&c\\0&0&c-a+b\\2a&-a&a-b\end{vmatrix}$$ Expanding along C1, we get Δ=2a{(b-c)(c-a+b)} 6. If A=$$\begin{bmatrix}1&3\\2&1\end{bmatrix}$$, then ________ a) |2A|=4|A| b) |2A|=2|A| c) |A|=2|A| d) |A|=|4A| Explanation: Given that, A=$$\begin{bmatrix}1&3\\2&1\end{bmatrix}$$ 2A=2$$\begin{bmatrix}1&3\\2&1\end{bmatrix}$$=$$\begin{bmatrix}2&6\\4&2\end{bmatrix}$$ |2A|=$$\begin{vmatrix}2&6\\4&2\end{vmatrix}$$=(4-24)=-20 4|A|=4$$\begin{vmatrix}1&3\\2&1\end{vmatrix}$$=4(1-6)=4(-5)=-20 ∴|2A|=4|A|. 7. Evaluate $$\begin{vmatrix}-a&b&c\\-2a+4x&2b-4y&2c+4z\\x&-y&z\end{vmatrix}$$. a) 0 b) abc c) 2abc d) -1 Explanation: Δ=$$\begin{vmatrix}-a&b&c\\-2a+4x&2b-4y&2c+4z\\x&-y&z\end{vmatrix}$$ Using the properties of determinants, the given determinant can be expressed as a sum of two determinants. Δ=$$\begin{vmatrix}-a&b&c\\-2a&2b&2c\\x&-y&z\end{vmatrix}$$+$$\begin{vmatrix}-a&b&c\\4x&-4y&4z\\x&-y&z\end{vmatrix}$$ Δ=2$$\begin{vmatrix}-a&b&c\\-a&b&c\\x&-y&z\end{vmatrix}$$+4$$\begin{vmatrix}-a&b&c\\x&-y&z\\x&-y&z\end{vmatrix}$$ Since two rows are similar in each of the determinants, the determinant is 0. 8. Find the determinant of A=$$\begin{bmatrix}c^2&cb&ca\\ab&a^2&-ac\\ab&bc&-b^2\end{bmatrix}$$ a) abc(a3+b3+c3+abc) b) abc(a3+b3+c3-abc) c) abc(a3+b3+c3+abc) d) (a3-b3+c3-abc) Explanation: Given that, A=$$\begin{bmatrix}c^2&cb&ca\\ab&a^2&-ac\\ab&bc&-b^2\end{bmatrix}$$ Taking c a, b common from R1, R2, R3 respectively, we get Δ=abc$$\begin{bmatrix}c&b&a\\b&a&-c\\a&c&-b\end{bmatrix}$$ Δ=abc{(c(-ab+c2)-b(-b2+ac)+a(bc-a2) Δ=abc(-abc+c3+b3-abc+abc-a3) Δ=abc(a3+b3+c3-abc). 9. Evaluate $$\begin{vmatrix}1+m&n&q\\m&1+n&q\\n&m&1+q\end{vmatrix}$$. a) -1(1+m+n+q) b) 1+m+n+q c) 1+2q d) 1+q Explanation: Given that, Δ=$$\begin{vmatrix}1+m&n&q\\m&1+n&q\\n&m&1+q\end{vmatrix}$$ Applying C1→C1+C2+C3 Δ=$$\begin{vmatrix}1+m+n+q&n&q\\1+m+n+q&1+n&q\\1+m+n+q&m&1+q\end{vmatrix}$$=(1+m+n+q)$$\begin{vmatrix}1&n&q\\1&1+n&q\\1&m&1+q\end{vmatrix}$$ Applying R1→R2-R1 Δ=(1+m+n+q)$$\begin{vmatrix}0&1&0\\1&1+n&q\\1&m&1+q\end{vmatrix}$$ Expanding along the first row, we get Δ=(1+m+n+q)(0-1(1+q-q)+0) Δ=-1(1+m+n+q). 10. Evaluate $$\begin{vmatrix}4&8&12\\6&12&18\\7&14&21\end{vmatrix}$$. a) 168 b) -1 c) -168 d) 0 Explanation: Δ=$$\begin{vmatrix}4&8&12\\6&12&18\\7&14&21\end{vmatrix}$$ Taking 4, 6 and 7 from R1, R2, R3 respectively Δ=4×6×7$$\begin{vmatrix}1&2&3\\1&2&3\\1&2&3\end{vmatrix}$$ Since the elements of all rows are identical, the determinant is zero. Sanfoundry Global Education & Learning Series – Mathematics – Class 12. To practice all areas of Mathematics, here is complete set of 1000+ Multiple Choice Questions and Answers.
2022-01-19 14:40:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7955376505851746, "perplexity": 2914.160598890417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301341.12/warc/CC-MAIN-20220119125003-20220119155003-00210.warc.gz"}
http://sk.sagepub.com/reference/hdbk_classroomassessment/n4.xml
# Previous ChapterChapter 4: Classroom Assessment in the Context of High-Stakes Testing Next Chapter Chapter 4: Classroom Assessment in the Context of High-Stakes Testing Classroom Assessment in the Context of High-Stakes Testing Classroom assessment in the context of high-stakes testing During the school year, students take a variety of assessments. First, students take day-to-day classroom assessments (CAs). Second, they may take interim or common assessments that are meant to gauge their progress in mastering the ...
2019-12-06 17:57:46
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8196440935134888, "perplexity": 7931.752129103318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540490743.16/warc/CC-MAIN-20191206173152-20191206201152-00319.warc.gz"}
https://www.usatranslate.com/site/119ee2-si-unit-of-length
mole [ 11th CGPM (1960), Resolution 6.] ΔνCs. The meter was first defined in 1791 as 1/10,000,000 of the distance from the equator to the North Pole. mole Online: March 1998   -   Last update: June 2019. The prefixes indicate whether the unit is a multiple or a fraction of the base ten. A set unit of prefixes have been established and are known as the SI prefixes or the metric prefixes (or units). Unit of meter The prefixes indicate whether the unit is a multiple or a fraction of the base ten . It allows the reduction of zeros of a very small number or a very larger number such as 0.000000001 meter and 7,500,000 Joules into 1 nanometer and 7.5 Megajoules respectively. The kilogram, symbol kg, is the SI unit of mass. These SI prefixes also have a set of symbols that precede unit symbol. kilogram The second, symbol s, is the SI unit of time. Unit of second SI units Edit Main article: International System of Units. This number is the fixed numerical value of the Avogadro constant, NA, when expressed in the unit mol-1 and is called the Avogadro number. Measures the volume of a liquid. SI units and symbols used in the physics guide. The amount of substance, symbol n, of a system is a measure of the number of specified elementary entities. Unit of The mole, symbol mol, is the SI unit of amount of substance. kg. It is defined by taking the fixed numerical value of the speed of light in vacuum c to be 299 792 458 when expressed in the unit m s-1, where the second is defined in terms of Δν Cs. G stands for gram,small unit of mass. S stands for second.small unit of time. These are commonly used as a convention. The International System of Units (SI) is system of units of measurements that is widely used all over the world. The U.S. usually makes measurements in inches and feet, but the SI system prefers meters as the unit for length. Definition (CGPM) length: metre: m: The metre is the length equal to 1 650763,73 wavelengths in vacuum of the radiation corresponding to the transition between the levels 2 p10 and 5 d5, of the krypton-86 atom. Since the SI Units are nearly globally though, the scientific and mathematical field will use these SI units in order to provide ease between the sharing data with one another because of a common set of measurements. It is defined by taking the fixed numerical value of the Planck constant h to be 6.626 070 15 × 10-34 when expressed in the unit J s, which is equal to kg m2 s-1, where the meter and the second are defined in terms of Common examples are: Horse racing and other equestrian activities keeps alive: Units of Measurement Wiki is a FANDOM Lifestyle Community. amount of The SI unit for length is the meter (abbreviated m); its definition has also changed over time to become more precise. An elementary entity may be an atom, a molecule, an ion, an electron, any other particle or specified group of particles. Unit of mass  ampere Take your favorite fandoms with you and never miss a beat. https://units.fandom.com/wiki/Unit_of_length?oldid=6091. It is defined by taking the fixed numerical value of the speed of light in vacuum c to be 299 792 458 when expressed in the unit m s-1, where the second is defined in terms of These SI prefixes also have a set of symbols that precede unit symbol. Prefix is a kind of word used before the name of an SI unit to get a bigger value or a smaller value of the unit. This number is the fixed numerical value of the Avogadro constant, NA, when expressed in the unit mol-1 and is called the Avogadro number. The symbol of meter is m. The SI unit of measuring mass is Kilogram “kg” and the SI unit of measuring time is ‘second’ (s). The meter, symbol m, is the SI unit of length. temperature temperature Become familiar with the seven defining constants of the SI. Unit of length The standard instruments used nowadays to measure a length are – ruler, meter scale, measuring tape, vernier caliper, and screw gauge. Legal. A set unit of prefixes have been established and are known as the SI prefixes or the metric prefixes (or units). The LibreTexts libraries are Powered by MindTouch® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. It is defined by taking the fixed numerical value of the Planck constant h to be 6.626 070 15 × 10-34 when expressed in the unit J s, which is equal to kg m2 s-1, where the meter and the second are defined in terms of The mole, symbol mol, is the SI unit of amount of substance. intensity We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. It is defined by taking the fixed numerical value of the Boltzmann constant k to be 1.380 649 x 10-23 when expressed in the unit J K-1, which is equal to kg m2 s-2 K-1, where the kilogram, meter and second are defined in terms of h, c and ΔνCs. For more information contact us at [email protected] or check out our status page at https://status.libretexts.org. Standard International (SI) unit for measurement of length is ‘meter’. The kelvin, symbol K, is the SI unit of thermodynamic temperature. It is defined by taking the fixed numerical value of the elementary charge e to be 1.602 176 634 x 10-19 when expressed in the unit C, which is equal to A s, where the second is defined in terms of ΔνCs. Linear position meter SI units are internationally accepted. However countries such as the United States, Liberia, and Berma have not officially adopted the International System of Units as their primary system of measurements. SI base unit for length. It allows the reduction of zeros of a very small number or a very larger number such as. Unit. Metric units use a prefix, used for conversion from or to an SI unit. THE SEVEN SI BASE UNITS. SI base unit for time. Unit of mass  SI. Metre (m) is the unit of length. electric current International system of measurement used by scientist around the world. This number is the fixed numerical value of the Avogadro constant, NA, when expressed in the unit mol-1 and is called the Avogadro number. The following seven SI base unit definitions are based on the BIPM SI Brochure (9th Edition). For Fahrenheit to Celsius: $F= \dfrac{9}{5} \times C+32$, For Celsius to Fahrenheit: $C= \dfrac{5}{9} \times F - 32$. substance Have questions or comments? The candela, symbol cd, is the SI unit of luminous intensity in a given direction. It is defined by taking the fixed numerical value of the cesium frequency ΔνCs, the unperturbed ground-state hyperfine transition frequency of the cesium 133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s-1. kelvin ΔνCs. The Kelvin scale does not use the degree symbol (°) and only K, which can only be positive since it is an absolute scale, Mass is usually measured by a sensitive balance machine. i.e., multiples and submultiples of the units can be easily expressed as powers of 10. Liter. Free LibreFest conference on November 4-6! Name. [ "article:topic", "fundamental", "units", "showtoc:no" ], The prefixes indicate whether the unit is a multiple or a fraction of the, . Prefixes used with SI units. Common units of length in the International System of Units (SI) are: meter and its multiples, such as "centimeter" or "kilometer" Non-SI units Edit. The mole, symbol mol, is the SI unit of amount of substance. These are unit of CGS System. Unit of It is defined by taking the fixed numerical value of the elementary charge e to be 1.602 176 634 x 10-19 when expressed in the unit C, which is equal to A s, where the second is defined in terms of ΔνCs. kilogram   The kilogram, symbol kg, is the SI unit of mass. Go to SI Units Background or SI base units Measuring the wingspan of an insect b. measuring the height of a building. Definitions of the SI base units When we measure the distance between any two points in terms of width, thickness, depth, and height, we measure length. Unit for mass; stands for kilogram. It is defined by taking the fixed numerical value of the cesium frequency ΔνCs, the unperturbed ground-state hyperfine transition frequency of the cesium 133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s-1. The ampere, symbol A, is the SI unit of electric current. thermodynamic Sym. Metric system. An elementary entity may be an atom, a molecule, an ion, an electron, any other particle or specified group of particles. measuring the distance between two cities . Meter (m) length unit of measurement: Distance traveled by light in a vacuum in 1/299,792,458 seconds. candela m, M. kilogram. The amount of substance, symbol n, of a system is a measure of the number of specified elementary entities. The meter, symbol m, is the SI unit of length. The main units in modern use are U.S. customary units in the United States and the Metric system elsewhere. intensity Derived Units are created by mathematical relationships ​between other Base Units and are expressed in a combination of fundamental and base quantities. One mole contains exactly 6.022 140 76 x 1023 elementary entities. c and ΔνCs. The SI unit of measure for length-the meter-would be most appropriate when a. substance The candela, symbol cd, is the SI unit of luminous intensity in a given direction. nanometer and 7.5 Megajoules respectively. luminous Unit of length System of measurement based on 10. Kilogram. Register now! ampere SI unit system is a metric system. SI units commonly uses derived units for Volume such as meters cubed to liters. An elementary entity may be an atom, a molecule, an ion, an electron, any other particle or specified group of particles. Definitions of the SI units. Unit of Unit of One mole contains exactly 6.022 140 76 x 1023 elementary entities. British Imperial units are still used for some purposes in the United Kingdom and some other countries. Unit of length : meter: The meter, symbol m, is the SI unit of length. The definitions of the fundamental SI units are given below: Metre definition . Real Life Superheroes 2020, Parseghian Star Wars, How To Check Light Level In Minecraft Ps4, Where Is The Disco Effect On Tiktok, Nike Meaning In Hebrew, Alan Wilder 2019, Nbc App A Video Error Has Occurred, Larry Wilson Car Collection, Instax Wide 300 Manual, Nascar For Sale Craigslist, Sarah Cooper Net Worth, Cole Deboer Age, Mullein Tea For Copd, Doyle Devereux Age, Offroad Outlaws Mod, Cartons Of Newports For 19 Dollars A Carton, Wordscapes Brilliance Score, Jason Hawes Daughters, Cpsia Certified Vinyl, Juda Tv Series Cast, Earthroamer Lti Price, Craig Smith Wife, What Does An Inverter Board Do In A Refrigerator, Georg Listing Wife, Oatmeal Looking Poop, Halo Md Maps, What's The Difference Between Peanut Butter And Jam Joke Meaning, Truman Hanks' Wife, I Am Proud Of Myself Essay, Nrl Magic Round Shop, What To Say When Someone Calls You Babe, Ami Cat Litter Box, Distilled Water Safeway, Oración De La Santa Cruz Para Embarazadas, Can Baby Sparrows Eat Bananas, Truck Headache Rack Diy, Doom Demon Language, Terry Bradshaw Salary, Carl Weathers Height, 医龍 動画 Pandora 1話, Split Screen Xbox One Games, Crimped Cat Whiskers, Reema Khan Son, Fake Icq Number, Temporary Permit Dmv, Richard Halsey Best Obituary, Randolph Carter Diversity, Quiz Logo Game Answers Level 8, Dnd Train Encounters, Carlos Ponce Harry Potter, Kj Henderson Motorcycle For Sale, Shanghai Movie Ending Explained, Scag Liberty Z Oil Filter, The House On Mango Street Monologues, Kid Rock 2020 Cruise, Hmas Adroit Piracy, Evening Primrose Flower Name In Tamil, Juanita Bynum Mother Died, Mary Magdalene Lyrics Meaning, Metro: Last Light Bandits Bug, The Arbitration Movie Ending, Isuzu Engine 6bd1t, Mcgraw Hill Connect Exam Proctoring, Kimura Ryohei Married, Noah Wyle Net Worth, Afk Summoning Rs3, Oatmeal Looking Poop, Jalen Hurts Bench Press, Dod Firefighter Certification Lookup, Growing Gladiolus In Phoenix, Az, Nick Buonfiglio Wikipedia, Dr Quinn Medicine Woman, Rosey Grier Net Worth, Wild Turkey 101 Rye Discontinued, Hidden Figures Google Drive, Weaver C4 Scope, Anton Dostler Last Words, Oxalic Acid Pka, Sri Suryanarayana Meluko Mp3 Naa Songs, Blue Velvet Shrimp, Gregory Boyington Josephine Wilson Moseman, How Did Nigel Green Die, Falkirk Tv Commentators, Laffy Taffy Slogan, Prime Time St George Island, Sarah Iannarone Wikipedia, The Secret Treasure Hunt Virginia, Paul Hicks Sculptor, Ben Shapiro Siblings, Kryptonite Key Blank, Low Ceiling 2 Post Lift, Cosy Dens Watch Online English Subtitles, Terraria 3ds Cheats, Protein Absorbance At 340 Nm, Hermes And Psyche, Middle Finger Bohnes Meaning, How Much Does Dermafrac Cost, Wonder Woman 1984 Stream Online, Tacoma World Lift Kit,
2021-01-20 04:30:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.630269467830658, "perplexity": 2645.781158957772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519883.54/warc/CC-MAIN-20210120023125-20210120053125-00660.warc.gz"}
https://www.futurelearn.com/info/courses/introduction-to-javascript-1/0/steps/341074
# if and if else statements In this activity, we will explore how code can make decisions for us. We can make decisions in our code using if and if else statements. It is very much like this template: if *some condition is true*, then *a certain action will happen*, else *another action will happen* For example, if it is raining then, I will take my umbrella, else I will leave my umbrella at home. It is not that much different in code: let rain = true; if(rain){ console.log("** Taking my umbrella when I need to go outside **"); } else { console.log("** I can leave my umbrella at home **"); } In this case, the value of rain is true. And therefore, it will log to the console: ** Taking my umbrella when I need to go outside ** But let’s first take a step back and look at the syntax. We start with the word “if.” After this, we get something within parentheses. Whatever is between these parentheses will be translated to a Boolean. If the value of this Boolean is true, it will execute the block of code associated with if. You can recognize this block by the curly braces. The next block is optional; it is an else block. It starts with the word “else” and is only executed in case of the Boolean having the value false. If there is no else block and the condition evaluates to false, the program will just skip ahead to the code underneath the if. Only one of these two blocks will be executed; the if block when the expression is true, and the else block when the expression is false: if(expression) { // code associated with the if block // will only be executed if the expression is true } else { // code associated with the else block // we don't need an else block, it is optional // this code will only be executed if the expression is false } Here is another example. If the age is below 18, log to the console that access is denied, otherwise log to the console that the person is allowed to come in: if(age < 18) { console.log("We're very sorry, but you can't get in under 18"); } else { console.log("Welcome!"); } There is a common coding mistake related to if statements. I have made it in the following code snippet. Can you see what this code does? let hobby = "dancing"; if(hobby = "coding"){ console.log("** I love coding too! **"); } else { console.log("** Can you teach me that? **"); } It will log the following: ** I love coding too! ** That might surprise you. The problem here is the single equal sign in the if statement. Instead of evaluating the condition, it is assigning coding to hobby. And then it is converting coding to a Boolean, and since it is not an empty string, it will become true, so the if block will be executed. So, always remember to use the double equal sign in this case. Let’s test our knowledge with a practice exercise. ## Practice exercise 2.18 1. Create a variable with a Boolean value. 2. Output the value of the variable to the console. 3. Check whether the variable is true and if so, output a message to the console, using the following syntax: if(myVariable){//action} 1. Add another if statement with an ! in front of the variable to check whether the condition is not true, and create a message that will be printed to the console in that instance. You should have two if statements, one with an ! and the other without. You could also use an if and an else statement instead—experiment! 2. Change the variable to the opposite to see how the result changes.
2023-03-30 10:30:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27649760246276855, "perplexity": 934.6295803708714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949181.44/warc/CC-MAIN-20230330101355-20230330131355-00141.warc.gz"}
https://rup.silverchair.com/jcb/article/175/5/681/44625/The-depletion-attraction-an-underappreciated-force
Cellular structures are shaped by hydrogen and ionic bonds, plus van der Waals and hydrophobic forces. In cells crowded with macromolecules, a little-known and distinct force—the “depletion attraction”—also acts. We review evidence that this force assists in the assembly of a wide range of cellular structures, ranging from the cytoskeleton to chromatin loops and whole chromosomes. ### The depletion attraction As biologists, we are all aware that ionic and hydrogen bonds, plus van der Waals and hydrophobic forces, act within and between macromolecules to shape the final structure. However, a distinct interaction, known as the “depletion attraction,” may also play a substantial role (Asakura and Oosawa, 1958; Yodh et al., 2001). This force is only seen in crowded environments like those found in cells, where 20–30% of the volume is occupied by soluble proteins and other macromolecules (Ellis, 2001; Minton, 2001, 2006). Crowding increases effective concentrations, which has important consequences (Box 1), but it also creates a force apparently out of nothing. We argue that this force drives the assembly of many large structures in cells. Box 1. AO and related theories The physics of an aqueous solution crowded with ions and macromolecules of different sizes is complicated, and various theories provide different perspectives on the underlying problems (Lebowitz et al., 1965; Ogston, 1970; Cotter, 1974; Mao et al., 1995; Minton, 1998; Parsegian et al., 2000; Kinjo and Takada, 2002; Spitzer and Poolman, 2005). The AO theory (Asakura and Oosawa, 1958) is one approximation, that shows that ${\Delta}F_{\mathrm{gain}}{=}{\sim}[1{+}3/2(D/d)]nk_{B}T,$ where ΔFgain is the free energy gained when the two large spheres in Fig. 1 come into contact, D and d are the diameters of the large and small spheres, n the volume occupied by the small spheres, kB the Boltzmann constant, and T is the absolute temperature. This equation applies generally because particles of all sizes possess a hard core; it also applies to values of n up to ∼0.3, after which it becomes less accurate (Gotzelmann et al., 1998). In cells, n can be determined in various ways (i.e., by cell fractionation, electron microscopy, or gel filtration), and is (luckily) between 0.2–0.3 (Busch and Daskal, 1977; Zimmerman and Trach, 1991; Bohrmann et al., 1993). D thus determines the scale of the attraction (as d, n, and T are usually constant). Results obtained using “molecular tweezers” show the equation to be so accurate that it is being used to position particles within manmade nanostructures (Yodh et al., 2001). We now consider how AO theory differs from two related theories. First, both the depletion attraction and hydrophobic effect (Chandler, 2002) tend to minimize the surface exposed to the macromolecular solute or water. They are also superficially similar in that one is purely, and the other mainly, driven by entropic effects. However, an increase in volume available to a macromolecular solute drives the depletion attraction, whereas an increase in hydrogen-bonding states available to water underlies the hydrophobic effect (Chandler, 2002). The second theory is known as “macromolecular crowding” in the biological literature. “Crowding” increases thermodynamic activities, and has been successfully used to compute effects on chemical reactions and equilibria (Ellis, 2001; Minton, 2001, 2006). Macromolecular crowding describes the same phenomenon as AO theory, but is based on scaled particle theory and so cannot be applied to the (concave) structures we consider (i.e., two touching large spheres; Minton, 1998). But if the large spheres are allowed to fuse to give one larger (convex) sphere, it then gives roughly equivalent results (unpublished data). Therefore, the hydrophobic effect differs in mechanism, and macromolecular crowding differs in technical treatment. Consider Fig. 1 A, where many small and a few large spheres are contained in a box, representing the many small, crowding macromolecules and the fewer, larger complexes in a cell. In physicists' terminology, both types of sphere are “hard” and “noninteracting,” so that none of the forces familiar to biologists act between them. The small spheres bombard the large ones from all sides (arrows). When two large spheres approach one another, the small ones are excluded from the volume between the two. Therefore, the small ones exert an unopposed force equivalent to their osmotic pressure on opposite sides of the two large ones to keep them together. This osmotic effect depends on the volume that is inaccessible to the small spheres; if the small spheres could gain access to this (depleted) volume, they would force the two large ones apart. Fig. 1 B gives an alternative view. The centers of mass of the small spheres can access the yellow volume, but not the gray volumes, around each large sphere or abutting the wall. When one large sphere approaches another, these excluded volumes overlap; as a result, the small spheres can now access a greater volume. The resulting increase in entropy of the many small spheres generates a depletion attraction between the large spheres. At first glance, this seems like an oxymoron; entropy usually destroys the order that an attraction creates. But if we consider the whole system (not just the large spheres), the excluded volume is minimized and thus entropy is maximized (because there are so many small spheres). The Asakura–Oosawa theory (“AO theory”; Asakura and Oosawa, 1958), allows us to estimate the scale of this depletion attraction (Box 1). In cells, the diameters of the large spheres are the major determinants, as the other variables in the equation in Box 1 are constant; larger spheres tend to cluster more than smaller ones (Fig. 2 A, compare i with ii). The attraction can easily be recognized in vitro; adding an inert crowding agent like a dextran or polyethylene glycol (PEG) promotes aggregation (by increasing the volume fraction, n, of the small spheres). However, the force has a maximum range of only ∼5 nm, which is the diameter of a typical crowding protein; it will be larger if the two large objects fit snugly together (or are “soft” enough to fuse into one with conservation of volume) and smaller if surface irregularities limit close contact (Marenduzzo et al., 2006). In what follows, free energy is expressed in kBT units; 1 kBT is ∼0.7 kcal/mol, which is roughly comparable to the energy associated with one hydrogen bond in a protein (Pace et al., 1996). Therefore, attractions of only a few kBT are within the range that biologists know can stabilize a structure. ### A simple case: actin dimerization and bundling It is widely believed that ATP hydrolysis provides most of the energy that drives actin dimerization. However, calculation shows the depletion attraction makes some contribution, ∼0.5 kBT (Fig. 2 A, i; Marenduzzo et al., 2006) compared with the experimentally determined free energy change of 1–2 kBT (Sept and McCammon, 2001; Dickinson et al., 2004). The attraction is nonspecific in the sense that it can bring two large spheres together, but it cannot orient them. Therefore, the addition of a third sphere would create the structure shown in Fig. 2 A (i, inset), and not a linear fiber. Long (F-actin) fibers will only form if specific forces augment the nonspecific attraction to orient monomers appropriately; then the overlap volume between two fibers (Fig. 2 A, iii) becomes large enough (i.e., many tens of kBT per micrometer) that adding a crowding agent causes fiber “bundling” (Hosek and Tang, 2004). Similar aggregation is seen with other spheres (e.g., bovine pancreatic trypsin inhibitor; Snoussi and Halle, 2005) and rods (e.g., tobacco mosaic virus; Adams and Fraden, 1998; Adams et al., 1998). ### Secondary structures, tertiary structures, and helices Within a protein, the scale of the attraction is small relative to hydrogen bonding. For example, forming a linear tube into a helix generates an overlap volume (Fig. 2 C, iv) so the attraction can stabilize a helix (Maritan et al., 2000; Snir and Kamien, 2005). But in the case of an α helix (with four hydrogen bonds per helical turn), it contributes only ∼0.07 kBT per turn (calculated using a helix with a 0.25-nm radius and 0.54-nm pitch, and assuming d = 5 nm and n = 0.2; unpublished data). The attraction created by folding a tube into a β-sheet (to produce two cylinders lying side-by-side, as in Fig. 2 A, iii), where each amino acid makes two hydrogen bonds and strands are 0.35 nm apart, is similarly small (i.e., <0.02 kBT per amino acid; not depicted). This is consistent with experimental observations and calculations showing that crowding agents increase the rates of refolding of lysozyme and the β-sheet WW domain by two- to fivefold (van den Berg et al., 2000; Cheung et al., 2005). The attraction also contributes ∼0.8 kBT per 14-nm turn in a coiled coil (calculated using two 0.5-nm cylinders; unpublished data), and <1 kBT per 10 bp of DNA (not depicted). Again, this is consistent with crowding agents slightly increasing the melting temperature of DNA (Woolley and Wills, 1985; Goobes et al., 2003). ### Abnormal interactions: sickle cell hemoglobin and amyloid fibrils In larger structures, the attraction becomes more prominant. For example, sickle cell hemoglobin results from the substitution of valine for glutamic acid at the β6 site of hemoglobin; this drives end-to-end polymerization of deoxygenated hemoglobin into fibers, followed by side-by-side “zippering” into bundles. As a result, red blood cells become more rigid and so pass less rapidly through capillaries, reducing oxygen exchange and causing sickle cell anemia. As with actin, the attraction contributes slightly to dimerization (Fig. 1 C, i), but contributes many tens of kBT per micrometer of fiber length to bundling (Fig. 1 C, iii; Jones et al., 2003). It may similarly drive aggregation in many other pathologies (e.g., into amyloid fibrils in Alzheimer's, type 2 diabetes, and the transmissible spongiform encephalopathies; Hatters et al., 2002; Ellis and Minton, 2006). As tissue hydration falls slightly on ageing (Barber et al., 1995), this may increase the volume fraction, n, and promote aggregation, which is consistent with the increased incidence seen with age. ### Large nuclear bodies and membrane-bound structures Nucleoli and promyelocytic leukemia bodies disassemble when nuclei from human hematopoietic cells are immersed in a low concentration of monovalent cations; both reassemble (and nucleolar transcription recovers) when a crowding agent like PEG is added (Rosania and Swanson, 1995; Hancock, 2004). This points to a role for crowding, perhaps acting through cooperative effects and the depletion attraction (Fig. 1 C, i). If so, the attraction could also shape other large nuclear structures, such as splicing speckles and Cajal bodies (Spector, 2003). PEG is also used routinely to induce cell fusion during hybridoma production, and the attraction drives the first step, which is cell aggregation (Kuhl et al., 1996; Chu et al., 2005); it also induces thylakoid membranes to stack (Kim et al., 2005). Thus, thermodynamics could give direction to vesicular traffic—toward clustering (through the attraction) and membrane fusion (by minimizing surface curvature). ### Genome looping There are entropic costs associated with forming DNA or chromatin into a loop, but these can be overcome if large enough complexes are bound to the template (Fig. 2 B, i; Marenduzzo et al., 2006). Consider two transcription complexes; each might contain a multisubunit polymerase, the transcript and its neutralizing proteins, plus associated ribosomes (in bacteria) or spliceosome (in eukaryotes). When they come into contact, the resulting attraction will keep them together, thus looping the intervening DNA. A cost/benefit analysis of the energies involved enabled correct prediction of various types of organization. First, looping should depend on ongoing transcription (as only then is the complex associated with the template); it does. For example, loops are present in all transcriptionally active cells examined (from bacteria to man), but not in inactive ones like chicken erythrocytes and human sperm (Jackson et al., 1984; Cook, 2002). And as chicken erythroblasts mature into erythrocytes, transcription falls progressively as loops are lost, until no activity or loops remain (Cook and Brazell, 1976). Recent evidence also shows that loops detected using “chromosome conformation capture” are tied through active polymerizing complexes (Cook, 2003). Thus, the Hbb-b1 (β-globin) gene lies tens of kilobase pairs away, on chromosome 7, from its locus control region, and ∼25 Mbp away from a gene (Eraf) encoding the α-globin–stabilizing protein; it contacts the locus control region and Eraf in erythroid nuclei (where all three are transcribed), but not in brain nuclei (where all are inactive; Osborne et al., 2004). Second, active polymerases cluster, as predicted. Thus, in higher eukaryotes, ∼8 active polymerase II units cluster into nucleoplasmic “factories” (Cook, 1999; Faro-Trindade and Cook, 2006), and bacterial ribosomal DNA operons aggregate similarly (Cabrera and Jin, 2003). Active DNA-polymerizing complexes in both pro- and eukaryotes also cluster into analogous replication factories (Cook, 1999), and the bacterial ones separate (Bates and Kleckner, 2005) just when the looping cost exceeds the attraction. In all cases, the scale of the attraction relative to the looping cost correlates with the clustering seen. ### Conclusions We have argued that an osmotic depletion attraction drives the organization of many cellular structures. Unlike other noncovalent interactions (i.e., ionic and hydrogen bonds, van der Waals and hydrophobic forces), this one only becomes significant in crowded environments like those in cells. It is nonspecific in the sense that it can bring spheres together without orienting them. It also depends on size and shape; the larger the overlap volume, the larger the attraction. Just as the entropy of the solvent (water) mainly underlies the hydrophobic effect, that of the solute (the crowding macromolecules) creates the attraction. These generalizations come with caveats because the underlying physics is complicated, and AO theory involves several simplifications (e.g., it becomes less accurate when n is >0.3, and it takes no account of kinetics). Nevertheless, the concept of a hydrophobic force is useful to biologists despite the underlying complexity, and we believe the concept discussed in this work will be similarly useful, especially because its scale can be calculated so simply. Many questions remain. On the theoretical side, what happens when n increases above 0.3, and the AO equation becomes less precise and the theory much more complicated (Gotzelmann et al., 1998)? What are the relative advantages and disadvantages of the different theories of crowded solutions (Box 1)? On the experimental side, what exactly is the volume fraction within a cell, and how closely can typical proteins approach each other? Could the attraction help nucleosomes strung along DNA pack into the chromatin fiber (Fig. 2 B, ii). Can clumps of heterochromatin be treated as spheres that are subject to the attraction? If so, the attraction could underpin the condensation of an (interphase) string of such clumps into the mitotic chromosome (Fig. 2 B, ii; Manders et al., 1999). Could it also underpin the pairing of chromosomes seen during meiosis and polytenization, where a string bearing a unique array of factories and heterochromatic clumps aligns in perfect register with a homologue, but not with others carrying different arrays (Fig. 2 B, iii; Cook, 1997)? Could it drive end-to-end pairing of chromosomes? For example, diploid human lymphocytes contain 10 chromosomes encoding nucleolar organizing regions (NORs), but only ∼6 NORs are transcribed, and only these aggregate to form nucleoli (Wachtler et al., 1986). Does the attraction act through the thousands of active polymerizing complexes associated with each active NOR to drive nucleolar assembly (Fig. 2 B, iv)? Could it similarly drive the clustering of heterochromatic centromeres into chromocenters (Fig. 2 B, iv)? We have also seen how the attraction contributes to protein folding, but what of the special case where a protein is so confined that the overlap volume resulting from contact with the surrounding wall becomes significant (Fig. 2 C)? Do pores, and the barrels formed by chaperonins, proteasomes, and exosomes (Lorentzen and Conti, 2006), all exploit the attraction to promote ingress of their target proteins (Martin, 2004; Cheung et al., 2005; see Ellis, 2006, for a review of how crowding affects protein folding in confined spaces)? Clearly, we need to extend the experimental studies on the simple model systems reviewed in this study to complex subcellular assemblies, much as Hancock (2004) describes. As soon as cellular structures become larger than ∼75 nm, the overlap volume can generate an attraction of ∼5 kBT; this is probably sufficient to promote irreversible aggregation when cooperative effects are included (Fig. 2 A, i, inset; Marenduzzo et al., 2006). This begs the obvious question: why don't all large structures in the cell end up in one aggregate (just as overexpressed bacterial proteins form inclusion bodies)? We suggest they will tend to do so unless energy is spent to stop aggregation and/or inert mechanisms prevent it. For example, anchorage to a larger structure (e.g., the cytoskeleton), surface irregularities (Jones et al., 2003), or charge interactions could all prevent close contact, and thus reduce the attraction. All seem to operate; for example, >70% of proteins in Escherichia coli and Bacillus subtilis (and >90% of the most abundant ones) are anionic at cellular pH, and thus would be expected to repel each other (Eymann et al., 2004; Weiller et al., 2004). We also note that structures like the cytoskeleton and membrane-bound vesicles are not rigid and permanent; rather, they continually turn over, to reduce their effective size and ensure that a large structure does not persist long enough to aggregate (Misteli, 2001; Altan-Bonnet et al., 2004). Nature, although constrained by the second law of thermodynamics, finds ways around it. We thank the Biotechnology and Biological Sciences Research Council, the Engineering and Physical Sciences Research Council, Cancer Research UK, the Medical Research Council, and the Wellcome Trust for support. K. Finan is supported by the E.P. Abraham Trust, a Clarendon Fund award from the University of Oxford, and an Overseas Research Student award from the UK government. 1998 . Phase behavior of mixtures of rods (tobacco mosaic virus) and spheres (polyethylene oxide, bovine serum albumin). Biophys. J. 74 : 669 –677. Adams, M., Z. Dogic, S.L. Keller, and S. Fraden. 1998 . Entropically driven microphase transitions in mixtures of colloidal rods and spheres. Nature. 393 : 349 –352. Altan-Bonnet, N., R. Sougrat, and J. Lippincott-Schwartz. 2004 . Molecular basis for Golgi maintenance and biogenesis. Curr. Opin. Cell Biol. 16 : 364 –372. Asakura, S., and F. Oosawa. 1958 . Interactions between particles suspended in solutions of macromolecules. J. Polym. Sci. [B]. 33 : 183 –192. Barber, B.J., R.A. Babbitt, S. Parameswaran, and S. Dutta. 1995 . Age- related changes in rat interstitial matrix hydration and serum proteins. J. Gerontol. A. Biol. Sci. Med. Sci. 50 : B282 –B287. Bates, D., and N. Kleckner. 2005 . Chromosome and replisome dynamics in E. coli: loss of sister cohesion triggers global chromosome movement and mediates chromosome segregation. Cell. 121 : 899 –911. Bohrmann, B., M. Haider, and E. Kellenberger. 1993 . Concentration evaluation of chromatin in unstained resin-embedded sections by means of low-dose ratio-contrast imaging in STEM. Ultramicroscopy. 49 : 235 –251. Busch, H., and Y. Daskal. 1977 . Methods for isolation of nuclei and nucleoli. Methods Cell Biol. 16 : 1 –43. Cabrera, J.E., and D.J. Jin. 2003 . The distribution of RNA polymerase in Escherichia coli is dynamic and sensitive to environmental cues. Mol. Microbiol. 50 : 1493 –1505. Chandler, D. 2002 . Hydrophobicity: two faces of water. Nature. 417 : 491 . Cheung, M.S., D. Klimov, and D. Thirumalai. 2005 . Molecular crowding enhances native state stability and refolding rates of globular proteins. Proc. Natl. Acad. Sci. USA. 102 : 4753 –4758. Chu, Y.S., S. Dufour, J.P. Thiery, E. Perez, and F. Pincet. 2005 . Johnson-Kendall-Roberts theory applied to living cells. Phys. Rev. Lett. 94 : 028102 . Cook, P.R. 1997 . The transcriptional basis of chromosome pairing. J. Cell Sci. 110 : 1033 –1040. Cook, P.R. 1999 . The organization of replication and transcription. Science. 284 : 1790 –1795. Cook, P.R. 2002 . Predicting three-dimensional genome structure from transcriptional activity. Nat. Genet. 32 : 347 –352. Cook, P.R. 2003 . Nongenic transcription, gene regulation and action at a distance. J. Cell Sci. 116 : 4483 –4491. Cook, P.R., and I.A. Brazell. 1976 . Conformational constraints in nuclear DNA. J. Cell Sci. 22 : 287 –302. Cotter, M.A. 1974 . Hard-rod fluid: scaled particle theory revisited. Phys. Rev. A. 10 : 625 –636. Dickinson, R.B., L. Caro, and D.L. Purich. 2004 . Force generation by cytoskeletal end-tracking proteins. Biophys. J. 87 : 2838 –2854. Ellis, R.J. 2001 . Macromolecular crowding: obvious but underappreciated. Trends Biochem. Sci. 26 : 597 –604. Ellis, R.J. 2006 . Protein folding: inside the cage. Nature. 442 : 360 –362. Ellis, R.J., and A.P. Minton. 2006 . Protein aggregation in crowded environments. Biol. Chem. 387 : 485 –497. Eymann, C., A. Dreisbach, D. Albrecht, J. Bernhardt, D. Becher, S. Gentner, T. Tam le, K. Buttner, G. Buurman, C. Scharf, S. Venz, U. Volker, and M. Hecker. 2004 . A comprehensive proteome map of growing Bacillus subtilis cells. Proteomics. 4 : 2849 –2876. Faro-Trindade, I., and P.R. Cook. 2006 . A conserved organization of transcription during embryonic stem cell differentiation and in cells with high C value. Mol. Biol. Cell. 17 : 2910 –2920. Goobes, R., N. Kahana, O. Cohen, and A. Minsky. 2003 . Metabolic buffering exerted by macromolecular crowding on DNA-DNA interactions: origin and physiological significance. Biochemistry. 42 : 2431 –2440. Gotzelmann, B., R. Evans, and S. Dietrich. 1998 . Depletion forces in fluids. Phys. Rev. E. 57 : 6785 –6800. Hancock, R. 2004 . A role for macromolecular crowding effects in the assembly and function of compartments in the nucleus. J. Struct. Biol. 146 : 281 –290. Hatters, D.M., A.P. Minton, and G.J. Howlett. 2002 . Macromolecular crowding accelerates amyloid formation by human apolipoprotein C–II. J. Biol. Chem. 277 : 7824 –7830. Hosek, M., and J.X. Tang. 2004 . Polymer-induced bundling of F actin and the depletion force. Phys. Rev. E. Nonlin. Soft Matter Phys. 69 : 051907 . Jackson, D.A., S.J. McCready, and P.R. Cook. 1984 . Replication and transcription depend on attachment of DNA to the nuclear cage. J. Cell Sci. Suppl. 1 : 59 –79. Jones, C.W., J.C. Wang, F.A. Ferrone, R.W. Briehl, and M.S. Turner. 2003 . Interactions between sickle hemoglobin fibers. 123 : 221 –236. Kim, E.H., W.S. Chow, P. Horton, and J.M. Anderson. 2005 . Entropy-assisted stacking of thylakoid membranes. Biochim. Biophys. Acta. 1708 : 187 –195. Kinjo, A.R., and S. Takada. 2002 . Effects of macromolecular crowding on protein folding and aggregation studied by density functional theory: statics. Phys. Rev. E. Nonlin. Soft Matter Phys. 66 : 031911 . Kuhl, T., Y. Guo, J. Aldefer, A. Berman, D. Leckband, J. Israelachvili, and S. Hui. 1996 . Direct measurement of PEG induced depletion attraction between bilayers. Langmuir. 12 : 3003 –3014. Lebowitz, J.L., E. Helfand, and E. Praestgaard. 1965 . Scaled particle theory of fluid mixtures. J. Chem. Phys. 42 : 774 –779. Lorentzen, E., and E. Conti. 2006 . The exosome and the proteasome: nano-compartments for degradation. Cell. 125 : 651 –654. Manders, E.M.M., H. Kimura, and P.R. Cook. 1999 . Direct imaging of DNA in living cells reveals the dynamics of chromosome formation. J. Cell Biol. 144 : 813 –822. Mao, Y., M.E. Cates, and N.H.W. Lekkerkerker. 1995 . Depletion stabilization by semidilute rods. Phys. Rev. Lett. 75 : 4548 –4551. Marenduzzo, D., C. Micheletti, and P.R. Cook. 2006 . Entropy-driven genome organization. Biophys. J. 90 : 3712 –3721. Maritan, A., C. Micheletti, A. Trovato, and J.R. Banavar. 2000 . Optimal shapes of compact strings. Nature. 406 : 287 –290. Martin, J. 2004 . Chaperonin function – effects of crowding and confinement. J. Mol. Recognit. 17 : 465 –472. Minton, A.P. 1998 . Molecular crowding: analysis of effects of high concentrations of inert cosolutes on biochemical equilibria and rates in terms of volume exclusion. Methods Enzymol. 295 : 127 –149. Minton, A.P. 2001 . The influence of macromolecular crowding and macromolecular confinement on biochemical reactions in physiological media. J. Biol. Chem. 256 : 10577 –10580. Minton, A.P. 2006 . Macromolecular crowding. Curr. Biol. 16 : R269 –R271. Misteli, T. 2001 . The concept of self-organization in cellular architecture. J. Cell Biol. 155 : 181 –185. Ogston, A.G. 1970 . On the interaction of solute molecules with porous networks. J. Phys. Chem. 74 : 668 –669. Osborne, C.S., C. Chakalova, K.E. Brown, D. Carter, A. Horton, E. Debrand, B. Goyenechea, J.A. Mitchell, S. Lopes, W. Reik, and P. Fraser. 2004 . Active genes dynamically co-localize to shared sites of ongoing transcription. Nat. Genet. 36 : 1065 –1071. Pace, C.N., B.A. Shirley, M. McNutt, and K. Gajiwala. 1996 . Forces contributing to the conformational stability of proteins. FASEB J. 10 : 75 –83. Parsegian, V.A., R.P. Rand, and D.C. Rau. 2000 . Osmotic stress, crowding, preferential hydration, and binding: a comparison of perspectives. Proc. Natl. Acad. Sci. USA. 97 : 3987 –3992. Rosania, G.R., and J.A. Swanson. 1995 . Effects of macromolecular crowding on nuclear size. Exp. Cell Res. 218 : 114 –122. Sept, D., and J.A. McCammon. 2001 . Thermodynamics and kinetics of actin filament nucleation. Biophys. J. 81 : 667 –674. Snir, Y., and R.D. Kamien. 2005 . Entropically driven helix formation. Science. 307 : 1067 . Snoussi, K., and B. Halle. 2005 . Protein self-association induced by macromolecular crowding: a quantitative analysis by magnetic relaxation dispersion. Biophys. J. 88 : 2855 –2866. Spector, D.L. 2003 . The dynamics of chromosome organization and gene regulation. Annu. Rev. Biochem. 72 : 573 –608. Spitzer, J.J., and B. Poolman. 2005 . Electrochemical structure of the crowded cytoplasm. Trends Biochem. Sci. 30 : 536 –541. van den Berg, B., R. Wain, C.M. Dobson, and R.J. Ellis. 2000 . Macromolecular crowding perturbs protein refolding kinetics: implications for folding inside the cell. EMBO J. 19 : 3870 –3875. Wachtler, F., A.H. Hopman, J. Wiegant, and H.G. Schwarzacher. 1986 . On the position of nucleolus organizer regions (NORs) in interphase nuclei. Exp. Cell Res. 167 : 227 –240. Weiller, G.F., G. Caraux, and N. Sylvester. 2004 . The modal distribution of protein isoelectric points reflects amino acid properties rather than sequence evolution. Proteomics. 4 : 943 –949. Woolley, P., and P.R. Wills. 1985 . Excluded-volume effect of inert macromolecules on the melting of nucleic acids. Biophys. Chem. 22 : 89 –94. Yodh, A.G., K.H. Lin, J.C. Crocker, A.D. Dinsmore, R. Verma, and P.D. Kaplan. 2001 . Entropically driven self-assembly and interaction in suspension. Philos. Trans. R. Soc. Lond. A. 359 : 921 –937. Zimmerman, S.B., and S.O. Trach. 1991 . Estimation of macromolecule concentrations and excluded volume effects for the cytoplasm of Escherichia coli. J. Mol. Biol. 222 : 599 –620. D. Marenduzzo and K. Finan contributed equally to this paper. Abbreviations used in this paper: AO, Asakura–Oosawa; NOR, nucleolar organizing region; PEG, polyethyleneglycol.
2022-05-28 07:54:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6458738446235657, "perplexity": 9203.760669930436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663013003.96/warc/CC-MAIN-20220528062047-20220528092047-00605.warc.gz"}
https://studyadda.com/solved-papers/punjab-met-solved-paper-2008_q30/394/183808
• # question_answer The equivalent capacity between the points$X$and$Y$in the circuit with$C=1\mu F$is A) $2\mu F$                          B) $3\mu F$ C) $1\mu F$                                          D) $0.5\mu F$ The circuit is shown in the figure. As points$a$and $b$ are at same potential, so no charge flows through$ab$. Now two remaining capacitors are in parallel, hence                 ${{C}_{eq}}=C+C=2C$                        $=2\times 1=2\mu F$
2020-04-06 02:06:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9601069688796997, "perplexity": 9967.122740214698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371612531.68/warc/CC-MAIN-20200406004220-20200406034720-00063.warc.gz"}
https://dqydj.com/further-reasons-why-dollar-cost-averaging-is-best-for-the-typical-investor/
# Further Reasons Why Dollar Cost Averaging is Best For the Typical Investor February 21st, 2014 by PK If you have been following us lately, last week we posted an article which argued that typical Americans were better off not trying to time the market when they invested monthly through paycheck deductions (say, 401(k)s or automatic investments in IRAs).  That's right: periodic investing and dollar cost averaging is our opinion of the best option for most investors. This being the internet, the goalposts were quickly moved and we were forced into the awkward position of attempting to defend the premise of a question that we didn't even ask (it came from here, for the record).  Perhaps our usage of the phrase "market timing" in our original post was a trigger, because the argument of whether or not we answered the original question quickly became a question of whether or not we used the word 'timing' correctly and somehow morphed into whether we were treating market timing fairly as a discipline. For the record, we don't believe expending tons of effort to pocket $16,000 over 26 years and 308 individual months is worth it. In fact, due to minimum wage laws, it would be illegal to work for a company which offered that sort of reward. Since there was one substantive question buried in the mix, we wanted to come back and answer it. "What about for yearly investing?" - that's a good question since, yes, some investors do invest once a year (say... waiting to fill out their taxes before filling an IRA). As for the rest of the complaints, please read past the yearly analysis as we've issued a challenge which will help us focus our future efforts to help you with your investing. ## If You Invested$6,000 Once a Year and Never Sold, How Much Would Perfect Timing Help? The answer is still "for most people, not enough to be worth the added stress".  As before, our universe is the dividends reinvested S&P 500 from June of 1988 until February 7, 2014.  Investor A invests steadily - in this case, on the first of June (or whenever the market is open next) annually.  Investor B invests at the high point of the year, and Investor C perfectly calls the market bottom of every year.  I accounted for 2014 by adding $1,000 for their respective dates (A invested on the 7th). Here is how they finished: Investor A Invest B Investor C (Steady) (Bad Luck) (Brilliant Timer) Ending Balance$569,081.72 $520,015.92$655,185.95 Average Yearly Return* 8.66% 8.29% 9.47% ###### *Note: I'm using XIRR, which depends on when the investments happen, especially the investment in year one.  That means there is some slop, but I'm not going to go through, predict when A, B, & C added money to the account or put their money in a pseudo-savings account in the meantime to improve comparability on two blog posts. So - 26 years of efforts, late nights, ulcers, and antacid usage to make... $86,104.23 over 26 years.$3,311.70 per year.  Worth it? ## A Dollar Cost Averaging Challenge to You Market Timing Has Nothing on Time in the Market First, note that I wasn't trying to make this a challenge between market timing professionals and dollar cost averaging main street investors.  The point of these pieces is to help the everyday investor - people who don't know the definition of, say, margin call.  The people who read our articles on the blog or syndicated through places like Seeking Alpha are an interesting demographic: 1. We care enough about finance to seek out sources 2. We're have more education in the markets than most investors 3. We generally make more money annually than the people who ask us for investing and finance help With that in mind, if you want me to compare market timing (or any strategy which sells, ever), please give me: 1. A market timing algorithm (one contrived example: 'Sell when 48% over the 228-day moving average, Buy when 23% below the 44-day') 2. The sample you used to determine it was a winner - I will be using S&P closes from 1988 until today.  (You must leave me some out of sample room to run the scenario - so try to stop around 1997 or so with your data mining and backtesting.  We're trying to avoid ridiculous - yet entertaining - theoreticals like this.) 3. Your investing background - years, education, etc.  (Why?  We need to account for #2 above and figure out what it would take to raise main street investors to your level) I think that with that information we'll be able to fairly decide whether it's worth it for main street investors to employ any sort of timing in their periodic and repeated investing. ## Our Household (and the Dollar Cost Averaging Conclusion) When we redid our master bathroom, my wife (an Interior Designer) asked me to install a shelf in the shower to hold our soaps, shampoos, conditioners and the like.  I did a lot of research into building shelves, liquid waterproofing, supporting the cut studs, moving the electricals, and so on.  I cut the backer board in the freezing rain, and for my efforts, got the dreaded alkaline hands from the holes in my gloves I used grouting (and yes, suffered through soaking them in vinegar after). All told, I calculated I spent about 50 hours on just the shelf in the shower... between gathering materials, researching, and executing.  What did I get from the experience?  A funny story when people tour my house and a smug sense of superiority every time my wife buys a new bottle of perfectly-fitting Pantene. In retrospect... was it worth it?  I could have bought a pre-fitted shelf which would have wasted space, sure - but could have cut down my install time by 90-95%.  Pantene may change their bottle size, or my wife may change her hair care ritual (#1 is more likely), and the fit might not be as impressive.  But hey, my shelf is better than the typical shower shelf.  So I've got that going for me... which is nice. I have no doubt that there are some strategies which would have beat dollar cost averaging.  But remember - even non-market-timer and non-stock-seller Warren Buffett even used $1,000,000 portfolios as his example when he talked about beating the market. My doubts enter into play when we start talking about whether the typical investor should bother to employ them. For million dollar plus portfolios? Sure, no brainer (especially in 1999 dollars). That's why you're reading Don't Quit Your Day Job... and/or reading Seeking Alpha. For your friend's$90,000 401(k)? No - set it and forget it.  If he wants to learn on his own, he can adjust his dollar cost averaging strategy in the future. ### Don't Quit Your Day Job... DQYDJ may be compensated by our advertising and affiliate partners if you make purchases through links. See our disclosures page for more information. HOMEBLOGABOUT DQYDJPRIVACY & DISCLOSURES
2020-09-24 21:54:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1775887906551361, "perplexity": 3278.7086245170653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400220495.39/warc/CC-MAIN-20200924194925-20200924224925-00140.warc.gz"}
https://stacks.math.columbia.edu/tag/08QE
Proof. This follows from Lemma 91.4.1 which tells us $L\pi _!(\pi ^{-1}M)$ is computed by $(\pi ^{-1}M)(P_\bullet , \epsilon )$ which is the constant simplicial object on $M$. $\square$ There are also: • 2 comment(s) on Section 91.4: Simplicial resolutions and derived lower shriek In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
2023-02-07 15:51:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.973730206489563, "perplexity": 790.8690164528789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500619.96/warc/CC-MAIN-20230207134453-20230207164453-00099.warc.gz"}
https://math.stackexchange.com/questions/2006969/regular-open-sets-and-set-differences
# Regular open sets and set differences Suppose $T$ is a metric space. A subset $R$ of $T$ is called regular open if the interior of the closure of $R$ is equal to $R$ itself: $$R = \text{int}(\text{cl}(R)).$$ Suppose $R$ and $S$ are two (non empty) regular open sets with $S \subset R$ (strict inclusion). Is it then necessarily the case that $$\text{cl}( R \setminus S) = \text{cl}( \text{cl}(R ) \setminus \text{cl} (S) )?$$ (I worked out this was false in general, my question is whether being regularly open is the right property to ensure it's true) Yes, even for a topological space $T$. Proof of $\supset$. Let $R$ and $S$ be any subsets of $T$. Let $x\in \text{cl}( (\text{cl } R) \setminus \text{cl } S )$ be an arbitrarily point and $U$ be an arbitrary open neighborhood of the point $x$. Then there exists a point $x’\in U\cap ((\text{cl }R) \setminus \text{cl } S)$. A set $V=U\setminus \text{cl } S$ is a neighborhood of the point $x’$. Since $x’\in \text{cl }R$, there exists a point $x’’\in R\cap V\subseteq R\setminus \text{cl } S$. Thus $x\in \text{cl}(R\setminus \text{cl } S )\subseteq \text{cl}(R\setminus S)$. Proof of $\subset$. Let $R$ be an open and $S$ be a canonically open subsets of $T$. Let $x\in \text{cl}( R \setminus S)$ be an arbitrarily point and $U$ be an arbitrary open neighborhood of the point $x$. Then there exists a point $x’\in U\cap (R \setminus S)$. Since the set $R$ is open, $V=U\cap R$ is an open neighborhood of the point $x'$. If the set $S$ is dense in $V$ then $x'\in V\subseteq\text{int cl } S=S$, a contradiction. Therefore there exists a point $x''\in V\setminus\text{cl } S\subseteq R\setminus\text{cl } S$. Thus $x\in \text{cl}(R\setminus \text{cl } S)\subseteq \text{cl}((\text{cl }R)\setminus (\text{cl } S))$.
2019-12-11 09:14:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9632488489151001, "perplexity": 34.54838370612226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530452.95/warc/CC-MAIN-20191211074417-20191211102417-00011.warc.gz"}
https://our.umbraco.com/forum/developers/xslt/6413-Dynamic-Conditional-Sorting
Copied to clipboard #### Flag this post as spam? This post will be reported to the moderators as potential spam to be looked at • Lee 1123 posts 3059 karma points Jan 13, 2010 @ 17:29 1 ## Dynamic / Conditional Sorting? If I do ever get my head round why we use this XSLT I'll buy everyone a beer... But as I can't I'm sorry no Beers for now! Yet again (Can you sense my frustration) something that should be simple, is insanely complicated with XSLT... How on earth do you get round conditional sorting?  Say based on QueryString value.. For example, I want the users to be able to choose how to sort some data either via date or via a nodename or another property - The value is then posted as a QueryString. I have tried <choose> using a $variable but still it doesn't work?? Choose says invalid XSLT and if I stuff it in a variable it just doesn't work!? Please can someone put me out of my misery and show me how you get round conditional sorting? :( • Chris Houston 533 posts 978 karma points Jan 13, 2010 @ 17:49 0 Hi Lee, Have a look at this post here: Which should help you. Cheers, Chris • Chriztian Steinmeier 2731 posts 8338 karma points Jan 13, 2010 @ 20:31 0 Hi Lee, The problem is (as you may or may not have found out) that it's not allowed to use a variable or parameter in the select attribute of a sort instruction, so the way to get around it is to use a choose statement and test for all the possible values: <?xml version="1.0" encoding="utf-8" ?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:umbraco.library="urn:umbraco.library" exclude-result-prefixes="umbraco.library" > <xsl:output method="xml" indent="yes" omit-xml-declaration="yes" /> <xsl:param name="currentPage" /> <xsl:variable name="sortby" select="umbraco.library:RequestQueryString('sortby')" /> <xsl:variable name="nodes" select="$currentPage/node" /> <xsl:template match="/"> <xsl:choose> <!-- Sort by name? --> <xsl:when test="$sortby = 'name'"> <xsl:apply-templates select="$nodes"> <xsl:sort select="@nodeName" data-type="text" order="ascending" /> </xsl:apply-templates> </xsl:when> <!-- Sort by id? --> <xsl:when test="$sortby = 'id'"> <xsl:apply-templates select="$nodes"> <xsl:sort select="@id" data-type="number" order="ascending" /> </xsl:apply-templates> </xsl:when> <!-- More ways to sort... --> <!-- Just take the Umbraco Content order then... --> <xsl:otherwise> <xsl:apply-templates select="$nodes" /> </xsl:otherwise> </xsl:choose> </xsl:template> <!-- Template for how a single node should be output --> <xsl:template match="node"> <p> Name: <xsl:value-of select="@nodeName" />, Id: <xsl:value-of select="@id" /> </p> </xsl:template> <!-- No output for hidden nodes --> <xsl:template match="node[data[@alias = 'umbracoNaviHide'] = 1]" /> </xsl:stylesheet> /Chriztian PS: I hope you get to like XSLT some day, 'coz I loooove beer :-) • Lee 1123 posts 3059 karma points Jan 13, 2010 @ 21:11 0 Thanks for the help chaps, very, very appreciated :) @ Chris - I actually found that page, but couldn't quite get my head round the Xpath in the sort select=""? I understand what she is saying and tried this <xsl:sort select="./data [@alias = 'distance']" data-type="number" /> <xsl:sort select="./data [@alias='eventDate'][$resultsort = 'date']" data-type="text" order="ascending" /> But it didn't do anything :S   Do I just add the condition after the XPath?  And is this right?  I want to sort on this ./data [@alias='eventDate'] So added the following condition afterwards, which based on that ladies code should say if the variable $resultsort is equal to 'date' then apply the sort? [$resultsort = 'date'] Which gives me ./data [@alias='eventDate'][$resultsort = 'date'] @ Chriztian - Thanks for the code, I'll have a good look at it in a minute and see if I can use it if I can't get the XSLt working based on the link Chris gave. The problem I have is the XSLT is quite complex already and I'm worried about adding all that extra code :( But if thats the only way to do it, then so be it ;) If it works, yet another person I'll owe beer too :) • Lee 1123 posts 3059 karma points Jan 13, 2010 @ 21:40 1 # IT'S ALIVE I TELL YA, ALLLIIIIIIIIIVVEEEE Thanks for the help again everyone, I used idea from the link Chris put up and think I have it working now Pretty fiddly, will double check in the morning when I have had a break from this screen...lol Thanks again :) • Chriztian Steinmeier 2731 posts 8338 karma points Jan 13, 2010 @ 21:54 1 Yeah - I had a "Weee" moment too - the template with the choose an' all in my example can actually be written llike this, using Jeni's trick from the link above: <xsl:template match="/"> <xsl:apply-templates select="$nodes"> <xsl:sort select="@nodeName[$sortby = 'name']" data-type="text" order="ascending" /> <xsl:sort select="@id[$sortby = 'id']" data-type="number" order="ascending" /> </xsl:apply-templates> </xsl:template> /Chriztian • Lee 1123 posts 3059 karma points Jan 14, 2010 @ 07:29 1 Hey Chriztian, thats 99% the same way I have done it on the page - But my sorts are for a number or by a date and the date is a DocType property <xsl:for-each select="$searchResults"> <xsl:sort select="umbraco.library:FormatDateTime(./data [@alias = 'eventDate'] [string($resultsort) = 'date'], 'yyyyMMdd')" order="ascending" /> <xsl:sort select="./data [@alias = 'distance'] [string($resultsort) = 'distance']" data-type="number" /> ..... ..... </xsl:for-each> Hope this thread helps someone else • kentiler 76 posts 69 karma points Feb 15, 2010 @ 18:26 0 I'm having a tough time with this. I have a property named sortOrder which is a numeric type. When I put this inside my for-each loop, <xsl:sort select="./data [@alias = 'sortOrder'] [string($resultsort) = 'sortOder']" data-type="number" /> But when I put this line in, I get this error: Error occured System.Xml.Xsl.XslLoadException: The variable or parameter 'resultsort' is either not defined or it is out of scope. An error occurred at D:\www\beta.bayarts.net\html\xslt\634018299623240000_temp.xslt(25,3). at System.Xml.Xsl.XslCompiledTransform.LoadInternal(Object stylesheet, XsltSettings settings, XmlResolver stylesheetResolver) at umbraco.presentation.webservices.codeEditorSave.SaveXslt(String fileName, String oldName, String fileContents, Boolean ignoreDebugging) Any ideas?  I thought ResultSort is a variable created by the sort. Thanks! --Kent • Nik Wahlberg 639 posts 1237 karma points Feb 15, 2010 @ 18:42 0 Hi Kent, can you post your entire XSLT here? resultsort would need to be defined in the your XSLT manually. With the rest of your file, I should be able to help out. Off the bat, if you are simply tring to sort by that property it should look like this: <xsl:sort select="data [@alias = 'sortOrder']" data-type="number" /> Thanks, Nik • organic 108 posts 157 karma points Dec 16, 2011 @ 19:46 0 I'm trying to do the same task, sort based on a column if that coumn is specifed in the Url as the 'sort' querystring, but it's not working. The basic sort on 'lastName' works if I remove: [string($sortby)='lastName'] Any ideas? ...<xsl:variable name="sortby"select="umbraco.library:RequestQueryString('sort')"/><xsl:template match="/"><!--The fun starts here --><table border="1" cellspacing="0" cellpadding="2"><tr class="gridHeader"> <td><b><a href="?sort=number">#</a></b></td> <td><b><a href="?sort=lastName">Player</a></b></td> <td><b><a href="?sort=position">Position</a></b></td>...</tr><xsl:for-each select="$currentPage/node [string(data [@alias='hide'])!='1']"><xsl:sort select="data [@alias='lastName'][string(\$sortby)='lastName']" data-type="text" order="ascending" />...
2021-08-01 10:09:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2927583158016205, "perplexity": 13366.466964040379}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154175.76/warc/CC-MAIN-20210801092716-20210801122716-00015.warc.gz"}
https://math.stackexchange.com/questions/3477888/how-many-local-extrema-points-does-the-function-fx-ln1-sqrtx-x-hav
# How many local extrema points does the function $f(x) = \ln(1+\sqrt{|x|}-x)$ have? I am given the following function: $$f: D \rightarrow \mathbb{R} \hspace{2cm} f(x) = \ln \bigg (1 + \sqrt{|x|} -x \bigg )$$ where $$D$$ is the maximum domain of the function. I have to find the number of local extrema points of this function. What I did was to first make the function look a little cleaner by getting rid of the absolute value: $$f(x) = \left\{ \begin{array}{ll} \ln(1 + \sqrt{x} - x) & \quad x \ge 0 \\ \ln(1 + \sqrt{-x}-x) & \quad x < 0 \end{array} \right.$$ I know that we have local extrema points where the derivative of the function is $$0$$ while the derivative approaching that point from right and left has opposite signs. We can also have a local extremum in a point where the derivative does not exist (an example being the function $$h(x) = |x|$$). So naturally I found the derivative of the function: $$f'(x) = \left\{ \begin{array}{ll} \ \dfrac{1-2\sqrt{x}}{2\sqrt{x}(1+\sqrt{x}-x)} & \quad x > 0 \\ \\ \dfrac{1-2\sqrt{-x}}{2\sqrt{-x}(1+\sqrt{-x}-x)} & \quad x < 0 \end{array} \right.$$ But I got stuck here. I don't know how to continue. What bothers me is that I do not know the domain $$D$$ and I don't know how can I find it. If I try to find the values of $$x$$ for which $$f'(x)=0$$ I get $$x_0= \dfrac{1}{4}$$ and $$x_1 = -\dfrac{1}{4}$$, but if I look at the graph of the function: I can see that we don't have a local extremum point at $$x_1 = -\dfrac{1}{4}$$, even thought we do have a local extremum point at $$x_0 = \dfrac{1}{4}$$. By looking at the graph I also see that we have an extremum point at $$x = 0$$. I'm guessing that is because of the denominator of the derivative makes it such that the derivative is not defined at $$x=0$$, so we have an extremum point, but shouldn't we also take into consideration the values of $$x$$ for which the other term in the denominator of the derivative, $$(1+\sqrt{\pm x} - x)$$ is not defined? And how would we handle that? And how would I find that we have a local extremum point at $$x=0$$ analytically, without looking at the graph of the function? Also, isn't that point of the graph that is close to $$y=-35$$ also a local extremum point? TL;DR: How can I find all the local extremum points of the given function without looking at the graph? The correct answer is $$2$$ (according to my textbook). For a fraction to be zero, the numerator must be zero (with one caveat, discussed at the end of this paragraph). So you want to solve $$1 - 2 \sqrt{x} = 0 \text{,}$$ restricted to $$x > 0$$ and check each solution to see if it also makes the denominator zero. (If it does also make the denominator zero, take a limit approaching that potential solution to see what is really happening.) This says $$\sqrt{x} = 1/2$$, so $$x = 1/4$$. Putting that in the denominator, $$2 \sqrt{1/4}(1+\sqrt{1/4} - 1/4) = 5/4$$, so we do not need to take a limit. Notice that for $$0, the denominator of $$f'$$ is positive, so we need only check the sign of the numerator to the left and right of $$1/4$$ in $$(0,1)$$. Let's check the signs of the numerators of $$f'(1/16)$$ and $$f'(1/2)$$ : $$1 - 2/4$$ is positive and $$1 - 2/\sqrt{2}$$ is negative, so we found a local maximum. Perform a similar procedure on the $$x < 0$$ half and find no local extrema on that half. But you have missed something. What is $$f'(0)$$ (Is it even defined?) and does $$f'$$ change signs while crossing $$x = 0$$? • But the problem is that if I perform a similar procedure on the $x < 0$ half like you said, I do get a local extremum point at $-\dfrac{1} {4}$ and I don't know what's wrong. I shouldn't get a minimum or maximum at that value. – user1502 Dec 16 '19 at 21:14 • @user1502 : $x = -1/4$ is not a solution of $-2\sqrt{-x} - 1 = 0$. – Eric Towers Dec 16 '19 at 21:27 • Just one more question: Isn't there also something to consider for a value like $x = \dfrac{3+\sqrt{5}}{2}$ since for that value of $x$ the derivative of $f(x)$ is again not defined (denomiantor = 0, because of the term $1+\sqrt{x}-x$). Wouldn't the function have an extremum at this point also? Just like in the case where $x=0$, the case you told me to look at in the last sentence of your answer. – user1502 Dec 17 '19 at 23:54 • @user1502 : No extremum. No change of sign in the derivative as $x$ crosses that value. In fact, the function isn't even defined to the right of that point. – Eric Towers Dec 18 '19 at 0:02
2020-07-03 20:58:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 37, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9139258861541748, "perplexity": 60.29958208918281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655882934.6/warc/CC-MAIN-20200703184459-20200703214459-00082.warc.gz"}
https://trajectoryguide.greydongilmore.com/widgets/05_registration.html
# Registration ## Patient Space Registration¶ All registrations with patient scans will be rigid registrations. With some of the more advanced algorithms you can override this and run non-linear registration but it is strongly discouraged. See the below Algorithms to learn more about each algorithm and the respective settings. ### Registration Settings¶ • Reference Volume: the scan that all other scans will be registered to (generally the pre-operative T1w scan). • Frame Volume: the scan that contains the sterotactic frame (either MRI or CT). • Floating volumes: the scans that will be co-registered to the reference volume. You can un-check any scans you do not want registered, all scans checked in this drop-down box will be registered. Within the Reference Volume drop-down box, select the scan you want to co-register all other scans to (Reference). In the Frame Volume drop-down box, select the scan that contains the stereotactic fiducials. In the Floating Volumes drop-down box, all other scans (floating) will be checked to indicate they will be registered to the reference. If there are any floating scans you do not want registered, uncheck them. To begin the registration, press the Run Registration button. The Registration Process box will display updated information during the registration. When the registration is complete, the view will be automatically changed to a compare view. ### Check Registration Results¶ For each registration you will either select Confirm Registration or Decline Registration. If you choose to decline a registration, the registration can be re-run with a different algorithm. To check the registration, it is helpful to use the opacity slider to change the opacity of the foreground scan (floating scan). You can also use the Layer Reveal tool to check the registration in more detail. This tool displays a square that contains half the foreground scan and half the background scan. When finished checking the registrations, any confirmed registered scans will disappear from the Floating Volumes drop-down box, declined scans will still appear in the drop-down box. To re-run the registration, update any settings and press the Run Registration button, all previous registration information, for the current floating scans, will be erased. ## Algorithms¶ The default algorithm will be NiftyReg using nearest neighbor interpolation when applying the transform. You are able to change the registration algorithm and parameters according to the following information. • interpolation order: nearest neighbor, cubic, sinc, linear (default nearest neighbor) ### ANTS - antsRegistrationSyNQuick¶ • transform type: rigid, rigid+affine, rigid+affine+syn, rigid+syn, rigid+affine+b-spl syn, rigid+b-spl syn • rigid: rigid (1 stage) • rigid+affine: rigid + affine (2 stages) • rigid+affine+syn: rigid + affine + deformable syn (3 stages) • rigid+syn: rigid + deformable syn (2 stages) • rigid+affine+b-spl syn: rigid + affine + deformable b-spline syn (3 stages) • rigid+b-spl syn: rigid + deformable b-spline syn (2 stages) • cc radius: histogram bins for mutual information in SyN stage (default = 32) • spline distance: spline distance for deformable B-spline SyN transform (default = 26) • histogram matching: use histogram matching (default = 0) ### FSL - flirt¶ • interpolation order: nearest neighbor, spline, sinc, trilinear (default trilinear) • cost: used during the second stage. Options are: mutual info, correlation ratio, least square, normalized correlation, normalized mutual info (default corratio) • search cost: used during initial search stage. Options are: mutual info, correlation ratio, least square, normalized correlation, normalized mutual info (default corratio) • coarse search: search delta angle to use during initial alignment between the images (default 60) • fine search: search delta angle to use during final alignment between the images (default 18) ### ANTS - antsRegistration¶ For information about this algorithm you can visit this page. This algorithm gives the user more control over each step. The user can specify the "stages" of registration, where a stage consists of a transform and an image metric. Each stage consists of levels with specific values set for iterations, shrink factors, and smoothing sigmas. • interpolation: applied only to the output image. Options are: linear, nearest neighbor, bspline, cosinc, hammingsinc (default nearest neighbor) • metric: CC,MI,GC (CC) • CC: ANTS neighborhood cross correlation • MI: Mutual information • GC: Global Correlation • gradient step: how big the mutual info, correlation ratio, least square, normalized correlation, normalized mutual info (default 0.1) • convergence: for each hierarchical step, this value specifies the threshold that is needed to be met before stopping the respective step (default 1000x500x250x100x0,1e-6,10) • shrink: shrink factor for each hierarchical step (default 8x4x2x2x1) • i.e. for an image with 256x256x256 voxels, the levels will work on images of size 32mm, 64mm, 128mm, and 256mm • smoothing: smoothing factor applied in each hierarchical step (default 3x2x1x0x0vox) ## Template Space Registration¶ Click Run Registration. The registration progression will be updated within the Registration Progress window. Once registration is completed, you will see the co-registered volumes appear in the floating drop-down box (under Co-registered Volumes). You will now confirm that the registration results by clicking the Compare Volumes button. For each registration you will either select Confirm Registration or Decline Registration. If you choose to decline a registration, you will be able to re-run the registration with a different algorithm.
2022-09-27 02:37:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17363370954990387, "perplexity": 11286.126774486655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334974.57/warc/CC-MAIN-20220927002241-20220927032241-00734.warc.gz"}
https://math.stackexchange.com/questions/2442140/definition-of-a-limit-epsilon-delta
# Definition of a Limit epsilon delta Why is it that the limit exists if: for all numbers epsilon (in the epsilon range close to L) => that there’s a delta (in the delta range close to a) and if this number Epsilon exists and it turns out that there is a delta for this epsilon then this implies that there is a limit? Why does the other way around not work as well? (it seems like from this definition that the y axis (epsilon range) is greater than the x-axis delta range according to the definition. Could someone clarify this for me? • "Why does the other way around not work as well?" What other way? – Simply Beautiful Art Sep 23 '17 at 19:19 • 0<|x-a|<\delta implies that |f(x)-L|<\epsilon Why is it not reversed so that if for every epsilon we can find a delta then there is a limit – user420309 Sep 23 '17 at 19:26 • You mean to ask why it is not reversed so that:$$|f(x)-L|<\epsilon\implies0<|x-a|<\delta$$? – Simply Beautiful Art Sep 23 '17 at 19:29 • yes that is what I wanted to ask – user420309 Sep 23 '17 at 19:30 • Consider $f(x)=\sin(x)$. Then $|f(x)-0|<\epsilon$ does not imply $0<|x-0|<\delta$, since we could have $x=k\pi$, where $k$ is an integer $k>\delta$. – Simply Beautiful Art Sep 23 '17 at 19:30 hint: This gif show the performance of $$\epsilon,\delta$$ . Hope it help you • thanks, could you elaborate any further? – user420309 Sep 23 '17 at 19:23
2019-08-26 06:07:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.794605016708374, "perplexity": 482.99435385178344}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330968.54/warc/CC-MAIN-20190826042816-20190826064816-00128.warc.gz"}
http://mathhelpforum.com/advanced-statistics/174024-little-linear-algebra-normal-distribution-print.html
# a little linear algebra with a normal distribution • March 9th 2011, 03:52 PM ecc5 a little linear algebra with a normal distribution Let Y = <Y1, Y2, Y3> and let v = <1,2,-3>. Suppose Yi ~ Norm(5,2). What is the distribution of v dot Y? I know that the dot product of the two vectors is Y1 + 2Y2 - 3Y3. But I don't understand the concept of how to tie the Normal distribution into this result. • March 9th 2011, 04:42 PM mr fantastic Quote: Originally Posted by ecc5 Let Y = <Y1, Y2, Y3> and let v = <1,2,-3>. Suppose Yi ~ Norm(5,2). What is the distribution of v dot Y? I know that the dot product of the two vectors is Y1 + 2Y2 - 3Y3. But I don't understand the concept of how to tie the Normal distribution into this result. Get the distribution of the random variable $X = Y_1 + 2Y_2 - 3Y_3$. I assume you know how to do this? • March 9th 2011, 06:44 PM ecc5 Umm no I'm not completely sure. It's been a while. • March 9th 2011, 07:49 PM mr fantastic Quote: Originally Posted by ecc5 Umm no I'm not completely sure. It's been a while. Here is a link for getting the sum: Sum of normally distributed random variables - Wikipedia, the free encyclopedia Getting the difference is very similar (means subtract but the variances still add). • March 9th 2011, 08:28 PM ecc5 Oh yeah!!! This is coming back to me. So here's what I got: Norm(0,sqrt(56)). I got the standard deviation by doing: sqrt(4+4*4+9*4). Does that look/sound right? • March 10th 2011, 12:29 AM mr fantastic Quote: Originally Posted by ecc5 Oh yeah!!! This is coming back to me. So here's what I got: Norm(0,sqrt(56)). I got the standard deviation by doing: sqrt(4+4*4+9*4). Does that look/sound right? No. Wrong variance.
2016-05-26 10:54:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8542625904083252, "perplexity": 1979.6529043850314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275835.98/warc/CC-MAIN-20160524002115-00044-ip-10-185-217-139.ec2.internal.warc.gz"}
https://gmatclub.com/forum/if-a-5-a-which-of-the-following-must-be-true-159797.html
GMAT Question of the Day - Daily to your Mailbox; hard ones only It is currently 20 Aug 2018, 03:47 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # If a^5 ≤ a, which of the following must be true? Author Message TAGS: ### Hide Tags Intern Joined: 18 Jul 2012 Posts: 11 If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags Updated on: 15 Sep 2013, 12:08 3 15 00:00 Difficulty: 95% (hard) Question Stats: 35% (01:04) correct 65% (01:05) wrong based on 435 sessions ### HideShow timer Statistics If a^5 ≤ a, which of the following must be true? I. –1 ≤ a ≤ 0 II. a=0 III. 0 ≤ a ≤ 1 A. None of the above B. I only C. II only D. III only E. I and III only Originally posted by nroy347 on 15 Sep 2013, 12:04. Last edited by Bunuel on 15 Sep 2013, 12:08, edited 1 time in total. Edited the question. Math Expert Joined: 02 Sep 2009 Posts: 48044 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 15 Sep 2013, 12:12 2 1 nroy347 wrote: If a^5 ≤ a, which of the following must be true? I. –1 ≤ a ≤ 0 II. a=0 III. 0 ≤ a ≤ 1 A. None of the above B. I only C. II only D. III only E. I and III only $$a^5\leq{a}$$ --> $$a\leq{-1}$$ or $$0\leq{a}\leq{1}$$. If a=-2, then none of the options MUST be true. All Must or Could be True Questions to practice: search.php?search_id=tag&tag_id=193 _________________ Intern Joined: 10 Sep 2013 Posts: 39 Location: United States Concentration: Strategy, Operations GMAT Date: 12-10-2013 GPA: 3.5 WE: Operations (Manufacturing) Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 18 Sep 2013, 07:27 can u plz elaborate y D is nt d correct answer Math Expert Joined: 02 Sep 2009 Posts: 48044 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 18 Sep 2013, 07:45 fnumiamisburg wrote: can u plz elaborate y D is nt d correct answer The question asks: which of the following MUST be true. Now, 0 ≤ a ≤ 1 is NOT always true, because, a can be less than or equal to -1, say -2 ((-2)^5<-2), and in this case this option is not true. Does this make sense? _________________ Intern Joined: 07 Aug 2013 Posts: 24 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 18 Sep 2013, 07:48 fnumiamisburg wrote: can u plz elaborate y D is nt d correct answer -1<=X=<1 so none are true Math Expert Joined: 02 Sep 2009 Posts: 48044 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 18 Sep 2013, 07:51 Valerun wrote: fnumiamisburg wrote: can u plz elaborate y D is nt d correct answer -1<=X=<1 so none are true The range is not correct. $$a^5\leq{a}$$ --> $$a\leq{-1}$$ or $$0\leq{a}\leq{1}$$. Check here: if-a-5-a-which-of-the-following-must-be-true-159797.html#p1267231 _________________ Intern Joined: 10 Sep 2013 Posts: 39 Location: United States Concentration: Strategy, Operations GMAT Date: 12-10-2013 GPA: 3.5 WE: Operations (Manufacturing) Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 18 Sep 2013, 08:01 thanks Brunel... I got it now.. Intern Joined: 28 Mar 2012 Posts: 5 Location: Yugoslavia Concentration: Accounting, Finance GPA: 3.75 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 18 Sep 2013, 09:52 Just put (-2) as a possible "a" and do the work, it took me 30 sec to get to answer "A" Math Expert Joined: 02 Sep 2009 Posts: 48044 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 18 Sep 2013, 10:00 1 1 Predecessor wrote: Just put (-2) as a possible "a" and do the work, it took me 30 sec to get to answer "A" That's a valid approach. "MUST BE TRUE" questions: These questions ask which of the following MUST be true, or which of the following is ALWAYS true for ALL valid sets of numbers you choose. Generally for such kind of questions if you can prove that a statement is NOT true for one particular valid set of numbers, it will mean that this statement is not always true and hence not a correct answer. So, for "MUST BE TRUE" questions plug-in method is good to discard an option but not 100% sure thing to prove that an option is ALWAYS true. As for "COULD BE TRUE" questions: The questions asking which of the following COULD be true are different: if you can prove that a statement is true for one particular set of numbers, it will mean that this statement could be true and hence is a correct answer. So, for "COULD BE TRUE" questions plug-in method is fine to prove that an option could be true. But here, if for some set of numbers you'll see that an option is not true, it won't mean that there does not exist some other set which will make this option true. _________________ SVP Joined: 06 Sep 2013 Posts: 1850 Concentration: Finance Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 20 Nov 2013, 07:21 Bunuel wrote: nroy347 wrote: If a^5 ≤ a, which of the following must be true? I. –1 ≤ a ≤ 0 II. a=0 III. 0 ≤ a ≤ 1 A. None of the above B. I only C. II only D. III only E. I and III only $$a^5\leq{a}$$ --> $$a\leq{-1}$$ or $$0\leq{a}\leq{1}$$. If a=-2, then none of the options MUST be true. All Must or Could be True Questions to practice: search.php?search_id=tag&tag_id=193 Hi Bunuel, I don't get it. How come do you get two ranges. It isn't just a^5<=a Then solve this inequality with key points after factorizing that gives 0<=a<=1? Where did you get the other range from? Cheers J Math Expert Joined: 02 Sep 2009 Posts: 48044 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 20 Nov 2013, 07:32 jlgdr wrote: Bunuel wrote: nroy347 wrote: If a^5 ≤ a, which of the following must be true? I. –1 ≤ a ≤ 0 II. a=0 III. 0 ≤ a ≤ 1 A. None of the above B. I only C. II only D. III only E. I and III only $$a^5\leq{a}$$ --> $$a\leq{-1}$$ or $$0\leq{a}\leq{1}$$. If a=-2, then none of the options MUST be true. All Must or Could be True Questions to practice: search.php?search_id=tag&tag_id=193 Hi Bunuel, I don't get it. How come do you get two ranges. It isn't just a^5<=a Then solve this inequality with key points after factorizing that gives 0<=a<=1? Where did you get the other range from? Cheers J $$a^5\leq{a}$$ --> $$a(a-1)(a+1) (a^2+1)\leq{0}$$ --> reduce by a^2+1 since it's always positive: $$a(a-1)(a+1)\leq{0}$$. Now, solve it with key points approach. _________________ SVP Joined: 06 Sep 2013 Posts: 1850 Concentration: Finance Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags Updated on: 07 Feb 2014, 05:44 Bunuel wrote: fnumiamisburg wrote: can u plz elaborate y D is nt d correct answer The question asks: which of the following MUST be true. Now, 0 ≤ a ≤ 1 is NOT always true, because, a can be less than or equal to -1, say -2 ((-2)^5<-2), and in this case this option is not true. Does this make sense? Now I get it we need to factorize everything and get two ranges Therefore, none of the statement will HAVE to be true Thanks all J Originally posted by jlgdr on 01 Jan 2014, 07:43. Last edited by jlgdr on 07 Feb 2014, 05:44, edited 1 time in total. Manager Status: Student Joined: 26 Aug 2013 Posts: 218 Location: France Concentration: Finance, General Management Schools: EMLYON FT'16 GMAT 1: 650 Q47 V32 GPA: 3.44 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 01 Jan 2014, 10:28 jlgdr wrote: Bunuel wrote: fnumiamisburg wrote: can u plz elaborate y D is nt d correct answer The question asks: which of the following MUST be true. Now, 0 ≤ a ≤ 1 is NOT always true, because, a can be less than or equal to -1, say -2 ((-2)^5<-2), and in this case this option is not true. Does this make sense? I don't quite get it Bunuel. I mean it looks fine that when you pick -2 it satisfies the inequality on question stem and is not good for any of the statements BUT, when one tries to find the range for a^5<=a, one ends up with 0<=a<=1, which is the exact same as option III I'm pretty sure I'm getting the reasoning incorrectly Cheers! J The trick here is more on the answer choices. Look at I II and III. all are right (pick out some numbers). Therefore I was totally confused because non of the answers had the right answer (I II and III) I finally chose D, but hesitate between A and D quite a long time... This is a tricky tricky question, since it takes into consideration that the answers are correct but they do not cover all the POSSIBLE answers. Therefore it is A. Confusing... _________________ Think outside the box Senior Manager Joined: 21 Oct 2013 Posts: 433 If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 06 Jun 2014, 08:26 If a^5 ≤ a, which of the following must be true? I –1 ≤ a ≤ 0 II a=0 III 0 ≤ a ≤ 1 A) None of the above B) I only C) II only D) III only E) I and III only Math Expert Joined: 02 Sep 2009 Posts: 48044 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 06 Jun 2014, 08:29 goodyear2013 wrote: If a^5 ≤ a, which of the following must be true? I –1 ≤ a ≤ 0 II a=0 III 0 ≤ a ≤ 1 A) None of the above B) I only C) II only D) III only E) I and III only Merging similar topics. Please refer to the discussion above. _________________ Senior Manager Joined: 21 Oct 2013 Posts: 433 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 07 Jun 2014, 04:48 Hi, What is the difference between these 2 equations? I am trying to apply this inequality trick. http://gmatclub.com/forum/inequalities-trick-91482.html http://gmatclub.com/forum/which-of-the-following-represents-the-complete-range-of-x-108884.html $$x^3-4x^5<0$$ --> $$x^3(1-4x^2)<0$$ --> $$(1+2x)(x^3)(1-2x)<0$$ --> roots are -1/2, 0, and 1/2 --> $$-\frac{1}{2}<x<0$$ or $$x>\frac{1}{2}$$. $$a^5\leq{a} --> a(a-1)(a+1) (a^2+1)\leq{0}$$ --> reduce by $$(a^2+1)$$ since it's always positive: $$a(a-1)(a+1)\leq{0}$$ --> $$a^5\leq{a}$$ --> $$a\leq{-1}$$ or $$0\leq{a}\leq{1}$$. Math Expert Joined: 02 Sep 2009 Posts: 48044 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 08 Jun 2014, 04:55 1 goodyear2013 wrote: Hi, What is the difference between these 2 equations? I am trying to apply this inequality trick. http://gmatclub.com/forum/inequalities-trick-91482.html http://gmatclub.com/forum/which-of-the-following-represents-the-complete-range-of-x-108884.html $$x^3-4x^5<0$$ --> $$x^3(1-4x^2)<0$$ --> $$(1+2x)(x^3)(1-2x)<0$$ --> roots are -1/2, 0, and 1/2 --> $$-\frac{1}{2}<x<0$$ or $$x>\frac{1}{2}$$. $$a^5\leq{a} --> a(a-1)(a+1) (a^2+1)\leq{0}$$ --> reduce by $$(a^2+1)$$ since it's always positive: $$a(a-1)(a+1)\leq{0}$$ --> $$a^5\leq{a}$$ --> $$a\leq{-1}$$ or $$0\leq{a}\leq{1}$$. The first one is solved with slightly different approach, check here: which-of-the-following-represents-the-complete-range-of-x-108884.html#p868863 If you want to solve it with the approach given here: which-of-the-following-represents-the-complete-range-of-x-108884.html, then ensure that the factors are of the form (x - a)(x - b)(x - c)... So, it would be (x+1/2)(x^3)(x-1/2)>0. _________________ Intern Joined: 22 Feb 2014 Posts: 27 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 15 Jul 2014, 09:45 Bunuel wrote: nroy347 wrote: If a^5 ≤ a, which of the following must be true? I. –1 ≤ a ≤ 0 II. a=0 III. 0 ≤ a ≤ 1 A. None of the above B. I only C. II only D. III only E. I and III only $$a^5\leq{a}$$ --> $$a\leq{-1}$$ or $$0\leq{a}\leq{1}$$. If a=-2, then none of the options MUST be true. All Must or Could be True Questions to practice: search.php?search_id=tag&tag_id=193 Hi How can 2 comes in the range?? 0<=a<=1.. I dont understand.. Can you pls explain.... I checked all threads in the discussion still did not get it. Thanks Math Expert Joined: 02 Sep 2009 Posts: 48044 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 15 Jul 2014, 10:12 GGMAT760 wrote: Bunuel wrote: nroy347 wrote: If a^5 ≤ a, which of the following must be true? I. –1 ≤ a ≤ 0 II. a=0 III. 0 ≤ a ≤ 1 A. None of the above B. I only C. II only D. III only E. I and III only $$a^5\leq{a}$$ --> $$a\leq{-1}$$ or $$0\leq{a}\leq{1}$$. If a=-2, then none of the options MUST be true. All Must or Could be True Questions to practice: search.php?search_id=tag&tag_id=193 Hi How can 2 comes in the range?? 0<=a<=1.. I dont understand.. Can you pls explain.... I checked all threads in the discussion still did not get it. Thanks _________________ Intern Joined: 26 Jan 2017 Posts: 10 Re: If a^5 ≤ a, which of the following must be true?  [#permalink] ### Show Tags 21 Feb 2017, 22:09 Hi, I have understood the whole solution except for the part where a^5 reduces to a(a-1)(a+1)(a^2 -1).can anyone guide me if I am doing it correctly?. a^5 <a a^5 -a <0 a (a^4 -1)<0 a(a^2-1)(a^2+1)<0 a(a+1)(a-1)(a^2+1)<0 a<0 a<1 a<-1 Re: If a^5 ≤ a, which of the following must be true? &nbs [#permalink] 21 Feb 2017, 22:09 Go to page    1   2    Next  [ 21 posts ] Display posts from previous: Sort by # Events & Promotions Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
2018-08-20 10:47:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7369674444198608, "perplexity": 3626.677318399281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221216333.66/warc/CC-MAIN-20180820101554-20180820121554-00617.warc.gz"}
https://yoshiwarabooks.org/mfg/linear-regression.html
## Section1.6Linear Regression We have spent most of this chapter analyzing models described by graphs or equations. To create a model, however, we often start with a quantity of data. Choosing an appropriate function for a model is a complicated process. In this section, we consider only linear models and explore methods for fitting a linear function to a collection of data points. First, we fit a line through two data points. ### SubsectionFitting a Line through Two Points If we already know that two variables are related by a linear function, we can find a formula from just two data points. For example, variables that increase or decrease at a constant rate can be described by linear functions. ###### Example1.97 In 1993, Americans drank 188.6 million cases of wine. Wine consumption increased at a constant rate over the next decade, and we drank 258.3 million cases of wine in 2003. (Source: Los Angeles Times, Adams Beverage Group) 1. Find a formula for wine consumption, $W\text{,}$ in millions of cases, as a linear function of time, $t\text{,}$ in years since 1990. 2. State the slope as a rate of change. What does the slope tell us about this problem? Solution 1. We have two data points of the form $(t, W)\text{,}$ namely $(3, 188.6)$ and $(13, 258.3)\text{.}$ We use the point-slope formula to fit a line through these two points. First, we compute the slope. \begin{equation*} \frac{\Delta W}{\Delta t}=\frac {258.3 - 188.6}{13 - 3}= 6.97 \end{equation*} Next, we use the slope m = $6.97$ and either of the two data points in the point-slope formula. \begin{equation*} \begin{aligned}[t] W \amp =W_1 + m(t - t_1) \\ W \amp = 188.6 + 6.97(t - 3) \\ W \amp = 167.69 + 6.97t \end{aligned} \end{equation*} Thus, $W = f (t) = 167.69 + 6.97t\text{.}$ 2. The slope gives us the rate of change of the function, and the units of the variables can help us interpret the slope in context. \begin{equation*} \frac{\Delta W}{\Delta t}= \frac{258.3 - 188.6 \text{ millions of cases}}{13 - 3\text{ years}} = 6.97 \text{ millions of cases / year} \end{equation*} Over the 10 years between 1993 and 2003, wine consumption in the United States increased at a rate of 6.97 million cases per year. ###### To Fit a Line through Two Points: 1. Compute the slope between the two points. 2. Substitute the slope and either point into the point-slope formula \begin{equation*} y = y_1 + m(x - x_1) \end{equation*} ###### Checkpoint1.98 In 1991, there were 64.6 burglaries per 1000 households in the United States. The number of burglaries reported annually declined at a roughly constant rate over the next decade, and in 2001 there were 28.7 burglaries per 1000 households. (Source: U.S. Department of Justice) 1. Find a function for the number of burglaries, $B\text{,}$ as a function of time, $t\text{,}$ in years, since 1990. 2. State the slope as a rate of change. What does the slope tell us about this problem? 1. $y = 68.19 - 3.59x$ 2. $-3.59$ burglaries per $1000$ households per year. From 1991 to 2001, the burglary rate declined by $3.59$ burglaries per 1000 households every year. ### SubsectionScatterplots Empirical data points in a linear relation may not lie exactly on a line. There are many factors that can affect experimental data, including measurement error, the influence of environmental conditions, and the presence of related variable quantities. ###### Example1.99 A consumer group wants to test the gas mileage of a new model SUV. They test-drive six vehicles under similar conditions and record the distance each drove on various amounts of gasoline. Gasoline used (gal) $9.6$ $11.3$ $8.8$ $5.2$ $10.3$ $6.7$ Miles driven $155.8$ $183.6$ $139.6$ $80.4$ $167.1$ $99.7$ 1. Are the data linear? 2. Draw a line that fits the data. 3. What does the slope of the line tell us about the data? Solution 1. No, the data are not strictly linear. If we compute the slopes between successive data points, the values are not constant. We can see from an accurate plot of the data, shown below, that the points lie close to, but not precisely on, a straight line. 2. We would like to draw a line that comes as close as possible to all the data points, even though it may not pass precisely through any of them. In particular, we try to adjust the line so that we have the same number of data points above the line and below the line. One possible solution is shown above. 3. To compute the slope of the our estimated line, we first choose two points on the line. Our line appears to pass through one of the data points,$(8.8, 139.6)\text{.}$ We look for a second point on the line whose coordinates are easy to read, perhaps $(6.5,100)\text{.}$ The slope is \begin{equation*} m = \frac{139.6 - 100}{8.8 - 6.5}= 17.2\text{ miles per gallon} \end{equation*} According to our data, the SUV gets about 17.2 miles to the gallon. ###### Caution1.100 To find the slope of your estimated line, be sure to choose points on the line; do not choose any of the data points (unless they happen to lie on your line). ###### Checkpoint1.101 1. Plot the data points. Do the points lie on a line? 2. Draw a line that fits the data. $x$ $1.49$ $3.68$ $4.95$ $5.49$ $7.88$ $8.41$ $y$ $2.69$ $3.7$ $4.6$ $5.2$ $7.2$ $7.3$ The graph in Example 1.99 is called a scatterplot. The points on a scatterplot may or may not show some sort of pattern. Consider the three plots shown below. • In figure (a), the data points resemble a cloud of gnats; there is no apparent pattern to their locations. • In figure (b), the data follow a generally decreasing trend, but certainly do not all lie on the same line. • The points in figure (c) are even more organized; they seem to lie very close to an imaginary line. If the data in a scatterplot are roughly linear, we can estimate the location of an imaginary line of best fit that passes as close as possible to the data points. We can then use this line to make predictions about the data. ### SubsectionLinear Regression One measure of a person's physical fitness is the body mass index, or BMI. Your BMI is the ratio of your weight in kilograms to the square of your height in centimeters. Thus, thinner people have lower BMI scores, and fatter people have higher scores. The Centers for Disease Control considers a BMI between 18.5 and 24.9 to be healthy. The points on the scatterplot below show the BMI of Miss America from 1921 to 1991. From the data in the scatterplot, can we see a trend in Americans’ ideal of female beauty? ###### Example1.102 1. Estimate a line of best fit for the scatterplot above. (Source: http://www.pbs.org) 2. Use your line to estimate the BMI of Miss America 1980. Solution 1. We draw a line that fits the data points as best we can, as shown below. (Note that we have set $t = 0$ in 1920 on this graph.) We try to end up with roughly equal numbers of data points above and below our line. 2. We see that when $t = 60$ on this line, the $y$-value is approximately 18.3. We therefore estimate that Miss America 1980 had a BMI of 18.3. (Her actual BMI was 17.85.) ###### Checkpoint1.103 Human brains consume a large amount of energy, about 16 times as much as muscle tissue per unit weight. In fact, brain metabolism accounts for about 25% of an adult human’s energy needs, as compared to about 5% for other mammals. As hominid species evolved, their brains required larger and larger amounts of energy, as shown below. (Source: Scientific American, December 2002) 1. Draw a line of best fit through the data points. 2. Estimate the amount of energy used by the brain of a hominid species that lived three million years ago. 1. About $10.5\%$ The process of predicting an output value based on a straight line that fits the data is called linear regression, and the line itself is called the regression line. The equation of the regression line is usually used (instead of a graph) to predict values. ###### Example1.104 1. Find the equation of the regression line in Example 1.102. 2. Use the regression equation to predict the BMI of Miss America 1980. Solution 1. We first calculate the slope by choosing two points on the regression line. The points we choose are not necessarily any of the original data points; instead they should be points on the regression line itself. The line appears to pass through the points $(17, 20)$ and $(67, 18)\text{.}$ The slope of the line is then \begin{equation*} m = \frac{18 - 20}{67 - 17}\approx -0.04 \end{equation*} Now we use the point-slope formula to find the equation of the line. (If you need to review the point-slope formula, see Section 1.5.) We substitute $m = -0.04$ and use either of the two points for $(x_1, y_1)\text{;}$ we will choose $(17, 20)\text{.}$ The equation of the regression line is \begin{equation*} \begin{aligned}[t] y \amp = y_1 + m(x - x_1)\\ y \amp = 20-0.04(x-17) \amp \amp \blert{\text{Simplify.}}\\ y \amp = 20.68 - 0.04t\\ \end{aligned} \end{equation*} 2. We will use the regression equation to make our prediction. For Miss America 1980, $t = 60$ and \begin{equation*} y = 20.68 - 0.04(60) = 18.28 \end{equation*} This value agrees well with the estimate we made in Example 1.102. ###### Checkpoint1.105 The number of manatees killed by watercraft in Florida waters has been increasing since 1975. Data are given at 5-year intervals in the table. (Source: Florida Fish and Wildlife Conservation Commission) Year Manatee deaths $1975$ $6$ $1980$ $16$ $1985$ $33$ $1990$ $47$ $1995$ $42$ $2000$ $78$ 1. Draw a regression line through the data points shown in the figure. 2. Use the regression equation to estimate the number of manatees killed by watercraft in 1998. 1. $y = 4.7 + 2.6t$ 2. $65$ ### SubsectionLinear Interpolation and Extrapolation Using a regression line to estimate values between known data points is called interpolation. Making predictions beyond the range of known data is called extrapolation. ###### Example1.106 1. Use linear interpolation to estimate the BMI of Miss America 1960. 2. Use linear extrapolation to predict the BMI of Miss America 2001. Solution 1. For 1960, we substitute $t = 40$ into the regression equation we found in Example 1.104. \begin{equation*} y = 20.68 - 0.04(40) = 19.08 \end{equation*} We estimate that Miss America 1960 had a BMI of 19.08. (Her BMI was actually 18.79.) 2. For 2001, we substitute $t = 81$ into the regression equation. \begin{equation*} y = 20.68 - 0.04(81) = 17.44 \end{equation*} Our model predicts that Miss America 2001 had a BMI of 17.44. In fact, her BMI was 20.25. By the late 1990s, public concern over the self-image of young women had led to a reversal of the trend toward ever-thinner role models. Example 1.106b illustrates an important fact about extrapolation: If we try to extrapolate too far, we may get unreasonable results. For example, if we use our model to predict the BMI of Miss America 2520 (when $t = 600$), we get \begin{equation*} y = 20.68 - 0.04(600) =-3.32 \end{equation*} Even if the Miss America pageant is still operating in 600 years, the winner cannot have a negative BMI. Our linear model provides a fair approximation for 1920–1990, but if we try to extrapolate too far beyond the known data, the model may no longer apply. We can also use interpolation and extrapolation to make estimates for nonlinear functions. Sometimes a variable relationship is not linear, but a portion of its graph can be approximated by a line. The graph at right shows a child’s height each month. The graph is not linear because her rate of growth is not constant; her growth slows down as she approaches her adult height. However, over a short time interval the graph is close to a line, and that line can be used to approximate the coordinates of points on the curve. ###### Checkpoint1.107 Emily was 82 centimeters tall at age 36 months and 88 centimeters tall at age 48 months. 1. Find a linear equation that approximates Emily's height in terms of her age over the given time interval. 2. Use linear interpolation to estimate Emily’s height when she was 38 months old, and extrapolate to predict her height at age 50 months. 3. Predict Emily's height at age 25 (300 months). Is your answer reasonable? 1. $y = 64 + 0.5x$ 2. $83$ cm, $89$ cm 3. $214$ cm; No Estimating a line of best fit is a subjective process. Rather than base their estimates on such a line, statisticians often use the least squares regression line. This regression line minimizes the sum of the squares of all the vertical distances between the data points and the corresponding points on the line, as shown at left. Many calculators are programmed to find the least squares regression line, using an algorithm that depends only on the data, not on the appearance of the graph. ###### Technology1.108Using a Calculator for Linear Regression You can use a graphing calculator to make a scatterplot, find a regression line, and graph the regression line with the data points. On the TI-83 calculator, we use the statistics mode, which you can access by pressing STAT. You will see a display that looks like figure (a) below. Choose $1$ to $Edit$ (enter or alter) data. Now follow the instructions in Example 1.109 for using your calculator’s statistics features. ###### Example1.109 1. Find the equation of the least squares regression line for the following data: \begin{equation*} (10, 12), (11, 14), (12, 14), (12, 16), (14, 20) \end{equation*} 2. Plot the data points and the least squares regression line on the same axes. Solution 1. We must first enter the data. • Press STATENTER to select $Edit\text{.}$ • If there are data in column $L_1$ or $L_2\text{,}$ clear them out: Use the $\boxed{\uparrow}$ key to select $L_1\text{,}$ press CLEAR, then do the same for $L_2\text{.}$ • Enter the $x$-coordinates of the data points in the $L_1$ column and enter the $y$-coordinates in the $L_2$ column, as shown in figure (a) below. Now we are ready to find the regression equation for our data. • Press STAT $\boxed{\rightarrow}$ 4 to select linear regression, or LinReg (ax + b), then press ENTER. • The calculator will display the equation $y = ax + b$ and the values for $a$ and $b\text{,}$ as shown in figure (b). You should find that your regression line is approximately $y = 1.95x - 7.86\text{.}$ 2. First, we first clear out any old definitions in the list. • Position the cursor after $Y_1 =$ and copy in the regression equation as follows: • Press VARS$5$ $\boxed{\rightarrow}$ $\boxed{\rightarrow}$ ENTER. • To draw a scatterplot, press 2ndY=$1$ and set the Plot1 menu as shown in figure (a) below. • Finally, press ZOOM $9$ to see the scatterplot of the data and the regression line. The graph is shown in figure (b). ###### Caution1.110 When you are through with the scatterplot, press Y= $\boxed{\uparrow}$ ENTER to turn off the $Stat Plot\text{.}$ If you neglect to do this, the calculator will continue to show the scatterplot even after you ask it to plot a new equation. ###### Checkpoint1.111 1. Use your calculator’s statistics features to find the least squares regression equation for the data in Checkpoint 1.101. 2. Plot the data and the graph of the regression equation. 1. $y = 1.34 + 0.71x$ ### SubsectionSection Summary #### SubsubsectionVocabulary Look up the definitions of new terms in the Glossary. • Scatterplot • Least squares regression line • Extrapolate • Regression line • Interpolate • Linear regression #### SubsubsectionCONCEPTS 1. Data points may not lie exactly on the graph of an equation. 2. Points in a scatterplot may or may not exhibit a pattern. 3. We can approximate a linear pattern by a regression line. 4. We can use interpolation or extrapolation to make estimates and predictions. 5. If we extrapolate too far beyond the known data, we may get unreasonable results. #### SubsubsectionSTUDY QUESTIONS 1. What is a regression line? 2. State two formulas you will need to calculate the equation of a line through two points. 3. Explain the difference between interpolation and extrapolation. 4. In general, should you have more confidence in figures obtained by interpolation or by extrapolation? Why? #### SubsubsectionSKILLS Practice each skill in the Homework  problems listed. 1. Find the equation of a line through two points: #1–6, 29–36 2. Draw a line of best fit: #7–18 3. Find the equation of a regression line: #11–28, 37–40 4. Use interpolation and extrapolation to make predictions: #11–40 ### SubsectionHomework 1.6 In Problems 1–6, we find a linear model from two data points. 1. Make a table showing the coordinates of two data points for the model. (Which variable should be plotted on the horizontal axis?) 2. Find a linear equation relating the variables. 3. State the slope of the line, including units, and explain its meaning in the context of the problem. ###### 1 It cost a bicycle company $$9000$ to make $40$ touring bikes in its first month of operation and$$15,000$ to make $125$ bikes during its second month. Express the company's monthly production cost, $C\text{,}$ in terms of the number, $x\text{,}$ of bikes it makes. ###### 2 Flying lessons cost $$645$ for an $8$-hour course and$$1425$ for a $20$-hour course. Both prices include a fixed insurance fee. Express the cost, $C\text{,}$ of flying lessons in terms of the length, $h\text{,}$ of the course in hours. ###### 3 Under ideal conditions, Andrea's Porsche can travel $312$ miles on a full tank ($12$ gallons of gasoline) and $130$ miles on $5$ gallons. Express the distance, $d\text{,}$ Andrea can drive in terms of the amount of gasoline, $g\text{,}$ she buys. ###### 4 On an international flight, a passenger may check two bags each weighing $70$ kilograms, or $154$ pounds, and one carry-on bag weighing $50$ kilograms, or $110$ pounds. Express the weight, $p\text{,}$ of a bag in pounds in terms of its weight, $k\text{,}$ in kilograms. ###### 5 A radio station in Detroit, Michigan, reports the high and low temperatures in the Detroit/Windsor area as $59\degree$F and $23\degree$F, respectively. A station in Windsor, Ontario, reports the same temperatures as $15\degree$C and $-5\degree$C. Express the Fahrenheit temperature, $F\text{,}$ in terms of the Celsius temperature, $C\text{.}$ ###### 6 Ms. Randolph bought a used car in 2000. In 2002, the car was worth $$9000\text{,}$ and in 2005 it was valued at$$4500\text{.}$ Express the value, $V$ , of Ms. Randolph's car in terms of the number of years, $t\text{,}$ she has owned it. Each regression line can be improved by adjusting either m or b. Draw a line that fits the data points more closely. ###### 10 In Problems 11 and 12, use information from the graphs to answer the questions. ###### 11 The scatterplot shows the ages of 10 army drill sergeants and the time it took each to run 100 meters, in seconds. 1. What was the hundred-meter time for the 25-year-old drill sergeant? 2. How old was the drill sergeant whose hundred-meter time was $12.6$ seconds? 3. Use a straightedge to draw a line of best fit through the data points. 4. Use your line of best fit to predict the hundred-meter time of a $28$-year-old drill sergeant. 5. Choose two points on your regression line and find its equation. 6. Use the equation to predict the hundred-meter time of a 40-year-old drill sergeant and a 12 year-old drill sergeant. Are these predictions reasonable? ###### 12 The scatterplot shows the outside temperature and the number of cups of cocoa sold at an outdoor skating rink snack bar on 13 consecutive nights. 1. How many cups of cocoa were sold when the temperature was $2\degree$C? 2. What was the temperature on the night when $25$ cups of cocoa were sold? 3. Use a straightedge to draw a line of best fit through the data points 4. Use your line of best fit to predict the number of cups of cocoa that will be sold at the snack bar if the temperature is $7\degree$C. 5. Choose two points on your regression line and find its equation. 6. Use the equation to predict the number of cups of cocoa that will be sold when the temperature is $10\degree$C and when the temperature is $24\degree$C. Are these predictions reasonable? ###### 13 With Americans' increased use of faxes, pagers, and cell phones, new area codes are being created at a steady rate. The table shows the number of area codes in the United States each year. (Source: USA Today, NeuStar, Inc.) Year $1997$ $1998$ $1999$ $2000$ $2001$ $2002$ $2003$ Number of area codes $151$ $186$ $204$ $226$ $239$ $262$ $274$ 1. Let $t$ represent the number of years after 1995 and plot the data. Draw a line of best fit for the data points. 2. Find an equation for your regression line. 3. How many area codes do you predict for 2010? ###### 14 The number of mobile homes in the United States has been increasing since 1960. The data in the table are given in millions of mobile homes. (Source: USA Today, U.S. Census Bureau) Year $1960$ $1970$ $1980$ $1990$ $2000$ Number of mobile homes $0.8$ $2.1$ $4.7$ $7.4$ $8.8$ 1. Let $t$ represent the number of years after 1960 and plot the data. Draw a line of best fit for the data points 2. Find an equation for your regression line. 3. How many mobile homes do you predict for 2010? ###### 15 Teenage birth rates in the United States declined from 1991 to 2000. The table shows the number of births per 1000 women in selected years. (Source: U.S. National Health Statistics) Year $1991$ $1993$ $1995$ $1996$ $1997$ $1998$ Births $62.1$ $59.6$ $56.8$ $54.4$ $52.3$ $51.1$ 1. Let $t$ represent the number of years after 1990 and plot the data. Draw a line of best fit for the data points. 2. Find an equation for your regression line. 3. Estimate the teen birth rate in 1994. 4. Predict the teen birth rate in 2010. ###### 16 The table shows the minimum wage in the United States at five-year intervals. (Source: Economic Policy Institute) Year $1960$ $1965$ $1970$ $1975$ $1980$ $1985$ $1990$ $1995$ $2000$ Minimum wage $1.00$ $1.25$ $1.60$ $2.10$ $3.10$ $3.35$ $3.80$ $4.25$ $5.15$ 1. Let $t$ represent the number of years after 1960 and plot the data. Draw a line of best fit for the data points. 2. Find an equation for your regression line. 3. Estimate the minimum wage in 1972. 4. Predict the minimum wage in 2010. ###### 17 Life expectancy in the United States has been rising since the nineteenth century. The table shows the U.S. life expectancy in selected years. (Source: http://www.infoplease.com) Year $1950$ $1960$ $1970$ $1980$ $1990$ $2000$ Life expectancy at birth $68.2$ $69.7$ $70.8$ $73.7$ $75.4$ $77$ 1. Let $t$ represent the number of years after 1950, and plot the data. Draw a line of best fit for the data points. 2. Find an equation for your regression line. 3. Estimate the life expectancy of someone born in 1987. 4. Predict the life expectancy of someone born in 2010. ###### 18 The table shows the per capita cigarette consumption in the United States at five-year intervals. (Source: http://www.infoplease.com) Year $1980$ $1985$ $1990$ $1995$ $2000$ Per capita cigarette consumption $3851$ $3461$ $2827$ $2515$ $2092$ 1. Let $t$ represent the number of years after 1980, and plot the data. Draw a line of best fit for the data points. 2. Find an equation for your regression line. 3. Estimate the per capita cigarette consumption in 1998. 4. Predict the per capita cigarette consumption in 2010. ###### 19 "The earnings gap between high-school and college graduates continues to widen, the Census Bureau says. On average, college graduates now earn just over $$51,000$ a year, almost twice as much as high-school graduates. And those with no high-school diploma have actually seen their earnings drop in recent years." The table shows the unemployment rate and the median weekly earnings for employees with different levels of education. (Source: Morning Edition, National Public Radio, March 28, 2005) Years ofeducation Unemploymentrate Weeklyearnings ($) Somehigh schoolno diploma $10$ $8.8$ $396$ High-schoolgraduate $12$ $5.5$ $554$ Some collegeno degree $13$ $5.2$ $622$ Associate'sdegree $14$ $4.0$ $672$ Bachelor'sdegree $16$ $3.3$ $900$ Master'sdegree $18$ $2.9$ $1064$ Professionaldegree $20$ $1.7$ $1307$ 1. Plot years of education on the horizontal axis and weekly earnings on the vertical axis. 2. Find an equation for the regression line. 3. State the slope of the regression line, including units, and explain what it means in the context of the data. 4. Do you think this model is useful for extrapolation or interpolation? For example, what weekly earnings does the model predict for someone with 15 years of education? For 25 years? Do you think these predictions are valid? Why or why not? ###### 20 The table shows the birth rate (in births per woman) and the female literacy rate (as a percent of the adult female population) in a number of nations. (Source: UNESCO, The World Fact Book, EarthTrends) Country Literacy rate Birth rate Brazil $88.6$ $1.93$ Egypt $43.6$ $2.88$ Germany $99$ $1.39$ Iraq $53$ $4.28$ Japan $99$ $1.39$ Niger $9.4$ $6.75$ Pakistan $35.2$ $4.14$ Peru $82.1$ $2.56$ Philippines $92.7$ $3.16$ Portugal $91$ $1.47$ Russian Federation $99.2$ $1.27$ Saudi Arabia $69.3$ $4.05$ United States $97$ $2.08$ 1. Plot the data with literacy rate on the horizontal axis. Draw a line of best fit for the data points. 2. Find an equation for the regression line. 3. What values for the input variable make sense for the model? What are the largest and smallest values predicted by the model for the output variable? 4. State the slope of the regression line, including units, and explain what it means in the context of the data. ###### 21 The table shows the amount of carbon released into the atmosphere annually from burning fossil fuels, in billions of tons, at 5-year intervals from 1950 to 1995. (Source: www.worldwatch.org) Year $1950$ $1955$ $1960$ $1965$ $1970$ $1975$ $1980$ $1985$ $1990$ $1995$ Carbonemissions $1.6$ $2.0$ $2.5$ $3.1$ $4.0$ $4.5$ $5.2$ $5.3$ $5.9$ $6.2$ 1. Let $t$ represent the number of years after 1950 and plot the data. Draw a line of best fit for the data points. 2. Find an equation for your regression line. 3. Estimate the amount of carbon released in 1992. ###### 22 High-frequency radiation is harmful to living things because it can cause changes in their genetic material. The data below, collected by C. P. Oliver in 1930, show the frequency of genetic transmutations induced in fruit flies by doses of X-rays, measured in roentgens. (Source: C. P. Oliver, 1930) Dosage(roentgens) $285$ $570$ $1640$ $3280$ $6560$ Percentage ofmutated genes $1.18$ $2.99$ $4.56$ $9.63$ $15.85$ 1. Plot the data and draw a line of best fit through the data points. 2. Find an equation for your regression line. 3. Use the regression equation to predict the percent of mutations that might result from exposure to $5000$ roentgens of radiation. ###### 23 Bracken, a type of fern, is one of the most successful plants in the world, growing on every continent except Antarctica. New plants, which are genetic clones of the original, spring from a network of underground stems, or rhizomes, to form a large circular colony. The graph shows the diameters of various colonies plotted against their age. (Source: Chapman et al.,1992) 1. Calculate the rate of growth of the diameter of a bracken colony, in meters per year. 2. Find an equation for the line of best fit. (What should the vertical intercept of the line be?) 3. In Finland, bracken colonies over $450$ meters in diameter have been found. How old are these colonies? ###### 24 The European sedge warbler can sing several different songs consisting of trills, whistles, and buzzes. Male warblers who sing the largest number of songs are the first to acquire mates in the spring. The data below show the number of different songs sung by several male warblers and the day on which they acquired mates, where day 1 is April 20. (Source: Krebs and Davies, 1993) Numberof songs $41$ $38$ $34$ $32$ $30$ $25$ $24$ $24$ $23$ $14$ Pairing day $20$ $24$ $25$ $21$ $24$ $27$ $31$ $35$ $40$ $42$ 1. Plot the data points, with number of songs on the horizontal axis. A regression line for the data is $y = -0.85x + 53\text{.}$ Graph this line on the same axes with the data. 2. What does the slope of the regression line represent? 3. When can a sedge warbler that knows $10$ songs expect to find a mate? 4. What do the intercepts of the regression line represent? Do these values make sense in context? ###### 25 One of the factors that determines the strength of a muscle is its cross-sectional area. The data below show the cross-sectional area of the arm flexor muscle for several men and women, and their strength, measured by the maximum force they exerted against a resistance. (Source: Davis, Kimmet, Autry, 1986) Women Area (sq cm) $11.5$ $10.8$ $11.7$ $12.0$ $12.5$ $12.7$ $14.4$ $14.4$ $15.7$ Strength (kg) $11.3$ $13.2$ $13.2$ $14.5$ $15.6$ $14.8$ $15.6$ $16.1$ $18.4$ Men Area (sq cm) $13.5$ $13.8$ $15.4$ $15.4$ $17.7$ $18.6$ $20.8$ $-$ $-$ Strength (kg) $15.0$ $17.3$ $19.0$ $19.8$ $20.6$ $20.8$ $26.3$ $-$ $-$ 1. Plot the data for both men and women on the same graph using different symbols for the data points for men and the data points for women. 2. Are the data for both men and women described reasonably well by the same regression line? Draw a line of best fit through the data. 3. Find the equation of your line of best fit, or use a calculator to find the regression line for the data. 4. What does the slope mean in this context? ###### 26 Astronomers use a numerical scale called magnitude to measure the brightness of a star, with brighter stars assigned smaller magnitudes. When we view a star from Earth, dust in the air absorbs some of the light, making the star appear fainter than it really is. Thus, the observed magnitude of a star, $m\text{,}$ depends on the distance its light rays must travel through the Earth's atmosphere. The observed magnitude is given by \begin{equation*} m = m_0 + kx \end{equation*} where $m_0$ is the actual magnitude of the star outside the atmosphere, $x$ is the air mass (a measure of the distance through the atmosphere), and $k$ is a constant called the extinction coefficient. To calculate $m_0\text{,}$ astronomers observe the same object several times during the night at different positions in the sky, and hence for different values of $x\text{.}$ Here are data from such observations. (Source: Karttunen et al., 1987) Altitude Air mass, $x$ Magnitude, $m$ $50\degree$ $1.31$ $0.90$ $35\degree$ $1.74$ $0.98$ $25\degree$ $2.37$ $1.07$ $20\degree$ $2.92$ $1.17$ 1. Plot observed magnitude against air mass, and draw a line of best fit through the data. 2. Find the equation of your line of best fit, or use a calculator to find the regression line for the data. 3. Find the equation of your line of best fit, or use a calculator to find the regression line for the data. 4. What is the value of the extinction coefficient? What is the apparent magnitude of the star outside Earth's atmosphere? ###### 27 Six students are trying to identify an unknown chemical compound by heating the substance and measuring the density of the gas that evaporates. (Density $=$ mass/volume.) The students record the mass lost by the solid substance and the volume of the gas that evaporated from it. They know that the mass lost by the solid must be the same as the mass of the gas that evaporated. (Source: Hunt and Sykes, 1984) Student A B C D E F Volume ofgas ($\text{cm}^3$) $48$ $60$ $24$ $81$ $76$ $54$ Loss inmass (mg) $64$ $81$ $32$ $107$ $88$ $72$ 1. Plot the data with volume on the horizontal axis. Which student made an error in the experiment? 2. Ignoring the incorrect data point, draw a line of best fit through the other points. 3. Find an equation of the form $y = kx$ for the data. Why should you expect the regression line to pass through the origin? 4. Use your equation to calculate the mass of $1000\text{ cm}^3$ (one liter) of the gas. 5. Here are the densities of some gases at room temperature: Hydrogen $8$ mg/liter Nitrogen $1160$ mg/liter Oxygen $1330$ mg/liter Carbon dioxide $1830$ mg/liter Which of these might have been the gas that evaporated from the unknown substance? Hint Use your answer to part (d) to calculate the density of the gas. $1 \text{ cm}^3 = 1 \text{ milliliter}\text{.}$ ###### 28 The formulas for many chemical compounds involve ratios of small integers. For example, the formula for water, H$_2$0, means that two atoms of hydrogen combine with one atom of oxygen to make one water molecule. Similarly, magnesium and oxygen combine to produce magnesium oxide. In this problem, we will discover the chemical formula for magnesium oxide. (Source: Hunt and Sykes, 1984) 1. Twenty-four grams of magnesium contain the same number of atoms as sixteen grams of oxygen. Complete the table showing the amount of oxygen needed if the formula for magnesium oxide is $\text{MgO}\text{,}$ $\text{Mg}_2\text{O}\text{,}$ or $\text{MgO}_2\text{.}$ Grams of Mg Grams ofO (if MgO) Grams ofO (if Mg$_2$O) Grams ofO (if MgO$_2$) $24$ $16$ $48$ $12$ $6$ 2. Graph three lines on the same axes to represent the three possibilities, with grams of magnesium on the horizontal axis and grams of oxygen on the vertical axis. 3. Here are the results of some experiments synthesizing magnesium oxide. Experiment Grams ofMagnesium Gramsof oxygen $1$ $15$ $10$ $2$ $22$ $14$ $3$ $30$ $20$ $4$ $28$ $18$ $5$ $10$ $6$ Plot the data on your graph from part (b). Which is the correct formula for magnesium oxide? For Problems 29–32, 1. Use linear interpolation to give approximate answers. 2. What is the meaning of the slope in the context of the problem? ###### 29 The temperature in Encino dropped from $81\degree$F at 1 a.m. to $73\degree$F at 5 a.m. Estimate the temperature at 4 a.m. ###### 30 Newborn blue whales are about $24$ feet long and weigh $3$ tons. The young whale nurses for $7$ months, at which time it is $53$ feet long. Estimate the length of a $1$-year-old blue whale. ###### 31 A car starts from a standstill and accelerates to a speed of $60$ miles per hour in $6$ seconds. Estimate the car's speed $2$ seconds after it began to accelerate. ###### 32 A truck on a slippery road is moving at $24$ feet per second when the driver steps on the brakes. The truck needs $3$ seconds to come to a stop. Estimate the truck's speed $2$ seconds after the brakes were applied. In Problems 33–36, use linear interpolation or extrapolation to answer the questions. ###### 33 The temperature of an automobile engine is $9\degree$ Celsius when the engine is started and $51\degree$C seven minutes later. Use a linear model to predict the engine temperature for both $2$ minutes and $2$ hours after it started. Are your predictions reasonable? ###### 34 The temperature in Death Valley is $95\degree$ Fahrenheit at 5 a.m. and rises to $110\degree$ Fahrenheit by noon. Use a linear model to predict the temperature at 2 p.m. and at midnight. Are your predictions reasonable? ###### 35 Ben weighed $8$ pounds at birth and $20$ pounds at age $1$ year. How much will he weigh at age $10$ if his weight increases at a constant rate? ###### 36 The elephant at the City Zoo becomes ill and loses weight. She weighed $10,012$ pounds when healthy and only $9641$ pounds a week later. Predict her weight after $10$ days of illness. ###### 37 Birds' nests are always in danger from predators. If there are other nests close by, the chances of predators finding the nest increase. The table shows the probability of a nest being found by predators and the distance to the nearest neighboring nest. (Source: Perrins, 1979) Distance tonearest neighbor(meters) $20$ $40$ $60$ $80$ $100$ Probability ofpredators (%) $47$ $34$ $32$ $17$ $1.5$ 1. Plot the data and the least squares regression line. 2. Use the regression line to estimate the probability of predators finding a nest if its nearest neighbor is $50$ meters away. 3. If the probability of predators finding a nest is $10\%\text{,}$ how far away is its nearest neighbor? 4. What is the probability of predators finding a nest if its nearest neighbor is $120$ meters away? Is your answer reasonable? ###### 38 A trained cyclist pedals faster as he increases his cycling speed, even with a multiple-gear bicycle. The table shows the pedal frequency, $p$ (in revolutions per minute), and the cycling speed, $c$ (in kilometers per hour), of one cyclist. (Source: Pugh, 1974) Speed(km/hr) $8.8$ $12.5$ $16.2$ $24.4$ $31.9$ $35.0$ Pedalfrequency (rpm) $44.5$ $50.7$ $60.6$ $77.9$ $81.9$ $95.3$ 1. Plot the data and the least squares regression line. 2. Estimate the cyclist's pedal frequency at a speed of $20$ kilometers per hour. 3. Estimate the cyclist's speed when he is pedaling at $70$ revolutions per minute. 4. Does your regression line give a reasonable prediction for the pedaling frequency when the cyclist is not moving? Explain. ###### 39 In this problem we will calculate the efficiency of swimming as a means of locomotion. A swimmer generates power to maintain a constant speed in the water. If she must swim against an opposing force, the power increases. The following table shows the power expended by a swimmer while working against different amounts of force. (A positive force opposes the swimmer, and a negative force helps her.) (Source: diPrampero et al., 1974, and Alexander, 1992) Force(newtons) $-3.5$ $0$ $0$ $6$ $8$ $10$ $17$ $17$ Metabolicpower(watts) $100$ $190$ $230$ $320$ $380$ $450$ $560$ $600$ 1. Plot the data on the grid, or use the StatPlot feature on your calculator. Use your calculator to find the least squares regression line. Graph the regression line on top of the data. 2. Use your regression line to estimate the power needed for the swimmer to overcome an opposing force of $15$ newtons. 3. Use your regression line to estimate the power generated by the swimmer when there is no force either hindering or helping her. 4. Estimate the force needed to tow the swimmer at $0.4$ meters per second while she rests. (If she is resting, she is not generating any power). 5. The swimmer's mechanical power (or rate of work) is computed by multiplying her speed times the force needed to tow her at rest. Use your answer to part (d) to calculate the mechanical power she generates by swimming at $0.4$ meters per second. 6. The ratio of mechanical power to metabolic power is a measure of the swimmer's efficiency. Compute the efficiency of the swimmer when there is no external force opposing or helping her. ###### 40 In this problem, we calculate the amount of energy generated by a cyclist. An athlete uses oxygen slowly when resting but more quickly during physical exertion. In an experiment, several trained cyclists took turns pedaling on a bicycle ergometer, which measures their work rate. The table shows the work rate of the cyclists, in watts, measured against their oxygen intake, in liters per minute. (Source: Pugh, 1974) Oxygenconsumption(liters/min) $1$ $1.7$ $2$ $3.3$ $3.9$ $3.6$ $4.3$ $5$ Work rate(watts) $40$ $100$ $180$ $220$ $280$ $300$ $320$ $410$ 1. Plot the data on the grid, or use the StatPlot feature on your calculator. Use your calculator to find the least squares regression line. Graph the regression line on top of the data. 2. Find the horizontal intercept of the regression line. What does the horizontal intercept tell you about this situation? 3. Estimate the power produced by a cyclist consuming oxygen at $5.9$ liters per minute. 4. What is the slope of the regression line? The slope represents the amount of power, in watts, generated by a cyclist for each liter of oxygen consumed per minute. How many watts of power does a cyclist generate from each liter of oxygen? 5. One watt of power represents an energy output of one joule per second. How many joules of energy does the cyclist generate in one minute? 6. How many joules of energy can be extracted from each cubic centimeter of oxygen used? (One liter is equal to 1000 cubic centimeters.)
2020-02-24 02:51:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5024502873420715, "perplexity": 921.4127811277028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145869.83/warc/CC-MAIN-20200224010150-20200224040150-00503.warc.gz"}
http://www.procasestudy.com/writing-exponential-functions-template/
# writing exponential functions template writing exponential functions template is a writing exponential functions template sample that gives infomration on writing exponential functions template doc. When designing writing exponential functions template, it is important to consider different writing exponential functions template format such as writing exponential functions template word, writing exponential functions template pdf. You may add related information such as writing exponential functions from tables worksheet, how to write an exponential function from a graph, exponential function equation, how to write an exponential function from a word problem. the function $$g(x)=\left(\frac{1}{2}\right)^x$$ is an example of exponential decay. in the exponential decay of $g(x)$, the function shrinks in half every time you add one to its input $x$. if we call this parameter $k$, we can write our exponential function $f$ as $$f(x)=b^{kx}.$$ you can explore the influence of both parameters $b$ and $k$ in the following applet. it turns out the parameters $b$ and $k$ can change the function $f$ in the same way, so you really only need to change one of them to see all the different functions $f$. in fact, for any change you make to $k$, you can make a compensating change in $b$ to keep the function the same. then, if you change either $b$ or $k$, the applet will automatically make a compensatory change in the other parameter to keep the function the same. we’ll often use two parameters for the exponential function: $c$ and one of $b$ or $k$. for example, we might set $k=1$ and use $$f(x)=cb^x$$ or set $b=e$ and use $$f(x)=ce^{kx}.$$ you can add the parameter $c$ to the applet by checking the “scale function” checkbox. is an example of exponential decay. it gets rapidly smaller as x increases, as illustrated by its graph. exponential i can write an exponential function from a table, using common ratios. exponential function formula: y=a(b)x. what do an example of an exponential function is the growth of bacteria. some bacteria double this can be written as f(x) = 2, writing exponential functions from tables worksheet, writing exponential functions from tables worksheet, how to write an exponential function from a graph, exponential function equation, how to write an exponential function from a word problem. an exponential function with base b is defined by f (x) = ab where a ≠0, b in the following example, a = 1 and b = 2. example 3: writing an exponential model when the initial value is known. in 2006, 80 deer were introduced into a , exponential function problems, exponential functions practice, exponential functions practice, how to analyze exponential functions, exponential function calculator A writing exponential functions template Word can contain formatting, styles, boilerplate text, headers and footers, as well as autotext entries. It is important to define the document styles beforehand in the sample document as styles define the appearance of Word text elements throughout your document. You may design other styles and format such as writing exponential functions template pdf, writing exponential functions template powerpoint, writing exponential functions template form. When designing writing exponential functions template, you may add related content, exponential function problems, exponential functions practice, how to analyze exponential functions, exponential function calculator. how do you write an exponential function? what are examples of exponential functions? how do you find an exponential function?
2021-07-24 09:48:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28649353981018066, "perplexity": 592.1295458643666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150264.90/warc/CC-MAIN-20210724094631-20210724124631-00114.warc.gz"}
https://math.stackexchange.com/questions/3088666/what-is-approximately-the-distribution-of-your-total-earning
# What is approximately the distribution of your total earning? You play a game in a casino: you roll two dice and if the sum of the spots equals seven, you win $$5$$€. In every other case, you lose $$1$$€. You decide to play this game $$120$$ times. What is approximately the distribution of your total earning? So the probability that the sum of two dice $$X$$ and $$Y$$ is $$7$$ can be calculate with: $$P(X+Y=7)=\frac{6}{36}=\frac{1}{6}$$ considering all the cases: ((1,6), (2,5), (3,4), (4,3), (5,2), (6,1)). Then the probability that the sum of the two dice is not $$7$$ is $$1-\frac{1}{6}=\frac{5}{6}$$. So I think that the distribution should be $$Bin(120,\frac{1}{6})$$ but from the teacher answer it is $$N(0,600)$$, where I'm wrong? Why is not Binomial but Normal? • I'm guessing that teacher approximated binomial distribution with the central limit theorem. – Jakobian Jan 26 at 19:44 • $\mathrm{Bin(120,6)}$ is the distribution of successful gambles, not the distribution of your total earning. – zahbaz Jan 26 at 20:00 $$X \sim \text{Bin}(120, \frac{1}{6})$$ is the number of times you win. But the winnings themselves are $$5X - (120 - X) = 6X - 120$$. We can compute $$\mathbb{E}[6X-120] = 0$$, and $$\text{Var}[6X-120] = 36\text{Var}[X] = 36 \times 120 \times \frac{1}{6} \times \frac{5}{6} = 600$$, so the normal approximation will be $$N(0, 600)$$. • So to calculate the expectation and variance you use the formula from the binomial distribution? $\mu=np$ $\sigma^2=np(1-p)$? – Mark Jacon Jan 26 at 20:43 • I calculate the true mean and variance (as I said, it's not exactly binomial). $X$ is binomial, but I need to adjust since I have $6X-120$ instead of just $X$. Then I take a normal that has the same mean and variance. – Todor Markov Jan 26 at 21:00
2019-05-19 16:22:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9070925116539001, "perplexity": 310.9263255151988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255071.27/warc/CC-MAIN-20190519161546-20190519183546-00069.warc.gz"}
https://dataspace.princeton.edu/handle/88435/dsp01kh04ds569
Skip navigation Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01kh04ds569 Title: Observation of Higgs boson decay to bottom quarks Authors: Cooperstein, Stephane Advisors: Olsen, James Contributors: Physics Department Keywords: bottom quarkCMSHiggs bosonLHCobservationVH Subjects: Particle physicsPhysics Issue Date: 2019 Publisher: Princeton, NJ : Princeton University Abstract: The observation of the Standard Model Higgs boson decay to a bottom quark-antiquark pair is presented. The primary contribution to this result is from processes in which the Higgs boson is produced in association with a W or Z boson. The latest measurement of these processes is described, using 41.3/fb of proton-proton collision data at center-of-mass energy √s = 13 TeV, collected by the CMS experiment in 2017. The significance of the observed excess in data over Standard Model backgrounds is 3.3 standard deviations. The result is combined with similar measurements performed by CMS on previous datasets, resulting in an observed significance of 5.6 standard deviations. The measured signal is well consistent with the Standard Model expectation for a Higgs boson with mass 125 GeV decaying to bottom quarks, with a precision of 20%. URI: http://arks.princeton.edu/ark:/88435/dsp01kh04ds569 Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu Type of Material: Academic dissertations (Ph.D.) Language: en Appears in Collections: Physics Files in This Item: File Description SizeFormat Cooperstein_princeton_0181D_13037.pdf8.52 MBAdobe PDF Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.
2020-09-28 19:31:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1892700046300888, "perplexity": 3514.391753628907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401604940.65/warc/CC-MAIN-20200928171446-20200928201446-00071.warc.gz"}
https://www.physicsforums.com/threads/calc-optimization.90328/
# Calc Optimization 1. Sep 22, 2005 ### mattxr250 I was wondering if someone could workout this problem... The sum of the perimeters of an equilateral triangle and square is 10. Find the dimensions of the triangle and the square that produce a minimum total area. Thanks for any help 2. Sep 22, 2005 ### EnumaElish So you have 3L + 4L = 7L = 10, so L = 10/7, is that right? Or is it more complicated? 3. Sep 23, 2005 ### amcavoy Well it could be 3L + 4W = 10. Are the sides of the triangle the same as the square? 4. Sep 23, 2005 ### EnumaElish Let's see how mattxr250 thinks it is. mattxr250 are you there? 5. Sep 24, 2005 ### HallsofIvy Staff Emeritus EnumaElish, If the triangle and square had the same lenght sides, then the side length would have to be 10/7 and there wouldn't be any question about which lengths give the minimum total area would there? mattxr250, as apmcavoy says, the total perimeter would be 3L + 4W = 10 (I assume he means L as the length of a side of the triangle and W as the side of the square. The area of the square would be W2 (I'm happy to do the easy part for you! What would the area of the triangle be in terms of L? What would the total area be? Can you put that only in terms of L (or W)? Can you find the value that makes that a minimum? 6. Sep 25, 2005 ### mattxr250 HallsofIvy, I had the 3L + 4W = 10...As you stated, the sides of the triangle cannot be the same length as the square because you don't have to differentiate to find a minimum area....heres what i came up with for setting that equation equal to L... L = (10-4W)/3... so does this make sense for the perimeter? 3[(10-4W)/3] + 4W = 10...after that i'm lost, lol...any more help? 7. Sep 25, 2005 ### EnumaElish You have two unknowns L and W but one equation. Can you determine 2 unknowns from a single equation? The thread title said calc optimization, so where do you think is the calculus or the optimization part? Okay, I see that you said area needs to be minimized. So how do you write the total area in terms of L and W? 8. Sep 25, 2005 ### mattxr250 well, as stated the area of the square is just W^2...and although the formula for the area of triangle is (1/2)b(h), I dont know how to get to that.... to get the optimization part you need the equations for the area, differentiate them, and then find the min of f '(x), but I dont know how...help?? 9. Sep 25, 2005 ### EnumaElish You are right about the square. How do you get from the side of an equilateral triangle to its height? I don't remember the formula for it. (The base is just L so b = L.) 10. Sep 25, 2005 ### mattxr250 oh, isn't that a "30, 60, 90" triangle if you draw an altitude from a vertex to the opposite side?? ok, maybe I'll try that...but... what do I differentiate?? the two forumlas for area? 11. Sep 25, 2005 ### mattxr250 well, i found the formula for the area of an equilateral triangle... [(3^(1/2))/4](L^2)... i really need help guys...any suggestions? 12. Sep 26, 2005 ### EnumaElish Okay, so you have the formulas for the triangle and the square, so you can represent total area in terms of L and W: Triangle area = T(L), Square area = S(W), so total area = T(L)+S(W). YOu had said you'd need to differentiate, and that's correct. You need to differentiate with resp. to L and W separately and set each derivative = 0. But you also know that L and W have to satisfy the condition "total circumference = 10." This last condition makes the problem a constrained optimization problem. Have you covered constrained optimization in class? Have you encountered or solved constrained optimization examples?
2016-10-21 13:29:40
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.829852819442749, "perplexity": 740.0882726315447}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718278.43/warc/CC-MAIN-20161020183838-00208-ip-10-171-6-4.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/290064/grh-and-the-rank-of-elliptic-curves/290068
# GRH and the rank of elliptic curves I have been using the Magma calculator recently, and while calculating ranks of elliptic curves with very big coefficients, there is a possibility to assume GRH is true, which signaficantly speeds up the calculation. My question is, how is computation of the rank of an elliptic curve made faster by assuming the GRH. I am no expert in this field, so please keep your answers as simple as possible. Computation of ranks of elliptic curves relies on descent. The first step of descent is the computation of a finite Selmer group, which in turn uses the computation of the class group of a potentially large number field. This is the step where GRH is used: it allows you to assume that the class group is generated by the set of prime ideals up to a relatively small norm bound, therefore speeding up the computation. • I think you mean under GRH that the class group is generated by a small number of prime ideals of small norm (essentially bounded by a small power of $\log |{\rm disc}(K)|$, by Bach). – KConrad Jan 6 '18 at 14:32 • Yes, I should probably have been more precise (I'll edit). But my goal was not to describe the full algorithm here anyway. – Aurel Jan 6 '18 at 16:26 Note: Here i present the method assuming GRH when the rank is large compared to the conductor .may this helping you . Take $f(x)$ to be a function such that $f(0)=1$ and $f(x)\geq0$ for all real $x$ Then, assuming the Riemann hypothesis, the sum :$\sum f(\beta)$ where $1/2+i\beta$ runs over the nontrivial zeros of $L(s,E)$,will be an upper bound for the analytic rank of $E$ Moreover, for certain choices of $f(x)$ this sum may be efficiently evaluated using the explicit formula for the $L$-function attached to $E$ , The method, is available as part of William Stein’s (W. A. Stein et al., Purple PSAGE, The PSAGE Development Team, 2011,) • Well, this gives an upper bound of the rank, but how does this help compute the rank itself? – Alex M. Jan 6 '18 at 20:03 • The link doesn't work. Let $\Lambda_E(p^k) = (p^k+1-\#E(\mathbf{F}_{p^k})) \log p$ such that $\frac{L'}{L}(s,E) = -\sum_{n=1}^\infty \Lambda_E(n) n^{-s}$. Then the modularity theorem implies $\sum_\beta f(\beta)+C(f) = \sum_{n=1}^\infty \frac{\Lambda_E(n)}{n^{1/2}} (\hat{f}(\log n)+\hat{f}(-\log n))$ where $C(f)$ is in term of $L(0,E)$, $\frac{\Gamma'}{\Gamma}$ and the analytic conductor. Then how do you relate $\text{rank}(E)$ to that ? @AlexM. – reuns Jan 6 '18 at 23:59 • @reuns: Sorry to disappoint you, but I do not know. In fact, my comment was meant to tell Zeraoulia Rafik that I do not understand how to make use of his answer. – Alex M. Jan 7 '18 at 10:29 • Concerning the software, @WilliamStein used to be active on MO, so maybe he can give a correct link. – Alex M. Jan 7 '18 at 10:37 • @AlexM.I have provided the correct link , i think now it work – zeraoulia rafik Jan 7 '18 at 15:53
2020-07-04 16:26:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8531864285469055, "perplexity": 246.88132082462045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886178.40/warc/CC-MAIN-20200704135515-20200704165515-00098.warc.gz"}
http://math.stackexchange.com/questions/59297/getting-average/59299
Getting Average How excatly do I calculate the average? Can somebody explain step by step process of how to get an average? If I have an array of numbers (5, 3, 4, 3, 1) and so on. And I need to get the average of these numbers on a scale of 5 what is the equation of doing this? - There are many averages. Regular average is known as arithmetic mean, obtain by summing elements of your array and dividing over its length. –  Sasha Aug 23 '11 at 19:51 1 Answer To average a list of numbers, add up all the numbers and then divide by the size of your list. For your example, we'd get $$\frac{5+3+4+3+1}{5} = \frac{16}{5} = 3.2.$$ - Ok, so for a little explanation. The first division by 5 is the number of votes? The second being the number of possible choices? –  Howdy_McGee Aug 23 '11 at 20:12 There is only one division by 5. The leftmost expression is the setup. The middle one is where I added up all the numbers in the numerator (the numbers in the list). The rightmost is when I divided the sum by 5 (the size of the list). –  Austin Mohr Aug 23 '11 at 20:16 Ok cool, I see where that is coming from! What if I want to go deep than that though and only have the average between 0 and 5? I have a survey where a user votes 1-5. I want to find the average vote between 1 and 5 for all the users combined. Do I need to average the average? Divide the average by 5? sum/number of vote / number of voting options? –  Howdy_McGee Aug 23 '11 at 20:21 You don't need to consider the range that the votes come from. Just add up all the votes and divide by the number of votes cast. The average will automatically fall within the desired range. –  Austin Mohr Aug 23 '11 at 20:24 It's because the average can never be smaller than the smallest number in the list or bigger than the biggest number in the list. For example, average the numbers 2, 3, and 4. It's definitely larger than the average of 2, 2, and 2, which is $\frac{2 + 2 + 2}{3} = \frac{3 \cdot 2}{3} = 2$. On the other hand, it's definitely smaller than the average of 4, 4, and 4, which is $\frac{4 + 4 + 4}{3} = \frac{3 \cdot 4}{3} = 4$. –  Austin Mohr Aug 23 '11 at 20:56
2014-12-20 07:27:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.810168445110321, "perplexity": 331.3186599708999}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769581.93/warc/CC-MAIN-20141217075249-00120-ip-10-231-17-201.ec2.internal.warc.gz"}
https://onlinedocs.microchip.com/pr/GUID-EE7956F6-534E-4B4F-AD10-214BF9914B5A-en-US-5/GUID-D470F3C9-ADB2-4535-B057-AE1543A27B0C.html
Input Value Name: IN Offset: 0x02 Reset: 0x00 Access: - Bit7 6 5 4 3 2 1 0 IN[7:0] AccessR/W R/W R/W R/W R/W R/W R/W R/W Reset0 0 0 0 0 0 0 0 ## Bits 7:0 – IN[7:0]: Input Value Input Value This bit field shows the state of the PORTx pins when the digital input buffer is enabled. Writing a ‘0’ to bit n in this bit field has no effect. Writing a ‘1’ to bit n in this bit field will toggle the corresponding bit in PORTx.OUT. If the digital input buffer is disabled, the input is not sampled, and the bit value will not change. The digital input buffer for pin n (Pxn) can be configured in the Input/Sense Configuration (ISC) bit field in the Pin n Control (PORTx.PINnCTRL) register. The available states of each bit n in this bit field is shown in the table below. ValueDescription 0 The voltage level on Pxn is low 1 The voltage level on Pxn is high
2023-03-27 12:43:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2703937292098999, "perplexity": 6689.476940159318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948632.20/warc/CC-MAIN-20230327123514-20230327153514-00414.warc.gz"}
http://mathhelpforum.com/algebra/202279-word-problem.html
1. ## Word problem If 6 time j is 1 more than the square of k, where k is an integer, what is the smallest possible value of j? How can I show that it is indeed 1/6 ? (Converting into symbols: 6j=k^2 + 1 but I'm stuck..) 2. ## Re: Word problem You should prove that there exist no integer k such that (1+k^2) be divisible by 2 & 3 at the same time. 1+k^2 either is odd or even hence in each case is not divisible either by 2 or 3 3. ## Re: Word problem Originally Posted by donnagirl If 6 time j is 1 more than the square of k, where k is an integer, what is the smallest possible value of j? How can I show that it is indeed 1/6 ? (Converting into symbols: 6j=k^2 + 1 but I'm stuck..) If you use 0 as smallest integer , then k=0, so 6j = 1 : j = 1/6 If you use 1 as smallest integer , then k=1, so 6j = 2 : j = 1/3
2017-03-29 08:17:23
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8153286576271057, "perplexity": 626.4076436279515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190234.0/warc/CC-MAIN-20170322212950-00514-ip-10-233-31-227.ec2.internal.warc.gz"}
https://electronics.stackexchange.com/questions/401333/why-does-my-varistor-cause-my-fuse-to-blew-up
# Why does my varistor cause my fuse to blew up? I have made the following circuit (copied from a very similar picture on internet): At one side of AC/DC I put a normal fuse. The MOV is a 14D391K 14K391 390V varistor diameter 14mm AC 250V DC 320V (similar to this datasheet. However, as soon as I put 220 AC on the circuit, the fuse blew. I would expect that it would blow when it gets a power spike of over 390 V (AC). I don't know if the varistor still works, but I removed it and the circuit works as it should be. What did I do wrong? Update I see that the voltage rating is 390 VDC, and the clamping rating is 650 V 1. Does this mean that this varistor is not meant to be for AC ? 2. How can the clamping rating be higher than the voltage rating? 3. In case this varistor is to be used for AC, why does my fuse blows up when using this varistor? • What's the fuse rated for? What's the circuit on the other side of the varistor? Oct 15, 2018 at 23:39 • The varistor is rated for 390V, the fuse is rated for 220/250 V, 0.2A. The circuit is currently 5 AC/DC converters (Hi Links, to 5V), connected to nothing (so far). Oct 15, 2018 at 23:41 • What does the varistor measure on a multimeter? Oct 15, 2018 at 23:45 • What's the expected current draw of your power supplies? Are you sure it's less than 200mA? Because of it isn't, your fuse will blow from the load, not the varistor. – DSWG Oct 15, 2018 at 23:47 • @Felthry I took it out already, a bit hard to test it since I had to cut off the legs. Oct 15, 2018 at 23:48 The MOV is probably toast. They fail shorted. You can measure it with an ohmmeter. The rating (normal operation) is 250VAC (assumed to be RMS voltage of a sine wave), which is $$\\sqrt{2}\cdot 250 \$$, or about 354V peak. The 390V refers to the (nominal) peak voltage when the MOV is just starting to turn on (1mA). It's not clamping much, but it could get hot if that condition persisted, especially with DC. It could be as much as 429V or as little as 351V. When it is clamping a heavy transient (50A) the voltage can be as much as 650V. So if you anticipate surges no greater than 50A you can use it to protect a circuit that is capable of withstanding 650V peak. Perhaps worth noting that MOVs "wear out" and a series of heavy transients can lead them to failing shorted. Of course if the surge is large enough they can fail open because the lead wires have been blown off etc. Silicon devices such as bipolar TVS devices can provide sharper clamping without the wear-out mechanism (they will still fail- often shorted- if overloaded, of course). • Thanks, I used bipolar TVS for my (RS485/DMX512) signals (for 15V), so I will order some 390V TVS diodes for the AC signals than, thanks. Oct 16, 2018 at 10:51 • I added a new question: electronics.stackexchange.com/questions/401418/… ... in principle: would a P6KE400CA be a good choice? Oct 16, 2018 at 11:08 • I found P6KE440CA TVS diodes, which have a reverse breakdown of 374 V, which should be good I guess. Oct 16, 2018 at 11:26
2022-05-19 11:49:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4496859312057495, "perplexity": 2815.3817504145363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662527626.15/warc/CC-MAIN-20220519105247-20220519135247-00583.warc.gz"}
https://physics.stackexchange.com/questions/490408/do-magnetic-field-lines-indicate-how-a-mangetic-field-will-act-on-an-idealized
# Do magnetic field lines indicate how a mangetic field will act on an “idealized” test magnetic north monopole? We imagine an electric field and a magnetic field as vector fields. When we are introduced to a static electric field, we usually picture it as an infinite number of vectors (magnitude,direction) in every point in space that will affect the movement ( or will exert a force) of any charged particle going thorough it. So we can imagine how a test charge will follow (let's assume that the charged paricle field is negligible with respect to the "main" field) our electric/vector field. Under this assumption, we usually draw field lines to represent how the field will act on a positive charge. My question is the following: When we draw magnetic field lines, are we describing how the magnetic field will act on an "idealized" test magnetic north monopole? ## 2 Answers The magnetic field lines shows the directions along which little iron shavings line up. Each shaving become a magnetic dipole in the outer magnetic field, so the magnetic field lines describe how field will act on a test magnetic DIPOLE! • Actually to me the magnetic field lines are identical to the ones of an "electric dipole" that indicates how that field act on a test positive charge – Gabriele Scarlatti Jul 8 at 11:07 • However , there is a difference. Electric field will act on a test positive charge, even if it is motionless. But magnetic would not act on a motionless charge, action will be only if this charge moves. – Leiba Goldstein Jul 8 at 11:15 • @GabrieleScarlatti Just an addition: If there were magnetic monopoles, the magnetic field would indeed act on a positive magnetic charge along these lines. But the thought process is similar on dipoles too, especially the part about shavings. Both are correct. – acarturk Jul 8 at 11:40 Yes, you are correct. This is an example of electromagnetic duality --- if you swap electric fields and magnetic ones, and electric charges for magnetic monopoles, the laws of physics are the same* *If there are time derivatives involved, there are some multiplicative factors of $$-1$$ due to the Lorentzian signature of spacetime. For the motion of test charges in static electromagnetic fields these do not arise.
2019-10-16 21:43:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6068161725997925, "perplexity": 561.0394655006811}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986670928.29/warc/CC-MAIN-20191016213112-20191017000612-00118.warc.gz"}
https://perso.univ-rennes1.fr/san.vu-ngoc/bib/
Searching for my papers on other websites is not always obvious, because of the various (mis)spellings of my name (including accented characters). If you have access to zbMATH or MathSciNet, this is your best bet, they are good at this. Here are some links with incomplete data you may try: ArXiv, HAL. Here are the correct spellings. Any other combination is wrong, but many can be found online unfortunately... • San Vu Ngoc • San Vũ Ngọc #### Vietnamese ordering: • Vu Ngoc San • Vũ Ngọc San The list below is produced from my BibTeX file. Here is also a more traditional PDF version. • Everything • Book • Article • Chapter • Conference • Manuscript • Thesis • #### Eigenvalue Asymptotics for Confining Magnetic Schrödinger Operators with Complex Potentials International Mathematics Research Notices (2022) This article is devoted to the spectral analysis of the electromagnetic Schrödinger operator on the Euclidean plane. In the semiclassical limit, we derive a pseudo-differential effective operator that allows us to describe the spectrum in various situations and appropriate regions of the complex plane. Not only results of the self-adjoint case are proved (or recovered) in the proposed unifying framework, but also new results are established when the electric potential is complex-valued. In such situations, when the non-self-adjointness comes with its specific issues (lack of a “spectral theorem”, resolvent estimates), the analogue of the “low-lying eigenvalues” of the self-adjoint case are still accurately described and the spectral gaps estimated. • #### The inverse spectral problem for quantum semitoric systems (2021) Given a quantum semitoric system composed of pseudodifferential operators, Berezin-Toeplitz operators, or a combination of both, we obtain explicit formulas for recovering, from the semiclassical asymptotics of the joint spectrum, all symplectic invariants of the underlying classical semitoric system. Our formulas are based on the possibility to obtain good quantum numbers for joint eigenvalues from the bare data of the joint spectrum. In the spectral region corresponding to regular values of the momentum map, the algorithms developed by Dauge, Hall and the second author (27) produce such labellings. In our proof, it was crucial to extend these algorithms to the boundary of the spectrum, which led to the new notion of asymptotic half-lattices, and to globalize the resulting labellings. Using the construction given by Pelayo and the second author in (79), our results prove that semitoric systems are completely spectrally determined in an algorithmic way : from the joint spectrum of a quantum semitoric system one can construct a representative of the isomorphism class of the underlying classical semitoric system. In particular, this recovers the uniqueness result obtained by Pelayo and the authors in (62,61), and completes it with the explicit computation of all invariants, including the twisting index. In the cases of the spin-oscillator and the coupled angular momenta, we implement the algorithms and illustrate numerically the computation of the invariants from the joint spectrum. • #### Magnetic WKB constructions on surfaces Reviews in Mathematical Physics 33 (07) pp. 2150022 (2021) This article is devoted to the description of the eigenvalues and eigenfunctions of the magnetic Laplacian in the semiclassical limit via the complex WKB method. Under the assumption that the magnetic field has a unique and non-degenerate minimum, we construct the local complex WKB approximations for eigenfunctions on a general surface. Furthermore, in the case of the Euclidean plane, with a radially symmetric magnetic field, the eigenfunctions are approximated in an exponentially weighted space. • #### Uniform spectral asymptotics for semiclassical wells on phase space loops Indag. Math. 32 (1) pp. 3-32 (2021) We consider semiclassical self-adjoint operators whose symbol, defined on a two-dimensional symplectic manifold, reaches a non-degenerate minimum $b_0$ on a closed curve. We derive a classical and quantum normal form which gives uniform eigenvalue asymptotics in a window $(−{\infty}, b_0 + \varepsilon)$ for $\varepsilon>0$ independent on the semiclassical parameter. These asymptotics are obtained in two complementary settings: either an approximate invariance of the system under translation along the curve, which produces oscillating eigenvalues, or a Morse hypothesis reminiscent of Helffer-Sjöstrand’s “miniwell” situation. • #### Spatio-temporal simulation of Covid-19 propagation via continuous automata (2020) We present a new continuous automata based computer simulation of virus propagation in human populations, and apply it to the Covid-19 outbreak, in various scales and situations. We also take the opportunity to propose various mathematical questions, and ask about their biological relevance. • #### Quantum footprints of Liouville integrable systems Reviews in Mathematical Physics 31 (1) pp. 2060014 (2021) We discuss the problem of recovering geometric objects from the spectrum of a quantum integrable system. In the case of one degree of freedom, precise results exist. In the general case, we report on the recent notion of good labellings of asymptotic lattices. • #### Exponential localization in 2D pure magnetic wells Arkiv för Matematik 59 (1) pp. 53-85 International Press of Boston (2021) We establish a magnetic Agmon estimate in the case of a purely magnetic single non-degenerate well, by means of the Fourier-Bros-Iagolnitzer transform and microlocal exponential estimates à la Martinez-Sjöstrand. • #### Correction to: Inverse spectral theory for semiclassical Jaynes–Cummings systems Mathematische Annalen 375 (1) pp. 917-920 (2019) We explain why Theorem B in the original article does not follow from the main result of this paper (Theorem A). While we conjecture that Theorem B should nevertheless be true, in this erratum we prove a slightly weaker version of it. • #### Asymptotic lattices, good labellings, and the rotation number for quantum integrable systems Discrete and Continuous Dynamical Systems 42 (12) pp. 5683-5735 (2022) This article introduces the notion of good labellings for asymptotic lattices in order to study joint spectra of quantum integrable systems from the point of view of inverse spectral theory. As an application, we consider a new spectral quantity for a quantum integrable system, the quantum rotation number. In the case of two degrees of freedom, we obtain a constructive algorithm for the detection of appropriate labellings for joint eigenvalues, which we use to prove that, in the semiclassical limit, the quantum rotation number can be calculated on a joint spectrum in a robust way, and converges to the well-known classical rotation number. The general results are applied to the semitoric case where formulas become particularly natural. • #### Analytic Bergman operators in the semiclassical limit Duke Math. J. 169 (16) pp. 3033-3097 (2020) Transposing the Berezin quantization into the setting of analytic microlocal analysis, we construct approximate semiclassical Bergman projections on weighted $L^2$ spaces with analytic weights, and show that their kernel functions admit an asymptotic expansion in the class of analytic symbols. As a corollary, we obtain new estimates for asymptotic expansions of the Bergman kernel on $\CM^n$ and for high powers of ample holomorphic line bundles over compact complex manifolds. • #### On the stability of the Schwartz class under the magnetic Schrödinger flow Math. Research Letters 27 (1) pp. 1-18 (2020) We prove that the Schwartz class is stable under the magnetic Schrödinger flow when the magnetic 2-form is non-degenerate and does not oscillate too much at infinity. • #### Long-time dynamics of coherent states in strong magnetic fields Amer. J. Math. (2021) We consider a charged particle on a plane, subject to a strong, purely magnetic external field. It is well known that the quantum evolution closely follows the classical dynamics for short periods of time, while for times larger than $\ln \frac{1}{\hbar}$, where $\hbar$ is Planck’s constant, purely quantum phenomena are expected to happen. In this paper we investigate the Schrödinger evolution of generalized coherent states for times of order $1/\hbar$. We prove that, when the initial energy is low, the initial states splits into multiple wavepackets, each one following the average dynamics of the guiding center motion but at its own speed. • #### Boundary effects on the magnetic Hamiltonian dynamics in two dimensions Enseign. Math. 64 (3-4) pp. 353-369 (2018) We study the Hamiltonian dynamics of a charged particle submitted to a pure magnetic field in a two-dimensional domain. We provide conditions on the magnetic field in a neighbourhood of the boundary to ensure the confinement of the particle. We also prove a formula for the scattering angle in the case of radial magnetic fields. • #### Un monde d’oscillations — de l’horloge de Huygens à la physique quantique Tangente 167 (2017) • #### Les Annales Henri Lebesgue Ann. H. Lebesgue 0 pp. 1-6 (2017) • #### Integrable systems, symmetries, and quantization Lett. Math. Phys. 108 (3) pp. 499-571 (2017) These notes are an expanded version of a mini-course given at the Poisson 2016 conference in Geneva. Starting from classical integrable systems in the sense of Liouville, we explore the notion of non-degenerate singularities and expose recent research in connection with semi-toric systems. The quantum and semiclassical counterpart are also presented, in the viewpoint of the inverse question: from the quantum mechanical spectrum, can one recover the classical system? • #### The affine invariant of generalized semitoric systems Nonlinearity 30 (11) pp. 3993-4028 (2017) • #### Magnetic Wells in dimension three Anal. & PDE 9 (7) pp. 1575-1608 (2016) This paper deals with semiclassical asymptotics of the three-dimensional magnetic Laplacian in presence of magnetic confinement. Using generic assumptions on the geometry of the confinement, we exhibit three semiclassical scales and their corresponding effective quantum Hamiltonians, by means of three microlocal normal forms *à la Birkhoff*. As a consequence, when the magnetic field admits a unique and non degenerate minimum, we are able to reduce the spectral analysis of the low-lying eigenvalues to a one-dimensional $\hbar$-pseudo-differential operator whose Weyl’s symbol admits an asymptotic expansion in powers of $\hbar^{\frac1 2}$. • #### Spectral limits of semiclassical commuting self-adjoint operators A mathematical tribute to Professor José Marı́ a Montesinos Amilibia pp. 527-546 Dep. Geom. Topol. Fac. Cien. Mat. UCM, Madrid (2016) Using an abstract notion of semiclassical quantization for self-adjoint operators, we prove that the joint spectrum of a collection of commuting semiclassical self-adjoint operators converges to the classical spectrum given by the joint image of the principal symbols, in the semiclassical limit. This includes Berezin-Toeplitz quantization and certain cases of ℏ-pseudodifferential quantization, for instance when the symbols are uniformly bounded, and extends a result by L. Polterovich and the authors. In the last part of the paper we review the recent solution to the inverse problem for quantum integrable systems with periodic Hamiltonians, and explain how it also follows from the main result in this paper. • #### Sharp symplectic embeddings of cylinders Indag. Math. 27 (1) pp. 307-317 (2016) • #### Inverse spectral theory for semiclassical Jaynes-Cummings systems Math. Ann. 364 (3) pp. 1393-1413 (2016) Quantum semitoric systems form a large class of quantum Hamiltonian integrable systems with circular symmetry which has received great attention in the past decade. They include systems of high interest to physicists and mathematicians such as the Jaynes-Cummings model (1963), which describes a two-level atom interacting with a quantized mode of an optical cavity, and more generally the so-called systems of Jaynes-Cummings type. In this paper we consider the joint spectrum of a pair of commuting semiclassical operators forming a quantum integrable system of Jaynes-Cummings type. We prove, assuming the Bohr-Sommerfeld rules hold, that if the joint spectrum of two of these systems coincide up to O(ℏ²), then the systems are isomorphic. • #### Geometry and spectrum in 2D magnetic wells Ann. Inst. Fourier 65 (1) pp. 137-169 (2015) This paper is devoted to the classical mechanics and spectral analysis of a pure magnetic Hamiltonian in $\mathbb{R}^2$. It is established that both the dynamics and the semiclassical spectral theory can be treated through a Birkhoff normal form and reduced to the study of a family of one dimensional Hamiltonians. As a corollary, recent results by Helffer-Kordyukov are extended to higher energies. • #### Fiber connectivity and bifurcation diagrams for almost toric systems Journal of Symplectic Geometry 13 (2) pp. 343-386 (2015) • #### Asymptotic Analysis for Schrödinger Hamiltonians via Birkhoff-Gustavson Normal Form Asymptotic Analysis 85 pp. 1-28 (2013) • #### Microlocal Normal Forms for the Magnetic Laplacian Journées EDP (2014) • #### Semiclassical inverse spectral theory for singularities of focus-focus type Commun. Math. Phys. 329 (2) pp. 809-820 (2014) We prove, assuming that the Bohr-Sommerfeld rules hold, that the joint spectrum near a focus-focus singular value of a quantum integrable system determines the classical Lagrangian foliation around the full focus-focus leaf. The result applies, for instance, to $\hbar$-pseudodifferential operators on cotangent bundles and Berezin-Toeplitz operators on prequantizable compact symplectic manifolds. • #### Smooth normal forms for integrable Hamiltonian systems near a focus–focus singularity Acta Math. Viet. 38 (1) (2013) • #### Semiclassical quantization and spectral limits of $\hslash$-pseudodifferential and Berezin-Toeplitz operators Proc. Lond. Math. Soc. (3) 109 (3) pp. 676-696 (2014) • #### Hofer’s question on intermediate symplectic capacities Proc. London Math. Soc (2015) (2012) • #### De l’autre côté du miroir... Le Spectre Image des maths (2013) • #### Isospectrality for Quantum Toric Integrable Systems Ann. Sci. École Norm. Sup. 43 pp. 815-849 (2013) • #### First steps in symplectic and spectral theory of integrable systems Discrete Contin. Dyn. Syst. 32 (10) pp. 3325-3377 (2012) • #### Hamiltonian dynamics and spectral theory for spin-oscillators Comm. Math. Phys. 309 (1) pp. 123-154 (2012) • #### Spectral invariants for coupled spin-oscillators Séminaire X-EDP (2011) • #### Remembering Johannes J. Duistermaat Notices AMS 58 (06) (2011) • #### Johannes Jisse (dit Hans) Duistermaat Gazette des mathématiciens 127 (2011) • #### Symplectic theory of completely integrable Hamiltonian systems Bull. Amer. Math. Soc. (N.S.) 48 (3) pp. 409-455 (2011) • #### Constructing integrable systems of semitoric type Acta Math. 206 pp. 93-125 (2011) • #### Semitoric integrable systems on symplectic 4-manifolds Invent. Math. 177 (3) pp. 571-597 (2009) • #### Symplectic inverse spectral theory for pseudodifferential operators Progr. Math. 292 pp. 353-372 Birkhäuser/Springer, New York (2011) • #### Symplectic invariants near hyperbolic-hyperbolic points Regular & Chaotic Dyn 12 (6) pp. 689-716 (2007) • #### Spectral asymptotics via the semiclassical Birkhoff normal form Duke Math. J. 143 (3) pp. 463-511 (2008) • #### Quantum Birkhoff normal form and semiclassical analysis Adv. Studies in Pure Math. Mathematical Society of Japan (2009) • #### Diophantine tori and spectral asymptotics for non-selfadjoint operators Amer. J. Math. 169 (1) pp. 105-182 (2007) • #### A Singular Poincaré Lemma Int. Math. Res. Not. 2005 (1) pp. 27-45 (2005) (2003) • #### Symplectic techniques for semiclassical completely integrable systems Topological methods in the theory of integrable systems pp. 241-270 Camb. Sci. Publ., Cambridge (2006) • #### Moment polytopes for symplectic manifolds with monodromy Adv. in Math. 208 pp. 909-934 (2007) A natural way of generalising Hamiltonian toric manifolds is to permit the presence of generic isolated singularities for the moment map. For a class of such “almost-toric 4-manifolds” which admits a Hamiltonian $S^1$-action we show that one can associate a group of convex polygons that generalise the celebrated moment polytopes of Atiyah, Guillemin–Sternberg. As an application, we derive a Duistermaat–Heckman formula demonstrating a strong effect of the possible monodromy of the underlying integrable system. • #### Vanishing twist near focus-focus points Nonlinearity 17 (5) pp. 1777-1785 (2004) • #### Sign of the monodromy for Liouville integrable systems Annales Henri Poincaré 3 (5) pp. 883-894 (2002) • #### The Quantum Birkhoff Normal Form and Spectral Asymptotics Journées EDP CNRS (2006) • #### Invariants symplectiques et semi-classiques des systèmes intégrables avec singularités Séminaire X-EDP (2001) • #### On semi-global invariants for focus-focus singularities Topology 42 (2) pp. 365-380 (2003) • #### Quantum Monodromy and Bohr–Sommerfeld Rules Letters in Mathematical Physics 55 (3) pp. 205-217 Kluwer Academic Publishers (2001) • #### Singular Bohr-Sommerfeld rules for 2D integrable systems Ann. Sci. École Norm. Sup. (4) 36 pp. 1-55 (2003) • #### Quantum monodromy in integrable systems Commun. Math. Phys. 203 (2) pp. 465-479 (1999) • #### Formes normales semi-classiques des systèmes complètement intégrables au voisinage d’un point critique de l’application moment Asymptotic Analysis 24 (3,4) pp. 319-342 (2000) • #### Bohr-Sommerfeld conditions for Integrable Systems with critical manifolds of focus-focus type Comm. Pure Appl. Math. 53 (2) pp. 143-217 (2000) • #### Sur le spectre des systèmes complètement intégrables semi-classiques avec singularités Institut Fourier, Université Grenoble 1 (1998) • #### La recherche mathématique aux Pays-Bas TechnoPol’der (1998) • #### Indices de difféomorphismes de contact: exemple du tore en dimension 3 Ecole Normale Supérieure Paris / UC Berkeley / Univ. Paris XI (1994) • #### Mémoire de magistère École Normale Supérieure (Ulm) (1995) • #### Systèmes intégrables semi-classiques: du local au global Panoramas et Synthèses SMF (2006) • #### Finite dimensional integrable systems: on the crossroad of algebra, geometry and physics • V. Matveev • E. Miranda • V. Roubtsov • S. Tabashnikov • S. Vũ Ngọc Journal of Geometry and Physics 87 Elsevier (2015)
2023-02-03 04:45:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6313085556030273, "perplexity": 1460.0100420826025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500042.8/warc/CC-MAIN-20230203024018-20230203054018-00025.warc.gz"}
https://proofwiki.org/wiki/Definition:Bounded_Normed_Vector_Space
# Definition:Bounded Subset of Normed Vector Space This page is about Bounded in the context of Normed Vector Space. For other uses, see Bounded. ## Definition Let $M = \struct {X, \norm {\, \cdot \,}}$ be a normed vector space. Let $M' \subseteq X$. ### Definition 1 $M'$ is bounded (in $M$) if and only if: $\exists x \in X, C \in \R_{> 0}: \forall y \in Y: \norm {x - y} \le C$ ### Definition 2 $M'$ is bounded (in $M$) if and only if: $\exists \epsilon \in \R_{>0} : \exists x \in X : Y \subseteq \map {B_\epsilon^-} x$ where $\map {B_\epsilon^-} x$ is a closed ball in $M$.
2023-03-30 01:58:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9319264888763428, "perplexity": 524.473643764135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949093.14/warc/CC-MAIN-20230330004340-20230330034340-00687.warc.gz"}
https://www.tutorialspoint.com/duration-compareto-method-in-java
# Duration compareTo() method in Java Java 8Object Oriented ProgrammingProgramming #### Complete Java Programming Fundamentals With Sample Projects 98 Lectures 7.5 hours #### Get your Java dream job! Beginners interview preparation 85 Lectures 6 hours #### Core Java bootcamp program with Hands on practice Featured 99 Lectures 17 hours Two durations can be compared using the compareTo() method in the Duration class in Java. This method requires a single parameter i.e. the duration to be compared. If the first duration is greater than the second duration it returns a positive number, if the first duration is lesser than the second duration it returns a negative number and if both the durations are equal it returns zero. A program that demonstrates this is given as follows − ## Example Live Demo import java.time.Duration; public class Demo { public static void main(String[] args) { Duration d1 = Duration.ofHours(8); Duration d2 = Duration.ofHours(6); System.out.println("The first duration is: " + d1); System.out.println("The second duration is: " + d2); int val = d1.compareTo(d2); if(val > 0) System.out.println("The first duration is greater than the second duration"); else if(val < 0) System.out.println("The first duration is lesser than the second duration"); else System.out.println("The durations are equal"); } } ## Output The first duration is: PT8H The second duration is: PT6H The first duration is greater than the second duration Now let us understand the above program. First the two durations are displayed. Then the durations are compared using the compareTo() method and the result is displayed using if else statement. A code snippet that demonstrates this is as follows − Duration d1 = Duration.ofHours(8); Duration d2 = Duration.ofHours(6); System.out.println("The first duration is: " + d1); System.out.println("The second duration is: " + d2); int val = d1.compareTo(d2); if(val > 0) System.out.println("The first duration is greater than the second duration"); else if(val < 0) System.out.println("The first duration is lesser than the second duration"); else System.out.println("The durations are equal"); Updated on 30-Jul-2019 22:30:25
2022-12-05 04:43:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1745147705078125, "perplexity": 6714.956986490882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711003.56/warc/CC-MAIN-20221205032447-20221205062447-00538.warc.gz"}
https://brilliant.org/discussions/thread/cryptology-2/
Cryptology Cryptology is something I have been trying on for the last few months. Many combinations. Thus, very hard to crack. So, I have decided to create some. Of course, I am not fast enough, so hopefully anyone can create some too? I will add about one to three from your questions per week. Have fun! ~thefourseasons Note by Aloysius Ng 3 years, 8 months ago MarkdownAppears as *italics* or _italics_ italics **bold** or __bold__ bold - bulleted- list • bulleted • list 1. numbered2. list 1. numbered 2. list Note: you must add a full line of space before and after lists for them to show up correctly paragraph 1paragraph 2 paragraph 1 paragraph 2 [example link](https://brilliant.org)example link > This is a quote This is a quote # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" # I indented these lines # 4 spaces, and now they show # up as a code block. print "hello world" MathAppears as Remember to wrap math in $$...$$ or $...$ to ensure proper formatting. 2 \times 3 $$2 \times 3$$ 2^{34} $$2^{34}$$ a_{i-1} $$a_{i-1}$$ \frac{2}{3} $$\frac{2}{3}$$ \sqrt{2} $$\sqrt{2}$$ \sum_{i=1}^3 $$\sum_{i=1}^3$$ \sin \theta $$\sin \theta$$ \boxed{123} $$\boxed{123}$$
2018-06-23 02:03:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9994971752166748, "perplexity": 12274.639024589584}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864919.43/warc/CC-MAIN-20180623015758-20180623035758-00485.warc.gz"}
https://plainmath.net/3508/similar-matrices-equal-begin-bmatrix-303-bmatrix-equal-bmatrix-bmatrix
# Show that A and B are not similar matrices A=begin{bmatrix}1 & 0 &1 2 & 0 &2 3&0&3 end{bmatrix} , B=begin{bmatrix}1 & 1 &0 2 & 2 &0 0&1&1 end{bmatrix} Show that A and B are not similar matrices $A=\left[\begin{array}{ccc}1& 0& 1\\ 2& 0& 2\\ 3& 0& 3\end{array}\right],B=\left[\begin{array}{ccc}1& 1& 0\\ 2& 2& 0\\ 0& 1& 1\end{array}\right]$ You can still ask an expert for help • Questions are typically answered in as fast as 30 minutes Solve your problem for the price of one coffee • Math expert for every subject • Pay only if we can solve it d2saint0 Step 1 If the matrices A and B are not similar, then their eigenvalues are not the same. Step 2 Consider the matrix A and find its eigenvalues as follows. $A=\left[\begin{array}{ccc}1& 0& 1\\ 2& 0& 2\\ 3& 0& 3\end{array}\right]$ $|A-\lambda I|=0$ $|\begin{array}{c}\left[\begin{array}{ccc}1& 0& 1\\ 2& 0& 2\\ 3& 0& 3\end{array}\right]-\lambda \left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]\end{array}|=0$ $|\begin{array}{ccc}1-\lambda & 0& 1\\ 2& -\lambda & 2\\ 3& 0& 3-\lambda \end{array}|=0$ $\left(1-\lambda \right)\left(-\lambda \right)\left(3-\lambda \right)-0+1\left(0+3\lambda \right)=0$ $-{\lambda }^{3}+4{\lambda }^{2}=0$ $\lambda \left(-{\lambda }^{2}+4\lambda \right)=0$ $\lambda \left({\lambda }^{2}-4\lambda \right)=0$ $\lambda \left(\lambda \right)\left(\lambda -4\right)=0$ ${\lambda }_{1}=0,{\lambda }_{2}=0,{\lambda }_{3}=4$ Consider the matrix B and find its eigenvalues as follows. $B=\left[\begin{array}{ccc}1& 1& 0\\ 2& 2& 0\\ 0& 1& 1\end{array}\right]$ $|B-\lambda I|=0$ $|\begin{array}{c}\left[\begin{array}{ccc}1& 1& 0\\ 2& 2& 0\\ 0& 1& 1\end{array}\right]-\lambda \left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]\end{array}|=0$ $|\begin{array}{ccc}1-\lambda & 1& 0\\ 2& 2-\lambda & 0\\ 0& 1& 1-\lambda \end{array}|=0$ $\left(1-\lambda \right)\left(2-\lambda \right)\left(1-\lambda \right)-1\left(2\left(1-\lambda \right)\right)+0=0$ $-{\lambda }^{3}+4{\lambda }^{2}-3\lambda =0$ $\lambda \left(-{\lambda }^{2}+4\lambda -3\right)=0$ $\lambda \left({\lambda }^{2}-4\lambda +3\right)=0$ $\lambda \left(\lambda -1\right)\left(\lambda -3\right)=0$ ${\lambda }_{1}=0,{\lambda }_{2}=1,{\lambda }_{3}=3$ Observe that the eigenvalues of the matrices A and B are not the same and thus these matrices are not similar. Jeffrey Jordon
2022-06-30 11:20:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 29, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8957356214523315, "perplexity": 691.3741308606886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103671290.43/warc/CC-MAIN-20220630092604-20220630122604-00014.warc.gz"}
https://dsp.stackexchange.com/questions/20368/what-is-the-difference-between-cubic-interpolation-and-cubic-spline-interpolat/56119
# What is the difference between cubic interpolation and cubic “Spline” interpolation?. How to use it for upsampling purpose? After considering a couple of advices and suggestions for upsampling techniques here, I finally converged to use the cubic interpolation technique to estimate the voltage values corresponding to intermediate samples present between the original or previous samples. I know that spline interpolation is basically used for getting smoother curves, but what makes it different from the normal cubic interpolation technique as both of them use a 3rd degree polynomial to estimate intermediate values?. Another implementation issue is that, for example, If I have some voltages corresponding to some samples say V = 3.674, 6.791, 8.888, 9.667..... Sample = 2, 3, 4, 5..... Now, If we have to find the voltage information corresponding to the intermediate sample 3.5, then using cubic polynomial method, we arrive at 4 equations and 4 unknowns obtained by using the information provided by the neighboring samples closest to sample 3.5. V(2) = a + 2b + 4c + 8d V(3) = a + 3b + 9c + 16d V(4) = a + 4b + 16c + 64d V(5) = a + 5b + 125c + 125d So, solving these equations I arrived at a = -4.428 b = 4.3756 c = -0.06299 d = -0.04966 Using these values , we can calculate the voltage value at the sample 3.5 as V(3.5) = a + 3.5b + 12.25c + 42.875d V(3.5) = 7.186 Volts Now my question - Is this method of interpolation suitable for large sampling rates?. How can I use this technique for upsampling a signal say from 10 sps to 100 sps for N = 1024 sample points?. I know that I have to develop a function that performs this cubic interpolation task between original samples (in C++), but I am just wondering how to implement it for upsampling for a continuous series of samples . Any suggestions, ideas or advices regarding the topic and its implementation would be appreciated. Thanks! I understand that cubic interpolation can operate on 4 data points and the more sophisticated technique I can think of is cubic spline. In case I am using the normal cubic interpolation, how about I loop through the "N" sample points i.e. 1024, for a condition below the "input sampling rate" i.e. 10 sps considering 4 data points each and then performing the interpolation function based on the up sampling factor between each of those 4 consecutive data points (Meaning - Interpolating/Estimating 10 values between each of those 4 data points) and then the function considers the next 4 data points to perform the same operation and it goes on until a 100 samples has bee acquired i.e. the output sampling rate! • it looks like your interpolation polynomial is the Lagrange interpolation polynomial. consider Hermite (or "osculating") polynomials, instead. they will preserve continuity of as many derivatives at the splice points as possible. – robert bristow-johnson Feb 6 '15 at 18:42 The difference between cubic interpolation as described in your question and cubic spline interpolation is that in cubic interpolation you use 4 data points to compute the polynomial. There are no constraints on the derivatives. Cubic spline interpolation computes a third order polynomial only from two data points with the additional constraint that the first and second derivative at the interpolation points are continuous. So if you have 4 points, then you compute 3 different polynomials (between points 1-2, 2-3, and 3-4), and these polynomials are smoothly connected in the sense that their first and second derivatives are equal at the given data points. This is not the case with standard polynomial interpolation. But if you want an efficient implementation I wouldn't necessarily use polynomial or spline interpolation but I would go for a standard polyphase implementation of a low-pass interpolation filter. If $$L$$ is the upsampling factor you get $$L$$ parallel polyphase filters operating at the lower sampling rate, followed by $$L$$ upsamplers, the results of which are added. The low-pass filter can be designed using the window method (e.g. a Kaiser window). There is a lot of material available on the polyphase implementation of interpolators. This Master's thesis is very accessible. Just have a look at single-stage polyphase implementations using linear-phase FIR filters and ignore the rest. EDIT: If you want or need to use cubic interpolation, here's what you need to do. First of all, some coefficients in your equations are wrong, that's why also the result is not correct. These would be the correct equations: V(2) = a + 2b + 4c + 8d V(3) = a + 3b + 9c + 27d V(4) = a + 4b + 16c + 64d V(5) = a + 5b + 25c + 125d However, you don't want to do it this way because if you always use the sample index as the $$x$$-coordinate you'll end up with a very large range for the coefficients of the system of linear equations (because you need to take the third power), and your coefficient matrix will sooner or later become ill-conditioned (asking for numerical problems). Second, and even more importantly, you would get a different system of equations for every new segment of the input signal, which would be very inefficient. You can do the interpolation by solving a $$4\times 4$$ system of linear equations with the same coefficient matrix each time. Just the left-hand side vector (containing the values of the given data points, V in your example) will be different for each segment. You compute a cubic polynomial for each segment (i.e. for each range between two given data points) by considering the data points defining the segment and the two adjacent data points, just as in your example. If you define the cubic polynomial as $$P(x)=a_0+a_1x+a_2x^2+a_3x^3$$ and if you define the $$x$$-axis values of the 4 current input data points as $$-1$$, $$0$$, $$1$$, and $$2$$, you get the following system of equations: \begin{align}y_0&=a_0-a_1+a_2-a_3\\ y_1&=a_0\\ y_2&=a_0+a_1+a_2+a_3\\ y_3&=a_0+2a_1+4a_2+8a_3\end{align} where $$y_i$$, $$k=0,1,\ldots, 3$$ are the current data points, and the current segment is between the points $$y_1$$ and $$y_2$$. The system above is readily solved as \begin{align} a_0&=y_1\\ a_1&=-\frac13 y_0 - \frac12 y_1 + y_2 - \frac16 y_3\\ a_2&=\frac12 (y_0 + y_2) - y_1\\ a_3&=\frac12 \left[(y_1 - y_2) + \frac13 (y_3 - y_0) \right]\\ \end{align} So you just need to evaluate the above 4 expressions for the polynomial coefficients for each set of 4 data values. As soon as you have the coefficients, you can compute the interpolated output values in the current segment by evaluating the polynomial at $$x_i=i/L$$, $$i=1,2,\ldots,L-1$$, where $$L$$ is the upsampling factor ($$L=10$$ in your case). Then you forget the oldest input value, read the new input value, and you have a new set of 4 values from which you compute a new polynomial for the current segment, etc. Here is a little Matlab/Octave script implementing this method: x = randn(20,1); % some input data L = 10; % upsampling factor % append points at both ends for first and last segment N = length(x); x2 = [x(1);x(:);x(N)]; Nseg = N-1; % # segments % matrix to compute polynomial coefficients from the current 4 data points: a = A*y A = [0 1 0 0;-1/3 -.5 1 -1/6;.5 -1 .5 0;-1/6 .5 -.5 1/6]; % x-axis values of interpolated values in the current segment mu = (0:L-1)/L; % allocate output vector Ny = Nseg*L + 1; y = zeros(Ny,1); % loop over all segments for i = 1:Nseg, xt = x2(i:i+3); a = A*xt; yt = a(1) + a(2)*mu + a(3)*mu.^2 + a(4)*mu.^3; Iy = (i-1)*L + (1:L); y(Iy) = yt; end y(Ny) = x(N); % plot result tx = 0:N-1; ty = (0:Ny-1)*(N-1)/(Ny-1); plot(ty,y,'r-'), hold on stem(ty,y,'r'), hold on stem(tx,x), hold off • Note that the polyphase method is an optimization, and not necessary for low-pass filter kernel or windowed-Sinc interpolation. The optimization may even be wrong for systems with fast math but slow tiny data caches. Thus, the usual rule is to avoid premature optimization. – hotpaw2 Feb 5 '15 at 17:35 • Yes, it's just one possible implementation which is very practical when cost is determined by the number of multiplications. – Matt L. Feb 5 '15 at 17:57 • On one of my iOS devices, I was surprised to find that a large polyphase table lookup was slower than an earlier vectorized multiple transcendental function computation (lots of multiplies). I surmised that the cache miss penalty was vast compared to a stream of pipelined FPMACs. – hotpaw2 Feb 5 '15 at 18:32 • I am restricted to the use of cubic interpolation or cubic spline interpolation technique. Once this is done, then I can pass it through a filter. Therefore, I am not looking for alternatives and looking forward to suggestions related to the cubic interpolation method. Thanks! – PsychedGuy Feb 6 '15 at 6:20 • @DigitalGeeK: You just shift the point of reference (x-axis value $0$) to the left end of the current segment. The right end has value $x=1$, and the two points to the left and to the right of the current segment have $x$-axis values $-1$ and $2$. This results in simple algebra, but this choice is completely arbitrary. – Matt L. Feb 7 '15 at 13:59 For those seeking more DSP/FPGA friendly solution, just substitude 'a' in first equation and rearrange to get: $$\begin{array}{ll} C_0(x) = &-\frac{1}{6}x^3+\frac{1}{2}x^2-\frac{1}{3}x \\ C_1(x) = &\frac{1}{2}-x^2-\frac{1}{2}x+1 \\ C_2(x) = &-\frac{1}{2}x^3 + \frac{1}{2}x^2+x \\ C_3(x) = &\frac{1}{6}x^3 - \frac{1}{6}x \end{array}$$ And interpolated value: $$P(x) = \sum_{i=0}^{3} C_i y_i \textrm{ for } x=[0\ldots1)$$ Pre compute required coefficients for each phase and multiply selected phase with data points. Example coefficients for upsampling factor 10: -0.0000 1.0000 -0.0000 0.0000 -0.0285 0.9405 0.1045 -0.0165 -0.0480 0.8640 0.2160 -0.0320 -0.0595 0.7735 0.3315 -0.0455 -0.0640 0.6720 0.4480 -0.0560 -0.0625 0.5625 0.5625 -0.0625 -0.0560 0.4480 0.6720 -0.0640 -0.0455 0.3315 0.7735 -0.0595 -0.0320 0.2160 0.8640 -0.0480 -0.0165 0.1045 0.9405 -0.0285 Matt L. got error in his equations (but not in a code). a2 should be like this $$a_2 = \frac{1}{2}(y_0+y_2)-y1$$ • Error corrected, thanks. – Matt L. Jan 26 at 11:07
2020-08-13 06:44:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 24, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6220885515213013, "perplexity": 602.8560749542957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738960.69/warc/CC-MAIN-20200813043927-20200813073927-00373.warc.gz"}
https://ipvs.informatik.uni-stuttgart.de/mlr/johannes/icra2015/
## Active Exploration of Joint Dependency Structures Johannes Kulick $\cdot$ Stefan Otte $\cdot$ Marc Toussaint Machine Learning and Robotics Lab $\cdot$ University of Stuttgart International Conference on Robotics and Automation, 2015 $\cdot$ Seattle, Washington ### Joint Dependencies Joints are all around us in modern human-made environments. We call them dependent joints if the position of one joint changes the ability of another joint to move. E.g. the position of the window handle determines whether one can open the window or not. Robots need to be able to find out about these dependencies and manipulate them. A lot of these joints are provided with mechanical, audible, or other clues to guide its usage. These changes in some sensible domain robots should learn to use. How can we make robots perceive these change points? #### Joint Movement When the robot moves a joint it records its position in joint space. From this no clues are visible. #### Force Sensor The PR2 has a force sensor at its wrist. We use it to capture a force profile of the actuated joint. There are many changes visible, but it is not clear, which are due to the joint's feedback and which come from the control. ## Dynamics Equations $\Delta v = \sqrt{v^2-c_f v \Delta t}$ Name Description $v$Velocity $c_f$Virtual friction constant $\Delta t$Time difference between two steps #### Inverse Dynamics Equations Using a motion and dynamics model we can compute the virtual friction constant for each time step. Here it is very apparent where the change points in the temporal development are. #### Bayesian Change Point Detection The Bayesian change point detection we use agrees. The probability for change points is clearly correlated with the changes we anticipated. ## Symbol description SymbolDescriptionDomain $N$ Number of joints $\mathbb{N}$ $M^{j}$ Maximum joint angle of joint $j$ $\mathbb{R}$ $t, s, u, v$ Index for time $\mathbb{N}$ $j$ Index for joints $\{1, \dots, N\}$ $D^{j}$ RV, dependency of joint $j$ $\{1, \ldots, N+1\}$ $L^{j}_{t}$ RV, locking state of joint $j$ $\{0, 1\}$ $Q^{j}_{t}$ RV, joint state/position of joint $j$ at time $t$ $\mathbb{R}$ $F^{j}_{t}$ RV, force/torque measurements of joint $j$ at time $t$ $\mathbb{R}$ $C^{j}_{t}$ RV, change points of joint $j$ at time $t$ $\{0, 1\}$ $S^{j}_{p}$ RV, segment borders of joint $j$ at position $p$ $\{0, 1\}$ ### Exploration #### Probabilistic Modeling We model the dependency structure with a graphical model capturing our insights and prior knowledge about how joints are used in human environments. Observing the force signal or the position of the joints we can infer various other things: How likely is a joint locked? How likely are two joint positions in the same segment? And most import how likely locks one joint another? #### Active Exploration With the probabilistic model we can now use methods from active learning and Bayesian experimental design to choose points to explore optimally. We want to choose points where our belief over the dependency structure changes most. Thus we will quickly converge to a correct belief over the dependency structure of the involved joints. We use the MaxCE method, that maximizes the expected cross entropy before and after adding an observation: $\DeclareMathOperator*{\argmax}{argmax} ({Q^{1:N}_{t+1}}^*, j) = \argmax\limits_{(Q^{1:N}_{t+1}, j)} \sum_{L^{j}_{t+1}} \underbrace{P\left(L^{j}_{t+1}|Q^{1:N}_{t+1}, S^{1:N}\right)}_{\text{joint$j$locked in the next state}}~\cdot \underbrace{H\left[ P_{D^j_{t}};P_{D^j_{t+1}} \right]}_{\text{Cross entropy}}$ with $P_{D^j_{t}} = P(D^{j}|L^{j}_{1:t}, Q^{1:N}_{1:t}, S^{1:N})$ $P_{D^j_{t+1}} = P(D^{j}|L^{j}_{1:t+1}, Q^{1:N}_{1:t+1}, S^{1:N}).$ ### Experiments We let our PR2 uncover the dependency between a key and the drawer it locks. We did the same with a couple of furniture in simulation. The results show that our method uncovers the dependency quite quickly and better than all other tested methods. Our intuition that change points in the environment help to find out the structure was indeed correct. Even for simple dependencies it took the robot much longer to uncover it without the help of change points. Also the uncertainty over the results is much lower with the use of change points. The change points in fact almost distcretize the space, such that much less observations are needed to cover the whole space.
2019-02-17 04:32:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31546494364738464, "perplexity": 1601.0626048445683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481612.36/warc/CC-MAIN-20190217031053-20190217053053-00378.warc.gz"}
https://plainmath.net/14128/hypothetical-potential-aparticle-particle-released-position-atposition
Question # Hypothetical potential energy curve for aparticle of mass m If the particle is released from rest at position r0, its speed atposition 2r0, is most ne Other Hypothetical potential energy curve for aparticle of mass m If the particle is released from rest at position r0, its speed atposition 2r0, is most nearly a) $$\displaystyle{\left({\frac{{{8}{U}{o}}}{{{m}}}}\right)}^{{1}}{\left\lbrace/{2}\right\rbrace}$$ b) $$\displaystyle{\left({\frac{{{6}{U}{o}}}{{{m}}}}\right)}^{{\frac{{1}}{{2}}}}$$ c) $$\displaystyle{\left({\frac{{{4}{U}{o}}}{{{m}}}}\right)}^{{\frac{{1}}{{2}}}}$$ d) $$\displaystyle{\left({\frac{{{2}{U}{o}}}{{{m}}}}\right)}^{{\frac{{1}}{{2}}}}$$ e) $$\displaystyle{\left({\frac{{{U}{o}}}{{{m}}}}\right)}^{{\frac{{1}}{{2}}}}$$ if the potential energy function is given by $$\displaystyle{U}{\left({r}\right)}={b}{r}^{{P}}-\frac{{3}}{{2}}\rbrace+{c}$$ where b and c are constants which of the following is an edxpression of the force on theparticle? 1) $$\displaystyle{\frac{{{3}{b}}}{{{2}}}}{\left({r}^{{-\frac{{5}}{{2}}}}\right)}$$ 2) $$\displaystyle{\frac{{{3}{b}}}{{{2}}}}{\left\lbrace{3}{b}\right\rbrace}{\left\lbrace{2}\right\rbrace}{\left({r}^{{-\frac{{1}}{{2}}}}\right)}$$ 3) $$\displaystyle{\frac{{{3}{b}}}{{{2}}}}{\left\lbrace{3}\right\rbrace}{\left\lbrace{2}\right\rbrace}{\left({r}^{{-\frac{{1}}{{2}}}}\right)}$$ 4) $$\displaystyle{2}{b}{\left({r}^{{-\frac{{1}}{{2}}}}\right)}+{c}{r}$$ 5) $$\displaystyle{\frac{{{3}{b}}}{{{2}}}}{\left\lbrace{2}{b}\right\rbrace}{\left\lbrace{5}\right\rbrace}{\left({r}^{{-\frac{{5}}{{2}}}}\right)}+{c}{r}$$
2021-10-28 08:18:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5439683198928833, "perplexity": 1085.904591815693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588282.80/warc/CC-MAIN-20211028065732-20211028095732-00310.warc.gz"}
https://nigerianscholars.com/past-questions/literature-in-english/question/227655/
Home » » The clash of interest that originates from opposing forces in literature is # The clash of interest that originates from opposing forces in literature is ### Question The clash of interest that originates from opposing forces in literature is A) climax B) denouement C) conflict D) aside
2021-11-28 02:42:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8280158042907715, "perplexity": 6452.39240400129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358443.87/warc/CC-MAIN-20211128013650-20211128043650-00333.warc.gz"}
https://loreanvictor.github.io/rxdeep/docs/precision
# linkPrecision RxDeep is designed to be extremely precise, meaning states should emit only when they have a good reason to. To be more precise (pun unintended), absolute precision means a state emits values when and only when one of the following holds: 1. Its value has changed. 2. Its value is directly updated (possibly to the same value). 3. It points to the same address in state-tree as another state whose value is directly updated (possibly to the same value). 4. One of its descendent sub-states (a sub-state in its sub-tree) satisfies (2) or (3). 1linkconst root = state([2link { name: 'Jack', address: { city: 'Java', country: 'Indonesia' } },3link { name: 'Jafar', address: { city: 'Jiroft', country: 'Iran' } },4link]);5link6linkconst mid1 = root.sub(0);7linkconst mid2A = mid1.sub('address'); // --> shares tree address with mid2B8linkconst mid2B = root.sub(0).sub('address'); // --> shares tree address with mid2A9linkconst leaf1 = root.sub(0).sub('address').sub('city'); // --> shares tree address with leaf210linkconst leaf2 = mid2A.sub('city'); // --> shares tree address with leaf111link12linkleaf1.value = leaf1.value; // --> issue update without value change13link14link// As a result:15link// > leaf1 will emit (due to (2))16link// > leaf2 will emit (due to (3))17link// > mid2A, mid2B, mid1, and root will emit (due to (4)) By this definition RxDeep is absolutely precise if object immutabilty is respected and objects passed to it are of plain (non-circular) JavaScript objects. ## linkLossiness A Loss is when a State should emit (according to the outlined criteria) but it doesn't. RxDeep is not lossy by default, only IF object immutability is respected. Simply put, you must not change the value of a State without changing its reference, as otherwise there is no mechanism for RxDeep to pickup those changes and distinguish what has changed. ## linkRedundancy Redundancy refers to situations when a state emits without a good reason for doing so (i.e. none of the outline criteria hold up). RxDeep has no redundancy, meaning a state does not emit values without a good reason (without one of the aforementioned criteria being true). This is due to the fact that leaf-changes are fully traced and delivered only to affected sub-states, and arbitrary changes are traced until their issuing depth and then post-traced so that the change trace is complete until leafs of the state-tree, allowing for precise propagation of the change. ## linkPerformance As discussed here, a leaf-change has $\Omicron(\log(n))$ complexity and an arbitrary change at depth $\delta$ and above a sub-tree of $n_{\delta}$ nodes has $\Omicron(\log(n) + n_{\delta}\log(n_{\delta}))$ time complexity. For most day to day use cases both of these are more than fast enough. However, in special cases you might need that additional performance. As (also) discussed here, if the whole sub-tree needs to change, you need minimum of $\Omicron(\log(n) + n_{\delta}\log(n_{\delta}))$ operations, as you need that many emissions to not be lossy. However, if your change affects a bounded number of leaf states, for example $k$ leaf states, then the minimum number of operations is given by: $\Omicron(\log(n) + (k - 1)\log(n_{\delta})) = \Omicron(\log(n))$1 A simple method of achieving that performance is identifying the $k$ leaf nodes and applying change to them. Time-complexity of this solution is given by: $\Omicron(k\log(n)) = \Omicron(\log(n))$1 However, this also results in $k$ emissions by all affected states (e.g. the root state will also emit $k$ times). 1linkconst company = state({2link teams: [{3link people: [{name: 'Jack', age: 42}, {name: 'Jill', age: 31}],4link name: 'Awesome Team',5link }, ...]6link});7link8link//9link// Find all affected leaf changes and apply changes directly to them.10link//11linkcompany.sub('teams').sub(0).sub('people').sub(0).sub('name').value = 'Jafar';12linkcompany.sub('teams').sub(0).sub('name').value = 'Pro Team'; A more efficient solution would be to: 1. Identify the top-most common ancestor of all affected leaf nodes, 2. Apply changes respecting maximal object immutability. 1linkconst company = state({2link teams: [{3link people: [{name: 'Jack', age: 42}, {name: 'Jill', age: 31}],4link name: 'Awesome Team',5link }, ...]6link});7link8link//9link// This is the top-most common ancestor:10link//11linkconst target = company.sub('teams').sub(0);12link13link//14link// Now lets apply changes with maximal object immutability:15link//16linktarget.value = {17link ...target.value,18link name: 'Pro Team',19link people: [20link {21link ...target.value.people[0],22link name: 'Jafar'23link }24link ...target.value.people.slice(1),25link ]26link} Object immutability means that the reference of an object is changed IF its value has changed. Maximal object immutability means the reference of an object is changed IF AND ONLY IF its value has changed. The post-tracing algorithm of RxDeep makes quick reference checks to rule out identical objects, so when making a change respecting maximal object immutability, and with the aforementioned criteria holding (constant $k$ leafs are actual subjects of the change), post-tracing consumes $\Omicron(k\log(n_{\delta}))$ operations, so the complexity of overall change propagation is given by: $\Omicron(log(n) + (k - 1)log(n_{\delta})) = \Omicron(log(n))$1 Additionally, with this solution affected states emit exactly once.
2021-03-02 08:07:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 14, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21508611738681793, "perplexity": 6655.554148729964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363782.40/warc/CC-MAIN-20210302065019-20210302095019-00181.warc.gz"}
https://electronics.stackexchange.com/questions/391382/stm32-external-interrupt
# stm32 external interrupt I am working with a stm32f103 and am trying to get the external interrupt done. My code is: void delay(unsigned int counts); //--------------------------------------------------------------------------------------- int main(int argc, char* argv[]) { RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOC, ENABLE); GPIO_Led.GPIO_Pin = LED_PIN; GPIO_Led.GPIO_Speed = GPIO_Speed_50MHz; GPIO_Led.GPIO_Mode = GPIO_Mode_Out_PP; GPIO_Init(LED_PORT, &GPIO_Led); LED_OFF; delay(500); LED_ON; // Enable Clock for AFIOEN RCC->APB2ENR |= RCC_APB2ENR_AFIOEN; // Enable Clock for IOPBEN RCC->APB2ENR |= RCC_APB2ENR_IOPBEN; // Set PB1 as input with pullup GPIOB->CRL &= ~((1 << 4) | (1 << 5)); GPIOB->CRL |= (1 << 7); GPIOB->CRL &= ~(1 << 6); GPIOB->ODR |= (1 << 1); // Source input for EXTI1 is PB1 AFIO->EXTICR[0] |= AFIO_EXTICR1_EXTI1_PB; // Interrupt request from Line 1 is not masked => enabled EXTI->IMR |= (1 << 1); // Rising edge EXTI->RTSR |= (1 << 1); NVIC_InitTypeDef NVIC_InitStruct; // Add IRQ vector to NVIC // PB1 is connected to EXTI_Line1, which has EXTI1_IRQn vector NVIC_InitStruct.NVIC_IRQChannel = EXTI1_IRQn; // Set priority NVIC_InitStruct.NVIC_IRQChannelPreemptionPriority = 0x0f; // Set sub priority NVIC_InitStruct.NVIC_IRQChannelSubPriority = 0x0f; // Enable interrupt NVIC_InitStruct.NVIC_IRQChannelCmd = ENABLE; NVIC_Init(&NVIC_InitStruct); delay(500); LED_OFF; // Infinite loop while (1) { LED_TOGGLE; delay(500); } } //--------------------------------------------------------------------------------------- void delay(unsigned int counts) { volatile unsigned int i = 0, j = 0; while (i < counts) { j = 0; while (j < 0x1AFF) { j++; } i++; } } //--------------------------------------------------------------------------------------- void EXTI1_IRQHandler() { /* Make sure that interrupt flag is set */ if (EXTI_GetITStatus(EXTI_Line1) != RESET) { LED_TOGGLE; /* Clear interrupt flag */ EXTI_ClearITPendingBit(EXTI_Line1); } } That is the main part of my code. LED_ON, LED_OFF and LED_TOGGLE are only some defines. They work fine. The problem is: If there is a rising edge at pin PB1, the stm32 does not jump to the IRQ subroutine. So the stm32 got stuck. Does anyone know what is the problem? I don't have any ideas. • "not jump to the IRQ subroutine. So the stm32 got stuck." I'm new to programming, but maybe there is a problem in vector table. – Long Pham Aug 16 '18 at 17:21 • I'd review this tutorial and make sure you're not missing any EXTI configurations: stm32f4-discovery.net/2014/08/… – Catsunami Aug 16 '18 at 18:39 • @LongPham what do you mean with problem in the vector table? – Lukas_M94 Aug 16 '18 at 21:55 • @LongPham First i thought that the problem is something like an undefault jump, but I do not really know And yes, this is my next step I will do, to buy a debugger :-D – Lukas_M94 Aug 17 '18 at 6:26 • @LongPham, I never suggested that this is what he should do instead of finding the cause of the problem. I meant that he can confirm that it's in the default handler since he obviously doesn't have a debugger to do it the easy way. Toggling an LED within the default handler would confirm that the handler should be looked at. – Catsunami Aug 17 '18 at 16:30
2020-01-21 19:37:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3013332188129425, "perplexity": 6960.650894969276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250605075.24/warc/CC-MAIN-20200121192553-20200121221553-00310.warc.gz"}
https://bathmash.github.io/HELM/37_3_poisson_dist-web/37_3_poisson_dist-web.html
### Introduction In this Section we introduce a probability model which can be used when the outcome of an experiment is a random variable taking on positive integer values and where the only information available is a measurement of its average value. This has widespread applications, for example in analysing traffic flow, in fault prediction on electric cables and in the prediction of randomly occurring accidents. We shall look at the Poisson distribution in two distinct ways. Firstly, as a distribution in its own right. This will enable us to apply statistical methods to a set of problems which cannot be solved using the binomial distribution. Secondly, as an approximation to the binomial distribution $X\sim B\left(n,p\right)$ in the case where $n$ is large and $p$ is small. You will find that this approximation can often save the need to do much tedious arithmetic. #### Prerequisites • understand the concepts of probability • understand the concepts and notation for the binomial distribution #### Learning Outcomes • recognise and use the formula for probabilities calculated from the Poisson model • use the recurrence relation to generate a succession of probabilities • use the Poisson model to obtain approximate values for binomial probabilities
2022-09-26 00:39:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8696725368499756, "perplexity": 217.47453720023995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00677.warc.gz"}
https://dergipark.org.tr/ijot/issue/43635/487951
| | | | Waste Heat Recovery and Conversion into Electricity: Current Solutions and Assessment Mathilde Blaise [1] , Michel Feidt [2] 38 87 The main energy consumption sectors are the residential, industry and transport. In all of them, a part of the energy consumption is not used and generally rejected as heat in the environment. This is named the waste heat. Firstly, the main way is to optimize the process to reduce the fuel consumption. Then, if there is a residual waste heat, a valorization way is to convert this heat into electricity. Some technologies are developed. The main technology is the Organic Rankine Cycle engine. Then, a new concept, named Turbosol, is based on the quasi-isothermal expansion of a water and oil mixture in a nozzle. Some piston engines are also developed, based on Stirling, Ericsson and Joule cycles. All these technologies are named externally heated engines. Some other research studies concern the thermo-electric effect and the thermo-magnetic effect. In this article, a non-exhaustive list, with description and comments on these technologies is proposed. The aim is to assess the potential of them and identify the current limits. To compare the different technologies in first law efficiency terms is not sufficient. Some new criterions are proposed. The first consideration is to assess the heat rate consumption referred to the heat rate available. To assess the quality of waste heat to power conversion, it is pertinent to evaluate the power output divided by the available heat rate. Then, because of the second law, it is pertinent to evaluate the exergy recovery ratio. These new waste heat criterions are compared to the classical first law efficiency in different cases. Then, the main current issue is to produce enough electrical power output to ensure the profitability. Some thermo-economic considerations are proposed, including the impact of a waste heat taxation. Thermodynamics, energy, exergy, sustainability • Andrea Lazaretto • Diogo Queiros-Conde • Lavinia Grosu Primary Language en Regular Original Research Article Author: Mathilde Blaise (Primary Author)Institution: Lorraine University, NancyCountry: France Author: Michel FeidtInstitution: Lorraine University, NancyCountry: France Bibtex @research article { ijot487951, journal = {International Journal of Thermodynamics}, issn = {1301-9724}, eissn = {2146-1511}, address = {Yaşar DEMİREL}, year = {2019}, volume = {22}, pages = {1 - 7}, doi = {10.5541/ijot.487951}, title = {Waste Heat Recovery and Conversion into Electricity: Current Solutions and Assessment}, key = {cite}, author = {Blaise, Mathilde and Feidt, Michel} } APA Blaise, M , Feidt, M . (2019). Waste Heat Recovery and Conversion into Electricity: Current Solutions and Assessment. International Journal of Thermodynamics, 22 (1), 1-7. DOI: 10.5541/ijot.487951 MLA Blaise, M , Feidt, M . "Waste Heat Recovery and Conversion into Electricity: Current Solutions and Assessment". International Journal of Thermodynamics 22 (2019): 1-7 Chicago Blaise, M , Feidt, M . "Waste Heat Recovery and Conversion into Electricity: Current Solutions and Assessment". International Journal of Thermodynamics 22 (2019): 1-7 RIS TY - JOUR T1 - Waste Heat Recovery and Conversion into Electricity: Current Solutions and Assessment AU - Mathilde Blaise , Michel Feidt Y1 - 2019 PY - 2019 N1 - doi: 10.5541/ijot.487951 DO - 10.5541/ijot.487951 T2 - International Journal of Thermodynamics JF - Journal JO - JOR SP - 1 EP - 7 VL - 22 IS - 1 SN - 1301-9724-2146-1511 M3 - doi: 10.5541/ijot.487951 UR - https://doi.org/10.5541/ijot.487951 Y2 - 2019 ER - EndNote %0 International Journal of Thermodynamics Waste Heat Recovery and Conversion into Electricity: Current Solutions and Assessment %A Mathilde Blaise , Michel Feidt %T Waste Heat Recovery and Conversion into Electricity: Current Solutions and Assessment %D 2019 %J International Journal of Thermodynamics %P 1301-9724-2146-1511 %V 22 %N 1 %R doi: 10.5541/ijot.487951 %U 10.5541/ijot.487951 ISNAD Blaise, Mathilde , Feidt, Michel . "Waste Heat Recovery and Conversion into Electricity: Current Solutions and Assessment". International Journal of Thermodynamics 22 / 1 (March 2019): 1-7. https://doi.org/10.5541/ijot.487951
2019-05-23 01:24:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2867348790168762, "perplexity": 7353.902323285216}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256997.79/warc/CC-MAIN-20190523003453-20190523025453-00433.warc.gz"}
https://blog.harbourfronts.com/2018/12/31/asset-dynamics-priced-correctly-black-scholes-merton-model/
# Is Asset Dynamics Priced In Correctly by Black-Scholes-Merton Model? A lot of research has been devoted to answering the question: do options price in the volatility risks correctly? The most noteworthy phenomenon (or bias) is called the volatility risk premium, i.e. options implied volatilities tend to overestimate future realized volatilities.  Much less attention is paid, however, to the underlying asset dynamics, i.e. to answering the question: do options price in the asset dynamics correctly? Note that within the usual BSM framework, the underlying asset is assumed to follow a GBM process. So to answer the above question, it’d be useful to use a different process to model the asset price. We found an interesting article on this subject [1].  Instead of using GBM, the authors used a process where the asset returns are auto-correlated and then developed a closed-form formula to price the options. Specifically, they assumed that the underlying asset follows an MA(1) process, where β represents the impact of past shocks and h is a small constant. We note that and in case β=0 the price dynamics becomes GBM. After applying some standard pricing techniques, a closed-form option pricing formula is derived which is similar to BSM except that the variance (and volatility) contains the autocorrelation coefficient, From the above equation, it can be seen that • When the underlying asset is mean reverting, i.e. β<0, which is often the case for equity indices, the MA(1) volatility becomes smaller. Therefore if we use BSM with σ as input for volatility, it will overestimate the option price. • Conversely, when the asset is trending, i.e. β>0, BSM underestimates the option price. • Time to maturity, τ, also affects the degree of over- underpricing. Longer-dated options will be affected more by the autocorrelation factor. References [1] Liao, S.L. and Chen, C.C. (2006), Journal of Futures Markets, 26, 85-102. ## 2 thoughts on “Is Asset Dynamics Priced In Correctly by Black-Scholes-Merton Model?” 1. claudiu says: Isn’t the behavior of autocorrelation in equities different depending on the time horizon, negative for short terms and positive for long terms? Is the application of this to the model imply short term equity options are overpriced and long term underpriced? 1. rvarb says: Yes, agree with both statements
2023-01-30 10:23:28
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9240840077400208, "perplexity": 2428.6907894204464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499816.79/warc/CC-MAIN-20230130101912-20230130131912-00868.warc.gz"}
https://au.mathematicstip.com/9194-25-linear-equations-manipulating-and-solving-solving.html
# 2.5: Linear Equations - Manipulating and Solving (Solving the Puzzle) You are shopping at Old Navy for seven new outfits. How do you spend $110 to acquire all the needed outfits without exceeding your budget while getting as many$30 items as possible? This is a problem of linear equations, and it illustrates how you can use them to make an optimal decision. Let (L) represent the quantity of clothing at the low price point of $10, and (H) represent the quantity of clothing at the high price point of$30. This results in the following algebraic equations: [L+H=7 ext { (the total number of outfits you need) } onumber] [$10 L+$ 30 H=$110 ext { (your total budget) } onumber] By simultaneously solving these equations you can determine how many outfits at each price point you can purchase. You will encounter many situations like this in your business career, for example, in making the best use of a manufacturer’s production capacity. Assume your company makes two products on the same production line and sells all its output. Each product contributes differently to your profitability, and each product takes a different amount of time to manufacture. What combination of each of these products should you make such that you operate your production line at capacity while also maximizing the profits earned? This section explores how to solve linear equations for unknown variables. ## Understanding Equations To manipulate algebraic equations and solve for unknown variables, you must first become familiar with some important language, including linear versus nonlinear equations and sides of the equation. The goal in manipulating and solving a linear equation is to find a value for the unknown variable that makes the equation true. If you substitute a value of (x = −1) into the above example, the left-hand side of the equation equals the right-hand side of the equation (see Figure below). The value of (x = −1) is known as the root, or solution, to the linear equation. ## Solving One Linear Equation with One Unknown Variable In your study of solving linear equations, you need to start by manipulating a single equation to solve for a single unknown variable. Later in this section you will extend from this foundation to the solution of two linear equations with two unknowns. ### How It Works To determine the root of a linear equation with only one unknown variable, apply the following steps: Step 1: Your first goal is to separate the terms containing the literal coefficient from the terms that only have numerical coefficients. Collect all of the terms with literal coefficients on only one side of the equation and collect all of the terms with only numerical coefficients on the other side of the equation. It does not matter which terms go on which side of the equation, so long as you separate them. To move a term from one side of an equation to another, take the mathematical opposite of the term being moved and add it to both sides. For example, if you want to move the +3 in (4x + 3 = −2x − 3) from the left-hand side to the right-hand side, the mathematical opposite of +3 is −3. When you move a term, remember the cardinal rule: What you do to one side of an equation you must also do to the other side of the equation. Breaking this rule breaks the equality in the equation. Step 2: Combine all like terms on each side and simplify the equation according to the rules of algebra. Step 3: In the term containing the literal coefficient, reduce the numerical coefficient to a 1 by dividing both sides of the equation by the numerical coefficient. ### Important Notes When you are unsure whether your calculated root is accurate, an easy way to verify your answer is to take the original equation and substitute your root in place of the variable. If you have the correct root, the left-hand side of the equation equals the right-hand side of the equation. If you have an incorrect root, the two sides will be unequal. The inequality typically results from one of the three most common errors in algebraic manipulation: 1. The rules of BEDMAS have been broken. 2. The rules of algebra have been violated. 3. What was done to one side of the equation was not done to the other side of the equation. ### Things To Watch Out For When you move a term from one side of the equation to another using multiplication or division, remember that this affects each and every term on both sides of the equation. To remove the (x) from the denominator in the following equation, multiply both sides of the equation by (x): (dfrac{5}{x}+dfrac{1}{x}=dfrac{2}{x}+2) becomes (xleft(dfrac{5}{x}+dfrac{1}{x} ight)=left(dfrac{2}{x}+2 ight) x) which then becomes (5+1=2+2 x) Multiplying every term on both sides by (x) maintains the equality. ### Paths To Success Negative numbers can cause some people a lot of grief. In moving terms from a particular side of the equation, many people prefer to avoid negative numerical coefficients in front of literal coefficients. Revisiting (4x + 3 = −2x − 3), you could move the (4x) from the left side to the right side by subtracting (4x) from both sides. However, on the right side this results in (−6x). The negative is easily overlooked or accidentally dropped in future steps. Instead, move the variable to the left side of the equation, yielding a positive coefficient of (6x). Example (PageIndex{1}): How to Solve the Opening Example Take the ongoing example in this section and solve it for (x): (4x + 3 = −2x − 3) Solution This is a linear equation since the exponent on the variable is 1. You are to solve the equation and find the root for (x). What You Already Know The equation has already been provided. How You Will Get There Apply the three steps for solving linear equations. To arrive at the root, you must follow the rules of algebra, BEDMAS, and equality. Perform Step 1: Move terms with literal coefficients to one side and terms with only numerical coefficients to the other side. Let’s collect the literal coefficient on the left-hand side of the equation. Move (−2x) to the left-hand side by placing (+2x) on both sides. [4x + 3 = −2x – 3 onumber ] On the right-hand side, the (−2x) and (+2x) cancel out to zero. [4x + 3 (f{+ 2x}) = −2x − 3 (f{+ 2x}) onumber ] Step 1 (continued): All of the terms with the literal coefficient are now on the left. Let’s move all of the terms containing only numerical coefficients to the right-hand side. Move the +3 to the right-hand side by placing −3 on both sides. [4x + 3 + 2x = −3 onumber ] On the left-hand side, the +3 and −3 cancel out to zero. [4x + 3 + 2x (f{– 3}) = −3 (f{− 3}) onumber ] Step 2: The terms are now separated. Combine like terms according to the rules of algebra. [4x + 2x = −3 – 3 onumber ] Step 3: The term with the literal coefficient is being multiplied by the numerical coefficient of 6. Therefore, divide both sides by 6. [f{6x = −6} onumber ] The left-hand side numerical coefficients will divide to 1. Resolve the numerical coefficients on the right-hand side. [dfrac{6 x}{f{6}}=dfrac{-6}{f{6}} onumber ] This is the root of the equation. [x = −1 onumber] The root of the equation is (x = −1). To verify the accuracy of your manipulation, take the root of (x = −1) and substitute it into the original equation: [4(−1) + 3 = −2(−1) − 3 onumber] [−4 + 3 = 2 − 3 onumber] [−1 = −1 onumber] The left-hand side equals the right-hand side, so the root is correct. Example (PageIndex{2}): Solving a Linear Equation with One Unknown Variable Solve the following equation for (m): (dfrac{3 m}{4}+2 m=4 m-15) Solution This is a linear equation since the exponent on the variable is a 1. You are to solve the equation and find the root for (m). What You Already Know The equation has already been provided. How You Will Get There Simplify the equations first and then apply the three steps for solving linear equations. To arrive at the root you must follow the rules of algebra, BEDMAS, and equality. You can use an approach that avoids negatives. Perform First, simplify all fractions to make the equation easier to work with. [dfrac{3 m}{4}+2 m=4 m-15 onumber ] Still simplifying, collect like terms where possible. [(f{0.75m}) + 2m = 4m − 15 onumber ] Step 1: Collect all terms with the literal coefficient on one side of the equation. Move all terms with literal coefficients to the right-hand side. [(f{2.75m })= 4m − 15 onumber ] Step 1 (continued): Combine like terms and move all terms with only numerical coefficients to the left-hand side. [2.75m (f{− 2.75m}) = 4m − 15 (f{− 2.75m}) onumber ] On the left-hand side, the (+2.75m) and (−2.75m) cancel each other out. Now move the numerical coefficients to the left-hand side. [(f{0}) = 4m − 15 (f{− 2.75m}) onumber ] On the right-hand side, the −15 and +15 cancel each other out. [0 (f{+ 15 })= 4m − 15 − 2.75m (f{+ 15}) onumber ] Step 2: Combine like terms on each side. [0 (f{+ 15}) = 4m − 2.75m onumber ] Step 3: Divide both sides by the numerical coefficient that accompanies the literal coefficient. [f{15 = 1.25m} onumber ] Simplify. [dfrac{15}{f{1.25}}=dfrac{1.25 m}{f{1.25}} onumber ] This is the root of the equation. [12=m onumber ] The root of the equation is (m = 12). This makes both sides of the equation, (dfrac{3 m}{4}+2 m) and (4 m-15), equal 33. Example (PageIndex{3}): Solving a Linear Equation with One Unknown Variable Containing Fractions Solve the following equation for (b) and round your answer to four decimals: (dfrac{5}{8} b+dfrac{2}{5}=dfrac{17}{20}-dfrac{b}{4}) Solution This is a linear equation since the exponent on the variable is a 1. You are to solve the equation and find the root for (b). What You Already Know The equation has already been provided. Although you could attempt to clear each and every fraction or try to find a common denominator, recall that you can eliminate fractions by converting them to decimals. How You Will Get There Simplify the fractions into decimal form. Then apply the three steps for solving linear equations. To arrive at the root, you must follow the rules of algebra, BEDMAS, and equality. Perform Simplify the fractions and convert to decimals. [dfrac{5}{8} b+dfrac{2}{5}=dfrac{17}{20}-dfrac{b}{4} onumber ] Step 1: Move the literal coefficient terms to the left-hand side. [(f{0.625})b (f{+ 0.4}) = (f{0.85 − 0.25})b onumber ] The literal coefficients on the right-hand side cancel each other out. [0.625b + 0.4 + (f{0.25b}) = 0.85 − 0.25b + (f{0.25b}) onumber ] Move the numerical coefficient terms to the right-hand side. [0.625b + 0.4 + 0.25b = 0.85 onumber ] The numerical coefficients on the left-hand side cancel each other out. [0.625b + 0.4 +0.25b (f{− 0.4}) = 0.85 (f{− 0.4}) onumber ] Step 2: Combine like terms on each side. [0.625b + 0.25b = 0.85 − 0.4 onumber ] Step 3: Divide both sides by the numerical coefficient that accompanies the literal coefficient. [f{0.875b = 0.45} onumber ] Simplify. [dfrac{0.875 b}{f{0.875}}=dfrac{0.45}{f{0.875}} onumber ] Round to four decimals as instructed. [b = 0.514285 onumber ] This is the root. [b = 0.5143 onumber ] The root of the equation, rounded to four decimals, is (b = 0.5143). ## Solving Two Linear Equations with Two Unknown Variables The manipulation process you have just practiced works well for solving one linear equation with one variable. But what happens if you need to solve two linear equations with two variables simultaneously? Remember when you were at Old Navy purchasing seven outfits earlier in this chapter (equation 1)? You needed to stay within a pricing budget (equation 2). Each equation had two unknown variables representing the number of lower-priced and higher-priced outfits. The goal is to reduce two equations with two unknowns into a single linear equation with one unknown. Once this transformation is complete, you then identify the unknown variable by applying the three-step procedure for solving one linear equation, as just discussed. When you work with two linear equations with two unknowns, the rules of algebra permit the following two manipulations: 1. What you do to one side of the equation must be done to the other side of the equation to maintain the equality. Therefore, you can multiply or divide any equation by any number without changing the root of the equation. For example, if you multiply all terms of (x + y = 2) by 2 on both sides, resulting in (2x + 2y = 4), the equality of the equation remains unchanged and the same roots exist. 2. Terms that are on the same side of an equation can be added and subtracted between equations by combining like terms. Each of the two equations has a left side and right side. This rule permits taking the left side of the first equation and either adding or subtracting like terms on the left side of the second equation. When you perform this action, remember the first rule above. If you add the left sides of the equations together, you then must add the right side of both equations together to maintain equality. ### How It Works Follow these steps to solve two linear equations with two unknown variables: Step 1: Write the two equations one above the other, vertically lining up terms that have the same literal coefficients and terms that have only the numerical coefficient. If necessary, the equations may need to be manipulated such that all of the literal coefficients are on one side with the numerical coefficients on the other side. Step 2: Examine your two equations. Through multiplication or division, make the numerical coefficient on one of the terms containing a literal coefficient exactly equal to its counterpart in the other equation. Step 3: Add or subtract the two equations as needed so as to eliminate the identical term from both equations. Step 4: In the new equation, solve for the last literal coefficient. Step 5: Substitute the root of the known literal coefficient into either of the two original equations. If one of the equations takes on a simpler structure, pick that equation. Step 6: Solve your chosen equation for the other literal coefficient. ### Paths To Success Sometimes it is unclear exactly how you need to multiply or divide the equations to make two of the terms identical. For example, assume the following two equations: [4.9x + 1.5y = 38.3 onumber] [2.7x − 8.6y = 17.8 onumber] If the goal is to make the terms containing the literal coefficient (x) identical, there are two alternative solutions: 1. Take the larger numerical coefficient for (x) and divide it by the smaller numerical coefficient. The resulting number is the factor for multiplying the equation containing the smaller numerical coefficient. In this case, (4.9 div 2.7 =1 . overline{814}). Multiply all terms in the second equation by (1 . overline{814}) to make the numerical coefficients for (x) equal to each other, resulting in this pair of equations: [4.9x + 1.5y = 38.3 onumber] [4.9 x-15.6 overline{074} y=32.3 overline{037} ext { (every term multiplied by } 1 . overline{814}) onumber] 1. Take the first equation and multiply it by the numerical coefficient in the second equation. Then take the second equation and multiply it by the numerical coefficient in the first equation. In this case, multiply all terms in the first equation by 2.7. Then multiply all terms in the second equation by 4.9. [13.23 x+4.05 y=103.41 ext { (every term multiplied by } 2.7) onumber] [13.23 x-42.14 y=87.22 ext { (every term multiplied by 4.9) } onumber] Note that both approaches successfully result in both equations having the same numerical coefficient in front of the literal coefficient (x). ### Paths To Success Ultimately, every pairing of linear equations with two unknowns can be converted into a single equation through substitution. To make the conversion, do the following: 1. Solve either equation for one of the unknown variables. 2. Take the resulting algebraic expression and substitute it into the other equation. This new equation is solvable for one of the unknown variables. 3. Substitute your newfound variable into one of the original equations to determine the value for the other unknown variable. Take the following two equations: [a + b = 4 quad quad 2a + b = 6 onumber] 1. Solving the first equation for a results in (a = 4 - b). 2. Substituting the expression for a into the second equation and solving for b results in (2(4 - b) + b = 6), which solves as (b = 2). 3. Finally, substituting the root of b into the first equation to calculate a gives (a + 2 = 4) resulting in (a = 2). Therefore, the roots of these two equations are (a = 2) and (b = 2). Example (PageIndex{4}): Buying Those Outfits Recall from the section opener that in shopping for outfits there are two price points of$10 and $30, your budget is$110, and that you need seven articles of clothing. The equations below represent these conditions. Identify how many low-priced outfits ((L)) and high-priced outfits ((H)) you can purchase. [L + H = 7 ext{ } $10L +$30H = $110 onumber] Solution You need to determine the quantity of low-price-point items, or (L), and high-price-point items, or (H), that are within your limited budget. Note that the exponents on the variables are 1 and that there are two unknowns. So there are two linear equations with two unknowns. What You Already Know You require seven articles of clothing and only have a budget of$110. The equations express the relationships of quantity and budget. How You Will Get There Apply the six-step procedure for solving two linear equations with two unknowns. Step 1: Write the equations one above the other and line them up. [egin{array} {lllll} {L} & + &{H}& = &{7} {$10L} & + &{$30H}& = &{$110} end{array} onumber ] Step 2: Multiply all terms in the first equation by 10 so that (L) has the same numerical coefficient in both equations. [egin{array} {lllll} {10L} & + &{10H}& = &{70} {$10L} & + &{$30H}& = &{$110} end{array} onumber ] Step 3: Subtract the equations by subtracting all terms on both sides. [egin{array} {llllll} { } &{10L} & + &{10H}& = &{70} { ext{subtract}} &{$10L} & + &{$30H}& = &{$110}{ } &{ } & - &{$20H}& = &{−$40} end{array} onumber ] Step 4: Solve for (H) by dividing both sides by −20. [dfrac{-$ 20 H}{-$20}=dfrac{-$ 40}{-$20} quad H=2 onumber ] Step 5: Substitute the known value for (H) into one of the original equations. The first equation is simple, so choose that one. [egin{array} {lllll} {L} & + &{H}& = &{7} {L} & + &{2}& = &{7} end{array} onumber ] Step 6: Solve for (L) by subtracting 2 from both sides. You now have the roots for (L) and (H). [egin{array} {lllllllll} {L}&+&{2} & - &{2}& = &{7}&-&{2} { } & { } &{ } & { } &{L}& = &{5} & { } & { } end{array} onumber ] You can purchase five articles of clothing at the low price point and two articles of clothing at the high price point. This allows you to purchase seven articles of clothing and stay within your budget of$110. ### Paths To Success One of the most difficult areas of mathematics involves translating words into mathematical symbols and operations. To assist in this translation, the table below lists some common language and the mathematical symbol that is typically associated with the word or phrase. LanguageMath Symbol Sum In excess Increased by Plus + Subtract Decreased by Diminished by Less Minus Difference Reduced by - Multiplied by Times Percentage of Product of Of × Divide Division Divisible Quotient Per÷ Becomes Is/Was/Were Will be Results in Totals = More thanGreater than> Less thanLower than< Greater than or equal to Less than or equal to Not equal to Example (PageIndex{5}): Solving Two Linear Equations with Two Unknowns for an Amusement Park Tinkertown Family Fun Park charges $15 for a child wrist band and$10.50 for an adult wrist band. On a warm summer day, the amusement park had total wrist band revenue of $15,783 from sales of 1,279 wrist bands. How many adult and child wrist bands did the park sell that day? Solution You need the number of both adult and child wrist bands sold on the given day. Therefore, you must identify two unknowns. What You Already Know The price of the wrist bands,total quantity, and sales are known: Child wrist band price =$15 Adult wrist band price = $10.50 Total revenue =$15,783 Total unit sales = 1,279 The quantity of adult wrist bands sold and the quantity of child wrist bands sold are unknown: Adult wrist bands quantity = (a) Child wrist bands quantity = (c) How You Will Get There 1. Work with the quantities first. Calculate the total unit sales by adding the number of adult wrist bands to the number of child wrist bands: [# ext { of adult wrist bands }+# ext { of child wrist bands }= ext { total unit sales } onumber] [a + c = 1,279 onumber] 1. Now consider the dollar figures. Total revenue for any company is calculated as unit price multiplied by units sold. In this case, you must sum the revenue from two products to get the total revenue. [ ext { Total adult revenue }+ ext { Total child revenue }= ext { Total revenue } onumber] [ ext { (Adult price } imes ext { Adult guantity })+ ext { (Child price } imes ext { Child quantity) }= ext { Total revenue } onumber] [$10.50 a+$ 15 c=$15,783 onumber] 1. Apply the six-step procedure for solving two linear equations with two unknowns. Perform Step 1: Write the equations one above the other and line them up. [egin{array} {lllll} {a} & + &{c}& = &{1,279} {$10.50a} & + &{$15c}& = &{$15,783} end{array} onumber ] Step 2: Multiply all terms in the first equation by 10.5, resulting in a having the same numerical coefficient in both equations. [egin{array} {lllll} {f{10.50} a} & + &{f{10.50} c} & = & {f{13,429.50}} {$10.50a} & + &{$15c}& = &{$15,783} end{array} onumber ] Step 3: Subtract the equations by subtracting all terms on both sides. [egin{array} {llllll} { } & {f{10.50} a} & + &{f{10.50} c} & = & {f{13,429.50}} { ext{Subtract}} & {underline{$10.50a}} & {underline{+}} &{underline{$15c}} & {underline{=}} &{underline{$15,783}} { } & { } & { } & {f{-4.5c}} & {f{=}} & {f{-2,353.50}} end{array} onumber ] Step 4: Solve for (c) by dividing both sides by −4.5. [dfrac{-4.5 c}{-4.5}=dfrac{-2,353.50}{-4.5} quad c=523 onumber ] Step 5: Substitute the known value for (c) into one of the original equations. The first equation is simple, so choose that one. [egin{array} {lllll} {a} & + & {c} & = &{1,279} {a} & + & {f{523}} & = &{1,279} end{array} onumber ] Step 6: Solve for a by subtracting 523 from both sides. You now have the roots for (a) and (c). [egin{aligned} a+523 f{-523} &=1,279 f{-523} a &=756 end{aligned} onumber ] Tinkertown Family Fun Park sold 523 child wrist bands and 756 adult wrist bands. ## 2.5: Linear Equations - Manipulating and Solving (Solving the Puzzle) An equation Statement indicating that two algebraic expressions are equal. is a statement indicating that two algebraic expressions are equal. A linear equation with one variable An equation that can be written in the standard form a x + b = 0 , where a and b are real numbers and a ≠ 0 . , x, is an equation that can be written in the standard form a x + b = 0 where a and b are real numbers and a ≠ 0 . For example, A solution Any value that can replace the variable in an equation to produce a true statement. to a linear equation is any value that can replace the variable to produce a true statement. The variable in the linear equation 3 x − 12 = 0 is x and the solution is x = 4 . To verify this, substitute the value 4 in for x and check that you obtain a true statement. 3 x − 12 = 0 3 ( 4 ) − 12 = 0 12 − 12 = 0 0 = 0 ✓ Alternatively, when an equation is equal to a constant, we may verify a solution by substituting the value in for the variable and showing that the result is equal to that constant. In this sense, we say that solutions “satisfy the equation.” ### Example 1 Is a = − 1 2 a solution to − 10 a + 5 = 25 ? Recall that when evaluating expressions, it is a good practice to first replace all variables with parentheses, and then substitute the appropriate values. By making use of parentheses, we avoid some common errors when working the order of operations. − 10 a + 5 = − 10 ( − 1 2 ) + 5 = 5 + 5 = 10 ≠ 25 ✗ Answer: No, a = − 1 2 does not satisfy the equation. Developing techniques for solving various algebraic equations is one of our main goals in algebra. This section reviews the basic techniques used for solving linear equations with one variable. We begin by defining equivalent equations Equations with the same solution set. as equations with the same solution set. 3 x − 5 = 16 3 x = 21 x = 7 > E q u i v a l e n t e q u a t i o n s Here we can see that the three linear equations are equivalent because they share the same solution set, namely, <7>. To obtain equivalent equations, use the following properties of equality Properties that allow us to obtain equivalent equations by adding, subtracting, multiplying, and dividing both sides of an equation by nonzero real numbers. . Given algebraic expressions A and B, where c is a nonzero number: Subtraction property of equality: Multiplication property of equality: Division property of equality: Note: Multiplying or dividing both sides of an equation by 0 is carefully avoided. Dividing by 0 is undefined and multiplying both sides by 0 results in the equation 0 = 0. We solve algebraic equations by isolating the variable with a coefficient of 1. If given a linear equation of the form a x + b = c , then we can solve it in two steps. First, use the appropriate equality property of addition or subtraction to isolate the variable term. Next, isolate the variable using the equality property of multiplication or division. Checking the solution in the following examples is left to the reader. ### Example 2 7 x − 2 = 19 7 x − 2 + 2 = 19 + 2 A d d 2 t o b o t h s i d e s . 7 x = 21 7 x 7 = 21 7 D i v i d e b o t h s i d e s b y 7 . x = 3 ### Example 3 When no sign precedes the term, it is understood to be positive. In other words, think of this as 56 = + 8 + 12 y . Therefore, we begin by subtracting 8 on both sides of the equal sign. 56 − 8 = 8 + 12 y − 8 48 = 12 y 48 12 = 12 y 12 4 = y It does not matter on which side we choose to isolate the variable because the symmetric property Allows you to solve for the variable on either side of the equal sign, because x = 5 is equivalent to 5 = x . states that 4 = y is equivalent to y = 4 . ### Example 4 Isolate the variable term using the addition property of equality, and then multiply both sides of the equation by the reciprocal of the coefficient 5 3 . 5 3 x + 2 = − 8 5 3 x + 2 − 2 = − 8 − 2 S u b t r a c t 2 o n b o t h s i d e s . 5 3 x = − 10 3 5 ⋅ 5 3 x = 3 5 ⋅ ( − 10 ) − 2 M u l t i p l y b o t h s i d e s b y 3 5 . 1 x = 3 ⋅ ( − 2 ) x = − 6 In summary, to retain equivalent equations, we must perform the same operation on both sides of the equation. Try this! Solve: 2 3 x + 1 2 = − 5 6 . ## Solving by Combination The basic principle of solving by combination is to manipulate two equations so that, when the equations are added together, one of the variables cancels out. Since one of the variables cancels, this method is sometimes called the elimination method. Let’s use combination to solve this system of two equations: This system of equations is suited for combination, because there is already a 2x in both equations. Therefore, if we subtract equation (1) from equation (2) – or, equivalently, multiply equation (1) by -1 and add the two equations – we have a single equation with y: Dividing both sides, we find that y = -4/3. We can then plug y back into either original equation to get the value of x, as we did when solving by substitution. We can still solve by combination even if the variables aren’t lined up so nicely. For example, we can start over and solve the system of equations by making the y‘s cancel, rather than the x‘s. To do that, we can multiply the first equation (1) by the number 2 on both sides: Now subtracting (2) from that result gives us: Solving, we find x = 7. To finish the job, we substitute x = 7 into either of the original equations. If we plug x = 7 into (1), we get: Subtracting 14 from both sides, we get And dividing by 3, we find that So the solution to the two equations (1) and (2) is: Most people prefer the substitution method to the combination method. However, the combination method will prove much faster on certain questions, so if you don’t consider using it, you are likely to lose time or a correct answer on the Quant section. Furthermore, you want to be comfortable with the concept that equations can be added, since a given equation is after all equal on both sides, since that fact can be useful even when you are not solving a system of linear equations by the combination method. As I mentioned before, our variables are formatted as below: To ensure that each box has only value, we can keep the row, col constant and vary the value from 1 to 9. The sum of the binary values should be equal to 1 since only one variable will be equal to 1 and others must be 0. (value = 1, row =1, col =1) + (value = 2, row =1, col =1) + (value = 3, row =1, col =1) + (value = 4, row =1, col =1) + (value = 5, row =1, col =1) + (value = 6, row =1, col =1) + (value = 7, row =1, col =1) +(value = 8, row =1, col =1) +(value = 9, row =1, col =1) == 1 We will need to perform this check for all different combinations of row, col ## How to Find Solving Linear Equations Calculator? A linear equation is defined as an equation that is written for two different variables. This equation will be a linear combination of these two variables and a constant. An equation of the form Ax + By = C. Here, x and y are variables, and A, B, and C are constants. ### Solved Example: Solve 2x + y = 7 and x + y = 5 ### Solution: Similarly, you can try the calculator to find the algebra value for a given equation ## ALGEBRAIC MANIPULATION PROBLEMS If x > 0 and  x 2 - 2x - 35  =  0, then find the value of : If t < 0 and (t - 1) 2  = 16, what is the value of t 2  ? Let "x" be a real number which satisfies the relations (2x-5)  > 2 and (3x+3) < 18. Which of the values can "x" take ? Calculate the fifth term of the sequence defined as Find the domain of the function : We hope that the students will be able to solve the a lgebraic manipulation problems1 on SAT above. Apart from the stuff given in this section ,    if you need any other stuff in math, please use our google custom search here. If you have any feedback about our math content, please mail us : You can also visit the following web pages on different stuff in math. ## Maze Solving Equations Activities ### Solving One-Step Equations • 2-1 Solving One-Step Equations - Answers - Maze Activity (PDF - Member Only) • 2-1 Solving One-Step Equations - Maze Activity (Editable - Member Only) • ⭐ Solving One-Step Equations - Maze Activity(PDF - FREEBIE) ### Solving Two-Step Equations • 2-2 Solving Two-Step Equations - Answers - Maze Activity (PDF - Member Only) • 2-2 Solving Two-Step Equations - Maze Activity (Editable - Member Only) • ⭐ Solving Two-Step Equations - Maze Activity(PDF - FREEBIE) ### Solving Multi-Step Equations • 2-3 Solving Multi-Step Equations - Answers - Maze Activity (PDF - Member Only) • 2-3 Solving Multi-Step Equations - Maze Activity (Editable - Member Only) • ⭐ Solving Multi-Step Equations - Maze Activity(PDF - FREEBIE) ### Solving Equations with Variables on Both Sides • 2-4 Solving Equations with Variables on Both Sides - Answers - Maze Activity (PDF - Member Only) • 2-4 Solving Equations with Variables on Both Sides - Maze Activity (Editable - Member Only) • ⭐ Solving Equations with Variables on Both Sides - Maze Activity(PDF - FREEBIE) ### Literal Equations and Formulas • 2-5 Literal Equations and Formulas - Answers - Maze Activity (PDF - Member Only) • 2-5 Literal Equations and Formulas - Maze Activity (Editable - Member Only) • ⭐ Literal Equations and Formulas - Maze Activity(PDF - FREEBIE) ### Ratios, Rates, and Conversions • 2-6 Ratios, Rates, and Conversions - Answers - Maze Activity (PDF - Member Only) • 2-6 Ratios, Rates, and Conversions - Maze Activity (Editable - Member Only) • ⭐ Ratios, Rates, and Conversions - Maze Activity(PDF - FREEBIE) ### Solving Proportions • 2-7 Solving Proportions - Answers - Maze Activity (PDF - Member Only) • 2-7 Solving Proportions - Maze Activity (Editable - Member Only) • ⭐ Solving Proportions - Maze Activity(PDF - FREEBIE) ### Proportions and Similar Figures • 2-8 Proportions and Similar Figures - Answers - Maze Activity (PDF - Member Only) • 2-8 Proportions and Similar Figures - Maze Activity (Editable - Member Only) • ⭐ Proportions and Similar Figures - Maze Activity(PDF - FREEBIE) ### Percentages • 2-9 Percentages - Answers - Maze Activity (PDF - Member Only) • 2-9 Percentages - Maze Activity (Editable - Member Only) • ⭐ Percentages - Maze Activity(PDF - FREEBIE) ### Change Expressed as a Percent • 2-10 Change Expressed as a Percent - Answers - Maze Activity (PDF - Member Only) • 2-10 Change Expressed as a Percent - Maze Activity (Editable - Member Only) • ⭐ Change Expressed as a Percent - Maze Activity(PDF - FREEBIE) ## Solving Linear Equations The simplest equation to solve is a linear equation. A linear equation is an equation where the highest exponent of the variable is ( ext<1>) . The following are examples of linear equations: Solving an equation means finding the value of the variable that makes the equation true. For example, to solve the simple equation (x + 1 = 1) , we need to determine the value of (x) that will make the left hand side equal to the right hand side. The solution is (x = 0) . The solution, also called the root of an equation, is the value of the variable that satisfies the equation. For linear equations, there is at most one solution for the equation. To solve equations we use algebraic methods that include expanding expressions, grouping terms, and factorising. Check the answer by substituting (x=-cfrac<1><2>) . egin ext & = 2x + 2 & = 2(-cfrac<1><2>) + 2 & = -1 + 2 & = 1 ext & =1 end The following video gives an introduction to solving linear equations. Note that the video(s) in this lesson are provided under a Standard YouTube License. ## Let&rsquos Start &hellip Coding! The Games::LMSolve::Base class tries to solve a game by iterating through its various positions, recording every one it passes through, and trying to reach the solution. However, it does not know in advance what the games rules are, and what the meaning of the positions and moves are. In order for it to know that, we need to inherit it and code several methods that are abstract in the base class. We will code a derived class that will implement the logic specific to the Jumping Cards game. It will implement the following methods, which, together with the methods of the base class, enable the solver to solve the game: 1. input_board 2. pack_state 3. unpack_state 4. display_state 5. check_if_final_state 6. enumerate_moves 7. perform_move 8. render_move Here&rsquos the beginning of the file where we put the script: As can be seen, we declared a new package, Jumping::Cards , imported the Games::LMSolve::Base namespace, and inherited from it. Now let&rsquos start declaring the methods. First, a method to input the board in question. Since our board is constant, we just return an array reference that contains the initial sequence. When Games::LMSolve::Base iterates over the states, it stores data about each state in a hash. This means we&rsquore going to have to provide a way to convert each state from its expanded form into a uniquely identifying string. The pack_state method does this, and in our case, it will look like this: It is a good idea to use functions like pack , join or any other serialization mechanism here. In our case, we simply used join . It is not very convenient to manipulate a packed state, and so we need another function to expand it. unpack_state does the opposite of pack_state and expands a packed state. display_state() converts a packed state to a user-readable string. This is so that it can be displayed to the user. In our case, the comma-delimited notation is already readable, so we leave it as that. We need to determine when we have reached our goal and can terminate the search with a success. The check_if_final_state function accepts an expanded state and checks if it qualifies as a final state. In our case, it is final if it&rsquos the 8-to-1 sequence. Now we need a function that will tell the solver what subsequent states are available from each state. This is done by enumerating a set of moves that can be performed on the state. The enumerate_moves function does exactly that. What enumerate_moves does is iterate over the indices of the locations twice, and checks every move for the validity of the resultant board. If it&rsquos OK, it pushes the exchanged indices to the array @moves , which is returned at the end. We also need a function that will translate an origin state and a move to a resultant state. The perform_move function performs a move on a state and returns the new state. In our case, it simply swaps the cards in the two indices specified by the move. Finally, we need a function that will render a move into a user-readable string, so it can be displayed to the user. ## Solving linear equations This unit teaches students to identify linear relationships and solve linear equations in context. • Identify and find values for variables in context. • Identify linear relationships in context. • Represent linear relationships using tables, graphs and simple linear equations. • Draw strip diagrams to represent linear equations. • Solve simple linear equations and interpret the answers in context. Algebra started with the need to solve problems. Al Khwarizmi, a Persian mathematician, was arguably the first person to represent linear and quadratic problems in symbolic form and solved the problems by processes of ‘restoration’, i.e. equivalent operations that conserved equality. In fact, the word for algebra comes from the Arabic word for restoration. It is fitting then that modern approaches to algebra focus on the thinking that underpins the symbolic systems. Algebraic thinking is concerned with generalisation. Letters, words, tables, graphs, networks, etc. are cultural tools that enable us to represent, then think with, those generalisations. With representational tools we are capable of ‘amplified cognition’ in that we can anticipate results that would never be possible if we relied solely on the physical environment, and on our limited capacity to process ideas just mentally. Generalisation begins with noticing patterns and structures. A pattern is a consistency, that is something that occurs in a predictable way. It is the ‘what’ of algebraic thinking. Structure is about the organisation of patterns. It is the ‘how’ and sometimes the ‘why’ of generalisation. From noticing pattern and structure, we develop properties. For example, early counting involves pattern and structure. The ‘fourness’ of a collection comes from noticing sameness among collections of four, irrespective of the size, colour, texture, etc. of the objects. Structure of counting involves ideas like the order of counting the objects doesn’t matter. #### Specific Teaching Points In upper primary school, learning experiences for algebraic thinking typically begin with patterns. Usually these patterns are spatial and may be connected to some meaningful life context, though number patterns are also rich in opportunity. Patterns involve variables, that is features, some of which can be quantified. For example, consider this simple spatial pattern. Among the variables we might discern that the ‘tower’ has height and each ‘tower’ is made of some number of squares. Height and number of cubes may not be the only variables, just those we notice. Variables change, that is height varies and so does the number of squares in the ‘tower’. We might try to find a relation between the variables, describe and represent that relation, and use it to predict how the pattern grows beyond what we can see. Then we are thinking with the properties and representations in a sophisticated way. On the way it is likely we will need to organise the data from the pattern systematically. A table of values is a productive generic strategy, so we represent the pattern like this: The danger in moving to an organised numeric strategy like a table too early is that it may negate what we can ‘see’ in the pattern visually. Noticing and reasoning may be inductive, that is tied to the incremental change of the figures. For example: Noticing and reasoning can also be abductive, that is based on the structure of one example. Noticing and reasoning can be deductive, that is based on making assumptions about structure and reasoning with the assumptions. For example, we might assume that the tower is composed of an array of something multiplied by three plus two. From the assumptions we might deduce the appearance of towers much further on in the sequence, e.g. A tower 100 high will contain 2 + 99 x 3 squares. Ways of ‘seeing’ the pattern are manifest in relations within the table of values. For example, inductive thinking leads to seeing the values in the bottom row increasing by three each time. Abductive reasoning might support seeing this relation in the table: Representing the relation as an algebraic equation involves two important and connected types of knowledge, related to the language conventions (semiotics), and to the nature of variables. We might write s = 3h – 1, or s = 3(h - 1) + 2, or s = 2h + (h – 1), depending on what we notice. The equations are meaningless to anyone else unless we clearly define what the variables, s and h, represent. Note that both and s refer to quantities that vary and are not fixed objects, such as houses or towers. Quantities are a combination of count and measurement unit. In this case h expresses unit lengths in height, and s refers to an area of squares. 3h means h multiplied by three, not thirty-something, and 3(h - 1) means that one is subtracted from h before the multiplication by three occurs. Working with variables requires acceptance of lack of closure, that is thinking with an object (h in this case) without specifically knowing what it is. For example, knowing that 3(h – 1) = 3h – 3 is true, irrespective of whatever the value of h, is itself a generalisation. The equals sign represents a statement of ‘transitive balance’ meaning that the balance is conserved if equivalent operations are performed on both sides of the equation. Knowledge of which operations conserve equality and those which disrupt it are important generalisations about the properties of numbers under those operations, e.g. distributive property of multiplication. This unit specifically deals with relations that are linear. The first sign of linearity is that there is constant difference in the increase or decrease of one variable, as the value of the other increases by one. In the table above the number of squares increases by three as height increases by one. Note that this graph shows a relation, not a function, since the values of variables are discrete, not continuous. There are some important connections between features of the algebraic equation, the table and the graph of a linear relation. Constant difference is represented by the co-efficient of the independent variable (s = 3h -1 in this case), differences of three in the bottom values of the table, and a slope of three (change in s for every unit change in h). The constant in the equation (- 1) is reflected in the table by a need to adjust the value of 3h by subtracting one to get the value of s, and reflected in the graph as a downward translation (shift) of the graph for s = 3h by one unit. This results in the intercept of the graph with the s axis being (0, -1), not the origin (0, 0). Simple linear equations occur when the value of one variable in a relation or function is set and the other must be found. For example, with the tower problem this problem might be posed “A tower in the pattern has 98 squares. How high is the tower?” Depending on the equation used to represent the relation, this problem can be expressed as 3h – 1 = 98, 3(h – 1) + 2 = 98 or 2h + (h – 1) = 98. Linear equations with the variable on both sides occur when two conditions are equalised. An example might be, “Both Lilly and Todd look at the same tower. Lilly notices that the number of squares in the tower is three times the height less one. Todd notices that the number of squares is two times the height plus 18. How tall is the tower?” This problem can be written as 3h – 1 = 2h + 18. • Attachments as listed at the bottom of the unit #### Prior Experience It is anticipated that students at Level 4 understand, and are proficient with, multiplicative thinking. However, the tasks in this unit are also accessible for students whose preference is additive thinking. In fact, the experiences may prompt a move towards multiplicative thinking. #### Session One: Maia the Moa In this session students are shown a spatial growth pattern for a moa made from square tiles. As Maia the moa ages she grows in her legs, body and neck while her feet and head remain constant. Session One is driven using PowerPoint One. The approach is to structure one example of the pattern then transfer that structure to other members of the pattern. 1. Show the students Slide One. Aim to identify features of the pattern that might become variables. Ask: What do you notice about this figure? Students might notice different features such as colour, height, width, age, total number of squares, etc. 2. Ask: Is there an easy way to count the number of squares that Maia is made of? 3. Give students a while to structure their counting then ask them to share their method with others. Building a model of Maia at age three years with connecting cubes allows students to experiment with ways to partition the model. Encourage them to express their counting method as an expression. Use these videos to show examples of how to do this, but only if needed: 4. Ask students to apply their counting structures to Maia at age two years (Slide Two). Ask them to record expressions for their counting strategy and compare them to what they recorded for year four. 5. Ask: What changes and what stays the same in your expressions? For example, from Casey’s method these two expressions emerge: 4 + 2 x 3 + 2 (Age two) 6 + 2 x 5 + 2 (Age four) The ‘+ 2’ is constant and ‘2 x’ is present in both expressions. The other numbers vary. 6. Ask: What will your expression for Maia at age three years look like? Write the expression then check it by drawing a picture of Maia at age three (See Slide 3). 7. Ask students to show where the parts of their expressions come from in the picture. For Casey’s method the expression is 5 + 2 x 4 + 2. Slide 4 shows how parts of the diagram can be linked to parts of the expression. Look at the strategies of the students. Are their strategies based on induction? That is sequential processing. For example, 4 , ? , 6, so ? = 5, and 2 x 3, 2 x ?, 2 x 5, so ? = 4. 8. Are their strategies based on deduction? That is reasoning about the structure of any term. For example, the first number is two more than the age, and the multiplier of two is one more than the age. So, for y = 3 Casey’s expression is 5 + 2 x 4 + 2. 9. Pose this problem for students to explore individually or in small co-operative groups: Imagine that Maia celebrates her twentieth birthday. How many squares will she be made of? Find a way to predict the number of squares that Maia is made of for any age in years? 10. Allow students plenty of time to explore the problem. Look for the following: • Do the students record the data systematically? For example, if they draw Maia at age five years. Are their structural counting methods consistent? Is their recording in sequence? • Do students use inductive methods? For example, Maia increases by three squares each year. • Do students use deductive methods? For example, applying Casey’s method Maia should be (20 + 2) + 2 x (20 + 1) + 2 on her twentieth birthday. • How do students express their general rules? Do they use words?, e.g. “I take the age and add two to it to get the first number..” or do they attempt to symbolise their rules, e.g. Next number = number before + 3. 11. Bring the class together to discuss their methods with emphasis on the points above. Acknowledge the legitimacy of inductive methods but also highlight the power of deductive methods. Use questions like, “Which strategy would be better for finding out about Maia at 100 years of age?” #### Session Two This session builds on the Maia, the moa, pattern to represent the relation between age and number of squares using a table, a graph and an equation. Features of these representations are connected through looking at the effect of changing the original spatial pattern with focussed variation. 1. Open Excel or a similar spreadsheet program and create a blank workbook. You may need to have Slide 3 of PowerPoint One available for source data. Ask one of the students to set up a table like this: 2. Ask students what they notice in the table. Some may notice missing values in the Age column, particularly the ages 0, and 1. Others may notice that the number of squares are all multiples of three. They may express this idea inductively, “The number of squares goes up by three.” How can we continue the table to get more values? 3. Induction can be used to ‘fill down’ the values in both columns but deductive rules across the columns are more sophisticated. Video 2A shows how to create values by filling down. Video 2B is about using formulae across the columns. The videos can be stopped at any point for discussion. Video 2B goes straight to the most efficient rule but students could enter the rules they developed in Lesson One. 4. Ask: Can you use Excel to show that your rule from yesterday works? 5. Next a graph is created from the table of values. Video 2C shows how to do this. Ask the students to create their own graph of Maia’s growth patterns and record some features that they notice. Why are the points in a line? (This tell us that the relation is linear) How steep is the line? Note (0, 6) represents Maia’s situation upon hatching. Where does it cross the s axis? Why does it cross there? offers three scenarios in which Maia’s shape is changed in some way. The reason for doing this is to connect features of the table and graph with the spatial pattern. For each scenario students may need to draw the progression of each pattern back until Maia hatches. That will lead a table of values that can be graphed. Video 2D shows what happens when the original Maia growth pattern is altered by a constant, - 1 for losing her foot and + 2 for gaining a backpack. Video 2E and Video 2F show the effect of changing the co-efficient (multiplier) of a, in that the slope of the graph alters from three to four. Copymaster 1 provides printable versions and the start of a table for each. #### Session Three 1. Remind the students of the rule that was entered into Excel to create the pattern in the Number of squares column for Maia’s original growth pattern (e.g. =(A2+2)*3). What does A# represent? (Maia’s age in years, a). So instead of A# we could write = (a + 2) x 3 or = 3(a + 2). What does this expression tell us? (The number of squares Maia is made up of). So we could write s = 3(a + 2). 2. Show the students PowerPoint Three which shows how linear equations can be represented using a length model. Work through the slides. Do the students observe that a is free to take up different values? a is a variable. The twos remain equal in length as the value of a changes. So, +2 is a constant. Pose this problem to the students. 3. Maia is made up of 144 squares. How old is she, in years? This situation constrains s to 144 so a linear equation is created which might be expressed as 3(a + 2) = 144 or in other forms, dependent on the structure of the rule. For example, Katia’s method would yield 3a + 6 = 144. Student may need access to a picture of Maia’s growth pattern, e.g. Slide 3 of PowerPoint One. 4. Look to see whether the students use deductive reasoning or whether they are reliant on inductive methods. For example, inductive methods might involve creating a table of values and extending it until the matching value of a is found. Spreadsheets make inductive methods easy to implement. A sign of reliance on additive methods would be repeated adding of three to find next values of s. Deductive methods involve applying inverse operations to rules. For example, “I divided 144 by three to get 48, so the age plus two must equal 48.” 5. After a suitable time gather the class to discuss their strategies. Highlight the efficiency of deductive rules, which are sometimes referred to as function or direct rules, compared to lengthy inductive rules, which are sometimes referred to as recursive. Slides 5 and 6 show one way to solve the problem of Maia’s age when she is made of 144 squares. 6. The 144 squares problem shows how solving linear equations can lead to solutions efficiently. Play this video which introduces how to use the simplest version of the Visual Linear Algebra learning object. Allow students plenty of time to explore the object. #### Session Four In this session students investigate linear equations where the variable is present on both sides. 1. Begin with a reminder of how to solve linear equations in their simplest form by looking at the structural similarity of possible rules for Maia’s growth pattern. PowerPoint Four gives two possible rules attributed to hypothetical students. The rules may be alike some that the students created in Session One and Two. Slide Four shows the lengths rearranged end on end. 2. Ask: Why do these rules give the same total for any value of a? Do students recognise that both rules can be rearranged to give 3a + 6 which is Katia’s rule? 3. Possibly link the algebraic manipulation that matches the lengths in the diagram is students show interest. For example: (Leah’s rule) 3 (a + 1) + 3 = 3a + 3 + 3 = 3a + 6 (Katia’s rule) 4. Ask the students to use Katia’s rule to solve this problem: Maia the moa is made of 222 squares. How old is Maia? Do students apply inverse operations to both sides of the equation, 3a + 6 = 222, to find the solution? 5. Pose this problem: Ken and Katia are looking at the same picture of Maia. Katia says that the number of squares equals three times Maia’s age plus six. Ken says that the number of squares equals four times Maia’s age minus 18. They are both correct. How old is Maia? 6. Let the students work in small groups to solve the problem. Look for the following: • Do they build up a table of value inductively to find a value for a that meets both conditions? • Do they try values of a and ‘close in’ on the solution? • Do they use their knowledge of equations to solve the problem? 7. Bring the class together to share their solution methods. Trial and improvement strategies can be very efficient in solving these types of problems, especially if the initial attempts are based on reasonable estimation. For example, setting a = 30 gives Ken’s number of squares at 102 and Katia’s at 96. So, is 30 too big or too small? An equation based solution looks like: 3a + 6 = 4a – 18 3a + 24 = 4a (adding 18) 24 = a (subtracting 3a) Note that there are many possible first moves. 8. Introduce the second learning object in the Visual Linear Algebra collection using this video. Allow students plenty of time to explore the tool. #### Session Five This session is intended as an opportunity for students to practice applying their understanding of linear relations and their techniques for solving linear equations. Provide the students with copies of Copymaster 2 and encourage them to solve the problems in co-operative groups. Dear parents and caregivers, This week we are learning about linear relationships. Real life is full of situations where things grow at a constant rate, such as the money we earn for the hours we work, or the total cost related to the quantity we buy. In the unit we will learn to represent linear relationships using tables of values, graphs and equations. We will use spreadsheets to solve problems with linear relations, and use a learning object to solve linear equations.
2021-12-06 05:59:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48498043417930603, "perplexity": 1094.6338769235283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363290.39/warc/CC-MAIN-20211206042636-20211206072636-00553.warc.gz"}
https://glcnx.weihnachtskalender-kaufen.de/two-blocks-a-and-b-are-connected-with-an-ideal-string-is-pulled-horizontally-by-a-force-of-10-n.html
In this case, we can calculate the tension as below. m 1 g - T = m 1 a. m 2 g - T = m 2 a. The acceleration of the two objects can be given by adding the above equations. m 1 g+m 2 g = m 1 a+m 2 a. (m 1 + m 2 )g = a (m 1 + m 2) The acceleration acting on the objects is equal to the acceleration due to gravity. Annette Arroyo 2020-10-18 Answered. Three identical blocks connected by ideal strings are being pulled along a horizontal frictionless surface by a horizontal force F → The magnitude of the tension in the string between blocks B and C is T=3.00N. Assume that each block has mass m=0.400kg. What is the magnitude F of the force?. # Two blocks a and b are connected with an ideal string is pulled horizontally by a force of 10 n ## spam whatsapp termux ### one of the disadvantages of the departmental and the plantwide overhead rate methods is that mql4 hedging code 30ml roller bottles what does it mean when you call someone on messenger and it just beeps new treatments for bipolar disorder 2022 ## what is the difference between 3a2b and 2a 5b daisy bell soundboard home assistant pyscript examples successful after school program models james and sikes funeral home obituaries who is the actress in the rexulti commercial boy mom what Need Help? Call : anthony koletti now • If a force of 24 N is applied to the string connected to R, tensions T (1) and T (2) in the string are. May 30,2022 - Two rectangular blocks A and B of masses 2 kg and 3 kg respectively are connected by a spring of spring constant 10.8 Nm-1and are placed on a frictionless horizontal surface. The block A was given an 'initial velocity of 0.15 ... • The two -block system is released from rest and the blocks accelerate. Which of the following correctly relates the potential energy gained by the block 1-Earth. Two blocks a and b are connected with an ideal string is pulled horizontally by a force of 10 n • Two blocks a and b are connected with an ideal string is pulled horizontally by a force of 10 n N ame 0.1 m/s 8. ( b) (8) If the force F is such that the block will move with constant speed v — down the ramp, calculate the work done by the force F if the block travels the length of the ramp. • See reviews, photos, directions, phone numbers and more for the best Concrete Blocks & Shapes in Lowes , KY. Concrete blocks are also known as concrete masonry units (CMU), which are standard-size rectangular building construction blocks . CMU block sizes are referenced by their nominal — not actual ...
2022-09-26 02:58:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48659268021583557, "perplexity": 1033.9829852168514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00605.warc.gz"}
https://blog.csdn.net/luojie140/article/details/80339420
# 第4~5周(5.14-5.27) Linux应用编程 《Linux 网络编程卷一》《Linux 网络编程卷二》 《《Linux4.0设备驱动开发详解》 主要熟练掌握Linux进程、线程模型、各种进程间通讯方式、网络套接字编程、基础后台模块架构模型 ## 1. 工欲善其事必先利其器. Ubuntu下安装clang:https://blog.csdn.net/straydragon/article/details/79323502 ## 2. 根据checklist设置注释模板 文件头注释范例: /* 功能: 日期:\${TIME} 作者:luojie */ ## 3. 下载本书的源码,遇到很多坑 ### 1. 下载完成后查看README Execute the following from the src/ directory: ./configure # try to figure out all implementation differences cd lib # build the basic library that all programs need make # use "gmake" everywhere on BSD/OS systems cd ../libfree # continue building the basic library make cd ../libroute # only if your system supports 4.4BSD style routing sockets make # only if your system supports 4.4BSD style routing sockets cd ../libxti # only if your system supports XTI make # only if your system supports XTI cd ../intro # build and test a basic client program make daytimetcpcli ./daytimetcpcli 127.0.0.1 If all that works, you're all set to start compiling individual programs. Notice that all the source code assumes tabs every 4 columns, not 8. ### 2. unpv13e/libfree目录下 make报错 inet_ntop.c:60:9: error: argument ‘size’ doesn’t match prototype size_t size; ### 3. unpv13e/libroute目录下 make报错 gcc -I../lib -g -O2 -D_REENTRANT -Wall -c -o get_rtaddrs.o get_rtaddrs.c unproute.h:3:45: fatal error: net/if_dl.h: No such file or directory compilation terminated. /* * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by the University of * California, Berkeley and its contributors. * 4. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * @(#)if_dl.h 8.1 (Berkeley) 6/10/93 */ /* * A Link-Level Sockaddr may specify the interface in one of two * ways: either by means of a system-provided index number (computed * anew and possibly differently on every reboot), or by a human-readable * string such as "il0" (for managerial convenience). * * Census taking actions, such as something akin to SIOCGCONF would return * both the index and the human name. * * High volume transactions (such as giving a link-level from'' address * in a recvfrom or recvmsg call) may be likely only to provide the indexed * form, (which requires fewer copy operations and less space). * * The form and interpretation of the link-level address is purely a matter * of convention between the device driver and its consumers; however, it is * expected that all drivers for an interface of a given if_type will agree. */ /* */ u_char sdl_len; /* Total length of sockaddr */ u_char sdl_family; /* AF_DLI */ u_short sdl_index; /* if != 0, system given index for interface */ u_char sdl_type; /* interface type */ u_char sdl_nlen; /* interface name length, no trailing 0 reqd. */ u_char sdl_slen; /* link layer selector length */ char sdl_data[12]; /* minimum work area, can be larger; contains both if name and ll address */ }; #ifndef KERNEL #include <sys/cdefs.h> __BEGIN_DECLS __END_DECLS #endif /* !KERNEL */ ### 4. RTAX_MAX变量未声明 get_rtaddrs.c:21:18: error: ‘RTAX_MAX’ undeclared (first use in this function) for (i = 0; i < RTAX_MAX; i++) { ^ ### 5. Run linux 现在因为安全问题,各个发行版本默认是不开daytime服务的。 • 一个运行: make daytimetcpsrv sudo ./daytimetcpsrv • 另一个运行 ./daytimetcpcli 127.0.0.1 • 返回结果: Wed May 16 16:37:05 2018 • 广告 • 抄袭 • 版权 • 政治 • 色情 • 无意义 • 其他 120
2019-01-19 21:37:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3220471739768982, "perplexity": 5945.968789402399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583681597.51/warc/CC-MAIN-20190119201117-20190119223117-00611.warc.gz"}
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=1136&parent=2944
## Forum archive 2000-2006 ### John Jones - sending e-mail by Arnold Pizer - Number of replies: 0 sending e-mail topic started 1/21/2003; 2:35:55 PMlast post 1/28/2003; 1:55:16 PM John Jones - sending e-mail  1/21/2003; 2:35:55 PM (reads: 1368, responses: 5) Hi, We just started to use the e-mail feature of WeBWorK. Some of my faculty were confused by the From and Reply-to spots (they simply typed their names in which generated faulty e-mail addresses). Leaving them blank is supposed to have them filled with the feedback address, but that didn't seem to work. I think it is because there is no default.msg file. I think the best solution would be to have the feedback address pre-entered into From and Reply-to fields in this circumstance. It just takes 2 lines of code in profSendMail.pl. I can upload the change if this sounds good. John <| Post or View Comments |> Michael Gage - Re: sending e-mail  1/22/2003; 8:51:15 PM (reads: 1570, responses: 0) Hi John, I've been getting a bit confused by this myself. A change sounds fine to me, but could you also report to the bulletin board here how these two variables and the Global::feedback variable interact? Thanks much. Take care, Mike <| Post or View Comments |> John Jones - Re: sending e-mail  1/23/2003; 5:05:54 PM (reads: 1609, responses: 0) OK, I just uploaded the tiny change. In my first message, I said that the From and Reply-To fields would get the feedback address. That is not quite right; they get what they deserve (Global::defaultFrom and Global::defaultReply respectively). Those, in turn, default to the Global::feedback address, to which mail from the feedback button is sent. So, most people would get the feedback address, but those who have customized the addresses more closely get the right thing. Incidentally, if there is a default.msg file exists and it is empty, then WW goes into an infinite loop. Do you think that is worth fixing? John <| Post or View Comments |> Arnold K. Pizer - Re: sending e-mail  1/27/2003; 10:37:35 AM (reads: 1598, responses: 0) Hi John, Sorry I'm coming to this rather late. I think it would be much better to use the Global::smtpSender variable for this. Here's what it does (look at your webworkCourse.ph file): # In addition, the smtp mail sender (defined in Global.pm) requires a valid single# email address. Normally this is set to the address of the course administrator.# Undeliverable email from the Send Mail page will be returned to the smtpSender address.# Uncomment the line below and enter a valid email address (if you leave it commented# out the webmaster's email address from Global.pm will be used).$Global::smtpSender = 'apizer@math.rochester.edu'; If not defined in webworkCourse.ph , it defaults to (from Global.pm) $smtpSender = \$webmaster; # should be redefined for each course in webworkCourse.ph Arnie <| Post or View Comments |>
2023-03-20 22:12:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20198726654052734, "perplexity": 5145.519620746656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943562.70/warc/CC-MAIN-20230320211022-20230321001022-00122.warc.gz"}
https://zbmath.org/?q=1030.22003
# zbMATH — the first resource for mathematics Hausdorff measure of the singular set of quasiregular maps on Carnot groups. (English) Zbl 1030.22003 It is proved that the $$(i-m-1)$$-dimensional Hausdorff measure of the image of the branch set of a quasiregular mapping on the Carnot group is positive ($$i$$ is the homogeneous dimension of the Carnot group and $$m$$ is the index of the last vector space of the corresponding Lie algebra). Reviewer: A.Neagu (Iaşi) ##### MSC: 22E30 Analysis on real and complex Lie groups 30C65 Quasiconformal mappings in $$\mathbb{R}^n$$, other generalizations 43A80 Analysis on other specific Lie groups Full Text:
2021-09-19 01:30:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5448980331420898, "perplexity": 455.5772454848781}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056656.6/warc/CC-MAIN-20210919005057-20210919035057-00355.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/i-appear-lost-steps-2-3-youexplain-intermediate-steps-dropterms-thanks-q102234
I appear to get lost between steps 2 and 3, can youexplain the intermediate steps in detail?  Where do we dropterms, etc?  Thanks.
2013-06-19 02:01:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8935092687606812, "perplexity": 2220.340724614439}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707439689/warc/CC-MAIN-20130516123039-00092-ip-10-60-113-184.ec2.internal.warc.gz"}
https://bookdown.org/egarpor/NP-UC3M/app-reg-lin.html
## B.1 Linear regression ### B.1.1 Model formulation and least squares The multiple linear regression employs multiple predictors $$X_1,\ldots,X_p\,$$275 for explaining a single response $$Y$$ by assuming that a linear relation of the form \begin{align} Y=\beta_0+\beta_1 X_1+\cdots+\beta_p X_p+\varepsilon \tag{B.1} \end{align} holds between the predictors $$X_1,\ldots,X_p$$ and the response $$Y$$. In (B.1), $$\beta_0$$ is the intercept and $$\beta_1,\ldots,\beta_p$$ are the slopes, respectively. $$\varepsilon$$ is a random variable with mean zero and independent from $$X_1,\ldots,X_p$$. Another way of looking at (B.1) is \begin{align} \mathbb{E}[Y|X_1=x_1,\ldots,X_p=x_p]=\beta_0+\beta_1x_1+\cdots+\beta_px_p, \tag{B.2} \end{align} since $$\mathbb{E}[\varepsilon|X_1=x_1,\ldots,X_p=x_p]=0$$. Therefore, the expectation of $$Y$$ is changing in a linear fashion with respect to the values of $$X_1,\ldots,X_p$$. Hence the interpretation of the coefficients: • $$\beta_0$$: is the expectation of $$Y$$ when $$X_1=\ldots=X_p=0$$. • $$\beta_j$$, $$1\leq j\leq p$$: is the additive increment in expectation of $$Y$$ for an increment of one unit in $$X_j=x_j$$, provided that the remaining variables do not change. Figure B.1 illustrates the geometrical interpretation of a multiple linear model: a plane in the $$(p+1)$$-dimensional space. If $$p=1$$, the plane is the regression line for simple linear regression. If $$p=2$$, then the plane can be visualized in a three-dimensional plot. The estimation of $$\beta_0,\beta_1,\ldots,\beta_p$$ is done by minimizing the so-called Residual Sum of Squares (RSS). We first need to introduce some helpful notation for this and the next section: • A sample of $$(X_1,\ldots,X_p,Y)$$ is denoted by $$(X_{11},\ldots,X_{1p},Y_1),\ldots$$, $$(X_{n1},\ldots,X_{np},Y_n)$$, where $$X_{ij}$$ denotes the $$i$$-th observation of the $$j$$-th predictor $$X_j$$. We denote with $$\mathbf{X}_i=(X_{i1},\ldots,X_{ip})'$$ to the $$i$$-th observation of $$(X_1,\ldots,X_p)$$, so the sample is $$(\mathbf{X}_{1},Y_1),\ldots,(\mathbf{X}_{n},Y_n)$$. • The design matrix contains all the information of the predictors and a column of ones \begin{align*} \mathbf{X}=\begin{pmatrix} 1 & X_{11} & \cdots & X_{1p}\\ \vdots & \vdots & \ddots & \vdots\\ 1 & X_{n1} & \cdots & X_{np} \end{pmatrix}_{n\times(p+1)}. \end{align*} • The vector of responses $$\mathbf{Y}$$, the vector of coefficients $$\boldsymbol\beta$$, and the vector of errors are, respectively, \begin{align*} \mathbf{Y}=\begin{pmatrix} Y_1 \\ \vdots \\ Y_n \end{pmatrix}_{n\times 1},\quad\boldsymbol\beta=\begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{pmatrix}_{(p+1)\times 1},\text{ and } \boldsymbol\varepsilon=\begin{pmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_n \end{pmatrix}_{n\times 1}. \end{align*} Thanks to the matrix notation, we can turn the sample version of the multiple linear model, namely \begin{align*} Y_i=\beta_0 + \beta_1 X_{i1} + \ldots +\beta_p X_{ip} + \varepsilon_i,\quad i=1,\ldots,n, \end{align*} into something as compact as \begin{align*} \mathbf{Y}=\mathbf{X}\boldsymbol\beta+\boldsymbol\varepsilon. \end{align*} The RSS for the multiple linear regression is \begin{align} \text{RSS}(\boldsymbol\beta):=&\,\sum_{i=1}^n(Y_i-\beta_0-\beta_1X_{i1}-\ldots-\beta_pX_{ip})^2\nonumber\\ =&\,(\mathbf{Y}-\mathbf{X}\boldsymbol{\beta})'(\mathbf{Y}-\mathbf{X}\boldsymbol{\beta}).\tag{B.3} \end{align} $$\text{RSS}(\boldsymbol\beta)$$ aggregates the squared vertical distances from the data to a regression plane given by $$\boldsymbol\beta$$. Note that the vertical distances are considered because we want to minimize the error in the prediction of $$Y$$. Thus, the treatment of the variables is not symmetrical276; see Figure B.2. The least squares estimators are the minimizers of (B.3): \begin{align*} \hat{\boldsymbol{\beta}}:=\arg\min_{\boldsymbol{\beta}\in\mathbb{R}^{p+1}} \text{RSS}(\boldsymbol{\beta}). \end{align*} Luckily, thanks to the matrix form of (B.3), it is simple to compute a closed-form expression for the least squares estimates: \begin{align} \hat{\boldsymbol{\beta}}=(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}.\tag{B.4} \end{align} Exercise B.1 $$\hat{\boldsymbol{\beta}}$$ can be obtained by differentiating (B.3). Prove it using that $$\frac{\partial \mathbf{A}\mathbf{x}}{\partial \mathbf{x}}=\mathbf{A}$$ and $$\frac{\partial f(\mathbf{x})'g(\mathbf{x})}{\partial \mathbf{x}}=f(\mathbf{x})'\frac{\partial g(\mathbf{x})}{\partial \mathbf{x}}+g(\mathbf{x})'\frac{\partial f(\mathbf{x})}{\partial \mathbf{x}}$$ for two vector-valued functions $$f$$ and $$g$$. Figure B.2: The least squares regression plane $$y=\hat\beta_0+\hat\beta_1x_1+\hat\beta_2x_2$$ and its dependence on the kind of squared distance considered. Application also available here. Let’s check that indeed the coefficients given by R’s lm are the ones given by (B.4) in a toy linear model. # Generates 50 points from a N(0, 1): predictors and error set.seed(34567) x1 <- rnorm(50) x2 <- rnorm(50) x3 <- x1 + rnorm(50, sd = 0.05) # Make variables dependent eps <- rnorm(50) # Responses y_lin <- -0.5 + 0.5 * x1 + 0.5 * x2 + eps y_qua <- -0.5 + x1^2 + 0.5 * x2 + eps y_exp <- -0.5 + 0.5 * exp(x2) + x3 + eps # Data data_animation <- data.frame(x1 = x1, x2 = x2, y_lin = y_lin, y_qua = y_qua, y_exp = y_exp) # Call lm # lm employs formula = response ~ predictor1 + predictor2 + ... # (names according to the data frame names) for denoting the regression # to be done mod <- lm(y_lin ~ x1 + x2, data = data_animation) summary(mod) ## ## Call: ## lm(formula = y_lin ~ x1 + x2, data = data_animation) ## ## Residuals: ## Min 1Q Median 3Q Max ## -2.37003 -0.54305 0.06741 0.75612 1.63829 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -0.5703 0.1302 -4.380 6.59e-05 *** ## x1 0.4833 0.1264 3.824 0.000386 *** ## x2 0.3215 0.1426 2.255 0.028831 * ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.9132 on 47 degrees of freedom ## Multiple R-squared: 0.276, Adjusted R-squared: 0.2452 ## F-statistic: 8.958 on 2 and 47 DF, p-value: 0.0005057 # mod is a list with a lot of information # str(mod) # Long output # Coefficients mod\$coefficients ## (Intercept) x1 x2 ## -0.5702694 0.4832624 0.3214894 # Application of formula (3.4) # Matrix X X <- cbind(1, x1, x2) # Vector Y Y <- y_lin # Coefficients beta <- solve(t(X) %*% X) %*% t(X) %*% Y beta ## [,1] ## -0.5702694 ## x1 0.4832624 ## x2 0.3214894 Exercise B.2 Compute $$\boldsymbol{\beta}$$ for the regressions y_lin ~ x1 + x2, y_qua ~ x1 + x2, and y_exp ~ x2 + x3 using equation (B.4) and the function lm. Check that the fitted plane and the coefficient estimates are coherent. Once we have the least squares estimates $$\hat{\boldsymbol{\beta}}$$, we can define the following two concepts: • The fitted values $$\hat Y_1,\ldots,\hat Y_n$$, where \begin{align*} \hat Y_i:=\hat\beta_0+\hat\beta_1X_{i1}+\cdots+\hat\beta_pX_{ip},\quad i=1,\ldots,n. \end{align*} They are the vertical projections of $$Y_1,\ldots,Y_n$$ into the fitted line (see Figure B.2). In a matrix form, inputting (B.3) \begin{align*} \hat{\mathbf{Y}}=\mathbf{X}\hat{\boldsymbol{\beta}}=\mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}=\mathbf{H}\mathbf{Y}, \end{align*} where $$\mathbf{H}:=\mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'$$ is called the hat matrix because it “puts the hat into $$\mathbf{Y}$$.” What it does is to project $$\mathbf{Y}$$ into the regression plane (see Figure B.2). • The residuals (or estimated errors) $$\hat \varepsilon_1,\ldots,\hat \varepsilon_n$$, where \begin{align*} \hat\varepsilon_i:=Y_i-\hat Y_i,\quad i=1,\ldots,n. \end{align*} They are the vertical distances between actual and fitted data. ### B.1.2 Model assumptions Observe that $$\hat{\boldsymbol{\beta}}$$ was derived from purely geometrical arguments, not probabilistic ones. That is, we have not made any probabilistic assumption on the data generation process. However, some probabilistic assumptions are required to infer the unknown population coefficients $$\boldsymbol{\beta}$$ from the sample $$(\mathbf{X}_1, Y_1),\ldots,(\mathbf{X}_n, Y_n)$$. The assumptions of the multiple linear model are: 1. Linearity: $$\mathbb{E}[Y|X_1=x_1,\ldots,X_p=x_p]=\beta_0+\beta_1x_1+\cdots+\beta_px_p$$. 2. Homoscedasticity: $$\mathbb{V}\text{ar}[\varepsilon_i]=\sigma^2$$, with $$\sigma^2$$ constant for $$i=1,\ldots,n$$. 3. Normality: $$\varepsilon_i\sim\mathcal{N}(0,\sigma^2)$$ for $$i=1,\ldots,n$$. 4. Independence of the errors: $$\varepsilon_1,\ldots,\varepsilon_n$$ are independent. A good one-line summary of the linear model is the following (independence is assumed): \begin{align} Y|(X_1=x_1,\ldots,X_p=x_p)\sim \mathcal{N}(\beta_0+\beta_1x_1+\cdots+\beta_px_p,\sigma^2).\tag{B.5} \end{align} Inference on the parameters $$\boldsymbol\beta$$ and $$\sigma$$ can be done, conditionally277 on $$X_1,\ldots,X_n$$, from (B.5). We do not explore this further, referring the interested reader to, e.g., Section 2.4 in . Instead, we remark the connection between least squares estimation and the maximum likelihood estimator derived from (B.5). First, note that (B.5) is the population version of the linear model (it is expressed in terms of the random variables). The sample version that summarizes assumptions i–iv is \begin{align*} \mathbf{Y}|\mathbf{X}\sim \mathcal{N}_n(\mathbf{X}\boldsymbol{\beta},\sigma^2\mathbf{I}_n). \end{align*} Using this result, it is easy to obtain the log-likelihood function of $$Y_1,\ldots,Y_n$$ conditionally on $$\mathbf{X}_1,\ldots,\mathbf{X}_n$$ as278 \begin{align} \ell(\boldsymbol{\beta})=\log\phi_{\sigma^2\mathbf{I}_n}(\mathbf{Y}-\mathbf{X}\boldsymbol{\beta})=\sum_{i=1}^n\log\phi_{\sigma}(Y_i-(\mathbf{X}\boldsymbol{\beta})_i).\tag{B.6} \end{align} Finally, the following result justifies the consideration of the least squares estimate: it equals the maximum likelihood estimator derived under assumptions i–iv. Theorem B.1 Under assumptions i–iv, the maximum likelihood estimator of $$\boldsymbol{\beta}$$ is the least squares estimate (B.4): \begin{align*} \hat{\boldsymbol{\beta}}_\mathrm{ML}=\arg\max_{\boldsymbol{\beta}\in\mathbb{R}^{p+1}}\ell(\boldsymbol{\beta})=(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}\mathbf{Y}. \end{align*} Proof. Expanding the first equality at (B.6) gives279 \begin{align*} \ell(\boldsymbol{\beta})=-\log((2\pi)^{n/2}\sigma^n)-\frac{1}{2\sigma^2}(\mathbf{Y}-\mathbf{X}\boldsymbol{\beta})'(\mathbf{Y}-\mathbf{X}\boldsymbol{\beta}). \end{align*} Optimizing $$\ell$$ does not require knowledge on $$\sigma^2$$, since differentiating with respect to $$\boldsymbol{\beta}$$ and equating to zero gives (see Exercise B.1) $$\frac{1}{\sigma^2}(\mathbf{Y}-\mathbf{X}\boldsymbol{\beta})'\mathbf{X}=\mathbf{0}$$. Solving the equation gives the form for $$\hat{\boldsymbol{\beta}}_\mathrm{ML}$$. Exercise B.3 Conclude the proof of Theorem B.1. ### References García-Portugués, E. 2021. Notes for Predictive Modeling. https://bookdown.org/egarpor/PM-UC3M/. 1. Not to confuse with a sample!↩︎ 2. If that were the case, we would consider perpendicular distances, which yield Principal Component Analysis (PCA).↩︎ 3. We assume that the randomness is on the response only.↩︎ 4. Recall that $$\phi_{\boldsymbol{\Sigma}}(\cdot-\boldsymbol{\mu})$$ and $$\phi_{\sigma}(\cdot-\mu)$$ denote the pdf of a $$\mathcal{N}_p(\boldsymbol{\mu},\boldsymbol{\Sigma})$$ and a $$\mathcal{N}(\mu,\sigma^2)$$, respectively.↩︎ 5. Since $$|\sigma^2\mathbf{I}_n|^{1/2}=\sigma^{n}$$.↩︎
2021-09-23 20:33:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9812126159667969, "perplexity": 1504.0468088071075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057447.52/warc/CC-MAIN-20210923195546-20210923225546-00070.warc.gz"}
https://www.nature.com/articles/s41467-019-09699-5?error=cookies_not_supported&code=13152040-3ed1-4dee-aec4-b12c47f45ff0
# Out-of-equilibrium quantum magnetism and thermalization in a spin-3 many-body dipolar lattice system ## Abstract Understanding quantum thermalization through entanglement build up in isolated quantum systems addresses fundamental questions on how unitary dynamics connects to statistical physics. Spin systems made of long-range interacting atoms offer an ideal experimental platform to investigate this question. Here, we study the spin dynamics and approach towards local thermal equilibrium of a macroscopic ensemble of S = 3 chromium atoms pinned in a three dimensional optical lattice and prepared in a pure coherent spin state, under the effect of magnetic dipole–dipole interactions. Our isolated system thermalizes under its own dynamics, reaching a steady state consistent with a thermal ensemble with a temperature dictated from the system’s energy. The build up of quantum correlations during the dynamics is supported by comparison with an improved numerical quantum phase-space method. Our observations are consistent with a scenario of quantum thermalization linked to the growth of entanglement entropy. ## Introduction Ultra-cold atomic systems featuring long-range interactions are becoming ideal platforms for probing strongly correlated out-of-equilibrium quantum behavior and, in particular, the phenomenon of quantum magnetism, where magnetic moments with quantized energy levels (spins) interact with one another1,2,3. Their appeal stems from the fact that they feature internal levels that can be initialized in pure states and coherently evolved with controllable long-range interactions even under frozen conditions. Recently great advances have been accomplished, but, so far have been mostly limited to small systems (hundreds or fewer particles)4,5,6,7,8,9,10,11,12, or to dilute disordered molecular ensembles13,14. Magnetic quantum dipoles featuring sizable magnetic moments offer unique opportunities since magnetic interactions can directly happen in an enlarged set of low-lying hyperfine Zeeman levels and are not forbidden by parity and time-reversal symmetry as is the case with electric dipoles15. They offer untapped opportunities as a quantum resource since S > 1/2 spin models have more complexity and cost exponentially more resources to classically simulate16,17. In fact, the exploration of the complex non-equilibrium dynamics of dipolar-coupled S > 1/2 spin models remains a fascinating territory which only starts to be explored18,19. Here we compare our experimental observations of the seven Zeeman populations of an initial spin coherent state made of S = 3 spin particles, with different models. We find that our data compare well with exact short time calculations and semiclassical simulations based on a discrete Monte Carlo sampling in phase space20,21,22. We show that the steady state reached at long times is captured by a statistical ensemble with nonzero thermodynamic entropy, by deriving a simple analytical formula, which compares well to both data and semiclassical simulations. We finally study the growth of entanglement by computing the Renyi entropy associated with a local spin. Our studies confirm a scenario of quantum thermalization as a result of the entanglement accumulated during the dynamics. ## Results ### Realization of an XXZ Heisenberg spin model In our system the spin degree of freedom is encoded in the Zeeman levels of the purely electronic S = 3 ground state of 52Cr atoms. The experiment starts with the production of a spin-3 Bose-Einstein condensate (BEC) of ~4 × 104 atoms in the mS = −3 state, following the procedure described in ref. 23. We then adiabatically load the BEC into a three dimensional (3D) optical lattice made by laser beams at 532 nm18. The lattice structure is rectangular in the horizontal plane, and uses a standard retro-reflecting scheme on the vertical axis (see Methods). After loading the atoms into a deep optical lattice, the sample forms a Mott insulator consisting of a core with doubly occupied sites $$\left( {\bar n = 2} \right)$$, surrounded by a 3D shell of singly occupied sites $$\left( {\bar n = 1} \right)$$, see Fig. 1. The experimental procedure to induce spin dynamics is shown in Fig. 1. We initialize the system in a well-characterized state consisting of a macroscopic array of long-lived singly occupied sites close to unit filling, by performing first a filtering protocol. It relies on dipolar relaxation18 to empty all doubly occupied sites within the $$\bar n = 2$$ Mott core after the application of a π rf pulse that promotes the atoms to the most energetic spin state mS = 3. The filtering protocol takes about 7 ms (see Methods). To trigger the spin dynamics we then apply a second rf pulse. This rotates the coherent spin state, such that it forms an angle θ with respect to the external magnetic field which sets the quantization axis (see Fig. 1). This prepares a tilted spin coherent state. The spin dynamics is studied by monitoring the time evolution of the population of the different Zeeman states, using absorption imaging after a Stern–Gerlach separation procedure23. A unit-filled array of frozen magnetic dipoles in a lattice interact via dipolar interactions. In the presence of an external magnetic B field strong enough to generate Zeeman splittings larger than nearest-neighbor dipolar interactions, only those processes that conserve the total magnetization are energetically allowed and the dynamics is described by the following secular Hamiltonian18 (with B along the z axis): $$\hat H = \mathop {\sum}\limits_{i > j} {V_{ij}} \left[ {\hat S_i^z\hat S_j^z - \frac{1}{2}\left( {\hat S_i^x\hat S_j^x + \hat S_i^y\hat S_j^y} \right)} \right]$$ (1) where the sum runs over all pairs of atoms (i,j). It corresponds to a XXZ Heisenberg model with dipolar couplings $$V_{ij} \equiv \frac{{{\mu} _0(g{\mu} _{\mathrm{B}})^2}}{{4\pi }}\left( {\frac{{1 - 3{\mathrm{cos}}^2{\phi} _{(i,j)}}}{{r_{(i,j)}^3}}} \right)$$. Here μ0 is the magnetic permeability of vacuum, $$g \simeq 2$$ is the Landé factor, and μB the Bohr magneton. r(i,j) is the distance between atoms, and ϕ(i,j) the angle between their inter-atomic axis and the external magnetic field. The Hamiltonian is given in terms of spin-3 angular momentum operators, $$\widehat {\mathbf{S}}_i = \{ \hat S_i^x,\hat S_i^y,\hat S_i^z\}$$, associated to atom i. An important feature is that the dynamical redistribution of populations can happen for large spins (S > 1/2), even though both the total particle number N and the collective magnetization $$M = \langle \hat S^z\rangle$$ are conserved quantities (with $$\hat S^{x,y,z} = \mathop {\sum}\nolimits_{j = 1}^N {\hat S_i^{x,y,z}}$$). The magnetic dipolar interaction energy between S = 3 spins is 36 times larger than the one for S = 1/2 alkali atoms, allowing us to probe such population dynamics at milliseconds time scales, as seen in Fig. 2. ### Perturbation theory We will first introduce the expected basic dynamical features according to time-dependent perturbation theory. We will focus on the main differences when assuming a classical behavior, or when taking into account quantum correlations. The simplest possible picture for the population dynamics relies on a mean-field treatment (i.e., neglecting quantum correlations), where each atom undergoes Larmor precession around an effective dipolar field created by all the other spins, $$\hat H^{MF} = \mathop {\sum}\nolimits_{i = 1}^N {{\mathbf{B}}_i^{{\mathrm{eff}}}} \cdot \widehat {\mathbf{S}}_i$$, with $${\mathrm{B}}_i^{{\mathrm{eff}}} = - \mathop {\sum}\nolimits_{j = 1}^N \frac{{V_{ij}}}{2}\{ \langle \hat S_j^x\rangle ,\langle \hat S_j^y\rangle , - 2\langle \hat S_j^z\rangle \}$$. Time-dependent perturbation theory yields the following equation for $$p_{m_S}$$, the relative population of Zeeman level mS = −3, …, 3: $$p_{m_S}^{{\mathrm{MF}}}(t) = p_{m_S}(0) + {\mathrm{sin}}[\theta ]^4\alpha _{m_S}(\theta )t^2{\cal{K}}_{\mathrm{d}}(t) + {\cal{O}}(t^6V_{ij}^6),$$ (2) We give in the Methods section exact formulas for $$\alpha _{m_S}(\theta )$$. For instance, $$\alpha _{m_S = \{ - 3, - 2, - 1,0,1,2,3\} }(\pi /2) = 135/512 \times \{ 1,2, - 1, - 4, - 1,2,1\}$$. Here $${\cal{K}}_{\mathrm{d}}(t) \equiv \frac{{t^2}}{{2N}}\mathop {\sum}\nolimits_{i = 1}^N \left[ {\mathop {\sum}\nolimits_{j \ne i}^N V_{ij}B_{ij}^{{\mathrm{dih}}}} \right]^2$$. Thus classical dynamics is driven by the dipolar field $$B_{ij}^{{\mathrm{dih}}} = - 9/2\mathop {\sum}\nolimits_{k \ne j,i}^N (V_{ki} - V_{kj}){\mathrm{cos}}\theta$$. For a homogeneous gas, Bdih vanishes, and the population dynamics with it. This behavior remains valid at all times given that, by preparation, all spins point along the same direction initially, they precess around the same classical dipolar field, and thus evolve identically. Therefore, the local magnetization $$\langle S_i^z\rangle = M/N$$ remains constant for each spin, cancelling population dynamics altogether. On the other hand, in a trapped gas, the inhomogeneous dipolar field introduces a differential precession rate between spins, which results in population dynamics. Note that Bdih is determined by border effects and also that $${\cal{K}}_{\mathrm{d}}(t)$$ itself is time dependent (t2). Therefore, classically, population redistribution is a slow t4 process. We emphasize that Bdih is proportional to cos θ and thus vanishes when θ = π/2 where no mean-field dynamics takes place at all. Quantum fluctuations can drastically modify this behavior and induce much faster population dynamics even for a homogeneous gas24. Second-order time-dependent perturbation theory on the exact Hamiltonian25,26 in Eq. (1) yields: $$p_{m_S}(t) = p_{m_S}(0) + {\mathrm{sin}}[\theta ]^4\alpha _{m_S}(\theta )t^2V_{{\mathrm{eff}}}^2 + {\cal{O}}(t^4V_{ij}^4).$$ (3) In contrast to the mean-field case, the dynamics grows as t2 and is driven by $$V_{{\mathrm{eff}}} \equiv \sqrt {\mathop {\sum}\nolimits_{i,j \ne i}^N V_{ij}^2/N}$$. We emphasize the relatively fast decay of $$V_{{\mathrm{eff}}}^2$$ with interparticle distance r (as r−6), which makes the short time evolution mainly determined by the nearest-neighbor interactions. As Veff is independent of θ the tipping angle θ provides a way to study an out-of-equilibrium magnetism increasingly determined by quantum correlations as θ → π/2. In the experiment, external systematics such as quadratic Zeeman fields, BQ, generated by tensorial light shifts induced by the lattice lasers—with eigenenergies $$B_{\mathrm{Q}}m_S^2$$—or inhomogeneities associated with magnetic field gradients, Δij = Bi − Bj, need to be accounted for. Their role in the short time dynamics can be understood using perturbation theory (see Methods). Quadratic Zeeman fields can be accounted for by replacing $${\cal{K}}_{\mathrm{d}}(t) \to {\cal{K}}_{\mathrm{d}}(t) - 4/3Q^2$$ and $$V_{{\mathrm{eff}}}^2 \to V_{{\mathrm{eff}}}^2 - 4/3Q^2$$ in the classical and quantum cases, respectively, with $$Q^2 \equiv \frac{1}{N}\mathop {\sum}\nolimits_{j \ne i}^N V_{ij}B_{\mathrm{Q}}$$. At the mean-field level Δij directly renormalizes $$B_{ij}^{{\mathrm{dih}}} \to B_{ij}^{{\mathrm{dih}}} + {\mathrm{\Delta }}_{ij}$$. Thus dipolar inhomogeneities and magnetic field gradients are in direct competition. In the quantum case magnetic field gradients also enter as t4 but in this case they play a subdominant role since the leading dipolar dynamics is significantly faster (t2). ### Numerical methods to model the spin dynamics Although perturbation theory allows emphasizing some of the main qualitative differences in the classical and in the quantum regime, to accurately describe the population dynamics we need to go beyond perturbation theory. To accomplish that we parameterize each spin i by a generalized Bloch vector, $$\vec \lambda ^{[i]}$$. In contrast to spin-1/2 systems, this vector is a 48-dimensional object that determines all independent elements of the 7 × 7 (= (2S + 1)2) individual spin-3 density matrices, $$\hat \rho _i(\vec \lambda ^{[i]})$$27. Inserting the product-state ansatz of the system density matrix, $$\hat \rho = \mathop {\prod}\nolimits_{i = 1}^N \hat \rho _i(\vec \lambda ^{[i]})$$, into the von-Neumann equation, $$d\hat \rho /dt = ( - {\mathrm{i}}/\hbar )[\hat H,\hat \rho ]$$, yields N × 48 independent non-linear mean-field equations, in which each generalized Bloch vector evolves in the field of the others. The mean-field “classical” results are obtained by numerically integrating these equations of motion (see Methods). To capture the build up of quantum correlations we developed a generalization of a semiclassical method (generalized discrete truncated Wigner approximation, GDTWA) based on a discrete Monte Carlo sampling in phase space originally derived in the framework of the so-called truncated Wigner approximation (TWA)28. It describes the initial state in terms of a probability distribution. Initial spin coherent states are ideal since they can be fully described by a positive discrete probability distribution. For spin-1/2 systems, randomly sampling this initial “Wigner function”, leads to the discrete truncated Wigner approximation (DTWA)20,21,22, an approximation that has been remarkably successful and can capture complex quantum aspects of spin dynamics. In contrast to the spin-1/2 case, here (GDTWA) the discrete probabilities are not provided by the eigenvalues of the three Pauli matrices, but instead by the eigenvalues of the corresponding 48 generalized SU(7) generators (see Methods). This semiclassical approach is benchmarked by comparison with exact diagonalization predictions (see Supplementary Note 1 and Supplementary Fig. 1). ### Comparisons between experiment and numerical simulations We now describe how our data compare with simulations for different values of θ. In Fig. 2, we show our data and the comparisons to both the classical and the GDTWA models. The theoretical models take into account the 3D lattice structure and the measured magnetic field gradients along all three directions. We also include the weak quadratic Zeeman field present in the experiment. Since we could not measure it directly we allow it to be a fitting parameter (see Supplementary Note 2 and Supplementary Fig. 2). For each of the four tilting angles used for the measurements we plot the evolution of fractional populations in different Zeeman states. We only plot the most relevant Zeeman states (the most populated for most of the angles, and only the negative Zeeman states for the symmetric case π/2—see Supplementary Fig. 3 for extensive data). Experimentally, we find that the amplitude of spin dynamics (i.e., the amplitude of the variations of the populations in the different spin states) is stronger when the angle increases. At small angles the experimental data are qualitatively reproduced by both classical and GDTWA simulations. As can be seen in Fig. 2, both simulations then yield similar results, but nevertheless show systematic differences. This shows that even at the smallest angles that we have probed, beyond mean-field effects are in principle already at play. However, given the signal-to-noise ratio in the experimental data, it is difficult to quantify the contribution of beyond mean-field effects to the spin dynamics at weak rotations. When increasing the angle, it becomes increasingly clear that only the beyond-mean-field simulation accounts for the observed dynamics, both at short times (t < 20 ms, see Fig. 2) and at long times (t > 40 ms, see Fig. 3a). We have performed a systematic study in our numerical simulations, by varying the size of the system which we use. We find that a good agreement between experiment and beyond mean-field theory is only reached, provided the number of interacting spins in the simulation is larger than about 60 (see Supplementary Note 4 and Supplementary Fig. 4). Taken together, these data show that spin dynamics after the initial quench is inherently many-body, and beyond the grasp of mean-field models. As can also be seen in Fig. 2, our experimental data at short times are also in excellent agreement with the exact dynamics calculated within the framework of second-order time-dependent perturbation theory (see Eq. (3)). Also in good agreement with this equation, we find the dynamics at short times to be roughly independent of the magnetic field gradient applied to the sample (up to values >30 MHz m−1; see Supplementary Note 5 and Supplementary Fig. 5). In contrast, we point out that the experimental data at short times systematically show faster dynamics than predicted in the classical picture, whose initial t4 dependence (see Eq. (2)) fails to reproduce the experimental observations. Finally, we checked that the effect of imperfections in the lattice preparation on GDTWA predictions is small (see Supplementary Note 6 and Supplementary Fig. 6). ### Models for thermalization at long times For an isolated system, entanglement build up after a quench into a non-equilibrium situation is tied to the scenario of quantum thermalization. To support the relevance of quantum correlations during dynamics, we thus analyze the long-time behavior of the populations. For all tipping angles θ, we observe that the experimental system approaches a steady state, which is in agreement with predictions of closed system quantum thermalization, given, e.g., by the eigenstate thermalization hypothesis (ETH)29,30. In particular, we find that the long-time average populations are very well described by the effective thermal distribution $$\hat \rho _{{\mathrm{cT}}}(\beta ,\mu ) = \frac{{e^{ - \beta \hat H_{\mathrm{T}} - \mu \hat S^z}}}{{{\mathrm{tr}}[e^{ - \beta \hat H_{\mathrm{T}} - \mu {\hat S}^z}]}}$$ where the chemical potential μ and inverse temperature β = 1/kBT are set by the energy and magnetization of the initial pure state: $$\left\langle {\hat H_{\mathrm{T}}} \right\rangle = {\mathrm{tr}}\left[ {\hat \rho _{{\mathrm{cT}}}\left( {\beta ,\mu } \right)\hat H_{\mathrm{T}}} \right]\quad \left\langle {\hat S^z} \right\rangle = {\mathrm{tr}}\left[ {\hat \rho _{{\mathrm{cT}}}\left( {\beta ,\mu } \right)\hat S^z} \right],$$ (4) which are conserved throughout the evolution. Here $$\hat H_{\mathrm{T}} = \hat H + \mathop {\sum}\nolimits_i B_{\mathrm{Q}}(\hat S_i^z)^2$$ is the total Hamiltonian. As shown in Fig. 2, the steady-state populations approach the ones (indicated by the arrows for all tipping angles) dictated by the thermal ensemble when simply setting β = 0 (in which case the maximum-entropy state only depends on magnetization). For angles close to π/2, however, where quantum effects are most significant, we find a deviation compared with this simplistic prediction. We therefore proceed to study this interesting regime. In Fig. 3 we show dynamics up to longer times, and confirm that a steady state is indeed reached after 40 ms for the π/2 case, a feature that the GDTWA simulation reproduces, while classical simulations predict an oscillatory behavior for a much longer duration. This qualitative difference between the classical and the quantum behavior is associated with the different origin of thermalization in both pictures: while quantum-mechanically thermalization is tied to the growth of entanglement, classically, reaching a steady state in a system of frozen particles is a consequence of the single-particle dephasing induced by field inhomogeneities. The inhomogeneous fields arise either from external fields, or from effective fields generated on one particle from the mean-field interactions with the surrounding particles. This behavior differs from the typical thermalization scenario in mobile particles where collisions can classically change both motional and internal degrees of freedom while redistributing energies and momenta29,31,32,33. Our observations clearly rule out a simple mean-field (classical) behavior. Most interestingly, we find a very good agreement between the experimental data points taken at long times, i.e. after the system has reached its steady state, if instead of setting β = 0 we account for the corrections generated by the quadratic Zeeman field BQ, and the finite but small energy of the initial state in Eq. (4). Using a simple perturbative approach (see Methods) we obtain that the β(0) = μ(0) = 0 solutions should be replaced by $${\beta ^{(1)} = \frac{{{\mathrm{tr}}[\hat \rho _{{\mathrm{cT}}}(0,0)\hat H_{\mathrm{T}}] - \langle \hat H_{\mathrm{T}}\rangle }}{{{\mathrm{tr}}[\hat \rho _{{\mathrm{cT}}}(0,0)\hat H_{\mathrm{T}}^2] - {\mathrm{tr}}[\hat \rho _{{\mathrm{cT}}}(0,0)\hat H_{\mathrm{T}}]^2}} = \frac{{5B_{\mathrm{Q}} + 9\bar V}}{{24V_{{\mathrm{eff}}}^2 + 24B_{\mathrm{Q}}^2}}\quad \quad \mu ^{(1)} = 0}$$ (5) $$p_{m_S}(t_{{\mathrm{SS}}}) \approx {\mathrm{tr}}[\hat \rho _{{\mathrm{cT}}}(\beta ^{(1)},0)\hat p_{m_S}] = \frac{1}{7}(1 - \beta ^{(1)}B_{\mathrm{Q}}(m_S^2 - 4)),$$ (6) with $$\bar V \equiv 1/N\mathop {\sum}\nolimits_{i > j} V_{ij}$$. We find $$\bar V/h \approx - 0.57$$ Hz and $$V_{{\mathrm{eff}}}/h \approx 6.13\,{\mathrm{Hz}}$$ for our lattice geometry. As $$\bar V < 0$$, negative temperatures are expected for low enough BQ (as allowed for a system whose maximum energy is bounded). Figure 3b shows a very good agreement between the equilibrium data and the analytical model (see Supplementary Note 7 and Supplementary Fig. 8 for extensive equilibrium data). For this comparison, there is no free parameter, since we use the value of BQ for which the dynamical evolution of the spins is best reproduced by GDTWA simulations. This good agreement confirms the scenario that the coherent Hamiltonian evolution of the many-body system drives it toward a strongly entangled pure state for which the observables display thermal-like behavior. The agreement between the analytical model and the GDTWA at long times shown in Fig. 3b also indicates that the GDTWA not only captures the short term dynamics (as previously known from the theoretical point of view), but also the approach to equilibrium. To compare the analytical formula to the data we have ignored magnetic field gradients in Eq. (6) (See Methods). In principle, magnetic field gradients should lead to an equilibrium state where a spatial texture of magnetization develops. However, such a texture requires long-range interactions between remote parts of the cloud, which only occurs for an extremely long timescale for dipolar interactions. We have verified (see Supplementary Note 7 and Supplementary Fig. 7) that indeed magnetic field gradients can be neglected to evaluate the quasi-steady state populations reached at 100 ms. This shows that a local equilibrium is first reached, well before the full many-body system may reach true equilibrium with maximum entropy, where all populations in the different Zeeman states would be equally populated. ### Study of quantum correlations To quantify the importance of quantum correlations in the spin dynamics as a function of the tilting angle θ, we analyze from the theoretical point of view the properties of the reduced density matrix for each spin, $$\hat \rho _i(\vec \lambda ^{[i]})$$. In our simulations those density matrices are readily available from the generalized Bloch vectors. To minimize finite size and boundary effects we focus on the density matrix of the central spin of our simulated block $$\hat \rho _0$$. Even when, as in our simulations, the quantum state of the full system $$\hat \rho$$ is pure, the reduced single-spin density matrices can assume a mixed character due to the buildup of entanglement between the spins. This mixed character is quantified by a reduced purity, $${\mathrm{t}}r(\hat \rho _0^2) \, < \, 1$$ and thus an increased entropy, which we compute in terms of the second-order Renyi entropy, $$S_0^{(2)} = - {\mathrm{log}}_2[{\mathrm{tr}}(\hat \rho _0^2)]$$. If the state of the full system is pure, $${\mathrm{t}}r(\hat \rho ^2) = 1$$, the Renyi entropy is a measure of entanglement34: it is zero for product states, and reaches the maximum value of $$S_0^{(2){\mathrm{max}}} = {\mathrm{log}}_2[7]$$ (the value for a fully mixed state of a spin-3 particle) for many-body states where the quantum information encoded in an individual spin is completely scrambled due to entanglement with other ones. As can be seen in Fig. 4, the quantum evolution leads to a growth of $$S_0^{(2)}$$ already for the smallest investigated angle θ = 0.2π. The dynamical growth of entanglement increases significantly for larger tilt angles. At θ = π/2, we find that $$S_0^{(2)}$$ approaches its maximum possible value $$S_0^{(2){\mathrm{max}}}$$. Although we cannot perform a full state-tomography from the experimental data, we can compare our experimental data to a “diagonal entropy” computed in terms of the diagonal part of the averaged single-particle density matrix $$\hat \rho _{\mathrm{S}} = (1/N)\mathop {\sum}\nolimits_{i = 1}^N \hat \rho _i(\vec \lambda ^{[i]})$$. Note that for an homogeneous system $$\hat \rho _{\mathrm{S}} = \hat \rho _0$$. In this case we can define this entropy as $$S_2^{\mathrm{D}} = - {\mathrm{log}}_2\{ {\mathrm{tr}}[{\mathrm{diag}}(\hat \rho _{\mathrm{S}})^2]\}$$, which can be readily accessed from the population data, assuming homogeneity: $$S_2^{\mathrm{D}} = - {\mathrm{log}}_2\{ {{\sum} p_{m_S}^2} \}$$. This diagonal entropy is not an entanglement witness, but it provides an upper bound of the entanglement entropy, $$S_2^{\mathrm{D}} \ge S_0^{(2)}$$. In a translationally invariant system it increases as quantum correlations build up with time and approaches the full entropy as the single-spin density matrices decohere due to entanglement. However, in our finite system, boundary effects can obscure this behavior. For example, at small angles, diagonal entropy shows a slight reduction as a function of time (see Fig. 4 for θ ≤ 0.3π). On the other hand, we do observe it increases with time as the system thermalizes for large values of θ, in which case the non-trivial growth of the experimental diagonal entropy is in excellent agreement with our theoretical estimates, provided quantum fluctuations are taken into account. Moreover it also approaches $$S_0^{(2)}$$ for θ = π/2. ## Discussion In summary, our study demonstrates the dominant role of quantum correlations in the out-of-equilibrium dynamics of an initially uncorrelated spin coherent state, when the angle it makes with the external magnetic field is close to π/2. We have shown that our long-range interacting many-particle isolated spin system internally thermalizes through entanglement buildup, and develops an effective thermal-like behavior through a mechanism which is purely quantum and conservative. The comparison between experiment and theory shows that the GDTWA simulations can be trusted for studying the dynamics in a complex quantum many-body system, provided a sufficient number of atoms is included in the simulation. Thus, our experiment provides a test-bed for a theoretical method based on the GDTWA, for systems of large spins, and in a many-body regime where simulations based on exact diagonalization techniques are intractable with current computational resources. In turn, our study can be used as a benchmark of a quantum simulator of the spin-3 XXZ Heisenberg model and opens a path toward the study of open problems in quantum many-body physics. For example, by operating the experiment at smaller lattice depths, where tunneling is allowed, we will have the exciting opportunity to study itinerant magnetism, whose description is typically unaccessible to theory, but which is believed to be at the heart of the physics behind high-temperature superconductivity35. ## Methods ### Description of the 3D lattice The 3D lattice is made with five laser beams at 532 nm. On the horizontal plane, three beams with the same frequency define a rectangular pattern, with respective directions $${\mathbf{u}}_{{\mathbf{H}}_{\mathbf{1}}} = {\mathrm{cos}}(\alpha ){\mathbf{u}}_{\mathbf{x}} - {\mathrm{sin}}(\alpha ){\mathbf{u}}_{\mathbf{y}}$$, $${\mathbf{u}}_{{\mathbf{H}}_{\mathbf{2}}} = - {\mathbf{u}}_{{\mathbf{H}}_{\mathbf{1}}}$$, $${\mathbf{u}}_{{\mathbf{H}}_{\mathbf{3}}} = {\mathrm{cos}}(\pi /4){\mathbf{u}}_{\mathbf{x}} + {\mathrm{sin}}(\pi /4){\mathbf{u}}_{\mathbf{y}}$$, α = 8/180π. Two other beams, contra-propagating, with a frequency offset by 30 MHz compared with the beams in the horizontal plane, with directions $${\mathbf{u}}_{{\mathrm{V}}_{\mathbf{1}}} = - {\mathrm{cos}}(\beta ){\mathbf{u}}_{\mathbf{z}} + {\mathrm{sin}}(\beta ){\mathbf{u}}_{\mathbf{x}} = - {\mathbf{u}}_{{\mathrm{V}}_{\mathbf{2}}}$$, β = 7/180π, form an independent light pattern. Calibration of the lattice is performed by standard matter wave diffraction pattern analysis after pulsing lattice beams onto the BEC, with the three pairs of beams (H1, H2), (H1, H3), and (V1, V2). The laser powers are chosen so that these three couples of beams induce almost equal lattice depths, larger than 25 recoil energy. For these lattice depths, the tunneling time is typically 100 ms, and tunneling events can safely be neglected during dynamics. ### Preparation of a lattice with only singly occupied sites To prepare a lattice of atoms at unit filling, we first slowly load the BEC into a 3D optical lattice, to reach a Mott-insulating state. For our experimental parameters, there exists a core with only doubly occupied sites, surrounded by a 3D shell of atoms at unit filling. We empty the doubly occupied sites by performing a rf pulse to promote all atoms from the lowest energy Zeeman state ms = −3 into the state ms = 3, which triggers dipolar relaxation. We perform our experiment in presence of an external magnetic field which is large enough that dipolar relaxation can be considered as a short-range process36. Thus, only atoms in doubly occupied sites undergo dipolar relaxation, and each dipolar relaxation event empties one doubly occupied lattice site. We estimate the probability of secondary collisions during this filtering procedure to be below 0.05. After 7 ms, all doubly occupied sites are empty, with about 10,000 remaining atoms. The spin dynamics experiment is then performed using the atoms remaining in the shell with unit occupancy. Because the sample during dynamics consists of a 3D shell of atoms with unit occupancy within the lattice, border effects might not be fully negligible during dynamics. Indeed, our estimates is that about 20 percent of the atoms within the shell of singly occupied sites are close to the boundary. It is likely that spin dynamics is slower for these atoms lying close to the frontier of the shell. Note that the experiment could not be performed at arbitrarily high magnetic field intensities. As a consequence, some of the atoms which underwent dipolar relaxation remain trapped in very highly excited states of the combined lattice-dipole trap potentials. This translates into losses affecting the sample with unit filling. After 40 ms, from 20 to up to 40 percent of the atoms are typically missing, depending on the magnetic field strength. This phenomenon does not seem to impact the agreement of our spin dynamics data with GDTWA theory as long as losses are below 30 percent. ### Atom number calibration The number of atoms in different spin states is estimated using standard absorption imaging, after spin separation using an applied magnetic field gradient during the free fall of atoms, following a Stern–Gerlach procedure. The cross section for absorption of resonant light strongly depends on the ms states, through Clebsch–Gordan coefficients. Therefore, we calibrate the relative sensitivity of the imaging system for the different spin states by comparing the measured populations just after the rf pulse to the theoretically expected values. This calibration depends on the external magnetic field direction during spin dynamics, as eddy currents do not allow to rapidly set its direction during imaging. For the specific case of θ = π/2, we employ a slightly different method to calibrate the different sensitivities. Indeed, the number of atoms in ms = +3 is then very small just after the rf pulse and the detectivity of this Zeeman state is the lowest, due to unfavorable Clebsch–Gordan coefficients. For this specific dataset, we thus enforce that the ms = −3 and ms = 3 average atom number after spin dynamics is identical. This choice is motivated by the fact that the Hamiltonian preserves magnetization (as experimentally verified for all other datasets), and by the initially symmetric theoretical populations in the different Zeeman states. For example, for the π/2 data in Fig. 2 of the main article, the detectivity correction factors of the different Zeeman states are f−3 = 0.76, f−2 = 0.96, f−1 = 1.18, f0 = 1.57, f1 = 2.93, f2 = 2.68, and f3 = 5.32. ### Short-time analysis of population dynamics Using time-dependent perturbation theory we analyze the contribution of the different terms in the Hamiltonian at short times. For our system the initial population is given by $$p_{m_S}(0) = \left( {\begin{array}{*{20}{c}} 6 \\ {m_S + 3} \end{array}} \right)\left( {{\mathrm{sin}}\left( {\frac{\theta }{2}} \right)} \right)^{(6 + 2m_S)}\left( {{\mathrm{cos}}\left( {\frac{\theta }{2}} \right)} \right)^{(6 - 2m_S)}$$ and the coefficients $$\alpha _{m_S}(\theta )$$ given by $$\alpha _{ - 3}(\theta ) = \frac{{135}}{{32}}{\mathrm{cos}}^8\left( {\frac{\theta }{2}} \right)$$, $$\alpha _{ - 2}(\theta ) = \frac{{135}}{{32}}{\mathrm{cos}}^6\left( {\frac{\theta }{2}} \right)[1 - 3{\mathrm{cos}}(\theta )]$$, $$\alpha _{ - 1}(\theta ) = \frac{{135}}{{256}}{\mathrm{cos}}^4\left( {\frac{\theta }{2}} \right)[13 - 20{\mathrm{cos}}(\theta ) + 15{\mathrm{cos}}(2\theta )]$$, $$\alpha _0(\theta ) = \frac{{135}}{{256}}{\mathrm{sin}}^2(\theta )[3 + 5{\mathrm{cos}}(2\theta )]$$, $$\alpha _1(\theta ) = \frac{{135}}{{256}}{\mathrm{sin}}^4\left( {\frac{\theta }{2}} \right)[13 + 20{\mathrm{cos}}(\theta ) + 15{\mathrm{cos}}(2\theta )]$$, $$\alpha _2(\theta ) = \frac{{135}}{{32}}{\mathrm{sin}}^6\left( {\frac{\theta }{2}} \right)[1 + 3{\mathrm{cos}}(\theta )]$$, $$\alpha _3(\theta ) = \frac{{135}}{{32}}{\mathrm{sin}}^8\left( {\frac{\theta }{2}} \right)$$. ### Generalized Bloch vectors and the GDTWA A generic density matrix for a discrete system with D states on site i takes the form $$\hat \rho _i = \mathop {\sum}\nolimits_{\alpha = 1,\beta = 1}^D c_{\alpha ,\beta }|\alpha \rangle \langle \beta |$$. For a spin-3 atom D = 7, and to the states $$\left| {\alpha = 1,2,3, \ldots ,6,7} \right\rangle$$ we may associate the spin states $$\left| {m_S = 3,2 \ldots , - 2, - 3} \right\rangle$$. Since $$(\hat \rho _i)^\dagger = \hat \rho _i$$ and $${\mathrm{tr}}(\hat \rho _i) = 1$$ a total of D2 − 1 real numbers are needed to describe an arbitrary state. Those numbers can be expressed as expectation values of D2 − 1 orthogonal observables: $$\hat \Lambda _{\alpha ,\beta < \alpha }^{[i],R} = (|\beta \rangle \langle \alpha | + |\alpha \rangle \langle \beta |)$$ and $$\hat \Lambda _{\alpha ,\beta < \alpha }^{[i],I} = - {\mathrm{i}}(|\beta \rangle \langle \alpha | - |\alpha \rangle \langle \beta |)$$ for 1 ≤ α ≤ D, 1 ≤ β ≤ D − 1, and $$\hat \Lambda _\alpha ^{[i],D} = \sqrt {\frac{2}{{\alpha (\alpha + 1)}}} \left( {\mathop {\sum}\nolimits_{\beta = 1}^\alpha |\beta \rangle \langle \beta | - \alpha |\alpha + 1\rangle \langle \alpha + 1|} \right)$$ for $$1 \le \alpha < D - 1$$. Here, the $$\hat \Lambda _{\alpha ,\beta < \alpha }^{[i],R/I}$$ correspond to measurements of the real (“R”) and imaginary (“I”) parts of the off-diagonal parts of $$c_{\alpha ,\beta }$$, and $$\hat \Lambda _\alpha ^{[i],D}$$ to linear combinations of the real diagonal elements $$c_{\alpha ,\alpha }$$. Together, the set of matrices $$\hat \Lambda _\mu ^{[i]} \in \{ \Lambda _{\alpha ,\beta }^{[i],R/I},\hat \Lambda _\alpha ^{[i],D}\}$$ are traceless, $${\mathrm{tr}}(\hat \Lambda _\mu ^{[i]}) = 0$$ and $${\mathrm{tr}}(\hat \Lambda _\mu ^{[i]}\hat \Lambda _\nu ^{[i]}) = 2\delta _{\mu ,\nu }$$. Note that for D = 2, the matrices reduce to standard Pauli matrices, for D = 3 to standard Gell-Mann matrices. They are known as generalized Gell-Mann matrices (GGMs) and are the generators of the SU(D) group27. The mean-field equations can be written as (D2 − 1) × N coupled non-linear equations for the expectation values of $$\lambda _\mu ^{[i]} = \langle \hat \Lambda _\mu ^{[i]}\rangle$$. The $$\lambda _\mu ^{[i]}$$ can be interpreted as components of a D2 − 1 dimensional Bloch vector via the expansion $$\hat \rho _i(\lambda _\mu ^{[i]}) = \left[ {{\Bbb I} + \mathop {\sum}\nolimits_{\mu > 0} \lambda _\mu ^{[i]}\hat \Lambda _\mu ^{[i]}} \right]/D$$. We denote the Bloch vector elements associated to the off-diagonal and diagonal GGMs as $$\lambda _{\alpha ,\beta < \alpha }^{[i],R/I} = (D/2){\kern 1pt} {\mathrm{tr}}(\hat \Lambda _{\alpha ,\beta < \alpha }^{[i],R/I}\hat \rho ^{[i]})$$ and $$\lambda _\alpha ^{[i],D} = (D/2){\kern 1pt} {\mathrm{tr}}(\hat \Lambda _\alpha ^{[i],D}\hat \rho _i)$$, respectively. Furthermore, we define $$\hat \Lambda _0^{[i]} = {\Bbb I}\sqrt {2/D}$$, such that $${\mathrm{tr}}(\hat \Lambda _0^{[i]}\hat \Lambda _\nu ^{[i]}) = 2\delta _{0,\nu }$$. Then, an arbitrary operator can be expanded into the orthogonal basis $$\{ \hat \Lambda _\mu ^{[i]}\}$$ for 0 ≤ μ < D2. Consider a generic two-spin Hamiltonian between sites i, and j, and its expansion into GGMs, $$\hat H_{i,j} = \mathop {\sum}\nolimits_{\mu ,\nu } h_{\mu ,\nu }^{[i,j]}\hat \Lambda _\mu ^{[i]}\hat \Lambda _\nu ^{[j]}$$. Then the mean-field equations of motion follow from inserting a product-state ansatz $$\hat \rho = \mathop {\prod}\nolimits_i \hat \rho _i$$ into the von-Neumann equations of motion. For the Bloch vector at site i ( = 1): $$\dot \lambda _\eta ^{[i]} \approx {\kern 1pt} \frac{2}{D}\mathop {\sum}\nolimits_{\mu ,\nu ,\kappa } h_{\mu ,\nu }^{[i,j]}\lambda _\nu ^{[j]}\lambda _\kappa ^{[i]}f_{\mu ,\kappa ,\eta } \equiv \mathop {\sum}\nolimits_\kappa {\cal{F}}_{\eta ,\kappa }^{[i,j]}\lambda _\kappa ^{[i]}$$. Here, we defined the “mean-field matrix” $${\cal{F}}_{\eta ,\kappa }^{[i,j]} \equiv \frac{2}{D}\mathop {\sum}\nolimits_{\mu ,\nu } h_{\mu ,\nu }^{[i,j]}\lambda _\nu ^{[j]}f_{\mu ,\kappa ,\eta }$$. Here the tensor fμ,κ,η is defined via $$[\hat \Lambda _\mu ^{[i]},\hat \Lambda _\kappa ^{[i]}] = {\mathrm{i}}{\kern 1pt} f_{\mu ,\kappa ,\eta }\hat \Lambda _\eta ^{[i]}$$, whose elements are the structure constants of the SU(D) group. The full mean-field equations for the generalized Bloch vector at site i are then $$\dot \lambda _\eta ^{[i]} = \mathop {\sum}\nolimits_\kappa \left[ {\left( {\mathop {\sum}\nolimits_j {\cal{F}}_{\eta ,\kappa }^{[i,j]}} \right) + h_\kappa ^{[i]}} \right]\lambda _\kappa ^{[i]}$$ where $$\hat H^{[i]} = \mathop {\sum}\nolimits_\kappa h_\kappa ^{[i]}\hat \Lambda _\kappa ^{[i]}$$ is the expansion of the single-site Hamiltonians containing all local terms (field gradients, quadratic Zeeman fields, etc.) into GGMs. It is straightforward to construct the equations for arbitrary Hamiltonians containing single- and two-site terms numerically, as well as to evolve the generalized Bloch vectors in time. In the numerical mean-field simulations, the quantum state is represented by N time-dependent generalized Bloch vectors, $$\lambda _\mu ^{[i]}(t)$$. We evolve the vectors for the initial state $$\mathop {\prod}\nolimits_i |m_S = - 3\rangle _i = \mathop {\prod}\nolimits_i |\alpha = 7\rangle _i$$. Explicitly, this state corresponds to a state with $$\lambda _{\alpha ,\beta < \alpha }^{[i],R/I}(t = 0) = 0$$, $$\lambda _{1,2,3,4,5}^{[i],D}(t = 0) = 0$$, and $$\lambda _6^{[i],D}(t = 0) = - \sqrt {21} = - \sqrt {(D - 1)D/2}$$. To also simulate dynamics of initially tilted states, i.e. states created by applying a unitary collective rotation, $$\left| {\psi _0} \right\rangle = \mathop {\prod}\nolimits_i \hat U_i(\theta )\left| {m_S = - 3} \right\rangle _i$$, we simply rotate the equations of motion by rotating the Hamiltonian $${\hat{H}}\prime = {\mathop {\prod}\limits_i} {{\hat{U}}_i} (\theta ) {\hat{H}} {\mathop {\prod}\limits_j} {\hat{U}}_j^{\dagger} (\theta ) {\hbox{]}}$$. In contrast, in the GDTWA approach we describe the initial state not by a generalized Bloch vector, but instead by a probability “Wigner” distribution, $$p_{\mu ,a_\mu }^{[i]}$$, for certain discrete configurations of Bloch vector elements, $$\lambda _{\mu ,a_\mu }^{[i]}$$. Initially, the probabilities and configurations are chosen in such a way that on average $$\lambda _\mu ^{[i]}(t = 0) = \mathop {\sum}\nolimits_{a_\mu } p_{\mu ,a_\mu }^{[i]}\lambda _{\mu ,a_\mu }^{[i]} \equiv \overline {\lambda _{\mu ,a_\mu }^{[i]}}$$. In practice, the initial multi-spin configurations are selected via a random sampling of $$p_{\mu ,a_\mu }^{[i]}$$ for each spin i and each Bloch vector component μ. Then the individually selected configurations are evolved according to the non-linear mean-field equations. Observables are computed from a statistical average over the different trajectories. It is important to note that due to the non-linear nature of the equations, this approach can capture the buildup of correlations, e.g., at later times in general $$\overline {\lambda _\mu ^{[i]}\lambda _\nu ^{[j]}(t)} \ne \overline {\lambda _\mu ^{[i]}(t)} {\kern 1pt} \overline {\lambda _\nu ^{[j]}(t)}$$. In particular, as discrete set of initial configurations, $$\{ \lambda _{\mu ,a_\mu }^{[i]}\}$$, we use a set which is inspired by a “projective measurement, of the GGMs”: for each $$\lambda _\mu ^{[i]}(t = 0)$$, we choose a set of initial configurations given by the eigenvalues of each GGM. Consider the eigen-expansion of the GGMs, $$\hat \Lambda _\mu ^{[i]} = \mathop {\sum}\nolimits_{a_\mu } \eta _{\mu ,a_\mu }^{[i]}\left| {\eta _{\mu ,a_\mu }^{[i]}} \right\rangle \left\langle {\eta _{\mu ,a_\mu }^{[i]}} \right|$$, where $$\eta _{\mu ,a_\mu }^{[i]}$$ and $$\left| {\eta _{\mu ,a_\mu }^{[i]}} \right\rangle$$ denote the eigenvalues and eigen-vectors, respectively. Then, we choose the “a-th” eigenvalue, $$\lambda _\mu ^{[i]}(t = 0) = (D/2){\kern 1pt} \eta _{\mu ,a_\mu }^{[i]}$$, with probability $$p_{\mu ,a_\mu }^{[i]} = {\mathrm{tr}}\left[ {\hat \rho _0^{[i]}\left| {\eta _{\mu ,a}^{[i]}} \right\rangle \left\langle {\eta _{\mu ,a}^{[i]}} \right|} \right]$$, where $$\hat \rho _0^{[i]} = \left| {\alpha = 7} \right\rangle \left\langle {\alpha = 7} \right|_i$$. Note that this choice is a generalization of the one used for the spin-1/2 DTWA method20,21, and for D = 2, we reproduce the DTWA sampling. Specifically, for the initial state |mS = −3〉i, this prescription leads to fixed “diagonal” Bloch vector elements $$\lambda _{1,2,3,4,5}^{[i],D}(t = 0) = 0$$ and $$\lambda _6^{[i],D}(t = 0) = - \sqrt {21}$$, fixed off-diagonal elements $$\lambda _{\alpha < 7,\beta < \alpha }^{[i],R/I} = 0$$ and fluctuating off-diagonal elements $$\lambda _{\alpha = 7,\beta = 1, \ldots 6}^{[i],R/I} \in \{ - D/2, + D/2\}$$, each with 50% probability. ### Quantum thermalization It is generally believed that the unitary quantum evolution of a complex quantum system leads to an apparent maximum-entropy state that can be described by thermodynamical ensembles that properly account for the conserved quantities. In our systems those are the energy and magnetization. We thus postulate that the steady-state properties of local observables, such as the relative population of Zeeman levels, can be described in our system by the thermal distribution $$\hat \rho _{{\mathrm{cT}}}(\beta ,\mu ) = \frac{{e^{ - \beta \hat H_{\mathrm{T}} - \mu \hat S^z}}}{{{\mathrm{tr}}[e^{ - \beta \hat H_{\mathrm{T}} - \mu {\hat S}^z}]}}$$ where μ and β = 1/kBT are the chemical potential and inverse temperature set by the energy and magnetization, respectively, accordingly to Eq. (4). While the determination of β and μ can be a challenging task for a complex many-body system, the anisotropic character of the dipolar interactions facilitates an analytic high-temperature expansion around β = 0 since $$\bar{V}$$ is small (see main text). Under this assumption, the chemical potential to leading order, is set by $$\langle \hat S^z\rangle = {\mathrm{tr}}[\hat \rho _{{\mathrm{cT}}}(0,\mu ^{(0)})\hat S^z] = \frac{{\mathop {\sum}\nolimits_{m_S = - 3}^3 m_Se^{ - \mu ^{(0)}m_S}}}{{\mathop {\sum}\nolimits_{m_S = - 3}^3 e^{ - \mu ^{(0)}m_S}}}$$ and therefore $$p_{m_S}^{(0)}(t_{{\mathrm{S}}S}) = \frac{{e^{ - \mu ^{(0)}m_S}}}{{\mathop {\sum}\nolimits_{m = - 3}^3 e^{ - \mu ^{(0)}m}}}$$. Here tSS refers to the steady state. These are the populations indicated by arrows in Fig. 2. The case θ = π/2 is particularly simple since $$\langle \hat S_z\rangle = 0$$ and thus $$\mu ^{(0)} = 0$$ and $$p_{m_S}^{(0)}(t_{SS}) = 1/7$$. This solution, however, shows deviations with the observed long-time dynamics indicating that finite β corrections are relevant. To first order in β(1) the chemical potential can be written as $$\mu ^{(1)} = \mu ^{(0)} + \beta ^{(1)}\delta \nu$$ and the solutions of Eq. (4) are described by the relations: $$\langle \hat S^z\rangle = \widetilde {\hat S^z} - \beta ^{(1)}(\delta \nu {\mathrm{\Delta }}\widetilde {\hat S^z\hat S^z} + {\mathrm{\Delta }}\widetilde {\hat H_{\mathrm{T}}\hat S^z})$$ and $$\langle \hat H_{\mathrm{T}}\rangle = \widetilde {\hat H_{\mathrm{T}}} - \beta ^{(1)}(\delta \nu {\mathrm{\Delta }}\widetilde {\hat H_{\mathrm{T}}\hat S^z} + {\mathrm{\Delta }}\widetilde {\hat H_{\mathrm{T}}\hat H_{\mathrm{T}}})$$, where we have defined $$\widetilde {\hat {\cal{O}}} \equiv {\mathrm{tr}}[\hat \rho _{{\mathrm{c}}T}(0,\mu ^{(0)})\hat {\cal{O}}]$$ and $${\mathrm{\Delta }}\widetilde {\hat {\cal{O}}\hat {\cal{A}}} \equiv \widetilde {\hat {\cal{O}}\hat {\cal{A}}} - \widetilde {\hat {\cal{O}}}\widetilde {\hat {\cal{A}}}$$. Solutions of those equations, taking into account the intial magnetization and total energy in the intial product state, are particularly simple for the θ = π/2 case, where μ(0) = 0, $$\widetilde {\hat S}^z = 0$$, $${\mathrm{\Delta }}\widetilde {\hat H_{\mathrm{T}}\hat S^z} = 0$$ and $${\mathrm{\Delta }}\widetilde {\hat S^z\hat S^z} = NI_2$$, $$\widetilde {\hat H_T} = NB_{\mathrm{Q}}I_2$$ and $${\mathrm{\Delta }}\widetilde {\hat H_{\mathrm{T}}\hat H_{\mathrm{T}}} = N(B_{\mathrm{Q}}^2(I_4 - I_2^2) + 3/4V_{{\mathrm{eff}}}^2I_2^2)$$ with $$I_r = (\mathop {\sum}\nolimits_{m = - 3}^3 m^r)/7$$ (thus I2 = 4 and I4 = 28). Those yield the expressions for β(1) and μ(1) quoted in Eq. (5). In the presence of linear gradients $$\hat H_{\mathrm{T}} \to \hat H_{\mathrm{T}} + \mathop {\sum}\nolimits_{i=1}^N B_{i }\hat S_i^z$$, under the assumption that $$\mathop {\sum}\nolimits_{i= 1}^{^N} B_{i } = 0$$, the inverse temperature equation in Eq. (6) for the case of θ = π/2 should be replaced by $$\beta ^{(1)} = \frac{{5B_{\mathrm{Q}} + 9\bar V}}{{24V_{{\mathrm{eff}}}^2 + 24B_{\mathrm{Q}}^2 + 8V_{\mathrm{B}}^2}}$$ with $$V_{\mathrm{B}}^2 = 1/N\mathop {\sum}\nolimits_{i = 1}^N B_i^2$$. ## Data availability The experimental data supporting the findings of this study are available within the paper and its Supplementary Material. Additional numerical data and computer codes used in this study are available from the corresponding author upon request. ## References 1. 1. Bohn, J. L., Rey, A. M. & Ye, J. Cold molecules: progress in quantum engineering of chemistry and quantum matter. Science 357, 1002–1010 (2017). 2. 2. Blatt, R. & Roos, C. F. Quantum simulations with trapped ions. Nat. Phys. 8, 277–284 (2012). 3. 3. Saffman, M., Walker, T. G. & Mølmer, K. Quantum information with Rydberg atoms. Rev. Mod. Phys. 82, 2313–2363 (2010). 4. 4. Bohnet, J. G. et al. Quantum spin dynamics and entanglement generation with hundreds of trapped ions. Science 352, 1297–1301 (2016). 5. 5. Gärttner, M. et al. Measuring out-of-time-order correlations and multiple quantum spectra in a trapped-ion quantum magnet. Nat. Phys. 13, 781–786 (2017). 6. 6. Richerme, P. et al. Non-local propagation of correlations in quantum systems with long-range interactions. Nature 511, 198–201 (2014). 7. 7. Jurcevic, P. et al. Quasiparticle engineering and entanglement propagation in a quantum many-body system. Nature 511, 202–205 (2014). 8. 8. Jurcevic, P. et al. Direct observation of dynamical quantum phase transitions in an interacting many-body system. Phys. Rev. Lett. 119, 080501 (2017). 9. 9. Hess, P. W. et al. Non-thermalization in trapped atomic ion spin chains. Philos. Trans. Royal Soc. A 375, 20170107 (2017). 10. 10. Zeiher, J. et al. Coherent many-body spin dynamics in a long-range interacting ising chain. Phys. Rev. X 7, 041063 (2017). 11. 11. Labuhn, H. et al. Tunable two-dimensional arrays of single Rydberg atoms for realizing quantum ising models. Nature 534, 667–670 (2016). 12. 12. Bernien, H. et al. Probing many-body dynamics on a 51-atom quantum simulator. Nature 551, 579–584 (2017). 13. 13. Yan, B. et al. Observation of dipolar spin-exchange interactions with lattice-confined polar molecules. Nature 501, 521–525 (2013). 14. 14. Hazzard, K. R. A. et al. Many-body dynamics of dipolar molecules in an optical lattice. Phys. Rev. Lett. 113, 195302 (2014). 15. 15. Lahaye, T., Menotti, C., Santos, L., Lewenstein, M. & Pfau, T. The physics of dipolar bosonic quantum gases. Rep. Prog. Phys. 72, 126401 (2009). 16. 16. Aharonov, D., Gottesman, D., Irani, S. & Kempe, J. The power of quantum systems on a line. Commun. Math. Phys. 287, 41–65 (2009). 17. 17. Hallgren, S., Nagaj, D. & Narayanaswami, S. The local Hamiltonian problem on a line with eight states is QMA-complete. Quantum Inf. Comput. 13, 721–750 (2013). 18. 18. de Paz, A. et al. Nonequilibrium quantum magnetism in a dipolar lattice gas. Phys. Rev. Lett. 111, 185305 (2013). 19. 19. de Paz, A. et al. Probing spin dynamics from the Mott insulating to the superfluid regime in a dipolar lattice gas. Phys. Rev. A. 93, 021603 (2016). 20. 20. Schachenmayer, J., Pikovski, A. & Rey, A. M. Many-body quantum spin dynamics with Monte Carlo trajectories on a discrete phase space. Phys. Rev. X 5, 011022 (2015). 21. 21. Schachenmayer, J., Pikovski, A. & Rey, A. M. Dynamics of correlations in two-dimensional quantum spin models with long-range interactions: a phase-space Monte-Carlo study. New J. Phys. 17, 065009 (2015). 22. 22. Orioli, A. Pn et al. Relaxation of an isolated dipolar-interacting Rydberg quantum spin system. Phys. Rev. Lett. 120, 063601 (2018). 23. 23. Lepoutre, S. et al. Spin mixing and protection of ferromagnetism in a spinor dipolar condensate. Phys. Rev. A. 97, 023610 (2018). 24. 24. Hazzard, K. R. A., Manmana, S. R., Foss-Feig, M. & Rey, A. M. Far-from-equilibrium quantum magnetism with ultracold polar molecules. Phys. Rev. Lett. 110, 075301 (2013). 25. 25. Abragam, A. Principles of Nuclear Magnetism: International series of monographs on physics, Clarendon Press, Oxford, UK (1989). 26. 26. Hazzard, K. R. A. et al. Quantum correlations and entanglement in far-from-equilibrium spin systems. Phys. Rev. A. 90, 063622 (2014). 27. 27. Bertlmann, R. A. & Krammer, P. Bloch vectors for qudits. J. Phys. A 41, 235303 (2008). 28. 28. Polkovnikov, A. Phase space representation of quantum dynamics. Ann. Physics 325, 1790–1852 (2010). 29. 29. D’Alessio, L., Kafri, Y., Polkovnikov, A. & Rigol, M. From quantum chaos and eigenstate thermalization to statistical mechanics and thermodynamics. Adv. Phys. 65, 239–362 (2016). 30. 30. Kaufman, A. M. et al. Quantum thermalization through entanglement in an isolated many-body system. Science 353, 794–800 (2016). 31. 31. Kinoshita, T., Wenger, T. & Weiss, D. S. A quantum Newton’s cradle. Nature 440, 900–903 (2006). 32. 32. Langen, T. et al. Experimental observation of a generalized gibbs ensemble. Science 348, 207–211 (2015). 33. 33. Tang, Y. et al. Thermalization near integrability in a dipolar quantum Newton’s cradle. Phys. Rev. X 8, 021030 (2018). 34. 34. Renyi, A. in Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, 547–561. (University of California Press, Berkeley, 1961). 35. 35. Lee, P. A., Nagaosa, N. & Wen, X.-G. Doping a Mott insulator: physics of high-temperature superconductivity. Rev. Mod. Phys. 78, 17–85 (2006). 36. 36. Pasquiou, B. et al. Control of dipolar relaxation in external fields. Phys. Rev. A. 81, 042716 (2010). ## Acknowledgements We thank Arghavan Safavi-Naini and Colin Kennedy for their careful reading of the manuscript and useful feedback and Paulo Souto Ribeiro for stimulating discussions about thermalization. The Villetaneuse group acknowledges financial support from Conseil Régional d’Ile-de-France under DIM Nano-K/IFRAF, CNRS, Ministère de l’Enseignement Supérieur et de la Recherche within CPER Contract, Université Sorbonne Paris Cité (USPC), and the Indo-French Centre for the Promotion of Advanced Research—CEFIPRA under the LORIC5404-1 contract. A.M.R. acknowledges supported by NIST, DARPA (W911NF-16-1-0576 through ARO), ARO (Individual Investigator award W911NF-19-1-0210 ), JILA Physics Frontier Center (NSF-PFC-1125844), AFOSR-MURI, and AFOSR (FA9550-18-1-0319). Work in Strasbourg is supported by IdEx Unistra (project STEMQuS) with funding managed by the French National Research Agency as part of the Investments for the future program. B.Z. acknowledges support of the NSF through a grant to ITAMP. ## Author information Authors ### Contributions S.L., L.G., B.N., E.M., O.G., L.V., and B.L.T. contributed to the execution of the experiments. J.S., B.Z., and A.M.R. developed the theory model. All authors discussed the results, contributed to the data analysis, and worked together on the manuscript. ### Corresponding author Correspondence to A. M. Rey. ## Ethics declarations ### Competing interests The authors declare no competing interests. Journal peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Lepoutre, S., Schachenmayer, J., Gabardos, L. et al. Out-of-equilibrium quantum magnetism and thermalization in a spin-3 many-body dipolar lattice system. Nat Commun 10, 1714 (2019). https://doi.org/10.1038/s41467-019-09699-5 • Accepted: • Published: • ### Performance evaluation of the discrete truncated Wigner approximation for quench dynamics of quantum spin systems with long-range interactions • Masaya Kunimi • , Kazuma Nagao • , Shimpei Goto •  & Ippei Danshita Physical Review Research (2021) • ### Simulation of XXZ Spin Models Using Sideband Transitions in Trapped Bosonic Gases • Anjun Chu • , Johannes Will • , Jan Arlt • , Carsten Klempt •  & Ana Maria Rey Physical Review Letters (2020) • ### Nonlinear dynamics and energy transfer for two rotating dipoles in an external field: A complete dimensional analysis • Rosario González-Férez • , Manuel Iñarrea • , J. Pablo Salas •  & Peter Schmelcher Communications in Nonlinear Science and Numerical Simulation (2020) • ### Quantum many-body physics with ultracold polar molecules: Nanostructured potential barriers and interactions • Andreas Kruckenhauser • , Lukas M. Sieberer • , Luigi De Marco • , Jun-Ru Li • , Kyle Matsuda • , William G. Tobias • , Giacomo Valtolina • , Jun Ye • , Ana Maria Rey • , Mikhail A. Baranov •  & Peter Zoller Physical Review A (2020) • ### Magnetic Dipolar Interaction between Hyperfine Clock States in a Planar Alkali Bose Gas • Y.-Q. Zou • , B. Bakkali-Hassani • , C. Maury • , É. Le Cerf • , S. Nascimbene • , J. Dalibard •  & J. Beugnon Physical Review Letters (2020)
2021-01-22 22:50:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8038850426673889, "perplexity": 1019.5698632854358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531429.49/warc/CC-MAIN-20210122210653-20210123000653-00012.warc.gz"}
https://socratic.org/questions/how-do-you-differentiate-f-x-sqrt-e-tan-1-x-using-the-chain-rule
# How do you differentiate f(x)=sqrt(e^(tan(1/x) using the chain rule.? Jan 23, 2018 $f ' \left(x\right) = \frac{- {\left({e}^{\tan} \left(\frac{1}{x}\right)\right)}^{- \frac{1}{2}} {e}^{\tan} \left(\frac{1}{x}\right) {\sec}^{2} \left(\frac{1}{x}\right) {x}^{- 2}}{2}$ $\textcolor{w h i t e}{f ' \left(x\right)} = \frac{- {e}^{\tan} \left(\frac{1}{x}\right) {\sec}^{2} \left(\frac{1}{x}\right)}{2 {x}^{2} \sqrt{\left({e}^{\tan} \left(\frac{1}{x}\right)\right)}}$ $\textcolor{w h i t e}{f ' \left(x\right)} = - \frac{{e}^{\tan} \left(\frac{1}{x}\right) {\sec}^{2} \left(\frac{1}{x}\right)}{2 {x}^{2} \sqrt{\left({e}^{\tan} \left(\frac{1}{x}\right)\right)}}$ #### Explanation: We are given $f \left(x\right) = {\left({e}^{\tan} \left(\frac{1}{x}\right)\right)}^{\frac{1}{2}}$ $f ' \left(x\right) = \frac{d}{\mathrm{dx}} \left[{\left({e}^{\tan} \left(\frac{1}{x}\right)\right)}^{\frac{1}{2}}\right]$ $\textcolor{w h i t e}{f ' \left(x\right)} = \frac{1}{2} \cdot {\left({e}^{\tan} \left(\frac{1}{x}\right)\right)}^{\frac{1}{2} - 1} \cdot \frac{d}{\mathrm{dx}} \left[{e}^{\tan} \left(\frac{1}{x}\right)\right]$ $\textcolor{w h i t e}{f ' \left(x\right)} = \frac{1}{2} \cdot {\left({e}^{\tan} \left(\frac{1}{x}\right)\right)}^{\frac{1}{2} - 1} \cdot {e}^{\tan} \left(\frac{1}{x}\right) \cdot \frac{d}{\mathrm{dx}} \left[\tan \left(\frac{1}{x}\right)\right]$ $\textcolor{w h i t e}{f ' \left(x\right)} = \frac{1}{2} \cdot {\left({e}^{\tan} \left(\frac{1}{x}\right)\right)}^{\frac{1}{2} - 1} \cdot {e}^{\tan} \left(\frac{1}{x}\right) \cdot {\sec}^{2} \left(\frac{1}{x}\right) \cdot \frac{d}{\mathrm{dx}} \left[{x}^{- 1}\right]$ $\textcolor{w h i t e}{f ' \left(x\right)} = \frac{1}{2} \cdot {\left({e}^{\tan} \left(\frac{1}{x}\right)\right)}^{\frac{1}{2} - 1} \cdot {e}^{\tan} \left(\frac{1}{x}\right) \cdot {\sec}^{2} \left(\frac{1}{x}\right) \cdot - 1 \cdot {x}^{- 1 - 1}$ $\textcolor{w h i t e}{f ' \left(x\right)} = \frac{- {\left({e}^{\tan} \left(\frac{1}{x}\right)\right)}^{- \frac{1}{2}} {e}^{\tan} \left(\frac{1}{x}\right) {\sec}^{2} \left(\frac{1}{x}\right) {x}^{- 2}}{2}$ $\textcolor{w h i t e}{f ' \left(x\right)} = \frac{- {e}^{\tan} \left(\frac{1}{x}\right) {\sec}^{2} \left(\frac{1}{x}\right)}{2 {x}^{2} \sqrt{\left({e}^{\tan} \left(\frac{1}{x}\right)\right)}}$ $\textcolor{w h i t e}{f ' \left(x\right)} = - \frac{{e}^{\tan} \left(\frac{1}{x}\right) {\sec}^{2} \left(\frac{1}{x}\right)}{2 {x}^{2} \sqrt{\left({e}^{\tan} \left(\frac{1}{x}\right)\right)}}$
2019-06-24 11:21:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.996687650680542, "perplexity": 14906.219095168788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999482.38/warc/CC-MAIN-20190624104413-20190624130413-00520.warc.gz"}
https://taoofmac.com/space/blog/2004/09/19
# The Poor Man's Growl Bridge In case you're interested in writing portable Perl and Python scripts that can be deployed anywhere and use Growl with minimal hassle (i.e., without installing all the extra bridging crud), this bit of Python code works fine for me: import os APPLESCRIPT = "/usr/bin/osascript" def notify(title, description, icon = "Finder"): # see if we're on a Mac if os.path.exists(APPLESCRIPT): # See if Growl is installed if os.path.exists("/Library/Frameworks/GrowlAppBridge.framework"): applescript = os.popen(APPLESCRIPT, 'w') applescript.write( 'tell application "GrowlHelperApp"\n' + 'notify with title "%s" description "%s" icon of application "%s"\n' % (title, description, icon) + 'end tell') applescript.close() else: # use something else here, or edit the if clauses to fall straight down pass else: # use the age old UNIX way print "NOTIFICATION - %s: %s" % (title, description) if __name__ == '__main__': notify( "Python", "Poor man's Growl bridge using piping" ) (The Perl version is left as an exercise to the readers - after all, it's trivial to port, and each Perl geek will want to do it their own way...) Of course, I've barely scratched the surface - but if you're anything like me and hate depending on any one platform, this should be easy enough to graft into your own scripts. ## Arrrrrrr! That's my little contribution to Talk Like a Pirate Day. I've got a busy Sunday ahead, so if you want to see the whole front page in pirate talk, it be here.
2017-11-21 13:46:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36036187410354614, "perplexity": 7265.044314230498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806388.64/warc/CC-MAIN-20171121132158-20171121152158-00111.warc.gz"}
https://codereview.stackexchange.com/questions/87506/spoj-problem-the-last-digit-of-a-number-to-a-power
# SPOJ problem - The last digit of a number to a power Here is the problem statement: Given integers $a$ ($0 \le a \le 20$) and $b$ ($0 \le b \le 2,147,483,000$), where $a$ and $b$ are not both 0, find the last digit of $a^b$. Input The first line of input contains an integer $t$, the number of test cases ($t \le 30$). $t$ test cases follow. For each test case will appear $a$ and $b$ separated by a space. Output For each test case output an integer per line representing the result. Example Input: 2 3 10 6 2 Example Output: 9 6 Here is the code which is exceeding the time limit: #include <stdio.h> int main() { int t; scanf("%d",&t); while(t--) { long long int base,exponent; scanf("%lld%lld",&base,&exponent); if(base==0&&exponent==0) { printf("1\n"); } else { long long int digit=1; long long int i; for(i=1;i<=exponent;i++) { digit=(base*digit)%10; } printf("%lld\n",digit); } } return 0; } How do I make this more efficient? As you have probably noticed, this is not efficient because you have a loop running exponent times (which could be over 2 billion). The way to make this more efficient is not to make your code faster, but to choose a better algorithm. I'm not going to tell you outright what the answer is, but do the following. Add some output to your loop: for(i=1;i<=exponent;i++) { digit=(base*digit)%10; printf("digit = %d\n", digit); } and then run your program for some (small) sample inputs. You should notice a pattern. The key is to identify the pattern and use it to avoid running the loop a billion times.
2020-08-13 18:30:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3397299349308014, "perplexity": 730.9171068407786}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739048.46/warc/CC-MAIN-20200813161908-20200813191908-00400.warc.gz"}