https://www.queryhome.com/puzzle/15715/how-many-chocolates-are-there-in-the-picture
# How many chocolates are there in the picture? How many chocolates are there in the picture? posted Jul 2, 2016 Looks like 8 chocolates. But that is so obvious. Where is the catch?
2021-07-29 19:17:35
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8333017826080322, "perplexity": 3125.105555099468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153892.74/warc/CC-MAIN-20210729172022-20210729202022-00622.warc.gz"}
https://study.com/academy/answer/consider-the-function-f-x-y-xy-3-x-4-plus-y-4-a-find-the-maximal-domain-d-so-that-f-mapping-d-to-r-is-a-function-i-e-find-all-points-x-y-at-which-f-is-defined-b-at-which-points-of-d-from.html
# Consider the function f(x, y) := xy^3/(x^4 + y^4). (a) Find the maximal domain D so that f mapping D... ## Question: Consider the function {eq}\displaystyle f(x, y):= \frac{xy^3}{x^4 + y^4} {/eq} (a) Find the maximal domain {eq}D {/eq} so that {eq}f:D \rightarrow \mathbb{R} {/eq} is a function (i.e. find all points {eq}(x, y) {/eq} at which {eq}f {/eq} is defined). (b) At which points of {eq}D {/eq} from part (a) is {eq}f {/eq} continuous? (c) If {eq}f {/eq} has singularities, identify them, and determine if they are continuously removable. ## Domain and Continuity A continuous function is one without gaps, holes or interruptions. A discontinuity is removable if and only if the function can be simplified so that the factor causing the discontinuity cancels; therefore, a factoring procedure lets us determine whether a discontinuity is removable. ## Answer and Explanation: The function is: {eq}\displaystyle f(x,y)= \frac{xy^3}{x^4 + y^4} {/eq} (a) The denominator cannot be zero, so the domain of the function is: {eq}\displaystyle \{(x,y) \in \mathbb{R}^2 \; \text{ / } \; x^4+y^4 \neq 0\} \; \text{ or } \; \displaystyle \mathbb{R}^2 - \{ (0,0) \} {/eq} (b) The function is continuous at every point {eq}(x,y) \in \mathbb{R}^2 {/eq} except {eq}(0,0) {/eq}, since away from the origin it is a quotient of polynomials with non-vanishing denominator. (c) The only singularity is at {eq}(0,0) {/eq}, and it is not removable: along the path {eq}y=0 {/eq} we get {eq}f(x,0)=0 {/eq}, while along {eq}y=x {/eq} we get {eq}\displaystyle f(x,x)=\frac{x^4}{2x^4}=\frac{1}{2} {/eq}, so the limit at the origin does not exist. Moreover, {eq}x^4+y^4 {/eq} admits no cancellation with the numerator, so no simplification can remove the discontinuity.
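The path-limit argument in part (c) can be checked numerically. The following snippet is my own illustration, not part of the original answer: it evaluates f along the x-axis and along the diagonal as the origin is approached.

```python
# Numerical check that the singularity of f(x, y) = x*y**3 / (x**4 + y**4)
# at the origin is not removable: the limit depends on the approach path.

def f(x, y):
    return x * y**3 / (x**4 + y**4)

# Approach along the x-axis (y = 0): f(t, 0) = 0 for every t != 0.
along_axis = [f(t, 0.0) for t in (1e-1, 1e-3, 1e-6)]

# Approach along the diagonal (y = x): f(t, t) = t**4 / (2*t**4) = 1/2.
along_diag = [f(t, t) for t in (1e-1, 1e-3, 1e-6)]

print(along_axis)  # all 0.0
print(along_diag)  # all 0.5
```

Since the two path limits disagree (0 versus 1/2), no value assigned at (0, 0) can make f continuous there.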
2020-05-28 19:35:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9990999698638916, "perplexity": 3407.1053954706854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399830.24/warc/CC-MAIN-20200528170840-20200528200840-00471.warc.gz"}
https://mathematica.stackexchange.com/questions/119779/making-a-histogram-with-a-log-scale
# Making a histogram with a log scale Sorry for my poor English. I am really new to Mathematica. I have a data set and plot it as a histogram. The whole script is the following: img = ImageResize[Import["test.JPG"], 1000]; img = ImageTrim[img, {{.02, .02}, {.98, .98}}, DataRange -> {{0, 1}, {0, 1}}]; img1 = DeleteSmallComponents[Binarize[img, 0.3], smallSize]; img1 = DeleteSmallComponents[ColorNegate[img1], smallSize]; img1 = ColorNegate[Binarize[img1]]; img2 = MorphologicalComponents[img1] // Colorize; img3 = WatershedComponents[img1, Method -> {"MinimumSaliency", 2.0}] // Colorize; Show[GraphicsGrid[{{img2, img3}}, ImageSize -> 1000, Spacings -> {0, 0}]] img4 = ColorNegate[img1]; img5 = MorphologicalComponents[img4] // Colorize; img6 = WatershedComponents[img4, Method -> {"MinimumSaliency", 1.0}] // Colorize; Show[GraphicsGrid[{{img5, img6}}, ImageSize -> 1000, Spacings -> {0, 0}]] Now I would prefer to plot it on a log scale so it would be shown as a line with a specific slope. In other words, I want to find the x and y coordinates of the red circles shown in the picture. (The red circles are located at the median of each bin.) And then plot these red points on a semi-log scale. My code related to my request: window[list_, {xmin_, xmax_}] := Pick[list, Boole[xmin <= # <= xmax] & /@ list, 1] data = Tally[Flatten[MorphologicalComponents[img5, CornerNeighbors -> False]]]; window[data[[All, 2]], {10, 2000}]; GraphicsGrid[{{img, img5, Histogram[Sqrt[%]]}}, ImageSize -> 1000, Spacings -> {0, 0}] • Can you elaborate on what it means for a histogram (log scale or otherwise) to be shown as a line with a specific slope? I am totally unfamiliar with that concept. – JimB Jul 1 '16 at 3:11 • Sorry, the data is like: Sort[data, #1[[2]] > #2[[2]] &] {{1, 409318}, {0, 174697}, {189, 1443}, {286, 1056}, {289, 964}, {361, 903}, {259, 685}....... The first coordinate is the label of each area and the second is the area of this region.
It just labels each region based on its position in the image. For the semi-log plot, I want the x axis to be the number of regions and the y axis to be the area. For example: the point (9,60) in the semi-log plot means that there are 9 regions with area 60. – Shuoqi Li Jul 1 '16 at 3:50 • I am voting to close this question because it is not at all clear what you are asking. We don't have the context you have, so we don't know what data is, what your code is supposed to do, what "areas" and "regions" are, etc. Try to explain it in a clear and concise way, so that someone who has absolutely no idea what you are doing (we don't) will understand it. – Szabolcs Jul 1 '16 at 7:54 • @JimBaldwin Doing such transformations is in fact quite common. Also, the Histogram function does much more than histograms, so it's not really a concern that the result is not technically that. Here's an example: data = RandomVariate[ParetoDistribution[1, 4], 50000];. Now Histogram[data, PlotRange -> All] is not particularly informative, while Histogram[data, "Log", {"Log", "Count"}] is very much so. We can also do Histogram[data, "Log", {"Log", "SurvivalCount"}]. There's the revealing straight line, summoned up with a single simple command. – Szabolcs Jul 1 '16 at 20:19 • @Jim Also note that with the vertical axis transformation the ticks are transformed too. So it is still a proper histogram, just displayed in a different way. – Szabolcs Jul 1 '16 at 20:28
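The log-log binning that Szabolcs demonstrates with Histogram[data, "Log", {"Log", "Count"}] can also be reproduced outside Mathematica. Below is a minimal NumPy sketch (my own illustration, with synthetic Pareto-distributed values standing in for the region areas): binning heavy-tailed data into logarithmically spaced bins is what makes the power-law tail show up as a straight line on log-log axes.

```python
import numpy as np

# Synthetic stand-in for the region areas: Pareto(1, 4), heavy-tailed.
rng = np.random.default_rng(0)
areas = rng.pareto(4.0, 50000) + 1.0  # samples are all >= 1

# Logarithmically spaced bin edges, analogous to the "Log" bin spec.
# The small padding on the upper edge keeps the maximum sample inside range.
edges = np.logspace(0.0, np.log10(areas.max()) + 0.01, 30)
counts, _ = np.histogram(areas, bins=edges)

# Geometric bin centers, for plotting counts vs. centers on log-log axes;
# the roughly straight line's slope reflects the power-law exponent.
centers = np.sqrt(edges[:-1] * edges[1:])
print(counts.sum())  # 50000: every sample falls inside some bin
```

Plotting `centers` against `counts` with both axes logarithmic gives the "revealing straight line" mentioned in the comments.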
2020-02-24 12:48:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33532384037971497, "perplexity": 1875.340248013684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145941.55/warc/CC-MAIN-20200224102135-20200224132135-00259.warc.gz"}
http://docs.itascacg.com/flac3d700/common/contactmodel/lineardipole/doc/manual/def_lineardipole_property.html
# Linear Dipole Model Properties

The properties of the Linear Dipole model are shown below for reference. More detailed information regarding these properties is presented here.

Note: some properties are read-only, as stated below.

lineardipole

- dipole_cap f $$d_{cap}$$ - distance to cap the dipole force/moment contribution.
- dipole_d f $$d_d$$ - dipole distance used for contact activity.
- dipole_f v $$\mathbf{F^{di}}$$ - dipole force.
- dipole_m1 v $$\mathbf{M_1}$$ - moment applied to piece 1 due to the dipole. This is a read-only property.
- dipole_m2 v $$\mathbf{M_2}$$ - moment applied to piece 2 due to the dipole. This is a read-only property.
- dipole_mo1 v $$\mathbf{m_1}$$ - original dipole moment of piece 1.
- dipole_mo2 v $$\mathbf{m_2}$$ - original dipole moment of piece 2.
- dp_force v $$\mathbf{F^d}$$ - dashpot force in units of force. Expressed in the contact plane coordinate system. This is a read-only property.
- dp_mode i $$M_d$$ - dashpot mode with default value 0.
- dp_nratio f $$\beta_n$$ - normal critical damping ratio with default value 0.0.
- dp_sratio f $$\beta_s$$ - shear critical damping ratio with default value 0.0.
- emod f $$E^*$$ - effective modulus in units of force/area. This is a read-only property.
- fric f $$\mu$$ - friction coefficient with default value 0.0.
- kn f $$k_n$$ - normal stiffness in units of force/length with default value 0.0.
- kratio f $$\kappa^*$$ - normal-to-shear stiffness ratio. This is a read-only property.
- ks f $$k_s$$ - shear stiffness in units of force/length with default value 0.0.
- lin_force v $$\mathbf{F^l}$$ - linear force in units of force with default value $$\mathbf{0}$$. Expressed in the contact plane coordinate system.
- lin_mode i $$M_l$$ - normal-force update mode with default value 0 (absolute update mode). Incremental normal-force update mode is set with $$M_l$$ = 1.
- lin_slip b $$s$$ - slip state. This is a read-only property.
- rgap f $$g_r$$ - reference gap in units of length with default value 0.0.
- user_area f $$A$$ - set the contact area to the constant value f.
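The distinction between the absolute and incremental normal-force update modes selected by lin_mode can be sketched as follows. This is my own illustration of the assumed semantics (it is not ITASCA source code, and the sign conventions are my assumption): the absolute mode recomputes the normal force from the current overlap, while the incremental mode accumulates the stiffness times the change in gap.

```python
# Illustrative sketch (assumed semantics, not ITASCA code) of the two
# normal-force update modes named by lin_mode (M_l) above.

def normal_force_absolute(kn, gap):
    # M_l = 0: recompute from the current gap. Negative gap = overlap;
    # the linear spring is assumed to carry no tension.
    return -kn * gap if gap < 0.0 else 0.0

def normal_force_incremental(kn, f_prev, dgap):
    # M_l = 1: update the previous force by stiffness times gap increment.
    return max(f_prev - kn * dgap, 0.0)

# With constant stiffness the two modes agree along a loading path:
kn = 2.0
f_abs = normal_force_absolute(kn, -0.75)
f_inc = normal_force_incremental(kn, normal_force_absolute(kn, -0.5), -0.25)
print(f_abs == f_inc)  # True: both give 1.5
```

The incremental mode matters when the stiffness or reference gap changes during a simulation, since the accumulated force then retains its history instead of being recomputed.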
2022-07-03 07:39:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5009188055992126, "perplexity": 6991.384763067667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104215805.66/warc/CC-MAIN-20220703073750-20220703103750-00596.warc.gz"}
https://docs.rs/concrete-npe/latest/concrete_npe/
# Crate concrete_npe ## Description Welcome to the concrete-npe documentation! This library makes it possible to estimate the noise propagation after homomorphic operations. It makes it possible to obtain the characteristics of the output distribution of the noise, which we call the dispersion, and which regroups the variance and the expectation. This is particularly useful to track the noise growth during the homomorphic evaluation of a circuit. The explanations and proofs of these formulas can be found in the appendices of the article Improved Programmable Bootstrapping with Larger Precision and Efficient Arithmetic Circuits for TFHE by Ilaria Chillotti, Damien Ligier, Jean-Baptiste Orfila and Samuel Tap. ## Quick Example The following piece of code shows how to obtain the variance $\sigma_{add}$ of the noise after a simulated homomorphic addition between two ciphertexts which have variances $\sigma_{ct_1}$ and $\sigma_{ct_2}$, respectively. ## Example: use concrete_commons::dispersion::{DispersionParameter, Variance}; //We suppose that the two ciphertexts have the same variance. let var1 = Variance(2_f64.powf(-25.)); let var2 = Variance(2_f64.powf(-25.)); //We call the npe to estimate characteristics of the noise after an addition //between these two variances. //Here, we assume that ciphertexts are encoded over 64 bits. let var_out = estimate_addition_noise::<u64, _, _>(var1, var2); println!("Expect Variance (2^24) = {}", f64::powi(2., -24)); println!("Output Variance {}", var_out.get_variance()); assert!((f64::powi(2., -24) - var_out.get_variance()).abs() < 0.0001); ## Traits This trait contains functions related to the dispersion of secret key coefficients, and operations related to the secret keys (e.g., products of secret keys). ## Functions Computes the dispersion of an addition of two uncorrelated ciphertexts. Computes the dispersion of a CMUX controlled with a GGSW encrypting a binary key. 
Computes the dispersion of an external product (between an RLWE and a GGSW encrypting a binary key, i.e., as in the TFHE PBS). Computes the dispersion of a multiplication of a ciphertext by a scalar. Computes the dispersion of the constant terms of a GLWE after an LWE to GLWE keyswitch. Computes the dispersion of the non-constant GLWE terms after an LWE to GLWE keyswitch. Computes the dispersion of a modulus switching of an LWE encrypted with binary keys. Computes the dispersion of the bits greater than $q$ after a modulus switching. Computes the dispersion of a GLWE multiplication between two GLWEs (i.e., a tensor product followed by a relinearization). Computes the number of bits affected by the noise with a dispersion describing a normal distribution. Computes the dispersion of a PBS a la TFHE (i.e., the GGSW encrypts a binary key, and the initial noise of the RLWE is equal to zero). Computes the dispersion of a multiplication between an RLWE ciphertext and a scalar polynomial. Computes the dispersion of a GLWE after relinearization. Computes the dispersion of an addition of several uncorrelated ciphertexts. Computes the dispersion of a tensor product between two independent GLWEs given a set of parameters. Computes the dispersion of a multisum between uncorrelated ciphertexts and scalar weights $w_i$, i.e., $\sigma_{out}^2 = \sum_i w_i^2 * \sigma_i^2$.
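The addition and multisum formulas quoted above are just sums of variances of independent noise terms. A plain-Python sketch of that arithmetic (this mimics the behavior described in the docs; it is not the concrete-npe API):

```python
# Illustrative noise-propagation arithmetic, assumed from the docs above:
# for independent noises, variances add.

def estimate_addition_noise(var1, var2):
    # sigma_add^2 = sigma_ct1^2 + sigma_ct2^2
    return var1 + var2

def estimate_multisum_noise(variances, weights):
    # sigma_out^2 = sum_i w_i^2 * sigma_i^2
    return sum(w * w * v for v, w in zip(variances, weights))

var1 = var2 = 2.0 ** -25
var_out = estimate_addition_noise(var1, var2)
print(var_out == 2.0 ** -24)  # True: 2^-25 + 2^-25 = 2^-24
```

This matches the expected value asserted in the crate's Quick Example (an output variance of 2^-24 from two inputs of variance 2^-25).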
2022-05-16 11:50:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5412251949310303, "perplexity": 3364.4021628684663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510117.12/warc/CC-MAIN-20220516104933-20220516134933-00359.warc.gz"}
http://universeinproblems.com/index.php/Interactions_in_the_Dark_Sector
# Interactions in the Dark Sector Dark energy is the main component of the Universe's energy budget, so it is natural to consider the possibility of its interaction with the other components of the Universe, in particular with the second most important one, dark matter. Additional interest in this possibility comes from the fact that within its framework one can address the so-called "coincidence problem": the coincidence in order of magnitude (at present) of the dark energy and dark matter densities (0.7 and 0.3, respectively). As the nature of these two components is still unknown, we cannot describe the interaction between them starting from first principles, and we are forced to turn to phenomenology. The phenomenology can be based on the conservation equation $\dot\rho_i+3H(\rho_i+p_i)=0.$ In the case of interaction between the components, a source term must be introduced into the right-hand side of this equation. It is natural to assume that the interaction is proportional to the energy density multiplied by a constant of inverse-time dimension, and the natural choice for that constant is the Hubble parameter. ## General Analysis ### Problem 1 Construct a model of the Universe containing only interacting dark energy and dark matter, with their total energy density conserved. ### Problem 2 Within the framework of the previous problem, find the effective state parameters $w^{(\varphi)}_{eff}$ and $w^{(m)}_{eff}$ that would allow one to treat the components as non-interacting. ### Problem 3 Using the effective state parameters obtained in the previous problem, analyze the dynamics of dark matter and dark energy depending on the sign of the rate of energy density exchange in the dark sector. ### Problem 4 Show that coupled quintessence can behave like an uncoupled phantom model, but without any negative kinetic energy. 
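As a numerical illustration of Problem 1, the two balance equations can be integrated in e-folds N = ln a. This is my own sketch with an assumed phenomenological coupling Q = delta*H*rho_DM (one common choice; the problem itself does not fix Q): the coupling only redistributes energy between the components, so the total density still obeys the uncoupled conservation equation, while the dark matter dilutes as a^(delta-3) instead of a^-3.

```python
import math

# Interacting dark sector sketch (assumed coupling Q = delta*H*rho_dm):
#   d(rho_dm)/dN = (-3 + delta) * rho_dm
#   d(rho_de)/dN = -3 * (1 + w) * rho_de - delta * rho_dm
# The Q-terms cancel in the sum, so total energy is exchanged, not created.

def evolve(delta=0.1, w=-1.0, rho_dm0=0.3, rho_de0=0.7, n_end=2.0, steps=20000):
    h = n_end / steps
    dm, de = rho_dm0, rho_de0
    for _ in range(steps):  # forward Euler, accurate enough for a sketch
        d_dm = (-3.0 + delta) * dm
        d_de = -3.0 * (1.0 + w) * de - delta * dm
        dm, de = dm + h * d_dm, de + h * d_de
    return dm, de

dm, de = evolve()
# Analytic solution of the first equation: rho_dm = rho_dm0 * a**(delta - 3)
analytic_dm = 0.3 * math.exp(2.0) ** (-3.0 + 0.1)
```

The numerically evolved dark matter density agrees with the modified dilution law, which is exactly the behavior explored in Problems 10 and 12 below.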
### Problem 5 Show that within the framework of the model of the Universe described in the previous problems the Klein-Gordon equation for the scalar field takes the form: $$\ddot{\varphi}+3H\dot{\varphi}+\frac{dV}{d\varphi}=-\Gamma ,\; \Gamma \equiv \frac{Q}{\dot{\varphi}}$$ Here $Q$ describes the interaction between dark energy and dark matter. ### Problem 6 The dynamics of the present Universe is assumed to be dominated by dark energy and dark matter. Within the framework of the model of interacting dark energy and dark matter (see the problems above), obtain equations for the densities $\rho_{DE}$ and $\rho_{DM}$ under the assumption that $Q=-3H\Pi$, where the quantity $\Pi$ can be considered as an effective pressure. ### Problem 7 For the scalar field $\varphi$ in the form of quintessence use the variables $x^2=\frac{\kappa^2\dot\varphi^2}{6H^2},\ y^2=\frac{\kappa^2V}{3H^2}$ to express the total equation of state parameter $w_{tot}\equiv p_{tot}/\rho_{tot}$, the quintessence state parameter $w_\varphi\equiv p_\varphi/\rho_\varphi$ and the deceleration parameter $q$, and determine the allowed variation range of these parameters. The Universe is assumed to be flat. ## Linear Models ### Problem 8 Show that the energy balance equations (modified conservation equations) for $Q\propto H$ or $Q\propto\dot\varphi$ are independent of $H$ when expressed in the variables $x(N), y(N)$, where $N=\log a$; thus the phase space of such a coupling model is the two-dimensional $(x,y)$ space. ### Problem 9 Use the variables $(x,y)$ introduced in the previous problem to obtain the system of equations describing quintessence in the potential $V(\varphi)=V_0\exp(-k\lambda\varphi),$ where $k^2\equiv8\pi G$, $\lambda$ is a dimensionless constant and $V_0>0$, in the case of a flat Universe and under the assumption $Q=\alpha H\rho_{DM}$. ### Problem 10 Find the scale factor dependence of the dark matter density assuming that the interaction between the dark matter and the dark energy equals $Q=\delta(a)H\rho_{DM}$. 
### Problem 11 Find the scale factor dependence of the dark matter and dark energy densities assuming that $Q=\delta H\rho_{DE}$ $(\delta=const)$. ### Problem 12 Let the dark energy equation of state be $p_{DE}=w\rho_{DE}$, where $w=const$. Within the framework of the previous problem, find the dependence of the dark energy density on the scale factor, assuming that $\rho_{DM}=\rho_{DM0}a^{-3+\delta}$, where $\delta$ characterizes the deviation of the dark matter density's evolution from the standard one (in the absence of interaction). ### Problem 13 Let the densities' ratio in the model of interacting dark energy and dark matter have the form $\frac{\rho_{DM}}{\rho_{DE}}\propto Aa^{-\xi}.$ Determine the interaction $Q$ between the components. ### Problem 14 Determine the statefinder $\{r,s\}$ (see Chapter Dark Energy) in the model of interacting dark energy and dark matter with the interaction intensity $Q=-3\alpha H$. ### Problem 15 Consider a flat Universe filled with dark energy in the form of the Chaplygin gas ($p_{ch}=-A/\rho_{ch}$) and dark matter. Let the components interact with each other with intensity $Q=3\Gamma H\rho_{ch}$ ($\Gamma>0$). Show that for large $a$ ($a\to\infty$) one has $w_{ch}\equiv p_{ch}/\rho_{ch}<-1$, i.e. in such a model the Chaplygin gas behaves as phantom energy. ### Problem 16 Interaction between dark matter and dark energy leads to non--conservation of matter, or equivalently, to a scale dependence of the mass of the particles that constitute the dark matter. Show that, within the framework of the model considered in the previous problems, the relative change of the particles' mass per Hubble time equals the interaction constant. ### Problem 17 Consider a model of a flat homogeneous and isotropic Universe filled with matter (baryonic and dark), radiation and a negative pressure component (dark energy in the form of quintessence). Assuming that the baryonic matter and radiation are separately conserved and the dark components interact with each other, describe the dynamics of such a system. 
### Problem 18 Assume that the dark matter particles' mass $m_{DM}$ depends on a scalar field $\varphi$. Construct the model of interacting dark energy and dark matter in this case. ### Problem 19 Find the equation of motion for the scalar field interacting with dark matter if its particles' mass depends on the scalar field. ### Problem 20 Find the interaction $Q$ for the Universe with interacting dark energy and dark matter, assuming that their densities' ratio takes the form $\rho_m/\rho_{DE}=f(a)$, where $f(a)$ is an arbitrary differentiable function of the scale factor. ### Problem 21 Using the results of the previous problem, find the quantity $E^2\equiv H^2/H_0^2$, which is needed to test cosmological models and to obtain restrictions on the cosmological parameters. Assume that $f(a)=f_0 a^\xi$, where $\xi$ is a constant. ### Problem 22 In the Universe described in problems 1 and 2, calculate the distance modulus corresponding to the redshift range $0.014\leq z\leq 1.6$ and find the quantity $E=H/H_0$. ### Problem 23 Consider a model of the Universe with interacting components, in which the scale factor dependence of one of them takes the form $\rho _{1} (a)=C_{1} a^{\alpha } +C_{2} a^{\beta }$, where $C_1$, $C_2$, $\alpha$ and $\beta$ are constants. Find the interaction $Q$. ### Problem 24 In the model of the Universe considered above, assume that $\rho_1=\rho_m$, $\rho_2=\rho_{DE}$ and $Q=\gamma H\rho_1$. Find the range of possible values of the interaction constant $\gamma$. ### Problem 25 Show that in the model considered in problem 61 there is no coincidence problem. ### Problem 26 In the model of problem 61 find the scale factor's value $a_{eq}$ at the time when the dark matter density was equal to that of dark energy, and the scale factor's value $a_{ac}$ at the time when the Universe started to accelerate. Find the ranges of the state parameter $w_{DE}$ corresponding to the cases $a_{ac}<a_{eq}$ and $a_{ac}>a_{eq}$. 
## Non-linear Models The interactions studied so far are linear in the sense that the interaction term in the individual energy balance equations is proportional to either the dark matter density, or the dark energy density, or a linear combination of both. From a physical point of view, an interaction proportional to the product of the dark components seems preferable: an interaction between two components should depend on the product of the abundances of the individual components, as, e.g., in chemical reactions. Moreover, this type of interaction compares more favorably with observations than the linear one. Below we investigate the dynamics of a simple two-component model with a number of non-linear interactions (F. Arevalo, A. Bacalhau and W. Zimdahl, arXiv: 1112.5095). ### Problem 27 Let the interaction term $Q$ be a non-linear function of the energy densities of the components and/or the total energy density. Motivated by the structure $\rho_{DM}=\frac{r}{1+r}\rho,\ \rho_{DE}=\frac{1}{1+r}\rho,$ $\rho\equiv\rho_{DM}+\rho_{DE},\ r\equiv\frac{\rho_{DM}}{\rho_{DE}},$ consider the ansatz $Q=3H\gamma\rho^m r^n(1+r)^s,$ where $\gamma$ is a positive coupling constant. Show that 1) for $s=-m$ the interaction term is proportional to a product of powers of the densities of the components; 2) for $(m,n,s)=(1,1,-1)$ and $(m,n,s)=(1,0,-1)$ the linear cases are reproduced. ### Problem 28 Find an analytical solution of the non-linear interaction model covered by the ansatz of the previous problem for $(m,n,s)=(1,1,-2)$, $$Q=3H\gamma\rho_{DE}\rho_{DM}/\rho$$. ### Problem 29 Find an analytical solution of the non-linear interaction model for $(m,n,s)=(1,2,-2)$, $$Q=3H\gamma\rho_{DM}^2/\rho$$. ### Problem 30 Find an analytical solution of the non-linear interaction model for $(m,n,s)=(1,0,-2)$, $$Q=3H\gamma\rho_{DE}^2/\rho$$. ## The Chaplygin Gas Any fundamental science shows an obvious tendency to reduce the number of fundamental substances. 
The well-known examples are the transition from chemical elements to nucleons and electrons, and then from baryons and mesons to quarks. Cosmology manifests the same tendency in its attempts to develop a unified model of dark energy and dark matter. The observed transition from matter domination to dark energy domination makes it attractive to introduce a dynamical substance which would mimic the properties of matter in the early Universe and possess the negative pressure needed to provide the accelerated expansion in the present epoch. The Chaplygin gas represents the simplest substance with the required properties. Its equation of state is postulated to be: $p=-\frac{A}{\rho}, \ A>0.$ ### Problem 31 Find the scale factor dependence of the density of the Chaplygin gas. ### Problem 32 Show that in the early Universe the Chaplygin gas behaves as matter with zero pressure, and at later times---as the cosmological constant. ### Problem 33 Find the range of the energy density $\rho_{ch}$ corresponding to the accelerated expansion of a Universe filled with dark energy in the form of the Chaplygin gas with pressure $p=-A/\rho_{ch}$ and non--relativistic matter with density $\rho_m$. ### Problem 34 Show that the sound speed in the Chaplygin gas in the late Universe is close to the speed of light. ### Problem 35 Show that the sound speed in the Chaplygin gas behaves as $c_s\propto t^2$ in the matter--dominated epoch. ### Problem 36 Show that the cosmological solution corresponding to the Chaplygin gas can be obtained in the quintessence model. ### Problem 37 Find the dependence of the density on the scale factor in the generalized Chaplygin gas model with the equation of state $p=-\frac{A}{\rho^\alpha},\ (A>0,\ \alpha>0.)$ ### Problem 38 In the generalized Chaplygin gas model (see the previous problem) find the state equation parameter $w$. ### Problem 39 Determine the sound speed in the generalized Chaplygin gas model (see the previous problem). Can it exceed the speed of light? 
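The interpolating behavior asked for in Problems 31 and 32 can be verified numerically. Integrating the conservation equation with p = -A/rho gives rho(a) = sqrt(A + B/a^6) with an integration constant B > 0 (my own statement of the standard result); the snippet below checks the two limits.

```python
import math

# Chaplygin gas density law: rho(a) = sqrt(A + B / a**6),
# obtained from d(rho)/da + 3*(rho + p)/a = 0 with p = -A/rho.
A, B = 1.0, 1.0  # illustrative values

def rho(a):
    return math.sqrt(A + B / a**6)

# Early Universe (a << 1): rho * a**3 -> sqrt(B), i.e. matter-like dilution
# rho ~ a**-3 with effectively zero pressure.
early = rho(1e-3) * (1e-3) ** 3

# Late Universe (a >> 1): rho -> sqrt(A), i.e. a cosmological constant.
late = rho(1e3)
print(early, late)  # both approach 1.0 = sqrt(B) and sqrt(A)
```

The single fluid thus mimics pressureless matter early on and dark energy later, which is precisely the unification motivation stated above.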
### Problem 40 Let us present the energy density in the generalized Chaplygin gas model of the previous problems as the sum $\rho_{ch}^{(gen)} = \rho_{DM}+\rho_{DE}$, where $\rho_{DM}$ is the component with the properties of non-relativistic matter ($p_{DM}=0$) and $\rho_{DE}$ is the component with the properties of dark energy described by the equation of state $p_{DE}=w_{DE}\rho_{DE}$. Find the restriction on the model's parameter $w_{DE}$. ### Problem 41 Find the dependence of the density on the scale factor for the Chaplygin gas model with the equation of state $p = \left( {\gamma - 1} \right)\rho - \frac{A}{\rho ^\alpha };\quad 0 \le \alpha \le 1$ (the so-called modified Chaplygin gas). ### Problem 42 Construct the effective potential for the Chaplygin gas considering it as a scalar field. Do the same for the generalized Chaplygin gas (see the problems above) and the modified Chaplygin gas of the previous problem. ### Problem 43 Show that for the Chaplygin gas model the line $w=-1$ cannot be crossed. ### Problem 44 Show that for the generalized Chaplygin gas model the line $w=-1$ cannot be crossed. ## Universe as the Dynamical System The Universe described by the Friedmann equations can be treated as an autonomous dynamical system. Its behavior is determined by a system of differential equations of the form: $\dot{\vec{x}}=\vec{f}(\vec{x}).$ To study the dynamics of the system it is of crucial importance to find the so-called critical points $\vec{x}^*$, defined by the condition: $\vec{f}(\vec{x}^*)=0.$ In order to study the stability of the critical points, we expand around them: $\vec{x}=\vec{x}^*+\vec{u},\ \dot{\vec{u}}=\hat M(\vec{x}^*)\vec{u}+\vec{g}(\vec{x}).$ Here $\vec{g}(\vec{x})/\|\vec{u}\|\to0$ as $\vec{x}\to\vec{x}^*$, and $\hat M_{ij}(\vec{x}^*)=\frac{\partial f_i}{\partial x_j}(\vec{x}^*)$ is the constant, non-singular stability matrix, whose eigenvalues encode the behavior of the dynamical system near the critical point. 
The eigenvalues are just the roots of the equation $\det(\hat M -\lambda \hat I)=0.$ We restrict ourselves to the two-dimensional case. If the eigenvalues are non-degenerate and real, they describe a stable node for $\lambda_{\pm}<0$, an unstable node for $\lambda_{\pm}>0$, and a saddle if $\lambda_{+}$ and $\lambda_{-}$ have different signs. For complex eigenvalues $\lambda_{\pm}=\alpha\pm i\beta$, it is the sign of $\alpha$ that determines the character of the critical point: for $\alpha=0$ the critical point is a center, for $\alpha<0$ it is a stable focus and for $\alpha>0$ it is an unstable focus. Inspired by Yi Zhang, Hui Li, arXiv:1003.2788 ### Problem 45 Assume that there are two components in the Universe: background matter and dark energy. Obtain the equations of motion for the relative densities of both components. ### Problem 46 Find the fixed points of the dynamical system considered in the previous problem and analyze their stability. ### Problem 47 Find the critical points for the model of interacting dark components with $Q=-3H\Pi$. ### Problem 48 Show that for the model considered in the previous problem, independently of the specific interaction, the existence of the critical points $r_c$ and $\rho_c$ requires a transfer from dark energy to dark matter. ### Problem 49 Find the eigenvalues of the stability matrix for the model of the Universe considered in problems 1 and 2. ### Problem 50 Find the eigenvalues of the stability matrix under the assumption that $Q=3H\gamma\rho^mr^n(1+r)^s$ (see the problem above). ### Problem 51 Classify the critical points for the model of interacting dark components considered in the previous problem.
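The classification procedure described above (find the fixed point, build the stability matrix, inspect its eigenvalues) can be sketched in a few lines. This is my own toy two-dimensional system, not one of the book's cosmological models:

```python
# Critical-point classification sketch for a toy 2D autonomous system
# with a fixed point at the origin.

def f(x, y):
    return (-x + x * y, -2.0 * y + x * x)

def stability_matrix(x, y, h=1e-6):
    # Forward-difference estimate of M_ij = df_i/dx_j at the point (x, y).
    fx0 = f(x, y)
    cols = []
    for dx, dy in ((h, 0.0), (0.0, h)):
        fx1 = f(x + dx, y + dy)
        cols.append([(fx1[i] - fx0[i]) / h for i in range(2)])
    return [[cols[j][i] for j in range(2)] for i in range(2)]

M = stability_matrix(0.0, 0.0)
# For a 2x2 matrix the eigenvalues follow from the trace and determinant:
# lambda_pm = (tr +/- sqrt(tr^2 - 4 det)) / 2.
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
disc = (tr * tr - 4.0 * det) ** 0.5
lam = ((tr + disc) / 2.0, (tr - disc) / 2.0)
print(lam)  # both eigenvalues negative, so the origin is a stable node
```

For this system the exact stability matrix at the origin is diag(-1, -2), so both eigenvalues are real and negative: a stable node by the classification above.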
https://ageconsearch.umn.edu/record/6708
### Abstract

The purpose of the paper is to understand the socio-economic factors affecting the distribution of Community Reinvestment Act loans across four income groups, using county-level information from 1996-2004 for the Delta region. The specific objectives of the paper are to: 1) estimate a seemingly unrelated regression (SUR) to examine the factors affecting the distribution of loans across income groups; 2) prior to the estimation of the SUR, test for autocorrelation, heteroskedasticity and the time series properties of the variables.

Background

To promote depository financial institutions to serve the credit needs of moderate and lower income neighborhoods, the U.S. Congress passed the Community Reinvestment Act (CRA) in 1977. The CRA was introduced to prevent "redlining", the practice of financial institutions excluding moderate and low-income neighborhoods from receiving adequate or fair financial services. Further, the CRA was implemented to ensure that banks provided services to farm and non-farm communities. Economists have examined the Community Reinvestment Act with regard to issues related to banking and treasury, policies, politics and economic analysis. However, the question still remains: "Is redlining still present, or is it simply a case of supply and demand?" Are banks providing services where they are needed in order to improve profitability, or are they avoiding low-income areas because they are perceived to pose too much risk? Are the results of earlier studies an indication that more banks are located where there is more economic growth, and that this is the reason that more loans are coming from these more economically advanced areas? In this paper, we specifically examine the importance of higher levels of education, population growth, economic growth, income levels and sales growth on the amount and number of CRA loans approved.
We use county-level data for the Delta region spread across three states: Mississippi, Arkansas, and Louisiana, for the period 1996-2004. We estimate a seemingly unrelated regression with the amount and/or number of CRA loans as the endogenous variables.

Econometric Methods and Data

Seemingly unrelated regression, also called Zellner estimation, is a generalization of ordinary least squares for multi-equation systems. Like ordinary least squares, the seemingly unrelated regression method assumes that all regressors are independent variables, but seemingly unrelated regression uses the correlations among the errors in different equations to improve the regression estimates. The seemingly unrelated regression method requires an initial ordinary least squares regression to compute residuals. The ordinary least squares residuals are used to estimate the cross-equation covariance matrix. The seemingly unrelated regression for the four income levels: low income (<$100,000), moderate income ($100,000 - $250,000), medium income (>$250,000) and high income (< million dollars), can be represented by the econometric model as:

(1.1)

where is the vector of endogenous variables, i.e., the amount and/or number of CRA loans approved for the four income groups, a vector of exogenous variables that could potentially include higher levels of education, population growth, economic growth, income levels and sales growth, and are the number of Delta region counties. Equation (1.1) is examined for autocorrelation, heteroskedasticity and the time series properties of the variables. Data to accomplish the objectives of this study will come from the CRA record for the period 1996-2004, and the remaining variables are obtained from the Department of Commerce, Bureau of Economic Analysis.

Results/Expected Results and Discussion

Our earlier analysis seems to indicate a direct correlation between the counties with higher levels of education, population and sales growth and the amount and number of CRA loans.
According to the results of this study, the areas of the Mississippi Delta that have enjoyed favorable economic growth in the past 8 years are those that have also received the most CRA loan activity. DeSoto County by far is the Delta county that has seen the most growth, and it is consistently the county that has received the most loan funds and the largest number of loans. The same results occur in Mississippi outside the Delta region. Madison, Rankin, Harrison, and Jackson counties are urban areas that are growing faster than the rest of the state. These counties have consistently recorded more CRA loans than any other areas.

According to the data, the percentage of CRA loans that went to low-income groups was consistently the smallest. Loans to low-income groups actually decreased overall from 1996 to 2004. The percent of CRA loans to moderate-income groups also decreased from 1996 to 2004. The percent of funds to medium-income groups rose, while the percent of loans in the high-income group stayed relatively the same over time. The total amount of CRA loans increased for all income groups during the study period, with the exception of the low-income group, which decreased slightly.

Empirical application of the seemingly unrelated regression econometric model would allow us to examine the factors affecting the distribution of the CRA loans across the income groups. Further, the results from the paper would provide input to policy makers, the banking community, and investors to make decisions on the future of CRA loan distribution.
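The two-step (feasible GLS) procedure described above, equation-by-equation OLS, a residual-based estimate of the cross-equation error covariance, then GLS on the stacked system, can be sketched as follows. The data-generating process and all variable names below are illustrative assumptions for a two-equation system, not the paper's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                                   # illustrative sample size (county-years)

# Illustrative regressors (e.g. education, population growth) and true coefficients
X1 = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
X2 = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta1_true = np.array([1.0, 2.0, -1.0])
beta2_true = np.array([0.5, -0.5, 1.5])

# Errors correlated across equations: this is what SUR exploits
Sigma_true = np.array([[1.0, 0.7], [0.7, 1.0]])
E = rng.multivariate_normal([0.0, 0.0], Sigma_true, size=n)
y1 = X1 @ beta1_true + E[:, 0]
y2 = X2 @ beta2_true + E[:, 1]

# Step 1: equation-by-equation OLS, then estimate the cross-equation covariance
ols = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
r1 = y1 - X1 @ ols(X1, y1)
r2 = y2 - X2 @ ols(X2, y2)
Sigma_hat = np.cov(np.vstack([r1, r2]))   # 2x2 estimated error covariance

# Step 2: stacked GLS with weight matrix (Sigma_hat^{-1} kron I_n)
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
W = np.kron(np.linalg.inv(Sigma_hat), np.eye(n))
beta_sur = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta_sur)   # first 3 entries near beta1_true, last 3 near beta2_true
```

With uncorrelated errors the weight matrix is block-diagonal and SUR collapses to equation-by-equation OLS; the efficiency gain comes entirely from the off-diagonal entries of the estimated covariance.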
https://nbviewer.jupyter.org/github/fonnesbeck/Bios8366/blob/master/notebooks/Section7_6-Bayesian-Neural-Networks.ipynb
## Bayesian Neural Networks in PyMC3

Bayesian deep learning combines deep neural networks with probabilistic methods to provide information about the uncertainty associated with its predictions. Not only is accounting for prediction uncertainty important for real-world applications, it can also be useful in training. For example, we could train the model specifically on samples it is most uncertain about. We can also quantify the uncertainty in our estimates of network weights, which could inform us about the stability of the learned representations of the network.

In classical neural networks, weights are often L2-regularized to avoid overfitting, which corresponds exactly to Gaussian priors over the weight coefficients. We could, however, imagine all kinds of other priors, like spike-and-slab to enforce sparsity (this would be more like using the L1-norm). If we wanted to train a network on a new object recognition data set, we could bootstrap the learning by placing informed priors centered around weights retrieved from other pre-trained networks, like GoogLeNet.

Additionally, a very powerful approach in probabilistic programming is hierarchical modeling, which allows pooling of things that were learned on sub-groups to the overall population. Applied here, individual neural nets can be applied to sub-groups while sharing information with the overall population.
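The L2/Gaussian-prior correspondence mentioned above can be verified numerically for a linear model (a plain-NumPy aside, separate from the PyMC3 example that follows): the MAP estimate under an isotropic Gaussian prior with variance $\tau^2$ and noise variance $\sigma^2$ is exactly the ridge solution with penalty $\lambda = \sigma^2/\tau^2$. Here we minimize the negative log posterior by gradient descent and compare with the closed-form ridge estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
sigma2, tau2 = 1.0, 1.0                  # noise variance, prior variance
y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=200)

lam = sigma2 / tau2                      # equivalent ridge penalty

# Closed-form ridge / MAP solution: (X'X + lam*I)^{-1} X'y
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# Gradient descent on the negative log posterior:
#   0.5*||y - Xw||^2 / sigma2 + 0.5*||w||^2 / tau2
w = np.zeros(3)
lr = 1e-3
for _ in range(20000):
    grad = X.T @ (X @ w - y) / sigma2 + w / tau2
    w -= lr * grad

print(np.max(np.abs(w - w_ridge)))       # the two solutions coincide
```

Swapping the Gaussian prior for a Laplace prior would instead recover L1 (lasso-style) shrinkage, which is the sense in which spike-and-slab-like priors push toward sparsity.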
Let's generate another simulated classification dataset:

In [ ]:

```python
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='ticks')

import theano
import theano.tensor as tt
import pymc3 as pm

from scipy import optimize
from ipywidgets import interact
from IPython.display import SVG
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import scale

floatX = theano.config.floatX
theano.config.compute_test_value = 'ignore'
```

In [ ]:

```python
X, y = datasets.make_moons(noise=0.2, n_samples=1000)
X = scale(X)

fig, ax = plt.subplots()
ax.scatter(X[y==0, 0], X[y==0, 1], label='Class 0')
ax.scatter(X[y==1, 0], X[y==1, 1], color='r', label='Class 1')
sns.despine()
ax.legend()
ax.set(xlabel='X1', ylabel='X2', title='Toy binary classification data set');
```

The scaling performed above should result in faster training. We first create training and test sets, and convert the training set to Theano tensors.

In [ ]:

```python
X = X.astype(floatX)
y = y.astype(floatX)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3)

ann_input = theano.shared(X_train)
ann_output = theano.shared(y_train)
```

Using standard normal deviates for initial values will facilitate convergence.

In [ ]:

```python
n_hidden = 5

init_1 = np.random.randn(X.shape[1], n_hidden).astype(floatX)
init_2 = np.random.randn(n_hidden, n_hidden).astype(floatX)
init_out = np.random.randn(n_hidden).astype(floatX)
```

Here we will use 2 hidden layers with 5 neurons each, which is sufficient for such a simple problem.
In [ ]:

```python
with pm.Model() as neural_network:
    # Weights from input to hidden layer
    weights_in_1 = pm.Normal('w_in_1', 0, sd=1,
                             shape=(X.shape[1], n_hidden),
                             testval=init_1)

    # Weights from 1st to 2nd layer
    weights_1_2 = pm.Normal('w_1_2', 0, sd=1,
                            shape=(n_hidden, n_hidden),
                            testval=init_2)

    # Weights from hidden layer to output
    weights_2_out = pm.Normal('w_2_out', 0, sd=1,
                              shape=(n_hidden,),
                              testval=init_out)

    # Build neural-network using tanh activation function
    act_1 = pm.math.tanh(pm.math.dot(ann_input, weights_in_1))
    act_2 = pm.math.tanh(pm.math.dot(act_1, weights_1_2))
    act_out = pm.math.sigmoid(pm.math.dot(act_2, weights_2_out))

    # Binary classification -> Bernoulli likelihood
    out = pm.Bernoulli('out',
                       act_out,
                       observed=ann_output,
                       total_size=y_train.shape[0]  # IMPORTANT for minibatches
                       )
```

We could use Markov chain Monte Carlo sampling, which works pretty well in this case, but this will become very slow as we scale our model up to deeper architectures with more layers. Instead, we will use the ADVI variational inference algorithm. This is much faster and will scale better. Note that this is a mean-field approximation, so we ignore correlations in the posterior.

In [ ]:

```python
with neural_network:
    approx = pm.fit(n=30000)
```

As samples are more convenient to work with, we can very quickly draw samples from the variational approximation using the sample method.

In [ ]:

```python
trace = approx.sample(draws=5000)
```

Plotting the objective function (ELBO) we can see that the optimization slowly improves the fit over time.

In [ ]:

```python
plt.plot(-approx.hist, alpha=.3)
plt.ylabel('ELBO')
plt.xlabel('iteration');
```

Now that we trained our model, let's predict on the hold-out set using a posterior predictive check (PPC).

1. We can use sample_ppc() to generate new data (in this case class predictions) from the posterior (sampled from the variational estimation).
2.
To improve performance, it is better to get the node directly and build the theano graph using our approximation (approx.sample_node).

In [ ]:

```python
# create symbolic input
x = tt.matrix('X')
# symbolic number of samples is supported, we build vectorized posterior on the fly
n = tt.iscalar('n')
# Do not forget test_values or set theano.config.compute_test_value = 'off'
x.tag.test_value = np.empty_like(X_train[:10])
n.tag.test_value = 100
_sample_proba = approx.sample_node(neural_network.out.distribution.p,
                                   size=n,
                                   more_replacements={ann_input: x})
# It is time to compile the function
# No updates are needed for Approximation random generator
# Efficient vectorized form of sampling is used
sample_proba = theano.function([x, n], _sample_proba)
```

In [ ]:

```python
pred = sample_proba(X_test, 500).mean(0) > 0.5
```

In [ ]:

```python
print('Accuracy = {:0.1f}%'.format((y_test == pred).mean() * 100))
```

In [ ]:

```python
fig, ax = plt.subplots()
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
sns.despine()
ax.set(title='Predicted labels in testing set', xlabel='X1', ylabel='X2');
```

Let's look at what the classifier has learned. For this, we evaluate the class probability predictions on a grid over the whole input space.

In [ ]:

```python
grid = pm.floatX(np.mgrid[-3:3:100j, -3:3:100j])
grid_2d = grid.reshape(2, -1).T
dummy_out = np.ones(grid.shape[1], dtype=np.int8)
```

In [ ]:

```python
ppc = sample_proba(grid_2d, 500)
```

The result is a probability surface corresponding to the model predictions.
In [ ]:

```python
cmap = sns.diverging_palette(250, 12, s=85, l=25, as_cmap=True)
fig, ax = plt.subplots(figsize=(16, 9))
contour = ax.contourf(grid[0], grid[1], ppc.mean(axis=0).reshape(100, 100), cmap=cmap)
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X1', ylabel='X2')
cbar.ax.set_ylabel('Posterior predictive mean probability of class label = 0');
```

However, unlike a classical neural network, we can also look at the standard deviation of the posterior predictive to get a sense for the uncertainty in our predictions.

In [ ]:

```python
cmap = sns.cubehelix_palette(light=1, as_cmap=True)
fig, ax = plt.subplots(figsize=(16, 9))
contour = ax.contourf(grid[0], grid[1], ppc.std(axis=0).reshape(100, 100), cmap=cmap)
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y')
cbar.ax.set_ylabel('Uncertainty (posterior predictive standard deviation)');
```

## References

T. Wiecki and M. Kochurov. (2017) Variational Inference: Bayesian Neural Networks

D. Rodriguez. (2013) Basic [1 hidden layer] neural network on Python.

D. Britz. (2015) Implementing a Neural Network from Scratch
https://mathshistory.st-andrews.ac.uk/Biographies/Foulis/
# David James Foulis

### Quick Info

Born: 26 July 1930, Hinsdale, Illinois, USA
Died: 3 April 2018, Amherst, Massachusetts, USA

Summary: David Foulis was an American mathematician who worked on the algebraic foundations of quantum mechanics.

### Biography

David Foulis's father, James R Foulis (1903-1969), was an American professional golfer who won the Illinois PGA championship several times. In fact the family consisted of many professional golfers and golf course architects. His grandfather, David Foulis (1868-1950), was one of five brothers (David, Jim, Robert, John and Simpson) born in Scotland, three of whom were professional golfers, one, Simpson, was an amateur golfer, and one, John, was a bookkeeper and golf ball maker. All five brothers emigrated to the United States. David and Jim took out a patent on the mashie niblick, a wooden shafted club roughly equivalent to a modern seven iron. Jim (1870-1928) won the second US Open Golf Championship in 1896, which was held at Shinnecock Hills Golf Club on Long Island. Since this archive is based in St Andrews, Scotland, we must note that the five Foulis brothers were all born in St Andrews.

David Foulis, the subject of this biography, moved with his family to South Miami, Florida, when he was fifteen years old. There he attended the Ponce de Leon High School and, after graduating, entered the University of Miami in September 1948. At first his interest was in physics and he majored in physics for his Bachelor's degree, which he was awarded on 9 June 1952, Magna Cum Laude. However, he had already come into contact with many members of the Mathematics Department at the university and they had stimulated his interest in mathematics. He said [2]:-

When I was an undergraduate physics major, I first became aware of the profound relationship between mathematics and the scientific enterprise.
A mathematician was a maker of abstract patterns or models, often suggested by problems arising in the experimental or the descriptive sciences. Somehow, the mathematical structures thus created were endowed with an almost magical power to relate, explain, and predict natural phenomena. However, before I finished my undergraduate degree, I began to understand that the patterns studied by mathematicians originated not only from specific scientific problems, but from philosophical or logical questions, or even purely from intellectual curiosity. Georg Cantor, the creator of set theory, once said that the essence of mathematics is its freedom, a notion that I found to be immensely appealing. The job of a physicist was to study the physical world as it is, but a mathematician could study all possible worlds, constrained only by the requirement of logical self-consistency! Realizing this, I determined to pursue my graduate studies in mathematics.

He was particularly influenced by Herman Meyer, who was the Chairman of the Mathematics Department. After the award of his Bachelor's degree, Foulis studied for a Master's degree in Mathematics, beginning his studies in September 1952. He was awarded the degree by the University of Miami in June 1953. He was appointed as a graduate assistant in the Mathematics Department of Tulane University in New Orleans in September 1953, spending one academic year in this position until June 1954. He then went to the University of Chicago as a National Science Foundation fellow, spending the two years September 1954-June 1956 there [1]:-

While at the University of Chicago, he was strongly influenced by Irving Kaplansky and Paul Halmos, both expositors par excellence. That influence persists in his oral presentations as well as his written publications, and manifests itself in a much admired polished clarity in which the presentation of mathematical ideas becomes a kind of poetry.

In 1956 he married the mathematician Linda Falcao.
In September 1956 Foulis returned to Tulane University, where he began undertaking research for his Ph.D. with Fred Boyer Wright Jr as his thesis advisor. There were others on the staff at Tulane University who had a major role in his mathematical development, particularly Al Clifford, who had been appointed to Tulane in 1955. Al Clifford invited Gordon Preston to spend the two years 1956-58 working with him at Tulane University, and Preston was also a major influence on Foulis. He was awarded a Ph.D. for his thesis Involution Semigroups, which he submitted to the Graduate School of Tulane University on 30 July 1958. Foulis gave the following acknowledgement in his thesis:-

The author would like to express his gratitude to Professors A H Clifford, G B Preston and especially F B Wright for their careful reading of the manuscript, their helpful suggestions and criticisms and their time which the author consumed during frequent discussions on the subject matter of this paper.

The thesis was approved by Fred Wright, Al Clifford and Gordon Preston. In the Introduction, Foulis explains how he came to study involution semigroups:-

It is well known that many proofs in ring theory depend more heavily upon the multiplicative structure of the ring than upon the additive structure, and this has been the fountainhead for many of the interesting results in the theory of abstract semigroups. Recently there has been much interest among algebraists in rings with an antiautomorphism of period two, a ↦ a*, and in particular in Banach *-algebras of one description or another. Here, one observes that not only the multiplicative structure, but also the "adjoint" map a ↦ a* play the decisive roles in the proofs, the additive structure and the topological structure (if any) being relegated to a position of secondary importance. These facts suggest to the author that a systematic study of an abstract semigroup equipped with an antiautomorphic involution a ↦ a* might not be amiss.
We should mention that the structure that Foulis studied in his thesis is a generalisation of a group, since in a group the map $a \mapsto a^{-1}$ provides the antiautomorphic involution.

Foulis was appointed as an Assistant Professor of Mathematics at Lehigh University for the year 1958-1959. His next appointment was as an Assistant Professor of Mathematics at Wayne State University, where he worked for the four years 1959-1963. Following this he was appointed as an Associate Professor of Mathematics at the University of Florida for the two years 1963-1965. During these years he published Baer *-semigroups (1960), Conditions for the modularity of an orthomodular lattice (1961), A note on orthomodular lattices (1962), Relative inverses in Baer *-semigroups (1963), and Semigroups co-ordinatizing orthomodular geometries (1965).

Foulis then moved to the University of Massachusetts, Amherst. In [1] Richard Greechie explains how this came about:-

One of his professors at Miami, Wayman Strother, later became the Head of the Department of Mathematics and Statistics at the University of Massachusetts, Amherst. Wayman invited Dave to join the Mathematics Faculty at the University of Massachusetts, Amherst "to revise the undergraduate curriculum". Within 5 years, the undergraduate curriculum was substantially revised and the world's largest group of specialists in orthomodular lattice theory was centred in Amherst. Wayman once confided to me that he had made an observation when Dave was one of his Advanced Calculus students in Miami; he said, "Dave is someone who does not know how to make a bad proof".

Dedication to teaching meant that there was less time for Foulis to undertake research, so his next paper did not appear until 1968, when he published Multiplicative elements in Baer *-semigroups. After this he published a number of papers with Charles Hamilton Randall (1928-1987).
Randall, after working in the nuclear industry, obtained a position at the University of Massachusetts and had a close and fruitful collaboration with Foulis for twenty years until his death in 1987. Their collaboration led to the publication of 22 joint articles (a few with an additional third author). A collaboration between someone with interests in lattices and semigroups and someone who had worked in the nuclear industry sounds unlikely but in fact their interests coincided quite closely. This was because Randall had become interested in foundations and Foulis in quantum theory [1]:- Dave always had a compelling interest in "understanding" quantum physics. These demands of understanding led him from being an undergraduate physics major to focusing on mathematics in graduate school. He developed models (many unpublished) for the foundations of quantum mechanics. When Dave learned of the innovative and independent ideas of Charlie Randall, a close collaboration was born. This led to an important thrust called "The Amherst School" in which he and Charlie, along with other colleagues and students, made profound progress in our understanding of mathematical foundations of empirical studies, in particular of quantum mechanics. Richard Greechie, who was one of Foulis's graduate students gaining a Ph.D. with his thesis Orthomodular Lattices in 1966, recalls how devoted Foulis was to teaching his students. He recounted his favourite story [1]:- ... when I was an aspiring (first year) graduate student who had shown an interest in projections on a Hilbert Space, Dave offered to tell me how (an abstraction of) these structures played a role in the foundations of quantum mechanics. He proceeded to give me a one-on-one lecture that lasted from 7 pm on a Friday evening till 1 am. At 1, Dave said that he couldn't finish that evening and would I like to meet again "tomorrow". The lecture resumed exactly 6 hours later! 
Foulis retired in 1997 and was made professor emeritus at the University of Massachusetts. Rather than marking an end to his research career, this in many ways marked a new beginning and he has published almost 50 papers in the 15 years since then. These papers are on a broad range of topics such as Ordered Structures, Orthostructures, Foundations of Quantum Mechanics, Foundations of Statistics, and Operator Theory. For example in 2010 he published Synaptic algebras which studied a special class of partially ordered algebraic structures defined by a set of axioms which make them into a spectral order unit normed space and a special Jordan algebra. In 2007 he published papers such as Effects, observables, states, and symmetries in physics and (with Richard J Greechie) Quantum logic and partially ordered abelian groups. Here are Foulis's own comments made after he retired [2]:- [Now I am retired] my interest in mathematics and mathematical research has not diminished. My primary research interest centres around mathematical models for nonstandard logics, particularly the logics that arise in connection with quantum mechanical systems and the nonmonotonic logics that pertain to inference in expert systems. The study of measures on these logical models is a burgeoning new field called noncommutative measure theory. In September I will be giving an invited lecture on this topic in Italy. I am active in the International Quantum Structures Association, and continue to participate in the annual IQSA conferences. I have recently been appointed Visiting Professor at Florida Atlantic University, where I am participating in a seminar on quantum logic and where I am a member of the Ph.D. committees of two graduate students. Finally, we mention that Foulis published seven undergraduate textbooks. 
These are Fundamental Concepts of Mathematics (1962), (with Mustafa A Munem) Calculus (1978), (with Mustafa A Munem) Calculus: With Analytic Geometry (1984), (with Mustafa A Munem) After Calculus: Algebra (1988), (with Mustafa A Munem) After Calculus: Analysis (1989), (with Mustafa A Munem) Algebra and Trigonometry with Applications (1991), and (with Mustafa A Munem) College Algebra with Applications (1991).

Foulis has three children: David, Dean and Scott. He is now married to the mathematician Hyla Gold who, Greechie writes [1]:-

... enjoys listening to his lectures and, during an International Quantum Structures Association Meeting, can usually be seen in the back of the lecture hall during Dave's presentation. There is no doubt in my mind that she plays a major role as facilitator, which has been an important factor in Dave's outstanding productivity. Hyla wrote the solutions manual to Dave's first Calculus book, providing solutions to approximately 5000 problems, an impressive feat before the availability of graphing calculators. I suspect that Hyla also plays an editorial role before Dave's papers are submitted, and have often been amused by her comments after a lecture, over dinner, such as, "You changed that part about the connection with operator theory".

### References

1. R Greechie, David James Foulis, Math. Slovaca 62 (6) (2012), 1007-1018.
2. M Janowitz, An Interview of Prof David Foulis, Topological Commentary 3 (2) (15 August 1998).

Written by J J O'Connor and E F Robertson
Last Update October 2013
https://math.stackexchange.com/questions/1306367/what-is-the-smallest-prime-of-the-form-2nn91
# What is the smallest prime of the form $2n^n+91$?

I wondered what the smallest prime of the form $2n^n+k$ is for some odd $k$. For $k<91$, there are small primes, but for $k=91$, the smallest prime (if it exists) must be very large.

What is the smallest prime of the form $2n^n+91$?

It is clear that $\gcd(91,n)=1$ must hold. $2n^n+91$ is composite for every natural number $n$ below $1000$.

$$2\times 15^{15}+91 = 42846499\times 20440124659$$

shows that the smallest prime factor can be large.

• No primes up to $n=1200$. Watch this space... – TonyK May 31 '15 at 11:43
• Did you also check the range $0\le n \le 1000$? – Peter May 31 '15 at 11:46
• Yes. – TonyK May 31 '15 at 11:47
• No primes up to $n=1500$... – TonyK May 31 '15 at 11:53
• I experience a deja-vu... There must be a glitch in the Matrix! – Lucian May 31 '15 at 12:35

$$2 \times 1949^{1949} + 91$$

is probably prime! (Running a rigorous primality test on it would take almost a whole day - see here, for instance - so I'm not going to do that.) $2n^n+91$ is composite for all lower values of $n$.
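The search described in the comments is easy to reproduce. The sketch below is my own code, not the posters': it uses a Miller-Rabin test, where a "composite" verdict is always rigorous while a pass only certifies a probable prime (which is all the accepted answer claims anyway), and it also checks the factorization of $2\cdot15^{15}+91$ quoted in the question.

```python
def is_probable_prime(n):
    """Miller-Rabin with fixed bases; a False result is a proof of compositeness."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

# Check the factorization of 2*15^15 + 91 quoted in the question
assert 42846499 * 20440124659 == 2 * 15**15 + 91

# 2*n^n + 91 is composite for every n in a small initial range
hits = [n for n in range(1, 60) if is_probable_prime(2 * n**n + 91)]
print(hits)   # -> []
```

Extending the range to $n=1949$ reproduces the answer's candidate, though each test on numbers of that size takes noticeably longer.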
2020-04-02 20:30:34
http://mathoverflow.net/questions/30377/image-of-a-fixed-element-under-a-random-endomorphism-in-an-abelian-group/
# Image of a fixed element under a random endomorphism in an Abelian group

Let $G$ be a finite Abelian group with endomorphism ring $End(G)$. I am interested in the probability $P(\phi(g_1) = g_2)$ for fixed $g_1,g_2 \in G$ and a uniformly chosen endomorphism $\phi(\cdot)$ from $End(G)$. Essentially, I want to understand where the set of endomorphisms will take each element $g \in G$.

I ran into this question while considering homomorphic compression schemes that compress an $n$-length sequence $g^n$ into a sequence of length $k$ by applying a homomorphism $\phi \colon G^n \rightarrow G^k$. I describe the question in detail below.

Let $\mathbb{Z}_n$ be the cyclic group of $n$ elements. If $G ={\mathbb{Z}_{p^r}}$, I understand what is going on and can prove, for instance, that $\phi(g)$ is uniformly distributed across the smallest subgroup of $\mathbb{Z}_{p^r}$ containing $g$ as $\phi(\cdot)$ varies over $End(\mathbb{Z}_{p^r})$. But I am having trouble understanding what happens in the case of groups of the form $\mathbb{Z}_{p^r}^k$, such as $\mathbb{Z}_2^2$. In this case, $\phi(g)$ is uniformly distributed over $\mathbb{Z}_2^2$ for all non-identity $g$, regardless of which subgroup $g$ belongs to.

Question: Is there a uniform way to write down the probability $P(\phi(g_1) = g_2)$ for fixed $g_1,g_2 \in G$ and an arbitrary $\phi(\cdot) \in End(G)$ for a finite Abelian group $G$?

I would greatly appreciate any pointers and hope the question isn't too elementary for MO. Please feel free to edit/re-tag the question if needed.

-

For a group $G=\mathbb{Z}_{p^r}^k$ this is quite straightforward. Let $g$ have order $p^r$ in $G$ (if not, then we are effectively working over $\mathbb{Z}_{p^s}$ where $s < r$). Applying an automorphism of $G$ we can assume that $g=(1,0,\ldots,0)$. Endomorphisms of $G$ correspond to matrices over $\mathbb{Z}_{p^r}$.
The image of $g$ is the first row (or column, if you put the map on the other side), and we see that the image of $g$ is uniformly distributed over all of $G$.

Now consider a general finite abelian $p$-group $G$. Let $g\in G$. We can write $G=\langle h\rangle\times H$ where $g=p^s h\in \langle h\rangle$ and $h$ has order $p^m$ for some $m\ge s$. We can specify an endomorphism of $G$ by mapping $h$ to any element $h'$ of order $\le p^m$ and taking any homomorphism from $H$ to $G$. Then the $h'$ are uniformly distributed amongst the elements of order $\le p^m$ in $G$, and $g$ is mapped to $g'=p^{s}h'$. These $g'$ are uniformly distributed over a certain subgroup of $G$.

For general finite abelian $G$, split up $G$ as a product of its Sylow $p$-subgroups. Then $g\in G$ splits up into its primary components, and each of these behaves in the same way, under a random endomorphism, as in the $p$-group case above.

Added: I now see that the argument I gave in the prime power case is valid in the general case too. The key observation is that a maximal cyclic subgroup of a finite abelian group is a direct summand. Let $g\in G$ have order $m$ and let $H$ be a maximal cyclic subgroup of order $mn$ containing $\langle g\rangle$. Then the images of $g$ under random endomorphisms of $G$ are uniformly distributed in the subgroup $n G[mn]$ of $G$, where $G[mn]$ denotes the $mn$-torsion subgroup of $G$.

Added (4/7/2010): Thanks to Tom for pointing out my error above. The argument I had in mind for proving that maximal cyclic groups are summands doesn't actually work. :-( As t3suji points out, the images are uniformly distributed over a subgroup. Identifying this subgroup looks like being a bit more fiddly than I believed and I lack the patience to do it now. It seems that reduction to the prime power case is a good way to proceed.

-

To me, the following comment explains the simplicity of the answer.
• It is a priori clear that (in the case of abelian $G$) the images are distributed uniformly in a subgroup, because the map $ev:End(G)\to G:\rho\mapsto\rho(g)$ is a group homomorphism. Thus the only question is to describe the image of $ev$, that is, the set of all possible images of $g$. – t3suji Jul 3 '10 at 13:01
• Thanks, t3suji. Now that you mention it, this is an obvious point; but it wasn't obvious to me when I was writing the answer. :-) – Robin Chapman Jul 3 '10 at 14:33
• It's not true that a maximal cyclic subgroup of a finite abelian group must be a summand. If $G$ is the product of two cyclic groups of order $p^3$ and $p$ then there is a maximal cyclic subgroup of order $p^2$. – Tom Goodwillie Jul 4 '10 at 3:00
• Tom Goodwillie's comment shows that the answer is not as simple. Maybe the following approach works: assume $G$ is a $p$-group. Its isomorphism type is given by a partition. If $g\in G$ is a non-zero element, the isomorphism type of $(g,G)$ is probably given by a bipartition (this should be similar to the classification of pairs (vector, nilpotent operator); see Achar, Henderson: Orbit closures in an enhanced nilpotent cone). Now define an order on isomorphism classes by saying that $(g,G)<(h,G)$ if $h$ is the image of $g$. I expect this is the same order on bipartitions as in Achar–Henderson's paper. – t3suji Jul 4 '10 at 4:34
• @Robin, t3suji and Tom: Thanks for your comments. I am still trying to grok the arguments. I don't have much exposure to group theory but I hope to get there soon :) I am hoping to specialize Robin's answer and the ensuing comments to the case of $\mathbb{Z}_4$ vs $\mathbb{Z}_2^2$ and go from there. Thanks again. – Dinesh Jul 7 '10 at 1:38
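The contrast described in the question (uniform on a subgroup for $\mathbb{Z}_{p^r}$, uniform on the whole group for $\mathbb{Z}_2^2$) is small enough to verify by brute force. The following sketch is mine, not from the thread:

```python
from itertools import product
from collections import Counter

# Z_4: every endomorphism is multiplication by some a in {0,1,2,3}.
# phi(2) lands uniformly in the subgroup {0, 2} generated by 2.
dist_z4 = Counter((a * 2) % 4 for a in range(4))

# (Z_2)^2: endomorphisms are 2x2 matrices over Z_2; phi(g) = M g (mod 2).
def apply(M, g):
    return tuple(sum(M[i][j] * g[j] for j in range(2)) % 2 for i in range(2))

mats = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]
dist_z2sq = Counter(apply(M, (1, 0)) for M in mats)

print(dist_z4)     # uniform on <2> = {0, 2}: each value hit by 2 of 4 maps
print(dist_z2sq)   # uniform on ALL of (Z_2)^2: each element hit by 4 of 16 maps
```

For $g=(1,0)$ the image is the first column of the matrix, so each of the four possible columns occurs for exactly four of the sixteen matrices, matching t3suji's observation that the images are uniform on the image of the evaluation homomorphism.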
2016-05-24 19:45:28
https://galaxyinferno.com/how-to-solve-advent-of-code-2022-day-6-with-python/?utm_source=rss&utm_medium=rss&utm_campaign=how-to-solve-advent-of-code-2022-day-6-with-python
# How to solve Advent of Code 2022 – Day 6 with Python

If you missed any previous days, click here for all my content about that: Advent of Code. If you want to know why you should participate and try to code the solution for yourself, click here: Advent Of Code 2022 – 7 Reasons why you should participate. If you're here to see the solution of Day 6, continue reading 😉

## GitHub Repository

I will upload all of my solutions there – in the form of Python (or Scala alternatives) notebooks. You have to use your own input for the "input.txt" since everyone gets their own and everyone needs to provide a different solution based on their input.

## Day 6 Puzzle

Okay, for everyone who has participated or seen the solutions to the last few days – or even just the puzzle for yesterday: what the heck was this today? 😀 Prepare for a super short blog post. On day 6 of Advent of Code, we had to fix the communication device the elves gave us. Which sounds complicated until you read the concrete task at the end.

### Part 1

We had to read in a line and find the first 4 distinct characters in a row. We then get the solution from the index of the end of this 4-character section. Yeah, that's it. Here's the code.

```python
with open('example.txt', 'r') as f:
    line = f.readline()

for i in range(len(line)):
    if len(set(line[i:i + 4])) == 4:
        print(i + 4)
        break
```

• iterate over the length of the line
• at each spot check the next 4 characters (line[i:i+4]; if you don't know this syntax, search for "list slicing")
• convert to a set because a set doesn't have duplicates
• if the set is still length 4, there are no duplicates in this part of the string and we found our solution, which is the current index + 4 to return the end of this 4-character sequence

Done 🙂

### Part 2

Okay, here we had to do the same, but for 14 distinct characters. So just replace every 4 with 14 in the above code:

```python
with open('example.txt', 'r') as f:
    line = f.readline()

for i in range(len(line)):
    if len(set(line[i:i + 14])) == 14:
        print(i + 14)
        break
```
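Both parts differ only in the window size, so they collapse into a single helper. This refactor is mine, not from the post; the example string is a worked example from the Day 6 puzzle statement, with its published answers 7 and 19:

```python
def first_marker(line, n):
    """1-based index just past the first window of n distinct characters."""
    for i in range(len(line) - n + 1):
        if len(set(line[i:i + n])) == n:
            return i + n
    return None

example = "mjqjpqmgbljsphdztnvjfqwrcgsmlb"
print(first_marker(example, 4))   # 7  (part 1: packet marker)
print(first_marker(example, 14))  # 19 (part 2: message marker)
```

Returning `None` when no marker exists avoids the silent fall-through of the loop-and-break version.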
2023-02-05 04:10:18
https://wafflescrazypeanut.wordpress.com/2013/10/10/what-about-inductors/
So far, we've talked about electricity and capacitors. This time we'll be addressing inductors. Why am I doing this? …explaining this electric circuitry, current and horror stuff? Well, a friend of mine asked me to write posts on these basic circuit components intuitively. While acquiring intuition is totally up to you (based on your interest, curiosity, how well you grasp things, etc.), I can guarantee you for sure that I'll be sticking to my motto, the Occam's Razor. Mostly, I'll try to explain things simply. But when I fear that treating some complex stuff "simply" may drive people into some misconception, I'll start including the required complexity step-by-step. Anyways, you don't have to worry about that. You have the space for comments (shoot me with your questions wherever you get "stuck"). Just keep up with the hope that it'll be easy…

Our topic requires an introduction to magnetism. I shouldn't be writing historical things, but I can't resist speaking about Faraday, the one who knew how much a current can do, who observed all those mysteries before Maxwell (who did the unification and gave "light"). He didn't need a giant laboratory for his fun. His "CERN" was able to fit on his bench, where he played with his little coil of wires and magnets with the newly discovered electricity. Then, the shocking moment arrived. He observed that when a magnet is moved towards a coil of wire, the galvanometer showed deflection, indicating the flow of current. The thing is, the current lasted only as long as he moved the magnet. It stopped when he stopped his motion. He also discovered that when an electric pulse is sent through a wire, a magnetic needle (compass) showed deflection (deviation from its pointing to "North", to some direction around the wire), indicating that there's a magnet in the wire when current flows through it. He was curious about this discovery that forms a new bridge between electricity and magnetism.
He just couldn't understand what exactly the underlying clockwork was. Anyone may wonder when they come to know that the stuff that lights the streets has something to do with the things that stick to metals here and there. But he tried to understand the magnetic field by visualizing it in terms of some lines, what he called the "flux lines". The current-carrying wires (the magnets too) emit some kind of field lines in which a compass needle or any other magnet feels a force. Shown below is the analogy for field lines (the experiment was actually devised by Oersted, in 1820)…

## The "laws" of induction…

Using these lines, he devised two laws (which we now call the laws of electromagnetic induction). Whenever the number of magnetic field lines crossing the coil's area (called the flux $\phi$) changes, a voltage is induced which lasts as long as the "number of lines crossing the coil's area" changes. He added that this voltage also depends on the rate of change of the flux (i.e.) more voltage is induced when there's a miraculous change in the number of lines passing through the coil within a specific time interval. Mathematically, it's represented as…

$V \propto \frac{d\phi}{dt}$

A few years later, Lenz refined this law: the voltage produced in the coil always acts in a way that opposes the cause that produced it (i.e.) the current flow (generated by induction) creates its own magnetic field that opposes the moving magnet's field. What does it mean? When you move the magnet towards the coil, the coil resists the motion of the magnet by repelling it, and when you move the magnet away from the coil, it resists the motion again by attracting the magnet. It was also found that the voltage induced depends on the number of turns of the coil (say $N$). The more you wind the coil, the larger the induced voltage.
Now, the corrected form would be (having found the proportionality constant)…

$V = -N\frac{d\phi}{dt}$

While these laws don't tell us anything about how exactly this kind of horror works, they do say "what would happen" when you do such an experiment. This is more than enough for us today.

## Now, to our inductor…

An inductor is just a coil with some turns, the same as what was used by Faraday. What happens when current "starts" to flow through such a coil? Well, it becomes an electromagnet, which we know. But that's not what we want here. Lenz has done the job for us. He inferred that the current through the coil opposes the change or cause that produces it. Say you're providing a power supply to the inductor. Instead of increasing straightaway, the current (or voltage) increases linearly with time (i.e.) there's a time-varying magnetic field in the inductor. As the current reaches a certain value, say 2 amperes, a magnetic field corresponding to that current is produced which opposes the further growth of current. It isn't hard to see that the now-produced magnetic field generates its own emf (a fancy name would be the "back emf") that opposes the applied voltage. The more the current grows, the more the corresponding magnetic field intensity grows, which further delays the growth of current. Thus, the maximum value of current is attained through this kind of linear variation (much like a step-by-step manner), and once it's established, the current simply flows through the inductor without any resistance due to the magnetic field. The same happens when the current decreases: the magnetic field opposes its decay, and again a linear time-dependent variation is seen. In much simpler words, the inductor resists current flow as long as there's a change in the applied voltage.

## Demonstration with a light bulb…

The last few "sayings" might have confused you. Let's go with an experiment.
What do you see in the following circuit? It's a parallel circuit of an inductor and a light bulb, connected to a battery (DC). This circuit is just a rough sketch. You can't see what's really going on, because you'd need a moving picture for that (creating such things is something I'm not too inclined to). Okay… The moment you close the circuit, current starts to flow. Common sense might suggest that the larger current follows the path of least resistance, and the inductor seems to support that statement, so the bulb should appear dim. That doesn't happen. So, what exactly does happen? When you close the circuit, the bulb glows brightly and dims out soon. Always keep in mind that the inductor has a self-inductance (just a fancy way to name its property) that makes it oppose the applied voltage with its own magnetic field (which has also just been generated by the applied voltage). So, the instant you close the circuit, there's a large potential difference, and thus the inductor acts as if it were showing a great resistance to the current flow. Now, bring back your suggestion: most of the current flows through the bulb, which makes it glow brighter. Once the current reaches its maximum value, there's no change in the voltage, and so the inductor stops opposing via its magnetic field. This results in current flow through the inductor, and as a result, the bulb becomes dim. One more thing should be noted here. When you break the circuit open, the current starts to decay, and now the inductor's magnetic field opposes the decay of current. This current passes through the bulb, making it glow brighter and die out eventually.

## In AC & DC circuits…

The inductor opposes only when there's a change in the voltage (or current). Hence, the inductor offers high resistance to AC (oscillating around like mad), depending on its frequency, whereas DC doesn't experience any issues with the inductor. It's just another piece of wound wire.
That the resistance (formally called the impedance) is zero doesn't mean the inductor is superconducting. It's just that, because there's no change happening in DC (except at the start), the magnetic field doesn't oppose the current during the whole business. But still, there's the ohmic resistance that depends on the wire, which leads to dissipation in the form of heat. We've finally concluded that the inductor stores electromagnetic energy, unlike the capacitor (where the thing is electrostatic). Now, to our phase diagram. The amplitude is less, since there's always an ohmic resistance offered by the inductor. Then, there's the phase shift: the voltage leads the current by a phase angle of $\pi/2$. Keep in mind that the voltage dropped across the inductor is the effect against the change in the current that passed through it (i.e.) it takes some time for the voltage to build the current up to its maximum value, due to the presence of this Lenz law horror, which opposes the growth of current. So, the instantaneous voltage is zero whenever the instantaneous current reaches its peak value (where the slope, the change, is zero), which implies the phase shift… (Again) In order to get a good understanding of these sloppy things, you should try reading about the hydraulic analogy. You can get through it. I'm not going into it, as I don't think I'd be a good competitor to Wikipedia.
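To put numbers on the "growth is opposed until the current settles" story in this post, here is a small numerical sketch (mine, with made-up component values): it integrates the series-RL equation $L\,di/dt = V - Ri$ with a crude Euler step and compares the result with the textbook exponential.

```python
import math

V, R, L = 10.0, 5.0, 2.0   # volts, ohms, henries (illustrative values)
tau = L / R                # time constant of the circuit, here 0.4 s

# Forward-Euler integration of L di/dt = V - R i
dt, i, t = 1e-5, 0.0, 0.0
while t < 5 * tau:         # run for five time constants
    i += dt * (V - R * i) / L
    t += dt

# Analytic solution: i(t) = (V/R) * (1 - e^(-Rt/L))
analytic = (V / R) * (1 - math.exp(-R * t / L))
print(round(i, 4), round(analytic, 4))  # both settle near V/R = 2 A
```

After a few time constants the back emf has nothing left to oppose, and the current sits at the value set by the ohmic resistance alone, exactly as in the light-bulb demonstration.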
2018-01-21 04:15:17
https://dimag.ibs.re.kr/category/seminar/
## Ben Lund gave a talk on radial projections in a vector space over a finite field at the Discrete Math Seminar

On June 27, 2022, Ben Lund from the IBS Discrete Mathematics Group gave a talk at the Discrete Math Seminar on a large set of points with small radial projections in a vector space over a finite field. The title of his talk was “Radial projections in finite space“.

## Amadeus Reinald gave a talk on the twin-width of graphs of girth at least 5 without an induced subdivision of $K_{2,3}$ at the Discrete Math Seminar

On June 13, 2022, Amadeus Reinald from the ENS de Lyon and the IBS Discrete Mathematics Group gave a talk at the Discrete Math Seminar, proving that graphs of girth at least 5 without an induced subdivision of $K_{2,3}$ have bounded twin-width. The title of his talk was “Twin-width and forbidden subdivisions“.

## Hongseok Yang (양홍석) gave a talk on how to use symmetries to improve learning with SATNet at the Discrete Math Seminar

On May 30, 2022, Hongseok Yang (양홍석) from KAIST and the IBS Discrete Mathematics Group gave a talk at the Discrete Math Seminar on how to use symmetries to improve learning with SATNet. The title of his talk was “Learning Symmetric Rules with SATNet“.

## Sebastian Siebertz gave an online talk on producing a long path by a first-order transduction from a class of graphs of unbounded shrubdepth at the Virtual Discrete Math Colloquium

On May 25, 2022, Sebastian Siebertz from the University of Bremen gave an online talk at the Virtual Discrete Math Colloquium on producing a long path by a first-order transduction from a class of graphs of unbounded shrubdepth. The title of his talk was “Transducing paths in graph classes with unbounded shrubdepth“.
## Stijn Cambie gave a talk on the diameter of the reconfiguration graphs arising from the list coloring and the DP-coloring of graphs at the Discrete Math Seminar

On May 23, 2022, Stijn Cambie from the IBS Extremal Combinatorics and Probability Group gave a talk at the Discrete Math Seminar on the problem of determining the diameter of the reconfiguration graphs arising from the list coloring and the DP-coloring of graphs. The title of his talk was “The precise diameter of reconfiguration graphs“.

## Jan Kurkofka gave an online talk on the canonical decomposition of finite graphs into highly connected local parts at the Virtual Discrete Math Colloquium

On May 18, 2022, Jan Kurkofka from the University of Birmingham gave an online talk at the Virtual Discrete Math Colloquium on the canonical decomposition of finite graphs into highly connected local parts. The title of his talk was “Canonical Graph Decompositions via Coverings“.

## Kyeongsik Nam (남경식) gave a talk on the number of subgraphs isomorphic to a fixed graph in a random graph and the exponential random graph model at the Discrete Math Seminar

On May 9, 2022, Kyeongsik Nam (남경식) from KAIST gave a talk at the Discrete Math Seminar on the number of subgraphs isomorphic to a fixed graph in a random graph and the exponential random graph model. The title of his talk was “Large deviations for subgraph counts in random graphs“.

## Cheolwon Heo (허철원) gave a talk on the dichotomy of the problem of deciding the existence of a binary matroid homomorphism to a fixed binary matroid at the Discrete Math Seminar

On May 2, 2022, Cheolwon Heo (허철원) from Sungkyunkwan University gave a talk at the Discrete Math Seminar on the dichotomy of the problem of deciding the existence of a binary matroid homomorphism to a fixed binary matroid. The title of his talk was “The complexity of the matroid-homomorphism problems“.
## Michael Savery gave an online talk on finding a graph of huge chromatic number such that every induced subgraph of large chromatic number has an induced copy of a fixed subgraph at the Virtual Discrete Math Colloquium

On April 27, 2022, Michael Savery from the University of Oxford gave an online talk at the Virtual Discrete Math Colloquium on finding a graph of huge chromatic number such that every induced subgraph of large chromatic number has an induced copy of a fixed subgraph. The title of his talk was “Induced subgraphs of induced subgraphs of large chromatic number“.

기초과학연구원 수리및계산과학연구단 이산수학그룹 대전 유성구 엑스포로 55 (우) 34126

IBS Discrete Mathematics Group (DIMAG), Institute for Basic Science (IBS), 55 Expo-ro Yuseong-gu Daejeon 34126 South Korea. E-mail: dimag@ibs.re.kr, Fax: +82-42-878-9209
2022-06-28 07:02:50
https://www.physicsforums.com/threads/electric-field-on-a-charge-some-height-above-the-center-of-a-disc.710657/
# Electric field on a charge some height above the center of a disc

1. Sep 15, 2013

### PhizKid

1. The problem statement, all variables and given/known data

Given a disc of radius 'R' with a charge at a height 'h' above the center of the disc, find the electric field on the charge.

2. Relevant equations

$E = \frac{kQ}{d^2}$

3. The attempt at a solution

I can find the charge density from a ring then integrate over R:

$\sigma = \frac{Q}{A}$

$Area_{ring} = 2\pi r*dr$

$Q = \sigma * 2\pi r*dr$

Since the horizontal components cancel out, I'll take the cosine angle:

$E_y = \frac{kQ}{(h^2 + r^2)} * \frac{h}{\sqrt{h^2 + r^2}}$

$E_y = \frac{kh2\pi \sigma dr}{(h^2 + r^2)^{\frac{3}{2}}}$

$E_y = kh\pi \sigma \int_{0}^{R} \frac{2r dr}{(h^2 + r^2)^{\frac{3}{2}}}$

$E_y = kh\pi \sigma [\frac{-2}{\sqrt{h^2 + r^2}}]$

Integrate from 0 to R:

$E_y = kh\pi \sigma [\frac{-2}{\sqrt{h^2 + R^2}} + \frac{2}{h}]$

Re-substitute $\sigma = \frac{Q}{A}$:

$E_y = kh\pi \frac{Q}{A} [\frac{-2}{\sqrt{h^2 + R^2}} + \frac{2}{h}]$

$E_y = kh\pi * \frac{Q}{2\pi r*dr} * [\frac{-2}{\sqrt{h^2 + R^2}} + \frac{2}{h}]$

I'm now stuck with this 'dr' in the denominator. Not sure what to do from here.

2. Sep 16, 2013

### rude man

Is the disc charged? Is the charge above the disc very small?

3. Sep 16, 2013

### vela

Staff Emeritus

Think about what Q and A represent here. You can't have lone differentials sitting around. You should have written \begin{align*} dA &= 2\pi r\,dr \\ dQ &= \sigma\,dA = 2\pi \sigma r \, dr \\ dE_y &= \frac{k\,dQ}{h^2+r^2} \frac{h}{\sqrt{h^2+r^2}} = \frac{2\pi hk \sigma r \, dr}{(h^2+r^2)^{3/2}} \end{align*}

If you have an infinitesimal quantity on one side of the equation, you have to have one on the other side as well. Look at those two lines. According to what you wrote, $$\frac{kh2\pi \sigma dr}{(h^2 + r^2)^{\frac{3}{2}}} = kh\pi \sigma \int_{0}^{R} \frac{2r dr}{(h^2 + r^2)^{\frac{3}{2}}}$$ because both sides of the equation equal $E_y$. The integration apparently does absolutely nothing.
You need to think about what Q and A represent. Q is the charge of what? A is the area of what?
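Carrying vela's corrected differentials through the integral gives the closed form $E_y = 2\pi k\sigma\left(1 - \frac{h}{\sqrt{h^2+R^2}}\right)$. Here is a quick numerical cross-check (my own sketch; the values of $\sigma$, $R$, and $h$ are made up) comparing that closed form with a midpoint-rule sum over thin rings:

```python
import math

k = 8.99e9            # Coulomb constant, N*m^2/C^2
sigma = 1e-6          # surface charge density, C/m^2 (illustrative)
R, h = 0.05, 0.02     # disc radius and field-point height, m (illustrative)

# Closed form: E_y = 2*pi*k*sigma*(1 - h / sqrt(h^2 + R^2))
E_closed = 2 * math.pi * k * sigma * (1 - h / math.sqrt(h**2 + R**2))

# Midpoint-rule sum of dE_y = 2*pi*k*sigma*h*r*dr / (h^2 + r^2)^(3/2)
N = 100_000
dr = R / N
E_num = 0.0
for j in range(N):
    r = (j + 0.5) * dr
    E_num += 2 * math.pi * k * sigma * h * r * dr / (h**2 + r**2) ** 1.5

print(E_closed, E_num)  # the two values agree to many significant figures
```

The agreement confirms that once $dA$, $dQ$, and $dE_y$ are written as proper differentials, the integration does exactly what it should.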
2018-02-25 14:47:46
https://www.physicsforums.com/threads/convolution-in-matlab.835002/
# Convolution in MATLAB

1. Sep 28, 2015

### Les talons

1. The problem statement, all variables and given/known data

Use MATLAB to find the convolution between a) $f(t) = u(t) -u(t -3)$ and $g(t) = u(t) -u(t -1)$

2. Relevant equations

3. The attempt at a solution

```matlab
t = -10: 0.1: 10;
f = heaviside(t) - heaviside(t - 3);
g = heaviside(t) - heaviside(t - 1);
t = -20: 0.1: 20;
c = conv(f, g);
plot(t, c)
```

The graph of the convolution has values from 0 to 10. I don't get how the convolution can get to 10 if the functions being convolved only have maximum values of 1. I changed the line to c = 0.1*conv(f, g); and this produced the right output. Why do I need to multiply by the step size?

2. Sep 28, 2015

### RUber

Convolution is a sum over element-wise products. To turn this into the equivalent approximation for the integral, you have to multiply by the step size. Think about the rectangular area. MATLAB applies the linear algebra definition of convolution.
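RUber's point is easy to verify without MATLAB. Below is a pure-Python re-creation of the same experiment (mine, not from the thread), using the same 0.1 step: the raw discrete convolution of the two rectangular pulses peaks at 10, and multiplying by the step size recovers the continuous-time peak of 1.

```python
dt = 0.1
t = [i * dt for i in range(-100, 101)]               # -10 ... 10
f = [1.0 if 0 <= x < 3 else 0.0 for x in t]          # u(t) - u(t-3)
g = [1.0 if 0 <= x < 1 else 0.0 for x in t]          # u(t) - u(t-1)

# Discrete (linear-algebra) convolution: c[m] = sum_k f[k] * g[m-k]
n = len(f)
c = []
for m in range(2 * n - 1):
    s = 0.0
    for k in range(n):
        if 0 <= m - k < n:
            s += f[k] * g[m - k]
    c.append(s)

print(max(c))        # 10.0 -- the raw sum, which is what conv returns
print(max(c) * dt)   # 1.0  -- the Riemann-sum approximation of the integral
```

The pulse g covers 10 samples, so at maximum overlap the sum contains ten 1·1 products; scaling each term by dt turns the sum into the Riemann approximation of $\int f(\tau)g(t-\tau)\,d\tau$, whose true peak is 1.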
2017-08-20 09:21:49
https://jacquelinewang.github.io/2018/10/11/Numpy-Broadcasting/
Broadcasting describes how NumPy treats arrays with different shapes during arithmetic operations.

# Overview

Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. Element-wise operations normally require arrays of exactly the same shape; NumPy's broadcasting rule relaxes this constraint when the arrays' shapes meet certain conditions.

# Simple Broadcasting: Array * Scalar

In a product such as `a * b`, where `a` is an array and `b` a scalar, we can think of the scalar `b` being stretched during the arithmetic operation into an array with the same shape as `a`; element-wise multiplication is then performed. The stretching analogy is only conceptual: NumPy is smart enough to use the original scalar value without actually making copies, so that broadcasting operations are as memory- and computationally efficient as possible.

# General Broadcasting Rules: Array * Array

## Comparing shape

When operating on two arrays, NumPy compares their shapes element-wise. It starts with the last dimensions and works its way forward. Two dimensions are compatible when

• they are equal, or
• one of them is 1 (or None)

If these conditions are not met, a `ValueError: frames are not aligned` exception is thrown, indicating that the arrays have incompatible shapes. The size of the resulting array is the maximum size along each dimension of the input arrays.

## Multiplication

Example 1. Arrays do not need to have the same number of dimensions. For example, if you have a 256x256x3 array of RGB values and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules shows that they are compatible. When either of the dimensions compared is one, the other is used; in other words, dimensions with size 1 are stretched or "copied" to match the other.
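A minimal sketch of these two cases — scalar stretching, and per-channel scaling of a 256x256x3 RGB array — with illustrative array names of my own:

```python
import numpy as np

# Scalar broadcasting: b is conceptually "stretched" to a's shape.
a = np.array([1.0, 2.0, 3.0])
b = 2.0
print(a * b)          # [2. 4. 6.]

# Per-channel scaling of an RGB image: (256, 256, 3) * (3,).
# The trailing dimensions line up (3 == 3), and the missing leading
# dimensions of `scale` are treated as size 1 and stretched.
image = np.ones((256, 256, 3))
scale = np.array([0.5, 1.0, 2.0])
scaled = image * scale
print(scaled.shape)   # (256, 256, 3)
print(scaled[0, 0])   # [0.5 1.  2. ]
```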
Example 2. In the following example, both the A and B arrays have axes of length one that are expanded to a larger size during the broadcast operation.

## Application: outer operation

Broadcasting provides a convenient way of taking the outer product (or any other outer operation) of two arrays. Consider an outer addition of two 1-d arrays `a` (shape (4,)) and `b` (shape (3,)): the `newaxis` index operator inserts a new axis into `a`, making it a two-dimensional 4x1 array. Combining the 4x1 array with `b`, which has shape (3,), yields a 4x3 array.
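The outer-addition construction described above can be sketched as follows (the concrete values are illustrative):

```python
import numpy as np

a = np.array([0.0, 10.0, 20.0, 30.0])   # shape (4,)
b = np.array([1.0, 2.0, 3.0])           # shape (3,)

# a[:, np.newaxis] has shape (4, 1); broadcasting against b's (3,)
# stretches both operands to (4, 3).
outer_sum = a[:, np.newaxis] + b
print(outer_sum.shape)   # (4, 3)
print(outer_sum)
# [[ 1.  2.  3.]
#  [11. 12. 13.]
#  [21. 22. 23.]
#  [31. 32. 33.]]
```

Replacing `+` with `*` in the same pattern gives the ordinary outer product.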
http://yldg.checkpoint-frankfurt.de/7-mod-2.html
# 7 Mod 2

Given two numbers, a (the dividend) and n (the divisor), a modulo n (abbreviated as a mod n) is the remainder from the division of a by n. For example, "5 mod 3 = 2" means that 2 is the remainder when you divide 5 by 3. The expression "5 mod 2" would evaluate to 1 because 5 divided by 2 has a quotient of 2 and a remainder of 1, while "9 mod 3" would evaluate to 0 because the division of 9 by 3 has a quotient of 3 and leaves a remainder of 0; there is nothing to subtract from 9 after multiplying 3 times 3. Likewise, the result of 7 modulo 5 is 2 because the remainder of 7 / 5 is 2, and 8 mod 2 = 0 because 8 is divided by 2 (exactly 4 times) to give a remainder of 0. 1/2 mod 7 stands for the remainder of 1/2 divided by 7. For a practical example of the mod operator, see our example program Prime Number Checker.

More generally, the idea is that two numbers are congruent if they are the same modulo a given number (or modulus); for example, 7 ≡ 2 mod 5. When writing "mod p", people invariably mean that p is prime.

A common pitfall in the mod operation is that when the result has the sign of the dividend, it can lead to surprising mistakes. In maths, −5 ≡ 2 (mod 7), so −5 mod 7 should give 2 in modulo 7; in programming, the modulo operator often calculates the remainder instead and doesn't give the congruent value. The .NET Framework op_Modulus operator, and the underlying rem IL instruction, perform a remainder operation. More specifically, given two integers X and Y, the mathematical operation (X mod Y) tends to return a value in the range [0, Y).

Remainders of powers repeat in cycles. For 6^1001 mod 7, dividing the early powers of 6 by 7 shows the pattern in the remainders: 6/7 has remainder 6, 36/7 has remainder 1, 216/7 has remainder 6, 1296/7 has remainder 1. Related questions: Is there a shortcut for computing a^((N−1)/2) mod N? What is the remainder when 7^100 is divided by 13? Give a general strategy and an explanation. A textbook exercise in the same vein: prove that for all integers a and b, if a mod 7 = 5 and b mod 7 = 6, then ab mod 7 = 2.

Casting out nines is based on modular arithmetic modulo 9, and specifically on the crucial property that 10 ≡ 1 (mod 9). The Luhn Mod-10 Method is an international standard for validating card account numbers; previously this had been documented by ISO 2894/ANSI 4.13, which was created in 1980 and since retired. In digital design, a counter that cycles through the values 0 to 7 needs eight different states, one for each value, and we can describe the operation by drawing a state machine. A set of dials recording values in mixed units to different bases can be represented by counting in different mod bases; in a classroom model, the graph repeats because it is in mod — it can't go above a certain number — and students' plots should match the plot that the model makes. Course notes on the subject include "'Mod p' Arithmetic and Algebra" (Topics in Algebra 5900, Spring 2011, Aaron Bertram), the Euclidean Algorithm, and primality tests.

The word "mod" also appears far from arithmetic. Mod_python is an Apache module that embeds the Python interpreter within the server. In Apache configuration, to allow most users to have UserDir directories but deny this to a few, use: UserDir disabled user4 user5 user6. To allow a few users to have UserDir directories, but not anyone else, use: UserDir disabled, followed by UserDir enabled user1 user2 user3.

In gaming, a mod is a player-made modification. These are some of the best mods for Minecraft. One mod adds in what Minecraft has been missing for years: furniture! It includes over 40 unique pieces of furniture to decorate your bedroom, kitchen, living room and even your garden — turn your house into the dream house you have never been able to have until now. If you want to support the developer of WebDisplays (montoyo), you may donate by clicking the button (Thanks!!). One reader comment, translated from French: "Nice mod and great article, keep it up ;) By the way, I noticed that in the last screenshot (the options one) you are on version 1.10 — is that normal? Anyway, once again, good luck ;)"

The Morph mod allows the player to morph into any mob after killing it — you can be a chicken, an ender dragon, a skeleton or a pig; the control is up to you. Pixelmon turns your Minecraft world into the creature-filled world of Pokemon, complete with 340 different species. Galacticraft lets you move your exploration efforts from the earth to space — if there is one thing that all Minecraft players want, this is it. The Tinkers Construct Mod adds new ores to your Minecraft world and allows for the creation of totally customizable tools and weapons, each part giving its own special attribute and color to the whole piece. Minecolonies is a town-building mod that allows you to create your own thriving Colony within Minecraft, with features including many NPC workers, buildings, a fantastic building tool and a robust permissions system in multiplayer. JurassiCraft is a safari amusement dinosaur Minecraft mod — roam the lands and challenge terrifying dinosaurs. The Five Nights at Freddy's mod currently adds 9 characters from the game into Minecraft. Mo' Creatures is a mod created by DrZhark. The Gravity Gun Mod adds the Gravity Gun from Half-Life: it allows you to push mobs back and also pick up mobs and blocks, and once you have picked them up they can be launched great distances. A lucky-block mod adds just one block with which you can test your luck — break it, and blocks, mobs or traps may appear in its place. The LearnToMod software empowers Minecraft players (whether or not you know how to code) to imagine, create, and share amazing mods, texture packs, and schematics — our dream is to make modding Minecraft as easy as apple pie!

Beyond Minecraft: select one of the categories to start browsing the latest GTA 5 PC mods. The 7D2D Mod Launcher is a mod launcher for 7 Days to Die. DOTA 2 mod installers (Mod Skin DOTA, with 112 heroes plus Arcana and Immortal items) change the game's default hero sets and items. Case Clicker 2 Mod Apk simulates opening cases from a popular shooter, now a full application with varied content and regular updates. Plants vs Zombies 2 Mod Apk brings the most enjoyable zombies back to Android. For Euro Truck Simulator 2, downloaded mod files are saved to the PC in a compressed file which needs to be extracted, then transferred to Documents\Euro Truck Simulator 2\mod.

Elsewhere, "mod" can mean a mechanical vaping device — if you're an enthusiast vaper who loves using mechs, you know this to be true: a hybrid mech runs on 18650s, 20700s or 21700s, and its multipoint contact pin provides maximum conductivity. It can mean a guitar-pedal modification — translated from Japanese: "With the Meat & 3 Mod you get the original Klon tone and many more sound variations; the JHS Pedals Soul Food Meat & 3 Mod ships with a One Control EPA-2000 9V adapter." It can be a firearm designation — since the book and now the movie American Sniper came out, there has been a lot of interest in the rifles used by Chris Kyle, in particular the Mk 13 Mod 7, and the XD® Mod.2 Series featuring the GripZone® is sold at Sportsman's Outdoor Superstore. By dictionary definition, a mod is one who wears mod clothes. It can even be a brand: MOD Pizza is a business, but our real purpose is creating positive social impact in the lives of our employees and their communities. Finally, Windows 7 Ultimate Product Key is popular software created by Microsoft; Windows 7 has different editions, of which Windows 7 Ultimate is one.
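The arithmetic claims scattered through this page can be checked directly. A small sketch in Python, whose `%` operator takes the sign of the divisor and so matches the mathematical convention (−5 mod 7 = 2), unlike C-family remainder operators:

```python
# Basic remainders discussed above
print(7 % 2)    # 1
print(5 % 3)    # 2
print(9 % 3)    # 0
print(7 % 5)    # 2

# Sign pitfall: Python's % follows the divisor's sign, so -5 % 7 is 2,
# agreeing with -5 ≡ 2 (mod 7); a C-style remainder would give -5.
print(-5 % 7)   # 2

# Powers of 6 mod 7 alternate 6, 1, 6, 1, ... so 6^1001 mod 7 is 6.
print([6**k % 7 for k in range(1, 5)])   # [6, 1, 6, 1]
print(pow(6, 1001, 7))                   # 6 (fast modular exponentiation)

# Textbook exercise: if a mod 7 == 5 and b mod 7 == 6 then ab mod 7 == 2.
a, b = 12, 13          # 12 % 7 == 5, 13 % 7 == 6
print((a * b) % 7)     # 2
```

The three-argument `pow(base, exp, mod)` is the shortcut asked about for large powers: it reduces modulo 7 at every step instead of first forming the huge number 6^1001.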
https://de.maplesoft.com/support/help/Maple/view.aspx?path=geometry%2FGlideReflection
GlideReflection - Maple Help

geometry
GlideReflection - find the glide-reflection of a geometric object

Calling Sequence
GlideReflection(Q, P, l, AB)

Parameters
Q  - the name of the object to be created
P  - geometric object
l  - line
AB - directed segment on l

Description
• Let l be a fixed line of the plane and AB a given directed segment on l. By the glide-reflection G(l, AB) we mean the product R(l) T(AB), where R(l) is the reflection with respect to the line l, and T(AB) is the translation with respect to the directed segment AB. The line l is called the axis of the glide-reflection, and the directed segment AB on l is called the vector of the glide-reflection.
• For a detailed description of Q (the object created), use the routine detail (i.e., detail(Q)).
• The command with(geometry, GlideReflection) allows the use of the abbreviated form of this command.

Examples
> with(geometry):
> dsegment(dsg, point(M, 0, 0), point(N, 1, 1)): line(l1, [M, N]):
> GlideReflection(Agli, point(AA, 1, 0), l1, dsg):
> coordinates(Agli)

    [1, 2]          (1)

> dsegment(dsg, point(M, 0, 0), point(N, 1, 0)): line(l, [M, N]):
> circle(c1, [point(OO, 0, -1), 1]):

Translate c1 with respect to the directed segment MN, then reflect this object with respect to the line l:

> GlideReflection(cgli, c1, l, dsg):
> detail(cgli)

assume that the names of the horizontal and vertical axes are _x and _y, respectively
name of the object          cgli
form of the object          circle2d
name of the center          center_cgli
coordinates of the center   [1, 1]
radius of the circle        1
equation of the circle      _x^2 + _y^2 - 2*_x - 2*_y + 1 = 0          (2)

> draw([c1(color = blue), cgli(color = green)], printtext = true)
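The definition above (translate by the glide vector, then reflect across the axis) is easy to check outside Maple. Below is a minimal NumPy sketch, with a function name of my own choosing, that reproduces the first example, where the point (1, 0) maps to [1, 2]:

```python
import numpy as np

def glide_reflection(p, axis_pt, axis_dir, glide_vec):
    """Glide-reflection G(l, AB): translate p by glide_vec, then
    reflect it across the line through axis_pt with direction axis_dir."""
    p = np.asarray(p, float) + np.asarray(glide_vec, float)   # T(AB)
    d = np.asarray(axis_dir, float)
    d = d / np.linalg.norm(d)
    q = p - np.asarray(axis_pt, float)   # shift so the axis passes through the origin
    q = 2.0 * (q @ d) * d - q            # R(l): reflect across the axis
    return q + np.asarray(axis_pt, float)

# Maple's first example: axis y = x through (0,0)-(1,1), glide vector (1, 1)
print(glide_reflection([1, 0], [0, 0], [1, 1], [1, 1]))   # [1. 2.]
```

Because the glide vector lies on the axis, translation and reflection commute, so applying them in either order gives the same result.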
https://tt.gsusigmanu.org/11076-circular-orbits.html
# Circular orbits

First of all, I'm studying orbits for a hobby: world building. Unfortunately, my mathematical abilities approach a ridiculously low threshold, which means I am stuck with reading the simplest explanations, which in turn leave me asking tons of fairly basic questions. Allow me to start with a simple point. I know that Kepler's Laws state that planetary orbits must always be elliptical. I also know that Earth's orbit varies from more elliptical to less elliptical, and that its less elliptical stage is nearly circular. So… what would happen if Earth did have a circular orbit? Why is it impossible for any planet (or moon, by the way) to orbit another body in a perfectly circular path?

You've been given an answer, and it's perfectly valid, but here's something from a different perspective (less strict). A circle is really just a particular case of an ellipse. Take an ellipse, and change it by moving its focal points closer together. When those two points coincide, what you get is a circle. It's still an ellipse, technically - one that happens to have both focal points in the same place, is all.

So yes, you can actually have planetary orbits, or any orbits, circular. There's nothing forbidding that. It's just pretty unlikely that this will occur via a natural process. As indicated elsewhere, in the real world, all orbits and trajectories are a bit imperfect due to perturbations - whether they be elliptical, circular, parabolic or hyperbolic, they are always a bit perturbed by external factors. In many cases, perturbations are so tiny that you can ignore them.

When a planet is orbiting the Sun, and the orbit is elliptical, the Sun will be in one of those two focal points; the other point has no particular significance.
If you could circularize that orbit, then the Sun would be in the center of the circle, of course. Kepler's laws remain valid for a circular orbit:

1. The orbit of every planet is an ellipse with the Sun at one of the two foci. Still true. A circle is an ellipse where the foci coincide.

2. A line joining a planet and the Sun sweeps out equal areas during equal intervals of time. Still true. On a circular orbit, the planet moves at constant speed, so the swept area remains constant per unit time.

3. The square of the orbital period of a planet is directly proportional to the cube of the semi-major axis of its orbit. Still true. The semi-major axis becomes the radius of the circle.

You must understand that Kepler's laws now have more of a historic interest. They are not exactly at the bleeding edge of science anymore. During Kepler's time, it seemed reasonable to state that all orbits must be elliptical (in the strict sense of the term), but now we know that trajectories (including orbits, or closed trajectories) can be circular, elliptical, parabolic or hyperbolic, depending on a few factors. We also know that perturbations actually deflect all these trajectories a little bit from ideal shapes (but it's usually a very tiny effect). We also know that relativity makes all "elliptical" orbits more complex - they remain close to elliptic, but the whole ellipse keeps turning around the central star very slowly. All this stuff was not known during Kepler's time, so take his laws for what they are - a snapshot of the development of our understanding in time.

Why is it impossible for any planet (or moon, by the way) to orbit another body in a perfectly circular path? One way to look at it is from the perspective of probability and statistics. Think of position and velocity as random variables drawn from some continuous probability distributions. Given some position vector, the velocity vector has to have a very specific value to yield a circular orbit.
The probability of drawing a specific value from a well-behaved continuous probability distribution is identically zero.

An even better way to look at it: even perfectly elliptical orbits aren't possible. Kepler's laws are an approximation that results from assuming a universe that obeys Newtonian mechanics and comprises but two point masses. Newtonian mechanics is only approximately valid in the real universe, objects are lumpy and can only approximately be treated as point masses, and there are a lot more than two objects in the universe. Suppose that by some fluke chance, an object appears to have a perfectly circular orbit at some point in time (to within measurement error). The non-Newtonian nature of the universe, the non-spherical mass distributions of the objects, and the multiplicity of objects mean that a moment later the object will no longer appear to have a perfectly circular orbit.

A circle is an ellipse with eccentricity zero. And in fact tidal evolution can drive orbital eccentricity to values negligibly close to zero. See "Regarding the Putative Eccentricity of Charon's Orbit". From observations we already know Pluto and Charon move about each other in very nearly circular orbits. I expect when New Horizons flies by the system in July 2015, we will know their orbits more precisely.

The ellipse family includes perfect circles. An ellipse is a dilated circle. Draw a circle on a rubber sheet and stretch it to get an ellipse - pretty easy. Draw an ellipse on a stretched rubber sheet and relax it to see if you get a circle - much harder, right? Now let's take the Earth's orbit around the Sun. The Earth isn't a perfect sphere; it rotates around a tilted axis, and its center of gravity varies depending on the orbital position (season) with respect to the Sun. The polar ice caps, the tidal distributions, the clouds all play a part. The Moon and other planets also have some effects that vary with position. The Moon and Sun have their own similar idiosyncrasies.
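The "very specific velocity" point made earlier can be made quantitative with the Laplace-Runge-Lenz eccentricity vector: any deviation from the exact circular speed immediately produces a nonzero eccentricity. This is my own sketch, not from the posts above, in units where GM = 1:

```python
import numpy as np

GM = 1.0

def eccentricity(r, v):
    """Magnitude of the Laplace-Runge-Lenz eccentricity vector
    e = ((|v|^2 - GM/|r|) r - (r.v) v) / GM; zero only for a circle."""
    r, v = np.asarray(r, float), np.asarray(v, float)
    rmag = np.linalg.norm(r)
    e_vec = ((v @ v - GM / rmag) * r - (r @ v) * v) / GM
    return np.linalg.norm(e_vec)

# Circular speed at r = 1 is exactly v = 1 in these units.
print(eccentricity([1, 0], [0, 1.00]))   # 0.0   -> circle
print(eccentricity([1, 0], [0, 1.01]))   # ~0.02 -> already an ellipse
```

A 1% error in speed already yields an eccentricity of about 0.02, which is why an exactly circular orbit is a measure-zero outcome.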
You can probably name many more. So for most planetary orbits the bottom line is that there are too many variables that must line up exactly all the time to result in a perfect circular orbit, and an elliptical orbit is the result.

## Orbits

In the vast frictionless environment of space, gravitational attraction can cause two massive bodies to orbit each other. If the relative velocity between two bodies is too low they will collide. If the velocity is too high they will move away from each other. In between these extremes exist a range of stable orbits that can repeat with almost no change. Yet, over time orbits degrade. The Moon's orbit gets 38 mm farther away every year because its kinetic energy changes into tidal friction.

## Angular momentum and torque

A particle of mass m and velocity v has linear momentum p = mv. The particle may also have angular momentum L with respect to a given point in space. If r is the vector from the point to the particle, then

L = r × p

Notice that angular momentum is always a vector perpendicular to the plane defined by the vectors r and p (or v). For example, if the particle (or a planet) is in a circular orbit, its angular momentum with respect to the centre of the circle is perpendicular to the plane of the orbit and in the direction given by the vector cross product right-hand rule, as shown in Figure 10. Moreover, since in the case of a circular orbit, r is perpendicular to p (or v), the magnitude of L is simply

L = rmv

The significance of angular momentum arises from its derivative with respect to time,

d L/d t = m d(r × v)/d t          (45)

where p has been replaced by mv and the constant m has been factored out. Using the product rule of differential calculus,

d(r × v)/d t = (d r/d t × v) + (r × d v/d t)          (46)

In the first term on the right-hand side of equation (46), dr/dt is simply the velocity v, leaving v × v. Since the cross product of any vector with itself is always zero, that term drops out, leaving

d(r × v)/d t = r × d v/d t          (47)

Here, dv/dt is the acceleration a of the particle.
Thus, if equation (47) is multiplied by m, the left-hand side becomes dL/dt, as in equation (45), and the right-hand side may be written r × ma. Since, according to Newton's second law, ma is equal to F, the net force acting on the particle, the result is

d L/d t = r × F          (48)

Equation (48) means that any change in the angular momentum of a particle must be produced by a force that is not acting along the same direction as r. One particularly important application is the solar system. Each planet is held in its orbit by its gravitational attraction to the Sun, a force that acts along the vector from the Sun to the planet. Thus, the force of gravity cannot change the angular momentum of any planet with respect to the Sun. Therefore, each planet has constant angular momentum with respect to the Sun. This conclusion is correct even though the real orbits of the planets are not circles but ellipses.

The quantity r × F is called the torque τ. Torque may be thought of as a kind of twisting force, the kind needed to tighten a bolt or to set a body into rotation. Using this definition, equation (48) may be rewritten

d L/d t = τ          (49)

Equation (49) means that if there is no torque acting on a particle, its angular momentum is constant, or conserved. Suppose, however, that some agent applies a force Fa to the particle, resulting in a torque equal to r × Fa. According to Newton's third law, the particle must apply a force −Fa to the agent. Thus, there is a torque equal to −r × Fa acting on the agent. The torque on the particle causes its angular momentum to change at a rate given by dL/dt = r × Fa. However, the angular momentum La of the agent is changing at the rate dLa/dt = −r × Fa. Therefore, dL/dt + dLa/dt = 0, meaning that the total angular momentum of particle plus agent is constant, or conserved. This principle may be generalized to include all interactions between bodies of any kind, acting by way of forces of any kind. Total angular momentum is always conserved.
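A quick numerical illustration of this conservation law (my own sketch, not part of the text above): integrate a Kepler orbit with a simple kick-drift scheme and watch L = r × v stay fixed. It is conserved exactly here because the gravitational kick is always parallel to r, so each step leaves r × v unchanged up to roundoff:

```python
import numpy as np

GM = 1.0
r = np.array([1.0, 0.0])
v = np.array([0.0, 1.2])              # an eccentric, non-circular orbit
dt = 1e-3

L0 = np.cross(r, v)                   # angular momentum per unit mass
for _ in range(100_000):
    a = -GM * r / np.linalg.norm(r)**3
    v = v + a * dt                    # kick: a is parallel to r, so r x v unchanged
    r = r + v * dt                    # drift: r x v + dt (v x v) = r x v
L1 = np.cross(r, v)

print(L0, L1)                         # both 1.2 to machine precision
```

The orbit traced out is an ellipse, but L never drifts, exactly as equation (49) demands for a central force.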
The law of conservation of angular momentum is one of the most important principles in all of physics.

## Researchers identify circular orbits for 74 small exoplanets

The system Kepler-444 formed when the Milky Way galaxy was a youthful two billion years old. The planets were detected from the dimming that occurs when they transit the disc of their parent star, as shown in this artist's conception. Credit: NASA

Viewed from above, our solar system's planetary orbits around the sun resemble rings around a bulls-eye. Each planet, including Earth, keeps to a roughly circular path, always maintaining the same distance from the sun. For decades, astronomers have wondered whether the solar system's circular orbits might be a rarity in our universe. Now a new analysis suggests that such orbital regularity is instead the norm, at least for systems with planets as small as Earth.

In a paper published in the Astrophysical Journal, researchers from MIT and Aarhus University in Denmark report that 74 exoplanets, located hundreds of light-years away, orbit their respective stars in circular patterns, much like the planets of our solar system. These 74 exoplanets, which orbit 28 stars, are about the size of Earth, and their circular trajectories stand in stark contrast to those of more massive exoplanets, some of which come extremely close to their stars before hurtling far out in highly eccentric, elongated orbits.

"Twenty years ago, we only knew about our solar system, and everything was circular and so everyone expected circular orbits everywhere," says Vincent Van Eylen, a visiting graduate student in MIT's Department of Physics. "Then we started finding giant exoplanets, and we found suddenly a whole range of eccentricities, so there was an open question about whether this would also hold for smaller planets. We find that for small planets, circular is probably the norm."

Ultimately, Van Eylen says that's good news in the search for life elsewhere.
Among other requirements, for a planet to be habitable, it would have to be about the size of Earth—small and compact enough to be made of rock, not gas. If a small planet also maintained a circular orbit, it would be even more hospitable to life, as it would support a stable climate year-round. (In contrast, a planet with a more eccentric orbit might experience dramatic swings in climate as it orbited close in, then far out from its star.)

"If eccentric orbits are common for habitable planets, that would be quite a worry for life, because they would have such a large range of climate properties," Van Eylen says. "But what we find is, probably we don't have to worry too much because circular cases are fairly common."

In the past, researchers have calculated the orbital eccentricities of large, "gas giant" exoplanets using radial velocity—a technique that measures a star's movement. As a planet orbits a star, its gravitational force will tug on the star, causing it to move in a pattern that reflects the planet's orbit. However, the technique is most successful for larger planets, as they exert enough gravitational pull to influence their stars.

Researchers commonly find smaller planets by using a transit-detecting method, in which they study the light given off by a star, in search of dips in starlight that signify when a planet crosses, or "transits," in front of that star, momentarily diminishing its light. Ordinarily, this method only illuminates a planet's existence, not its orbit. But Van Eylen and his colleague Simon Albrecht, of Aarhus University, devised a way to glean orbital information from stellar transit data.

They first reasoned that if they knew the mass and radius of a planet's star, they could calculate how long a planet would take to orbit that star, if its orbit were circular. The mass and radius of a star determines its gravitational pull, which in turn influences how fast a planet travels around the star.
By calculating a planet's orbital velocity in a circular orbit, they could then estimate a transit's duration—how long a planet would take to cross in front of a star. If the calculated transit matched an actual transit, the researchers reasoned that the planet's orbit must be circular. If the transit were longer or shorter, the orbit must be more elongated, or eccentric.

To obtain actual transit data, the team looked through data collected over the past four years by NASA's Kepler telescope—a space observatory that surveys a slice of the sky in search of habitable planets. The telescope has monitored the brightness of over 145,000 stars, only a fraction of which have been characterized in any detail. The team chose to concentrate on 28 stars for which mass and radius have previously been measured, using asteroseismology—a technique that measures stellar pulsations, which reflect a star's mass and radius. These 28 stars host multiplanet systems—74 exoplanets in all.

The researchers obtained Kepler data for each exoplanet, looking not only for the occurrence of transits, but also their duration. Given the mass and radius of the host stars, the team calculated each planet's transit duration if its orbit were circular, then compared the estimated transit durations with actual transit durations from Kepler data. Across the board, Van Eylen and Albrecht found the calculated and actual transit durations matched, suggesting that all 74 exoplanets maintain circular, not eccentric, orbits.

"We found that most of them matched pretty closely, which means they're pretty close to being circular," Van Eylen says. "We are very certain that if very high eccentricities were common, we would've seen that, which we don't."

Van Eylen says the orbital results for these smaller planets may eventually help to explain why larger planets have more extreme orbits.
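The reasoning described above can be sketched numerically: given a star's mass and radius and a planet's period, Kepler's third law fixes the orbital distance for a circular orbit, and simple geometry then gives the expected transit duration. This is a central-transit approximation of my own, not the authors' actual pipeline; for the Earth-Sun case it gives the familiar ~13 hour transit:

```python
import math

G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # stellar mass, kg
R_sun = 6.957e8          # stellar radius, m
P     = 365.25 * 86400   # orbital period, s

# Kepler's third law for a circular orbit: a^3 = G M P^2 / (4 pi^2)
a = (G * M_sun * P**2 / (4 * math.pi**2)) ** (1 / 3)

# Duration of a central transit of a small planet across the stellar disc
T_dur = (P / math.pi) * math.asin(R_sun / a)

print(a / 1.496e11)      # ~1.0 AU
print(T_dur / 3600)      # ~13 hours
```

A planet on an eccentric orbit moves faster or slower at transit than this circular-orbit prediction, which is exactly the mismatch the method looks for.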
"We want to understand why some exoplanets have extremely eccentric orbits, while in other cases, such as the solar system, planets orbit mostly circularly," Van Eylen says. "This is one of the first times we've reliably measured the eccentricities of small planets, and it's exciting to see they are different from the giant planets, but similar to the solar system."

David Kipping, an astronomer at the Harvard-Smithsonian Center for Astrophysics, notes that Van Eylen's sample of 74 exoplanets is a relatively small slice, considering the hundreds of thousands of stars in the sky. "I think that the evidence for smaller planets having more circular orbits is presently tentative," says Kipping, who was not involved in the research. "It prompts us to investigate this question in more detail and see whether this is indeed a universal trend, or a feature of the small sample considered."

In regard to our own solar system, Kipping speculates that with a larger sample of planetary systems, "one might investigate eccentricity as a function of multiplicity, and see whether the solar system's eight planets are typical or not."

## Are circular orbits normal

The objects cannot be stationary relative to each other (zero tangential velocity) and in orbit around each other (non-zero tangential velocity) at the same time. The ellipse is the general case; a circle is a special ellipse. In practice an orbit will never be an exact circle, but it can be a good approximation to it.

Only to a good approximation. You won't swing it in a perfect circle, the rope is elastic and so on. That is the point. Sometimes the approximations are better, sometimes they are worse. Unlike a rope, gravity has nothing that would prefer a fixed distance.

Let's try to go back to the two-body problem, where the second body has a negligible mass compared to the first one. Let's imagine that this system is somehow miraculously isolated from the rest of the universe.
Now you can imagine that the trajectory of the second body around the first one will depend on some initial conditions, namely the initial position and velocity of the second body. Let's consider the first body at rest all the time. Now, just considering the kinematics of uniform circular motion, for each orbit (distance from the center of the first body) there exists only a single value of speed of the second body which allows its circular motion. Moreover, the direction of its initial movement must be exactly tangential, i.e. exactly at a right angle with the radial direction (toward the center of the first body). Any other initial velocity will not lead to a circular motion (ellipse, parabola and hyperbola are possible depending on the total energy of the system).

As you can see, we made some approximations and considered an unrealistically isolated system, and even so the probability of the second body orbiting the first one in an exact circle is very low. Now you can ask yourself how probable this is to happen when planets are being formed in any real stellar system, where all bodies and dust interact with each other.

F = centripetal force [N] (vector)
a = centripetal acceleration [m/s²] (vector)
v = tangential velocity [m/s] (vector)
r = radius of the circular path [m] (vector)
m = mass [kg]

The term centripetal means pointing at the center. In all circular motion, force and acceleration always point at the center of the circle. This is often confused with centrifugal, which means away from the center. Tangential means touching at only one point. Since the velocity is perpendicular to the centripetal force, it doesn't enter or exit the circle. Yes, the ball is accelerating towards the center of the circle.

It's possible to produce an acceleration similar to gravity without a gravitational field. It's done by rotating a room to produce a centripetal acceleration.

Example: You are designing a rotating spaceship with artificial gravity.
In your plans, the ship is shaped like a wheel with a diameter of 50.0 m. How fast does the outer edge have to move to produce a centripetal acceleration equal to the gravity felt on Earth's surface?

Solution:
d = 2r, so r = 25 m
a = v²/r
v = sqrt(a * r) = sqrt((9.8)(25)) = 15.7 m/s

Example: Use the circumference of a circle and the tangential velocity of the ship to calculate how long one rotation will take.

Hint: C = 2πr and v = C/T, so T = 2πr/v.

Solution: T = 2π(25)/15.7 ≈ 10 s

Your 50 meter spaceship design was built. It produced the illusion of 9.8 m/s² of gravity, but people are complaining they feel strange when standing up.

Example: Use the circumference of a circle at a person's head and the time for one rotation to calculate the velocity at the top of a 2 meter tall person's head.

Strategy: A person on the rotating spaceship will have their feet on the outer edge and their head pointed towards the center. This means the head will have a shorter radius and a slower speed. With d = 50 m, the radius at the head is r = 25 − 2 = 23 m. We also know that the total time for one rotation must be the same at any distance from the center, and we already calculated that one rotation takes 10 seconds.

v = C/T = 2πr/T = 2π(23)/(10) = 14.45 m/s

Example: What acceleration would a 2 meter tall person feel at their head? Compare that to the acceleration felt at their feet.

Solution:
a = v²/r = 14.45²/23 = 9.07 m/s²
9.07/9.8 = 0.92

A person's head only feels 92% of the gravity at their feet.

Question: What changes could be made to the spaceship design to reduce the difference in acceleration between a person's head and feet?

Answer: The ship's radius could be increased, but that would increase the costs.
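The whole chain of spaceship calculations above can be reproduced in a few lines. This sketch uses unrounded intermediate values, so the last digits differ slightly from the hand calculation, but the head-to-feet ratio comes out exactly 23/25 = 0.92:

```python
import math

g = 9.8           # target artificial gravity, m/s^2
r_feet = 25.0     # wheel radius at the feet, m
r_head = 23.0     # radius at a 2 m tall person's head, m

v_feet = math.sqrt(g * r_feet)        # rim speed, ~15.65 m/s
T = 2 * math.pi * r_feet / v_feet     # rotation period, ~10 s
v_head = 2 * math.pi * r_head / T     # speed at the head, ~14.4 m/s
a_head = v_head**2 / r_head           # acceleration at the head, ~9.0 m/s^2

print(v_feet, T, v_head, a_head, a_head / g)
```

Algebraically a_head/g = r_head/r_feet, which is why only a larger wheel (or a lower target gravity) reduces the head-to-feet difference.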
The ship could produce a lower acceleration, maybe half of Earth's gravity.

## Circular orbits - Astronomy

The force needed by a body of mass m to keep in a circular motion at a distance R from the centre of a circle with velocity v is the centripetal force Fc, where

Fc = mv²/R

The direction of the force is towards the centre of the circle of motion, and its magnitude and direction can both be derived from a consideration of Newton's second law of motion. In astronomy many stars, planets and disks of material move in circular orbits and require a force equivalent to the centripetal force to maintain their circular motion. This force is usually gravity. By balancing the gravitational and centripetal forces it is possible to obtain estimates of the mass within a given radius from the rotation curves of galaxies or accretion disks around supermassive black holes.

When we sit on a merry-go-round or in a car taking a corner, we "feel" the centrifugal force pulling us away from the centre of our circular motion, which has the same magnitude but opposite sign as the centripetal force. This "pseudo-force" should not be confused with the reality of the centripetal force, but arises because of Newton's third law of motion: "For every action, there is an equal, but opposite, reaction". The centrifugal force is a pseudo-force because if the centripetal force ceased for an object in circular motion, the centrifugal force the body is "feeling" would instantly disappear, and the object would travel tangentially to its line of motion. It only arises because the body is in a non-inertial frame of reference.

Study Astronomy Online at Swinburne University. All material is © Swinburne University of Technology except where indicated.

If you added up all the energy you would need for each piece, you would derive the Gravitational Binding Energy for the body:

E ∝ M²/R

This depends on two quantities: the Mass (M) and the Radius (R) of the body.
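As a quick sketch of this scaling, using the standard uniform-density constant of 3/5 (so U = 3GM²/5R), the Earth's binding energy comes out near the 2×10^32 J figure quoted for it:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # Earth mass, kg
R = 6.371e6          # Earth radius, m

U = 3 * G * M**2 / (5 * R)   # binding energy of a uniform-density sphere
print(f"{U:.2e} J")          # ~2.2e32 J
```

The real Earth is centrally condensed rather than uniform, so the true constant is somewhat larger than 3/5, but the order of magnitude is the same.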
The formula above is a "proportionality": it tells us how the binding energy scales with the mass and radius of the object. The constant out front depends on details of how matter is distributed in the object. For example, a sphere of uniform density would have a constant of 3/5. For the Earth, the Gravitational Binding Energy is about 2×10^32 joules, or about 12 days of the Sun's total energy output!

I was reading and came across an article which stated that all the planets orbited the sun elliptically while the earth was the only planet to orbit the sun in a circular motion. Is this true? Also, what technology did we use to figure this out?

All the planets, including the Earth, orbit in ellipses. Neptune and Venus actually orbit closer to a circle than the Earth does. As to technology: telescopes.

No orbit, including the Earth's, is perfectly circular. The Earth's orbit is, in fact, an ellipse. The distance between the Earth and the Sun varies by about 5,000,000 km throughout the year.

The very concept of a "perfectly circular orbit" is flawed, since the circle is not a stable orbit. Stability implies self-correction: if you perturb a stable system, it naturally returns to the same stable state. A circular orbit is not stable, because any perturbation, even a passing comet, would irreversibly perturb the orbit into an ellipse.

Well, SpaceTiger, I was trying to make it easy to understand: if you start with a circle, any perturbation, no matter how small, leaves you with an ellipse (or other conic section, I suppose).

I read that Earth's orbit was significantly different from that of all the other planets. How true is this statement?

Not true at all. Earth's orbit is slightly elliptical, close enough to circular that you can't tell it's an ellipse unless you pay close attention.
And that pretty much describes the orbits of the other (traditional) planets too. Some are a little more elliptical, others a little less, but there isn't any deep pattern; it bears all the earmarks of accidental initial conditions just randomly distributing a small varying eccentricity on the various planets. Oh, Jupiter and Saturn have a certain "tide locking", and the Earth and Moon show another, but that again is slight and historically contingent, not "deep".

## Circular orbits - Astronomy

This is a beta 3.4a version of the Circular Orbit program written by TeJaun RiChard, a senior at Shaw High School in East Cleveland, Ohio, during a shadowing experience in July 2012 for Future Connections, Ian Breyfogle, a senior at Kent State University, and Tom Benson from NASA Glenn. You are invited to participate in the beta testing. If you find errors in the program or would like to suggest improvements, please send an e-mail to [email protected]

Due to IT security concerns, many users are currently experiencing problems running NASA Glenn educational applets. There are security settings that you can adjust that may correct this problem.

With this software you can investigate how a satellite orbits a planet by changing the values of different orbital parameters. The velocity needed to remain in a circular orbit depends on the altitude at which you orbit and the gravitational pull of the planet that you are orbiting. Using this simulator, you can study these effects.

There are two versions of CircularOrbit which require different levels of experience with the package, knowledge of orbital mechanics, and computer technology. This web page contains the on-line student version of the program. It includes an on-line user's manual which describes the various options available in the program and includes hyperlinks to pages in the Beginner's Guide to Rockets describing the math and science of rockets.
More experienced users can select a version of the program which does not include these instructions and loads faster on your computer. You can download these versions of the program to your computer by clicking on this yellow button:

The program is provided as Corbit.zip. You must save this file on your hard drive and "Extract" the necessary files from Corbit.zip. Click on "Corbit.html" to launch your browser and load the program. With the downloaded version, you can run the program off-line and do not have to be connected to the Internet.

If you see only a grey box at the top of this page, be sure that Java is enabled in your browser. If Java is enabled, and you are using the Windows XP operating system, you may need to get a newer version of Java. Go to this link: http://www.java.com/en/index.jsp, try the "Download It Now" button, and then select "Yes" when the download box from Sun pops up.

Information is presented to you using labels. A label has a descriptive word displayed in a colored box. Some labels give instructions for the next phase of design and launch; some labels express the state of the calculations.

1. The Blue "Compute" button causes the program to calculate the orbital velocity, altitude and period based on current input values.
2. White buttons are possible input selections that you can choose. Clicking on a selection button causes the button to turn Yellow and enables the input slider and text box.

1. A white box with black numbers is an input box and you can change the value of the number. To change the value in an input box, select the box by moving the cursor into the box and clicking the mouse, then backspace over the old number and enter a new number.
2. A black box with colored numbers is an output box and the value is computed by the program.

The program screen is divided into two main parts:

1. On the left of the screen are the control buttons, labels, and input box that you use to change orbital parameters.
Details of the Input Variables are given below. 2. On the right of the screen is the graphics window in which you will see the satellite orbiting the planet. Details are given in Graphics. You move the graphic within the view window by moving your cursor into the window, holding down the left mouse button, and dragging to a new location. You can change the size of the graphic by moving the "Zoom" widget in the same way. If you lose your picture, or want to return to the default settings, click on the "Find" button at the bottom of the view window. The speed needed to orbit a planet depends on the altitude above the planet and on the gravitational acceleration produced by the planet. A formula describing this relation was developed by Johannes Kepler in the early 1600's: V = sqrt((g0 * Re^2) / (Re + h)) where V is the velocity for a circular orbit, g0 is the surface gravitational constant of the Earth (32.2 ft/sec^2), Re is the mean Earth radius (3963 miles), and h is the height of the orbit in miles. If the rocket was launched from the Moon or Mars, the rocket would require a different orbital velocity because of the different planetary radius and gravitational constant. For a 100 mile high orbit around the Earth, the orbital velocity is 17,478 mph. Knowing the velocity and the radius of the circular orbit, we can also calculate the time needed to complete an orbit. This time is called the orbital period. T^2 = (4 * pi^2 * (Re + h)^3) / (g0 * Re^2) Looking at these equations, we see that as the height above the planet increases, the velocity needed to maintain an orbit decreases. A spacecraft flying at a lower orbit must travel faster than a spacecraft flying at a higher orbit. Input variables are located on the left side of the screen. You can select the planet by using the choice button. Click on the menu and drag to select any planet in the solar system or the Earth's Moon. The corresponding gravitational constant and planet radius are displayed below the choice buttons.
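The velocity and period relations quoted above can be checked with a few lines of plain Python (my own sketch, not part of the NASA applet; g0 and Re are the values quoted in the text):

```python
import math

def circular_orbit(h_miles, g0=32.2, re_miles=3963.0):
    """Velocity (mph) and period (minutes) of a circular orbit
    h_miles above the Earth's surface, from
    V^2 = g0 * Re^2 / (Re + h) and
    T^2 = 4 * pi^2 * (Re + h)^3 / (g0 * Re^2)."""
    re_ft = re_miles * 5280.0                 # mean Earth radius, feet
    r_ft = (re_miles + h_miles) * 5280.0      # orbital radius, feet
    v_fps = math.sqrt(g0 * re_ft**2 / r_ft)   # orbital velocity, ft/sec
    t_sec = 2.0 * math.pi * r_ft / v_fps      # orbital period, seconds
    return v_fps * 3600.0 / 5280.0, t_sec / 60.0

v_mph, t_min = circular_orbit(100.0)  # the 100-mile orbit from the text
```

For a 100 mile high orbit this reproduces the 17,478 mph figure quoted above to within rounding, with a period of roughly 88 minutes.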
Calculations and input can be entered in either English (Imperial) or Metric units by using the "Units" choice button. The period of the orbit can be expressed in "Minutes", "Hours", or "Days" by using the choice button located next to the "Compute" button. You then select the desired input variable: altitude, velocity, or time by using the choice button. Set the value for the desired input by using the input box and then push the "Compute" button which sends the information to the computer, performs the calculations and displays the results. You can also set the value of altitude, velocity, or time by using the sliders located next to the input boxes. When using the sliders, you do not have to press the "Compute" button. You can change the maximum altitude on the sliders by using the input box at the bottom of the panel. Changing the maximum altitude automatically changes the minimum velocity and maximum time. We will continue to improve and update CircularOrbit based on your input. The history of changes is included here: 1. On 22 Nov 13, version 3.4a was released. This version of the program allows the user to specify their own planet for orbiting. You can change the values of the planet radius and gravitational constant using either sliders or text boxes. 2. On 6 Aug 12, version 3.3 was released. This version of the program includes photos of the planets of the solar system. It also corrects some small problems in the operation of the input boxes. Versions 3.0- 3.2 were development versions and were not released to the public. 3. On 27 Jul 12, version 2.7 was released. This version of the program includes slider input for the altitude, velocity and time. Versions 2.0- 2.6 were development versions and were not released to the public 4. On 20 Jul 12, version 1.8 was released. This version of the program includes the graphics output and includes all the planets of the solar system and the Earth's Moon. Input for this version was limited to input boxes. 
Versions 1.1- 1.7 were development versions and were not released to the public. 5. On 10 Nov 05, version 1.0 of Orbit Calculator was released. This version did not include graphics and only solved for the orbital velocity around the Earth, Moon, and Mars. Notice that orbital flight is a combination of altitude and horizontal velocity. The recent Space Ship 1 flight acquired the necessary altitude to "go into space", but lacked the horizontal velocity needed to "go into orbit".
2022-06-25 16:18:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5308152437210083, "perplexity": 628.9661373157624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036077.8/warc/CC-MAIN-20220625160220-20220625190220-00472.warc.gz"}
https://brilliant.org/problems/something-to-do-with-the-pigeons/
# Something to do with the pigeons

What is the minimum number of odd integers we must choose in the range of $$1$$ to $$1000$$ to ensure that there is at least one pair whose sum is $$1002$$?
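A brute-force sanity check (my addition, not part of the puzzle page): the largest selection of odd numbers in range containing no pair that sums to the target can be built greedily by taking each odd number whose partner has not already been taken; one more pick than that forces a pair.

```python
def min_picks_forcing_pair(limit=1000, target=1002):
    """Smallest k such that ANY k odd integers in 1..limit must
    contain a pair summing to target: one more than the size of
    the largest pair-free selection."""
    chosen = set()
    for a in range(1, limit + 1, 2):      # odd numbers only
        if (target - a) not in chosen:    # partner not yet taken -> safe to add
            chosen.add(a)
    return len(chosen) + 1
```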
2017-07-25 10:55:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7586692571640015, "perplexity": 78.99055626602517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425144.89/warc/CC-MAIN-20170725102459-20170725122459-00073.warc.gz"}
https://www.transtutors.com/questions/the-stockholders-equity-section-of-swifty-corporation-s-balance-sheet-consists-of-co-2572545.htm
# The stockholders’ equity section of Swifty Corporation’s balance sheet consists of common sto...

The stockholders’ equity section of Swifty Corporation’s balance sheet consists of common stock ($8 par) $1,016,000 and retained earnings $450,000. A 15% stock dividend (19,050 shares) is declared when the market price per share is $18.

(a) Show the before-and-after effects of the dividend on the components of stockholders’ equity.

| | Before Dividend | After Dividend |
| --- | --- | --- |
| Common Stock | $1,016,000 | $1,168,400 |
| Paid-in Capital in Excess of Par Value-Common Stock | 0 | 190,500 |
| Total Paid-in Capital | 1,016,000 | 1,358,900 |
| Retained Earnings | 450,000 | 107,100 |
| Total Stockholders’ Equity | $1,466,000 | $1,466,000 |

(b) Show the before-and-after effects of the dividend on the shares outstanding.

| | Before Dividend | After Dividend |
| --- | --- | --- |
| Outstanding shares | 127,000 | 146,050 |
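The figures can be reproduced in a few lines (a sketch of the standard small stock dividend entry: retained earnings is charged at market value, common stock is credited at par, and the excess goes to paid-in capital):

```python
par, market = 8.0, 18.0
shares_before = 127_000            # $1,016,000 common stock / $8 par
common_before = 1_016_000.0
retained_before = 450_000.0

dividend_shares = round(shares_before * 0.15)      # 19,050 new shares
common_after = common_before + dividend_shares * par
excess_after = dividend_shares * (market - par)    # paid-in capital in excess of par
retained_after = retained_before - dividend_shares * market
shares_after = shares_before + dividend_shares
```

Total stockholders' equity is unchanged at $1,466,000: a stock dividend only reshuffles amounts among equity accounts.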
2018-12-17 11:03:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4068940281867981, "perplexity": 11696.888093649563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828501.85/warc/CC-MAIN-20181217091227-20181217113227-00268.warc.gz"}
https://sunglee.us/mathphysarchive/?p=2121
# The Curvature of a Surface in Euclidean 3-Space $\mathbb{R}^3$

In here, it is seen that the curvature of a unit speed parametric curve $\alpha(t)$ in $\mathbb{R}^3$ can be measured by its acceleration $\ddot\alpha(t)$. In this case, the acceleration happens to be a normal vector field along the curve. Now we turn our attention to surfaces in Euclidean 3-space $\mathbb{R}^3$ and we would like to devise a way to measure the bending of a surface in $\mathbb{R}^3$, and it may be achieved by studying the change of a unit normal vector field on the surface. To study the change of a unit normal vector field on a surface, we need to be able to differentiate vector fields. But first let us review the directional derivative you learned in multivariable calculus. Let $f:\mathbb{R}^3\longrightarrow\mathbb{R}$ be a differentiable function and $\mathbf{v}$ a tangent vector to $\mathbb{R}^3$ at $\mathbf{p}$. Then the directional derivative of $f$ in the $\mathbf{v}$ direction at $\mathbf{p}$ is defined by $$\label{eq:directderiv} \nabla_{\mathbf{v}}f=\left.\frac{d}{dt}f(\mathbf{p}+t\mathbf{v})\right|_{t=0}.$$ By chain rule, the directional derivative can be written as $$\label{eq:directderiv2} \nabla_{\mathbf{v}}f=\nabla f(\mathbf{p})\cdot\mathbf{v},$$ where $\nabla f$ denotes the gradient of $f$ $$\nabla f=\frac{\partial f}{\partial x_1}E_1(\mathbf{p})+\frac{\partial f}{\partial x_2}E_2(\mathbf{p})+\frac{\partial f}{\partial x_3}E_3(\mathbf{p}),$$ where $E_1, E_2, E_3$ denote the standard orthonormal frame in $\mathbb{R}^3$. The directional derivative satisfies the following properties. Theorem. Let $f,g$ be real-valued differentiable functions on $\mathbb{R}^3$, $\mathbf{v},\mathbf{w}$ tangent vectors to $\mathbb{R}^3$ at $\mathbf{p}$, and $a,b\in\mathbb{R}$. Then 1. $\nabla_{a\mathbf{v}+b\mathbf{w}}f=a\nabla_{\mathbf{v}}f+b\nabla_{\mathbf{w}}f$ 2. $\nabla_{\mathbf{v}}(af+bg)=a\nabla_{\mathbf{v}}f+b\nabla_{\mathbf{v}}g$ 3.
$\nabla_{\mathbf{v}}fg=(\nabla_{\mathbf{v}}f)g(\mathbf{p})+f(\mathbf{p})\nabla_{\mathbf{v}}g$ The properties 1 and 2 are linearity and the property 3 is Leibniz rule. The directional derivative \eqref{eq:directderiv} can be generalized to the covariant derivative $\nabla_{\mathbf{v}}X$ of a vector field $X$ in the direction of a tangent vector $\mathbf{v}$ at $\mathbf{p}$: $$\label{eq:covderiv} \nabla_{\mathbf{v}}X=\left.\frac{d}{dt}X(\mathbf{p}+t\mathbf{v})\right|_{t=0}.$$ Let $X=x_1E_1+x_2E_2+x_3E_3$ in terms of the standard orthonormal frame $E_1,E_2,E_3$. Then $\nabla_{\mathbf{v}}X$ can be written as $$\label{eq:covderiv2} \nabla_{\mathbf{v}}X=\sum_{j=1}^3\nabla_{\mathbf{v}}x_jE_j.$$ Here, $\nabla_{\mathbf{v}}x_j$ is the directional derivative of the $j$-th component function of the vector field $X$ in the $\mathbf{v}$ direction as defined in \eqref{eq:directderiv}. The covariant derivative satisfies the following properties. Theorem. Let $X,Y$ be vector fields on $\mathbb{R}^3$, $\mathbf{v},\mathbf{w}$ tangent vectors at $\mathbf{p}$, $f$ a real-valued function on $\mathbb{R}^3$, and $a,b$ scalars. Then 1. $\nabla_{\mathbf{v}}(aX+bY)=a\nabla_{\mathbf{v}}X+b\nabla_{\mathbf{v}}Y$ 2. $\nabla_{a\mathbf{v}+b\mathbf{w}}X=a\nabla_{\mathbf{v}}X+b\nabla_{\mathbf{w}}X$ 3. $\nabla_{\mathbf{v}}(fX)=(\nabla_{\mathbf{v}}f)X(\mathbf{p})+f(\mathbf{p})\nabla_{\mathbf{v}}X$ 4. $\nabla_{\mathbf{v}}(X\cdot Y)=(\nabla_{\mathbf{v}}X)\cdot Y+X\cdot\nabla_{\mathbf{v}}Y$ The properties 1 and 2 are linearity and the properties 3 and 4 are Leibniz rules. Hereafter, I assume that surfaces are orientable and have nonvanishing normal vector fields. Let $\mathcal{M}\subset\mathbb{R}^3$ be a surface and $p\in\mathcal{M}$. For each $\mathbf{v}\in T_p\mathcal{M}$, define $$\label{eq:shape} S_p(\mathbf{v})=-\nabla_{\mathbf{v}}N,$$ where $N$ is a unit normal vector field on a neighborhood of $p\in\mathcal{M}$. Since $N\cdot N=1$, we have $0=\nabla_{\mathbf{v}}(N\cdot N)=2(\nabla_{\mathbf{v}}N)\cdot N$, so $S_p(\mathbf{v})\cdot N=-(\nabla_{\mathbf{v}}N)\cdot N=0$.
This means that $S_p(\mathbf{v})\in T_p\mathcal{M}$. Thus, \eqref{eq:shape} defines a linear map $S_p: T_p\mathcal M\longrightarrow T_p\mathcal{M}$. $S_p$ is called the shape operator of $\mathcal{M}$ at $p$ (derived from $N$). For each $p\in\mathcal{M}$, $S_p$ is a symmetric operator, i.e., $$\langle S_p(\mathbf{v}),\mathbf{w}\rangle=\langle S_p(\mathbf{w}),\mathbf{v}\rangle$$ for any $\mathbf{v},\mathbf{w}\in T_p\mathcal{M}$. Let us assume that $\mathcal{M}\subset\mathbb{R}^3$ is a regular surface so that any differentiable curve $\alpha: (-\epsilon,\epsilon)\longrightarrow\mathcal{M}$ is a regular curve, i.e., $\dot\alpha(t)\ne 0$ for every $t\in(-\epsilon,\epsilon)$. If $\alpha$ is a differentiable curve in $\mathcal{M}\subset\mathbb{R}^3$, then $$\label{eq:acceleration} \langle\ddot\alpha,N\rangle=\langle S(\dot\alpha),\dot\alpha\rangle.$$ $\langle\ddot\alpha,N\rangle$ is the normal component of the acceleration $\ddot\alpha$ to the surface $\mathcal{M}$. \eqref{eq:acceleration} says the normal component of $\ddot\alpha$ depends only on the shape operator $S$ and the velocity $\dot\alpha$. If $\alpha$ is parametrized by arc-length, i.e., $|\dot\alpha|=1$, then we get a measurement of the way $\mathcal{M}$ is bent in the $\dot\alpha$ direction. Hence we have the following definition: Definition. Let $\mathbf{u}$ be a unit tangent vector to $\mathcal{M}\subset\mathbb{R}^3$ at $p$. Then the number $\kappa(\mathbf{u})=\langle S(\mathbf{u}),\mathbf{u}\rangle$ is called the normal curvature of $\mathcal{M}$ in the $\mathbf{u}$ direction. The normal curvature $\kappa$ can be considered as a continuous function on the unit circle, $\kappa: S^1\longrightarrow\mathbb{R}$. Since $S^1$ is compact (closed and bounded), $\kappa$ attains a maximum and a minimum value, say $\kappa_1$ and $\kappa_2$, respectively. $\kappa_1$, $\kappa_2$ are called the principal curvatures of $\mathcal{M}$ at $p$.
The principal curvatures $\kappa_1$, $\kappa_2$ are the eigenvalues of the shape operator $S$ and $S$ can be written as the $2\times 2$ matrix $$\label{eq:shape2} S=\begin{pmatrix} \kappa_1 & 0\\ 0 & \kappa_2 \end{pmatrix}.$$ The arithmetic mean $H$ and the squared geometric mean $K$ of $\kappa_1$, $\kappa_2$ \begin{align} \label{eq:mean} H&=\frac{\kappa_1+\kappa_2}{2}=\frac{1}{2}\mathrm{tr}S,\\ \label{eq:gauss} K&=\kappa_1\kappa_2=\det S \end{align} are called, respectively, the mean and the Gaußian curvatures of $\mathcal{M}$. The definitions \eqref{eq:mean} and \eqref{eq:gauss} themselves, however, are not much help for calculating the mean and the Gaußian curvatures of a surface. We can compute the mean and the Gaußian curvatures of a parametric regular surface $\varphi: D(u,v)\longrightarrow\mathbb{R}^3$ using Gauß’ celebrated formulas \begin{align} \label{eq:mean2} H&=\frac{G\ell+En-2Fm}{2(EG-F^2)},\\ \label{eq:gauss2} K&=\frac{\ell n-m^2}{EG-F^2}, \end{align} where \begin{align*} E&=\langle\varphi_u,\varphi_u\rangle,\ F=\langle\varphi_u,\varphi_v\rangle,\ G=\langle\varphi_v,\varphi_v\rangle,\\ \ell&=\langle N,\varphi_{uu}\rangle,\ m=\langle N,\varphi_{uv}\rangle,\ n=\langle N,\varphi_{vv}\rangle. \end{align*} It is straightforward to verify that $$\label{eq:normal} |\varphi_u\times\varphi_v|^2=EG-F^2.$$ Example. Compute the Gaußian and the mean curvatures of the helicoid $$\varphi(u,v)=(u\cos v,u\sin v, bv),\ b\ne 0.$$ Helicoid Solution. \begin{align*} \varphi_u&=(\cos v,\sin v,0),\ \varphi_v=(-u\sin v,u\cos v,b),\\ \varphi_{uu}&=0,\ \varphi_{uv}=(-\sin v,\cos v,0),\ \varphi_{vv}=(-u\cos v,-u\sin v,0). \end{align*} $E$, $F$ and $G$ are calculated to be $$E=1,\ F=0,\ G=b^2+u^2.$$ $\varphi_u\times\varphi_v=(b\sin v,-b\cos v,u)$, so the unit normal vector field $N$ is given by $$N=\frac{\varphi_u\times\varphi_v}{\sqrt{EG-F^2}}=\frac{(b\sin v,-b\cos v,u)}{\sqrt{b^2+u^2}}.$$ Next, $\ell, m, n$ are calculated to be $$\ell=0,\ m=-\frac{b}{\sqrt{b^2+u^2}},\ n=0.$$ Finally we find the Gaußian curvature $K$ and the mean curvature $H$: \begin{align*} K&=\frac{\ell n-m^2}{EG-F^2}=-\frac{b^2}{(b^2+u^2)^2},\\ H&=\frac{G\ell+En-2Fm}{2(EG-F^2)}=0. \end{align*} Surfaces with $H=0$ are called minimal surfaces. For further reading on the topic I discussed here, I recommend: Barrett O’Neill, Elementary Differential Geometry, Academic Press, 1967
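The helicoid computation can be verified symbolically; a sketch using SymPy (assuming it is available), following Gauß’ formulas for $H$ and $K$ above:

```python
import sympy as sp

u, v, b = sp.symbols('u v b', real=True)
phi = sp.Matrix([u*sp.cos(v), u*sp.sin(v), b*v])    # helicoid

phi_u, phi_v = phi.diff(u), phi.diff(v)
E, F, G = phi_u.dot(phi_u), phi_u.dot(phi_v), phi_v.dot(phi_v)

n_vec = phi_u.cross(phi_v)
N = n_vec / sp.sqrt(n_vec.dot(n_vec))               # unit normal field

l = N.dot(phi_u.diff(u))     # <N, phi_uu>
m = N.dot(phi_u.diff(v))     # <N, phi_uv>
n = N.dot(phi_v.diff(v))     # <N, phi_vv>

K = sp.simplify((l*n - m**2) / (E*G - F**2))             # Gaussian curvature
H = sp.simplify((G*l + E*n - 2*F*m) / (2*(E*G - F**2)))  # mean curvature
```

SymPy reduces these to $K=-b^2/(b^2+u^2)^2$ and $H=0$, matching the hand computation.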
2023-02-01 14:56:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9639464020729065, "perplexity": 149.31180676983982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00399.warc.gz"}
https://www.quantconnect.com/learning/articles/investment-strategy-library/asset-class-trend-following
### Introduction Asset class trend following is a strategy that tries to exploit a momentum anomaly across various assets. It uses moving averages or momentum filters to gain exposure to an asset class only when there is a higher probability of outperformance with less risk. The basic logic behind trend following is finding a method to detect the trend of price movement, buying an asset when its price trend goes up, and selling when its trend goes down. ### Method This algorithm applies the trend following idea to 5 ETFs in different asset classes like stocks, bonds, and commodities. The simple moving average is used to detect the trend. When an ETF's closing price is over its ten-month simple moving average, we give that ETF an equal allocation; otherwise we stay in cash. SMA(symbol, period, resolution) is used to generate the moving average value in the LEAN implementation. A warm-up period of ten months is set to prime the data and initialize the indicator so the SMA is ready to use when the algorithm starts.
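The filter itself is easy to state outside the LEAN framework; a minimal plain-Python sketch (the function and list-based interface here are my own illustration, not the LEAN API):

```python
def sma_signals(closes, period=10):
    """For each month, True if the closing price is above its
    `period`-month simple moving average (hold the ETF), False
    otherwise (stay in cash). The first period - 1 months have no
    full SMA window yet -- the warm-up -- so they stay in cash."""
    signals = []
    for i in range(len(closes)):
        if i + 1 < period:
            signals.append(False)      # indicator still warming up
            continue
        sma = sum(closes[i + 1 - period:i + 1]) / period
        signals.append(closes[i] > sma)
    return signals
```

In a steadily rising series the filter stays invested once the warm-up ends; in a flat series the close never exceeds its average, so the filter stays in cash.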
2023-02-05 21:07:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3752375543117523, "perplexity": 1598.2951326086024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500288.69/warc/CC-MAIN-20230205193202-20230205223202-00524.warc.gz"}
http://math.stackexchange.com/questions/268250/is-the-area-of-intersection-of-convex-polygons-always-convex/269544
# Is the area of intersection of convex polygons always convex? I am interested specifically in the intersection of triangles, but I think this is true of all convex polygons; am I correct? Also, is the largest possible inscribed triangle of a convex polygon always composed of at least two of the polygon's vertices? (At first I thought it was 3 vertices, but then I thought of a square and realized that the max inscribed triangle was any two connected vertices and any point on the adjacent side.) I am interested in finding the maximum inscribed triangle of the intersection of several triangles, as in the image below. If you are interested in the context of the question, please see this question: How to find the intersection of the area of multiple triangles - The arbitrary intersection of any convex sets is convex. – Michael Greinecker Dec 31 '12 at 16:45 @MichaelGreinecker Any ideas regarding the inscribed triangle question? – vanattab Dec 31 '12 at 17:23 Sigur and Michael already responded to the part of your question regarding the intersections of convex sets being convex so I'm going to skip that. The second part of your question is then about inscribing triangles in convex polygons. If you're looking to maximize area, then all three vertices of the triangle can be assumed to be vertices of your polygon. In other words: there exists a triangle maximizing area whose vertices are all polygon vertices. Why? The method below is certainly not the most direct method to do it, but I like arguments of this sort because they give you intuition about the structure of the problem. I am not being 100% rigorous but pretty close to it; you can fill in the details pretty easily. Suppose your convex polygon is $P_1 P_2 \cdots P_n$ and $\triangle ABC$ is a triangle with maximal area. (At least one such exists, in view of $P_1 P_2 \cdots P_n$ being compact.) Clearly $A$, $B$, $C$ must all lie on the boundary of the polygon, otherwise you can stretch them out and increase the area.
Now suppose that any one of $A$, $B$, $C$ is not a vertex of our polygon. Without loss of generality, then, we may suppose that $A$ lies in between $P_1$ and $P_2$. (Draw a diagram, it will help.) For starters, note that this forces $BC$ to be parallel to $P_1 P_2$. Indeed if it weren't, then the triangle height from $A$ onto $BC$ would increase monotonically if we were to translate $A$ towards either $P_1$ or $P_2$ (and decrease monotonically in the other direction). This contradicts the maximality of the area of $\triangle ABC$, because perturbing $A$ slightly would result in an inscribed triangle with larger area (same base, bigger height). Note that the perturbed triangle is still inscribed in view of the polygon's assumed convexity (check this). Therefore $BC$ is parallel to $P_1 P_2$. In this case we may gradually translate $A$ to coincide with your favorite point among the pair $P_1$, $P_2$ without altering the area of the triangle, since we're keeping the same base and the height does not change in view of $BC$ being parallel to $P_1 P_2$. Translating $A$ does not affect the triangle being inscribed since our polygon is convex (like above). What have we achieved? Well, start with some arbitrary maximal triangle and one by one translate its vertices along the boundary edges in such a way to preserve area, until the vertex coincides with a polygon vertex. Do this for all triangle vertices. End up with a maximal triangle whose vertices are polygon vertices. Done. EDIT: Since we have come this far, we might as well describe an algorithm to actually find a triangle with maximal inscribed area. The method described above is not intended for that purpose since it's only good for finding local maxima (something that Rahul has also pointed out), but it does hint towards a more general approach.
Of course since we've shown that we can take maximal triangles to have polygon vertices, there is a silly $\Theta(n^3)$ algorithm that iterates over all possible such triangles and finds the one of maximal area. But we can do better: as Sigur pointed out we can maximize over all pairs $(L, h_L)$ where $L$ is a line connecting a pair of vertices and $h_L$ is an optimal height onto this line. You can do this by iterating over all $n^2$ pairs of vertices (to determine $L$) and then using binary search to determine the optimal $h_L$ in view of convexity, for a total runtime of $\Theta(n^2 \log n)$. You can actually kill the $\log n$ factor by using a careful amortization technique when iterating over the second vertex in $L$, solving the problem in $\Theta(n^2)$. My algorithm skills are not what they used to be, maybe there's an even faster way to do it. I'd be very happy to hear it. - +1, this implies that there is a global maximum attained by a triangle with its vertices coinciding with polygon vertices. However, it is worth mentioning that the method here only gives a locally maximal triangle, which may not be the global maximum (e.g. consider a hexagon obtained by taking an equilateral triangle and moving its edge midpoints slightly outward). – Rahul Jan 3 '13 at 4:51 Hi Rahul, I'm afraid I don't understand the point you're making. EDIT: To be clear, as you pointed out, this was not supposed to be a constructive method for global maxima (i.e. start with a triangle, wiggle it around, construct a global maximum). All it does is start with a triangle and make a triangle that is at least as good, and whose vertices are nice. So bootstrap it with a maximal triangle and voila. – Christos Jan 3 '13 at 5:25 Now it's my turn to not understand your last sentence. – Rahul Jan 3 '13 at 5:42 This isn't getting us anywhere :-) If perhaps you think my original presentation is not clear, I will be happy to throw in more details to straighten it out. 
– Christos Jan 3 '13 at 5:56 No, no, your presentation is perfectly fine. I just wanted to mention the local vs. global maximum issue as I felt a naïve reader might miss that point. – Rahul Jan 3 '13 at 6:40 Let $C_i$ be convex sets. For any points $a_i,b_i\in C_i$ the segment $a_ib_i$ is contained in $C_i$. So if $p,q\in C_1\cap C_2$ then the segment $pq$ is contained in $C_1\cap C_2$ also. So the intersection is again a convex set.
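The silly $\Theta(n^3)$ search mentioned in the answer above is only a few lines once we know the maximizer can be taken over polygon vertices; a sketch:

```python
from itertools import combinations

def tri_area(a, b, c):
    """Area of triangle abc via the shoelace formula."""
    return abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2.0

def max_inscribed_triangle(poly):
    """Brute-force O(n^3): try every triple of polygon vertices,
    which suffices by the argument above."""
    return max(combinations(poly, 3), key=lambda t: tri_area(*t))
```

For the unit square, for instance, every triple of vertices gives area 1/2, which is indeed the maximum inscribed triangle area.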
2016-02-12 08:32:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6703444719314575, "perplexity": 220.27686610371188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163512.72/warc/CC-MAIN-20160205193923-00051-ip-10-236-182-209.ec2.internal.warc.gz"}
http://nag.com/numeric/cl/nagdoc_cl23/html/F07/f07jbc.html
NAG C Library Manual

# NAG Library Function Document nag_dptsvx (f07jbc)

## 1  Purpose

nag_dptsvx (f07jbc) uses the factorization $A=LD{L}^{\mathrm{T}}$ to compute the solution to a real system of linear equations $AX=B$, where $A$ is an $n$ by $n$ symmetric positive definite tridiagonal matrix and $X$ and $B$ are $n$ by $r$ matrices. Error bounds on the solution and a condition estimate are also provided.

## 2  Specification

#include <nag.h>
#include <nagf07.h>

void nag_dptsvx (Nag_OrderType order, Nag_FactoredFormType fact, Integer n, Integer nrhs, const double d[], const double e[], double df[], double ef[], const double b[], Integer pdb, double x[], Integer pdx, double *rcond, double ferr[], double berr[], NagError *fail)

## 3  Description

nag_dptsvx (f07jbc) performs the following steps:

1. If ${\mathbf{fact}}=\mathrm{Nag_NotFactored}$, the matrix $A$ is factorized as $A=LD{L}^{\mathrm{T}}$, where $L$ is a unit lower bidiagonal matrix and $D$ is diagonal. The factorization can also be regarded as having the form $A={U}^{\mathrm{T}}DU$.
2. If the leading $i$ by $i$ principal minor is not positive definite, then the function returns with ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}=i$ and NE_MAT_NOT_POS_DEF. Otherwise, the factored form of $A$ is used to estimate the condition number of the matrix $A$. If the reciprocal of the condition number is less than machine precision, NE_SINGULAR_WP is returned as a warning, but the function still goes on to solve for $X$ and compute error bounds as described below.
3. The system of equations is solved for $X$ using the factored form of $A$.
4. Iterative refinement is applied to improve the computed solution matrix and to calculate error bounds and backward error estimates for it.
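To make steps 1–3 concrete, here is a plain-Python sketch of the $LDL^{\mathrm{T}}$ factorization and solve for a symmetric positive definite tridiagonal system (an illustration of the underlying algorithm only, not the NAG implementation; it omits the condition estimate and iterative refinement):

```python
def ldlt_tridiag_factor(d, e):
    """d: diagonal (length n), e: subdiagonal (length n-1).
    Returns (df, ef): df is the diagonal of D, ef the subdiagonal
    of the unit lower bidiagonal L. Raises ValueError when a
    leading principal minor is not positive definite (step 2)."""
    n = len(d)
    df, ef = [0.0] * n, [0.0] * (n - 1)
    df[0] = d[0]
    for i in range(n - 1):
        if df[i] <= 0.0:
            raise ValueError(f"leading minor {i + 1} not positive definite")
        ef[i] = e[i] / df[i]
        df[i + 1] = d[i + 1] - ef[i] * e[i]
    if df[n - 1] <= 0.0:
        raise ValueError(f"leading minor {n} not positive definite")
    return df, ef

def ldlt_tridiag_solve(df, ef, rhs):
    """Solve L D L^T x = rhs by forward substitution, diagonal
    scaling, and back substitution (step 3)."""
    n = len(df)
    y = list(rhs)
    for i in range(1, n):               # forward: L y = rhs
        y[i] -= ef[i - 1] * y[i - 1]
    z = [y[i] / df[i] for i in range(n)]  # scale: D z = y
    for i in range(n - 2, -1, -1):      # back: L^T x = z
        z[i] -= ef[i] * z[i + 1]
    return z
```

For the classic tridiagonal matrix with diagonal 2 and subdiagonal −1, this recovers the exact solution of a small test system.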
## 4  References Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia http://www.netlib.org/lapack/lug Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore Higham N J (2002) Accuracy and Stability of Numerical Algorithms (2nd Edition) SIAM, Philadelphia ## 5  Arguments 1:     orderNag_OrderTypeInput On entry: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by ${\mathbf{order}}=\mathrm{Nag_RowMajor}$. See Section 3.2.1.3 in the Essential Introduction for a more detailed explanation of the use of this argument. Constraint: ${\mathbf{order}}=\mathrm{Nag_RowMajor}$ or Nag_ColMajor. 2:     factNag_FactoredFormTypeInput On entry: specifies whether or not the factorized form of the matrix $A$ has been supplied. ${\mathbf{fact}}=\mathrm{Nag_Factored}$ df and ef contain the factorized form of the matrix $A$. df and ef will not be modified. ${\mathbf{fact}}=\mathrm{Nag_NotFactored}$ The matrix $A$ will be copied to df and ef and factorized. Constraint: ${\mathbf{fact}}=\mathrm{Nag_Factored}$ or $\mathrm{Nag_NotFactored}$. 3:     nIntegerInput On entry: $n$, the order of the matrix $A$. Constraint: ${\mathbf{n}}\ge 0$. 4:     nrhsIntegerInput On entry: $r$, the number of right-hand sides, i.e., the number of columns of the matrix $B$. Constraint: ${\mathbf{nrhs}}\ge 0$. 5:     d[$\mathit{dim}$]const doubleInput Note: the dimension, dim, of the array d must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$. On entry: the $n$ diagonal elements of the tridiagonal matrix $A$. 
6: e[dim] (const double), Input

Note: the dimension, dim, of the array e must be at least max(1, n−1).

On entry: the $(n-1)$ subdiagonal elements of the tridiagonal matrix $A$.

7: df[dim] (double), Input/Output

Note: the dimension, dim, of the array df must be at least max(1, n).

On entry: if fact = Nag_Factored, df must contain the $n$ diagonal elements of the diagonal matrix $D$ from the $LDL^{\mathrm{T}}$ factorization of $A$.

On exit: if fact = Nag_NotFactored, df contains the $n$ diagonal elements of the diagonal matrix $D$ from the $LDL^{\mathrm{T}}$ factorization of $A$.

8: ef[dim] (double), Input/Output

Note: the dimension, dim, of the array ef must be at least max(1, n−1).

On entry: if fact = Nag_Factored, ef must contain the $(n-1)$ subdiagonal elements of the unit bidiagonal factor $L$ from the $LDL^{\mathrm{T}}$ factorization of $A$.

On exit: if fact = Nag_NotFactored, ef contains the $(n-1)$ subdiagonal elements of the unit bidiagonal factor $L$ from the $LDL^{\mathrm{T}}$ factorization of $A$.

9: b[dim] (const double), Input

Note: the dimension, dim, of the array b must be at least:

• max(1, pdb × nrhs) when order = Nag_ColMajor;
• max(1, n × pdb) when order = Nag_RowMajor.
The $(i,j)$th element of the matrix $B$ is stored in:

• b[(j−1) × pdb + i − 1] when order = Nag_ColMajor;
• b[(i−1) × pdb + j − 1] when order = Nag_RowMajor.

On entry: the $n$ by $r$ right-hand side matrix $B$.

10: pdb (Integer), Input

On entry: the stride separating row or column elements (depending on the value of order) in the array b.

Constraints:

• if order = Nag_ColMajor, pdb ≥ max(1, n);
• if order = Nag_RowMajor, pdb ≥ max(1, nrhs).

11: x[dim] (double), Output

Note: the dimension, dim, of the array x must be at least:

• max(1, pdx × nrhs) when order = Nag_ColMajor;
• max(1, n × pdx) when order = Nag_RowMajor.

The $(i,j)$th element of the matrix $X$ is stored in:

• x[(j−1) × pdx + i − 1] when order = Nag_ColMajor;
• x[(i−1) × pdx + j − 1] when order = Nag_RowMajor.

On exit: if NE_NOERROR or NE_SINGULAR_WP, the $n$ by $r$ solution matrix $X$.

12: pdx (Integer), Input

On entry: the stride separating row or column elements (depending on the value of order) in the array x.

Constraints:

• if order = Nag_ColMajor, pdx ≥ max(1, n);
• if order = Nag_RowMajor, pdx ≥ max(1, nrhs).

13: rcond (double *), Output

On exit: the reciprocal condition number of the matrix $A$.
If rcond is less than the machine precision (in particular, if rcond = 0.0), the matrix is singular to working precision. This condition is indicated by a return code of NE_SINGULAR_WP.

14: ferr[nrhs] (double), Output

On exit: the forward error bound for each solution vector $\hat{x}_j$ (the $j$th column of the solution matrix $X$). If $x_j$ is the true solution corresponding to $\hat{x}_j$, ferr[j−1] is an estimated upper bound for the magnitude of the largest element in $(\hat{x}_j - x_j)$ divided by the magnitude of the largest element in $\hat{x}_j$.

15: berr[nrhs] (double), Output

On exit: the component-wise relative backward error of each solution vector $\hat{x}_j$ (i.e., the smallest relative change in any element of $A$ or $B$ that makes $\hat{x}_j$ an exact solution).

16: fail (NagError *), Input/Output

The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6  Error Indicators and Warnings

NE_ALLOC_FAIL

Dynamic memory allocation failed.

On entry, argument ⟨value⟩ had an illegal value.

NE_INT

On entry, n = ⟨value⟩. Constraint: n ≥ 0.

On entry, nrhs = ⟨value⟩. Constraint: nrhs ≥ 0.

On entry, pdb = ⟨value⟩. Constraint: pdb > 0.

On entry, pdx = ⟨value⟩. Constraint: pdx > 0.

NE_INT_2

On entry, pdb = ⟨value⟩ and n = ⟨value⟩. Constraint: pdb ≥ max(1, n).

On entry, pdb = ⟨value⟩ and nrhs = ⟨value⟩. Constraint: pdb ≥ max(1, nrhs).
On entry, pdx = ⟨value⟩ and n = ⟨value⟩. Constraint: pdx ≥ max(1, n).

On entry, pdx = ⟨value⟩ and nrhs = ⟨value⟩. Constraint: pdx ≥ max(1, nrhs).

NE_INTERNAL_ERROR

An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.

NE_MAT_NOT_POS_DEF

The leading minor of order ⟨value⟩ of $A$ is not positive definite, so the factorization could not be completed, and the solution has not been computed. rcond = 0.0 is returned.

NE_SINGULAR_WP

$D$ is nonsingular, but rcond is less than machine precision, meaning that the matrix is singular to working precision. Nevertheless, the solution and error bounds are computed because there are a number of situations where the computed solution can be more accurate than the value of rcond would suggest.

## 7  Accuracy

For each right-hand side vector $b$, the computed solution $\hat{x}$ is the exact solution of a perturbed system of equations $(A+E)\hat{x}=b$, where

$$|E| \le c(n)\,\varepsilon\,|R|\,|R^{\mathrm{T}}|, \quad \text{where } R = LD^{1/2},$$

$c(n)$ is a modest linear function of $n$, and $\varepsilon$ is the machine precision. See Section 10.1 of Higham (2002) for further details.

If $x$ is the true solution, then the computed solution $\hat{x}$ satisfies a forward error bound of the form

$$\frac{\|x-\hat{x}\|_\infty}{\|\hat{x}\|_\infty} \le w_c \,\mathrm{cond}(A,\hat{x},b)$$

where $\mathrm{cond}(A,\hat{x},b) = \big\| \, |A^{-1}| \big( |A|\,|\hat{x}| + |b| \big) \big\|_\infty / \|\hat{x}\|_\infty \le \mathrm{cond}(A) = \big\| \, |A^{-1}|\,|A| \, \big\|_\infty \le \kappa_\infty(A)$.
If $\hat{x}$ is the $j$th column of $X$, then $w_c$ is returned in berr[j−1] and a bound on $\|x-\hat{x}\|_\infty / \|\hat{x}\|_\infty$ is returned in ferr[j−1]. See Section 4.4 of Anderson et al. (1999) for further details.

## 8  Further Comments

The number of floating point operations required for the factorization, and for the estimation of the condition number of $A$, is proportional to $n$. The number of floating point operations required for the solution of the equations, and for the estimation of the forward and backward error, is proportional to $nr$, where $r$ is the number of right-hand sides.

The condition estimation is based upon Equation (15.11) of Higham (2002). For further details of the error estimation, see Section 4.4 of Anderson et al. (1999).

The complex analogue of this function is nag_zptsvx (f07jpc).

## 9  Example

This example solves the equations

$$AX=B,$$

where $A$ is the symmetric positive definite tridiagonal matrix

$$A = \begin{pmatrix} 4.0 & -2.0 & 0 & 0 & 0 \\ -2.0 & 10.0 & -6.0 & 0 & 0 \\ 0 & -6.0 & 29.0 & 15.0 & 0 \\ 0 & 0 & 15.0 & 25.0 & 8.0 \\ 0 & 0 & 0 & 8.0 & 5.0 \end{pmatrix}
\quad\text{and}\quad
B = \begin{pmatrix} 6.0 & 10.0 \\ 9.0 & 4.0 \\ 2.0 & 9.0 \\ 14.0 & 65.0 \\ 7.0 & 23.0 \end{pmatrix}.$$

Error estimates for the solutions and an estimate of the reciprocal of the condition number of $A$ are also output.

### 9.1  Program Text

Program Text (f07jbce.c)

### 9.2  Program Data

Program Data (f07jbce.d)

### 9.3  Program Results

Program Results (f07jbce.r)
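The program text above requires the NAG Library itself, but the factorize-and-solve core that Sections 1-3 describe is short enough to sketch independently. The NumPy sketch below is an illustrative re-implementation of the $LDL^{\mathrm{T}}$ steps applied to the data of this example; it is not NAG code, and it omits the condition estimation, error bounds, and iterative refinement that the real routine provides.

```python
import numpy as np

def ptsv(d, e, B):
    """Solve A X = B where A is symmetric positive definite tridiagonal,
    given its diagonal d and subdiagonal e, via the A = L D L^T
    factorization sketched in Section 3 (B must be a 2-D array)."""
    n = len(d)
    df = np.empty(n)               # diagonal of D
    ef = np.empty(max(n - 1, 0))   # subdiagonal of the unit bidiagonal L
    df[0] = d[0]
    for i in range(1, n):
        if df[i - 1] <= 0.0:       # leading principal minor not positive definite
            raise np.linalg.LinAlgError("matrix is not positive definite")
        ef[i - 1] = e[i - 1] / df[i - 1]
        df[i] = d[i] - ef[i - 1] * e[i - 1]
    if df[-1] <= 0.0:
        raise np.linalg.LinAlgError("matrix is not positive definite")
    X = np.array(B, dtype=float)
    for i in range(1, n):              # forward substitution: L Y = B
        X[i] -= ef[i - 1] * X[i - 1]
    X /= df[:, None]                   # diagonal solve:       D Z = Y
    for i in range(n - 2, -1, -1):     # back substitution:    L^T X = Z
        X[i] -= ef[i] * X[i + 1]
    return X

# Data from Section 9: diagonal, subdiagonal, and right-hand sides.
d = np.array([4.0, 10.0, 29.0, 25.0, 5.0])
e = np.array([-2.0, -6.0, 15.0, 8.0])
B = np.array([[6.0, 10.0], [9.0, 4.0], [2.0, 9.0], [14.0, 65.0], [7.0, 23.0]])

X = ptsv(d, e, B)
A = np.diag(d) + np.diag(e, -1) + np.diag(e, 1)
print(np.allclose(A @ X, B))  # True: the computed X reproduces B
```

Here the diagonal of $D$ comes out as (4, 9, 25, 16, 1), all positive, confirming that the example matrix is positive definite before the triangular solves proceed.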
https://www.emathzone.com/tutorials/basic-statistics/sampling-distribution.html
Sampling Distribution

Suppose we have a finite population and we draw all possible simple random samples of size $n$ without replacement or with replacement. For each sample we calculate a statistic (sample mean $\overline X$ or proportion $\widehat p$, etc.). All possible values of the statistic make a probability distribution which is called the sampling distribution.

The number of all possible samples is usually very large, and obviously the number of statistics (any function of the sample) will be equal to the number of samples if one and only one statistic is calculated from each sample. In fact, in practical situations, the sampling distribution has a very large number of values.

The shape of the sampling distribution depends upon the size of the sample, the nature of the population and the statistic which is calculated from all possible simple random samples. Some of the most well-known sampling distributions are:

(1) Binomial distribution
(2) Normal distribution
(3) t-distribution
(4) Chi-square distribution
(5) F-distribution

These distributions are called derived distributions because they are derived from all possible samples.

Standard Error

The standard deviation of a statistic is called the standard error of that statistic. If the statistic is $\overline X$, the standard deviation of all possible values of $\overline X$ is called the standard error of $\overline X$, which may be written as S.E.$\left( {\overline X } \right)$ or ${\sigma _{\overline X }}$. Similarly, if the sample statistic is the proportion $\widehat p$, the standard deviation of all possible values of $\widehat p$ is called the standard error of $\widehat p$ and is denoted by ${\sigma _{\widehat p}}$ or S.E.$\left( {\widehat p} \right)$.

Sampling Distribution of $\overline X$

The probability distribution of all possible values of $\overline X$ calculated from all possible simple random samples is called the sampling distribution of $\overline X$.
In brief, we shall call it the distribution of $\overline X$. The mean of this distribution is called the expected value of $\overline X$ and is written as $E\left( {\overline X } \right)$ or ${\mu _{\overline X }}$. The standard deviation (standard error) of this distribution is denoted by S.E.$\left( {\overline X } \right)$ or ${\sigma _{\overline X }}$, and the variance of $\overline X$ is denoted by $Var\left( {\overline X } \right)$ or ${\sigma ^2}_{\overline X }$. The distribution of $\overline X$ has some important properties:

• One important property of the distribution of $\overline X$ is that it is a normal distribution when the size of the sample is large. When the sample size $n$ is more than $30$, we call it a large sample. The shape of the population distribution does not matter: the population may be normal or non-normal, yet the distribution of $\overline X$ is approximately normal for $n > 30$, and the approximation improves as the number of samples grows. As the distribution of the random variable $\overline X$ is normal, $\overline X$ can be transformed into a standard normal variable $Z$ where $Z = \frac{{\overline X - \mu }}{{\sigma /\sqrt n }}$. The distribution of $\overline X$ has a t-distribution when the population is normal and $n \leqslant 30$. Diagram (a) shows the normal distribution and diagram (b) shows the t-distribution.

• The mean of the distribution of $\overline X$ is equal to the mean of the population. Thus $E\left( {\overline X } \right) = {\mu _{\overline X }} = \mu$ (the population mean). This relation holds for small as well as large sample sizes, in sampling both without replacement and with replacement.
• The standard error (standard deviation) of $\overline X$ is related to the standard deviation of the population $\sigma$ through the relations:

S.E.$\left( {\overline X } \right) = {\sigma _{\overline X }} = \frac{\sigma }{{\sqrt n }}$

This holds when the population is infinite (i.e., $N$ is very large) or when the sampling is done with replacement from a finite or infinite population.

S.E.$\left( {\overline X } \right) = {\sigma _{\overline X }} = \frac{\sigma }{{\sqrt n }}\sqrt {\frac{{N - n}}{{N - 1}}}$

This holds when sampling is without replacement from a finite population. The above two relations between ${\sigma _{\overline X }}$ and $\sigma$ are true for both small and large sample sizes.
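Both standard-error formulas, and the large-sample normality property, can be verified directly. In the sketch below (my own illustration: the population values, sample sizes, and the exponential population are arbitrary choices, not taken from the text), the finite-population case is checked exactly by enumerating every possible sample, and the with-replacement case is checked by simulation:

```python
import numpy as np
from itertools import combinations

# -- Without replacement: enumerate *all* samples from a small finite
#    population, so the identities hold exactly.
population = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
N, n = len(population), 2
xbars = np.array([np.mean(c) for c in combinations(population, n)])

mu, sigma = population.mean(), population.std()   # population mean and sd
print(xbars.mean() == mu)                         # True: E(X-bar) = mu

fpc = np.sqrt((N - n) / (N - 1))                  # finite population correction
print(np.isclose(xbars.std(), sigma / np.sqrt(n) * fpc))  # True

# -- With replacement, n > 30: sample means from a skewed (exponential,
#    mu = sigma = 1) population are still approximately normal.
rng = np.random.default_rng(0)
n_big = 40
means = rng.exponential(scale=1.0, size=(100_000, n_big)).mean(axis=1)
print(abs(means.mean() - 1.0) < 0.01)                  # True: mean close to mu
print(abs(means.std() - 1.0 / np.sqrt(n_big)) < 0.01)  # True: SE = sigma/sqrt(n)
z = (means - 1.0) / (1.0 / np.sqrt(n_big))             # standardized sample means
print(0.92 < np.mean(np.abs(z) < 1.96) < 0.97)         # True: ~95% within 1.96
```

The without-replacement checks are exact because every one of the $\binom{5}{2}=10$ samples is enumerated; the with-replacement checks are approximate, with simulation error shrinking as the number of simulated samples grows.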
http://scicomp.stackexchange.com/questions/74/symbolic-software-packages-for-matrix-expressions
# Symbolic software packages for Matrix expressions?

We know that $\mathbf A$ is symmetric and positive-definite. We know that $\mathbf B$ is orthogonal.

Question: is $\mathbf B \cdot \mathbf A \cdot \mathbf B^\top$ symmetric and positive-definite?

Answer: Yes.

Question: Could a computer have told us this?

Answer: Probably.

Are there any symbolic algebra systems (like Mathematica) that handle and propagate known facts about matrices?

Edit: To be clear, I'm asking this question about abstractly defined matrices. I.e., I don't have explicit entries for $A$ and $B$; I just know that they are both matrices and have particular attributes like symmetric, positive definite, etc.

What I'm missing is software that treats matrices symbolically (i.e., not as arrays). I'd want to be able to talk about some symmetric matrix $\mathbf C$ without having to fret about its entries. – J. M. Nov 30 '11 at 14:44

There are a few projects working on this. I happen to be familiar with the implementation in SymPy. It's buggy but slowly being built up. – MRocklin Nov 30 '11 at 14:47

This sounds like automated theorem proving. The trick then is to include a sufficient set of axioms in your engine so that it can then be deduced efficiently by automated reasoning (think PROLOG). If I were to design such a thing, the property you cite above is definitely something I'd encode as a fact/known relation rather than trying. On the other hand, there is Prof. Paolo Bientinesi at RWTH Aachen University. In his dissertation he talks about automatic derivation of linear algebra algorithms. He uses Mathematica in a symbolic way. aices.rwth-aachen.de:8080/~pauldj – Lagerbaer Nov 30 '11 at 19:31

I know Paolo's stuff and the FLAME library. I don't think it can do this. – Matt Knepley Nov 30 '11 at 21:21

I agree that computer algebra systems for matrices would be great, but they seem to be missing. I have put a bounty to increase the chance of getting an answer.
– Memming Mar 15 '12 at 19:29

    $ isympy
    In [1]: A = MatrixSymbol('A', n, n)

    In [2]: B = MatrixSymbol('B', n, n)

    In [3]: context = Q.symmetric(A) & Q.positive_definite(A) & Q.orthogonal(B)

    In [4]: ask(Q.symmetric(B*A*B.T) & Q.positive_definite(B*A*B.T), context)
    Out[4]: True

## Older answer that shows other work

So after looking into this for a while, this is what I've found. The current answer to my specific question is "No, there is no current system that can answer this question." There are, however, a few things that seem to come close.

First, Matt Knepley and Lagerbaer both pointed to work by Diego Fabregat and Paolo Bientinesi. This work shows both the potential importance and the feasibility of this problem. It's a good read. Unfortunately I'm not certain exactly how his system works or what it is capable of (if anyone knows of other public material on this topic, do let me know).

Second, there is a tensor algebra library written for Mathematica called xAct which handles symmetries and such symbolically. It does some things very well but is not tailored to the special case of linear algebra.

Third, these rules are written down formally in a couple of libraries for Coq, an automated theorem proving assistant (Google search for coq linear/matrix algebra to find a few). This is a powerful system which unfortunately seems to require human interaction.

After talking with some theorem prover people, they suggested looking into logic programming (i.e., Prolog, which Lagerbaer also suggested) for this sort of thing. To my knowledge this hasn't yet been done; I may play with it in the future.

Update: I've implemented this using the Maude system. My code is hosted on github.

When I found that there was no good system, my first instinct was to write a prolog program. :) – Memming Mar 21 '12 at 20:24

I added a link at the bottom to a side project of mine that deals with this problem.
– MRocklin May 17 '12 at 15:47

Thanks for the update MRocklin, I hope it goes well :) – Aron Ahmadia May 17 '12 at 16:29

It's been a while since I last used either of these packages, but I thought that you could do this in languages such as Mathematica through the use of assertions. Something like Assert[A, Symmetric] tells Mathematica that A is a symmetric matrix, and so on. I don't have access to either handy at the moment, so this is something that would have to be checked.

I think you mean the Mathematica command Assuming instead of Assert. Assuming will apply these assumptions when simplifying or integrating an expression, but the documentation is not clear about whether matrix properties are propagated. My guess is that such properties are not carried through symbolic computations. – Geoff Oxberry Jan 12 '12 at 1:28

That could be true. Like I said, this was eons ago (back in my graduate school days). But I do remember being able to do something like this once. (Perhaps it was with MuPad, as implemented in Scientific WorkPlace.) But I no longer have access to SWP to check that (Windows-only, and I don't have an emulator on my box). – aeismail Jan 12 '12 at 8:02

MuPAD is part of Matlab now. According to the documentation, the usage of assumptions is similar to that of Mathematica. – Geoff Oxberry Jan 12 '12 at 8:16

MuPAD can only deal with fixed-size matrices, and doesn't take arbitrary assumptions such as positive definiteness. Also, it cannot answer the question of the positive definiteness of B A B' originally asked. – Memming Mar 15 '12 at 21:09

@Memming: Fair enough. As I said, my memory of MuPAD was substantially out of date, as I last used the program regularly around 2006 (when I switched from PCs to Macs). – aeismail Mar 15 '12 at 21:13

Maple 15 cannot do it. It has no property "Orthogonal" for matrices (although it has Symmetric and PositiveDefinite).

Updated to Maple 16: there is no "Orthogonal" property there either.
– GertVdE Apr 17 '12 at 13:27

Some symbolic matrix computations (e.g., block matrix completion) can be done with the package NCAlgebra http://www.math.ucsd.edu/~ncalg/ (which runs under Mathematica). Bergman http://servus.math.su.se/bergman/ is a package in Lisp with similar capabilities.

I think most CAS systems can show this for 2x2 and 3x3 matrices given a symbolic orthonormal $\mathbf B$ construct, such as rotation matrices. In the end, you will have to decompose the result to figure out whether it is positive definite or not. Symmetry is easier to show. The question then becomes: what about an N-dimensional matrix? Maybe you can come up with an inductive scheme where the (N−1) × (N−1) case is assumed to be true, and then construct a new block matrix with overall size N × N to prove that it is positive definite and symmetric.

As for the final question of which software is better suited for the task (if any): my experience has been with MATLAB/MuPad and Derive (I still use it), and neither of them handles vectors and matrices very well. MATLAB breaks everything down into components, and Derive can declare non-scalars, but it does not apply any simplification rules to them. I hope this posting is going to provide some more insight into this kind of "hole" and how to fill it. For me, I want software that can help me simplify expressions with multiple dot and cross products of vectors, together with rotation matrices, using well-known identities such as $\vec a \times (\vec b \times \vec c) = (\vec a \cdot \vec b)\vec c - (\vec a \cdot \vec c) \vec b$.

In Mathematica you can at least check these properties for specific matrices.
For example, the matrix A as you described:

    In[1]:= A = {{2.0,-1.0,0.0},{-1.0,2.0,-1.0},{0.0,-1.0,2.0}};
            {SymmetricMatrixQ[A], PositiveDefiniteMatrixQ[A]}
    Out[2]= {True, True}

For matrix B:

    In[3]:= B = {{0, -0.80, -0.60}, {0.80, -0.36, 0.48}, {0.60, 0.48, -0.64}};
            Transpose[B] == Inverse[B]
    Out[4]= True

Then:

    In[5]:= c = B.A.Transpose[B];
            {SymmetricMatrixQ[c], PositiveDefiniteMatrixQ[c]}
    Out[6]= {True, True}

See the Mathematica Matrices and Linear Algebra documentation.

It is my understanding that the predicates above are verifying that property for a given matrix, rather than symbolically propagating these properties as Matt asks for above. – Matt Knepley Nov 30 '11 at 14:43

Ah yes. Sorry about that. I misunderstood. – lynchs Nov 30 '11 at 15:19
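The underlying fact is a two-line argument: $(BAB^\top)^\top = B A^\top B^\top = BAB^\top$, and for $x \ne 0$, $x^\top B A B^\top x = (B^\top x)^\top A (B^\top x) > 0$ because orthogonality of $B$ makes $B^\top x \ne 0$. The same kind of numeric spot check as in the Mathematica answer can be scripted in NumPy (random test matrices; nothing here is specific to any CAS):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5

# A random symmetric positive-definite A: M^T M + I is SPD for any M.
M = rng.standard_normal((n, n))
A = M.T @ M + np.eye(n)

# A random orthogonal B from the QR factorization of a random matrix.
B, _ = np.linalg.qr(rng.standard_normal((n, n)))
print(np.allclose(B.T @ B, np.eye(n)))           # True: B is orthogonal

C = B @ A @ B.T
print(np.allclose(C, C.T))                       # True: C is symmetric
print(bool(np.min(np.linalg.eigvalsh(C)) > 0))   # True: C is positive definite
```

As Matt Knepley's comment notes, this only verifies the property for concrete matrices; it does not propagate the abstract facts the question asks about.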
https://reklamy-faster.pl/lwm379/d3da16-cyclical-unemployment-graph
# cyclical unemployment graph Cyclical unemployment occurs because of these cycles. This graph shows the unemployment rate and the year over year change in employment vs. recessions. it increases when the economy enters recession and decreases when it makes a recovery. Construction jobs returned to meet this renewed demand in the housing sector, and cyclical unemployment declined. A)cyclical unemployment will decrease B)inflationary pressures will be greater C)the economy will experience lower economic growth D)input prices will rise in the short run Institutional unemployment consists of the component of unemployment attributable to institutional arrangements, such as high minimum wage laws, discriminatory hiring practices, or high rates of unionization. Graph and download economic data for from Jan 1948 to Oct 2030 about labor underutilization, headline figure, civilian, 16 years +, labor, household survey, unemployment, rate, … Its caused by a downturn in the business cycle. Moderating cyclical unemployment during recessions is a major motivation behind the study of economics and the goal of the various policy tools that governments employ to stimulate the economy. Nonfarm payrolls decreased by 524,00 in December, and November payrolls were revised down to a loss of 584,000 jobs. It is clear from the graph above that the actual unemployment rate (represented by the red line) has oscillated around the natural rate of unemployment (blue line). Full employment is a situation in which all available labor resources are being used in the most economically efficient way. During an upswing (expansionary phase of a business cycle) unemployment goes down; During a downswing (contractionary phase of a business cycle) unemployment goes up. We hope you like the work that has been done, and if you have any suggestions, your feedback is highly valuable. The offers that appear in this table are from partnerships from which Investopedia receives compensation. 
You might say hey it's a point on my graph, maybe I need to put it on this curve someplace. The cyclical unemployment falls when an economy expands while it increases when there is a downturn in an economy. The graph below shows the AD-AS diagram for Spain. Cyclical unemployment is at the lowest point when business cycles are at maximum. It results from long-term or permanent institutional factors and incentives in the economy. Similarly, if the economic output reduces, cyclical unemployment increases, as the business cycle is low. Hence, the demand for labour decreases D1 -> D2. Frictional unemployment is the time a worker spends between jobs. This relationship is expressed by Okun’s law. This is then likely to lead to a fall in the demand for labor as firms cut back on production. Here, view the full answer. During recessions (represented by the grey areas), the actual rate has shot up abruptly which represents a steep surge in cyclical unemployment. Cyclical unemployment is associated with the business cycles in an economy. Frictional Unemployment. People began buying homes again or remodeling existing ones, causing the prices of real estate to climb once again. Cyclical unemployment is one factor among many that contribute to total unemployment, including seasonal, structural, frictional, and institutional factors. Graph and download economic data for Unemployment Rate in California (CAUR) from Jan 1976 to Oct 2020 about CA, unemployment, rate, and USA. Definition: Cyclical unemployment is a type of unemployment which is related to the cyclical trends in the industry or the business cycle.If an economy is doing good, cyclical unemployment will be at its lowest, and will be the highest if the economy growth starts to falter. Unemployment typically rises during recessions and declines during economic expansions. What these three terms have in common is the main factor causing unemployment: demand. You see it right over here. 
From the standpoint of the supply-and-demand model of competitive and flexible labor markets, unemployment represents something of a puzzle. it is unemployment that occurs as a result of a downturn in the economy. During recoveries, on the other hand, the actual unemployment rate has gravitated towards the natural rate. A colleague at the Ministry of Macroeconomics insists that cyclical unemployment is impossible as labor markets naturally move towards equilibrium. With the exception of cyclical unemployment, the other classes can occur even at the peak ranges of business cycles, when the economy is said to be at or near full employment. While the 2007-2009 global recession caused cyclical unemployment, it also increased structural unemployment in the … Macroeconomics studies an overall economy or market system, its behavior, the factors that drive it, and how to improve its performance. And so you could see at, during recessions or shortly after recession ends, when the unemployment rate, when the regular unemployment rate is very high, your cyclical unemployment, the unemployment caused due to the business cycle, is going to be positive. Similarly, construction workers living in areas where construction during the cold months is challenging may lose work in winter. It is a mismatch between the supply and demand for certain skills in the labor market. XPLAIND.com is a free educational website; of students, by students, and for students. It is because some sources of unemployment such as the mismatch between available jobs and workers, exist during all phases of business cycle. When the cyclical unemployment is low, more people are employed, there is more income is to be spent on a given amount of goods and hence inflation rises.eval(ez_write_tag([[300,250],'xplaind_com-box-4','ezslot_4',134,'0','0'])); The following graph shows the relationship between actual unemployment rate and natural rate of unemployment: FRED, Federal Reserve Bank of St. 
Louis; https://fred.stlouisfed.org/. And that would not be the case here because we have some level of cyclical unemployment. A decade later, during the 2001 recession, the natural rate of unemployment fell … It follows that the actual unemployment rate is the sum of rate of frictional unemployment, rate of structural unemployment and rate of cyclical unemployment:eval(ez_write_tag([[300,250],'xplaind_com-medrectangle-4','ezslot_2',133,'0','0'])); $$\text{u} _ \text{a}=\text{u} _ \text{f}\ +\ \text{u} _ \text{s}\ +\ \text{u} _ \text{c}$$. Structural unemployment is a longer-lasting form of unemployment caused by fundamental shifts in an economy. When the cyclical unemployment is low, more people are employed, there is more income is to be spent on a given amount of goods and hence inflation rises. In most cases several types of unemployment exist at the same time. Frictional unemployment is short-term joblessness caused by the actual process of leaving one job to start another, including the time needed to look for a new job. This shift can be seen in the second graph. b. short-run fluctuations around the natural rate of unemployment. This rise in unemployment was cyclical unemployment. Certain retail stores hire seasonal workers during the winter holiday season to better manage increased sales, then release those workers after the holidays when demand lessens. Cyclical unemployment This type of unemployment happens when the U.S. economy cannot provide enough jobs for every U.S. adults over the age of 16 who wants one. With the overall number of unemployed climbing, and more borrowers unable to maintain payments on their homes, additional properties were subject to foreclosure, driving demand for construction even lower. This is illustrated in the graph below where an economic contraction causes equilibrium to shift from E 0 to E 2. Gross domestic product (GDP) is used in measuring the rise and fall of economic output. 
Cyclical unemployment is the unemployment caused by the upswings and downswings of business cycles in the economy; it refers to how unemployment changes with the economic cycle, and it is the impact of economic recession or expansion on the total unemployment rate. When there is a recession or a slowdown in growth, we see rising unemployment because of plant closures, business failures, and an increase in worker lay-offs and redundancies; workers who are no longer needed are released by their companies, resulting in their unemployment. With cyclical unemployment, we are producing below our potential. Cyclical unemployment generally rises during recessions and falls during economic expansions and is a major focus of economic policy. By contrast, frictional unemployment is the result of employment transitions within an economy and naturally occurs even in a growing, stable economy; other types include structural, seasonal, and institutional unemployment, and multiple types of unemployment often exist at the same time. The unemployment rate is never zero, not even at the peak of economic booms. The graph below shows cyclical unemployment: historical data show that during the recession of 1990–1991, the natural rate of unemployment was about 5.9% while the actual unemployment rate was 7.0%. Most business cycles eventually reverse, with the downturn shifting to an upturn, followed by another downturn.
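The gap between the 1990–1991 rates quoted above is exactly the cyclical component. A minimal Python sketch of that arithmetic (the figures come from the text; the code is only illustrative):

```python
# Cyclical unemployment as the gap between the actual and natural rates.
# Figures are the 1990-1991 recession values quoted above, in percent.
actual_rate = 7.0    # u_a: actual unemployment rate
natural_rate = 5.9   # u_n: natural rate (frictional + structural)

# u_c = u_a - u_n
cyclical_rate = round(actual_rate - natural_rate, 1)
print(cyclical_rate)  # 1.1
```

So in that recession, roughly 1.1 percentage points of the 7.0% unemployment rate were attributable to the business cycle rather than to frictional or structural causes.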
Seasonal unemployment can include any workers whose jobs are dependent on a particular season. The actual rate of unemployment (u_a) can be defined as the sum of the natural rate of unemployment (u_n) and the rate of cyclical unemployment (u_c):

$$u_a = u_n + u_c$$

The rate of unemployment that prevails during all phases of a business cycle is called the natural rate of unemployment (u_n); it changes in response to non-cyclical factors such as demographic changes, changes in the minimum wage, etc. Frictional unemployment (u_f) results from the time it takes to match suitable candidates to jobs. Cyclical unemployment, by contrast, is the component of overall unemployment that results directly from cycles of economic upturn and downturn. When the economy enters a recession, many of the jobs lost are considered cyclical unemployment: consumers spend less on goods and services (shown in the shift in AD to AD1), and when demand for a product or service declines, there can be a corresponding reduction in production to compensate. During the Great Recession, for example, approximately two million workers in the construction field became unemployed. More recently, during the COVID-19 pandemic, the unemployment rate for California soared to 22.3 percent in a single week, when another 554,000 claims for unemployment insurance were filed by COVID-19 impacted workers. Fiscal policy uses government spending and tax policies to influence macroeconomic conditions, including aggregate demand, employment, and inflation.
Cyclical unemployment is a type of unemployment where labor forces are reduced as a result of business cycles or fluctuations in the economy, such as recessions (periods of economic decline); it occurs during a recession and is unemployment due to a period of negative economic growth, or economic slowdown. Rather than being caused by the ebbs and flows of the business cycle, structural unemployment is caused by fundamental shifts in the makeup of the economy, for example, jobs lost in the buggy-whip sector once automobiles came to dominate. Still, cyclical unemployment is similar to structural unemployment in that the business cycle, too, is highly dynamic and changes all the time. On the labour force diagram, cyclical (demand-deficient) unemployment can be shown as follows: as an individual firm's demand falls, the firm decreases its output, and the demand curve for labour shifts to the left, showing a decrease. Unemployment is the term for when a person who is actively seeking a job is unable to find work; the second two classes, structural and frictional, make up the natural unemployment rate. The output gap is the difference between actual gross domestic product (GDP) and potential GDP. As the economy recovered over the years following the Great Recession, the financial sector returned to profitability and began to make more loans. During the COVID-19 pandemic, the unemployment rate for the U.S. jumped to 19.7 percent; these rates are the highest ever recorded for either the U.S. or California since the Great Depression. In some instances, the actual unemployment rate is even lower than the natural rate, which tells us that cyclical unemployment is negative in that period. Official unemployment statistics will often be adjusted, or smoothed, to account for seasonal unemployment; this is known as a "seasonal adjustment."
Put simply, cyclical unemployment is a form of unemployment that occurs as a result of an economic decline or periods of negative economic growth in a business cycle. Other names for cyclical unemployment are "deficient-demand unemployment" or "Keynesian unemployment"; it is related to demand-deficient unemployment, and the terms are often used interchangeably. Cyclical unemployment is temporary and depends on the length of economic contractions caused by a recession; a typical recession lasts around 18 months. When economic output falls, the business cycle is low and cyclical unemployment will rise; during the Great Recession, for example, the economy lost over 1.5 million jobs in Q4 alone. Conversely, cyclical unemployment falls when an economy expands: the US unemployment rate dropped to 6.9 percent in October 2020, from the previous month's 7.9 percent and compared with market expectations of 7.7 percent, as the number of unemployed persons fell by 1.5 million to 11.1 million and employment rose by 2.2 million to 149.8 million. Cyclical unemployment also features in the Phillips curve, which shows that decreases in cyclical unemployment cause demand-pull inflation. Seasonal unemployment, by contrast, occurs as demand shifts from one season to the next. As an example of structural unemployment, consider Bill, who worked as a bellman at a hotel: the hotel invested in robot butlers to provide many bellman services, and Bill lost his job. Cyclical unemployment relates to the irregular ups and downs, or cyclical trends in growth and production, as measured by gross domestic product (GDP), that occur within the business cycle, and it is represented by the difference between the unemployment rate and the natural rate of unemployment. The rate of natural unemployment represents the combined effect of (a) frictional unemployment and (b) structural unemployment:

$$u_n = u_f + u_s$$

The actual unemployment rate (u_a) fluctuates around the natural rate. Structural unemployment (u_s) occurs when, at the prevailing wage, there is a surplus of workers and the market is not able to reach equilibrium due to wage rigidity; frictional unemployment is the time a worker spends between jobs. As supply levels are reduced, fewer employees are required to meet the lower standard of production volume. During the financial crisis in 2008, the housing bubble burst and the Great Recession began; as more and more borrowers failed to meet the debt obligations associated with their homes, and qualifications for new loans became more stringent, the demand for new construction declined. In a recession, cyclical unemployment will tend to rise sharply. Cyclical unemployment is one of the main classes of unemployment recognized by economists and is the main cause of high unemployment rates; peaks in unemployment correspond with swings in the economic cycle.
Economists describe cyclical unemployment as the result of businesses not having enough demand for labor to employ all those who are looking for work at that point within the business cycle. On a labour market diagram, because wages do not fall, there appears an unemployment of amount Q1-Q2 (the distance between D1 and D2 at wage W1): as individual firms' demand falls, the demand for labour decreases from D1 to D2. Conversely, when business cycles are at their peak, cyclical unemployment will tend to be low, because there is high demand for labor; this is a result of maximizing all economic output. If demand only grows by 1%, then there can be a rise in spare capacity and hence a rise in demand-deficient unemployment. Cyclical unemployment closely mimics the output gap: when cyclical unemployment is high, the output gap is high too, and vice versa. This relationship is expressed by Okun's law. As the economy recovered in the years after the Great Recession, people began buying homes again or remodeling existing ones, causing the prices of real estate to climb once again. To determine which core PCE categories are more cyclical or acyclical, the Mahedy-Shapiro method estimates a Phillips curve model that relates the changes in prices for a category to the unemployment gap. Stephanie Aaronson and Francisca Alba examine how shocks to the economy, like the coronavirus, play out at the metropolitan level, with a specific focus on the unemployment rate.

(by Obaidullah Jan, ACA, CFA; last modified on Mar 30, 2019)
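The Okun's-law link between cyclical unemployment and the output gap can be sketched numerically. Note that the coefficient `c = 2.0` below is a common textbook assumption, not a figure from this text, and the function name is chosen for illustration only:

```python
def output_gap_pct(actual_u, natural_u, c=2.0):
    """Approximate output gap (% of potential GDP) via a textbook form of
    Okun's law: each percentage point of cyclical unemployment (u_a - u_n)
    maps to roughly c percent of lost output. The coefficient c is an
    assumption, often quoted near 2 in textbooks."""
    return round(-c * (actual_u - natural_u), 1)

# With the 1990-1991 figures (7.0% actual, 5.9% natural):
print(output_gap_pct(7.0, 5.9))  # -2.2, i.e. output about 2.2% below potential
```

The sign convention matches the text: positive cyclical unemployment corresponds to output below potential (a negative gap), and when the actual rate equals the natural rate the gap is zero.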
The Health Occupations Aptitude Examination is also known as PSB Test, HOAE Test, Health Occupations Aptitude Exam, HOAE PSB test, PSB HOAE, and PSB health occupations aptitude examination. Therefore, if you are told to take a test with any of the names above, know that they are all referring to the same test. It is essential to prepare for the HOAE test, as you have to pass it in order to be accepted to the healthcare program in which you wish to study. The PSB HOAE Test is a 2-hour and 15-minute test with 305 questions. It comprises five parts: academic aptitude, spelling, reading comprehension, information in the natural sciences, and vocational adjustment index (psychological assessment). The fee varies by program, typically ranging from about $25 to $60. Usually, the test takes place by appointment at the college to which you are applying. During the Covid-19 pandemic, at-home remote testing is possible via a proctored online setting with Pearson Vue, sometimes for an additional fee.

### Did you know?

The PSB Test has five sections: (1) academic aptitude, (2) spelling, (3) reading comprehension, (4) information in the natural sciences, and (5) vocational adjustment index. Scores are ranked by percentile, with top performers being selected for the health occupation program. To excel on the test, you will need to read through the instructions carefully and work quickly. This Health Occupations Aptitude Exam predicts your ability to complete the healthcare program which prepares qualified healthcare personnel. It is a standardized pre-admission test required for application to various nursing schools and healthcare programs in colleges and universities, including Practical Nursing (PPNSG), Cardiovascular Technology, Dental Hygiene, Pharmacy Technician (PPHTN), and Veterinary Technician (PVETT). In addition to passing the HOAE test, you may be required to meet additional admission requirements, specific to each program.
## PSB HOAE Question Types Explained

The PSB HOAE Test is divided into five parts. Each part measures an ability or characteristic that is needed for your success in your chosen healthcare program, and the total time limit is 2 hours and 15 minutes:

#### Part 1 – Academic Aptitude

This section tests your ability to learn and recall information. It is correlated with, and thus predicts, academic success. The subsections consist of the following three types of questions: verbal questions, arithmetic questions, and non-verbal questions.

#### Verbal Subtest Question Format/Instructions

This subtest consists of vocabulary-related questions. You will be presented with five words. Four share some similarity in their meaning and one is different. You need to decide which word is most different in meaning from the other words.

Winning Tip for Verbal Questions

The similarity of the words may be based on many aspects, but the most common case is that the words are actually synonyms. The different word, however, doesn't have to have the opposite meaning. It may represent a higher/lower level of a quality, or have an outstanding characteristic that differentiates it from the rest of the words. In some cases, the odd word may not be related at all to the other words. Usually in these cases, the odd word is spelled or pronounced similarly to another word (a homonym), which may be related to the other words.

Try It Yourself – Verbal Sample Question

#### Arithmetic Subtest Question Format/Instructions

This section assesses your skills in various mathematical concepts and your computational speed. The questions are short word problems, and you must choose the correct answer among five answers. To succeed, you must sharpen your skills in solving questions that involve the four basic operations, as well as fractions, decimals, percents, and practical unit conversions.
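As a quick refresher on the conversions this subtest draws on, here is a minimal Python sketch (the numbers are made-up practice values, not actual test items):

```python
from fractions import Fraction

# Fraction -> decimal -> percent: 3/8 = 0.375 = 37.5%
frac = Fraction(3, 8)
decimal = float(frac)           # 0.375
percent = decimal * 100         # 37.5

# Percent word problem: what is 15% of 80?
part = round(0.15 * 80, 2)      # 12.0

# Practical unit conversion: 2.5 hours in minutes
minutes = 2.5 * 60              # 150.0

print(decimal, percent, part, minutes)
```

On the test itself you will of course do these steps mentally or on paper, but being able to move fluidly between fraction, decimal, and percent forms is exactly the skill being measured.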
Winning Tip for Arithmetic Questions

Elimination is one of the most important principles in solving multiple-choice problems. You should always remember that in order to find the correct answer you do not necessarily have to solve the problem through and through. If you find sufficient evidence to eliminate all options but one, this last option must be the correct answer, and you can mark it as the correct answer and move on.

Try It Yourself – Arithmetic Sample Question

#### Non-Verbal Subtest Question Format/Instructions

These questions measure your skills in mental manipulation and relationship recognition. You will be presented with a pair of shapes and a third shape. Your goal is to figure out the relationship between the first pair of shapes and find a shape among the five answers that creates the same relationship with the third shape.

Winning Tip for Non-Verbal Questions

Keep in mind that you must always base the analogy on the relationship between the first pair of shapes; never base it on a plausible relationship between the first shape of the first pair and the third given shape.

Try It Yourself – Non-Verbal Sample Question

#### Part 2 – Spelling

This section tests your ability to spell accurately. It requires you to spell terms that reflect your educational background and communication skills. You will be presented with a few spelling variations, and you are required to mark the correct spelling. The correct answer is always one of the options.

#### Try a Spelling Sample Question

Winning Tip for Spelling Questions

People tend to believe that spelling is a skill you are either good at or not. However, spelling is definitely a skill you can improve; moreover, if your goal is simply to pass the test, you can memorize the tricky spellings for test day and then "let go" of them afterwards. Work hard on remembering the exceptions to the common spelling rules. These words are often used on tests, and familiarizing yourself with them will prove to be an advantage on test day.
#### Part 3 – Reading Comprehension

This section assesses your ability to read, understand, and interpret written material. To answer the questions, you will extract information from the passage, grasp the author's main idea, and observe how the ideas are organized.

#### Try a Reading Comprehension Sample Question

Winning Tip for Reading Comprehension Questions

Choose the best answer. Read the directions and questions carefully. Remember that you will often have a choice of four or five answers, at least two of which may appear plausible or partially correct. You are, in most instances, required to select the best answer, not merely an answer which is "acceptable." The purpose behind such a question is to test your judgment, your ability to draw proper inferences, and your appreciation of the finer shades of meaning. The ability to make such distinctions between two or more statements or inferences, all of which may appear to be valid, is the true mark of the discriminating reader.

#### Part 4 – Information in the Natural Sciences

This section covers your understanding of natural sciences subjects such as chemistry, biology, health, and safety. Knowledge in this section is mandatory for success in the healthcare field.

#### Try a Science Sample Question

Winning Tip for Science Questions

There are no tricks in these questions. You either know the answer, have intuition for the correct answer, or you do not. Therefore, the best tip is: study! Find a good textbook and make sure that you are informed in the required fields of knowledge. If you do not have enough time to study them top to bottom, find a tutorial exactly like the one offered by iPrep that reviews the essentials across the board.

#### Part 5 – Vocational Adjustment Index

This section of the PSB HOAE Test is practically a personality profiling test.
This section includes statements relating to your self-image and circumstances relating to a future work environment. While this section is not a test, it should still be taken seriously. It is designed to take notice of your feelings, attitudes, and opinions about certain things and will be taken into account when choosing you for a role. Keep in mind that there are no correct or incorrect answers here. Relax, and answer honestly. Mark whether you agree or disagree with the statement and move on to the next one immediately. At the moment, iPrep's preparation course does not cover this section of the test.

#### Try a Vocational Adjustment Sample Question

Winning Tip for Vocational Adjustment Questions

Some statements may seem general, referring to your general personality and habits. Right? Wrong! You must never read and respond to the statements from a general perspective. This questionnaire assesses your psychological suitability to become a health occupation practitioner, and you must take this perspective while answering questions. Believe it or not, it makes a difference while taking this test.

## PSB HOAE Preparation Strategies

### Clear Ample Time for Preparation

Nowadays, many pre-employment and admissions tests are short and provide a general overview of your mental skills. This is not the case with PSB tests. In order to do well on the first four sections of the test, and especially in the arithmetic and science sections, you must obtain a considerable corpus of knowledge and skills. If a few years have passed since you last dealt with these topics, you will need ample time to regain your knowledge and confidence. Therefore, start studying at least one month in advance, and preferably even 2-3 months before test day. Like at the gym, your brain will process information and internalize new knowledge better and better over continual preparation sessions.
### Identify Your Strengths and Weaknesses and Make a Study Plan Even if you start practicing way in advance, it would still be in your favor to quickly identify your strengths and weaknesses. Areas of weakness might undermine your total score, and to avoid that, you should spend more time preparing for them. This might mean reviewing natural science topics you are not familiar with, exceeding your reading pace, or learning how to avoid annoying spelling mistakes. Make a study plan that incorporates these observations. Spend at least twice as much time on areas of weakness but do not neglect your strong areas—it is important to review them, too, before the test in order to refresh your knowledge and sharpen your skills. ### Choose Your Preparation Course Wisely Your ultimate goal is to choose a preparation course that accurately covers the HOAE test subjects. You should look for a course that goes over all the different parts. Make sure that it’s one in which you can find genuine reviews telling that it was beneficial. Keep in mind the following aspects: 1. The course should offer PSB-specific test materials. Do not settle for a generic nursing or health occupations test practice materials as the format of the questions might be different. Practicing with the same format is essential for maintaining confidence upon taking the actual test and for test success. 2. The course should review all the relevant concepts and only those—you surely do not want to miss any topic you’ll be tested on but you also do not want to waste time practicing topics you don’t need to master. 3. Look for courses that were written by real people, that have real user reviews, and that offer customer support in case you need clarifications. Unfortunately, there is a considerable number of preparation courses for the PSB HOAE test that are merely a collection of practice materials from various sources. These courses are often not up to date. 
In addition, look for an online course or a book that was recently published by a familiar author. ### Find a Relevant and Focused Guide for the Natural Sciences Section If you think about the entire scope of knowledge you are required to know in the natural sciences section, you will realize that it is vast. If it has been a few years since you studied these materials, you may really lose your confidence. The solution is to find a guide that is broad enough to encompass the entire lot of knowledge required but focused and concise enough so you do not delve into unnecessary details. ### Learn How to Optimally Answer Multiple-Choice Problems All the questions of the HOAE test are multiple-choice. This means that you have a chance of 1 in 3/4/5 to get a question right. If you improve your skills in answering multiple-choice questions, you will directly increase your chances of success. This skill consists of acquiring methods of elimination and educated guesses as well as time-saving techniques in solving numerical problems. You should find a guide that includes these methods. You can be certain that the iPrep tutorials are well aware of this feature and offer solving tips and time-saving methods in answering multiple-choice questions. ### Practice Timed Simulations Reviewing all the required topics is the basis of your learning but a good preparation approach will be to test yourself on the accumulated knowledge. Look for preparation resources that most accurately simulate the actual Health Occupations Aptitude Test, which means taking a practice test that is as long and under the same time constraints as the real test. Undertaking such a PSB test simulation will ensure that you are not surprised on test day by the vast amount of questions and the severe time constraints. The familiarity alone is worth a couple of points on test day. That, coupled with practicing timed simulations, will surely help you improve this skill. 
## Test Features

### PSB HOAE Test Fast Facts (tl;dr)

- Total of 305 questions.
- The test is 2 hours and 15 minutes long.
- Five timed test sections:
  - Academic Aptitude (verbal, arithmetic, non-verbal) – 75 questions in 40 minutes
  - Spelling – 45 questions in 15 minutes
  - Reading Comprehension – 35 questions in 35 minutes
  - Information in the Natural Sciences – 60 questions in 25 minutes
  - Vocational Adjustment Index – 90 questions in 15 minutes
- The test may be taken once per year.
- Calculators and cell phones are not permitted.
- You must present a photo ID to take the test.

Here are additional important features you should know about the PSB HOAE Test:

### Common Names of the PSB HOAE Test

As the Health Occupations Aptitude Exam (HOAE) is the most common PSB test, people in forums may refer to it by various names. It is highly likely that a reference to any of the following names is relevant to you:

- PSB Test / PSB Exam
- HOAE Test / HOAE Exam
- Health Occupations Aptitude Exam
- HOAE PSB Test
- PSB HOAE Test

### PSB Test Vs. HOAE Test

You may wonder if there is a difference between the terms. Well, there is, but probably not for you. The Psychological Services Bureau (PSB) is the publisher of the Health Occupations Aptitude Exam (HOAE). The HOAE test is the most common PSB test, and if you are scheduled to take a PSB, you are most likely to take the HOAE exam. PSB publishes three main pre-admission tests. iPrep’s preparation course is tailored to the HOAE format but can prepare you for each of these tests, as the topics overlap to a great extent:

- Aptitude for Practical Nursing Examination (APNE) – very similar to the HOAE test. Includes a section that assesses judgment and comprehension in practical nursing situations instead of the reading comprehension section.
- Registered Nursing School Aptitude Examination (RNSAE) – includes exactly the same sections as the HOAE but with a different number of questions in each section.
- Health Occupations Aptitude Examination (HOAE) – the exam described on this page.

### Closed-Book Exam

The PSB HOAE exam is a closed-book exam, in which you are not allowed to take any notes, books, or other reference material into the examination room, and you have to rely entirely on your knowledge and memory to answer the questions. In addition, no calculators, formula sheets, or other electronic devices are permitted.

### Retake Restriction

The PSB HOAE Test retake policy is determined by the college to which you are applying. In almost all cases, you cannot take the test more than once per academic year! In some cases you may only take it twice in a three-year period. Therefore, ample preparation is required. On the other hand, there are a few cases with less strict retake policies, and either way, the highest score you achieve will be considered by the college.

## Technical Facts

### 1. Analytically Standardized Test

The PSB HOAE test was established on criteria related to predictive validity and reliability, which have been confirmed through the actual use of the tests. It is also standardized using a reliable sample of applicants for admission to a specific health program. Moreover, institutions also rely on it, as the test serves as a dependable aid in successful student selection.

### 2. Easy Interpretation of Scores

You get individual reports that consist of an easy-to-understand graphic portrayal of your suitability for healthcare education programs. The report also provides you with a usable comparison listing of your test results.

### 3. Psychological Testing is Used

The psychological testing serves as a tool to determine your unique potential for a specific program. It is used in the selection process to assist the institutions in placing you where all your assets can be maximized. The positive use of these test results also leads to satisfying education, training, and work experience.

### Did you know?

The PSB HOAE Test can only be taken once per academic year!

### 4. Results and Score

If you opt for a computer-based test, your results will be available immediately, and a copy will be sent to the institution. However, the offline test is scored within 24 hours.

## Results Scale and Interpretations

If you want a career in healthcare in the United States or Canada, you will need to pass the Health Occupations Aptitude Exam! Practice with simulated tests well in advance so you can increase your chances of getting into the program you want!

What is considered a good/passing score? That depends on the college/program you apply for. There is no passing score. Scores are used for ranking you among the other applicants: the better your score, the higher your ranking. To provide some indication, though, the total possible score on the PSB HOAE is 380. Average PSB scores depend on the program you apply for, as scores are normalized per profession. For example, the average score for surgical technicians is 240, for radiography 255, and for dental hygienists 270. It is recommended to contact the registrar of your program to understand what your target score is.

Average PSB score is 277 (Dental Hygienists). Source: Melissa Gail Efurd, dissertation

Colleges usually get the test results and applicant reports within 24 hours; however, in most cases you will only be able to know your own score after two weeks. The test results for the Health Occupations Aptitude Examination are marked in two ways:

1. Raw Scores
2. Percentile Points

Each raw score shows the number of questions that you have answered correctly in one section. For example, if the reading comprehension section consists of 35 questions, and you have answered 20 correctly, your raw score will be 20. Later on, the raw scores will be transformed into a percentile.
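To make the raw-score-to-percentile conversion concrete, here is a minimal Python sketch. Note that the actual PSB norming distributions are profession-specific and not public, so the cohort below is entirely hypothetical; the function only illustrates how a percentile rank can be derived from a raw score.

```python
from bisect import bisect_left

def percentile(raw_score, cohort_scores):
    """Illustrative percentile rank: the share of the cohort scoring
    below raw_score. The real PSB norms are proprietary."""
    ranked = sorted(cohort_scores)
    below = bisect_left(ranked, raw_score)  # count of scores strictly below
    return round(100 * below / len(ranked))

# Hypothetical cohort of 10 raw scores on a 35-question section
cohort = [12, 15, 18, 20, 22, 25, 28, 30, 31, 33]
print(percentile(20, cohort))  # 3 of 10 scores are below 20 -> 30
```

The same raw score can map to very different percentiles depending on how the cohort performed, which is exactly why identical raw scores in the arithmetic and non-verbal sections can yield different percentile scores.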
The percentile points decide the rank that you have earned in the examination. For example, in the report below, the same raw score in the arithmetic and non-verbal sections is converted into a very different percentile score.

The score earned on the Academic Aptitude Test consists of the scores earned on each of its subtests. However, your percentile in this section is computed from a separate distribution of raw scores. Further, there are lines on your profile chart that provide a graphic record of the percentile you have earned in all five sections and three subtests. For the verbal subtest, you will get a rating from “very low” to “superior” in the left margin.

The percentile scores of all test sections are then combined into a final score, which is used for your assessment. For example, in the test report below, a student is ranked rather low for the non-verbal section (percentile 54) and very high for reading comprehension (percentile 94). The total score is 299, which is way above average.

To assess performance on this Health Occupations Aptitude Exam, you will have to consider results from all five tests and three subtests. Your results and overall percentile showcase your strengths and weaknesses in each part. The scores are then compared with those of other applicants, and the eligible ones are selected for the health occupation program.

Note: There are many factors that affect your acceptance or rejection from a specific health care program. Test scores are just one of them.

## PSB HOAE FAQs

**What is the PSB HOAE Test?**

The PSB Health Occupations Aptitude Examination or PSB HOAE Test is a standardized pre-admissions test required for admission to various nursing schools and healthcare programs in colleges and universities. This exam measures your skills, abilities, knowledge, and attitude. Getting a high score on the test will help these schools and universities feel confident in your ability to complete the healthcare program, which prepares qualified healthcare personnel.

**What to Expect on the PSB HOAE Test?**

The PSB Health Occupations Aptitude Examination (PSB HOAE Test) is comprised of a total of 305 questions in five different timed sections, which will take two hours and 15 minutes. This is the breakdown of the sections, the times, and the number of questions:

1. Academic Aptitude – 40 minutes / 75 questions
2. Spelling – 15 minutes / 45 questions
3. Reading Comprehension – 35 minutes / 35 questions
4. Information in the Natural Sciences – 25 minutes / 60 questions
5. Vocational Adjustment Index – 15 minutes / 90 questions

**How many questions are on the PSB HOAE test?**

A total of 305 questions.

**How long is the HOAE test?**

The test is 2 hours and 15 minutes long. There are no breaks between sections. You may leave for the restroom, but the administrator will not give you back the time you used for this.

**Which healthcare programs require passing the HOAE test?**

The PSB Healthcare Occupations Aptitude Examination is a standardized pre-admission test required for application to various nursing schools and healthcare programs in colleges and universities in North America (US and Canada). Programs that may require it include Practical Nursing (PPNSG), Cardiovascular Technology, Dental Hygiene, Pharmacy Technician (PPHTN), Veterinary Technician (PVETT), Physical Therapist Assistant (PTA), Surgical Technicians, Radiography, Histotechnician, Diagnostic Sonography, Clinical Lab programs, and many other programs.

**Is the PSB HOAE exam hard?**

While each question on its own is not too difficult, the time constraints make the entire test rather difficult. If you lack prior knowledge in the natural sciences section, the test will be extremely difficult for you.

**How long does it take to get PSB test results?**

As soon as the percentiles of all the test-takers are scored, you will get your result. The college or program to which you have applied will receive your score in about 24 hours, but you may only see your results after two weeks.

**How is the PSB scored?**

The PSB is scored based on two factors: your raw score and your percentile in comparison to the others who took the same test.

**How many times can you take the PSB test?**

The test can only be taken once per year, and in some cases only twice within a three-year period. Notwithstanding, some colleges have less severe restrictions.

**What is the PSB exam?**

The PSB exam is a pre-admissions test that, if you score well, can help you get into a nursing school, a registered nursing program, or another health occupation program. The main topics of the exams may include judgment and comprehension in practical nursing situations, reading comprehension, academic aptitude, spelling, information in the natural sciences, and/or a vocational adjustment index.

**Can you use a calculator on the PSB/HOAE test?**

No, calculators and other electronic devices are not allowed.

**Should I prepare for the HOAE test?**

You may think that, since the questions on the HOAE are at a level you are comfortable with and you have passed similar exams in the past, you do not need to prepare. However, you must take this test's time limit into account. There may also be several types of questions you have never seen before or have not seen in many years. This is why it is absolutely essential to start studying and getting familiar with the test in advance. Doing so will help you feel confident on test day, knowing that you have practiced your HOAE test-taking skills and are ready to ace the test!

## PSB HOAE Test Tips

#### 1. Read through the instructions

Read each question and the possible answers quickly, yet thoroughly. Scan the information for relevant data and apply it to reach the right answer.

#### 2. Manage your time

Don’t spend too much time on any one particular question.
Remember that you have only a limited amount of time to complete the test.

#### 3. Know your strengths

If you encounter a tough question that you can’t immediately answer, guess the answer and move on to the next—you won’t lose points for wrong answers.

#### 4. Try to correctly answer as many questions as you can

Don’t worry about not getting all the answers right. However, you should at least get a score above the average or within the score range required for admission.

#### 5. Plan your PSB Health Occupations Aptitude (PSB HOAE Test) Examination Strategy

Plan and start practicing! Practicing sample tests and training your brain to solve similar questions will improve your results on the actual test. During the test, if you have some time left, go back to questions that you may have skipped earlier.

## Administration

- Test Location: Usually, at the educational institute you applied to. *During the Covid-19 pandemic: online via the proctored Pearson Vue platform.
- Test Schedule: Each institution publishes its own testing schedule, depending on the program you apply for. All PSB tests are taken by appointment and not on demand.
- Test Format: Multiple-choice, on computer or offline.
- Test Materials: The test is taken on a computer. Calculators are not allowed, and smart devices will be stored for the duration of the test. Bring two sharpened pencils if the test is taken offline.
- Cost: The PSB testing fee varies between institutions, mostly between $25 and $60. You may reschedule for a cost.
- Retake Policy: In most cases, the PSB test can only be taken once per academic year, and no more than twice in three years. Some schools have less severe restrictions.

## Test Provider

Since 1959, PSB (Psychological Services Bureau) has been creating standardized exams. Each exam comprises five separate tests that check your skills, abilities, and attitude, which are important for success in the chosen program. However, the PSB Health Occupations Aptitude Exam is specifically written for applicants who want to be admitted to various healthcare programs. All exam materials are carefully tested and effective. Moreover, the test has proven reliability, which has been demonstrated through the actual use of the tests. The HOAE PSB tests provide the healthcare programs detailed and unbiased information about you. They also offer flexible testing choices for both groups and individuals, although the test does come with a time limit.

Disclaimer – All the information and prep materials on iPrep are genuine and were created for tutoring purposes. iPrep is not affiliated with the Psychological Services Bureau, or any other company mentioned.

Welcome to iPrep’s Health Occupation Aptitude Examination (PSB HOAE Test) course. This course will help you boost your skills, and with them your confidence, towards your upcoming Health Occupation Aptitude Examination. This is a test that you need to pass in order to be accepted to a variety of health-related academic programs.

*Note: this course DOES NOT include practice for the Vocational Adjustment Index section!*

The course will provide you with the following tools and benefits:

- You will become familiar with the test’s various types of questions.
- You will be given full-length HOAE-style simulation tests. The simulations are divided into sections – Academic Aptitude (Verbal, Arithmetic, and Non-verbal), Spelling, Reading Comprehension, and Information in the Natural Sciences (biology, chemistry, health, safety, etc.). Each section includes questions similar to those you will encounter in the real test, with the same level of difficulty. They also have the same time limit as the real test. Experiencing the test’s time pressure will ensure it will not come as a surprise on test day.
- You will be provided with a great variety of helpful tips for the different types of questions. Some of the tips are in the introductory sections, while most are in the detailed explanations that follow each question.

22 Learning Hours · 3+10 Practice Tests · 860 Questions · 30 Day Access

By the end of this course, you will be more knowledgeable and comfortable with the HOAE test – knowledge and familiarity with the test are the two most significant factors that can help you maximize your score and improve your chances of success. The course is comprised of both practice and learning sessions. We will guide you through learning lessons with essential information about your upcoming test. These lessons will help you understand the underlying techniques that are essential for succeeding. The course is then concluded by its core component – simulating full-length tests that accurately follow the structure and concepts of the HOAE Test. Once done, you will be able to get full question explanations and even see how well you performed in comparison with other people who have taken the test. Wishing you an enjoyable learning experience!

#### Curriculum

1. Course Introduction
2. Question Types Introduction
6. Spelling
8. Natural Science
9. Full-Length PSB HOAE Test Simulations
10. Course Conclusion

CUSTOMER TESTIMONIAL

I am so grateful and happy about IPREP. I’ve been trying for years to pass my entry exam for the LPN program. This particular school required the PSB exam, this was my second try and I passed and got in the program! IPREP definitely helped me where I fell short on my previous exam. I bought about 4 practice books off Amazon and didn’t pass. I came across IPREP while browsing the internet, your review sold me along with the practice questions. It mimics the PSB exam to the tea. Your study program that I purchased was the best thing I could have done. Not only did I pass, but I also have access to your product in case I need to go back and refresh my brains.
I thank you so much and will most definitely recommend IPREP to all my medical friends and classmates. 🤗👍👍

Chalind Dorsey, November 10, 2020 at 3:16 AM

## Reviews

4.8 out of 5 (114 Ratings)

- 5 Star: 85%
- 4 Star: 12%
- 3 Star: 2%
- 2 Star: 0%
- 1 Star: 1%

1. Jessica P*** (June 2, 2022 at 2:01 AM): Very informative and helpful. Very glad that this is available or it would be a lot harder to study for the PSB.
2. crystal S********* (May 30, 2022 at 7:26 AM): I have taken this test at a private school and the material in these practice tests is spot-on similar to what you’ll see. So far I’m happy with this purchase.
3. Jon Scott T***** (May 11, 2022 at 10:40 PM): Extremely helpful and will definitely help you ace your exam. I thought the course was well organized, clear and easy to follow. It’s easy to sum up: Excellent.

Get to know what the Health Occupations Aptitude Exam will be like by practicing with these sample questions:

#### Question 1 of 9

Complete the analogy *(the answer choices are images, not reproduced here)*:

- A.
- B.
- C.
- D.
- E.

The first pair of images consists of three shapes each, one inside the other. There are two differences between the first pair of images:

- The first difference is that the innermost shape that is shaded in the first image becomes unshaded.
- The second difference is that the middle unshaded shape in the first image becomes shaded.

To create an analogy, based on the third image, the innermost shaded shape will become unshaded and the central shape will be shaded in the fourth image. So, the correct option is ‘D’.

#### Question 2 of 9

The average weight of 30 students is 36 kg. The average weight including the teacher is 37 kg. The teacher’s weight is:

1. 51 kg
2. 55 kg
3. 59 kg
4. 63 kg
5. 67 kg

Explanation: Formula: average = sum of quantities / number of quantities. The average weight of the 30 students is 36 kg; the average weight including the teacher is 37 kg. So the teacher had to contribute 36 kg like the rest of them, plus 1 kg for every person involved (30 students + teacher = 31), so 36 + 31 = 67 kg.

#### Question 3 of 9

Simplify and write the answer in exponential form: (11^3 / 11^10) × 11^3 × 11^7

1. 11^1
2. 11^3
3. 11^4
4. 11^5
5. 11^7

Explanation: This problem has to be completed in steps:

- Step 1 = (11^3 / 11^10) × 11^3 × 11^7
- Step 2 = 11^(3−10) × 11^3 × 11^7 (exponents are subtracted when the numbers are divided)
- Step 3 = 11^(−7) × 11^3 × 11^7
- Step 4 = 11^(−7+3+7) (exponents are added when the numbers are multiplied)
- Step 5 = 11^3

#### Question 4 of 9

Which word is most different in meaning from the other words?

1. recurrent
2. ceaseless
3. chronic
4. persistent
5. continual

The correct answer is recurrent, meaning happening multiple times or repeatedly. All the other words mean occurring or done on many occasions or in quick succession without stopping.

#### Question 5 of 9

Select the word that is spelled correctly.

- commited
- comitted
- comited
- committed

A second “t” is added to the verb “commit” to preserve the short “i” sound before the suffix “-ed”.

#### Question 6 of 9

Select the word that is spelled correctly.

1. fallashous
2. fallacous
3. fallacious
4. fallashious

The suffix here is “-ious”, meaning “relating to” – in this case, to fallacy.

#### Question 7 of 9

Glands are small organs located throughout the body that secrete substances called ______.

1. Enzymes
2. Mucus
3. Saliva
4. Hormones
5. Blood

The correct answer is D, Hormones. Glands are small organs located throughout your body that secrete substances called hormones.

#### Question 8 of 9

Which part of the cell life cycle is not part of interphase?

1. G2 phase
2. G1 phase
3. S phase
4. Mitosis phase
5. G0 phase

The correct answer is D, the mitosis phase. Interphase (the resting phase) comprises the G0, G1, S, and G2 phases. Mitosis starts after interphase.

#### Question 9 of 9

Elements in group I of the periodic table are known as:

1. Alkali metals
2. Halogens
3. Alkaline earth metals
4. Noble gases
5. Transition elements

The correct answer is A, alkali metals. Group I elements are known as alkali metals; they form ionic compounds, and when they react with water they form hydroxides.

### Well done!

You have completed the Sample Questions section. The complete iPrep course includes full test simulations with detailed explanations and study guides.

‘...Tests that actually help’

In the first 30 minutes of use I have learned so much more than skipping along the internet looking for free content. Don’t waste your time, pay and get tests that actually help.

Richard Rodgers, January 28, 2020 at 7:49 PM
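As a quick sanity check (not part of the exam itself), the arithmetic behind sample questions 2 and 3 above can be verified in a few lines of Python:

```python
# Question 2: 30 students average 36 kg; with the teacher the average is 37 kg.
students_total = 30 * 36        # combined weight of the students
total_with_teacher = 31 * 37    # combined weight of all 31 people
teacher = total_with_teacher - students_total
print(teacher)  # 67 (kg)

# Question 3: (11^3 / 11^10) * 11^3 * 11^7 = 11^(3 - 10 + 3 + 7) = 11^3.
exponent = 3 - 10 + 3 + 7
# Cross-multiply to stay in exact integer arithmetic (no floating point):
assert 11**3 * 11**3 * 11**7 == 11**exponent * 11**10
print(exponent)  # 3
```

This mirrors the exponent rules quoted in the explanation: subtract exponents when dividing, add them when multiplying.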
https://mathoverflow.net/questions/212089/explicit-metrics-on-non-compact-calabi-yau-threefolds
# Explicit metrics on non-compact Calabi-Yau threefolds

I would like to know which explicit metrics on non-compact Calabi-Yau (CY) threefolds are known. For instance, an important class of such spaces can be constructed algebraically, including local $\mathbb{CP}^1$ (a.k.a. the resolved conifold), local $\mathbb{CP}^2$, local $\mathbb{CP}^1 \times \mathbb{CP}^1$, and the deformed conifold. However, as far as I have searched the math and physics literature, I have found explicit CY metrics only in the case of the resolved and deformed conifold. Is the CY metric for, e.g., local $\mathbb{CP}^2$ known? What about other cases? (I have followed terminology from this paper).

- My understanding is that very few Kaehler-Einstein metrics for Calabi-Yaus are known. In the compact case, *Numerical Kaehler-Einstein metric on the third del Pezzo* by C. Doran, M. Headrick, C. P. Herzog, J. Kantor, T. Wiseman uses numerical techniques to compute an appropriate metric. (Jul 22, 2015 at 18:58)
- Are you OK with metrics on the singular cone, or do you want a metric that extends to the resolution? There was a flurry of activity in finding irregular Sasaki-Einstein metrics on S^2 x S^3 (and maybe other low del Pezzos?) about a decade ago. You might start with hep-th/0411238. (Jul 22, 2015 at 23:03)
- Thank you Aaron for your comment! I am actually OK with singular cone metrics, and the Ypq metrics you point out are an excellent example. In the paper you point out they mention that these spaces can be realized as GLSMs in the ultraviolet. Do you know, however, if the explicit RG flow from the UV to the deep IR has been/can be described? (At least in some cases such as the conifold?) – Nuno (Jul 22, 2015 at 23:31)
- I'm not sure exactly what you're looking for, but I've been out of the field long enough that I likely wouldn't remember anyways. Sorry. (Jul 23, 2015 at 4:00)

Answer: The simplest example would be on $\mathbf{K}_{\mathbb{CP}^1 \times\mathbb{CP}^1}$, the non-small resolution of the cone over $\mathbb{CP}^1 \times\mathbb{CP}^1$. Here the CY metric is given by the Calabi ansatz using the usual homogeneous metric on $\mathbb{CP}^1 \times\mathbb{CP}^1$.
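For reference, the Calabi ansatz mentioned in this answer is a standard construction; the sketch below follows common conventions and is not taken from the original post. On the canonical bundle $K_M$ of a Kähler-Einstein manifold $(M, \omega_M)$, one seeks the Ricci-flat Kähler form in a fibre-symmetric shape:

```latex
% t = |z|_h^2 is the norm-squared of the fibre coordinate on K_M,
% measured with the hermitian metric h induced by \omega_M.
\omega \;=\; \pi^{*}\omega_{M} \;+\; i\,\partial\bar{\partial}\,F(t)
% Imposing Ricci-flatness, \mathrm{Ric}(\omega)=0, reduces to an ODE for F(t)
% that integrates in closed form. Applied to M = \mathbb{CP}^1 \times \mathbb{CP}^1
% with its homogeneous Kaehler-Einstein metric, this yields an explicit CY metric
% on K_{\mathbb{CP}^1 \times \mathbb{CP}^1}.
```

Conventions for the fibre variable and the normalization of $\omega_M$ vary between references, so the exact form of the ODE depends on the source consulted.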
https://triangle.mth.kcl.ac.uk/?search=au:Jorge%20au:Santos
Found 5 result(s)

### 20.01.2021 (Wednesday)

Regular Seminar: Jorge Santos (Cambridge)

at: 14:00, IC, room: Zoom

abstract: We explore the construction and stability of asymptotically anti-de Sitter Euclidean wormholes in a variety of models. In simple ad hoc low-energy models, it is not hard to construct two-boundary Euclidean wormholes that dominate over disconnected solutions and which are stable (lacking negative modes) in the usual sense of Euclidean quantum gravity. Indeed, the structure of such solutions turns out to strongly resemble that of the Hawking-Page phase transition for AdS-Schwarzschild black holes, in that for boundary sources above some threshold we find both a "large" and a "small" branch of wormhole solutions, with the latter being stable and dominating over the disconnected solution for large enough sources. We are also able to construct two-boundary Euclidean wormholes in a variety of string compactifications with a similar structure, which dominate over the disconnected solutions we find and are stable with respect to field-theoretic perturbations. However, as in classic examples investigated by Maldacena and Maoz, the wormholes in these UV-complete settings always suffer from brane-nucleation instabilities (even when sources that one might hope would stabilize such instabilities are tuned to large values). This indicates the existence of additional disconnected solutions with lower action. We discuss the significance of such results for the factorization problem of AdS/CFT.

Join Zoom Meeting: https://zoom.us/j/98062076339?pwd=aGJNUTBTNjBYeDhqUlZVMzdVWkhGQT09
Meeting ID: 980 6207 6339
Passcode: 913115

### 10.10.2018 (Wednesday)

#### Connecting the weak gravity conjecture to the weak cosmic censorship

Regular Seminar: Jorge Santos (University of Cambridge)

at: 13:15, KCL, room: S2.49

abstract: I will describe some counterexamples to (weak) cosmic censorship in anti-de Sitter spacetime that have been found recently. These are solutions in which the curvature grows without bound in a region of spacetime visible to infinity. I will also discuss a surprising connection between some of these counterexamples and an apparently unrelated conjecture called the weak gravity conjecture.

### 02.02.2017 (Thursday)

#### Localised Black Holes and Precision Holography

Regular Seminar: Jorge Santos (DAMTP)

at: 13:00, IC, room: H503

abstract: We numerically construct asymptotically global AdS_5 x S^5 black holes that are localised on the S^5. These are solutions to type IIB supergravity with S^8 horizon topology that dominate the microcanonical ensemble at small energies. At higher energies, there is a first-order phase transition to AdS_5-Schwarzschild x S^5. By the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence, this transition is dual to spontaneously breaking the SO(6) R-symmetry of N=4 supersymmetric Yang-Mills down to SO(5). We extrapolate the location of this phase transition and compute the expectation value of a scalar operator in the low energy phase. In addition, we discuss the construction of localised black holes in type IIA, which are dual (via T-duality) to the low temperature phase of thermal 1+1 dimensional supersymmetric Yang-Mills theory on a circle.

### 01.12.2016 (Thursday)

#### Localised Black Holes and Precision Holography

Regular Seminar: Jorge Santos (Cambridge)

at: 13:00, IC, room: H503

abstract: We numerically construct asymptotically global AdS_5 x S^5 black holes that are localised on the S^5. These are solutions to type IIB supergravity with S^8 horizon topology that dominate the microcanonical ensemble at small energies. At higher energies, there is a first-order phase transition to AdS_5-Schwarzschild x S^5. By the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence, this transition is dual to spontaneously breaking the SO(6) R-symmetry of N=4 supersymmetric Yang-Mills down to SO(5). We extrapolate the location of this phase transition and compute the expectation value of a scalar operator in the low energy phase. In addition, we discuss the construction of localised black holes in type IIA, which are dual (via T-duality) to the low temperature phase of thermal 1+1 dimensional supersymmetric Yang-Mills theory on a circle.

### 11.11.2015 (Wednesday)

#### Black holes with a single Killing vector field: black resonators

Regular Seminar: Jorge Santos (DAMTP, Cambridge)

at: 13:15, KCL, room: S.0.13

abstract: We numerically construct asymptotically anti-de Sitter (AdS) black holes in four dimensions that contain only a single Killing vector field. These solutions, which we coin black resonators, link the superradiant instability of Kerr-AdS to the nonlinear weakly turbulent instability of AdS by connecting the onset of the superradiance instability to smooth, horizonless geometries called geons. Furthermore, they demonstrate non-uniqueness of Kerr-AdS by sharing asymptotic charges. Where black resonators coexist with Kerr-AdS, we find that the black resonators have higher entropy. Nevertheless, we show that black resonators are unstable and comment on the implications for the endpoint of the superradiant instability.
https://physics.stackexchange.com/questions/473146/field-operator-commutation-if-two-operators-commute-then-their-fourier-transfo
# Field operator commutation: If two operators commute, then their Fourier transforms also commute?

I'm doing this in the context of field operators $$\psi(x)=\sum_k a_k e^{ikx}, \qquad \psi^\dagger(y)=\sum_k a_k^\dagger e^{-iky},$$ which are defined as the Fourier transforms of the annihilation/creation operators $a_k, a_k^\dagger$. Specifically, I have to prove: $$[\psi(x),\psi^\dagger(y)]_\zeta=\delta(x-y),$$ with $\zeta=+1$ for bosons and $\zeta=-1$ for fermions. We have already proved in class that $[a_k,a_{k'}^\dagger]_\zeta=\delta(k-k')$. For whatever reason I'm really struggling with the first proof. Any tips? And then I was just wondering whether it would be simpler to prove that the commutation relations hold for all Fourier transforms of operators, not just in this case. But I'm not sure how to do that either, whether it would actually be simpler, or even whether it's actually true.

• Just plug in and reduce one integral... But maybe your momentum sums should be integrals if you are using a $\delta$-function for momenta, instead of a Kronecker delta. – Cosmas Zachos Apr 16 at 19:28

A plug-in will work. The operation is very common in quantum field theory; please refer to any QFT textbook for canonical quantization techniques. $$[\psi(x),\psi^{\dagger}(y)]_{\zeta} = \Big[\sum_k a_{k}e^{ikx}, \sum_{k'} a^{\dagger}_{k'}e^{-ik'y}\Big]_{\zeta} = \sum_{k,k'} e^{ikx-ik'y} [a_k, a^{\dagger}_{k'}]_{\zeta} = \sum_{k,k'} e^{ikx-ik'y} \delta_{kk'} = \sum_k e^{ik(x-y)} = \delta(x-y)$$

• @Learn4life Due to the Kronecker delta, the terms in that summation are all zero except for when $k=k'$. Therefore only terms with $k=k'$ remain, and for those terms the Kronecker delta is one and can be omitted. – Crimson Apr 26 at 21:17
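The final step of the answer, $\sum_k e^{ik(x-y)} = \delta(x-y)$, is just the completeness relation for the plane waves. On a finite lattice it becomes an exact Kronecker delta, which is easy to check numerically; a minimal sketch (not from the post — the lattice size `N` and the sites tested are arbitrary choices):

```python
import cmath

# Discrete analogue of sum_k e^{ik(x-y)} = delta(x-y): on an N-site ring
# with momenta k_m = 2*pi*m/N, the normalized mode sum
# (1/N) * sum_m e^{i k_m (n - n')} is exactly the Kronecker delta.
N = 8

def mode_sum(n, n_prime):
    """(1/N) * sum over the N lattice momenta of e^{i k (n - n')}."""
    total = sum(cmath.exp(2j * cmath.pi * m * (n - n_prime) / N) for m in range(N))
    return total / N

# Diagonal terms are 1, off-diagonal terms vanish (up to rounding error).
assert abs(mode_sum(3, 3) - 1.0) < 1e-12
assert abs(mode_sum(3, 5)) < 1e-12
```

In the continuum limit the Kronecker delta goes over to the Dirac delta of the answer, up to the usual $2\pi$ normalization conventions.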
http://quant.stackexchange.com/tags/lognormal/hot?filter=year
# Tag Info

3

Here are a couple of pointers to push you back onto the right path (I hope): Start with the payoff function and hence $S(T)$, which consists of $(W(T)-W(t))$ ($W$ being a Brownian motion under the risk-neutral measure). You can greatly simplify by working with a standard normal random variable: $$Y = \frac{-(W(T)-W(t))}{\sqrt{T-t}},$$ which helps to get rid of ...

1

Your formula, as it stands, is incorrect, at least if $E$ means the "expected value under real-world probabilities". I wrote a blog post explaining the basic rationale behind risk-neutral pricing, where you will see that if the Fundamental Theorem of Asset Pricing holds, you can write: let $X_t=S_{1,t}-S_{2,t}$, then $e^{-rt} X_t = \ldots$

1

As @Rustam notes, "correlation" of deterministic functions in the sense you describe is a special case of allowing $\mu$ and $\sigma$ to have a term structure of arbitrary shape. Since the latter is easy to treat, no one bothers with restricted forms of it. Now, there are quite a few people who deal with models that let $\sigma$ change with $S$. I am thinking ...

Only top voted, non community-wiki answers of a minimum length are eligible.
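The substitution to a standard normal $Y$ in the first answer is exactly the step that makes risk-neutral Monte Carlo pricing mechanical. As an illustration (my own sketch, not from any of the answers — all parameter values are invented), one can check a simulated discounted call payoff against the Black-Scholes closed form:

```python
import random
from math import log, sqrt, exp, erf

# Monte Carlo check of risk-neutral pricing for a European call under GBM,
# using a standard normal Y in place of (W(T)-W(t))/sqrt(T-t).
# Parameter values are illustrative only.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

rng = random.Random(0)
n = 200_000
payoff_sum = 0.0
for _ in range(n):
    Y = rng.gauss(0.0, 1.0)  # stand-in for the Brownian increment, rescaled
    ST = S0 * exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Y)
    payoff_sum += max(ST - K, 0.0)
mc_price = exp(-r * T) * payoff_sum / n

# Black-Scholes closed form for comparison.
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * Phi(d1) - K * exp(-r * T) * Phi(d2)

assert abs(mc_price - bs_price) < 0.2  # agreement within Monte Carlo error
```

The tolerance reflects Monte Carlo error, which shrinks like $1/\sqrt{n}$.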
http://boards.4chan.org/sci/
Board /sci/ - Science & Math

Please read the Rules and FAQ before posting. Use TeX with [math] tags for inline and [eqn] tags for block equations. Right-click equations to view the source.

File: sciguide.jpg (9 KB, 200x140)

https://sites.google.com/site/scienceandmathguide/

>> Reminder: /sci/ is for discussing topics pertaining to science and mathematics, not for helping you with your homework or helping you figure out your career path. If you want advice regarding college/university or your career path, go to /adv/ - Advice. If you want help with your homework, go to /wsr/ - Worksafe Requests.

What careers are there for a BS in Math? A BS in Stats?

>> File: 1457530667043.jpg (201 KB, 804x1204)

I heard that Burger King is hiring. You can try McDonald's too, but I'm afraid they ask for a master's degree.

>> >>8425789 academia, software developer, NSA, finance, food service/other

>> >>8425789 Do some very, very basic research yourself.
Short answer: pretty much anything.

What are the best universities in the EU for studying maths at undergrad?
> must be affordable (i.e. not England)
> course must be in English or German

26 replies and 5 images omitted.

>> >>8423229 >EU kek

>> >>8423229 What the fuck is the EU? It's just an economic organization REEEEEEEEEE. Best universities on the European continent besides the UK, fuck, I don't know. Although in Portugal there are very good and affordable universities well placed in the world rankings, like Coimbra, Oporto and Minho.

>> >>8423229 read the sticky and die in a fire

>> >>8423229 Nobel prize winning uni here

>> >>8423425 ETH is a university http://www.topuniversities.com/university-rankings/world-university-rankings/2015

I'm curious, are there any neurological diseases that cause dependence on other people, and how would said disease be treated?

>> >>8426647 Unrequited love. Unfortunately there is no cure. It will eventually lead to death by suicide.

>> >>8426647 All diseases that make you handicapped will make you dependent on other people. For example MS, ALS, dementia, stroke, etc.

>> >>8426972 This. Still alive because of porn and math

>> >>8426647 Isn't Dependent Personality Disorder literally what you are looking for?

>> >>8426647 Opiate addiction. Because they don't stop asking other people for money. I think we all know what the treatment is.

File: STEM1.png (155 KB, 1692x720)

Currently in my second year of college. I'm aiming for a Bachelors degree in CompSci. Did I just fall for a meme, or is it what one should do? What about Quantum, how different is it from CompSci?

29 replies and 5 images omitted.

>> >>8422636 You won't be begging for a job anon. >>8423309 >CS jobs You mean programmer jobs. There is a difference. I have no degree and I have a programmer job. Everyone can land one of those jobs. Making apps, front-end, backend, gameplay programmer, etc...
You don't even need to be good at math, just some basic analytical thinking skills. But those aren't the jobs CS degrees are for. CS degrees are for jobs more demanding in other scientific fields. Often you need to work together with other engineers or need to have good math skills. >> >>8426736 >Software plateaued years ago You're retarded. We're not just talking about making apps for the play store. Automation requires software. Who is gonna program all these self driving cars, robots, satellites, ect... CS is also research. Research into AI, computer vision, better compression algorithms, ect... >> >>8426897 >There are huge breakthroughs to be made in artificial intelligence >>>/g/tfo kid >> >>8426897 > 1. Computer Science is not same as "software". Most CS grads don't do research. Most computer "scientists" are software engineers. Most software engineers are more architects and construction workers: building scaffolding and data conduits, writing skins around databases, importing libraries they didn't write, using someone else's mathematical abstractions because they only vaguely remember matrix multiplication. Calculus? What the fuck is that? They shuffle through billions of KLOC to extend existing logic and pray they don't unravel some untenable ball of yarn left by the last guy. In particular, most of Silicon Valley works on shit that won't last and doesn't matter. Practicing meaningless interview questions just to get hired. Tremendous wastes of potential. > 2. There are huge breakthroughs to be made in artificial intelligence. Careful, you're in danger of revealing your embarassment. If you even have a menial grasp of the surface tension of this concept, you would know that machine learning is nothing more than a bag of tricks. How does the computer identify a cat? Oh, some clever math on a 2D matrix. Does it know what a cat is? No. It has no capacity for understanding, intuition, or generalization. 
There's clearly headroom, but don't fall into the popsci trope. It's ML not AI. > 3. There are a lot of computational problems that are still very inefficient to solve. This is purely mathematical (which CS is a subset) and strictly academic. Like I said, very few CS grads go into pure research. Those aren't problems that businesses solve. Further to point, private industry could give a shit about efficiency at all. The average webpage size is 2MB which is atrocious for mostly text. The web is becoming almost unusable with extraneous kludge. >> >>8426941 > You're retarded Thanks for the complement. > We're not just talking about making apps for the play store. Yet what are 95% of CS grads working on? Shit. Useless shit. Squandering their intellectual potential chasing VC skirts in an attempt to cash out on "who-gives-a-shit-about-the-idea-is-it-worth-money?" > Automation requires software. Yes, it does. Not disputing that. > self driving cars, robots, satellites, ect Self driving anything is going to require effectively a machine which is sentient to cover all of those dark corner cases (rain, snow, gravel roads, etc). The term "robots" has been bastardized by popsci to mean virtually anything technologically magic, so I don't have a hard target to respond to. And satellites, code, yep, but it probably isn't revolutionary. Much of it structures across well established libraries. > Research into AI, computer vision, better compression algorithms Computer science research is a subset of mathematics, and as I stated, compared to the volume of those versed in CS, very few actually do it. Does your OS actually anticipate your responses? Does understand the data it processes? Wake me up when you're not just clicking boxes to open more boxes. File: download (1).jpg (4 KB, 194x259) 4 KB JPG Are there any actual people here attending ivy leagues? Or are the majority of people on here college drop outs with an interest in science? I wanna know who I am associating with here. 
83 replies and 12 images omitted. Click here to view. >> Michigan here Study Computer Science/Math >> >>8423080 >Are there any actual people here attending ivy leagues? Or are the majority of people on here college drop outs with an interest in science? I have a master's degree in math from a tier 1 public university. Whatever that's worth. Also, whoever's thinking about going to USC, don't go to USC >> >>8423080 I mow lawns. The slope averages 21' in these hills and i hate it >> >>8426412 aren't you embarrassed to post this? >> >>8423361 Overqualified for a lab technician job, bruh. That's probably why you didn't get it, sorry to say. File: 1475668034393.jpg (395 KB, 1154x1500) 395 KB JPG So /sci/ you think you're smart huh? Especially the smug fucking physicists. Let me ask you something, if all observables are meant to correspond to hermitian operators then why is the kinetic energy operator of the Hamiltonian in cylindrical coordinates for the not a hermitian operator? Let me elaborate, there is a term in the operator involving the first order derivative with respect to the radial coordinate and this ruins the symmetry of the operator. File: groupsnstuff.jpg (94 KB, 512x512) 94 KB JPG Feel free to post about mathematics in general, just keep the shitposting about 300k starting, CS, and stuff out of this thread. Let's actually talk about something meaningful this time. >what are you currently reading? >any problems? >any suggestions for cool exercises? Personally right now I am doing some reading in Kähler manifolds, in particular from the viewpoint of symplectic geometry. I was wondering if anyone has any good books on differential equations as that has been a very abandoned subject for me since my undergraduate years. Maybe there is a good book in Springer's library? Thanks! 255 replies and 49 images omitted. Click here to view. >> >>8426757 please learn to use google https://en.wikipedia.org/wiki/Root-finding_algorithm >> >>8426760 Thanks anon! 
I wanted to know if there is a much better to way find roots. >> >>8416312 Do you have reason for this to make sense, or is this just "look at it. Just look at it" kind of making sense. Regardless look up the extended real/complex numbers. >> Well, I'm currently taking Topology and it's awesome. I'm also learning about matrix groups and linear operators and shit with linear algebra for my thesis. It's been particularly interesting using quaternions. I'm also learning about mathematical methods in physics which is pretty cool and regression analysis which is more interesting than expected. >> How do you do, guys? Tell me something amazing you've realized recently Help me /sci/, I know you can. My current speaker setup displays an extremely strange phenomenon. I keep my speakers running at all times and between them is roughly a 15ft wire connecting the unpowered speaker to the power source-speaker combo. In addition, there is a 15ft 3.5mm male-male wire connecting my laptop to the aux input. Sometimes (very rarely), the speakers output a foreign signal that sounds similar to an AM or CB radio broadcast. I don't understand why this is happening. Please help, this is seriously confusing me. File: paqDI7XE.jpg (203 KB, 1252x1252) 203 KB JPG What would you attempt to do if you knew you could not fail? 58 replies and 9 images omitted. Click here to view. >> >>8424012 I would ride my bike with no handlebars. >> 1. Right all of my wrongs. 2. Teach as many as I could the importance of intellectual honesty. 3. Try to influence the political arena in areas of meta-analysis, paperwork reduction, (pro-social) futurist agendas, etc. 4. Start a movement to help those on the fringes of society. 5. Explore as much as I could about what it means to be a myself, a human, a living creature and something "existing". 6. Spend time trying to learn as much as I could, for both intellectual reasons and practical applications. >> Stop masturbating. >> >>8424012 Race war. 
clean all the degenerates in the world >> >>8424012 Summon my waifu so we could finally live together, forever. File: black.jpg (3 KB, 251x201) 3 KB JPG This shit really scare me. What is the possibility that one comes close to our solar system? Should I fear this happen? do not know why, but lately it has given me afraid. pic related, black hole. 8 replies and 1 image omitted. Click here to view. >> File: 1447384639353.png (643 KB, 633x758) 643 KB PNG >>8426506 That would be amazing if Planet X was actually a small black hole, assuming it had a stable orbit that wouldn't wreak havoc on the planets or the Sun. Just imagine the research potential if we could study one of them up close instead of having to study them from thousands of light years away. We could have a live recording of matter being absorbed by a black hole in real time instead of artistic interpretations and grainy pixels from telescopes. >> File: giant BH.jpg (88 KB, 960x960) 88 KB JPG >>8426126 >> >>8426759 Fuck... too big... >> >>8426126 It looks like the belly button of cosmos. >> >>8426506 >While it's no threat, it does mean those fuckers might be either more common than we would like them to be or we got the short end of the stick. How is that "the short end of the stick"? a 10 Earth-mass black hole that's within reach of current spacecraft would be amazing. File: 7vmsxs9caosx.jpg (380 KB, 2295x5411) 380 KB JPG They can't even get a rocket into orbit without it exploding, so why do people think that they'll be able to build THIS in 5 years? Serious question. 53 replies and 8 images omitted. Click here to view. >> >>8426256 >Is the "fucktardedly huge rocket" concept even viable past a certain scale? Yes. It is the "financial data" (advertised prices etc) that are from fantasy land. >> >>8426679 What the fuck dude. I agree colonizing Mars is a waste of time. But the amount of resources it would take to get there, really, is not all that much compared to the scale of the problems you mention. 
And developing cheap, easy transit to low earth orbit is very valuable. As far as energy is concerned, that is really not an issue, OPEC can't push prices on their cheap easily available oil too high or the US starts fracking and brings it back down, we are already moving in the direction of sustainable energy, the idea of some energy industry collapse causing mass chaos and starvation is some utter insanity that hasn't even been relevant for a decade. >> >>8425909 >Serious bait fgt pls >> >>8426325 >>8426320 Seconded, fuel is cheap. Of course the Raptor engine (and new ITS system) will be fueled by cryogenic methane and oxygen instead of kerosene and oxygen, don't know how the cost will compare...but it was designed that way because methane can be produced on Mars to refuel the engines for the return trip. >> >>8427044 You can make synthetic kerosene on mars too Methane was picked for a whole buncha reasons, not solely because of ISRU on mars. File: _20161020_041257.jpg (33 KB, 605x310) 33 KB JPG OK /sci/, which one will get us to Mars faster? 32 replies and 3 images omitted. Click here to view. >> >>8424568 Why not both make huge space ships that are particle accelerators. also Hillary is the one advocating most for the New World Order which is pretty much the only way to start considering making space travel more viable, or at least improving its infrastructure. Russia is against the NWO however becuase they are the place where most rockets get launched now. >> After arguing with NASA engineers for over a week now because these geniuses can't admit they botched some basic geometry, fuck'em, they need de-funding. Source: me, I work in the cryogenics industry. >> >>8425235 This >> >>8424480 Trump wants to make peace with Putin retard. Also thank you for Correcting the Record. >> >>8424480 I've actually met someone who thought trump was tied to Putin but also thought that trump would start a war with russia. 
I asked him how he could hold these two views simultaneously and he couldn't answer me either. File: PAY-Ancient-object.jpg (44 KB, 615x384) 44 KB JPG I suspect carbon dating is not accurate. Case in point; this 250,000 year old artifact. Is it an airplane part? its an aluminum alloy. 21 replies and 5 images omitted. Click here to view. >> >>8425746 Carbon dating is totally inaccurate. We have dinosaur flesh that atheists claim is mega old. Yet it is dinosaur flesh that has still not decomposed. >> >>8426823 Carbon dating only became a thing after people decided the Earth was millions of years old. >> >>8425785 Wait, where's the giant dragon? I want to see! >> >>8426804 Nice Sokal. >> >>8425746 What the fuck are you talking about. Carbon dating only works on things that once lived and less than 50k years old. Nowhere in that article they say anything about carbon dating >>8425767 File: the book cover.jpg (265 KB, 1352x1528) 265 KB JPG https://youtu.be/Fd2d7Uq_8wY https://docs.ufpr.br/~danielsantos/ProbabilisticRobotics.pdf Let's try to initiate a book reading. This time, as suggested in that "one are you doing right now" thread, by moderating it with short rants of summary, elaborations and asking questions. As explained in the video, the book seems approachable for a whole bunch of majors, which is why I think it makes a good candidate, and should this work out as I intent, people can go on tangents in various directions. Sorry for the possibly quite bad accent. An edition of the book pops up if you type in the title in google. It's not the latest edition, but the chapters seem to match. The second one chapter starts with some probability theory basics (gonna post a screenshot of the .pdf posted above in the next post). I'll continue reading and if there's interest, I will make a short rant about it maybe even tomorrow. I'll post questions, by me and for other, or add hints at interesting stuff, if I think that's necessary. 
If there's eventually even a video response, some discussion of any of the topics that come up, that would be even more interesting.

24 replies and 10 images omitted.

>> >>8427004 Thanks for the bump. I've just written down a programming task and will also summarize it in a video, maybe later. This is on https://en.wikipedia.org/wiki/Hidden_Markov_model

Okay, so today let's look at the example in the text again. It's from p. 24 here: https://docs.ufpr.br/~danielsantos/ProbabilisticRobotics.pdf I've posted the 5 relevant pages here: http://imgur.com/a/WHaay

Here, let's discuss what we need for a computer implementation where we abstract all the particular chances (numbers in [0,1]) in the pdf:

- You have a door that's in one of two states $x : \text{Bool}$: x = "open" or x = "closed".

>> You need one initial assumption $bel_{0,\mathrm{open}} \in [0,1]$ and set $bel(0, \mathrm{open}) := bel_{0,\mathrm{open}}$. At each time step, he can first take an action (u) and afterwards measures the door and updates his belief. You need a model for sensor data z (same type as the state in this case) and action effects u (a bool: try or don't try to open the door). Those probabilities are time-independent in our model, i.e. given by a few numbers:

1. the chance $Z_{x\,closed} \in [0,1]$ that his sensor gives the right answer when the door is closed
2. the chance $Z_{x\,open} \in [0,1]$ that his sensor gives the right answer when the door is open
3.
the chance $U_{x\,closed} \in [0,1]$ that he manages to open the door after he tries to open a closed door

Written as functions, we have the conditionals $p_{z|x} : \text{Bool} \to (\text{Bool} \to [0,1])$.

>> File: arnold arm.jpg (52 KB, 1000x623)

Program specification:
------

The code should thus do the following: When we start, we must enter the game parameters (on the command line, say):

> Chance a closed door is measured to be closed (sets Z_x_closed)
> Chance an open door is measured to be closed (sets Z_x_open)
> Chance a closed door is opened when he tries to do so (sets U_x_closed)
> Initial belief that the door is open (sets bel_0_open)

A random number generator sets the state x to open or closed without telling us. The game starts, n=0. The chance bel_0_open that the door is open is displayed:

> I am 100*bel_0_open % sure that the door is open.

We're asked if we want to take an action:

> Do you want to try to open the door or quit? y/n/q

We enter "y" OR "n" (i.e. u="try" OR u="dont"). If we try to open the door, the hidden state x is updated, with the chance given by $p_{x'|u,x}$ (use a random number generator with those chances). With the new state, according to $p_{z|x}$, we measure the system (random number generator again) and the measurement result is displayed.

>> The second question should be
> Chance an open door is measured to be open
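The belief update this spec needs is the standard discrete Bayes filter (measurement correction plus action prediction). A minimal sketch of those two steps — function and parameter names are my own, mirroring the thread's abstraction, and the probabilities used in the note below are illustrative, not from the book:

```python
# Discrete Bayes filter for the two-state door example discussed above.
# The state is summarized by bel_open = p(door is open).

def measurement_update(bel_open, z_open, p_z_open_given_open, p_z_open_given_closed):
    """Correct the belief after a sensor reading z (True = 'measured open')."""
    if z_open:
        num = p_z_open_given_open * bel_open
        den = num + p_z_open_given_closed * (1.0 - bel_open)
    else:
        num = (1.0 - p_z_open_given_open) * bel_open
        den = num + (1.0 - p_z_open_given_closed) * (1.0 - bel_open)
    return num / den

def action_update(bel_open, p_open_after_push_given_closed):
    """Predict the belief after trying to push the door open: an already-open
    door stays open, a closed one opens with the given chance."""
    return bel_open + p_open_after_push_given_closed * (1.0 - bel_open)
```

For example, starting from a belief of 0.5, measuring "open" with p(z=open|open)=0.6 and p(z=open|closed)=0.2 raises the belief to 0.75, and then pushing the door with a 0.8 success chance raises it to 0.95.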
http://overanalyst.blogspot.com/2013/05/in-medias-res-latex-neq-textmath.html
## Tuesday, May 07, 2013

### in medias res: $\LaTeX \neq \text{math}$

the tricky thing about $\LaTeX$ (and about using computers in general) is that one can easily be deluded into believing that one is doing actual "work." it's probably due to the fact that we can compile our code and get the results right away. the typesetting, moreover, is prettier than i could ever write by hand: instantaneous gratification!

be warned, though: don't ever mistake this for real work. i think it a warning sign to think and $\LaTeX$ at the same time. in an ideal situation, all the details are written up by hand and the computer is used merely to convert the information into a digital and more easily accessible format.

it's not that multi-tasking is inherently error-prone .. though there have been enough studies to show that, statistically, this is the case .. but by switching from one to the other, you easily get distracted. you can easily lose the big picture. perhaps you can still keep track of what you are doing, but it's noticeably harder to keep track of whether the changes you make are actually relevant to the task at hand.

i'll just change this, which means i have to go back and change that .. and now that i think about it, why did i introduce this definition? i'll just change it to ..

if you only knew the number of times that i meant to type up half a section, but only got to a single lemma ..!

i constantly have to remind myself: a computer is just a tool; it cannot actually think. it is we who do the thinking and the planning, it is we who are responsible for our time, and how much we spend on various tasks. typesetting is easy, especially after a bit of practice, but don't ever mistake it for real math.
2017-11-25 09:12:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7006533145904541, "perplexity": 588.0251869149928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809746.91/warc/CC-MAIN-20171125090503-20171125110503-00401.warc.gz"}
https://indico.desy.de/event/27991/contributions/102297/
# ICRC 2021 Jul 12 – 23, 2021 Online Europe/Berlin timezone ## Energy spectrum of cosmic rays measured using the Pierre Auger Observatory Jul 14, 2021, 12:00 PM 1h 30m 03 #### 03 Talk CRI | Cosmic Ray Indirect ### Speaker Vladimír Novotný (IPNP, Charles University, Prague) ### Description We present the energy spectrum of cosmic rays measured at the Pierre Auger Observatory from $6 \times 10^{15}$ eV up to the most extreme energies where the accumulated exposure reaches about 80 000 km$^2$ sr yr. The wide energy range is covered with five different measurements, namely using the events detected by the surface detector with zenith angles below 60 degrees and, with a different reconstruction method, those above 60 degrees; those collected by a denser array; the hybrid events simultaneously recorded by the surface and fluorescence detectors; and those events in which the signal is dominated by Cherenkov light registered by the high-elevation telescopes. In this contribution, we report updates of the analysis techniques and present the spectrum obtained by combining the five different measurements. Spectral features occurring in the wide energy range covered by the Observatory are discussed. ### Keywords cosmic rays; energy spectrum; Pierre Auger Observatory Subcategory Experimental Results Auger ### Primary authors Vladimír Novotný (IPNP, Charles University, Prague) ### Presentation materials There are no materials yet.
2022-01-17 09:41:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2908404767513275, "perplexity": 4184.898147271966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300533.72/warc/CC-MAIN-20220117091246-20220117121246-00575.warc.gz"}
https://www.numerade.com/topics/subtopics/slopes-and-rates-of-change/
# Slopes and Rates of Change ## Calculus 1 / Ab ### 493 Practice Problems 02:44 Calculus with Applications News Sources A survey asked respondents where they got news "yesterday" (the day before they participated in the survey). In $2006,$ about 40$\%$ of respondents got their news from the newspaper, while 23$\%$ got their news online. In $2012,$ about 29$\%$ got their news from the newspaper, while 39$\%$ got their news online. Both news sources changed at a roughly linear rate. Source: State of the Media. (a) Find a linear equation expressing the percent of respondents, $y_{n},$ who got their news from the newspaper in terms of $t,$ the years since 2000 . (b) Find a linear equation expressing the percent of respondents, $y_{o},$ who got their news online in terms of $t$ , the years since $2000 .$ (c) Find the rate of change over time for the percentage of respondents for each source of news. Linear Functions Slopes and Equations of Lines 01:40 Calculus with Applications Global Warming In $1990,$ the Intergovernmental Panel on Climate Change predicted that the average temperature on Earth would rise $0.3^{\circ} \mathrm{C}$ per decade in the absence of international controls on greenhouse emissions. Let $t$ measure the time in years since $1970,$ when the average global temperature was $15^{\circ} \mathrm{C} .$ Source: Science News. (a) Find a linear equation giving the average global temperature in degrees Celsius in terms of $t$ , the number of years since 1970 . (b) Scientists have estimated that the sea level will rise by 65 $\mathrm{cm}$ if the average global temperature rises to $19^{\circ} \mathrm{C}$ . According to your answer to part (a), when would this occur?
Linear Functions Slopes and Equations of Lines 02:27 Calculus with Applications Galactic Distance The table lists the distances (in megaparsecs where 1 megaparsec $\approx 3.1 \times 10^{19} \mathrm{km}$ ) and velocities (in kilometers per second) of four galaxies moving rapidly away from Earth. Source: Astronomical Methods and Calculations, and Fundamental Astronomy. $$\begin{array}{|c|c|c|}\hline \text { Galaxy } & {\text { Distance }} & {\text { Velocity }} \\ \hline \text { Virgo } & {15} & {1600} \\ \hline \text { Ursa Minor } & {200} & {15,000} \\ \hline \text { Corona Borealis } & {290} & {24,000} \\ \hline \text { Bootes } & {520} & {40,000} \\ \hline \end{array}$$ (a) Plot the data points, letting $x$ represent distance and $y$ represent velocity. Do the points lie in an approximately linear pattern? (b) Write a linear equation $y=m x$ to model these data, using the ordered pair $(520,40,000)$ . (c) The galaxy Hydra has a velocity of $60,000 \mathrm{km}$ per sec. Use your equation to approximate how far away it is from Earth. (d) The value of $m$ in the equation is called the Hubble constant. The Hubble constant can be used to estimate the age of the universe $A$ (in years) using the formula $$A=\frac{9.5 \times 10^{11}}{m}.$$ Approximate $A$ using your value of $m$. Linear Functions Slopes and Equations of Lines 04:36 Calculus with Applications Marriage The following table lists the U.S. median age at first marriage for men and women. The age at which both groups marry for the first time seems to be increasing at a roughly linear rate in recent decades. Let $t$ correspond to the number of years since $1980 .$ Source: U.S. Census Bureau. (a) Find a linear equation that approximates the data for men, using the data for the years 1980 and $2010 .$ (b) Repeat part (a) using the data for women. (c) Which group seems to have the faster increase in median age at first marriage?
(d) According to the equation from part (a), in what year will the men's median age at first marriage reach 30$?$ (e) When the men's median age at first marriage is $30,$ what will the median age be for women? Linear Functions Slopes and Equations of Lines 03:08 Calculus with Applications Immigration In 1950 , there were $249,187$ immigrants admitted to the United States. In $2012,$ the number was $1,031,631$ Source: 2012 Yearbook of Immigration Statistics. (a) Assuming that the change in immigration is linear, write an equation expressing the number of immigrants, $y,$ in terms of $t$ , the number of years after 1900 . (b) Use your result in part (a) to predict the number of immigrants admitted to the United States in $2015 .$ (c) Considering the value of the $y$ -intercept in your answer to part (a), discuss the validity of using this equation to model the number of immigrants throughout the entire 20 th century. Linear Functions Slopes and Equations of Lines 02:15 Calculus with Applications Child Mortality Rate The mortality rate for children under 5 years of age around the world has been declining in a roughly linear fashion in recent years. The rate per 1000 live births was 90 in 1990 and 48 in $2012 .$ Source: World Health Organization. (a) Determine a linear equation that approximates the mortality rate in terms of time $t,$ where $t$ represents the number of years since $1900 .$ (b) One of the Millennium Development Goals (MDG) of the World Health Organization is to reduce the mortality rate for children under 5 years of age to 30 by $2015 .$ If this trend were to continue, in what year would this goal be reached? Linear Functions Slopes and Equations of Lines 02:01 Calculus with Applications Life Expectancy Some scientists believe there is a limit to how long humans can live. 
One supporting argument is that during the past century, life expectancy from age 65 has increased more slowly than life expectancy from birth, so eventually these two will be equal, at which point, according to these scientists, life expectancy should increase no further. In $1900,$ life expectancy at birth was 46 yr, and life expectancy at age 65 was 76 yr. In 2010 , these figures had risen to 78.7 and $84.1,$ respectively. In both cases, the increase in life expectancy has been linear. Using these assumptions and the data given, find the maximum life expectancy for humans. Source: Science. Linear Functions Slopes and Equations of Lines 02:09 Calculus with Applications Ponies Trotting A study found that the peak vertical force on a trotting Shetland pony increased linearly with the pony's speed, and that when the force reached a critical level, the pony switched from a trot to a gallop. For one pony, the critical force was 1.16 times its body weight. It experienced a force of 0.75 times its body weight at a speed of 2 meters per second and a force of 0.93 times its body weight at 3 meters per second. At what speed did the pony switch from a trot to a gallop? Source: Science. Linear Functions Slopes and Equations of Lines 03:06 Calculus with Applications Exercise Heart Rate To achieve the maximum benefit for the heart when exercising, your heart rate (in beats per minute) should be in the target heart rate zone. The lower limit of this zone is found by taking 70$\%$ of the difference between 220 and your age. The upper limit is found by using 85$\% .$ Source: Physical Fitness. (a) Find formulas for the upper and lower limits $(u$ and $l)$ as linear equations involving the age $x .$ (b) What is the target heart rate zone for a 20 -year-old? (c) What is the target heart rate zone for a 40 -year-old? (d) Two women in an aerobics class stop to take their pulse and are surprised to find that they have the same pulse. 
One woman is 36 years older than the other and is working at the upper limit of her target heart rate zone. The younger woman is working at the lower limit of her target heart rate zone. What are the ages of the two women, and what is their pulse? (e) Run for 10 minutes, take your pulse, and see if it is in your target heart rate zone. (After all, this is listed as an exercise!) Linear Functions Slopes and Equations of Lines 02:08 Calculus with Applications HIV Infection The time interval between a person's initial infection with HIV and that person's eventual development of AIDS symptoms is an important issue. The method of infection with HIV affects the time interval before AIDS develops. One study of HIV patients who were infected by intravenous drug use found that 17$\%$ of the patients had AIDS after 4 years, and 33$\%$ had developed the disease after 7 years. The relationship between the time interval and the percentage of patients with AIDS can be modeled accurately with a linear equation. Source: Epidemiologic Review. (a) Write a linear equation $y=m t+b$ that models these data, using the ordered pairs $(4,0.17)$ and $(7,0.33) .$ (b) Use your equation from part (a) to predict the number of years before half of these patients will have AIDS. Linear Functions Slopes and Equations of Lines 02:39 Calculus with Applications Tuition The table lists the annual cost (in dollars) of tuition and fees at private four-year colleges for selected years. (See Example 13.) Source: The College Board. $$\begin{array}{ll}{\text { Year }} & {\text { Tuition and Fees }} \\ {2000} & {16,072} \\ {2002} & {18,060}\\{2004} & {20,045} \\ {2006} & {22,308} \\ {2008} & {24,818} \\ {2010} & {26,766} \\ {2012} & {28,989} \\ {2013} & {30,094}\end{array}$$ (a) Sketch a graph of the data. Do the data appear to lie roughly along a straight line? (b) Let $t=0$ correspond to the year $2000 .$ Use the points $(0,16,072)$ and $(13,30,094)$ to determine a linear equation that models the data.
What does the slope of the graph of the equation indicate? (c) Discuss the accuracy of using this equation to estimate the cost of private college in $2025 .$ Linear Functions Slopes and Equations of Lines 0:00 Calculus with Applications Cost The total cost for a bakery to produce 150 gourmet cupcakes is $\$225,$ while the total cost to produce 175 gourmet cupcakes is $\$247.$ Let $y$ represent the total cost for $x$ gourmet cupcakes. Assume that a straight line can approximate the data. (a) Find and interpret the slope of the cost line for the gourmet cupcakes. (b) Determine an equation that models the data. Write the equation in the slope-intercept form. (c) Use your answer from part (b) to determine an approximation of the cost of 200 gourmet cupcakes. Linear Functions Slopes and Equations of Lines
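Every problem in this set reduces to the same two-point recipe: slope $m = (y_2 - y_1)/(x_2 - x_1)$, then intercept $b = y_1 - m x_1$. A quick sketch of that recipe, applied to the cupcake-cost data from the last exercise (the helper name is mine; exact rational arithmetic keeps the slope $22/25$ free of rounding error):

```python
from fractions import Fraction

def line_through(p1, p2):
    """Slope-intercept form (m, b) of the line through two data points."""
    (x1, y1), (x2, y2) = p1, p2
    m = Fraction(y2 - y1, x2 - x1)  # rate of change
    b = y1 - m * x1                 # y-intercept (here, the fixed cost)
    return m, b

# 150 cupcakes cost $225, 175 cupcakes cost $247
m, b = line_through((150, 225), (175, 247))
cost = lambda x: m * x + b
print(m, b, cost(200))  # 22/25 ($0.88 per cupcake), 93 (fixed cost), 269
```

The same two lines answer parts (a)-(c): the slope is the marginal cost per cupcake, the intercept is the fixed cost, and evaluating the model at $x = 200$ gives the approximation asked for.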
2020-05-27 13:28:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5627775192260742, "perplexity": 959.5996976134081}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347394074.44/warc/CC-MAIN-20200527110649-20200527140649-00030.warc.gz"}
https://solvedlib.com/n/24-1-let-a-be-the-curve-y-6-for-0-5-lt-1-lt-3-6-2x521522,4465703
# Question: Let € be the curve y = 6 + … 2x for 0.5 < x < 3.6. [A plot of the curve over this interval appeared here.] Find the arc length of € shown above. First find and simplify $1 + (y')^2$. Then find the arc length. #### Similar Solved Questions ##### Part 1 For the simply supported beam subjected to the loading shown, derive equations for the shear force V and the bending moment M for any location in the beam. (Place the origin at point A.) Let a=1.750 m, b=5.75 m, PA = 75kN, and Pc = 80kN. Construct the shear-force and bending-moment diagrams o... ##### Use the Laws of Logarithms to evaluate the expression. log2(44) log_(11)...
##### QUESTION 2 Exercise Match the correct response to the structure or function. Of the two ventricles, which ventricular wall is thicker? the mitral valve Why is the wall of one ventricle thicker than the other? the pulmonary semilunar valve dupp Which valve prevents backflow into the left atrium? D. left ventricle Which valve prevents backflow into the right ventricle? the tricuspid valve Sound of the blood turbulence when the semilunar (lupp) valves are closing must pump blood to the body; Sound of the blood... ##### :Tranterm TO iort the Ber ilae oe Frut QJoponts) Use t-< Laplace IRay = A0) - > V(d} =... ##### fim O1Lot Fhp_Sblklio Ior_his_ cquofiog. 2 sinzo_3Sinb-0... ##### Find the area of the parallelogram whose vertices are listed (0,0), (3,6), (8,4), (11,10) The area of the parallelogram is square units... ##### 7. What is the major organic product when cyclohexane is subjected to the following sequence of reagents? (1) Br2, light; (2) Kt-BuO; (3) BH3.THF; (4) H2O2, NaOH; (5) PCC, CH2Cl2; (6) Br2, HO'; (7) Na Eto 2/5...
##### Tcrin snudects thjthay reading 5kIls at or belon o7cro estmate the fraction = eighth Erade level Inan earlier stuo}; The siate education comm grade population proportion was estimated to bc 0.2 rcquirco ordcm Estimate thc fraction oftcnth graders reading_ erlow the eighth grade level at the ED % confidence levct with_ How Jorge somplc would ertor most 0.083? Round your answicr Up to the next integer... ##### 3. Let [ = Jos*' dx a. Approximate integral / using the Trapezoidal method_ b. What is the percentage error?... ##### What is Ib? Please check the wrong answers first to not repeat them again. Part A In the circuit shown in the figure Vg-2420 V and I 5.5290° A. (Figure 1) Find Ib. Express your answer in complex form. I4.455-62.169 Part B... ##### Question The following selected account balances were taken from ABC Company's general ledgers for 2019: January 1, 2019 Inventory 74,000 Accounts payable 50,000 Salaries payable 9,000 Investments 68,000 Accounts receivable 69,000 Land 58,000 Notes payable 195,000 Unearned revenue 17,000 Common ... ##### Problem 7. (10 pts) The insurance company has a cohort of policyholders, 70% of whom are non-smokers and 30% of whom are smokers.
For the fixed age x, the insurance company's model for mortality has $q_x^{NS} = 0.15$ (non-smoker) and $q_x^{S} = 0.25$ (smoker). (a) A policyholder is chosen at random fr... ##### What is the primary difference between an asset and an expense? Multiple Choice An expense shows up on the balance sheet, while an asset shows up on the income statement. Companies should try to maximize assets, while minimizing expenses. An asset is purchased in cash, while an expense is financed on... ##### How do you differentiate f(x)=(lnx+x)(sinx+e^x) using the product rule?... ##### (a) Given the following system of linear equations 2x -2x2 +x, =4 _Axi -8x2 +4x, =3 3 +5x2 =8 find the inverse matrix by using the adjoint method. Hence, solve the system of linear equations using the inverse matrix method. (10 marks) ANSWER QUESTION B ONLY (b) A chemist needs three different alcohol solutions: A, B and C. x represents the amount of 50% alcohol solution; y represents the amount of 25% alcohol solution and z represents the amount of 10% alcohol solution needed (in litres). Table Q3 Solution... ##### Relevance, short-term. (A. Atkinson) Oxford Engineering manufactures small engines. The engines are sold to manufacturers who install them in such products as lawn mowers. The company currently manufactures all the parts used in these engines but is considering a proposal from an external supplier wh...
##### What are the negative externalities associated with climate change?... ##### Question 5 (10 points) Cholesky Factorization 1. Show that the following matrix is positive definite. 2. Find its Cholesky factorization. A = [1 2 -1; 2 13 -8; -1 -8 14]... ##### Fr2e Motion Andi catos tLe ol-sense c Jraudat;c oral force (l) spri ng ~erce danki] Jero: () Exterml feres (c) Botb (c) od (4... ##### Give a 3-coloring of a 4-by-4 grid with no monochromatic rectangles, or prove that no such coloring exists... ##### Find span of A1, A2, A3 (use elementary row operations) 1 1 4. = [ 13.43=[1] A =6... ##### Question 5 2 pts Which of the following statements is true? (a) we reject a null hypothesis if the p-value of the test is smaller than the level of the significance a. (b) we accept a null hypothesis if the p-value of the test is smaller than the level of the significance a. (c) we reject a null hyp... ##### 1. Confidence intervals can be constructed only under certain assumptions about the samples and population. For each situation below, either (i) construct the confidence interval, or (ii) explain why you cannot (which assumption is missing) and (iii) what would a researcher need to change to be ...
##### Alast ol hypothosls WaI pertormcd lor Ho" p = 57 veraus Hy ' 0257 on & normal population unkrown stndard daviabon ua H-tast catistic 1us clrlatod 821, which af tho following regardu) p-varuu? semola jna & 16 Wua uead End" P-valte 20 10588 ~vallt 202 P-vala P-value 1034 02 None 0l tne above... ##### Joule - What is the difference between static mic, d al wees for fund flow? How can we measure these types of pees? - Water flows in a pipe network between two w a factory is a certains part of the pipe system the flow is 295 There is where we changes in the pipe as illustrated in the next figure. T... ##### What do you think the future of health policy will look like in the United States? What do you personally think is the biggest issue which must be addressed in health policy?... ##### Abeortance 1olunr ) comdolnce Fleentena Chaper 15... ##### (Name it as chloride) CH; CH; CH;CHZCHCCHCHzCHCHCH; CH(CH;) Cl A O: H CH CHz CH;CHCHCH2CHCHCH; CHZCHzCH;...
2022-05-23 22:57:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5342336297035217, "perplexity": 9262.47069582431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562106.58/warc/CC-MAIN-20220523224456-20220524014456-00699.warc.gz"}
http://crypto.stackexchange.com/questions/5453/questions-about-williams-p1?answertab=active
First off, if you're doing Williams's p+1 test, then also doing Pollard's p-1 is redundant, since the p+1 test covers both cases, right? Second, why is the recurrence $V_{n+1} = aV_n - V_{n-1}$ used? Using $V_{n+1} = V_n + bV_{n-1}$ instead would give you a provable $\frac{1}{2}$ chance of getting a quadratic nonresidue for each choice of $b$. But no reference I've ever seen uses this variation. - @fgrieu But Williams's p+1 will also find factors such that p-1 is smooth, since you'll get quadratic residues half the time. –  Antimony Nov 23 '12 at 4:36 @fgrieu I'd be really interested to try that challenge. Is there any way for you to send me the numbers? I don't think SO's comment system is really suitable for that. –  Antimony Nov 23 '12 at 15:37 @fgrieu, I'm far from expert on this, but based on this, with prob. $1/2$ we have $(D/p)=1$, and then if $p-1$ is smooth, it looks like Williams's p+1 method should find $p$. At least, I assume this is the background behind Antimony's reasoning. (I don't know the answer to Antimony's question, either, but I'm just trying to elaborate what Antimony might be referring to.) –  D.W. Nov 23 '12 at 23:53 @fgrieu Whenever I try to copy your number, I get a bunch of nonascii junk mixed in. Are you sure you pasted it correctly? –  Antimony Nov 24 '12 at 0:56 @D.W.: I now start to understand what you, and I guess Antimony, are thinking of. In addition to the argument, that is supported by the "degenerates into a slow version of Pollard's p − 1 algorithm" fragment in the Wikipedia entry for Williams's p+1. I do not know to what degree that applies in practice, and will be thinking about it. Meanwhile I'll remove my earlier comments, and update my tentative start of an answer accordingly. –  fgrieu Nov 24 '12 at 16:59 Note: this is only an attempt at answering the first part of the question, asking if Williams's p+1 factorization method is redundant with Pollard's p-1, on the basis of how the algorithms are used in practice.
Pollard's p-1 (resp. Williams's p+1) factorization method is efficient to find a factor of $n$ if any of the factors $p$ of $n$ is such that $p-1$ (resp. $p+1$) has no prime factor above some moderate bound $B$. An improvement puts a bound $B_2$ for the highest prime factor of $p-1$ (resp. $p+1$), and another bound $B_1\ll B_2$ for the other factors. The original paper on Williams's p+1 also presents Pollard's p-1. Pollard's p-1 factorization is used in some recent factorization efforts with bounds up to $B_1≈2^{40}$ and $B_2≈2^{50}$; if unsuccessful, that's sometimes followed by Williams's p+1 with slightly lower bounds, before gearing-in ECM. If we construct a 1024-bit integer $n=p⋅q$, with $p+1=p_0⋅c_0$, $q+1=q_0⋅c_1$, $p−1=p_1⋅p_2⋅p_3⋅⋅⋅p_{11}⋅p_{12}⋅c_2$; $p$, $q$, $p_j$, $q_0$ primes; $p$ 415-bit, $q$ 610-bit, $p_0$ and $q_0$ at least 200-bit, other $p_j$ 32-bit; then it is likely amenable to Pollard's p-1 factorization (because $p-1$ has no factor wider than 32-bit), but I can't tell if it would be amenable to Williams's p+1 factorization (because both $p+1$ and $q+1$ have a high prime factor). One such integer (also in this pastebin) is $n =$ 170008213545910965886460576572090982063408798024984543559001546422534644045470603998698706971810963093964580198788881904271608774213396896678573575267676754780622889919559692654436815810637860509009977667589657189496387034548011094365919416175990986348895410113935005204972304311894659720336969894022598750477 Another similarly constructed integer of 448 bits, factorisable with Pollard's p-1 setup with $B_1=4000$, $B_2=10000$, is $n =$ 726286104974888320831459714524497735770165786243885681724247623636059281197969465033496277725004244158329276076523947799294094896411843 - At the current rate, it will take my script 10-20 years to solve that, assuming that you used 32 bit factors. But that is unoptimized python running on a five year old laptop. I guess I need to find a more optimized implementation.
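For concreteness, stage 1 of Pollard's p-1 can be sketched as below. This is a toy version: real implementations add the second stage that the $B_1$/$B_2$ bounds refer to and use precomputed prime tables; the function names and the small test modulus are mine, chosen so the smoothness condition visibly holds.

```python
from math import gcd

def small_primes(bound):
    """Primes up to bound by trial division -- fine for toy bounds."""
    primes = []
    for c in range(2, bound + 1):
        if all(c % p for p in primes if p * p <= c):
            primes.append(c)
    return primes

def pollard_p_minus_1(n, B):
    """Stage-1 Pollard p-1: returns a nontrivial factor of n whenever
    some prime p | n has p-1 built only from prime powers <= B."""
    a = 2
    for p in small_primes(B):
        pk = p
        while pk * p <= B:  # largest power of p not exceeding B
            pk *= p
        a = pow(a, pk, n)   # a <- a^(p^k) mod n
    g = gcd(a - 1, n)
    return g if 1 < g < n else None

# 1403 = 23 * 61, and 61 - 1 = 60 = 2^2 * 3 * 5 is 10-smooth:
print(pollard_p_minus_1(1403, 10))  # 61
```

The example finds 61 but not 23, since 23 - 1 = 22 has the factor 11 > B; that asymmetry is exactly why the challenge numbers above are built with one deliberately smooth $p-1$.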
Or you could use smaller factors. –  Antimony Nov 24 '12 at 14:49 @Antimony: I just started to understand your question. Also, I can make another smaller number; what about $N$ of 448 bits, a bound $B_2$ of 20 bits, a bound $B_1$ of 15 bits? Feel free to change that until I post it. –  fgrieu Nov 24 '12 at 17:14 It can find anything with $B_2$ less than say, 10,000 in minutes. 20 bits would take around 42 hours. –  Antimony Nov 24 '12 at 17:26
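For reference, the Lucas sequence from the question's recurrence $V_{n+1} = aV_n - V_{n-1}$ (with $V_0 = 2$, $V_1 = a$) can be computed naively as below; production p+1 code instead reaches huge smooth indices in $O(\log n)$ steps via the doubling identities such as $V_{2k} = V_k^2 - 2$. The function name is mine:

```python
def lucas_v(a, n, m):
    """n-th term of V_0 = 2, V_1 = a, V_{k+1} = a*V_k - V_{k-1}, mod m."""
    v_prev, v_cur = 2 % m, a % m
    if n == 0:
        return v_prev
    for _ in range(n - 1):
        v_prev, v_cur = v_cur, (a * v_cur - v_prev) % m
    return v_cur

print([lucas_v(4, k, 1000) for k in range(4)])  # [2, 4, 14, 52]
```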
2015-03-31 02:22:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8651970624923706, "perplexity": 850.327397925102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131300222.27/warc/CC-MAIN-20150323172140-00060-ip-10-168-14-71.ec2.internal.warc.gz"}
https://www.vtmarkets.com/analysis/strong-us-job-report-eased-recession-fears/
### Strong US job report eased recession fears

###### July 11, 2022

US stocks closed flat on Friday after struggling for direction throughout the session, as the strong US job report eased recession fears but further supported the case for a 75 bps rate hike by the Fed to fight inflation. Earlier in the North American session, the Nonfarm Payrolls report showed the economy added 372K jobs in June, exceeding the market estimate of 268K and giving the Fed reason to stay aggressive in combating inflation. Two Federal Reserve officials also said that they supported raising the interest rate by 75 basis points for the second month in a row. In the Eurozone, the consensus amongst ECB policymakers remained around a 25 bps rate hike. On top of that, the EU's ongoing energy crisis kept weighing on the shared currency, as the economy is likely to face an energy shortage after prohibiting oil imports from Russia.

The benchmarks, the S&P 500 and the Nasdaq 100, both edged higher on Friday as traders remained mixed on the upbeat job report. The S&P 500 was little changed on a daily basis and the Nasdaq 100 advanced with a 0.1% gain for the day. But nine out of eleven sectors stayed in negative territory, with materials and real estate the worst-performing groups, losing 1.00% and 0.55%, respectively. The Dow Jones Industrial Average meanwhile slipped 0.1% on Friday, while the MSCI World index rose 1.6%.

### Main Pairs Movement

The US dollar edged lower on Friday, losing its upside traction and retreating back to the 106.9 level after the softer risk tone lent support to the safe-haven greenback. The DXY index witnessed heavy bullish momentum and was pushed to a daily high above the 107.7 level in the early European session, but then saw fresh selling and surrendered most of its daily gains.
The market focus remained on the Fed's normalisation process and its next moves on interest rates, as June's US Nonfarm Payrolls report exceeded expectations and reaffirmed the economy's strength. GBP/USD advanced slightly with a 0.08% gain on Friday amid a weaker US dollar across the board. But investors remain concerned that the UK government's controversial Northern Ireland Protocol Bill could trigger a trade war with the European Union. The cable remained under bearish pressure and dropped to a daily low below the 1.193 mark, but then regained upside strength to recover all of its daily losses. Meanwhile, EUR/USD rebounded after dropping to a fresh 20-year low at 1.0071 during the European session. The pair was up almost 0.30% for the day.

Gold advanced with a 0.12% gain for the day after touching a daily high above $1750 during the US trading session, despite the US dollar rising to fresh 20-year highs as US bond yields surged. Meanwhile, WTI oil climbed back toward the $105 area as supply fears amongst traders spurred a rise in oil prices.

### Technical Analysis

#### USDJPY (4-Hour Chart)

USDJPY oscillated amid the US economic data and the news of the assassination of Japan's ex-leader Shinzo Abe. Technically speaking, the outlook remains positive on the four-hour chart. The US dollar remains supported by the fundamental backdrop and the technical perspective. USDJPY got pushed higher with the formation of a higher high. The breakout above the 135.70 resistance gives USDJPY upside momentum toward the next resistance at 137.00. As the RSI indicator is still far from overbought, USDJPY is expected to trade higher. On the flip side, USDJPY needs to fall below the bullish trend line and the support at 134.89 in order to lose traction.

Resistance: 135.70, 137.00
Support: 134.89, 134.24, 133.59

#### GBPUSD (4-Hour Chart)

GBPUSD has managed to advance into positive territory above the descending channel.
The pair's upside movement reflects improved risk sentiment, making it harder for the US dollar to preserve its strength despite better-than-expected Nonfarm Payrolls in June. From the technical perspective, intraday upside momentum boosted GBPUSD above the midline of the Bollinger Band; in the meantime, GBPUSD has advanced above the bearish channel. Both suggest that GBPUSD is staging a positive rebound in the near term. Acceptance above 1.2063 would confirm the positive shift. On the flip side, if GBPUSD fails to hold above the bearish channel and fails to breach the immediate resistance, then it could potentially slip back into negative territory. As both the MACD and the RSI indicator are showing bullish signs, GBPUSD is expected to head further north.

Resistance: 1.2063, 1.2178, 1.2272
Support: 1.1876

#### Gold (4-Hour Chart)

Gold edges slightly higher despite rising US yields and better-than-expected US economic data. From the technical aspect, gold clings slightly above the support level of 1732.32, trying to defend the last support before heading further south. The lower-low formation has put gold under pressure, attracting some follow-through sellers. The current support should be robust, as the RSI has reached oversold territory and the MACD has turned slightly positive, showing signs of attracting some dip-buyers. If the support fails to hold, then more selling pressure would come into play.

Resistance: 1766.80, 1788.13, 1805.36
Support: 1732.32

### Economic Data
https://aimsciences.org/article/doi/10.3934/naco.2019034
# American Institute of Mathematical Sciences

doi: 10.3934/naco.2019034

## Existence and iterative approximation method for solving mixed equilibrium problem under generalized monotonicity in Banach spaces

1 Department of Economics, Faculty of Economics and Social Sciences, Ibn Zohr University, B.P. 8658 Poste Dakhla, Agadir, Morocco
2 National Institute of Science Education and Research Bhubaneswar, Pin-752050, India
3 Department of Mathematics, University of Central Florida, USA

* Corresponding author

Received September 2018, Revised March 2019, Published May 2019

We study a new class of mixed equilibrium problem (MEP, for short) under weakly relaxed $\alpha$-monotonicity in Banach spaces. This class of problems extends and generalizes some related fundamental results, such as mixed variational-like inequalities, variational inequalities, and classical equilibrium problems, as special cases. Existence and uniqueness of the solution to the problem are established. The auxiliary principle technique is used to obtain an iterative algorithm. Solvability of the auxiliary problem is established in the paper, and finally the convergence of the iterates to the exact solution is proved. As applications of the approach developed in this paper, we study the existence and an algorithmic approach for a general class of nonlinear mixed variational-like inequalities. The results obtained in this paper improve considerably on many existing results in the literature.

Citation: Ouayl Chadli, Gayatri Pany, Ram N. Mohapatra. Existence and iterative approximation method for solving mixed equilibrium problem under generalized monotonicity in Banach spaces.
Numerical Algebra, Control & Optimization, doi: 10.3934/naco.2019034
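For orientation, the classical mixed equilibrium problem underlying this line of work can be stated as follows (this is the standard textbook formulation; the paper's weakly relaxed $\alpha$-monotone version generalizes it, and its exact assumptions may differ):

```latex
% K: nonempty closed convex subset of a real Banach space X
% F: K x K -> R, a bifunction with F(x, x) = 0 for all x in K
% phi: K -> R, typically convex and lower semicontinuous
\text{Find } \bar{x} \in K \text{ such that}\quad
F(\bar{x}, y) + \varphi(y) - \varphi(\bar{x}) \,\ge\, 0
\qquad \text{for all } y \in K.
```

Taking $\varphi \equiv 0$ recovers the classical equilibrium problem, and one standard way to obtain mixed variational-like inequalities is the choice $F(x,y) = \langle Tx, \eta(y,x)\rangle$ for an operator $T$ and a map $\eta$, matching the special cases listed in the abstract.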
https://cs.stackexchange.com/questions/80186/is-there-a-programming-language-that-is-not-a-formal-language
# Is there a programming language that is not a formal language

Are there any programming languages out there that are not defined algorithmically, so that we could call them "constructed languages" but not "formal languages"? Are there any discussions or papers on that topic? It seems pretty hard to develop a compiler for this kind of language.

• I'm not sure what you mean. "Formal" in the sense of "formal language" just means that whether or not a string is in the language depends only on its form (i.e., based on considering it as a series of characters). – David Richerby Aug 18 '17 at 16:35
• I think a "formal language" needs to be defined algorithmically, for example by a formal grammar, while a "constructed language" can also be defined by examples. Esperanto is a constructed language but not a formal language. – user76239 Aug 18 '17 at 16:46
• But it is defined algorithmically: you have a grammar, rules and the dictionary. But in the case of programming languages, how would you parse such a language? – Evil Aug 18 '17 at 17:06
• What about using AI to compile English into C code? It seems possible to write "code" using a natural language like English. But I think Raphael's answer is good! – user76239 Aug 21 '17 at 7:24
• That would be an interesting idea, to have NLP parse your speech into code, then compile it into opcode – A.Rashad Aug 29 '17 at 16:01
• Re-reading my answer, I note that token-/macro-expansion languages such as LaTeX may be a counter-example. At least, I have observed plenty of times that pdflatex doesn't terminate, so it's arguably not a decider. (This may be true for some compilers, too; there it's probably a bug.) – Raphael Sep 13 '17 at 21:34
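Raphael's pdflatex observation is easy to reproduce: TeX's macro expansion is Turing-complete, so a TeX run need not terminate, which is why such a language is arguably not a decider for its own input. A minimal non-terminating example (my illustration, not from the thread — run at your own risk):

```latex
% \f expands to \f, so expansion never reaches a fixed point;
% tex or pdflatex will loop until interrupted or out of memory.
\def\f{\f}
\f
```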
http://physics.stackexchange.com/tags/experimental-technique/hot
# Tag Info

## Hot answers tagged experimental-technique

24
Quantum dots: nanoscale semiconductor materials that can confine excitons (electron-hole pairs) in 3 dimensions and release a photon a measurable time after excitation. Based on the material used, the decay time is known empirically. The frequency is also known. The latter is sufficient to calculate the energy of one photon; the former is then sufficient to calculate the rate of photon re-emission from ...

23
There are a variety of methods used to measure distance, each one building on the one before and forming a cosmic distance ladder. The first, which is actually only usable inside the solar system, is basic radar and LIDAR. LIDAR is really only used to measure the distance to the Moon. This is done by flashing a bright laser through a big telescope (such as ...

20
"Which experiment gave scientists reason to believe nuclear fission/fusion existed?" Fusion was first. Francis William Aston built a mass spectrometer in 1919 and measured the masses of various isotopes, realizing that the mass of helium-4 was less than 4 times that of hydrogen-1. From this information, Arthur Eddington proposed hydrogen fusion ...

15
You can't do this with a single "normal" lens. Because the beam width needs to be 4.25 inches, you need a lens wider than that (which is huge compared to normal optical components). The focal length of the lens would need to be 4.25 in/(2*sin(60 degrees)) ~ 2.5 inches = 63.5 mm, which is smaller than the width of the lens, and you can't really make normal ...

14
The resolving power of a prism is given by the formula $$\frac{\lambda}{\Delta \lambda} = b\ \frac{dn}{d\lambda},$$ where $b$ is the base length of the prism, $\lambda$ is the wavelength and $n(\lambda)$ is the refractive index. You don't say, but let's assume you are using a crown glass prism. According to this useful document, crown glass has ...
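The prism formula above is straightforward to evaluate numerically; the dispersion figure below is a rough representative value for a BK7-like crown glass in the visible (my assumption, not from the answer):

```python
def prism_resolving_power(base_m, dn_dlambda_per_m):
    """Chromatic resolving power R = lambda/dlambda = b * |dn/dlambda|
    for a prism illuminated across its full base length b."""
    return base_m * abs(dn_dlambda_per_m)

# BK7-like crown glass: n changes by roughly 0.008 between 486 nm and 656 nm,
# i.e. |dn/dlambda| ~ 4.7e4 per metre (assumed representative value).
# A prism with a 5 cm base then gives R ~ 2350:
print(prism_resolving_power(0.05, 4.7e4))
```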
14
In the double slit experiment, if you decrease the amplitude of the output light gradually, you will see a transition from continuous bright and dark fringes on the screen to single dots arriving one at a time. If you can measure the dots very accurately, you always see that there is one and only one dot there. It is proof of the existence of the smallest unit of each ...

13
The practical answer (which I also wrote in a comment on the linked question) is that you turn the intensity of the light source down until the expectation value for the number of photons on the optical path is low enough to suit you. If $\bar{n} = 0.1$ then very few of the events that are recorded on the screen will come from events where more than one ...

13
Binary objects provide the invaluable opportunity to observe objects interacting. The only way to observe most solo objects sitting in space is by the light they emit all by themselves. (All stars emit a low background of neutrinos resulting from their nuclear fusion, but only the Sun is close enough to observe that. A supernova emits a burst of neutrinos ...

12
Our lab has an ultra-high vacuum STM system ($10^{-11}$ torr), and all parts that go into the vacuum system have to be extremely clean. Here is what we do: first, I want to point out that the material you use for UHV is very important too. The commonly accepted materials are 316 stainless steel and oxygen-free copper. For other specialized materials, you should ...

10
A pendulum of (long) length L will tick with a period of $2\pi\sqrt{L\over g}$, and air resistance can be made negligible for a mm-sized oscillation of a heavy object on a several-meter long rigid arm. You need to determine the location of the center of mass accurately to know the effective value of $L$, but this can be done arbitrarily accurately by ...
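The pendulum relation in the last answer can be inverted to extract $g$ from a measured period; a small sketch (the numbers are illustrative, not from the answer):

```python
import math

def g_from_pendulum(length_m, period_s):
    """Invert T = 2*pi*sqrt(L/g) to get g = L * (2*pi/T)**2."""
    return length_m * (2 * math.pi / period_s) ** 2

# A 2.000 m rigid arm ticking with a 2.838 s period gives g close to 9.81 m/s^2:
print(g_from_pendulum(2.0, 2.838))
```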
9
The Aethrioscope (see the Wiki page with this name) was invented in 1818 by Sir John Leslie, and the basic idea for a pyrometer (see the Wiki page with this name) was conceived in the late 1700s by Josiah Wedgwood. These were calibrated by comparing observed colour with that of hot metals / clays (as appropriate) of known temperature. The idea was to heat a small ...

7
Try a styrofoam cooler: They're about $5 at Walmart. Your homemade dewars should fit easily inside and should be reasonably sealed from the atmosphere. We used to use this in my Senior Lab in undergrad to freeze samples overnight; there was always plenty left in the morning, so I imagine that the combination should get you near that 3 day mark.

7
As an "infrared survey technician" part of my job was keeping our infrared camera topped off with liquid nitrogen. We had a large dewar (about the size of a barrel) at the shop that was filled bi-weekly. For daily excursions we used a smaller dewar (about the size of a pony keg). Both of these dewars were specifically designed to contain cryogenic liquids, ...

7
Short answer: you don't. Slightly longer answer: You're using beams of particles, and you focus each of them as much as you practically can so that the particles in each beam are reasonably close together. The result is a wide variety of interaction distances, from far apart through near misses to closer interactions still. You mentioned electrons ...

7
Alternatively, I would look around the lab for an infrared thermometer. There exist in the market close-focus ones that go down to 6 mm in the close-focus option (so as not to advertise, google space accurate infrared thermometers microscopes, where I found the number in one of the first hits). I would choose a large ant, or attract more with a spot of honey ...

6
This is not really an answer to the question in the title, but a description of why the proposed short-baseline neutrino speed measurement is exceedingly difficult.
It relates to the question in the sense that it explains the limits of the precision with which $\delta t$ can be extracted in a neutrino experiment, without even touching on the kind of ...

6
This is one of my favorite questions in astronomy. It's extremely clever really. The Sun emits a certain colour of light that can be analysed. This colour is essentially completely white other than at certain frequencies. For example, an object emitting a turquoise light would be emitting every colour frequency other than parts of red, perhaps (there are other ...

6
Triangulation. The Earth is not stationary, it moves in a 150 million km (1 AU) radius orbit around the Sun. If you measure the apparent position of a star at different points in that orbit, a near enough object will appear to be displaced by a measurable amount. This displacement is called parallax, which is typically measured across a 1 AU baseline. A ...

6
Similar situations can arise in experimental work on large-scale machines, and there is a body of knowledge that gets passed around. The "can it hold pressure" test suggested by another answer works best if you can apply a pretty good over-pressure and have either a sensitive pressure gauge in the system or can afford to wait overnight to see how tight ...

6
Gold foil is quite easy to hold: you just hang it from a paperclip. The only difficulty is if there is a lot of static electricity in the air, which makes it stick to things. (This is the main reason for cold, damp Cambridge's supremacy in early particle physics.) Photographic film at the time wasn't sensitive, and so in Marsden and Geiger's experiments ...

6
Events in high energy physics detectors that can't produce useful data, mostly because they are the result of soft scattering events, are discarded by multiple layers of trigger circuits. What these circuits do is prescribed by so-called trigger menus, which are based on theoretical predictions about a large number of known and hypothetical physics event ...
6
You do exactly the same thing: you "rotate" the state and then measure along whatever axis your measurement apparatus happens to measure. The only difference here is that the "rotation" does not necessarily correspond to a rotation in space like it does for a true spin. What follows is a detailed description of how we do rotations of a generic 2-level ...

5
'Every piece of knowledge in science has a beginning lying in someone's experiment' is a view that has long stopped being correct. In fact, I'd say the idea of nuclear energy was first established when Einstein formulated mass-energy equivalence, $E=m_0c^2$, which shows that there are enormous amounts of energy hidden in ordinary matter, if you can find a ...

5
First, let's get user Jim's comment down to preserve it, as it is the first essential idea: "Note, you can treat the source like a focal point. Then any lens that is big enough to handle the laser can be used. You just have to put it one focal length away from the source." Now, to add to ARMS's great answer, you have yourself a nontrivial design problem. ...

5
Firing the ball straight upwards is the obvious experiment. The vertical trajectory has an analytic solution even including air resistance, so you could probably estimate the muzzle velocity and the air resistance. If this is too boring, hammer nails into your block of wood so the ball hits the nails and is impaled on them. The experiment can be made more ...

5
"Has anyone ever constructed an ultra-high vacuum system with half-assed, or no cleaning of parts?" Haven't we all done that at some point? How'd it turn out? Badly! Water and hydrogen are easy to bake off the internal surfaces, but get any hydrocarbons, skin grease, silicone, etc. on it and you'll be baking for days.

5
A little-known method is the convergent point method (or moving cluster method). Stars in open clusters move parallel through space, and due to the perspective effect they will appear to move towards a common point on the sky.
This point depends on the distance, and the distance can thus be calculated if the point in the sky can be determined. I have never ...

5
I wish there was an easy answer, but this is actually somewhat complicated, and to some extent is more art than science. There are several simple models that are used to predict molecular geometry; one of the most common is the VSEPR model. Based on this model, one can begin to perform calculations of the energy associated with different vibrational modes of ...

5
Have a look at http://arxiv.org/abs/1009.1569. In this article Thomas Juffmann discusses some of the practical issues in doing these experiments. In principle these experiments aren't hard, but in practice there are lots of technical difficulties. For example, the large molecules all need to be moving at the same velocity (i.e. the beam needs to be very cold) ...

5
Liquid nitrogen is used because nitrogen is extremely abundant on Earth. Nitrogen makes up approximately 78% of the atmosphere by volume. Hence, liquid nitrogen is rather easy to make (and consequently cheap). I've heard, for instance, that Fermilab buys liquid nitrogen for cheaper than what you pay for water. Liquid helium is useful for things that must go ...
https://nmag.readthedocs.io/en/latest/tutorial/doc.html
# 11. Mini tutorial on micromagnetic modelling

This section is intended for researchers who are just beginning to explore micromagnetic modelling. It is assumed that you have some knowledge of micromagnetics. We advise reading this whole section, and then looking at the Guided Tour examples (or exploring other Micromagnetic packages at that point).

## 11.1. Introduction to micromagnetic modelling

To carry out micromagnetic simulations, a set of partial differential equations has to be solved repeatedly. In order to be able to do this, the simulated geometry has to be spatially discretised. The two most widespread methods in micromagnetic modelling are the so-called finite difference (FD) method and the finite element (FE) method. With either the FD or the FE method, we need to integrate the Landau-Lifshitz-Gilbert equation numerically over time (this is a coupled set of ordinary differential equations). All these calculations are carried out by the micromagnetic packages, and the user does not have to worry about them.

The finite difference method subdivides space into many small cuboids. Sometimes the name cell is used to describe one of these cuboids. (Warning: in finite difference simulations, the simulated geometry is typically enclosed by a (big) cuboid which is also referred to as the simulation cell. Usually (!) it is clear from the context which is meant.) Typically, all simulation cells in one finite difference simulation have the same geometry. A typical size for such a cell could be a cube of dimensions 3 nm by 3 nm by 3 nm.

Let's assume we would like to simulate a sphere. The following picture shows an approximation of the shape of the sphere by cubes. This is the finite difference approach. For clarity, we have chosen rather large cubes to resolve the sphere – in an actual simulation one would typically use a much smaller cell size in order to resolve the geometry better.
On the other hand, the finite element method (typically) subdivides space into many small tetrahedra. The tetrahedra are sometimes referred to as the (finite element) mesh elements. Typically, the geometry of these tetrahedra varies throughout the simulated region. This makes it possible to combine tetrahedra to approximate complicated geometries. Using tetrahedra, a discretised sphere looks like this:

The spherical shape is approximated better than with finite differences.

The first step in setting up a micromagnetic simulation is to describe the geometry. In the case of finite difference calculations, how to tell the package what geometry you would like to use, and how small your simulation cells should be, will depend on the package you use (currently only OOMMF is freely available). In the case of finite element calculations, you need to create a finite element mesh (see Finite element mesh generation).

## 11.2. What is better: finite differences or finite elements?

This depends on what you want to simulate. Here are some points to consider.

• Finite difference simulations are best when the geometry you simulate is of rectangular shape (i.e. a cube, a beam, a geometry composed of such objects, a T profile, etc.). In these situations, the finite element discretisation of the geometry will not yield any advantage. (Assuming that the finite difference grid is aligned with the edges in the geometry.)
• Finite difference simulations generally need less computer memory (RAM). This is in particular the case if you simulate geometries with a big surface (such as thin films). See Memory requirements of boundary element matrix for a description of the memory requirements of the hybrid finite element/boundary element simulations (both Nmag and Magpar are in this category). If this turns out to be a problem for you, we suggest reading the section Compression of the Boundary Element Matrix using HLib.
- Finite element simulations are best suited to describing geometries with some amount of curvature, or angles other than 90 degrees. For such simulations, there is an error associated with the staircase discretisation that finite difference approaches have to use. This error is very much reduced when using finite elements. (We state for completeness that there are techniques to reduce the staircase effect in finite difference simulations, but these are currently not available in open source micromagnetic simulation code.)
- For finite element simulations, the user has to create a finite element mesh. This requires some practice (mostly to get used to a meshing package) and will often take a significant part of the time required to set up a finite element simulation.

## 11.3. What size of cells (FD) and tetrahedra (FE) should I choose?

There are several things to consider:

- The smaller the cells or tetrahedra, the more accurate the simulation results.
- The smaller the cells or tetrahedra, the more cells and tetrahedra are required to describe a geometry. Memory requirements and execution time increase with the number of cells and tetrahedra. In practice this will limit the size of the system that can be simulated.
- The discretisation length (the edge length of the cells or the tetrahedra) should be much smaller than the exchange length. The reason for this is that in the derivation of the micromagnetic (Brown's) equations, one assumes that the magnetisation changes little in space (there is a Taylor expansion for the exchange interaction). Therefore, we need to choose a discretisation length such that the direction of the magnetisation varies little from one site (cell centre in FD, node of a tetrahedron in FE) to the next.
The difference in direction of the magnetisation vector between neighbouring sites is sometimes referred to as the 'spin angle': a spin angle of 0 degrees means that the magnetisation at neighbouring sites points in the same direction, whereas a spin angle of 180 degrees would mean that they point in exactly opposite directions.

How much variation is acceptable, i.e. how big is the spin angle allowed to be? It depends on the accuracy required. Some general guidelines from M. Donahue [in email to H. Fangohr on 26 March 2002, referring to OOMMF], which we fully endorse:

[Begin quote M. Donahue]

- if the spin angle is approaching 180 degrees, then the results are completely bogus.
- over 90 degrees the results are highly questionable.
- under 30 degrees the results are probably reliable.

[End quote]

It is absolutely vital that the spin angle does not become excessive if the simulation results are to be trusted. (It is probably the most common error in micromagnetics: one would like to simulate a large geometry, thus one chooses a large discretisation length to get any results within reasonable time. However, the results are often completely useless if the spin angle becomes too large.) Because this is such an important issue, OOMMF – for example – provides Max Spin Ang data in its odt data table file (for the current configuration, the last stage, and the overall run). Nmag has columns maxangle_m_X in the Data files (.ndt) that provide this information (where X is the name of the magnetic material).

You will probably find that often a discretisation length of half the exchange length, or even about the exchange length, is chosen. If the spin angle stays sufficiently low during the whole simulation (including intermediate non-equilibrium configurations), then this may be acceptable. The ultimate test (recommended by – among others – M.
Donahue and the nmag team) is the following:

- Cell size dependence test: The best way to check whether the cell size has been chosen small enough is to perform a series of simulations with decreasing cell size. Suppose we are simulating Permalloy (Ni80Fe20 with Ms=8e5 A/m, A=1.3e-11 J/m) and the exchange length l1 is about 5nm. Suppose further we would like to use a cell size of 5nm for our simulations. We should then carry out the same simulation with smaller cell sizes, for example 4nm, 3nm, 2nm, 1nm. Now we need to study (typically plot) some (scalar) quantity from the simulation (such as the coercive field, or the remanent magnetisation) as a function of the cell size. Ideally, this quantity should converge towards a constant value when we reduce the simulation cell size below a critical cell size. This critical cell size is the maximum cell size that should be used to carry out the simulations.

Be aware that (i) it is often nearly impossible to carry out these simulations at smaller cell sizes [because of a lack of computational power], and (ii) this method is not 100% foolproof [the observed quantity may appear to converge towards a constant but actually start changing again if the cell size is reduced even further]. One should therefore treat the suggestions made above as advice on good practice, but never rely on them alone. Critical examination of the validity of simulation results is a fundamental part of any simulation work.

In summary, it is vital to keep the maximum spin angle small to obtain accurate results. One should always (!) check the value of the spin angle in the data files. One should also carry out a series of simulations where the discretisation length is reduced from one simulation to the next while keeping all other parameters and the geometry the same. This should reveal any changes in the results that depend on the discretisation length.

### 11.3.1. Exchange length

There is sometimes confusion about which equation should be used to compute the exchange length. In this document, we refer to this equation for soft materials (where the demagnetisation energy drives domain wall formation)

$l_\mathrm{1} = \sqrt{\frac{2A}{\mu_0 M^2_\mathrm{s}}}$

and this equation for hard materials (with uniaxial pinning), where the crystal anisotropy governs domain wall lengths

$l_2 = \sqrt{\frac{A}{K_1}}$

If in doubt which of the two equations is the right one, compute both l1 and l2 and choose the minimum of the two as the relevant exchange length for the system.

Michael Donahue and co-workers have published a couple of papers on the effect of cell size on vortex mobility, and one which included a section on discretisation-induced Néel wall collapse.

## 11.4. Micromagnetic packages

The following micromagnetic simulation packages are freely available on the internet:

These are general purpose packages. Some other (and partly closed source/commercial) packages are listed at http://math.nist.gov/oommf/otherlinks.html.

## 11.5. Summary

The most important points in short:

- Choose a small discretisation length so that the spin angle stays well below 30 degrees.
- If you want to simulate thin films (or other geometries with a lot of surface [nodes]) with finite elements, consider how much memory you would need for the boundary element matrix (best to do this before you start creating the mesh).
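For concreteness, the two exchange-length formulas above can be evaluated numerically. This Python sketch is not part of Nmag or OOMMF; it uses the Permalloy parameters quoted in the cell-size discussion, and the K1 value is a made-up number purely for illustration:

```python
import math

mu0 = 4e-7 * math.pi   # vacuum permeability (T m / A)
Ms = 8e5               # saturation magnetisation of Permalloy (A/m)
A = 1.3e-11            # exchange coupling (J/m)
K1 = 5e5               # hypothetical uniaxial anisotropy constant (J/m^3)

# Soft-material exchange length: l1 = sqrt(2A / (mu0 * Ms^2))
l1 = math.sqrt(2 * A / (mu0 * Ms ** 2))

# Hard-material exchange length: l2 = sqrt(A / K1)
l2 = math.sqrt(A / K1)

# If in doubt, the relevant exchange length is the smaller of the two.
l_ex = min(l1, l2)
print(l1, l2, l_ex)
```

For these Permalloy parameters l1 comes out at roughly 5.7 nm, consistent with the "about 5nm" figure used in the cell-size example above.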
Converse (logic)

In logic, the converse of a categorical or implicational statement is the result of reversing its two parts. For the implication P → Q, the converse is Q → P. For the categorical proposition All S is P, the converse is All P is S. In neither case does the converse necessarily follow from the original statement.[1] The categorical converse of a statement is contrasted with the contrapositive and the obverse.

Implicational converse

Let S be a statement of the form P implies Q (P → Q). Then the converse of S is the statement Q implies P (Q → P). In general, the verity of S says nothing about the verity of its converse, unless the antecedent P and the consequent Q are logically equivalent.

For example, consider the true statement "If I am a human, then I am mortal." The converse of that statement is "If I am mortal, then I am a human," which is not necessarily true.

On the other hand, the converse of a statement with mutually inclusive terms remains true, given the truth of the original proposition. Thus, the statement "If I am a bachelor, then I am an unmarried man" is logically equivalent to "If I am an unmarried man, then I am a bachelor."

A truth table makes it clear that S and the converse of S are not logically equivalent unless both terms imply each other:

  P | Q | P → Q | Q → P (converse)
  T | T |   T   |   T
  T | F |   F   |   T
  F | T |   T   |   F
  F | F |   T   |   T

Going from a statement to its converse is the fallacy of affirming the consequent. However, if the statement S and its converse are equivalent (i.e. if P is true if and only if Q is also true), then affirming the consequent will be valid.

Converse of a theorem

In mathematics, the converse of a theorem of the form P → Q will be Q → P. The converse may or may not be true. If true, the proof may be difficult. For example, the four-vertex theorem was proved in 1912, but its converse only in 1998. In practice, when determining the converse of a mathematical theorem, aspects of the antecedent may be taken as establishing context.
That is, the converse of "Given P, if Q then R" will be "Given P, if R then Q". For example, the Pythagorean theorem can be stated as:

Given a triangle with sides of length a, b, and c, if the angle opposite the side of length c is a right angle, then a² + b² = c².

The converse, which also appears in Euclid's Elements (Book I, Proposition 48), can be stated as:

Given a triangle with sides of length a, b, and c, if a² + b² = c², then the angle opposite the side of length c is a right angle.

Categorical converse

In traditional logic, the process of going from All S are P to its converse All P are S is called conversion. In the words of Asa Mahan, "The original proposition is called the exposita; when converted, it is denominated the converse. Conversion is valid when, and only when, nothing is asserted in the converse which is not affirmed or implied in the exposita."[2] The "exposita" is more usually called the "convertend."

In its simple form, conversion is valid only for E and I propositions:[3]

  Type | Convertend      | Simple converse | Converse per accidens
  A    | All S are P     | not valid       | Some P is S
  E    | No S is P       | No P is S       | Some P is not S
  I    | Some S is P     | Some P is S     |
  O    | Some S is not P | not valid       |

The validity of simple conversion only for E and I propositions can be expressed by the restriction that "No term must be distributed in the converse which is not distributed in the convertend."[4] For E propositions, both subject and predicate are distributed, while for I propositions, neither is.

For A propositions, the subject is distributed while the predicate is not, and so the inference from an A statement to its converse is not valid. As an example, for the A proposition "All cats are mammals," the converse "All mammals are cats" is obviously false. However, the weaker statement "Some mammals are cats" is true. Logicians define conversion per accidens to be the process of producing this weaker statement. Inference from a statement to its converse per accidens is generally valid.
However, as with syllogisms, this switch from the universal to the particular causes problems with empty categories: "All unicorns are mammals" is often taken as true, while the converse per accidens "Some mammals are unicorns" is clearly false.

In first-order predicate calculus, All S are P can be represented as $\forall x.\, S(x) \to P(x)$.[5] It is therefore clear that the categorical converse is closely related to the implicational converse, and that S and P cannot be swapped in All S are P.

References

1. ^ Robert Audi, ed. (1999), The Cambridge Dictionary of Philosophy, 2nd ed., Cambridge University Press: "converse".
2. ^ Asa Mahan (1857), The Science of Logic: or, An Analysis of the Laws of Thought, p. 82.
3. ^ William Thomas Parry and Edward A. Hacker (1991), Aristotelian Logic, SUNY Press, p. 207.
4. ^ James H. Hyslop (1892), The Elements of Logic, C. Scribner's sons, p. 156.
5. ^ Gordon Hunnings (1988), The World and Language in Wittgenstein's Philosophy, SUNY Press, p. 42.
## Experimental Mathematics

### Some Geometry and Combinatorics for the $S$-Invariant of Ternary Cubics

P. M. H. Wilson

#### Abstract

In earlier papers, P. M. H. Wilson, "Sectional Curvatures of Kähler Moduli," and B. Totaro, "The Curvature of a Hessian Metric," the $S$-invariant of a ternary cubic $f$ was interpreted in terms of the curvature of related Riemannian and pseudo-Riemannian metrics. This is clarified further in Section 3 of this paper. In the case that $f$ arises from the cubic form on the second cohomology of a smooth projective threefold with second Betti number three, the value of the $S$-invariant is closely linked to the behavior of this curvature on the open cone consisting of Kähler classes. In this paper, we concentrate on the cubic forms arising from complete intersection threefolds in the product of three projective spaces, and investigate various conjectures of a combinatorial nature arising from their invariants.

#### Article information

Source: Experiment. Math., Volume 15, Issue 4 (2006), 479-490.

Dates: First available in Project Euclid: 5 April 2007

Citation: Wilson, P. M. H. Some Geometry and Combinatorics for the $S$-Invariant of Ternary Cubics. Experiment. Math. 15 (2006), no. 4, 479-490. https://projecteuclid.org/euclid.em/1175789782
# Reality is a countably infinite Sierpiński cube

I was thinking out loud on Twitter about what weird beliefs I hold, after realising (more or less as I was writing it) that my philosophical positions are practically banal (at least to anyone who has thought about these issues a bit, whether or not they agree with me). I came up with a couple, but probably the most interesting (if very very niche) one I thought of is that the one true and accurate mathematical model of reality is ~~time cube~~ a closed, connected subset of the countably infinite Sierpiński cube.

I consider this opinion to be not that weird and, more importantly, obviously correct, but I'm aware that it's a niche opinion, so hear me out.

Before we start, a quick note on the nature of reality. I am being deliberately imprecise about what I mean by "reality" here, and basically mean "any physical system". This could be "life, the universe and everything" and we are attempting to solve physics, or it could be some toy restricted physical system of interest whose behaviour we are trying to nail down. This post applies equally well to any physical system we want to be able to model.

Consider an experiment. Let's pretend we can have deterministic experiments for convenience – you can easily work around the impossibility by making countably infinitely many copies of the experiment and considering each of them to be the answer you got the nth time you ran the experiment. Also for simplicity we'll assume that experiments can only have one of two outcomes (this is no loss of generality as long as experiments can only have finitely many outcomes – you just consider the finitely many experiments of the form "Was the outcome X?" – and if they have infinitely many outcomes you still need to ultimately make a finite classification of the result and so can consider the experiment composed with that classification).

There are three sensible possible outcomes you could have here:

- Yes
- No
- I don't know, maybe?
Physical experiments are inherently imprecise – things go wrong in your experiment, in your setup, in just about every bloody thing – so a set of experiments whose outcomes give you total certainty is implausible, and we can ignore it. That leaves us with experiments where one of the answers is maybe. It doesn't matter which answer the other one is (we can always just invert the question).

So we've run an experiment and got an answer. What does that tell us about the true state of reality? Well, whatever reality is, we must have some notion of "an approximate region" – all of our observation of reality is imprecise, so there must be some notion of precision to make sense of that.

What does the result of a physical experiment tell us about the state of reality? Well, if the answer is "maybe" it doesn't tell us anything. Literally any point in reality could be mapped to "maybe". But if the answer is yes then this should tell us only imprecisely where we are in reality, i.e. the set of points that map to yes must be an open set.

So an experiment is a function from reality to {yes, maybe}. The set of points mapping to yes must be an open set. And what this means is that experiments are continuous functions to the set {yes, maybe} endowed with the Sierpiński topology: the set {yes} is open, and the whole set and the empty set are open, but nothing else is.

Now let's postulate that if two states of reality give exactly the same answer on every single experiment, they're the same state of reality. This is true in the same sense that existing is the thing that reality does – a difference that makes no difference might as well be treated as if it is no difference.

So what we have is the following:

1. Any state of reality is a point in the cube $$S^E$$ where $$E$$ is the set of available experiments and $$S = \{\mathrm{yes}, \mathrm{maybe}\}$$.
2. All of the coordinate functions are continuous functions when $$S$$ is endowed with the Sierpiński topology.
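The three-open-set structure just described can be spelled out concretely. A toy Python check (purely illustrative, not from the original post) that the Sierpiński topology on {yes, maybe} really is a topology, and that {maybe} is not open:

```python
from itertools import combinations

# The Sierpinski topology on S = {"yes", "maybe"}: exactly three open sets.
S = frozenset({"yes", "maybe"})
opens = {frozenset(), frozenset({"yes"}), S}

# Closed under (pairwise, hence finite) unions and intersections.
for a, b in combinations(opens, 2):
    assert a | b in opens
    assert a & b in opens

# {maybe} is NOT open: a "maybe" outcome pins down nothing about the state.
assert frozenset({"maybe"}) not in opens
```

This is exactly why a yes answer localises you in an open set while a maybe answer tells you nothing.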
This is almost enough to show that reality can be modelled as a subset of the Sierpiński cube – almost, but not quite: there are many topologies compatible with this, and reality could have the discrete topology.

But we are finite beings. And what that means is that at any given point in time we can have observed the outcomes of at most finitely many experiments. Each of these experiments determines where we are only up to the open set of some coordinate in our cube; thus the set that the experiments have determined us to be in is an intersection of finitely many open sets in the product topology on that cube, and is therefore open in that topology. So the set of states of reality that we know we are in is always an open set in the product topology, and this is the "natural" topology on reality.

So reality is a subset of a Sierpiński cube. We now have to answer two questions to get the rest of the way:

- How many dimensions does the cube have?
- What sort of subset is it?

The first one is easy: the set of experiments we can perform is definitely infinite (we can repeat a single experiment arbitrarily many times). It's also definitely countable, because any experiment we can perform is one we can describe (and two experiments are distinct only up to our ability to describe that distinction), and there are only countably many sentences. So reality is a subset of the countably infinite dimensional Sierpiński cube.

What sort of subset? Well, that's harder, and my arguments for it are less convincing.

It's probably not usually the whole set. It's unlikely that reality contains a state that is just constantly maybe.

It might as well be a closed set, because if it's not we can't tell – there is no physical experiment we can perform that will determine that a point in the closure of reality is not in reality, and it would be aesthetically and philosophically displeasing to have aphysical states of reality that are approximated arbitrarily well.
In most cases it's usually going to be a connected set. Why? Well, because you're "in" some state of reality, and you might as well restrict yourself to the path component of that state – if you can't continuously deform from where you are to another state, that state probably isn't interesting to you even if it in some sense exists.

Is it an uncountable subset of the Sierpiński cube? I don't know, maybe. Depends on what you're modelling.

Anyway, so there you have it. Reality is a closed, connected subset of the countably infinite dimensional Sierpiński cube.

What are the philosophical implications? Well, one obvious philosophical implication is that reality is compact, path connected, and second countable, but may not be Hausdorff. (You should imagine a very very straight face as I delivered that line.)

More seriously, the big implication for me is on how we model physical systems. We don't have to model physical systems as the Sierpiński cube. Indeed we usually won't want to – it's not very friendly to work with – but whatever model we choose for our physical systems should have a continuous function (or, really, a family of continuous functions, to take into account the fact that we fudged the non-determinism of our experiments) from it to the relevant Sierpiński cube for the physical system in question.

Another thing worth noting is that the argument is more interesting than the conclusion, and in particular the specific embedding is more important than the fact that an embedding exists. In fact every second countable T0 topological space embeds in the Sierpiński cube, so the conclusion boils down to the fact that reality is a T0, second countable, compact, and connected (path connected, really) topological space (which are more or less the assumptions we used!). But I think the specific choice of embedding matters more than that, because the coordinates correspond to natural experiments we can run.
And, importantly, any decision we make based on that model needs to factor through that function. Decisions are based on a finite set of experiments, and anything that requires us to be able to know our model to more precision than the topology of reality allows us to is aphysical, and should be avoided.
# Quaternion Problems

This topic is 4914 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

Hi there, I'm attempting to use quaternions for the rotation of ships in a 3D Descent-like game. Here's the issue I'm having: I rotate/multiply/whatever the quaternion, turn it into a matrix, apply it as the viewpoint, and then translate the viewpoint back with no problems, but when I then apply the matrix a second time to align the player's ship with the camera, it adds to the rotation instead of negating it. I've tried rotating a second quaternion in the opposite direction and applying that, but that doesn't seem to help much except while rotating about a vertical axis. As soon as I roll, or try to pitch up, it screws up. I don't really understand the math too well, and can't seem to figure much out from the gamedev articles, so if you could either point out a flaw in my thinking or point me to an example/tutorial of some sort I would greatly appreciate it.

Thanks, Jim

##### Share on other sites

post this code:

Quote:
> apply it as the viewpoint, and then translate the viewpoint back with no problems, but when I then apply the matrix a second time to align the player's ship with the camera, it adds to the rotation instead of negating it

-me

##### Share on other sites

```cpp
void CShip::positioncamera()
{
    GLfloat rotMatrix[ 16 ];
    qcamorientation.getMatrix( rotMatrix );
    glMultMatrixf( rotMatrix );
}

void CShip::draw()
{
    glPushMatrix();
    glEnable(GL_LIGHTING);
    glTranslatef(position.x, position.y, position.z);

    GLfloat rotMatrix[ 16 ];
    qcamorientation.getMatrix( rotMatrix );
    glMultMatrixf( rotMatrix );

    glScalef(.005, .005, .005);
    model.Draw();
    glDisable(GL_LIGHTING);
    glPopMatrix();
}
```

##### Share on other sites

oh right. if you've already positioned the camera and you know the ship is going to be in the same place, don't do any extra rotation on the ship.
you've already moved the world so that the camera is in the right place. drawing the ship is easy, you just need to draw it without any transformations and it should show up in the right place. drawing everything else is, IMHO, confusing the way you've done it. you need to transform everything else into camera local space and draw it there, rather than drawing it where it's supposed to be. the, again IMHO, easier way to do things is to use gluLookAt to position the camera. this will transform the appropriate matrices so the camera appears where you want it to be. then you can just draw everything else without any additional transformations. -me ##### Share on other sites The second way sounds a whole lot easier. =P I've got a function that can extract a heading vector from the quaternion, so I'd guess I add that to the position to get the view vector, but how do I get an up vector for the gluLookAt funct? ##### Share on other sites if you can get the view out of the quaternion you can get the up out of it as well. if you can't find happy docs for specific quaternion examples, convert the quat to a matrix and extract the view & up from there. you should be able to find plenty of info on how to extract orientation vectors from a quaternion. actually just take whatever your default forward vector is and postmultiply it by the quaternion, it'll give you the forward. then do the same with whatever is your default up vector (typically 0,0,1) though in openGL it may be (0,1,0) i forget. -me ##### Share on other sites I don't mean to be mean, but that's precicely your problem. If you simply copy/paste code here and there, you'll never be able to achieve as much as someone who actually understand what's going on. Do you even need quaternions in the first place? There's other ways to rotate besides quaternions! I would strongly suggest that you get a book which can help you undertsand what's hapenning. 
You can easily do a search on Amazon for something like "mathematics game" and you'll get a list of books available on the subject. Since I've looked into it myself not so long ago, you can also check overstock.com. It has a book which is unbelievably cheap and recent, "Mathematics for Game Developers".

##### Share on other sites

After playing around with it for a little while I got it to work with the gluLookAt() function, but it demonstrates the same behavior it did before. =( Here's what I've got:

```cpp
void CShip::positioncamera()
{
    CQuaternion qup = qcamorientation;
    qup.postMult( CQuaternion( 0, 1, 0 ) );
    CVector up = qup.getDirectionVector();
    CVector camview = position;
    gluLookAt(camposition.x, camposition.y, camposition.z,
              camview.x, camview.y, camview.z,
              up.x, up.y, up.z);
}

void CShip::draw()
{
    glPushMatrix();
    glEnable(GL_LIGHTING);
    glTranslatef(position.x, position.y, position.z);
    GLfloat rotMatrix[ 16 ];
    qcamorientation.getMatrix( rotMatrix );
    glMultMatrixf( rotMatrix );
    glScalef(.005, .005, .005);
    model.Draw();
    glDisable(GL_LIGHTING);
    glPopMatrix();
}
```

##### Share on other sites

Quote: Original post by Anonymous Poster
> I don't mean to be mean, but that's precisely your problem. If you simply copy/paste code here and there, you'll never be able to achieve as much as someone who actually understands what's going on. Do you even need quaternions in the first place? There are other ways to rotate besides quaternions! I would strongly suggest that you get a book which can help you understand what's happening. You can easily do a search on Amazon for something like "mathematics game" and you'll get a list of books available on the subject. Since I've looked into it myself not so long ago, you can also check overstock.com. It has a book which is unbelievably cheap and recent, "Mathematics for Game Developers".

I've tried using Euler angles, which seemed impossibly difficult when rotating around arbitrary axes, and polar coordinates, which don't have roll information.
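For reference, the "postmultiply your default axis by the quaternion" recipe suggested earlier in the thread boils down to rotating a vector by a unit quaternion via q·v·q*. This is a minimal Python sketch of the underlying math only – it does not use the thread's CQuaternion/CVector classes, and assumes (w, x, y, z) quaternion ordering:

```python
import math

def quat_mult(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    """Rotate vector v by unit quaternion q via q * v * conj(q)."""
    conj = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = quat_mult(quat_mult(q, (0.0,) + v), conj)
    return (x, y, z)

# A 90-degree rotation about the z axis takes the forward axis (1,0,0) to (0,1,0).
half = math.radians(90) / 2
q = (math.cos(half), 0.0, 0.0, math.sin(half))
forward = rotate(q, (1.0, 0.0, 0.0))
```

Applying `rotate` to the default forward and default up axes gives the view and up vectors needed for a gluLookAt-style camera, which is exactly the extraction described above.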
I don't copy/paste code for everything, but I'll admit that when I come across something I don't understand and can't figure out after researching and coding for a couple hours, I generally try to make example code work. I'm definitely still a beginner and am overwhelmed by the vast amount of stuff you need to know to make something kinda-sorta work right.

As far as the book thing goes, that's why I was asking not only for specific help but for tutorials. I've found a lot of basic OpenGL/Direct3D tutorials, and a lot of more advanced stuff, but nothing that lies where I'm at. I'm kinda broke and can't afford to spend $30-$40 every time I encounter a problem. That's why I'm asking for suggestions.

##### Share on other sites

I wrote a small program that among other things does quaternion rotation, and there was a thread about it... (and also, with sources)

edit: if you will use it, post your FPS to the thread :) it's not too old so it's not a necro.

##### Share on other sites

I am not an expert on this but are you using a left-handed system? I think quaternions are right-handed, so you may need to take the inverse of it to make it left-handed (if that's applicable).
2018-01-21 17:39:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34320560097694397, "perplexity": 888.6352916463856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890795.64/warc/CC-MAIN-20180121155718-20180121175718-00223.warc.gz"}
https://www.physicsforums.com/threads/deep-sea-diver-pressure.201696/
Deep Sea Diver Pressure

1. Nov 30, 2007

clipperdude21

1. A deep-sea diver has descended a distance of 50 m below the ocean surface. Assume that he is supplied with air from the surface. a) What is the oxygen partial pressure in the lungs of the diver at this depth? b) Suppose we want the oxygen partial pressure in the lungs of the diver at this depth to be the same as that at the surface. This can be done by providing the diver with a mixture of helium and air. Use Dalton's Law to compute the proper mixing ratio in moles between helium and air and explain the mechanism.

2. Relevant equations

3. (a) I got this right, I'm pretty sure. I just did P = P_atm + ρgh and got 605,700 Pa. Then I multiplied it by 0.21, since oxygen is 21% of air, and got 127,197 Pa. (b) I couldn't do this one!

2. Nov 30, 2007

Staff: Mentor

It would be helpful if one showed the steps and units. For part b: at the surface the pressure of air is 1 atm, 14.7 psia, or 101,325 Pa, of which ~0.21 is oxygen. What is the mixture of He/O2 such that the oxygen partial pressure is the same as the surface pressure when the diver is 50 m below the surface?
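A sketch of the arithmetic behind both parts, using the poster's numbers (101,325 Pa at the surface and 605,700 Pa total at 50 m — values carried over from the thread, not authoritative):

```python
P_ATM = 101325.0          # surface pressure, Pa
P_TOTAL = 605700.0        # total pressure at 50 m, from P_atm + rho*g*h (poster's value)
O2_FRACTION_AIR = 0.21    # mole fraction of O2 in air

# (a) O2 partial pressure when breathing plain air at depth (Dalton's law)
p_o2_air = O2_FRACTION_AIR * P_TOTAL

# (b) Dilute the air with helium so that the O2 partial pressure at depth
# equals the surface value 0.21 * P_ATM.  If f_air is the mole fraction of
# air in the mix, then 0.21 * f_air * P_TOTAL = 0.21 * P_ATM,
# so f_air = P_ATM / P_TOTAL.
f_air = P_ATM / P_TOTAL
he_to_air_moles = (1 - f_air) / f_air   # = P_TOTAL / P_ATM - 1

print(p_o2_air)            # ~127,197 Pa, matching part (a)
print(he_to_air_moles)     # ~4.98 moles of helium per mole of air
```

So the required mix is roughly 5 moles of helium per mole of air: the helium dilutes the oxygen by exactly the factor by which the total pressure exceeds surface pressure.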
2017-11-19 09:47:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8270188570022583, "perplexity": 1660.4375539650068}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805466.25/warc/CC-MAIN-20171119080836-20171119100836-00292.warc.gz"}
http://www.gradesaver.com/textbooks/science/physics/conceptual-physics-12th-edition/chapter-15-think-and-explain-page-299-300/89
## Conceptual Physics (12th Edition) Published by Addison-Wesley

# Chapter 15 - Think and Explain: 89

#### Answer

Water occupies the smallest volume at $4^{\circ}C$. If it starts at that temperature, then whether the water cools down slightly or heats up slightly, the volume increases either way. The fluid level will rise, but we don't know what happened to the temperature.

#### Work Step by Step

This is discussed on pages 293-295 and shown in Figures 15.20-15.21.
2017-03-24 00:27:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7228771448135376, "perplexity": 1416.7728823317207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187227.84/warc/CC-MAIN-20170322212947-00252-ip-10-233-31-227.ec2.internal.warc.gz"}
http://images.planetmath.org/shanthaprime
# Shantha prime

A Shantha prime $p$ is a prime number of the form $p=3^{q}-2$ with $q$ being a Mangammal prime (http://planetmath.org/MangammalPrime). The smallest Shantha prime is $7=3^{2}-2$. The next is $3^{541}-2$ and has 259 digits. Shantha primes are very rare among the smaller numbers. The above formulation generates mostly composite numbers.

## References

• 1 A. K. Devaraj, "Euler's Generalization of Fermat's Theorem-A Further Generalization", in Proceedings of Hawaii International Conference on Statistics, Mathematics & Related Fields, 2004.

Title: Shantha prime
Canonical name: ShanthaPrime
Date of creation: 2013-03-22 17:49:54
Last modified on: 2013-03-22 17:49:54
Owner: PrimeFan (13766)
Last modified by: PrimeFan (13766)
Numerical id: 26
Author: PrimeFan (13766)
Entry type: Definition
Classification: msc 11A41
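As a computational sanity check of the definition, here is a sketch using a generic probabilistic Miller–Rabin test (the implementation below is my own, not tied to any particular library):

```python
def is_probable_prime(n, bases=(2, 3, 5, 7, 11, 13, 17)):
    # Trial division by small primes, then Miller-Rabin with the given bases
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

print(is_probable_prime(3**2 - 2))    # True: 7 is the smallest Shantha prime
print(is_probable_prime(3**3 - 2))    # False: 25 = 3^3 - 2 is composite
print(is_probable_prime(3**541 - 2))  # True: the 259-digit example above
```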
2018-03-18 13:35:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 5, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7743818163871765, "perplexity": 10877.182497089829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645775.16/warc/CC-MAIN-20180318130245-20180318150245-00680.warc.gz"}
https://math.stackexchange.com/questions/3201419/absolute-value-rational-inequalities-help-please
# Absolute Value Rational Inequalities Help Please

I have read plenty of questions like this, but I still don't quite get it. For example, this question: $$\left| \frac{2x+1}{x-3} \right| \ge 2$$ How would I go about solving this? The method in the example went on to just square both sides of the inequality, and from there it formed a quadratic to solve. But I don't get why you can just square it — don't we need to take into consideration that, since the expression is an absolute value, it could also have a negative value? Like, 2x+1 can also be -(2x+1). Is it because, regardless of whether it were positive or negative, once you square it the result would be the same? I am confused because at other times we would typically have two cases, one for 2x+1 > 0 and one for 2x+1 < 0. Another question is: when can I just square both sides and continue from there? Is there a more complete way of doing it rather than just squaring both sides and being done? In what scenarios can't I square both sides? And what is the alternative method for solving those types of problems? Thanks.

• It completely depends on the problem and squaring may be useful to get rid of lots of square roots, et cetera – user665856 Apr 25 at 4:53
• Is this $$\left|\frac{2x+1}{x-3}\right|\geq 2$$ ? – Dr. Sonnhard Graubner Apr 25 at 5:33
• Can you see why $|x|\ge2$ is the same thing as $x^2\ge4$? – Gerry Myerson Apr 25 at 5:51
• @Dr.SonnhardGraubner. No, he wrote 2x + 1/x - 3 >= 2. – William Elliot Apr 25 at 6:11
• @WilliamElliot ... Was / in the original ASCII inequality meant to have high or low precedence? I interpreted it as low precedence. If that was a mistake, someone please roll back my edit. – Gregory Nisbet Apr 25 at 6:14

Hint: For $$x\neq 3$$ we can write $$|2x+1|\geq 2|x-3|$$ so we have to distinguish the following cases: a) $$x>3$$, where we have $$2x+1\geq 2(x-3)$$; b) $$-\frac{1}{2}\le x<3$$, where we get $$2x+1\geq -2(x-3)$$; c) $$x<-\frac{1}{2}$$, where we have $$-(2x+1)\geq -2(x-3)$$.

• Thanks, I have used this method. But I have a problem with the squaring-both-sides method. I don't get how you can square |2x+1| >= 2|x-3| on both sides, knowing that if x were negative the inequality sign would have to change? I watched a video proving why you can square both sides of an inequality. For example, |a|^2 > |b|^2, and what he wrote next was (a)^2 > (b)^2. You can square the modulus because an absolute value is always positive, but if you get rid of the modulus, a could be negative, and upon squaring a, the inequality symbol would have to swap, right? – Andrew Lee Apr 26 at 6:31
Recall that when the symbol $$|{\cdot}|$$ encloses an expression $$E(x,y),$$ say, we mean that the result -- here $$\left |E(x,y)\right |$$ -- is nonnegative, by definition -- that is, it is either positive, or else it vanishes. It follows from the explanation above that you can square both sides of your inequality, since $$\text{LHS}\ge 0,$$ and obviously $$2>0.$$ The explanation above holds, strictly speaking, only for inequalities involving $$\leq$$ (and of course, $$\geq$$), called weak inequalities; not for those involving the strong $$\lt.$$ The point of difference may appear minimal, but is essential from a strict perspective. If we have two nonnegative $$x,y$$ satisfying $$x\lt y,$$ then we can square them without disturbing the order $$\lt$$ only provided $$x,y$$ do not vanish simultaneously, for it is easy to see in that case that we have a false statement right from the beginning, namely $$0\lt 0.$$ This is the caveat to the above statements. PS. This is for completeness, as OP seems to need more clarification in this direction. I shall now solve your inequality in order to ascertain that everything I said is indeed clear. So we have $$\left| \frac{2x+1}{x-3} \right| \ge 2,$$ upon squaring both sides of which (this is legit as explained above) gives $$\left| \frac{2x+1}{x-3} \right|^2 \ge 4.$$ We pause here and note that the equality $$|A||B|=|AB|,$$ for any quantities $$A,B,$$ holds. This is a property satisfied by the modulus operation, whose proof would take us away from the main goal (try to convince yourself of its truth; by cases, perhaps). In that equality, if $$A=B,$$ then we have the true statement $$|A||A|=|A|^2=|AA|=|A^2|=A^2,$$ the last equality following provided that $$A$$ is real. Now returning to the main thread and applying the above result, we then have $$\left| \frac{2x+1}{x-3} \right|^2= \left| \left (\frac{2x+1}{x-3} \right)^2\right|=\left (\frac{2x+1}{x-3} \right)^2\ge 4.$$ I believe you can now proceed from here. 
• Thanks, but for here, how do we know if the expression inside the modulus is non-negative? For example, suppose x were -2, and you were to cross-multiply, ending up with |2x+1| >= 2|x-3|. How can you square both sides without knowing what the value of x is inside the modulus? If x were -2, the inequality sign has to change, right? – Andrew Lee Apr 26 at 6:26
• @AndrewLee We don't care what's inside the modulus symbol. We're squaring the modulus of whatever it is; since moduli are never negative, it is legit if the other side of the inequality is also nonnegative. Thus, what you're squaring is $\left |E(x,y,\ldots)\right |,$ and not $E(x,y,\ldots).$ – Allawonder Apr 26 at 10:57
• @AndrewLee I have added some more explanation in my answer. See if it now makes some sense to you. – Allawonder Apr 26 at 11:16
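A quick numerical check of the squaring approach discussed in the answers: for $x\neq 3$, cross-multiplying gives $|2x+1|\ge 2|x-3|$, squaring both (nonnegative) sides gives $(2x+1)^2\ge 4(x-3)^2$, and expanding yields $28x\ge 35$, i.e. $x\ge 5/4$. A sketch verifying this on a grid of sample points (the helper name is mine):

```python
def in_solution_set(x):
    # Direct check of the original inequality |(2x+1)/(x-3)| >= 2, x != 3
    return x != 3 and abs((2 * x + 1) / (x - 3)) >= 2

# Sample x from -5.0 to 5.0 in steps of 0.1, skipping the excluded x = 3
for x in [n / 10 for n in range(-50, 51) if n != 30]:
    squared = (2 * x + 1) ** 2 >= 4 * (x - 3) ** 2
    assert squared == in_solution_set(x)       # squaring preserved the set
    assert in_solution_set(x) == (x >= 1.25)   # matches x >= 5/4 (x != 3)
```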
2019-07-24 00:07:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 42, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8845717906951904, "perplexity": 272.7912473706526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530246.91/warc/CC-MAIN-20190723235815-20190724021815-00335.warc.gz"}
https://eprint.iacr.org/2015/634
### Phasing: Private Set Intersection using Permutation-based Hashing

Benny Pinkas, Thomas Schneider, Gil Segev, and Michael Zohner

##### Abstract

Private Set Intersection (PSI) allows two parties to compute the intersection of private sets while revealing nothing more than the intersection itself. PSI needs to be applied to large data sets in scenarios such as measurement of ad conversion rates, data sharing, or contact discovery. Existing PSI protocols do not scale up well, and therefore some applications use insecure solutions instead. We describe a new approach for designing PSI protocols based on permutation-based hashing, which makes it possible to reduce the length of items mapped to bins while ensuring that no collisions occur. We denote this approach as Phasing, for Permutation-based Hashing Set Intersection. Phasing can dramatically improve the performance of PSI protocols whose overhead depends on the length of the representations of input items. We apply Phasing to design a new approach for circuit-based PSI protocols. The resulting protocol is up to 5 times faster than the previously best Sort-Compare-Shuffle circuit of Huang et al. (NDSS 2012). We also apply Phasing to the OT-based PSI protocol of Pinkas et al. (USENIX Security 2014), which is the fastest PSI protocol to date. Together with additional improvements that reduce the computation complexity by a logarithmic factor, the resulting protocol improves run-time by a factor of up to 20 and can also have communication overhead similar to the previously best PSI protocol in that respect. The new protocol is only moderately less efficient than an insecure PSI protocol that is currently used by real-world applications, and is therefore the first secure PSI protocol that is scalable to the demands and the constraints of current real-world settings.

Note: Added a note on how to achieve correctness when using multiple mapping functions, as was pointed out in http://eprint.iacr.org/2016/665.
Category: Applications
Publication info: Published elsewhere. MAJOR revision. USENIX Security Symposium 2015
Contact author(s): michael zohner @ ec-spride de
History: 2016-07-27: last of 2 revisions
Short URL: https://ia.cr/2015/634
License: CC BY

BibTeX:

@misc{cryptoeprint:2015/634,
  author = {Benny Pinkas and Thomas Schneider and Gil Segev and Michael Zohner},
  title = {Phasing: Private Set Intersection using Permutation-based Hashing},
  howpublished = {Cryptology ePrint Archive, Paper 2015/634},
  year = {2015},
  note = {\url{https://eprint.iacr.org/2015/634}},
  url = {https://eprint.iacr.org/2015/634}
}
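The idea of shortening items without collisions can be sketched as a toy. Note this is an illustrative reconstruction of permutation-based hashing as summarized in the abstract, with parameters, names, and the hash-based stand-in for the random function all my own, not taken from the paper: an n-bit item x = x_L || x_R is placed in bin x_L XOR f(x_R), and only the shorter x_R is stored; the original item remains uniquely recoverable from (bin index, stored value), so distinct items can never collide on both.

```python
import hashlib

D = 8                      # number of bin-index bits (2^8 bins); toy parameter
N = 32                     # item length in bits; toy parameter

def f(x_r):
    # Stand-in for a random function of the stored part, returning D = 8 bits
    h = hashlib.sha256(x_r.to_bytes(4, "big")).digest()
    return h[0]

def phase(x):
    # Split x into high D bits and low N-D bits; XOR the high part with f(low)
    x_l, x_r = x >> (N - D), x & ((1 << (N - D)) - 1)
    return (x_l ^ f(x_r), x_r)      # (bin index, shortened stored value)

def unphase(bin_idx, x_r):
    # Invertibility is what guarantees no collisions between distinct items
    return ((bin_idx ^ f(x_r)) << (N - D)) | x_r

for x in (0, 12345, 0xDEADBEEF):
    b, stored = phase(x)
    assert unphase(b, stored) == x
```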
2022-12-06 16:45:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.648504376411438, "perplexity": 3222.0794993707223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711111.35/warc/CC-MAIN-20221206161009-20221206191009-00102.warc.gz"}
https://homework.zookal.com/questions-and-answers/before-beginning-a-hypothesis-test-an-analyst-specified-a-significance-786134192
# Question: Before beginning a hypothesis test, an analyst specified a significance level...

###### Question details

Before beginning a hypothesis test, an analyst specified a significance level of 0.10. Which of the following is true?

- There is a 90% chance that the alternative hypothesis is true.
- There is a 90% chance that the confidence interval will include the true mean of the population.
- There is a 10% chance of rejecting the null hypothesis when it is actually true.
- There is a 90% chance of rejecting the null hypothesis when it is actually false.
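By definition, the significance level is the probability of a Type I error — rejecting the null hypothesis when it is actually true. A small simulation sketch of that fact (assuming, as is standard, that p-values are uniformly distributed on [0, 1] under the null):

```python
import random

random.seed(0)          # fixed seed so the simulation is reproducible
alpha = 0.10            # the analyst's significance level
trials = 100_000

# Under H0, each test's p-value is uniform; we reject whenever p < alpha
rejections = sum(random.random() < alpha for _ in range(trials))

print(rejections / trials)   # close to 0.10, the Type I error rate
```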
2021-04-11 15:38:17
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8125073909759521, "perplexity": 339.76221505705456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038064520.8/warc/CC-MAIN-20210411144457-20210411174457-00485.warc.gz"}
https://mathoverflow.net/questions/315341/primes-arising-from-permutations-ii
Primes arising from permutations (II) In Question 315259 (cf. Primes arising from permutations) I asked a question on primes arising from permutations which looks quite challenging. Here I pose a new question in this direction which does not involve upper bounds for the least prime in an arithmetic progression with common difference $$n$$. QUESTION: Is my following conjecture true? Conjecture. (i) For each $$n=1,2,3,\ldots$$, there is a permutation $$\pi_n$$ of $$\{1,\ldots,n\}$$ such that $$k^2+k\pi_n(k)+\pi_n(k)^2$$ is prime for every $$k=1,\ldots,n$$. (ii) For any positive integer $$n\not=7$$, there is a permutation $$\pi_n$$ of $$\{1,\ldots,n\}$$ such that $$k^2+\pi_n(k)^2$$ is prime for every $$k=1,\ldots,n$$. (iii) For each $$n=1,2,3,\ldots$$, the number of permutations $$\pi_n$$ of $$\{1,\ldots,n\}$$ with $$k^2+\pi_n(k)^2$$ prime for all $$k=1,\ldots,n$$, is always a square. I have checked this conjecture for $$n$$ up to $$11$$. For example, $$(6,3,2,5,4,1)$$ is the unique permutation of $$\{1,\ldots,6\}$$ meeting the requirement in part (i) with $$n=6$$, and $$(1,3,2,5,4)$$ is the unique permutation of $$\{1,\ldots,5\}$$ meeting the requirement in part (ii) with $$n=5$$. Part (iii) of the conjecture looks quite mysterious! Let $$r(n)$$ be the number of permutations $$\pi_n$$ of $$\{1,\ldots,n\}$$ meeting the requirement in part (i), and let $$s(n)$$ be the number of permutations $$\pi_n$$ of $$\{1,\ldots,n\}$$ meeting the requirement in part (ii). Then $$(r(1),\ldots,r(11))=(1,1,3,1,5,1,17,9,21,16,196)$$ and $$(s(1),\ldots,s(11))=(1,1,1,1,1,4,0,16,4,144,64).$$ • $r(2n)$ might be a square too. – Zhi-Wei Sun Nov 15 '18 at 0:48 • The argument from my answer to mathoverflow.net/questions/315351 confirms (iii) as well, so nothing that mysterious happens... – Ilya Bogdanov Nov 15 '18 at 9:02 • ...and the same happens for $r(2n)$, since all even numbers should map to all odd ones. 
This is what breaks in the odd case here: there may be different pairs of odd numbers providing a prime. – Ilya Bogdanov Nov 15 '18 at 9:26
• Affirmative answer to (i) or (ii) would imply that for every $k$, a degree $2$ polynomial $x^2+k^2$ or $x^2+xk+k^2$ takes a prime value. I am willing to bet this is an open problem. – Wojowu Nov 15 '18 at 9:57
• You confirmed conjectures (ii) and (iii) through n=11. Here are the numbers of permutations for conjecture (iii) for n=12 through 25: 81, 256, 5184, 1600, 25600, 8100, 183184, 108900, 5924356, 342225, 9066121, 11356900, 106853569, 105698961 - they are all squares. – Jud McCranie Nov 16 '18 at 4:38
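The small values of $s(n)$ quoted in the question are easy to reproduce by brute force; here is a sketch (function names mine) that counts permutations $\pi_n$ of $\{1,\ldots,n\}$ with $k^2+\pi_n(k)^2$ prime for all $k$:

```python
from itertools import permutations

def is_prime(m):
    # Simple trial division; the values k^2 + p^2 here are small
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def s(n):
    # Count permutations of {1,...,n} with k^2 + pi(k)^2 prime for every k
    return sum(
        all(is_prime(k * k + p * p) for k, p in enumerate(perm, start=1))
        for perm in permutations(range(1, n + 1))
    )

print([s(n) for n in range(1, 9)])   # [1, 1, 1, 1, 1, 4, 0, 16]
```

The output matches the values $(s(1),\ldots,s(8))$ listed in the question, including the exceptional $s(7)=0$.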
2020-07-03 14:39:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 32, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8507705926895142, "perplexity": 161.46322349246617}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655882051.19/warc/CC-MAIN-20200703122347-20200703152347-00254.warc.gz"}
http://vlab.amrita.edu/?sub=3&brch=65&sim=174&cnt=1
Population with Continuous and Discrete Growth

# Exponential Growth:

If a population has a constant birth rate through time and is never limited by food or disease, it has what is known as exponential growth. With exponential growth, the birth rate alone controls how fast (or slow) the population grows.

# Objectives:

• To study the growth pattern of a population when there are no factors to limit its growth.
• To understand the various parameters of a population, such as the per capita rate of increase (r), per capita birth rate (b) and per capita death rate (d).
• To understand how these parameters affect the rate of growth of a population.

# Theory:

### What is a population?

A population is a collection of interbreeding individuals of the same species that live together in a region. Population ecology is the study of populations and how they change over time in relation to the environment, including the environmental influences on population density and distribution, age structure and variations in population size. Population size can be constant, increasing or decreasing.

### Various factors that affect the growth pattern of a population:

• Environmental factors: Every organism requires optimum conditions for its growth and proliferation. Polar bears will not survive if they are exposed to desert conditions.
• Food resources: Availability of food and water is very important for the development of an organism, and hence for maintaining a healthy body so as to reproduce efficiently.
• Species interactions: One species may depend on other species for survival. One species might be the food of the other, or the existence of one species might be mandatory for the other. Two species might share the same food and habitat, which eventually leads to competition. In such cases the population of one species might depend on that of the other.
• Natural calamity: The occurrence of a natural calamity will cause a steep decline in the number of organisms, or might even wipe out the species.
• Human interventions: Human activities such as poaching, deforestation and pollution caused by nearby human settlements can cause a drastic decrease in the number of species.

If there is a situation in which none of these factors are present to disturb the growth and proliferation of an organism, then the organism will keep on multiplying. Such a growth pattern is known as exponential growth. Nearly all populations will tend to grow exponentially as long as resources are available. This is the simplest type of population growth.

# Exponential Population Growth Model:

Exponential population growth occurs when a single species is not limited by other species (no predation, parasitism or competition), resources are not limited and environmental conditions are constant. In such conditions, the population grows exponentially at a constant percentage per unit time. A condition that permits exponential growth of a population is called an ecological vacuum. An ecological vacuum does not often occur in nature for a long period. In nature, exponential population growth commonly occurs during the recovery of a population after a large-scale disturbance (fire, epidemic, etc.).

In an exponential population growth model, the change in population size may be determined by the following factors:

Change in population size during a fixed time interval = births during the time interval − deaths during the time interval.

### Birth rate:

Birth rate or natality rate is a measure of the extent to which a population replenishes itself through births.

### Death rate:

Death rate is the rate at which a population loses individuals.

### Exponential population growth models may be classified into:

1. Discrete population growth model.
2. Continuous population growth model.

# Discrete Population Growth Model:

An exponential population growth model may be defined as a discrete population growth model if the individuals of a population show:

• Discrete breeding seasons.
• Overlapping or non-overlapping generations.
• A semelparous or iteroparous life history.

In a discrete-breeding population the species may breed only at a specific time, usually at a particular time of the year. Breeding seasons introduce some delay into the regulative process. If a species lives for a number of years and produces relatively few young each year, the delay of one year due to the discrete breeding season is likely to be short compared to the natural period of the species, and so any oscillations caused by the delay will be convergent. But if the adults that breed in one season rarely or never survive to breed in the next, this has an important effect on the dynamics. In discrete growth we consider the birth rate and death rate of the organisms.

In a population with overlapping generations, each generation lives for two periods, youth and old age (a two-period life version). At any time period one generation of youth coexists with one generation of elderly. At the beginning of the next period the elderly die off, the youth themselves become elderly, and a new generation of youth is born. Thus there are two overlapping generations living at any one time. With non-overlapping generations, every period a new generation arises and the old one dies off. Generations precede and follow each other, but they do not overlap at any point.

Semelparity and iteroparity refer to the reproductive strategy of an organism. A species is considered semelparous if it reproduces only a single time before it dies, and iteroparous if it can reproduce more than once in its lifetime.

In a population with discrete population growth, the growth depends on R, the geometric growth factor: the factor by which the population multiplies itself per time step, also called the fundamental net reproductive rate. The geometric growth factor is obtained from the difference between the number of births per year and the number of deaths per year.
Imagine a situation where the government of India starts a sanctuary far from human settlements for the protection of tigers (Panthera tigris). There is surplus food available in the form of small mammals. Poaching is strictly prohibited. There is no other factor to disturb the intrinsic growth of the population. Mating generally occurs between November and April. The gestation period is 16 weeks. If the population starts with 50 tigers, and the per capita birth rate per year of the population from which the tigers were obtained was 0.45 and the per capita death rate per year was 0.1, then the R value will be 1 + (0.45 − 0.1) = 1.35, and the population at the end of the first year, second year, etc. can be predicted.

At the end of the first year there would be: $N_1 = N_0 R = 50 \times 1.35 = 67.5$, i.e. about 68 tigers.

At the end of the second year there would be: $N_2 = N_0 R^2 = 50 \times 1.35^2 \approx 91$ tigers.

Thus at the end of t years the population will be: $N_t = N_0 R^t$.
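The discrete tiger example above can be sketched numerically (the function name is mine; R = 1 + (b − d) = 1.35 as in the text):

```python
N0, R = 50, 1.35    # starting population and geometric growth factor

def discrete_growth(n0, r_factor, years):
    # N_t = N0 * R^t, the discrete exponential growth formula
    return n0 * r_factor ** years

print(round(discrete_growth(N0, R, 1), 3))   # 67.5   (~68 tigers after year 1)
print(round(discrete_growth(N0, R, 2), 3))   # 91.125 (~91 tigers after year 2)
```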
Let us suppose human settlement is withdrawn from a small area in the Western Ghats and the forest begins to grow rapidly; slowly the numbers of trees, insects and small vertebrates begin to increase. Let us say 102 lion-tailed monkeys find their way into this ideal place of growth, and the growth of the population is as follows:

| Year | Interval (years) | Population |
|------|------------------|------------|
| 1972 | 0  | 102 |
| 1977 | 5  | 156 |
| 1982 | 10 | 202 |
| 1987 | 15 | 257 |
| 1992 | 20 | 302 |
| 1997 | 25 | 355 |

Then from the data: the rate of growth = (355 − 102)/25 = 10.12 individuals per year. Since the simulator expects a growth rate between 0 and 1, convert this to a decimal by dividing by 100, giving r ≈ 0.1012.

$\frac{dN}{dt}=rN$; on integrating, $N_{t}=N_{0}e^{rt}$.

Growth affects how common or rare a species is, which in turn affects the rate of population harvest: how many fish will there be after a period of trawling? How many trees can be cut from a population while ensuring the same growth of the tree population? It is also important for protecting now-endangered species; this can be done more easily if we know the growth rates of these species, and it is especially useful in expanding a population. Thus, although purely continuous or purely discrete population growth is a rare event in nature, these models have their own importance in predicting the behaviour of a population.

Calculate the growth using the equation and convert it to decimals to run in the simulator. Growth rates: 0.108, 0.1, 0.103, 0.1, 0.101.

## Note:

Cite this Simulator:

Copyright @ 2020 Under the NME ICT initiative of MHRD
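Returning to the continuous model of this section, a short Python sketch (function names are mine) estimates r from two censuses using $r = \frac{1}{t}\ln\frac{N_t}{N_0}$, which follows from $N_t = N_0 e^{rt}$. Note this gives ≈ 0.050, a more careful value than the rough difference-quotient estimate of ≈ 0.1012 used in the worked data above:

```python
import math

def per_capita_rate(n0, nt, t):
    """Solve N_t = N0 * exp(r t) for the instantaneous per-capita rate r."""
    return math.log(nt / n0) / t

def project_continuous(n0, r, t):
    """Exponential growth: N_t = N0 * exp(r t)."""
    return n0 * math.exp(r * t)

# Lion-tailed macaque censuses from the table: 102 in 1972, 355 in 1997
r = per_capita_rate(102, 355, 25)
print(round(r, 4))                              # ≈ 0.0499 per year
print(round(project_continuous(102, r, 25)))    # recovers the 1997 count, 355
```

Fitting r this way guarantees the model passes exactly through both census points.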
https://socratic.org/questions/59374b0511ef6b0d2b7d3713#435782
# Question d3713

Jun 7, 2017

See explanation.

#### Explanation:

Q: $\frac{\sin x + 1}{\cos x} = \left(\frac{(\tan x + \sec x) - 1}{(\tan x + 1) - \sec x}\right) \cdot \left(\frac{(\tan x + \sec x) + 1}{(\tan x + 1) + \sec x}\right)$

We take the RHS and reduce it to the LHS:

$\left(\frac{(\tan x + \sec x) - 1}{(\tan x + 1) - \sec x}\right) \cdot \left(\frac{(\tan x + \sec x) + 1}{(\tan x + 1) + \sec x}\right) = \frac{(\tan x + \sec x)^{2} - 1}{(\tan x + 1)^{2} - \sec^{2} x}$

$= \frac{\tan^{2} x + 2\tan x \sec x + \sec^{2} x - 1}{\tan^{2} x + 2\tan x + 1 - \sec^{2} x}$

Replace $\sec^{2} x - 1 = \tan^{2} x$ in the numerator and $\sec^{2} x = \tan^{2} x + 1$ in the denominator:

$= \frac{\tan^{2} x + 2\tan x \sec x + \tan^{2} x}{\tan^{2} x + 1 + 2\tan x - (\tan^{2} x + 1)}$

$= \frac{2\tan^{2} x + 2\tan x \sec x}{2\tan x}$

$= \tan x + \sec x = \frac{\sin x}{\cos x} + \frac{1}{\cos x}$

$= \frac{\sin x + 1}{\cos x}$

$\to$ proved
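The algebra can also be spot-checked numerically (a quick sanity check of my own, not part of the original answer):

```python
import math

def lhs(x):
    return (math.sin(x) + 1) / math.cos(x)

def rhs(x):
    t, s = math.tan(x), 1 / math.cos(x)
    # The product of the two fractions from the question
    return ((t + s) - 1) / ((t + 1) - s) * ((t + s) + 1) / ((t + 1) + s)

# Check at a few angles, avoiding x = 0 and x = pi/2 where a denominator vanishes
for x in (0.3, 0.7, 1.1):
    assert abs(lhs(x) - rhs(x)) < 1e-9
```

Both sides agree to machine precision wherever the denominators are nonzero.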
https://www.zbmath.org/?q=an%3A1460.65117
## Iterative solvers for Biot model under small and large deformations. (English) Zbl 1460.65117

Summary: We consider $L$-scheme and Newton-based solvers for the Biot model under large deformation. The mechanical deformation follows the Saint Venant-Kirchhoff constitutive law. Furthermore, the fluid compressibility is assumed to be non-linear. A Lagrangian frame of reference is used to keep track of the deformation. We perform an implicit discretization in time (backward Euler) and propose two linearization schemes for solving the non-linear problems appearing within each time step: Newton's method and the $L$-scheme. Each linearization scheme is also presented in a monolithic and a splitting version, extending the undrained split methods to non-linear problems. The convergence of the solvers presented here is shown analytically for cases under small deformation and numerically for examples under large deformation. Illustrative numerical examples are presented to confirm the applicability of the schemes, in particular for large deformation.

### MSC:

65M60 Finite element, Rayleigh-Ritz and Galerkin methods for initial value and initial-boundary value problems involving PDEs
65M22 Numerical solution of discretized equations for initial value and initial-boundary value problems involving PDEs
76S05 Flows in porous media; filtration; seepage

### Software:

deal.ii; poroelasticity
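The two linearization schemes named in the summary, Newton's method and the $L$-scheme, can be illustrated on a scalar toy equation. This is my own sketch, not the paper's algorithm: the $L$-scheme replaces the derivative by a fixed stabilization constant $L$, trading Newton's quadratic convergence for robustness.

```python
def newton(F, dF, x0, tol=1e-12, maxit=50):
    """Newton's method: linearize with the true derivative at each step."""
    x = x0
    for _ in range(maxit):
        dx = -F(x) / dF(x)
        x += dx
        if abs(dx) < tol:
            break
    return x

def l_scheme(F, L, x0, tol=1e-12, maxit=10000):
    """L-scheme: replace the derivative by a fixed constant L.
    Converges linearly for monotone F when L is large enough."""
    x = x0
    for _ in range(maxit):
        dx = -F(x) / L
        x += dx
        if abs(dx) < tol:
            break
    return x

F = lambda x: x ** 3 + 2 * x - 4      # monotone nonlinearity, unique real root
dF = lambda x: 3 * x ** 2 + 2
root_newton = newton(F, dF, 1.0)
root_l = l_scheme(F, L=12.0, x0=1.0)  # any L above max F' on the iterates works
print(abs(root_newton - root_l))      # both converge to the same root
```

Newton needs far fewer iterations here, but the L-scheme iteration is unconditionally stable for this monotone problem, which is the trade-off the abstract refers to.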
http://math.stackexchange.com/questions/97558/prove-fourier-series-of-e-cos-x-sin-sin-x-is-sum-n-0-infty-frac
# Prove: Fourier series of $e^{\cos x} \sin (\sin x)$ is $\sum_{n=0}^{\infty}\frac{\sin (nx)}{n!}$

I'd love your help with proving that the series $$\sum_{n=0}^{\infty}\frac{\sin (nx)}{n!}$$ is the Fourier series of $e^{\cos x} \sin (\sin x)$. I tried to find $\hat f(n)$ using integration by parts, and I tried to use the Taylor series of $e^x$ in order to get the $n!$, but I didn't reach anything close to what I should. Thanks a lot.

-

## 2 Answers

I will elaborate on Misty's answer: $e^{e^{ix}}=e^{\cos x + i \sin x}=e^{\cos x} (\cos (\sin x) +i \sin (\sin x ))$, and the second way of looking at $e^{e^{ix}}$ is: $$e^{e^{ix}}= \sum_{n=0}^{\infty} \frac{(e^{ix})^n}{n!}=\sum_{n=0}^{\infty} \frac{e^{inx}}{n!}= \sum_{n=0}^{\infty} \frac{\cos nx + i\sin nx}{n!}.$$ By equating the imaginary parts of both expressions we get $$e^{\cos x}\sin (\sin x)=\sum_{n=0}^\infty \frac{\sin nx}{n!}$$ as required.

- Perfect, thanks a lot. – Jozef Jan 9 '12 at 15:26

Try thinking about the imaginary part of $e^{e^{i x}}$ in two different ways.

- Can you please extend your answer? I can't see it nor understand it. Thanks a lot! – Jozef Jan 9 '12 at 8:01
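The identity can also be confirmed numerically; the partial sums converge extremely fast because of the $n!$ in the denominator (a quick check of my own, not part of the proof):

```python
import math

def f(x):
    """The function whose Fourier series is claimed."""
    return math.exp(math.cos(x)) * math.sin(math.sin(x))

def partial_sum(x, terms=20):
    """Partial sum of sum_{n>=0} sin(n x) / n!  (the n = 0 term is zero)."""
    return sum(math.sin(n * x) / math.factorial(n) for n in range(terms))

# 20 terms already match to machine precision, since the tail is bounded by 1/20!
for x in (0.4, 1.3, 2.5):
    assert abs(f(x) - partial_sum(x)) < 1e-12
```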
http://clay6.com/qa/10748/a-ball-is-thrown-vertically-upwards-from-ground-and-a-student-gazing-out-of
Browse Questions # A ball is thrown vertically upwards from ground and a student gazing out of window sees it moving upward past him at 5 m/s. The window is 10 m above the ground.Find the maximum height of the ball. $(a)\;11.25 m \quad (b)\;12.15 m \quad (c)\;10.50m \quad (d)\;14.25m$ $v^2-u^2=2as$ $u=? \quad v=5 m/s\quad s=10m\quad a=-10$ Therefore $5^2-u^2=2 \times (-10) \times 10$ $u^2=225m/s$ $Maximum\; height=h=\large\frac{u^2}{2g}$ $=11.25m$ Hence a is the correct answer. edited May 23, 2014
https://calculator.academy/ml-to-lbs-calculator/
Enter the total milliliters into the calculator. The calculator will evaluate the pounds (lbs) from mL.

## Lbs from mL Formula

lbs = mL * .002204623

Variables:

• lbs is the weight in pounds
• mL is the total milliliters

To calculate lbs from mL, multiply the milliliters of water by .002204623 (this assumes the liquid is water, for which 1 mL weighs 1 g).

## How to Calculate Lbs from mL?

The following steps outline how to calculate the lbs from mL.

• First, determine the total milliliters.
• Next, use the formula from above: lbs = mL * .002204623.
• Finally, calculate the lbs from mL.
• After inserting the variables and calculating the result, check your answer with the calculator above.

Example Problem: Use the following variables as an example problem to test your knowledge.

total milliliters = 252
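The formula is a one-liner in code (a sketch assuming the liquid is water, so 1 mL weighs 1 g ≈ 0.002204623 lbs; the constant and function names are mine):

```python
LBS_PER_ML = 0.002204623  # pounds per milliliter of water (1 mL = 1 g)

def ml_to_lbs(ml):
    """Convert milliliters of water to pounds."""
    return ml * LBS_PER_ML

# Example problem from the text: 252 mL
print(round(ml_to_lbs(252), 4))  # ≈ 0.5556 lbs
```

For liquids other than water, the result would need to be scaled by the liquid's specific gravity.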
https://dsp.stackexchange.com/questions/10884/heisenberg-uncertainly-principle-and-wavelet-transform/10895
Heisenberg Uncertainty Principle and wavelet transform

I am using the CMOR (complex Morlet) wavelet in Fourier space in order to reconstruct my signal and also estimate the damping and frequencies of the embedded modes. There are two main parameters in CMOR: Fb and Fc, the bandwidth and the center frequency. With increasing Fb I get a better result for frequency estimation, but I can't get a good damping estimate with high Fb. Is this something to do with the Heisenberg Uncertainty Principle? And if yes, would you explain the role of Fb and Fc in the accuracy of the frequency and damping estimations?

The wavelet transforms provide a unified framework for getting around the Heisenberg Uncertainty Principle.
http://mediacenterimac.com/9tesb3/jee-advanced-subjective-questions-94b800
The 5th enhanced editions of the Combo NCERT Xtract Objective Physics, Chemistry & Biology for NEET consist of quality selected MCQs as per the current NCERT syllabus, covering the entire syllabus of the 11th and 12th standards. In the Physics section, questions on integers involved lengthy calculations, making it the most difficult. If you go through past question papers, you'd know.

Question 1: The function f : [0, 3] -> [1, 29] defined by f(x) = 2x^3 – 15x^2 + 36x + 1 is (a) one-one and onto (b) onto but not one-one (c) one-one but not onto

The book 41 Years IIT JEE Advanced + 17 Years JEE Mains (/AIEEE) Chapterwise Solved Papers of Physics, Mathematics and Chemistry is an excellent set containing the chapterwise collection of past years' JEE Advanced and Main/AIEEE questions. It has been published by Disha Publication. View and download the largest collection of JEE Advanced (2020-2021) free practice questions & mock test papers. Hence option (4) is the answer. Get all the JEE Advanced Important Questions as free PDF downloads to help you prepare for the competitive exam. 2. JEE Advanced Previous Year Questions of Math with Solutions are available at eSaral. AUTHOR – .
JEE Main 2020: Only a couple of days are left for over 9 lakh candidates who have applied … (172 more words) … IIT JEE Advanced Previous Year Questions Problem for Quadratic Equation. DOWNLOAD Past 11 YEARS AIEEE question chapter wise with solution. The books have chapters aligned in appropriate order (as per the NCERT books). Because of this reason, there might be two answers of these subjective questions. JEE Advanced Previous Year Questions of Math with Solutions are available at eSaral. The Important questions on electrochemistry JEE Advanced aims to help you to teach to solve the most complicated sums with simple and easy steps and with utmost accuracy. Quadratic Equation and Inequalities's Previous Year Questions with solutions of Mathematics from JEE Advanced subject wise and chapter wise with solutions JEE Main Previous Year Solved Questions on IUPAC Nomenclature. JEE Main Alternating Current Previous Year Questions with Solutions Alternating current (A.C.) changes its magnitude and polarity at a regular interval of time. An exclusive feature of JEE Advanced is that the marking scheme for JEE Advanced comprises full, zero and partial marks. Download IUPAC Nomenclature Previous Year Solved Questions PDF . PHYSICS: 1. In 2019 JEE Advanced question Paper, the distinctive thing was that in the Subjective Type Questions, the answer can be limited to two digits after the decimal or to the nearest two digits after the decimal. For such questions, you need to start right from scratch; apply the basic concepts and only then jump at the problem. BEST INORGANIC CHEMISTRY BOOK FOR IIT JEE. The problems given here are created after a detailed study of the entire syllabus and the question pattern of numerous years. Also Read: JEE Advanced may go tough with written section. It has Objective questions, subjective questions, Comprehension and statement matching problems, Previous years questions of IIT JEE Mains and Advanced. 
jee mains previous year questions chapter wise pdf. JEE Advanced 2020-2021 Solved Subjective Questions View Subjective Questions for PAGES – VARIABLE. Until last year, the JEE Advanced included objective type questions and was conducted in offline (pen-and-paper) mode. SS Krotov 3. As per officials of the JEE Advanced 2015 organizing team, the inclusion of a subjective section in the Advanced test is ruled out. JEE Main Previous Year Solved Questions on Electrostatics Q1: Three charges +Q, q, +Q are placed respectively, at distance 0, d/2 and d from the origin, on the x-axis. Practicing JEE Advanced Previous Year Papers Questions of Chemistry will help the JEE aspirants in realizing the question pattern as well as help in analyzing weak & strong areas. If the net force experienced by +Q placed at x = 0 is zero, then value of q is BOOK NAME – OBJECTIVE CHEMISTRY SERIES FOR IIT JEE AND NEET. The JEE Advanced 2014 question paper with answer key PDF helps the student go through the pattern of questions asked, as the subjective questions and the numerical ones can surprise even the best students if they have not prepared for it. Arihant jee mains solved papers. Practicing JEE Advanced Previous Year Papers Questions of mathematics will help the JEE aspirants in realizing the question pattern as well as help in analyzing weak & strong areas. All solutions are prepared by our subject experts to present the factual information to the students related to problems covered in JEE previous exams. The fifth improved version of the book NCERT Xtract Objective Physics for NEET/JEE Main comprises of value chosen MCQs according to the current NCERT prospectus covering the whole schedule of eleventh and twelfth guidelines.The most featuring highlight of the book is the consideration of a ton of new inquiries made precisely on the example of NCERT. Best inorganic chemistry book pdf download free for iit jee advanced Download JD Lee inorganic chemistry book pdf free. 
Overall, JEE Advanced Paper 2 was more difficult as compared to Paper 1. PAGES - 440. JEE Main 2020 best books – your best friends “There is no friend as loyal as a book,” said Ernest Hemingway. 42 YEAR (1978-2019) IIT JEE ADVANCED PAPER SOLUTION 19 YEAR (2002-2020) JEE MAIN (AIEEE) PAPER SOLUTION Every aspirant must check the JEE Main/Advanced previous year question papers to understand the nature of the exam. Download JEE Advanced Last 10-15 Year Question Papers With Answer Keys in PDF. Objective Physics Chapter-wise MCQ is a complete study pack that has been conceptualized on the class 12th syllabus to guide students aspiring for different engineering entrances. 1. eSaral helps the students in clearing and understanding each topic in a better way. Find it's wavelength. The time interval between a definite value of two successive cycles is the time period and the number of cycles or … JEE Main 2020 in the new pattern is scheduled to be held from January 6 onwards, if you are among the aspirants, map your timing and solve this mock question paper to assess your performance. A subjective element has been infused in this year's test by incorporating several questions with integer-type answers. JEE Main Maths Trigonometry Previous Year Questions With Solutions Trigonometry is important to mathematics as it involves the study of calculus, statistics and linear algebra. ... Subjective More. Students who solve the previous year question papers of JEE Main/Advanced understand better the trend of questions and topics asked in past exams. The IUPAC name of neopentane is : (1) 2–methylpropane (2) 2, 2–dimethylbutane (3) 2–methyl butane (4) 2, 2–dimethylpropane. JEE Advanced is known for asking questions in which routine methods may prove be time taking. The advantages of solving IIT JEE previous years papers is that students get to know the type of questions asked in the JEE exam. A ball of mass 100 g is moving with 100 ms-1. 
DOWNLOAD JEE Advanced has MCQs, numeric value-based answer questions etc. This article covers the identities of trigonometry and trigonometric equations, functions of trigonometry, properties of inverse trigonometric functions, problems on heights and distances. JEE Advanced 2014 (Offline) GO TO QUESTION A rocket is moving in a gravity free space with a constant acceleration of 2 ms–2 along + x direction (see figure). JEE Main Past Year Questions With Solutions on Relations and Functions. Pathfinder for Olympiads and JEE Advanced by Sachin Tiwari 4. JEE aspirants can evaluate their preparation after finishing the entire syllabus, topics and chapters. JEE previous year questions with solutions are extremely helpful for students preparing for JEE and other competitive exams. SIZE – VARIABLE. With the IIT JEE advanced 2014 question paper in free PDF download provided by Vedantu, you can help your child prepare better for the exams coming his/her way. Problems in General Physics by IE Irodov 2. Architecture Aptitude Test (AAT) is held separately after JEE Advanced. Well yes considering that acing the exam will need some extra preparation, which are the best books for JEE Main 2020 is one question that makes a difference. Practicing JEE Advanced Previous Year Papers Questions of mathematics will help the JEE aspirants in realizing the question pattern as well as help in analyzing weak & strong areas. These JEE Advanced Topic Chapterwise questions and solutions are prepared by our subject matter experts with huge experience. eSaral helps the students in clearing and understanding each topic in a better way. Structure of Atom's Previous Year Questions with solutions of Chemistry from JEE Advanced subject wise and chapter wise with solutions. JEE Advanced Previous Year Questions of Physics with Solutions are available at eSaral. View and download the largest collection of JEE Advanced (2020-2021) free practice questions & mock test papers. 
Solution: The IUPAC name is 2,2-dimethylpropane. Properties of Triangles: Previous Year Questions with solutions of Mathematics from JEE Advanced, subject wise and chapter wise. The JEE Advanced 2021 question paper was divided into three parts, with 18 questions in each section and a maximum score of 198. Practicing JEE Advanced previous year paper questions of Chemistry will help JEE aspirants in recognizing the question pattern as well as in analyzing weak and strong areas. IIT-JEE 2004. Pages - 342. Simulator Previous Years JEE Advanced Questions, Paragraph for questions 1 to 3: The hydrogen-like species $\mathrm{Li}^{2+}$ is in a spherically symmetric state $\mathrm{S}_{1}$ with one radial node. With complete theory and sufficient objective questions, each book of the series covers the IIT JEE Main and Advanced syllabi. Problem sheets are a collection of a question bank for IIT JEE Mains and Advanced. https://engineering.careers360.com/articles/jee-advanced-question-papers A professor from the JEE (A) 2015 organizing team, however, ruled out the possibility of a subjective section in the test.
Type questions and was conducted in offline ( pen-and-paper ) mode papers, you ’ d know type questions solutions...
2021-06-16 22:20:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3268142640590668, "perplexity": 2720.4191221903}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487626122.27/warc/CC-MAIN-20210616220531-20210617010531-00177.warc.gz"}
http://logfc.wordpress.com/
# Sloppy Science

Last week, Science published a paper by Rodriguez and Laio on a density-based clustering algorithm. As a non-expert, I found the results actually quite good compared to the standard tools I use in my everyday work. I even implemented the algorithm as an R package (soon to be published on CRAN, look out for "fsf"). However, there are problems with the paper. More than one.

1. The authors claim that the density for each sample is determined with a simple formula: the number of other samples within a certain cutoff distance. This does not add up, since the density would then always be a whole number, and it is obvious from the figures that this is not the case. When you look up the original matlab code in the supplementary material, you see that the authors actually use a Gaussian kernel function for the density calculation.

2. If you use the simple density count as described in the paper, the algorithm will not and cannot work. Imagine a relatively simple case with two distinct clusters. Imagine that in one cluster there is a sample A with density 25, and in the other cluster there are two samples, B and C, with identical densities of 24. This is actually quite likely to happen. The algorithm now determines, for each sample, $\delta$, the distance to the nearest sample with higher density. The whole idea of the algorithm is that for putative cluster centers this distance will be very high, because it will point to the center of another cluster. However, with ties we have the following problem. If we choose the strict inequality described by the authors, then both B and C (which have identical density 24) will be assigned a large $\delta$ value and will become cluster center candidates. If we choose to use a weak inequality, then B will point to C, and C to B, and both of them will have a small $\delta$. Therefore, we either have both B and C as equivalent cluster center candidates, or neither of them.
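The tie problem is easy to reproduce. Below is a minimal Python sketch (not the authors' matlab code; the sample positions, densities and cutoff are invented for illustration) of the integer density count and of the $\delta$ computation with a strict inequality: with two tied samples, both end up with a large $\delta$ and thus both look like cluster-center candidates.

```python
# Minimal sketch of the density / delta computations of the density-peaks
# algorithm, using the simple integer density count described in the paper.
# All numbers below are invented for illustration.

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def densities(points, d_c):
    """Simple count: number of other samples within distance d_c."""
    return [sum(1 for j, q in enumerate(points) if j != i and dist(p, q) < d_c)
            for i, p in enumerate(points)]

def deltas(points, rho):
    """For each sample, the distance to the nearest sample of strictly higher
    density; samples of maximal density get the largest pairwise distance."""
    out = []
    for i, p in enumerate(points):
        higher = [dist(p, q) for j, q in enumerate(points) if rho[j] > rho[i]]
        out.append(min(higher) if higher else
                   max(dist(p, q) for j, q in enumerate(points) if j != i))
    return out

# Sample A (density 25) sits in one cluster; B and C (tied at density 24)
# sit together in another cluster roughly 10 units away.
points = [(0.0, 0.0), (10.0, 0.0), (10.0, 1.0)]
rho = [25, 24, 24]          # hand-made densities with a tie
print(deltas(points, rho))  # B and C both get a delta of about 10
```

With the strict inequality, both B and C receive a $\delta$ comparable to the inter-cluster distance, so both are flagged as putative centers; with a weak inequality they would point at each other and neither would be flagged.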
No wonder that the authors never used this approach!

3. The authors explicitly claim that their algorithm can "automatically find the correct number of clusters." This does not seem to be true; at least, there is nothing in the original paper that warrants this statement. If you study their matlab code, you will find that the selection of cluster centers is done manually, by a user selecting a rectangle on the screen. Frankly, I cannot even comment on that, this is outrageous.

I think that Science might have done a great disservice to the authors — everyone will hate them for having a sloppy, half-baked paper published in Science that would get others rejected in PLoS ONE. I know I do :-)

# Copy text from images

All the elements were there. Algorithms freely available; implementations ready to download, for free. It just took one clever person to FINALLY make it: a seamless way to copy text from images. At least in Chrome, for now, but since it is Open Source, I guess it is just a matter of time before we see it as a built-in feature in Firefox and other browsers. For now, use the Project Naphta Chrome browser extension. The main project page is also an interesting read.

Words on the web exist in two forms: there's the text of articles, emails, tweets, chats and blogs— which can be copied, searched, translated, edited and selected— and then there's the text which is shackled to images, found in comics, document scans, photographs, posters, charts, diagrams, screenshots and memes. Interaction with this second type of text has always been a second class experience, the only way to search or copy a sentence from an image would be to do as the ancient monks did, manually transcribing regions of interest. (from the Naphta web site)

It works really nicely, assuming that you select the "English tesseract" option from the "Languages" sub-menu — the off-line javascript implementation of OCRad is not very effective, in contrast to the cloud-based tesseract service.
# Rotating movies with RGL

Learned another thing today: it is very simple to create animated GIFs in the rgl package with the built-in functions spin3d, play3d and movie3d.

library( pca3d )
data( metabo )
pca <- prcomp( metabo[,-1], scale.= TRUE )
pca3d( pca, group= metabo[,1] )
rot <- spin3d( axis= c( 0, 1, 0 ) )
movie3d( rot, duration= 12 )

Here is the result:

# Check credit card numbers using the Luhn algorithm

You can find a nice infographic explaining how to understand the credit card number here. Credit card numbers include, as the last digit, a simple checksum calculated using the Luhn algorithm. In essence, we add up the digits of the card number in a certain way; if the resulting sum is not divisible by 10 (i.e., the remainder of dividing by 10 is not zero), then the number is invalid (the reverse is not true: a number whose sum is divisible by 10 can still be invalid). This can be easily computed in R, although I suspect that there might be an easier way:

checkLuhn <- function( ccnum ) {
  # helper function: sum the digits of each element of x
  sumdigits <- function( x )
    sapply( x, function( xx )
      sum( as.numeric( unlist( strsplit( as.character( xx ), "" ) ) ) ) )
  # split the number into single digits, last digit first
  ccnum <- as.character( ccnum )
  v <- as.numeric( unlist( strsplit( ccnum, "" ) ) )
  v <- rev( v )
  # double every second digit
  i2 <- seq( 2, length( v ), by= 2 )
  v[i2] <- v[i2] * 2
  # digits larger than 9 are replaced by their digit sum
  v[ v > 9 ] <- sumdigits( v[ v > 9 ] )
  sum( v ) %% 10 == 0
}

# Sample size / power calculations for Kaplan-Meier survival curves

The problem is simple: we have two groups of animals, treated and controls. Around 20% of the untreated animals will die during the course of the experiment, and we would like to be able to detect an effect such that instead of 20%, 80% of the animals die in the treated group, with power 0.8 and alpha=0.05. Group sizes are equal and no other parameters are given. What is the necessary group size? I used the ssizeCT.default function from the powerSurvEpi R package.
Based on the explanation in the package manual, this calculates (in my simple case) the required sample size in a group as follows:

$n = \frac{m}{p_E + p_C}$

where $p_E$ and $p_C$ are, respectively, the probabilities of failure in the E(xperimental) and C(ontrol) groups. I assume that in my case I should use 0.8 and 0.2, respectively, so $n=m$. The formulas here are simplified in comparison with the manual page of ssizeCT.default, simply because the group sizes are identical. $m$ is calculated as

$m=\big(\frac{RR+1}{RR-1}\big)^2(z_{1-\alpha/2}+z_{1-\beta})^2$

RR is the minimal effect size that we would like to be able to observe with power 0.8 at alpha 0.05. That means: if the real effect size is RR or greater, we have an 80% chance of getting a p-value smaller than 0.05 if the group sizes are equal to $m$. To calculate RR, I first calculate $\theta$, the log hazard ratio, and for this I use the same approximate, expected mortality rates (20% and 80%):

$\theta = \log\big(\frac{\log(0.8)}{\log(0.2)}\big) = -1.98$

Since $RR=\exp(\theta)=0.139$, we get $m=18.3$. This seems reasonable (based on previous experience).

# Copy-paste and R interactive sessions

I wanted to have a way of easily copying R commands from my interactive sessions. I often work interactively while creating a pipeline or a lab-book-like readme in a separate file. Of course, you can just copy anything that is on the screen and paste it. However, I want my readme file to be directly executable, so generally I only want to copy the commands. The solution was to write a small function, called xclip, which copies an arbitrary number of lines from the history into the X window system primary clipboard. Naturally, it is not compatible with Windows or MacOS. However, I can now copy a fragment of recent history with one command and one mouse click (since the primary X window system clipboard works without shortcuts). The function is actually only a wrapper around the little xclip command line utility, available for Linux.
xclip <- function( n = 5 ) {
  t <- tempfile()
  savehistory( file = t )
  system( paste0( "tail -n ", n, " ", t, "|xclip" ) )
}

# riverplot

Prompted by this cross-validated discussion, I have created the riverplot package. Here is a minimal gallery of the graphics produced by the package:

Here is an example which recreates the famous Minard plot:

So, how to do these figures: first, you need to create a specific riverplot object that can be directly plotted (use riverplot.example to generate an example object). Here, I show how to recreate the Minard plot using the provided data. makeRiver, the function that will create the object necessary for plotting, uses data frames to input information about nodes and edges, but we must use specific naming of the columns:

library( riverplot )
data( minard )
nodes <- minard$nodes
edges <- minard$edges
colnames( nodes ) <- c( "ID", "x", "y" )
colnames( edges ) <- c( "N1", "N2", "Value", "direction" )

Now we can add some style information to the "edge" columns, to mimic the original Minard plot:

# color the edges by troop movement direction
edges$col <- c( "#e5cbaa", "black" )[ factor( edges$direction ) ]
# color edges by their own color rather than by a gradient between the nodes
edges$edgecol <- "col"
# generate the riverplot object
river <- makeRiver( nodes, edges )

The makeRiver function reads any columns that match the style information (like colors of the nodes) and uses them to create the river object. The river object is just a simple list; you can easily view and manipulate it — or create it with your own functions. The point of makeRiver is to make sure that the data is consistent. Once you have created a riverplot object with one of the above methods (or manually), you can plot it either with plot(x) or riverplot(x). I enforce the use of straight lines, and I also tell the plotting function to use a particular style. The default edges look curvy.
style <- list( edgestyle= "straight", nodestyle= "invisible" )
# plot the generated object
plot( river, lty= 1, default.style= style )
# add the cities
with( minard$cities, points( Longitude, Latitude, pch= 19 ) )
with( minard$cities, text( Longitude, Latitude, Name, adj= c( 0, 0 ) ) )
https://sbl.inria.fr/doc/Multiple_interface_string_alignment-user-manual.html
Structural Bioinformatics Library
Template C++ / Python API for developing structural bioinformatics applications.
User Manual

# Multiple_interface_string_alignment

Authors: S. Bereux and F. Cazals

# Goals: combining MSA and interface models

Multiple Sequence Alignments (MSA) are pivotal to understand commonalities and differences between protein sequences. Likewise, interface models are pivotal to mine the stability and the specificity of protein interactions. Combining MSA and interface models yields Multiple Interface String Alignments (MISA), namely alignments of strings coding properties of a.a. found at protein-protein interfaces. Assume the interface of a complex has been found – we do so using the package Space_filling_model_interface. MISA are a visualization tool to display coherently various sequence and structure based statistics for the residues found at this interface, on a chain instance basis. That is, a MISA primarily consists of annotations for chain instances. Currently supported annotations are:

• secondary structure elements (SSE),
• buried surface area (BSA),
• variation of the solvent accessible area,
• B factor values.

The benefits of MISA are:

• to collate annotated sequences of (homologous) chains,
• to allow for a comparison of properties of chains found in different biological contexts, i.e. bound with different partners, but also unbound.

In particular, the aggregated views make it trivial to identify commonalities and differences between chains, to infer key interface residues, and to understand where conformational changes occur upon binding. The logo of the package illustrates the two central objects used thereafter: (Top) the Voronoi interface (green polygons) shared by two proteins, and (Bottom) the multiple sequence alignment of selected amino-acids – those found at the interface, colored with biological / biophysical properties.

# Pre-requisites

## Terminology

MISA id.
A complex is specified by two sorted lists of chain ids, one for each of the two partners defining the complex. For the sake of exposition, we also assume each partner is given a structure name (e.g. antibody or antigen). Likewise, an unbound structure is specified by a sorted list of chain ids. We consider a collection of complexes, and optionally a collection of unbound structures. The minimal setup is naturally that of a single complex without any unbound structure. Our goal is to build one MISA for each so-called MISA id, which we formally define as:

The MISA id of a chain involved in the specification of a complex is the string defined by the structure name followed by the index of the chain in its sorted list.

Note that because of the sortedness, a given chain gets the same MISA id in a complex or in an unbound structure.

Consider two antibody-antigen complexes, each involving the heavy chains (chains H and M), the light chains (chains L and N), and the antigen (chains A and B). The two structure names are thus antibody (IG for short) and antigen (Ag for short). The MISA id of the heavy chain is IG_0, that of the light chain IG_1, and that of the antigen Ag_0.

## MISA

Interface strings (i-strings). In the sequel, we present MISA informally. We represent each chain instance with a so-called interface string encoding properties of the amino acids found at interfaces involving that chain. The interface string of a chain instance is a character string with one character per residue, and is actually defined from all instances with the same MISA id. To build this character string, we first define:

The consensus interface is the set of a.a. defined from all Voronoi interfaces of all complexes in the set. At each position of the consensus interface, the most frequent residue observed in all the bound structures is chosen as the consensus residue, with ties broken using the alphabetical order (of the 1-letter code of a.a.).
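The consensus rule just stated (most frequent residue per position, ties broken alphabetically) can be sketched in a few lines of Python. This is an illustration of the definition only, not the SBL implementation; the input is a toy set of aligned interface residues, one string per bound chain instance.

```python
from collections import Counter

def consensus(aligned):
    """Per-position consensus over aligned residue strings: the most frequent
    one-letter code wins, ties are broken by alphabetical order."""
    out = []
    for column in zip(*aligned):
        counts = Counter(column)
        # sort by decreasing count, then alphabetically: deterministic tie-break
        out.append(sorted(counts, key=lambda aa: (-counts[aa], aa))[0])
    return "".join(out)

print(consensus(["TRY", "TKY", "AKY"]))  # TKY
print(consensus(["AR", "RA"]))           # AA (ties at both positions)
```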
Using this consensus interface, the residues of a given chain instance (bound or unbound) are displayed as follows:

The interface string (i-string) of a chain instance is the string with one character per amino acid, defined as follows:

1. Residue not part of the consensus interface: displayed with a dash "-" if it is part of the crystal structure, with an underscore "_" otherwise.
2. Residue part of the consensus interface:
   1. Residue not found in the crystal structure: displayed with the star "*".
   2. Residue found at the interface for this particular chain: displayed with the uppercase one-letter code if the a.a. matches the consensus a.a., or with the lowercase letter otherwise.
   3. Residue not found at the interface for this particular chain, even though the corresponding position contributes to the consensus interface: displayed in an italicized font. (Note that this is the case for all residues of unbound structures, as no partner implies no interface.)

Finally, assembling i-strings yields MISA:

The MISA of all chain instances with the same MISA id is the multiple alignment of their interface strings.

## Colored MISA

A colored MISA is a plain MISA whose 1-letter codes of a.a. are colored using specific biological / biophysical properties:

Coloring based on Secondary Structure Elements (SSE). The type of SSE a given a.a. belongs to is especially useful when comparing bound and unbound structures, to assess perturbations in the hydrogen bonding network. Practically, we use the dictionary of SSE from [91].

Coloring based on Buried Surface Area (BSA). Consider a chain instance in a complex. The BSA of this chain is defined as the accessible surface area (ASA) [133] of this chain in the partner alone minus the ASA of the chain in the complex. We report the BSA on a per residue basis, computed using the algorithm from [34]. A limitation of the BSA is that its calculation uses the geometry of the bound structure only.
The calculation is thus oblivious to conformational changes which may be at play in case of induced fit or conformer selection. To mitigate this limitation, we also provide the so-called ΔASA coloring scheme. Consider an interface partner in a complex, and consider the i-th residue of one of its chains. On the one hand, take the ASA of this residue in the structure involving only the chains of that partner; on the other hand, take the average ASA of this residue in unbound structures containing chains identical to those of the partner. For the i-th residue of a bound structure, we compute the variation between these two quantities and display it with a color map.

Coloring based on B-factors. The B-factor reflects the atomic thermal motions. Selected recent crystal structures report this information as a 3x3 ANISOU matrix (the anisotropic B-factor). In order to have a single quantity for all the crystals, the ANISOU matrix is converted into a B-factor using the formula from [158]. Optionally, one can choose to normalize the B-factor with respect to (i) all the residues in the chain, or (ii) all the displayed residues.

# Using Multiple_interface_string_alignment

The package actually provides four complementary scripts:

• sbl-misa.py: building MISA from a description of structures.
• sbl-misa-mix.py: mixing selected colored MISA into a single (html) file.
• sbl-misa-bsa.py: post-processing the output of the interface model and performing statistics on the BSA of selected residues.
• sbl-misa-diff.py: comparing the i-strings and associated properties (in particular BSA) of two interfaces. It has two modes: comparing two i-strings, or comparing one i-string against a list of interface residues, typically coming from a publication.

The reader is referred to the section Dependencies for installation related issues. In the following, we briefly specify the input of the scripts provided by this package, and refer the reader to the jupyter notebook of use cases.

## Main script: sbl-misa.py

This is the main script computing (colored) MISA.

Specification file.
The link between Voronoi interfaces and interface strings is done by providing a list of so-called interface string specifications:

Given a PDB file, the interface string specification of a chain is the four-tuple:
• intervor-partner-id: A or B,
• chain-name: the character specifying the chain instance in the PDB file,
• structure-name: a string used to identify homologous structures,
• tag: a string providing extra information – annotations.

We note that the tag, i.e. the string providing extra information, is a placeholder to accommodate any relevant information. It should contain only alphanumerical characters, without blank spaces or special characters. For example, the tag may specify a feature (e.g. open or close) qualifying the crystallized conformation of the chain. Such interface string specifications are possibly enriched by specifying windows, i.e. a list of ranges of amino acids of interest, used to restrict the display. Here is an illustration for the first example provided in the jupyter notebook:

# Windows for ACE2-bound-to-SARS-CoV-1
[ACE2-bound-to-SARS-CoV-1_0 (19, 83) (321,393)]
./pdb/2ajf.pdb (A, E, SARS-CoV-1-RBD, bound) (B, A, ACE2-bound-to-SARS-CoV-1, bound)
./pdb/2ajf.pdb (A, F, SARS-CoV-1-RBD, bound) (B, B, ACE2-bound-to-SARS-CoV-1, bound)
./pdb/5x58.pdb (A, A, SARS-CoV-1-RBD, unbound-closed)
./pdb/6crz.pdb (A, C, SARS-CoV-1-RBD, unbound-closed)
# Specification for SARS-CoV-2
./pdb/6m0j.pdb (C, E, SARS-CoV-2-RBD, bound) (D, A, ACE2-bound-to-SARS-CoV-2, bound)
./pdb/6lzg.pdb (C, B, SARS-CoV-2-RBD, bound) (D, A, ACE2-bound-to-SARS-CoV-2, bound)
./pdb/6vxx.pdb (C, A, SARS-CoV-2-RBD, unbound-closed)
./pdb/6vyb.pdb (C, A, SARS-CoV-2-RBD, unbound-closed)

Main options. The main arguments of the script are:
• a spec file (ifile) as just described,
• a directory to store the output of the parsing of the generated files,
• a directory to store the created MISA,
• a string which is added at the beginning of the output filenames,
• a directory to store the output of the i-RMSD computations.

The script also requires some additional input data, stored in two further directories: if the corresponding option is provided, each directory is expected to contain previously computed output files; otherwise, the directory is created and the files are generated from scratch.

## Mixing colored MISA: sbl-misa-mix.py

Specification file. The MISA ids, the colorings, as well as the location of the individual files must be provided. These three pieces of information are provided thanks to a specification file, organized in three lines. Each line begins with a tag which indicates the type of information provided on that line:
• The first line (tag localisation) indicates the path to the directory(ies) containing the figures to be gathered.
• The second line (tag misa_chain_id) indicates the MISA ids to gather.
• The third line (tag coloring) indicates the coloring(s) to display.

Here is an illustration for the first example provided in the jupyter notebook:

# List of input directories
localisation (./misa-RBD-ACE2-cmp/MISA)
# List of MISA_chain_ids
misa_chain_id (SARS-CoV-1-RBD_0, SARS-CoV-2-RBD_0)
# List of colorings of interest
coloring (SSE, BSA, Delta_ASA)

Main options. Summarizing, here are the main input arguments of the script:
• a spec file as just described,
• a directory to store the created mixed figure,
• a string which is added at the beginning of the output filenames.

## BSA: sbl-misa-bsa.py

The script displays the BSA of all (or selected user-defined) residues.

Specification file. By default, all residues are processed. A specification file can also be provided to specify selected residues.
Given an .xml file containing the buried surface areas computed by intervor, we call residue identifier the 3-tuple containing:
• intervor-partner-id: A or B,
• chain-name: the character specifying the chain instance in the original PDB file,
• the index of the residue in the chain.

The specification file then consists of a list of residue identifiers. By default, in the absence of such a specification file, the BSA of every residue (with a BSA greater than or equal to 0.01) is displayed. Here is an illustration for the first example provided in the jupyter notebook:

[(A, E, 303), (A, E, 403), (A, E, 449), (A, E, 455), (A, E, 486), (A, E, 502), (B, A, 79), (B, A, 35)]

Main options. The main options are:
• a spec file as just described,
• an .xml file, produced by intervor, containing the BSA data,
• a directory to store the output: if not provided, the result will be displayed on the command line.

## MISA diff: sbl-misa-diff.py

The script compares the i-strings and associated properties (in particular BSA) of two interfaces. It needs a specification file with two lines, each of them containing the specification of one interface. There are two possible ways to specify an interface in this file.

Automatic interface specification. The first consists of extracting it from the output of sbl-misa.py. One must then specify the following 3-tuple:
• the MISA directory containing the chain in question,
• the PDBID of the original PDB file,
• chain-name: the character specifying the chain instance in the original PDB file.

Here is an illustration for the first example provided in the jupyter notebook, which uses both specifications:

(./misa-RBD-ACE2-cmp/MISA/raw-data, 2ajf, E)
(./misa-RBD-ACE2-cmp/SARS-CoV-1-RBD-Harisson-2005.txt)

Manual interface specification. The second consists of enumerating the interface residues in an interface-file, whose format is detailed in the following paragraph. One must then indicate the path to this file in the specification file.
We call short-residue-spec the compact specification of a residue: the nature of the residue (written in the one-letter code), denoted N, joined to its index in the protein sequence, denoted XXX, which gives NXXX. Using this compact residue specification, one can manually specify an interface in an interface-file, organized as follows:
• First line: name of the chain.
• Each of the following lines: one short-residue-spec per line.

Here is an illustration for the interface-file corresponding to the first example in the jupyter notebook:

harisson-interface
T402
R426
Y436
Y440
Y442
L472
N473
Y475
N479
Y484
T486
T487
G488
Y491

Main options. Summarizing, here are the main input arguments of the script:
• a spec file as just described,
• a directory to store the output.

# Algorithms and Methods

The analysis provided by the previous scripts involves the following seven steps:
• Step 1: Parsing the input: parse the specification file and gather the structural data processed.
• Step 2: Constructing the MISA: initializing the MISA by gathering the chains with the same MISA id and aligning them.
• Step 3: Coloring the MISA: collecting the coloring values and attributing the coloring to each chain of the MISA.
  Main class(es): SBL::Multiple_interface_string_alignment::cMISA_constructor, SBL::Multiple_interface_string_alignment::coloring_parser, SBL::Multiple_interface_string_alignment::color_engine
• Step 4: Recording the cMISA: recording the colored MISA into an HTML file presenting simultaneously the four colorings.
  Main class(es): SBL::Multiple_interface_string_alignment::cMISA_recorder
• Step 5: Construction of the PDB interface files: storing the interface residues shared by each pair of chains into PDB files.
  Main class(es): SBL::Multiple_interface_string_alignment::Extract_PDB_interface
• Step 6: Computation of the i-RMSD for each pair of chains: computing the i-RMSD for each pair of PDB
interface files.
  Main class(es): SBL::Multiple_interface_string_alignment::Compute_interface_iRMSD_for_pairs
• Step 7: Analysis of the interface RMSD (i-RMSD): clustering the chains according to the i-RMSD and recording statistics on the i-RMSD.
  Main class(es): SBL::Multiple_interface_string_alignment::Plot_tools

Overview of sbl-misa.py

# Dependencies

This package has the following dependencies:

Packages from the SBL:
• Parsing XML files to collect interface atoms: package PALSE.

Other packages:
• Dictionary of secondary structures, DSSP [91]: the package DSSP is used to infer the type of secondary structure element a given a.a. belongs to. The program mkdssp used is described here; one may also consult the source code directly.
• Parsing PDB files, recovering SSE information: Biopython.

Installation. To use the aforementioned four scripts:
• Make sure all the executables used are visible from one's PATH environment variable.
• Also make sure that the following script, from the package Molecular_interfaces, is visible from one's PATH.

# Jupyter demo

See the following jupyter notebook:
• Jupyter notebook file
• Multiple_interface_string_alignment

# Multiple_interface_string_alignments (MISA)

All the following scripts require (among others) the .pdb files associated to the structures of interest. They are provided in the ./pdb/ directory.

# Example 1: comparing the RBD of SARS-CoV-1 and SARS-CoV-2

This first example provides a step-by-step comparison on the RBD.

## Step 1: generating all colored MISA with sbl-misa.py

First, the MISA is calculated for each of the chains specified in ifile-misa.txt.
Content of ./misa-RBD-ACE2-cmp/ifile-misa.txt : # Windows for ACE2-bound-to-SARS-CoV-1 [ACE2-bound-to-SARS-CoV-1_0 (19, 83) (321,393)] ./pdb/2ajf.pdb (A, E, SARS-CoV-1-RBD, bound) (B, A, ACE2-bound-to-SARS-CoV-1, bound) ./pdb/2ajf.pdb (A, F, SARS-CoV-1-RBD, bound) (B, B, ACE2-bound-to-SARS-CoV-1, bound) ./pdb/5x58.pdb (A, A, SARS-CoV-1-RBD, unbound-closed) ./pdb/6crz.pdb (A, C, SARS-CoV-1-RBD, unbound-closed) # Specification for SARS-CoV-2 ./pdb/6m0j.pdb (C, E, SARS-CoV-2-RBD, bound) (D, A, ACE2-bound-to-SARS-CoV-2, bound) ./pdb/6lzg.pdb (C, B, SARS-CoV-2-RBD, bound) (D, A, ACE2-bound-to-SARS-CoV-2, bound) ./pdb/6vxx.pdb (C, A, SARS-CoV-2-RBD, unbound-closed) ./pdb/6vyb.pdb (C, A, SARS-CoV-2-RBD, unbound-closed) Each line corresponds to a complex, or to an unbound structure if the structure is alone on its line. For each complex, we provided one or two examples of bound complexes, as well as two examples of unbound complexes, in order to be able to calculate the $\Delta\_ASA$ induced by the conformational change. Otherwise, the $\Delta\_ASA$ won't be computed. The first line of the file is used to restrict the displayed portion of the interface for the SARS-CoV-1 RBD, in order to compact the output. The details of the specification are developed in the paper. 
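To make the line format of ifile-misa.txt concrete, here is a minimal Python sketch that parses one structure line into the PDB path and its (partner, chain, MISA-id, state) tuples. The function name and the returned structure are illustrative, not part of the SBL; window lines in brackets are skipped, as they follow a different syntax.

```python
import re

def parse_spec_line(line):
    """Parse one structure line of an ifile-misa.txt: a PDB path followed by
    zero or more '(partner, chain, MISA-id, state)' tuples, one per chain.
    Returns None for comments, blank lines and bracketed window lines."""
    line = line.split('#')[0].strip()       # drop trailing comments
    if not line or line.startswith('['):    # empty line or window specification
        return None
    pdb_path = line.split('(')[0].strip()
    tuples = re.findall(r'\(([^)]*)\)', line)
    chains = [tuple(field.strip() for field in t.split(',')) for t in tuples]
    return pdb_path, chains

spec = './pdb/2ajf.pdb (A, E, SARS-CoV-1-RBD, bound) (B, A, ACE2-bound-to-SARS-CoV-1, bound)'
pdb, chains = parse_spec_line(spec)
```

A line with no tuple after the path (an unbound structure alone on its line) simply yields an empty chain list.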
In [1]:
#!/usr/bin/python3
import os
import subprocess
import re
import shutil
from IPython.core.display import display, HTML
from collections import defaultdict
from IPython.display import IFrame
from SBL import SBL_pytools
from SBL_pytools import SBL_pytools as sblpyt

In [2]:
exe = shutil.which('sbl-misa.py')
assert exe is not None  # shutil.which returns None if sbl-misa.py is not in the PATH
ifile = './misa-RBD-ACE2-cmp/ifile-misa.txt'
prefix_dir = './misa-RBD-ACE2-cmp' # To append at the beginning of every input and output directories
prefix = 'demo-misa-1' # To append at the beginning of the output files
verbose = '0'
normalize_b_factor = '2' # Normalization with respect to the displayed residues only
cmd = [exe, "-ifile", ifile, "-prefix_dir", prefix_dir, '-prefix', prefix, '--verbose', verbose, '-normalize_b_factor', normalize_b_factor]
print('Running %s -ifile %s -prefix_dir %s -prefix %s --verbose %s -normalize_b_factor %s' % (exe, ifile, prefix_dir, prefix, verbose, normalize_b_factor))
s = subprocess.check_output(cmd, encoding='UTF-8')
print('\nDone')
#print(s)

Running /user/fcazals/home/projects/proj-soft/sbl-install/bin/sbl-misa.py -ifile ./misa-RBD-ACE2-cmp/ifile-misa.txt -prefix_dir ./misa-RBD-ACE2-cmp -prefix demo-misa-1 --verbose 0 -normalize_b_factor 2

Done

sbl-misa.py displays the MISA, with several colorings showing complementary data. An individual summary figure for each of the chains specified in ifile-misa.txt is produced.
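To locate these summary figures programmatically, one can rely on the naming scheme observable in this demo: one HTML file per MISA id, named from the MISA id and the -prefix option, under the MISA/ subdirectory of -prefix_dir. The helper below is hypothetical, written only to make that scheme explicit.

```python
import os

def misa_figure_path(prefix_dir, misa_id, prefix):
    """Location of the colored-MISA summary figure for one MISA id, as
    produced by the demo run above: <prefix_dir>/MISA/<misa_id>-<prefix>.html"""
    return os.path.join(prefix_dir, 'MISA', '%s-%s.html' % (misa_id, prefix))

path = misa_figure_path('./misa-RBD-ACE2-cmp', 'SARS-CoV-2-RBD_0', 'demo-misa-1')
```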
For example, here are the figures generated for SARS-CoV-2-RBD and for SARS-CoV-1-ACE2 (for which the effect of the window specification, restricting the range of displayed residues, can be observed) : In [3]: #IFrame(src='./misa-RBD-ACE2-cmp/MISA/SARS-CoV-2-RBD_0-demo-misa-1.html', width="100%", height=600) display(HTML('./misa-RBD-ACE2-cmp/MISA/SARS-CoV-2-RBD_0-demo-misa-1.html')) Legend for the amino acids (aa) encoding : For aa not at the interface : _ if aa is missing - if aa is present For aa at the interface : * if aa is missing X if consensus aa (= most frequent among the bound structures, and in case of tie the first by alphabetical order) x otherwise (For bound structure files only) : x or X if the aa is part of the consensus interface but not part of the interface of this file MISA SSE for MISA-ID SARS-CoV-2-RBD_0 3-turn helix - 4-turn helix - 5-turn helix - Isolated beta-bridge residue - Extended strand - Bend - Hydrogen bonded turn - Other - Missing Residue - Residue Index 403----410-------420-------430-------440-------450-------460-------470-------480-------490-------500---- | | | | | | | | | | | bound-6m0j-E, res :2.45 Å, 27 interf res R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ bound-6lzg-B, res :2.5 Å, 39 interf res R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ unbound-closed-6vxx-A, res :2.8 Å, 0 interf res R-DE----------KI--Y-----------------N-----**G-Y---Y-**_____-------____*_***______****_YF-LQSYGFQPTN*VGYQ unbound-closed-6vyb-A, res :3.2 Å, 0 interf res R-DE----------KI--Y-----------------N---__***-Y---Y-LF--------------__*_***______****_*F-LQSYGFQPTN*VGYQ MISA BSA for MISA-ID SARS-CoV-2-RBD_0 In dark grey, residues with missing data for coloring Buried Surface Area (BSA) (in Å2) | bound-6m0j-E : total bsa = 887.29 Å2 | bound-6lzg-B : total bsa = 1120.20 Å2 Residue Index 
403----410-------420-------430-------440-------450-------460-------470-------480-------490-------500---- | | | | | | | | | | | bound-6m0j-E, res :2.45 Å, 27 interf res R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ bound-6lzg-B, res :2.5 Å, 39 interf res R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ MISA Delta_ASA for MISA-ID SARS-CoV-2-RBD_0 In dark grey, residues with missing data for coloring Bound structures : In light grey residues in bound structure for which miss corresponding ASA values in the unbound structures Per residue i, delta_ASA = ASA[i] - mean(ASA[i]) (mean is computed using the unbound structures) (in Å2) Unbound structures : Accessible Surface Area (ASA) (in Å2) Residue Index 403----410-------420-------430-------440-------450-------460-------470-------480-------490-------500---- | | | | | | | | | | | bound-6m0j-E, res :2.45 Å, 27 interf res R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ bound-6lzg-B, res :2.5 Å, 39 interf res R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ unbound-closed-6vxx-A, res :2.8 Å, 0 interf res R-DE----------KI--Y-----------------N-----**G-Y---Y-**_____-------____*_***______****_YF-LQSYGFQPTN*VGYQ unbound-closed-6vyb-A, res :3.2 Å, 0 interf res R-DE----------KI--Y-----------------N---__***-Y---Y-LF--------------__*_***______****_*F-LQSYGFQPTN*VGYQ MISA B_factor for MISA-ID SARS-CoV-2-RBD_0 In dark grey, residues with missing data for coloring B-Factor (in Å2) Residue Index 403----410-------420-------430-------440-------450-------460-------470-------480-------490-------500---- | | | | | | | | | | | bound-6m0j-E, res :2.45 Å, 27 interf res R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ bound-6lzg-B, res :2.5 Å, 39 interf res 
R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ unbound-closed-6vxx-A, res :2.8 Å, 0 interf res R-DE----------KI--Y-----------------N-----**G-Y---Y-**_____-------____*_***______****_YF-LQSYGFQPTN*VGYQ unbound-closed-6vyb-A, res :3.2 Å, 0 interf res R-DE----------KI--Y-----------------N---__***-Y---Y-LF--------------__*_***______****_*F-LQSYGFQPTN*VGYQ In [4]: #IFrame(src='./misa-RBD-ACE2-cmp/MISA/ACE2-bound-to-SARS-CoV-1_0-demo-misa-1.html', width="100%", height=600) display(HTML('misa-RBD-ACE2-cmp/MISA/ACE2-bound-to-SARS-CoV-1_0-demo-misa-1.html')) Legend for the amino acids (aa) encoding : For aa not at the interface : _ if aa is missing - if aa is present For aa at the interface : * if aa is missing X if consensus aa (= most frequent among the bound structures, and in case of tie the first by alphabetical order) x otherwise (For bound structure files only) : x or X if the aa is part of the consensus interface but not part of the interface of this file MISA SSE for MISA-ID ACE2-bound-to-SARS-CoV-1_0 3-turn helix - 4-turn helix - 5-turn helix - Isolated beta-bridge residue - Extended strand - Bend - Hydrogen bonded turn - Other - Missing Residue - Residue Index -20--------30--------40--------50--------60--------70--------80-- 321------330-------340-------350-------360-------370-------380-------390- | | | | | | | | | | | | | | | bound-2ajf-A, res :2.9 Å, 33 interf res S---EQ-KTF-DK--H--ED--YQ--L---------------------------------L--MY ---TQGF-EN----------------------KGDFR----------------------------AAQP---R bound-2ajf-B, res :2.9 Å, 27 interf res S---EQ-KTF-DK--H--ED--YQ--L---------------------------------L--MY ---TQGF-EN----------------------KGDFR----------------------------AAQP---R MISA BSA for MISA-ID ACE2-bound-to-SARS-CoV-1_0 In dark grey, residues with missing data for coloring Buried Surface Area (BSA) (in Å2) | bound-2ajf-A : total bsa = 888.38 Å2 | bound-2ajf-B : total bsa = 817.21 Å2 Residue Index 
-20--------30--------40--------50--------60--------70--------80-- 321------330-------340-------350-------360-------370-------380-------390- | | | | | | | | | | | | | | | bound-2ajf-A, res :2.9 Å, 33 interf res S---EQ-KTF-DK--H--ED--YQ--L---------------------------------L--MY ---TQGF-EN----------------------KGDFR----------------------------AAQP---R bound-2ajf-B, res :2.9 Å, 27 interf res S---EQ-KTF-DK--H--ED--YQ--L---------------------------------L--MY ---TQGF-EN----------------------KGDFR----------------------------AAQP---R MISA Delta_ASA for MISA-ID ACE2-bound-to-SARS-CoV-1_0 In dark grey, residues with missing data for coloring Bound structures : In light grey residues in bound structure for which miss corresponding ASA values in the unbound structures Per residue i, delta_ASA = ASA[i] - mean(ASA[i]) (mean is computed using the unbound structures) (in Å2) Unbound structures : Accessible Surface Area (ASA) (in Å2) Residue Index -20--------30--------40--------50--------60--------70--------80-- 321------330-------340-------350-------360-------370-------380-------390- | | | | | | | | | | | | | | | bound-2ajf-A, res :2.9 Å, 33 interf res S---EQ-KTF-DK--H--ED--YQ--L---------------------------------L--MY ---TQGF-EN----------------------KGDFR----------------------------AAQP---R bound-2ajf-B, res :2.9 Å, 27 interf res S---EQ-KTF-DK--H--ED--YQ--L---------------------------------L--MY ---TQGF-EN----------------------KGDFR----------------------------AAQP---R MISA B_factor for MISA-ID ACE2-bound-to-SARS-CoV-1_0 In dark grey, residues with missing data for coloring B-Factor (in Å2) Residue Index -20--------30--------40--------50--------60--------70--------80-- 321------330-------340-------350-------360-------370-------380-------390- | | | | | | | | | | | | | | | bound-2ajf-A, res :2.9 Å, 33 interf res S---EQ-KTF-DK--H--ED--YQ--L---------------------------------L--MY ---TQGF-EN----------------------KGDFR----------------------------AAQP---R bound-2ajf-B, res :2.9 Å, 27 interf 
res S---EQ-KTF-DK--H--ED--YQ--L---------------------------------L--MY ---TQGF-EN----------------------KGDFR----------------------------AAQP---R

## Step 2: mixing selected MISA with sbl-misa-mix.py¶

Once these individual figures have been generated, a call to sbl-misa-mix.py makes it possible to compare different MISA_id simultaneously in the same figure. In the sequel, we focus on the following three colored MISA: SSE, BSA, Delta_ASA. sbl-misa-mix.py parses the specification file ifile-misa-mix.txt, from which it finds the location of the directory containing the input data, the MISA_id to be displayed, and the colorings to be displayed.

Content of ./misa-RBD-ACE2-cmp/ifile-misa-mix.txt :

# List of input directories
localisation (./misa-RBD-ACE2-cmp/MISA)
# List of MISA_chain_ids
misa_chain_id (SARS-CoV-1-RBD_0, SARS-CoV-2-RBD_0)
# List of colorings of interest
coloring (SSE, BSA, Delta_ASA)

In [5]:
exe = shutil.which('sbl-misa-mix.py')
assert exe is not None  # sbl-misa-mix.py must be visible from the PATH
prefix = 'demo-mix-1' # To append at the beginning of the output files
mix_ifile = './misa-RBD-ACE2-cmp/ifile-misa-mix.txt' # Specification file
odir = './misa-RBD-ACE2-cmp' # Output directory
verbose = '0'
cmd = [exe, "-mix_ifile", mix_ifile, '-prefix', prefix, '-odir', odir, '--verbose', verbose]
print('Running %s -mix_ifile %s -prefix %s -odir %s --verbose %s' % (exe, mix_ifile, prefix, odir, verbose))
s = subprocess.check_output(cmd, encoding='UTF-8')
#print(s)

Running /user/fcazals/home/projects/proj-soft/sbl-install/bin/sbl-misa-mix.py -mix_ifile ./misa-RBD-ACE2-cmp/ifile-misa-mix.txt -prefix demo-mix-1 -odir ./misa-RBD-ACE2-cmp --verbose 0

The first figure of the article corresponds to the output of sbl-misa-mix.py, run with the ifile-misa-mix.txt presented above :

In [6]:
#IFrame(src='./misa-RBD-ACE2-cmp/demo-mix-1_SSE_BSA_Delta_ASA_SARS-CoV-1-RBD_0_SARS-CoV-2-RBD_0_mixed_figure.html', width="100%", height=600)
display(HTML('./misa-RBD-ACE2-cmp/demo-mix-1_SSE_BSA_Delta_ASA_SARS-CoV-1-RBD_0_SARS-CoV-2-RBD_0_mixed_figure.html')) Legend for the amino acids (aa) encoding : For aa not at the interface : _ if aa is missing - if aa is present For aa at the interface : * if aa is missing X if consensus aa (= most frequent among the bound structures, and in case of tie the first by alphabetical order) x otherwise (For bound structure files only) : x or X if the aa is part of the consensus interface but not part of the interface of this file ) MISA SSE for MISA-ID SARS-CoV-1-RBD_0 3-turn helix - 4-turn helix - 5-turn helix - Isolated beta-bridge residue - Extended strand - Bend - Hydrogen bonded turn - Other - Missing Residue - Residue Index 390-------400-------410-------420-------430-------440-------450-------460-------470-------480-------490 | | | | | | | | | | | bound-2ajf-E, res :2.9 Å, 29 interf res K--D--Q-------VI--Y-----------------R-----S---Y---Y-YL----------------F-PD------P-LNCY---NDYG-YTTTGI-YQ bound-2ajf-F, res :2.9 Å, 29 interf res K--D--Q-------VI--Y-----------------R-----S---Y---Y-YL----------------F-PD------P-LNCY---NDYG-YTTTGI-YQ unbound-closed-5x58-A, res :3.2 Å, 0 interf res K--D--Q-------VI--Y-----------------R-----S---Y---Y-YL----------------F-PD------P-LNCY---NDYG-YTTTGI-YQ unbound-closed-6crz-C, res :3.3 Å, 0 interf res K--D--Q-------VI--Y-----------------R-----S---Y---Y-YL----------------F-PD------P-LNCY---NDYG-YTTTGI-YQ MISA SSE for MISA-ID SARS-CoV-2-RBD_0 3-turn helix - 4-turn helix - 5-turn helix - Isolated beta-bridge residue - Extended strand - Bend - Hydrogen bonded turn - Other - Missing Residue - Residue Index 403----410-------420-------430-------440-------450-------460-------470-------480-------490-------500---- | | | | | | | | | | | bound-6m0j-E, res :2.45 Å, 27 interf res R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ bound-6lzg-B, res :2.5 Å, 39 interf res 
R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ unbound-closed-6vxx-A, res :2.8 Å, 0 interf res R-DE----------KI--Y-----------------N-----**G-Y---Y-**_____-------____*_***______****_YF-LQSYGFQPTN*VGYQ unbound-closed-6vyb-A, res :3.2 Å, 0 interf res R-DE----------KI--Y-----------------N---__***-Y---Y-LF--------------__*_***______****_*F-LQSYGFQPTN*VGYQ MISA BSA for MISA-ID SARS-CoV-1-RBD_0 In dark grey, residues with missing data for coloring Buried Surface Area (BSA) (in Å2) | bound-2ajf-E : total bsa = 925.41 Å2 | bound-2ajf-F : total bsa = 864.87 Å2 Residue Index 390-------400-------410-------420-------430-------440-------450-------460-------470-------480-------490 | | | | | | | | | | | bound-2ajf-E, res :2.9 Å, 29 interf res K--D--Q-------VI--Y-----------------R-----S---Y---Y-YL----------------F-PD------P-LNCY---NDYG-YTTTGI-YQ bound-2ajf-F, res :2.9 Å, 29 interf res K--D--Q-------VI--Y-----------------R-----S---Y---Y-YL----------------F-PD------P-LNCY---NDYG-YTTTGI-YQ MISA BSA for MISA-ID SARS-CoV-2-RBD_0 In dark grey, residues with missing data for coloring Buried Surface Area (BSA) (in Å2) | bound-6m0j-E : total bsa = 887.29 Å2 | bound-6lzg-B : total bsa = 1120.20 Å2 Residue Index 403----410-------420-------430-------440-------450-------460-------470-------480-------490-------500---- | | | | | | | | | | | bound-6m0j-E, res :2.45 Å, 27 interf res R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ bound-6lzg-B, res :2.5 Å, 39 interf res R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ MISA Delta_ASA for MISA-ID SARS-CoV-1-RBD_0 In dark grey, residues with missing data for coloring Bound structures : In light grey residues in bound structure for which miss corresponding ASA values in the unbound structures Per residue i, delta_ASA = ASA[i] - mean(ASA[i]) (mean is computed using the unbound 
structures) (in Å2) Unbound structures : Accessible Surface Area (ASA) (in Å2) Residue Index 390-------400-------410-------420-------430-------440-------450-------460-------470-------480-------490 | | | | | | | | | | | bound-2ajf-E, res :2.9 Å, 29 interf res K--D--Q-------VI--Y-----------------R-----S---Y---Y-YL----------------F-PD------P-LNCY---NDYG-YTTTGI-YQ bound-2ajf-F, res :2.9 Å, 29 interf res K--D--Q-------VI--Y-----------------R-----S---Y---Y-YL----------------F-PD------P-LNCY---NDYG-YTTTGI-YQ unbound-closed-5x58-A, res :3.2 Å, 0 interf res K--D--Q-------VI--Y-----------------R-----S---Y---Y-YL----------------F-PD------P-LNCY---NDYG-YTTTGI-YQ unbound-closed-6crz-C, res :3.3 Å, 0 interf res K--D--Q-------VI--Y-----------------R-----S---Y---Y-YL----------------F-PD------P-LNCY---NDYG-YTTTGI-YQ MISA Delta_ASA for MISA-ID SARS-CoV-2-RBD_0 In dark grey, residues with missing data for coloring Bound structures : In light grey residues in bound structure for which miss corresponding ASA values in the unbound structures Per residue i, delta_ASA = ASA[i] - mean(ASA[i]) (mean is computed using the unbound structures) (in Å2) Unbound structures : Accessible Surface Area (ASA) (in Å2) Residue Index 403----410-------420-------430-------440-------450-------460-------470-------480-------490-------500---- | | | | | | | | | | | bound-6m0j-E, res :2.45 Å, 27 interf res R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ bound-6lzg-B, res :2.5 Å, 39 interf res R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------EGFN-YF-LQSYGFQPTNGVGYQ unbound-closed-6vxx-A, res :2.8 Å, 0 interf res R-DE----------KI--Y-----------------N-----**G-Y---Y-**_____-------____*_***______****_YF-LQSYGFQPTN*VGYQ unbound-closed-6vyb-A, res :3.2 Å, 0 interf res R-DE----------KI--Y-----------------N---__***-Y---Y-LF--------------__*_***______****_*F-LQSYGFQPTN*VGYQ ## Step 3: investigating buried surface areas of selected 
residues with sbl-misa-bsa.py¶

To further the study of an interface, we recover the BSA value of specific user-defined residues. This is the purpose of sbl-misa-bsa.py. We provide the program with an .xml file generated by sbl-intervor-ABW-atomic.exe (run with the --output-prefix option), as well as a specfile containing the list of residues of interest, as presented below :

Content of ./misa-RBD-ACE2-cmp/ifile-misa-bsa.txt :

[(A, E, 303), (A, E, 403), (A, E, 449), (A, E, 455), (A, E, 486), (A, E, 502), (B, A, 79), (B, A, 35)]

In [7]:
exe = shutil.which('sbl-misa-bsa.py')
assert exe is not None  # sbl-misa-bsa.py must be visible from the PATH
specfile = './misa-RBD-ACE2-cmp/ifile-misa-bsa.txt' # Path to the spec file containing the list of residue ids to be studied
xmlfile = './misa-RBD-ACE2-cmp/input-data/intervor/sbl-intervor-ABW-atomic__radius_water_1dot4__f_6m0j__p_4__P_E__P_A___alpha_0__buried_surface_area.xml' # Path to the .xml input file
cmd = [exe, "-specfile", specfile, '-xmlfile', xmlfile]
s = subprocess.check_output(cmd, encoding='UTF-8')
print(s)

Running sbl-misa-bsa.py XML: 1 / 1 files were loaded #################################################### BSA for the intervor_partner A First according to the provided list of residues : Chain E Residue 303 : bsa = NA Å^2 Chain E Residue 486 : bsa = 98.531 Å^2 Chain E Residue 502 : bsa = 41.882 Å^2 Chain E Residue 455 : bsa = 41.547 Å^2 Chain E Residue 449 : bsa = 37.186 Å^2 Chain E Residue 403 : bsa = 0.096 Å^2 Cumulated bsa for intervor_partner A - chain E is 219.242 Å^2 Cumulated bsa for intervor_partner A is 219.242 Å^2 Then for the other residues (we only display the residues with a BSA greater than 0.001 Å^2) : Chain E Residue 500 : bsa = 91.397 Å^2 Chain E Residue 505 : bsa = 86.628 Å^2 Chain E Residue 489 : bsa = 73.470 Å^2 Chain E Residue 493 : bsa = 60.387 Å^2 Chain E Residue 498 : bsa = 55.658 Å^2 Chain E Residue 456 : bsa = 45.129 Å^2 Chain E Residue 475 : bsa = 38.708 Å^2 Chain E Residue 487 : bsa =
38.399 Å^2 Chain E Residue 501 : bsa = 30.063 Å^2 Chain E Residue 417 : bsa = 27.641 Å^2 Chain E Residue 496 : bsa = 22.986 Å^2 Chain E Residue 453 : bsa = 22.979 Å^2 Chain E Residue 503 : bsa = 21.688 Å^2 Chain E Residue 484 : bsa = 13.355 Å^2 Chain E Residue 446 : bsa = 10.431 Å^2 Chain E Residue 476 : bsa = 10.198 Å^2 Chain E Residue 445 : bsa = 9.552 Å^2 Chain E Residue 473 : bsa = 6.310 Å^2 Chain E Residue 490 : bsa = 1.404 Å^2 Chain E Residue 477 : bsa = 1.387 Å^2 Chain E Residue 485 : bsa = 0.278 Å^2 Cumulated bsa for intervor_partner A - chain E is 668.049 Å^2 Cumulated bsa for intervor_partner A is 668.049 Å^2 The bsa for intervor_partner A is 668.049 Å^2 (with respect to the provided residue ids) #################################################### BSA for the intervor_partner B First according to the provided list of residues : Chain A Residue 79 : bsa = 24.518 Å^2 Chain A Residue 35 : bsa = 17.735 Å^2 Cumulated bsa for intervor_partner B - chain A is 42.254 Å^2 Cumulated bsa for intervor_partner B is 42.254 Å^2 Then for the other residues (we only display the residues with a BSA greater than 0.001 Å^2) : Chain A Residue 353 : bsa = 97.689 Å^2 Chain A Residue 31 : bsa = 93.757 Å^2 Chain A Residue 34 : bsa = 68.567 Å^2 Chain A Residue 27 : bsa = 66.460 Å^2 Chain A Residue 24 : bsa = 53.133 Å^2 Chain A Residue 42 : bsa = 44.506 Å^2 Chain A Residue 41 : bsa = 43.731 Å^2 Chain A Residue 30 : bsa = 40.852 Å^2 Chain A Residue 83 : bsa = 38.720 Å^2 Chain A Residue 38 : bsa = 34.191 Å^2 Chain A Residue 354 : bsa = 31.103 Å^2 Chain A Residue 330 : bsa = 28.685 Å^2 Chain A Residue 82 : bsa = 28.595 Å^2 Chain A Residue 45 : bsa = 25.525 Å^2 Chain A Residue 324 : bsa = 17.235 Å^2 Chain A Residue 28 : bsa = 16.649 Å^2 Chain A Residue 37 : bsa = 16.266 Å^2 Chain A Residue 355 : bsa = 11.668 Å^2 Chain A Residue 357 : bsa = 11.206 Å^2 Chain A Residue 19 : bsa = 10.478 Å^2 Chain A Residue 393 : bsa = 9.348 Å^2 Chain A Residue 325 : bsa = 8.452 Å^2 Chain A Residue 326 : 
bsa = 4.140 Å^2 Chain A Residue 386 : bsa = 0.593 Å^2 Cumulated bsa for intervor_partner B - chain A is 801.550 Å^2 Cumulated bsa for intervor_partner B is 801.550 Å^2 The bsa for intervor_partner B is 801.550 Å^2 (with respect to the provided residue ids) Done

## Step 4: comparing interfaces and MISA with sbl-misa-diff.py¶

sbl-misa-diff.py compares the interfaces of two peer chains, identifying the residues specific to each and the shared residues, and displays the BSA of these residues. It can compare Voronoi interfaces and/or manually defined interfaces. Hand-defined interfaces shall be specified in the same format as SARS-CoV-1-RBD-Harisson-2005.txt (see below), where the first line corresponds to the name given to the chain, and each subsequent line corresponds to a residue (nature + index).

Content of ./misa-RBD-ACE2-cmp/SARS-CoV-1-RBD-Harisson-2005.txt :

harisson-interface
T402
R426
Y436
Y440
Y442
L472
N473
Y475
N479
Y484
T486
T487
G488
Y491

The corresponding specfile is the following :

Content of ./misa-RBD-ACE2-cmp/ifile-misa-diff.txt :

(./misa-RBD-ACE2-cmp/MISA/raw-data, 2ajf, E)
(./misa-RBD-ACE2-cmp/SARS-CoV-1-RBD-Harisson-2005.txt)

The output .txt file is displayed below.
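A hand-defined interface file in this format can be read with a few lines of Python. The sketch below is illustrative (the function name is hypothetical, not part of the SBL); it applies the format stated above: first line the chain name, then one short-residue-spec NXXX per line.

```python
def parse_interface_file_lines(lines):
    """Parse a hand-defined interface file: the first non-empty line is the
    chain name; every following line is one short-residue-spec NXXX, i.e. the
    one-letter residue nature N followed by its sequence index XXX."""
    lines = [l.strip() for l in lines if l.strip()]
    name, specs = lines[0], lines[1:]
    residues = [(s[0], int(s[1:])) for s in specs]  # (nature, index) pairs
    return name, residues

name, residues = parse_interface_file_lines(['harisson-interface', 'T402', 'R426', 'Y491'])
```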
In [8]:
exe = shutil.which('sbl-misa-diff.py')
assert exe is not None  # sbl-misa-diff.py must be visible from the PATH
specfile = './misa-RBD-ACE2-cmp/ifile-misa-diff.txt'
odir = './misa-RBD-ACE2-cmp' # Output directory
cmd = [exe, "-specfile", specfile, "-odir", odir]
s = subprocess.check_output(cmd, encoding='UTF-8')
print(s)
#sblpyt.show_this_text_file('misa-RBD-ACE2-cmp/comparison-interface-2ajf-E-with-Harisson-interface-RBD-CoV1-ACE2-Science-2005.txt')
sblpyt.show_this_text_file('misa-RBD-ACE2-cmp/comparison-interface-2ajf-E-with-harisson-interface.txt')

Running sbl-misa-diff.py Comparing interfaces created ./misa-RBD-ACE2-cmp/comparison-interface-2ajf-E-with-harisson-interface.txt Done

++Showing file misa-RBD-ACE2-cmp/comparison-interface-2ajf-E-with-harisson-interface.txt

Comparison of the Buried Surface Area (BSA) and of the nature of the residues for the interface residues. (Missing data are denoted by "NA")

16 exclusive residues at the interface of chain 2ajf-E :
BSA-2ajf-E Names-2ajf-E
( , 390, ) 9.34 K
( , 393, ) 4.92 D
( , 404, ) 14.90 V
( , 405, ) 1.66 I
( , 408, ) 3.25 Y
( , 432, ) 6.46 S
( , 443, ) 34.93 L
( , 460, ) 3.21 F
( , 462, ) 49.79 P
( , 463, ) 11.89 D
( , 470, ) 2.18 P
( , 480, ) 0.45 D
( , 481, ) 11.17 Y
( , 482, ) 14.59 G
( , 489, ) 32.87 I
( , 492, ) 3.55 Q

1 exclusive residues at the interface of chain harisson-interface :
BSA-harisson-interface Names-harisson-interface
( , 402, ) NA NA

13 shared residues :
BSA-2ajf-E BSA-harisson-interface Names-2ajf-E Names-harisson-interface
( , 426, ) 42.57 NA R NA
( , 436, ) 36.67 NA Y NA
( , 440, ) 32.86 NA Y NA
( , 442, ) 60.05 NA Y NA
( , 472, ) 75.32 NA L NA
( , 473, ) 51.06 NA N NA
( , 475, ) 83.14 NA Y NA
( , 479, ) 23.05 NA N NA
( , 484, ) 62.65 NA Y NA
( , 486, ) 86.47 NA T NA
( , 487, ) 43.74 NA T NA
( , 488, ) 41.77 NA G NA
( , 491, ) 80.91 NA Y NA
--Done

# Example 2: structure of the RBD of SARS-CoV-1 and SARS-CoV-2 bound to immunoglobulins¶

This figure compares the RBD interface of SARS-CoV-1 and SARS-CoV-2 with ACE2, or with different
immunoglobulins (VHH72, CR3022, 2F6). The RBD from SARS-CoV-2 is involved in several complexes, but the same structure cannot appear more than once in a given specification file. It is thus necessary to make one ifile-misa.txt per complex. One subdirectory per complex, containing only the relevant ifile-misa.txt, was provided.

## Step 1 : generating all colored MISA for each complex with sbl-misa.py¶

Each ifile-misa.txt contains one or two examples of bound RBD, as well as two examples of unbound RBD, in order to be able to calculate the $\Delta\_ASA$ induced by the conformational change.

Content of ./misa-RBD-IG/RBD-VHH72/ifile-misa.txt :

./pdb/6waq.pdb (A, B, SARS-CoV-1-RBD-bound-to-VHH72, bound) (B, A, VHH72, bound)
./pdb/5x58.pdb (A, A, SARS-CoV-1-RBD-bound-to-VHH72, unbound-closed)
./pdb/6crz.pdb (A, C, SARS-CoV-1-RBD-bound-to-VHH72, unbound-closed)

Content of ./misa-RBD-IG/RBD-ACE2/ifile-misa.txt :

[SARS-CoV-2-RBD-bound-to-ACE2_0 (346, 528)]
[SARS-CoV-2-RBD-bound-to-ACE2_0 (355, 494)]
# Specification SARS-CoV-1
./pdb/2ajf.pdb (A, E, SARS-CoV-1-RBD-bound-to-ACE2, bound) (B, A, ACE2-bound-to-CoV-1, bound)
./pdb/2ajf.pdb (A, F, SARS-CoV-1-RBD-bound-to-ACE2, bound) (B, B, ACE2-bound-to-CoV-1, bound)
./pdb/5x58.pdb (A, A, SARS-CoV-1-RBD-bound-to-ACE2, unbound-closed)
./pdb/6crz.pdb (A, C, SARS-CoV-1-RBD-bound-to-ACE2, unbound-closed)
# Specification SARS-CoV-2
./pdb/6m0j.pdb (A, E, SARS-CoV-2-RBD-bound-to-ACE2, bound) (B, A, ACE2-bound-to-CoV-2, bound)
./pdb/6lzg.pdb (A, B, SARS-CoV-2-RBD-bound-to-ACE2, bound) (B, A, ACE2-bound-to-CoV-2, bound)
./pdb/6vxx.pdb (A, A, SARS-CoV-2-RBD-bound-to-ACE2, unbound-closed)
./pdb/6vyb.pdb (A, A, SARS-CoV-2-RBD-bound-to-ACE2, unbound-closed)

Content of ./misa-RBD-IG/RBD-CR3022/ifile-misa.txt

[SARS-CoV-2-RBD-bound-to-CR3022_0 (346, 528)]
./pdb/6yla.pdb (A, E, SARS-CoV-2-RBD-bound-to-CR3022, bound) (B, H, CR3022-antibody, bound) (B, L, CR3022-antibody, bound)
./pdb/6yla.pdb (A, A, SARS-CoV-2-RBD-bound-to-CR3022, bound) (B, B, CR3022-antibody,
bound) (B, C, CR3022-antibody, bound)
./pdb/6vxx.pdb (A, A, SARS-CoV-2-RBD-bound-to-CR3022, unbound-closed)
./pdb/6vyb.pdb (A, A, SARS-CoV-2-RBD-bound-to-CR3022, unbound-closed)

Content of ./misa-RBD-IG/RBD-2F6/ifile-misa.txt

[SARS-CoV-2-RBD-bound-to-2F6_0 (346, 528)]
./pdb/7bwj.pdb (A, E, SARS-CoV-2-RBD-bound-to-2F6, bound) (B, H, 2F6-antibody, bound) (B, L, 2F6-antibody, bound)
./pdb/6vxx.pdb (A, A, SARS-CoV-2-RBD-bound-to-2F6, unbound-closed)
./pdb/6vyb.pdb (A, A, SARS-CoV-2-RBD-bound-to-2F6, unbound-closed)

In [9]:
exe = shutil.which('sbl-misa.py')
assert exe is not None  # sbl-misa.py must be visible from the PATH
for dir_complex in ['RBD-VHH72','RBD-ACE2', 'RBD-CR3022','RBD-P2B-2F6']:
    prefix_dir = './misa-RBD-IG/%s' % dir_complex # To append at the beginning of every input and output directories
    ifile = '%s/ifile-misa.txt' % prefix_dir # Specification file
    prefix = 'demo-misa-2' # To append at the beginning of the output files
    verbose = '0'
    normalize_b_factor = '2' # Normalization with respect to the displayed residues only
    cmd = [exe, "-ifile", ifile, "-prefix_dir", prefix_dir, '-prefix', prefix, '--verbose', verbose, '-normalize_b_factor', normalize_b_factor]
    s = subprocess.check_output(cmd, encoding='UTF-8')
    print('Done for complex %s' % dir_complex)
    #print(s)

Done for complex RBD-VHH72 Done for complex RBD-ACE2 Done for complex RBD-CR3022 Done for complex RBD-P2B-2F6

As in the first example, a figure containing the four colorings is produced for each chain.
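One of these four colorings, Delta_ASA, follows the per-residue rule recalled in the figure legends: delta_ASA = ASA[i] - mean(ASA[i]), where the mean is computed over the unbound structures. A minimal sketch of that computation (illustrative only, not the SBL implementation; residues unresolved in all unbound structures get no value, matching the "missing data" convention of the figures):

```python
def delta_asa(bound_asa, unbound_asas):
    """Per-residue delta_ASA: for each residue index of the bound structure,
    subtract the mean ASA over the unbound structures in which the residue
    is resolved. Residues with no unbound data are skipped."""
    result = {}
    for i, asa in bound_asa.items():
        values = [u[i] for u in unbound_asas if i in u]
        if values:  # missing in every unbound structure -> no delta_ASA
            result[i] = asa - sum(values) / len(values)
    return result

d = delta_asa({486: 10.0, 487: 5.0}, [{486: 100.0}, {486: 120.0}])
```

A large negative value thus flags a residue that loses much of its accessible surface upon binding.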
For example, here is the RBD of SARS-CoV-2 in complex with the CR3022 antibody:

In [10]:
# IFrame(src='./misa-RBD-IG/RBD-CR3022/MISA/SARS-CoV-2-RBD-bound-to-CR3022_0-demo-misa-2.html', width="100%", height=600)
display(HTML('./misa-RBD-IG/RBD-CR3022/MISA/SARS-CoV-2-RBD-bound-to-CR3022_0-demo-misa-2.html'))

Legend for the amino acids (aa) encoding:

For aa not at the interface:
_ if aa is missing
- if aa is present

For aa at the interface:
* if aa is missing
X if consensus aa (= most frequent among the bound structures, and in case of tie the first by alphabetical order)
x otherwise

(For bound structure files only): x or X if the aa is part of the consensus interface but not part of the interface of this file

MISA SSE for MISA-ID SARS-CoV-2-RBD-bound-to-CR3022_0
3-turn helix - 4-turn helix - 5-turn helix - Isolated beta-bridge residue - Extended strand - Bend - Hydrogen bonded turn - Other - Missing Residue -
(the SSE classes are distinguished by color in the HTML output)

Residue Index
346-350-------360-------370-------380-------390-------400-------410-------420-------430-------440-------450-------460-------470-------480-------490-------500-------510-------520------
bound-6yla-E, res: 2.42 Å, 33 interf res
----------------------LYNS--FSTFKCYGVSPTK-N-L-F---------------R--AP-Q------------DDFT------------------------------------------------------------------------------------FELLH--------K
bound-6yla-A, res: 2.42 Å, 31 interf res
----------------------LYNS--FSTFKCYGVSPTK-N-L-F---------------R--AP-Q------------DDFT---------------__-------------------------------------------------------------------FELLH--------K
unbound-closed-6vxx-A, res: 2.8 Å, 0 interf res
----------------------LYNS--FSTFKCYGVSPTK-N-L-F---------------R--AP-Q------------DDFT--------------__--------_______-------____________________-------------_------------FELLH--------K
unbound-closed-6vyb-A, res: 3.2 Å, 0 interf res
----------------------LYNS--FSTFKCYGVSPTK-N-L-F---------------R--AP-Q------------DDFT------------_____-----------------------___________________------------_------------FELLH--------K

MISA BSA for MISA-ID SARS-CoV-2-RBD-bound-to-CR3022_0
In dark grey, residues with missing data for coloring
Buried Surface Area (BSA) (in Å2)
bound-6yla-E: total bsa = 983.29 Å2 | bound-6yla-A: total bsa = 1079.67 Å2
(the per-residue BSA values are carried by color in the HTML output; the residue rows for bound-6yla-E and bound-6yla-A are identical to those in the SSE panel above)

MISA Delta_ASA for MISA-ID SARS-CoV-2-RBD-bound-to-CR3022_0
In dark grey, residues with missing data for coloring
Bound structures: in light grey, residues in the bound structure for which the corresponding ASA values in the unbound structures are missing
Per residue i, delta_ASA = ASA[i] - mean(ASA[i]) (the mean is computed using the unbound structures) (in Å2)
Unbound structures: Accessible Surface Area (ASA) (in Å2)
(the per-residue values are carried by color in the HTML output; the residue rows for all four structures are identical to those in the SSE panel above)

MISA B_factor for MISA-ID SARS-CoV-2-RBD-bound-to-CR3022_0
In dark grey, residues with missing data for coloring
B-Factor (in Å2)
(the per-residue values are carried by color in the HTML output; the residue rows for all four structures are identical to those in the SSE panel above)

## Step 2: mixing selected MISA

sbl-misa-mix.py can also gather the
output of different runs of sbl-misa.py (in contrast to the first example, where the MISA_ids all came from the same run of sbl-misa.py). Here is an example, which corresponds to the second figure in the paper, based on the following ifile-misa-mix.txt:

Content of ./misa-RBD-IG/ifile-misa-mix.txt:

localisation (./misa-RBD-IG/RBD-VHH72/MISA, ./misa-RBD-IG/RBD-ACE2/MISA, ./misa-RBD-IG/RBD-CR3022/MISA, ./misa-RBD-IG/RBD-P2B-2F6/MISA)
# List of MISA_chain_ids
misa_chain_id (SARS-CoV-2-RBD-bound-to-P2B-2F6_0, SARS-CoV-2-RBD-bound-to-CR3022_0, SARS-CoV-1-RBD-bound-to-VHH72_0, SARS-CoV-2-RBD-bound-to-ACE2_0)
# List of coloring of interest
coloring (SSE)

In [11]:
exe = shutil.which('sbl-misa-mix.py')
if not exe:  # exe == None: the script is not on the PATH, so abort
    raise FileNotFoundError('sbl-misa-mix.py not found on the PATH')
prefix = 'demo-mix-2'                            # To append at the beginning of the output files
mix_ifile = './misa-RBD-IG/ifile-misa-mix.txt'   # Specification file
odir = './misa-RBD-IG'                           # Output directory
verbose = '0'
cmd = [exe, "-mix_ifile", mix_ifile, '-prefix', prefix, '-odir', odir, '--verbose', verbose]
s = subprocess.check_output(cmd, encoding='UTF-8')
print(s)

Running sbl-misa-mix.py
Done

It gives the following output:

In [12]:
# IFrame(src='./misa-RBD-IG/demo-mix-2_SSE_SARS-CoV-2-RBD-bound-to-P2B-2F6_0_SARS-CoV-2-RBD-bound-to-CR3022_0_SARS-CoV-1-RBD-bound-to-VHH72_0_SARS-CoV-2-RBD-bound-to-ACE2_0_mixed_figure.html', width="100%", height=600)
display(HTML('./misa-RBD-IG/demo-mix-2_SSE_SARS-CoV-2-RBD-bound-to-P2B-2F6_0_SARS-CoV-2-RBD-bound-to-CR3022_0_SARS-CoV-1-RBD-bound-to-VHH72_0_SARS-CoV-2-RBD-bound-to-ACE2_0_mixed_figure.html'))

(Same legend for the amino acids (aa) encoding as above.)

MISA SSE for MISA-ID SARS-CoV-1-RBD-bound-to-VHH72_0
3-turn helix - 4-turn helix - 5-turn helix - Isolated beta-bridge residue - Extended strand - Bend - Hydrogen bonded turn - Other - Missing Residue -

Residue Index
355--360-------370-------380-------390-------400-------410-------420-------430-------440-------450-------460-------470-------480-------490--
bound-6waq-B, res: 2.2 Å, 25 interf res
LYNSTFFSTFKC--V-AT------------------GD-VR---------------------------WN-R--------------------------------------------------------------IG---Y
unbound-closed-5x58-A, res: 3.2 Å, 0 interf res
LYNSTFFSTFKC--V-AT------------------GD-VR---------------------------WN-R--------------------------------------------------------------IG---Y
unbound-closed-6crz-C, res: 3.3 Å, 0 interf res
LYNSTFFSTFKC__*_AT------------------GD-VR---------------------------WN-R--------------------------------------------------------------IG---Y

MISA SSE for MISA-ID SARS-CoV-2-RBD-bound-to-ACE2_0
3-turn helix - 4-turn helix - 5-turn helix - Isolated beta-bridge residue - Extended strand - Bend - Hydrogen bonded turn - Other - Missing Residue -

Residue Index
346-350-------360-------370-------380-------390-------400-------410-------420-------430-------440-------450-------460-------470-------480-------490-------500-------510-------520------
bound-6m0j-E, res: 2.45 Å, 27 interf res
---------------------------------------------------------R-DE----------KI--Y-----------------N-----VGG-Y---Y-LF----------------Y-AGS------
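The row encodings in these displays follow the legend given above. Based only on that legend (an assumption; the actual SBL tooling may count differently), the positions of a row belonging to the consensus interface can be counted mechanically. A small sketch with a made-up row, not one of the rows above:

```python
def count_consensus_interface_positions(row: str) -> int:
    """Count positions belonging to the consensus interface in a MISA row.

    Per the legend: letters (X/x-style codes) and '*' occur only at
    interface positions, while '-' and '_' mark non-interface residues
    (present and missing, respectively).
    """
    return sum(1 for ch in row if ch.isalpha() or ch == '*')

toy_row = "--AB*_-c--"  # made-up example row
print(count_consensus_interface_positions(toy_row))  # 4  (A, B, *, c)
```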
https://www.sierrachart.com/index.php?page=doc/StudiesReference.php&ID=287&Name=Multiply_All_Charts
# Technical Studies Reference

### Multiply All Charts

This study calculates products across $$N$$ open charts (up to $$200$$) for the following variables.

• Open Price $$O$$
• High Price $$H$$
• Low Price $$L$$
• Close Price $$C$$
• Volume $$V$$
• Open Interest $$OI$$
• OHLC Average Price $$\overline{P}^{(OHLC)}$$
• HLC Average Price $$\overline{P}^{(HLC)}$$
• HL Average Price $$\overline{P}^{(HL)}$$
• Bid Volume $$V^{(Bid)}$$
• Ask Volume $$V^{(Ask)}$$

For example, let the Open Price at Index $$t$$ for the $$N$$ charts be denoted as $$O^{(1)}_t, O^{(2)}_t, ... , O^{(N)}_t$$. We compute the product of these at Index $$t$$ as follows.

$$\displaystyle{\prod_{j = 1}^N O^{(j)}_t} = O^{(1)}_t \cdot O^{(2)}_t \cdot \cdots \cdot O^{(N)}_t$$

For an explanation of the Pi ($$\Pi$$) notation for multiplication, refer to our description here.

The Index matching across the charts is done via the ACSIL function sc.GetNearestMatchForDateTimeIndex().

By default, only the products for the Open, High, Low, and Close Prices are displayed (via OHLC bars). The products for the other variables are not displayed, but they are calculated and can be accessed via a Spreadsheet Study.

The spreadsheet below contains the formulas for this study in Spreadsheet format. Only the products for $$O$$, $$H$$, $$L$$, and $$C$$ were tested. Save this Spreadsheet to the Data Files Folder.
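The per-index product in the Pi-notation formula above is straightforward to sketch. The data below is hypothetical sample data; the actual study aligns indices across charts by DateTime via sc.GetNearestMatchForDateTimeIndex():

```python
# Three hypothetical charts' Open Price series, already index-aligned.
opens = [
    [1.0, 2.0, 3.0],   # chart 1: O^(1)_t
    [4.0, 5.0, 6.0],   # chart 2: O^(2)_t
    [7.0, 8.0, 9.0],   # chart 3: O^(3)_t
]

def product_at(series, t):
    """Product over all charts at index t: prod over j of series[j][t]."""
    prod = 1.0
    for s in series:
        prod *= s[t]
    return prod

print(product_at(opens, 0))  # 28.0  (1 * 4 * 7)
```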
https://www.hackmath.net/en/math-problem/10891
The machine

The machine works 7 hours a day and produces 1 part in 5 minutes. How many parts will it produce in 1 hour? How many parts will it produce in 1 day?

Result

n1 = 12
n2 = 84

Solution:

$n_{1} = \dfrac{60}{5} = 12$

$n_{2} = n_{1} \cdot 7 = 12 \cdot 7 = 84$

Next similar math problems:

1. Third of an hour — How many minutes is a third of an hour? Do you know how to determine a third of a lesson hour (45 min)?
2. Pitcher — Matthew picked half a pitcher of raspberries in 45 minutes. Find how long three children would take to fill the pitcher if each of them worked at the same pace as Matthew.
3. Fishing boat — A fishing boat caught 14 fish in one day. How many fish will 4 fishing boats catch in 8 days?
4. Bakers — Baker Martin baked 5 times more cakes than Dennis. How many cakes did Dennis bake if Martin baked 25 cakes?
5. Multiples of int — How many multiples of 3 are there up to 20? (zero does not count)
6. Computer — A line of print on a computer contains 64 characters (letters, spaces or other characters). Find how many characters there are in 7 lines.
7. Cars — Johnny has 370 cars. Peter has half as many. How many cars does Peter have?
8. A koala — A koala lives to be 14 years old in the wild. How much longer will a 2-year-old koala probably live?
9. Multiples — Find all multiples of 10 that are larger than 136 and smaller than 214.
10. Salary — Mr. Vesely got 874 CZK (Czech koruna) for his work. Mr. Jaros got half as much as Mr. Vesely. How many CZK did Mr. Jaros get?
11. Number — Which number is 17 times larger than the number 6?
12. Breeder — A breeder of seven hens has to supply 350 eggs. How long can he keep delivering if each of his hens lays at least 5 eggs per week?
13. Family — Dad is two years older than Mom. Mom is 5 times as old as Katy. Katy is half as old as Jan. Jan is 10 years old. How old is everyone in the family? How old are they all together?
14. Flour — 2 kg of flour costs 100 CZK. How much does half a kilogram cost?
15. What is — What is the value of the smaller of a pair of numbers whose sum is 78 and whose quotient is 0.3?
16. Fridays — Friday the 13th is in 4 days. What day is it today, and what day will it be then?
17. Dividing by five and ten — Divide the number 5040 by 5 and by 10: a = 5040 : 5, b = 5040 : 10
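The machine problem at the top of this page is plain unit arithmetic, and the two results can be checked in a couple of lines:

```python
minutes_per_part = 5
hours_per_day = 7

parts_per_hour = 60 // minutes_per_part      # 60 min / 5 min per part
parts_per_day = hours_per_day * parts_per_hour

print(parts_per_hour, parts_per_day)  # 12 84
```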
http://math.stackexchange.com/questions/252853/how-i-can-justify-that-the-residue-is-1-2
# How can I justify that the residue is -1/2?

I've been trying to work through this exercise but cannot find a way to justify the fact that the residue is -1/2.

I like this formula for the residue of a pole of order $n$ at $z=z_0$: $$\text{Res}[f,z_0]=\frac{1}{(n-1)!}\lim_{z\rightarrow z_0}\frac{d^{n-1}}{dz^{n-1}}((z-z_0)^nf(z))$$ Here, your poles are both order 1 (simple poles), so we have: $$\text{Res}[f,0]=\lim_{z\rightarrow 0}zf(z)=\lim_{z\rightarrow 0}\frac{z+1}{z-2}=-\frac{1}{2}$$ Note of course that this method is only really practical for low-order poles, and doesn't apply at all to essential singularities such as $\exp(1/z)$ at $z=0$.

There is another theorem that says that the coefficient is the residue. – Miguel Mora Luna Dec 7 '12 at 5:36

Yes, by definition the residue is the coefficient of the $1/z$ term in the Laurent series, but this method tends to be much easier to use. – icurays1 Dec 7 '12 at 5:37

Thanks for your help. – Miguel Mora Luna Dec 7 '12 at 5:41

And when calculating the residue at z = 2, the residue must be the coefficient of 1/(z-2), for the same reason mentioned above. – Miguel Mora Luna Dec 7 '12 at 5:54
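The question never restates f itself, but the limit in the answer implies f(z) = (z+1)/(z(z-2)); under that assumption, both residues can be checked with sympy:

```python
from sympy import symbols, residue, Rational

z = symbols('z')
# f inferred from the limit in the answer: z*f(z) = (z+1)/(z-2)
f = (z + 1) / (z * (z - 2))

print(residue(f, z, 0))  # -1/2
print(residue(f, z, 2))  # 3/2
```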
https://igraph.org/r/html/1.2.7/match_vertices.html
# R igraph manual pages

Use this if you are using igraph from R

## Match Graphs given a seeding of vertex correspondences

### Description

Given two adjacency matrices A and B of the same size, match the two graphs with the help of m seed vertex pairs which correspond to the first m rows (and columns) of the adjacency matrices.

### Usage

match_vertices(A, B, m, start, iteration)

### Arguments

A a numeric matrix, the adjacency matrix of the first graph
B a numeric matrix, the adjacency matrix of the second graph
m The number of seeds. The first m vertices of both graphs are matched.
start a numeric matrix, the permutation matrix estimate is initialized with start
iteration The number of iterations for the Frank-Wolfe algorithm

### Details

The approximate graph matching problem is to find a bijection between the vertices of two graphs such that the number of edge disagreements between the corresponding vertex pairs is minimized. For seeded graph matching, the part of the bijection that consists of known correspondences (the seeds) is known, and the task is to complete the bijection by estimating the permutation matrix that permutes the rows and columns of the adjacency matrix of the second graph.

It is assumed that for the two supplied adjacency matrices A and B, both of size n*n, the first m rows (and columns) of A and B correspond to the same vertices in both graphs. That is, the n*n permutation matrix that defines the bijection is I_{m} ⊕ P for an (n-m)*(n-m) permutation matrix P and the m*m identity matrix I_{m}. The function match_vertices estimates the permutation matrix P via an optimization algorithm based on the Frank-Wolfe algorithm. See the references for further details.

### Value

A numeric matrix which is the permutation matrix that determines the bijection between the graphs of A and B.

### Author(s)

Vince Lyzinski http://www.ams.jhu.edu/~lyzinski/

### References

Vogelstein, J. T., Conroy, J. M., Podrazik, L. J., Kratzer, S. G., Harley, E. T., Fishkind, D. E., Vogelstein, R. J., Priebe, C. E. (2011). Fast Approximate Quadratic Programming for Large (Brain) Graph Matching. Online: https://arxiv.org/abs/1112.5507

Fishkind, D. E., Adali, S., Priebe, C. E. (2012). Seeded Graph Matching. Online: https://arxiv.org/abs/1209.0367

### See Also

sample_correlated_gnp, sample_correlated_gnp_pair

### Examples

#require(Matrix)
g1 <- erdos.renyi.game(10, .1)
randperm <- c(1:3, 3+sample(7))
g2 <- sample_correlated_gnp(g1, corr=1, p=g1$p, perm=randperm)
http://www.ncatlab.org/nlab/show/suplattice
# nLab suplattice

A suplattice is a poset which has all joins (and in particular is a join-semilattice). By the adjoint functor theorem for posets, a suplattice necessarily has all meets as well and so is a complete lattice. However, a suplattice homomorphism preserves joins, but not necessarily meets. Furthermore, a large semilattice which has all small joins need not have all meets, but might still be considered a large suplattice (even though it may not even be a lattice).

Dually, an inflattice is a poset which has all meets, and an inflattice homomorphism is a monotone function that preserves all meets.

A frame (dual to a locale) is a suplattice in which finitary meets distribute over arbitrary joins. (Frame homomorphisms preserve all joins and finitary meets.)

The category SupLat of suplattices and suplattice homomorphisms admits a tensor product which represents “bilinear maps,” i.e. functions which preserve joins separately in each variable. Under this tensor product, the category of suplattices is a star-autonomous category in which the dualizing object is the suplattice dual to the object $\mathrm{TV}$ of truth-values. A monoid in this monoidal category is a quantale, including frames as a special case.

Revised on January 31, 2010 17:40:36 by Toby Bartels (173.60.119.197)
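The "all joins give all meets" step can be made concrete in a small finite example (a sketch: the powerset of a three-element set, ordered by inclusion, where join is union). Following the adjoint-functor-theorem argument, the meet of a family is computed as the join of its lower bounds, and here it comes out as intersection, as expected:

```python
from itertools import chain, combinations

# All subsets of {0, 1, 2}, ordered by inclusion: a finite suplattice.
universe = {0, 1, 2}
elements = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(universe), r) for r in range(len(universe) + 1))]

def join(family):
    """Join = union (least upper bound in the inclusion order)."""
    out = frozenset()
    for x in family:
        out |= x
    return out

def meet(family):
    """Meet recovered as the join of all common lower bounds."""
    lower_bounds = [x for x in elements if all(x <= y for y in family)]
    return join(lower_bounds)

print(meet([frozenset({0, 1}), frozenset({1, 2})]))  # frozenset({1})
```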
https://github.com/eddelbuettel/pinp/issues/75
# Preserve non-fancy quotes in code chunks #75

Closed · opened this issue Jul 20, 2019 · 4 comments

### riccardoporreca commented Jul 20, 2019 • edited

Using output: pinp::pinp, the quotes in the code chunks are converted to fancy quotes. I quickly constructed a minimal example as follows:

---
title: "Test pinp quotes"
output: pinp::pinp
---

Pinp's quotes in regular text: "dquoted", 'squoted', \`bquoted\`.

Pinp's quotes in inline verbatim text: `"dquoted"`, `'squoted'`.

```{r}
"dquoted"
'squoted'
bquoted
#' Xyz's "dquoted" 'squoted' bquoted
```

(I did not find a nice way to render the closing of the code chunk.)

This results in:

Not sure if this is a desired behavior (I noticed it for the Rcpp vignettes), but in case it is not, I tried a quick fix using the LaTeX package upquote. In particular, adding

\RequirePackage{upquote} % For keeping actual quotes in verbatim

to the class definition pinp.cls seemed to indeed help:

My LaTeX skills are a bit rusty, so maybe there are more convenient / appropriate ways to achieve the same.

Owner

### eddelbuettel commented Jul 20, 2019

Nice catch. I may be a little dense, but the desired effect is just for the verbatim chunk, correct? Inline does not seem to change. Or am I missing something?

Author

### riccardoporreca commented Jul 20, 2019

> the desired effect is just for the verbatim chunk, correct? Inline does not seem to change. Or am I missing something?

From what I see, it doesn't seem to affect anything inline (even inline verbatim, which was already OK), since inline content does not use any of the verbatim-related LaTeX affected by upquote; see the generated LaTeX code:

Pinp's quotes in regular text: ``dquoted'', `squoted', bquoted.

Pinp's quotes in inline verbatim text: \texttt{"dquoted"}, \texttt{\textquotesingle{}squoted\textquotesingle{}}.

\begin{Shaded}
\begin{Highlighting}[]
\StringTok{"dquoted"}
\StringTok{'squoted'}
\StringTok{}\DataTypeTok{bquoted}\StringTok{}
\CommentTok{#' Xyz's "dquoted" 'squoted' bquoted}
\end{Highlighting}
\end{Shaded}

(I have updated the example with something a little more accurate / sensible.)

Owner

### eddelbuettel commented Jul 20, 2019

Ok, I just tossed that in with a simple commit. Thanks again for the suggestion!

Author

### riccardoporreca commented Jul 20, 2019

Cool, I will be trying it out very soon while drafting a review of the Rcpp-modules vignette as discussed in RcppCore/Rcpp#976 (comment).
http://mathhelpforum.com/calculus/165733-am-i-wrong-part-2-a.html
# Math Help - Am I Wrong (part 2)?

1. ## Am I Wrong (part 2)?

Problem 15: $\int \sqrt{12 + 4x^2}\,dx$

I get: $x\sqrt{x^2 + 3}+ 3 \ln\left(\frac{\sqrt{x^2 + 3} + x}{\sqrt{3}}\right)$

The book has the same answer, except the radical in the denominator of the natural log part is gone. Where did I go wrong? Thanks

2. What substitution did you make? I would choose $\displaystyle x = \sqrt{3}\tan{u}$

$\displaystyle \int \sqrt{12+4x^2}~dx = x\sqrt{x^2+3}+3\sinh^{-1}\left(\frac{x}{\sqrt{3}}\right)+C$

3. That's an indefinite integral, so your answer should be $x\sqrt{x^2+ 3}+ 3\ln\left(\frac{\sqrt{x^2+ 3}+ x}{\sqrt{3}}\right)+ C$, which is the same as $x\sqrt{x^2+ 3}+ 3\ln(\sqrt{x^2+ 3}+ x)- 3\ln(\sqrt{3})+ C$, which is the same as $x\sqrt{x^2+ 3}+ 3\ln(\sqrt{x^2+ 3}+ x)+ C'$ with $C'= C- 3\ln(\sqrt{3})$.

4. Originally Posted by HallsofIvy
That's an indefinite integral, so your answer should be $x\sqrt{x^2+ 3}+ 3\ln\left(\frac{\sqrt{x^2+ 3}+ x}{\sqrt{3}}\right)+ C$, which is the same as $x\sqrt{x^2+ 3}+ 3\ln(\sqrt{x^2+ 3}+ x)- 3\ln(\sqrt{3})+ C$, which is the same as $x\sqrt{x^2+ 3}+ 3\ln(\sqrt{x^2+ 3}+ x)+ C'$ with $C'= C- 3\ln(\sqrt{3})$.
Right you are, sir. Thank you
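An antiderivative can always be sanity-checked by differentiating it and comparing with the integrand; here is a quick numerical spot check with sympy of the closed form x*sqrt(x^2+3) + 3*asinh(x/sqrt(3)):

```python
import sympy as sp

x = sp.symbols('x', real=True)
integrand = sp.sqrt(12 + 4*x**2)

# Antiderivative in inverse-hyperbolic form (equivalent to the log form,
# since asinh(x/sqrt(3)) = ln((x + sqrt(x^2+3)) / sqrt(3))).
F = x*sp.sqrt(x**2 + 3) + 3*sp.asinh(x/sp.sqrt(3))

# The derivative of F should reproduce the integrand; check at a sample point.
diff_err = sp.diff(F, x) - integrand
print(abs(float(diff_err.subs(x, 1.3))))  # ~0
```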
https://codeahoi.de/software/command-lines/
Here I collect some commands that I often need but always forget.

### Display some cert info

Of course this is also applicable to ports other than HTTPS.

### List connected LDAP clients

If you aren't using encrypted connections, replace 636 with 389.
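One common way to dump a server certificate's details is an openssl s_client / x509 pipeline. This is a generic sketch (the hostnames are placeholders), and it works for any TLS port, e.g. 636 for LDAPS, not just 443:

```python
def cert_info_cmd(host: str, port: int = 443) -> str:
    """Build a typical openssl pipeline that prints a server's certificate
    details (subject, issuer, validity, SANs, ...) in text form."""
    return (
        f"openssl s_client -connect {host}:{port} -servername {host} "
        f"</dev/null 2>/dev/null | openssl x509 -noout -text"
    )

print(cert_info_cmd("example.org"))             # HTTPS
print(cert_info_cmd("ldap.example.org", 636))   # LDAPS
```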
http://chronicle.com/blognetwork/castingoutnines/2008/05/27/how-big-is-10-to-the-20th/
How big is 10 to the 20th?

May 27, 2008, 3:04 pm

Here's a great illustration from George Gamow's classic book One Two Three… Infinity which shows two things: just how big $$10^{20}$$ really is, when thought of as a scaling factor; and also the power of a good illustration to drive home a point about math or science. The picture shows a normal-sized astronomer observing the Milky Way galaxy when shrunk down by a factor of $$10^{20}$$. That's a big number, folks.

Gamow's book is one of several on my summer reading list, and there's a reason it's a classic. In particular, it's chock full of cool illustrations like this that convey more information about a science concept than an hour's worth of lecturing.

This entry was posted in Geekhood, Math, Science.
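To put a number on that picture (back-of-envelope figures, not Gamow's): the Milky Way's disk is roughly 100,000 light-years across, and dividing by 10^20 brings it down to about the size of a room.

```python
LIGHT_YEAR_M = 9.461e15          # metres in one light-year
galaxy_diameter_ly = 100_000     # rough diameter of the Milky Way's disk

galaxy_diameter_m = galaxy_diameter_ly * LIGHT_YEAR_M   # ~9.5e20 m
shrunk_m = galaxy_diameter_m / 1e20                     # scaled down by 10**20

print(f"{shrunk_m:.1f} m")  # 9.5 m: room-sized, as in Gamow's picture
```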
http://math.stackexchange.com/questions/287521/how-to-find-the-value-of-h99-in-the-function/287534
# How to find the value of $h(99)$ in the function?

If $$h(x) + h(x+1) = 2x^2$$ and $$h(33) = 99$$, what will be the value of $h(99)$?

- $h(33)+h(34)=2\cdot 33^2$, so $h(34)=\dots$. Then continue for $h(35)$ and so on; try to find the logic behind it and you'll get $h(99)$ quite fast. – barto Jan 26 '13 at 19:34

$$h(34)=2\times 33^2-h(33)$$ $$h(35)=2\times 34^2-h(34)=2\times 34^2-2\times 33^2+h(33)$$ $$\dots$$ $$h(99)=2\times 98^2 - 2\times 97^2 + 2\times 96^2 -\cdots + 2\times 34^2 - 2\times 33^2 +h(33)\\=2(98^2-97^2+96^2-\cdots+34^2-33^2)+99\\ =2(98+97+\cdots +34+33)+99\\ =8745$$
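The telescoping answer is easy to confirm by brute force, iterating the recurrence h(x+1) = 2x^2 - h(x) starting from h(33) = 99:

```python
h = 99                  # h(33)
for x in range(33, 99):  # each step produces h(x+1): h(34), ..., h(99)
    h = 2 * x**2 - h
print(h)  # 8745
```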
https://zbmath.org/?q=an%3A0847.53012
# zbMATH — the first resource for mathematics Semi-slant submanifolds of a Kaehlerian manifold. (English) Zbl 0847.53012 The author defines a semi-slant submanifold $$M$$ of a Kählerian manifold to be a submanifold whose tangent bundle is the direct sum of a complex distribution and a slant distribution with the slant angle $$\theta \neq 0$$ in the sense of [the reviewer, Geometry of slant submanifolds. Leuven: Kath. Univ. Leuven, Dept. of Mathematics. 123 p. (1990; Zbl 0716.53006)]. The author obtains the necessary and sufficient conditions for the complex and slant distributions to be integrable. He also obtains a necessary and sufficient condition for a semi-slant submanifold to be the Riemannian product of a complex submanifold and a slant submanifold. ##### MSC: 53B25 Local submanifolds 53B35 Local differential geometry of Hermitian and Kählerian structures 53C40 Global submanifolds Zbl 0716.53006
http://math.stackexchange.com/questions/468324/finding-an-invisible-circle-by-drawing-another-line
# Finding an invisible circle by drawing another line

A friend of mine taught me the following question. He said he found it in a book a few years ago. Though I've tried to solve it, I'm facing difficulty. Question: You know on a plane there is an invisible circle whose radius is less than or equal to $1$. Fortunately, you have already found that the lengths of the chords of a circle by two lines $l_1, l_2$ are $d_1, d_2$ $(2\gt d_1\ge d_2\gt0)$ respectively. By drawing another line, let's find this circle. If the line you'll draw crosses a circle at two points, then you'll get the length of the chord of a circle by the line. If the line you'll draw and a circle come in contact with each other, then you'll get the coordinates of the point of contact instead of getting $0$ as the length of the chord. If the line you'll draw neither crosses nor comes in contact with any circle, then you'll be able to draw another line just once more. Find the coordinates of the center of a circle. This is all the question says. Could you tell me how to find the coordinates? The situation so far: The $l_1\parallel l_2$ case : This case has already been solved (see Blue's answer below). The $l_1\not \parallel l_2$ case : This case has not been solved yet. Supposing that $l_1:y=x\tanθ$, $l_2:y=-x\tanθ$ and $l_3:y=0$ ($l_4:x=0$ if needed) for $0<θ<\pi/2$, then we can get two possible sets of coordinates for the center of a circle. However, it seems difficult to decide on just one set of coordinates because each line is symmetric about the origin. Hence, a new line, which is not $y=0$, is needed as $l_3$. My approach: Let each of $l_{1,d+}, l_{1,d-}, l_{2,D+}, l_{2,D-}$ be as follows:$$l_{1,d+}:y=x\tanθ+\frac{d}{\cosθ}, l_{1,d-}:y=x\tanθ-\frac{d}{\cosθ}$$ $$l_{2,D+}:y=-x\tanθ+\frac{D}{\cosθ}, l_{2,D-}:y=-x\tanθ-\frac{D}{\cosθ},$$ where $D=\sqrt{d^2+\frac{{d_1}^2-{d_2}^2}{4}}.$ Note that each distance between $l_1$ and $l_{1,d\pm}$ is $d$, and that each distance between $l_2$ and $l_{2,D\pm}$ is $D$. 
Also, note the following: $$\sqrt{\left(\frac{d_1}{2}\right)^2+d^2}=\sqrt{\left(\frac{d_2}{2}\right)^2+D^2}.$$ This means that the radius of a circle which crosses $l_1$ equals the radius of a circle which crosses $l_2$. Note that $d$ must satisfy the following:$$0\le d\le \sqrt{1-\frac{{d_1}^2}{4}}.$$ Then, letting each of the intersections of $l_{1,d-}$ and $l_{2,D+}$, $l_{1,d+}$ and $l_{2,D+}$, $l_{1,d+}$ and $l_{2,D-}$, $l_{1,d-}$ and $l_{2,D-}$ be $P_{-+}$, $P_{++}$, $P_{+-}$, $P_{--}$ respectively, we can represent these as follows: $$P_{-+}\ \left(\frac{d+D}{2\sinθ}, \frac{-d+D}{2\cosθ}\right), P_{++}\ \left(\frac{-d+D}{2\sinθ}, \frac{d+D}{2\cosθ}\right),$$$$P_{+-}\ \left(\frac{-d-D}{2\sinθ}, \frac{d-D}{2\cosθ}\right), P_{--}\ \left(\frac{d-D}{2\sinθ}, \frac{-d-D}{2\cosθ}\right).$$ Since each radius is $\sqrt{d^2+\frac{{d_1}^2}{4}}$, we can represent the circles by $d$ as follows: $$C_{-+}:\left(x-\frac{d+D}{2\sinθ}\right)^2+\left(y-\frac{-d+D}{2\cosθ}\right)^2=d^2+\frac{{d_1}^2}{4}$$ $$C_{++}:\left(x-\frac{-d+D}{2\sinθ}\right)^2+\left(y-\frac{d+D}{2\cosθ}\right)^2=d^2+\frac{{d_1}^2}{4}$$ $$C_{+-}:\left(x-\frac{-d-D}{2\sinθ}\right)^2+\left(y-\frac{d-D}{2\cosθ}\right)^2=d^2+\frac{{d_1}^2}{4}$$ $$C_{--}:\left(x-\frac{d-D}{2\sinθ}\right)^2+\left(y-\frac{-d-D}{2\cosθ}\right)^2=d^2+\frac{{d_1}^2}{4}.$$ Changing $d$ to $-d$ in $C_{++}$ gives $C_{-+}$ and changing $d$ to $-d$ in $C_{+-}$ gives $C_{--}$. 
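The chord-length bookkeeping above is easy to check numerically. A short Python sketch (the sample values of $d_1, d_2, \theta$ and the helper `chord_len` are ours, not from the question) verifies that each member of the $C_{++}$ sub-family cuts chords of length $d_1$ and $d_2$ on $l_1$ and $l_2$, and that its center lies on the hyperbola $xy=\frac{{d_1}^2-{d_2}^2}{16\cosθ\sinθ}$:

```python
import math

def chord_len(cx, cy, r, s, c):
    """Chord cut from circle (center (cx, cy), radius r) by the line x*s - y*c = 0."""
    dist = abs(cx * s - cy * c)          # distance from center to the line
    return 2 * math.sqrt(max(r * r - dist * dist, 0.0))

d1, d2, theta = 1.2, 0.8, 0.7            # arbitrary sample with 2 > d1 >= d2 > 0
s, c = math.sin(theta), math.cos(theta)

for d in (0.0, 0.2, 0.4):                # within 0 <= d <= sqrt(1 - d1^2/4)
    D = math.sqrt(d * d + (d1**2 - d2**2) / 4)
    r = math.sqrt(d * d + d1**2 / 4)
    cx, cy = (-d + D) / (2 * s), (d + D) / (2 * c)            # center of C_{++}
    assert abs(chord_len(cx, cy, r, s, c) - d1) < 1e-9        # chord on l1: y = x tan(theta)
    assert abs(chord_len(cx, cy, r, -s, c) - d2) < 1e-9       # chord on l2: y = -x tan(theta)
    assert abs(cx * cy - (d1**2 - d2**2) / (16 * c * s)) < 1e-9  # center on the hyperbola
```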
Hence, we can represent each possible invisible circle by $d$ as the following: $$C_{\pm+}:\left(x-\frac{-d+D}{2\sinθ}\right)^2+\left(y-\frac{d+D}{2\cosθ}\right)^2=d^2+\frac{{d_1}^2}{4}$$ $$C_{\pm-}:\left(x-\frac{-d-D}{2\sinθ}\right)^2+\left(y-\frac{d-D}{2\cosθ}\right)^2=d^2+\frac{{d_1}^2}{4}$$ for $d$ which satisfies the following: $$-\sqrt{1-\frac{{d_1}^2}{4}}\le d\le \sqrt{1-\frac{{d_1}^2}{4}}.$$ In addition to this, letting $(x,y)$ be the center of each circle, we get the following: $$xy=\frac{{d_1}^2-{d_2}^2}{16\cosθ\sinθ}.$$ This shows that the center of each possible invisible circle is on this hyperbola if $d_1-d_2>0$. I've tried to get a special line as $l_3$, but I'm facing difficulty. update: I crossposted to MO. http://mathoverflow.net/questions/140435/finding-an-invisible-circle-by-drawing-another-line - What other information are you allowed? E.g. if lines $l_1, l_2$ are parallel, do you know the distance that they are apart? If they intersect, do you know the coordinates of the point of intersection (otherwise, knowing the coordinates of the point of contact will be useless) – Calvin Lin Aug 15 '13 at 15:20 If they are parallel even if we know how far apart they are, then I'm not sure we can solve the problem using two other lines. Say they're parallel at $y=0.5$ and $y=-0.5$, and the circle has radius $1$ located at $(100000,0)$. I can place a line $y=0$ and get that the diameter of the circle is $2$, but that doesn't tell me where on the x-axis the circle is located. – Foo Barrigno Aug 15 '13 at 15:26 I edited it with my understanding for this question. – mathlove Aug 15 '13 at 16:05 My approach to the non-parallel case is similar to yours, except that I take my initial lines to be $y = \pm m x$. My third line is the $x$-axis, and my fourth (if necessary) is the $y$-axis (which works out the same way as the $x$-axis). The calculations are somewhat tedious, but not too terrible. 
Unfortunately, I find myself facing ambiguity in the signs of the center's coordinates. (If all you wanted was the radius, there'd be no problem.) – Blue Aug 15 '13 at 16:51 @Blue:Thank you. I faced the 'ambiguity', too. I suspect it is impossible to decide just one coordinate because each line is symmetric about the origin. – mathlove Aug 15 '13 at 16:59 To avoid some subscripts and fractions, I define $a := \frac{1}{2}d_1$ and $b := \frac{1}{2}d_2$. Also, I'll take the (unknown) center of the circle to be $(h, k)$, and its (unknown) radius to be $r$. The $\ell_1 \parallel \ell_2$ Case. (Solved!) With an appropriate change of variables, we can take $\ell_1$ to be $y = t$, and $\ell_2$ to be $y =-t$, for some $t > 0$. The distance from $(h,k)$ to $\ell_1$ (or $\ell_2$) is $|k-t|$ (respectively, $|k+t|$), and Pythagoras tells us $$a^2 + |k-t|^2 = r^2 = b^2 + |k+t|^2$$ We can solve the "outer" equality to get $$k = \frac{1}{4t}\left( a^2 - b^2 \right)$$ so that $$r = \frac{1}{4t}\sqrt{\left( \left( a - b \right)^2 + 4 t^2 \right) \left( \left( a + b \right)^2 + 4 t^2 \right)}$$ Now knowing $k$ and $r$, we're guaranteed that the line $y=k+r$ will be tangent to the circle. Taking this as our $\ell_3$, the puzzle reports the point-of-tangency's coordinates, the first of which is our sought-after $h$. The center of the circle has been found! The $\ell_1 \not\parallel \ell_2$ Case. (Incomplete.) Edited to remove false starts. Here, we can take $\ell_1$ to be $y =x\tan\theta$, and $\ell_2$ to be $y=-x \tan\theta$, for some $0 < \theta < \pi/2$. 
The distance from $(h,k)$ to $\ell_1$ (or $\ell_2$) is $|h \sin\theta - k\cos\theta|$ (respectively, $|h\sin\theta+k\cos\theta|$), so that \begin{align} a^2 &= r^2 - \left( h \sin\theta - k \cos\theta \right)^2 \qquad (1)\\ b^2 &= r^2 - \left( h \sin\theta + k \cos\theta \right)^2 \qquad (2) \end{align} Note that subtracting $(2)$ from $(1)$ gives $$h k =\frac{a^2 - b^2}{4\cos\theta\sin\theta} \qquad (\star)$$ so that the centers of potential solution circles lie on a rectangular hyperbola. Now, $(1)$ and $(2)$ are but two equations in three unknowns $(h, k, r)$, so that we have a one-parameter family of possible solutions. Here's a diagram of a typical family (with two clear sub-families we'll call $L$(eft) and $R$(ight)): The four largest circles have radius $1$; the two smallest circles have radius $a$. The circles with radius $r$ have centers satisfying $(\star)$: $$(h, k) = \left( \frac{\pm_1\sqrt{r^2-b^2}\pm_2\sqrt{r^2-a^2}}{2\sin\theta}, \frac{\pm_1\sqrt{r^2-b^2}\mp_2\sqrt{r^2-a^2}}{2\cos\theta} \right)$$ There's no common tangent to help us, so the challenge would appear to be to take $\ell_3$ a line that cuts through $R$ (missing $L$) in such a way that its intersection with each circle creates a distinct chord-length. (And $\ell_4$ would be the corresponding line through $L$.) So, let's try. Let $\ell_3$ be the line with equation $x \sin\phi - y \cos\phi + p = 0$. 
The distance from $(h,k)$ to the line is $|h \sin\phi - k \cos\phi + p|$, and if the line cuts a chord of length $2c$ in the circle with center $(h,k)$ and radius $r$, then $$c^2 = r^2 - ( h \sin\phi - k \cos\phi + p )^2 \qquad (3)$$ Eliminating $r$ and $k$ from $(1)$, $(2)$, $(3)$ gives this quartic in $h$: \begin{align} 0 &= 16 h^4 \sin^2\theta \cos^2\theta \left( \sin^2\phi - \sin^2\theta \right) + 32 h^3 p \sin^2\theta \cos^2\theta \sin\phi \\ &- 8 h^2 \sin\theta \cos\theta \left((a^2-b^2) \sin\phi \cos\phi + (a^2+b^2-2c^2-2p^2) \sin\theta \cos\theta \right) \\ &- 8 h p \sin\theta \cos\theta \cos\phi \left( a^2 - b^2 \right) - \left( a^2 - b^2 \right)^2 \left(\sin^2\phi - \sin^2\theta \right) \qquad (4) \end{align} From here, the challenge is to choose $\phi$ and $p$ so that, for every semi-chord length $c$ in the interval $I := [0,1]$ (or possibly a sub-interval thereof), there is a unique root $h$ in the interval $$H := \left[\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2\sin\theta}, \frac{\sqrt{1-b^2}+\sqrt{1-a^2}}{2\sin\theta}\right]$$ and, moreover, every $h\in H$ is that unique root for some $c \in I$ (or the sub-interval thereof). After considerable effort, I have not been able to meet this challenge. Here are some notes dismissing some convenient cases: • Neither a vertical line nor a horizontal line leads to a "universal" solution. The width of interval $H$ is $\sqrt{1-a^2}/\sin\theta$. For small enough $\theta$, this width exceeds $2$, so that no vertical line simultaneously hits the left-most and right-most unit circles in $R$. Likewise, large enough $\theta$ thwarts any horizontal line. • Switching from vertical to horizontal (or vice-versa) after a certain threshold won't work, either. For small enough $\theta$, the $L$'s left-most (unit) circle has its center very-nearly on the $x$-axis. For certain values of $a$ and $b$, a horizontal $\ell_3$ missing that circle will also miss smaller members of $R$. • $\phi = \theta$ isn't a "universal" solution. 
For some values of $(a,b,\theta)$, lines through $R$ hit $L$; for other values, as in the horizontal/vertical cases, the circles of a family can get separated enough to avoid simultaneous contact with any line with the given slope. • $\phi = -\theta$ doesn't work. Here, $\ell_3$ runs parallel to $\ell_2$, which we know hits the unit-circle members (in fact, all members) of $R$ in identical-length chords; thus, $\ell_3$ must also meet those circles in identical-length chords, preventing chord length from determining a unique circle. Additional thoughts on the non-parallel case. As mentioned, every $h\in H$ must correspond to some $c$, and must be a root of $(4)$ with that $c$-value. Let the minimum and maximum endpoints of $H$ be $h_0$ and $h_1$, and let the corresponding $c$s be $c_0$ and $c_1$. (Observe that $(c_0,c_1) = (0,1)$ or $(1,0)$ represents the cases in which line $\ell_3$ is tangent to one of the unit circles in $R$ while passing through the center of the other.) If there is a "universal" choice for $\phi$ and $p$ that works for particular parameters $(a,b,\theta)$ ---that is, if there are no "thresholds" at which we'd switch from one choice to another--- then by substituting $h=h_i$ and $c=c_i$ into $(4)$, we get two equations that we should be able to solve for $\phi$ and $p$. So, let's try that. 
I'll start by using these substitutions $$a^2 - b^2 = 4 \sin^2\theta h_0 h_1 \qquad a^2 + b^2 = 2\left( 1 - (h_0^2+h_1^2)\sin^2\theta \right)$$ to write $(4)$ thusly \begin{align} 0 &= h^4 \cos^2\theta \left( \sin^2\phi - \sin^2\theta \right) +2 h^3 p \cos^2\theta \sin\phi \\ &-h^2 \cos\theta \left( 2 h_0 h_1 \sin\theta \sin\phi \cos\phi + \cos\theta \left( 1 - c^2 - p^2 - (h_0^2+h_1^2) \sin^2\theta \right) \right) \\ &-2 h h_0 h_1 p \sin\theta \cos\theta \cos\phi - h_0^2 h_1^2 \sin^2\theta \left( \sin^2\phi - \sin^2\theta \right) \qquad (4^\prime) \end{align} Also, I'll define first-quadrant angles $\gamma_0$ and $\gamma_1$ via $$\sin\gamma_0 = c_0 \qquad \sin\gamma_1 = c_1$$ Then, substituting $h=h_0$, $c=\sin\gamma_0$ and $h=h_1$, $c=\sin\gamma_1$ yields \begin{align} h_0 \cos\theta \sin\phi - h_1 \sin\theta \cos\phi + p \cos\theta &= \pm \cos\theta \sin\gamma_0 &(5a)\\ h_0 \sin\theta \cos\phi - h_1 \cos\theta \sin\phi - p \cos\theta &= \pm \cos\theta \sin\gamma_1 &(5b) \end{align} Adding $(5b)$ to $(5a)$ gives this equation for $\phi$: $$\sin(\phi+\theta) = \frac{\pm_1\;\cos\theta}{h_1-h_0}\left(\sin\gamma_0\;\pm_2 \;\sin\gamma_1\right) = \frac{\pm_1\;\sin 2\theta}{\sqrt{1-a^2}}\sin\frac{\gamma_0 \pm_2 \gamma_1}{2}\cos\frac{\gamma_0 \mp_2 \gamma_1}{2} \qquad (6)$$ The two sign choices lead to four candidate values of $\phi$, which can be back-substituted into the above equations to find $p$. Bear in mind that $\gamma_0$ and $\gamma_1$ (correspondingly, $c_0$ and $c_1$) aren't specified; adjusting their values undoubtedly affects the viability of the solution. Edit. Let's take this a little further. 
To avoid rampant "$\pm$"s in the formulas, define $$\delta_0 = \pm_0\;\gamma_0 \qquad \delta_1 = \pm_1\;\gamma_1$$ Now, we can solve $(5a)$ and $(5b)$ as a linear system in $\sin\phi$ and $\cos\phi$, getting \begin{align} \cos\phi &= \frac{\cos\theta}{\sin\theta}\frac{(h_1-h_0)p - h_0\sin\delta_1 - h_1\sin\delta_0}{h_1^2-h_0^2} \\[6pt] \sin\phi &= -\frac{(h_1-h_0)p + h_0\sin\delta_0 + h_1\sin\delta_1}{h_1^2-h_0^2} \end{align} The relation $\cos^2\phi + \sin^2\phi = 1$ then gives us this quadratic equation in $q := p ( h_1 - h_0 )$: \begin{align} 0 = q^2 &+ 2 q \left( \sin^2\theta ( h_0 \sin\delta_0 + h_1 \sin\delta_1 ) - \cos^2\theta(h_0\sin\delta_1+h_1\sin\delta_0 )\right) \\ &+ \sin^2\theta \left( h_0 \sin\delta_0 + h_1 \sin\delta_1 \right)^2 + \cos^2\theta \left( h_0 \sin\delta_1 + h_1\sin\delta_0 \right)^2 - \sin^2\theta \left(h_1^2-h_0^2\right)^2 \end{align} with this discriminant $$\Delta := 4\left(h_0 + h_1 \right)^2 \sin^2\theta \left( (h_1-h_0)^2 - \cos^2\theta (\sin\delta_0 + \sin\delta_1 )^2\right)$$ Introducing first quadrant angles $\alpha$ and $\beta$ such that $$\sin\alpha = a \qquad \sin\beta = b$$ whence $$h_0 = \frac{\cos\beta - \cos\alpha}{2\sin\theta} \qquad h_1 = \frac{\cos\beta + \cos\alpha}{2\sin\theta}$$ we can ultimately write \begin{align} 2 p \cos\alpha &= \cos\alpha (\sin\delta_0 - \sin\delta_1 ) + \cos 2\theta\cos\beta(\sin\delta_0 + \sin\delta_1 ) \\[6pt] &\pm \cos\beta\sqrt{4\cos^2\alpha-\sin^2 2\theta (\sin\delta_0 + \sin\delta_1)^2 } \end{align} Because the discriminant must be non-negative, we have this condition on $\delta_0$ and $\delta_1$: $$|\sin\delta_0 + \sin\delta_1| \leq \frac{2\cos\alpha}{\sin 2\theta} \qquad (\star\star)$$ This turns out to be equivalent to the requirement in $(6)$ that $|\sin(\phi+\theta)| \leq 1$. 
Consequently, if we were to choose $\delta_0$ and $\delta_1$ in such a way as to make the discriminant vanish (which may or may not be possible), then we would have $\sin^2(\phi+\theta) = 1$, whereupon $\phi = \pi/2 - \theta$. In the case $\theta = \pi/4$, then, our line $\ell_3$ would coincide with $\ell_1$, giving no new information about our target circle. I take this to suggest that we shouldn't allow the discriminant to vanish in general. Observe that $(\star\star)$ rules out being able to choose $c_i \in \{0,1\}$ in some cases. So, the "take $\ell$ tangent to one unit circle and containing the center of the other" strategy won't always work. Warning. Double-check all equations. I previously messed-up some signs; moreover, the LaTeX editing on this page has gotten so jerky that typos are very likely. - Thank you very much for your simple answer. In my answer, I added what I found about the region in which the center of a circle exists. I've been trying to find a way to decide $l_3$ which depends on $d_1, d_2, m$. Anyway, I suspect the condition that $r$ is less than or equal to $1$ would be a key. – mathlove Aug 17 '13 at 14:33 You have the solution! Make the non-parallel chords parallel by rotating them about the circle and do the same calculations. – Fred Kline Aug 19 '13 at 10:50 @FredKline: I can't understand what you mean. I don't know when and how to rotate them, given that we don't know the center of a circle. Could you please clarify what you mean? – mathlove Aug 19 '13 at 14:37 @mathlove, After sleeping on it, I can see that my statement is not correct. Sorry for misleading you. 
– Fred Kline Aug 19 '13 at 19:23 @Blue: I think your $(4^\prime)$ would be the following: $$0 =h^4{\cos^2\theta}({\sin^2\phi}-{\sin^2\theta})+2h^3p{\cos^2\theta}{\sin\phi}$$ $$-h^2{\cos\theta}\left(2h_0h_1{\sin\theta}{\sin\phi}{\cos\phi}+{\cos\theta}\left(1-c^2-p^2-(h_0^2+h_1^2){\sin^2\theta}\right)\right)-2hh_0h_1p{\sin\theta}{\cos\theta}{\cos\phi}-h_0^2h_1^2{\sin^2\theta}({\sin^2\phi}-{\sin^2\theta}).$$ – mathlove Sep 4 '13 at 9:50 I think we can find a center even in the case $l_1\parallel l_2$. Note that in my answer I suppose that we already know the information about $l_1, l_2$. In the case of $l_1\parallel l_2$, letting $d_0$ be the distance between the two lines $l_1, l_2$, note that $0<d_0<2$ because of $0<d_2\le d_1<2$. Without loss of generality, suppose that $l_1:y=0, l_2:y=d_0$. Let $r$ be the radius of a circle. According to $0<d_2\le d_1<2$, there are two cases: case $1$ is that the center of the circle lies between the two lines, and case $2$ is that it does not. In the first case, since we get $d_0=\sqrt{r^2-\left(\frac{d_2}{2}\right)^2}+\sqrt{r^2-\left(\frac{d_1}{2}\right)^2}$, we can represent $r$ by $d_0, d_1, d_2$. Then, letting $l_3:y=r+\sqrt{r^2-\left(\frac{d_1}{2}\right)^2}$, there are two sub-cases: one is that $l_3$ and the circle come in contact with each other, and the other is that $l_3$ neither crosses nor comes in contact with any circle. In the first sub-case, given a point $\left(a,r+\sqrt{r^2-\left(\frac{d_1}{2}\right)^2}\right)$, we know the center of the circle is $\left(a,\sqrt{r^2-\left(\frac{d_1}{2}\right)^2}\right)$. If $l_3$ neither crosses nor comes in contact with any circle, we know the situation is in case $2$. Letting $l_4:y=r-\sqrt{r^2-\left(\frac{d_1}{2}\right)^2}$, then you do get the point $\left(b,r-\sqrt{r^2-\left(\frac{d_1}{2}\right)^2}\right)$, so we know the center of the circle is $\left(b,-\sqrt{r^2-\left(\frac{d_1}{2}\right)^2}\right)$. 
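The parallel-case recovery above can be simulated end to end. In this Python sketch (all names and sample values are ours) a hidden circle is probed with $l_1: y=0$ and $l_2: y=d_0$, the radius is recovered from $d_0=\sqrt{r^2-(d_2/2)^2}+\sqrt{r^2-(d_1/2)^2}$, and the tangent third line then exposes the center:

```python
import math

# Hidden circle (unknown to the solver): center (x0, y0), radius r0 <= 1.
x0, y0, r0 = 3.7, 0.45, 0.9
d0 = 1.1                      # known separation of l1: y = 0 and l2: y = d0
                              # here 0 < y0 < d0, i.e. "case 1" above

# Observed chord lengths on l1 and l2.
d1 = 2 * math.sqrt(r0**2 - y0**2)
d2 = 2 * math.sqrt(r0**2 - (d0 - y0)**2)
a, b = d1 / 2, d2 / 2

# Recover r from d0 = sqrt(r^2 - b^2) + sqrt(r^2 - a^2):
# with u = sqrt(r^2 - a^2), v = sqrt(r^2 - b^2): u + v = d0, v - u = (a^2 - b^2)/d0.
v = (d0 + (a * a - b * b) / d0) / 2
r = math.sqrt(b * b + v * v)
u = math.sqrt(r * r - a * a)          # distance of the center from l1

# Third line l3: y = u + r is tangent; the contact point (x0, u + r) reveals x0.
assert abs(r - r0) < 1e-9
assert abs(u - y0) < 1e-9
print("center:", (x0, u), "radius:", r)
```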
In the case of $l_1\not \parallel l_2$, I use Blue's idea (see comments below). We can move everything so that the intersection comes to the origin and one of the bisectors of the two angles between the two lines comes to the $x$-axis. Without loss of generality, suppose that $l_1:y=mx, l_2:y=-mx$. Letting $l_3:y=0$, we have three different cases: Case $1$ is that the origin is outside the circle. Case $2$ is that the origin is on the circle. Case $3$ is that the origin is inside the circle. Case $1$: Suppose that the points $A, D$ are on $l_1$, $C, F$ on $l_2$, and $B, E$ on $l_3$, and that each of $A, B, C$ is nearer to the origin than $D, E, F$ respectively. Letting $OA=a, OB=b, OC=c$, we get the following from the power of a point theorem. $$a(a+d_1 )=c(c+d_2 ), a(a+d_1 )=b(b+d_3 )$$ Then, we can represent $b,c$ by $a, d_i$. After a tedious calculation, we know we can represent every coordinate by $a, d_i$. Then, letting $4(a^2+ad_1 )=k$, we know the following has to be satisfied:$$\left(\sqrt{(d_1)^2+k}+\sqrt{(d_2)^2+k}\right){\sqrt{1+m^2}}=2\sqrt{(d_3)^2+k}$$ By $a=\frac{-d_1+\sqrt{(d_1)^2+k}}{2}$, we know we can represent every coordinate and the center of the circle by $k,d_i, m$. Hence, this is the conclusion: If there exists a positive real number $k$ such that $$\left(\sqrt{(d_1)^2+k}+\sqrt{(d_2)^2+k}\right){\sqrt{1+m^2}}=2\sqrt{(d_3)^2+k},$$ (you can get this from $j$ below represented in two ways) and the radius is less than or equal to $1$, then the center of the circle can be found: $$(i,j)=\left(b+\frac{d_3}{2},\frac{2b+d_3-(2c+d_2)\sqrt{1+m^2}}{2m}\right)$$ with $$b=\frac{-d_3+\sqrt{{d_3}^2+k}}{2}, c=\frac{-d_2+\sqrt{{d_2}^2+k}}{2}.$$ And the radius is $$r=\sqrt{(b-i)^2+j^2}.$$ However, the radius and the coordinates of the center are so complicated that I can't calculate them any more. In Cases $2$ and $3$, almost the same argument can be done. Case $2$ is easier than the others. In Case $3$, I have another difficulty: ambiguity in the signs. 
Strictly speaking, I think we can't decide on just one answer. In other words, there remain several possibilities, and it's hard to work out the conditions for the possibilities. My answer is like an 'algorithmic' solution. This is why I'm not satisfied with this answer. Could you give me any hint or another better idea? Edit: I hope the following two are useful. I'll use the same notation as Blue. $1$. Since each of $l_1, l_2$ crosses a circle, considering each of the discriminants of the following: $$(x-h)^2+(\pm x\tanθ-k)^2=r^2,$$ we get $$\left(-h\mp k\tanθ\right)^2-\left(1+\tan^2θ\right)\left(h^2+k^2-r^2\right)>0$$ $$⇔\ \left(k\mp h\tanθ\right)^2<r^2\left(1+\tan^2θ\right)⇔\ r^2>\frac{(k\mp h\tanθ)^2}{1+\tan^2θ}⇔\ r>\frac{|k\mp h\tanθ|}{\sqrt{1+\tan^2θ}}.$$ Since $r$ is less than or equal to $1$, we get$$\frac{|k\mp h\tanθ|}{\sqrt{1+\tan^2θ}}<1⇔\ |k\mp h\tanθ|<\sqrt{1+\tan^2θ}$$ $$⇔\ h\tanθ-\sqrt{1+\tan^2θ}<k<h\tanθ+\sqrt{1+\tan^2θ},$$$$-h\tanθ-\sqrt{1+\tan^2θ}<k<-h\tanθ+\sqrt{1+\tan^2θ}.$$ This shows the region in which the center of a circle can exist. $2$. Considering the situation that a circle $(x-h)^2+(y-k)^2=r^2$ passes through the two points $(0,0), (d,0)$, we get $$h=\frac d2, r=\sqrt{\frac{d^2}{4}+k^2}.$$ Since $r$ is less than or equal to $1$, we get $$-\sqrt{1-\frac{d^2}{4}}\le k\le \sqrt{1-\frac{d^2}{4}}$$ for a real number $d$ which satisfies $0\le d\le 2$. Since we already know that the lengths of the chords of a circle by the two lines $l_1, l_2$ are $d_1, d_2$ $(2>d_1\ge d_2>0)$ respectively, by the argument above, we know that the center of a circle must lie in a parallelogram-shaped region whose center is the origin: the lengths of its edges are $2\sqrt{1-\frac{{d_1}^2}{4}}$ and $2\sqrt{1-\frac{{d_2}^2}{4}}$. I've tried to find a new line $l_3$ instead of $x=0$, but I'm facing difficulty. Edit 2: I've got the following, which would be useful. 
Let each of $l_{1,d+}, l_{1,d-}, l_{2,D+}, l_{2,D-}$ be as follows:$$l_{1,d+}:y=x\tanθ+\frac{d}{\cosθ}, l_{1,d-}:y=x\tanθ-\frac{d}{\cosθ}$$$$l_{2,D+}:y=-x\tanθ+\frac{D}{\cosθ}, l_{2,D-}:y=-x\tanθ-\frac{D}{\cosθ},$$ where $D=\sqrt{d^2+\frac{{d_1}^2-{d_2}^2}{4}}$. Note that each distance between $l_1$ and $l_{1,d\pm}$ is $d$, and that each distance between $l_2$ and $l_{2,D\pm}$ is $D$. Also, note the following:$$\sqrt{\left(\frac{d_1}{2}\right)^2+d^2}=\sqrt{\left(\frac{d_2}{2}\right)^2+D^2}.$$ This means that the radius of a circle which crosses $l_1$ equals the radius of a circle which crosses $l_2$. Note that $d$ must satisfy the following because of what I've already written above:$$0\le d\le \sqrt{1-\frac{{d_1}^2}{4}}.$$ Then, letting each of the intersections of $l_{1,d-}, l_{2,D+}$ and $l_{1,d+}, l_{2,D+}$ and $l_{1,d+}, l_{2,D-}$ and $l_{1,d-}, l_{2,D-}$ be $P_{-+}$, $P_{++}$, $P_{+-}$, $P_{--}$ respectively, we can represent these as follows:$$P_{-+}\ \left(\frac{d+D}{2\sinθ}, \frac{D-d}{2\cosθ}\right), P_{++}\ \left(\frac{-d+D}{2\sinθ}, \frac{d+D}{2\cosθ}\right),$$$$P_{+-}\ \left(\frac{-d-D}{2\sinθ}, \frac{d-D}{2\cosθ}\right), P_{--}\ \left(\frac{d-D}{2\sinθ}, \frac{-d-D}{2\cosθ}\right).$$ Since each radius is $\sqrt{d^2+\frac{{d_1}^2}{4}}$, we can represent the circles as follows:$$C_{-+}:\left(x-\frac{d+D}{2\sinθ}\right)^2+\left(y-\frac{D-d}{2\cosθ}\right)^2=d^2+\frac{{d_1}^2}{4}$$$$C_{++}:\left(x-\frac{-d+D}{2\sinθ}\right)^2+\left(y-\frac{d+D}{2\cosθ}\right)^2=d^2+\frac{{d_1}^2}{4}$$$$C_{+-}:\left(x-\frac{-d-D}{2\sinθ}\right)^2+\left(y-\frac{d-D}{2\cosθ}\right)^2=d^2+\frac{{d_1}^2}{4}$$$$C_{--}:\left(x-\frac{d-D}{2\sinθ}\right)^2+\left(y-\frac{-d-D}{2\cosθ}\right)^2=d^2+\frac{{d_1}^2}{4}.$$ Let's consider the center of $C_{-+}$, which is $P_{-+}$. Letting $P_{-+}$ be $(x, y)$, we get $d=x\sinθ-y\cosθ$. Putting it in the equation $2y\cosθ=D-d$, we get $$xy=\frac{{d_1}^2-{d_2}^2}{16\sinθ\cosθ}.$$ Hence, we know that the point $P_{-+}$ is on a hyperbola for any $d$. 
I've tried to find a special line which is tangent to $C_{-+}$ for any $d$, but I'm facing difficulty. I expect that we can solve this question in an elegant way. I hope I'm getting closer to it. Edit 3: According to Blue's answer, using the $l_3$ Blue wrote, we get $$p=1-\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2\sin\theta}>0$$ if $\theta\ge \pi/4$. Also, using the $l_4$ Blue wrote, we get $$p=-1-\frac{\sqrt{1-a^2}-\sqrt{1-b^2}}{2\sin\theta}>0$$ if $0<\sin\theta<\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2}\ (<\frac{1}{\sqrt2})$. If $\phi=0$, we get $$0=16h^4\sin^3θ\cos^2θ+8h^2\sinθ\cos^2θ(a^2 +b^2 -2c^2 -2p^2)+8hp{\cosθ}(a^2-b^2 )-(a^2-b^2)^2\sinθ.$$ Hence, we know that there's exactly one positive root $h$ for any negative $p$. Then, $$y=p=\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2\cos\theta}-1<0$$ if $0<\theta\le \pi/4$, and $$y=p=\frac{\sqrt{1-a^2}-\sqrt{1-b^2}}{2\cos\theta}+1<0$$ if $0<\cos\theta<\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2}\ (<\frac{1}{\sqrt2})$. Hence, if $0<\theta<\alpha$ such that $\sin\alpha=\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2}$, then we can choose the following lines: $$l_3:x=\frac{\sqrt{1-a^2}-\sqrt{1-b^2}}{2\sin\theta}+1,\quad l_4:y=\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2\cos\theta}-1.$$ If $\beta<\theta<\pi/2$ such that $\cos\beta=\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2}$, then we can choose the following lines: $$l_3:x=\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2\sin\theta}-1,\quad l_4:y=\frac{\sqrt{1-a^2}-\sqrt{1-b^2}}{2\cos\theta}+1.$$ Then, the problem for $\alpha\le \theta\le \beta$ remains unsolved. I hope I'm not mistaken. 
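The claim that the $\phi=0$ quartic has exactly one positive root $h$ for any negative $p$ follows from Descartes' rule of signs (for $a>b$ and $p<0$ the coefficient signs, in decreasing degree, are $+,\,0,\,\pm,\,-,\,-$, giving one sign change), and it can be spot-checked numerically. A Python sketch using NumPy, with arbitrary sample parameters of our own choosing:

```python
import numpy as np

def positive_real_roots(a, b, theta, c, p):
    """Real positive roots h of the phi = 0 quartic above."""
    s, co = np.sin(theta), np.cos(theta)
    coeffs = [16 * s**3 * co**2,                                    # h^4
              0.0,                                                  # h^3
              8 * s * co**2 * (a**2 + b**2 - 2 * c**2 - 2 * p**2),  # h^2
              8 * p * co * (a**2 - b**2),                           # h^1
              -((a**2 - b**2) ** 2) * s]                            # h^0
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-7].real
    return real[real > 1e-9]

# Exactly one positive root for each sampled (c, p) with p < 0 and a > b.
for p in (-0.1, -0.5, -1.0):
    for c in (0.0, 0.3, 0.7):
        assert len(positive_real_roots(0.6, 0.4, 0.8, c, p)) == 1
```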
Edit 4: As I've already written above, if $\phi=0$, we get $$0=16h^4\sin^3\theta\cos^2\theta+8h^2\sin\theta\cos^2\theta\left(a^2+b^2-2c^2-2p^2\right)+8hp\cos\theta(a^2-b^2)-(a^2-b^2)^2\sin\theta\ \ \ \ \ \ \ \cdots(\star\star).$$ Changing $h$ to $-h$ in $(\star\star)$ gives us $$0=16h^4\sin^3\theta\cos^2\theta+8h^2\sin\theta\cos^2\theta\left(a^2+b^2-2c^2-2p^2\right)-8hp\cos\theta(a^2-b^2)-(a^2-b^2)^2\sin\theta\ \ \ \ \ \ \ \cdots(\star\star\star).$$ Since we know that there's exactly one positive root $h$ for any positive $p$ in $(\star\star\star)$, we also know that there's exactly one negative root $h$ for any positive $p$ in $(\star\star)$. In the following argument, suppose that $$\cos\beta=\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2}, \cos\gamma=\frac{\sqrt{1-a^2}+\sqrt{1-b^2}}{2}.$$ Note that if $a>b$, then $$0<\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2}<\frac{\sqrt{1-a^2}+\sqrt{1-b^2}}{2}<1.$$ By the argument above, I got the following: When $0<\theta<\gamma$, $$l_3:y=\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2\cos\theta}-1\ (<0),\quad l_4:y=\frac{-\sqrt{1-a^2}-\sqrt{1-b^2}}{2\cos\theta}+1\ (>0).$$ When $\gamma<\theta<\beta$, $$l_3:y=\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2\cos\theta}-1\ (<0),\quad l_4:y=\frac{\sqrt{1-a^2}+\sqrt{1-b^2}}{2\cos\theta}-1\ (>0).$$ When $\beta<\theta<\pi/2$, $$l_3:y=\frac{\sqrt{1-a^2}-\sqrt{1-b^2}}{2\cos\theta}+1\ (<0),\quad l_4:y=\frac{\sqrt{1-a^2}+\sqrt{1-b^2}}{2\cos\theta}-1\ (>0).$$ In each case, if $l_3$ crosses an invisible circle at two points, then $(\star\star)$ has exactly one positive root $h$. If $l_3$ neither crosses nor comes in contact with any circle, then $l_4$ gives us $c$, so we'll get exactly one negative root $h$ in $(\star\star)$. Is this correct? Did I miss anything? I'm afraid I might be mistaken. Edit 5: I got the following: Letting $\sin\epsilon=\frac{\sqrt{1-a^2}}{2}$, $\cos\omega=\frac{\sqrt{1-a^2}}{2}$, note that $0<\epsilon<\omega<\pi/2$ for any $a$. 
(i) When $0<\theta<\omega$, let's take $l_3$ to be the line: $$y=\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2\cos\theta}+1.$$ Note that this $l_3$ does cross every one of the upper-right sub-family of circles because of the following: $$\frac{\sqrt{1-a^2}+\sqrt{1-b^2}}{2\cos\theta}-1<\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2\cos\theta}+1 ⇔ \cos\theta>\frac{\sqrt{1-a^2}}{2}.$$ If we get $c$, we can take a positive root (there's exactly one positive root) in $(\star\star)$ as $h$, because this $l_3$ never crosses the lower-left sub-family of circles: this is because for any $d_1>d_2$, $$\frac{\sqrt{1-a^2}-\sqrt{1-b^2}}{2\cos\theta}+1<\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2\cos\theta}+1.$$ Then, let's calculate $k=\frac{a^2-b^2}{4h\cos\theta\sin\theta}$. If we don't get $c$, then let's take $l_4$ to be the line: $$y=\frac{\sqrt{1-a^2}-\sqrt{1-b^2}}{2\cos\theta}-1.$$ Note that this $l_4$ does cross every one of the lower-left sub-family of circles because of the following: $$\frac{-\sqrt{1-a^2}-\sqrt{1-b^2}}{2\cos\theta}+1>\frac{\sqrt{1-a^2}-\sqrt{1-b^2}}{2\cos\theta}-1 ⇔ \cos\theta>\frac{\sqrt{1-a^2}}{2}.$$ Since we do get $c$, we can calculate $k$. (ii) When $\omega<\theta<\pi/2$, let's take $l_3$ to be the line: $$x=\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2\sin\theta}+1.$$ Note that this $l_3$ does cross every one of the upper-right sub-family of circles because of the following: $$\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2\sin\theta}+1>\frac{\sqrt{1-a^2}+\sqrt{1-b^2}}{2\sin\theta}-1 ⇔ \sin\theta>\frac{\sqrt{1-a^2}}{2}.$$ If we get $c$, we can take a positive root (there's exactly one positive root) in $(\star\star)$ as $h$, because this $l_3$ never crosses the lower-left sub-family of circles: this is because for any $d_1>d_2$, $$\frac{\sqrt{1-b^2}-\sqrt{1-a^2}}{2\sin\theta}+1>\frac{\sqrt{1-a^2}-\sqrt{1-b^2}}{2\sin\theta}+1.$$ Then, let's calculate $k=\frac{a^2-b^2}{4h\cos\theta\sin\theta}$. 
If we don't get $c$, then let's take $l_4$ to be the line: $$x=\frac{\sqrt{1-a^2}-\sqrt{1-b^2}}{2\sin\theta}-1.$$ Note that this $l_4$ does cross every one of the lower-left sub-family of circles because of the following: $$\frac{\sqrt{1-a^2}-\sqrt{1-b^2}}{2\sin\theta}-1<-\frac{\sqrt{1-a^2}-\sqrt{1-b^2}}{2\sin\theta}+1 ⇔ \sin\theta>\frac{\sqrt{1-a^2}}{2}.$$ Since we do get $c$, we can calculate $k$. I hope I'm not mistaken. Edit 6: Just an idea. I guess that there exists a line (let's call it $L$) which satisfies the following several conditions. If it exists, it can be taken as $l_3$. 1. If we rotate $l_1$ about the point $(\frac{\sqrt{a^2-b^2}}{4\sin\theta}, \frac{\sqrt{a^2-b^2}}{4\cos\theta})$ by some angle, then we'll get $L$. Note that this point is the center of the smallest circle of the upper-right sub-family. 2. $L$ crosses every circle of the upper-right sub-family. 3. $L$ never crosses any circle of the lower-left sub-family. 4. The length of each chord cut by $L$ is different from the others. It might be difficult to give an actual example of such an $L$, but it is likely that one exists. - A possibility. Since $d_1$ & $d_2$ are strictly positive, then, in each of the quarter-planes determined by $\ell_1$ and $\ell_2$, there must be a smallest circle & a largest circle allowing chords of the given lengths (cut by lines $y=\pm x\tan t$). I'm thinking that one could find an origin-avoiding line ($\ell_3$) passing through all potential solution-circles in the "left/right" quarter-planes; if that line fails to cross the actual solution, make $\ell_4$ an origin-avoiding line passing through all the potential solution-circles in the "top/bottom" regions. (continued) – Blue Aug 17 '13 at 18:01 (Part 2) Origin-avoidance is key for $\ell_3$ (and $\ell_4$), since lines through the origin cannot distinguish between potential centers $(h,k)$ and $(-h,-k)$. 
This is why it's important that $d_1$ and $d_2$ cannot (both) be zero: if $d_1=d_2=0$, then the family of potential solution-circles (necessarily tangent to $\ell_1$ and $\ell_2$) has no smallest member; they converge on the origin, and no origin-avoiding line could pass through all of them. (Having at most one of $d_1$ and $d_2$ be zero would be okay, by this reasoning.) – Blue Aug 17 '13 at 18:14

Hmmm ... My origin-avoiding-line-through-all-potential-solution-circles idea has complications. My first attempt (taking $\ell_3$ to be a horizontal line through the center of the smallest left/right candidate circle) resulted in messy quartic polynomials in $h$, $k$, and $r^2$. I suppose it's possible that only one choice of roots satisfies the problem, but I'm not motivated enough to investigate. Perhaps a better origin-avoiding-line-through-all-potential-solution-circles exists that gives an elegant and obvious solution. – Blue Aug 17 '13 at 20:09

I added what I found again. I can represent possible circles by a real number $d$. I hope this would be helpful. – mathlove Aug 18 '13 at 6:50

You're correct that there's definitely a range of viability for my vertical/horizontal solution. For $\theta$ small (or close to $\pi/2$), the two sub-families of circles separate and stretch out, and my solution fails miserably. (I'm embarrassed not to have noticed this. :) I guess we just have to find some other way to constrain my $h$-quartic to provide exactly one positive root for each value of $c$. – Blue Aug 22 '13 at 6:33

After struggling with equations for a few fruitless minutes, I went back and read the question more carefully ... ;-)

My solution is at http://mathoverflow.net/questions/140435/finding-an-invisible-circle-by-drawing-another-line/ where the problem was re-posted.

edit: In response to Daniel's request, the following is a copy & paste of my reply at mathoverflow. (Good idea actually, Daniel, because it may be closed/deleted from there.)
begin quote

This I think is the big clue: "If the line you'll draw and a circle come in contact with each other, then you'll get the coordinates of the point of contact instead of getting 0 as the length of the chord."

If the first line cuts a chord of length $d_1$, then the circle is enclosed in a band centred on the line, ranging from being centred on the line when the circle has diameter $d_1$ to a unit circle offset on either side of the line, and the lines bounding the band are tangents to both these unit circles. (Of course these circles will coincide if $d_1 = 2$.)

With that in mind, what you should do is draw the third line parallel to the first at a distance of $\frac{d_1}{2}$ from it. If the circle is offset on the other side of the line, you then have a second chance, and you draw another parallel line the same distance the other side. One of these lines will then either be tangent to the circle (if it is centred on the first line and has diameter $d_1$), in which case the coordinates of the point of contact easily allow one to deduce the circle's centre, or it will cut out a chord, whose length, combined with the original chord length and the distance apart of the parallel lines, will allow the radius of the circle to be determined. But once you know the circle's radius, the chord lengths cut by any two oblique lines allow its centre to be determined.

Very nice problem!

end quote

P.S. I didn't elaborate the solution, because elementary geometric calculations seem out of place there, and I'd guess most people here would also have no difficulty in deriving the results explicitly based on this approach.

-

Could you perhaps reformulate the answer here? – Dan Rust Aug 31 '13 at 15:33

I should add that in the mathoverflow thread mathlove pointed out a flaw in the above solution. In fact this solution narrows down the circle to one of two possible positions. – John R Ramsden Sep 1 '13 at 10:43

Thanks for the edit.
It's a shame the argument didn't hold up fully. – Dan Rust Sep 1 '13 at 11:24

Trying another tack, let us denote the two given lines by $p_i x + q_i y = r_i$, where $p_i^2 + q_i^2 = 1$ ($i = 1, 2$) and, to avoid carting around a load of 2s, denote the respective chord lengths by $2 d_i$.

Then, denoting the centre of the circle by $(u, v)$, which is to be found, the equation of the circle is as follows, in which $p u + q v = r$ is our 3rd (or 4th) line, again as always with $p^2 + q^2 = 1$:

$$(x - u)^2 + (y - v)^2 = (p_1 u + q_1 v - r_1)^2 + d_1^2 = (p_2 u + q_2 v - r_2)^2 + d_2^2 = (p u + q v - r)^2 + d^2$$

Subtracting, to eliminate the leftmost sum of squares involving $x, y$, gives two conics (in general hyperbolas) in $u, v$. So if the question has a solution, then it must be possible to choose $p, q, r$ such that if $d > 0$ then these two conics intersect in exactly one point. This point must be at a common tangent if at least one conic is non-degenerate, or otherwise, i.e. if each conic is a pair of lines, one common intersection of all four lines.

The condition(s) on $p, q, r$ must not involve $d$, because that is not known in advance. But we can use different conditions based on the values of the $d_i$, for example whether or not $d_1 = d_2$.

I'll study this further tonight, but in the meanwhile feel free to edit/append the above if inspiration strikes!

-
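As a numerical aside (not part of the original thread): both approaches above ultimately rest on the chord–distance relation — a line at distance $t$ from the centre of a circle of radius $r$ cuts a chord of length $2\sqrt{r^2-t^2}$ — and the parallel-line answer recovers the radius from two chords on parallel lines. A quick sketch, with an arbitrarily chosen hidden circle (the values are test data, not from the problem):

```python
import math

def chord_length(r, dist):
    """Length of the chord cut from a circle of radius r by a line at
    distance dist from the centre (0.0 if the line misses the circle)."""
    return 2 * math.sqrt(r * r - dist * dist) if dist < r else 0.0

# Hidden circle: centre at distance d from line 1, radius r (arbitrary test data).
d, r = 0.5, 0.8
s = 0.4                       # offset of a second, parallel line toward the centre
c1 = chord_length(r, d)       # chord cut on line 1
c2 = chord_length(r, d - s)   # chord cut on the parallel line

# Recover d and r from the two chords and the known offset s, using
# r^2 - d^2 = (c1/2)^2  and  r^2 - (d - s)^2 = (c2/2)^2:
d_rec = (s * s + (c2 * c2 - c1 * c1) / 4) / (2 * s)
r_rec = math.sqrt(d_rec * d_rec + c1 * c1 / 4)
print(d_rec, r_rec)  # recovers the hidden 0.5 and 0.8 up to float error
```

Once the radius is known, the two original oblique chords pin down the centre up to the central symmetry noted in the comments.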
http://tex.stackexchange.com/questions/195779/gap-at-origin-when-using-axis-lines-left-in-conjunction-with-line-width-spec-vi
# Gap at origin when using axis lines*=left in conjunction with line width spec via axis line style

Please forgive the strange graph displayed here - I am simply trying to display a replicable chunk of code in the presence of all the packages I am attempting to use, and I want to avoid displaying data generated outside of my tex code. Whatever solution I come up with would ideally generalize to groupplot (i.e. I am trying to avoid drawing my axes from scratch via \draw). I am attempting to compile this in XeLaTeX.

The following chunk:

```latex
\documentclass{standalone}
\usepackage{tikz}
\usepackage{pgfplots}
\usepackage{fontspec}
\usepackage{fixltx2e}
\definecolor{nice_blue}{HTML}{377EB8}
\definecolor{nice_green}{HTML}{4DAF4A}
\begin{document}
\begin{tikzpicture}
\node at (.75,4.5) {\fontspec{Arial}w\textsubscript{1}};
\node at (1.25,5) {\fontspec{Arial}w\textsubscript{2}};
\node at (1.75,5.45) {\fontspec{Arial}w\textsubscript{3}};
\node[color=nice_green] at (5.35,3.5) {\fontspec{Arial}Well-being};
\begin{axis}[
    axis lines*=left,
    ytick=\empty,
    xtick=\empty,
    xmin=0, xmax=12,
    ymin=0, ymax=12,
    xlabel=\fontspec{Arial}Household Produced Investments x\textsubscript{ch},
    ylabel=\fontspec{Arial}Market Purchased Investments x\textsubscript{cm},
    axis line style = {line width=2.83464567pt},
    enlarge x limits=false
]
\addplot[->, nice_blue, line width=2.83464567pt] plot coordinates{(1,1)(5,5)};
\addplot[->, nice_green, line width=2.83464567pt] plot coordinates{(7,7)(8.5,9)};
\end{axis}
\end{tikzpicture}
\end{document}
```

Should yield the following plot:

I would like to know how to get a clean "mitred" corner in my axis with no gap, as opposed to the gap shown in the zoomed image below:

-

```latex
axis line style = {line width=2.83464567pt, shorten <=-0.5\pgflinewidth},
```
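For context on why the accepted one-liner closes the gap: `shorten <` with a negative length *lengthens* each axis path at its starting end (the origin) by half the line width, so the thick horizontal and vertical axis lines overlap there instead of butting up short of each other. A minimal sketch of the option in place (only the `axis line style` changes relative to the question; everything else is as posted):

```latex
\begin{axis}[
    axis lines*=left,
    % negative "shorten <" extends each axis line backwards by half its
    % own width, so the two thick lines overlap cleanly at the origin
    axis line style = {line width=2.83464567pt, shorten <=-0.5\pgflinewidth},
]
% ... plots as in the question ...
\end{axis}
```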
http://mathhelpforum.com/calculus/43311-using-ratio-test-find-radius-convergence.html
1. Using ratio test to find radius of convergence

f(x) = sigma n=0 to infinity of (2n/8^n)x^(3n)

1. Use the ratio test to find the radius of convergence of the power series.
2. Find f '(x) and write out the first three (non-zero) terms of the power series for f '(x).

1. I got a(n) = (2n * x^(3n)) / 8^n and a(n + 1) = [2(n+1) * x^(3(n + 1))] / 8^(n + 1). After that I did a(n+1) / a(n). Simplified and in the end got (x^3 / 8), which resulted in (x^3 < 8), making it equal R = 2. I'm wondering if this is the correct answer or not?

2. As for this question: I got the derivative as (2n/8^n) * 3nx^(3n-1), i.e. f '(x) = (6n^2/8^n) * x^(3n-1). Is that the correct derivative? Also, to find the first three terms, I just plug in numbers starting at 0 and then add up the first 3 non-zero answers, correct? Thanks.

2. Originally Posted by Kyeong

1. I got a(n) = (2n * x^(3n)) / 8^n and a(n + 1) = [2(n+1) * x^(3(n + 1))] / 8^(n + 1). After that I did a(n+1) / a(n). Simplified and in the end got (x^3 / 8), which resulted in (x^3 < 8), making it equal R = 2. I'm wondering if this is the correct answer or not?

Correct!

$\frac{a_{n+1}}{a_n} = \frac{\frac{2(n + 1)x^{3n+3}}{8^{n+1}}}{\frac{2nx^{3n}}{8^n}} = \frac{2(n+1)x^{3n + 3}}{8^{n+1}}\cdot\frac{8^n}{2nx^{3n}} = \frac{2(n+1)x^{3n + 3}}{2nx^{3n}}\cdot\frac{8^n}{8^{n+1}} = \left(1 + \frac1n\right)\frac{x^3}8$

which gives

$\lim_{n\to\infty}\left\lvert\frac{a_{n+1}}{a_n}\right\rvert = \frac{\left\lvert x^3\right\rvert}8.$

So to find the radius of convergence, we do

$\frac{\left\lvert x^3\right\rvert}8 < 1 \Rightarrow \left\lvert x^3\right\rvert < 8 \Rightarrow \left\lvert x\right\rvert^3 < 8 \Rightarrow \left\lvert x\right\rvert < 2 \Rightarrow -2 < x < 2$

Originally Posted by Kyeong

2. As for this question: I got the derivative as (2n/8^n) * 3nx^(3n-1), i.e. f '(x) = (6n^2/8^n) * x^(3n-1). Is that the correct derivative? Also, to find the first three terms, I just plug in numbers starting at 0 and then add up the first 3 non-zero answers, correct? Thanks.
Yes, but don't forget that this is still a series: $f'(x) = \sum_{n=1}^\infty\frac{6n^2x^{3n - 1}}{8^n}$

3. Oh yea, I almost forgot. Thanks for reminding me. I was doubting my answer and you just clarified what I thought. Thanks again.
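As a numerical aside (not part of the original thread), both computations are easy to spot-check with exact fractions: the coefficient ratio $a_{n+1}/a_n = (1+\frac1n)\frac18$ tends to $\frac18$, giving $|x^3| < 8$ and hence $R = 2$, and plugging $n = 1, 2, 3$ into $f'(x)$ gives the first three nonzero terms $\frac34 x^2 + \frac38 x^5 + \frac{27}{256} x^8$:

```python
from fractions import Fraction

# Coefficient of x^(3n) in f(x) = sum_{n>=0} (2n/8^n) x^(3n)
def a(n):
    return Fraction(2 * n, 8 ** n)

# The coefficient ratio a(n+1)/a(n) = (1 + 1/n)/8 tends to 1/8,
# so the series converges for |x^3| < 8, i.e. the radius is R = 2.
ratios = [a(n + 1) / a(n) for n in (10, 100, 1000)]
print([float(q) for q in ratios])  # approaches 0.125

# First three nonzero terms of f'(x) = sum_{n>=1} (6n^2/8^n) x^(3n-1):
deriv_coeffs = [Fraction(6 * n * n, 8 ** n) for n in (1, 2, 3)]
print(deriv_coeffs)  # values 3/4, 3/8, 27/256 (coefficients of x^2, x^5, x^8)
```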
http://stacks.math.columbia.edu/tag/023F
# The Stacks Project

## Tag 023F

### 34.3. Descent for modules

Let $R \to A$ be a ring map. By Simplicial, Example 14.5.5 this gives rise to a cosimplicial $R$-algebra $$\xymatrix{ A \ar@<1ex>[r] \ar@<-1ex>[r] & A \otimes_R A \ar@<0ex>[l] \ar@<2ex>[r] \ar@<0ex>[r] \ar@<-2ex>[r] & A \otimes_R A \otimes_R A \ar@<1ex>[l] \ar@<-1ex>[l] }$$ Let us denote this $(A/R)_\bullet$ so that $(A/R)_n$ is the $(n + 1)$-fold tensor product of $A$ over $R$. Given a map $\varphi : [n] \to [m]$ the $R$-algebra map $(A/R)_\bullet(\varphi)$ is the map $$a_0 \otimes \ldots \otimes a_n \longmapsto \prod\nolimits_{\varphi(i) = 0} a_i \otimes \prod\nolimits_{\varphi(i) = 1} a_i \otimes \ldots \otimes \prod\nolimits_{\varphi(i) = m} a_i$$ where we use the convention that the empty product is $1$. Thus the first few maps, notation as in Simplicial, Section 14.5, are $$\begin{matrix} \delta^1_0 & : & a_0 & \mapsto & 1 \otimes a_0 \\ \delta^1_1 & : & a_0 & \mapsto & a_0 \otimes 1 \\ \sigma^0_0 & : & a_0 \otimes a_1 & \mapsto & a_0a_1 \\ \delta^2_0 & : & a_0 \otimes a_1 & \mapsto & 1 \otimes a_0 \otimes a_1 \\ \delta^2_1 & : & a_0 \otimes a_1 & \mapsto & a_0 \otimes 1 \otimes a_1 \\ \delta^2_2 & : & a_0 \otimes a_1 & \mapsto & a_0 \otimes a_1 \otimes 1 \\ \sigma^1_0 & : & a_0 \otimes a_1 \otimes a_2 & \mapsto & a_0a_1 \otimes a_2 \\ \sigma^1_1 & : & a_0 \otimes a_1 \otimes a_2 & \mapsto & a_0 \otimes a_1a_2 \end{matrix}$$ and so on.

An $R$-module $M$ gives rise to a cosimplicial $(A/R)_\bullet$-module $(A/R)_\bullet \otimes_R M$. In other words $M_n = (A/R)_n \otimes_R M$, and we use the $R$-algebra maps $(A/R)_n \to (A/R)_m$ to define the corresponding maps on $M \otimes_R (A/R)_\bullet$.

The analogue to a descent datum for quasi-coherent sheaves in the setting of modules is the following.

Definition 34.3.1. Let $R \to A$ be a ring map. 1.
A descent datum $(N, \varphi)$ for modules with respect to $R \to A$ is given by an $A$-module $N$ and an isomorphism of $A \otimes_R A$-modules $$\varphi : N \otimes_R A \to A \otimes_R N$$ such that the cocycle condition holds: the diagram of $A \otimes_R A \otimes_R A$-module maps $$\xymatrix{ N \otimes_R A \otimes_R A \ar[rr]_{\varphi_{02}} \ar[rd]_{\varphi_{01}} & & A \otimes_R A \otimes_R N \\ & A \otimes_R N \otimes_R A \ar[ru]_{\varphi_{12}} & }$$ commutes (see below for notation). 2. A morphism $(N, \varphi) \to (N', \varphi')$ of descent data is a morphism of $A$-modules $\psi : N \to N'$ such that the diagram $$\xymatrix{ N \otimes_R A \ar[r]_\varphi \ar[d]_{\psi \otimes \text{id}_A} & A \otimes_R N \ar[d]^{\text{id}_A \otimes \psi} \\ N' \otimes_R A \ar[r]^{\varphi'} & A \otimes_R N' }$$ is commutative.

In the definition we use the notation that $\varphi_{01} = \varphi \otimes \text{id}_A$, $\varphi_{12} = \text{id}_A \otimes \varphi$, and $\varphi_{02}(n \otimes 1 \otimes 1) = \sum a_i \otimes 1 \otimes n_i$ if $\varphi(n \otimes 1) = \sum a_i \otimes n_i$. All three are $A \otimes_R A \otimes_R A$-module homomorphisms. Equivalently we have $$\varphi_{ij} = \varphi \otimes_{(A/R)_1, ~(A/R)_\bullet(\tau^2_{ij})} (A/R)_2$$ where $\tau^2_{ij} : [1] \to [2]$ is the map $0 \mapsto i$, $1 \mapsto j$. Namely, $(A/R)_{\bullet}(\tau^2_{02})(a_0 \otimes a_1) = a_0 \otimes 1 \otimes a_1$, and similarly for the others (see footnote 1).

We need some more notation to be able to state the next lemma. Let $(N, \varphi)$ be a descent datum with respect to a ring map $R \to A$. For $n \geq 0$ and $i \in [n]$ we set $$N_{n, i} = A \otimes_R \ldots \otimes_R A \otimes_R N \otimes_R A \otimes_R \ldots \otimes_R A$$ with the factor $N$ in the $i$th spot. It is an $(A/R)_n$-module.
If we introduce the maps $\tau^n_i : [0] \to [n]$, $0 \mapsto i$ then we see that $$N_{n, i} = N \otimes_{(A/R)_0, ~(A/R)_\bullet(\tau^n_i)} (A/R)_n$$ For $0 \leq i \leq j \leq n$ we let $\tau^n_{ij} : [1] \to [n]$ be the map such that $0$ maps to $i$ and $1$ to $j$. Similarly to the above the homomorphism $\varphi$ induces isomorphisms $$\varphi^n_{ij} = \varphi \otimes_{(A/R)_1, ~(A/R)_\bullet(\tau^n_{ij})} (A/R)_n : N_{n, i} \longrightarrow N_{n, j}$$ of $(A/R)_n$-modules when $i < j$. If $i = j$ we set $\varphi^n_{ij} = \text{id}$. Since these are all isomorphisms they allow us to move the factor $N$ to any spot we like. And the cocycle condition exactly means that it does not matter how we do this (e.g., as a composition of two of these or at once). Finally, for any $\beta : [n] \to [m]$ we define the morphism $$N_{\beta, i} : N_{n, i} \to N_{m, \beta(i)}$$ as the unique $(A/R)_\bullet(\beta)$-semi linear map such that $$N_{\beta, i}(1 \otimes \ldots \otimes n \otimes \ldots \otimes 1) = 1 \otimes \ldots \otimes n \otimes \ldots \otimes 1$$ for all $n \in N$. This hints at the following lemma.

Lemma 34.3.2. Let $R \to A$ be a ring map. Given a descent datum $(N, \varphi)$ we can associate to it a cosimplicial $(A/R)_\bullet$-module $N_\bullet$ (see footnote 2) by the rules $N_n = N_{n, n}$ and, given $\beta : [n] \to [m]$, $$N_\bullet(\beta) = (\varphi^m_{\beta(n)m}) \circ N_{\beta, n} : N_{n, n} \longrightarrow N_{m, m}.$$ This procedure is functorial in the descent datum. Proof.
Here are the first few maps, where $\varphi(n \otimes 1) = \sum \alpha_i \otimes x_i$: $$\begin{matrix} \delta^1_0 & : & N & \to & A \otimes N & n & \mapsto & 1 \otimes n \\ \delta^1_1 & : & N & \to & A \otimes N & n & \mapsto & \sum \alpha_i \otimes x_i\\ \sigma^0_0 & : & A \otimes N & \to & N & a_0 \otimes n & \mapsto & a_0n \\ \delta^2_0 & : & A \otimes N & \to & A \otimes A \otimes N & a_0 \otimes n & \mapsto & 1 \otimes a_0 \otimes n \\ \delta^2_1 & : & A \otimes N & \to & A \otimes A \otimes N & a_0 \otimes n & \mapsto & a_0 \otimes 1 \otimes n \\ \delta^2_2 & : & A \otimes N & \to & A \otimes A \otimes N & a_0 \otimes n & \mapsto & \sum a_0 \otimes \alpha_i \otimes x_i \\ \sigma^1_0 & : & A \otimes A \otimes N & \to & A \otimes N & a_0 \otimes a_1 \otimes n & \mapsto & a_0a_1 \otimes n \\ \sigma^1_1 & : & A \otimes A \otimes N & \to & A \otimes N & a_0 \otimes a_1 \otimes n & \mapsto & a_0 \otimes a_1n \end{matrix}$$ with notation as in Simplicial, Section 14.5. We first verify the two properties $\sigma^0_0 \circ \delta^1_0 = \text{id}$ and $\sigma^0_0 \circ \delta^1_1 = \text{id}$. The first one, $\sigma^0_0 \circ \delta^1_0 = \text{id}$, is clear from the explicit description of the morphisms above. To prove the second relation we have to use the cocycle condition (because it does not hold for an arbitrary isomorphism $\varphi : N \otimes_R A \to A \otimes_R N$). Write $p = \sigma^0_0 \circ \delta^1_1 : N \to N$. By the description of the maps above we deduce that $p$ is also equal to $$p = \varphi \otimes \text{id} : N = (N \otimes_R A) \otimes_{(A \otimes_R A)} A \longrightarrow (A \otimes_R N) \otimes_{(A \otimes_R A)} A = N$$ Since $\varphi$ is an isomorphism we see that $p$ is an isomorphism. Write $\varphi(n \otimes 1) = \sum \alpha_i \otimes x_i$ for certain $\alpha_i \in A$ and $x_i \in N$. Then $p(n) = \sum \alpha_ix_i$. Next, write $\varphi(x_i \otimes 1) = \sum \alpha_{ij} \otimes y_j$ for certain $\alpha_{ij} \in A$ and $y_j \in N$.
Then the cocycle condition says that $$\sum \alpha_i \otimes \alpha_{ij} \otimes y_j = \sum \alpha_i \otimes 1 \otimes x_i.$$ This means that $p(n) = \sum \alpha_ix_i = \sum \alpha_i\alpha_{ij}y_j = \sum \alpha_i p(x_i) = p(p(n))$. Thus $p$ is a projector, and since it is an isomorphism it is the identity. To prove fully that $N_\bullet$ is a cosimplicial module we have to check all 5 types of relations of Simplicial, Remark 14.5.3. The relations on composing $\sigma$'s are obvious. The relations on composing $\delta$'s come down to the cocycle condition for $\varphi$. In exactly the same way as above one checks the relations $\sigma_j \circ \delta_j = \sigma_j \circ \delta_{j + 1} = \text{id}$. Finally, the other relations on compositions of $\delta$'s and $\sigma$'s hold for any $\varphi$ whatsoever. $\square$ Note that to an $R$-module $M$ we can associate a canonical descent datum, namely $(M \otimes_R A, can)$ where $can : (M \otimes_R A) \otimes_R A \to A \otimes_R (M \otimes_R A)$ is the obvious map: $(m \otimes a) \otimes a' \mapsto a \otimes (m \otimes a')$. Lemma 34.3.3. Let $R \to A$ be a ring map. Let $M$ be an $R$-module. The cosimplicial $(A/R)_\bullet$-module associated to the canonical descent datum is isomorphic to the cosimplicial module $(A/R)_\bullet \otimes_R M$. Proof. Omitted. $\square$ Definition 34.3.4. Let $R \to A$ be a ring map. We say a descent datum $(N, \varphi)$ is effective if there exists an $R$-module $M$ and an isomorphism of descent data from $(M \otimes_R A, can)$ to $(N, \varphi)$. Let $R \to A$ be a ring map. Let $(N, \varphi)$ be a descent datum. We may take the cochain complex $s(N_\bullet)$ associated with $N_\bullet$ (see Simplicial, Section 14.25). It has the following shape: $$N \to A \otimes_R N \to A \otimes_R A \otimes_R N \to \ldots$$ We can describe the maps. 
The first map is the map $$n \longmapsto 1 \otimes n - \varphi(n \otimes 1).$$ The second map on pure tensors has the values $$a \otimes n \longmapsto 1 \otimes a \otimes n - a \otimes 1 \otimes n + a \otimes \varphi(n \otimes 1).$$ It is clear how the pattern continues. In the special case where $N = A \otimes_R M$ we see that for any $m \in M$ the element $1 \otimes m$ is in the kernel of the first map of the cochain complex associated to the cosimplicial module $(A/R)_\bullet \otimes_R M$. Hence we get an extended cochain complex $$\tag{34.3.4.1} 0 \to M \to A \otimes_R M \to A \otimes_R A \otimes_R M \to \ldots$$ Here we think of the $0$ as being in degree $-2$, the module $M$ in degree $-1$, the module $A \otimes_R M$ in degree $0$, etc. Note that this complex has the shape $$0 \to R \to A \to A \otimes_R A \to A \otimes_R A \otimes_R A \to \ldots$$ when $M = R$. Lemma 34.3.5. Suppose that $R \to A$ has a section. Then for any $R$-module $M$ the extended cochain complex (34.3.4.1) is exact. Proof. By Simplicial, Lemma 14.28.4 the map $R \to (A/R)_\bullet$ is a homotopy equivalence of cosimplicial $R$-algebras (here $R$ denotes the constant cosimplicial $R$-algebra). Hence $M \to (A/R)_\bullet \otimes_R M$ is a homotopy equivalence in the category of cosimplicial $R$-modules, because $\otimes_R M$ is a functor from the category of $R$-algebras to the category of $R$-modules, see Simplicial, Lemma 14.28.3. This implies that the induced map of associated complexes is a homotopy equivalence, see Simplicial, Lemma 14.28.5. Since the complex associated to the constant cosimplicial $R$-module $M$ is the complex $$\xymatrix{ M \ar[r]^0 & M \ar[r]^1 & M \ar[r]^0 & M \ar[r]^1 & M \ldots }$$ we win (since the extended version simply puts an extra $M$ at the beginning). $\square$ Lemma 34.3.6. Suppose that $R \to A$ is faithfully flat, see Algebra, Definition 10.38.1. Then for any $R$-module $M$ the extended cochain complex (34.3.4.1) is exact. Proof. 
Suppose we can show there exists a faithfully flat ring map $R \to R'$ such that the result holds for the ring map $R' \to A' = R' \otimes_R A$. Then the result follows for $R \to A$. Namely, for any $R$-module $M$ the cosimplicial module $(M \otimes_R R') \otimes_{R'} (A'/R')_\bullet$ is just the cosimplicial module $R' \otimes_R (M \otimes_R (A/R)_\bullet)$. Hence the vanishing of cohomology of the complex associated to $(M \otimes_R R') \otimes_{R'} (A'/R')_\bullet$ implies the vanishing of the cohomology of the complex associated to $M \otimes_R (A/R)_\bullet$ by faithful flatness of $R \to R'$. Similarly for the vanishing of cohomology groups in degrees $-1$ and $0$ of the extended complex (proof omitted). But we have such a faithful flat extension. Namely $R' = A$ works because the ring map $R' = A \to A' = A \otimes_R A$ has a section $a \otimes a' \mapsto aa'$ and Lemma 34.3.5 applies. $\square$ Here is how the complex relates to the question of effectivity. Lemma 34.3.7. Let $R \to A$ be a faithfully flat ring map. Let $(N, \varphi)$ be a descent datum. Then $(N, \varphi)$ is effective if and only if the canonical map $$A \otimes_R H^0(s(N_\bullet)) \longrightarrow N$$ is an isomorphism. Proof. If $(N, \varphi)$ is effective, then we may write $N = A \otimes_R M$ with $\varphi = can$. It follows that $H^0(s(N_\bullet)) = M$ by Lemmas 34.3.3 and 34.3.6. Conversely, suppose the map of the lemma is an isomorphism. In this case set $M = H^0(s(N_\bullet))$. This is an $R$-submodule of $N$, namely $M = \{n \in N \mid 1 \otimes n = \varphi(n \otimes 1)\}$. The only thing to check is that via the isomorphism $A \otimes_R M \to N$ the canonical descent data agrees with $\varphi$. We omit the verification. $\square$ Lemma 34.3.8. Let $R \to A$ be a ring map, and let $R \to R'$ be faithfully flat. Set $A' = R' \otimes_R A$. If all descent data for $R' \to A'$ are effective, then so are all descent data for $R \to A$. Proof. 
Let $(N, \varphi)$ be a descent datum for $R \to A$. Set $N' = R' \otimes_R N = A' \otimes_A N$, and denote by $\varphi' = \text{id}_{R'} \otimes \varphi$ the base change of the descent datum $\varphi$. Then $(N', \varphi')$ is a descent datum for $R' \to A'$ and $H^0(s(N'_\bullet)) = R' \otimes_R H^0(s(N_\bullet))$. Moreover, the map $A' \otimes_{R'} H^0(s(N'_\bullet)) \to N'$ is identified with the base change of the $A$-module map $A \otimes_R H^0(s(N_\bullet)) \to N$ via the faithfully flat map $A \to A'$. Hence we conclude by Lemma 34.3.7. $\square$ Here is the main result of this section. Its proof may seem a little clumsy; for a more highbrow approach see Remark 34.3.11 below. Proposition 34.3.9. Let $R \to A$ be a faithfully flat ring map. Then 1. any descent datum on modules with respect to $R \to A$ is effective, 2. the functor $M \mapsto (A \otimes_R M, can)$ from $R$-modules to the category of descent data is an equivalence, and 3. the inverse functor is given by $(N, \varphi) \mapsto H^0(s(N_\bullet))$. Proof. We only prove (1) and omit the proofs of (2) and (3). As $R \to A$ is faithfully flat, there exists a faithfully flat base change $R \to R'$ such that $R' \to A' = R' \otimes_R A$ has a section (namely take $R' = A$ as in the proof of Lemma 34.3.6). Hence, using Lemma 34.3.8 we may assume that $R \to A$ has a section, say $\sigma : A \to R$. Let $(N, \varphi)$ be a descent datum relative to $R \to A$. Set $$M = H^0(s(N_\bullet)) = \{n \in N \mid 1 \otimes n = \varphi(n \otimes 1)\} \subset N$$ By Lemma 34.3.7 it suffices to show that $A \otimes_R M \to N$ is an isomorphism. Take an element $n \in N$. Write $\varphi(n \otimes 1) = \sum a_i \otimes x_i$ for certain $a_i \in A$ and $x_i \in N$. By Lemma 34.3.2 we have $n = \sum a_i x_i$ in $N$ (because $\sigma^0_0 \circ \delta^1_1 = \text{id}$ in any cosimplicial object). Next, write $\varphi(x_i \otimes 1) = \sum a_{ij} \otimes y_j$ for certain $a_{ij} \in A$ and $y_j \in N$.
The cocycle condition means that $$\sum a_i \otimes a_{ij} \otimes y_j = \sum a_i \otimes 1 \otimes x_i$$ in $A \otimes_R A \otimes_R N$. We conclude two things from this. First, by applying $\sigma$ to the first $A$ we conclude that $\sum \sigma(a_i) \varphi(x_i \otimes 1) = \sum \sigma(a_i) \otimes x_i$ which means that $\sum \sigma(a_i) x_i \in M$. Next, by applying $\sigma$ to the middle $A$ and multiplying out we conclude that $\sum_i a_i (\sum_j \sigma(a_{ij}) y_j) = \sum a_i x_i = n$. Hence by the first conclusion we see that $A \otimes_R M \to N$ is surjective. Finally, suppose that $m_i \in M$ and $\sum a_i m_i = 0$. Then we see by applying $\varphi$ to $\sum a_i m_i \otimes 1$ that $\sum a_i \otimes m_i = 0$. In other words $A \otimes_R M \to N$ is injective and we win. $\square$

Remark 34.3.10. Let $R$ be a ring. Let $f_1, \ldots, f_n \in R$ generate the unit ideal. The ring $A = \prod_i R_{f_i}$ is a faithfully flat $R$-algebra. We remark that the cosimplicial ring $(A/R)_\bullet$ has the following ring in degree $n$: $$\prod\nolimits_{i_0, \ldots, i_n} R_{f_{i_0}\ldots f_{i_n}}$$ Hence the results above recover Algebra, Lemmas 10.22.1, 10.22.2 and 10.23.4. But the results above actually say more because of exactness in higher degrees. Namely, they imply that Čech cohomology of quasi-coherent sheaves on affines is trivial. Thus we get a second proof of Cohomology of Schemes, Lemma 29.2.1.

Remark 34.3.11. Let $R$ be a ring. Let $A_\bullet$ be a cosimplicial $R$-algebra. In this setting a descent datum corresponds to a cosimplicial $A_\bullet$-module $M_\bullet$ with the property that for every $n, m \geq 0$ and every $\varphi : [n] \to [m]$ the map $M(\varphi) : M_n \to M_m$ induces an isomorphism $$M_n \otimes_{A_n, A(\varphi)} A_m \longrightarrow M_m.$$ Let us call such a cosimplicial module a cartesian module. In this setting, the proof of Proposition 34.3.9 can be split into the following steps: 1.
If $R \to R'$ is faithfully flat, $R \to A$ any ring map, then descent data for $A/R$ are effective if descent data for $(R' \otimes_R A)/R'$ are effective. 2. Let $A$ be an $R$-algebra. Descent data for $A/R$ correspond to cartesian $(A/R)_\bullet$-modules. 3. If $R \to A$ has a section then $(A/R)_\bullet$ is homotopy equivalent to $R$, the constant cosimplicial $R$-algebra with value $R$. 4. If $A_\bullet \to B_\bullet$ is a homotopy equivalence of cosimplicial $R$-algebras then the functor $M_\bullet \mapsto M_\bullet \otimes_{A_\bullet} B_\bullet$ induces an equivalence of categories between cartesian $A_\bullet$-modules and cartesian $B_\bullet$-modules. For (1) see Lemma 34.3.8. Part (2) uses Lemma 34.3.2. Part (3) we have seen in the proof of Lemma 34.3.5 (it relies on Simplicial, Lemma 14.28.4). Moreover, part (4) is a triviality if you think about it right! 1. Note that $\tau^2_{ij} = \delta^2_k$, if $\{i, j, k\} = [2] = \{0, 1, 2\}$, see Simplicial, Definition 14.2.1. 2. We should really write $(N, \varphi)_\bullet$. The code snippet corresponding to this tag is a part of the file descent.tex and is located in lines 205–761 (see updates for more information). \section{Descent for modules} \label{section-descent-modules} \noindent Let $R \to A$ be a ring map. By Simplicial, Example \ref{simplicial-example-push-outs-simplicial-object} this gives rise to a cosimplicial $R$-algebra $$\xymatrix{ A \ar@<1ex>[r] \ar@<-1ex>[r] & A \otimes_R A \ar@<0ex>[l] \ar@<2ex>[r] \ar@<0ex>[r] \ar@<-2ex>[r] & A \otimes_R A \otimes_R A \ar@<1ex>[l] \ar@<-1ex>[l] }$$ Let us denote this $(A/R)_\bullet$ so that $(A/R)_n$ is the $(n + 1)$-fold tensor product of $A$ over $R$. 
Given a map $\varphi : [n] \to [m]$ the $R$-algebra map $(A/R)_\bullet(\varphi)$ is the map $$a_0 \otimes \ldots \otimes a_n \longmapsto \prod\nolimits_{\varphi(i) = 0} a_i \otimes \prod\nolimits_{\varphi(i) = 1} a_i \otimes \ldots \otimes \prod\nolimits_{\varphi(i) = m} a_i$$ where we use the convention that the empty product is $1$. Thus the first few maps, notation as in Simplicial, Section \ref{simplicial-section-cosimplicial-object}, are $$\begin{matrix} \delta^1_0 & : & a_0 & \mapsto & 1 \otimes a_0 \\ \delta^1_1 & : & a_0 & \mapsto & a_0 \otimes 1 \\ \sigma^0_0 & : & a_0 \otimes a_1 & \mapsto & a_0a_1 \\ \delta^2_0 & : & a_0 \otimes a_1 & \mapsto & 1 \otimes a_0 \otimes a_1 \\ \delta^2_1 & : & a_0 \otimes a_1 & \mapsto & a_0 \otimes 1 \otimes a_1 \\ \delta^2_2 & : & a_0 \otimes a_1 & \mapsto & a_0 \otimes a_1 \otimes 1 \\ \sigma^1_0 & : & a_0 \otimes a_1 \otimes a_2 & \mapsto & a_0a_1 \otimes a_2 \\ \sigma^1_1 & : & a_0 \otimes a_1 \otimes a_2 & \mapsto & a_0 \otimes a_1a_2 \end{matrix}$$ and so on. \medskip\noindent An $R$-module $M$ gives rise to a cosimplicial $(A/R)_\bullet$-module $(A/R)_\bullet \otimes_R M$. In other words $M_n = (A/R)_n \otimes_R M$ and using the $R$-algebra maps $(A/R)_n \to (A/R)_m$ to define the corresponding maps on $M \otimes_R (A/R)_\bullet$. \medskip\noindent The analogue to a descent datum for quasi-coherent sheaves in the setting of modules is the following. \begin{definition} \label{definition-descent-datum-modules} Let $R \to A$ be a ring map. 
\begin{enumerate} \item A {\it descent datum $(N, \varphi)$ for modules with respect to $R \to A$} is given by an $A$-module $N$ and an isomorphism of $A \otimes_R A$-modules $$\varphi : N \otimes_R A \to A \otimes_R N$$ such that the {\it cocycle condition} holds: the diagram of $A \otimes_R A \otimes_R A$-module maps $$\xymatrix{ N \otimes_R A \otimes_R A \ar[rr]_{\varphi_{02}} \ar[rd]_{\varphi_{01}} & & A \otimes_R A \otimes_R N \\ & A \otimes_R N \otimes_R A \ar[ru]_{\varphi_{12}} & }$$ commutes (see below for notation). \item A {\it morphism $(N, \varphi) \to (N', \varphi')$ of descent data} is a morphism of $A$-modules $\psi : N \to N'$ such that the diagram $$\xymatrix{ N \otimes_R A \ar[r]_\varphi \ar[d]_{\psi \otimes \text{id}_A} & A \otimes_R N \ar[d]^{\text{id}_A \otimes \psi} \\ N' \otimes_R A \ar[r]^{\varphi'} & A \otimes_R N' }$$ is commutative. \end{enumerate} \end{definition} \noindent In the definition we use the notation that $\varphi_{01} = \varphi \otimes \text{id}_A$, $\varphi_{12} = \text{id}_A \otimes \varphi$, and $\varphi_{02}(n \otimes 1 \otimes 1) = \sum a_i \otimes 1 \otimes n_i$ if $\varphi(n \otimes 1) = \sum a_i \otimes n_i$. All three are $A \otimes_R A \otimes_R A$-module homomorphisms. Equivalently we have $$\varphi_{ij} = \varphi \otimes_{(A/R)_1, \ (A/R)_\bullet(\tau^2_{ij})} (A/R)_2$$ where $\tau^2_{ij} : [1] \to [2]$ is the map $0 \mapsto i$, $1 \mapsto j$. Namely, $(A/R)_{\bullet}(\tau^2_{02})(a_0 \otimes a_1) = a_0 \otimes 1 \otimes a_1$, and similarly for the others\footnote{Note that $\tau^2_{ij} = \delta^2_k$, if $\{i, j, k\} = [2] = \{0, 1, 2\}$, see Simplicial, Definition \ref{simplicial-definition-face-degeneracy}.}. \medskip\noindent We need some more notation to be able to state the next lemma. Let $(N, \varphi)$ be a descent datum with respect to a ring map $R \to A$.
For $n \geq 0$ and $i \in [n]$ we set $$N_{n, i} = A \otimes_R \ldots \otimes_R A \otimes_R N \otimes_R A \otimes_R \ldots \otimes_R A$$ with the factor $N$ in the $i$th spot. It is an $(A/R)_n$-module. If we introduce the maps $\tau^n_i : [0] \to [n]$, $0 \mapsto i$ then we see that $$N_{n, i} = N \otimes_{(A/R)_0, \ (A/R)_\bullet(\tau^n_i)} (A/R)_n$$ For $0 \leq i \leq j \leq n$ we let $\tau^n_{ij} : [1] \to [n]$ be the map such that $0$ maps to $i$ and $1$ to $j$. Similarly to the above the homomorphism $\varphi$ induces isomorphisms $$\varphi^n_{ij} = \varphi \otimes_{(A/R)_1, \ (A/R)_\bullet(\tau^n_{ij})} (A/R)_n : N_{n, i} \longrightarrow N_{n, j}$$ of $(A/R)_n$-modules when $i < j$. If $i = j$ we set $\varphi^n_{ij} = \text{id}$. Since these are all isomorphisms they allow us to move the factor $N$ to any spot we like. And the cocycle condition exactly means that it does not matter how we do this (e.g., as a composition of two of these or at once). Finally, for any $\beta : [n] \to [m]$ we define the morphism $$N_{\beta, i} : N_{n, i} \to N_{m, \beta(i)}$$ as the unique $(A/R)_\bullet(\beta)$-semi-linear map such that $$N_{\beta, i}(1 \otimes \ldots \otimes n \otimes \ldots \otimes 1) = 1 \otimes \ldots \otimes n \otimes \ldots \otimes 1$$ for all $n \in N$. This hints at the following lemma. \begin{lemma} \label{lemma-descent-datum-cosimplicial} Let $R \to A$ be a ring map. Given a descent datum $(N, \varphi)$ we can associate to it a cosimplicial $(A/R)_\bullet$-module $N_\bullet$\footnote{We should really write $(N, \varphi)_\bullet$.} by the rules $N_n = N_{n, n}$ and given $\beta : [n] \to [m]$ setting $$N_\bullet(\beta) = (\varphi^m_{\beta(n)m}) \circ N_{\beta, n} : N_{n, n} \longrightarrow N_{m, m}.$$ This procedure is functorial in the descent datum.
\end{lemma} \begin{proof} Here are the first few maps where $\varphi(n \otimes 1) = \sum \alpha_i \otimes x_i$ $$\begin{matrix} \delta^1_0 & : & N & \to & A \otimes N & n & \mapsto & 1 \otimes n \\ \delta^1_1 & : & N & \to & A \otimes N & n & \mapsto & \sum \alpha_i \otimes x_i\\ \sigma^0_0 & : & A \otimes N & \to & N & a_0 \otimes n & \mapsto & a_0n \\ \delta^2_0 & : & A \otimes N & \to & A \otimes A \otimes N & a_0 \otimes n & \mapsto & 1 \otimes a_0 \otimes n \\ \delta^2_1 & : & A \otimes N & \to & A \otimes A \otimes N & a_0 \otimes n & \mapsto & a_0 \otimes 1 \otimes n \\ \delta^2_2 & : & A \otimes N & \to & A \otimes A \otimes N & a_0 \otimes n & \mapsto & \sum a_0 \otimes \alpha_i \otimes x_i \\ \sigma^1_0 & : & A \otimes A \otimes N & \to & A \otimes N & a_0 \otimes a_1 \otimes n & \mapsto & a_0a_1 \otimes n \\ \sigma^1_1 & : & A \otimes A \otimes N & \to & A \otimes N & a_0 \otimes a_1 \otimes n & \mapsto & a_0 \otimes a_1n \end{matrix}$$ with notation as in Simplicial, Section \ref{simplicial-section-cosimplicial-object}. We first verify the two properties $\sigma^0_0 \circ \delta^1_0 = \text{id}$ and $\sigma^0_0 \circ \delta^1_1 = \text{id}$. The first one, $\sigma^0_0 \circ \delta^1_0 = \text{id}$, is clear from the explicit description of the morphisms above. To prove the second relation we have to use the cocycle condition (because it does not hold for an arbitrary isomorphism $\varphi : N \otimes_R A \to A \otimes_R N$). Write $p = \sigma^0_0 \circ \delta^1_1 : N \to N$. By the description of the maps above we deduce that $p$ is also equal to $$p = \varphi \otimes \text{id} : N = (N \otimes_R A) \otimes_{(A \otimes_R A)} A \longrightarrow (A \otimes_R N) \otimes_{(A \otimes_R A)} A = N$$ Since $\varphi$ is an isomorphism we see that $p$ is an isomorphism. Write $\varphi(n \otimes 1) = \sum \alpha_i \otimes x_i$ for certain $\alpha_i \in A$ and $x_i \in N$. Then $p(n) = \sum \alpha_ix_i$.
Next, write $\varphi(x_i \otimes 1) = \sum \alpha_{ij} \otimes y_j$ for certain $\alpha_{ij} \in A$ and $y_j \in N$. Then the cocycle condition says that $$\sum \alpha_i \otimes \alpha_{ij} \otimes y_j = \sum \alpha_i \otimes 1 \otimes x_i.$$ This means that $p(n) = \sum \alpha_ix_i = \sum \alpha_i\alpha_{ij}y_j = \sum \alpha_i p(x_i) = p(p(n))$. Thus $p$ is a projector, and since it is an isomorphism it is the identity. \medskip\noindent To prove fully that $N_\bullet$ is a cosimplicial module we have to check all 5 types of relations of Simplicial, Remark \ref{simplicial-remark-relations-cosimplicial}. The relations on composing $\sigma$'s are obvious. The relations on composing $\delta$'s come down to the cocycle condition for $\varphi$. In exactly the same way as above one checks the relations $\sigma_j \circ \delta_j = \sigma_j \circ \delta_{j + 1} = \text{id}$. Finally, the other relations on compositions of $\delta$'s and $\sigma$'s hold for any $\varphi$ whatsoever. \end{proof} \noindent Note that to an $R$-module $M$ we can associate a canonical descent datum, namely $(M \otimes_R A, can)$ where $can : (M \otimes_R A) \otimes_R A \to A \otimes_R (M \otimes_R A)$ is the obvious map: $(m \otimes a) \otimes a' \mapsto a \otimes (m \otimes a')$. \begin{lemma} \label{lemma-canonical-descent-datum-cosimplicial} Let $R \to A$ be a ring map. Let $M$ be an $R$-module. The cosimplicial $(A/R)_\bullet$-module associated to the canonical descent datum is isomorphic to the cosimplicial module $(A/R)_\bullet \otimes_R M$. \end{lemma} \begin{proof} Omitted. \end{proof} \begin{definition} \label{definition-descent-datum-effective-module} Let $R \to A$ be a ring map. We say a descent datum $(N, \varphi)$ is {\it effective} if there exists an $R$-module $M$ and an isomorphism of descent data from $(M \otimes_R A, can)$ to $(N, \varphi)$. \end{definition} \noindent Let $R \to A$ be a ring map. Let $(N, \varphi)$ be a descent datum. 
We may take the cochain complex $s(N_\bullet)$ associated with $N_\bullet$ (see Simplicial, Section \ref{simplicial-section-dold-kan-cosimplicial}). It has the following shape: $$N \to A \otimes_R N \to A \otimes_R A \otimes_R N \to \ldots$$ We can describe the maps. The first map is the map $$n \longmapsto 1 \otimes n - \varphi(n \otimes 1).$$ The second map on pure tensors has the values $$a \otimes n \longmapsto 1 \otimes a \otimes n - a \otimes 1 \otimes n + a \otimes \varphi(n \otimes 1).$$ It is clear how the pattern continues. \medskip\noindent In the special case where $N = A \otimes_R M$ we see that for any $m \in M$ the element $1 \otimes m$ is in the kernel of the first map of the cochain complex associated to the cosimplicial module $(A/R)_\bullet \otimes_R M$. Hence we get an extended cochain complex \begin{equation} \label{equation-extended-complex} 0 \to M \to A \otimes_R M \to A \otimes_R A \otimes_R M \to \ldots \end{equation} Here we think of the $0$ as being in degree $-2$, the module $M$ in degree $-1$, the module $A \otimes_R M$ in degree $0$, etc. Note that this complex has the shape $$0 \to R \to A \to A \otimes_R A \to A \otimes_R A \otimes_R A \to \ldots$$ when $M = R$. \begin{lemma} \label{lemma-with-section-exact} Suppose that $R \to A$ has a section. Then for any $R$-module $M$ the extended cochain complex (\ref{equation-extended-complex}) is exact. \end{lemma} \begin{proof} By Simplicial, Lemma \ref{simplicial-lemma-push-outs-simplicial-object-w-section} the map $R \to (A/R)_\bullet$ is a homotopy equivalence of cosimplicial $R$-algebras (here $R$ denotes the constant cosimplicial $R$-algebra). Hence $M \to (A/R)_\bullet \otimes_R M$ is a homotopy equivalence in the category of cosimplicial $R$-modules, because $\otimes_R M$ is a functor from the category of $R$-algebras to the category of $R$-modules, see Simplicial, Lemma \ref{simplicial-lemma-functorial-homotopy}.
This implies that the induced map of associated complexes is a homotopy equivalence, see Simplicial, Lemma \ref{simplicial-lemma-homotopy-s-Q}. Since the complex associated to the constant cosimplicial $R$-module $M$ is the complex $$\xymatrix{ M \ar[r]^0 & M \ar[r]^1 & M \ar[r]^0 & M \ar[r]^1 & M \ldots }$$ we win (since the extended version simply puts an extra $M$ at the beginning). \end{proof} \begin{lemma} \label{lemma-ff-exact} Suppose that $R \to A$ is faithfully flat, see Algebra, Definition \ref{algebra-definition-flat}. Then for any $R$-module $M$ the extended cochain complex (\ref{equation-extended-complex}) is exact. \end{lemma} \begin{proof} Suppose we can show there exists a faithfully flat ring map $R \to R'$ such that the result holds for the ring map $R' \to A' = R' \otimes_R A$. Then the result follows for $R \to A$. Namely, for any $R$-module $M$ the cosimplicial module $(M \otimes_R R') \otimes_{R'} (A'/R')_\bullet$ is just the cosimplicial module $R' \otimes_R (M \otimes_R (A/R)_\bullet)$. Hence the vanishing of cohomology of the complex associated to $(M \otimes_R R') \otimes_{R'} (A'/R')_\bullet$ implies the vanishing of the cohomology of the complex associated to $M \otimes_R (A/R)_\bullet$ by faithful flatness of $R \to R'$. Similarly for the vanishing of cohomology groups in degrees $-1$ and $0$ of the extended complex (proof omitted). \medskip\noindent But we have such a faithfully flat extension. Namely $R' = A$ works because the ring map $R' = A \to A' = A \otimes_R A$ has a section $a \otimes a' \mapsto aa'$ and Lemma \ref{lemma-with-section-exact} applies. \end{proof} \noindent Here is how the complex relates to the question of effectivity. \begin{lemma} \label{lemma-recognize-effective} Let $R \to A$ be a faithfully flat ring map. Let $(N, \varphi)$ be a descent datum. Then $(N, \varphi)$ is effective if and only if the canonical map $$A \otimes_R H^0(s(N_\bullet)) \longrightarrow N$$ is an isomorphism.
\end{lemma} \begin{proof} If $(N, \varphi)$ is effective, then we may write $N = A \otimes_R M$ with $\varphi = can$. It follows that $H^0(s(N_\bullet)) = M$ by Lemmas \ref{lemma-canonical-descent-datum-cosimplicial} and \ref{lemma-ff-exact}. Conversely, suppose the map of the lemma is an isomorphism. In this case set $M = H^0(s(N_\bullet))$. This is an $R$-submodule of $N$, namely $M = \{n \in N \mid 1 \otimes n = \varphi(n \otimes 1)\}$. The only thing to check is that via the isomorphism $A \otimes_R M \to N$ the canonical descent datum agrees with $\varphi$. We omit the verification. \end{proof} \begin{lemma} \label{lemma-descent-descends} Let $R \to A$ be a ring map, and let $R \to R'$ be faithfully flat. Set $A' = R' \otimes_R A$. If all descent data for $R' \to A'$ are effective, then so are all descent data for $R \to A$. \end{lemma} \begin{proof} Let $(N, \varphi)$ be a descent datum for $R \to A$. Set $N' = R' \otimes_R N = A' \otimes_A N$, and denote $\varphi' = \text{id}_{R'} \otimes \varphi$ the base change of the descent datum $\varphi$. Then $(N', \varphi')$ is a descent datum for $R' \to A'$ and $H^0(s(N'_\bullet)) = R' \otimes_R H^0(s(N_\bullet))$. Moreover, the map $A' \otimes_{R'} H^0(s(N'_\bullet)) \to N'$ is identified with the base change of the $A$-module map $A \otimes_R H^0(s(N)) \to N$ via the faithfully flat map $A \to A'$. Hence we conclude by Lemma \ref{lemma-recognize-effective}. \end{proof} \noindent Here is the main result of this section. Its proof may seem a little clumsy; for a more highbrow approach see Remark \ref{remark-homotopy-equivalent-cosimplicial-algebras} below. \begin{proposition} \label{proposition-descent-module} \begin{slogan} Effective descent for modules along faithfully flat ring maps. \end{slogan} Let $R \to A$ be a faithfully flat ring map.
Then \begin{enumerate} \item any descent datum on modules with respect to $R \to A$ is effective, \item the functor $M \mapsto (A \otimes_R M, can)$ from $R$-modules to the category of descent data is an equivalence, and \item the inverse functor is given by $(N, \varphi) \mapsto H^0(s(N_\bullet))$. \end{enumerate} \end{proposition} \begin{proof} We only prove (1) and omit the proofs of (2) and (3). As $R \to A$ is faithfully flat, there exists a faithfully flat base change $R \to R'$ such that $R' \to A' = R' \otimes_R A$ has a section (namely take $R' = A$ as in the proof of Lemma \ref{lemma-ff-exact}). Hence, using Lemma \ref{lemma-descent-descends} we may assume that $R \to A$ has a section, say $\sigma : A \to R$. Let $(N, \varphi)$ be a descent datum relative to $R \to A$. Set $$M = H^0(s(N_\bullet)) = \{n \in N \mid 1 \otimes n = \varphi(n \otimes 1)\} \subset N$$ By Lemma \ref{lemma-recognize-effective} it suffices to show that $A \otimes_R M \to N$ is an isomorphism. \medskip\noindent Take an element $n \in N$. Write $\varphi(n \otimes 1) = \sum a_i \otimes x_i$ for certain $a_i \in A$ and $x_i \in N$. By Lemma \ref{lemma-descent-datum-cosimplicial} we have $n = \sum a_i x_i$ in $N$ (because $\sigma^0_0 \circ \delta^1_1 = \text{id}$ in any cosimplicial object). Next, write $\varphi(x_i \otimes 1) = \sum a_{ij} \otimes y_j$ for certain $a_{ij} \in A$ and $y_j \in N$. The cocycle condition means that $$\sum a_i \otimes a_{ij} \otimes y_j = \sum a_i \otimes 1 \otimes x_i$$ in $A \otimes_R A \otimes_R N$. We conclude two things from this. First, by applying $\sigma$ to the first $A$ we conclude that $\sum \sigma(a_i) \varphi(x_i \otimes 1) = \sum \sigma(a_i) \otimes x_i$ which means that $\sum \sigma(a_i) x_i \in M$. Next, by applying $\sigma$ to the middle $A$ and multiplying out we conclude that $\sum_i a_i (\sum_j \sigma(a_{ij}) y_j) = \sum a_i x_i = n$. Hence by the first conclusion we see that $A \otimes_R M \to N$ is surjective.
Finally, suppose that $m_i \in M$ and $\sum a_i m_i = 0$. Then we see by applying $\varphi$ to $\sum a_im_i \otimes 1$ that $\sum a_i \otimes m_i = 0$. In other words $A \otimes_R M \to N$ is injective and we win. \end{proof} \begin{remark} \label{remark-standard-covering} Let $R$ be a ring. Let $f_1, \ldots, f_n\in R$ generate the unit ideal. The ring $A = \prod_i R_{f_i}$ is a faithfully flat $R$-algebra. We remark that the cosimplicial ring $(A/R)_\bullet$ has the following ring in degree $n$: $$\prod\nolimits_{i_0, \ldots, i_n} R_{f_{i_0}\ldots f_{i_n}}$$ Hence the results above recover Algebra, Lemmas \ref{algebra-lemma-standard-covering}, \ref{algebra-lemma-cover-module} and \ref{algebra-lemma-glue-modules}. But the results above actually say more because of exactness in higher degrees. Namely, it implies that {\v C}ech cohomology of quasi-coherent sheaves on affines is trivial. Thus we get a second proof of Cohomology of Schemes, Lemma \ref{coherent-lemma-cech-cohomology-quasi-coherent-trivial}. \end{remark} \begin{remark} \label{remark-homotopy-equivalent-cosimplicial-algebras} Let $R$ be a ring. Let $A_\bullet$ be a cosimplicial $R$-algebra. In this setting a descent datum corresponds to a cosimplicial $A_\bullet$-module $M_\bullet$ with the property that for every $n, m \geq 0$ and every $\varphi : [n] \to [m]$ the map $M(\varphi) : M_n \to M_m$ induces an isomorphism $$M_n \otimes_{A_n, A(\varphi)} A_m \longrightarrow M_m.$$ Let us call such a cosimplicial module a {\it cartesian module}. In this setting, the proof of Proposition \ref{proposition-descent-module} can be split into the following steps \begin{enumerate} \item If $R \to R'$ is faithfully flat, $R \to A$ any ring map, then descent data for $A/R$ are effective if descent data for $(R' \otimes_R A)/R'$ are effective. \item Let $A$ be an $R$-algebra. Descent data for $A/R$ correspond to cartesian $(A/R)_\bullet$-modules.
\item If $R \to A$ has a section then $(A/R)_\bullet$ is homotopy equivalent to $R$, the constant cosimplicial $R$-algebra with value $R$. \item If $A_\bullet \to B_\bullet$ is a homotopy equivalence of cosimplicial $R$-algebras then the functor $M_\bullet \mapsto M_\bullet \otimes_{A_\bullet} B_\bullet$ induces an equivalence of categories between cartesian $A_\bullet$-modules and cartesian $B_\bullet$-modules. \end{enumerate} For (1) see Lemma \ref{lemma-descent-descends}. Part (2) uses Lemma \ref{lemma-descent-datum-cosimplicial}. Part (3) we have seen in the proof of Lemma \ref{lemma-with-section-exact} (it relies on Simplicial, Lemma \ref{simplicial-lemma-push-outs-simplicial-object-w-section}). Moreover, part (4) is a triviality if you think about it right! \end{remark}

Comment #1867 by Laurent Moret-Bailly on March 27, 2016 at 4:48 pm UTC: Lemma 34.3.7: I think faithful flatness is not needed for the "if" part.

Comment #1902 by Johan (site) on April 2, 2016 at 12:23 am UTC: Sorry, I tried to figure out what you are saying, but I could not. In any case you do not seem to be saying the lemma is wrong, only that it could be improved upon, so I'll leave it for now.
# Maps on 3-manifolds given by surgery

Boldizsár Kalmár, Alfréd Rényi Institute of Mathematics, Hungarian Academy of Sciences, Reáltanoda u. 13-15, 1053 Budapest, Hungary, and András I. Stipsicz, Alfréd Rényi Institute of Mathematics, Hungarian Academy of Sciences, Reáltanoda u. 13-15, 1053 Budapest, Hungary and Institute for Advanced Study, Princeton, NJ, 08540

###### Abstract.

Suppose that the $3$-manifold $M$ is given by integral surgery along a link $L \subset S^3$. In the following we construct a stable map from $M$ to the plane, whose singular set is canonically oriented. We obtain upper bounds for the minimal numbers of crossings and non-simple singularities and of connected components of fibers of stable maps from $M$ to the plane in terms of properties of $L$.

###### Key words and phrases: Stable map, $3$-manifold, surgery, negative knot, Thurston-Bennequin number.

2010 Mathematics Subject Classification. Primary 57R45; Secondary 57M27.

## 1. Introduction

It is well-known that a continuous map between smooth manifolds can be approximated by a smooth map and any smooth map on a 3-manifold can be approximated by a generic stable map. This line of argument, however, gives no concrete map on a given 3-manifold even if it is given by some explicit construction. Recall that by [Li62, Wa60] a closed oriented 3-manifold $M$ can be given by integral surgery along some link $L$ in $S^3$. In the present work we construct an explicit stable map $M \to \mathbb{R}^2$ based on such a surgery presentation of $M$. Results in [Gr09, Gr10] give lower bounds on the topological complexity of the set of critical values of generic smooth maps and on the complexity of the fibers in terms of the topology of the source and target manifolds. In a slightly different direction, [CT08] gives a lower bound for the number of crossing singularities of stable maps from a $3$-manifold to $\mathbb{R}^2$ in terms of the Gromov norm of the $3$-manifold.
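For background, we recall the classical local normal forms of the singular points of a stable map of an orientable $3$-manifold to the plane (standard material going back to Whitney and [Le65]; the coordinate formulas below are the usual ones and are not taken from this paper):

```latex
% Local normal forms of stable singularities of a map M^3 -> R^2,
% in suitable local coordinates around the singular point:
\begin{align*}
\text{definite fold:}   \quad & (x, y, z) \longmapsto (x,\ y^2 + z^2), \\
\text{indefinite fold:} \quad & (x, y, z) \longmapsto (x,\ y^2 - z^2), \\
\text{cusp:}            \quad & (x, y, z) \longmapsto (x,\ y^3 + xy \pm z^2).
\end{align*}
```

Definite and indefinite folds differ in the sign of the quadratic part; the types (a)–(e) discussed below refine this classification by recording the local structure of the Stein factorization around the singular point.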
Recently [Ba08, Ba09] and [GK07] studied the topology of $3$-manifolds through the singularities of their maps into surfaces. In the present paper we give upper bounds on the minimal numbers of the crossings and non-simple singularities and of the connected components of the fibers of stable maps on the $3$-manifold $M$ in terms of properties of diagrams of $L$ (e.g. the number of crossings or the number of critical points when projected to a line). As an additional result, these constructions lead to upper bounds on a version of the Thurston-Bennequin number of negative Legendrian knots. Before stating our main results, we need a little preparation. First of all, a stable map of a $3$-manifold into the plane can be easily described by its Stein factorization.

###### Definition 1.1.

Let $f$ be a map of the $3$-manifold $M$ into $\mathbb{R}^2$. Let us call two points $p, q \in M$ equivalent if and only if $p$ and $q$ lie on the same component of an $f$-fiber. Let $W_f$ denote the quotient space of $M$ with respect to this equivalence relation and $q_f \colon M \to W_f$ the quotient map. Then there exists a unique continuous map $\bar{f} \colon W_f \to \mathbb{R}^2$ such that $f = \bar{f} \circ q_f$. The space $W_f$ or the factorization of the map $f$ into the composition of $q_f$ and $\bar{f}$ is called the Stein factorization of the map $f$. (Sometimes the map $\bar{f}$ is also called the Stein factorization of $f$.)

In other words, the Stein factorization is the space of connected components of fibers of $f$. Its structure is strongly related to the topology of the $3$-manifold $M$. For example, an immediate observation is that the quotient map $q_f$ induces an epimorphism between the fundamental groups since every loop in $W_f$ can be lifted to $M$. If $f$ is a stable map, then its Stein factorization $W_f$ is a $2$-dimensional CW complex. The local forms of Stein factorizations of proper stable maps of orientable $3$-manifolds into surfaces are described in [KLP84, Le85], see Figure 1. Indeed, let $f \colon M \to \mathbb{R}^2$ be a stable map of the closed orientable $3$-manifold $M$ into $\mathbb{R}^2$.
We say that a singular point $p \in M$ of $f$ is of type (a), …, (e), respectively, if the Stein factorization at $q_f(p)$ looks locally like (a), …, (e) of Figure 1, respectively. We will call a point $q \in W_f$ a singular point of type (a), …, (e), respectively, if $q = q_f(p)$ for a singular point $p$ of type (a), …, (e), respectively. According to [KLP84, Le85] we give the following characterization of the singularities of $f$: The singular point $p$ is a cusp point if and only if it is of type (c), the singular point $p$ is a definite fold point if and only if it is of type (a) and $p$ is an indefinite fold point if and only if it is of type (b), (d) or (e). Singular points of types (d) and (e) are called non-simple, while the others are called simple. A double point in $\mathbb{R}^2$ of two crossing images of singular curves which is not an image of a non-simple singularity is called a simple singularity crossing. A simple singularity crossing or an image in $\mathbb{R}^2$ of a non-simple singularity is called a crossing singularity. A stable map is called a fold map if it has no cusp singularities.

Let $L$ be a given link, and let $\bar{L}$ denote a generic projection of it to the plane. Let $\ell(L)$ and $\mathop{\rm cr}(\bar{L})$ denote the number of components of $L$ and the number of crossings of $\bar{L}$, respectively. Choose a direction in $\mathbb{R}^2$, which we represent by a vector $v$. We can assume that $v$ satisfies the condition that the projection of the diagram $\bar{L}$ to a line along $v$ yields only non-degenerate critical points. Let $t_v(\bar{L})$ denote the number of times $\bar{L}$ is tangent to $v$. Suppose at each $v$-tangency $x$ the half line emanating from $x$ in the direction of $v$ avoids the crossings of $\bar{L}$ and intersects $\bar{L}$ transversally (at the points different from $x$). Denote the number of transversal intersections by $\ell(\bar{L}, v, x)$. Let $\ell(\bar{L}, v)$ denote the maximum of the values $\ell(\bar{L}, v, x)$, where $x$ runs over the $v$-tangencies. With these definitions in place now we can state the main result of the paper.

###### Theorem 1.2.

Suppose that the 3-manifold $M$ is obtained by integral surgery on the link $L$. Then there is a stable map $f \colon M \to \mathbb{R}^2$ such that

1. the Stein factorization $W_f$ is homotopy equivalent to a bouquet of circles,

2.
the number of cusps of $f$ is equal to ,

3. all the non-simple singularities of $f$ are of type (d), and their number is equal to ,

4. the number of non-simple singularities which are not connected by any singular arc of type (b) to any cusp is equal to ,

5. the number of simple singularity crossings of $f$ in $\mathbb{R}^2$ is no more than
$$8\mathop{\rm cr}(\bar{L}) + 6\ell(\bar{L}, v)t_v(\bar{L}) + t_v(\bar{L})^2,$$

6. the number of connected components of the singular set of $f$ is no more than , and

7. the maximal number of the connected components of any fiber of $f$ is no more than .

8. Suppose we got $M$ by cutting out and gluing back the regular neighborhood of $L$ from $S^3$. Then the indefinite fold singular set of $f$ contains a link in $M$, which is isotopic to $L$ in the complement of the surgery solid tori and whose image coincides with $\bar{L}$.

###### Remark 1.3.

1. Let $N$ be a closed orientable $3$-manifold, $g$ a given smooth map of $N$ into $\mathbb{R}^2$ and $L \subset N$ a link disjoint from the singular set of $g$. Suppose furthermore that $g|_L$ is an immersion. Let $M$ denote the $3$-manifold obtained by some integral surgery along $L$. Then the method developed in the proof of Theorem 1.2 provides a stable map of $M$ into $\mathbb{R}^2$ (relative to $g$).

2. In constructing the map $f$, the proof of Theorem 1.2 provides a sequence of stable maps of $S^3$ into $\mathbb{R}^2$, where each map is obtained from the previous one by some deformation. Finally, the map $f$ is obtained from the last member of this sequence.

Suppose that $X$ is a compact $4$-manifold which admits a handle decomposition with only $0$- and $2$-handles, i.e. $X$ can be given by attaching 4-dimensional 2-handles to $D^4$ along $L \subset S^3 = \partial D^4$. Using our method we can construct a stable map of $X$ into the plane. Recall that according to [BR74] a closed orientable $3$-manifold has a stable map into $\mathbb{R}^2$ without singularities of types (b), (c), (d) and (e) if and only if it is a connected sum of finitely many copies of $S^2 \times S^1$. According to [Sa96] a closed orientable $3$-manifold has a stable map into $\mathbb{R}^2$ without singular points of types (c), (d) and (e) if and only if it is a graph manifold. By [Le65] a $3$-manifold always has a stable map into $\mathbb{R}^2$ without singular points of type (c). Our arguments imply a constructive proof for

###### Theorem 1.4.
Every closed orientable $3$-manifold has a stable map into $\mathbb{R}^2$ without singular points of types (c) and (e).

###### Remark 1.5.

1. One cannot expect to eliminate the singular points of types (a), (b) or (d) of stable maps from arbitrary closed orientable $3$-manifolds to $\mathbb{R}^2$. In this sense our Theorem 1.4 gives the best possible elimination on $3$-manifolds.

2. By taking an embedding $\mathbb{R}^2 \subset S^2$ we get for every closed orientable $3$-manifold a stable map into $S^2$ as well without singular points of types (c) and (e). Then by using the method of [Sa06], for example, for eliminating the singular points of type (a), we get a stable map, which is a direct analogue of the indefinite generic maps appearing in [Ba08, Ba09, GK07].

The construction also implies certain relations between quantities one can naturally associate to stable maps and to surgery diagrams.

###### Definition 1.6.

Suppose that $M$ is a fixed closed, oriented $3$-manifold and $F \colon M \to \mathbb{R}^2$ is a stable map with singular set $S_F$.

• Let denote the number of simple singularity crossings of $F$.

• Let denote the number of non-simple singularities of $F$.

• Let denote the number of crossing singularities of $F$. Clearly .

• Let denote the number of non-simple singularities of $F$ which are not connected by any singular arc of type (b) to any cusp.

• Let denote the number of cusps of $F$. Clearly .

• Let denote the number of connected components of $F(S_F)$. Clearly it is no more than the number of connected components of $S_F$.

• Let denote the maximum number of connected components of the fibers of $F$.

The inequality
$$\mathop{\rm rank} H_*(M) \le 2d(F) + c(F) + 2cc(F)$$
has been shown to hold in [Gr09, Section 2.1]. (The paper [MPS95] is also closely related.) In addition, by [CT08, Theorem 3.38] we have , where $\|M\|$ is the Gromov norm of $M$, cf. also [Gr09, Section 3]. Theorem 1.2 provides several estimates for upper bounds on the topological complexity of smooth maps of a $3$-manifold given by surgery.
For example, by summing quantities in Definition 1.6 and their estimates in Theorem 1.2, we immediately obtain

###### Corollary 1.7.

Suppose that the 3-manifold $M$ is obtained by integral surgery on the link $L$. Let $\bar{L}$ be any diagram of $L$ and $v$ a general position vector in $\mathbb{R}^2$. Then

• ,
• ,
• ,

where the minima are taken for all the stable maps of $M$ into $\mathbb{R}^2$.

Evidently, we can estimate other properties in Definition 1.6 of stable maps on $M$ as well. These expressions can be simplified by estimating $\ell(\bar{L}, v)$ as
$$\ell(\bar{L}, v) \le t_v(\bar{L}) - 1, \tag{1.1}$$
cf. Lemma 3.7.

The number of tangencies of a projection of a knot in a fixed direction is reminiscent of the number of cusp singularities of a front projection of a Legendrian knot in the standard contact 3-space. Based on this analogy, our previous results imply an estimate on a quantity attached to a Legendrian knot in the following way. Recall first that the standard contact structure $\xi_{st}$ on $\mathbb{R}^3$ is the 2-plane field given by the kernel of the 1-form $dz - y\,dx$. A knot is Legendrian if its tangent vectors are in $\xi_{st}$. (To indicate the Legendrian structure on the knot, we will denote it by $\mathcal{L}$ and reserve the notation $L$ for smooth knots and links.) If $\mathcal{L}$ is chosen generically within its Legendrian isotopy class, its projection to the $(x, z)$-plane will have no vertical tangencies, and at any crossing the strand with smaller slope will be over the one with higher slope. Consider now a Legendrian knot $\mathcal{L}$ and let $D(\mathcal{L})$ denote such a projection (called a front projection) of $\mathcal{L}$. The Thurston-Bennequin number of $\mathcal{L}$ is given by the formula $tb(\mathcal{L}) = w(D(\mathcal{L})) - \frac{1}{2}\#\{\text{cusps of } D(\mathcal{L})\}$, where $w$ stands for the writhe (i.e. the signed sum of the double points) of the projection. Although the definition of tb uses a projection of the Legendrian knot $\mathcal{L}$, it is not hard to show that the resulting number is an invariant of the Legendrian isotopy class of $\mathcal{L}$. In case the projection has only negative crossings, we have that the writhe equals $-\mathop{\rm cr}(D(\mathcal{L}))$, hence the resulting Thurston-Bennequin number can be identified with after choosing $v$ appropriately, cf. [Ge08, OS04].
(In this case the generic projection used in the definitions of $\mathop{\rm cr}$ and $t_v$ is derived from the front projection by rounding the cusps.) As it is customary, we define the maximal Thurston-Bennequin number of $L$ as the maximum of all Thurston-Bennequin numbers of Legendrian knots smoothly isotopic to $L$. (It is a nontrivial fact, and follows from the tightness of $\xi_{st}$, that this maximum exists.) A modification of this definition for negative knots (i.e. for knots admitting projections with only negative crossings) provides

###### Definition 1.8.

For a negative knot $L$ let denote the value $\max tb(\mathcal{L})$ where $\mathcal{L}$ runs over those Legendrian knots smoothly isotopic to $L$ which admit front diagrams with only negative crossings. It is rather easy to see that if the knot admits a projection with only negative crossings, then it also has a front projection with the same property. Clearly this maximum is no more than the maximal Thurston-Bennequin number of $L$.

###### Theorem 1.9.

For a negative knot $L$ and any $3$-manifold $M$ obtained by an integral surgery along $L$ we have

• ,
• ,
• ,

where the minima are taken for all the stable maps of $M$ into $\mathbb{R}^2$.

By Theorem 1.9 and [CT08, Theorem 3.38] we obtain

###### Corollary 1.10.

For a negative knot $L$ and any $3$-manifold $M$ obtained by an integral surgery along $L$, we have .

Acknowledgements: The authors were supported by OTKA NK81203 and by the Lendület program of the Hungarian Academy of Sciences. The first author was partially supported by the Magyary Zoltán Postdoctoral Fellowship. The authors thank the anonymous referee for the comments which improved the paper.

## 2. Preliminaries

In this section, we recall and summarize some technical tools. First, we show that a cusp can be pushed through an indefinite fold arc as in Figure 2:

###### Lemma 2.1 (Moving cusps).

Suppose that in a neighborhood of a point the Stein factorization of a map $f$ is given by Figure 2(a). Then $f$ can be deformed in this neighborhood to a map $f'$ so that the Stein factorization of $f'$ is as the diagram of Figure 2(b).

###### Proof.

Suppose $c$ is the cusp singular point and $A$ is the indefinite fold arc at hand. Let $y$ be a point on the other side of $A$ in $W_f$. Connect $q_f(c)$ and $y$ by an embedded arc.
Connect and by an embedded arc . Then there is an arc such that , starts at and and do not intersect. By using the technique of [Le65] we can now deform in a small tubular neighborhood of to achieve the claimed map . Note that during this move one singular point of type (d) appears. ∎

An analogous statement holds if we move a cusp from a -sheeted region to a -sheeted region. According to the next result, two cusps can be eliminated as in Figure 3:

###### Lemma 2.2 (Eliminating cusps).

Suppose that in a neighborhood of a point the Stein factorization of a map is given by Figure 3(a). Then can be deformed in this neighborhood to a map so that the Stein factorization of is as the diagram of Figure 3(b).

###### Proof.

This statement is the elimination in [Le65, pages 285–295] for -dimensional source manifolds. ∎

Recall that if is a stable map and denotes its singular set, then is a generic immersion with cusps, i.e. if denotes the set of cusp points, then is a generic immersion with finitely many double points and is disjoint from . The following result will be the key ingredient in our subsequent arguments for proving Theorem 1.2.

###### Lemma 2.3 (Making wrinkles).

Suppose that is a stable map and let denote an embedded closed -dimensional manifold such that is disjoint from the singular set , is a generic immersion and is a generic immersion with cusps. Let be a small tubular neighborhood of disjoint from and fix an identification of with the normal bundle of . Let be a non-zero section such that for any . Then is homotopic to a smooth stable map such that

1. outside ,
2. the singular set of is ,
3. has indefinite fold singularities along ,
4. has definite fold singularities along ,
5. ,
6. is an immersion parallel to and
7. if for a double point of the two points in lie in the same connected component of the fiber , then the double point of corresponds to a singularity of type (d).

###### Proof.

We perform the homotopy inside fiberwise as shown by Figure 4.
Since is the trivial bundle, the homotopy of the fibers yields a homotopy of the entire . ∎

###### Remark 2.4.

If the submanifold has boundary, we can still get something similar. In this case the section should be zero at the boundary points of , and the homotopy yields a stable map with cusps at .

## 3. Proof of the results

### 3.1. Construction of the stable map on M

###### Proof of Theorem 1.2.

We will prove the theorem by presenting an algorithm which produces the map on with the desired properties. This algorithm will be given in seven steps; the first six of these steps are concerned with maps on . Let us start with a fold map with one unknotted circle as singular set such that is an embedding and is a circle for each regular point . Then the Stein factorization of is a disk together with its embedding into . By cutting out the interior of a sufficiently small tubular neighborhood of from , we get a solid torus whose boundary is mapped into by as a circle fibration over a circle parallel to , and is a trivial circle bundle . Suppose the link is disjoint from . Then by identifying with and with the projection onto , we get a link diagram . Now we start modifying this map . In Steps 1 through 6 we will deal with maps on , and the goal will be to obtain a map which is suitable with respect to the fixed surgery link . In particular, we aim to find a map on with the property that its restriction to any component of is an embedding into . We suppose that the modifications through Step 1, …, Step 6 happen so that all the images of the maps , …, lie completely inside the disk determined by the (unchanged) circle , . This can easily be achieved by choosing to bound a “large enough” area in and supposing that the diameter of is small.

### Step 1

Our first goal is to deform so that the resulting map has fold singularities along . Apply Lemma 2.3 to the map and the embedded -dimensional manifold , and denote the resulting stable map by .
It is a fold map, its indefinite fold singular set is and its definite fold singular set is , where is isotopic to ; for an example see Figure 5. Since is isotopic to , the integral surgery along giving can be equally performed along . Recall that doing surgery along simply means that we cut out a tubular neighborhood of the definite fold curve (which is diffeomorphic to ), and glue it back by a diffeomorphism of its boundary . If the image were an embedding of circles, then it would be easy to construct the claimed map on the -manifold given by the integral surgery. Since this is not the case in general, we need to further deform the map . Let denote the interior of the bands (one for each component of ) bounded by and in the Stein factorization . Then is immersed into by . The Stein factorizations of the maps in the next steps will be built on . Let denote the surface .

### Step 2

Now, our goal is to deform so that the Stein factorization of the resulting map has small “flappers” near at the points where is tangent to the general position vector . These “flappers” will help us to move the image of so that it will become an embedding into . First, we use Lemma 2.3 together with Remark 2.4 as follows. Let be the set of points in such that for each the direction is tangent to at . For each take a small embedded arc in a small neighborhood of in such that is an embedding parallel to . For each arc there exists an embedded arc in such that is an embedding onto . See, for example, the upper picture of Figure 6, where the small dashed arcs having cusp endpoints represent the arcs for all . Apply Lemma 2.3 and Remark 2.4 to the map and the arcs to obtain a map . The section in Lemma 2.3 is chosen so that if we project the -images of the arising new definite fold curves in to , then for each curve there is only one critical point, which is a maximum. An example of the resulting map can be seen in the upper picture of Figure 6.
Note that the deformation yielded small “flappers” in attached to along the arcs . Next, for each take small arcs in which intersect generically the previous arcs , lie in and on the “flappers” and are mapped into almost parallel to . See the new small dashed arcs in the lower picture of Figure 6. Once again, there are small arcs embedded in mapped by onto , respectively. The application of Lemma 2.3 and Remark 2.4 to these arcs provides us with a map, which we denote by . This map will have one additional flapper for every flapper of . We choose the section in Lemma 2.3 so that the -images of the arising new definite fold curves lie inward (at a point of , “inward” denotes the direction which is perpendicular to and points toward the direction where locally lies) from the arcs , respectively, in the -image of and the previous flappers. For an enlightening example, see the lower picture of Figure 6. Note that after this step new singular points of type (d) appeared. Also note that for each we have four cusp singular points in , three of which are mapped by into . We denote the set of these three cusps by . For each the -images of two of these three cusps in point to the direction . We denote the set of these two cusps by . Note that the definite fold curves in the images of the two cusps in are on opposite sides.

### Step 3

Now our goal is to obtain definite fold arcs connecting points of where had cusps. Moreover, these definite fold arcs will be mapped into parallel to the diagram . (These curves will be translated in the next step so that later they will lead to an embedding of into .) In order to reach this goal, we deform the map further by eliminating half of the cusps as follows. We proceed for each component of separately and in the same way; thus in the following we can suppose that is connected. Take a cusp which is in for an such that the entire lies to the right-hand side of its tangent at .
By going along the band in in the direction to which the -image of this cusp points, we reach another cusp in for some at the next -tangency of . If this cusp does not belong to , then it is possible to apply Lemma 2.2 and eliminate these two cusps, since they are in the position of Figure 3. Then we continue by taking the cusp in whose Stein factorization is folded inward. If the cusp does belong to , then we choose that cusp from which can be used to eliminate (it is easy to see that this is exactly the cusp in whose Stein factorization is folded inward), we eliminate them, and then we continue by taking the cusp belonging to . This procedure goes all along the band , meets all and eliminates half of the cusps. After finishing this process, we obtain a stable map, which we denote by ; cf. Figure 7 for an example. The cusp elimination results in new definite fold curves whose -image is an immersion, and which have double points near the crossings of the diagram . In the next step we will deform so that the double points of these new curves will be localized near the images of the remaining cusps.

### Step 4

Now our goal is to deform to a map such that the definite fold arcs obtained in the previous step will be mapped into far from the diagram . (Informally, we will “lift” some of the arcs in the direction of .) Moreover, the immersion of these definite fold arcs into will have double points only near some cusps of . This brings us closer to the original goal of having a map which embeds a link isotopic to into the plane. The cusp eliminations above affect only small tubular neighborhoods of curves connecting cusps in . Denote by the new definite fold arcs which appear in these tubular neighborhoods after the eliminations. Note that by the algorithm above, the arcs are mapped into so that by an elementary deformation they can be moved “upward” in the direction of , see Figure 7.
So we further deform to get a stable map denoted by as indicated in Figure 8: as shown by the picture, the arcs are “lifted”. In fact, we deform : we move the top of the “flappers” corresponding to the -curves of Step 2 and the -image of the curves in the direction of and far from . We proceed for each component of separately and in the same way; thus in the following we can suppose that is connected. First we choose a point such that the entire lies to the right-hand side of its tangent at . Then, by walking along the band starting from , we deform the flappers and the curves to be mapped into the plane as a “zigzag” far away from the diagram . More precisely, consider the coordinate system in with origin and coordinate axes and , respectively, where denotes the vector obtained by rotating clockwise by degrees. By extending the -image of the flappers in the direction of , deform the -image of the curves so that by going along between the points , where and , the corresponding component of the curve is mapped into a small tubular neighborhood of a line with slope for . Finally, arrange the last component of starting with slope and ending at the first (extended) flapper belonging to , see Figure 8. Note that as a result the double points of the immersion of the deformed curves are in a small neighborhood of the cusps mapped close to the tops of the flappers.

### Step 5

In this step, we modify the stable map so that the cusps of the resulting map will be easy to eliminate in the next step. Let be a line perpendicular to located near , separating it from the other parts moved in the direction of in Step 4, as indicated in Figure 8. Now, we cut the -complex (recall that denotes , see Step 1) along the -preimage of the line ; thus we obtain the decomposition

W_{f_4} = A ∪_{f̄_4^{-1}(l) ∩ (W_{f_4} − B′)} A′,

where denotes the -dimensional CW complex containing and denotes the closure of . Then is a -manifold with boundary. Let us denote the -complex by .
In order to visualize in Figure 8, we suppose that the cutting of along is a little bit perturbed and thus the bold -complex in Figure 8 represents . Before proceeding further, we need a better understanding of the -preimages of the sets appearing in the above decomposition. The preimage is clearly diffeomorphic to for a link . The following statements show much more about . It is easy to see that the numbers of components of and are equal. However, we have the stronger

###### Lemma 3.1.

A longitudinal curve in is isotopic to .

###### Proof.

The -complex decomposes as a union of -cells: some of them (which we depict as “small -cells” in Figure 8) are attached at one of their endpoints to the union of the other 1-cells; we denote these small cells by for . Others are attached by both of their endpoints. Let denote the -complex . Then the PL embedding is isotopic to the subcomplex of formed by the arcs of type (b) in the open bands connecting the singular points of type (d) in . Furthermore, the subcomplex is isotopic to . Take a small closed regular neighborhood of . Then is naturally a -bundle over . The boundary of in is a -manifold isotopic to , and we will denote it by . Clearly is diffeomorphic to . Note that any section of is isotopic to . The isotopy between and and the isotopy between and can be chosen easily so that they give a PL embedding such that and correspond to and , respectively. For , let denote small regular neighborhoods of the singular points of type (d) located near the cusp points in in ; such a and the restriction can be seen in Figure 1(d). Then the intersection consists of a union of disks, which will be denoted by . First, observe that for each there exists a disk embedded into in whose boundary is mapped by homeomorphically onto the boundary , i.e. is a lifting of . To see this, consider the -manifold for each .
By [Le85] the manifold is diffeomorphic to , where is a disk with three holes and it is mapped by into as we can see in Figure 9(a). Each disk can be located in essentially four ways; for example, the lower picture of Figure 9(b) shows the disk for the leftmost non-simple singularity crossing of type (d) in Figure 8. We get on the picture by cutting out the two shaded areas from the -complex . It is easy to see in the upper picture of Figure 9(b) how to put the disk into . The other three possibilities for the location of a disk in and the disk in can be described in a similar way. Now observe that can be lifted to extending because of the following. First, the regular neighborhoods of the singular points of type (c) in (see Figure 1(c)) intersect in disks which can be lifted to . Then the intersection of the small regular neighborhoods of the singular curves of type (b) and can be lifted as well, since there is no constraint for the lift at the regular points of . Finally observe that the rest of intersects only in areas of non-singular points which are attached to the boundary of , so the previous lifts extend over the entire . Hence we obtain an embedding with and corresponding to lifts of and , respectively. Thus we obtain an isotopy between a longitude of and a lift of . The fact that any lift of is isotopic to finishes the proof. ∎

###### Lemma 3.2.

The preimage is isotopic to a regular neighborhood of .

###### Proof.

It is enough to show that is diffeomorphic to , naturally extending the structure on its boundary, since by Lemma 3.1 the union of tori contains a longitude isotopic to . Moreover, it is enough to show that the -preimage of the part of homeomorphic to the CW complex in Figure 10 is diffeomorphic to , where the -preimage of the two vertical edges on the right-hand side of the -complex of Figure 10 corresponds to .
Clearly the -preimage of the two vertical edges on the right-hand side is diffeomorphic to , since is a circle for any lying in the two vertical edges except if is one of the two top ends. If is one of the two top ends, then is one point, since it is a definite fold singularity. The -preimage of the backward sheet in Figure 10 is diffeomorphic to minus for an interval . The -preimage of the forward sheet is diffeomorphic to . ∎

###### Corollary 3.3.

Any longitudinal curve in is isotopic to .

In order to obtain the map , we modify the map outside a neighborhood of as shown by Figure 11: our goal is to have the arrangement that if for a cusp singularity the point is connected in to by a -cell mapped into parallel to and corresponding to an indefinite fold curve, then a definite fold curve should connect to another cusp with the same property for . Thus we obtain a map such that is isotopic to a regular neighborhood of by the same argument as in Lemma 3.2. Also coincides with and coincides with in a neighborhood of . We arrange the cusps of in to form pairs as follows. In , sheets are attached to along arcs of type (b) (possibly containing points of type (c) at some endpoints). Walking along the bands and restricting ourselves to the intersection of the sheets and , we have that every sheet contains a pair of cusps and every second sheet contains a singular arc of type (a) connecting its pair of cusps; see Figure 11 for an example. A natural pairing is that two cusps form a pair if they are in the same sheet and they are connected by a singular arc of type (a). We refer to this pairing as -pairing. We also define another pairing: two cusps form a -pair if they are in the same sheet and they are not connected by any singular arc of type (a).

### Step 6

In this step, we eliminate the cusps of contained in . These cusps are mapped by in the direction of far from and arranged into -pairs in the previous step.
The restriction of the resulting map to a link isotopic to will be an embedding. (Hence after this step the construction of the claimed map on will be easy.) We have exactly -pairs of cusps in . Observe that for each component of one -pair can be eliminated immediately: for example, in Figure 11 the pair on the “highest” sheet is in a position suitable for elimination. In the following, we deal with the other -pairs. More concretely, we perform the deformations and the eliminations of the pairs of cusps of in as shown in Figure 12, as follows. First, by using Lemma 2.1 we move each pair of cusps having the position as in Figure 12(a) to the position as in Figure 12(b), thus creating a singularity of type (d). Then by using Lemma 2.2 we eliminate each pair of cusps, see Figures 12(b) and 12(c). The resulting map will be denoted by (see Figure 13). Notice that and coincide in a neighborhood of . The deformations above yield definite fold curves , whose image under is an embedding into as indicated in Figure 13 by the bold curve.

###### Lemma 3.4.

The link is isotopic to .

###### Proof.

By Lemma 3.1 the link is isotopic to a longitude of the union of tori . In Step we modify only inside . The subcomplex of used in the proof of Lemma 3.1 is PL-isotopic to a -dimensional PL submanifold of such that goes through the singular curves of type (a) appearing in the -pairing at the end of Step 5 and goes through the top of , i.e. the top of the -complex in Figure 11. To be more precise, in Figure 12(a) the part of connecting the two cusp endpoints of the singular arcs of type (a) is represented by a bold dashed arc and denoted by . During the moving of the pair of cusps as depicted by the arrows in Figure 12(a), is deformed to the curve represented by a bold dashed arc in Figure 12(b). This deformation gives an isotopy between some liftings to of and .
Since a part of is collinear to a singular arc of type (a), as we can see in Figure 12(a), any lifting to of is isotopic to any other lifting. Hence further deforming to , represented by the bold dashed curve in Figure 12(c), yields an isotopy between some liftings of and . Finally, changing again the lifting to of if necessary, we eliminate the pair of cusps as indicated in Figure 12(b) and deform to be identical to the type (a) singular arc appearing at the elimination in Figure 12(c). All this process gives an isotopy in between and a lifting of , hence an isotopy between and . ∎

### Step 7

As a final step, we perform the given surgeries along with the appropriate coefficients. Since is an embedding into on each component of , and consists of definite fold singular curves such that the local image of a small neighborhood of the definite fold curve is situated “outside” of the image of the definite fold curve, a map of
### The Spin of the Supermassive Black Hole in MCG-05-23-16 (1703.07182)

Sept. 6, 2018 · astro-ph.HE

We present the results of a multi-epoch and multi-instrument study of the supermassive black hole at the center of the galaxy MCG-05-23-16 aiming at the determination of its spin. We have analyzed high quality X-ray data of MCG-05-23-16 from XMM-Newton, Suzaku, and NuSTAR obtained over a period of about 10 years. We have built a double-reflection spectral model that well describes the observed spectrum based on prior results suggesting that the iron K$\alpha$ line includes both a broad component from the disk's reflection spectrum and a narrow component due to fluorescence and scattering off material by more distant matter. Our measurement of the black hole spin parameter is $a_* = 0.856\pm0.006$ (99% confidence level).

### Multi-epoch analysis of the X-ray spectrum of the active galactic nucleus in NGC 5506 (1704.03716)

June 13, 2018 · astro-ph.HE

We present a multi-epoch X-ray spectroscopy analysis of the nearby narrow-line Seyfert I galaxy NGC 5506. For the first time, spectra taken by Chandra, XMM-Newton, Suzaku, and NuSTAR, covering the 2000-2014 time span, are analyzed simultaneously, using state-of-the-art models to describe reprocessing of the primary continuum by optically thick matter in the AGN environment. The main goal of our study is determining the spin of the supermassive black hole (SMBH). The nuclear X-ray spectrum is photoelectrically absorbed by matter with column density $\simeq 3 \times 10^{22}$ cm$^{-2}$. A soft excess is present at energies lower than the photoelectric cut-off. Both photo-ionized and collisionally ionized components are required to fit it. This component is constant over the time-scales probed by our data. The spectrum at energies higher than 2 keV is variable. We propose that its evolution could be driven by flux-dependent changes in the geometry of the innermost regions of the accretion disk.
The black hole spin in NGC 5506 is constrained to be $0.93\pm0.04$ at the 90% confidence level for one interesting parameter.

• We present results from the Hitomi X-ray observation of a young composite-type supernova remnant (SNR) G21.5$-$0.9, whose emission is dominated by the pulsar wind nebula (PWN) contribution. The X-ray spectra in the 0.8-80 keV range obtained with the Soft X-ray Spectrometer (SXS), Soft X-ray Imager (SXI) and Hard X-ray Imager (HXI) show a significant break in the continuum as previously found with the NuSTAR observation. After taking into account all known emissions from the SNR other than the PWN itself, we find that the Hitomi spectra can be fitted with a broken power law with photon indices of $\Gamma_1=1.74\pm0.02$ and $\Gamma_2=2.14\pm0.01$ below and above the break at $7.1\pm0.3$ keV, which is significantly lower than the NuSTAR result ($\sim9.0$ keV). The spectral break cannot be reproduced by time-dependent particle injection one-zone spectral energy distribution models, which strongly indicates that a more complex emission model is needed, as suggested by recent theoretical models. We also search for narrow emission or absorption lines with the SXS, and perform a timing analysis of PSR J1833$-$1034 with the HXI and SGD. No significant pulsation is found from the pulsar. However, unexpectedly, narrow absorption line features are detected in the SXS data at 4.2345 keV and 9.296 keV with a significance of 3.65 $\sigma$. While the origin of these features is not understood, their mere detection opens up a new field of research and was only possible with the high resolution, sensitivity and ability to measure extended sources provided by an X-ray microcalorimeter.
A series of four observations were carried out, with a total effective exposure time of 338 ks and covering a central region $\sim7'$ in diameter. The SXS was operated with an energy resolution of $\sim$5 eV (full width at half maximum) at 5.9 keV. Not only fine structures of K-shell lines in He-like ions but also transitions from higher principal quantum numbers are clearly resolved from Si through Fe. This enables us to perform temperature diagnostics using the line ratios of Si, S, Ar, Ca, and Fe, and to provide the first direct measurement of the excitation temperature and ionization temperature in the Perseus cluster. The observed spectrum is roughly reproduced by a single temperature thermal plasma model in collisional ionization equilibrium, but detailed line ratio diagnostics reveal slight deviations from this approximation. In particular, the data exhibit an apparent trend of increasing ionization temperature with increasing atomic mass, as well as small differences between the ionization and excitation temperatures for Fe, the only element for which both temperatures can be measured. The best-fit two-temperature models suggest a combination of 3 and 5 keV gas, which is consistent with the idea that the observed small deviations from a single temperature approximation are due to the effects of projection of the known radial temperature gradient in the cluster core along the line of sight. Comparison with the Chandra/ACIS and the XMM-Newton/RGS results on the other hand suggests that additional lower-temperature components are present in the ICM but not detectable by Hitomi SXS given its 1.8--20 keV energy band. • The Hitomi SXS spectrum of the Perseus cluster, with $\sim$5 eV resolution in the 2-9 keV band, offers an unprecedented benchmark of the atomic modeling and database for hot collisional plasmas. It reveals both successes and challenges of the current atomic codes. 
The latest versions of AtomDB/APEC (3.0.8), SPEX (3.03.00), and CHIANTI (8.0) all provide reasonable fits to the broad-band spectrum, and are in close agreement on best-fit temperature, emission measure, and abundances of a few elements such as Ni. For the Fe abundance, the APEC and SPEX measurements differ by 16%, which is 17 times higher than the statistical uncertainty. This is mostly attributed to the differences in adopted collisional excitation and dielectronic recombination rates of the strongest emission lines. We further investigate and compare the sensitivity of the derived physical parameters to the astrophysical source modeling and instrumental effects. The Hitomi results show that an accurate atomic code is as important as the astrophysical modeling and instrumental calibration aspects. Substantial updates of atomic databases and targeted laboratory measurements are needed to get the current codes ready for the data from the next Hitomi-level mission. • We present Hitomi observations of N132D, a young, X-ray bright, O-rich core-collapse supernova remnant in the Large Magellanic Cloud (LMC). Despite a very short observation of only 3.7 ks, the Soft X-ray Spectrometer (SXS) easily detects the line complexes of highly ionized S K and Fe K with 16-17 counts in each. The Fe feature is measured for the first time at high spectral resolution. Based on the plausible assumption that the Fe K emission is dominated by He-like ions, we find that the material responsible for this Fe emission is highly redshifted at ~800 km/s compared to the local LMC interstellar medium (ISM), with a 90% credible interval of 50-1500 km/s if a weakly informative prior is placed on possible line broadening. This indicates (1) that the Fe emission arises from the supernova ejecta, and (2) that these ejecta are highly asymmetric, since no blue-shifted component is found. The S K velocity is consistent with the local LMC ISM, and is likely from swept-up ISM material. 
These results are consistent with spatial mapping that shows the He-like Fe concentrated in the interior of the remnant and the S tracing the outer shell. The results also show that even with a very small number of counts, direct velocity measurements from Doppler-shifted lines detected in extended objects like supernova remnants are now possible. Thanks to the very low SXS background of ~1 event per spectral resolution element per 100 ks, such results are obtainable during short pointed or slew observations with similar instruments. This highlights the power of high-spectral-resolution imaging observations, and demonstrates the new window that has been opened with Hitomi and will be greatly widened with future missions such as the X-ray Astronomy Recovery Mission (XARM) and Athena. • We report a Hitomi observation of IGR J16318-4848, a high-mass X-ray binary system with an extremely strong absorption of N_H~10^{24} cm^{-2}. Previous X-ray studies revealed that its spectrum is dominated by strong fluorescence lines of Fe as well as continuum emission. For physical and geometrical insight into the nature of the reprocessing material, we utilize the high spectroscopic resolving power of the X-ray microcalorimeter (the soft X-ray spectrometer; SXS) and the wide-band sensitivity by the soft and hard X-ray imager (SXI and HXI) aboard Hitomi. Even though photon counts are limited due to unintended off-axis pointing, the SXS spectrum resolves Fe K{\alpha_1} and K{\alpha_2} lines and puts strong constraints on the line centroid and width. The line width corresponds to the velocity of 160^{+300}_{-70} km s^{-1}. This represents the most accurate, and smallest, width measurement of this line made so far from any X-ray binary, much less than the Doppler broadening and shift expected from speeds which are characteristic of similar systems. 
Combined with the K-shell edge energy measured by the SXI and HXI spectra, the ionization state of Fe is estimated to be in the range of Fe I--IV. Considering the estimated ionization parameter and the distance between the X-ray source and the absorber, the density and thickness of the materials are estimated. The extraordinarily strong absorption and the absence of a Compton shoulder component is confirmed. These characteristics suggest reprocessing materials which are distributed in a narrow solid angle or scattering primarily with warm free electrons or neutral hydrogen. • The origin of the narrow Fe-K{\alpha} fluorescence line at 6.4 keV from active galactic nuclei has long been under debate; some of the possible sites are the outer accretion disk, the broad line region, a molecular torus, or interstellar/intracluster media. In February-March 2016, we performed the first X-ray microcalorimeter spectroscopy with the Soft X-ray Spectrometer (SXS) onboard the Hitomi satellite of the Fanaroff-Riley type I radio galaxy NGC 1275 at the center of the Perseus cluster of galaxies. With the high energy resolution of ~5 eV at 6 keV achieved by Hitomi/SXS, we detected the Fe-K{\alpha} line with ~5.4 {\sigma} significance. The velocity width is constrained to be 500-1600 km s$^{-1}$ (FWHM for Gaussian models) at 90% confidence. The SXS also constrains the continuum level from the NGC 1275 nucleus up to ~20 keV, giving an equivalent width ~20 eV of the 6.4 keV line. Because the velocity width is narrower than that of broad H{\alpha} line of ~2750 km s$^{-1}$, we can exclude a large contribution to the line flux from the accretion disk and the broad line region. Furthermore, we performed pixel map analyses on the Hitomi/SXS data and image analyses on the Chandra archival data, and revealed that the Fe-K{\alpha} line comes from a region within ~1.6 kpc from the NGC 1275 core, where an active galactic nucleus emission dominates, rather than that from intracluster media. 
Therefore, we suggest that the source of the Fe-K{\alpha} line from NGC 1275 is likely a low-covering fraction molecular torus or a rotating molecular disk which probably extends from a pc to hundreds of pc in the active galactic nucleus system. • Extending the earlier measurements reported in Hitomi collaboration (2016, Nature, 535, 117), we examine the atmospheric gas motions within the central 100~kpc of the Perseus cluster using observations obtained with the Hitomi satellite. After correcting for the point spread function of the telescope and using optically thin emission lines, we find that the line-of-sight velocity dispersion of the hot gas is remarkably low and mostly uniform. The velocity dispersion reaches maxima of approximately 200~km~s$^{-1}$ toward the central active galactic nucleus (AGN) and toward the AGN inflated north-western `ghost' bubble. Elsewhere within the observed region, the velocity dispersion appears constant around 100~km~s$^{-1}$. We also detect a velocity gradient with a 100~km~s$^{-1}$ amplitude across the cluster core, consistent with large-scale sloshing of the core gas. If the observed gas motions are isotropic, the kinetic pressure support is less than 10\% of the thermal pressure support in the cluster core. The well-resolved optically thin emission lines have Gaussian shapes, indicating that the turbulent driving scale is likely below 100~kpc, which is consistent with the size of the AGN jet inflated bubbles. We also report the first measurement of the ion temperature in the intracluster medium, which we find to be consistent with the electron temperature. In addition, we present a new measurement of the redshift to the brightest cluster galaxy NGC~1275. • Thanks to its high spectral resolution (~5 eV at 6 keV), the Soft X-ray Spectrometer (SXS) on board Hitomi enables us to measure the detailed structure of spatially resolved emission lines from highly ionized ions in galaxy clusters for the first time.
In this series of papers, using the SXS we have measured the velocities of gas motions, metallicities and the multi-temperature structure of the gas in the core of the Perseus cluster. Here, we show that when inferring physical properties from line emissivities in systems like Perseus, the resonant scattering (RS) effect should be taken into account. In the Hitomi waveband, RS mostly affects the FeXXV He$\alpha$ line ($w$) - the strongest line in the spectrum. The flux measured by Hitomi in this line is suppressed by a factor ~1.3 in the inner ~30 kpc, compared to predictions for an optically thin plasma; the suppression decreases with the distance from the center. The $w$ line also appears slightly broader than other lines from the same ion. The observed distortions of the $w$ line flux, shape and distance dependence are all consistent with the expected effect of the resonant scattering in the Perseus core. By measuring the ratio of fluxes in optically thick ($w$) and thin (FeXXV forbidden, He$\beta$, Ly$\alpha$) lines, and comparing these ratios with predictions from Monte Carlo radiative transfer simulations, the velocities of gas motions have been obtained. The results are consistent with the direct measurements of gas velocities from line broadening described elsewhere in this series, although the systematic and statistical uncertainties remain significant. Further improvements in the predictions of line emissivities in plasma models, and deeper observations with future X-ray missions will enable RS measurements to provide powerful constraints on the amplitude and anisotropy of cluster gas motions.
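The velocity measurements quoted in these abstracts come from Doppler broadening: gas with line-of-sight velocity dispersion sigma_v broadens a line of rest energy E0 into a Gaussian of width sigma_E = E0 * sigma_v / c. A minimal sketch of this conversion (the function names are ours, and the numbers are only illustrative of the Perseus values, not taken from the published fits):

```python
# Doppler relation between a line's Gaussian width and the line-of-sight
# velocity dispersion of the emitting gas: sigma_E = E0 * sigma_v / c.

C_KM_S = 299_792.458  # speed of light in km/s

def line_width_ev(e0_ev, sigma_v_kms):
    """Gaussian line width (eV) produced by velocity dispersion sigma_v."""
    return e0_ev * sigma_v_kms / C_KM_S

def velocity_dispersion_kms(e0_ev, sigma_e_ev):
    """Velocity dispersion (km/s) implied by a measured line width."""
    return C_KM_S * sigma_e_ev / e0_ev

# Fe XXV He-alpha near 6700 eV with the ~164 km/s dispersion measured in
# the Perseus core: the broadening is only a few eV, which is why the
# SXS's ~5 eV resolution was the enabling technology here.
print(round(line_width_ev(6700.0, 164.0), 2))  # 3.67 (eV)
```

The same relation run in reverse turns a measured width into a velocity, which is how the dispersions above were obtained.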
• To search for giant X-ray pulses correlated with the giant radio pulses (GRPs) from the Crab pulsar, we performed a simultaneous observation of the Crab pulsar with the X-ray satellite Hitomi in the 2 -- 300 keV band and the Kashima NICT radio observatory in the 1.4 -- 1.7 GHz band with a net exposure of about 2 ks on 25 March 2016, just before the loss of the Hitomi mission. The timing performance of the Hitomi instruments was confirmed to meet the timing requirement, and about 1,000 and 100 GRPs were simultaneously observed at the main and inter-pulse phases, respectively; we found no apparent correlation between the giant radio pulses and the X-ray emission in either the main or inter-pulse phases. All variations are within the 2 sigma fluctuations of the X-ray fluxes at the pulse peaks, and the 3 sigma upper limits of variations of main- or inter-pulse GRPs are 22\% or 80\% of the peak flux in a 0.20 phase width, respectively, in the 2 -- 300 keV band. The values become 25\% or 110\% for main or inter-pulse GRPs, respectively, when the phase width is restricted to 0.03. Among the upper limits from the Hitomi satellite, those in the 4.5-10 keV and the 70-300 keV bands are obtained for the first time, and those in other bands are consistent with previous reports. Numerically, the upper limits of main- and inter-pulse GRPs in the 0.20 phase width are about (2.4 and 9.3) $\times 10^{-11}$ erg cm$^{-2}$, respectively. No significant variability in pulse profiles implies that the GRPs originated from a local place within the magnetosphere and the number of photon-emitting particles temporally increases. However, the results do not statistically rule out variations correlated with the GRPs, because the possible X-ray enhancement may appear due to a $>0.02$\% brightening of the pulse-peak flux under such conditions. • The Crab nebula originated from a core-collapse supernova (SN) explosion observed in 1054 A.D.
When viewed as a supernova remnant (SNR), it has an anomalously low observed ejecta mass and kinetic energy for an Fe-core collapse SN. Intensive searches were made for a massive shell that solves this discrepancy, but none has been detected. An alternative idea is that SN 1054 was an electron-capture (EC) explosion with an explosion energy lower by an order of magnitude than in Fe-core collapse SNe. In the X-rays, imaging searches were performed for the plasma emission from the shell in the Crab outskirts to set a stringent upper limit to the X-ray emitting mass. However, the extreme brightness of the source hampers access to its vicinity. We thus employed a spectroscopic technique using the X-ray micro-calorimeter onboard the Hitomi satellite. By exploiting its superb energy resolution, we set an upper limit for emission or absorption features from yet undetected thermal plasma in the 2-12 keV range. We also re-evaluated the existing Chandra and XMM-Newton data. By assembling these results, a new upper limit was obtained for the X-ray plasma mass of $\lesssim 1\,M_\odot$ for a wide range of assumed shell radius, size, and plasma temperature, both in and out of collisional equilibrium. To compare with the observation, we further performed hydrodynamic simulations of the Crab SNR for two SN models (Fe-core versus EC) under two SN environments (uniform ISM versus progenitor wind). We found that the observed mass limit can be compatible with both SN models if the SN environment has a low density of $\lesssim 0.03$ cm$^{-3}$ (Fe core) or $\lesssim 0.1$ cm$^{-3}$ (EC) for the uniform density, or a progenitor wind density somewhat less than that provided by a mass-loss rate of $10^{-5}\,M_\odot$ yr$^{-1}$ at 20 km s$^{-1}$ for the wind environment. • ### AGN spectral states from simultaneous UV and X-ray observations by XMM-Newton(1704.07268) April 24, 2017 astro-ph.GA The supermassive black holes in active galactic nuclei (AGN) and stellar-mass black holes in X-ray binaries (XRBs) are believed to work in a similar way.
While XRBs evolve rapidly and several sources have undergone a few complete cycles from quiescence to an outburst and back, most AGN remain in the same state over periods of decades, due to their longer characteristic timescale proportional to their size. However, the study of the AGN spectral states is still possible with a large sample of sources. Multi-wavelength observations are needed for this purpose since the AGN thermal disc emission dominates in the ultraviolet energy range, while the up-scattered hot-corona emission is detected in X-rays. We compared simultaneous UV and X-ray measurements of AGN obtained by the XMM-Newton satellite. The non-thermal flux was constrained from the 2-12 keV X-ray luminosity, while the thermal disc component was estimated from the UV flux at 2900 Å. The hardness (ratio between the X-ray and UV plus X-ray luminosity) and the total luminosity were used to construct the AGN state diagrams. For sources with reliable mass measurements, the Eddington ratio was used instead of the total luminosity. The state diagrams show that the radio-loud sources have on average higher hardness, due to the lack of the thermal disc emission in the UV band, and have flatter intrinsic X-ray spectra. In contrast, the sources with high luminosity and low hardness are radio-quiet AGN. The hardness-Eddington ratio diagram reveals that the average radio-loudness is stronger for low-accreting sources, while it decreases when the accretion rate is close to the Eddington limit. Our results indicate that the general properties of AGN accretion states are similar to those of X-ray binaries. This suggests that the AGN radio dichotomy of radio-loud and radio-quiet sources can be explained by the evolution of the accretion states. • ### The X-ray variability of Seyfert 1.8/1.9 galaxies(1703.05250) March 15, 2017 astro-ph.GA Seyfert 1.8/1.9 are sources showing weak broad H-alpha components in their optical spectra.
We aim at testing whether Seyfert 1.8/1.9 have similar properties at UV and X-ray wavelengths to Seyfert 2. We use the 15 Seyfert 1.8/1.9 in the Veron Cetty and Veron catalogue with public data available from the Chandra and/or XMM-Newton archives at different dates, with timescales between observations ranging from days to years. Our results are homogeneously compared with a previous work using the same methodology applied to a sample of Seyfert 2 (Hernandez-Garcia et al. 2015). X-ray variability is found in all 15 nuclei over the aforementioned ranges of timescales. The main variability pattern is related to intrinsic changes in the sources, which are observed in ten nuclei. Changes in the column density are also frequent, as they are observed in six nuclei, and variations at soft energies, possibly related to scattered nuclear emission, are detected in six sources. X-ray intraday variations are detected in six out of the eight studied sources. Variations at UV frequencies are detected in seven out of nine sources. A comparison between the samples of Seyfert 1.8/1.9 and 2 shows that, even if the main variability pattern is due to intrinsic changes of the sources in the two families, these nuclei exhibit different variability properties in the UV and X-ray domains. In particular, variations in the broad X-ray band on short time-scales (days/weeks), and variations in the soft X-rays and UV on long time-scales (months/years) are detected in Seyfert 1.8/1.9 but not in Seyfert 2. Overall, we suggest that optically classified Seyfert 1.8/1.9 should be kept separated from Seyfert 2 galaxies in UV/X-ray studies of the obscured AGN population because their intrinsic properties might be different. • High-resolution X-ray spectroscopy with Hitomi was expected to resolve the origin of the faint unidentified E=3.5 keV emission line reported in several low-resolution studies of various massive systems, such as galaxies and clusters, including the Perseus cluster. 
We have analyzed the Hitomi first-light observation of the Perseus cluster. The emission line expected for Perseus based on the XMM-Newton signal from the large cluster sample under the dark matter decay scenario is too faint to be detectable in the Hitomi data. However, the previously reported 3.5 keV flux from Perseus was anomalously high compared to the sample-based prediction. We find no unidentified line at the reported high flux level. Taking into account the XMM measurement uncertainties for this region, the inconsistency with Hitomi is at a 99% significance for a broad dark-matter line and at 99.7% for a narrow line from the gas. We do not find anomalously high fluxes of the nearby faint K line or the Ar satellite line that were proposed as explanations for the earlier 3.5 keV detections. We do find a hint of a broad excess near the energies of high-n transitions of S XVI (E=3.44 keV rest-frame) -- a possible signature of charge exchange in the molecular nebula and another proposed explanation for the unidentified line. While its energy is consistent with XMM pn detections, it is unlikely to explain the MOS signal. A confirmation of this interesting feature has to wait for a more sensitive observation with a future calorimeter experiment. • ### IACHEC Cross-Calibration of Chandra, NuSTAR, Swift, Suzaku, and XMM-Newton with 3C 273 and PKS 2155-304(1609.09032) Sept. 28, 2016 astro-ph.IM On behalf of the International Astronomical Consortium for High Energy Calibration (IACHEC), we present results from the cross-calibration campaigns in 2012 on 3C 273 and in 2013 on PKS 2155-304 between the then active X-ray observatories Chandra, NuSTAR, Suzaku, Swift and XMM-Newton. We compare measured fluxes between instrument pairs in two energy bands, 1-5 keV and 3-7 keV, and calculate an average cross-normalization constant for each energy range.
We review known cross-calibration features and provide a series of tables and figures to be used for evaluating cross-normalization constants obtained from other observations with the above mentioned observatories. • Clusters of galaxies are the most massive gravitationally-bound objects in the Universe and are still forming. They are thus important probes of cosmological parameters and a host of astrophysical processes. Knowledge of the dynamics of the pervasive hot gas, which dominates in mass over stars in a cluster, is a crucial missing ingredient. It can enable new insights into mechanical energy injection by the central supermassive black hole and the use of hydrostatic equilibrium for the determination of cluster masses. X-rays from the core of the Perseus cluster are emitted by the 50 million K diffuse hot plasma filling its gravitational potential well. The Active Galactic Nucleus of the central galaxy NGC1275 is pumping jetted energy into the surrounding intracluster medium, creating buoyant bubbles filled with relativistic plasma. These likely induce motions in the intracluster medium and heat the inner gas preventing runaway radiative cooling; a process known as Active Galactic Nucleus Feedback. Here we report on Hitomi X-ray observations of the Perseus cluster core, which reveal a remarkably quiescent atmosphere where the gas has a line-of-sight velocity dispersion of 164+/-10 km/s in a region 30-60 kpc from the central nucleus. A gradient in the line-of-sight velocity of 150+/-70 km/s is found across the 60 kpc image of the cluster core. Turbulent pressure support in the gas is 4% or less of the thermodynamic pressure, with large scale shear at most doubling that estimate. We infer that total cluster masses determined from hydrostatic equilibrium in the central regions need little correction for turbulent pressure. • ### Systematic Uncertainties in the Spectroscopic Measurements of Neutron-Star Masses and Radii from Thermonuclear X-ray Bursts. III. 
Absolute Flux Calibration(1501.05330) July 12, 2016 astro-ph.IM, astro-ph.HE Many techniques for measuring neutron star radii rely on absolute flux measurements in the X-rays. As a result, one of the fundamental uncertainties in these spectroscopic measurements arises from the absolute flux calibrations of the detectors being used. Using the stable X-ray burster, GS 1826-238, and its simultaneous observations by Chandra HETG/ACIS-S and RXTE/PCA as well as by XMM-Newton EPIC-pn and RXTE/PCA, we quantify the degree of uncertainty in the flux calibration by assessing the differences between the measured fluxes during bursts. We find that the RXTE/PCA and the Chandra gratings measurements agree with each other within their formal uncertainties, increasing our confidence in these flux measurements. In contrast, XMM-Newton EPIC-pn measures 14.0$\pm$0.3 % less flux than the RXTE/PCA. This is consistent with the previously reported discrepancy with the flux measurements of EPIC-pn, compared to EPIC-MOS1, MOS2 and ACIS-S detectors. We also show that any intrinsic time dependent systematic uncertainty that may exist in the calibration of the satellites has already been implicitly taken into account in the neutron star radius measurements. • ### X-ray properties of the Youngest Radio Sources and their Environments(1603.00947) April 14, 2016 astro-ph.GA, astro-ph.HE We present the results of the first X-ray study of a sample of 16 young radio sources classified as Compact Symmetric Objects (CSOs). We observed six of them for the first time in X-rays using {\it Chandra}, re-observed four with the previous {\it XMM-Newton} or {\it Beppo-SAX} data, and included six others with the archival data. All the sources are nearby, $z<1$, with the age of their radio structures ($<3000$~years) derived from the hotspots advance velocity. Our results show the heterogeneous nature of the CSOs, indicating a complex environment associated with young radio sources.
The sample covers a range in X-ray luminosity, $L_{2-10\,\rm keV} \sim 10^{41}$-$10^{45}$\,erg\,s$^{-1}$, and intrinsic absorbing column density of $N_H \simeq 10^{21}$--10$^{22}$\,cm$^{-2}$. In particular, we detected extended X-ray emission in 1718$-$649; a hard photon index of $\Gamma \simeq 1$ in 2021$+$614 and 1511$+$0518 consistent with either a Compton thick absorber or non-thermal emission from compact radio lobes, and in 0710$+$439 an ionized iron emission line at $E_{rest}=(6.62\pm0.04)$\,keV and EW $\sim 0.15-$1.4\,keV, and a decrease by an order of magnitude in the 2-10 keV flux since the 2008 {\it XMM-Newton} observation in 1607$+$26. We conclude that our pilot study of CSOs provides a variety of exceptional diagnostics and highlights the importance of deep X-ray observations of large samples of young sources. This is necessary in order to constrain theoretical models for the earliest stage of radio source evolution and study the interactions of young radio sources with the interstellar environment of their host galaxies. • ### Warm Absorbers in X-rays (WAX), a comprehensive high resolution grating spectral study of a sample of Seyfert Galaxies: II. Warm Absorber dynamics and feedback to galaxies(1601.06369) Jan. 24, 2016 astro-ph.GA, astro-ph.HE This paper is a sequel to the extensive study of warm absorber (WA) in X-rays carried out using high resolution grating spectral data from XMM-Newton satellite (WAX-I). Here we discuss the global dynamical properties as well as the energetics of the WA components detected in the WAX sample. The slope of WA density profile ($n\propto r^{-\alpha}$) estimated from the linear regression slope of ionization parameter $\xi$ and column density $N_H$ in the WAX sample is $\alpha=1.236\pm 0.034$. We find that the WA clouds possibly originate as a result of photo-ionised evaporation from the inner edge of the torus (torus wind). 
They can also originate in the cooling front of the shock generated by faster accretion disk outflows, the ultra-fast outflows (UFO), impinging onto the interstellar medium or the torus. The acceleration mechanism for the WA is complex and neither radiatively driven wind nor MHD driven wind scenario alone can describe the outflow acceleration. However, we find that radiative forces play a significant role in accelerating the WA through the soft X-ray absorption lines, and also with dust opacity. Given the large uncertainties in the distance and volume filling factor estimates of the WA, we conclude that the kinetic luminosity $\dot{E}_k$ of WA may sometimes be large enough to yield significant feedback to the host galaxy. We find that the lowest ionisation states carry the maximum mass outflow, and the sources with higher Fe M UTA absorption ($15-17\rm \AA$) have more mass outflow rates. • ### X-ray high-resolution spectroscopy reveals feedback in a Seyfert galaxy from an ultra fast wind with complex ionization and velocity structure(1511.01165) Nov. 4, 2015 astro-ph.GA, astro-ph.HE Winds outflowing from Active Galactic Nuclei (AGNs) may carry significant amount of mass and energy out to their host galaxies. In this paper we report the detection of a sub-relativistic outflow observed in the Narrow Line Seyfert 1 Galaxy IRAS17020+4544 as a series of absorption lines corresponding to at least 5 absorption components with an unprecedented wide range of associated column densities and ionization levels and velocities in the range of 23,000-33,000 km/s, detected at X-ray high spectral resolution (E/Delta E ~1000) with the ESA's observatory XMM-Newton. The charge states of the material constituting the wind clearly indicate a range of low to moderate ionization states in the outflowing gas and column densities significantly lower than observed in highly ionized ultra fast outflows. 
We estimate that at least one of the outflow components may carry sufficient energy to substantially suppress star formation, and heat the gas in the host galaxy. IRAS17020+4544 provides therefore an interesting example of feedback by a moderately luminous AGN hosted in a spiral galaxy, a case barely envisaged in most evolution models, which often predict that feedback processes take place in massive elliptical galaxies hosting luminous quasars in a post merger phase. • ### 3C 273 with NuSTAR: Unveiling the AGN(1506.06182) Sept. 3, 2015 astro-ph.GA, astro-ph.HE We present results from a 244\,ks \textit{NuSTAR} observation of 3C\,273 obtained during a cross-calibration campaign with the \textit{Chandra}, \textit{INTEGRAL}, \textit{Suzaku}, \textit{Swift}, and \textit{XMM-Newton} observatories. We show that the spectrum, when fit with a power-law model using data from all observatories except \textit{INTEGRAL} over the 1--78\,keV band, leaves significant residuals in the \textit{NuSTAR} data between 30--78\,keV. The \textit{NuSTAR} 3--78\,keV spectrum is well-described by an exponentially cutoff power-law ($\Gamma = 1.646 \pm 0.006$, E$_\mathrm{cutoff} = 202_{-34}^{+51}$\,keV) with a weak reflection component from cold, dense material. There is also evidence for a weak ($EW = 23 \pm 11$ eV) neutral iron line. We interpret these features as arising from coronal emission plus reflection off an accretion disk or distant material. Beyond 80\,keV \textit{INTEGRAL} data show clear excess flux relative to an extrapolation of the AGN model fit to \textit{NuSTAR}. This high-energy power-law is consistent with the presence of a beamed jet, which begins to dominate over emission from the inner accretion flow at 30-40 keV. Modeling the jet locally (in the \textit{NuSTAR} + \textit{INTEGRAL} band) as a power-law, we find the coronal component is fit by $\Gamma_\mathrm{AGN} = 1.638 \pm 0.045$, $E_\mathrm{cutoff} = 47 \pm 15$\,keV, and jet photon index by $\Gamma_\mathrm{jet} = 1.05 \pm 0.4$.
We also consider \textit{Fermi}/LAT observations of 3C\,273 and here the broad-band spectrum of the jet can be described by a log-parabolic model, peaking at $\sim 2$\,MeV. Finally, we investigate the spectral variability in the \textit{NuSTAR} band and find an inverse correlation between flux and $\Gamma$. • ### XMM-Newton Observations of Three Interacting Luminous Infrared Galaxies(1211.1674) July 3, 2015 astro-ph.CO We investigate the X-ray properties of three interacting luminous infrared galaxy systems. In one of these systems, IRAS 18329+5950, we resolve two separate sources. A second, IRAS 20550+1656, and third, IRAS 19354+4559, have only a single X-ray source detected. We compare the observed emission to PSF profiles and determine that three are extended in emission. One is compact, which is suggestive of an AGN, although all of our profiles have large uncertainties. We then model the spectra to determine soft (0.5--2 keV) and hard (2--10 keV) luminosities for the resolved sources and then compare these to relationships found in the literature between infrared and X-ray luminosities for starburst galaxies. We obtain luminosities of $\log(L_{\textrm{soft}}/\textrm{L}_{\odot}) = 7.32,\:7.06,\:7.68$ and $\log(L_{\textrm{hard}}/\textrm{L}_{\odot}) = 7.33,\: 7.07,\: 7.88$ for IRAS 18329+5950, IRAS 19354+4559, and IRAS 20550+1656, respectively. These are intermediate to two separate predictions in the literature for star-formation-dominated sources. Our highest quality spectrum of IRAS 20550+1656 suggests super-solar abundance of alpha elements at $2\sigma$ significance, with $\log(\frac{\alpha}{\alpha_{\odot}}) = [\alpha] = 0.4\pm0.2$. This is suggestive of recent enrichment with Type II supernovae, consistent with a starburst environment. The X-ray properties of the target galaxies are most likely due to starbursts, but we cannot conclusively rule out AGN. 
• ### An X-ray variable absorber within the Broad Line Region in Fairall 51(1504.04030) April 15, 2015 astro-ph.GA, astro-ph.HE Fairall 51 is a polar-scattered Seyfert 1 galaxy, a type of active galaxy believed to represent a bridge between unobscured type-1 and obscured type-2 objects. Fairall 51 has shown complex and variable X-ray absorption, but only little is known about its origin. In our research, we observed Fairall 51 with the X-ray satellite Suzaku in order to constrain a characteristic time-scale of its variability. We performed timing and spectral analysis of four observations separated by 1.5, 2 and 5.5 day intervals. We found that the 0.5-50 keV broadband X-ray spectra are dominated by a primary power-law emission (with the photon index ~ 2). This emission is affected by at least three absorbers with different ionisations (log(xi) ~ 1-4). The spectrum is further shaped by a reprocessed emission, possibly coming from two regions -- the accretion disc and a more distant scattering region. The accretion disc emission is smeared by the relativistic effects, from which we measured the spin of the black hole as a ~ 0.8 (±0.2). We found that most of the spectral variability can be attributed to the least ionised absorber, whose column density changed by a factor of two between the first (highest-flux) and the last (lowest-flux) observation. A week-long scale of the variability indicates that the absorber is located at the distance ~ 0.05 pc from the centre, i.e., in the Broad Line Region. • ### Summary of the 2014 IACHEC Meeting(1412.6233) Dec. 19, 2014 astro-ph.IM We present the main results of the 9th meeting of the International Astronomical Consortium for High Energy Calibration (IACHEC), held in Warrenton (Virginia) in May 2014.
Over 50 scientists directly involved in the calibration of operational and future high-energy missions gathered during 3.5 days to discuss the status of the X-ray payloads inter-calibration, as well as possible ways to improve it. Sect. 2 of this Report summarises our current understanding of the energy-dependent inter-calibration status.
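At its simplest, the cross-normalization constant these campaigns report is an average ratio of fluxes measured by two instruments for the same source in a common band. The sketch below is a hypothetical illustration of that bookkeeping; the flux values are made up, and this is not the IACHEC procedure itself, which derives constants from joint spectral fits:

```python
# Cross-normalization constant between two instruments: the mean ratio of
# the fluxes they measure for the same source in the same energy band.

def cross_norm_constant(fluxes_a, fluxes_b):
    """Average ratio <F_A / F_B> over simultaneous measurements."""
    ratios = [fa / fb for fa, fb in zip(fluxes_a, fluxes_b)]
    return sum(ratios) / len(ratios)

# Illustrative 1-5 keV fluxes (erg cm^-2 s^-1) from two instruments
# observing the same source; a constant near 1.0 means they agree.
inst_a = [1.02e-10, 0.98e-10, 1.05e-10]
inst_b = [0.95e-10, 0.93e-10, 0.99e-10]
print(round(cross_norm_constant(inst_a, inst_b), 3))  # 1.063
```

A constant significantly different from unity, like the EPIC-pn offset quoted above, indicates an absolute-flux calibration difference between the instruments.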
https://www.physicsforums.com/threads/real-number-calculus-vs-complex-calculus.793078/
Real number calculus vs complex calculus

1. Jan 19, 2015 THE HARLEQUIN
I have started studying complex integration recently and I just can't seem to get it straight in my head. The biggest problem I am facing is this: when solving real-number integrals, the area under the curve of the function is what integration means, but I can't seem to find an analogy between this and complex integration.

2. Jan 19, 2015 Staff: Mentor

3. Jan 19, 2015 micromass
Consult the book "Visual Complex Analysis" by Needham. Also, it is a variant of path integration, so you should first be comfortable with that. Path integrals can be easily interpreted as an area under a curve. http://en.wikipedia.org/wiki/Line_integral#mediaviewer/File:Line_integral_of_scalar_field.gif

4. Jan 19, 2015 THE HARLEQUIN
I thought the complex plane is a 2-dimensional plane just like our Cartesian plane, so why can't I just take the area under the curve in the complex plane here too? And when approaching non-complex integration, why don't we integrate between any two points on the plane along an arbitrary path? (Sorry if I am wrong about everything I said; I am still a noob at complex integration.)

5. Jan 19, 2015 Svein
Yes and no. Yes, it is a 2-dimensional Cartesian plane when viewed a certain way. No, because it is a generalization of the real line when viewed another way. Example: in the 2-dimensional Cartesian plane you can define addition and subtraction of points easily, but there is no straightforward way of defining a multiplication of two points that gives a new point. In the complex plane, multiplication is defined from the beginning. As I said, the complex plane is a generalization of the real line. Complex functions, though, have stricter requirements than real functions. For a real function to be differentiable, the right- and left-hand derivatives must both exist and be equal. For a complex function, the derivatives must be equal no matter how we approach the point in question.
Such functions are called analytic, and they have several interesting properties, one of them being that the integral from one point to another does not depend on the path used. This in turn means that the integral of an analytic function along a closed curve is zero. Integration is one of the most important tools in complex analysis and the basis for several important theorems.

6. Jan 19, 2015 lavinia
If a function in the plane were real valued, then one could draw its graph in three dimensions. It would look like a surface, and it definitely makes sense to talk about the volume underneath it (not area). But the only real-valued continuous complex differentiable functions are constants, so for them computing these volumes is uninteresting. You should try to convince yourself that complex differentiable functions that are real valued are constants. It is not hard.

7. Jan 20, 2015 lavinia
There is nothing to stop you from taking the area under a curve if the function is real valued. If it is complex valued, the idea makes no sense.
Last edited: Jan 20, 2015

8. Jan 20, 2015 THE HARLEQUIN
But why doesn't it make sense when the function is complex valued?

9. Jan 20, 2015 lavinia
A complex number is not a height. A complex line integral can be thought of as two regular integrals added together (after multiplying one of them by i) to get a complex number.

10. Feb 5, 2015 mathwonk
I agree one should think in terms of path integrals, not area. Consider the path integral of -y dx/(x^2+y^2) + x dy/(x^2+y^2) around a loop missing the origin. If I remember the formula correctly, this is just dtheta, so the integral gives you the change in angle of a ray emanating from the origin as you go around this curve, i.e. 2 pi times the winding number. This can be computed in stages as actual angle changes for continuously chosen branches of the angle function theta.
In general, every complex analytic function f is locally of the form f = dg (i.e. f has a local antiderivative g), so integrating f along a path is computed from the changes in value of these locally given g's in stages along the path. The Cauchy residue formula tells you that the integral of a local quotient of analytic functions is always equivalent to just computing winding numbers and residues at the points where the denominator equals zero.
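mathwonk's dtheta integral is easy to test numerically. The sketch below (the function and names are my own, not from the thread) discretises a closed loop and sums (-y dx + x dy)/(x^2 + y^2) over the segments; dividing by 2*pi recovers the winding number:

```python
import math

def winding_number(path, n=20000):
    """Integrate dtheta = (-y dx + x dy)/(x^2 + y^2) along a closed loop.

    `path` maps t in [0, 1] to a point (x, y); the loop must avoid the origin.
    Returns the (approximate) winding number of the loop about the origin.
    """
    total = 0.0
    x0, y0 = path(0.0)
    for k in range(1, n + 1):
        x1, y1 = path(k / n)
        dx, dy = x1 - x0, y1 - y0
        # evaluate the 1-form at the segment midpoint for a stable quadrature
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        total += (-ym * dx + xm * dy) / (xm * xm + ym * ym)
        x0, y0 = x1, y1
    return total / (2 * math.pi)

# a unit circle traversed twice winds twice around the origin
loop = lambda t: (math.cos(4 * math.pi * t), math.sin(4 * math.pi * t))
print(winding_number(loop))  # close to 2
```

A loop that does not enclose the origin gives approximately 0, matching Svein's remark that closed-curve integrals vanish when no singularity is enclosed.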
https://nrich.maths.org/13040
# More Fraction Bars

##### Age 7 to 11, Challenge Level

Look at these different coloured bars. Put the bars in size order - can you do it without cutting them out?

Now focus on this bar:

This bar represents two wholes, or the number 2. You might find it easier to think of it as two bars which are each 'one whole' that have been stuck together. We are thinking about all the other coloured bars as fractions of this bar, so we are thinking about them as fractions of 2. For example, look at Bar S below:

Drawing lines helps us measure it against the black bar:

What fraction of the black bar is Bar S?

Go through each of the other coloured bars and compare them to the black bar. What fraction of 2 is each bar? Write down your ideas for each bar. For example, you could write:

Bar S is two thirds of the black bar.

or

Bar S represents $\frac{2}{3}$ of 2.

or

Bar S is $1\frac{1}{3}$.

Can you work out how we came up with these three ideas?

When you've worked out what fraction of 2 each bar represents, have a go at the next challenge here.
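The three statements about Bar S are the same fact written three ways, which exact arithmetic confirms (a quick sketch using the value of Bar S given above):

```python
from fractions import Fraction

whole = 2                           # the black bar represents the number 2
bar_s = Fraction(2, 3) * whole      # "Bar S is two thirds of the black bar"

print(bar_s)                        # 4/3, i.e. "2/3 of 2"
print(bar_s == 1 + Fraction(1, 3))  # True: the same length as 1 1/3
```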
http://stats.stackexchange.com/questions/43464/ancova-how-does-a-covariate-give-insight-when-comparing-the-mean-to-the-adjuste
# ANCOVA: How does a covariate give insight when comparing the mean to the adjusted mean?

When comparing the direction and magnitude of the difference between the original means and the adjusted means, what does the change say about the covariate?

For example: IV = Grade, DV = Test Score, Covariate = IQ Score

Am I correct in assuming that if a group's adjusted mean is higher than its original mean, the effect of the covariate (higher IQ) is that it decreases the DV (Test Score), thus resulting in the higher adjusted mean? Thanks.

Welcome to the site. I attempted to clarify your first paragraph. If I messed up, please correct it. As to your question, why not just look at the parameter estimate for the covariate? That seems much more straightforward. – Peter Flom Nov 13 '12 at 3:58

I see the parameter estimate read out in SPSS but am unclear how to interpret it. – Arctic Nov 13 '12 at 4:12

The parameter readout for the covariate is the predicted relationship between a 1-point gain on IQ (the covariate) and score (the DV). At least, unless you have coded something strangely. – Peter Flom Nov 13 '12 at 10:48

I seem to have figured it out: the direction and magnitude of the adjustment on the DV mirrors the adjustment of the covariate. Is this correct reasoning? – Arctic Nov 13 '12 at 18:16

I think so, but you are using odd language. – Peter Flom Nov 13 '12 at 22:51
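Peter Flom's point about the covariate's parameter estimate can be made concrete. A textbook form of the adjusted mean is: raw group mean minus (pooled within-group slope) times (group covariate mean minus grand covariate mean). The stdlib-only simulation below (all numbers invented for illustration) shows the direction of the adjustment:

```python
import random
from statistics import mean

random.seed(1)

# hypothetical data: two grade groups, IQ as covariate, test score as DV;
# group 1 is given a higher average IQ so the groups differ on the covariate
groups = {0: [], 1: []}
for g in (0, 1):
    for _ in range(100):
        iq = random.gauss(100 + 10 * g, 15)
        score = 50 + 0.4 * iq + 3 * g + random.gauss(0, 5)
        groups[g].append((iq, score))

grand_iq = mean(x for pts in groups.values() for x, _ in pts)

# pooled within-group slope of score on IQ (the covariate's parameter estimate)
num = den = 0.0
for pts in groups.values():
    mx = mean(x for x, _ in pts)
    my = mean(y for _, y in pts)
    num += sum((x - mx) * (y - my) for x, y in pts)
    den += sum((x - mx) ** 2 for x, _ in pts)
slope = num / den

raw = {g: mean(y for _, y in pts) for g, pts in groups.items()}
# shift each group's mean to what it would be at the grand-mean IQ
adj = {g: raw[g] - slope * (mean(x for x, _ in groups[g]) - grand_iq)
       for g in groups}
print(slope, raw, adj)
```

With a positive slope, group 0 sits below the grand-mean IQ, so its adjusted mean ends up above its raw mean, and vice versa for group 1: the direction of the adjustment reflects where the group sits on the covariate, combined with the sign of the covariate's slope.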
https://proxies-free.com/tag/points/
## probability – With what frequency should points in a 2D grid be chosen in order to have roughly $n$ points left after a time $t$?

Say I have a 2D array of $x$ by $y$ points with some default value, for generalisation we'll just say "0". I randomly select a pair of coordinates with a frequency of $f$ per second and, if the point selected is `0`, flip it to `1`. However, if the point is already `1` then do not change it.

How then, given a known $x$ and $y$ (thus a known total of $xy$ points), can I calculate a frequency $f$ that will leave me with approximately (as this is random) $n$ `0` points remaining after a set time $t$? Where $t$ is also in seconds.

For some context, I am attempting some simplistic approximation of nuclear half-life, but I'm not sure how to make use of the half-life equation for this, nor do I think that it is strictly correct to apply it, given my implementation isn't exactly true to life, picking single points at a time.

## homotopy theory – Identifying discrete points in derived hom spaces

Let M be a model category presenting an ∞-category $\mathcal{M}$, and let $f : X \to Y$ and $g : Y \to Z$ be arrows of M. Consider the following propositions:

1. The connected component of $f$ in $\mathcal{M}(X,Y)$ is contractible
2. $\mathcal{M}(X, Y) \xrightarrow{g_*} \mathcal{M}(X, Z)$ restricts to a homotopy equivalence between the connected components of $f$ and $gf$

When can these propositions be expressed in an elementary way from the model structure on M? I'm interested in the second proposition (it relates to homotopy uniqueness for properties expressed by arrows and extensions along arrows). But in the case $Z$ is fibrant, it can be reduced to the first question for $f \in \mathbf{M}_{/Z}(gf, g)$. Conversely, the first proposition is the $Z=1$ case of the second.
Given a simplicial model category, when $X$ is cofibrant and $Y,Z$ are fibrant — or a general relative category by first computing a simplicial localization — one could answer these questions by appealing to the corresponding questions about simplicial sets. However, I'm hoping there's a useful way to express these propositions somewhat more directly in terms of the model structure on M, rather than having to appeal to more elaborate constructions.

## statistics – How can I calculate expected Stunt Points per attack when FIRST dropping a d6?

Here's a tweaked version of the function from my earlier answer that should work for any number (≥ 2) of ordinary dice, with the player choosing two of them:

``````
function: stunt points for DICE:s and STUNT_DIE:n vs TARGET:n {
 if {1,2}@DICE + STUNT_DIE < TARGET { result: 0 } \ miss \
 \ it's a hit; can we choose a pair that will give us stunts? \
 if 1@DICE = STUNT_DIE { result: STUNT_DIE }
 if DICE = STUNT_DIE & 1@DICE + 2 * STUNT_DIE >= TARGET { result: STUNT_DIE }
 loop I over {1..#DICE-1} {
  if I@DICE = (I+1)@DICE & 2 * I@DICE + STUNT_DIE >= TARGET { result: STUNT_DIE }
 }
 result: 0 \ hit but no pair \
}
``````

The first line in the function (checking if the roll is a miss) is the same as in my old code, except that I'm explicitly summing only the highest two ordinary dice rolled using `{1,2}@DICE`: if those plus the stunt die don't meet the target, then no combination will. Conversely, if they do, then we'll at least get a hit, but we might or might not get any stunts. (Replacing the `result: 0` on the first line with `result: d{}` will make the code calculate the distribution of stunt points conditioned on the roll being a successful hit, i.e. as if all misses were rerolled until they hit. You could also change this line to e.g. return -1 to distinguish misses from hits with no stunts.)

Next, I'm checking whether the player might be able to choose two dice out of however many they rolled that will give them a hit with stunts.
Here, there are three possibilities, which the code checks for in this order:

1. If the highest ordinary die matches the stunt die, then simply choosing the highest two ordinary dice will give a hit with stunts. (We know it will, because we just checked that at the start of the function.)

2. Otherwise, if any ordinary die matches the stunt die and if that die plus the stunt die plus the highest roll will meet the target, then the player can choose those and get stunts. (In the code, `DICE = STUNT_DIE` compares a sequence with a number, returning true if any value in the sequence matches the number. We don't actually need to know the index of the matching die in the sequence, if any, since we know its value anyway — it's equal to the stunt die!)

3. Finally, we loop over the dice and check if any two consecutive dice in the (automatically sorted) sequence have the same value, and if so, whether that value twice plus the stunt die is enough for a hit. If so, the player can choose that pair and get stunts. (Since we know the sequence is sorted in descending order, and since this is the last possibility checked for, we could actually abort the loop early and return 0 from the function as soon as we find that `2 * I@DICE + STUNT_DIE < TARGET`, as no smaller pair can possibly give a hit either. Implementing that minor optimization is left as an exercise for the reader. 🙂)

Finally, if none of those checks succeeds, the function returns 0, indicating that the player could not get any stunts but still rolled a successful hit (choosing e.g. their top two ordinary dice plus the stunt die).

When called with 2d6 as `DICE`, this function is a drop-in replacement for the one in my earlier answer, and indeed gives the same results. What about for more dice? As we can see, as the number of dice to choose from increases, the probability of getting stunts increases.
In general, higher stunt counts are more likely than lower ones, which makes intuitive sense: the higher you roll on the stunt die, the more likely you are to hit and to be able to choose a hitting combo that includes two identical dice. However, the specific shape of the curve varies depending on the target difficulty: DC 10 above, for example, gives fairly smooth-looking plots, but DC 11 seems to favor odd numbers of stunts, leading to a more staircase-like graph.

Notably, for five or more normal dice and DC 11, the probability of getting a hit with stunts is actually slightly lower if you roll a 4 on the stunt die than if you roll a 3. (Of course you still get more stunts if you do get any, and your overall hit probability is higher too, so a higher roll on the stunt die is still better.)

## cortex prime – Can a player character avoid dying as long as they still have Plot Points?

I've read Cortex Prime and now I'm wondering whether a PC can die. The rules say that "you can spend a PP to avoid being taken out of the scene" (which I translate as dying). Does this mean that as long as a player still has PP, they can't die if they don't wish to?

## Can you use meta-magic/sorcery points twice in a turn?

Example: You use Quicken Spell to cast a leveled spell or cantrip as a Bonus Action; you then use your Action to cast a cantrip along with using Transmute Spell to change the cantrip's damage type. Is this an OK thing to do?

LMBC: You're using two meta-magic options on two different spells in one turn.

## Connecting common points on a map

I work with a private preschool – 12th-grade school. The school has several campuses, and each campus has several hundred families. We have two separate spreadsheets: one has all of the parents' home addresses and the other has all of the parents' work addresses. We're using mapping software called eSpatial (but it's limited). We can upload the spreadsheets (datasets) and map both batches of addresses.
The problem is that this doesn't really tell us very much. Ultimately, we need a way to link home and work addresses together, visually, as belonging to the same person. For example, let's say one point on a map is a parent's home address at 123 Main St. and another point on the map is the same parent's work address at 321 Sycamore Rd. We need a way to visually see that the two points belong to the same person, so we can tell who, and how far, each parent travels to work. Ideally, a line connecting the two points would be great; that would make it easy to see how far away the parent works. It would also work to hover over one point and have the corresponding home or work point highlight.

Currently, the two spreadsheets do not have a common column that links the two together (I guess a "key"), although I can create a spreadsheet that has a student ID column which would be the key between the two. Any thoughts on how to put something like this together? Hopefully, this makes sense.

## equation solving – Computing periodic points of some function

Instead of recursively determining the period points, you could first iterate the function and subsequently determine the period points. E.g. let us define the n-times iterated function `hn[{x_, y_}]` for `{b, c} == {1/100, -1}` (note, it is better to use rational numbers if computing time allows):

``````
hn[{x_, y_}] = Nest[Simplify[h[#, 1/100, -1]] &, {x, y}, n]
``````

E.g. for n = 2:

``````
h2[{x_, y_}] = Nest[Simplify[h[#, 0.01, -1]] &, {x, y}, 2]
``````

This gives the following period points:

``````
Solve[h2[{x, y}] == {x, y}, {x, y}]
``````

Up to `n == 9`, `hn` is calculated rather fast, but `h10` suddenly takes much longer. The reason is not clear to me.

## unity – How can I change the curved meeting points in my waypoints system so it will be updated in run time?

The post is a bit long but all the scripts are connected. I have a simple waypoints system.
The first script is moving along LineRenderer line/s positions : ``````using System; using System.Collections; using System.Collections.Generic; using System.Linq; using UnityEngine; public class MoveOnCurvedLines : MonoBehaviour { public LineRenderer lineRenderer; public float speed; public bool go = false; public bool moveToFirstPositionOnStart = false; public float rotSpeed; public bool random = false; public int currentCurvedLinePointIndex; private Vector3() positions; private Vector3() pos; private int index = 0; private bool goForward = true; private List<GameObject> curvedLinePoints = new List<GameObject>(); private int numofposbetweenpoints; private bool getPositions = false; int randomIndex; int curvedPointsIndex; // Start is called before the first frame update void Start() { curvedLinePoints = GameObject.FindGameObjectsWithTag("Curved Line Point").ToList(); if (curvedLinePoints != null && curvedLinePoints.Count > 0) { transform.rotation = curvedLinePoints(1).transform.rotation; } } Vector3() GetLinePointsInWorldSpace() { positions = new Vector3(lineRenderer.positionCount); //Get the positions which are shown in the inspector lineRenderer.GetPositions(positions); //the points returned are in world space return positions; } // Update is called once per frame void Update() { if (lineRenderer.positionCount > 0 && getPositions == false) { pos = GetLinePointsInWorldSpace(); numofposbetweenpoints = curvedLinePoints.Count; if (moveToFirstPositionOnStart == true) { transform.position = pos(index); } getPositions = true; } if (go == true && lineRenderer.positionCount > 0) { Move(); } var dist = Vector3.Distance(transform.position, curvedLinePoints(curvedPointsIndex).transform.position); if (dist < 0.1f) { if (curvedPointsIndex < curvedLinePoints.Count - 1) curvedPointsIndex++; currentCurvedLinePointIndex = curvedPointsIndex; } } int counter = 0; int c = 1; void Move() { Vector3 newPos = transform.position; float distanceToTravel = speed * Time.deltaTime; bool 
stillTraveling = true; while (stillTraveling) { Vector3 oldPos = newPos; newPos = Vector3.MoveTowards(oldPos, pos(index), distanceToTravel); distanceToTravel -= Vector3.Distance(newPos, oldPos); if (newPos == pos(index)) // Vector3 comparison is approximate so this is ok { // when you hit a waypoint: if (goForward) { bool atLastOne = index >= pos.Length - 1; if (!atLastOne) { index++; counter++; if (counter == numofposbetweenpoints) { c++; counter = 0; } if (c == curvedLinePoints.Count - 1) { c = 0; } } else { index--; goForward = false; } } else { // going backwards: bool atFirstOne = index <= 0; if (!atFirstOne) { index--; counter++; if (counter == numofposbetweenpoints) { c++; counter = 0; } if (c == curvedLinePoints.Count - 1) { c = 0; } } else { index++; goForward = true; } } } else { stillTraveling = false; } } transform.position = newPos; } } `````` The second script is creating the lines : ``````using System.Collections; using System.Collections.Generic; using UnityEngine; public class GenerateLines : MonoBehaviour { public GameObject linesWaypointsPrefab; public int amountOfLines = 30; public int minRandRange, maxRandRange; public bool randomPositions = false; public bool generateNewPositions = false; private List<Vector3> linesWaypoints = new List<Vector3>(); private Transform waypointsLinesParent; // Start is called before the first frame update void Awake() { waypointsLinesParent = GameObject.Find("Curved Lines").transform; if (generateNewPositions || (linesWaypoints.Count == 0 && amountOfLines > 0)) { GenerateLinesWaypoints(); } } // Update is called once per frame void Update() { } private void GenerateLinesWaypoints() { for (int i = 0; i < amountOfLines; i++) { if (randomPositions) { var randPosX = UnityEngine.Random.Range(minRandRange, maxRandRange); var randPosY = UnityEngine.Random.Range(minRandRange, maxRandRange); var randPosZ = UnityEngine.Random.Range(minRandRange, maxRandRange); if(linesWaypointsPrefab != null) { var LineWaypoint = 
Instantiate(linesWaypointsPrefab, new Vector3(randPosX, randPosY, randPosZ), Quaternion.identity, waypointsLinesParent); LineWaypoint.name = "Curved Line Point"; LineWaypoint.tag = "Curved Line Point"; } } else { if (linesWaypointsPrefab != null) { var LineWaypoint = Instantiate(linesWaypointsPrefab, new Vector3(i, i, i), Quaternion.identity, waypointsLinesParent); LineWaypoint.name = "Curved Line Point"; LineWaypoint.tag = "Curved Line Point"; } } } } } `````` And last the script that should update in real time in run time the curved line/s positions and the lines positions between each curved point : ``````using UnityEngine; using System.Collections; using System.Collections.Generic; (RequireComponent( typeof(LineRenderer) )) public class CurvedLineRenderer : MonoBehaviour { //PUBLIC public float lineSegmentSize = 0.15f; public float lineWidth = 0.1f; (Tooltip("Enable this to set a custom width for the line end")) public bool useCustomEndWidth = false; (Tooltip("Custom width for the line end")) public float endWidth = 0.1f; public bool showGizmos = true; public float gizmoSize = 0.1f; public Color gizmoColor = new Color(1,0,0,0.5f); //PRIVATE private CurvedLinePoint() linePoints = new CurvedLinePoint(0); private Vector3() linePositions = new Vector3(0); private Vector3() linePositionsOld = new Vector3(0); // Update is called once per frame public void Update () { if (ResetLineRendererPositions.hasReseted == false) { GetPoints(); SetPointsToLine(); } } public void GetPoints() { //find curved points in children linePoints = this.GetComponentsInChildren<CurvedLinePoint>(); linePositions = new Vector3(linePoints.Length); for (int i = 0; i < linePoints.Length; i++) { linePositions(i) = linePoints(i).transform.position; } } public void SetPointsToLine() { //create old positions if they dont match if( linePositionsOld.Length != linePositions.Length ) { linePositionsOld = new Vector3(linePositions.Length); } //check if line points have moved bool moved = false; for( int 
i = 0; i < linePositions.Length; i++ ) { //compare if( linePositions(i) != linePositionsOld(i) ) { moved = true; } } //update if moved if( moved == true ) { LineRenderer line = this.GetComponent<LineRenderer>(); //get smoothed values Vector3() smoothedPoints = LineSmoother.SmoothLine( linePositions, lineSegmentSize ); //set line settings line.positionCount = smoothedPoints.Length; line.SetPositions( smoothedPoints ); line.startWidth = lineWidth; line.endWidth = useCustomEndWidth ? endWidth : lineWidth; } } void OnDrawGizmosSelected() { Update(); } void OnDrawGizmos() { if( linePoints.Length == 0 ) { GetPoints(); } //settings for gizmos foreach( CurvedLinePoint linePoint in linePoints ) { linePoint.showGizmo = showGizmos; linePoint.gizmoSize = gizmoSize; linePoint.gizmoColor = gizmoColor; } } } `````` The result in the end is some cubes in this case 5 that are connected with lines and a transform that move on the positions of the lines not between the curved points(Cubes) but moving between the positions of the lines : The problem is with the last script. When the game is running and then when I’m selecting one of the curved points(Cube) and drag it around in the scene window I see it changing it’s position and the line also change but in fact it’s not changing the positions of the line and the position of the curved point in the moving script the first script. The moving platform is keep moving on the old positions as before and never moving on the new positions. In this screenshot I moved the fourth curved point(Cube) dragged it around to the right top but the platform is still moving on the old line positions of the fourth curved point(Cube) position it was before I moved it. I want to do that when I change the curved points(Cubes) and it’s changing also the lines positions that it will update the transform that should move on this positions. If I’m not mistaken the last script should do it but it’s not. 
In the last script, `hasReseted` is false all the time in `Update`:

``````
public void Update()
{
    if (ResetLineRendererPositions.hasReseted == false)
    {
        GetPoints();
        SetPointsToLine();
    }
}
``````

Even if I set `hasReseted` to true for a second and then back to false, which clears all the LineRenderer's lines and positions and regenerates them over again, the transform platform will keep moving on the old positions.

## algorithms – Finding all pairs of points with no point in between

Suppose there are $n$ points $p_1, p_2, \dots, p_n$ with color red or blue on a line. We want to find all pairs $(p_i, p_j)$ whose colors are distinct and such that there are no points between them. If there are $k$ pairs with the described property, design an algorithm with running time $O(n \log k)$ that uses the idea of divide and prune.

I think if we check all points we can solve this problem, but the running time will exceed $O(n \log k)$. I think for solving this problem we can use projective geometry duality, but I am stuck. Any help would be appreciated.

## nt.number theory – Is there a planar rational point set within which the distance of any two points is an irrational number?

I.e., could we find a subset $X \subset \mathbb{R}^2$ such that $X \subset \mathbb{Q}^2$ and that for any $x, y \in X$ the distance $|x - y|$ is an irrational number?

I'm considering the following assertion, of which I'm not sure:

Given finitely many rational points $p_1, p_2, \dots, p_n$ and an open ball $D$ in the plane, there is a rational point $x \in D$ such that $|x - p_i| \in \mathbb{R} \backslash \mathbb{Q}$ for $i = 1, 2, \dots, n$.

But this assertion amounts to the following seemingly number-theoretic problem:

Given $n$ pairs $(a_i, b_i)$ $(i = 1, 2, \dots, n)$ of positive integers such that $a_i^2 + b_i^2$ is not the square of any integer, could we find an integer $N \geq 2$ such that the integral pairs $(Na_i + 1, Nb_i)$ still satisfy the previous property (i.e. $(Na_i + 1)^2 + (Nb_i)^2$ is not the square of any integer)?
BTW, the distribution of Pythagorean triples might help.
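The number-theoretic reformulation invites a quick experiment. A brute-force sketch (the sample pairs are chosen arbitrarily) searches for a single N that works for all pairs simultaneously:

```python
from math import isqrt

def is_square(m):
    r = isqrt(m)
    return r * r == m

# sample pairs (a, b) with a^2 + b^2 not a perfect square
pairs = [(1, 1), (2, 3), (1, 2), (4, 7)]
assert all(not is_square(a * a + b * b) for a, b in pairs)

# look for one N >= 2 keeping (N*a + 1)^2 + (N*b)^2 non-square for every pair
found = next(
    (N for N in range(2, 100)
     if all(not is_square((N * a + 1) ** 2 + (N * b) ** 2) for a, b in pairs)),
    None,
)
print(found)  # the smallest such N for these pairs
```

For these particular pairs small values such as N = 2 fail (e.g. 3^2 + 4^2 = 25), but a suitable N turns up quickly; of course this is only evidence for small cases, not a proof.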
http://mathhelpforum.com/advanced-statistics/203325-finding-covariance-simple-joint-df-print.html
# Finding Covariance of simple joint DF

• September 12th 2012, 12:32 AM

A joint density function is given by:

f(x, y) = kx for 0 < x, y < 1; 0 otherwise

Find Cov(X, Y)

******

Those bounds are giving me trouble. At first I read it as 0 < x < y < 1, but I get 2 different values for k depending on whether I work in the direction of f(x) or f(y)...

I found:

f(x) = k - kx^2, which led to k = 3/2
f(y) = (ky^2)/2, which led to k = 6

I'm not sure if it's possible to have 2 different values of k like that? E(xy) led to another problem: which k to choose? So I must have done something wrong with those bounds... Thanks!!!

• September 12th 2012, 07:38 AM

harish21

Re: Finding Covariance of simple joint DF

No, you can't have 2 different values of k. You are given that $f(x,y) = k \cdot x$ for $0 < x, y < 1$. That means $0 < x < 1$ and $0 < y < 1$; x and y range between 0 and 1, but you are not given that x < y. So

$f(x) = \int_y f(x,y)\, dy = \int_0^1 kx \, dy = kx$

$f(y) = \int_x f(x,y)\, dx = \int_0^1 kx \, dx = \frac{k}{2}$

Now you can find the value of k by using the fact that $\int_0^1 f(x)\, dx = 1$ and $\int_0^1 f(y)\, dy = 1$. You can then plug the value of k into f(x,y) and integrate to see that it integrates to 1.

• September 12th 2012, 10:50 AM
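Once k is pinned down by normalisation (both marginals give k = 2), the covariance follows. A crude midpoint-rule check (not part of the thread) confirms the normalisation and shows Cov(X, Y) = 0, which is unsurprising: f(x, y) = 2x factorises into a product of marginals, so X and Y are independent.

```python
# midpoint-rule check of f(x, y) = k*x on the unit square, with k = 2
N = 400
h = 1.0 / N
k = 2.0  # normalisation: integral of k*x over the square is k/2, so k = 2

total = ex = ey = exy = 0.0
for i in range(N):
    x = (i + 0.5) * h
    for j in range(N):
        y = (j + 0.5) * h
        w = k * x * h * h   # probability mass of this little cell
        total += w
        ex += x * w         # accumulates E[X]
        ey += y * w         # accumulates E[Y]
        exy += x * y * w    # accumulates E[XY]

cov = exy - ex * ey
print(total, ex, ey, cov)  # total ~ 1, E[X] ~ 2/3, E[Y] ~ 1/2, Cov ~ 0
```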
https://a.tellusjournals.se/articles/10.3402/tellusa.v66.21395/
# Impact of satellite-based lake surface observations on the initial state of HIRLAM. Part II: Analysis of lake surface temperature and ice cover

## Abstract

This paper presents results from a study on the impact of remote-sensing Lake Surface Water Temperature (LSWT) observations in the analysis of lake surface state of a numerical weather prediction (NWP) model. Data assimilation experiments were performed with the High Resolution Limited Area Model (HIRLAM), a three-dimensional operational NWP model. Selected thermal remote-sensing LSWT observations provided by the Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Along-Track Scanning Radiometer (AATSR) sensors onboard the Terra/Aqua and ENVISAT satellites, respectively, were included in the assimilation. The domain of our experiments, which focussed on two winters (2010–2011 and 2011–2012), covered northern Europe. Validation of the resulting objective analyses against independent observations demonstrated that the description of the lake surface state can be improved by the introduction of space-borne LSWT observations, compared to the result of pure prognostic parameterisations or assimilation of the available limited number of in-situ lake temperature observations. Further development of the data assimilation methods and solving of several practical issues are necessary in order to fully benefit from the space-borne observations of lake surface state for the improvement of the operational weather forecast. This paper is the second part of a series of two papers aimed at improving the objective analysis of lake temperature and ice conditions in HIRLAM.

How to Cite: Pour, H.K., Rontu, L., Duguay, C., Eerola, K. and Kourzeneva, E., 2014. Impact of satellite-based lake surface observations on the initial state of HIRLAM. Part II: Analysis of lake surface temperature and ice cover. Tellus A: Dynamic Meteorology and Oceanography, 66(1), p.21395.
DOI: http://doi.org/10.3402/tellusa.v66.21395

Published on 01 Dec 2014. Accepted on 26 Aug 2014. Submitted on 12 May 2013.

## 1. Introduction

The importance of a correct description of the lake surface state in climate (Duguay et al., 2006; Brown and Duguay, 2010; Krinner and Boike, 2010; Samuelsson et al., 2010; Ngai et al., 2013) and weather prediction (Niziol, 1987; Niziol et al., 1995; Zhao et al., 2012) is well known. Particularly during freezing and melting of lakes, the surface radiative and conductive properties as well as the latent and sensible heat released from lakes to the atmosphere change dramatically, leading to a completely different surface energy balance. Recent studies (Eerola et al., 2010; Rontu et al., 2012) have demonstrated the possibility of improving the description of the lake surface state in a numerical weather prediction (NWP) model by replacing climatological information with the objective analysis of observations. A good background for the analysis, provided by the prognostic parameterisation of lake temperatures using the Freshwater Lake model (FLake) (Mironov, 2008; Mironov et al., 2010), was also shown to be important. In fact, lake parameterisations alone seem to lead to (locally) improved NWP results even without the introduction of Lake Surface Water Temperature (LSWT) observations (Eerola et al., 2010; Rontu et al., 2012). However, the application of thermodynamic lake parameterisations in NWP has its limitations. A prognostic lake parameterisation encounters difficulties over lakes with poorly defined properties due to the complex geometry or complex topography around the lake. These are often poorly resolved by the NWP model, even if the parameterisations are able to treat the lake physical processes correctly (Semmler et al., 2012; Manrique-Suñén et al., 2013; Yang et al., 2013).
The thermodynamic lake parameterisations work independently under each grid box (column), thus not taking into account horizontal exchange on or in the lakes. Hence, they are not able to handle, for example, the small-scale inhomogeneity or drifting ice on the large lakes. Objective analysis of remote-sensing observations could help the NWP model to treat the horizontal variability over lakes. A possibly improved description of the initial state of the lakes is expected to lead to an improved weather forecast, if there is a real connection between the analysed (based on observations) and predicted (seen by the atmospheric model) state of lakes. However, in present NWP models, a prognostic lake temperature parameterisation is applied independently from the analysis (Rontu et al., 2012). To our knowledge, results of the first studies aimed at bridging the gap between the analysed and predicted state of lakes in NWP models, by using the methods of the Extended Kalman Filter (Ekaterina Kurzeneva, personal communication) and nudging (Mironov, 2012, personal communication), have only recently been reported (e.g. in the third workshop on ‘Parameterization of Lakes in NWP and Climate Modelling’, http://netfam.fmi.fi/Lake12). Application of these methods requires, in addition to a good thermodynamic lake parameterisation, that the observations on lake surface state are first interpolated to the NWP model grid. Hence, the objective analysis of LSWT is their starting point. The aim of this paper is to determine if the inclusion of remote-sensing observations on LSWT can improve the analysis of lake surface state in an NWP model, compared to the description based on the thermodynamic lake parameterisation alone. By the analysed lake surface state (analysis, objective analysis), we mean here the description of LSWT and fractional ice cover over lakes at the time when each forecast cycle by the NWP model starts.
This analysis results from application of a spatialisation method such as Optimal Interpolation [OI; based on Gandin (1965)] to the observed variables over lakes. We report results from data assimilation experiments performed with the three-dimensional NWP model HIRLAM (HIgh Resolution Limited Area Model) (Undén et al., 2002; Eerola, 2013), run over a northern European domain for two winters (2010–2011 and 2011–2012). Our main attention is placed on the objective analysis of the lake surface state in winter-time conditions, over freezing and melting lakes. Our experiments focus on the use of remote-sensing observations on lakes (>6 km²) and the ways they can be introduced in the analysis. The influence of larger lakes on weather is expected to be greater than that of a multitude of smaller lakes. On the smaller lakes, there are fewer space-borne observations available because the number of pixels representative of pure open water or ice is limited by the surrounding land (i.e. by the within-pixel land surface contamination). We included in the HIRLAM analysis archived Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Along-Track Scanning Radiometer (AATSR) LSWT observations, provided by the Terra/Aqua and ENVISAT satellites, and also used MERIS ice cover observations from ENVISAT for evaluation. We compared the result of pure prognostic parameterisations to the analysis based on in-situ and space-borne LSWT observations. For validation, we used additional independent satellite observations of LSWT and lake ice cover, as well as in-situ visual observations of freeze-up and break-up dates of selected lakes. We discuss in detail the maps and time-series of observed, analysed and predicted LSWT and fractional ice cover, obtained in the different experiments, in order to understand the differences and sensitivities.
In the conclusions and outlook section, we discuss the perspectives and practical aspects of further usage of space-borne lake observations in operational NWP. This is the second part of our paper on improving the objective analysis of lake surface state in HIRLAM. The first part (Kheyrollah Pour et al., 2014) (from here onward referred to as Part I) documents the processing and evaluation of remote-sensing observations applied herein for the LSWT analysis. Our study is an extension of the work reported by Eerola et al. (2010) and Rontu et al. (2012). The main differences compared to these earlier studies lie in the extended usage of remote-sensing observations and the exclusion of climatological data in the analysis.

## 2. Observations

In this study, in-situ and remote-sensing observations on lake surface state are introduced into the surface data assimilation system and used for comparison and validation. Table 1 summarises the different observation types and their usage, discussed in this section.

### 2.1. Satellite LSWT observations

Satellite thermal infrared sensors offer a global coverage and high temporal resolution of lake temperature observations (Part I). This represents a significant advantage over in-situ observing systems that provide point measurements, often only close to the shoreline. In the present study, 70 predefined pixels were selected over 41 northern lakes (Fig. 1, large image, black dots). The selection of a limited number of pixels, instead of using all available 1 km×1 km resolution data, is a limitation which was dictated by practical reasons, and will be discussed in the concluding section. The satellite observations were used at the nearest analysis time within ±3 h when available, i.e. under cloud-free conditions over each pixel location. A detailed description of the satellite observations and of the algorithms applied for extraction and screening can be found in Part I; only a short summary is given here.

Fig. 1 (a) Location of the MODIS pixels over the northern lakes. Independent lakes are marked with orange dots. (b) Location of 27 lakes (dark blue polygons) with SYKE measurement sites in Finland. (c) Detailed view of the selected MODIS and AATSR pixels over the lakes Ladoga (left) and Onega (right).

LSWT data were derived from the MODIS sensor, which operates on NASA's Terra and Aqua Earth Observation System (EOS) satellites (http://modis.gsfc.nasa.gov). The LSWT level 3 data, referred to as UW-L3 here onwards, were generated at the University of Waterloo. These data were evaluated using ground measurements over lakes in the same study area during the open-water season (Part I) and over two large Canadian lakes (Kheyrollah Pour et al., 2012). For MODIS observations, both daytime and nighttime Terra and Aqua LSWT observations were selected in order to maximise the amount of available input data to the analysis. Data from the AATSR, onboard the European Space Agency (ESA) ENVISAT satellite, were extracted over Lake Ladoga for the same 15 pixels as for MODIS (Fig. 1, Lake Ladoga image, red squares). The Aqua and Terra satellites passed over our study area daily around 08–10 UTC and 20–01 UTC. AATSR observations were available at 06–08 UTC every third day in April 2011.

### 2.2. In-situ lake water temperature observations

Regular in-situ lake water temperature measurements are provided by the Finnish Environment Institute (Suomen Ympäristökeskus, SYKE). SYKE operates 32 regular lake and river water temperature measurement sites in Finland. The temperature of the lake water is measured every morning at 8.00 AM local time, close to shore, at 20 cm below the water surface. The measurements are either recorded automatically (13 stations) or manually, and are performed only during the ice-free season (Rontu et al., 2012). Measurements from 27 lakes (Fig. 1, upper left map), which are also used by the FMI operational HIRLAM, were included in all experiments reported in this study.
The operational Baltic Sea ice chart (Grönvall and Seinä, 2002) produced by FMI's Marine Service also provides manually processed, satellite-based observations of water temperature and ice properties over the Swedish lakes Vänern, Vättern and Mälaren. From these, pseudo-observations of LSWT have been derived for the FMI operational HIRLAM since 10 January 2011, at a few selected pixels, during the winter season approximately between 15 October and 15 May each year. In this derivation, ice fractions are converted to LSWT and ice-flag temperatures by applying the inverse of the method described in Section 3.3. These data were included in the present experiments when available, but their influence is not discussed herein.

### 2.3. Data for comparison and validation

Historical freeze-up and break-up dates from SYKE for most of the 27 Finnish lakes (Fig. 1) were used for comparison with MODIS observations and HIRLAM analysis results. These freeze-up and break-up dates are based on visual observations from shore and represent the complete freezing and melting of small lakes. For the large lakes, separate freeze-up and break-up dates for the central open waters and coastal areas may be given by SYKE. Among the lakes discussed in this study, this is the case only for Lake Inari, where we used the central open-water dates. The visual observations are made independently of the water temperature measurements. These observations are made over a larger number of lakes in Finland than was used here, and are thus available for further studies. MODIS UW-L3 LSWT observations were prepared but withheld from the HIRLAM analysis, in order to be used as independent data for comparison over Lakes Bolmen and Hjälmaren in Sweden and Lakes Valday and Kuito in Russia (orange dots in Fig. 1, coordinates shown in Table 3). MERIS-derived ice fraction observations for Lake Ladoga were utilised in this study for the month of April 2011.
The ice fraction data were produced by the Norwegian Computing Center as part of the ESA North Hydrology project (http://env-ic3-vw2k8.uwaterloo.ca:8080). MERIS was a core instrument of ESA's ENVISAT satellite platform that operated between March 2002 and April 2012.

## 3. Analysis of lake surface state

Over water bodies in HIRLAM, surface water temperature observations are treated with OI (Gandin, 1965). The methods of OI analysis of LSWT are based on those applied for sea surface temperature (SST). We summarise the method briefly here, and present our findings concerning the needs for its further development in the conclusions.

### 3.1. OI of LSWT

OI analysis, integrated into the framework of HIRLAM, is applied for SST (Undén et al., 2002). More recently, the same method has been extended to the analysis of LSWT (Eerola et al., 2010; Rontu et al., 2012). In the near-surface analysis of HIRLAM, OI is applied to spread the information from irregularly located observations to regularly located grid points for the initialisation of the next forecast cycle. This is done by correcting the background field with observations. For lakes, the background can be provided either by the previous analysis or by a short forecast. In the latter case, the background LSWT is derived from the surface temperature forecast by the lake model (FLake), which is incorporated in HIRLAM as a parameterisation scheme. Here, the evolving three-dimensional state of the atmosphere also influences the predicted state of lakes and hence the background for the LSWT analysis. A good background is especially important over lakes where observations are sparse or not available at all.
The analysis at a grid point k is determined by a linear combination of the observed departures from the background:

$$a_k = b_k + \sum_{i=1}^{N} w_{ki}\,(y_i - b_i) \qquad (1)$$

where $a_k$ is the analysis, $b_k$ the background, $w_{ki}$ are the weights given to observations $i=1,\ldots,N$, $y_i$ the observations and $b_i$ the background values interpolated to the observation points. Derivation of the weights relies on the assumption that observation and background errors are uncorrelated. In OI, the weights $w_{ki}$ in eq. (1) are determined by inverting a matrix which represents the background and observation error covariances (Daley, 1991). For the SST and LSWT analysis applied in HIRLAM, the background error covariance, which to a large extent determines the resulting analysis, is treated by modelling the autocorrelation and standard deviation of the background error separately. A Gaussian autocorrelation function is applied, which depends only on the distance between the points:

$$g(r) = \exp\left(-0.5\, r^2 / L_H^2\right) \qquad (2)$$

where $g(r)$ is the autocorrelation function, $r$ is the distance and $L_H$ is a horizontal length scale ($L_H = 80$ km). The observation and background error variances, which enter the diagonal of the matrix, are assigned prescribed constant values (we assumed an error standard deviation of 1.5°C for the observations and 1.0°C for the background). The OI analysis integrated into the NWP model differs from the stand-alone analysis approach, as applied for SST and LSWT [e.g. by the Operational Sea surface Temperature and sea and lake Ice Analysis (OSTIA) (Donlon et al., 2012; Fiedler et al., 2014)], in two essential aspects. In OSTIA, the background is always provided by the previous analysis (e.g. done on the previous day), and relaxed towards the LSWT climatology, which is taken from the ARC-lake database (Hook et al., 2012; MacCallum and Merchant, 2012).
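As a minimal illustration, the OI update of eq. (1), with weights derived from the Gaussian autocorrelation of eq. (2) and the prescribed error standard deviations, can be sketched in a few lines. This is a schematic one-dimensional sketch with hypothetical positions and values, not the HIRLAM implementation:

```python
import numpy as np

def gaussian_autocorr(r, L_H=80.0):
    """Background-error autocorrelation of eq. (2); r and L_H in km."""
    return np.exp(-0.5 * (r / L_H) ** 2)

def oi_analysis(x_grid, x_obs, y_obs, b_grid, b_obs,
                sigma_b=1.0, sigma_o=1.5, L_H=80.0):
    """OI update of eq. (1) for grid points at x_grid (1-D positions, km).

    y_obs are observations at x_obs; b_grid and b_obs are background
    values at the grid and observation points, respectively.
    """
    # Background-error covariance between observation points (N x N)
    r_oo = np.abs(x_obs[:, None] - x_obs[None, :])
    B = sigma_b ** 2 * gaussian_autocorr(r_oo, L_H)
    # Observation errors assumed uncorrelated: diagonal R
    R = sigma_o ** 2 * np.eye(len(x_obs))
    # Background-error covariance between grid and observation points (K x N)
    r_go = np.abs(x_grid[:, None] - x_obs[None, :])
    c = sigma_b ** 2 * gaussian_autocorr(r_go, L_H)
    # Weights w_ki solve (B + R) w = c; then a_k = b_k + sum_i w_ki (y_i - b_i)
    W = np.linalg.solve(B + R, c.T).T
    return b_grid + W @ (y_obs - b_obs)
```

With a single observation co-located with the grid point, the weight reduces to $\sigma_b^2/(\sigma_b^2+\sigma_o^2) \approx 0.31$, so the analysis moves the background about a third of the way towards the observation; an observation several hundred kilometres away leaves the background essentially unchanged.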
If the observations are missing for a long time or not available at all over some lakes, the climatology gets a large weight. In the case of OSTIA, a realistic lake climatology is available over the lakes of the ARC-lake database (ca. 250 lakes worldwide, around 15 over our present study area). More importantly, no climatology is able to represent the current and recent past atmospheric conditions, which basically determine the current lake temperatures. In our case, it is also possible to use the previous analysis as the background and relax towards climatology at each HIRLAM grid box where a fraction of lake is detected. However, in our case this is even more problematic, because our LSWT climatology was extrapolated to all lakes from the SST climatology, not from a lake climatology, which is unrealistic. This is why we prefer the background provided by the prognostic lake parameterisation, calculated within HIRLAM at each time step in each grid box which contains a lake fraction. Another point is that our OI method also works across lakes, sometimes interpolating LSWT observations from nearby lakes if these are close enough to have an influence. Thus, an analysed LSWT value is always available at every lake grid point of HIRLAM. In this respect, we are again not limited to a choice of pre-selected large lakes, between which OSTIA can also interpolate. In HIRLAM, special care is taken not to mix sea and lake observations in the analysis near the sea coast. However, to fully benefit from the across-lake interpolation possibility, it will be necessary to derive autocorrelation (structure) functions depending not only on the horizontal distance but also at least on the depth, and possibly on the elevation differences, within and between the lakes.

### 3.2. Quality control

In HIRLAM, quality control (QC) of the observations is performed prior to the actual analysis.
QC is done in two consecutive phases: first the observations are tested against the background, then each observation is compared to the surrounding observations. For the background check, a normalised difference $\Delta_i$ between the observed value and the background value interpolated to the observation point is calculated as:

$$\Delta_i = (y_i - b_i)^2 / (\sigma_b^2 + \sigma_o^2) \qquad (3)$$

where $\sigma_b$ and $\sigma_o$ denote the background and observation error standard deviations. If $\Delta_i$ is larger than a prescribed threshold value, the observation is rejected by the background check. The check against surrounding observations first excludes the observation to be checked, and then performs an OI analysis at this point by using the nearby observations. The difference between the analysed and the observed value, again normalised by the observation and background error standard deviations, is tested against a prescribed threshold value. It is difficult to choose optimal criteria for this threshold. In order to retain a maximum number of observations, a quite liberal approach was adopted here: the threshold was set so that only those LSWT observations which deviated from the background by more than 10°C were rejected [eq. (3)].

### 3.3. Treatment of ice fraction

In HIRLAM, a diagnostic ice fraction is derived from the analysed LSWT. Thus, neither space-borne nor in-situ ice concentration, ice thickness and ice temperature observations are directly analysed. The diagnostic ice fraction is estimated in a simple way: we assume that a lake grid square is fully ice-covered when LSWT falls below −0.5°C and fully ice-free when LSWT is above 0°C. Between these temperature thresholds, the fraction of ice changes linearly. A range from −0.5 to 0°C has been chosen to account for the variability and uncertainty of the analysed LSWT within the model resolution.
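The background check of eq. (3) and the linear LSWT-to-ice-fraction ramp can both be written compactly. The sketch below is ours (function names are hypothetical); the threshold is expressed through the 10°C departure limit quoted above, with the error standard deviations of Section 3.1:

```python
SIGMA_B = 1.0         # background error standard deviation (degC)
SIGMA_O = 1.5         # observation error standard deviation (degC)
MAX_DEPARTURE = 10.0  # reject departures from the background larger than this (degC)

def passes_background_check(y_i, b_i):
    """Background check of eq. (3): normalised squared departure vs. threshold."""
    delta_i = (y_i - b_i) ** 2 / (SIGMA_B ** 2 + SIGMA_O ** 2)
    threshold = MAX_DEPARTURE ** 2 / (SIGMA_B ** 2 + SIGMA_O ** 2)
    return delta_i <= threshold

def ice_fraction(lswt):
    """Diagnostic ice fraction from analysed LSWT (degC): fully ice-covered
    below -0.5 degC, ice-free above 0 degC, linear ramp in between."""
    if lswt <= -0.5:
        return 1.0
    if lswt >= 0.0:
        return 0.0
    return -lswt / 0.5
```

For example, an analysed LSWT of −0.25°C maps to an ice fraction of 0.5, while an observation differing from the background by 12°C would fail the check.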
A corresponding ice-flag value of −0.6°C was assigned to LSWT while creating the background for the LSWT analysis in those grid squares where the ice thickness predicted by FLake exceeded a threshold value of 1 mm. An observation ice-flag value of −1.2°C was assigned to all MODIS surface temperature values below −0.5°C over lakes. These were assumed to represent full ice cover in their surroundings. In the case of SYKE observations, the ice-flag value was given to all measurements showing 0°C water temperature. If we had instead assigned LSWT observations a missing value under the observed ice, we would have excluded from the analysis all observations representing ice conditions, thus letting the background (FLake or the previous analysis) alone determine the result. In melting and freezing conditions, removal of all information about ice would give more weight to the water observations and most probably lead to an incorrect spread of open-water information into the nearby ice-covered part of the lake. This kind of procedure, which was inherited from the SST analysis and sea ice diagnostics, represents a simplified but non-physical way of handling ice concentration. Here a single variable, namely LSWT, is taken to represent in the analysis both itself, i.e. the water temperature, and another variable, the ice cover. This is why the LSWT flag values enter the OI analysis and QC together with the real observations. However, the choice of the ice concentration versus LSWT range and of the flag values is rather arbitrary. The sensitivity of the resulting LSWT and ice cover to these choices should be studied systematically. The eventual solution of the problem could be found in the assimilation of the observed and predicted physical properties of ice, such as ice thickness (see Section 6 for discussion).

## 4. Description of the analysis-forecast system and setup of experiments

All our experiments were run in the framework of HIRLAM version 7.4 (www.hirlam.org).
This HIRLAM version incorporates the fully integrated FLake model, applied as a parameterisation scheme for the prediction of lake water, ice and snow temperatures, ice thickness and snow depth over lakes (Rontu et al., 2012). We used a model setup with a horizontal resolution of 6.8 km over a northern European experimental domain (Fig. 1), with 65 levels in the vertical between the surface and the 10 hPa level in the atmosphere. Four data assimilation-forecast cycles were run every day, starting at 00, 06, 12 and 18 UTC. For the upper-air data assimilation, a three-dimensional variational method was used. The lateral boundary conditions for the atmospheric model were provided by the fields of the European Centre for Medium-Range Weather Forecasts (ECMWF) analysis. Three initial sets of experiments were designed to study the impact of assimilated remote-sensing LSWT observations over the major northern European lakes (Table 2). In the baseline experiment TRULAK (SYKE water temperature observations, FLake parameterisations), the prognostic lake parameterisations inside the forecast model provided the background for the LSWT analysis. This follows the setup of the reference HIRLAM used for the FMI operational NWP. No satellite observations were used in the baseline experiment, just the SYKE in-situ water temperature measurements over Finland. In the second experiment, called NHFLAK (SYKE water temperature and MODIS LSWT observations, FLake parameterisations), remote-sensing LSWT observations were also included. In the last experiment, referred to as NHALAK (SYKE water temperature and MODIS LSWT observations), LSWT observations were used to correct the background provided by the previous analysis, which was relaxed towards the ‘ocean-derived’ LSWT climatology of the reference HIRLAM (Rontu et al., 2012).
AATSR observations over Lake Ladoga only were included in two additional short experiments, called NHALAA (AATSR LSWT observations) and NHFLAA (AATSR LSWT observations, FLake parameterisations), run for April 2011. SYKE in-situ water temperature observations from the Finnish lakes were included in all experiments. For the lake analysis and parameterisations, information about the lake depth and the fraction of lake in each grid box is needed. Lake depths were obtained from the lake database for NWP and climate models (Kourzeneva et al., 2012a). The fraction of lakes was taken from the HIRLAM physiography description (Undén et al., 2002). The lake fraction was originally derived for HIRLAM using the 1-km resolution Global Land Cover Characteristics (GLCC) database (Loveland et al., 2000). For the very first cycle, the prognostic inside-lake variables were initialised with a gridded lake climatology (Kourzeneva et al., 2012b). The very first LSWT analysis was replaced by the reference HIRLAM LSWT climatology when starting each of the experiment series. Note that these two climatologies are different – the first is a climatology of the FLake prognostic variables, while the second has been extrapolated from SST for the LSWT analysis only.

## 5. Results and discussion

### 5.1. Freeze-up and break-up dates

Freeze-up and break-up dates interpreted from SYKE, MODIS and MERIS observations were compared with the dates given by the HIRLAM experiments for selected representative lakes (Table 3). Lake Lappajärvi is a regular-form, medium-size and relatively shallow lake located in western Finland. SYKE water temperature measurements are available for this lake. Lakes Bolmen, Hjälmaren, Valday and Kuito, whose MODIS observations were excluded from the HIRLAM analysis, resemble Lake Lappajärvi. Lake Inari in Finnish Lapland is large, with islands and a complex coastline and bathymetry, and is also represented in the HIRLAM analysis by SYKE water temperature observations.
Over the large, deep and open Lakes Ladoga and Vänern, the break-up and freeze-up processes progress differently than over smaller lakes: ice forms, cracks and drifts depending on the wind speed and direction. However, for simplicity, only one point is chosen to illustrate the surface state of these lakes here. Coordinates of the chosen locations and the mean depth of the lakes are shown in Table 3. A few preliminary remarks related to the accuracy of the dates are necessary before the discussion:

• SYKE freeze-up and break-up dates: These dates are based on visual ground-based observations, which are independent of the SYKE water temperature measurements used by the HIRLAM analysis.

• MODIS dates: Especially during the freezing period, which is often cloudy and dark, the MODIS observations over a chosen location may be missing for several days, even weeks. During the freezing and melting periods, MODIS LSWT may oscillate from one measurement to another by several degrees, sometimes jumping to both sides of zero. Some subjective reasoning was applied when determining the dates from this information.

• HIRLAM dates: In Table 3, the dates are shown based on the OI analysis of HIRLAM, which used either the prognostic temperatures from FLake (experiments NHFLAK and TRULAK) or the previous analysis (experiment NHALAK) as background. For various reasons, the analysed temperature also has a tendency to oscillate between analysis cycles, which during the freezing and melting periods may lead to oscillation of the ice fraction. Thus, here again some subjective reasoning was needed to determine the freezing and melting dates. In some cases, a transition period of up to 3 weeks is shown to indicate the uncertainty related to this oscillation.

• MERIS ice fraction: Data were prepared for comparison in 2011 for Lake Ladoga. MERIS-derived ice fraction information was obtained from pixels, each representing an area of 300 m×300 m.
SYKE and MODIS freeze-up and break-up dates were first compared over two Finnish lakes, Lake Lappajärvi and Lake Inari. During lake melt, SYKE and MODIS dates differed from each other by less than 10 d. During freezing, the difference could be several weeks. It is possible that the MODIS LSWT observations on the selected 1 km×1 km pixels may indicate melting or freezing before the SYKE observer determines that the whole lake is unfrozen or frozen. To avoid this error, MODIS visible images (bands 7, 2 and 1) were used to make sure that the chosen pixel values represented the whole lake area correctly. The difference between SYKE and MODIS freeze-up and break-up dates, shown in Table 3 for Lake Inari and Lake Lappajärvi, was similar over the other Finnish lakes (not shown). The uncertainty of a couple of weeks in the lake melting dates derived from MODIS, compared to the in-situ data, could be due to the nature of the visual in-situ observation, as the observer cannot monitor the whole area of the lake from the shoreline. The uncertainty of the freezing dates could be up to 1 month. Over Lake Ladoga, no SYKE freeze-up and break-up date observations were available to be compared with MODIS. Melting dates interpreted from MERIS measurements in spring 2011 over Lake Ladoga (shown for pixel 9 in Table 3; see Fig. 1 for the map) seem to agree with the dates interpreted from the MODIS LSWT measurements. Thermal satellite observations from AATSR-L1B are used in the development of the MERIS lake ice products to detect cloud cover; therefore, both the MERIS ice cover and the MODIS temperature observations represent the surface only under clear-sky conditions. This limits the accuracy of the dates derived from these measurements in a similar way.
The freezing dates given by the analysis of the experiment NHFLAK (FLake+MODIS LSWT+SYKE water temperature) came in general closer to the observed dates than the dates from experiment TRULAK (in the area of the analysis domain outside Finland, where no SYKE temperature observations are available, FLake alone was used). In spring, the melting dates analysed by both TRULAK and NHFLAK were always earlier than those indicated by the MODIS observations at the selected pixels. The largest differences between the melting dates interpreted from the HIRLAM analysis and directly from the MODIS observations were more than 1 month when the analysis was determined by the FLake background alone. This was the case for TRULAK over all lakes and for NHFLAK over the independent lakes Bolmen, Hjälmaren, Kuito and Valday. Over the Finnish lakes, SYKE temperature observations were available only well after melt. Thus, during the melting period, the warm FLake background dominated over the (sparse) MODIS observations also in NHFLAK. In cases where the inclusion of MODIS observations in NHFLAK did not change the analysed state of the lakes significantly, the reasons may have been that: 1) MODIS data were seldom or not at all available for the analysis, 2) the prognostic parameterisations were good and agreed with MODIS, or 3) the difference between MODIS and FLake was so large that the observations were rejected by the QC when comparing the background and observations. Rejections were, however, uncommon in autumn and when the lakes were frozen (between the dates shown in Table 3), but became more frequent at the end of May with rising lake water temperatures after the ice melt. It is possible that the FLake background dominates in the analysis over the large lakes because the information brought by the selected MODIS pixels is simply insufficient there (see also Section 5.2).
The NHALAK experiment combined SYKE water temperature and MODIS LSWT observations with the background given by the previous analysis, which had been relaxed towards the LWST climatology. This experiment, which was run only for January–May 2012, followed the observations more closely than the prognostic experiments TRULAK and NHFLAK, but only when observations were available on the lake or close to it (i.e. when the effect of background field was small). Elsewhere, the analysis tended towards the (wrong ocean-derived) climatology, possibly resulting in a completely useless description of lake surface state [not shown in Table 3, see an example in Rontu et al. (2012)]. Over Lakes Lappajärvi and Inari, NHALAK improved the analysis so that the melting dates became closer to the SYKE temperature observation. Over Lakes Ladoga and Vänern, the dates became closer to the MODIS observations. In spring, interpretation of the point values over the large lakes may be affected by the uneven melting and drifting ice. The NHALAK melting dates over the selected lakes seem to agree with MODIS observations within about 1 week. The agreement is better than in the case of NHFLAK, whose analysis was dominated by the FLake parameterisations. Freezing dates from NHALAK were available only over a few lakes because this experiment was started in the middle of winter. ### 5.2. April 2011 comparison For visual comparison of the full-resolution satellite observations with the NWP analysis during melt, MODIS (daytime and nighttime) and AATSR (morning) LSWT, as well as MERIS ice fraction on 12 April 2011 were mapped (Fig. 2) and compared with the HIRLAM analysis and background by experiments NHFLAK and NHFLAA (Fig. 3). In April, the ice cover on Lake Ladoga started to break, which makes comparison of observations and simulations both interesting and challenging due to the moving ice on the lake. Fig. 
2 Surface temperature on 12 April 2011: (a) MODIS visible image, (b) MERIS ice fraction, (c) AATSR surface temperature (between 8 and 10 AM local time), (d) MODIS daytime surface temperature (between 10 AM and 12 PM local time) and (e) MODIS nighttime (between 10 PM and 3 AM local time). Fig. 3 HIRLAM ice fraction (0–1) on 12 April 2011, diagnosed from LSWT: (a) analysis, (b) background and (c) their differences. NHFLAK (SYKE, FLake, MODIS) at 00 UTC (upper panel) and at 12 UTC (middle panel), and NHFLAA (SYKE, MODIS) at 06 UTC (lower panel). MERIS estimation of ice fraction (Fig. 2b) agrees well with the MODIS visible image (Fig. 2a), indicating an area consisting of a mixture of ice and water (MERIS: values between 0 and 54% ice fraction) in the northeastern part and, to a lesser extent, in the southwestern part of the lake. On the northeastern part of Ladoga, MODIS daytime observations (between 08 and 10 UTC) show temperatures just above 0°C and around 2–3°C lower at nighttime (between 20 and 01 UTC) (Fig. 2d and e). The daytime MODIS observations show warmer temperatures compared to AATSR (Fig. 2c). The AATSR observations were available earlier in the morning (06–08 UTC) than MODIS. Thus, the stronger heating of the surface by solar radiation at noon may explain the difference. The ice fraction from HIRLAM (Fig. 3, left column) was derived from the analysis of LSWT (for the method, see Section 3.3), which was based on the combination of MODIS (experiment NHFLAK) or AATSR (experiment NHFLAA) observations (Fig. 4) and the background field by FLake. For comparison, the ice fraction diagnosed from the +6 h ice thickness forecast by FLake parametrisation is shown (Fig. 3, middle column). In this diagnosis, the lake within each grid square is assumed to be either completely ice-covered or completely ice-free, i.e. no fractional ice is assumed. Both the analysed and predicted ice patterns differ from those of the mid-day and nighttime satellite observations (Fig. 2). 
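The diagnosis of fractional ice cover from the analysed LSWT (Section 3.3) maps temperatures between the freezing point and the lower limit of −0.5°C onto fractions between 0 and 1. The exact form of the ad-hoc method is not reproduced in this paper; the linear ramp below is only a plausible sketch of such a mapping.

```python
def ice_fraction_from_lswt(lswt_c, t_freeze=0.0, t_ice=-0.5):
    """Diagnose a fractional ice cover from lake surface water
    temperature (deg C): open water at/above freezing, fully
    ice-covered at/below the lower limit, linear in between.
    The linear ramp is an assumed form of the ad-hoc method."""
    if lswt_c >= t_freeze:
        return 0.0
    if lswt_c <= t_ice:
        return 1.0
    return (t_freeze - lswt_c) / (t_freeze - t_ice)

print(ice_fraction_from_lswt(2.0))    # 0.0  (open water)
print(ice_fraction_from_lswt(-0.25))  # 0.5  (partial ice)
print(ice_fraction_from_lswt(-1.2))   # 1.0  (ice-flag value maps to frozen)
```

Note that such a diagnostic ice fraction is not physically comparable to the MERIS ice fraction, which represents actual sub-pixel ice cover.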
According to the background forecast, Lake Ladoga should have been almost ice-free during the day, while at night and in the morning the northern part seems frozen. Similarly, the analysis indicates a frozen lake at night (based on MODIS) and early morning (based on AATSR) but partially melted during the day. Fig. 4 LSWT observations used for HIRLAM analysis over Lake Ladoga on 12 April 2011: (a) MODIS for NHFLAK (SYKE, FLake, MODIS) at 00 UTC, (b) MODIS for NHFLAK at 12 UTC and (c) AATSR for NHFLAA (SYKE, FLake, AATSR) at 06 UTC. Three technical comments are needed to understand the possible reasons for the difference between the analysis and the satellite observations. First, the horizontal resolutions of the model and satellites are different: 7 km for HIRLAM (the boxes visible on the maps in Fig. 3 represent grid squares), 1 km for MODIS and AATSR, and 300 m for MERIS. Thus, we would not expect HIRLAM to represent all details of the ice cover detected by the satellites. Second, the diagnostic ice fraction of the HIRLAM analysis is derived from the analysed LSWT in a very simple ad-hoc way (Section 3.3). Consequently, all HIRLAM ice fractions are derived from temperatures between the freezing temperature and an artificially set lower limit of −0.5°C. This is not the same variable as the MERIS ice fraction, which can represent physically realistic ice properties within its 300 m×300 m pixels. In addition, the method involves unphysical ice-flag temperatures (see Section 3.3), which may enter the analysis together with the real observations, thus adding uncertainty to the resulting analysis. Third, the LSWT analysis of the HIRLAM experiment NHFLAK over Ladoga is based on a selection of observed LSWT from a maximum of 15 MODIS or AATSR pixels (see Figs. 1 and 4), combined with the FLake +6 h forecast which is used as the background. This means that over Lake Ladoga, the largest part of the information from the ca.
30000 theoretically possible MODIS pixels remains unused in the analysis at the ca. 600 HIRLAM grid squares, and the result is compared to ca. 300000 MERIS pixels. Of the 15 possible MODIS pixels, 14 were available and accepted for the analysis at 00 UTC on 12 April (MODIS observation at 23 UTC, 11 April 2011, Fig. 4a). They all show the flag value of ice, assumed for MODIS when the observed LSWT is below −0.5°C. Twelve hours later, at 12 UTC on 12 April (MODIS observations at 9 and 11 UTC, Fig. 4b), the analysis input also included 14 observed values, the most northeastern one (pixel 8) indicating unfrozen conditions and the other temperatures slightly under the freezing point. AATSR observations assimilated at 06 UTC indicated that the northeastern (pixel 1) and western (pixel 6) areas may have been unfrozen, while the remaining pixels were frozen (Fig. 4c). AATSR observations were extracted for the chosen 15 pixels and applied for HIRLAM analysis on the 5–6 days in April 2011 when they were available. AATSR observations always represent morning conditions. An example of their influence in the experiments NHFLAA (with FLake background) and NHALAA (with previous analysis background), as compared to the influence of MODIS observations in the experiments NHFLAK (with FLake background) and NHALAK (with previous analysis background), is shown in Fig. 5 for the centre of Lake Ladoga at pixel 7 during April 2011. The background given by FLake (experiment NHFLAA) and by the previous analysis (NHALAA), which was relaxed towards climatology, was very different. FLake would indicate melting during the first week of the month, while the MODIS and AATSR observations pointed to melting during the last week. Both MODIS and AATSR provided enough observations to modify the analysis accordingly, so that the analyses indicated melting closer to the end of April. Without observations and FLake (i.e.
relying on climatology only), melting would have occurred after the end of April. Fig. 5 Analysis (red), background (black) and observation (blue) in the grid point nearest to pixel 7 over central Ladoga during April 2011 in the experiments (a) NHFLAA (SYKE, FLake, AATSR), (b) NHFLAK (SYKE, FLake, MODIS), (c) NHALAA (SYKE, AATSR) and (d) NHALAK (SYKE, MODIS). Only times when MODIS observations were available are shown. No data are rejected here. Only those days when observations were available are shown in Fig. 5. When observations are sparse, the FLake background dominates the analysis outcome. The behaviour of FLake may vary between individual grid columns because of their different lake depths. For large lakes such as Ladoga, an approximate bathymetry is available in HIRLAM (Kourzeneva et al., 2012a). However, to a large extent, the conditions over the lake remain homogeneous, also from the point of view of the atmospheric forcing. This means that the background for LSWT analysis, given by FLake, also contains little horizontal variability. In addition, the analysis at each grid point is influenced by all nearby observations, whose values and availability may vary in time. Over Lake Ladoga, these nearby observations consisted of the selected 15 pixels, each of which would have some influence over the whole lake, according to eq. (2). The different behaviour of MODIS observations during day and night contributed to an unrealistic jumping of the HIRLAM NHFLAK analysis from frozen to unfrozen conditions: during the day unfrozen conditions prevailed, during the night the lake seemed frozen. This was typical during the melting period over Lake Ladoga, as well as over the other lakes (not shown). Jumping of the MODIS observations between sequential measurements is confirmed by Fig. 5. AATSR may suffer less from this feature, perhaps because observations at the selected pixels were quite sparse in time but always represented similar morning conditions.
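The way each grid point is influenced by all nearby observations, referred to via eq. (2), follows the general optimal-interpolation idea: the analysis equals the background plus a weighted combination of observation-minus-background departures, with weights decreasing with distance. Since eq. (2) is not reproduced here, the normalised Gaussian weights and the length scale in the sketch below are illustrative assumptions, not the HIRLAM structure functions.

```python
import math

def oi_increment(bg_at_point, obs, obs_bg, distances_km, length_scale_km=80.0):
    """Single-point interpolation sketch: add a weighted combination of
    observation-minus-background departures to the background value.
    Weights decay with distance via an assumed Gaussian autocorrelation;
    a full OI solution would instead invert a covariance matrix."""
    weights = [math.exp(-(d / length_scale_km) ** 2) for d in distances_km]
    total = sum(weights)
    if total == 0.0:
        return bg_at_point  # no nearby observations: keep the background
    increment = sum(w * (o - ob) for w, o, ob in zip(weights, obs, obs_bg)) / total
    return bg_at_point + increment

# Two satellite pixels, both colder than a warm FLake background,
# pull the analysed LSWT at a nearby grid point downward.
print(oi_increment(4.0, obs=[0.5, 1.0], obs_bg=[4.0, 4.0],
                   distances_km=[20.0, 60.0]))
```

The sketch makes the behaviour discussed above explicit: with few pixels and a horizontally uniform FLake background, a single jumpy observation shifts the analysis over a wide area.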
The lake parameterisation may also contribute to the unrealistic oscillation across the freezing temperature (e.g. by absorbing solar radiation too effectively during daytime). The reason for the difference between the cold nighttime and warm daytime MODIS lake surface temperatures remains to be understood. At night, water on ice may refreeze due to long-wave radiative cooling of the surface. In this case, the MODIS temperature would not represent that of the lake, but the temperature of the refrozen melt water on ice. One could also speculate on the possible formation of fog during the night over the melting ice. This type of fog, perhaps quite impossible to distinguish from the underlying surface in the satellite image, would show a colder temperature than the surface, due to the long-wave cooling of the upper boundary of the fog layer.

### 5.3. Melting of Lake Lappajärvi

Features of the OI analysis over a medium-size lake are illustrated by an example of the melting of Lake Lappajärvi in HIRLAM experiments NHFLAK (with FLake background) and NHALAK (with previous analysis background) in spring 2012. Over Lake Lappajärvi, SYKE temperature observations were included in a slightly different location (closer to the shore) than MODIS. Fig. 6 shows more details of the OI analysis during the melt period at the MODIS and SYKE points of Lake Lappajärvi. Fig. 6 Same as in Fig. 5 but over Lake Lappajärvi (15 March – 31 May 2012) in the experiments (a) NHALAK (SYKE, MODIS) and (b) NHFLAK (SYKE, FLake, MODIS) for the selected MODIS pixel (23.70 E, 63.22 N), (c) NHALAK (SYKE, MODIS) and (d) NHFLAK (SYKE, FLake, MODIS) for the SYKE measurement point (23.67 E, 63.15 N). FLake parameterisation in the NHFLAK experiment suggests open water already around 10 April, while MODIS indicates a complete break-up (water clear of ice) around the first day of May (Fig. 6b and d).
Analysis of the experiment NHFLAK indicates water clear of ice a few days earlier (24 April) than that of the experiment NHALAK (28 April) (Figs. 6b and a). The visible MODIS-Aqua images (Fig. 7) indicate that Lake Lappajärvi is clear of ice on 1 May but still ice-covered on 25 April. The SYKE observer recorded 2 May as the water clear of ice date for Lake Lappajärvi (see Table 3). SYKE temperature measurements started only on 10 May when the water temperature had already reached 3°C (Figs. 6c and d). Fig. 7 MODIS-Aqua visible images over Lake Lappajärvi on 25 April (left, 8:30–12:10 UTC) and 1 May (right, 9:50–11:30 UTC), 2012. The analysis of NHALAK followed MODIS observations more closely than that of NHFLAK, which was influenced by the warmer background suggested by FLake. SYKE temperature measurements were not available before 10 May, entering the NHALAK analysis only well after the observed ice break-up. When there were no MODIS observations over Lake Lappajärvi, the previous analyses that were applied as background in NHALAK would have converged to the climatological values, which still represented ice-covered conditions. If break-up were interpreted from NHALAK analyses based on the observations at the SYKE point alone, it would have occurred 2 weeks later than when MODIS observations were included. In general, melting of Lake Lappajärvi could be described realistically due to the MODIS observations both with and without FLake parameterisations. FLake alone would have led to too early melting of this lake in the HIRLAM analysis, while OI based only on the (missing) SYKE water temperature measurements and climatology would have led to too late melting. This is because a lake grid point is assumed to retain its state given by the background field when there are no observations available to correct it. The reason for the too-early warming of lakes by FLake (noted also in Section 5.2 and in Table 3) requires further study.
One possible reason may be related to the missing snow on lake ice. In these experiments, snow parameterisation was included in FLake, as in the reference HIRLAM v. 7.4, but in fact snow never accumulated on lake ice in the model. This was due to a technical error that has since been corrected. The rather large variation of MODIS LSWT observations from day to day (Fig. 6), which may result from the unsuccessful removal of the signals due to high-level clouds during preprocessing of the data, poses a problem for the QC within the HIRLAM analysis system. On the other hand, FLake reached (unrealistically) high LSWT after the melt of ice on Lake Lappajärvi. Around 25 May, many MODIS and some SYKE observations were rejected in the background check by the QC, which was not correct in this case. Relations between the adjacent observations of different types (SYKE and MODIS) on the lake and its neighbourhood would require further study. In the present experiments, all lake observations were assumed to have similar statistical properties. For example, the assumed observational error standard deviation was set to 1.5°C for both in-situ and remote-sensing LSWT observations. This value is supported by the evaluation study in paper Part I, where a standard deviation of around 1.8°C was estimated for the satellite measurements for 22 selected Finnish lakes during the open-water season when SYKE temperature observations were available.

### 5.4. Validation of analysis over independent lakes

MODIS UW-L3 LSWT observations were derived but withheld from the HIRLAM analysis over four lakes in Sweden and Russia (see Section 2.3). The Russian lakes Kuito and Valday were chosen for comparison between analyses and observed MODIS LSWT (Fig. 8), because they are located far enough from the nearest lakes included in the analysis that the analyses on them are not significantly influenced by the nearby observations.
The results for the Swedish lakes Hjälmaren and Bolmen (not shown) confirm the findings presented here. The analyses of the three main experiments TRULAK, NHFLAK and NHALAK (Table 2) were compared to MODIS observations during January to May 2012, when results from all experiments were available. MODIS observations with the ice flag −1.2°C (indicating measured temperatures below −0.5°C) were excluded from the set of validation observations. Over these lakes, the analysis by TRULAK and NHFLAK is interpreted directly from the FLake forecast; thus, this validation measures the quality of FLake, not that of the analysis method. Similarly, as observations were not applied, validation of NHALAK compares the available climatology to MODIS observations. Fig. 8 Comparison of LSWT derived by MODIS with LSWT analysed by experiments NHFLAK (SYKE, FLake, MODIS), NHALAK (SYKE, MODIS) and TRULAK (SYKE, FLake) for (a) Lake Valday and (b) Lake Kuito. We can see in Fig. 8 that the analyses based on different backgrounds – NHALAK on the previous analysis relaxed towards climatology, TRULAK and NHFLAK on the 6-hour forecast by FLake – started to diverge as soon as the observed LSWT clearly rose above the freezing point. Typically, NHALAK analyses remained significantly colder than MODIS observations, while TRULAK and NHFLAK tended to be significantly warmer. According to the applied climatology, these lakes normally stay ice-covered longer than was observed in spring 2012. Relaxation of NHALAK towards such a climatology forced the analyses towards freezing temperatures when no observations were available to correct the situation. The warm bias of FLake led the TRULAK and NHFLAK analyses to too-high analysed LSWT. This bias was detected earlier in this study as well as by Eerola et al. (2010) and Rontu et al. (2012).

## 6. Conclusions and outlook

We have reported the first steps in utilising satellite-based observations to define the initial state of lake surfaces in a NWP model.
We have applied the HIRLAM surface analysis by introducing new lake observations. While not focussing on optimisation of the analysis methods for LSWT and lake ice, we nevertheless detected their limitations and provided suggestions for improvements. Many questions will require further investigation on the road towards a completely integrated lake data assimilation system for NWP. In our experiments, we included MODIS and AATSR temperature observations over lakes in HIRLAM. When temperatures below freezing were detected, LSWT was given an ice-flag value; otherwise, the observation was assumed to represent the measured LSWT. A limited number of 70 MODIS pixels over 41 large- and medium-size Scandinavian, Karelian and Baltic lakes and a sample of AATSR data over Lake Ladoga were selected for the analysis input. Preprocessing of these data for the analysis is described in paper Part I. To understand the sensitivity of the resulting LSWT analysis to the new data, the analysed LSWT and diagnosed lake ice concentration were compared with those from the experiments in which space-borne observations were not included. The initial states of every forecast-analysis cycle of each experiment were validated, mostly qualitatively, against locally recorded freezing and melting dates of the lakes as well as against independent satellite LSWT and ice cover observations. Introduction of space-borne observations led to an improvement of the description of lake surface state, especially during the melt period, when in-situ LSWT observations were not yet available and the prognostic lake parameterisations suffered from a significant warm bias. During the freezing period, when the sun was low and the weather typically cloudy, only a few thermal satellite observations were available. In the conditions of well-mixed lake water, typical of the freezing period, the FLake prognostic parameterisations also worked reasonably well, making the additional observations less necessary in autumn.
The background LSWT for the OI analysis was provided either by the prognostic lake parameterisations of the Freshwater Lake model integrated in HIRLAM, or by the previous analysis backed up by climatology. FLake provides background for the LSWT analysis at every HIRLAM grid point containing a lake fraction. In the case of sparse and missing observations, this ensures on average a better result than an analysis that uses the previous analysis as background, especially when the lake is frozen and the background relaxes to climatology. However, in cases where a sufficient quantity of good (satellite-based) observations was regularly available, the analysis using the previously analysed LSWT as the background followed observations more closely than when the background LSWT was diagnosed from the predicted FLake lake temperature. In a case study, MERIS ice fraction over Lake Ladoga was found to qualitatively agree with the ice fraction derived from the HIRLAM lake temperature analysis. Due to the finer spatial resolution of MERIS observations, they provided a more detailed picture than the HIRLAM analysis. However, MERIS is an optical sensor whose data coverage is limited by the presence of clouds. Ice cover observations derived from passive microwave sensors do not suffer from this problem. However, they are presently of a coarse spatial resolution (ca. 10 km) and would thus only represent large lakes. The Interactive Multisensor Snow and Ice Mapping System (IMS) product (4 km resolution) could be another alternative; it utilises a variety of multi-sourced datasets such as passive microwave, visible imagery, operational ice charts and other ancillary data (Helfrich et al., 2007; Ramsay, 1998). IMS data has been shown to be an effective product for lake ice (Brown and Duguay, 2012; Duguay et al., 2011, 2012, 2013; Kang et al., 2012) and sea ice (Brown et al., 2014) phenology studies.
In the long term, for a direct assimilation of ice concentration from optical sensors, some spatialisation methods such as OI should be used. However, solutions for several theoretical and technical problems need to be found. The error distribution of the ice concentration is probably non-Gaussian and requires specific methods (Lisaeter et al., 2003; Qin et al., 2009; Simon and Bertino, 2009). For the background, the FLake ice fraction can at the moment only be 0 or 1. This means that it is only known if the lakes in the grid box are ice-covered or not. Such information is in principle qualitative, when defined within the relatively coarse resolution of the NWP model. Methods to assimilate qualitative information are poorly developed in NWP. For example, an algorithm for assimilation of remotely sensed snow extent (Drusch et al., 2004) uses a quite simple and ad-hoc approach. When utilising LSWT and ice cover observations together, it will be necessary to ensure their consistency in the resulting analysis. Experience from simplified methods, where the observed ice fraction is converted to temperature, which is then treated by the OI algorithm together with SST (Canadian Meteorological Centre), may also be helpful. In addition to the horizontal spatialisation, methods to assimilate ice information with respect to the prognostic variables of FLake (such as ice thickness) should be developed. Extended application of remote-sensing LSWT measurements is a novel feature in this paper, compared to our previous studies (Eerola et al., 2010; Rontu et al., 2012). However, significantly more data, potentially available from satellites, still remain unused with the approach of predefined pixels over the lakes (70 pixels used in this study versus several tens of thousands of pixels covered by the satellite measurements).
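One approach to using a larger share of these currently unused pixels, without overwhelming the analysis with mutually correlated data, is to average all pixels falling within a model grid cell into a single "super-observation". The sketch below assumes a regular latitude/longitude grid and a hypothetical cell size; real preprocessing would additionally need per-cell quality control and error estimation.

```python
from collections import defaultdict

def make_super_obs(pixels, grid_deg=0.0625):
    """Average satellite LSWT pixels, given as (lat, lon, temp) tuples,
    into one 'super-observation' per regular lat/lon grid cell.
    grid_deg is an assumed cell size roughly matching a ~7 km grid."""
    cells = defaultdict(list)
    for lat, lon, t in pixels:
        key = (round(lat / grid_deg), round(lon / grid_deg))
        cells[key].append(t)
    # Report each cell's centre coordinates and mean temperature.
    return [(key[0] * grid_deg, key[1] * grid_deg, sum(v) / len(v))
            for key, v in cells.items()]

pixels = [(61.001, 31.002, 0.4), (61.003, 31.001, 0.6),  # same cell
          (61.200, 31.500, 2.0)]                          # different cell
super_obs = make_super_obs(pixels)
print(len(super_obs))  # 2: three pixels reduced to two super-observations
```

Averaging within a cell also reduces the pixel-level noise of the satellite retrievals before they enter the analysis.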
By using the fine-resolution land-cover information available in the NWP model, it is possible to determine whether a satellite pixel (with known coordinates) is located over a lake resolved by the model. Thus, it would be possible to utilise high-resolution near-real-time satellite LSWT/ice cover observations without pre-selection of pixels. Methods to reduce the amount of input data over large lakes (thinning, screening, creation of super-observations) should be developed and applied in order to avoid giving too much weight to the large amount of mutually correlated satellite data compared to possible in-situ measurements, and also in order to keep the amount of input data reasonable compared to the resolution of the NWP model. Certain preprocessing of these data, including cloud clearance, identification of missing data and estimation of the measurement error in each pixel, would be preferable before entering the OI QC within the model. Improvement of the operational analysis of remote-sensing LSWT measurements in NWP models requires development of the OI methods. Derivation of the autocorrelation functions (structure functions), which take into account lake depth and elevation, as well as calculation of observation error statistics of different measurement types, is believed to be important. Practical questions should be resolved in the future, such as: how to obtain near-real-time daily observational data of reasonable volume in a universal format; how to introduce more than a limited selection of pixel observations into the analysis; and how to improve the QC before and within the NWP application. For the operational NWP models, the analysis of the transient surface properties is crucial, but handling of the observational data and computations should be highly optimised in order to allow timely production of the full three-dimensional weather forecast.
Input information should be processed without manual intervention, but well enough to allow only reliable observations to influence the analysis. It is worth mentioning that presently, the analysed state of the lake surface creates no feedback to the FLake parameterisation, which is coupled to the atmospheric model during the forecast run. Thus, the improved LSWT analysis remains as a possibly useful independent by-product of the NWP model. In order to really utilise the space-borne and in-situ observations on lake surface state for the improvement of the weather forecast and prediction of lake temperatures, methods to connect the analysed LSWT and ice cover to the prognostic in-lake variables are needed. Such methods for NWP models are currently under development (Ekaterina Kurzeneva, personal communication, 2014). To conclude, it has been learned that space-borne LSWT observations are beneficial for the description of lake surface state in HIRLAM. Satellite observations provide frequent observations over large areas. The large spatial coverage of satellite-based data at a high resolution is a major advantage but also an application challenge when compared to in-situ measurements. ## 7. Acknowledgements Our thanks are due to two anonymous reviewers, whose comprehensive comments significantly helped to improve the manuscript. This research was supported by European Space Agency (ESA-ESRIN) Contract No. 4000101296/10/I-LG (Support to Science Element, North Hydrology Project) and a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) to C. Duguay, as well as a NSERC postgraduate scholarship to H. Kheyrollah Pour. The support of the international HIRLAM-B programme is acknowledged. The MERIS lake fraction data was generated and supplied by R. Solberg (Norwegian Computing Center). ## References 1. BrownL. C., DuguayC. R. The response and role of ice cover in lake-climate interactions. Prog. Phys. Geogr. 2010; 34(5): 671–704. 
DOI: 10.1177/0309133310375653. 2. BrownL. C., DuguayC. R. Modelling lake ice phenology with an examination of satellite-detected subgrid cell variability. Adv. Meteorol. 2012; 19: 529064. DOI: 10.1155/2012/529064. 3. BrownL. C., HowellS. E. L., MorthJ., DerksenC. Evaluation of the Interactive Multisensor Snow and Ice Mapping System (IMS) for monitoring sea ice phenology. Rem. Sens. Environ. 2014; 147: 65–78. 4. DaleyR. Atmospheric Data Analysis. 1991; New York, NY, USA: Cambridge University Press. 457. 5. DonlonC. J., MartinM., StarkJ., Robert-JonesJ., FiedlerE., co-authors. The operational sea surface temperature and sea ice analysis (OSTIA) system. Rem. Sens. Environ. 2012; 116: 140–158. 6. DruschM., VasilievicD., ViterboP. ECMWF's global snow analysis: assessment and revision based on satellite observation. J. Appl. Meterol. 2004; 43: 1282–1294. 7. DuguayC. R., BrownL., KangK.-K., Kheyrollah PourH. Lake ice [in Arctic report card 2011]. 2011. Online at: http://www.arctic.noaa.gov/report11/lake_ice. 8. DuguayC. R., BrownL., KangK.-K., Kheyrollah PourH. [The Arctic] lake ice [in State of the climate in 2011]. Bull. Am. Meteorol. Soc. 2012; 93(7): 152–154. 9. DuguayC. R., BrownL., KangK.-K., Kheyrollah PourH. [The Arctic] lake ice [in State of the climate in 2012]. Bull. Am. Meteorol. Soc. 2013; 94(8): 124–126. 10. DuguayC. R., ProwseT. D., BonsalB. R., BrownR. D., LacroixM. P., co-authors. Recent trends in Canadian lake ice cover. Hydrol. Proc. 2006; 20: 781–801. 11. EerolaK. Twenty-one years of verification from the HIRLAM NWP system. Weather Forecast. 2013; 28: 270–285. 12. EerolaK., RontuL., KourzenevaE., ShcherbakE. A study on effects of lake temperature and ice cover in HIRLAM. Boreal Environ. Res. 2010; 15: 130–142. 13. FiedlerE., MartinM., Roberts-JonesJ. An operational analysis of lake surface water temperature. Tellus A. 2014; 66: 21247. DOI: 10.3402/tellusa.v66.21247. 14. GandinL. Objective Analysis of Meteorological Fields. 
1965; Gidrometizdat, Leningrad. Translated from Russian, Jerusalem, Israel Program for Scientific Translations. 242. 15. GrönvallH., SeinäA. Satellite data use in Finnish winter navigation. Operational Oceanography: Implementation at the European and Regional Scales – Proceedings of the Second International Conference on EuroGOOS. 2002; Elsevier. 429–436. 11–13 March 1999, Rome, Italy, (eds. N. C. Flemming et al.). 16. HelfrichS. R., McNamaraD., RamsayB. H., BaldwinT., KashetaT. Enhancements to, and forthcoming developments in the Interactive Multisensor Snow and Ice Mapping System (IMS). Hydrol. Process. 2007; 21: 1576–1586. 17. HookS., WilsonR. C., MacCallumS., MerchantC. J. Lake surface temperature. Bull. Am. Meteorol. Soc. 2012; 93: 18–19. 18. KangK.-K., DuguayC. R., HowellS. E. L. Estimating ice phenology on large northern lakes from AMSR-E: algorithm development and application to Great Bear Lake and Great Slave Lake, Canada. Cryosphere. 2012; 6(2): 235–254. 19. Kheyrollah PourH., DuguayC. R., MartynovA., BrownL. C. Simulation of surface temperature and ice cover of large northern lakes with 1-D models: a comparison with MODIS satellite data and in situ measurements. Tellus A. 2012; 64: 17614. DOI: 10.3402/tellusa.v64i0.17614. 20. Kheyrollah PourH., DuguayC. R., SolbergR. L. C. Impact of satellite-based lake surface observations on the initial state of HIRLAM. Part I: Evaluation of remotely-sensed lake surface water temperature observations. Tellus A. 2014; 66: 21534. DOI: 10.3402/tellusa.v66i0.21534. 21. KourzenevaE., AsensioH., MartinE., FarouxS. Global gridded dataset of lake coverage and lake depth for use in numerical weather prediction and climate modelling. Tellus A. 2012a; 64: 15640. DOI: 10.3402/tellusa.v64i0.15640. 22. KourzenevaE., MartinE., BatrakY., Le MoigneP. Climate data for parameterization of lakes in NWP models. Tellus A. 2012b; 64: 17226. DOI: 10.3402/tellusa.v64i0.17226. 23. KrinnerG., BoikeJ. 
A study of the large-scale climatic effects of a possible disappearance of high-latitude inland water surfaces during the 21st century. Boreal Environ. Res. 2010; 15: 203–217. 24. LisaeterK. A., RosanovaJ., EvensenG. Assimilation of ice concentration in a coupled ice-ocean model, using the Ensemble Kalman Filter. Ocean Dynam. 2003; 53: 358–388. DOI: 10.1007/s10236-003-0049-4. 25. LovelandT. R., ReedB. C., BrownJ. F., OhlenD. O., ZhuJ., co-authors. Development of a global land cover characteristics database and IGBP DISCover from 1-km AVHRR data. Int. J. Remote Sens. 2000; 21: 1303–1330. 26. MacCallumS. N., MerchantC. J. Surface water temperature observations of large lakes by optimal estimation. Can. J. Remote Sens. 2012; 38(1): 25–45. DOI: 10.5589/m12-010. 27. Manrique-SuñénA., NordboA., BalsamoG., BeljaarsA., MammarellaI. Representing land surface heterogeneity: offline analysis of the tiling method. J. Hydrometeor. 2013; 14: 850–867. DOI: 10.1175/JHM-D-12-0108.1. 28. MironovD. Parameterization of Lakes in Numerical Weather Prediction. Description of a Lake Model. 29. MironovD., HeiseE., KourzenevaE., RitterB., SchneiderN., co-authors. Implementation of the lake parametrisation scheme FLake into the numerical weather prediction model COSMO. Boreal Environ. Res. 2010; 15: 218–230. 30. NgaiK. L. C., ShuterB. J., JacksonD. A., ChandraS. Projecting impacts of climate change on surface water temperatures of a large subalpine lake: Lake Tahoe, USA. Clim. Change. 2013; 118: 841–855. 31. NiziolT. A. Operational forecasting of lake effect snowfall in Western and Central New York. Weather Forecast. 1987; 2: 310–321. DOI: 10.1175/1520-0434(1987)002<0310:OFOLES>2.0.CO;2. 32. NiziolT. A., SnyderW. R., WaldstreicherJ. S. Winter weather forecasting throughout the Eastern United States. Part IV: Lake effect snow. Weather Forecast. 1995; 10: 61–77. 33. QinJ., LiangS., YangK., KaihotsuI., LiuR., KoikeT.
Simultaneous estimation of both soil moisture and model parameters using particle filtering method through the assimilation of microwave signal. J. Geophys. Res. 2009; 114: 15103. DOI: 10.1029/2008JD011358. 34. RamsayB. H. The interactive multisensor snow and ice mapping system. Hydrological Processes. Hydrol. Process. 1998; 12: 1537–1546. 35. RontuL., EerolaK., KourzenevaE., VehviläinenB. Data assimilation and parametrisation of lakes in HIRLAM. Tellus A. 2012; 64: 17611. DOI: 10.3402/tellusa.v64i0.17611. 36. SamuelssonP., KourzenevaE., MironovD. The impact of lakes on the European climate as simulated by a regional climate model. Boreal Environ. Res. 2010; 15: 113–129. 37. SemmlerT., ChengB., YangY., RontuL. Snow and ice on Bear Lake (Alaska)- sensitivity experiments with two lake ice models. Tellus A. 2012; 64: 17339. DOI: 10.3402/tellusa.v64i0.17339. 38. SimonE., BertinoL. Application of the Gaussian anamorphosis to assimilation in a 3-D coupled physical-ecosystem model of the North Atlantic with the EnEKF: a twin experiment. Ocean Sci. 2009; 5: 495–510. 39. UndénP., RontuL., JärvinenH., LynchP., CalvoJ. The HIRLAM-5 scientific documentation. 2002. Available at HIRLAM-5 Project, c/o Per Undén SMHI, S-601 76 Norrköping, Sweden. Online at: http://hirlam.org. 40. YangY., ChengB., KourzenevaE., SemmlerT., RontuL., co-authors. Modelling experiments on air-snow-ice interactions over Kilpisjärvi, a lake in northern Finland. Boreal Environ. Res. 2013; 18(5): 341–358. 41. ZhaoL., JinJ., WangS.-Y., EkM. B. Integration of remote-sensing data with WRF to improve lake-effect precipitation simulations over the Great Lakes region. J. Geophys. Res. 2012; 117: 09102. DOI: 10.1029/2011JD016979.
2022-10-07 07:14:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4805927574634552, "perplexity": 4550.348057361617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00043.warc.gz"}
https://academy.vertabelo.com/course/introduction-to-r/r-basics/functions/number-of-characters-in-string
21. Number of characters in string

## Instruction

Nice work! Until now, we've dealt with functions that work with numbers. However, there are also functions that can take text values as arguments and return some value. One such function is nchar. The function nchar calculates the number of characters (not just letters) in a string. In other words, it finds the length of a particular string. For example: nchar("Mary ") will return 5, and nchar("Tim") will return 3.

## Exercise

How many characters are contained in the longest English word, pneumonoultramicroscopicsilicovolcanoconiosis? Use an appropriate function call to answer this question. Remember to pass this word in as a string.
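For readers without an R session handy, the same count can be sanity-checked in Python, whose built-in len plays the role of R's nchar for a single string (this snippet is an added illustration, not part of the lesson):

```python
# Python's len() counts the characters in a string,
# just as R's nchar() does for a single string.
word = "pneumonoultramicroscopicsilicovolcanoconiosis"
print(len(word))  # -> 45
```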
2019-06-19 02:02:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26387569308280945, "perplexity": 1838.765586885579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998879.63/warc/CC-MAIN-20190619003600-20190619025600-00307.warc.gz"}
https://zbmath.org/?q=an:0673.46013&format=complete
# zbMATH — the first resource for mathematics Some remarks on almost radiality in function spaces. (English) Zbl 0673.46013 Let $$C_p(X)$$ be the space of continuous real functions on X, a Hausdorff topological space, with the topology of pointwise convergence. It is known that this space is Fréchet if and only if it is “radial”, a notion defined with generalized sequences using ordinal numbers. It is also known that this space is not necessarily a Fréchet space if $$C_p(X)$$ is only “pseudo-radial”. The paper introduces and studies new kinds of “almost radiality” which coincide with “pseudo-radiality” in some specific cases. Reviewer: H. Hogbe-Nlend ##### MSC: 46E10 Topological linear spaces of continuous, differentiable or analytic functions 46A04 Locally convex Fréchet spaces and (DF)-spaces
2022-01-19 03:58:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6257110834121704, "perplexity": 596.4673305187022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301263.50/warc/CC-MAIN-20220119033421-20220119063421-00344.warc.gz"}
http://www.mapleprimes.com/questions/default.aspx?page=2
# MaplePrimes Questions Search Questions: ## Latest Questions Latest Questions Feed ### How I can separate real and imaginary parts in PDE... Yesterday at 7:48 AM 1 1 Suppose I have a PDE > Here I can set Then my query is, how can I obtain simplified resultant system after equating real and imaginary parts. Of course there would be two partial differential equations in and and those can also be written down manually very easily but this process becomes difficult when we have large system. Regards ### Why Maple doesn't give two solutions... Yesterday at 6:23 AM 1 2 Hi every one, I am using Maple 7 because I am running Windows XP I believe this equation: > solve( 1.15=exp(-xi*Pi/(sqrt(1-xi^2))), xi); has two solutions: xi=+0.4444364945e-1 and xi=-0.4444364945e-1 but Maple only gives -0.4444364945e-1 Why?? And how can I get the two solutions?? Thanks before hand by your comments. ### view source of a webpage using Maple... July 24 2016 0 9 I'm trying to view the source of the webpage fuelsonline.ca I used HTTP[Get] like I did before which doesn't retrieve much of anything useful. I thought of the sockets package which I can't seem to pull any information from. Then I had an idea and viewed the page source which finally has information I can use if I can get Maple to read it in. If I can get Maple to pull in the contents of the page source I should at least be able to carry it from there. Any ideas from anyone? ### "solve((x+exp(-1))^x = 1, x)" gives error... July 24 2016 1 2 The code "solve((x+exp(-1))^x = 1, x)" gives the error "Error, (in Engine:-Dispatch) invalid subscript selector". How is this possible? ### Adding a term, then subtracting the term.... July 24 2016 0 2 Hello people in mapleprimes, I have an equation called aa in the following. alias(δx=dx, Δx=Dx, Δy=Dy, Δz=Dz): aa:=Dz=f(x+Dx,y+Dy)-f(x,y); As for modification of this expression, I ask your favor to teach me.
Then, I want to change this aa to D[1](f)(x+theta__1*Dx,y+Dy)*Dx+D[2](f)(x,y+theta__2*Dy)*Dy. But, to do so, I have to split aa into the one including -f(x,y+Dy)+f(x,y+Dy) between two terms of aa. But, as maple cancels these terms, I can't do so. How can I insert two terms, then obtain the expression  D[1](f)(x+theta__1*Dx,y+Dy)*Dx+D[2](f)(x,y+theta__2*Dy)*Dy? taro ### how convert 3 couple equations to 1 equation... July 23 2016 0 2 hi...how i can convert 3 couple equations to 1 equation with Placement each other? thanks... (1) (2) (3) ### %Zm format: How is it used with GUI and DocumentTo... July 22 2016 0 4 I've read the help page ?printf for the format codes many times over the years. I think that this is new: The Z modifier, "%Zm" can be used to generate an alternate equivalent dotm representation that is used in communication with the GUI and in DocumentTools related functionality for the creation of XML content for .mw files. Could someone show me an example of that? ### Eigenvalues of Laplacian matrix... July 22 2016 1 3 i am working on laplacian eigenvalues of some special graphs and when i want to find  min([laplacianEigenvalues]) then i alltime see same error code, [Error, (in simpl/min) complex argument to max/min...] my aim is write a procedure about Algebraic connectivity ### Maple Evaluating More Than 12 hrs... July 22 2016 1 9 Hello, I run Maple to solve Binary Integer Programming problem which contain about 1340 constraint and its goal to maximize the objective function. At first, it's running for 2 hours and said that the iteration limit was reached. So I try to add 'iterationlimit' at LPSolve opts and set it to 10000, but after 3 or 4 hours it said that the iteration limit was reached. So I set 'iterationlimit' to 100000000 and now Maple keep evaluating more than 12 hours. I run Maple at my notebook with these spesification: Processor: Intel Corei3-5005U 2.0 GHz Memory: 4GB RAM Windows 10 It is normal? 
Or I must run Maple in higher notebook spesification? Below is my Maple file, hope you can help me. ISL_2017_FASE3.mw ### How to import/generate a file of numerous matrices... July 22 2016 1 2 I want to analyze the runtimes on certain Linear Algebra functions in Maple, so I need a (large) set of matrices to input into these functions. I have written the below code, which does succesfully generate a file of matrices: The resulting file looks like: However, I am unable to read the matrices from this file back into Maple. When using the code below, I get an error. I think the error is that %a in fscanf scans up to the next whitespace, so the spacing in Matrix(3, 3, [[9,1,-4],[-5,6,-10],[-10,-4,-4]]) is throwing fscanf off. Do you guys know of any way I can fix this? Or, is there a better way for me to generate these matrices so that they can be easily read into Maple? I've considered using ImportMatrix/ExportMatrix, but I believe that they only work for a single matrix, not the numerous ones that I would need. ### How do I connect 3 arms to a single part with revo... July 22 2016 1 0 I have created a model for a robot in Solidworks and have imported it into Maplesim using the CAD toolbox. The problem I have is that the robot has 3 arms that are supposed to come together on a central piece pictured below in figure 1, but attempting to simulate the model with all arms connected with a revolute joint as in figure 2 yields an error that says "The system is underdetermined" the location of the error is main. For the purposes of the image below I only connected one of the arms, this allows Maplesim to run the file successfully. figure 1 showing the central piece that the 3 arms are supposed to connect to. Figure 2 showing the problem revolute joints circled in black, the error at the bottom and the setting of the revolute joint on the right. Essentially my question is how do I get the model to work? 
I apologise if this problem is not terribly well demonstrated, this is my first post onto this forum so I am not sure of all the standards. ### generate graphs ... July 21 2016 2 10 how can i generate graph families(all possible set )with given number of vertices?(just vertices ,not given edge set) for instance : how can i write a procedure about with 4-vertices all graphs ,5-vertices,..and so on ### Aymptotic expansion for hypergeometric function... July 21 2016 1 4 I want to approximate the following hypergeometric function for large values of Y. The variables c and R are complex parameters. hypergeom([-I*(c+sqrt(c^2-1)), I*(-c+sqrt(c^2-1))], [-I*(2*c+I), -I*(c+I+I*c/R)], exp(Y)*c/R) I allready tried asympt(f,Y), but maple failed. ### How to avoid auto simplification in LateX code gen... July 21 2016 0 2 How can I get the equation G1 automatically shown in Latex as stated? Example: G1 := V = Pi*(d/2)^2*h; latex(G1); # yields V=1/4\,\pi\,{d}^{2}h rendered: How can I make G1 inert so that Latex generator accepts it as input? ### Draw large network topology using maple ... July 21 2016 1 7 I am a beginner with Maplesoft. I have a data file with several devices IDs and links between these devices (more than 5000 links). For representing a network in Maple, I am aware that the GraphTheory package should be used. The vertices and edges are to be defined and the graph to be displayed. The examples that I have looked at do not have such a large number of links. With more than 5000 links in the data file, I am not sure how to proceed further. Any help or pointers would be highly appreciated. 1 2 3 4 5 6 7 Last Page 2 of 1337 
2016-07-27 09:38:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6814664602279663, "perplexity": 1558.2986203146027}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826759.85/warc/CC-MAIN-20160723071026-00298-ip-10-185-27-174.ec2.internal.warc.gz"}
http://nrich.maths.org/public/leg.php?code=-99&cl=3&cldcmpid=2031
Search by Topic Resources tagged with Working systematically similar to Children at Large: There are 122 results Sticky Numbers Stage: 3 Challenge Level: Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number? Where Can We Visit? Stage: 3 Challenge Level: Charlie and Abi put a counter on 42. They wondered if they could visit all the other numbers on their 1-100 board, moving the counter using just these two operations: x2 and -5. What do you think? Two and Two Stage: 2 and 3 Challenge Level: How many solutions can you find to this sum? Each of the different letters stands for a different number. Number Daisy Stage: 3 Challenge Level: Can you find six numbers to go in the Daisy from which you can make all the numbers from 1 to a number bigger than 25? Weights Stage: 3 Challenge Level: Different combinations of the weights available allow you to make different totals. Which totals can you make? 9 Weights Stage: 3 Challenge Level: You have been given nine weights, one of which is slightly heavier than the rest. Can you work out which weight is heavier in just two weighings of the balance? Consecutive Negative Numbers Stage: 3 Challenge Level: Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers? Special Numbers Stage: 3 Challenge Level: My two digit number is special because adding the sum of its digits to the product of its digits gives me my original number. What could my number be? Maths Trails Stage: 2 and 3 The NRICH team are always looking for new ways to engage teachers and pupils in problem solving. Here we explain the thinking behind maths trails. More Magic Potting Sheds Stage: 3 Challenge Level: The number of plants in Mr McGregor's magic potting shed increases overnight.
He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it? Difference Sudoku Stage: 3 and 4 Challenge Level: Use the differences to find the solution to this Sudoku. Summing Consecutive Numbers Stage: 3 Challenge Level: Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way? Squares in Rectangles Stage: 3 Challenge Level: A 2 by 3 rectangle contains 8 squares and a 3 by 4 rectangle contains 20 squares. What size rectangle(s) contain(s) exactly 100 squares? Can you find them all? First Connect Three for Two Stage: 2 and 3 Challenge Level: First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line. Olympic Logic Stage: 3 and 4 Challenge Level: Can you use your powers of logic and deduction to work out the missing information in these sporty situations? Problem Solving, Using and Applying and Functional Mathematics Stage: 1, 2, 3, 4 and 5 Challenge Level: Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information. Cuboids Stage: 3 Challenge Level: Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all? Product Sudoku Stage: 3, 4 and 5 Challenge Level: The clues for this Sudoku are the product of the numbers in adjacent squares. Fence It Stage: 3 Challenge Level: If you have only 40 metres of fencing available, what is the maximum area of land you can fence off? Quadruple Sudoku Stage: 3 and 4 Challenge Level: Four small numbers give the clue to the contents of the four surrounding cells. 
Games Related to Nim Stage: 1, 2, 3 and 4 This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning. Colour Islands Sudoku Stage: 3 Challenge Level: An extra constraint means this Sudoku requires you to think in diagonals as well as horizontal and vertical lines and boxes of nine. The Naked Pair in Sudoku Stage: 2, 3 and 4 A particular technique for solving Sudoku puzzles, known as "naked pair", is explained in this easy-to-read article. American Billions Stage: 3 Challenge Level: Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3... Ben's Game Stage: 3 Challenge Level: Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters. Plum Tree Stage: 4 and 5 Challenge Level: Label this plum tree graph to make it totally magic! Shady Symmetry Stage: 3 Challenge Level: How many different symmetrical shapes can you make by shading triangles or squares? Latin Squares Stage: 3, 4 and 5 A Latin square of order n is an array of n symbols in which each symbol occurs exactly once in each row and exactly once in each column. Oranges and Lemons, Say the Bells of St Clement's Stage: 3 Challenge Level: Bellringers have a special way to write down the patterns they ring. Learn about these patterns and draw some of your own. Advent Sudoku Stage: 3 Challenge Level: Rather than using the numbers 1-9, this sudoku uses the nine different letters used to make the words "Advent Calendar". Masterclass Ideas: Working Systematically Stage: 2 and 3 Challenge Level: A package contains a set of resources designed to develop students’ mathematical thinking. This package places a particular emphasis on “being systematic” and is designed to meet. . . . 
Pole Star Sudoku 2 Stage: 3 and 4 Challenge Level: This Sudoku, based on differences. Using the one clue number can you find the solution? Extra Challenges from Madras Stage: 3 Challenge Level: A few extra challenges set by some young NRICH members. Twin Line-swapping Sudoku Stage: 4 Challenge Level: A pair of Sudoku puzzles that together lead to a complete solution. First Connect Three Stage: 2 and 3 Challenge Level: The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for? Twin Corresponding Sudokus II Stage: 3 and 4 Challenge Level: Two sudokus in one. Challenge yourself to make the necessary connections. Peaches Today, Peaches Tomorrow.... Stage: 3 and 4 Challenge Level: Whenever a monkey has peaches, he always keeps a fraction of them each day, gives the rest away, and then eats one. How long could he make his peaches last for? Ratio Sudoku 2 Stage: 3 and 4 Challenge Level: A Sudoku with clues as ratios. Intersection Sudoku 1 Stage: 3 and 4 Challenge Level: A Sudoku with a twist. Magnetic Personality Stage: 2, 3 and 4 Challenge Level: 60 pieces and a challenge. What can you make and how many of the pieces can you use creating skeleton polyhedra? More on Mazes Stage: 2 and 3 There is a long tradition of creating mazes throughout history and across the world. This article gives details of mazes you can visit and those that you can tackle on paper. An Introduction to Magic Squares Stage: 1, 2, 3 and 4 Find out about Magic Squares in this article written for students. Why are they magic?! Isosceles Triangles Stage: 3 Challenge Level: Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw? 
Integrated Sums Sudoku Stage: 3 and 4 Challenge Level: The puzzle can be solved with the help of small clue-numbers which are either placed on the border lines between selected pairs of neighbouring squares of the grid or placed after slash marks on. . . . Factors and Multiple Challenges Stage: 3 Challenge Level: This package contains a collection of problems from the NRICH website that could be suitable for students who have a good understanding of Factors and Multiples and who feel ready to take on some. . . . Another Quadruple Clue Sudoku Stage: 3 and 4 Challenge Level: This is a variation of sudoku which contains a set of special clue-numbers. Each set of 4 small digits stands for the numbers in the four cells of the grid adjacent to this set. Inky Cube Stage: 2 and 3 Challenge Level: This cube has ink on each face which leaves marks on paper as it is rolled. Can you work out what is on each face and the route it has taken? Sociable Cards Stage: 3 Challenge Level: Move your counters through this snake of cards and see how far you can go. Are you surprised by where you end up? Difference Dynamics Stage: 4 and 5 Challenge Level: Take three whole numbers. The differences between them give you three new numbers. Find the differences between the new numbers and keep repeating this. What happens? Multiply the Addition Square Stage: 3 Challenge Level: If you take a three by three square on a 1-10 addition square and multiply the diagonally opposite numbers together, what is the difference between these products. Why?
2014-10-25 21:06:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3110142648220062, "perplexity": 2061.551978089786}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119650516.39/warc/CC-MAIN-20141024030050-00197-ip-10-16-133-185.ec2.internal.warc.gz"}
https://homework.cpm.org/category/CC/textbook/cc1/chapter/5/lesson/5.3.3/problem/5-93
### CC1 > Chapter 5 > Lesson 5.3.3 > Problem 5-93

5-93. Graph the trapezoid $A(6, 5), B(8, −2), C(−4, −2), D(−2, 5)$.

Your graph should look like the one below.

1. Find the length of the bottom base (segment $CB$). Then find the length of the top base (segment $AD$). Use grid units.

• Both $B$ and $C$ are on the $y = −2$ line, so you just need to find the distance between their x-coordinates to determine the length of $CB$. The same can be done for $AD$.

$\text{The length of segment }CB \text{ is }\left | 8 \right| + \left | -4 \right | = 12.\text{ Now find the length of segment } AD.$

2. Find the distance between the two bases, which is called the height. Use grid units.

• Use a similar method to part (a), but use the y-coordinates to find the distance between segment $CB$ and segment $AD$.
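The coordinate arithmetic in the hint can be double-checked with a couple of lines of Python (an added illustration, not part of the original hint): the distance between two points that share a y-coordinate is the absolute difference of their x-coordinates.

```python
# B is at x = 8 and C is at x = -4, both on the line y = -2,
# so the length of CB is the absolute difference of the x-coordinates.
B_x, C_x = 8, -4
CB = abs(B_x - C_x)
print(CB)  # -> 12, matching the hint
```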
2019-12-11 09:45:20
{"extraction_info": {"found_math": true, "script_math_tex": 9, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9181933999061584, "perplexity": 461.4393923505713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530452.95/warc/CC-MAIN-20191211074417-20191211102417-00523.warc.gz"}
https://pythoninchemistry.org/functions
# Functions

Functions are a fundamental aspect of many programming languages. They allow a programmer both to simplify their code, by hiding many lines of code behind a single line, and to reduce the amount of work needed when the same thing must be done many times. The concept of a function will hopefully be familiar from mathematics, where $f(x)$ is some mathematical operation that acts on the argument $x$, while the details of the function are abstracted away. An example of a function is $f(x) = x^2$. Using this we can say that $f(2) = 4$, $f(3)=9$, etc. A function in programming is very similar: it takes arguments and returns a value after some operation has taken place. The Pythonic way to define a function is:

def name_of_function(argument):
    operation
    return result

The def tells the computer that you want to define a function, and the return tells the computer that this is the thing that should be sent back to where the function is called. The use of functions is an important paradigm in programming – the following Jupyter notebook gives some examples of functions and how they can be used.

#### Andrew McCluskey chemistry/programming phd student
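Following the pattern above, here is one minimal concrete function (added as an illustration) that mirrors the mathematical $f(x) = x^2$:

```python
def square(x):
    # The programming analogue of f(x) = x^2: takes an argument,
    # performs an operation, and returns the result.
    return x ** 2

print(square(2))  # -> 4
print(square(3))  # -> 9
```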
2019-02-17 04:17:40
{"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7179703116416931, "perplexity": 438.6044675452937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481612.36/warc/CC-MAIN-20190217031053-20190217053053-00373.warc.gz"}
https://www.physicsforums.com/threads/what-does-b-dl-indicate-in-amperes-law.807242/
# What does B.dl indicate in Ampere's Law

1. Apr 7, 2015

### sawer

I know that $$\oint \vec E \cdot \vec{dS}$$ in Gauss Law indicates electric flux. $$\oint \vec E \cdot \vec{dS} = \frac{Q_{enc}}{\varepsilon_0}$$ But what does B.dl indicate in Ampere's Law? $\oint \vec{B} \cdot \vec{dl}$ = ?? Last edited: Apr 7, 2015

2. Apr 7, 2015

### vanhees71

Such a line integral around a closed loop is the circulation of the vector field, here the magnetic field. The fundamental laws are the Maxwell equations in local form, and the Ampere-Maxwell Law reads (written in terms of the macroscopic laws in Heaviside-Lorentz units) $$\vec{\nabla} \times \vec{H}-\frac{1}{c} \partial_t \vec{D}=\frac{1}{c} \vec{j}.$$ The integral form follows from integrating over a surface and using Stokes's integral theorem to change the curl part into a line integral along the boundary of the surface, $$\int_{\partial F} \mathrm{d} \vec{r} \cdot \vec{H} = \frac{1}{c} \int_F \mathrm{d}^2 \vec{f} \cdot (\vec{j}+\partial_t \vec{D}).$$ For the static case, where $\partial_t \vec{D}=0$, the right-hand side is the total electric current running through the surface under consideration. For the non-static case, it's misleading to interpret the $\partial_t \vec{D}$ term as "source" of the magnetic field. Here you need the full (retarded) solutions of Maxwell's equations to express the electromagnetic field in terms of their sources, which are the electric charge and current densities. See, e.g., https://en.wikipedia.org/wiki/Jefimenko's_equations

3. Apr 7, 2015

### gleem

Or in short $$\oint \mathbf{B}\cdot \mathbf{dl}=\mu _{0}\int _{S}\boldsymbol{\mathbf{J\cdot}}\boldsymbol{dA}$$ The line integral of B around any loop is equal to the total current crossing any surface bounded by that loop, at least for nonmagnetic materials and J >> ∂D/∂t

4. Apr 7, 2015

### sawer

Is there a special name for that, like in electric case, gauss law is equal to "electric flux".
$\oint \vec{B} \cdot \vec{dl}$ is not equal to magnetic flux, right?

5. Apr 7, 2015

### sawer

Is there a special name for that? (Like electric flux or magnetic flux etc...)

6. Apr 7, 2015

### gleem

I do not know of a special name for it. However, the related expression $$\oint \mathbf{H\cdot dl}$$ is called the magnetomotance. Last edited: Apr 7, 2015

7. Apr 10, 2015
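As an added worked illustration (not a post from the original thread) of what the circulation $\oint \vec{B} \cdot \vec{dl}$ buys you: around a circle of radius $r$ centred on an infinitely long straight wire carrying steady current $I$, symmetry makes $\vec{B}$ tangential with constant magnitude, so Ampere's law in SI units gives

```latex
\oint \vec{B} \cdot \mathrm{d}\vec{l} = B \oint \mathrm{d}l = 2\pi r B = \mu_0 I
\qquad\Longrightarrow\qquad
B = \frac{\mu_0 I}{2\pi r},
```

which is the familiar field of a long straight wire.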
2017-08-20 11:15:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8458366990089417, "perplexity": 1337.148296832814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106367.1/warc/CC-MAIN-20170820092918-20170820112918-00657.warc.gz"}
https://milania.de/blog/Triangular_system_of_linear_equations_in_LaTeX
## Triangular system of linear equations in LaTeX

Today, I wanted to write a system of linear equations in a triangular form using $$\LaTeX$$. As you may know, $$\LaTeX$$ and especially the amsmath package offer many ways to align equations. I wanted to achieve something like

\begin{alignat*}{2}% Use alignat when you wish to have individual equation numbering
  &l_{00} y_{0} && = b_{0} \\
  &l_{10} y_{0} + l_{11} y_{1} && = b_{1} \\
  &l_{20} y_{0} + l_{21} y_{1} + l_{22} y_{2} && = b_{2}
\end{alignat*}

I tried several ways, including the align or equation (with split) environment. But for some reason, all of them messed up. Finally, I found the alignat environment, also from the amsmath package, which did what I wanted. This environment lets you explicitly set the spacing between the equations. The MWE looks like

\documentclass{scrartcl}
\usepackage{amsmath}

\begin{document}
\begin{alignat*}{2}% Use alignat when you wish to have individual equation numbering
  &l_{00} y_{0} && = b_{0} \\
  &l_{10} y_{0} + l_{11} y_{1} && = b_{1} \\
  &l_{20} y_{0} + l_{21} y_{1} + l_{22} y_{2} && = b_{2}
\end{alignat*}
\end{document}

As the official documentation (p. 7) explains, the mandatory argument is the number of "equation columns", which can be calculated as the maximum number of & in a row plus 1, divided by 2.1 But, of course, that little inconvenience is easily accepted considering the beautiful output we get as a reward.

1. Don't ask me why we have to present an argument which could easily be calculated by someone else – a computer machine, for instance ;-).
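A small variant (my own sketch, not from the original post): dropping the star gives each row of the triangular system its own equation number. The column-count rule still applies — three & characters per row, so the mandatory argument is (3 + 1)/2 = 2.

```latex
\documentclass{scrartcl}
\usepackage{amsmath}

\begin{document}
% Same triangular system, but numbered: alignat instead of alignat*.
% Three & per row, so the mandatory argument is (3 + 1)/2 = 2.
\begin{alignat}{2}
  &l_{00} y_{0} && = b_{0} \\
  &l_{10} y_{0} + l_{11} y_{1} && = b_{1} \\
  &l_{20} y_{0} + l_{21} y_{1} + l_{22} y_{2} && = b_{2}
\end{alignat}
\end{document}
```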
https://scholarworks.iu.edu/dspace/browse?order=ASC&rpp=20&sort_by=1&etal=-1&offset=3665&type=title
Browsing by Title

Results:

• (Institute of Mathematical Statistics, 2012) Let μ be a probability measure on the real line. In this paper we prove that there exists a decomposition $\mu = \mu_{0}\boxplus \mu_{1}\boxplus \dots \boxplus \mu_{n}$ such that $\mu_{0}$ is infinitely divisible, and $\mu_{i}$ is ...
• (2011-01-24) Description of my time in Kibbutz Amiad, work with the sheep, Arab wedding, and meeting my wife. Leaving the kibbutz.
• (Department of Folklore and Ethnomusicology, Indiana University, 1974-10)
• (Criticism, 2012) Scenes from Richard Wright's Native Son (1940) could be read as pointed satire of the documentary aesthetic, and easily be taken as Wright's final statement on the 1930s radical documentary. Balthaser argues that 12 Million ...
• (Indiana University Cyclotron Facility, 1992)
• (Indiana University Cyclotron Facility, 1991)
• (International Journal of Communication, 2013) This article addresses the inherent bias in Brazilian telenovelas' representations of homosexual love. Medium- and genre-specific biases such as the visuality of telenovelas are powerful limiting agents of representation. ...
• (Sage Publications, 2011-12) This experimental study tested the knowledge gap hypothesis at the intersection of audience education levels and news formats (newspaper versus online). The findings reveal a gap in public affairs knowledge acquisition ...
• (2013-04-21)
• (Emerald Group Publishing Limited, 2007) Purpose – The purpose of this study is twofold: to examine the types of activity that nurses undertake on an online community of practice (APN-l) as well as the types of knowledge that nurses share with one another; and ...
• (2015-05-04)
• (Department of Folklore and Ethnomusicology, Indiana University, 1988)
• (American Physical Society, 1986) A reply to the comment on evidence for a phenomenological supersymmetry in atomic physics is presented.
• ([Bloomington, Ind.] : Indiana University, 2010-06-01) This dissertation will discuss Koszul algebras and the Koszul-type Np property. It is a basic problem of homological algebra to compute cohomology algebras of various augmented algebras. One main purpose is to find the ...
• (Indiana University Digital Library Program, 2012-11-28) The Kuali OLE version 0.8 release will be the first implementable release of Kuali OLE. This session will give an update to the project overall, and specific details as to the functionality included in version 0.8 and what ...
• (Indiana University Cyclotron Facility, 1985)
• (Indiana University Cyclotron Facility, 1986)
• (Indiana University Cyclotron Facility, 1984)
• (Indiana University Cyclotron Facility, 1980)
• (Indiana University Cyclotron Facility, 1981)
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-5-section-5-4-multiplying-polynomials-exercise-set-page-290/99a
## Intermediate Algebra (6th Edition)

$F(a+h)=a^2+2ah+h^2+3a+3h+2$

Replacing $x$ with $a+h$ in the given function, $F(x)=x^2+3x+2$, shows that $F(a+h)$ is equal to
\begin{array}{l} (a+h)^2+3(a+h)+2 \\\\= a^2+2ah+h^2+3a+3h+2 \end{array}
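As a quick sanity check (my own addition, not part of the textbook answer), the expansion can be verified numerically by comparing $F(a+h)$ against the expanded form at a few sample points:

```python
def F(x):
    # The given function from the exercise.
    return x**2 + 3*x + 2

def expanded(a, h):
    # The claimed expansion of F(a + h).
    return a**2 + 2*a*h + h**2 + 3*a + 3*h + 2

# Spot-check the algebra at several (a, h) pairs, including negatives
# and non-integers.
for a, h in [(0, 0), (1, 2), (-3, 0.5), (2.5, -1.25)]:
    assert abs(F(a + h) - expanded(a, h)) < 1e-9
print("expansion verified")
```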
https://stacks.math.columbia.edu/tag/0B2V
Lemma 43.23.6. Let $k$ be a field. Let $n \geq 1$ be an integer and let $x_{ij}$, $1 \leq i, j \leq n$ be variables. Then

$\det \left( \begin{matrix} x_{11} & x_{12} & \ldots & x_{1n} \\ x_{21} & \ldots & \ldots & \ldots \\ \ldots & \ldots & \ldots & \ldots \\ x_{n1} & \ldots & \ldots & x_{nn} \end{matrix} \right)$

is an irreducible element of the polynomial ring $k[x_{ij}]$.

Proof. Let $V$ be an $n$-dimensional vector space. Translating into geometry, the lemma signifies that the variety $C$ of non-invertible linear maps $V \to V$ is irreducible. Let $W$ be a vector space of dimension $n - 1$. By elementary linear algebra, the morphism

$\mathop{\mathrm{Hom}}\nolimits (W, V) \times \mathop{\mathrm{Hom}}\nolimits (V, W) \longrightarrow \mathop{\mathrm{Hom}}\nolimits (V, V),\quad (\psi , \varphi ) \longmapsto \psi \circ \varphi$

has image $C$. Since the source is irreducible, so is the image. $\square$
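For intuition (an illustrative aside, not part of the Stacks text), the case $n = 2$ can also be checked directly by an elementary degree argument:

```latex
% n = 2: the determinant is linear in x_{11}.
%   det = x_{11} x_{22} - x_{12} x_{21}
% Suppose det = f g with f, g non-units in k[x_{11}, x_{12}, x_{21}, x_{22}].
% Comparing degrees in x_{11}, one factor, say g, has degree 0 in x_{11},
% and f = a x_{11} + b with a, b free of x_{11}. Then
%   a g = x_{22}  and  b g = -x_{12} x_{21},
% so g divides both x_{22} and x_{12} x_{21}. These are coprime, hence g
% is a unit -- a contradiction. So the 2x2 determinant is irreducible.
\det \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{pmatrix}
  = x_{11} x_{22} - x_{12} x_{21}
```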