http://www.chegg.com/homework-help/questions-and-answers/rms-value-waveform-assume-symmetry-appears--q3260740
## Find RMS from graph Find the RMS value for this waveform. You may assume symmetry where it appears so.
2013-06-19 19:41:43
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9256542325019836, "perplexity": 3586.4718187196036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709101476/warc/CC-MAIN-20130516125821-00053-ip-10-60-113-184.ec2.internal.warc.gz"}
https://nadre.ethernet.edu.et/record/3616/export/dcite4
Book section Open Access EVALUATION OF CONSTRUCTION MATERIAL WASTE MANAGEMENT IN ADDIS ABABA HOUSING PROJECT OFFICE: LIDETA PROJECT BRANCH ARABSA SITE IN FOCUS. KIFLIE GEREME; Prof. Belete Kebede DataCite XML Export <?xml version='1.0' encoding='utf-8'?> <creators> <creator> <creatorName>KIFLIE GEREME</creatorName> </creator> <creator> <creatorName>Prof. Belete Kebede</creatorName> </creator> </creators> <titles> <title>EVALUATION OF CONSTRUCTION MATERIAL WASTE MANAGEMENT IN ADDIS ABABA HOUSING PROJECT OFFICE: LIDETA PROJECT BRANCH ARABSA SITE IN FOCUS.</title> </titles> <publisher>National Academic Digital Repository of Ethiopia</publisher> <publicationYear>2018</publicationYear> <dates> <date dateType="Issued">2018-01-01</date> </dates> <language>en</language> <resourceType resourceTypeGeneral="Text">Book section</resourceType> <alternateIdentifiers> </alternateIdentifiers> <relatedIdentifiers> </relatedIdentifiers> <rightsList> <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights> </rightsList> <descriptions> <description descriptionType="Abstract">Material waste is defined as the difference between the value of materials delivered and accepted on site and the value of those properly used in the specified and accurately measured work, after deducting the cost saving of substituted materials transferred elsewhere; unnecessary cost and time may be incurred through materials wastage. The main purpose of this study was to assess the current situation of managing and minimizing wastage of construction materials in the housing projects of the Lideta branch, Arabsa site, by evaluating the causes, the impact on project performance, and the level of contribution and practice of material wastage minimization strategies. To answer the basic research questions of the study, both primary and secondary sources were used. To this end, the respondents were asked to rank the main sources and causes of construction materials wastage. 
Out of a total population of 174, the researcher selected a sample of 121; both random and non-random sampling were used. The major findings of this study were that, among the performance measurement criteria of projects, cost is greatly affected by material wastage at the Lideta housing construction branch office of the Arabsa site; the planned completion time and quality criteria are also affected by the presence of material wastage at the branch office. Furthermore, the major sources of construction material wastage in the project office are site management, design and documentation factors, procurement, operation (on site, equipment), materials transportation, handling and storage, site supervision, and others (weather, theft), respectively. Based on the study, the researcher recommends that the project office start construction projects only after the design and bill of quantities have been approved. Secondly, the approach to project management should be changed, and contractors should be permitted to supply materials rather than the government being the sole supplier. Finally, the project office should be free from political interference and managed by professionals who have good project management skills.</description> <description descriptionType="Other">Presented on 13 11 2018</description> </descriptions> </resource>
2019-08-23 17:54:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22261743247509003, "perplexity": 6627.246655534539}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318952.90/warc/CC-MAIN-20190823172507-20190823194507-00192.warc.gz"}
http://www.maa.org/programs/faculty-and-departments/classroom-capsules-and-notes/return-of-the-grazing-goat-in-n-dimensions?device=mobile
Return of the Grazing Goat in '$$n$$' Dimensions Generalizations of the problem: A goat is tethered to the edge of a disc shaped field of radius $$r$$. The goat's rope is of length $$kr$$. If the field is $$n$$-dimensional, what fraction of it can the goat reach, and what happens as $$n$$ approaches infinity? Author(s): Mark D. Meyerson (U. S. Naval Academy) Publication Date: Wednesday, August 3, 2005 Original Publication Source: College Mathematics Journal Original Publication Date: November, 1984 Subject(s): Calculus Applicable Course(s): 4.11 Advanced Calc I, II, & Real Analysis
2014-10-24 15:42:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4301399290561676, "perplexity": 12102.12677479673}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646209.30/warc/CC-MAIN-20141024030046-00121-ip-10-16-133-185.ec2.internal.warc.gz"}
https://cas-events.mpe.mpg.de/indico/event/0/session/5/contribution/83
# From clouds to protoplanetary disks: the astrochemical link 4-8 October 2015 Hans Harnack Haus Europe/Berlin timezone # Contribution Contributed Talk (WITHDRAWN) FROM CLOUDS TO DENSE CORES 2 # Water and H2O+ in dense galactic nuclei ## Speakers • Prof. Floris VAN DER TAK ## Content Dense gas in galactic nuclei is known to feed central starbursts and AGN, but the properties of this gas are poorly known due to the high obscuration by dust. Water and H2O+ are useful to trace the oxygen chemistry of interstellar gas, and its ionization rate. We present Herschel/HIFI spectra of the H2O 1113 GHz and H2O+ 1115 GHz lines toward 5 nearby prototypical starburst/AGN systems. The beam size of 20'' corresponds to resolutions between 0.35 and 7 kpc. The observed line profiles range from pure absorption (NGC 4945, M82, Arp 220) to P Cygni indicating outflow (NGC 253) and inverse P Cygni indicating infall (Cen A). The profiles of H2O and H2O+ are remarkably similar, indicating that the lines trace the same gas. We estimate column densities assuming negligible excitation (for absorption features) and using a non-LTE model (for emission features), adopting calculated collision data for H2O and rough estimates for H2O+. Columns range from ~1e13 to ~1e15 cm$^{-2}$ for both species, and are similar between absorption and emission components. The H2O/H2O+ ratios are 1.4-5.6, indicating an origin of the lines in diffuse gas. However, the H2O abundance is only ~1e-9, perhaps indicating enhanced photodissociation by UV from the nuclei or depletion of H2O onto dust grains. We combine our N(H2O+) values with literature data to estimate the cosmic-ray ionization rates for our sample, adopting recent Galactic values for the average cloud density, the atomic hydrogen fraction, and the ionization efficiency. 
We find zeta_CR ~1e-16 s$^{-1}$, similar to the value for the Galactic disk, but somewhat below that of the Galactic center and well below that of AGN estimates from excited-state H3O+ lines. Since low filling factors appear unlikely, we conclude that the ground-state lines of H2O and H2O+ probe primarily non-nuclear gas in the disks of these centrally active galaxies.
2018-05-25 20:48:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6184686422348022, "perplexity": 8643.808401439643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867217.1/warc/CC-MAIN-20180525200131-20180525220131-00613.warc.gz"}
https://www.physics-in-a-nutshell.com/article/29/rotating-frame-of-reference
Physics in a nutshell $\renewcommand{\D}[2][]{\,\text{d}^{#1} {#2}}$ $\DeclareMathOperator{\Tr}{Tr}$ # Rotating Frame of Reference ## Newton's 2nd Law The derivation of the Navier-Stokes equations is based on Newton's second law: It is assumed that the kinematics of a particle is determined by the particle's interaction with its physical environment. Every acceleration (change of velocity) is caused by an external force. If, on the other hand, one knows all relevant forces, the acceleration can be calculated, and so can the trajectories (by integrating the acceleration twice). ### Non-Inertial Frames of Reference However, this approach is, strictly speaking, not valid in non-inertial (accelerated) frames of reference (such as the rotating earth). In these, additional accelerations are observed which are not due to actual physical forces as described by Newton's laws. So Newton's 2nd law cannot be applied directly to determine the equations of motion! This issue can be fixed by considering a coordinate transformation between the observer's (accelerated) frame and any inertial frame of reference (in which Newton's 2nd law applies). Then one can simply apply Newton's 2nd law in the inertial frame and replace the inertial acceleration with other quantities that can be measured directly by the observer. ## Coordinate Representation, Change of Basis, Rotation Matrix Consider two Cartesian coordinate systems: one is inertial ($\text{in}$) and the other one ($\text{rot}$) rotates with respect to the first with constant angular velocity: \begin{align} \Omega = \frac{\D{\varphi}}{\D{t}} \label{eq:Omega} \end{align} Without loss of generality, the $z$-axes of both systems can be chosen to be aligned parallel to the axis of rotation. Then the relative orientation between the two coordinate systems is given by the angle $\varphi (t)$ that is spanned by the respective $x$- or $y$-axes. 
Depending on the choice of the coordinate system, any vector $\vec{a}$ can be specified by its coordinates with respect to the corresponding basis. In general, these differ for the different coordinate systems but are related by a transformation matrix $R$ (and its inverse $R^{-1}$): \begin{align} \left[ \vec{a} \right]_\text{rot} &= R \;\;\;\cdot \left[ \vec{a} \right]_\text{in} \label{eq:R} \\ \left[ \vec{a} \right]_\text{in}\; &= R^{-1} \cdot \left[ \vec{a} \right]_\text{rot} \label{eq:R^T} \end{align} Here \begin{align} \left[ \vec{a} \right]_\text{rot} := \begin{pmatrix} x_\text{rot} \\ y_\text{rot} \\ z_\text{rot} \end{pmatrix} \quad \text{and} \quad \left[ \vec{a} \right]_\text{in} := \begin{pmatrix} x_\text{in} \\ y_\text{in} \\ z_\text{in} \end{pmatrix} \end{align} are the coordinate representations of the vector $\vec{a}$ with respect to the rotating basis ($\text{rot}$) and the inertial one ($\text{in}$). ## Rotation Matrix The transformation matrix $R$ depends on the angle of rotation $\varphi (t)$ and contains the coordinates of the rotational unit vectors with respect to the inertial basis: \begin{align} R &= \begin{pmatrix} \left[ \hat{x}_\text{rot} \right]_\text{in} & \left[ \hat{y}_\text{rot} \right]_\text{in} & \left[ \hat{z}_\text{rot} \right]_\text{in} \end{pmatrix} \\[2ex] &= \begin{pmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix} \end{align} As is characteristic of a rotation matrix, \begin{align} R^{-1} = R^T \end{align} since $R R^{T} = \mathbb{I}$ (as you can easily verify). Thus, instead of calculating the inverse laboriously, one can simply take the transpose. 
### Useful Relations At this point it is convenient to calculate two more expressions which will be needed later: \begin{align} \frac{\D{R}}{\D{t}} &\stackrel{\eqref{eq:Omega}}{=} \frac{\D{R}}{\D{\varphi}} \underbrace{\frac{\D{\varphi}}{\D{t}}}_{:= \Omega} = \Omega \cdot \begin{pmatrix} -\sin\varphi & -\cos\varphi & 0 \\ \cos\varphi & -\sin\varphi & 0 \\ 0 & 0 & 0 \end{pmatrix} \label{eq:dRdT} \\[2ex] \frac{\D{R}}{\D{t}} \cdot R^T &\stackrel{\eqref{eq:dRdT}}{=} \Omega \cdot \begin{pmatrix} -\sin\varphi & -\cos\varphi & 0 \\ \cos\varphi & -\sin\varphi & 0 \\ 0 & 0 & 0 \end{pmatrix} \cdot \begin{pmatrix} \cos\varphi & \sin\varphi & 0 \\ -\sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix} \\ &= \Omega \cdot \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \Omega \end{pmatrix} \times \\ &= \vec{\Omega} \times \label{eq:Omegax} \end{align} The penultimate step may be a bit surprising. It has to be understood in the sense of an operator equation that is only meaningful when applied to a dummy vector. So if you apply both operators to an arbitrary vector, you'll notice that the result is the same. Accordingly, one finds: \begin{align} \frac{\D{R^T}}{\D{t}} \cdot R = - \vec{\Omega} \times \label{eq:-Omegax} \end{align} ## Time Derivatives When the origins of the coordinate systems coincide, the position vector $\vec{r}$ is independent of the particular coordinate system, since it connects this common origin with a particle's location in space (which is also independent of the frame of reference). ### Velocities This is a bit different for velocities and accelerations: They are defined as the change of the coordinates of a position vector in a particular coordinate system. A vector which is at rest in the rotating frame rotates with non-zero velocity in the inertial one. 
Therefore, we will use another (inner) index for velocities and accelerations indicating the frame of reference in which they are defined. The resulting velocity vector can, however, be represented as an arrow in space, which in turn can be given either with respect to the inertial or rotating basis (indicated by the outer index). \begin{align} \left[ \vec{v}_\text{rot} \right]_\text{rot} & := \frac{\D{}}{\D{t}} \left[ \vec{r} \right]_\text{rot} \stackrel{\eqref{eq:R}}{=} \frac{\D{}}{\D{t}} \left( R \cdot \left[ \vec{r} \right]_\text{in} \right) \\[1ex] &= \frac{\D{R}}{\D{t}} \cdot \left[\vec{r}\right]_\text{in} + \underbrace{ R \cdot \underbrace{ \frac{\D{}}{\D{t}} \left[ \vec{r} \right]_\text{in} }_{= \left[ \vec{v}_\text{in} \right]_\text{in}} }_{ = \left[ \vec{v}_\text{in} \right]_\text{rot} } \\ &\stackrel{\eqref{eq:R^T}\eqref{eq:R}}{=} \underbrace{ \frac{\D{R}}{\D{t}} R^T }_{\vec{\Omega}\times} \cdot \left[\vec{r}\right]_\text{rot} + \left[ \vec{v}_\text{in} \right]_\text{rot} \\ &\stackrel{\eqref{eq:Omegax}}{=} \vec{\Omega} \times \left[ \vec{r} \right]_\text{rot} + \left[ \vec{v}_\text{in} \right]_\text{rot} \end{align} Analogously: \begin{align} \left[ \vec{v}_\text{in} \right]_\text{in} & := \frac{\D{}}{\D{t}} \left[ \vec{r} \right]_\text{in} \\ &= -\vec{\Omega} \times \left[\vec{r}\right]_\text{in} + \left[ \vec{v}_\text{rot} \right]_\text{in} \label{eq:vRotIn} \end{align} These expressions can now also be stated in coordinate-independent form: The velocities in the inertial and rotating frame of reference are related by:[1] \begin{align} \vec{v}_\text{rot} = \vec{\Omega} \times \vec{r} + \vec{v}_\text{in} \end{align} Thus, the two velocities $\vec{v}_\text{rot}$ and $\vec{v}_\text{in}$ differ by a term $\vec{\Omega} \times \vec{r}$ which accounts for the relative motion of the coordinate systems with respect to each other. 
It appears because the transformation matrix $R(\varphi (t) )$ is a function of time and therefore the order of the time derivative operator ($\frac{\D{}}{\D{t}}$) and the change of basis operators ($R$,$R^T$) cannot just be reversed. Instead, the product rule has to be applied, which creates the additional term. #### Accelerations For the accelerations one can proceed analogously to the velocities: \begin{align} \left[ \vec{a}_\text{rot} \right]_\text{rot} &:= \frac{ \D{} }{ \D{t} } \left[ \vec{v}_\text{rot} \right]_\text{rot} \stackrel{\eqref{eq:R}}{=} \frac{ \D{} }{ \D{t} } \left( R \cdot \left[ \vec{v}_\text{rot} \right]_\text{in} \right) \\[1ex] &\stackrel{\eqref{eq:vRotIn}}{=} \frac{ \D{} }{ \D{t} } \left[ R \cdot \left( \left[ \vec{v}_\text{in} \right]_\text{in} + \vec{\Omega} \times \left[ \vec{r} \right]_\text{in} \right) \right] \\[2ex] &= \frac{ \D{R} }{ \D{t} } \left[ \vec{v}_\text{in} \right]_\text{in} + R \frac{ \D{} }{ \D{t} } \left[ \vec{v}_\text{in} \right]_\text{in} + \frac{ \D{R} }{ \D{t} } \vec{\Omega} \times \left[ \vec{r} \right]_\text{in} + R \cdot \vec{\Omega} \times \frac{ \D{} }{ \D{t} } \left[ \vec{r} \right]_\text{in} \\[2ex] &= \underbrace{ \frac{ \D{R} }{ \D{t} } R^T }_{ \vec{\Omega} \times } \left[ \vec{v}_\text{in} \right]_\text{rot} + \left[ \vec{a}_\text{in} \right]_\text{rot} + \underbrace{ \frac{ \D{R} }{ \D{t} } R^T }_{\vec{\Omega} \times} \vec{\Omega} \times \left[ \vec{r} \right]_\text{rot} + \vec{\Omega} \times \left[ \vec{v}_\text{in} \right]_\text{rot} \\[2ex] &= 2 \cdot \vec{\Omega} \times \left[ \vec{v}_\text{in} \right]_\text{rot} + \vec{\Omega} \times \left( \vec{\Omega} \times \left[ \vec{r} \right]_\text{rot} \right) + \left[ \vec{a}_\text{in} \right]_\text{rot} \end{align} In coordinate-independent notation this reads: In the case of a time-independent rotation vector $\vec{\Omega}$ the accelerations within the inertial and rotating frame of reference are related by: \begin{align} \vec{a}_\text{rot} = 
\underbrace{ 2 \cdot \vec{\Omega} \times \vec{v}_\text{in} }_\text{Coriolis} + \underbrace{ \vec{\Omega} \times \left( \vec{\Omega} \times \vec{r} \right) }_\text{centrifugal} + \vec{a}_\text{in} \end{align} The two additional terms are referred to as the Coriolis and centrifugal accelerations, respectively.[2] [3] ## References [1] Pijush K. Kundu and Ira M. Cohen, Fluid Mechanics, Academic Press, 2002 (ch. 12) [2] D. J. Tritton, Physical Fluid Dynamics, Oxford University Press, 2006 (ch. 16.2) [3] Pijush K. Kundu and Ira M. Cohen, Fluid Mechanics, Academic Press, 2002 (ch. 12)
2018-12-16 23:57:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 12, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000098943710327, "perplexity": 1316.410068618415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828018.77/warc/CC-MAIN-20181216234902-20181217020902-00525.warc.gz"}
https://chemistry.stackexchange.com/questions/150036/meaning-of-the-term-activation-of-alkene/150038#150038
# Meaning of the term "Activation of alkene" While reading organic texts, I have come across authors referring to "Activation of alkene" what does that mean !? Does it mean to include the alkene in resonance or what else exactly ? The palladium-catalyzed C-C coupling between aryl halides or vinyl halides and activated alkenes in the presence of a base is referred as the "Heck Reaction" Activation of an alkene just means that the double bond has a higher electron density than that of a normal isolated double bond. That is, the electron density in the double bond is greater than the one observed in ethene $$\ce{CH2=CH2}$$. Activation, in organic chemistry, generally means the compound displays a greater nucleophilic nature than it normally should due to increased electron density. For example, $$\ce{-OCH3}$$ group activates benzene when it forms toluidine. This makes the benzene ring more electron rich and so make it easier to react in nucleophilic reactions. • The term was used to refer to an $\alpha, \beta$ unsaturated carbonyl in my notes for the Michael reaction. Doesn't that mean it's an electron deficient bond? Apr 30 at 9:00
2021-10-16 21:16:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45164549350738525, "perplexity": 1781.928112076388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585025.23/warc/CC-MAIN-20211016200444-20211016230444-00569.warc.gz"}
http://ingaseguros.com.br/index.php/component/content/category/2-uncategorised
superart.com.br Finding out how to manifest your desires is simple when you begin with a fantastic teacher. Answers to do my calculus homework every question ought to be written in space. Please don't hesitate to get in touch with us in case you have any questions. Students then analyze the outcomes of the live demonstration. The MIT Physics Department is among the largest in the country, in part because it has astronomy and astrophysics. Education is supplied by an external force that's a teacher or parents. Please note there are many courses that are required each semester since they have to be taken in a particular order (see the Class Schedule). Transportation has played a large role in the development of the human race. Within Energy Psychology (EP) there are lots of modalities from which to select. Introducing Physics Education A heavy body moving at a quick velocity isn't simple to stop. At times, data transmission modes are also called directional modes. There's a dimension that involves advanced Physics that is fundamentally imperative to the life supporting mechanism in earth. Our world is just one of several possible hidden realities, only a single kind of existence out of infinite dimensions which exist at exactly the same time. You might have little idea about a particular task in your life but you aren't very well mindful of the techniques to handle it. The reason your life might not be playing out in the manner in which you would like is extremely likely because of an imbalance and https://arthistory.uchicago.edu/graduate/profiles/rumora distortion in your energy field. If you think you will observe waves, you will observe waves and if you think you will observe particles, you will notice particles. The wave theory and the particle theory of light proved long thought to be at odds together. In cardiovascular disease, physicists work on the measurement of circulation and oxygenation. 
Your religious beliefs don't have to coincide with the psychic's to find a genuine reading. As the theories progress and new discoveries are created, not just the answer but the entire question changes. Thinkers previously have delved into two fundamental elements of existence and they're meaning and perception. It has to use many different fields to study the nature of the universe. It is considered the most fundamental science because it provides a basis for all other sciences. Physics, specifically, focuses on a few of the most fundamental of questions regarding our physical universe. Generally speaking, you'll need exactly the same number of equations as there are unknowns in the issue. On occasion the concern of repeating the very same mistakes will be the sole reason to repeat them. At the time that your doubts are clear you can perform in the best and effective method. Physics Education Fundamentals Explained Moreover, there's a Graduate Seminar which delivers an introduction to ongoing research chances in the department. Government's initiative to supply e-rickshaws for convenient and pollution free transportation is another big reason behind the development of the marketplace. The conference is a venue for specialists that are active researchers in the subject of physics education. Physicists often wind up in a wide selection of leadership positions. The medical physicist is called upon in order to contribute clinical and scientific ideas and resources to fix the numerous and diverse bodily issues that arise continually in many specialized medical locations. So long as you're in good standing with people you'll forever on their mind every time a superior lead or opportunity comes across their desk. When you search for the marital help you require, you're on your way to developing a stronger marriage. Make it a location where people may feel welcome to talk about their ideas, demands, and experiences. Creating an e-book is straightforward enough. 
https://en.wikipedia.org/wiki/Jellium
# Jellium

Jellium, also known as the uniform electron gas (UEG) or homogeneous electron gas (HEG), is a quantum mechanical model of interacting electrons in a solid in which the positive charges (i.e. the atomic nuclei) are assumed to be uniformly distributed in space, so that the electron density is likewise uniform. This model allows one to focus on the effects in solids that arise from the quantum nature of electrons and their mutual repulsion (due to like charge), without explicitly introducing the atomic lattice and structure that make up a real material. Jellium is often used in solid-state physics as a simple model of delocalized electrons in a metal, where it can qualitatively reproduce features of real metals such as screening, plasmons, Wigner crystallization and Friedel oscillations.

At zero temperature, the properties of jellium depend solely upon the constant electronic density. This lends it to a treatment within density functional theory; the formalism itself provides the basis for the local-density approximation to the exchange-correlation energy density functional.

The term jellium was coined by Conyers Herring, alluding to the "positive jelly" background and the typical metallic behavior the model displays.[1]

## Hamiltonian

The jellium model treats the electron-electron coupling rigorously. The artificial, structureless background charge interacts electrostatically with itself and with the electrons.
The jellium Hamiltonian for N electrons confined within a volume of space Ω, with electronic density ρ(r) and (constant) background charge density n(R) = N/Ω, is[2][3]

${\displaystyle {\hat {H}}={\hat {H}}_{\mathrm {el} }+{\hat {H}}_{\mathrm {back} }+{\hat {H}}_{\mathrm {el-back} },\,}$

where

• Hel is the electronic Hamiltonian, consisting of the kinetic and electron-electron repulsion terms:

${\displaystyle {\hat {H}}_{\mathrm {el} }=\sum _{i=1}^{N}{\frac {p_{i}^{2}}{2m}}+\sum _{i<j}{\frac {e^{2}}{|\mathbf {r} _{i}-\mathbf {r} _{j}|}}}$

• Hback is the Hamiltonian of the positive background charge interacting electrostatically with itself:

${\displaystyle {\hat {H}}_{\mathrm {back} }={\frac {e^{2}}{2}}\int _{\Omega }\mathrm {d} \mathbf {R} \int _{\Omega }\mathrm {d} \mathbf {R} '\ {\frac {n(\mathbf {R} )n(\mathbf {R} ')}{|\mathbf {R} -\mathbf {R} '|}}={\frac {e^{2}}{2}}\left({\frac {N}{\Omega }}\right)^{2}\int _{\Omega }\mathrm {d} \mathbf {R} \int _{\Omega }\mathrm {d} \mathbf {R} '\ {\frac {1}{|\mathbf {R} -\mathbf {R} '|}}}$

• Hel-back is the electron-background interaction Hamiltonian, again an electrostatic interaction:

${\displaystyle {\hat {H}}_{\mathrm {el-back} }=\int _{\Omega }\mathrm {d} \mathbf {r} \int _{\Omega }\mathrm {d} \mathbf {R} \ {\frac {\rho (\mathbf {r} )n(\mathbf {R} )}{|\mathbf {r} -\mathbf {R} |}}=-e^{2}{\frac {N}{\Omega }}\sum _{i=1}^{N}\int _{\Omega }\mathrm {d} \mathbf {R} \ {\frac {1}{|\mathbf {r} _{i}-\mathbf {R} |}}}$

Hback is a constant and, in the limit of an infinite volume, divergent along with Hel-back. The divergence is canceled by a term from the electron-electron coupling: the background interactions cancel and the system is dominated by the kinetic energy and the coupling of the electrons. Such analysis is done in Fourier space; the interaction terms of the Hamiltonian that remain correspond to the Fourier expansion of the electron coupling for which q ≠ 0.
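The q = 0 cancellation can be made concrete with a short sketch (an illustration, not part of the article; the function name and cubic-box setup are invented for the example). It tabulates the Fourier components v(q) = 4πe²/(Ω q²) of the Coulomb interaction on the reciprocal lattice of a cubic box, simply omitting the divergent q = 0 component that the background terms cancel:

```python
import itertools
import math

def coulomb_fourier_coefficients(box_length, n_max, e2=1.0):
    """Fourier components v(q) = 4*pi*e2 / (Omega * q^2) of the Coulomb
    interaction in a cubic box of side L, on the reciprocal-lattice
    vectors q = 2*pi*n/L with |n_i| <= n_max.  The divergent q = 0
    component is omitted: in the jellium Hamiltonian it is cancelled
    exactly by H_back and H_el-back."""
    volume = box_length ** 3
    coeffs = {}
    for n in itertools.product(range(-n_max, n_max + 1), repeat=3):
        if n == (0, 0, 0):
            continue  # cancelled by the uniform positive background
        q_squared = sum((2.0 * math.pi * ni / box_length) ** 2 for ni in n)
        coeffs[n] = 4.0 * math.pi * e2 / (volume * q_squared)
    return coeffs

coeffs = coulomb_fourier_coefficients(box_length=1.0, n_max=1)
```

Only the q ≠ 0 components survive, and these are exactly the interaction terms that enter the Fourier-space analysis described above.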
## Contributions to the total energy

The traditional way to study the electron gas is to start with non-interacting electrons governed only by the kinetic energy part of the Hamiltonian, also called a Fermi gas. The kinetic energy per electron is given by

${\displaystyle KE={\frac {3}{5}}E_{F}={\frac {3}{5}}{\frac {\hbar ^{2}k_{F}^{2}}{2m_{e}}}={\frac {3}{5}}{\biggl (}{\frac {9\pi }{4}}{\biggr )}^{\frac {2}{3}}{\frac {1}{(r_{s}/a_{0})^{2}}}{\textrm {Ry}}\approx {\frac {2.21}{(r_{s}/a_{0})^{2}}}{\textrm {Ry}}}$

where ${\displaystyle E_{F}}$ is the Fermi energy, ${\displaystyle k_{F}}$ is the Fermi wave vector, and the last expression shows the dependence on the Wigner–Seitz radius ${\displaystyle r_{s}}$, with energy measured in rydbergs.

Without doing much work, one can guess that the electron-electron interactions will scale like the inverse of the average electron-electron separation and hence as ${\displaystyle 1/r_{12}}$ (since the Coulomb interaction goes like one over the distance between charges). So if we view the interactions as a small correction to the kinetic energy, we are describing the limit of small ${\displaystyle r_{s}}$ (i.e. ${\displaystyle 1/r_{s}^{2}}$ being larger than ${\displaystyle 1/r_{s}}$) and hence high electron density. Unfortunately, real metals typically have ${\displaystyle r_{s}}$ between 2 and 5, which means this picture needs serious revision.

The first correction to the free electron model for jellium comes from the Fock exchange contribution to the electron-electron interactions. Adding this in, one has a total energy of

${\displaystyle E={\frac {2.21}{r_{s}^{2}}}-{\frac {0.916}{r_{s}}}}$

where the negative term is due to exchange: exchange interactions lower the total energy.
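To see numerically where the small-${\displaystyle r_{s}}$ picture breaks down, one can evaluate the two terms directly. The sketch below (illustrative; the function names are mine, and all energies are in rydbergs) uses the expressions above and solves for the radius at which the exchange term equals the kinetic term, ${\displaystyle 0.916/r_{s}=2.21/r_{s}^{2}}$, giving ${\displaystyle r_{s}\approx 2.41}$, right at the lower edge of the metallic range just quoted:

```python
def kinetic_energy(rs):
    """Kinetic energy per electron of the free Fermi gas, in rydbergs."""
    return 2.21 / rs**2

def exchange_energy(rs):
    """First-order (Fock) exchange energy per electron, in rydbergs."""
    return -0.916 / rs

def total_energy_hf(rs):
    """Kinetic-plus-exchange energy per electron of jellium, in rydbergs."""
    return kinetic_energy(rs) + exchange_energy(rs)

# The two terms are equal in magnitude where 2.21/rs**2 == 0.916/rs,
# i.e. at rs = 2.21 / 0.916 ~ 2.41:
rs_crossover = 2.21 / 0.916
```

For larger ${\displaystyle r_{s}}$ the exchange term dominates, which is why treating the interaction as a small correction fails for real metals.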
Higher-order corrections to the total energy are due to electron correlation. Working in a series for small ${\displaystyle r_{s}}$, one finds

${\displaystyle E={\frac {2.21}{r_{s}^{2}}}-{\frac {0.916}{r_{s}}}+0.0622\ln(r_{s})-0.096+O(r_{s})}$

The series is quite accurate for small ${\displaystyle r_{s}}$ but of dubious value for the ${\displaystyle r_{s}}$ values found in actual metals. For the full range of ${\displaystyle r_{s}}$, Chachiyo's correlation energy density can be used as the higher-order correction. In this case,

${\displaystyle E={\frac {2.21}{r_{s}^{2}}}-{\frac {0.916}{r_{s}}}+a\ln \left(1+{\frac {b}{r_{s}}}+{\frac {b}{r_{s}^{2}}}\right)}$, [4]

which agrees quite well (on the order of milli-Hartree) with quantum Monte Carlo simulations.

## Zero-temperature phase diagram of jellium in three and two dimensions

The physics of the zero-temperature phase behavior of jellium is driven by competition between the kinetic energy of the electrons and the electron-electron interaction energy. The kinetic-energy operator in the Hamiltonian scales as ${\displaystyle 1/r_{s}^{2}}$, where ${\displaystyle r_{s}}$ is the Wigner–Seitz radius, whereas the interaction energy operator scales as ${\displaystyle 1/r_{s}}$. Hence the kinetic energy dominates at high density (small ${\displaystyle r_{s}}$), while the interaction energy dominates at low density (large ${\displaystyle r_{s}}$).

The limit of high density is where jellium most resembles a noninteracting free electron gas. To minimize the kinetic energy, the single-electron states are delocalized, in a state very close to the Slater determinant (non-interacting state) constructed from plane waves. Here the lowest-momentum plane-wave states are doubly occupied by spin-up and spin-down electrons, giving a paramagnetic Fermi fluid.
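A small numerical sketch (mine, not part of the article) makes the consistency of the two correlation expressions visible. For small ${\displaystyle r_{s}}$ the Chachiyo form ${\displaystyle a\ln(1+b/r_{s}+b/r_{s}^{2})}$ behaves as ${\displaystyle a\ln b-2a\ln r_{s}}$, so matching the series above requires ${\displaystyle -2a\approx 0.0622}$ and ${\displaystyle a\ln b}$ close to the constant term. The constants used below, ${\displaystyle a=(\ln 2-1)/\pi ^{2}}$ Ry and ${\displaystyle b\approx 20.4563}$, are quoted from the cited paper from memory, so treat them as indicative:

```python
import math

# Constants of the Chachiyo correlation formula, in rydberg units
# (quoted from memory from the cited 2016 paper; indicative only).
A_CHACHIYO = (math.log(2.0) - 1.0) / math.pi**2   # ~ -0.0311 Ry
B_CHACHIYO = 20.4562557

def correlation_series(rs):
    """High-density (small r_s) correlation series, in rydbergs."""
    return 0.0622 * math.log(rs) - 0.096

def correlation_chachiyo(rs):
    """Closed-form correlation energy usable over the full r_s range."""
    return A_CHACHIYO * math.log(1.0 + B_CHACHIYO / rs + B_CHACHIYO / rs**2)

def total_energy(rs):
    """Kinetic + exchange + correlation energy per electron, in rydbergs."""
    return 2.21 / rs**2 - 0.916 / rs + correlation_chachiyo(rs)
```

At high density the two correlation expressions agree closely, while the closed form remains finite and well behaved at the metallic densities where the series fails.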
At lower densities, where the interaction energy is more important, it is energetically advantageous for the electron gas to spin-polarize (i.e., to have an imbalance in the number of spin-up and spin-down electrons), resulting in a ferromagnetic Fermi fluid. This phenomenon is known as itinerant ferromagnetism. At sufficiently low density, the kinetic-energy penalty resulting from the need to occupy higher-momentum plane-wave states is more than offset by the reduction in the interaction energy due to the fact that exchange effects keep indistinguishable electrons away from one another. A further reduction in the interaction energy (at the expense of kinetic energy) can be achieved by localizing the electron orbitals. As a result, jellium at zero temperature at a sufficiently low density will form a so-called Wigner crystal, in which the single-particle orbitals are of approximately Gaussian form centered on crystal lattice sites. Once a Wigner crystal has formed, there may in principle be further phase transitions between different crystal structures and between different magnetic states for the Wigner crystals (e.g., antiferromagnetic to ferromagnetic spin configurations) as the density is lowered. When Wigner crystallization occurs, jellium acquires a band gap. 
Within Hartree–Fock theory, the ferromagnetic fluid abruptly becomes more stable than the paramagnetic fluid at a density parameter of ${\displaystyle r_{s}=5.45}$ in three dimensions (3D) and ${\displaystyle 2.01}$ in two dimensions (2D).[5] However, according to Hartree–Fock theory, Wigner crystallization occurs at ${\displaystyle r_{s}=4.5}$ in 3D and ${\displaystyle 1.44}$ in 2D, so that jellium would crystallize before itinerant ferromagnetism occurs.[6] Furthermore, Hartree–Fock theory predicts exotic magnetic behavior, with the paramagnetic fluid being unstable to the formation of a spiral spin-density wave.[7][8] Unfortunately, Hartree–Fock theory does not include any description of correlation effects, which are energetically important at all but the very highest densities, and so a more accurate level of theory is required to make quantitative statements about the phase diagram of jellium.

Quantum Monte Carlo (QMC) methods, which provide an explicit treatment of electron correlation effects, are generally agreed to provide the most accurate quantitative approach for determining the zero-temperature phase diagram of jellium. The first application of the diffusion Monte Carlo method was Ceperley and Alder's famous 1980 calculation of the zero-temperature phase diagram of 3D jellium.[9] They calculated the paramagnetic-ferromagnetic fluid transition to occur at ${\displaystyle r_{s}=75(5)}$ and Wigner crystallization (to a body-centered cubic crystal) to occur at ${\displaystyle r_{s}=100(20)}$. Subsequent QMC calculations[10][11] have refined their phase diagram: there is a second-order transition from a paramagnetic fluid state to a partially spin-polarized fluid from ${\displaystyle r_{s}=50(2)}$ to about ${\displaystyle 100}$; and Wigner crystallization occurs at ${\displaystyle r_{s}=106(1)}$.
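The 3D QMC boundaries quoted above can be collected into a small lookup, purely as a reading aid. The function and its phase labels are my own, the ${\displaystyle r_{s}}$ values are those cited in the text, error bars on the boundaries are ignored, and the narrow window between full polarization (about ${\displaystyle r_{s}=100}$) and crystallization (${\displaystyle r_{s}=106}$) is labeled as a fully polarized fluid by interpretation:

```python
def jellium_phase_3d(rs):
    """Ground-state phase of 3D jellium at density parameter r_s,
    using the QMC transition points quoted in the text: a second-order
    polarization transition beginning at r_s = 50(2), full polarization
    by about r_s = 100, and Wigner crystallization at r_s = 106(1).
    Error bars are ignored; this is a toy classifier, not a result."""
    if rs < 50:
        return "paramagnetic fluid"
    elif rs < 100:
        return "partially spin-polarized fluid"
    elif rs < 106:
        # Narrow window implied by the quoted boundaries.
        return "fully spin-polarized fluid"
    else:
        return "Wigner crystal (bcc)"
```

Ordinary metals, with ${\displaystyle r_{s}}$ between 2 and 5, sit deep inside the paramagnetic-fluid region, which is why the fluid phases dominate metallic physics.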
In 2D, QMC calculations indicate that the paramagnetic fluid to ferromagnetic fluid transition and Wigner crystallization occur at similar density parameters, in the range ${\displaystyle 30<r_{s}<40}$.[12][13] The most recent QMC calculations indicate that there is no region of stability for a ferromagnetic fluid.[14] Instead there is a transition from a paramagnetic fluid to a hexagonal Wigner crystal at ${\displaystyle r_{s}=31(1)}$. There is possibly a small region of stability for a (frustrated) antiferromagnetic Wigner crystal, before a further transition to a ferromagnetic crystal. The crystallization transition in 2D is not first order, so there must be a continuous series of transitions from fluid to crystal, perhaps involving striped crystal/fluid phases.[15] Experimental results for a 2D hole gas in a GaAs/AlGaAs heterostructure (which, despite being clean, may not correspond exactly to the idealized jellium model) indicate a Wigner crystallization density of ${\displaystyle r_{s}=35.1(9)}$.[16]

## Applications

Jellium is the simplest model of interacting electrons. It is employed in the calculation of properties of metals, where the core electrons and the nuclei are modeled as the uniform positive background and the valence electrons are treated with full rigor. Semi-infinite jellium slabs are used to investigate surface properties such as work function and surface effects such as adsorption; near surfaces the electronic density varies in an oscillatory manner, decaying to a constant value in the bulk.[17][18][19] Within density functional theory, jellium is used in the construction of the local-density approximation, which in turn is a component of more sophisticated exchange-correlation energy functionals.
From quantum Monte Carlo calculations of jellium, accurate values of the correlation energy density have been obtained for several values of the electronic density,[9] which have been used to construct semi-empirical correlation functionals.[20] The jellium model has been applied to superatoms, and used in nuclear physics.

## See also

• Free electron model — a model electron gas where the electrons do not interact with anything.
• Nearly free electron model — a model electron gas where the electrons do not interact with each other, but do feel a (weak) potential from the atomic lattice.

## References

1. ^ Hughes, R. I. G. (2006). "Theoretical Practice: the Bohm-Pines Quartet" (PDF). Perspectives on Science. 14 (4): 457–524. doi:10.1162/posc.2006.14.4.457.
2. ^ Gross, E. K. U.; Runge, E.; Heinonen, O. (1991). Many-Particle Theory. Bristol: Verlag Adam Hilger. pp. 79–80. ISBN 978-0-7503-0155-8.
3. ^ Giuliani, Gabriele; Vignale, Giovanni (2005). Quantum Theory of the Electron Liquid. Cambridge University Press. pp. 13–16. ISBN 978-0-521-82112-4.
4. ^ Teepanis Chachiyo (2016). "Simple and accurate uniform electron gas correlation energy for the full range of densities". J. Chem. Phys. 145 (2): 021101. Bibcode:2016JChPh.145b1101C. doi:10.1063/1.4958669. PMID 27421388.
5. ^ Giuliani, Gabriele; Vignale, Giovanni (2005). Quantum Theory of the Electron Liquid. Cambridge University Press. ISBN 978-0-521-82112-4.
6. ^ J. R. Trail; M. D. Towler; R. J. Needs (2003). "Unrestricted Hartree-Fock theory of Wigner crystals". Phys. Rev. B. 68 (4): 045107. arXiv:0909.5498. Bibcode:2003PhRvB..68d5107T. doi:10.1103/PhysRevB.68.045107.
7. ^ A. W. Overhauser (1960). "Giant Spin Density Waves". Phys. Rev. Lett. 4 (9): 462–465. Bibcode:1960PhRvL...4..462O. doi:10.1103/PhysRevLett.4.462.
8. ^ A. W. Overhauser (1962). "Spin Density Waves in an Electron Gas". Phys. Rev. 128 (3): 1437–1452. Bibcode:1962PhRv..128.1437O. doi:10.1103/PhysRev.128.1437.
9. ^ a b D. M. Ceperley; B. J. Alder (1980).
"Ground State of the Electron Gas by a Stochastic Method". Phys. Rev. Lett. (Submitted manuscript). 45 (7): 566–569. Bibcode:1980PhRvL..45..566C. doi:10.1103/PhysRevLett.45.566.
10. ^ F. H. Zong; C. Lin; D. M. Ceperley (2002). "Spin polarization of the low-density three-dimensional electron gas". Phys. Rev. E. 66 (3): 1–7. arXiv:cond-mat/0205339. Bibcode:2002PhRvE..66c6703Z. doi:10.1103/PhysRevE.66.036703. PMID 12366294.
11. ^ N. D. Drummond; Z. Radnai; J. R. Trail; M. D. Towler; R. J. Needs (2004). "Diffusion quantum Monte Carlo study of three-dimensional Wigner crystals". Phys. Rev. B. 69 (8): 085116. arXiv:0801.0377. Bibcode:2004PhRvB..69h5116D. doi:10.1103/PhysRevB.69.085116.
12. ^ B. Tanatar; D. M. Ceperley (1989). "Ground state of the two-dimensional electron gas". Phys. Rev. B. 39 (8): 5005. Bibcode:1989PhRvB..39.5005T. doi:10.1103/PhysRevB.39.5005.
13. ^ F. Rapisarda; G. Senatore (1996). "Diffusion Monte Carlo Study of Electrons in Two-dimensional Layers". Aust. J. Phys. 49: 161. Bibcode:1996AuJPh..49..161R. doi:10.1071/PH960161.
14. ^ N. D. Drummond; R. J. Needs (2009). "Phase Diagram of the Low-Density Two-Dimensional Homogeneous Electron Gas". Phys. Rev. Lett. 102 (12): 126402. arXiv:1002.2101. Bibcode:2009PhRvL.102l6402D. doi:10.1103/PhysRevLett.102.126402. PMID 19392300.
15. ^ B. Spivak; S. A. Kivelson (2004). "Phases intermediate between a two-dimensional electron liquid and Wigner crystal". Phys. Rev. B. 70 (15): 155114. Bibcode:2004PhRvB..70o5114S. doi:10.1103/PhysRevB.70.155114.
16. ^ J. Yoon; C. C. Li; D. Shahar; D. C. Tsui; M. Shayegan (1999). "Wigner Crystallization and Metal-Insulator Transition of Two-Dimensional Holes in GaAs at ${\displaystyle B=0}$". Phys. Rev. Lett. 82 (8): 1744. arXiv:cond-mat/9807235. Bibcode:1999PhRvL..82.1744Y. doi:10.1103/PhysRevLett.82.1744.
17. ^ Lang, N. D. (1969). "Self-consistent properties of the electron distribution at a metal surface". Solid State Commun. 7 (15): 1047–1050. Bibcode:1969SSCom...7.1047L.
doi:10.1016/0038-1098(69)90467-0.
18. ^ Lang, N. D.; Kohn, W. (1970). "Theory of Metal Surfaces: Work Function". Phys. Rev. B. 3 (4): 1215–1223. Bibcode:1971PhRvB...3.1215L. doi:10.1103/PhysRevB.3.1215.
19. ^ Lang, N. D.; Kohn, W. (1973). "Surface-Dipole Barriers in Simple Metals". Phys. Rev. B. 8 (12): 6010–6012. Bibcode:1973PhRvB...8.6010L. doi:10.1103/PhysRevB.8.6010.
20. ^ Perdew, J. P.; McMullen, E. R.; Zunger, Alex (1981). "Density-functional theory of the correlation energy in atoms and ions: A simple analytic model and a challenge". Phys. Rev. A. 23 (6): 2785–2789. Bibcode:1981PhRvA..23.2785P. doi:10.1103/PhysRevA.23.2785.
https://www.wyzant.com/resources/answers/topics/ap-calculus
1,119 Answered Questions for the topic AP Calculus

Ap Calculus Calculus 06/04/22
If A is a diagonalizable n × n matrix, prove that A² is also diagonalizable.

Ap Calculus Calculus 06/04/22
Find the area of the surface of the cap, cut from the hemisphere x² + y² + z² = 5, z ≥ 0, by the cylinder x² + y² = 1.

Ap Calculus Calculus 06/03/22
Use cylindrical coordinates to evaluate ∫∫∫_E √(x² + y²) dV, where E is the region inside the cylinder (x − 1)² + y² = 1 and between the planes z = −1 and z = 1.

Ap Calculus Calculus 06/01/22
Find the area of the surface of the cap, cut from the hemisphere x² + y² + z² = 5, z ≥ 0, by the cylinder x² + y² = 1.

Ap Calculus Calculus 05/28/22
Let A be an n × n matrix and consider the linear homogeneous system Ax = 0. If the linear system has only the trivial solution, state whether the following statements are true or false. (a) 0 is an... more

Ap Calculus Calculus 05/28/22
Let T : R² → R² be a linear operator defined by T(1, 1) = (2, 2) and T(2, 1) = (4, 5). Find a formula for T(x, y).

Ap Calculus Calculus 05/28/22
If A is a diagonalizable n × n matrix, prove that A² is also diagonalizable.

Ap Calculus Calculus 05/28/22
#### 3) Hello, I need help with this
Use cylindrical coordinates to evaluate ∫∫∫_E √(x² + y²) dV, where E is the region inside the cylinder (x − 1)² + y² = 1 and between the planes z = −1 and z = 1.

Ap Calculus Calculus 05/28/22
#### 2) Please, I need help with this.
Find the volume of the tetrahedron with vertices (0, 0, 0), (2, 0, 0), (0, 4, 0) and (0, 0, 6).

Ap Calculus Calculus 05/28/22
#### 1) Hello, please can you assist me with this urgently
Find the area of the surface of the cap, cut from the hemisphere x² + y² + z² = 5, z ≥ 0, by the cylinder x² + y² = 1.

Ap Calculus Math Calculus Mathematics 05/05/22
#### Optimization Help and explanation
A Norman window is shaped like a rectangle with a semicircle on top of it.
A carpenter has 10m of wood trim and wishes to cover the sides of the rectangle and the upper part of the semicircle.... more

Ap Calculus Math Calculus Mathematics 05/05/22
#### Speedometer readings for a motorcycle at 12-second intervals are given in the table.
Speedometer readings for a motorcycle at 12-second intervals are given in the table.
t (sec): 0, 12, 24, 36, 48, 60
v (ft/sec): 22, 26, 29, 23, 23, 30
(a) Estimate the distance, in feet, traveled by the... more

Ap Calculus Math Calculus Mathematics 05/04/22
#### Estimate the area under the graph with midpoint?
Estimate the area under the graph of f(x) = 16 − x² from x = 0 to x = 4 using 4 approximating rectangles. I got the right sum = 34 and left sum = 50, but how do I estimate the area with midpoints?

Ap Calculus Math Calculus Mathematics 05/02/22
#### integral problem
The velocity function is v(t) = t² − 3t + 2 for a particle moving along a line. Find the displacement and the distance traveled by the particle during the time interval [−3, 6]. displacement = ?? distance... more

Ap Calculus Calculus Ap Calc Calc 05/02/22
#### antiderivatives question
Given that the graph of f(x) passes through the point (6, 7) and that the slope of its tangent line at (x, f(x)) is 3x + 2, what is f(2)?

Ap Calculus Calculus Ap Calc Calc 05/02/22
#### Find two positive numbers whose product is 144 and whose sum is a minimum?
Optimization Help. Find two positive numbers whose product is 144 and whose sum is a minimum.

Ap Calculus Math Calculus Mathematics 05/02/22
#### Optimization Calculus Question
Find two numbers A and B (with A ≤ B) whose difference is 42 and whose product is minimized. A = ??? B = ???

Ap Calculus Calculus Bc Calculus 04/27/22
#### Which of the following must be true?
Question #39 - Let f be a differentiable function and you're given the graph of a line tangent to the graph of f at x = 0. Which of the following must be true? ANSWER CHOICES: f'(0)=-f(0),...
more

Ap Calculus Math Calculus Mathematics 04/22/22
#### Integral Question
Consider the integral ∫₁² x³ dx. Use the Left Sum, Right Sum, Midvalue Sum, and Trapezoid Rule methods to approximate this integral with n = 3. Find the exact value of this integral.

Ap Calculus 04/21/22
#### What is the position of the car when t = 20?
A car travels along a straight road for 30 seconds starting at time t = 0. Its acceleration in ft/sec² is given by the linear graph below for the time interval [0, 30]. At t = 0, the velocity of... more

Ap Calculus Math Calculus Mathematics 04/19/22
#### calculus integral question
Consider the integral ∫₁² x³ dx. Use all three methods to approximate this integral with n = 3. Find the exact value of this integral.

Ap Calculus Math Calculus Derivative 04/14/22
#### antiderivative question
find ∫ (3^x − 5)(4^x + 7) dx
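Several of the integral questions above ask for left, right, midpoint, and trapezoid approximations with n = 3. A sketch of those four rules in Python, taking the integral in those posts to be ∫ x³ dx from 1 to 2 (an assumption recovered from the flattened limits):

```python
# Left, right, midpoint, and trapezoid approximations of an integral,
# applied to the integral of x**3 from 1 to 2 with n = 3 subintervals.

def riemann(f, a, b, n, rule):
    dx = (b - a) / n
    xs = [a + i * dx for i in range(n + 1)]  # partition points
    if rule == "left":
        return dx * sum(f(x) for x in xs[:-1])
    if rule == "right":
        return dx * sum(f(x) for x in xs[1:])
    if rule == "mid":
        return dx * sum(f(x + dx / 2) for x in xs[:-1])
    if rule == "trap":
        return dx * (f(a) / 2 + sum(f(x) for x in xs[1:-1]) + f(b) / 2)
    raise ValueError(rule)

f = lambda x: x ** 3
exact = (2 ** 4 - 1 ** 4) / 4   # antiderivative x**4/4, evaluated 1 to 2
approx = {r: riemann(f, 1.0, 2.0, 3, r)
          for r in ("left", "right", "mid", "trap")}
print({r: round(v, 4) for r, v in approx.items()}, exact)
# -> {'left': 2.6667, 'right': 5.0, 'mid': 3.7083, 'trap': 3.8333} 3.75
```

As expected for an increasing convex integrand, the left sum underestimates, the right sum overestimates, and the midpoint and trapezoid values bracket the exact 3.75.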
https://docs.bentley.com/LiveContent/web/STAAD.Pro%20Help-v15/en/GUID-58DCD909-BE9F-4D23-B155-EFDF86AD5AD1.html
# V. ASCE 7-16 Wind Load Generation on Building

Verify the windward loads due to wind on a portion of a building structure calculated per the ASCE 7-16 specification.

## Details

The wind load is calculated for a building structure using the directional procedure for MWFRS of enclosed buildings (ASCE 7-16, Chapter 27). The following assumptions and parameters apply:

• Flat roof type
• Category II building (Table 1.5-1)
• Basic wind speed = 108 mph
• Ground height above sea level = 230 ft
• Structure type of "building"
• Exposure Category B
• Do not consider wind speed-up over hills or escarpments
• Building height = 40 ft
• Building length along the wind direction, L = 30 ft
• Building width normal to wind direction, B = 25 ft
• Natural frequency of structure = 2 Hz (i.e., a "rigid" building per Cl. 26.2)
• Damping ratio = 0.01
• Enclosed building

The portion of the building below 20' in height is blocked by an adjacent structure or other obstruction. Calculate the resulting joint loads on the windward side due to wind along the X axis at the upper wall of one segment of the building.

## Validation

Directionality factor, Kd = 0.85 (Table 26.6-1)

Topographic factor, Kzt = 1.0

Ground elevation factor, Ke = e^(−0.0000362 × 230) = 0.9917 (Table 26.9-1)

The velocity pressure exposure coefficients (Kz) are calculated per the formula given in Table 26.10-1:

Kz = 2.01 (z/zg)^(2/α) for 15 ft ≤ z ≤ zg (for z < 15 ft, Kz is taken at z = 15 ft)

where

α = 7
zg = 1,200 ft

See the table below for the values of Kz at discrete values of z.

Velocity pressure, qz

qz = 0.00256 × Kz × Kzt × Kd × Ke × V² (Cl. 26.10-1)

Mean roof height = height of the building = 40 ft. Refer to the table below for the calculation of qz at the mean roof height.

Gust effect factor, G

G is calculated per Cl. 26.11.4 for rigid structures:

G = 0.925 × (1 + 1.7 g_Q I_z̄ Q) / (1 + 1.7 g_v I_z̄) (Cl. 26.11.4)

where

g_Q = g_v = 3.4
z̄ = max(0.6h, z_min) = 30 ft (z_min = 30 ft for Exposure B)
ε̄ = 1/3
ℓ = 320 ft
c = 0.3
L_z̄ = ℓ (z̄/33)^ε̄ = 320 × (30/33)^(1/3) = 309.99 ft
Q = 1/√(1 + 0.63 ((B + h)/L_z̄)^0.63) = 1/√(1 + 0.63 ((25 + 40)/309.99)^0.63) = 0.8997
I_z̄ = c (33/z̄)^(1/6) = 0.3048

Therefore,

G = 0.925 × (1 + 1.7 × 3.4 × 0.3048 × 0.8997) / (1 + 1.7 × 3.4 × 0.3048) = 0.8658

Wall pressure coefficient, Cp = 0.8 for the windward wall per Fig. 27.3-1. The external design pressure, p = qhGCp.

The internal pressure coefficient, GCpi = ±0.18 for enclosed buildings per Table 26.13-1; the −0.18 case governs here. The internal design pressure, pi = qi(GCpi) = 19.14 × (−0.18) = −3.45 lb/ft², where qi = the velocity pressure evaluated at the building height, h, = qh.

The design wind pressure, p, is then calculated as:

p = qzGCp − qi(GCpi) (Eq. 27.3-1)

Table 1. Wind intensity versus height for building structure

| h (ft) | Kz    | qz (lb/ft²) | p (lb/ft²) |
|--------|-------|-------------|------------|
| 0      | 0.575 | 14.47       | 13.47      |
| 15     | 0.575 | 14.47       | 13.47      |
| 20     | 0.624 | 15.71       | 14.32      |
| 25     | 0.665 | 16.74       | 15.04      |
| 30     | 0.701 | 17.63       | 15.66      |
| 35     | 0.732 | 18.43       | 16.21      |
| 40     | 0.761 | 19.14       | 16.71      |

Note: To compare different wind load calculations with STAAD.Pro, the tributary areas below are selected based on the applied wind area defined in the model. These do not include any surrounding wind areas outside of the indicated area, which would have wind applied in a typical situation.

Load on Nodes 21 and 22

### Tributary area for nodes 21 and 22

Wind pressure at half-way between this node and the level above (25') = 15.04 lb/ft².

F = (9 ft / 2) × (10 ft / 2) × 15.04 lb/ft² × 10⁻³ = 0.338 kips

Load on Nodes 31 and 32

### Tributary area for nodes 31 and 32

Wind pressure at half-way between this node and the level below (25') = 15.04 lb/ft².
Wind pressure at half-way between this node and the level above (35') = 16.21 lb/ft².

F = (9 ft / 2) × (10 ft / 2) × (15.04 lb/ft² + 16.21 lb/ft²) × 10⁻³ = 0.703 kips

Load on Nodes 41 and 42

### Tributary area for nodes 41 and 42

Wind pressure at roof level (40') = 16.71 lb/ft².
F = (9 ft / 2) × (10 ft / 2) × (16.71 lb/ft2) (10)-3 = 0.376 kips ### Tributary area for node 29 Wind pressure at half-way between this node and level above (25') = 15.04 lb/ft2. F = (9 ft) × (10 ft / 2) × 15.04 lb/ft2 (10)-3 = 0.677 kips ### Tributary area for node 39 Wind pressure at half-way between this node and level below (25') = 15.04 lb/ft2. Wind pressure at half-way between this node and level above (35') = 16.21 lb/ft2. F = (9 ft) × (10 ft / 2) × (15.04 lb/ft2 + 16.21 lb/ft2) (10)-3 = 1.406 kips ### Tributary area for node 49 Wind pressure at roof level (40') = 16.71 lb/ft2. F = (9 ft) × (10 ft / 2) × (16.71 lb/ft2) (10)-3 = 0.752 kips ## Results Table 2. Joint loads due to wind (kips) Node 21 0.338 0.33 negligible Node 31 0.703 0.71 negligible Node 41 0.376 0.37 negligible Node 29 0.677 0.67 negligible Node 39 1.406 1.42 negligible Node 49 0.752 0.74 negligible Node 22 0.338 0.33 negligible Node 32 0.703 0.71 negligible Node 42 0.376 0.37 negligible STAAD SPACE START JOB INFORMATION ENGINEER DATE 29-Jul-20 END JOB INFORMATION INPUT WIDTH 79 UNIT INCHES KIP JOINT COORDINATES 1 0 0 0; 2 0 0 216; 3 324 0 216; 4 216 0 0; 5 108 0 0; 6 108 0 324; 7 324 0 324; 8 324 0 108; 9 0 0 108; 10 216 0 324; 11 0 120 0; 12 0 120 216; 13 324 120 216; 14 216 120 0; 15 108 120 0; 16 108 120 324; 17 324 120 324; 18 324 120 108; 19 0 120 108; 20 216 120 324; 21 0 240 0; 22 0 240 216; 23 324 240 216; 24 216 240 0; 25 108 240 0; 26 108 240 324; 27 324 240 324; 28 324 240 108; 29 0 240 108; 30 216 240 324; 31 0 360 0; 32 0 360 216; 33 324 360 216; 34 216 360 0; 35 108 360 0; 36 108 360 324; 37 324 360 324; 38 324 360 108; 39 0 360 108; 40 216 360 324; 41 0 480 0; 42 0 480 216; 43 324 480 216; 44 216 480 0; 45 108 480 0; 46 108 480 324; 47 324 480 324; 48 324 480 108; 49 0 480 108; 50 216 480 324; 51 108 120 216; 52 216 120 216; 53 108 120 108; 54 216 120 108; 55 108 240 216; 56 216 240 216; 57 108 240 108; 58 216 240 108; 59 108 360 216; 60 216 360 216; 61 108 360 108; 62 216 360 
108; 63 108 480 216; 64 216 480 216; 65 108 480 108; 66 216 480 108; MEMBER INCIDENCES 1 11 19; 2 12 51; 3 11 15; 4 15 53; 5 16 20; 6 17 13; 7 18 54; 8 14 54; 9 1 11; 10 2 12; 11 3 13; 12 4 14; 13 5 15; 14 6 16; 15 7 17; 16 8 18; 17 9 19; 18 10 20; 19 21 29; 20 22 55; 21 21 25; 22 25 57; 23 26 30; 24 27 23; 25 28 58; 26 24 58; 27 11 21; 28 12 22; 29 13 23; 30 14 24; 31 15 25; 32 16 26; 33 17 27; 34 18 28; 35 19 29; 36 20 30; 37 31 39; 38 32 59; 39 31 35; 40 35 61; 41 36 40; 42 37 33; 43 38 62; 44 34 62; 45 21 31; 46 22 32; 47 23 33; 48 24 34; 49 25 35; 50 26 36; 51 27 37; 52 28 38; 53 29 39; 54 30 40; 55 41 49; 56 42 63; 57 41 45; 58 45 65; 59 46 50; 60 47 43; 61 48 66; 62 44 66; 63 31 41; 64 32 42; 65 33 43; 66 34 44; 67 35 45; 68 36 46; 69 37 47; 70 38 48; 71 39 49; 72 40 50; 73 19 12; 74 51 52; 75 52 13; 76 15 14; 77 53 51; 78 51 16; 79 20 17; 80 13 18; 81 54 53; 82 53 19; 83 54 52; 84 52 20; 85 29 22; 86 55 56; 87 56 23; 88 25 24; 89 57 55; 90 55 26; 91 30 27; 92 23 28; 93 58 57; 94 57 29; 95 58 56; 96 56 30; 97 39 32; 98 59 60; 99 60 33; 100 35 34; 101 61 59; 102 59 36; 103 40 37; 104 33 38; 105 62 61; 106 61 39; 107 62 60; 108 60 40; 109 49 42; 110 63 64; 111 64 43; 112 45 44; 113 65 63; 114 63 46; 115 50 47; 116 43 48; 117 66 65; 118 65 49; 119 66 64; 120 64 50; DEFINE PMEMBER 1 73 PMEMBER 9 2 74 75 PMEMBER 10 3 76 PMEMBER 11 4 77 78 PMEMBER 12 5 79 PMEMBER 13 6 80 PMEMBER 14 7 81 82 PMEMBER 15 8 83 84 PMEMBER 16 9 PMEMBER 17 10 PMEMBER 18 11 PMEMBER 19 12 PMEMBER 20 13 PMEMBER 21 14 PMEMBER 22 15 PMEMBER 23 16 PMEMBER 24 17 PMEMBER 25 18 PMEMBER 26 19 85 PMEMBER 27 20 86 87 PMEMBER 28 21 88 PMEMBER 29 22 89 90 PMEMBER 30 23 91 PMEMBER 31 24 92 PMEMBER 32 25 93 94 PMEMBER 33 26 95 96 PMEMBER 34 27 PMEMBER 35 28 PMEMBER 36 29 PMEMBER 37 30 PMEMBER 38 31 PMEMBER 39 32 PMEMBER 40 33 PMEMBER 41 34 PMEMBER 42 35 PMEMBER 43 36 PMEMBER 44 37 97 PMEMBER 45 38 98 99 PMEMBER 46 39 100 PMEMBER 47 40 101 102 PMEMBER 48 41 103 PMEMBER 49 42 104 PMEMBER 50 43 105 106 
PMEMBER 51 44 107 108 PMEMBER 52 45 PMEMBER 53 46 PMEMBER 54 47 PMEMBER 55 48 PMEMBER 56 49 PMEMBER 57 50 PMEMBER 58 51 PMEMBER 59 52 PMEMBER 60 53 PMEMBER 61 54 PMEMBER 62 55 109 PMEMBER 63 56 110 111 PMEMBER 64 57 112 PMEMBER 65 58 113 114 PMEMBER 66 59 115 PMEMBER 67 60 116 PMEMBER 68 61 117 118 PMEMBER 69 62 119 120 PMEMBER 70 63 PMEMBER 71 64 PMEMBER 72 65 PMEMBER 73 66 PMEMBER 74 67 PMEMBER 75 68 PMEMBER 76 69 PMEMBER 77 70 PMEMBER 78 71 PMEMBER 79 72 PMEMBER 80 DEFINE MATERIAL START ISOTROPIC 3000PSI E 3320.56 POISSON 0.2 DENSITY 8.68056e-05 ALPHA 5.55556e-06 DAMP 0.05 G 1383.57 TYPE CONCRETE STRENGTH FCU 3 END DEFINE MATERIAL PMEMBER PROPERTY 9 TO 16 27 TO 34 45 TO 52 63 TO 70 PRIS YD 18 ZD 12 17 TO 26 35 TO 44 53 TO 62 71 TO 80 PRIS YD 18 ZD 18 PMEMBER CONSTANTS MATERIAL 3000PSI ALL SUPPORTS 1 TO 10 FIXED TYPE 1 ASCE7:16[+X] <! STAAD PRO GENERATED DATA DO NOT MODIFY !!! ASCE-7-2016:PARAMS 108.000 MPH 230.000 FT 0 1 1 0 0.000 FT 0.000 FT 0.000 FT - 0 3 40.000 FT 30.000 FT 25.000 FT 2.000 0.010 0 0 0 0 0 0 0.761 1.000 1.000 - 0.850 0.992 0 0 0 0 0.866 0.800 -0.180 !> END GENERATED DATA BLOCK INT 9.35123e-05 9.35123e-05 9.59522e-05 9.82014e-05 0.000100292 0.000102249 - 0.00010409 0.000105832 0.000107485 0.000109061 0.000110567 0.00011201 - 0.000113396 0.000114731 0.000116018 0.000116018 HEIG 0 180 203.077 226.154 - 249.231 272.308 295.385 318.461 341.539 364.615 387.692 410.769 433.846 - 456.923 480 480 EXP 1 JOINT 11 12 19 21 22 29 31 32 39 41 42 49 WIND LOAD X 1 TYPE 1 YR 240 480 ZR 0 288 FINISH LOADING 1 LOADTYPE WIND TITLE WL ----------- JOINT LOAD - UNIT KIP INCH JOINT FORCE-X FORCE-Y FORCE-Z MOM-X MOM-Y MOM-Z 21 0.33 0.00 0.00 0.00 0.00 0.00 22 0.33 0.00 0.00 0.00 0.00 0.00 29 0.67 0.00 0.00 0.00 0.00 0.00 31 0.71 0.00 0.00 0.00 0.00 0.00 32 0.71 0.00 0.00 0.00 0.00 0.00 39 1.42 0.00 0.00 0.00 0.00 0.00 41 0.37 0.00 0.00 0.00 0.00 0.00 42 0.37 0.00 0.00 0.00 0.00 0.00 49 0.74 0.00 0.00 0.00 0.00 0.00
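The hand calculation above can be cross-checked with a short script. This is a sketch that hard-codes the example's constants (Exposure B, V = 108 mph, rigid enclosed building with GCpi = −0.18), not a general ASCE 7-16 implementation:

```python
import math

# Constants from the worked example above (Exposure B, Category II).
V, Kd, Kzt = 108.0, 0.85, 1.0            # wind speed (mph), Kd, Kzt
Ke = math.exp(-0.0000362 * 230.0)        # ground elevation factor, ~0.9917
alpha, zg = 7.0, 1200.0                  # Exposure B constants

def Kz(z):
    """Velocity pressure exposure coefficient (Table 26.10-1)."""
    return 2.01 * (max(z, 15.0) / zg) ** (2.0 / alpha)

def qz(z):
    """Velocity pressure in lb/ft^2 (Cl. 26.10-1)."""
    return 0.00256 * Kz(z) * Kzt * Kd * Ke * V ** 2

# Gust-effect factor for a rigid structure (Cl. 26.11.4), Exposure B.
h, B = 40.0, 25.0
zbar = max(0.6 * h, 30.0)                # z_min = 30 ft for Exposure B
Lz = 320.0 * (zbar / 33.0) ** (1.0 / 3.0)
Iz = 0.3 * (33.0 / zbar) ** (1.0 / 6.0)
Q = 1.0 / math.sqrt(1.0 + 0.63 * ((B + h) / Lz) ** 0.63)
G = 0.925 * (1.0 + 1.7 * 3.4 * Iz * Q) / (1.0 + 1.7 * 3.4 * Iz)

Cp, GCpi = 0.8, -0.18                    # windward wall; enclosed building
qh = qz(h)

def p(z):
    """Design windward pressure, lb/ft^2: p = qz*G*Cp - qh*(GCpi)."""
    return qz(z) * G * Cp - qh * GCpi

# Node 21 tributary load: (9 ft / 2) x (10 ft / 2) at the 25 ft pressure.
F21 = (9.0 / 2.0) * (10.0 / 2.0) * p(25.0) / 1000.0   # kips
print(round(G, 4), round(p(25.0), 2), round(F21, 3))  # -> 0.8658 15.04 0.338
```

The printed values reproduce the gust-effect factor, the Table 1 pressure at 25 ft, and the node 21/22 joint load to the rounding used in the document.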
https://ximera.osu.edu/introduction/gettingStarted/optionsForTheDocumentclass/optionsForTheDocumentclass
We describe options for the Ximera documentclass. There are a number of options for the document class, though their effects are only seen in the PDF:

handout
The default behavior of the class is to display all content. This means that if any questions are asked, all answers are shown. Moreover, some content will only have a meaningful presentation when displayed online. When compiled without any options, this content will be shown too. This option will suppress such content and generate a reasonable printable "handout."

noauthor
By default, authors are listed at the bottom of the first page of a document. This option will suppress the listing of the authors.

nooutcomes
By default, learning outcomes are listed at the bottom of the first page of a document. This option will suppress the listing of the learning outcomes.

instructornotes
This option will turn on notes written for the instructor.

noinstructornotes
This option will turn off notes written for the instructor.

hints
When the handout option is used, hints are not shown. This option will make hints visible in handout mode.

newpage
This option will make each problem-like environment (exercise, question, problem, and exploration) start on a new page.

numbers
This option will number the titles of the activities. By default the activities are unnumbered.

wordchoicegiven
This option will replace the choices shown by wordChoice with the correct choice. No indication of the wordChoice environment will be shown.
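Options combine in the usual LaTeX way. A hypothetical preamble for a printable handout with hints visible and numbered activities (assuming the class is loaded as `ximera`, the name used by this site):

```latex
% Printable handout: suppress online-only content and answers,
% but keep hints visible and number the activities.
\documentclass[handout,hints,numbers]{ximera}
\begin{document}
% ... activities (exercise, question, problem, exploration) ...
\end{document}
```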
https://zbmath.org/?q=an:0772.49006
A variational method in image segmentation: Existence and approximation results. (English) Zbl 0772.49006

The paper deals with a variational approach to the image segmentation problem, recently proposed by D. Mumford and J. Shah [Comm. Pure Appl. Math. 42, No. 5, 577-684 (1989; Zbl 0691.49036)]. Given a bounded domain $$\Omega\subset{\mathbf R}^{2}$$ and a function $$g\in L^{\infty}(\Omega)$$, the problem consists in finding a pair $$(\bar u,\overline K)$$ minimizing the functional $J(u,K)=\int_{\Omega\setminus K}|\nabla u|^{2}\,dx+\int_{\Omega\setminus K}|u-g|^{2}\,dx+{\mathcal H}^{1}(K\cap\Omega)$ among all pairs $$(u,K)$$ with $$K\subset{\mathbf R}^{2}$$ closed and $$u\in C^{1}(\Omega\setminus K)$$. Existence of a weak solution of this problem (with $$K$$ replaced by $$S_{u}$$, the jump set of $$u$$) can be obtained in the so-called class $$\text{SBV}(\Omega)$$ of special functions of bounded variation in $$\Omega$$. The aim of the paper is to prove that any minimizer of the relaxed formulation of the problem actually corresponds to a minimizer of $$J$$. This is obtained by showing the existence of $$\gamma>0$$ such that ${{\mathcal H}^{1}(S_{u}\cap B_{\rho}(x))\over\rho}<\gamma\Longrightarrow u\in C^{1}(B_{\rho/2}(x))$ for any minimizer $$u\in\text{SBV}(\Omega)$$ and any ball $$B_{\rho}(x)\subset\Omega$$.

Reviewer: L. Ambrosio (Roma)

MSC:
49J20 Existence theories for optimal control problems involving partial differential equations

Keywords: image segmentation

Zbl 0691.49036

Full Text:

References:
[1] Ambrosio, L., A compactness theorem for a new class of functions of bounded variation. Boll. Un. Mat. Ital. B (7), 3 (1989), 857–881. · Zbl 0767.49001
[2] –, Variational problems in SBV and image segmentation. Acta Appl. Math., 17 (1989), 1–40. · Zbl 0697.49004
[3] Amini, A. A., Tehrani, S. & Weymouth, T. E., Using dynamic programming for minimizing the energy of active contours. Second International Conference on Computer Vision (Tampa, Florida, 1988), pp. 95–99.
IEEE Computer Society Press, no. 883, Washington, 1988.
[4] Besicovitch, A. S., A general form of the covering principle and relative differentiations of additive functions. Proc. Cambridge Philos. Soc. I, 41 (1945), 103–110. · Zbl 0063.00352
[5] Blat, J. & Morel, J. M., Elliptic problems in image segmentation and their relation to fracture theory. Proc. of the International Conference on Nonlinear Elliptic and Parabolic Problems (Nancy, 1988). To appear. · Zbl 0727.35040
[6] Brezis, H., Coron, J. M. & Lieb, E. H., Harmonic maps with defects. Comm. Math. Phys., 107 (1986), 679–705. · Zbl 0608.58016
[7] Carriero, M., Leaci, A., Pallara, D. & Pascali, E., Euler conditions for a minimum problem with free discontinuity surfaces. Preprint Univ. Lecce, Lecce, 1988.
[8] De Giorgi, E., Free discontinuity problems in calculus of variations. Analyse Mathématique et Applications (Paris, 1988). Gauthier-Villars, Paris, 1988. · Zbl 0758.49002
[9] De Giorgi, E. & Ambrosio, L., Un nuovo tipo di funzionale del calcolo delle variazioni. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur., (1988).
[10] De Giorgi, E., Carriero, M. & Leaci, A., Existence theorem for a minimum problem with free discontinuity set. Arch. Rational Mech. Anal., 108 (1989), 195–218. · Zbl 0682.49002
[11] De Giorgi, E., Colombini, F. & Piccinini, L. C., Frontiere orientate di misura minima e questioni collegate. Quaderno della Scuola Normale Superiore, Pisa, 1972. · Zbl 0296.49031
[12] Ericksen, J. L., Equilibrium theory of liquid crystals. Advances in Liquid Crystals, 233–299. Academic Press, New York, 1976.
[13] Federer, H., Geometric Measure Theory, Springer-Verlag, New York, 1969. · Zbl 0176.00801
[14] –, Colloquium lectures on geometric measure theory. Bull. Amer. Math. Soc., 84 (1978), 291–338. · Zbl 0392.49021
[15] Geman, S. & Geman, D., Stochastic relaxation, Gibbs distribution and the Bayesian restoration of images. IEEE PAMI, 6 (1984).
· Zbl 0573.62030
[16] Giusti, E., Minimal Surfaces and Functions of Bounded Variation. Birkhäuser, Basel, 1983. · Zbl 0524.35040
[17] Kass, M., Witkin, A. & Terzopoulos, D., Snakes: active contour models. First International Conference on Computer Vision (London, 1987), pp. 259–268. IEEE Computer Society Press, no. 77, Washington, 1987.
[18] Massari, U. & Miranda, M., Minimal Surfaces of Codimension One. Notas de Matematica, North Holland, Amsterdam, 1984. · Zbl 0565.49030
[19] Morel, J. M. & Solimini, S., Segmentation of images by variational methods: a constructive approach. Revista Matematica Universidad Complutense de Madrid, 1 (1988), 169–182. · Zbl 0679.68205
[20] –, Segmentation d'images par méthode variationnelle: une preuve constructive d'existence. C. R. Acad. Sci. Paris Sér. I Math., 308 (1989), 465–470. · Zbl 0676.68051
[21] Mumford, D. & Shah, J., Boundary detection by minimizing functionals, I. Proc. IEEE Conf. on Computer Vision and Pattern Recognition (San Francisco, 1985) and Image Understanding, 1988.
[22] –, Optimal approximation by piecewise smooth functions and associated variational problems. Comm. Pure Appl. Math., 42 (1989), 577–684. · Zbl 0691.49036
[23] Richardson, T., Existence result for a variational problem arising in computer vision theory. Preprint CICS, P-63, MIT, 1988.
[24] Rosenfeld, A. & Kak, A. C., Digital Picture Processing. Academic Press, New York, 1982. · Zbl 0564.94002
[25] Simon, L., Lectures on geometric measure theory. Proc. of the Centre for Mathematical Analysis (Canberra, 1983). Australian National University, 3, 1983. · Zbl 0546.49019
[26] Virga, E., Forme di equilibrio di piccole gocce di cristallo liquido. Preprint IAN, Pavia, 1987.
[27] Volpert, A. I. & Hudjaev, S. I., Analysis in Classes of Discontinuous Functions and Equations of Mathematical Physics. Martinus Nijhoff Publishers, Dordrecht, 1985.
[28] Yuille, A. L. & Grzywacz, N.
M., The motion coherence theory. Second International Conference on Computer Vision (Tampa, Florida, 1988), IEEE Computer Society Press, no. 883, Washington, 1988.

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
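The trade-off encoded in the functional J reviewed above — smoothness of u off the discontinuity set, fidelity to g, and a length penalty on K — can be illustrated with a minimal one-dimensional discrete analogue. This is purely a sketch; the paper's analysis concerns SBV functions on a planar domain:

```python
# 1-D discrete analogue of the Mumford-Shah energy: squared differences
# off the jump set K, plus the fidelity term sum (u - g)^2, plus a
# penalty nu for each jump in K.

def ms_energy(u, g, jumps, nu=1.0):
    grad = sum((u[i + 1] - u[i]) ** 2
               for i in range(len(u) - 1) if i not in jumps)
    fidelity = sum((ui - gi) ** 2 for ui, gi in zip(u, g))
    return grad + fidelity + nu * len(jumps)

# A step signal: paying one jump is far cheaper than smoothing across it.
g = [0.0] * 5 + [10.0] * 5
e_jump = ms_energy(g, g, jumps={4})      # exact fit with a jump at i = 4
e_smooth = ms_energy(g, g, jumps=set())  # same u, but pay the gradient
print(e_jump, e_smooth)  # -> 1.0 100.0
```

The jump set plays the role of K: introducing a discontinuity costs a fixed length penalty but removes the large gradient term, which is exactly why minimizers segment the image along edges.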
2022-07-07 00:37:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6532854437828064, "perplexity": 3886.2276417752278}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683020.92/warc/CC-MAIN-20220707002618-20220707032618-00448.warc.gz"}
https://www.mail-archive.com/lyx-users@lists.lyx.org/msg64609.html
# Re: Embedding asymptote in LyX

Oh, I see... They also wrote this line under that figure: "For ASYMPTOTE versions ≥ 1.14, you can simply call PDFLATEX directly" - but I probably misunderstood it; I thought it meant that in this specific case (version ≥ 1.14), I *don't* need to do that. Well, then... I am looking for a workaround now. Is it possible to export a LyX document to a TeX document without losing data? Then I'll be able to do what they suggest in the documentation.

Thanks, Peleg.

On Tue, 2008-05-13 at 19:38 +0200, Manveru wrote:
> Hough!
>
> I am not a specialist in ASYMPTOTE, but I've just looked into the
> documentation. They directly describe a makefile for LaTeX with Asymptote:
>
> document.pdf: document.tex
>     pdflatex -shell-escape document
>     asy document
>     pdflatex -shell-escape document
>
> It means that ASYMPTOTE needs an additional step during document generation,
> which launches the "asy" command between two "pdflatex" invocations. This is
> what LyX is *not able* to do. The generation process is compiled in,
> automatic... and, as some people suggest, not user configurable.
>
> Maybe it is worth putting a feature request in Bugzilla, to get this to work
> when \usepackage{asymptote} is used in the preamble.
>
> Regards,
> M.
>
> 2008/5/13 Peleg Michaeli <[EMAIL PROTECTED]>:
>
> > Hello!
> >
> > I am trying to embed Asymptote in LyX documents.
> > I am following this tutorial:
> > http://www.dse.nl/~dario/projects/asylatex/asylatex.pdf
> >
> > I am doing everything as suggested:
> > * I have included \usepackage{asymptote} and
> > \usepackage[pdftex]{graphicx} in the LaTeX preamble
> > * I have done Ctrl-L in the middle of the document and added this LaTeX
> > code:
> >
> > \begin{figure}
> > \centering
> > \begin{asy}
> > size (3cm);
> > draw (unitcircle);
> > \end{asy}
> > \caption{Embedded Asymptote figures are easy!}
> > \label{fig:embedded}
> > \end{figure}
> >
> > When I generate the document (using pdflatex), I get nothing special: a
> > "figure" with "PDF" written there, but no image (no "unitcircle")...
> > When I do the same with DVI, I get "EPS" written instead of the
> > unitcircle. Either way, I don't see any picture...
> >
> > I have a new version of Asymptote: 1.18 - so it should work
> > automatically with pdflatex.
> >
> > Any ideas?
> > Maybe I should ask a different list?
> >
2021-09-19 00:07:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9604945778846741, "perplexity": 12890.482245560412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056578.5/warc/CC-MAIN-20210918214805-20210919004805-00335.warc.gz"}
https://ltwork.net/in-your-kitchen-t-25-c-you-drop-a-small-bottle-with-20-ml--6630299
# In your kitchen (T= 25°C) you drop a small bottle with 20 mL of the solvent 1,1,1-trichloroethane (methyl chloroform, MCF)

###### Question:

In your kitchen (T = 25°C) you drop a small bottle with 20 mL of the solvent 1,1,1-trichloroethane (methyl chloroform, MCF) that you use for cleaning purposes. The bottle breaks and the solvent starts to evaporate. The doors and the windows are closed. On your stove there is an open pan containing 2 L of cold olive oil. Furthermore, on the floor there is a large bucket that is filled with 50 L of water. The air volume of the kitchen is 30 m3. Calculate the concentration of MCF in the air, in the water in the bucket, and in the olive oil at equilibrium by assuming that the adsorption of MCF to any other phases/surfaces present in the kitchen can be neglected. Consider MCF as an apolar compound. You can find some important physical-chemical data provided below and in Fig. 6.7. Comment on any assumption that you make.
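The equilibrium part of the MCF question above reduces to a three-phase mass balance. The sketch below is a hedged illustration only: the density of MCF and the dimensionless air-water (`K_aw`) and oil-air (`K_oa`) partition coefficients are placeholder values chosen for demonstration, not the data from the referenced Fig. 6.7, so the printed concentrations are not the answer to the exercise.

```python
# Three-phase equilibrium partitioning sketch (illustrative coefficients only).
V_air = 30_000.0   # L  (30 m^3 of kitchen air)
V_water = 50.0     # L  (bucket)
V_oil = 2.0        # L  (olive oil in the pan)

rho_mcf = 1.34           # g/mL, approximate density of MCF at 25 degrees C
mass = 20.0 * rho_mcf    # g of MCF spilled

K_aw = 0.7     # assumed dimensionless air-water partition constant (C_air / C_water)
K_oa = 100.0   # assumed dimensionless oil-air partition constant (C_oil / C_air)

# Conservation of mass: mass = C_air*V_air + C_water*V_water + C_oil*V_oil,
# with C_water = C_air / K_aw and C_oil = K_oa * C_air at equilibrium.
C_air = mass / (V_air + V_water / K_aw + V_oil * K_oa)
C_water = C_air / K_aw
C_oil = K_oa * C_air

total = C_air * V_air + C_water * V_water + C_oil * V_oil
assert abs(total - mass) < 1e-9  # the mass balance closes

print(f"C_air = {C_air:.4g} g/L, C_water = {C_water:.4g} g/L, C_oil = {C_oil:.4g} g/L")
```

With realistic constants the same structure applies: almost all of the mass ends up in the air because of the air volume, while the oil concentration per litre is highest.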
2023-02-07 15:18:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32312244176864624, "perplexity": 2505.1157965794278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500619.96/warc/CC-MAIN-20230207134453-20230207164453-00480.warc.gz"}
http://playingwithpointers.com/biased-locking-and-pthreads.html
Biased locking [1] is a performance optimization over "vanilla" locks (mutexes) that are mostly uncontended. The improvement is achieved by programming the lock to avoid any high-latency read-modify-write instructions (CAS et al. [2] [3]) in the special case of the same thread (referred to as the biasing thread) repeatedly invoking lock (acquire) and unlock (release) on the mutex. The lock falls back to using a traditional lock (which I'll refer to as the backing lock, and which is a straight pthread_mutex_t in this case) once this assumption is broken.

The Implementation

Before going ahead, let me make it clear that what follows is far from production quality and is a learning exercise at best. On a related note, I'd like to thank AceyJuan for pointing out some red flags in my code [9].

The implementation [4] is inspired by [1]. The lock has four states: Unbiased, BiasedAndLocked, BiasedAndUnlocked and Default. A lock in the Unbiased state hasn't ever been locked. A lock in the BiasedAndLocked state is biased towards a specific thread, and can be moved to the BiasedAndUnlocked state and back by that specific thread cheaply. Once the biasing assumption (that of the lock being uncontended) is broken, the lock moves to the Default state, where it is semantically equivalent to a pthread_mutex_t, modulo a tiny bit of indirection. The possible state transitions are therefore:

  Unbiased --> BiasedAndLocked <--> BiasedAndUnlocked --> Default

The trickiest part of the implementation is resolving the race between the biasing thread trying to move the lock from BiasedAndUnlocked to BiasedAndLocked and some other non-biasing thread trying to move the lock from the BiasedAndUnlocked state to the Default state (so that this non-biasing thread can then lock on the backing lock).
The complication arises from the very premise of the lock itself: to be able to gain absolutely anything in performance, we need to transition from BiasedAndUnlocked to BiasedAndLocked without invoking atomic read-modify-write instructions. Here is the pseudocode (reconstructed from the surrounding discussion; states other than BiasedAndUnlocked are elided):

 retry:
  switch (state_.load(memory_order_acquire)) {
  case kStateBiasedAndUnlocked:
    if (biasing_thread_id_.load(memory_order_acquire) == current_thread_id()) {
      state_.store(kStateBiasedAndLocked, memory_order_release);
      MemoryBarrier();
      if (SequentiallyConsistentLoad(&revoke_requested_)) {
        state_.store(kStateDefault, memory_order_release);
        goto retry;
      }
    } else {
      revoke_requested_.store(true);
      unsigned expected = kStateBiasedAndUnlocked;
      bool result = state_.compare_exchange_strong(expected, kStateDefault); // (X)
      if (!result) sleep(appropriate_amount);
      goto retry;
    }
    break;
  // ... other states ...
  }

To see why this works (informally), we condition on result from the CAS on the line marked (X), which will be executed whenever some non-biasing thread tries to lock. Since all we attempt to do in that case is change the state to Default and retry, races between multiple non-biasing threads trying to lock are naturally resolved by the backing lock. Firstly, note that we essentially do the same thing, retry, irrespective of what result is (the call to sleep is a performance hack, nothing more). Secondly, note that setting revoke_requested_ to true is always a safe, conservative thing to do. A false result means that state_ was changed by some thread between the read by the switch statement and the CAS. We sleep for an appropriate amount (the actual code implements an exponential back-off [5]) and retry. We leave revoke_requested_ set to true, just in case the biasing thread sees it and helps out the non-biasing thread. A true result means that we did, at some point of time, successfully change state_ from BiasedAndUnlocked to Default. However, this does not say anything about the current value of state_, since the biasing thread could have changed it to BiasedAndLocked after the CAS.
However, in that case, the MemoryBarrier() and the SequentiallyConsistentLoad [6] [7] ensure that revoke_requested_ is read as true in the biasing thread, which then helps out by setting state_ to Default and retrying. Can this solution be simplified further? We definitely need the CAS in non-biasing threads: if we just set revoke_requested_ to true and retry, the biasing thread might not see the true revoke_requested_ when locking. In the situation where it doesn't attempt another lock after the corresponding unlock, we'll be in the absurd state where the non-biasing thread (assuming a total of two threads) fails to acquire an uncontended lock forever. Can we replace the SequentiallyConsistentLoad with a normal atomic load? Well, no. A normal atomic load from revoke_requested_ can be reordered to before the store to state_; and this makes it possible for both the biasing thread and one non-biasing thread to lock simultaneously. This can happen as follows: the biasing thread reads a false revoke_requested_, after which a non-biasing thread sets revoke_requested_ to true, does a successful CAS, retries, sees state_ as Default and successfully locks on the base lock. After all of this, the biasing thread sets state_ to BiasedAndLocked and returns. The SequentiallyConsistentLoad prevents this re-ordering and ensures that whenever the biasing thread steps on a successful CAS, it sees a true revoke_requested_ and sets state_ to Default.

Tests & Performance Improvement

I ran some basic performance tests on an Intel Core i7.
Here are some numbers reported after running [4] as is:

 simple_bench; pthread_lock: 25.061 milliseconds
 simple_bench; biased_lock: 14.854 milliseconds
 simple_bench; biased_lock: 14.835 milliseconds
 simple_bench; biased_lock: 14.838 milliseconds
 simple_bench; biased_lock: 14.850 milliseconds
 simple_bench; biased_lock: 14.815 milliseconds
 simple_bench; biased_lock: 14.853 milliseconds

This simulates the uncontended lock case, in which we see an average decrease of 47.03% in lock overhead (the test-case doesn't do much apart from locking and unlocking the mutex). Some of this is undoubtedly due to the fact that the compiler inlines the code for biased locking, eliding the method call overhead. In the contended case, the figures look like

 bench_threads; pthread_lock: 97.591 milliseconds
 bench_threads; biased_lock: 108.765 milliseconds

Biased locking incurs a 25.1% increase in lock overhead.

Conclusion

As can be seen, biased locking may lead to decreased locking overhead in certain workloads. One possible use case is in a thread-safe library: your data structure incurs a much smaller locking overhead when accessed from just one thread, but has the ability to degrade gracefully to locking on a normal mutex when needed. HotSpot implements a better version of this same concept in production; in their case revocation involves stopping all threads at a safepoint and walking their stacks [8]. Please let me know if you notice any bugs or mistakes. The code itself assumes x86's memory model and will have to be changed if targeted for other architectures. And, as mentioned earlier, this is not production code.

 [1]: home.comcast.net/~pjbishop/Dave/QRL-OpLocks-BiasedLocking.pdf
 [2]: en.wikipedia.org/wiki/Compare-and-swap
 [3]: gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html
 [4]: github.com/sanjoy/Snippets/blob/master/BiasedLocking.cc
 [5]: ics.forth.gr/carv/transform/_docs/its_slides/TransForm-ITS-talk2...
[6]: preshing.com/20120913/acquire-and-release-semantics [7]: bartoszmilewski.com/2008/11/05/who-ordered-memory-fences-on-an-x [8]: github.com/openjdk-mirror/jdk7u-hotspot/blob/master/src/share/vm...
2014-04-25 04:18:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3760092258453369, "perplexity": 4124.83500545631}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
http://codereview.stackexchange.com/questions/37177/simpler-method-to-detect-int-overflow
# Simpler method to detect int overflow

To detect int overflow/underflow in C, I use the code below. What might be simpler and more portable code? (That is: fewer tests.) Assume 2's complement and don't use wider integers.

 int a, b, sum;
 sum = a + b;
 /* out-of-range only possible when the signs are the same. */
 if ((a < 0) == (b < 0)) {
   if (a < 0) {
     /* Underflow here means the result is excessively negative. */
     if (sum > b) UnderflowDetected();
   } else {
     if (sum < b) OverflowDetected();
   }
 }

 #include <limits.h>
 int safe_add(int a, int b) {
   if (a >= 0) {
     if (b > INT_MAX - a) { ; /* handle overflow */ }
   } else {
     if (b < INT_MIN - a) { ; /* handle underflow */ }
   }
   return a + b;
 }

Note: Solution does not require 2's complement. Note: This post does not address unsigned overflow. - I think you're misunderstanding underflow ... or am I? Let's say, as an example, the smallest number a float can represent is 0.001. 1.0 / 10000 would result in a value of 0.0 because the actual value is too small. –  Fiddling Bits Dec 12 '13 at 0:44 @BitFiddlingCodeMonkey this is on integers - there are various wrap-around cases where the result of an addition does not fit in the same size integer. Sometimes it's called underflow when it's the sum of two negative numbers that doesn't fit. –  Michael Urman Dec 12 '13 at 1:01 If you just change from using int to using unsigned int, or better still, uint32_t and size_t, you'll be able to do those checks after the operation. For signed ints, overflow and underflow can't be detected after-the-fact because of undefined behaviour. And be warned: undefined behaviour can exhibit itself as anything from the program appearing to work properly right through to malware being installed on your machine and being used to steal your credit card information. –  Matt Dec 12 '13 at 12:10 @Matt Are you suggesting a method that would detect the out-of-range sum of 2 ints by employing unsigned conversion?
OTOH if you are talking about overflow detection for 2 unsigned, that is a different question. if (sum < a) OverflowDetected(); seems to work for that. –  chux Dec 12 '13 at 15:46 @chux: No - I was merely pointing out that the method employed here (i.e. do an add and then see if an overflow occurred) is valid only on unsigned integers. For signed integers it is never valid because overflow of signed integers is inherently undefined in the language. –  Matt Dec 12 '13 at 15:52 Your code doesn't work: if the addition overflows then there is undefined behaviour here: sum = a + b; so your test is too late. You have to test for possible overflow before you do the addition. (If you're puzzled by this, read Dietz et al. (2012), "Understanding Integer Overflow in C/C++". Or even if you're not puzzled: it's an excellent paper!) If it were me, I'd do something like this:

 #include <limits.h>
 int safe_add(int a, int b) {
   if (a > 0 && b > INT_MAX - a) { /* handle overflow */ }
   else if (a < 0 && b < INT_MIN - a) { /* handle underflow */ }
   return a + b;
 }

but I'm not entirely sure what the point of having separate cases for overflow and underflow is. I also use Clang's -fsanitize=undefined when building for test. - Quite a lengthy paper, yet you make it seem so simple here. :-) What are INT_MAX and INT_MIN equal to? –  Jamal Dec 11 '13 at 23:17 §5.2.4.2.1 in the C99 standard defines INT_MIN to be the "minimum value for an object of type int" and INT_MAX to be the "maximum value for an object of type int". –  Gareth Rees Dec 11 '13 at 23:20 Ah, I see. And those macros come from <limits.h>. –  Jamal Dec 11 '13 at 23:22 Yes, that's right. Typical values are INT_MIN = −2^31 = −2147483648 and INT_MAX = 2^31 − 1 = 2147483647, but the values can vary depending on your processor and compiler, so the only way to make code portable is to use the macros. –  Gareth Rees Dec 11 '13 at 23:27 @Jamal Actually, with <limits>.
std::numeric_limits<int>::min() and std::numeric_limits<int>::max() to be precise. –  Yuushi Dec 12 '13 at 0:16
2014-04-23 09:32:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.335470974445343, "perplexity": 3140.8450499170094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-1-section-1-7-addition-and-subtraction-of-fractions-exercise-page-40/57
## Elementary Technical Mathematics

1 $\frac{31}{48}$ ton

Let the cooling requirements of the three separate incubation rooms be $\frac{1}{3}$ ton, $\frac{3}{4}$ ton and $\frac{9}{16}$ ton. The total capacity required for the central HVAC unit is therefore $\frac{1}{3}$ ton + $\frac{3}{4}$ ton + $\frac{9}{16}$ ton = $\frac{1}{3} \times \frac{16}{16}$ ton + $\frac{3}{4} \times \frac{12}{12}$ ton + $\frac{9}{16} \times \frac{3}{3}$ ton = $\frac{16}{48}$ ton + $\frac{36}{48}$ ton + $\frac{27}{48}$ ton = $\frac{79}{48}$ ton = 1 $\frac{31}{48}$ ton.
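The arithmetic can be checked mechanically with Python's `fractions` module (a verification sketch, not part of the textbook solution):

```python
from fractions import Fraction

# Cooling loads of the three incubation rooms, in tons.
loads = [Fraction(1, 3), Fraction(3, 4), Fraction(9, 16)]

total = sum(loads)
assert total == Fraction(79, 48)  # the LCD of 3, 4 and 16 is 48

whole, remainder = divmod(total.numerator, total.denominator)
print(whole, Fraction(remainder, total.denominator))  # 1 31/48
```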
2021-06-13 00:06:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8784884214401245, "perplexity": 3762.9538592696777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586465.3/warc/CC-MAIN-20210612222407-20210613012407-00193.warc.gz"}
http://elibm.org/article/10006893
A combinatorial proof of a Weyl type formula for hook Schur polynomials J. Algebr. Comb. 28(4), 439-459 (2008) DOI: 10.1007/s10801-007-0109-9 Summary: In this paper, we present a simple combinatorial proof of a Weyl type formula for hook Schur polynomials, which was obtained previously by other people using a Kostant type cohomology formula for $\mathfrak{gl}_{m|n}$. In general, we can obtain in a combinatorial way a Weyl type character formula for various irreducible highest weight representations of a Lie superalgebra, which together with a general linear algebra forms a Howe dual pair. Keywords/Phrases: hook Schur polynomial, Lie superalgebra, character formula
2019-05-19 09:36:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8250398635864258, "perplexity": 597.1992204881836}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254731.5/warc/CC-MAIN-20190519081519-20190519103519-00385.warc.gz"}
https://aiaspirant.com/measure-of-central-tendency/
# Measure of Central Tendency and Measure of Spread

Summarizing quantitative data can help us understand it better. In this article, we'll see various methods to summarize quantitative data by measures of central tendency (such as the mean, median, and mode) and by measures of spread (such as the range, variance, and IQR). ## MEASURES OF CENTRAL TENDENCY: The measures of central tendency attempt to describe the center of the distribution of our data. The three most common estimators of central tendency are the arithmetic mean, the median and the mode. ### MEAN: The mean is the arithmetic average of all the observations in the data set. The arithmetic mean of a population with N elements is represented by the Greek symbol μ, pronounced as "mew", and the arithmetic mean of a sample with n elements is represented by $\bar{x}$, pronounced as "x-bar." $\text { Population Mean: } \mu=\frac{\sum x_{i}}{N}$ where xi are the data values and N is the size of the population. The sample mean of n observations is given by $\text { Sample Mean: } \bar{x}=\frac{\sum x_{i}}{n}$ The mean is extremely sensitive to the presence of outliers and may not be a reliable measurement when we have outliers in our data. ### MEDIAN: The median is the middle value of the data when the data is sorted from least to greatest. The median can be found as follows. If the number of observations n is odd, the median is the value at position $\left(\frac{n+1}{2}\right)$ in the sorted data: $\text{Median} = x_{\left(\frac{n+1}{2}\right)}$ If the number of observations is even, the median is the average of the two middle values, i.e. the values at positions $\frac{n}{2}$ and $\frac{n}{2}+1$: $\text {Median}=\frac{x_{\left(\frac{n}{2}\right)}+x_{\left(\frac{n}{2}+1\right)}}{2}$ Unlike the mean, the median is not influenced by outlier values. ### MODE: The mode is the value that occurs most frequently in a data set. Some data sets can have more than one mode. If there are two values with the highest frequency, then it is said to be bimodal.
If there are more than two modes, then the data is said to be multi-modal. On the other hand, it is also possible that a data set has no mode because each value occurs only once. ### COMPARISON OF MEAN, MEDIAN AND MODE: Comparing the mean, median, and mode can give us the shape of the distribution. In a symmetric distribution, all three measures are identical. If the mean is higher than the median and the mode, then the distribution is skewed to the right, or positively skewed. On the other hand, if the mean is smaller than the median or the mode, then the distribution is skewed to the left, or negatively skewed. The measures of central tendency give us information only about the center of the distribution. However, it is also essential to understand the spread of the distribution. The spread of the data is a measure that tells us how much variation there is in the data. Standard metrics to quantify the spread are the range, variance, and IQR. ### RANGE: A straightforward, but not particularly useful, measure of spread is the range. The range is calculated as the difference between the maximum and minimum values in a data set. $\text { Range }=\operatorname{Max}-\operatorname{Min}$ As it only considers the maximum and the minimum values, it is highly impacted by the presence of outliers. ### VARIANCE AND STANDARD DEVIATION: Variance and standard deviation are measures of spread that evaluate how much the data are dispersed with respect to the arithmetic mean. The higher the variance and standard deviation, the more the data are spread out from the mean. Variance is calculated as the average of the squared deviations from the mean of X. The population variance is denoted as σ2, and the sample variance is written as S2.
$\sigma^{2}=\frac{\sum_{i=1}^{N}\left(X_{i}-\mu\right)^{2}}{N}$ Xi represents the data values μ represents the population mean N represents the number of units in the population The sample variance is given by, $S^{2}=\frac{\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)^{2}}{n-1}$ Xi represents the ith unit, starting from the first observation to the last $\bar{X}$ represents the sample mean n represents the number of units in the population Since variance considers the mean of “squared” deviations, the unit of measurement is changed. For this purpose, we calculate the square root of variance called standard deviation. Since we are taking the square root, the standard deviation reverts the unit of measurement to its original scale. The population standard deviation is denoted by σ, and the sample standard deviation is denoted as s. $\sigma=\sqrt{\sigma^{2}}\ \ \ \ (\text{for the population })$ $S=\sqrt{S^{2}}\ \ \ \ (\text { for samples })$ ### INTER-QUARTILE RANGE: The inter-quartile range(IQR) is defined as the difference between the upper quartile(Q3) and the lower quartile(Q1). $I Q R=Q 3-Q_{1}$ The lower quartile describes 25% of the data, and the upper quartile describes 75% of the data. Thus, IQR gives us the spread of the data around the median. IQR is highly resistant to outliers. ## CONCLUSION: Summarizing the quantitative data can help us understand them better. In this tutorial, we discussed various methods to summarize the data. We first discussed the measure of central tendency, which describes the center of the distribution of the data. Metrics like mean, median, and mode can be used to quantify central tendency. The measure of spread tells us how much our data is spread out. Some of the common metrics used to quantify spread are the range, variance and inter-quartile range.
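All of the quantities discussed above are available in Python's standard library; a small sketch (the data list is just an illustrative example, not from the article):

```python
from statistics import mean, median, mode, pvariance, pstdev, quantiles

data = [2, 4, 4, 4, 5, 5, 7, 9]

# central tendency
print(mean(data))     # arithmetic mean: 40/8 = 5
print(median(data))   # even n, so the average of the two middle values: 4.5
print(mode(data))     # most frequent value: 4

# spread
data_range = max(data) - min(data)   # 9 - 2 = 7
sigma2 = pvariance(data)             # population variance: 32/8 = 4
sigma = pstdev(data)                 # population standard deviation: sqrt(4) = 2
q1, q2, q3 = quantiles(data, n=4, method='inclusive')
iqr = q3 - q1                        # inter-quartile range Q3 - Q1
print(data_range, sigma2, sigma, iqr)
```

Note that `pvariance`/`pstdev` implement the population formulas (divide by N), while `variance`/`stdev` implement the sample formulas (divide by n−1) from the text.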
https://ask.sagemath.org/question/51045/metric-perturbations-on-sagemanifolds/?sort=oldest
# metric perturbations on Sagemanifolds

I would like to carry out some metric perturbations within SageManifolds. To that end, I have defined a 4-dimensional Lorentzian manifold N:

    N = Manifold(4, 'N', latex_name=r'\mathcal{N}', structure='Lorentzian')

a global chart:

    GC.<x0,x,y,z> = N.chart(r'x0:(-oo,+oo):x^0 x y z')

the corresponding frame eN:

    eN = GC.frame()

the unperturbed metric g0:

    g0 = N.metric('g0', latex_name=r'g_{(0)}')

the control parameter for the perturbation:

    var('eps', latex_name=r'\epsilon', domain='real')

and the perturbation tensor field itself:

    g1 = N.tensor_field(0, 2, name='g1', latex_name='g_{1}', sym=(0,1))

Up until here, everything seems to work fine and there are no errors or warnings. However, when I try to define the total perturbed metric via

    g = g0 + eps*g1

the following error shows up:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-10-e785f6693878> in <module>()
    ----> 1 g = g0 + epsplus*g1plus
          2 g
    ...
       2090         basis = self.common_basis(other)
       2091         if basis is None:
    -> 2092             raise ValueError("no common basis for the addition")
       2093         comp_result = self._components[basis] + other._components[basis]
       2094         result = self._fmodule.tensor_from_comp(self._tensor_type, comp_result)

    ValueError: no common basis for the addition

What is the correct way to define g as the sum of those two tensor fields? I have also tried

    g[eN] = g0[eN] + epsplus*g1plus[eN]

but there is then:

    TypeError: unhashable type: 'VectorFieldFreeModule_with_category.element_class'

and also:

    g[eN,:] = g0[eN,:] + epsplus*g1plus[eN,:]

but then the error is:

    ValueError: no basis could be found for computing the components in the Coordinate frame (N, (d/dx0,d/dx,d/dy,d/dz)).
The error is due to the fact that neither g0 nor g1 is initialized. They have simply been declared as a metric and a type (0,2) tensor field, but you should initialize their components in some vector frame in order to fully define them. Regarding perturbations of tensor fields, note that tensor series expansions were introduced in Sage 8.8; see the changelog for details, in particular cell [25] of this notebook for a concrete example of use.

Dear Eric, I get the difference between a plain declaration and an initialization; thanks. However, I would like to use, until a certain point, completely general expressions for the components (in the default frame, for instance), that is, arbitrary functions of all the coordinates. Is this feasible? If so, how? ( 2020-04-27 10:06:32 -0600 )

You can use function('A')(x0,x,y,z) to initialize some tensor components with an arbitrary function of the coordinates. NB: if you do this for all components, some computations, like the Riemann tensor, will become huge. ( 2020-04-27 11:00:54 -0600 )

I was thinking something along the lines of the package xPert (xAct) from Mathematica. ( 2020-04-27 13:28:02 -0600 )

The current implementation of tensor calculus in SageMath does not deal with "abstract" tensors, as xAct does, i.e. tensor fields have to be defined by their components in a given frame (usually a coordinate frame). Adding abstract tensor calculus to SageMath could be a nice project, if there are volunteers... ( 2020-05-01 03:10:28 -0600 )
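Putting the answer and the comments together, a minimal (untested here) Sage sketch of an initialization that makes the addition go through. The Minkowski values for g0 and the component functions A_ij are illustrative assumptions, not part of the original question:

```
# declare the manifold and chart as in the question
N = Manifold(4, 'N', structure='Lorentzian')
GC.<x0,x,y,z> = N.chart(r'x0:(-oo,+oo):x^0 x y z')

# initialize g0's components in the chart frame, e.g. Minkowski (an arbitrary choice):
g0 = N.metric('g0')
g0[0,0], g0[1,1], g0[2,2], g0[3,3] = -1, 1, 1, 1

# fill g1 with arbitrary functions of the coordinates, as suggested in the comments:
g1 = N.tensor_field(0, 2, name='g1', sym=(0,1))
for i in range(4):
    for j in range(i, 4):
        g1[i,j] = function('A_{}{}'.format(i, j))(x0, x, y, z)

eps = var('eps')
g = g0 + eps*g1   # both tensors now have components in a common frame
```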
http://farside.ph.utexas.edu/teaching/sm1/Thermalhtml/node145.html
# Stationary States

Consider separable solutions to Schrödinger's equation of the form

$\psi(x,t)=\psi(x)\,\mathrm{e}^{-\mathrm{i}\,\omega\,t}.$    (C.63)

According to Equation (C.20), such solutions have definite energies, $E=\hbar\,\omega$. For this reason, they are usually written

$\psi(x,t)=\psi(x)\,\mathrm{e}^{-\mathrm{i}\,E\,t/\hbar}.$    (C.64)

The probability of finding the particle between $x$ and $x+dx$ at time $t$ is

$P(x,t)\,dx=|\psi(x,t)|^{\,2}\,dx=|\psi(x)|^{\,2}\,dx.$    (C.65)

This probability is time independent. For this reason, states whose wavefunctions are of the form (C.64) are known as stationary states. Moreover, $\psi(x)$ is called a stationary wavefunction. Substituting (C.64) into Schrödinger's equation, (C.24), we obtain the following differential equation for $\psi(x)$:

$-\frac{\hbar^{2}}{2\,m}\,\frac{d^{2}\psi}{dx^{2}}+U(x)\,\psi=E\,\psi.$    (C.66)

This equation is called the time-independent Schrödinger equation. Of course, the most general form of this equation is

$H\,\psi=E\,\psi,$    (C.67)

where $H$ is the Hamiltonian. (See Section C.5.)

Consider a particle trapped in a one-dimensional square potential well, of infinite depth, which is such that

$U(x)=\begin{cases}0 & 0\leq x\leq a\\ \infty & \text{otherwise}.\end{cases}$    (C.68)

The particle is excluded from the region $x<0$ or $x>a$, so $\psi=0$ in this region (i.e., there is zero probability of finding the particle outside the well). Within the well, a particle of definite energy $E$ has a stationary wavefunction, $\psi(x)$, that satisfies

$\frac{d^{2}\psi}{dx^{2}}=-\frac{2\,m\,E}{\hbar^{2}}\,\psi.$    (C.69)

The boundary conditions are

$\psi(0)=\psi(a)=0.$    (C.70)

This follows because $\psi=0$ in the region $x<0$ or $x>a$, and $\psi$ must be continuous [because a discontinuous wavefunction would generate a singular term (i.e., the term involving $d^{2}\psi/dx^{2}$) in the time-independent Schrödinger equation, (C.66), that could not be balanced, even by an infinite potential]. Let us search for solutions to Equation (C.69) of the form

$\psi(x)=A\,\sin(k\,x),$    (C.71)

where $A$ is a constant. It follows that

$E=\frac{\hbar^{2}\,k^{2}}{2\,m}.$    (C.72)

The solution (C.71) automatically satisfies the boundary condition $\psi(0)=0$. The second boundary condition, $\psi(a)=0$, leads to a quantization of the wavenumber: that is,

$k=\frac{n\,\pi}{a},$    (C.73)

where $n=1,2,3,$ et cetera. (A "quantized" quantity is one that can only take certain discrete values.) Here, the integer $n$ is known as a quantum number. According to Equation (C.72), the energy $E$ is also quantized. In fact, $E=n^{2}\,E_{1}$, where

$E_{1}=\frac{\pi^{2}\,\hbar^{2}}{2\,m\,a^{2}}.$    (C.74)

Thus, the allowed wavefunctions for a particle trapped in a one-dimensional square potential well of infinite depth are

$\psi_{n}(x)=A\,\sin\left(\frac{n\,\pi\,x}{a}\right),$    (C.75)

where $n$ is a positive integer, and $A$ a constant. We cannot have $n=0$, because, in this case, we obtain a null wavefunction: that is, $\psi=0$, everywhere. Furthermore, if $n$ takes a negative integer value then it generates exactly the same wavefunction as the corresponding positive integer value (assuming $A\rightarrow -A$).

The constant $A$, appearing in the previous wavefunction, can be determined from the constraint that the wavefunction be properly normalized. For the case under consideration, the normalization condition (C.32) reduces to

$\int_{0}^{a}|\psi(x)|^{\,2}\,dx=1.$    (C.76)

It follows from Equation (C.75) that $|A|^{\,2}=2/a$. Hence, the properly normalized version of the wavefunction (C.75) is

$\psi_{n}(x)=\sqrt{\frac{2}{a}}\,\sin\left(\frac{n\,\pi\,x}{a}\right).$    (C.77)

At first sight, it seems rather strange that the lowest possible energy for a particle trapped in a one-dimensional potential well is not zero, as would be the case in classical mechanics, but rather $E_{1}=\pi^{2}\,\hbar^{2}/(2\,m\,a^{2})$. In fact, as explained in the following, this residual energy is a direct consequence of Heisenberg's uncertainty principle. A particle trapped in a one-dimensional well of width $a$ is likely to be found anywhere inside the well. Thus, the uncertainty in the particle's position is $\Delta x\sim a$. It follows from the uncertainty principle, (C.60), that

$\Delta p\gtrsim \frac{\hbar}{2\,a}.$    (C.78)

In other words, the particle cannot have zero momentum. In fact, the particle's momentum must be at least $p\sim \hbar/(2\,a)$. However, for a free particle, $E=p^{2}/(2\,m)$. Hence, the residual energy associated with the particle's residual momentum is

$E\sim \frac{p^{2}}{2\,m}\sim \frac{\hbar^{2}}{8\,m\,a^{2}},$    (C.79)

which is comparable to $E_{1}$. This type of residual energy, which often occurs in quantum mechanical systems, and has no equivalent in classical mechanics, is called zero point energy.

Richard Fitzpatrick 2016-01-25
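As a quick numerical check of the formulas above, the following sketch evaluates the energy levels (C.74) and the normalization (C.76) of the ground-state wavefunction (C.77). The electron mass, the value of ℏ, and the 1 nm well width are assumptions made for the example:

```python
import math

hbar = 1.054571817e-34    # J*s (CODATA value of the reduced Planck constant)
m_e = 9.1093837015e-31    # kg, electron mass
a = 1.0e-9                # well width: 1 nm

def E(n):
    """Energy of the n-th stationary state, E_n = n^2 * pi^2 * hbar^2 / (2 m a^2)."""
    return n**2 * math.pi**2 * hbar**2 / (2 * m_e * a**2)

# energies scale as n^2, so E_2/E_1 = 4 and E_3/E_1 = 9
print(E(2) / E(1), E(3) / E(1))

# psi_1(x) = sqrt(2/a) sin(pi x / a) integrates to 1 (midpoint rule):
Npts = 100_000
dx = a / Npts
norm = sum((2 / a) * math.sin(math.pi * (i + 0.5) * dx / a)**2 * dx
           for i in range(Npts))
print(norm)   # 1 up to discretization error
```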
https://www.sas1946.com/main/index.php?topic=25172.msg273426
### AuthorTopic: Somewhat improved Bf-110G4+ Bf-110H4 v2.0  (Read 25987 times)

#### SAS~Epervier

##### Re: Somewhat improved BF-110G4 v1.2
« Reply #36 on: June 01, 2012, 04:36:16 AM »

ModAct 4.09? Normal!

If your results are not up to your expectations, tell yourself that a large oak was once an acorn ...

#### niemel

##### Re: Somewhat improved BF-110G4 v1.2
« Reply #37 on: June 03, 2012, 07:53:35 AM »

How can I fix this problem?

#### yasei

##### Re: Somewhat improved BF-110G4 v1.2
« Reply #38 on: June 03, 2012, 01:10:01 PM »

Funkgeräte!!! So long waited for this day. Big thanks!!!

#### ANDYTOTHED

##### Re: Somewhat improved Bf-110G4+ Bf-110H4 v2.0
« Reply #39 on: June 13, 2012, 12:05:46 PM »

Hey guys, new bird in this release, check first post.

#### dsawan

##### Re: Somewhat improved Bf-110G4+ Bf-110H4 v2.0
« Reply #40 on: June 13, 2012, 03:03:21 PM »

hi, i use SAS ModAct 2.72 modded and the H version gives a null error. any ideas? it's a 4.101m

#### ANDYTOTHED

##### Re: Somewhat improved Bf-110G4+ Bf-110H4 v2.0
« Reply #41 on: June 13, 2012, 04:13:45 PM »

It's some silly thing in 4.101. I'm not sure how to fix it. As for now this mod is solely for 4.101.
I should have made this clear.

#### ANDYTOTHED

##### Re: Somewhat improved Bf-110G4+ Bf-110H4 v2.0
« Reply #42 on: August 31, 2012, 06:09:31 AM »

could someone do me a favour and send me a copy of this, I lost my own in my last re-install and FileFront ate it. I'm planning on repacking it to make it 4.11.1 compatible and putting it (and pictures) on M4T

#### ANDYTOTHED

##### Re: Somewhat improved Bf-110G4+ Bf-110H4 v2.0
« Reply #43 on: September 06, 2012, 12:47:12 PM »

#### monkie

##### Re: Somewhat improved Bf-110G4+ Bf-110H4 v2.0
« Reply #44 on: October 23, 2012, 06:29:13 PM »

I'm very new to the whole community, so please excuse me if this is not a good place to ask this. First, thanks for putting this G-4/H-4 package together; it's great to see the radar working from the back seat. I came back to IL2 after many years because I saw the youtube video of the radar and was hooked, as I love the nightfighting aspect of the sim. It's amazing how far the community has pushed this sim. I am wondering if anyone else is unable to get the G-4 or H-4 ANS-1/2 receivers to work and pick up beacons? My G-2 works perfectly; however, it's the only version of the 110s where the beacon does work. I can select the beacons but the receivers do not react or work. Several of my other German aircraft have receivers that work perfectly, so I'm hoping there might be a simple answer to get the receivers working again. I use 4.10.1m and DBW 1.71. thanks for any help.

#### ANDYTOTHED

##### Re: Somewhat improved Bf-110G4+ Bf-110H4 v2.0
« Reply #45 on: November 02, 2012, 08:24:33 AM »

Hi, as a novice modder, I wasn't sure how to go about adding beacons.
As I'm busy with a few minor test projects that will likely never see the light of day, these will likely undergo minor updates at some point in the near future.

#### r4xm

##### Re: Somewhat improved Bf-110G4+ Bf-110H4 v2.0
« Reply #46 on: November 02, 2012, 11:06:27 AM »

Hello SIISELI, I thought that this in-cockpit radar was still just a visual mod and not actually functional. When I tried it out, the radar did not seem to indicate anything about potential targets, even if they were fairly close and just ahead of my plane at the same altitude. FW

I have the same situation...

#### ANDYTOTHED

##### Re: Somewhat improved Bf-110G4+ Bf-110H4 v2.0
« Reply #47 on: November 02, 2012, 12:16:40 PM »

Nope, it works; it has a limited angle and you have to be at high altitude for it to work. I don't know enough coding to allow gain.
http://openstudy.com/updates/4f741841e4b0b478589d8864
## goku3: can someone help me find the derivative of this equation: 2x sqrt(x^2+1)

1. goku3: $2x \sqrt{x^2+1}$

2. goku3: actually i got all the way to the end, i just can't figure out how to simplify the damn thing to match the book's answer

3. dpaInc: lemme do that again ... i just realized i didn't take the square root..

4. dpaInc: (drawing)

5. goku3: 4x^2/2sqrtx^2+1 + x^2+1 and i don't know how to simplify this to get 2(2x^2+1)/sqrtx^2+1

6. dpaInc: (drawing)

7. dpaInc: factor out the common factor of 2(x^2+1)^(-1/2)

8. dpaInc: (drawing)

9. goku3: sorry, i don't know what you did. this is where i am at: 2[x(2x)/2sqrtx^2+1 + sqrt x^2+1

10. CoCoTsoi: Plz see the attachment

11. CoCoTsoi: Is the photo clear? If not, I can upload another one

12. dpaInc: i can't make out what you wrote for your derivative (@ goku3)

13. goku3: (attachment)

14. CoCoTsoi: (attachment)

15. brinethery: I agree with @CoCoTsoi 's answer.

16. brinethery: It is beauteeeful.

17. goku3: i get lost between step two and three

18. brinethery: Chain rule

19. goku3: why are there two 2x on top

20. brinethery: And product rule... http://tutorial.math.lamar.edu/Classes/CalcI/ProductQuotientRule.aspx

21. CoCoTsoi: one 2x is the original one. another comes from differentiating the equation inside the sqrt

22. brinethery: So say if you have f*g. If you use the product rule, then you'll do: f'*g + f*g'.

23. brinethery: sorry if you couldn't see the tick mark next to the f.

24. goku3: its cool. how do you get rid of the sqrt on top?

25. brinethery: Okay so going from the 3rd to the 4th line: There's 2sqrt(x^2+1). He want's to get it so it has the same denominator as the other term. The den of the other term is sqrt(x^2+1).
So with the left term, you have to multiply the numerator and denominator by sqrt(x^2+1). Then you end up with (2sqrt(x^2+1)*sqrt(x^2+1))/sqrt(x^2+1) = (2(x^2+1))/sqrt(x^2+1)

26. brinethery: lol... wants not want's. I type too fast.

27. CoCoTsoi: multiply numerator and denominator with sqrt(x^2+1)

28. goku3: thank you guys so much. every time i tried to do it i forgot to add one of the 2x and messed my entire thing up

29. brinethery: We all tend to get lost in the little steps, believe me I've been there. Coco did a great job writing that out.

30. CoCoTsoi: Coz I was lazy for typing it on keyboard so I chose to write it on paper :D

31. goku3: paper is way faster. and thank u guys for being so patient with me

32. brinethery: So goku, remember with this problem, you have to use the product rule and then the chain rule since you have stuff inside of that square root.

33. CoCoTsoi: Thanks for giving us a chance to learn

34. goku3: yeah i will. i just have to take my time on these problems and not rush them

35. brinethery: Check out paul's online notes if you haven't already, they are great :-)

36. goku3: i have them bookmarked actually
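The simplified answer reached in the thread, 2(2x^2+1)/sqrt(x^2+1), can be sanity-checked numerically with only the standard library (the sample points below are arbitrary choices for the example):

```python
import math

def f(x):
    # the original function from the thread
    return 2 * x * math.sqrt(x**2 + 1)

def fprime(x):
    # the simplified derivative: product rule + chain rule, then combined
    # over the common denominator sqrt(x^2 + 1)
    return 2 * (2 * x**2 + 1) / math.sqrt(x**2 + 1)

# compare against a central finite difference at a few points
h = 1e-6
for x in (-2.0, 0.0, 1.3, 5.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(fprime(x) - numeric) < 1e-4
    print(x, fprime(x))
```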
http://www-old.newton.ac.uk/programmes/PDS/seminars/2006021611002.html
# PDS

## Seminar

### Wealth condensation as a zero range process

Johnston, D (Heriot-Watt)

Thursday 16 February 2006, 11:00-12:00

Seminar Room 2, Newton Institute Gatehouse

#### Abstract

We discuss the wealth condensation mechanism in a simple toy economy in which individual agents' wealths are distributed according to a Pareto power law and the overall wealth is fixed. The observed behaviour is the manifestation of a transition which occurs in Zero Range Processes (ZRPs) or "balls in boxes" models. An amusing feature of the transition in this context is that the condensation can be induced by *increasing* the exponent in the power law, which one might have naively assumed penalized greater wealths more.
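For readers unfamiliar with ZRPs, here is a generic "balls in boxes" sketch (not the specific model of the talk; the hop rate u(n) = 1 + b/n and all parameter values are standard textbook choices, assumed for illustration). A box holding n balls loses a ball at a rate u(n) that depends only on n; for this rate family, condensation, where a single box comes to hold a finite fraction of all the balls, occurs for b > 2 at high enough density:

```python
import random

def zrp_sweep(occ, b, rng, steps):
    """Random sequential updates of a zero-range process on a ring.

    At each step a box holding n > 0 balls is chosen with probability
    proportional to its hop rate u(n) = 1 + b/n, and one ball moves to
    the box on its right.
    """
    N = len(occ)
    for _ in range(steps):
        weights = [1.0 + b / n if n > 0 else 0.0 for n in occ]
        i = rng.choices(range(N), weights=weights)[0]
        occ[i] -= 1
        occ[(i + 1) % N] += 1

rng = random.Random(42)
N, M, b = 50, 200, 4.0    # boxes, balls, interaction strength (b > 2)
occ = [M // N] * N        # uniform initial condition, density 4 balls/box
zrp_sweep(occ, b, rng, 20_000)
print(sum(occ), max(occ))  # total is conserved; a large max signals a condensate
```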
http://math.stackexchange.com/questions/138224/probability-of-components-to-fail
# Probability of components to fail I want to verify my reasoning with you. An electronic system contains 15 components. The probability that a component might fail is 0.15 given that they fail independently. Knowing that at least 4 and at most 7 failed, what is the probability that exactly 5 failed? My solution: $X \sim Binomial(n=15, p=0.15)$ I guess what I have to calculate is $P(X=5 | 4 \le X \le 7) = \frac{P(5 \cap \{4,5,6,7\})}{P(\{4,5,6,7\})}$. Is it correct? Thank you - Yes, the approach is correct. –  André Nicolas Apr 28 '12 at 21:37 @AndréNicolas $P(5 \cap \{4,5,6,7\}) = P(5)$ right? –  Andrew Apr 28 '12 at 21:38 Yes, that's right. –  David Mitra Apr 28 '12 at 21:39 Yes, of course. This sort of thing happens a lot in conditional probabilities, there is less work to do than it seems at first sight. –  André Nicolas Apr 28 '12 at 21:40 You already know the answer is $a=p_5/(p_4+p_5+p_6+p_7)$ where $p_k=\mathrm P(X=k)$. Further simplifications occur if one considers the ratios $r_k=p_{k+1}/p_k$ of successive weights. To wit, $$r_k=\frac{{n\choose k+1}p^{k+1}(1-p)^{n-k-1}}{{n\choose k}p^{k}(1-p)^{n-k}}=\frac{n-k}{k+1}\color{blue}{t}\quad\text{with}\ \color{blue}{t=\frac{p}{1-p}}.$$ Thus, $$\frac1a=\frac{p_4}{p_5}+1+\frac{p_6}{p_5}+\frac{p_7}{p_5}=\frac1{r_4}+1+r_5(1+r_6),$$ which, for $n=15$ and with $\color{blue}{t=\frac3{17}}$, yields $$\color{red}{a=\frac1{\frac5{11\color{blue}{t}}+1+\frac{10\color{blue}{t}}6\left(1+\frac{9\color{blue}{t}}7\right)}}.$$
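The accepted approach can be checked numerically with the standard library; this sketch also confirms that the closed form in the answer agrees with the direct conditional probability:

```python
from math import comb, isclose

n, p = 15, 0.15

def pmf(k):
    # binomial probability P(X = k) for X ~ Binomial(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

# direct conditional probability P(X = 5 | 4 <= X <= 7)
a_direct = pmf(5) / sum(pmf(k) for k in range(4, 8))

# closed form from the answer, with t = p/(1-p) = 3/17
t = p / (1 - p)
a_closed = 1 / (5 / (11 * t) + 1 + (10 * t / 6) * (1 + 9 * t / 7))

print(a_direct, a_closed)   # both are about 0.254
```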
https://en.m.wikibooks.org/wiki/Linear_Algebra/Topic:_Orthonormal_Matrices/Solutions
# Linear Algebra/Topic: Orthonormal Matrices/Solutions ## Solutions Problem 1 Decide if each of these is an orthonormal matrix. 1. ${\displaystyle {\begin{pmatrix}1/{\sqrt {2}}&-1/{\sqrt {2}}\\-1/{\sqrt {2}}&-1/{\sqrt {2}}\end{pmatrix}}}$ 2. ${\displaystyle {\begin{pmatrix}1/{\sqrt {3}}&-1/{\sqrt {3}}\\-1/{\sqrt {3}}&-1/{\sqrt {3}}\end{pmatrix}}}$ 3. ${\displaystyle {\begin{pmatrix}1/{\sqrt {3}}&-{\sqrt {2}}/{\sqrt {3}}\\-{\sqrt {2}}/{\sqrt {3}}&-1/{\sqrt {3}}\end{pmatrix}}}$ 1. Yes. 2. No, the columns do not have length one. 3. Yes. Problem 2 Write down the formula for each of these distance-preserving maps. 1. the map that rotates ${\displaystyle \pi /6}$  radians, and then translates by ${\displaystyle {\vec {e}}_{2}}$ 2. the map that reflects about the line ${\displaystyle y=2x}$ 3. the map that reflects about ${\displaystyle y=-2x}$  and translates over ${\displaystyle 1}$  and up ${\displaystyle 1}$ Some of these are nonlinear, because they involve a nontrivial translation. 1. ${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}\mapsto {\begin{pmatrix}x\cdot \cos(\pi /6)-y\cdot \sin(\pi /6)\\x\cdot \sin(\pi /6)+y\cdot \cos(\pi /6)\end{pmatrix}}+{\begin{pmatrix}0\\1\end{pmatrix}}={\begin{pmatrix}x\cdot ({\sqrt {3}}/2)-y\cdot (1/2)+0\\x\cdot (1/2)+y\cdot ({\sqrt {3}}/2)+1\end{pmatrix}}}$ 2. The line ${\displaystyle y=2x}$  makes an angle of ${\displaystyle \arctan(2/1)}$  with the ${\displaystyle x}$ -axis. Thus ${\displaystyle \sin \theta =2/{\sqrt {5}}}$  and ${\displaystyle \cos \theta =1/{\sqrt {5}}}$ . ${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}\mapsto {\begin{pmatrix}x\cdot (1/{\sqrt {5}})-y\cdot (2/{\sqrt {5}})\\x\cdot (2/{\sqrt {5}})+y\cdot (1/{\sqrt {5}})\end{pmatrix}}}$ 3.
${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}\mapsto {\begin{pmatrix}x\cdot (1/{\sqrt {5}})-y\cdot (-2/{\sqrt {5}})\\x\cdot (-2/{\sqrt {5}})+y\cdot (1/{\sqrt {5}})\end{pmatrix}}+{\begin{pmatrix}1\\1\end{pmatrix}}={\begin{pmatrix}x/{\sqrt {5}}+2y/{\sqrt {5}}+1\\-2x/{\sqrt {5}}+y/{\sqrt {5}}+1\end{pmatrix}}}$ Problem 3 1. The proof that a map that is distance-preserving and sends the zero vector to itself incidentally shows that such a map is one-to-one and onto (the point in the domain determined by ${\displaystyle d_{0}}$ , ${\displaystyle d_{1}}$ , and ${\displaystyle d_{2}}$  corresponds to the point in the codomain determined by those three). Therefore any distance-preserving map has an inverse. Show that the inverse is also distance-preserving. 2. Prove that congruence is an equivalence relation between plane figures. 1. Let ${\displaystyle f}$  be distance-preserving and consider ${\displaystyle f^{-1}}$ . Any two points in the codomain can be written as ${\displaystyle f(P_{1})}$  and ${\displaystyle f(P_{2})}$ . Because ${\displaystyle f}$  is distance-preserving, the distance from ${\displaystyle f(P_{1})}$  to ${\displaystyle f(P_{2})}$  equals the distance from ${\displaystyle P_{1}}$  to ${\displaystyle P_{2}}$ . But this is exactly what is required for ${\displaystyle f^{-1}}$  to be distance-preserving. 2. Any plane figure ${\displaystyle F}$  is congruent to itself via the identity map ${\displaystyle {\mbox{id}}:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ , which is obviously distance-preserving. If ${\displaystyle F_{1}}$  is congruent to ${\displaystyle F_{2}}$  (via some ${\displaystyle f}$ ) then ${\displaystyle F_{2}}$  is congruent to ${\displaystyle F_{1}}$  via ${\displaystyle f^{-1}}$ , which is distance-preserving by the prior item. 
Finally, if ${\displaystyle F_{1}}$  is congruent to ${\displaystyle F_{2}}$  (via some ${\displaystyle f}$ ) and ${\displaystyle F_{2}}$  is congruent to ${\displaystyle F_{3}}$  (via some ${\displaystyle g}$ ) then ${\displaystyle F_{1}}$  is congruent to ${\displaystyle F_{3}}$  via ${\displaystyle g\circ f}$ , which is easily checked to be distance-preserving. Problem 4 In practice the matrix for the distance-preserving linear transformation and the translation are often combined into one. Check that these two computations yield the same first two components. ${\displaystyle {\begin{pmatrix}a&c\\b&d\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}+{\begin{pmatrix}e\\f\end{pmatrix}}\qquad {\begin{pmatrix}a&c&e\\b&d&f\\0&0&1\end{pmatrix}}{\begin{pmatrix}x\\y\\1\end{pmatrix}}}$ (These are homogeneous coordinates; see the Topic on Projective Geometry). The first two components of each are ${\displaystyle ax+cy+e}$  and ${\displaystyle bx+dy+f}$ . 1. The Pythagorean Theorem gives that three points are colinear if and only if (for some ordering of them into ${\displaystyle P_{1}}$ , ${\displaystyle P_{2}}$ , and ${\displaystyle P_{3}}$ ), ${\displaystyle {\text{dist}}\,(P_{1},P_{2})+{\text{dist}}\,(P_{2},P_{3})={\text{dist}}\,(P_{1},P_{3})}$ . Of course, where ${\displaystyle f}$  is distance-preserving, this holds if and only if ${\displaystyle {\text{dist}}\,(f(P_{1}),f(P_{2}))+{\text{dist}}\,(f(P_{2}),f(P_{3}))={\text{dist}}\,(f(P_{1}),f(P_{3}))}$ , which, again by Pythagoras, is true if and only if ${\displaystyle f(P_{1})}$ , ${\displaystyle f(P_{2})}$ , and ${\displaystyle f(P_{3})}$  are colinear. The argument for betweeness is similar (above, ${\displaystyle P_{2}}$  is between ${\displaystyle P_{1}}$  and ${\displaystyle P_{3}}$ ). If the figure ${\displaystyle F}$  is a triangle then it is the union of three line segments ${\displaystyle P_{1}P_{2}}$ , ${\displaystyle P_{2}P_{3}}$ , and ${\displaystyle P_{1}P_{3}}$ . 
The prior two paragraphs together show that the property of being a line segment is invariant. So ${\displaystyle f(F)}$  is the union of three line segments, and so is a triangle. A circle ${\displaystyle C}$  centered at ${\displaystyle P}$  and of radius ${\displaystyle r}$  is the set of all points ${\displaystyle Q}$  such that ${\displaystyle {\text{dist}}\,(P,Q)=r}$ . Applying the distance-preserving map ${\displaystyle f}$  gives that the image ${\displaystyle f(C)}$  is the set of all ${\displaystyle f(Q)}$  subject to the condition that ${\displaystyle {\text{dist}}\,(P,Q)=r}$ . Since ${\displaystyle {\text{dist}}\,(P,Q)={\text{dist}}\,(f(P),f(Q))}$ , the set ${\displaystyle f(C)}$  is also a circle, with center ${\displaystyle f(P)}$  and radius ${\displaystyle r}$ . 3. One that was mentioned in the section is the "sense" of a figure. A triangle whose vertices read clockwise as ${\displaystyle P_{1}}$ , ${\displaystyle P_{2}}$ , ${\displaystyle P_{3}}$  may, under a distance-preserving map, be sent to a triangle read ${\displaystyle P_{1}}$ , ${\displaystyle P_{2}}$ , ${\displaystyle P_{3}}$  counterclockwise.
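The last point about "sense" can be spot-checked numerically: a reflection preserves every pairwise distance yet reverses the sign of a triangle's oriented area. A small Python sketch (illustrative only, not part of the text's argument):

```python
import math

def signed_area(p1, p2, p3):
    # Twice the signed area of triangle p1 p2 p3; the sign encodes the
    # orientation (positive for counterclockwise vertex order).
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)

def reflect_x(p):
    # Reflection across the x-axis: distance-preserving, orientation-reversing.
    x, y = p
    return (x, -y)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # counterclockwise triangle
image = [reflect_x(p) for p in tri]

# All pairwise distances are preserved ...
assert all(
    math.isclose(dist(tri[i], tri[j]), dist(image[i], image[j]))
    for i in range(3) for j in range(3)
)
# ... yet the sense (the sign of the area) flips.
assert signed_area(*tri) > 0 > signed_area(*image)
```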
https://blog.rush-nlp.com/named-tensor.html
# Named Tensor Alexander Rush - @harvardnlp TL;DR: Despite its ubiquity in deep learning, Tensor is broken. It forces bad habits such as exposing private dimensions, broadcasting based on absolute position, and keeping type information in documentation. This post presents a proof-of-concept of an alternative approach, **named tensors**, with named dimensions. This change eliminates the need for indexing, dim arguments, einsum-style unpacking, and documentation-based coding. The prototype **PyTorch library** accompanying this blog post is available as [namedtensor](https://github.com/harvardnlp/NamedTensor). Changelog • Updated the syntax of the prototype to be a subset of xarray wherever possible. • Dropped the einops style string DSL notation to be more explicit. Implementations • Jon Malmaud points out that the xarray project has very similar goals to this note, with the addition of extensive Pandas and scientific computing support. • Tongfei Chen's Nexus project proposes statically type-safe tensors in Scala. • Stephan Hoyer and Eric Christiansen have a labeled tensor library for Tensorflow that takes the same approach: Labeled Tensor. • Nishant Sinha has a TSA library that uses type annotations to define dimension names. In [1]: #@title Setup #!rm -fr NamedTensor/; git clone -q https://github.com/harvardnlp/NamedTensor.git #!cd NamedTensor; pip install -q .; pip install -q torch numpy opt_einsum In [2]: import numpy import torch from namedtensor import NamedTensor, ntorch from namedtensor import _im_init _im_init() # Tensor Traps¶ This post is about the tensor class, a multi-dimensional array object that is the central object of deep learning frameworks such as Torch, TensorFlow and Chainer, as well as numpy. Tensors carry around a blob of storage and expose a tuple of dimension information to users. In [3]: ims.shape Out[3]: torch.Size([6, 96, 96, 3]) Here there are 4 dimensions, corresponding to batch_size, height, width, and channels.
Most of the time you can figure this out by some comment in the code that looks like this: In [4]: # batch_size x height x width x channels ims[0] Out[4]: This approach is concise and pseudo-mathy. However, from a programming point of view it is not a great way to build complex software. ## Trap 1: Privacy by Convention¶ Code that manipulates tensors does so by dimension identifiers in the tuple. If you want to rotate the image you read the comment, decide what dimensions need to be changed and alter them. In [5]: def rotate(ims): # batch_size x height x width x channels rotated = ims.transpose(1, 2) # batch_size x width x height x channels return rotated rotate(ims)[0] Out[5]: This code is simple and in theory well documented. However, it does not reflect the semantics of the target function. The property of rotation is independent of the batch, or for that matter, the channels. The function should not have to account for these dimensions in determining the dimensions to alter. This leads to two problems. First, it's quite worrisome that if we pass in a singleton image this function runs without error but silently transposes the wrong dimensions. In [6]: rotate(ims[0]).shape Out[6]: torch.Size([96, 3, 96]) However, even more worrisome is that the function may actually use the batch dimension by mistake and mix together properties of different images. This can lead to nasty bugs that would be easy to avoid if this dimension were hidden from the code. ## Trap 2: Broadcasting by Alignment¶ The most useful aspect of tensors is that they can quickly do array operations without directly requiring for loops. For this to work, dimensions need to be directly aligned so that they can be broadcast. Again this is done by convention and code documentation that makes it "easy" to line up dimensions. For instance, let's assume we want to apply a mask to the above image.
In [7]:
# height x width
mask = torch.randint(0, 2, [96, 96]).byte()
Out[7]:
In [8]:
try:
    # Apply the mask directly (the masked operation is reconstructed here).
    ims.masked_fill_(mask, 0)
except RuntimeError:
    error = "Broadcasting fail %s %s" % (mask.shape, ims.shape)
error
Out[8]: 'Broadcasting fail torch.Size([96, 96]) torch.Size([6, 96, 96, 3])' This fails because even though we knew that we were building a height and width shaped mask, the rules of broadcasting do not have the correct semantics. To make this work, you are encouraged to use either view or squeeze, my two least favorite functions. In [9]:
# either (the exact reshaping calls are reconstructed)
ims.masked_fill_(mask.view(96, 96, 1), 0)
# or
ims.masked_fill_(mask.unsqueeze(-1), 0)
# height x width x channels
Out[9]: Note we do not need to do this for the left-most dimensions, so there is a bit of abstraction here. However, reading through real code, dozens of right-side views and squeezes become completely unreadable. ## Trap 3: Access by Comments¶ It is possible that you look at the top two issues and think that as long as you are careful, these issues will be caught by run-time errors. However, even well used, the combination of broadcasting and indexing can lead to problems that are very tough to catch. In [10]:
a = ims[1].mean(2, keepdim=True)
# height x width x 1

# (Lots of code in between)
# .......................

# Code comment explaining what should be happening.
dim = 1   # (Or maybe it should be a 2? or a 0?)
b = a + ims.mean(dim, keepdim=True)[0]
b
Out[10]: Here we assume that the coder is trying to combine two tensors using both reduction operations and dimension indexing. (Honestly at this point I have forgotten what the dimensions stand for.) The main point though is that this code will run fine for whatever value dim is given. The comment here might describe what is happening but the code itself doesn't throw a run-time error. # Named Tensor: A Prototype¶ Based on these issues, I think deep learning code should move to a better central object. There are several of these proposed. Here for fun, I will develop a new prototype. I have the following goals. 1) Dimensions should have human-readable names.
2) No function should have a dim argument. 3) Broadcast should be by name matching. 4) Transposition should be explicit. 5) Ban dimension based indexing. 6) Private dimensions should be protected. To experiment with these ideas I have built a library known as NamedTensor. Currently it is PyTorch specific, but in theory a similar idea could be used in other frameworks. The code is available at github.com/harvardnlp/namedtensor. ## Proposal 1: Assigning Names¶ The core of the library is an object that wraps a tensor and provides names for each dimension. Here we simply wrap a given torch tensor with dimension names. In [11]: named_ims = NamedTensor(ims, ("batch", "height", "width", "channels")) named_ims.shape Out[11]: OrderedDict([('batch', 6), ('height', 96), ('width', 96), ('channels', 3)]) Alternatively the library has wrappers for the pytorch constructors to turn them into named tensors. In [12]: ex = ntorch.randn(dict(height=96, width=96, channels=3)) ex Out[12]: Most simple operations simply keep around the named tensor properties. In [13]: ex.log() # or ntorch.log(ex) None ## Proposal 2: Accessors and Reduction¶ The first benefit of names comes from the ability to replace the need for dim and axis style arguments entirely. For example, let's say we wanted to sort each column. In [14]: sortex, _ = ex.sort("width") sortex Out[14]: Another common operation is a reduction where one or more dimensions are pooled out. In [15]: named_ims.mean("batch") Out[15]: In [16]: named_ims.mean(("batch", "channels")) Out[16]: ## Proposal 3: Broadcasting and Contraction¶ The names that are provided also provide the basis for broadcasting operations. When there is a binary operation between two named tensors they first ensure that all dimensions are matched in name and then apply standard broadcasting. To demonstrate let's return to the masking example above. Here we simply declare the names of the dimensions of our mask, and ask the library to figure out the broadcasting.
In [17]: im = NamedTensor(ims[0], ("height", "width", "channels")) im2 = NamedTensor(ims[1], ("height", "width", "channels")) mask = NamedTensor(torch.randint(0, 2, [96, 96]).byte(), ("height", "width")) Out[17]: Similar operations can be used for standard matrix operations such as addition and multiplication. In [18]: Out[18]: A more general feature is the dot method for tensor contraction between named tensors. Tensor contraction, the machinery behind einsum, is an elegant way of thinking about generalizations of dot-products, matrix-vector products, matrix-matrix products, etc. In [19]: # Runs torch.einsum(ijk,ijk->jk, tensor1, tensor2) im.dot("height", im2).shape Out[19]: OrderedDict([('width', 96), ('channels', 3)]) In [20]: # Runs torch.einsum(ijk,ijk->ik, tensor1, tensor2) im.dot("width", im2).shape Out[20]: OrderedDict([('height', 96), ('channels', 3)]) In [21]: # Runs torch.einsum(ijk,ijk->k, tensor1, tensor2) im.dot(("height", "width"), im2).shape Out[21]: OrderedDict([('channels', 3)]) Similar notation can be used for sparse indexing (inspired by the einindex library). This is useful for embedding lookups and other sparse operations. In [22]: pick, _ = NamedTensor(torch.randint(0, 96, [50]).long(), ("lookups",)) \ .sort("lookups") # Select 50 random rows. im.index_select("height", pick) Out[22]: ## Proposal 4: Shifting Dimensions¶ Behind the scenes all of the named tensors are acting as tensor objects. As such, things like the order and stride of dimensions do matter. Operations like transpose and view are crucial for maintaining this, but are unfortunately quite error-prone. Instead, consider a domain specific language shift that borrows heavily from Alex Rogozhnikov's excellent einops package. In [23]: tensor = NamedTensor(ims[0], ("h", "w", "c")) tensor Out[23]: Standard calls to transpose dimensions. In [24]: tensor.transpose("w", "h", "c") Out[24]: Calls for splitting and stacking together dimensions.
In [25]: tensor = NamedTensor(ims[0], ("h", "w", "c")) tensor.split(h=("height", "q"), height=8).shape Out[25]: OrderedDict([('height', 8), ('q', 12), ('w', 96), ('c', 3)]) In [26]: tensor = NamedTensor(ims, ('b', 'h', 'w', 'c')) tensor.stack(bh = ('b', 'h')).shape Out[26]: OrderedDict([('bh', 576), ('w', 96), ('c', 3)]) Ops can be chained. In [27]: tensor.stack(bw=('b', 'w')).transpose('h', 'bw', 'c') Out[27]: Just for fun, here are some of the crazier examples from einops in this notation. In [28]: tensor.split(b=('b1', 'b2'), b1=2).stack(a=('b2', 'h'), d=('b1', 'w'))\ .transpose('a', 'd', 'c') Out[28]: In [29]: tensor.split(w=('w1', 'w2'), w2=2).stack(a=('h', 'w2'), d=('b', 'w1'))\ .transpose('a', 'd', 'c') Out[29]: In [30]: tensor.stack(a=('b', 'w')).transpose('h', 'a', 'c') Out[30]: In [31]: tensor.stack(a=('w', 'b')).transpose('h', 'a', 'c') Out[31]: In [32]: tensor = NamedTensor(ims, ('b', 'h', 'w', 'c')) tensor.mean('b') Out[32]: In [33]: tensor = NamedTensor(ims, ('b', 'h', 'w', 'c')) tensor.split(h = ('h1', 'h2'), h2 =2).split(w = ('w1', 'w2'), w2=2) \ .mean(('h2', 'w2')).stack(bw=('b', 'w1')) Out[33]: In [34]: tensor = NamedTensor(ims, ('b', 'h', 'w', 'c')) tensor.split(b = ('b1', 'b2'), b1 = 2).mean('c') \ .stack(bw=("b1", "w"), bh=('b2', 'h')).transpose('bh', 'bw') Out[34]: In [35]: tensor.split(b = ('b1', 'b2'), b1=2).stack(h=('h', 'b1'), w=('w', 'b2')) Out[35]: ## Proposal 5: Ban Indexing¶ Generally indexing is discouraged in this named tensor paradigm. Instead use functions like index_select above. There are some useful named alternative functions pulled over from torch. For example unbind pulls apart a dimension to a tuple. In [36]: tensor = NamedTensor(ims, ('b', 'h', 'w', 'c')) # Returns a tuple images = tensor.unbind("b") images[3] Out[36]: The function get directly selects a slice from a named dimension. In [37]: # Returns a tuple images = tensor.get("b", 0).unbind("c") images[1] Out[37]: Finally, narrow can be used to replace fancy indexing.
However, you must give a new dim name (since it can no longer broadcast). In [38]: tensor.narrow( 30, 50, h='narrowedheight').get("b", 0) Out[38]: ## Proposal 6: Private Dimensions¶ Finally, named tensor attempts to let you directly hide dimensions that should not be accessed by internal functions. The function mask_to will keep around a left side mask that protects any earlier dimensions from manipulations by functions. The simplest example uses a mask to drop the batch dimension. In [39]:
def bad_function(x, y):
    # Accesses the private batch dimension
    return x.mean("batch")

x = ntorch.randn(dict(batch=10, height=100, width=100))
y = ntorch.randn(dict(batch=10, height=100, width=100))
try:
    # (Call reconstructed: the batch dim is masked before the call.)
    bad_function(x.mask_to("batch"), y)
except RuntimeError as e:
    error = "Error received: " + str(e)
error
Out[39]: This is a weak dynamic check and can be turned off by internal functions. In future versions, perhaps we can add function annotations to lift non-named functions to respect these properties. # Example: Neural Attention¶ To demonstrate why these choices lead to better encapsulation properties, let's consider a real-world deep learning example. This example was proposed by my colleague Tim Rocktäschel in the blog post describing einsum (https://rockt.github.io/2018/04/30/einsum). Tim's code was proposed as a better alternative to raw PyTorch. While I agree that einsum is a step forward, it still falls into many of the traps described above. Consider the problem of neural attention, which requires computing, \begin{align*} \mathbf{M}_t &= \tanh(\mathbf{W}^y\mathbf{Y}+(\mathbf{W}^h\mathbf{h}_t+\mathbf{W}^r\mathbf{r}_{t-1})\otimes \mathbf{e}_L) & \mathbf{M}_t &\in\mathbb{R}^{k\times L}\\ \alpha_t &= \text{softmax}(\mathbf{w}^T\mathbf{M}_t)&\alpha_t&\in\mathbb{R}^L\\ \mathbf{r}_t &= \mathbf{Y}\alpha^T_t + \tanh(\mathbf{W}^t\mathbf{r}_{t-1})&\mathbf{r}_t&\in\mathbb{R}^k \end{align*} First we set up the parameters.
In [40]:
# (The helper definition and the truncated arguments are reconstructed from context.)
def random_ntensors(names, num=1, requires_grad=False):
    tensors = [ntorch.randn(names, requires_grad=requires_grad)
               for i in range(0, num)]
    return tensors[0] if num == 1 else tensors

class Param:
    def __init__(self, in_hid, out_hid):
        torch.manual_seed(0)
        self.WY, self.Wh, self.Wr, self.Wt = \
            random_ntensors(dict(inhid=in_hid, outhid=out_hid),
                            num=4, requires_grad=True)
        self.bM, self.br, self.w = \
            random_ntensors(dict(outhid=out_hid), num=3,
                            requires_grad=True)

Now consider the tensor-based einsum implementation of this function. In [41]:
# Einsum Implementation
import torch.nn.functional as F

def einsum_attn(params, Y, ht, rt1):
    # -- [batch_size x hidden_dimension]
    tmp = torch.einsum("ik,kl->il", [ht, params.Wh.values]) + \
          torch.einsum("ik,kl->il", [rt1, params.Wr.values])

    Mt = torch.tanh(torch.einsum("ijk,kl->ijl", [Y, params.WY.values]) + \
                    tmp.unsqueeze(1).expand_as(Y) + params.bM.values)

    # -- [batch_size x sequence_length]
    at = F.softmax(torch.einsum("ijk,k->ij", [Mt, params.w.values]), dim=-1)

    # -- [batch_size x hidden_dimension]
    rt = torch.einsum("ijk,ij->ik", [Y, at]) + \
         torch.tanh(torch.einsum("ij,jk->ik", [rt1, params.Wt.values]) + params.br.values)

    # -- [batch_size x hidden_dimension], [batch_size x sequence_dimension]
    return rt, at

This implementation is an improvement over the naive PyTorch implementation. It removes many of the views and transposes that would be necessary to make this work. However, it still uses unsqueeze, references the private batch dim, and uses comments that are not enforced. In [42]:
def namedtensor_attn(params, Y, ht, rt1):
    tmp = ht.dot("inhid", params.Wh) + rt1.dot("inhid", params.Wr)
    at = ntorch.tanh(Y.dot("inhid", params.WY) + tmp + params.bM) \
              .dot("outhid", params.w) \
              .softmax("seqlen")
    rt = Y.dot("seqlen", at).stack(inhid=('outhid',)) + \
         ntorch.tanh(rt1.dot("inhid", params.Wt) + params.br)
    return rt, at

This code avoids all three traps. (Trap 1) The code never mentions the batch dim. (Trap 2) All broadcasting is done directly with contractions; there are no views. (Trap 3) Operations across dims are explicit. For instance, the softmax is clearly over the seqlen.
In [43]: # Run Einsum in_hid = 7; out_hid = 7 Y = torch.randn(3, 5, in_hid) ht, rt1 = torch.randn(3, in_hid), torch.randn(3, in_hid) params = Param(in_hid, out_hid) r, a = einsum_attn(params, Y, ht, rt1) In [44]: # Run Named Tensor (hiding batch) Y = NamedTensor(Y, ("batch", "seqlen", "inhid"), mask=1) ht = NamedTensor(ht, ("batch", "inhid"), mask=1) rt1 = NamedTensor(rt1, ("batch", "inhid"), mask=1) nr, na = namedtensor_attn(params, Y, ht, rt1) # Conclusion / Request for Help¶ Tools for deep learning help researchers implement standard models, but they also impact what researchers try. Current models can be built fine with the tools we have, but the programming practices are not going to scale to new models. (For instance, one space we have been working on recently is discrete latent variable models, which often have many problem-specific variables, each with their own variable dimension. This setting breaks the current tensor paradigm almost immediately.) This blog post is just a prototype of where this approach could go. If you are interested, I would love contributors to help build out this library properly. Some ideas if you want to send a PR to namedtensor: 1) Extending beyond PyTorch: Can we generalize this approach in a way that supports NumPy and Tensorflow? 2) Interacting with PyTorch Modules: Can we "lift" PyTorch modules with type annotations, so that we know how they change inputs? 3) Error Checking: Can we add annotations to functions giving pre- and post-conditions so that dimensions are automatically checked?
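The core mechanic throughout this post — aligning dimensions by name before broadcasting — can be sketched in a few lines of plain NumPy. The function names and details below are my own illustration, not the namedtensor API:

```python
import numpy as np

def align(values, names, order):
    # Append size-1 axes for names this tensor lacks, then permute the
    # axes into the requested order (a sketch of name-based alignment).
    expanded = values.reshape(values.shape + (1,) * (len(order) - len(names)))
    current = list(names) + [n for n in order if n not in names]
    perm = [current.index(n) for n in order]
    return expanded.transpose(perm)

def named_mul(a, a_names, b, b_names):
    # Union of the two name lists, keeping a's order first.
    order = list(a_names) + [n for n in b_names if n not in a_names]
    return align(a, a_names, order) * align(b, b_names, order), order

ims = np.ones((6, 96, 96, 3))
mask = np.zeros((96, 96))
out, names = named_mul(ims, ("batch", "height", "width", "channels"),
                       mask, ("height", "width"))
assert out.shape == (6, 96, 96, 3)
assert names == ["batch", "height", "width", "channels"]
```

No view or unsqueeze appears at the call site; the mask lines up with the image by its dimension names alone.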
https://www.vedantu.com/question-answer/alpha-beta-are-complex-cube-roots-of-unity-and-class-11-maths-cbse-5ee1de14c9e6ad07956eef78
$\alpha ,\beta$ are complex cube roots of unity and $x=a+b$, $y=a\alpha +b\beta$, $z=a\beta +b\alpha$ then ${{x}^{3}}+{{y}^{3}}+{{z}^{3}}$=
A. $0$
B. $3ab$
C. $3({{a}^{3}}-{{b}^{3}})$
D. $3{{(a-b)}^{3}}$
Verified
Hint: Assume $\alpha =\omega ,\beta ={{\omega }^{2}}$. Substitute the values in $x$, $y$ and $z$. Then add $x+y+z$ and simplify. Then use the formula ${{x}^{3}}+{{y}^{3}}+{{z}^{3}}=(x+y+z)({{x}^{2}}+{{y}^{2}}+{{z}^{2}}-xy-yz-zx)+3xyz$. Then substitute the value of $x+y+z$ in the formula. You will get the answer. Try it.
$x=a+b$
$y=a\omega +b{{\omega }^{2}}$
$z=a{{\omega }^{2}}+b\omega$
So $x+y+z=a+b+a\omega +b{{\omega }^{2}}+a{{\omega }^{2}}+b\omega$
$x+y+z=a(1+\omega +{{\omega }^{2}})+b(1+\omega +{{\omega }^{2}})$
We know $1+\omega +{{\omega }^{2}}=0$.
So $x+y+z=0$.
We know the formula ${{x}^{3}}+{{y}^{3}}+{{z}^{3}}=(x+y+z)({{x}^{2}}+{{y}^{2}}+{{z}^{2}}-xy-yz-zx)+3xyz$.
Here $x+y+z=0$, so
${{x}^{3}}+{{y}^{3}}+{{z}^{3}}=(0)({{x}^{2}}+{{y}^{2}}+{{z}^{2}}-xy-yz-zx)+3xyz=3xyz$
We know the values of $x,y$ and $z$. Substituting them we get,
${{x}^{3}}+{{y}^{3}}+{{z}^{3}}=3(a+b)(a\omega +b{{\omega }^{2}})(a{{\omega }^{2}}+b\omega )$
Simplifying, using ${{\omega }^{3}}=1$ and ${{\omega }^{4}}=\omega$,
$(a\omega +b{{\omega }^{2}})(a{{\omega }^{2}}+b\omega )={{a}^{2}}{{\omega }^{3}}+ab{{\omega }^{2}}+ab{{\omega }^{4}}+{{b}^{2}}{{\omega }^{3}}={{a}^{2}}+{{b}^{2}}+ab({{\omega }^{2}}+\omega )={{a}^{2}}-ab+{{b}^{2}}$
Hence
${{x}^{3}}+{{y}^{3}}+{{z}^{3}}=3(a+b)({{a}^{2}}-ab+{{b}^{2}})=3({{a}^{3}}+{{b}^{3}})$
So we get the value of ${{x}^{3}}+{{y}^{3}}+{{z}^{3}}=3({{a}^{3}}+{{b}^{3}})$.
Note: Read the question carefully. Also, while simplifying, don't make any mistake. Take utmost care of the sign. Do not jumble while simplifying. Solve it step by step. You must know the formula ${{x}^{3}}+{{y}^{3}}+{{z}^{3}}=(x+y+z)({{x}^{2}}+{{y}^{2}}+{{z}^{2}}-xy-yz-zx)+3xyz$.
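The result can be spot-checked numerically in Python with a primitive cube root of unity (an illustrative check, not part of the original solution):

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity
a, b = 2.0, 5.0
x = a + b
y = a * omega + b * omega**2
z = a * omega**2 + b * omega

s = x**3 + y**3 + z**3
assert abs(x + y + z) < 1e-9                # x + y + z = 0
assert abs(s - 3 * x * y * z) < 1e-6        # sum-of-cubes identity
assert abs(s - 3 * (a**3 + b**3)) < 1e-6    # the closed form 3(a^3 + b^3)
```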
https://www.numerade.com/questions/for-the-following-exercises-solve-each-system-by-gaussian-elimination-beginarrayl03-x03-y05-z06-04-x/
Problem 44

# For the following exercises, solve each system by Gaussian elimination.$$\begin{array}{l}{0.3 x+0.3 y+0.5 z=0.6} \\ {0.4 x+0.4 y+0.4 z=1.8} \\ {0.4 x+0.2 y+0.1 z=1.6}\end{array}$$

## Video Transcript

All right, question number 44: three equations, three variables, Gaussian elimination, with a bunch of decimals in it. Let's start out by getting rid of the decimals. Multiply all the equations by 10. The first becomes 3x + 3y + 5z = 6, the second becomes 4x + 4y + 4z = 18, and the third becomes 4x + 2y + z = 16.

Now we want a variable to have the same coefficient in every equation so we can eliminate it. The x coefficients 3 and 4 both go into 60, so scale the equations. Multiply the first by 20: 60x + 60y + 100z = 120. Multiply the second by 15: 60x + 60y + 60z = 270. And multiply the third by 15 as well: 60x + 30y + 15z = 240.

Subtract the second equation from the first: the x's and the y's cancel, leaving 40z = 120 − 270 = −150, so z = −3.75.

Next, subtract the third equation from the second: 30y + 45z = 270 − 240 = 30. Plug in z = −3.75: 30y − 168.75 = 30, so 30y = 198.75 and y = 6.625.

Finally, go back to the second original equation, 4x + 4y + 4z = 18, which is just x + y + z = 4.5. So x = 4.5 − 6.625 − (−3.75) = 1.625.

So the answer is x = 1.625, y = 6.625, and z = −3.75.
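The printed system can be double-checked with a short exact-arithmetic Gaussian elimination in Python (an illustrative sketch, not part of the original video):

```python
from fractions import Fraction as F

# The system after clearing decimals: 3x+3y+5z=6, 4x+4y+4z=18, 4x+2y+z=16,
# written as an augmented matrix in exact rational arithmetic.
A = [[F(3), F(3), F(5), F(6)],
     [F(4), F(4), F(4), F(18)],
     [F(4), F(2), F(1), F(16)]]

# Forward elimination (swapping in a nonzero pivot when needed).
for col in range(3):
    pivot = next(r for r in range(col, 3) if A[r][col] != 0)
    A[col], A[pivot] = A[pivot], A[col]
    for r in range(col + 1, 3):
        factor = A[r][col] / A[col][col]
        A[r] = [a - factor * b for a, b in zip(A[r], A[col])]

# Back substitution.
x = [F(0)] * 3
for r in range(2, -1, -1):
    x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]

assert x == [F(13, 8), F(53, 8), F(-15, 4)]   # x = 1.625, y = 6.625, z = -3.75
```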
https://quant.stackexchange.com/questions/36723/calibration-of-stochastic-volatility-models
# Calibration of stochastic volatility models What are good references to learn about different calibration methods for stochastic volatility models such as Heston? I know that there are a lot of ways of carrying this task out, and I was just wondering if there is something like a survey of the work and projects done on it. • Take a look at the 3rd paper here, the one about ultra sparse grids for slv calibration. – will Dec 4 '17 at 9:57
https://rdrr.io/cran/EnvStats/man/predIntLnormAltTestPower.html
predIntLnormAltTestPower: Probability That at Least One Future Observation Falls... In EnvStats: Package for Environmental Statistics, Including US EPA Guidance Description Compute the probability that at least one out of k future observations (or geometric means) falls outside a prediction interval for k future observations (or geometric means) for a lognormal distribution. Usage predIntLnormAltTestPower(n, df = n - 1, n.geomean = 1, k = 1, ratio.of.means = 1, cv = 1, pi.type = "upper", conf.level = 0.95) Arguments n vector of positive integers greater than 2 indicating the sample size upon which the prediction interval is based. df vector of positive integers indicating the degrees of freedom associated with the sample size. The default value is df=n-1. n.geomean positive integer specifying the sample size associated with the future geometric means. The default value is n.geomean=1 (i.e., individual observations). Note that all future geometric means must be based on the same sample size. k vector of positive integers specifying the number of future observations that the prediction interval should contain with confidence level conf.level. The default value is k=1. ratio.of.means numeric vector specifying the ratio of the mean of the population that will be sampled to produce the future observations vs. the mean of the population that was sampled to construct the prediction interval. See the DETAILS section below for more information. The default value is ratio.of.means=1. cv numeric vector of positive values specifying the coefficient of variation for both the population that was sampled to construct the prediction interval and the population that will be sampled to produce the future observations. The default value is cv=1. pi.type character string indicating what kind of prediction interval to compute. The possible values are pi.type="upper" (the default), and pi.type="lower".
conf.level numeric vector of values between 0 and 1 indicating the confidence level of the prediction interval. The default value is conf.level=0.95. Details A prediction interval for some population is an interval on the real line constructed so that it will contain k future observations or averages from that population with some specified probability (1-α)100%, where 0 < α < 1 and k is some pre-specified positive integer. The quantity (1-α)100% is called the confidence coefficient or confidence level associated with the prediction interval. The function predIntNorm computes a standard prediction interval based on a sample from a normal distribution. The function predIntNormTestPower computes the probability that at least one out of k future observations or averages will not be contained in a prediction interval based on the assumption of normally distributed observations, where the population mean for the future observations is allowed to differ from the population mean for the observations used to construct the prediction interval. The function predIntLnormAltTestPower assumes all observations are from a lognormal distribution. The observations used to construct the prediction interval are assumed to come from a lognormal distribution with mean θ_2 and coefficient of variation τ. The future observations are assumed to come from a lognormal distribution with mean θ_1 and coefficient of variation τ; that is, the means are allowed to differ between the two populations, but not the coefficient of variation. The function predIntLnormAltTestPower calls the function predIntNormTestPower, with the argument delta.over.sigma given by: δ/σ = log(R) / √(log(τ^2 + 1))    (1) where R is given by: R = θ_1 / θ_2    (2) and corresponds to the argument ratio.of.means for the function predIntLnormAltTestPower, and τ corresponds to the argument cv.
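Equations (1) and (2) amount to a one-line conversion from the lognormal-scale inputs (ratio.of.means, cv) to the normal-scale shift delta.over.sigma used by predIntNormTestPower. A language-neutral sketch of that conversion in Python (illustrative only; the package itself is R):

```python
import math

# Equations (1)-(2): R = ratio.of.means, tau = cv on the lognormal scale
# map to the shift delta/sigma on the underlying normal (log) scale.
def delta_over_sigma(ratio_of_means, cv):
    return math.log(ratio_of_means) / math.sqrt(math.log(cv**2 + 1))

assert delta_over_sigma(1, 1) == 0.0  # equal means: no shift, so power = alpha
assert delta_over_sigma(2, 1) > delta_over_sigma(2, 2)  # larger cv dilutes a given ratio
```

With R = 1 the shift is zero, which is why the first example call with ratio.of.means = 1 returns the significance level 0.05 rather than any extra power.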
## Value

vector of numbers between 0 and 1 equal to the probability that at least one of k future observations or geometric means will fall outside the prediction interval.

## Note

See the help files for predIntNormTestPower.

## Author(s)

Steven P. Millard ([email protected])

## References

See the help files for predIntNormTestPower and tTestLnormAltPower.

## See Also

plotPredIntLnormAltTestPowerCurve, predIntLnormAlt, predIntNorm, predIntNormK, plotPredIntNormTestPowerCurve, predIntLnormAltSimultaneous, predIntLnormAltSimultaneousTestPower, Prediction Intervals, LognormalAlt.

## Examples

# Show how the power increases as ratio.of.means increases. Assume a
# 95% upper prediction interval.

predIntLnormAltTestPower(n = 4, ratio.of.means = 1:3)
#[1] 0.0500000 0.1459516 0.2367793

#----------

# Look at how the power increases with sample size for an upper one-sided
# prediction interval with k=3, ratio.of.means=4, and a confidence level of 95%.

predIntLnormAltTestPower(n = c(4, 8), k = 3, ratio.of.means = 4)
#[1] 0.2860952 0.4533567

#----------

# Show how the power for an upper 95% prediction limit increases as the
# number of future observations k increases. Here, we'll use n=20 and
# ratio.of.means=2.

predIntLnormAltTestPower(n = 20, k = 1:3, ratio.of.means = 2)
#[1] 0.1945886 0.2189538 0.2321562
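The delta.over.sigma transformation described in the Details section is simple enough to sanity-check by hand. Below is a minimal sketch in Python of that mapping only; it does not depend on, or reproduce, the EnvStats power computation itself:

```python
import math

def delta_over_sigma(ratio_of_means: float, cv: float) -> float:
    """Equation (1): delta/sigma = log(R) / sqrt(log(cv^2 + 1)),

    where R = ratio.of.means is the ratio of the two lognormal means and
    cv is the coefficient of variation shared by both populations.
    """
    return math.log(ratio_of_means) / math.sqrt(math.log(cv ** 2 + 1.0))

# When the two means are equal (R = 1), the standardized shift on the
# log scale is 0, so the "power" reduces to the significance level.
print(delta_over_sigma(1.0, 1.0))  # 0.0

# A larger ratio of means yields a larger standardized shift, which is
# why the power in the examples above grows with ratio.of.means.
print(delta_over_sigma(2.0, 1.0) < delta_over_sigma(3.0, 1.0))  # True
```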
http://crypto.stackexchange.com/tags/elliptic-curves/hot
# Tag Info

5

> Now to calculate Q this will take a lot of time since it means I will need to perform point addition an insane number of times unless I'm not understanding something about it.

You're missing a point; elliptic curve point addition is associative; that is, for any three points $A, B, C$, we have: $$(A + B) + C = A + (B+C)$$ Now, why is this a big deal? ...

4

> What's the difference between n & #E(FP)?

The difference is that $n$ is the smallest positive integer where $nG = O$; while you correctly state that $\#E \cdot G = O$, that doesn't mean that $\#E$ is the smallest integer that makes this happen. There may be a smaller integer $n$; $n$ will always be a factor of $\#E$, however it can be smaller. As for ...

3

You want to find a point $(X,Y)$ on an elliptic curve $y^2 = x^3 + ax + b$ knowing only $X$ and a single bit indicating whether $Y$ is even or odd. To find $Y$, you use the relation defining the curve: you know that $Y^2 = X^3 + aX + b$ since the point is on the curve. So you compute $X^3 + aX + b$ using your value of $X$ and the public parameters $a, b$, ...

1

I recently came across a paper that may interest you that I think answers your question. To quote from the abstract:

> Unfortunately, in all existing HD wallets---including BIP32 wallets---an attacker can easily recover the master private key given the master public key and any child private key. This vulnerability precludes use cases such as a combined ...

1

ElGamal works for any ring. What is badly specified is the glue between the protocols (some network packets) and the exact implementation variants in the math library. Curve25519 uses arithmetic over Montgomery curves. Ed25519 uses arithmetic over twisted Edwards curves. Huff curve implementations can be "faster", but they are not in the repository of curves ...

1

> 1. What standard curves could I use? I know curves like p192 etc exist, am I allowed to use these?

Yes, of course you may use those; for more curves - including safer ones - check the safecurves website.

> 2. How can I find valid points to use for messages and generator on these huge curves? So far I've done it using brute force on smaller curves with a ...
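The square-root step sketched in the third answer above becomes a single modular exponentiation when the field prime satisfies p ≡ 3 (mod 4). A minimal Python sketch follows; the curve parameters and the point used here are toy values chosen for illustration, not a standardized curve:

```python
# Recover Y on y^2 = x^3 + a*x + b (mod p) from X and a parity bit.
# Toy parameters, NOT a real standardized curve. p % 4 == 3 is required
# so that a square root of c is simply c^((p+1)/4) mod p.
p, a, b = 103, 2, 3

def decompress(x: int, y_is_odd: bool) -> int:
    rhs = (x ** 3 + a * x + b) % p
    y = pow(rhs, (p + 1) // 4, p)      # one of the two square roots of rhs
    if (y * y) % p != rhs:
        raise ValueError("x is not the abscissa of a point on the curve")
    if (y % 2 == 1) != y_is_odd:       # choose the root with the right parity
        y = p - y
    return y

# (3, 6) is on the curve: 3^3 + 2*3 + 3 = 36 = 6^2 (mod 103);
# the other root is 103 - 6 = 97, which is odd.
print(decompress(3, y_is_odd=False))  # 6
print(decompress(3, y_is_odd=True))   # 97
```

For primes with p ≡ 1 (mod 4) (not handled here) a general square-root algorithm such as Tonelli-Shanks is needed instead.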
http://mathhelpforum.com/calculus/14717-help-required-differentiation.html
# Math Help - Help required with differentiation

1. ## Help required with differentiation

Hi all! I have to differentiate the following function using the Product Rule:

f(x) = (cos x)e^(-x√3)

Here are my workings so far:

g(x) = cos x
h(x) = e^(-x√3)

f'(x) = g'(x)h(x) + g(x)h'(x)
      = (-sin x)e^(-x√3) + (cos x)(-e^(-x√3))

From here I do not know how to proceed further and am not sure whether the above working is correct. TIA

2. ## Re:

3. ## Thanks but.....

Thanks for your help. However the sqrt 3 is part of the e^(-x√3). Thanks!

4. ## Re:

Originally Posted by tigerdivision:

> Hi all! I have to differentiate the following function using the Product Rule:
>
> f(x) = (cos x)e^(-x√3)
>
> Here are my workings so far:
>
> g(x) = cos x
> h(x) = e^(-x√3)
>
> f'(x) = g'(x)h(x) + g(x)h'(x)
>       = (-sin x)e^(-x√3) + (cos x)(-e^(-x√3))
>
> From here I do not know how to proceed further and am not sure whether the above working is correct. TIA

RE:
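For completeness: the step the posted attempt is missing is the chain rule. For h(x) = e^(-x√3), h'(x) = -√3·e^(-x√3), not -e^(-x√3), so the product rule gives f'(x) = -e^(-x√3)(sin x + √3 cos x). This result can be checked numerically with a central difference (a sketch independent of the thread):

```python
import math

SQRT3 = math.sqrt(3.0)

def f(x: float) -> float:
    # f(x) = cos(x) * e^(-sqrt(3) * x)
    return math.cos(x) * math.exp(-SQRT3 * x)

def f_prime(x: float) -> float:
    # Product rule plus chain rule:
    # f'(x) = -sin(x)e^(-sqrt(3)x) - sqrt(3)cos(x)e^(-sqrt(3)x)
    #       = -e^(-sqrt(3)x) * (sin(x) + sqrt(3)*cos(x))
    return -math.exp(-SQRT3 * x) * (math.sin(x) + SQRT3 * math.cos(x))

# Central-difference approximation agrees with the analytic derivative.
h = 1e-6
x0 = 0.7
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
print(abs(numeric - f_prime(x0)) < 1e-6)  # True
```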
https://www.orekit.org/site-orekit-11.1/apidocs/org/orekit/time/AbsoluteDate.html
org.orekit.time ## Class AbsoluteDate • ### Field Summary Fields Modifier and Type Field and Description static AbsoluteDate ARBITRARY_EPOCH An arbitrary finite date. static AbsoluteDate BEIDOU_EPOCH Reference epoch for BeiDou weeks: 2006-01-01T00:00:00 UTC. static AbsoluteDate CCSDS_EPOCH Reference epoch for CCSDS Time Code Format (CCSDS 301.0-B-4): 1958-01-01T00:00:00 International Atomic Time (not UTC). static AbsoluteDate FIFTIES_EPOCH Reference epoch for 1950 dates: 1950-01-01T00:00:00 Terrestrial Time. static AbsoluteDate FUTURE_INFINITY Dummy date at infinity in the future direction. static AbsoluteDate GALILEO_EPOCH Reference epoch for Galileo System Time: 1999-08-22T00:00:00 GST. static AbsoluteDate GLONASS_EPOCH Reference epoch for GLONASS four-year interval number: 1996-01-01T00:00:00 GLONASS time. static AbsoluteDate GPS_EPOCH Reference epoch for GPS weeks: 1980-01-06T00:00:00 GPS time. static AbsoluteDate IRNSS_EPOCH Reference epoch for IRNSS weeks: 1999-08-22T00:00:00 IRNSS time. static AbsoluteDate J2000_EPOCH J2000.0 Reference epoch: 2000-01-01T12:00:00 Terrestrial Time (not UTC). static AbsoluteDate JAVA_EPOCH Java Reference epoch: 1970-01-01T00:00:00 Universal Time Coordinate. static AbsoluteDate JULIAN_EPOCH Reference epoch for julian dates: -4712-01-01T12:00:00 Terrestrial Time. static AbsoluteDate MODIFIED_JULIAN_EPOCH Reference epoch for modified julian dates: 1858-11-17T00:00:00 Terrestrial Time. static AbsoluteDate PAST_INFINITY Dummy date at infinity in the past direction. static AbsoluteDate QZSS_EPOCH Reference epoch for QZSS weeks: 1980-01-06T00:00:00 QZSS time. • ### Constructor Summary Constructors Constructor and Description AbsoluteDate() Create an instance with a default value (J2000_EPOCH). AbsoluteDate(AbsoluteDate since, double elapsedDuration) Build an instance from an elapsed duration since to another instant. 
AbsoluteDate(AbsoluteDate reference, double apparentOffset, TimeScale timeScale) Build an instance from an apparent clock offset with respect to another instant in the perspective of a specific time scale. AbsoluteDate(DateComponents date, TimeComponents time, TimeScale timeScale) Build an instance from a location in a time scale. AbsoluteDate(DateComponents date, TimeScale timeScale) Build an instance from a location in a time scale. AbsoluteDate(DateTimeComponents location, TimeScale timeScale) Build an instance from a location in a time scale. AbsoluteDate(Date location, TimeScale timeScale) Build an instance from a location in a time scale. AbsoluteDate(int year, int month, int day, int hour, int minute, double second, TimeScale timeScale) Build an instance from a location in a time scale. AbsoluteDate(int year, int month, int day, TimeScale timeScale) Build an instance from a location in a time scale. AbsoluteDate(int year, Month month, int day, int hour, int minute, double second, TimeScale timeScale) Build an instance from a location in a time scale. AbsoluteDate(int year, Month month, int day, TimeScale timeScale) Build an instance from a location in a time scale. AbsoluteDate(String location, TimeScale timeScale) Build an instance from a location (parsed from a string) in a time scale. • ### Method Summary All Methods Modifier and Type Method and Description int compareTo(AbsoluteDate date) Compare the instance with another date. static AbsoluteDate createBesselianEpoch(double besselianEpoch) Build an instance corresponding to a Besselian Epoch (BE). static AbsoluteDate createJDDate(int jd, double secondsSinceNoon, TimeScale timeScale) Build an instance corresponding to a Julian Day date. static AbsoluteDate createJulianEpoch(double julianEpoch) Build an instance corresponding to a Julian Epoch (JE). static AbsoluteDate createMJDDate(int mjd, double secondsInDay, TimeScale timeScale) Build an instance corresponding to a Modified Julian Day date. 
double durationFrom(AbsoluteDate instant) Compute the physically elapsed duration between two instants. boolean equals(Object date) Check if the instance represents the same time as another instance. DateTimeComponents getComponents(int minutesFromUTC) Split the instance into date/time components for a local time. DateTimeComponents getComponents(int minutesFromUTC, TimeScale utc) Split the instance into date/time components for a local time. DateTimeComponents getComponents(TimeScale timeScale) Split the instance into date/time components. DateTimeComponents getComponents(TimeZone timeZone) Split the instance into date/time components for a time zone. DateTimeComponents getComponents(TimeZone timeZone, TimeScale utc) Split the instance into date/time components for a time zone. AbsoluteDate getDate() Get the date. int hashCode() Get a hashcode for this date. boolean isAfter(TimeStamped other) Check if the instance represents a time that is strictly after another. boolean isAfterOrEqualTo(TimeStamped other) Check if the instance represents a time that is after or equal to another. boolean isBefore(TimeStamped other) Check if the instance represents a time that is strictly before another. boolean isBeforeOrEqualTo(TimeStamped other) Check if the instance represents a time that is before or equal to another. boolean isBetween(TimeStamped boundary, TimeStamped otherBoundary) Check if the instance represents a time that is strictly between two others representing the boundaries of a time span. boolean isBetweenOrEqualTo(TimeStamped boundary, TimeStamped otherBoundary) Check if the instance represents a time that is between two others representing the boundaries of a time span, or equal to one of them. boolean isCloseTo(TimeStamped other, double tolerance) Check if the instance time is close to another. boolean isEqualTo(TimeStamped other) Check if the instance represents the same time as another. 
double offsetFrom(AbsoluteDate instant, TimeScale timeScale) Compute the apparent clock offset between two instants in the perspective of a specific time scale. static AbsoluteDate parseCCSDSCalendarSegmentedTimeCode(byte preambleField, byte[] timeField) Build an instance from a CCSDS Calendar Segmented Time Code (CCS). static AbsoluteDate parseCCSDSCalendarSegmentedTimeCode(byte preambleField, byte[] timeField, TimeScale utc) Build an instance from a CCSDS Calendar Segmented Time Code (CCS). static AbsoluteDate parseCCSDSDaySegmentedTimeCode(byte preambleField, byte[] timeField, DateComponents agencyDefinedEpoch) Build an instance from a CCSDS Day Segmented Time Code (CDS). static AbsoluteDate parseCCSDSDaySegmentedTimeCode(byte preambleField, byte[] timeField, DateComponents agencyDefinedEpoch, TimeScale utc) Build an instance from a CCSDS Day Segmented Time Code (CDS). static AbsoluteDate parseCCSDSUnsegmentedTimeCode(byte preambleField1, byte preambleField2, byte[] timeField, AbsoluteDate agencyDefinedEpoch) Build an instance from a CCSDS Unsegmented Time Code (CUC). static AbsoluteDate parseCCSDSUnsegmentedTimeCode(byte preambleField1, byte preambleField2, byte[] timeField, AbsoluteDate agencyDefinedEpoch, AbsoluteDate ccsdsEpoch) Build an instance from a CCSDS Unsegmented Time Code (CUC). AbsoluteDate shiftedBy(double dt) Get a time-shifted date. double timeScalesOffset(TimeScale scale1, TimeScale scale2) Compute the offset between two time scales at the current instant. Date toDate(TimeScale timeScale) Convert the instance to a Java Date. String toString() Get a String representation of the instant location with up to 16 digits of precision for the seconds value. String toString(int minutesFromUTC) Get a String representation of the instant location for a local time. String toString(int minutesFromUTC, TimeScale utc) Get a String representation of the instant location for a local time.
String toString(TimeScale timeScale) Get a String representation of the instant location in ISO-8601 format without the UTC offset and with up to 16 digits of precision for the seconds value. String toString(TimeZone timeZone) Get a String representation of the instant location for a time zone. String toString(TimeZone timeZone, TimeScale utc) Get a String representation of the instant location for a time zone. String toStringRfc3339(TimeScale utc) Represent the given date as a string according to the format in RFC 3339. String toStringWithoutUtcOffset(TimeScale timeScale, int fractionDigits) Return a string representation of this date-time, rounded to the given precision. • ### Methods inherited from class java.lang.Object clone, finalize, getClass, notify, notifyAll, wait, wait, wait • ### Constructor Detail • #### AbsoluteDate public AbsoluteDate(DateTimeComponents location, TimeScale timeScale) Build an instance from a location in a time scale. Parameters: location - location in the time scale timeScale - time scale • #### AbsoluteDate public AbsoluteDate(DateComponents date, TimeComponents time, TimeScale timeScale) Build an instance from a location in a time scale. Parameters: date - date location in the time scale time - time location in the time scale timeScale - time scale • #### AbsoluteDate public AbsoluteDate(int year, int month, int day, int hour, int minute, double second, TimeScale timeScale) throws IllegalArgumentException Build an instance from a location in a time scale. 
Parameters: year - year number (may be 0 or negative for BC years) month - month number from 1 to 12 day - day number from 1 to 31 hour - hour number from 0 to 23 minute - minute number from 0 to 59 second - second number from 0.0 to 60.0 (excluded) timeScale - time scale Throws: IllegalArgumentException - if inconsistent arguments are given (parameters out of range) • #### AbsoluteDate public AbsoluteDate(int year, Month month, int day, int hour, int minute, double second, TimeScale timeScale) throws IllegalArgumentException Build an instance from a location in a time scale. Parameters: year - year number (may be 0 or negative for BC years) month - month enumerate day - day number from 1 to 31 hour - hour number from 0 to 23 minute - minute number from 0 to 59 second - second number from 0.0 to 60.0 (excluded) timeScale - time scale Throws: IllegalArgumentException - if inconsistent arguments are given (parameters out of range) • #### AbsoluteDate public AbsoluteDate(DateComponents date, TimeScale timeScale) throws IllegalArgumentException Build an instance from a location in a time scale. The hour is set to 00:00:00.000. Parameters: date - date location in the time scale timeScale - time scale Throws: IllegalArgumentException - if inconsistent arguments are given (parameters out of range) • #### AbsoluteDate public AbsoluteDate(int year, int month, int day, TimeScale timeScale) throws IllegalArgumentException Build an instance from a location in a time scale. The hour is set to 00:00:00.000. Parameters: year - year number (may be 0 or negative for BC years) month - month number from 1 to 12 day - day number from 1 to 31 timeScale - time scale Throws: IllegalArgumentException - if inconsistent arguments are given (parameters out of range) • #### AbsoluteDate public AbsoluteDate(int year, Month month, int day, TimeScale timeScale) throws IllegalArgumentException Build an instance from a location in a time scale. The hour is set to 00:00:00.000. 
Parameters: year - year number (may be 0 or negative for BC years) month - month enumerate day - day number from 1 to 31 timeScale - time scale Throws: IllegalArgumentException - if inconsistent arguments are given (parameters out of range) • #### AbsoluteDate public AbsoluteDate(Date location, TimeScale timeScale) Build an instance from a location in a time scale. Parameters: location - location in the time scale timeScale - time scale • #### AbsoluteDate public AbsoluteDate(AbsoluteDate since, double elapsedDuration) Build an instance from an elapsed duration since another instant. It is important to note that the elapsed duration is not the difference between two readings on a time scale. As an example, the duration between the two instants leading to the readings 2005-12-31T23:59:59 and 2006-01-01T00:00:00 in the UTC time scale is not 1 second, but a stop watch would have measured an elapsed duration of 2 seconds between these two instants because a leap second was introduced at the end of 2005 in this time scale. This constructor is the reverse of the durationFrom(AbsoluteDate) method. Parameters: since - start instant of the measured duration elapsedDuration - physically elapsed duration from the since instant, as measured in a regular time scale durationFrom(AbsoluteDate) • #### AbsoluteDate public AbsoluteDate(AbsoluteDate reference, double apparentOffset, TimeScale timeScale) Build an instance from an apparent clock offset with respect to another instant in the perspective of a specific time scale. It is important to note that the apparent clock offset is the difference between two readings on a time scale and not an elapsed duration. As an example, the apparent clock offset between the two instants leading to the readings 2005-12-31T23:59:59 and 2006-01-01T00:00:00 in the UTC time scale is 1 second, but the elapsed duration is 2 seconds because a leap second has been introduced at the end of 2005 in this time scale.
This constructor is the reverse of the offsetFrom(AbsoluteDate, TimeScale) method. Parameters: reference - reference instant apparentOffset - apparent clock offset from the reference instant (difference between two readings in the specified time scale) timeScale - time scale with respect to which the offset is defined offsetFrom(AbsoluteDate, TimeScale) • ### Method Detail • #### parseCCSDSUnsegmentedTimeCode @DefaultDataContext public static AbsoluteDate parseCCSDSUnsegmentedTimeCode(byte preambleField1, byte preambleField2, byte[] timeField, AbsoluteDate agencyDefinedEpoch) Build an instance from a CCSDS Unsegmented Time Code (CUC). CCSDS Unsegmented Time Code is defined in the blue book: CCSDS Time Code Format (CCSDS 301.0-B-4) published in November 2010 If the date to be parsed is formatted using version 3 of the standard (CCSDS 301.0-B-3 published in 2002) or if the extension of the preamble field introduced in version 4 of the standard is not used, then the preambleField2 parameter can be set to 0. This method uses the default data context if the CCSDS epoch is used. 
Parameters: preambleField1 - first byte of the field specifying the format, often not transmitted in data interfaces, as it is constant for a given data interface preambleField2 - second byte of the field specifying the format (added in revision 4 of the CCSDS standard in 2010), often not transmitted in data interfaces, as it is constant for a given data interface (value ignored if presence not signaled in preambleField1) timeField - byte array containing the time code agencyDefinedEpoch - reference epoch, ignored if the preamble field specifies the CCSDS reference epoch is used (and hence may be null in this case) Returns: an instance corresponding to the specified date parseCCSDSUnsegmentedTimeCode(byte, byte, byte[], AbsoluteDate, AbsoluteDate) • #### parseCCSDSUnsegmentedTimeCode public static AbsoluteDate parseCCSDSUnsegmentedTimeCode(byte preambleField1, byte preambleField2, byte[] timeField, AbsoluteDate agencyDefinedEpoch, AbsoluteDate ccsdsEpoch) Build an instance from a CCSDS Unsegmented Time Code (CUC). CCSDS Unsegmented Time Code is defined in the blue book: CCSDS Time Code Format (CCSDS 301.0-B-4) published in November 2010 If the date to be parsed is formatted using version 3 of the standard (CCSDS 301.0-B-3 published in 2002) or if the extension of the preamble field introduced in version 4 of the standard is not used, then the preambleField2 parameter can be set to 0. 
Parameters: preambleField1 - first byte of the field specifying the format, often not transmitted in data interfaces, as it is constant for a given data interface preambleField2 - second byte of the field specifying the format (added in revision 4 of the CCSDS standard in 2010), often not transmitted in data interfaces, as it is constant for a given data interface (value ignored if presence not signaled in preambleField1) timeField - byte array containing the time code agencyDefinedEpoch - reference epoch, ignored if the preamble field specifies the CCSDS reference epoch is used (and hence may be null in this case) ccsdsEpoch - reference epoch, ignored if the preamble field specifies the agency epoch is used. Returns: an instance corresponding to the specified date Since: 10.1 • #### parseCCSDSDaySegmentedTimeCode @DefaultDataContext public static AbsoluteDate parseCCSDSDaySegmentedTimeCode(byte preambleField, byte[] timeField, DateComponents agencyDefinedEpoch) Build an instance from a CCSDS Day Segmented Time Code (CDS). CCSDS Day Segmented Time Code is defined in the blue book: CCSDS Time Code Format (CCSDS 301.0-B-4) published in November 2010 This method uses the default data context. Parameters: preambleField - field specifying the format, often not transmitted in data interfaces, as it is constant for a given data interface timeField - byte array containing the time code agencyDefinedEpoch - reference epoch, ignored if the preamble field specifies the CCSDS reference epoch is used (and hence may be null in this case) Returns: an instance corresponding to the specified date parseCCSDSDaySegmentedTimeCode(byte, byte[], DateComponents, TimeScale) • #### parseCCSDSDaySegmentedTimeCode public static AbsoluteDate parseCCSDSDaySegmentedTimeCode(byte preambleField, byte[] timeField, DateComponents agencyDefinedEpoch, TimeScale utc) Build an instance from a CCSDS Day Segmented Time Code (CDS). 
CCSDS Day Segmented Time Code is defined in the blue book: CCSDS Time Code Format (CCSDS 301.0-B-4) published in November 2010 Parameters: preambleField - field specifying the format, often not transmitted in data interfaces, as it is constant for a given data interface timeField - byte array containing the time code agencyDefinedEpoch - reference epoch, ignored if the preamble field specifies the CCSDS reference epoch is used (and hence may be null in this case) utc - time scale used to compute date and time components. Returns: an instance corresponding to the specified date Since: 10.1 • #### parseCCSDSCalendarSegmentedTimeCode @DefaultDataContext public static AbsoluteDate parseCCSDSCalendarSegmentedTimeCode(byte preambleField, byte[] timeField) Build an instance from a CCSDS Calendar Segmented Time Code (CCS). CCSDS Calendar Segmented Time Code is defined in the blue book: CCSDS Time Code Format (CCSDS 301.0-B-4) published in November 2010 This method uses the default data context. Parameters: preambleField - field specifying the format, often not transmitted in data interfaces, as it is constant for a given data interface timeField - byte array containing the time code Returns: an instance corresponding to the specified date parseCCSDSCalendarSegmentedTimeCode(byte, byte[], TimeScale) • #### parseCCSDSCalendarSegmentedTimeCode public static AbsoluteDate parseCCSDSCalendarSegmentedTimeCode(byte preambleField, byte[] timeField, TimeScale utc) Build an instance from a CCSDS Calendar Segmented Time Code (CCS). CCSDS Calendar Segmented Time Code is defined in the blue book: CCSDS Time Code Format (CCSDS 301.0-B-4) published in November 2010 Parameters: preambleField - field specifying the format, often not transmitted in data interfaces, as it is constant for a given data interface timeField - byte array containing the time code utc - time scale used to compute date and time components. 
Returns: an instance corresponding to the specified date Since: 10.1 • #### createJDDate public static AbsoluteDate createJDDate(int jd, double secondsSinceNoon, TimeScale timeScale) Build an instance corresponding to a Julian Day date. Parameters: jd - Julian day secondsSinceNoon - seconds in the Julian day (BEWARE, Julian days start at noon, so 0.0 is noon) timeScale - time scale in which the seconds in day are defined Returns: a new instant • #### createMJDDate public static AbsoluteDate createMJDDate(int mjd, double secondsInDay, TimeScale timeScale) throws OrekitIllegalArgumentException Build an instance corresponding to a Modified Julian Day date. Parameters: mjd - modified Julian day secondsInDay - seconds in the day timeScale - time scale in which the seconds in day are defined Returns: a new instant Throws: OrekitIllegalArgumentException - if seconds number is out of range • #### offsetFrom public double offsetFrom(AbsoluteDate instant, TimeScale timeScale) Compute the apparent clock offset between two instants in the perspective of a specific time scale. The offset is the number of seconds counted in the given time scale between the locations of the two instants, with all time scale irregularities removed (i.e. considering all days are exactly 86400 seconds long). This method will give a result that may not have a physical meaning if the time scale is irregular. For example since a leap second was introduced at the end of 2005, the apparent offset between 2005-12-31T23:59:59 and 2006-01-01T00:00:00 is 1 second, but the physical duration of the corresponding time interval as returned by the durationFrom(AbsoluteDate) method is 2 seconds. This method is the reverse of the AbsoluteDate(AbsoluteDate, double, TimeScale) constructor.
Parameters: instant - instant to subtract from the instance timeScale - time scale with respect to which the offset should be computed Returns: apparent clock offset in seconds between the two instants (positive if the instance is posterior to the argument) durationFrom(AbsoluteDate), AbsoluteDate(AbsoluteDate, double, TimeScale) • #### timeScalesOffset public double timeScalesOffset(TimeScale scale1, TimeScale scale2) Compute the offset between two time scales at the current instant. The offset is defined as l₁-l₂ where l₁ is the location of the instant in the scale1 time scale and l₂ is the location of the instant in the scale2 time scale. Parameters: scale1 - first time scale scale2 - second time scale Returns: offset in seconds between the two time scales at the current instant • #### toDate public Date toDate(TimeScale timeScale) Convert the instance to a Java Date. Conversion to the Date class induces a loss of precision because the Date class does not provide sub-millisecond information. Java Dates are considered to be locations in some time scales. Parameters: timeScale - time scale to use Returns: a Date instance representing the location of the instant in the time scale • #### getComponents public DateTimeComponents getComponents(TimeScale timeScale) Split the instance into date/time components. Parameters: timeScale - time scale to use Returns: date/time components • #### getComponents @DefaultDataContext public DateTimeComponents getComponents(int minutesFromUTC) Split the instance into date/time components for a local time. This method uses the default data context. Parameters: minutesFromUTC - offset in minutes from UTC (positive Eastwards UTC, negative Westward UTC) Returns: date/time components Since: 7.2 getComponents(int, TimeScale) • #### getComponents public DateTimeComponents getComponents(int minutesFromUTC, TimeScale utc) Split the instance into date/time components for a local time.
Parameters: minutesFromUTC - offset in minutes from UTC (positive Eastwards UTC, negative Westward UTC) utc - time scale used to compute date and time components. Returns: date/time components Since: 10.1 • #### getComponents public DateTimeComponents getComponents(TimeZone timeZone, TimeScale utc) Split the instance into date/time components for a time zone. Parameters: timeZone - time zone utc - time scale used to compute date and time components. Returns: date/time components Since: 10.1 • #### compareTo public int compareTo(AbsoluteDate date) Compare the instance with another date. Specified by: compareTo in interface Comparable<AbsoluteDate> Parameters: date - other date to compare the instance to Returns: a negative integer, zero, or a positive integer as this date is before, simultaneous, or after the specified date. • #### getDate public AbsoluteDate getDate() Get the date. Specified by: getDate in interface TimeStamped Returns: date attached to the object • #### equals public boolean equals(Object date) Check if the instance represents the same time as another instance. Overrides: equals in class Object Parameters: date - other date Returns: true if the instance and the other date refer to the same instant • #### isEqualTo public boolean isEqualTo(TimeStamped other) Check if the instance represents the same time as another. Parameters: other - the instant to compare this date to Returns: true if the instance and the argument refer to the same instant Since: 10.1 isCloseTo(TimeStamped, double) • #### isCloseTo public boolean isCloseTo(TimeStamped other, double tolerance) Check if the instance time is close to another.
Parameters:
- `other` - the instant to compare this date to
- `tolerance` - the separation, in seconds, under which the two instants will be considered close to each other

Returns: true if the duration between the instance and the argument is strictly below the tolerance

Since: 10.1

See also: `isEqualTo(TimeStamped)`

#### isBefore

`public boolean isBefore(TimeStamped other)`

Check if the instance represents a time that is strictly before another.

Parameters:
- `other` - the instant to compare this date to

Returns: true if the instance is strictly before the argument when ordering chronologically

Since: 10.1

See also: `isBeforeOrEqualTo(TimeStamped)`

#### isAfter

`public boolean isAfter(TimeStamped other)`

Check if the instance represents a time that is strictly after another.

Parameters:
- `other` - the instant to compare this date to

Returns: true if the instance is strictly after the argument when ordering chronologically

Since: 10.1

See also: `isAfterOrEqualTo(TimeStamped)`

#### isBeforeOrEqualTo

`public boolean isBeforeOrEqualTo(TimeStamped other)`

Check if the instance represents a time that is before or equal to another.

Parameters:
- `other` - the instant to compare this date to

Returns: true if the instance is before (or equal to) the argument when ordering chronologically

Since: 10.1

See also: `isBefore(TimeStamped)`

#### isAfterOrEqualTo

`public boolean isAfterOrEqualTo(TimeStamped other)`

Check if the instance represents a time that is after or equal to another.

Parameters:
- `other` - the instant to compare this date to

Returns: true if the instance is after (or equal to) the argument when ordering chronologically

Since: 10.1

See also: `isAfter(TimeStamped)`

#### isBetween

`public boolean isBetween(TimeStamped boundary, TimeStamped otherBoundary)`

Check if the instance represents a time that is strictly between two others representing the boundaries of a time span.
The two boundaries can be provided in any order: in other words, whether `boundary` represents a time that is before or after `otherBoundary` will not change the result of this method.

Parameters:
- `boundary` - one end of the time span
- `otherBoundary` - the other end of the time span

Returns: true if the instance is strictly between the two arguments when ordering chronologically

Since: 10.1

See also: `isBetweenOrEqualTo(TimeStamped, TimeStamped)`

#### isBetweenOrEqualTo

`public boolean isBetweenOrEqualTo(TimeStamped boundary, TimeStamped otherBoundary)`

Check if the instance represents a time that is between two others representing the boundaries of a time span, or equal to one of them. The two boundaries can be provided in any order: in other words, whether `boundary` represents a time that is before or after `otherBoundary` will not change the result of this method.

Parameters:
- `boundary` - one end of the time span
- `otherBoundary` - the other end of the time span

Returns: true if the instance is between the two arguments (or equal to at least one of them) when ordering chronologically

Since: 10.1

See also: `isBetween(TimeStamped, TimeStamped)`

#### hashCode

`public int hashCode()`

Get a hashcode for this date.

Overrides: `hashCode` in class `Object`

Returns: hashcode

#### toString

`@DefaultDataContext public String toString()`

Get a String representation of the instant location with up to 16 digits of precision for the seconds value. Since this method is used in exception messages and error handling, every effort is made to return some representation of the instant. If UTC is available from the default data context, then it is used to format the string in UTC. If not, then TAI is used. Finally, if the prior attempts fail, this method falls back to converting this class's internal representation to a string. This method uses the default data context.

Overrides: `toString` in class `Object`

Returns: a string representation of the instance, in ISO-8601 format if UTC is available from the default data context
See also: `toString(TimeScale)`, `toStringRfc3339(TimeScale)`, `DateTimeComponents.toString(int, int)`

#### toString

`public String toString(TimeScale timeScale)`

Get a String representation of the instant location in ISO-8601 format without the UTC offset and with up to 16 digits of precision for the seconds value.

Parameters:
- `timeScale` - time scale to use

Returns: a string representation of the instance

See also: `toStringRfc3339(TimeScale)`, `DateTimeComponents.toString(int, int)`

#### toString

`@DefaultDataContext public String toString(int minutesFromUTC)`

Get a String representation of the instant location for a local time. This method uses the default data context.

Parameters:
- `minutesFromUTC` - offset in minutes from UTC (positive Eastwards UTC, negative Westward UTC)

Returns: string representation of the instance, in ISO-8601 format with milliseconds accuracy

Since: 7.2

See also: `toString(int, TimeScale)`

#### toString

`public String toString(int minutesFromUTC, TimeScale utc)`

Get a String representation of the instant location for a local time.

Parameters:
- `minutesFromUTC` - offset in minutes from UTC (positive Eastwards UTC, negative Westward UTC)
- `utc` - time scale used to compute date and time components

Returns: string representation of the instance, in ISO-8601 format with milliseconds accuracy

Since: 10.1

See also: `getComponents(int, TimeScale)`, `DateTimeComponents.toString(int, int)`

#### toString

`@DefaultDataContext public String toString(TimeZone timeZone)`

Get a String representation of the instant location for a time zone. This method uses the default data context.

Parameters:
- `timeZone` - time zone

Returns: string representation of the instance, in ISO-8601 format with milliseconds accuracy

Since: 7.2

See also: `toString(TimeZone, TimeScale)`

#### toString

`public String toString(TimeZone timeZone, TimeScale utc)`

Get a String representation of the instant location for a time zone.

Parameters:
- `timeZone` - time zone
- `utc` - time scale used to compute date and time components
Returns: string representation of the instance, in ISO-8601 format with milliseconds accuracy

Since: 10.1

See also: `getComponents(TimeZone, TimeScale)`, `DateTimeComponents.toString(int, int)`

#### toStringRfc3339

`public String toStringRfc3339(TimeScale utc)`

Represent the given date as a string according to the format in RFC 3339. RFC 3339 is a restricted subset of ISO 8601 with a well-defined grammar. Enough digits are included in the seconds value to avoid rounding up to the next minute.

This method is different from `toString(TimeScale)` in that it includes a "Z" at the end to indicate the time zone and enough precision to represent the point in time without rounding up to the next minute.

RFC 3339 is unable to represent BC years, years of 10000 or more, time zone offsets of 100 hours or more, or NaN. In these cases the value returned from this method will not be valid RFC 3339 format.

Parameters:
- `utc` - time scale

Returns: RFC 3339 format string

See also: RFC 3339, `DateTimeComponents.toStringRfc3339()`, `toString(TimeScale)`, `getComponents(TimeScale)`

#### toStringWithoutUtcOffset

`public String toStringWithoutUtcOffset(TimeScale timeScale, int fractionDigits)`

Return a string representation of this date-time, rounded to the given precision. The format used is ISO 8601 without the UTC offset.

Calling `toStringWithoutUtcOffset(DataContext.getDefault().getTimeScales().getUTC(), 3)` will emulate the behavior of `toString()` in Orekit 10 and earlier. Note this method is more accurate as it correctly handles rounding during leap seconds.

Parameters:
- `timeScale` - time scale to use to compute components
- `fractionDigits` - the number of digits to include after the decimal point in the string representation of the seconds. The date and time is first rounded as necessary. `fractionDigits` must be greater than or equal to 0.
Returns: string representation of this date and time, without the UTC offset

Since: 11.1

See also: `toString(TimeScale)`, `toStringRfc3339(TimeScale)`, `DateTimeComponents.toString(int, int)`, `DateTimeComponents.toStringWithoutUtcOffset(int, int)`
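The chronological predicates documented above (`isBefore`, `isCloseTo`, `isBetween` with order-insensitive boundaries) are all determined by durations between instants. A minimal Python sketch of the documented semantics, using plain floats (seconds on a single time scale) in place of `AbsoluteDate` — an illustration of the contract, not Orekit's implementation:

```python
# Sketch of the documented comparison semantics, with floats standing in
# for AbsoluteDate instants (seconds on one common time scale).

def is_close_to(a: float, b: float, tolerance: float) -> bool:
    """True if the duration between a and b is strictly below tolerance."""
    return abs(b - a) < tolerance

def is_before(a: float, b: float) -> bool:
    """True if a is strictly before b when ordering chronologically."""
    return a < b

def is_between(a: float, boundary: float, other_boundary: float) -> bool:
    """Strictly between two boundaries, which may be given in either order."""
    lo, hi = sorted((boundary, other_boundary))
    return lo < a < hi

def is_between_or_equal_to(a: float, boundary: float, other_boundary: float) -> bool:
    """Between the boundaries, or equal to at least one of them."""
    lo, hi = sorted((boundary, other_boundary))
    return lo <= a <= hi
```

Sorting the two boundaries is what makes the result independent of their order, exactly as the `isBetween` documentation requires; the strict-versus-inclusive distinction mirrors the `isBetween` / `isBetweenOrEqualTo` pair.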
https://www.physicsforums.com/threads/potential-difference-confusion.100225/
# Potential Difference Confusion

1. Nov 16, 2005

### brentd49

There seem to be two different uses of potential difference that either contradict each other or, at the very least, make it very confusing. Let me write the three different forms:

- Definition: Vf - Vi = -E*d
- Differential: dV = -E*dr (Vi = 0)
- Point charge: V = E*r (Vf = 0)

What is with the different reference potentials (Vi, Vf = 0)? Can someone please make sense out of this.

Edit: I avoided writing integrals above because I don't know how to use LaTeX, and the book I'm studying is Halliday, Resnick.

Last edited: Nov 16, 2005

2. Nov 16, 2005

### Staff: Mentor

Here is a nice discussion of potential energy. It is really the change in potential energy that matters. http://hyperphysics.phy-astr.gsu.edu/hbase/pegrav.html#pe

Gravitational potential energy - http://hyperphysics.phy-astr.gsu.edu/hbase/gpot.html#mgh

Electric potential energy - http://hyperphysics.phy-astr.gsu.edu/hbase/electric/elepe.html#c1

Reference potential - potentials are relative! ref Hyperphysics

Read also the third plate in the link on electric potential - Potential Reference at Infinity.

3. Nov 17, 2005

### BerryBoy

Ok, so your Vf & Vi are values of potential. So if you subtract them, you get the difference in potential between them.
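All three forms in the opening post are the same statement, ΔV = -∫E·dr, with different choices of reference point: the point-charge form V = E*r puts V = 0 at infinity, and E there means the field evaluated at r. A short numerical sketch (illustrative values, with kq set to 1; the cutoff radius and step count are arbitrary choices) shows the point-charge form is just this integral taken from r out to infinity:

```python
import math

def uniform_field_dV(E: float, d: float) -> float:
    """Definition form: Vf - Vi = -E*d for a uniform field E over distance d."""
    return -E * d

def point_charge_V(kq: float, r: float, n: int = 100_000) -> float:
    """Point-charge form, reference at infinity: V(r) = integral_r^inf kq/r'^2 dr'.

    Evaluated with a midpoint rule on a log-spaced grid, truncated at a
    large cutoff radius (both numerical choices, not physics).
    """
    r_max = 1e6 * r
    lo, hi = math.log(r), math.log(r_max)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        rp = math.exp(lo + (i + 0.5) * h)   # midpoint radius in log space
        total += (kq / rp**2) * rp * h      # dr' = r' d(ln r')
    return total

# With kq = 1 at r = 2: V = kq/r = 0.5, which equals E*r since E = kq/r^2.
V = point_charge_V(1.0, 2.0)
E_at_r = 1.0 / 2.0**2
print(V, E_at_r * 2.0)
```

So "V = E*r" is not a different definition; it is the same line integral with Vf = V(infinity) = 0, whereas the uniform-field form leaves the reference point up to you.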
https://www.nature.com/articles/s41598-021-02536-0
# Negative tension controls stability and structure of intermediate filament networks

## Abstract

Networks whose junctions are free to move along the edges, such as two-dimensional soap froths and membrane tubular networks of the endoplasmic reticulum, are intrinsically unstable. This instability is a result of a positive tension applied to the network elements. A paradigm of networks exhibiting stable polygonal configurations in spite of the junction mobility are networks formed by bundles of Keratin Intermediate Filaments (KIFs) in live cells. A unique feature of KIF networks is a hypothetically negative tension generated in the network bundles due to an exchange of material between the network and an effective reservoir of unbundled filaments. Here we analyze the structure and stability of two-dimensional networks with mobile three-way junctions subject to negative tension. First, we analytically examine a simplified case of hexagonal networks with symmetric junctions and demonstrate that, indeed, a negative tension is mandatory for the network stability. Another factor contributing to the network stability is the junction elastic resistance to deviations from the symmetric state. We derive an equation for the optimal density of such networks resulting from an interplay between the tension and the junction energy. We describe a configurational degeneration of the optimal energy state of the network. Further, we analyze by numerical simulations the energy of randomly generated networks with, generally, asymmetric junctions, and demonstrate that the global minimum of the network energy corresponds to the irregular configurations.
## Introduction

Formation of dynamic networks is ubiquitous in soft matter systems and the cytoplasm of biological cells1,2,3,4. Of special interest are networks whose vertices, referred to below as the junctions, are able to move along the network edges. Two examples of previously investigated networks with mobile junctions are the two-dimensional soap froths3, called below the soap film networks, and the polygonal networks of membrane tubules, which form a significant part of one of the most crucial intracellular organelles, the endoplasmic reticulum (ER)4.

The junction mobility is enabled by the ability of the network edges to freely flow through the junctions, which, in turn, requires a direct merger of the edge ends within the junctions and an effective lateral fluidity of the edge material. This condition is, obviously, satisfied for soap film networks, whose edges and junctions are filled by aqueous solutions and covered by fluid soap monolayers (Fig. 1A). In the ER networks, lipid bilayers serving as a base of the tubular membranes behave as a two-dimensional fluid and are smoothly inter-merged within the junctions (Fig. 1B)5.

The prominent feature of the soap film and ER tubular networks is their dynamic behavior and instability. The soap film networks exhibit collapse or unlimited expansion of their polygonal elements, which is mediated by the movement and fusion of the network junctions3. Similarly, the ER tubular networks undergo a perpetual remodeling through collapse of their polygonal unit-cells, the phenomenon called the ring closure4. The factor driving the instability of the ER and soap film networks is the positive tension imposed on the network elements. In the ER, stretching forces applied by the intra-cellular force-generating machinery create tension in the network's tubular membranes7,8. In soap film networks, the tension originates from the surface tension of the soap solution-air interface.
The positive tension tends to minimize the overall length of the network edges, which drives the network remodeling mediated by movement of the junctions3,8. Substantiating the experimental observations5,8, computational simulations of networks with mobile junctions and positive tension recovered the temporal evolution of the system, consisting in expansion and collapse of the network unit-cells9. The dynamic structural rearrangements of the ER and soap film networks drive the eventual degradation of the network unless a counter-process, such as a de novo tubule generation and fusion in ER5, restores the network junctions and constituent elements.

A question arises whether a negative tension, favoring an increase rather than a decrease of the overall length of the network edges, would prevent the degradation and support stabilization of a network with mobile junctions. While, to the best of our knowledge, mobile junction networks subject to negative tensions have not been previously investigated, at least one paradigm of this kind of network exists within live cells and provides a motivation for such analysis. Those are the networks of bundles formed by intermediate filaments (IFs), which represent a part of a sophisticated system of intracellular polymers called the cytoskeleton10,11.

The protein composition of an IF depends on the cell type and the intracellular localization12. Independently of the specific protein composition, different IFs have common structural features13. They are built through polymerization of 65 nm long subunits composed of eight apolar tetramers, each formed by two dimers joined together in an anti-parallel fashion13. IFs possess a substantial rigidity with respect to bending14. Therefore, they exhibit properties of semi-flexible 10 nm thick polymers, whose persistence lengths substantially exceed the size of a single subunit and vary in a broad range between hundreds of nanometers and a few microns14.
While the physical principles of formation and functioning of IFs have been thoroughly addressed14, several essential aspects of IF intracellular organization have remained less understood. One proposed function of IF networks is a contribution to cell mechanical stability and integrity15. We will base our modeling on specific data obtained for networks of IFs consisting of a particular protein, Keratin, and referred to as the Keratin Intermediate Filaments (KIFs)16.

Like other intermediate filaments, KIFs exhibit properties of 10 nm thick semi-flexible polymers, which self-organize into bundles16,17,18 with a cross-sectional diameter of about 100 nm19,20. These bundles self-assemble into networks16 whose edges are formed by the bundles themselves, whereas the vertices are represented by the three-way junctions between the bundle ends.

An intra-cellular KIF network can be seen as consisting of two parts: a peripheral network, which fills the space between the cell periphery and the nucleus, and a central network covering the cell nucleus (for a fluorescence microscopy time-lapse image of KIF network in live cells, see ref. 16). The peripheral network is dynamic. Keratin filaments are nucleated at the cell edge and undergo a persistent flow towards the interface between the cytoplasm and the nucleus, where matured bundles concentrate and, apparently, undergo disassembly16. The central network, in contrast to the peripheral one, is stationary. It does not exhibit any dynamic behavior except for limited fluctuations in the positions of the network junctions and bundles, as seen in fluorescence microscopy time-lapse imaging of KIF networks in live cells (see ref. 16). Structurally, the central network consists of approximately polygonal unit-cells of different types with a typical size of several microns. The central network is bounded by the nucleus-cytoplasm interface region.
KIF network junctions are mobile, as demonstrated by observations of the junction behavior in the dynamic peripheral network16. A direct merger of the bundle ends within the junctions, as required for the junction mobility, is supported by the fact that no molecular linkers are known to be necessary for formation and functioning of KIF networks. The structure of a junction resulting from a pair-wise merger of three bundle ends is illustrated in Fig. 2 and involves the splitting of each bundle into two branches. The second condition required for the junction mobility is an effective lateral fluidity of the bundles, which is, in essence, the ability of the filaments constituting the bundles to freely slide with respect to each other. Given that the filaments are not interlinked by either direct or indirect chemical bonds19, there should be no obstacles for such sliding.

The tension in the central KIF network is expected to be negative rather than positive for the following reason. It is sensible to assume that the interface zone between the peripheral and central networks serves as an effective reservoir of material for the bundles in the central network, as seen in fluorescence microscopy time-lapse imaging of KIF networks in live cells (see ref. 16). Indeed, according to the observations16, this zone forms as a result of concentration and disintegration of the KIF bundles arriving from the cell periphery and must, therefore, represent a pool of filament elements which should be freely exchangeable with the central network. Since the formation of longer filaments and their subsequent lateral association into bundles are energetically favorable, the chemical potential of the bundles within the KIF central network must be lower than that in the effective reservoir, which, in turn, results in a negative network tension.
Summarizing, we propose that the central network of KIF is a paradigm of a stable two-dimensional network, whose edges made of KIF bundles are connected by mobile three-way junctions representing the network vertices and exposed to a negative tension.

Another property of the central KIF network, which can, potentially, contribute to the network stability, is the resistance of the network junctions to deviations from the symmetric configuration, in which all three angles between each pair of adjacent bundles merged within the junction are equal to $$\frac{2\pi }{3}$$. These deviations, referred to below as the junction folding, generally accompany, in addition to the junction movement, the network dynamic rearrangements. The junction resistance to the folding must originate from the bending rigidity of the bundle branches constituting the junction (Fig. 2). Since the bending moduli of KIFs and their bundles are substantial14, the KIF junction resistance to the folding must be significant and may, therefore, impede the network dynamics.

Here we analyze the structure and stability of two-dimensional networks subject to negative tensions and inter-linked by mobile three-way junctions of substantial but finite resistance to folding. Our analysis includes two steps. First, we analytically examine a simplified case of a regular hexagonal network. We demonstrate that a negative tension is indeed mandatory for the existence of a highly degenerate family of stable equilibrium network configurations. For these configurations, we determine an optimal average density of the network junctions and compare it on a semi-quantitative level with the experimental observations of KIF networks. Second, we demonstrate by numerical simulations the existence of irregular network configurations, whose energies are significantly lower than the lowest energies of a regular hexagonal network. These irregular configurations are characterized by, generally, asymmetric junctions.
Importantly, the average junction density of an optimal irregular network configuration is close to that found for the optimal configurations of a regular hexagonal network.

## Results

We consider the central KIF network as a polygonal mesh of bundles inter-connected by three-way junctions. The network is free to exchange KIF material with a surrounding reservoir. The total network area, $$A$$, is sufficiently large so that the radius of the network boundary with the reservoir, $$R$$, significantly exceeds the typical dimension, $$b$$, of the network unit-cell, $$R \gg b$$. Our goal is to analyze the conditions of existence and stability of the network equilibrium states by minimizing the network free energy, $$F$$, determined as the thermodynamic work needed to form the network out of the reservoir material. We consider the energy, $$F$$, to consist of two contributions, the energies of bundles, $$F_{B}$$, and of junctions, $$F_{j}$$, such that $$F = F_{B} + F_{j}$$. We start by introducing, separately, each energy contribution and the related system parameters. Next, we analyze the stability and density of networks with symmetric junctions and finish by showing a numerical analysis of networks with asymmetric junctions.

### Bundle energy

Formation of the network bundles from the reservoir material is accompanied by an energy release related to KIF polymerization and bundling, so that the network bundle energy, $$F_{B}$$, is negative. The energy, $$F_{B}$$, per unit length of bundle is equal to the network tension, denoted by $$-\gamma$$, such that $$\gamma$$ is positive and represents the absolute value of the tension. Hence, the bundle energy, $$F_{B}$$, can be expressed through the total length of the network bundles, $$L,$$ by

$$F_{B} = - \gamma L.$$ (1)

### Junction energy

The total junction energy, $$F_{j}$$, is assumed to be the sum of the individual junction energies, $$f_{j}$$, implying that there is no interaction between the junctions.
Therefore, we consider here a single junction, whose overall configuration is determined by the three angles, $$\phi_{i}$$, between the main axes of the adjacent bundles in the junction (Fig. 2). A junction will be referred to as symmetric if all the angles $$\phi_{i}$$ are equal to each other (and to $$\frac{2\pi }{3}$$). We consider the bundle branches forming a junction to have shapes of circular arcs of, in general, different radii of curvature, $$r_{i}$$, different arc angles, $$\theta_{i}$$, and, consequently, different arc lengths, $$l_{{{\text{S}}i}}$$ (the subscript $$i$$ denoting the number of the arc). The arc angles, $$\theta_{i}$$, are related to the inter-bundle angles, $$\phi_{i}$$ (Fig. 2), which are expressed through the following geometrical relationships, \begin{aligned} r_{1} \tan \left( {\frac{{\theta_{1} }}{2}} \right) & = r_{2} \tan \left( {\frac{{\theta_{2} }}{2}} \right) = r_{3} \tan \left( {\frac{{\theta_{3} }}{2}} \right), \\ \theta_{i} & = \pi - \phi_{i}. \\ \end{aligned} (2) The single-junction energy, $$f_{{\text{j}}}$$, will be related to the reference state preceding the junction formation, in which the bundle ends are not split into branches. We consider two contributions to $$f_{{\text{j}}}$$, the energy of the bundle splitting, $$f_{{\text{S}}}$$, and the bending energy of the branches resulting from the arc formation, $$f_{{\text{B}}}$$, so that, $$f_{{\text{j}}} = f_{{\text{S}}} + f_{{\text{B}}}$$. We calculate the dependence of the two energy contributions, $$f_{{\text{S}}}$$ and $$f_{{\text{B}}}$$, on the arcs′ radii, $$r_{i}$$, and lengths, $$l_{{{\text{S}}i}}$$, and determine the optimal structure and energy of the junction by minimizing $$f_{{\text{j}}}$$ with respect to these values. 
We assume the energy of the bundle end splitting to be proportional to the length of the split region so that the total energy of splitting can be expressed through the three arc lengths by $$f_{{\text{S}}} = \frac{1}{2} \varepsilon_{{\text{s}}} \left( {l_{{{\text{S1}}}} + l_{{{\text{S2}}}} + l_{{{\text{S3}}}} } \right),$$ (3) where $$\varepsilon_{S}$$ is the splitting energy per unit length. Considering the bending energy per arc unit length to be quadratic in the arc curvature, $$\frac{1}{r}$$, which implies that the bundle branches have no intrinsic tendency to bend, the total bending energy of one arc, $$f_{Bi}$$ can be presented as $$f_{Bi} = \frac{1}{2} \kappa_{{\text{s}}} \frac{1}{{r_{i}^{2} }} l_{{{\text{Si}}}} ,$$ (4) where $${ }\kappa_{{\text{s}}}$$ is the arc bending modulus. Based on (Eqs. 3, 4) and the relationship, $$l_{{{\text{S}}i}} = r_{i} \theta_{i}$$, the junction energy can be written as $$f_{{\text{j}}} = \frac{1}{2}\left( {\frac{{\kappa_{{\text{s}}} \theta_{1} }}{{r_{1} }} + r_{1} \theta_{1} \varepsilon_{{\text{s}}} } \right) + \frac{1}{2}\left( {\frac{{\kappa_{{\text{s}}} \theta_{2} }}{{r_{2} }} + r_{2} \theta_{2} \varepsilon_{{\text{s}}} } \right) + \frac{1}{2}\left( {\frac{{\kappa_{{\text{s}}} \theta_{3} }}{{r_{3} }} + r_{3} \theta_{3} \varepsilon_{{\text{s}}} } \right).$$ (5) Using the geometrical relationships (Eq. 2) and accounting for the condition $$\mathop \sum \limits_{i = 1}^{3} \phi_{i} = 2\pi$$, the energy in (Eq. 5) can be presented as a function of two inter-bundle angles, e.g. $$\phi_{1}$$ and $$\phi_{2}$$, and the radius of one of the arcs, e.g. $$r_{1}$$. Minimizing the resulting expression for $$f_{{\text{j}}}$$ with respect to $$r_{1}$$, we obtain the dependences for the optimal arc radius, $$r_{1}^{*}$$, and energy, $$f_{{\text{j}}}^{*}$$, on the inter-bundle angles, $$\phi_{1}$$ and $$\phi_{2}$$. 
$$r_{1}^{*} = \lambda \sqrt {\frac{{\left( {\pi - \phi_{1} } \right) + \left( {\pi - \phi_{2} } \right)\frac{{\tan \left( {\frac{{\phi_{1} }}{2}} \right)}}{{\tan \left( {\frac{{\phi_{2} }}{2}} \right)}} + \left( {\pi - \phi_{1} - \phi_{2} } \right)\frac{{\tan \left( {\frac{{\phi_{1} }}{2}} \right)}}{{\tan \left( {\frac{{\phi_{1} + \phi_{2} }}{2}} \right)}}}}{{\left( {\pi - \phi_{1} } \right) + \left( {\pi - \phi_{2} } \right)\frac{{\tan \left( {\frac{{\phi_{2} }}{2}} \right)}}{{\tan \left( {\frac{{\phi_{1} }}{2}} \right)}} + \left( {\pi - \phi_{1} - \phi_{2} } \right)\frac{{\tan \left( {\frac{{\phi_{1} + \phi_{2} }}{2}} \right)}}{{\tan \left( {\frac{{\phi_{1} }}{2}} \right)}}}}}$$ (6) $$f_{{\text{j}}}^{*} \left( {\phi_{1} ,\phi_{2} } \right) = \frac{1}{2\pi }f_{{\text{j}}}^{0} \left[ {\left( {\pi - \phi_{1} } \right)\left( {\frac{1}{{\rho_{1}^{*} }} + \rho_{1}^{*} } \right) + \left( {\pi - \phi_{2} } \right)\left( {\frac{1}{{\rho_{1}^{*} }}\frac{{\tan \left( {\frac{{\phi_{1} }}{2}} \right)}}{{\tan \left( {\frac{{\phi_{2} }}{2}} \right)}} + \rho_{1}^{*} \frac{{\tan \left( {\frac{{\phi_{2} }}{2}} \right)}}{{\tan \left( {\frac{{\phi_{1} }}{2}} \right)}}} \right) + \left( {\pi - \phi_{1} - \phi_{2} } \right)\left( {\frac{1}{{\rho_{1}^{*} }}\frac{{\tan \left( {\frac{{\phi_{1} }}{2}} \right)}}{{\tan \left( {\frac{{\phi_{1} + \phi_{2} }}{2}} \right)}} + \rho_{1}^{*} \frac{{\tan \left( {\frac{{\phi_{1} + \phi_{2} }}{2}} \right)}}{{\tan \left( {\frac{{\phi_{1} }}{2}} \right)}}} \right)} \right] ,$$ (7) where $$f_{{\text{j}}}^{0} = \pi \sqrt {\kappa_{{\text{s}}} \varepsilon_{{\text{s}}} } ,$$ (8) is the intrinsic energy scale of the junction, and $$\rho_{1}^{*} = \frac{{r_{1}^{*} }}{\lambda }$$ is the normalized optimal arc radius with $$\lambda = \sqrt {\frac{{\kappa_{{\text{s}}} }}{{\varepsilon_{{\text{s}}} }}} ,$$ (9) being the intrinsic length scale. Numerical analysis of the optimal junction energy, presented by (Eq. 
7), shows that it is minimal for a symmetric junction with $$\phi_{1} = \phi_{2} = \frac{2\pi }{3}.$$ For small deviations from the symmetric junction configuration, referred to as the junction folding, the junction energy can be approximated by an expression accounting for contributions of the first non-vanishing order in the angle deviations from $$\frac{2\pi }{3},$$

$$f_{{\text{j}}}^{*} \left( {\phi_{1} ,\phi_{2} } \right) \approx f_{{\text{j}}}^{0} \left( {1 + \frac{4}{9}\left( {\left( {\phi_{1} - \frac{2\pi }{3}} \right)^{2} + \left( {\phi_{2} - \frac{2\pi }{3}} \right)^{2} + \left( {\phi_{1} - \frac{2\pi }{3}} \right)\left( {\phi_{2} - \frac{2\pi }{3}} \right)} \right)} \right).$$ (10)

According to (Eq. 10), the energy of a symmetric junction is equal to $$f_{{\text{j}}}^{0}$$ (Eq. 8). The three arc radii of a symmetric junction, as follows from (Eq. 6), are equal to the characteristic length, $$r^{*} = \lambda$$ (Eq. 9). The effective junction rigidity with respect to folding deformations is proportional to $$f_{{\text{j}}}^{0}$$.

### Hexagonal network with symmetric junctions

Here we consider a network consisting of bundles connected by symmetric junctions. The energy of such a network can be presented, according to (Eqs. 1, 10), as

$$F_{{\text{N}}} = N_{{\text{j}}} f_{{\text{j}}}^{0} - \gamma L,$$ (11)

where $$f_{{\text{j}}}^{0}$$ is given by (Eq. 8). Our goal is to determine the network equilibrium configuration, and analyze its stability and optimal density. We start with the analysis of a homogeneous network whose unit-cells have shapes of identical hexagons with the length $$b$$ of a hexagon side (Fig. 3A).

#### Network equilibrium and stability

The network is in equilibrium if the total force and torque acting on each junction and bundle of the network vanish. The only forces acting in the system originate from tension in the bundles and act along the bundles. The total force applied to each junction vanishes due to the symmetry (Fig. 3A).
The symmetry of the junctions is also the reason for vanishing torques. Altogether, the homogeneous hexagonal network with symmetric junctions is in a mechanical equilibrium. To analyze the stability of this equilibrium state, we calculated the change in the network energy upon an infinitesimal displacement of a single junction in a random direction (see Supplementary Note 1). We found that the network is stable against such displacements under the following condition: $$f_{{\text{j}}}^{0} > \frac{9}{20} \gamma b .$$ (12) This condition reflects the interplay between the energy changes due to the negative tension, -$$\gamma$$, and the junction folding rigidity, $$f_{{\text{j}}}^{0}$$. The negative tension favors the junction displacement because of the associated increase of the overall bundle length, $$L$$. The elastic energy of the junction (Eq. 10) resists the displacement because of the associated deviation of the junction from the symmetric conformation.

#### Optimal network density

The ratio of the total number of the network junctions, $$N_{{\text{j}}}$$, to the total network area, $$A$$, will be referred to as the network density, $$\sigma = \frac{{N_{{\text{j}}} }}{A}$$. To find the optimal value of $$\sigma$$ we have to minimize the total network energy (Eq. 11) with respect to the junction number $$N_{{\text{j}}}$$. Using the smallness of the network unit-cells compared to the overall network size, $$b \ll R$$, we can neglect the contributions to the total bundle length from the hexagons directly adjacent to the network boundary. In this approximation, the total bundle length, $$L,$$ can be expressed as $$L = \frac{3}{2}bN_{{\text{j}}}$$, while the network area can be presented as $$A = \frac{{3\sqrt {3 } }}{4}b^{2} N_{{\text{j}}}$$. Using these two relationships, we obtain $$L = \sqrt {\sqrt 3 A N_{{\text{j}}} }$$ and the energy $$F_{{\text{N}}}$$ (Eq.
11) can be rewritten as $$F_{{\text{N}}} = N_{{\text{j}}} f_{{\text{j}}}^{0} - \gamma \sqrt {\sqrt 3 A N_{{\text{j}}} } .$$ (13) Minimization of (Eq. 13) results in the optimal network density, $$\sigma^{*} = \frac{{N_{{\text{j}}}^{*} }}{A} = \frac{\sqrt 3 }{4} \left( {\frac{\gamma }{{f_{{\text{j}}}^{0} }}} \right)^{2} .$$ (14) The side length of the network unit-cell, $$l_{{\text{c}}}$$, corresponding to the optimal density (Eq. 14) is given by $$l_{{\text{c}}} = \frac{4}{3}\frac{{f_{{\text{j}}}^{0} }}{\gamma }.$$ (15) This side length serves as a characteristic length for the network structure, whereas the total network bundle length, $$L^{*}$$, is given by $$L^{*} = \frac{2}{\sqrt 3 }\frac{1}{{l_{{\text{c}}} }}A.$$ (16) The corresponding minimal value of the network energy is given by $$F_{{\text{N}}}^{*} = - \frac{1}{\sqrt 3 }\frac{\gamma }{{l_{{\text{c}}} }}A .$$ (17) The network tension can be evaluated based on the bundling energy of KIF filaments [21] as $$-\gamma \approx - 170 \times 10^{3} {\text{k}}_{{\text{B}}} {\text{T }}$$ µm⁻¹, i.e., $$\gamma \approx 170 \times 10^{3} {\text{k}}_{{\text{B}}} {\text{T }}$$ µm⁻¹. Based on the images of KIF central networks, the unit-cell length is about $$l_{{\text{c}}} \approx 1$$ µm. Using these parameter values, the free energy per unit area of the optimal network configuration is approximately $$\frac{{F_{{\text{N}}}^{*} }}{A} \approx - 9.81 \times 10^{4} {\text{k}}_{{\text{B}}} {\text{T}}$$ µm⁻².

#### Degeneracy of the optimal network state

The homogeneous hexagonal network with symmetric junctions described above is not the only configuration characterized by the minimal energy, $$F_{{\text{N}}}^{*}$$ (Eq. 17). As we show below, there are continuous transformations of the initial homogeneous configuration that keep the overall network bundle length, $$L^{*}$$, and the junction number, $$N_{{\text{j}}}^{*}$$, constant and do not disturb the symmetric state of the junctions. Hence the network configurations obtained through these transformations have the same optimal energy, $$F_{{\text{N}}}^{*}$$ (Eq.
17) as the homogeneous hexagonal configuration. This means that the network’s lowest energy state is energetically degenerate. Below we describe these transformations and analyze the dependence of the number of equal-energy states on the junction number, $$N_{{\text{j}}}^{*}$$. We found two kinds of such network transformations, which will be referred to as the isotropic and the telescopic transformations (see Fig. 3B, C). To describe the isotropic transformation, we define a network structural element consisting of one hexagonal unit-cell with six network edges emerging from the hexagon vertices in the radial directions (Fig. 3B) and refer to it as the isotropic structural element. For each isotropic structural element, in the initial state the lengths of the hexagon sides and those of the radial edges are equal to each other and to $$l_{{\text{c}}}$$. The essence of the transformation is an isotropic expansion or contraction of the structural element. The expansion (contraction) leads to an increase (decrease) of the hexagon side length from $$l_{{\text{c}}}$$ to $$2l_{{\text{c}}}$$ (to 0), which is accompanied by a decrease (increase) of the radial edge length from $$l_{{\text{c}}}$$ to 0 ($$2l_{{\text{c}}}$$), so that the sum of the two remains constant and equal to $$2l_{{\text{c}}}$$. Hence the whole transformation of an isotropic structural element conserves the total length of the six hexagon sides and six radial edges, which stays equal to $$12l_{{\text{c}}}$$. The number of isotropic structural elements that can undergo such a transformation independently of each other is $$\frac{{N_{j}^{*} }}{8}$$.
Assuming that the number of the geometrical states corresponding to the whole range of the transformation of one isotropic structural element can be characterized by a discrete number, $$m_{I}$$, the total number of the system states of the same energy but different architecture, $$M_{I}$$, scales as $$M_{I} = \left( {m_{I} } \right)^{{\frac{{N_{j}^{*} }}{8}}} .$$ (18) A simultaneous expansion of the hexagons of all $$\left( {\frac{{N_{j}^{*} }}{8}} \right)$$ independent isotropic structural elements to their maximum size ($$2l_{{\text{c}}}$$) results in a new hexagonal network with fewer but larger hexagonal cells whose sides consist of pairs of bundles, which we will call the second order network. The isotropic transformation of the second order network generates an additional set of minimal energy states whose number scales as $$\overline{{M_{I} }} \approx \left( {\overline{{m_{I} }} } \right)^{{\frac{{N_{j}^{*} }}{32}}} .$$ (19) The number of states of the second order network is significantly smaller than that of the first one, $$\overline{{M_{I} }} \ll M_{I}$$. Analogously, further higher-order networks are generated but add progressively less to the total number of the conformational states of the minimal energy, $$F_{{\text{N}}}^{*}$$. Thus, the number of the minimal energy states obtained through the isotropic transformation can be estimated with good accuracy by (Eq. 18). The telescopic transformation is based on a different type of network structural element, referred to below as the linear structural element, which consists of a zigzag-like row of junctions connected by edges, with additional side-edges emerging from every second junction perpendicularly to the zigzag axis, as illustrated in (Fig. 3C). The zigzag traverses the whole network. There are three possible directions of the zigzag axis orientation. In the initial state the lengths of all the zigzag- and side-edges are equal to each other and to $$l_{{\text{c}}}$$.
The telescopic transformation of a linear structural element consists of a homogeneous extension or contraction of its side-edges (Fig. 3C). It can be easily seen that, provided that the network area is kept constant, the telescopic transformations of the linear structural elements do not change the overall bundle length. The number of the linear structural elements is proportional to the linear dimension of the network and, consequently, to $$\sqrt {N_{j}^{*} }$$. Assuming $$m_{L}$$ to be a discrete number of conformations, which can be adopted by one linear structural element through the telescopic transformation, the corresponding total number of the network conformations, $$M_{L}$$, is approximately given by: $$M_{L} \approx \left( {m_{L} } \right)^{{\sqrt {N_{j}^{*} } }} .$$ (20) Comparing Eqs. (18) and (20), we conclude that for large networks, $$N_{j}^{*} \gg 1$$, the number of the minimal energy states corresponding to the transformations of the isotropic structural elements is much larger than that produced by the telescopic transformations. Thus, the most probable configurations of the network belong to those generated by the transformation of the isotropic structural elements (Fig. 3B). It has to be noted that the stability conditions for the network configurations obtained through the transformations described above are different from that derived for homogeneous hexagonal networks (Eq. 12). However, the principle remains the same, namely, any of the degenerate network configurations is stable if the junction folding rigidity, $$f_{{\text{j}}}^{0}$$, is sufficiently large compared to the absolute value of the tension, $$\gamma$$. In the case of isotropic structural element transformations, the condition guaranteeing stability of all configurations is $$f_{{\text{j}}}^{0} > \frac{9}{10}\gamma b$$ (see Supplementary Information).
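The junction-energy expressions underlying this whole section, (Eqs. 6, 7 and 10), can be cross-checked numerically. The following Python sketch is our own illustration (with $$f_{{\text{j}}}^{0}$$ and $$\lambda$$ set to 1); it confirms that the symmetric junction, $$\phi_{1} = \phi_{2} = \frac{2\pi }{3}$$, gives $$\rho_{1}^{*} = 1$$ and $$f_{{\text{j}}}^{*} = f_{{\text{j}}}^{0}$$, and that small folding deviations raise the energy in agreement with the quadratic approximation (Eq. 10):

```python
import numpy as np

def rho_star(phi1, phi2):
    """Normalized optimal arc radius rho1* = r1*/lambda, Eq. (6)."""
    t1, t2 = np.tan(phi1 / 2), np.tan(phi2 / 2)
    t12 = np.tan((phi1 + phi2) / 2)
    num = (np.pi - phi1) + (np.pi - phi2) * t1 / t2 + (np.pi - phi1 - phi2) * t1 / t12
    den = (np.pi - phi1) + (np.pi - phi2) * t2 / t1 + (np.pi - phi1 - phi2) * t12 / t1
    return np.sqrt(num / den)

def f_junction(phi1, phi2):
    """Optimal junction energy f_j*(phi1, phi2), Eq. (7), in units of f_j^0."""
    rho = rho_star(phi1, phi2)
    t1, t2 = np.tan(phi1 / 2), np.tan(phi2 / 2)
    t12 = np.tan((phi1 + phi2) / 2)
    return (1 / (2 * np.pi)) * (
        (np.pi - phi1) * (1 / rho + rho)
        + (np.pi - phi2) * ((t1 / t2) / rho + rho * (t2 / t1))
        + (np.pi - phi1 - phi2) * ((t1 / t12) / rho + rho * (t12 / t1))
    )

sym = 2 * np.pi / 3
print(rho_star(sym, sym))    # -> 1.0: symmetric arcs have radius lambda
print(f_junction(sym, sym))  # -> 1.0: minimal energy equals f_j^0

# Small folding deviation versus the quadratic approximation, Eq. (10)
d = 0.05
exact = f_junction(sym + d, sym)
approx = 1 + (4 / 9) * d**2
print(exact, approx)  # the two agree up to higher-order terms in d
```

The agreement of the perturbed value with Eq. (10) illustrates that the symmetric configuration is a genuine minimum of Eq. (7).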
### Irregular networks

Here we explore the possibility that irregular network configurations, whose junctions generally deviate from the symmetric conformation, may have lower energies than those of the hexagonal networks with symmetric junctions analyzed above. The network energy can be generally presented as: $$F_{{\text{N}}} \left( {N_{{\text{j}}} ,N_{{\text{M}}} ,\left\{ {\vec{r}^{\left( i \right)} } \right\},\left\{ {\phi_{1}^{\left( i \right)} ,\phi_{2}^{\left( i \right)} } \right\}} \right) = \mathop \sum \limits_{i = 1}^{{N_{{\text{j}}} }} f_{{\text{j}}} \left( {\phi_{1}^{\left( i \right)} ,\phi_{2}^{\left( i \right)} } \right) - \gamma L\left( {N_{{\text{j}}} ,N_{{\text{M}}} ,\left\{ {\vec{r}^{\left( i \right)} } \right\},\left\{ {\phi_{1}^{\left( i \right)} ,\phi_{2}^{\left( i \right)} } \right\}} \right),$$ (21) where $$N_{{\text{M}}}$$ is the number of bundle source points on the boundary between the network and the reservoir, $$\left\{ {\vec{r}^{\left( i \right)} } \right\}$$ are the junction coordinates, and $$\left\{ {\phi_{1}^{\left( i \right)} ,\phi_{2}^{\left( i \right)} } \right\}$$ are the two independent angles defining the configuration of each three-way junction. The first and second contributions on the right-hand side of (Eq. 21) represent the total junction, $$F_{{\text{j}}}$$, and bundle, $$F_{{\text{B}}}$$, energies, respectively. (Eq. 21) defines a complex potential energy surface in a $$\left( {4N_{{\text{j}}} + 2} \right)$$-dimensional parameter space. Searching for the global energy minimum configuration of an irregular network with a given number of three-way junctions is an extremely difficult task even for a relatively small number of junctions. To address this issue, we numerically generated irregular network configurations using random Voronoi tessellations [22].
To obtain each network configuration, a set of random seed points was generated, which defined the positions and the angles of $$N_{{\text{j}}}$$ three-way junctions within the network area, $$A$$, while the corresponding Voronoi tessellation generated the network edges. For each generated network, the energy was calculated according to (Eq. 21). We used this numerical method to investigate irregular network configurations for different sets of the three independent system parameters: the number of junctions in the network, $$N_{{\text{j}}}$$, the negative tension, -$$\gamma$$, and the characteristic length scale of the network, $$l_{{\text{c}}}$$, which is given by (Eq. 15) and equals the side length of the unit cell in the optimal hexagonal network. For each set of the system parameters, we generated a large ($$\sim 10^{5}$$) but finite ensemble of irregular network configurations. The planar area, $$A$$, of each network realization was taken to be $$A = 15^{2}$$ µm², matching the area of a central KIF network of diameter $$\sim 8.5$$ µm. This selection of a specific area does not limit the generality of our results, since all extensive values scale with the area, $$A$$. An example of histograms representing the distributions of the bundle, $$F_{{\text{B}}}$$, junction, $$F_{{\text{j}}}$$, and total, $$F_{{\text{N}}}$$, energies among the $$\sim 10^{5}$$ irregular configurations of networks having $$N_{{\text{j}}} = 77$$ junctions and $$N_{{\text{M}}} = 25$$ bundle sources is given in (Fig. 4A–C). For comparison, the corresponding energies of the hexagonal network with symmetric junctions having the same numbers $$N_{{\text{j}}}$$ and $$N_{{\text{M}}}$$ are shown by dashed vertical red lines. The histograms are well described by Gaussian distribution functions, the widths of which are an order of magnitude smaller than the corresponding mean energies.
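A stripped-down version of this generation-and-scoring procedure can be sketched with SciPy's Voronoi implementation. The sketch below is our illustration, not the authors' code: it scores every junction at the symmetric-junction energy $$f_{{\text{j}}}^{0}$$, ignoring the angular corrections of (Eq. 10) and the boundary bundle sources, so it only mimics the bundle-length statistics of such an ensemble (arbitrary units; SciPy is assumed to be available):

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)

def network_energy(n_seeds, side=15.0, f0=1.0, gamma=1.0):
    """Crude estimate of Eq. (21) for one random Voronoi network.

    Every junction is charged the symmetric-junction energy f0; angular
    corrections and boundary sources are ignored for brevity.
    """
    pts = rng.uniform(0.0, side, size=(n_seeds, 2))
    vor = Voronoi(pts)

    def inside(p):
        return bool(np.all((p >= 0.0) & (p <= side)))

    # Generic 2D Voronoi vertices are exactly the three-way junctions
    n_j = sum(inside(v) for v in vor.vertices)

    # Total bundle length: sum the finite ridges lying inside the area
    length = 0.0
    for i, j in vor.ridge_vertices:
        if i == -1 or j == -1:
            continue  # ridge extends to infinity, skip it
        a, b = vor.vertices[i], vor.vertices[j]
        if inside(a) and inside(b):
            length += float(np.linalg.norm(a - b))
    return n_j * f0 - gamma * length, n_j, length

# Score an ensemble of random configurations; the lowest-energy member
# plays the role of the optimal irregular configuration for this set
energies = [network_energy(80)[0] for _ in range(200)]
print(min(energies), sum(energies) / len(energies))
```

The ensemble spread of these energies is the analogue of the histograms in Fig. 4, although the real computation also evaluates the junction-angle energies.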
The bundle energy, $$F_{{\text{B}}}$$, of the hexagonal network lies in the high-energy tail of the histogram for irregular networks (see Fig. 4A), while the junction energy of the hexagonal network, $$F_{{\text{j}}}$$, obviously gives the lower bound for $$F_{{\text{j}}}$$ of the irregular networks (see Eq. 10 and Fig. 4B). As a result, the total energy of the hexagonal network, $$F_{{\text{N}}}$$, is close to the mean value of the irregular network energy (Fig. 4C). As illustrated by (Fig. 4C), for the specific parameter set used in the computations, there are irregular network configurations whose total energy is lower than that of the hexagonal configuration with symmetric junctions. To generalize this conclusion, we picked the irregular configurations with the lowest total energy for all analyzed parameter sets. We refer to these configurations as the optimal configurations, characterized by the energy $$F_{{\text{N}}}^{0}$$ for a given number of junctions. An example of an optimal configuration is presented in (Fig. 5A), whereas the hexagonal network with symmetric junctions having the same numbers of junctions, $$N_{{\text{j}}}$$, and bundle sources, $$N_{{\text{M}}}$$, is presented in (Fig. 5B) for comparison. The optimal configuration energy, $$F_{{\text{N}}}^{0}$$, as a function of the number of junctions, $$N_{{\text{j}}}$$, is presented for a set of different characteristic lengths, $$l_{{\text{c}}}$$, and for different absolute values of the tension, $$\gamma$$, in (Figs. 6 and 7), respectively. For comparison, each figure shows by red squares the energies $$F_{{\text{N}}}^{*}$$ of the hexagonal configurations with the same number of junctions and source points. The solid line shows the results of calculations according to the approximate analytical equation (Eq. 13), which neglects the contributions of the network edges connected to the boundary. The latter is in excellent agreement with the exact result for the hexagonal network (red squares).
For all studied sets of parameters, the energies of the optimal irregular configurations, $$F_{{\text{N}}}^{0}$$, are significantly lower than those of the hexagonal networks with symmetric junctions, $$F_{{\text{N}}}^{*}$$. The reason for the predicted energetic favorability of the irregular network compared to the hexagonal one is a larger overall bundle length and the related energy contribution of the negative tension. In spite of the substantial difference in values, $$F_{{\text{N}}}^{0}$$ and $$F_{{\text{N}}}^{*}$$ exhibit similar dependencies on the number of junctions $$N_{{\text{j}}}$$, with minima corresponding to the optimal junction densities (Figs. 6 and 7). Notably, the number of junctions corresponding to the minimum of $$F_{{\text{N}}}^{0}$$ is independent of the tension, − $$\gamma$$, as predicted analytically for $$F_{{\text{N}}}^{*}$$ (Eq. 17). Our simulations also showed that, as expected, for the irregular network configurations the average dimension of the network unit-cell, $$l^{*}$$, corresponding to the minimum of the energy as a function of the number of junctions, is close to the characteristic length, $$l_{{\text{c}}}$$, given by (Eq. 15). In particular, the results presented in (Figs. 6 and 7) show that $$\frac{{l^{*} }}{{l_{{\text{c}}} }} \simeq$$ 1.2 for $$l_{{\text{c}}} = \frac{2}{3}, \frac{4}{3},$$ and 2 µm. Thus, (Eq. 15) provides a good estimate of the characteristic unit-cell scale for both the irregular and the hexagonal network configurations. To assess the efficiency of these estimates, we evaluated the convergence of our numerical procedure with increasing tessellation number $$Q$$. The results illustrated in (Fig. 8) show that the energy levels off exponentially with $$Q$$. Thus, we expect that the minimal energies found for the irregular configurations may serve as a good approximation of the global energy minima.
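The analytic optimum of the hexagonal reference configuration, (Eqs. 13–17), is easy to verify numerically. The short sketch below (ours, in arbitrary units) minimizes (Eq. 13) on a grid and reproduces the analytic junction number from (Eq. 14) and the minimal energy from (Eq. 17), which conveniently equals $$-N_{{\text{j}}}^{*} f_{{\text{j}}}^{0}$$; it also reproduces the $$- 9.81 \times 10^{4}$$ kBT µm⁻² estimate quoted earlier:

```python
import numpy as np

f0, gamma, A = 1.0, 1.0, 225.0   # f_j^0, gamma, area, in arbitrary units

def F(N):
    """Energy of the hexagonal network, Eq. (13)."""
    return N * f0 - gamma * np.sqrt(np.sqrt(3.0) * A * N)

N_grid = np.linspace(1.0, 400.0, 200001)
N_opt = N_grid[np.argmin(F(N_grid))]

N_star = (np.sqrt(3.0) / 4.0) * (gamma / f0) ** 2 * A  # optimal number, Eq. (14)
print(N_opt, N_star)           # grid minimum reproduces the analytic optimum
print(F(N_opt), -N_star * f0)  # F_N* = -N_j* f_j^0, equivalent to Eq. (17)

# KIF estimate quoted in the text: gamma = 1.7e5 kBT/µm, l_c = 1 µm
gamma_kif, lc = 1.7e5, 1.0
print(-gamma_kif / (np.sqrt(3.0) * lc))  # about -9.81e4 kBT per µm^2
```

The identity $$F_{{\text{N}}}^{*} = - N_{{\text{j}}}^{*} f_{{\text{j}}}^{0}$$ follows from substituting (Eq. 14) into (Eq. 13) and is a compact consistency check of (Eqs. 14–17).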
## Discussion

Here we analyzed the structures and stability of planar two-dimensional networks of bundles inter-connected by mobile three-way junctions. The special features of the considered networks, as compared to those investigated previously, are the negative value of the tension imposed on the system and the rigidity of the network junctions with respect to folding deformations. As implied by the existence of tension, the networks were assumed to be connected to an external reservoir of the bundle material. The energetically preferable network configurations were determined by an interplay between the system's tendency to increase the overall length of the bundles, driven by the negative tension, and the energy cost of creation of the network junctions. We described these network configurations by the optimal values of the junction number, the overall bundle length, and the network morphology. As a paradigm of such a system we used the networks of Keratin Intermediate Filaments observed in live cells [16]. A general conclusion of our analysis is that the negative sign of the tension is mandatory for the existence of system configurations stable with respect to the network withdrawal into the reservoir. First, we examined analytically a limiting case of hexagonal networks with symmetric junctions, for which all three inter-bundle angles within a junction equal $$\frac{2\pi }{3}$$. We determined the optimal network density and derived the conditions of the network stability with respect to folding of the junctions. We described the conformational degeneracy of the optimal network configurations. Further, we studied, by numerical simulations based on the Voronoi tessellation method, the irregular network configurations, which had, generally, asymmetric three-way junctions. We demonstrated that such configurations exhibit significantly lower energies than the regular hexagonal ones.
At the same time, the dependence of the minimal energy of the irregular configurations on the number of junctions was similar to that predicted analytically for the regular hexagonal configurations. Moreover, the unit-cell sizes of the optimal irregular configurations were close to those determined for the regular hexagonal configuration. Hence, scale-wise, the regular hexagonal network approximation provides a satisfactory description of the system.

### Limitations of the analysis

One limitation of our study is the use of the Voronoi tessellation algorithm for generation of the irregular network configurations. This method produces only a subset of configurations, for which all inter-bundle angles in the junctions remain smaller than or equal to $$\pi$$. This restriction does not significantly affect the results if the energy cost of the junction deviations from the symmetric conformation is sufficiently large. To reliably fulfill this condition, we limited our computations to the range of system parameters satisfying Eq. (12), which can be rewritten in terms of $$N_{{\text{j}}}$$, $$A$$ and $$l_{{\text{c}}}$$ as $$N_{{\text{j}}} > \frac{{3^{5/2} }}{200} \frac{A}{{l_{{\text{c}}}^{2} }}.$$ (22) The smallest values of $$N_{j}$$ satisfying the condition (Eq. 22), for the used value of the area, $$A = 15^{2}$$ µm², are, approximately, $$8, 18$$, and $$70$$ for, respectively, $$l_{{\text{c}}} =$$ 1.5 µm, 1 µm and $$0.5$$ µm. This condition determined the lower limits of the network sizes analyzed in (Figs. 6 and 7). Another minor limitation is related to our assumption that the KIF network is constrained to a flat surface. In reality, the network plane has a sphere-like shape and undergoes small out-of-plane perturbations. According to our estimations, this does not change the qualitative conclusions of our study, since the radius of the keratin network plane (~ 5 μm) is sufficiently large compared to the average length of bundles between consecutive junctions (~ 1 µm).
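Returning to Eq. (22): the quoted lower limits follow directly from it with $$A = 15^{2}$$ µm². A one-line check (our own sketch):

```python
A = 15.0**2           # network area, µm^2, as used in the computations
coeff = 3**2.5 / 200  # the prefactor 3^(5/2)/200 of Eq. (22)

for lc in (1.5, 1.0, 0.5):        # characteristic lengths, µm
    print(lc, coeff * A / lc**2)  # -> about 7.8, 17.5 and 70.1
```

Rounding the three thresholds reproduces the quoted values of approximately 8, 18 and 70 junctions.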
## References

1. McKenna, G. B. Soft matter: Rubber and networks. Rep. Prog. Phys. 81, 66602 (2018).
2. Lieleg, O., Claessens, M. M. A. E. & Bausch, A. R. Structure and dynamics of cross-linked actin networks. Soft Matter 6, 218–225 (2010).
3. Stavans, J. The evolution of cellular structures. Rep. Prog. Phys. 56, 733 (1993).
4. Lee, C. & Chen, L. B. Dynamic behavior of endoplasmic reticulum in living cells. Cell 54, 37–46 (1988).
5. Lin, C., White, R. R., Sparkes, I. & Ashwin, P. Modeling endoplasmic reticulum network maintenance in a plant cell. Biophys. J. 113, 214–222 (2017).
6. Shemesh, T. et al. A model for the generation and interconversion of ER morphologies. Proc. Natl. Acad. Sci. 111, E5243–E5251 (2014).
7. Goyal, U. & Blackstone, C. Untangling the web: Mechanisms underlying ER network formation. Biochim. Biophys. Acta Mol. Cell Res. 1833, 2492–2498 (2013).
8. Lin, C., Zhang, Y., Sparkes, I. & Ashwin, P. Structure and dynamics of ER: Minimal networks and biophysical constraints. Biophys. J. 107, 763–772 (2014).
9. Neubert, L. & Schreckenberg, M. Numerical simulation of two-dimensional soap froth. Phys. A Stat. Mech. Appl. 240, 491–502 (1997).
10. Fletcher, D. A. & Mullins, R. D. Cell mechanics and the cytoskeleton. Nature 463, 485–492 (2010).
11. Hohmann, T. & Dehghani, F. The cytoskeleton: A complex interacting meshwork. Cells 8, 362 (2019).
12. Etienne-Manneville, S. Cytoplasmic intermediate filaments in cell biology. Annu. Rev. Cell Dev. Biol. 34, 1–28 (2018).
13. Köster, S., Weitz, D. A., Goldman, R. D., Aebi, U. & Herrmann, H. Intermediate filament mechanics in vitro and in the cell: From coiled coils to filaments, fibers and networks. Curr. Opin. Cell Biol. 32, 82–91 (2015).
14. Block, J., Schroeder, V., Pawelzyk, P., Willenbacher, N. & Köster, S. Physical properties of cytoplasmic intermediate filaments. Biochim. Biophys. Acta Mol. Cell Res. 1853, 3053–3064 (2015).
15. Seltmann, K., Fritsch, A. W., Kas, J. A. & Magin, T. M.
Keratins significantly contribute to cell stiffness and impact invasive behavior. Proc. Natl. Acad. Sci. 110, 18507–18512 (2013).
16. Windoffer, R., Beil, M., Magin, T. M. & Leube, R. E. Cytoskeleton in motion: The dynamics of keratin intermediate filaments in epithelia. J. Cell Biol. 194, 669–678 (2011).
17. Janmey, P. A., Leterrier, J.-F. & Herrmann, H. Assembly and structure of neurofilaments. Curr. Opin. Colloid Interface Sci. 8, 40–47 (2003).
18. Gard, D. L. & Lazarides, E. The synthesis and distribution of desmin and vimentin during myogenesis in vitro. Cell 19, 263–275 (1980).
19. Kayser, J., Grabmayr, H., Harasim, M., Herrmann, H. & Bausch, A. R. Assembly kinetics determine the structure of keratin networks. Soft Matter 8, 8873 (2012).
20. Nolting, J. F., Möbius, W. & Köster, S. Mechanics of individual keratin bundles in living cells. Biophys. J. 107, 2693–2699 (2014).
21. Haimov, E., Windoffer, R., Leube, R. E., Urbakh, M. & Kozlov, M. M. Model for bundling of keratin intermediate filaments. Biophys. J. 119, 65–74 (2020).
22. Du, Q., Faber, V. & Gunzburger, M. Centroidal Voronoi tessellations: Applications and algorithms. SIAM Rev. 41, 637–676 (1999).

## Acknowledgements

The project idea resulted from the collaboration of M.M.K. with Rudolf E. Leube and Reinhard Windoffer, within European Union H2020-MSCA-ITN InCeM. M.M.K. is supported by SFB 958 “Scaffolding of Membranes” (Germany) and Singapore-Israel (NRF-ISF) research grant 3292/19. M.U. acknowledges the financial support of the Israel Science Foundation, Grant 1141/18.

## Author information

### Contributions

E.H. performed computations, analyzed the results, and wrote the article; M.U. formulated the problem, analyzed the results, and wrote the article; M.M.K. formulated the problem, analyzed the results, and wrote the article.

### Corresponding authors

Correspondence to Michael Urbakh or Michael M. Kozlov.

## Ethics declarations

### Competing interests

The authors declare no competing interests.
### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Haimov, E., Urbakh, M. & Kozlov, M.M. Negative tension controls stability and structure of intermediate filament networks. Sci Rep 12, 16 (2022). https://doi.org/10.1038/s41598-021-02536-0
2022-07-03 00:38:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8703863620758057, "perplexity": 1130.904989316347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104205534.63/warc/CC-MAIN-20220702222819-20220703012819-00222.warc.gz"}
http://bosco-uganda.com/computer_training?rev=1532866215&do=diff
# bosco-uganda ### Site Tools computer_training # Differences This shows you the differences between two versions of the page. — computer_training [2018/07/29 08:10] (current) Line 1: Line 1: + + - \\ + \\ \\  Like in any institution we also had time for discussion just before the next lecture. This was so important to us as we could share important or more difficult concepts of the previous day. Students/ workshop participants and any other person in the learning process, take it that the culture of sharing does not make any smaller but makes you grow and grow.\\ \\ \\ \\ \\ \\ \\ + + {{http://​lh4.ggpht.com/​_0t1J8np8lCE/​S1_ZiTZ48xE/​AAAAAAAAAII/​TFK7dwe--Bg/​s144-c/​Joksim.jpg|with friends}}
2019-08-22 09:10:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 1393.33512848725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317037.24/warc/CC-MAIN-20190822084513-20190822110513-00449.warc.gz"}
https://udspace.udel.edu/handle/19716/333
## Cyclic Relative Difference Sets and Their p-Ranks 2002 Chandler, D.B. Xiang, Qing ##### Publisher Department of Mathematical Sciences ##### Abstract By modifying the constructions in [10] and [15], we construct a family of cyclic ((q 3k − 1)/(q − 1), q − 1, q 3k − 1 , q 3k − 2 ) relative difference sets, where q = 3 e . These relative difference sets are “liftings” of the difference sets constructed in [10] and [15]. In order to demonstrate that these relative difference sets are in general new, we compute p-ranks of the classical relative difference sets and 3-ranks of the newly constructed relative difference sets when q = 3. By rank comparison, we show that the newly constructed relative difference sets are never equivalent to the classical relative difference sets, and are in general inequivalent to the affine GMW difference sets. ##### Keywords Affine GMW difference set , Gauss sum , Relative difference set , Singer difference set , Stickelberger’s theorem , Teichmuller character
2023-02-01 22:07:43
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.912047803401947, "perplexity": 1500.0310368870744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499953.47/warc/CC-MAIN-20230201211725-20230202001725-00859.warc.gz"}
https://www.physicsforums.com/threads/multiplying-rational-expressions.216508/
# Multiplying Rational Expressions 1. Feb 19, 2008 ### mike_302 1. The problem statement, all variables and given/known data $$\frac{5(y-2)}{y+1}$$ x $$\frac{y+1}{10}$$ 2. Relevant equations 3. The attempt at a solution Does this equal 5(y-2)(y+1)/10(y+1) ? Or are there no brackets on that first y+1 ? Last edited: Feb 19, 2008 2. Feb 19, 2008 ### Kurdt Staff Emeritus It would be equal to what you have. You can simplify it as well. 3. Feb 19, 2008 ### mike_302 Ok, I just needed to make sure. Now I understand/am sure of how to simplify it. Thanks!
2017-08-21 05:17:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6308890581130981, "perplexity": 3813.16868923448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107490.42/warc/CC-MAIN-20170821041654-20170821061654-00662.warc.gz"}
https://math.stackexchange.com/questions/737154/show-that-the-orbit-of-h-containing-x-is-equal-to-the-right-coset-hx
# Show that the orbit of $H$ containing $x$ is equal to the right coset $Hx$

Let $G$ be a group and let $H$ be a subgroup of $G$. Let $X$ be the set of elements of $G$. Let $\ast : H \times X \to X$ be given by $$h \ast x = hx \quad (h \in H,\ x \in X).$$

QUESTION: Let $x \in X$. Show that the orbit of $H$ containing $x$ is equal to the right coset $Hx$.

ATTEMPT: Firstly I know that the right coset $Hx = \{ hx \mid h \in H\}$, and I'm supposed to show that if $A$ is the orbit of $H$ containing $x$, then $A \subset Hx$ and $Hx \subset A$. But my problem is I have no idea what "the orbit of $H$ containing $x$" means. Can someone clarify this for me? I can picture orbits of single elements in a group but I can't imagine the orbit of an entire group.

• Let $G$ act on $\Omega$ and let $a\in \Omega$; then the orbit containing $a$ is $G.a=\{ga \mid g\in G\}$. Since $1\in G$, $a\in Ga$. – mesel Apr 2 '14 at 20:22

• As a result, the orbit of $H$ containing $x$ is $Hx$. – mesel Apr 2 '14 at 20:26

By "orbit of $H$" they mean "orbit of the action of $H$." In terms of language, orbits belong to both elements of $X$ and to the group $H$, so we could say "orbit of $x$" for elements $x\in X$ or we could also say "orbit of $H$" to mean one of the orbits of the action of $H$ on $X$.
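Putting the comments together, the short proof the question asks for runs as follows:

```latex
\operatorname{Orb}_H(x) \;=\; \{\,h \ast x \mid h \in H\,\} \;=\; \{\,hx \mid h \in H\,\} \;=\; Hx .
```

Both inclusions are immediate: if $a$ lies in the orbit of $H$ containing $x$, then $a = h \ast x = hx$ for some $h \in H$, so $a \in Hx$; conversely, any $hx \in Hx$ equals $h \ast x$ and hence lies in the orbit. Finally $x = 1x \in Hx$, so this coset is indeed the orbit *containing* $x$.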
https://www.scala-algorithms.com/CountRangeDivisibles/
# In a range of numbers, count the numbers divisible by a specific integer

## Algorithm goal

In a range $$5$$ to $$15$$, there are $$4$$ numbers divisible by $$n = 3$$: $$6$$, $$9$$, $$12$$ and $$15$$. Compute this for a generic $$n$$ (all non-negative numbers). In Codility, there is a similar problem 'CountDiv'.

## Explanation

A brute-force method will have a complexity that depends strictly on the size of the input number. Notice however that the count of numbers divisible by $$3$$, up to $$15$$, is $$5$$ ($$3, 6, 9, 12, 15$$). If we do that with respect to a range, we need only take the difference between the count up to the end of the range and the count up to the start of the range (excluding the start itself). Also, if the number $$0$$ is in the range, the count is increased by 1, because $$0$$ is divisible by all numbers (except $$0$$, of course).

## Scala Concepts & Hints

Range

The (1 to n) syntax produces a "Range" which is a representation of a sequence of numbers.

assert((1 to 5).toString == "Range 1 to 5")
assert((1 to 5).reverse.toString() == "Range 5 to 1 by -1")
assert((1 to 5).toList == List(1, 2, 3, 4, 5))

## Algorithm in Scala

13 lines of Scala (version 2.13), showing how concise Scala can be!

## Test cases in Scala

assert(countDivisibles(5 to 15, divisor = 3) == 4)
assert(countDivisibles(0 to Int.MaxValue, divisor = Int.MaxValue) == 2)
assert(countDivisibles(0 to 0, divisor = 3) == 1)
assert(countDivisibles(1 to Int.MaxValue, divisor = Int.MaxValue / 2) == 2)

def countDivisibles(range: Range, divisor: Int): Int = ???
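The page leaves the solution as a stub (`???`). One possible O(1) implementation consistent with the explanation above — my sketch, not the site's official 13-line solution; it assumes a non-empty range of non-negative numbers and a positive divisor — is:

```scala
def countDivisibles(range: Range, divisor: Int): Int = {
  // Positive multiples of `divisor` in [1, n]: floor(n / divisor).
  def positiveMultiplesUpTo(n: Int): Int = if (n < 0) 0 else n / divisor
  // 0 is divisible by every non-zero divisor, so it contributes 1 when in range.
  val zeroCount = if (range.contains(0)) 1 else 0
  // Count in [start, last] = count up to `last` minus count up to `start - 1`.
  positiveMultiplesUpTo(range.last) - positiveMultiplesUpTo(range.start - 1) + zeroCount
}
```

The arithmetic avoids iterating the range, so the running time is independent of its size; for `5 to 15` with divisor 3 it reduces to `15/3 - 4/3 = 5 - 1 = 4`, matching the page's first test case.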
http://www.speedsolving.com/forum/threads/huge-qqtimer-update.34227/page-3
# Huge qqTimer Update! Discussion in 'Software Area' started by qqwref, Dec 20, 2011. Not open for further replies. 1. ### timelessMember Mar 2, 2011 pstimeless wheres the scrambles of your average of 3? 2. ### joeyMember Apr 8, 2007 WCA: 2007GOUL01 cardologist qqTimer doesn't keep the scrambles when you refresh the page, only the times. 3. ### samchoochiuMember 252 0 Nov 23, 2011 WCA: 2010CHIU01 samchoochiu honestly I hate this new update. Not to be a hater, but what I loved most about the older qq timer was that you can just press F5 and you can do a new avg. Now I have to actually grab my mouse and click reset. I don't like doing avg of 12 so I often reset the timer and it's annoying how I need to keep alternating between mouse and keyboard. Is there any way to go back? Oct 31, 2010 Melbourne, AUS WCA: 2011KILB01 http://www.speedsolving.com/timer/qqtimer.htm 5. ### Specs112Member 322 1 Dec 19, 2010 Ithaca, NY WCA: 2011ANDE03 Oct 31, 2010 Melbourne, AUS WCA: 2011KILB01 Not a clue. Honestly I don't understand why you would want to use the old one over the new one, just showing where you could still find it. 7. ### qqwrefMember 7,832 18 Dec 18, 2007 a <script> tag near you WCA: 2006GOTT01 qqwref2 I put up a slightly more minimal version at qqtimer.net/minimal. It's the same as the normal one except that it doesn't save sessions, change stylesheets, or use the newer (and somewhat slow) random state 3x3x3 and Square-1 scramblers. So it should be better for those who are used to the older version. 8. 
### LidMember 771 192 Jul 8, 2008 Sweden WCA: 2008LIDS01 qqTimer + FF9.0.1 = total fail, nothing happens when you press space. 9. ### joeyMember Apr 8, 2007 WCA: 2007GOUL01 cardologist Firefox has a (hilarious) bug. qqwref: to fix it ^ just do Code: window.onkeydown = function(event) {checkKey(event.keyCode); }; window.onkeyup = function(event) {startTimer(event.keyCode); }; Code: <body onkeyup="startTimer(event.keyCode);" onkeydown="checkKey(event.keyCode);"> 10. ### NSKuberMember 292 0 Oct 29, 2010 Novosibirsk, Russia WCA: 2011OBLA01 NSKuber This was probably asked many, many times, but is it possible to make this timer usable on a mobile device (Android)? It's very good, but it's almost impossible to run and stop the timer. 11. ### thackernerdMember It would be so awesome and convenient if eventually you could save times for each specific puzzle. 12. ### JyHMember Jan 5, 2011 Massachusetts WCA: 2011HORI01 Is there any reason why my timer won't start anymore? EDIT: It seems to be working for my friend... Sep 9, 2009 Firefox 9 14. ### otsykeMember 98 0 Jul 29, 2009 I noticed the Firefox 9 bug in the mid-November beta (and pointed it out). A lot of web timers are affected. While waiting for the fix, as a workaround, you can focus one of the textboxes in the timer options and then use the space key normally. 15. ### qqwrefMember 7,832 18 Dec 18, 2007 a <script> tag near you WCA: 2006GOTT01 qqwref2 Firefox automatically upgraded me to version 9 >.< So I fixed the FF9 bug. Also, I changed "suboptimal random state" to just "random state" everywhere, because of this discussion. Unfortunately, I don't have an Android, or the development software/account. So it probably won't happen unless someone else does it (or buys me the stuff I'd need). That would be interesting, but I'd have to think about how that would actually work and whether it'd be reasonable... 
I don't want to use too much space, and there are issues with what exactly counts as a different puzzle (for instance, new and old style 3x3 shouldn't, but <R,U> and <F,R,U> should...). So I'll consider it, but it might not happen. 16. ### samchoochiuMember 252 0 Nov 23, 2011 WCA: 2010CHIU01 samchoochiu [I hate the 'new' qqtimer] The damn timer never starts. I can't do wca inspection anymore and all the functions are switched. I demand you to change it back. It went from my FAVORITE timer to the worst timer ever. Last edited by a moderator: Jan 2, 2012 17. ### HersheyMember Apr 23, 2011 New Jersey, USA WCA: 2010SHRI02 http://www.speedsolving.com/timer/qqtimer.htm Problem solved? Now f*** off. May 13, 2007 Denver, CO WCA: 2007COHE01 masterofthebass You have an iPhone though so you can test it if you really want to. All it needs to work as a mobile version is an area that responds to touch. You can even just test it with a mouse! I had at one point made the timing <td> respond to onmousedown() and onmouseup() which worked sort of ok for mobile devices, but you may have another idea for implementing it. You can see a variation here. I added another row to the main table so that I could use my trackpad for start stop when I wasn't using an external keyboard. 19. ### JulianMember I solve with Stackmat, so I enter in my times manually. Would it be possible to change it back to saving my preferences for entering in times? Thanks. 20. ### Tim MajorPlatinum Member Aug 26, 2009 Melbourne, Australia WCA: 2010MAJO01 I'm not sure how well this would run, but what about just swapping space to left click? (Android) Also making all calculations wait until you press calculate, such as average of 5, etc, to save on RAM. If you were really interested you could download an android simulator for testing, but in the end, it has no benefit for you, so it wouldn't be a very ec
ERROR: type should be string, got "https://stoneswww.academickids.com/encyclopedia/index.php/Jordan-H%F6lder_theorem"
# Composition series

In mathematics, a composition series of a group G is a normal series

[itex]1 = H_0\triangleleft H_1\triangleleft \cdots \triangleleft H_n = G,[/itex]

such that each Hi is a maximal normal subgroup of Hi+1. Equivalently, a composition series is a normal series such that each factor group Hi+1 / Hi is simple. The factor groups are called composition factors.

A normal series is a composition series if and only if it is of maximal length. That is, there are no additional subgroups which can be "inserted" into a composition series. The length n of the series is called the composition length. If a composition series exists for a group G, then any normal series of G can be refined to a composition series, informally, by inserting subgroups into the series up to maximality. Every finite group has a composition series, but not every infinite group has one. For example, the infinite cyclic group has no composition series.

In general, a group will have multiple, different composition series. However, the Jordan-Hölder theorem (named after Camille Jordan and Otto Hölder) states that any two composition series of a given group are equivalent. That is, they have the same composition length and the same composition factors, up to permutation and isomorphism. This theorem can be proved using the Schreier refinement theorem.

For example, the cyclic group C12 has {E, C2, C6, C12}, {E, C2, C4, C12}, and {E, C3, C6, C12} as different composition series. The factor groups are isomorphic to {C2, C3, C2}, {C2, C2, C3}, and {C3, C2, C2}, respectively.
https://www.nature.com/articles/s41598-018-27133-6?error=cookies_not_supported&code=f707b1a0-7c92-443c-91ab-4138447a17e0
# Prolonged photo-carriers generated in a massive-and-anisotropic Dirac material

## Abstract

Transient electron-hole pairs generated in semiconductors can exhibit unconventional excitonic condensation. Anisotropy in the carrier mass is considered the key to elongating the lifetime of the pairs, and hence to stabilizing the condensation. Here we employ time- and angle-resolved photoemission spectroscopy to explore the dynamics of photo-generated carriers in black phosphorus. The electronic structure above the Fermi level has been successfully observed, and massive-and-anisotropic Dirac-type dispersions are confirmed; more importantly, we directly observe that the photo-carriers generated across the direct band gap have a lifetime exceeding 400 ps. Our finding confirms that black phosphorus is a suitable platform for excitonic condensation, and also opens an avenue for future applications in broadband mid-infrared BP-based optoelectronic devices.

## Introduction

Formation of electron-hole (e–h) pairs called excitons has renewed research interest since BCS (Bardeen-Cooper-Schrieffer)-like e–h Cooper pair condensation is expected in semiconductors1,2. By optical pumping, photoexcited carriers facing each other across the band gap form excitons due to the Coulomb interaction between electrons and holes at low carrier density3. These excitons give rise to an unconventional excitonic state in direct band gap semiconductors and hence currently attract much interest in pump-probe studies4. Exciton condensation has been predicted theoretically1,2,3,5 and afterwards realized experimentally6,7,8,9, but decisive evidence of excitonic condensation in an optically pumped semiconductor remains elusive. Recently, it has been suggested theoretically that the e–h BCS order can be enhanced when the effective masses of electrons and holes have a strong anisotropy10,11.
In addition, photo-induced superconductivity has also been predicted in a laser-driven two-band semiconductor12. These studies encourage us to attempt experimental observation of excitonic condensation in a promising two-dimensional (2D) semiconductor possessing both a direct band gap and a high effective-mass anisotropy. Here, we focus on orthorhombic black phosphorus (BP), the most stable allotrope of phosphorus, with a direct band gap of 0.3 eV. It forms a folded honeycomb sheet running along the a-axis as shown in Fig. 1(a). One of the unique features distinguishing BP from other 2D materials is its anisotropic transport properties. The hole mobility has been reported to reach about 3000 cm2V−1s−1 at 200 K for bulk crystals13 and 1000 cm2V−1s−1 for a 15 nm-thick film along the c-axis14, more than 1.8 times larger than that along the a-axis. This anisotropic nature of BP allows one to build a high-performance transistor realizing ballistic transport, which is strongly related to the anisotropic effective mass. The effective masses of holes and electrons along the three axes have been studied by far-infrared cyclotron resonance15. The tunable band gap of BP, determined by the number of layers and ranging from 0.3 eV (bulk) to 2 eV (single monolayer), makes BP a promising nonlinear optical material, particularly with great potential for infrared and mid-infrared optoelectronics16,17,18,19. As a direct-band-gap, highly anisotropic 2D semiconductor, BP can be expected to form excitons from the nonequilibrium electron and hole populations after optical pumping and hence to condense into the excitonic e–h state. Very recently, several research groups have studied the carrier dynamics and anisotropic dynamical response in bulk and few-layer BP by using pump-probe transmission and reflection measurements20,21,22,23,24.
Furthermore, angle-resolved photoemission spectroscopy (ARPES) has so far been applied to obtain information on the hole bands (valence band) of bulk BP25,26,27 as well as on its band gap variation upon alkali metal doping28,29. However, the electrons (holes) excited into the conduction (valence) band and their carrier dynamics have not yet been clarified. Time- and angle-resolved photoemission spectroscopy (TARPES), conventional ARPES implemented with a pump-and-probe method, is a powerful tool to study the electron/hole bands and electron/hole dynamics with energy and momentum resolution. Recent TARPES studies on three-dimensional topological insulators have demonstrated that photoexcited electrons are bottlenecked at the Dirac point of the topological surface state, leading to a population inversion lasting up to 3 ps30 and exhibiting an electronic recovery time that varies from 5 ps to 400 ps with the closeness of EF to the Dirac point as well as with increasing bulk insulation31. Thus, to understand the population of photoexcited electrons (holes) in the conduction (valence) band and the carrier dynamics of BP, which offer hints on e–h pair condensation, TARPES is essential. Here we show the dynamics of photo-generated electrons/holes as well as their anisotropic features, utilizing the combined tools of ARPES and two-photon photoemission spectroscopy. More interestingly, we have directly observed that the populations of electrons and holes across the direct band gap persist with a long relaxation time of more than 400 ps.
The zoomed-up image with atomic resolution shows the more detailed surface structure of BP along the a- and c-directions, as described in Fig. 1(d). The in-plane lattice constants along the c- (lighter effective mass) and a- (heavier effective mass) directions are estimated from our STM image as 4.3 and 3.34 Å, consistent with previous works32,33,34. Hole band and its effective mass. In order to investigate the dispersion of the valence band maximum at the Z point of the bulk Brillouin zone [see Fig. 1(b)], normal-emission spectra were taken at several photon energies (hv = 19–56 eV). A dispersive feature is clearly observed, especially near EF [see Fig. S1 of Supplementary Material]. We find that the valence band is closest to EF at two photon energies, 19 and 52 eV. Since the photoemission intensity is stronger at 19 eV than at 52 eV, we choose 19 eV to take the in-plane ARPES dispersion at the valence band maximum. Figure 2(a,b) demonstrate the in-plane ARPES dispersion curves along the c- and a-axes. Both results show a downward dispersion near EF, while upward dispersions are also observed on the higher EB side. Now we focus on the band showing the downward valence band dispersion. To estimate the effective masses along the c- and a-axes, we first fitted the results with a polynomial function. We then calculated the in-plane hole effective mass m* using the relation $$\hbar^{2}/m^{*}={d}^{2}E/d{k}^{2}$$, where ħ is the reduced Planck constant and k is the wave number corresponding to the momentum. Finally, we obtain $${m}_{a}^{*}/{m}_{0}=-0.54$$ and $${m}_{c}^{*}/{m}_{0}=-0.05$$, which means that the effective mass along the a-axis is about 10 times larger than that along the c-axis. These values are a little smaller than the cyclotron masses, −0.648 and −0.076, respectively15. Electron band and its effective mass observed by TARPES. Now we turn to the conduction band, which can be directly accessed by TARPES.
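Spelling out the hole-mass anisotropy quoted above (simple arithmetic on the fitted values):

```latex
\frac{m_a^{*}}{m_c^{*}} \;=\; \frac{-0.54\,m_0}{-0.05\,m_0} \;=\; 10.8 \;\approx\; 10 ,
```

i.e. the hole band is roughly an order of magnitude heavier along the a-axis than along the c-axis — precisely the kind of effective-mass anisotropy the excitonic-condensation proposals10,11 rely on.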
Figure 3(a,b) show TARPES images taken along the c- and a-axes, recorded before (t = −1.33 ps) and after (t = 1.06 ps) the pump. Before pumping, there is no photoelectron intensity from the conduction band [see left panels of Fig. 3(a,b)], and only the valence band dispersion is obtained; its shape along the c- and a-axes again signifies the anisotropic nature discussed above. The unoccupied states are filled after the pumping and form an upward parabolic band dispersion along the c-axis with the energy minimum at 0.3 eV, as shown in the right panel of Fig. 3(a). In sharp contrast, the energy-band dispersion along the a-axis observed at the same delay time is much flatter [see right panel of Fig. 3(b)]. These markedly different band shapes along the c- and a-axes signify a giant anisotropy in the conduction band of BP. The bottom of the conduction band is located farther from EF than the top of the valence band, with an assumed direct energy gap of 0.32 eV — a sign of a p-type semiconductor. We estimated the effective masses with the same procedure as for the ARPES data. As a result, the effective masses along the c- and a-axes are 0.047 and 1 (in units of m0), respectively. Dynamics of pump-generated carriers. To unveil the dynamics of pump-generated carriers in the unoccupied states, we present the transient ARPES spectra at typical pump-probe delay times. The upper panels of Fig. 4(a) show TARPES images taken along the ΓZLX plane (c-axis) recorded after the pump (0 ≤ t ≤ 475 ps), whereas the lower panels show the difference images relative to before pumping. The increase (decrease) of the population of electrons (holes) in the conduction (valence) band as a function of delay time is more clearly reflected in the lower panels. The valence band electrons are excited to the unoccupied states immediately after the pump and cascade from the high-energy region to the bottom of the conduction band as time elapses.
From the difference spectra, we distinctly see the massive Dirac-type dispersion formed by the excited electrons and holes across the direct band gap, i.e., holes accumulating at the top of the valence band facing electrons piling up at the bottom of the conduction band. In the decaying process, the electrons in the unoccupied states lose their energy quickly in the range t = 0–3 ps. In contrast, the electrons at the bottom of the conduction band exhibit a long relaxation time of more than 400 ps. Note that this is much longer than that observed for graphene, up to 1 ps35,36. In order to discuss the energy-dependent dynamics, we provide the time evolution of the intensity integrated in the energy and momentum frames A to D [see Fig. 4(b)]. The intensity is normalized by the peak amplitude of each frame. Within a few picoseconds after pumping, the intensity rises steeply and then decreases exponentially. Comparing these frames, a comparatively fast decay is found in frames A and B; this may be considered as photon-induced interband scattering towards the lower conduction band, similar to graphene37. On the contrary, the decay process becomes slower in frames C and D, and the intensity persists systematically longer [particularly in frame D]. As for the relatively slow decay in frames C and D: after the electrons pile up at the conduction band bottom, recombination with the holes in the valence band occurs due to the narrow semiconducting gap. We further quantify the decay process by investigating the profile of the dissipation. To this end, we derive $$\Delta U(t)$$:

$$\Delta U(t)=\int_{0}^{\infty }\omega \,\Delta I(\omega ,t)\,d\omega . \qquad (1)$$

Here, $$\omega =E-{E}_{F}$$, t is the delay time, and $$\Delta I(\omega ,t)$$ is the energy distribution curve of the difference image at t [lower panels of Fig. 4(a)] integrated over the emission angle [−15°, +15°].
$$\Delta U(t)$$ is a good measure of the excess electronic energy deposited by the pump pulse (for example, see Ref. 38), and in the particular case of BP, of the excess energy carried by the conduction electrons, because the integral region of Eq. (1) lies on the unoccupied side. In Fig. 5, we display $$\Delta U(t)$$ versus t in linear-linear (a), semi-log (b), and log-log plots (c). First, the recovery lasting for more than 100 ps is discerned in the linear-linear plot [Fig. 5(a)] as the finite intensity remaining even after 100 ps. In the semi-log plot [Fig. 5(b)], we observe that the decay cannot be fit by a single exponential function: at least three exponential terms are needed to obtain a reasonable fit to the decay profile. That is, the dissipation exhibits a non-exponential, or slower-than-exponential, decay. The non-exponential profile seen in Fig. 5(b) motivated us to display $$\Delta U(t)$$ in the log-log plot [Fig. 5(c)], which is convenient for judging whether a power-law-type decay, $$\propto t^{-a}$$, is occurring or not. The power of the decay after 100 ps is read to be $$a\lesssim 0.05$$, much smaller than any power expected for the spatial dissipation of heat, namely a = 0.5 and 1 for dissipation in one and two dimensions, respectively38. In fact, the 100 ps time region is known to be still before the spatial diffusion of heat prevails. In this time region, $$a\simeq 0.05-0.3$$ is expected when the heat transfer between the high-energy phonons and low-energy phonons plays the key role in the recovery39. While the microscopic mechanism that dominates the long recovery profile remains elusive40, our analysis reveals that the longevity is unconventional and does not exclude the possibility that the underlying mechanism incorporates the formation of long-lived excitons.
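As an elementary reminder (not part of the paper's analysis itself) of why the log-log plot is the natural diagnostic here: a pure power law appears as a straight line of slope −a in log-log coordinates,

```latex
\Delta U(t) \propto t^{-a}
\quad\Longrightarrow\quad
\log \Delta U(t) = -a \,\log t + \mathrm{const},
```

so the nearly flat trace after 100 ps translates directly into the very small exponent $a \lesssim 0.05$ quoted above.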
## Conclusions

In this study, we have observed a giant anisotropy in the effective masses of electrons (holes) in the conduction (valence) band along the c- and a-axes of BP by ARPES and TARPES measurements. Moreover, we have also observed that the photo-carriers generated across the direct band gap in BP are maintained for over 400 ps. This long duration of the photoexcited electrons at the bottom of the conduction band can be attributed to stabilized e–h pairs between the conduction and valence bands, and it also overcomes the constraint that graphene's short carrier lifetime imposes on mid-infrared applications. A definitive signature of the excitonic insulating state is still lacking in the present study, probably because of insufficient temperature (the predicted critical temperature is typically lower than 1 K10,11) and the too-large probe energy (the probe wavelength should better match the band gap energy). But our experimental findings certainly provide precursor information for excitonic condensates in optically pumped semiconductors and also pave the way for developing versatile broadband mid-infrared optoelectronic devices.

## Methods

Single-crystalline BP samples were grown by the high-pressure Bridgman method as described elsewhere41. STM and ARPES measurements were conducted using an LT-STM (Omicron NanoTechnology GmbH) and the synchrotron radiation beamline (BL-7) equipped with a hemispherical photoelectron analyzer (VG-SCIENTA SES2002) of the Hiroshima Synchrotron Radiation Center (HSRC). The TARPES experiment was performed using the pump-and-probe configuration at the Laser and Synchrotron Research (LASOR) Center of the Institute for Solid State Physics (ISSP), the University of Tokyo. The TARPES system is equipped with an amplified Ti:sapphire laser system delivering 1.48 eV pulses of 170 fs duration at 250 kHz repetition rate and a hemispherical photoelectron analyzer (VG Scienta R-4000).
The laser is split into two beams; one pulse was used as the pump while the other was up-converted to 5.92 eV and used as the probe. The energy resolution and the Fermi energy position (EF) were determined by recording the Fermi cutoff of Au in electrical contact with the sample and the analyzer. The delay time between pump and probe pulses was tuned by a delay stage changing the optical path length of the pump beam line. The delay origin t = 0 and the time resolution of 280 fs were calibrated by using the pump-and-probe photoemission signal of graphite attached next to the sample42. Samples of BP were cleaved in situ with Scotch tape along the (010) plane for all measurements and measured at 5 K for ARPES and 77 K for STM, in an ultra-high vacuum better than 1 × 10−9 Pa. We confirmed that the in situ cleaved BP surface was free from oxidation and stable during the measurements by X-ray photoelectron spectroscopy (XPS) [see Figs S2 and S3 of Supplementary Material].

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Jerome, D. et al. Excitonic insulator. Phys. Rev. 158, 462–475 (1967).
2. Halperin, B. I. et al. Possible Anomalies at a Semimetal-Semiconductor Transition. Rev. Mod. Phys. 40, 755–766 (1968).
3. Bronold, F. X. et al. Possibility of an excitonic insulator at the semiconductor-semimetal transition. Phys. Rev. B 74, 165107 (2006).
4. Mor, S. et al. Ultrafast electronic band gap control in an excitonic insulator. Phys. Rev. Lett. 119, 086401 (2017).
5. Zenker, B. et al. Electron-hole pair condensation at the semimetal-semiconductor transition: A BCS-BEC crossover scenario. Phys. Rev. B 85, 121102(R) (2012).
6. Versteegh, M. A. M. et al. Observation of preformed electron-hole Cooper pairs in highly excited ZnO. Phys. Rev. B 85, 195206 (2012).
7. Seki, K. et al. Excitonic Bose-Einstein condensation in Ta2NiSe5 above room temperature. Phys. Rev.
B 90, 155116 (2014). 8. 8. Kim, S. Y. et al. Layer-Confined Excitonic Insulating Phase in Ultrathin Ta2NiSe5 Crystals. ACS Nano 10, 8888–8894 (2016). 9. 9. Lu, Y. F. et al. Zero-gap semiconductor to excitonic insulator transition in Ta2NiSe5. Nat.Commun. 8, 14408 (2017). 10. 10. Mizoo, K. et al. Effects of Effective Mass Anisotropy and Effective Mass Difference inHighly Photoexcited Semiconductors. J. Phys. Soc. Jpn. 74, 1745–1749 (2005). 11. 11. Mizoo, K. et al. Enhancement of the Electron–Hole BCS Order by Energy BandAnisotropy in Highly Photoexcited Semiconductors. J. Phys. Soc. Jpn. 75, 044401 (2006). 12. 12. Goldstein, G. et al. Photoinduced superconductivity in semiconductors. Phys. Rev. B 91, 054517 (2015). 13. 13. Akahama, Y. et al. Electrical properties of black phosphorus single crystals. J. Phys. Soc.Jpn. 52, 2148–2155 (1983). 14. 14. Xia, F. et al. Rediscovering black phosphorus as an anisotropic layered material for optoelectronics and electronics. Nat. Commun. 5, 4458 (2014). 15. 15. Narita, S. et al. Far-infrared cyclotron resonance absorptions in black phosphorus single crystals. J. Phys. Soc. Jpn. 52, 3544–3553 (1983). 16. 16. Lu, S. B. et al. Broadband nonlinear optical response in multilayer black phosphorus: an emerging infrared and mid-infrared optical material. Optics Express. 23(9), 11183 (2015). 17. 17. Xu, Y. H. et al. Solvothermal Synthesis and Ultrafast Photonics of Black Phosphorus Quantum Dots. Adv. Optical Mater. 4, 1223–1229 (2016). 18. 18. Zheng, J. L. et al. Black Phosphorus Based All-Optical-Signal-Processing: Toward High Performances and Enhanced Stability. ACS Photonics 4, 1466–1476 (2017). 19. 19. Zheng, J. L. et al. Few-Layer Phosphorene-Decorated Microfiber for All-Optical Thresholding and Optical Modulation. Adv. Optical Mater. 5, 1700026 (2017). 20. 20. Feng, G. S. et al. Dynamical Evolution of Anisotropic Response in Black Phosphorus under Ultrafast Photoexcitation. Nano Lett. 15, 4650–4656 (2015). 21. 21. Suess, R. J. 
et al. Carrier dynamics and transient photobleaching in thin layers of black phosphorus. Appl. Phys. Lett. 107, 081103 (2015). 22. 22. Qi, H. J. et al. Exceptional and anisotropic transport properties of photocarriers in black phosphorus. ACS Nano 9, 6436–6442 (2015). 23. 23. Surrente, A. et al. Onset of exciton-exciton annihilation in single-layer black phosphorus. Phys. Rev. B 94, 075425 (2016). 24. 24. Peng, W. K. et al. Ultrafast nonlinear excitation dynamics of black phosphorus nanosheets from visible to mid-infrared. ACS Nano 10, 6923–6932 (2016). 25. 25. Takahashi, T. et al. Highly-angle-resolved ultraviolet photoemission study of a black-phosphorus single crystal. Phys. Rev. B 29, 1105 (1984). 26. 26. Takahashi, T. et al. Angle-resolved photoemission study of black phosphorus: Interlayer energy dispersion. Phys. Rev. B 33, 4324 (1986). 27. 27. Han, C. Q. et al. Electronic structure of black phosphorus studied by angle-resolved photoemission spectroscopy. Phys. Rev. B 90, 085101 (2014). 28. 28. Kim, J. et al. Observation of tunable band gap and anisotropic Dirac semimetal state in black phosphorus. Science 349, 723–726 (2015). 29. 29. Ehlen, N. et al. Evolution of electronic structure of few-layer phosphorene from angle-resolved photoemission spectroscopy of black phosphorous. Phys. Rev. B 94, 245410 (2016). 30. 30. Zhu, S. Y. et al. Ultrafast electron dynamics at the Dirac node of the topological insulator Sb2Te3. Sci. Rep. 5, 13213 (2015). 31. 31. Sumida, K. et al. Prolonged duration of nonequilibrated Dirac fermions in neutral topological insulators. Sci. Rep. 7, 14080 (2017). 32. 32. Zhang, C. D. et al. Surface Structures of Black Phosphorus Investigated with Scanning Tunneling Microscopy. J. Phys. Chem. C 113, 18823–18826 (2009). 33. 33. Liang, L. B. et al. Electronic Bandgap and Edge Reconstruction in Phosphorene Materials. Nano. Lett. 14, 6400–6406 (2014). 34. 34. Hong, T. et al. 
Polarized photocurrent response in black phosphorus field-effect transistors. Nanoscale 6, 8978–8983 (2014). 35. 35. Ulstrup, S. et al. Ultrafast Dynamics of Massive Dirac Fermions in Bilayer Graphene. Phys. Rev. Lett. 112, 257402 (2014). 36. 36. Gierz, I. et al. Tracking Primary Thermalization Events in Graphene with Photoemission at Extreme Time Scales. Phys. Rev. Lett. 115, 086803 (2015). 37. 37. Jphannsen, J. C. et al. Direct View of Hot Carrier Dynamics in Graphene. Phys. Rev. Lett. 111, 027403 (2013). 38. 38. Ishida, Y., Masuda, H., Sakai, H., Ishiwata, S. & Shin, S. Revealing the ultrafast light-to-matter energy conversion before heat diffusion in a layered Dirac semimetal. Phys. Rev. B 93, 100302(R) (2016). 39. 39. Ono, S. Nonequilibrium phonon dynamics beyond the quasiequilibrium approach. Phys. Rev. B 96, 024301 (2017). 40. 40. Pertsova, A. & Balatsky, A. V. Excitonic instability in optically pumped three-dimensional Dirac materials. Phys. Rev. B 97, 075109 (2018). 41. 41. Endo, S. et al. Growth of Large Single Crystals of Black Phosphorus under High Pressure. Jpn. J. Appl. Phys. 21, L482–L484 (1982). 42. 42. Ishida, Y. et al. Time-resolved photoemission apparatus achieving sub-20-meV energy resolution and high stability. Rev. Sci. Instrum. 85, 123904 (2014). ## Acknowledgements The TARPES measurement was carried out by the joint research in ISSP, University of Tokyo. This work was partly supported by JSPS Kakenhi (Grant Nos 26247064, 2680015, 17H06138, 18H01148 and 18H03683). ## Author information ### Affiliations 1. #### Department of Physical Sciences, Graduate School of Science, Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, 739-8526, Japan • Munisa Nurmamat • , Ryohei Yori • , Kazuki Sumida • , Siyuan Zhu •  & Akio Kimura 2. #### Institute for Solid State Physics, the University of Tokyo, 5-1-5 Kashiwa-no-ha, Kashiwa, Chiba, 277-8581, Japan • Yukiaki Ishida •  & Shik Shin 3. 
#### Aichi Synchrotron Radiation Center, Aichi Science & Technology Foundation, 250-3 Minamiyamaguchi-cho, Seto, 489-0965, Japan • Masashi Nakatake 4. #### Hiroshima Synchrotron Radiation Center, Hiroshima University, 2-313 Kagamiyama, Higashi-Hiroshima, 739-0046, Japan • Yoshifumi Ueda •  & Masaki Taniguchi 5. #### Graduate School of Material Science, University of Hyogo, 3-2-1 Kouto, Kamigori-cho, Ako-gun, Hyogo, Japan • Yuichi Akahama ### Contributions Y.A. grew the bulk black phosphorus single crystals. M.N., R.Y., Y.I., Z.S. and M.N. performed the experiments and analyzed the results. M.N. and A.K. wrote the manuscript with inputs from all authors. Y.U., M.T., S.S. and A.K. supervised work and discussed the results. All authors contributed to the scientific discussion and manuscript preparation. ### Competing Interests The authors declare no competing interests. ### Corresponding authors Correspondence to Munisa Nurmamat or Akio Kimura.
## RTL-SDR: Seven Years Later

Before swearing my fealty to the Jolly Wrencher, I wrote for several other sites, creating more or less the same sort of content I do now. In fact, the topical overlap was enough that occasionally those articles would get picked up here on Hackaday. One of those articles, which graced the pages of this site a little more than seven years ago, was Getting Started with RTL-SDR. The original linked article has long since disappeared, and the site it was hosted on is now apparently dedicated to Nintendo games, but you can probably get the gist of what it was about from the title alone.

When I wrote that article in 2012, the RTL-SDR project and its community were still in their infancy. It took some real digging to find out which TV tuners based on the Realtek RTL2832U were supported, what adapters you needed to connect more capable antennas, and how to compile all the software necessary to get them listening outside of their advertised frequency range. It wasn't exactly the most user-friendly experience, and when it was all said and done, you were left largely to your own devices. If you didn't know how to create your own receivers in GNU Radio, there wasn't a whole lot you could do other than eavesdrop on hams or tune into local FM broadcasts.

Nearly a decade later, things have changed dramatically. The RTL-SDR hardware and software have themselves improved enormously, but perhaps more importantly, the success of the project has kicked off something of a revolution in the software defined radio (SDR) world. Prior to 2012, SDRs were certainly not unobtainable, but they were considerably more expensive. Back then, the most comparable device on the market would have been the FUNcube dongle, a nearly $200 receiver that was actually designed for receiving data from CubeSats. Anything cheaper than that was likely to be a kit, and often operated within a narrower range of frequencies.
Today, we would argue that an RTL-SDR receiver is a must-have tool. For the cost of a cheap set of screwdrivers, you can gain access to a world that not so long ago would have been all but hidden to the amateur hacker. Let's take a closer look at a few obvious ways that everyone's favorite low-cost SDR has helped free the RF hacking genie from its bottle in the last few years.

## Hardware Evolution

Even though the project is called RTL-SDR, the Realtek RTL2832U chip is in reality just half of the equation; it's a USB demodulator chip that needs to be paired with a tuner to function. In the early days, there were a number of different tuners in use, and figuring out which one you were getting was a pretty big deal. The Elonics E4000 was the most desirable tuner as it had the widest frequency range, but it could be difficult to know ahead of time what you were getting. The packaging and documentation were all but useless; either the manufacturer didn't bother to include the information, or if they did, it would often become outdated as new revisions of the product were produced. The only way to be sure about what you were getting was to see if somebody had already purchased that particular model and reported on their findings. Luckily, the tuners were cheap enough that you could buy a couple and experiment. In those days, it wasn't uncommon to find RTL-SDR compatible devices for less than $10 from import sites.

Opening up a contemporary RTL2832U+E4000 receiver, we can see they were relatively simple affairs. The flimsy plastic case doesn't do much to prevent interference, and the Belling-Lee connector is intended for use with a traditional TV antenna. Note this particular model features an IR receiver so the user could change TV channels with the included remote; a reminder of what this device was actually built for.

These days, you don't need to wade through pages of nearly identical looking USB TV tuners to find compatible hardware.
There are now several RTL2832U-based receivers which are specifically designed for RTL-SDR use, generally selling for around $30. These devices not only address the shortcomings of the original hardware offerings, but in many cases add in new capabilities that simply wouldn't have made sense to include back when they were just for watching TV on your computer.

Here we have the "RTL-SDR Blog v3" receiver, which is one of the most popular "next generation" RTL-SDR receivers. The plastic case has been replaced with an aluminum one that not only reduces interference, but helps the board dissipate heat while in operation. The crystal has been upgraded to a temperature compensated oscillator (TCXO) which helps reduce temperature drift. The R820T2 tuner is paired with a standard SMA antenna connector, and both it and the RTL2832U have some unused pins broken out if you're looking to get into developing modifications or expansions to the core hardware.

## Software Library

The improvements to the base RTL-SDR hardware are welcome, and it's nice to not have to worry about whether or not the receiver you've purchased is actually going to work with the drivers, but realistically those changes mainly benefit the more hardcore users who are pushing the edge of the envelope. If you're just looking to sniff some 433 MHz thermometers, you don't exactly need a TCXO. For most users, the biggest improvements have come in the software side of things.

For one, the RTL-SDR package is almost certainly going to be in the repository of your favorite GNU/Linux distribution. Unless you need some bleeding edge feature, you won't have to compile the driver and userland tools from source anymore. The same will generally be true for the SDR graphical frontend, namely gqrx by Alexandru Csete. Those two packages are enough to get you on the air and browsing for interesting signals, but that's just the beginning.
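Whatever frontend you pick, the raw material is always the same: a stream of complex IQ samples from the dongle. As a generic illustration of what these tools do under the hood (this is a self-contained sketch on synthetic data, not code from any of the packages mentioned), here is the classic quadrature trick for demodulating narrowband FM:

```python
import numpy as np

fs = 240_000      # sample rate, Hz
f_dev = 5_000     # peak frequency deviation of our synthetic FM signal, Hz
t = np.arange(fs) / fs

# Synthesize one second of baseband FM carrying a 1 kHz tone --
# the same kind of complex IQ stream an RTL-SDR hands you
audio = np.sin(2 * np.pi * 1_000 * t)
phase = 2 * np.pi * f_dev * np.cumsum(audio) / fs
iq = np.exp(1j * phase)

# Quadrature demodulation: the phase step between successive samples
# is proportional to the instantaneous frequency, i.e. the audio
demod = np.angle(iq[1:] * np.conj(iq[:-1])) * fs / (2 * np.pi * f_dev)
```

After the multiply-and-angle step, `demod` is the recovered audio; a real receiver would of course first mix, filter, and decimate the dongle's 2 MS/s stream down to the channel of interest before this stage.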
The rise of cheap SDRs has inspired a number of fantastic new software packages that are light-years ahead of what was available previously. Certainly one of the best examples is Universal Radio Hacker, an all-in-one tool that lets you search for, capture, and ultimately decode wireless signals. Whether it's a known protocol for which it already has a built-in decoder, or something entirely new that you need to reverse engineer, Universal Radio Hacker is a powerful tool for literally pulling binary data out of thin air. Those looking to reverse unknown wireless protocols should also take a look at inspectrum, another tool developed in the last few years that can be used to analyze captured waveforms.

If you're more interested in the practical application of these radios, there have also been a number of very impressive "turn-key" applications developed that leverage the high availability of low-cost SDRs. One such project is dump1090, an ADS-B decoder that was specifically developed for use with the RTL-SDR. With a distributed network of receivers, the software has allowed the community to democratize flight tracking through the creation of open data aircraft databases.

## The Gift of Inspiration

In the years since its inception, the RTL-SDR project has become the de facto "first step" for anyone looking to experiment with radio. It's cheap, it's easy, and since the hardware is incapable of transmission, you don't have to worry about accidentally running afoul of the FCC or your local equivalent. Honestly, it's difficult to think of a valid reason not to add one of these little USB receivers to your bag of tricks; even if you only use it once, it will more than pay for itself.

Ultimately, this is the greatest achievement of the RTL-SDR project. It drove the entry barrier for radio experimentation and hacking so low that it's spawned a whole new era.
From the unique vantage point offered by Hackaday, we can see the sharp uptick of RF projects that corresponds to the introduction of an easy to use and extremely affordable software defined radio. People who might never have owned a "real" radio beyond the one in their car can now peel back the layers of obscurity that in the past kept the vast majority of us off the airwaves. This is a very exciting time for wireless hacking, and things are only going to get more interesting from here on out. Long live RTL-SDR!

## Repurposed Plastic Protects PCBs

An errant wire snipping across the wrong electrical pins spells the release of your magic smoke. Even if you are lucky, stray parts are the root of boundless malfunctions, from disruptive to deadly. [TheRainHarvester] shares his trick for covering an Arduino Nano with some scrap plastic most of us have sitting in the recycling bin. The video is also after the break.

He calls this potting, but we would argue it is a custom-made cover. The hack is to cut a bit of plastic from food container lids, often HDPE or plastic #2. Trim a piece of it a tad larger than your unprotected board, and find a way to hold it in place so you can blast it with a heat gun. When we tried this at one of our Hackaday remote labs and applied a dab of hot glue between the board and some green plastic, it worked well. The video suggests a metal jig, which would be logical when making more than one. YouTube commenter and tip submitter [Keith o] suggests a vacuum former for a tighter fit, and we wouldn't mind seeing custom window cutouts for access to critical board segments such as DIP switches or trimmers.

We understand why shorted wires are a problem, especially when you daisy-chain three power supplies as happened in one of [TheRainHarvester]'s previous videos.
## X-Rays Are the Next Frontier in Space Communications

Hundreds of years from now, the story of humanity's inevitable spread across the solar system will be a collection of engineering problems solved, some probably in heroic fashion. We've already tackled a lot of these problems in our first furtive steps into the wider galaxy. Our engineering solutions have taken humans to the Moon and back, but that's as far as we've been able to send our fragile and precious selves.

While we figure out how to solve the problems keeping us trapped in the Earth-Moon system, we've sent fleets of robotic emissaries to do our exploration by proxy, to make the observations we need to frame the next set of engineering problems to be solved. But as we reach further out into the solar system and beyond, our exploration capabilities are increasingly suffering from communications bottlenecks that restrict how much data we can ship back to Earth. We need to find a way to send vast amounts of data back as quickly as possible using as few resources as possible on both ends of the communications link. Doing so may mean turning away from traditional radio communications and going way, way up the dial and developing practical means for communicating with X-rays.

## The Tyranny of Physics

The essential problems with deep space communications come from two sources – the inverse-square law and information theory. The inverse-square law states that the amount of energy at the receiving end of a radio communications link is inversely proportional to the square of the distance to the transmitter. Basically, radio waves spread out from the source and at very great distances tend to diminish into the background noise. That's why deep-space communications networks tend to have large antennas on both ends of the link, to gather and focus as much of the weak signal as possible, as well as to be able to transmit a powerful and narrowly focused beam.
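To put the inverse-square law into numbers, the usual tool is the free-space path loss form of the Friis equation. The sketch below is deliberately idealized (real link budgets add antenna gains, pointing losses, and coding on top of this), and the Voyager distance is a round number used purely for illustration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# A familiar reference point: 1 m at 2.4 GHz is about 40 dB
wifi_loss = free_space_path_loss_db(1, 2.4e9)

# Voyager's 8.1 GHz downlink from roughly 20 billion km loses on the
# order of 317 dB -- hence the huge, high-gain dishes on both ends
voyager_loss = free_space_path_loss_db(20e12, 8.1e9)
```

The staggering gap between those two numbers is exactly why deep-space links are counted in bits per second while your Wi-Fi is counted in megabits.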
Information theory tells us that more data can be packed into higher frequency signals than into lower frequency ones. Early satellites didn't need much bandwidth to do their jobs, so VHF and UHF radios were generally sufficient. But as spacecraft became more sophisticated and the amount of data they needed to send back increased, their communications links began shifting gradually up the electromagnetic spectrum into the microwave region. The Voyager probes, currently in interstellar space, have an uplink using 2.1 GHz for the relatively low-bandwidth tasks of vehicle control, with a downlink at 8.1 GHz, reflecting the increased bandwidth needed to send scientific data back to Earth.

For as stunning an engineering achievement as Voyager has been, and notwithstanding the fact that it's still working more than 40 years after launch, its radio gear only barely supports its interstellar mission. To be fair, Voyager was never meant to last this long, and every bit of data that makes it back to Earth is just icing on the cake. But for future missions specifically designed for interstellar space, sending back enough data to make such missions feasible will require more bandwidth.

## Small, Bright, and Fast

In late April, NASA is sending a pallet of gear up to the ISS, and one of the experiments stashed in the cargo is meant to explore the potential for X-ray communications, or XCOM, for deep space. The Modulated X-Ray Source (MXS) is a compact X-ray transmitter that will be mounted outside the space station. The receiver for this experiment is already installed; the Neutron Star Interior Composition Explorer (NICER) has been gathering X-ray spectra from neutron stars since 2017, while also gathering data about the potential for using X-ray pulsars as navigational beacons in a sort of "Galactic Positioning System". MXS is an interesting instrument.
When one thinks of making X-rays, the natural tendency is to assume a traditional hot-cathode vacuum tube would be used, where electrons are boiled off a filament and accelerated by an electric field in the range of 100 kilovolts to slam into a tungsten anode. But vacuum tubes like those found in a hospital X-ray suite aren't the best space travelers, and even when ruggedized they're too bulky and heavy to send upstairs. So NASA researchers developed a more spaceflight-friendly X-ray generator.

Rather than heating a filament to generate electrons, the X-ray source in MXS creates photoelectrons by bombarding a magnesium photocathode with UV light from LEDs. The few photoelectrons produced then enter an electron multiplier, an off-the-shelf component found in mass spectrometers that uses specially shaped chambers coated with a thin layer of semiconducting material. Each incident electron liberates a few secondary electrons, which bounce off the other wall of the multiplier to create more electrons, greatly amplifying the signal. The huge stream of electrons is then accelerated by a 10 kV field to collide with the target anode and produce X-rays.

While the MXS source sounds similar to a hot-cathode tube, there are important differences. First, the source can be made cheaply from off-the-shelf components and a 3D-printed metal enclosure. The whole assembly weighs only about 160 grams, fits in the palm of a hand, and has no unusual power or temperature control requirements. The big difference, though, is with how fast the X-rays can be turned on and off. A glowing filament can only heat up and cool down so quickly, meaning that effective modulation of X-rays from hot-cathode sources is difficult. In the MXS, X-rays are produced only when the UV LEDs are on, and those can be switched very quickly, in the sub-nanosecond range.
The ability to modulate an X-ray beam leads to data rates in the gigabits per second range, greatly enhancing our ability to move data around in space. What's more, X-rays can be more tightly collimated than radio waves or even light, which is also being experimented with for space communications. The tighter X-ray beam spreads out less, making transmission more power efficient and reception easier by virtue of the strong signal from relatively bright transmitters. Although the distance between the MXS and NICER in these XCOM experiments is only about 50 meters, they stand to position us for much better bandwidth for deep space communications. The MXS source itself has a lot of potential applications beyond XCOM too, from cheap, lightweight, low-power medical imaging on Earth and in space, to navigational beacons for spacecraft, and even advanced chemical analysis by X-ray spectroscopy.

## Full Earth Disc Images From GOES-17 Harvested By SDR

We've seen lots of hacks about capturing weather images from the satellites whizzing over our heads, but this nicely written how-to from [Eric Sorensen] takes a different approach. Rather than capturing images from polar satellites that pass overhead a few times a day, this article looks at capturing images from GOES-17, a geostationary satellite that looks down on the Pacific Ocean. The fact that it is a geostationary satellite means that it captures the same view all the time, so you can capture awesome time-lapse videos of the weather.

The fact that GOES-17 is geostationary also means that receiving it is a bit more involved. While polar satellites that orbit at an altitude of 800 km or so can be received with a random piece of wire, the 35,800 km altitude of geostationary satellites means that you need a better antenna.
That doesn’t have to be that expensive, though: [Eric] used a$100 parabolic antenna and a $100 Airspy Mini SDR receiver connected to an Ubuntu laptop running some open source software to receive and decode the 1.7GHz signal of the satellite. The other trick is to figure out where to point the dish. Because it is a geostationary satellite, this part has to be done carefully, as the parabolic antenna has only a small receiving angle. [Eric] designed a 3D-printed mount that fits onto a tripod for his antenna. Capturing satellite weather images is a fascinating thing to do, and this adds another level of interest, as the images show the full disc of the earth. Capture a series over time, and you can see storms spin around and across the ocean, and see just how complicated they are. If you are looking for a simpler way to get started in receiving weather satellite images, check out this guide to converting an old TV antenna and USB receiver to capture images from polar satellites. 0 ## The$50 Ham: Dummy Loads This is an exciting day for me — we finally get to build some ham radio gear! To me, building gear is the big attraction of amateur radio as a hobby. Sure, it’s cool to buy a radio, even a cheap one, and be able to hit a repeater that you think is unreachable. Or on the other end of the money spectrum, using a Yaesu or Kenwood HF rig with a linear amp and big beam antenna to work someone in Antartica must be pretty cool, too. But neither of those feats require much in the way of electronics knowledge or skill, and at the end of the day, that’s why I got into amateur radio in the first place — to learn more about electronics. To get my homebrewer’s feet wet, I chose perhaps the simplest of ham radio projects: dummy loads. Every ham eventually needs a dummy load, which is basically a circuit that looks like an antenna to a transmitter but dissipates the energy as heat instead of radiating it an appreciable distance. 
They allow operators to test gear and make adjustments while staying legal on emissions. Al Williams covered the basics of dummy loads a few years back in case you need a little more background.

We'll be building two dummy loads: a lower-power one specifically for my handy talkies (HTs), which will be the subject of this article, and a bigger, oil-filled "cantenna" load for use with higher power transmitters, which will follow. Neither of my designs is original, of course; borrowing circuits from other hams is expected, after all. But I did put my own twist on each, and you should do the same thing. These builds are covered in depth on my Hackaday.io page, but join me below for the gist on a good one: the L'il Dummy.

## L'il Dummy

As Al points out in the article linked above, a dummy load is just a resistive element that matches the characteristic impedance of the transmitter's antenna connection. In almost every case, that's going to be 50 ohms. The reason that the load needs to be as resistive as possible is that it needs to continue looking like a flat 50-ohm load no matter what frequency is applied to it. Any inductive or capacitive elements in the load will make it more reactive, changing the impedance as the input frequency changes. This could lead to RF power getting reflected back into the final amplifier transistors in the transmitter, possibly damaging them or destroying them altogether. Not what you're looking for.

That means our resistive elements need to be as non-inductive as possible. But they also need to be able to dissipate a lot of power. The HT dummy load, which I've dubbed L'il Dummy, needs to handle the 5 to perhaps 8 watts an HT can output. Trouble is, power resistors in that range are often wirewound, and a coil of wire will have too much inductance. We'll need to be clever in sourcing components.

The circuit for L'il Dummy is hardly worth a schematic – it's just an SMA jack with a 50-ohm resistor across the outer ground and the inner conductor.
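Before trusting that resistor, it's worth a quick sanity check on what it actually has to withstand. A back-of-the-envelope calculation (assuming a purely resistive 50-ohm load and a continuous carrier) goes like this:

```python
import math

def load_stress(power_w: float, r_ohms: float = 50.0) -> tuple[float, float]:
    """RMS voltage across and current through a resistive dummy load."""
    v_rms = math.sqrt(power_w * r_ohms)   # from P = V^2 / R
    i_rms = math.sqrt(power_w / r_ohms)   # from P = I^2 * R
    return v_rms, i_rms

v, i = load_stress(8.0)  # worst-case 8 W from a handheld
# 20 V RMS and 0.4 A: comfortably inside a 35 W thick-film part's ratings
```

The same arithmetic scales up quickly: a 100 W transmitter puts about 71 V RMS across the load, which is part of why the higher-power build needs a beefier, oil-cooled design.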
I chose to build the circuit on an RF Biscuit board. This is an open-source design that enables all kinds of handy little RF circuits — attenuators, filters, and as in this case, dummy loads. The resistive element I chose was a thick-film SMT device capable of dissipating 35 watts – way more than enough for this job. That and an edge-mount SMA jack should have been all I needed to make a working dummy load.

To my surprise, once I soldered the resistor to the RF Biscuit board, the dummy load was almost as good an antenna as the stock rubber ducky on my Baofeng HT. I was able to hit a local repeater through the dummy load without any issues. Clearly not a good design. To correct it, I put the whole thing into an enclosure made from 1″ copper pipe. Not cheap stuff, but not too bad, and I like the look of polished copper. Soldering the whole case together was a challenge that my big Weller soldering gun wasn't up to, and trying to get everything heated up enough with a propane torch without overdoing the heat was a fun time.

## Testing on a Budget

Now for the $50 question: does it work? I tested the resistance with a DMM and it comes out to just about 49 ohms, which is close enough in my book. But that's DC resistance; what about impedance? I don't have an antenna analyzer, so I trolled around and found a simple method for measuring impedance with only a function generator and an oscilloscope. My scope has a 20-MHz function generator built in, so I whipped up a quick test jig from a BNC jack and an SMA jack, connected in series through a leftover 1000-ohm resistor. Applying a sine wave into the dummy load, measuring peak-to-peak voltages on each side of the resistance, and doing a little math is all that's needed to characterize the impedance from 2.5 MHz to 20 MHz.
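Doing that "little math" for every row of measurements is a natural job for a few lines of code. A sketch mirroring the jig just described, with the series reference resistor measured at 998 ohms:

```python
def impedance_from_divider(v1_pp: float, v2_pp: float, r_ref: float = 998.0) -> float:
    """Load impedance from the peak-to-peak voltages on either side of a
    series reference resistor: Z = V2 / (V1 - V2) * Rref."""
    return v2_pp / (v1_pp - v2_pp) * r_ref

# The 20 MHz measurement: V1 = 1.49 V p-p, V2 = 0.062 V p-p
z_20mhz = impedance_from_divider(1.49, 0.062)   # about 43.3 ohms
```

Peak-to-peak readings work fine here because the formula only depends on the ratio of the two voltages, so the factor of two between peak-to-peak and amplitude cancels out.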
The math is simple:

$Z_x = \frac{V_2}{V_1 - V_2}R_{ref}$

with V1 being the voltage across the input, V2 being the voltage across the output, and Rref being the actual value of the series resistance, which I measured at 998 ohms. The results are pretty close to 50 ohms, and flat across the tested band:

| f (MHz) | V1 (V p-p) | V2 (V p-p) | Z (ohms) |
|---------|------------|------------|----------|
| 20.0    | 1.49       | 0.062      | 43.3     |
| 15.0    | 1.89       | 0.082      | 45.3     |
| 10.0    | 2.57       | 0.113      | 45.9     |
| 5.0     | 3.90       | 0.173      | 46.3     |
| 2.5     | 4.70       | 0.217      | 48.3     |

I wish I could measure it at VHF and UHF frequencies, but that will have to wait until I get a function generator that goes up to 400 MHz or so. I doubt very much that a $50 budget would cover that, though.

## Next Time

I had intended to cover both L'il Dummy and its bigger, somewhat smarter brother in one article, but I still have some testing to do on Big Dummy. I'll cover that next time, and after that we'll move on to measuring the output of a cheap Chinese HT and perhaps building a filter to clean it up.

## The $50 Ham: Checking Out the Local Repeater Scene

So far in this series, we've covered the absolute basics of getting on the air as a radio amateur – getting licensed, and getting a transceiver. Both have been very low-cost exercises, at least in terms of wallet impact. Passing the test is only a matter of spending the time to study and perhaps shelling out a nominal fee, and a handy-talkie transceiver for the 2-meter and 70-centimeter ham bands can be had for well under $50. If you're playing along at home, you haven't really invested much yet.

The total won't go up much this week, if at all. This time we're going to talk about what to actually do with your new privileges. The first step for most Technician-class amateur radio operators is checking out the local repeaters, most of which are set up exactly for the bands that Techs have access to. We'll cover what exactly repeaters are, what they're used for, and how to go about keying up for the first time to talk to your fellow hams.

## Could You Repeat That?
Time to face some cold, hard facts about amateur radio: that spiffy new Baofeng radio I recommended last time as a great starter radio is actually pretty lame. That fact has little to do with the mere $25 you spent on it, or $40 if you opted to upgrade the antenna. It’s a simple consequence of physics: a radio that transmits at 5 watts will only have so much range on the VHF band, and even less on UHF. Even if you buy a more powerful HT, or invest in a mobile or base-station rig running 50 or 100 watts, the plain fact is that direct radio-to-radio contacts on the same frequency, or simplex contacts, are difficult on VHF and UHF because those bands are really best for line of sight (LOS) use.

That’s not to say that hams don’t use their VHF and UHF rigs for simplex communications, of course. Many hams like to see just how far they can push their signals on these bands, building big Yagi antennas and finding mountain peaks to operate from. But for general use around town, most hams rely on repeaters to extend the area they can communicate over. Repeaters are simply transceivers set up to receive signals on one frequency and transmit them on another at the same time, with the help of a device called a duplexer. This simultaneous reception and transmission gives rise to the term duplex communications, the general term for operating on a repeater.

Repeaters usually transmit at much higher power than an HT or even a mobile rig can manage, and they usually have the advantage of being located on a mountaintop or some other elevated spot to push the radio horizon out as far as possible. This arrangement vastly increases the area that you can cover with your tiny HT. Depending on how the repeater is sited and what sort of antenna it has, you may be able to cover hundreds of square miles, as opposed to a radius of perhaps a few miles under ideal conditions, or a few blocks in the typical urban or suburban setting with lots of clutter from buildings and trees.
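To put rough numbers on that line-of-sight advantage, the usual 4/3-earth approximation puts the radio horizon at about 4.12√h kilometers for an antenna h meters up. A quick sketch (the function name and example heights are mine, purely for illustration):

```python
import math

def radio_horizon_km(height_m):
    """Approximate distance to the radio horizon, 4/3-earth-radius model."""
    return 4.12 * math.sqrt(height_m)

# An HT held at head height versus a repeater antenna on a 1500-meter peak
for h_m in (2, 1500):
    print(f"antenna at {h_m} m: horizon about {radio_horizon_km(h_m):.0f} km")
```

The workable path between two stations is roughly the sum of their two horizons, which is why a mountaintop machine can hear an HT that would be hopeless on flat ground.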
What’s more, some repeaters are linked to other repeaters through backhaul connections, often via the Internet but also sometimes through powerful LOS microwave links. In these systems it’s possible to use a puny HT to talk to another ham over hundreds or even thousands of miles. It’s actually pretty cool.

## Welcome to the Machine

So where are these repeaters, and how do you start working them? The first question is easy to answer: they’re everywhere. Look at any tall building, mountaintop antenna farm, or municipal water tank, and chances are pretty good there’s a ham repeater there. But being able to work them means you need to know exactly where they are, to be sure you’re in range of the repeater, or “the machine” as hams often refer to it, as well as the frequencies it operates on.

Luckily, there are online guides to help with that chore. RepeaterBook.com is usually the first place hams go to find machines in the area. There you can search by state, county, or city, or even via a map, and find what repeaters are available. They’ve even got a handy road search, so you can get a list of all the repeaters within range of a particular highway; that’s really handy for road-trip planning. Here’s what comes up for VHF and UHF repeaters when I search within 25 miles of my location, or QTH.

The first thing you’ll notice is that several machines at different sites have the same callsign. For example, K7ID runs a UHF repeater on Canfield Mountain and a VHF machine on Mica Peak. Both are LOS to me, and I can easily hit them with an HT. The frequency listed in the first column is the transmit frequency of the repeater. Your HT will need to be set to this frequency to hear what’s being said. Your radio will also need to be programmed for the correct tone, listed in the third column. That tone is an audio-frequency signal known by a number of different trade names, but known generically as the continuous tone-coded squelch system (CTCSS).
Your radio is capable of adding this sub-audible tone to your transmission; the repeater will only “open up” to transmissions that are correctly coded. Some repeaters have no tone coding; others have different tones for receive and transmit. When in doubt, try to find out who runs the machine – most, but not all, are run by a ham radio club – and see if you can look up instructions on the web.

The offset shown in the second column is perhaps the most important bit of information. Recall that repeaters transmit and receive on different frequencies, and that they’re listed by their transmit frequency. The offset tells you what the repeater’s input frequency is, which is the frequency your radio will need to be set to transmit on. For example, the machine I most often use is the K7ID machine on Mica Peak. It’s at 146.980 MHz and shows an offset of -0.6 MHz. That means my radio has to transmit on 146.380 MHz. VHF repeater offsets are usually 0.6 MHz, plus or minus depending on which part of the VHF band the machine is in. UHF repeaters usually have +5 MHz offsets. Note: I’m not going to cover programming your radio, because there are plenty of guides online that do a better job than I can. DuckDuckGo is your friend.

## Casting the Net

Once you’ve found your local repeaters and programmed your radio, it’s time to get on the air. My advice is to spend the first few days just listening to one or more repeaters. Activity levels vary – some machines are hopping all day, and some are barely used except during the typical commuting hours. When you hear a conversation, try to get a feeling for the culture of the repeater. Every group of hams has a culture, and as we discussed in the first installment of the series, it’s not always a healthy culture. My local repeater belongs to the Kootenai Amateur Radio Society, as friendly and as inviting a group of people as I’ve ever heard on the air.
After listening to them chat for a few weeks, I was more than ready to reach out to them. But first, a word about kerchunking. If you want to know if you’re in range of a repeater, you can test it out. Most repeaters have a “squelch tail” that keeps the repeater on the air for several seconds before going back to sleep, and this can be used to check if you’re in range. Some repeaters even identify themselves, either with a synthesized voice or Morse code when they “wake up”. So you might want to ping the repeater. Kerchunking, or transmitting into a repeater without identifying yourself, is one of those bad habits that everyone seems to have. But FCC Part 97 rules, which cover the amateur radio service, require operators to identify their transmissions with their call sign. So don’t kerchunk; a simple identification like “This is KC1DJT testing and clear” will suffice. Nobody is likely to take that as an invitation to chat, but they might give you a reception report.

Once you’re feeling confident enough, try making a contact. I highly recommend checking out the local traffic nets. Hams pride themselves on having the skills and equipment to communicate in an emergency, but that means little without practice to keep everything sharp. Nets allow hams to practice message-passing skills and to test their gear on a regular basis. My local group has a net check-in every night that follows a standard script and usually attracts about 30 check-ins. Here’s a sample from a recent check-in.

I’ve become a regular on this net and a few others, mainly because I want to practice, but also to get over my mic shyness. There’s another reason too – I want people to recognize my voice and callsign. If there ever is an emergency in my area, I feel like it’ll be easier to pitch in, or to get help if I need it, if people hear a familiar voice.
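As an aside, the frequency bookkeeping from the repeater listings (listen on the repeater’s output, transmit on the output plus the offset) is simple enough to script. A minimal Python sketch, using the Mica Peak example from earlier (the function name is mine):

```python
def repeater_tx_freq(output_mhz, offset_mhz):
    """Your transmit frequency is the repeater's input: output plus offset."""
    return output_mhz + offset_mhz

# K7ID on Mica Peak: 146.980 MHz output with a -0.6 MHz offset
tx = repeater_tx_freq(146.980, -0.6)
print(f"listen on 146.980 MHz, transmit on {tx:.3f} MHz")  # transmit on 146.380 MHz
```

The same arithmetic covers UHF machines; just swap in the listed +5 MHz offset.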
## Next Time Over the next few installments, we’re finally going to get to what I think ham radio is all about at its core: homebrewing. We’ll be building a few simple projects to make that cheap HT a little better, and also build a few tools to help run the shack a little more efficiently. 0 ## A New Digital Mode For Radio Amateurs There used to be a time when amateur radio was a fairly static pursuit. There was a lot of fascination to be had with building radios, but what you did with them remained constant year on year. Morse code was sent by hand with a key, voice was on FM or SSB with a few old-timers using AM, and you’d hear the warbling tones of RTTY traffic generated by mechanical teletypes. By contrast the radio amateur of today lives in a fast-paced world of ever-evolving digital modes, in which much of the excitement comes in pushing the boundaries of what is possible when a radio is connected to a computer. A new contender in one part of the hobby has come our way from [Guillaume, F4HDK], in the form of his NPR, or New Packet Radio mode. NPR is intended to bring high bandwidth IP networking to radio amateurs in the 70 cm band, and it does this rather cleverly with a modem that contains a single-chip FSK transceiver intended for use in licence-free ISM band applications. There is an Ethernet module and an Mbed microcontroller board on a custom PCB, which when assembled produces a few hundred milliwatts of RF that can be fed to an off-the-shelf DMR power amplifier. Each network is configured around a master node intended to use an omnidirectional antenna, to which individual nodes connect. Time-division multiplexing is enforced by the master so there should be no collisions, and this coupled with the relatively wide radio bandwidth of the ISM transceiver gives the system a high usable data bandwidth. Whether or not the mode is taken up and becomes a success depends upon the will of individual radio amateurs. 
But it does hold the interesting feature of relying upon relatively inexpensive parts, so the barrier to entry is lower than it might be otherwise. If you are wondering where you might have seen [F4HDK] before, we’ve previously brought you his FPGA computer.

There are a few options if you want to network computers on amateur radio. There are WiFi hacks of sorts, and of course there’s always packet radio. New Packet Radio, a project from [f4hdk] that’s now on hackaday.io, is unlike anything we’ve seen before. It’s a modem that’s ready to go, uses standard 433 MHz ISM-band chips, should only cost $80 to build, and it supports bidirectional IP traffic.

The introductory documentation for this project (PDF) lays out the use case, protocol, and hardware for NPR. It’s based on chips designed for the 433 MHz ISM band, specifically the SI4463 ISM band radio from Silicon Labs. Off-the-shelf amplifiers are used, and the rest of the modem consists of an Mbed Nucleo and a Wiznet W5500 Ethernet module. There is one single modem type for masters and clients. The network is designed so that a master serves as a bridge to Hamnet, a high-speed mesh network that can connect to the wider Internet. This master connects to up to seven clients simultaneously. Alternatively, there is a point-to-point configuration that allows two clients to connect to each other at about 200 kbps. Being a 434 MHz device, this just isn’t going to fly in the US, but the relevant chip will work with the 915 MHz ISM band. This is a great solution for IP over radio, and like a number of popular amateur radio projects, it started with the hardware hackers first.

## Es’hail-2: Hams Get Their First Geosynchronous Repeater

In the radio business, getting the high ground is key to covering as much territory as possible from as few installations as possible.
Anything that has a high profile, from a big municipal water tank to a roadside billboard to a remote hilltop, will likely be bristling with antennas, and different services compete for the best spots to locate their antennas. Amateur radio clubs will be there too, looking for space to locate their repeaters, which allow hams to use low-power mobile and handheld radios to make contact over a vastly greater range than they could otherwise. Now some hams have claimed the highest of high ground for their repeater: space. For the first time, an amateur radio repeater has gone to space aboard a geosynchronous satellite, giving hams the ability to link up over a third of the globe. It’s a huge development, and while it takes some effort to use this new space-based radio, it’s a game changer in the amateur radio community.

## Friends in High Places

The new satellite, Es’hail-2, was built for Es’hailSat, a Qatari telecommunications concern. As satellites go, it’s a pretty standard machine, built primarily to provide direct digital TV service to the Middle East and Africa. But interestingly, it was designed from the start to carry an amateur radio payload. The request for proposals (RFP) that Es’hailSat sent to potential vendors in early 2014 specifically called for the inclusion of an amateur repeater, to be developed jointly by AMSAT, the Radio Amateur Satellite Corporation. The repeater aboard Es’hail-2 was developed as a joint effort between the Qatar Amateur Radio Society (QARS), Es’hailSat, and AMSAT-DL, the AMSAT group in Germany. The willingness of Es’hailSat to include an amateur radio payload on a commercial bird might be partially explained by the fact that the QARS chairman is His Excellency Abdullah bin Hamad Al Attiyah (A71AU), former Deputy Prime Minister of Qatar. The repeater was engineered with two main services in mind.
The first is a narrowband transponder intended for phone (voice) contacts, continuous wave (CW) for Morse contacts, and some of the narrow-bandwidth digital modes, like PSK-31. The other transponder is for wideband use, intended to test Digital Amateur Television (DATV). The wideband transponder can carry two simultaneous HD signals and a beacon broadcasting video content from QARS. Both transponders uplink on the portion of the 2.4-GHz band reserved for hams, while downlinking on the 10.4-GHz band.

Es’hail-2 was launched aboard a SpaceX Falcon 9 from Cape Canaveral on November 15, 2018. The satellite was boosted to a geosynchronous orbit in the crowded slot located at 26.5° East longitude, parking it directly above the Democratic Republic of Congo. After tests were completed, a ceremony inaugurating the satellite as “Qatar OSCAR-100”, or QO-100, was held on February 14, 2019, making it the 100th OSCAR satellite launched by amateurs.

## Listening In

Sadly for hams in the Americas and most of eastern Asia, QO-100 is out of range. But for hams anywhere from coastal Brazil to Thailand, the satellite is visible 24 hours a day. The equipment to use it can be a bit daunting, if the experience of this amateur radio club in Norway is any indication. They used a 3-meter dish for the 2.4-GHz uplink, along with a string of homebrew hardware and a lot of determination to pull off their one contact so far, and this from a team used to bouncing signals off the Moon.

Receiving signals from QO-100 is considerably easier. A dish in the 60-cm to 1-meter range will suffice, depending on location, with a decent LNB downconverter. Pretty much any SDR will do for a receiver. An alternative to assembling the hardware yourself — and the only way to get in on the fun for the two-thirds of the planet not covered by the satellite — would be to tune into one of the WebSDR ground stations that have been set up.
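If you do roll your own receive hardware, the LNB arithmetic is worth a quick sketch. A standard Ku-band TV LNB uses a 9750 MHz local oscillator, so the 10.4-GHz downlink lands at an intermediate frequency that nearly any SDR can tune. (A Python sketch; the roughly 10489.5 MHz narrowband-transponder figure is the commonly published value, not taken from this article.)

```python
def lnb_if_mhz(downlink_mhz, lo_mhz=9750.0):
    """IF out of the LNB is the downlink frequency minus the LNB's local oscillator."""
    return downlink_mhz - lo_mhz

# Narrowband transponder downlink, roughly 10489.5 MHz, through a stock 9750 MHz LO
print(f"tune the SDR to about {lnb_if_mhz(10489.5):.1f} MHz")  # about 739.5 MHz
```

That ~739 MHz IF is comfortably inside the range of even the cheapest RTL-SDR dongles, which is why the receive side of QO-100 is the easy half.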
The British Amateur Television Club and AMSAT-UK, located at the Goonhilly Earth Station, have set up an SDR for the narrowband transponder that you can control over the web. I used it to listen in on a number of contacts between hams the other night.

It’s hard to overstate the importance of QO-100. It’s the first ham repeater in geosynchronous orbit, as well as the first DATV transponder in space. It’s quite an achievement, and the skills it will allow hams to develop as they work this bird will inform the design of the next generation of ham satellites. Hats off to everyone who was involved in getting QO-100 flying!

## The $50 Ham: Getting Your Ticket Punched

Today we start a new series dedicated to amateur radio for cheapskates. Ham radio has a reputation as a “rich old guy” hobby, a reputation that it probably deserves to some degree. Pick up a glossy catalog from DX Engineering or cruise their website, and you’ll see that getting into the latest and greatest gear is not an exercise for the financially challenged. And thus the image persists of the recent retiree, long past the expense and time required to raise a family and suddenly with time on his hands, gleefully adding just one more piece of expensive gear to an already well-appointed ham shack to “chew the rag” with his “OMs”.

As I pointed out a few years back in “My Beef With Ham Radio”, I’m an inactive ham. My main reason for not practicing is that I’m not a fan of talking to strangers, but there’s a financial component to my reticence as well – it’s hard to spend a lot of money on gear when you don’t have a lot to talk about. I suspect that there are a lot of would-be hams out there who are turned off from the hobby by its perceived expense, and perhaps a few like me who are on the mic-shy side. This series is aimed at dispelling the myth that one needs buckets of money to be a ham, and that jawboning is the only thing one does on the air.
Each installment will feature a project that will move you further along your ham journey and that can be completed for no more than $50 or so. Wherever possible, I’ll be building the project or testing the activity myself so I can pursue my own goal of actually using my license for a change. (A shout-out to Robert for suggesting this series, and for graciously allowing me to run with his idea.)

## Getting Your Ticket

The licensing of amateur radio stations in the United States goes all the way back to 1912. (I’m concentrating on US laws and customs regarding the amateur radio service simply because that’s where I live; please feel free to chime in in the comments about differences in other countries.) Anyone who wants to operate on the bands reserved for the amateur radio service has to be licensed by the Federal Communications Commission. Unlicensed individuals are free – and encouraged – to listen in on the bands, but if you don’t have a license, you can’t transmit. And trust me, the local hams, with know-how, equipment, and all the time in the world, will find you, resulting in an unpleasant encounter with the FCC.

There’s really no reason not to get a license anyway. This will be among the cheapest parts of a ham’s journey, and perhaps even free. To earn a license you’ll need to pass a written exam, but before taking the plunge you’ll need to know a little about the classes of amateur radio licenses, and the privileges they bestow. The current entry-level license class in the US is called Technician class; the old Novice class was eliminated in 2000, along with the Morse code requirement for all classes. Technicians have privileges to operate mainly on the upper frequencies, primarily on the 2-meter (144 MHz) and 70-cm (420 MHz) bands in phone mode, which means voice transmissions.
Technicians also have access to small slices of the 10-meter band using data modes, and small sections of 15-, 40-, and 80-meters if they learn Morse or use a computer to send and receive it. This limits the Technician to mainly local communications, but there’s plenty to do and loads to learn on these bands.

## Practice, Practice, Practice

Even with all the limitations, a Technician license still offers access to a lot of spectrum and serves as the gateway to the next two classes, General and Extra. Everyone has to start with a Technician license, which requires passing a 35-question multiple-choice examination. The exam is standardized with questions selected from a fixed pool, with topics ranging from knowing FCC Part 97 rules to basic electronics and RF theory. The exam is pretty easy, especially for anyone with a background in electronics. In fact, many complete newbies come to exam sessions after having run through enough online practice tests to see every possible pool question and pass the exam without understanding a thing about radios or electronics. There are lively debates over whether that’s a good thing or not – personally, I’m not a fan of it – but it is what it is; the Technician exam is dead easy.

Your investment in a Technician license will be minimal, and mostly consists of the time it takes to study. Online practice tests – I recommend the tests on QRZ.com – are free to take as many times as you need to. Some ham clubs offer local classes aimed at helping you to prepare, and those generally charge only a nominal fee. There are even one-day intensive “ham cram” sessions where you’re guided through all the material and take the exam at the end of the day. Exam sessions are run by Volunteer Examiners (VEs), hams with special training in administering and grading exams who work under the auspices of Volunteer Exam Coordinators (VECs). They too charge only a nominal fee – I think I paid $15 – and may even waive the fee under certain circumstances.
There are also occasional special events like the annual Field Day, where hams set up tents and booths in public places as an outreach to the public; exams are often administered for free at these events. Honestly, getting your Technician license is about as low impact as the amateur radio hobby gets. Once you can consistently pass practice tests online, the actual exam is a breeze. Exams are graded on the spot so you’ll know instantly how you did, and you can even take the next exam for no extra charge if you’re ready. Give it a shot even if you haven’t studied – I nearly passed my Extra exam going in cold after I aced my General.

## Next Time

In the next installment I’ll start discussing what the newly minted Technician can do with his or her license. It may seem like a pipe dream to get on the air for less than $50, but it’s surprising what’s available these days, and you’ll find that fifty dollars can go a long way toward making your first contact.
Physics, 2005, DOI: 10.1103/PhysRevLett.95.097002. Abstract: We compare calculations based on the Dynamical Mean-Field Theory of the Hubbard model with the infrared spectral weight $W(\Omega,T)$ of La$_{2-x}$Sr$_x$CuO$_4$ and other cuprates. Without using fitting parameters we show that most of the anomalies found in $W(\Omega,T)$ with respect to normal metals, including the existence of two different energy scales for the doping- and the $T$-dependence of $W(\Omega,T)$, can be ascribed to strong correlation effects.

Physics, 2005, DOI: 10.1007/BF02679519. Abstract: We interpret the optical spectra of $\alpha$-(BEDT-TTF)$_2M$Hg(SCN)$_4$ (M=NH$_4$ and Rb) in terms of a 1/4-filled metallic system close to charge ordering and show that in the conductivity spectra of these compounds a fraction of the spectral weight is shifted from the Drude peak to higher frequencies due to strong electronic correlations. Analyzing the temperature dependence of the electronic parameters, we distinguish between different aspects of the influence of electronic correlations on optical properties. We conclude that the correlation effects are slightly weaker in the NH$_4$ compound compared to the Rb one.

Physics, 2014, DOI: 10.1080/00018732.2014.940227. Abstract: We review the intermediate coupling model for treating electronic correlations in the cuprates. Spectral signatures of the intermediate coupling scenario are identified and used to adduce that the cuprates fall in the intermediate rather than the weak or the strong coupling limits.
A robust ‘beyond LDA’ framework for obtaining wide-ranging properties of the cuprates via a GW-approximation-based self-consistent self-energy correction for incorporating correlation effects is delineated. In this way, doping- and temperature-dependent spectra, from the undoped insulator to the overdoped metal, in the normal as well as the superconducting state, with features of both weak and strong coupling, can be modeled in a material-specific manner with very few parameters. Efficacy of the model is shown by considering available spectroscopic data on electron- and hole-doped cuprates from angle-resolved photoemission (ARPES), scanning tunneling microscopy/spectroscopy (STM/STS), neutron scattering, inelastic light scattering, optical and other experiments. Generalizations to treat systems with multiple correlated bands such as the heavy-fermions, the ruthenates, and the actinides are discussed.

Advances in Condensed Matter Physics, 2010, DOI: 10.1155/2010/920860. Abstract: The Hubbard-Holstein model is a simple model including both electron-phonon interaction and electron-electron correlations. We review a body of theoretical work investigating the effects of strong correlations on the electron-phonon interaction. We focus on the regime relevant to high-$T_c$ superconductors, in which the electron correlations are dominant. We find that electron-phonon interaction can still have important signatures, even if many anomalies appear, and the overall effect is far from conventional. In particular, in the paramagnetic phase the effects of phonons are much reduced in the low-energy properties, while the high-energy physics can still be affected by phonons. Moreover, the electron-phonon interaction can give rise to important effects, like phase separation and charge ordering, and it assumes a predominance of forward scattering even if the bare interaction is assumed to be local (momentum independent).
Antiferromagnetic correlations reduce the screening effects due to electron-electron interactions and revive the electron-phonon effects.

1. Introduction. A wealth of materials, including the most challenging systems (cuprates, manganites, fullerenes, etc.), present clear signatures of both electron-electron (e-e) and electron-phonon (e-ph) interactions, leading to a competition or interplay which can give rise to different physics according to the value of relevant control parameters and of the chemical and electronic properties of the materials. The results presented in this paper are mainly motivated by high-temperature superconductors, with the copper-oxide compounds (cuprates) in a prominent role, and with attention also to the alkali-doped fullerides. In the case of the cuprates, which are arguably the most accurately studied materials in the last twenty-five years, the signatures of electron-phonon interactions are nowadays clear, even though the overall scenario is far from ordinary [1–3]: Electron-phonon fingerprints are evident in some properties, while they are weak or absent in other observables. Specifically, clear polaronic features are observed in optical conductivity [4–6] as well as in angle-resolved photoemission experiments (ARPES) [7] in very lightly doped compounds. A substantial e-ph coupling can also be inferred by the Fano line shapes of phonons in Raman spectra and by the rather large frequency shift and linewidth broadening of some phonons at $T_c$. Phonons are also good candidates to account for the famous “kink” in the electronic dispersions observed in ARPES experiments [8, 9]. Tunneling experiments are often advocated…

Physics, 2010, DOI: 10.1038/nphys1706. Abstract: High-temperature superconductivity was achieved by introducing holes in a parent compound consisting of copper oxide layers separated by spacer layers.
It is possible to dope some of the parent compounds with electrons, and their physical properties bear some similarities to, but also differ significantly from, those of the hole-doped counterparts. Here, we use a recently developed first-principles method to study the electron-doped cuprates and elucidate the deep physical reasons why their behavior is so different from that of the hole-doped materials. We find that electron-doped compounds are Slater insulators, i.e., materials where the insulating behavior is the result of the presence of magnetic long-range order. This is in sharp contrast with the hole-doped materials, where the parent compound is a Mott charge-transfer insulator, namely a material which is insulating due to the strong electronic correlations but not due to the magnetic order.

Physics, 2007, DOI: 10.1063/1.2820379. Abstract: The correlated behavior of electrons determines the structure and optical properties of molecules, semiconductors and other systems. Valuable information on these correlations is provided by measuring the response to femtosecond laser pulses, which probe the very short time period during which the excited particles remain correlated. The interpretation of four-wave-mixing techniques, commonly used to study the energy levels and dynamics of many-electron systems, is complicated by many competing effects and overlapping resonances. Here we propose a coherent optical technique, specifically designed to provide a background-free probe for electronic correlations in many-electron systems. The proposed signal pulse is generated only when the electrons are correlated, which gives rise to an extraordinary sensitivity. The peak pattern in two-dimensional plots, obtained by displaying the signal vs. two frequencies conjugated to two pulse delays, provides a direct visualization and specific signatures of the many-electron wavefunctions.
Physics, 2004, DOI: 10.1103/PhysRevB.72.224517. Abstract: The doping- and temperature-dependent conductivity of electron-doped cuprates is analysed. The variation of kinetic energy with doping is shown to imply that the materials are approximately as strongly correlated as the hole-doped materials. The optical spectrum is fit to a quasiparticle scattering model; while the model fits the optical data well, gross inconsistencies with photoemission data are found, implying the presence of a large, strongly doping-dependent Landau parameter.

Physics, 2008, DOI: 10.1016/j.jpcs.2008.03.038. Abstract: We calculate the optical and Raman response within a phenomenological model of fermion quasiparticles coupled to nearly critical collective modes. We find that, whereas critical scaling properties might be masked in optical spectra due to charge conservation, distinct critical signatures of charge and spin fluctuations can be detected in Raman spectra exploiting specific symmetry properties. We compare our results with recent experiments on the cuprates.

Physics, 2003, DOI: 10.1103/PhysRevB.68.195117. Abstract: We establish the quasi-one-dimensional Li purple bronze as a photoemission paradigm of Luttinger liquid behavior. We also show that generalized signatures of electron fractionalization are present in the angle-resolved photoemission spectra for quasi-two-dimensional purple bronzes and certain cuprates. An important component of our analysis for the quasi-two-dimensional systems is the proposal of a ‘melted holon’ scenario for the k-independent background that accompanies but does not interact with the peaks that disperse to define the Fermi surface.

Physics, 2008, DOI: 10.1209/0295-5075/96/27004. Abstract: We demonstrate that most features ascribed to strong correlation effects in various spectroscopies of the cuprates are captured by a calculation of the self-energy incorporating effects of spin and charge fluctuations.
The self-energy is calculated over the full doping range of electron-doped cuprates, from half filling to the overdoped system. The spectral function reveals four subbands: two widely split incoherent bands representing the remnant of the split Hubbard bands, and two additional coherent, spin- and charge-dressed in-gap bands split by a spin-density wave, which collapses in the overdoped regime. The incoherent features persist to high doping, producing a remnant Mott gap in the optical spectra, while transitions between the in-gap states lead to pseudogap features in the mid-infrared.

Copyright © 2008-2017 Open Access Library. All rights reserved.
http://mathhelpforum.com/number-theory/49023-solved-proof-involving-prime-numbers.html
# Math Help - [SOLVED] Proof involving prime numbers 1. ## [SOLVED] Proof involving prime numbers Prove that if p is a prime number and p does not equal 3, then 3 divides $p^2 + 2$. I'm given a hint that says, "When p is divided by 3, the remainder is either 0, 1, or 2. That is, for some integer k, p=3k or p=3k+1 or p=3k+2." I understand the hint and the initial statement, I just don't know where to start. 2. First off, $p \neq 3k$ since that would mean p is divisible by something other than 1 and itself, and we already said $p \neq 3$. So just go through the other two cases: If $p = 3k + 1$ then $p^2 + 2 = (3k+1)^2 + 2 = 9k^2 + 6k + 3 = 3(3k^2 + 2k + 1) \ \Rightarrow \ 3 \mid (p^2 + 2)$ If $p = 3k + 2$ then ........... Finish it off. 3. That's where I got to, but I didn't think that was proof of the statement. Why does $3(3k^2 + 2k + 1) \ \Rightarrow \ 3 \mid (p^2 + 2)$? 4. By definition, $3 \mid (p^2 + 2)$ iff there exists an integer $c$ such that $3c = p^2 + 2$. From our work, we can see that $c = 3k^2 + 2k + 1$. Or in layman's terms, we have 3 times something equal to $p^2 + 2$. So obviously, 3 and that something are factors of it, i.e. 3 and that something divide $p^2 + 2$. 5. Gotcha. Thanks for the help. I have one more question that is somewhat on topic. We are proving things in class but only in the sense that we have to take the work someone else has done and assume it's true. At some point in a proof do you just have to use common sense? For example, "Let k be an integer. Then $2k$ is even and $2k+1$ is odd." What does the proof for either of those look like? 6. Originally Posted by Ryaη Gotcha. Thanks for the help. I have one more question that is somewhat on topic. We are proving things in class but only in the sense that we have to take the work someone else has done and assume it's true. At some point in a proof do you just have to use common sense? For example, "Let k be an integer. Then $2k$ is even and $2k+1$ is odd."
What does the proof for either of those look like? It is a definition: even numbers are those that are divisible by 2, and odd numbers are those that are not (meaning, when we divide them by 2, we have a remainder of 1), and the claims follow immediately from the definitions. 7. Originally Posted by Ryaη Gotcha. Thanks for the help. I have one more question that is somewhat on topic. We are proving things in class but only in the sense that we have to take the work someone else has done and assume it's true. At some point in a proof do you just have to use common sense? For example, "Let k be an integer. Then $2k$ is even and $2k+1$ is odd." What does the proof for either of those look like? As Jhevon has already pointed out, those facts follow from the definition of even and odd. Sometimes you will have to use axioms (e.g. two parallel lines never intersect), but that's about as close to "common sense" as a proof can get.
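Alongside the algebraic proof in the thread, a quick brute-force check can build confidence in the claim. This is a sketch in Python (added here, not from the thread; the helper name `is_prime` is made up), verifying that 3 divides p² + 2 for every prime p ≠ 3 below a bound:

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Every prime p other than 3 should satisfy 3 | p^2 + 2.
for p in range(2, 10_000):
    if is_prime(p) and p != 3:
        assert (p * p + 2) % 3 == 0
print("checked all primes below 10000")
```

Note that p = 3 really is the exception: 3² + 2 = 11 is not divisible by 3, which is exactly why the hypothesis p ≠ 3 is needed.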
https://no.overleaf.com/articles/a-look-at-hilbert-spaces/crmgkcbksqbn
# A Look at Hilbert Spaces Author: Ryan T Whyte. License: Creative Commons CC BY 4.0. Abstract: A look at Hilbert spaces, and the question "Are there natural, separable Hilbert spaces on the Euclidean ball for which all composition operators are bounded?"
http://nrich.maths.org/public/leg.php?code=5039&cl=2&cldcmpid=6948
# Search by Topic #### Resources tagged with Interactivities similar to Which Numbers? (1): Filter by: Content type: Stage: Challenge level: ### There are 213 results Broad Topics > Information and Communications Technology > Interactivities ### Number Differences ##### Stage: 2 Challenge Level: Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this? ### First Connect Three ##### Stage: 2 and 3 Challenge Level: The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for? ##### Stage: 2 Challenge Level: If you have only four weights, where could you place them in order to balance this equaliser? ### Got It! Article ##### Stage: 2 and 3 This article gives you a few ideas for understanding the Got It! game and how you might find a winning strategy. ### Stars ##### Stage: 3 Challenge Level: Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are hit? ### Cuisenaire Environment ##### Stage: 1 and 2 Challenge Level: An environment which simulates working with Cuisenaire rods. ### Magic Potting Sheds ##### Stage: 3 Challenge Level: Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it? ### A Dotty Problem ##### Stage: 2 Challenge Level: Starting with the number 180, take away 9 again and again, joining up the dots as you go. Watch out - don't join all the dots! ### Colour Wheels ##### Stage: 2 Challenge Level: Imagine a wheel with different markings painted on it at regular intervals. Can you predict the colour of the 18th mark? The 100th mark? 
### Part the Piles ##### Stage: 2 Challenge Level: Try to stop your opponent from being able to split the piles of counters into unequal numbers. Can you find a strategy? ### Countdown ##### Stage: 2 and 3 Challenge Level: Here is a chance to play a version of the classic Countdown Game. ### More Carroll Diagrams ##### Stage: 2 Challenge Level: How have the numbers been placed in this Carroll diagram? Which labels would you put on each row and column? ### Seven Flipped ##### Stage: 2 Challenge Level: Investigate the smallest number of moves it takes to turn these mats upside-down if you can only turn exactly three at a time. ### Multiplication Square Jigsaw ##### Stage: 2 Challenge Level: Can you complete this jigsaw of the multiplication square? ### Multiples Grid ##### Stage: 2 Challenge Level: What do the numbers shaded in blue on this hundred square have in common? What do you notice about the pink numbers? How about the shaded numbers in the other squares? ### See the Light ##### Stage: 2 and 3 Challenge Level: Work out how to light up the single light. What's the rule? ### One Million to Seven ##### Stage: 2 Challenge Level: Start by putting one million (1 000 000) into the display of your calculator. Can you reduce this to 7 using just the 7 key and add, subtract, multiply, divide and equals as many times as you like? ### Cycling Squares ##### Stage: 2 Challenge Level: Can you make a cycle of pairs that add to make a square number using all the numbers in the box below, once and once only? ### Counters ##### Stage: 2 Challenge Level: Hover your mouse over the counters to see which ones will be removed. Click to remove them. The winner is the last one to remove a counter. How can you make sure you win? ### Red Even ##### Stage: 2 Challenge Level: You have 4 red and 5 blue counters. How many ways can they be placed on a 3 by 3 grid so that all the rows, columns and diagonals have an even number of red counters?
### Colour in the Square ##### Stage: 2 Challenge Level: Can you put the 25 coloured tiles into the 5 x 5 square so that no column, no row and no diagonal line have tiles of the same colour in them? ### Factor Lines ##### Stage: 2 Challenge Level: Arrange the four number cards on the grid, according to the rules, to make a diagonal, vertical or horizontal line. ### Power Crazy ##### Stage: 3 Challenge Level: What can you say about the values of n that make $7^n + 3^n$ a multiple of 10? Are there other pairs of integers between 1 and 10 which have similar properties? ### GOT IT ##### Stage: 2 and 3 Challenge Level: A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target. ### Light the Lights Again ##### Stage: 2 Challenge Level: Each light in this interactivity turns on according to a rule. What happens when you enter different numbers? Can you find the smallest number that lights up all four lights? ### Spot Thirteen ##### Stage: 2 Challenge Level: Choose 13 spots on the grid. Can you work out the scoring system? What is the maximum possible score? ### Arrangements ##### Stage: 2 Challenge Level: Is it possible to place 2 counters on the 3 by 3 grid so that there is an even number of counters in every row and every column? How about if you have 3 counters or 4 counters or....? ### Beat the Drum Beat! ##### Stage: 2 Challenge Level: Use the interactivity to create some steady rhythms. How could you create a rhythm which sounds the same forwards as it does backwards? ### Locate the Lion's Lair ##### Stage: 2 Challenge Level: Use the sightings of the lion to guess the location of its lair. ### Which Symbol? ##### Stage: 2 Challenge Level: Choose a symbol to put into the number sentence. ### Light the Lights ##### Stage: 2 Challenge Level: Investigate which numbers make these lights come on. 
What is the smallest number you can find that lights up all the lights? ### A Square of Numbers ##### Stage: 2 Challenge Level: Can you put the numbers 1 to 8 into the circles so that the four calculations are correct? ### Venn Diagrams ##### Stage: 1 and 2 Challenge Level: Use the interactivities to complete these Venn diagrams. ### Difference ##### Stage: 2 Challenge Level: Place the numbers 1 to 10 in the circles so that each number is the difference between the two numbers just below it. ### Code Breaker ##### Stage: 2 Challenge Level: This problem is based on a code using two different prime numbers less than 10. You'll need to multiply them together and shift the alphabet forwards by the result. Can you decipher the code? ### Domino Numbers ##### Stage: 2 Challenge Level: Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be? ### Ratio Pairs 2 ##### Stage: 2 Challenge Level: A card pairing game involving knowledge of simple ratio. ### Coordinate Tan ##### Stage: 2 Challenge Level: What are the coordinates of the coloured dots that mark out the tangram? Try changing the position of the origin. What happens to the coordinates now? ### Cogs ##### Stage: 3 Challenge Level: A and B are two interlocking cogwheels having p teeth and q teeth respectively. One tooth on B is painted red. Find the values of p and q for which the red tooth on B contacts every gap on the. . . . ### Teddy Town ##### Stage: 1 and 2 Challenge Level: There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules? ### Triangles All Around ##### Stage: 2 Challenge Level: Can you find all the different triangles on these peg boards, and find their angles? ### Board Block Challenge ##### Stage: 2 Challenge Level: Choose the size of your pegboard and the shapes you can make. Can you work out the strategies needed to block your opponent? 
### More Transformations on a Pegboard ##### Stage: 2 Challenge Level: Use the interactivity to find all the different right-angled triangles you can make by just moving one corner of the starting triangle. ##### Stage: 2 Challenge Level: How can the same pieces of the tangram make this bowl before and after it was chipped? Use the interactivity to try and work out what is going on! ### Rod Ratios ##### Stage: 2 Challenge Level: Use the Cuisenaire rods environment to investigate ratio. Can you find pairs of rods in the ratio 3:2? How about 9:6? ### Nine-pin Triangles ##### Stage: 2 Challenge Level: How many different triangles can you make on a circular pegboard that has nine pegs? ### Sorting Symmetries ##### Stage: 2 Challenge Level: Find out how we can describe the "symmetries" of this triangle and investigate some combinations of rotating and flipping it. ### Overlapping Circles ##### Stage: 2 Challenge Level: What shaped overlaps can you make with two circles which are the same size? What shapes are 'left over'? What shapes can you make when the circles are different sizes? ### Train ##### Stage: 2 Challenge Level: A train building game for 2 players. ### Fault-free Rectangles ##### Stage: 2 Challenge Level: Find out what a "fault-free" rectangle is and try to make some of your own.
https://robotics.stackexchange.com/questions/479/particle-filters-how-to-do-resampling
# Particle filters: How to do resampling? I understand the basic principle of a particle filter and tried to implement one. However, I got hung up on the resampling part. Theoretically speaking, it is quite simple: From the old (and weighted) set of particles, draw a new set of particles with replacement. While doing so, favor those particles that have high weights. Particles with high weights get drawn more often and particles with low weights less often — perhaps only once, or not at all. After resampling, all particles get assigned the same weight. My first idea on how to implement this was essentially this: 1. Normalize the weights 2. Multiply each weight by the total number of particles 3. Round those scaled weights to the nearest integer (e.g. with int() in Python) Now I should know how often to draw each particle, but due to the roundoff errors, I end up having fewer particles than before the resampling step. The Question: How do I "fill up" the missing particles in order to get to the same number of particles as before the resampling step? Or, in case I am completely off track here, how do I resample correctly? ## 5 Answers The issue you're running into is often referred to as sample impoverishment. We can see why your approach suffers from it with a fairly simple example. Let's say you have 3 particles and their normalized weights are 0.1, 0.1, 0.8. Then multiplying each weight by 3 yields 0.3, 0.3, and 2.4. Then rounding yields 0, 0, 2. This means you would not pick the first two particles and the last one would be picked twice. Now you are down to two particles. I suspect this is what you have been seeing when you say "due to the roundoff errors, I end up having fewer particles." An alternative selection method would be as follows. 1. Normalize weights. 2. Calculate an array of the cumulative sum of the weights. 3. Randomly generate a number & determine which range in that cumulative weight array the number belongs to. 4.
The index of that range would correspond to the particle that should be created. 5. Repeat until you have the desired number of samples. So, using the example above we would start with the normalized weights. We would then calculate the array [0.1, 0.2, 1]. From there we calculate 3 random numbers, say 0.15, 0.38, and 0.54. This would have us pick the second particle once and the third particle twice. The point is that it gives the smaller particles a chance to propagate. One thing to note is that while this method will deal with impoverishment, it can lead to suboptimal solutions. For instance, it may be that none of the particles really match your given location well (assuming you're using this for localization). The weights only tell you which particles match best, not the quality of the match. As such, when you take additional readings and repeat the process, you may find that all your particles group at a single location that is not the correct location. This is usually because there were no good particles to start with. • Thanks for the insightful response! The selection method you suggested seems familiar. If I recall correctly, that was a common way of treating the sample impoverishment problem. I have seen it before but never really understood the reason for this procedure. Now I know better! – Daniel Eberts Nov 21 '12 at 17:45 • I think your interpretation of sampling impoverishment may be a little misleading. The fact that the poster loses particles is due to an unsuitable method for resampling. Particle impoverishment is when your posterior distribution is not adequately represented by the particles anymore. – Jakob Nov 22 '12 at 8:11 As I guess you found out yourself, the resampling method you are proposing is slightly flawed, as it should not alter the number of particles (unless you want it to). The principle is that the weight represents the relative probability with respect to the other particles.
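The numbered recipe above translates to very little Python. This is an illustrative sketch of the cumulative-sum method (added here, not code from the thread; the helper name `resample` is made up):

```python
import bisect
import random

def resample(particles, weights, rng=random.random):
    """Multinomial resampling via cumulative weight sums.

    particles: list of particle states; weights: matching raw weights.
    Returns a new list of len(particles) states drawn with replacement,
    where each particle's chance of being copied is its normalized weight.
    """
    total = sum(weights)
    # Steps 1-2: normalize and build the running (cumulative) sum.
    cumulative, running = [], 0.0
    for w in weights:
        running += w / total
        cumulative.append(running)
    cumulative[-1] = 1.0  # guard against floating-point shortfall
    # Steps 3-5: draw N uniforms and map each into the cumulative array.
    new_particles = []
    for _ in range(len(particles)):
        idx = bisect.bisect_left(cumulative, rng())
        new_particles.append(particles[idx])
    return new_particles
```

With weights [0.1, 0.1, 0.8], the third particle is drawn about 80% of the time, so it usually appears two or three times in the new set while the light particles still occasionally survive — exactly the "chance to propagate" the answer describes.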
In the resampling step, you draw from the set of particles such that, for each particle, the normalized weight times the number of particles represents the number of times that particle is drawn on average. In that respect your idea is correct. However, by using rounding instead of sampling, you will always eliminate particles for which the expected value is less than one half. There are a number of ways to perform the resampling properly. There is a nice paper called On resampling algorithms for particle filters, comparing the different methods. Just to give a quick overview: • Multinomial resampling: imagine a strip of paper where each particle has a section, where the length is proportional to its weight. Randomly pick a location on the strip N times, and pick the particle associated with that section. • Residual resampling: this approach tries to reduce the variance of the sampling by first allocating each particle the integer floor of its expected value, and leaving the rest to multinomial resampling. E.g. a particle with an expected value of 2.5 will have 2 guaranteed copies in the resampled set, and the remaining 0.5 enters the multinomial step. • Systematic resampling: take a ruler with regularly spaced marks, such that N marks are the same length as your strip of paper. Randomly place the ruler next to your strip. Take the particles at the marks. • Stratified resampling: same as systematic resampling, except that the marks on the ruler are not evenly placed, but added as N random processes sampling from the interval 0..1/N. So to answer your question: what you have implemented could be extended to a form of residual sampling. You fill up the missing slots by sampling based on a multinomial distribution of the remainders.
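The "ruler" picture of systematic resampling also needs only a few lines. This is my illustrative sketch (not code from the answer or the cited paper; the name `systematic_resample` is made up):

```python
import random

def systematic_resample(particles, weights, rng=random.random):
    """Systematic resampling: one random offset u in [0, 1/N), then N
    evenly spaced marks u, u + 1/N, ... laid over the cumulative-weight
    'strip'. Each mark selects the particle whose segment it lands in."""
    n = len(particles)
    total = sum(weights)
    u = rng() / n                  # single random placement of the ruler
    new_particles = []
    cumulative = 0.0               # left edge of particle i's segment
    i = 0
    for k in range(n):
        mark = u + k / n
        # advance along the strip until the mark is inside segment i
        while i < n - 1 and cumulative + weights[i] / total < mark:
            cumulative += weights[i] / total
            i += 1
        new_particles.append(particles[i])
    return new_particles
```

Because the marks are exactly 1/N apart, any particle whose normalized weight exceeds k/N is guaranteed at least k copies, which is why this scheme has lower variance than drawing N independent uniforms.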
• +1 for having already answered my follow-up question :) – Daniel Eberts Nov 22 '12 at 11:26 For an example of python code that properly implements resampling, you might find this github project to be useful: https://github.com/mjl/particle_filter_demo Plus, it comes with its own visual representation of the resampling process, which should help you debug your own implementation. In this visualization, the green turtle shows the actual position, the large grey dot shows the estimated position and turns green when it converges. The weight goes from likely (red) to unlikely (blue). • Thanks for the link. It's always insightful to see how other people implemented an algorithm. – Daniel Eberts Nov 21 '12 at 17:47 • This is a visualization of a particle filter converging. Not sure what insight it provides with respect to the question. – Jakob Nov 22 '12 at 8:15 • I included the visualization since it's what is produced by the code I posted -- an example of how to properly implement resampling. – Ian Nov 24 '12 at 0:40 One simple way to do this is numpy.random.choice(N, N, p=w, replace=True) where N is the number of particles and w = normalized weights. • Welcome to Robotics, narayan. Could you please expand this answer some? For instance, why use a random choice? What is p in your function? The more detailed you can make your answer, the more useful it will be for future visitors who have the same problem. – Chuck May 25 '16 at 12:13 I use @narayan's approach to implement my particle filter: new_sample = numpy.random.choice(a=particles, size=number_of_particles, replace=True, p=importance_weights) a is the vector of your particles to sample, size is the count of particles and p is the vector of their normalized weights. replace=True handles bootstrap sampling with replacement. The return value is a vector of new particle objects.
http://royvanhelden.nl/3eicbs/thermodynamics-for-jee-mains-5fb6ef
The type of reaction, such as exothermic or endothermic, also has an effect. Thermodynamics – JEE Main Previous Year Questions with Solutions. JEE Main & Advanced Chemistry Thermodynamics Question Bank. Mock Test - Chemical Thermodynamics. It is the first step for future engineers who seek admission to IITs, NITs, IIITs, and CFTIs. PDF version handwritten notes of Physics for 10+2 competitive exams like JEE Main, WBJEE, NEST, IISER Entrance Exam, CUCET, AIPMT, JIPMER, EAMCET, etc. NCERT books for physics: NCERT books are a must-have for your JEE Main preparation. I specialize in coaching students who are preparing for entrance exams like JEE, NEET, etc. Thermodynamics & Thermochemistry for IIT-JEE, AIPMT, CBSE (hint for study: divided into small parts so you can study easily and set daily goals). Thermodynamics IIT JEE Study Material: thermodynamics deals with the interaction of one body with another in terms of quantities of heat and work. The file is available in PDF format. Hess's law is based upon the first law of thermodynamics and states that if a chemical change can be made to take place in two or more ways involving one or more steps, the net amount of heat change in the complete process is the same regardless of the method employed. JEE Main 2016 (Online), 9th April Morning Slot: The heats of combustion of carbon and carbon monoxide are –393.5 and –283.5 kJ mol–1, respectively. A spherical constant-temperature heat source of radius r1 is at the center of a uniform solid sphere of radius r2. The rate at which heat is transferred through the surface of the sphere is proportional to (A) r2² – r1² (B) r2 – r1 (C) ln r1 – ln r2 (D) 1/r2 – 1/r1 (E) (1/r2 – 1/r1)⁻¹.
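For the spherical heat-source question above, a steady-state conduction sketch (added here for illustration; it assumes Fourier's law with a thermal conductivity $k$ for the solid sphere) singles out option (E):

```latex
% Steady state: the same heat current H crosses every shell r_1 < r < r_2.
H = -k\,(4\pi r^2)\,\frac{dT}{dr}
\quad\Rightarrow\quad
\int_{T_1}^{T_2} dT = -\frac{H}{4\pi k}\int_{r_1}^{r_2}\frac{dr}{r^2}
\quad\Rightarrow\quad
T_1 - T_2 = \frac{H}{4\pi k}\left(\frac{1}{r_1}-\frac{1}{r_2}\right)
```

Solving for the current gives $H = 4\pi k\,(T_1 - T_2)\big/(1/r_1 - 1/r_2)$, i.e. the rate of heat transfer is proportional to $(1/r_2 - 1/r_1)^{-1}$ up to an overall sign, which is option (E).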
But a sound knowledge of the subject and its formulae, blended with the ability to apply tips and tricks, can give you a golden opportunity to crack JEE Mains this year. In thermodynamics, the system is defined as a quantity of matter or a region in space under investigation. This is a complete set of notes on Thermodynamics, which is a part of the Physics syllabus for NEET and JEE. Hey guys, in this lecture we are going to study Thermodynamics for JEE Main/Advanced. JEE Main 2021 is the biggest engineering entrance exam, for which more than 9 lakh candidates are expected to apply. JEE 2019: Thermodynamics is a very important chapter in the syllabus of JEE Main and JEE Advanced. If you skip these chapters, you are going to be at a great loss. Important notes are also helpful for revision when you have less time and many topics to study. Paper by Super 30 Aakash Institute, powered by Embibe analysis. Hi, I am a professional educator of Chemistry at ChemChamp. This session will cover the complete NCERT and is a detailed course for class 11th, NEET and JEE aspirants. Exothermic and Endothermic Reactions. Heat & Thermodynamics contributes about 3-4 questions in JEE Mains. Candidates may refer to this study material for their IIT JEE exam preparation. Download Heat-1 (Thermodynamics) NOTES by MOTION for JEE Mains & JEE Advanced (IIT JEE) exam preparation. A system is classified into three categories: open system, closed system, and isolated system. So dear aspirants, these thermodynamics and KTG class notes (PDF) are a basic need for IIT JEE … Given that the heat capacity of the calorimeter is 2.5 kJ K⁻¹, the numerical value for the enthalpy of combustion of the gas in kJ mol⁻¹ is [IIT JEE 2009]. Thermal Processes. The books by D. C. Gupta have achieved a lot of acclaim from IIT-JEE teachers and students.
It has a substantial amount of theory, with good examples. Improve your score by 22% minimum while there is still time. There are certain topics that are in the JEE Main syllabus. Watch the video directly from YouTube and complete the JEE chemistry syllabus. Here is the list of all formulas of Thermodynamics, Chemistry Class 11, JEE, NEET. Key features of the thermodynamics notes. JEE Main Previous Year Papers Questions of Chemistry with Solutions are available at eSaral. Click on the links below. HEAT: It is the energy which is transferred from a system to its surroundings, or vice versa, due to the temperature difference between the system and the surroundings. MCQ: Chapter 6 - Thermodynamics, Class 11, Chemistry JEE Notes | EduRev. First, go through Concepts of Physics by H. C. Verma, vol. 1 for waves and vol. 2 for thermodynamics. Thermodynamics – Chemistry Allen Kota Study Material for JEE Mains and Advanced Examination (in PDF). Free Energy and Free Energy Change. About 2-3 questions have always been asked from this chapter in … This data (2002): (1) violates the 1st law of thermodynamics, (2) violates the 1st law of thermodynamics if Q2 is -ve, (3) violates […] Teaching in a commonly understood language, so that students can easily relate to it, is the main aim. This document is highly rated by JEE … Given this, competition in the exam is obviously high. The First Law of Thermodynamics. Physics for Joint Entrance Examination JEE (Advanced): Waves and Thermodynamics, a Cengage Exam Crack Series™ product, is designed to help aspiring engineers focus on the subject of physics from two standpoints: to develop their caliber, aptitude, and attitude for …
The first law of thermodynamics gives a relationship between heat, work, and internal energy.

Heat-engine question (JEE 2002): a heat engine absorbs heat Q1 at temperature T1 and heat Q2 at temperature T2; the work done by the engine is J(Q1 + Q2).

Thermodynamics deals with the interaction of one body with another in terms of quantities of heat and work. Considered one of the easiest among the science subjects, Chemistry is like an acid test for JEE aspirants. Heat and Thermodynamics previous-year questions from JEE Main, with solutions, are available subject-wise and chapter-wise.

Subtopics of JEE Mains Physics Thermodynamics: 1. Thermodynamic Systems and Their Surroundings; 2. The Zeroth Law of Thermodynamics; 3. The First Law of Thermodynamics; 4. Thermal Processes; 5. Thermal Processes Using an Ideal Gas; 6. Specific Heat Capacities; 7. The Second Law of Thermodynamics; 8. Heat Engines; 9. …

Other chapters to revise: Laws of Thermochemistry; Bond Energy (Bond Enthalpies); The Third Law of Thermodynamics. Please go through all the formulas and solve the exercises completely.

Public Notice: Rescheduling of the JOINT ENTRANCE EXAMINATION JEE (Main) April 2020 Examination.
The second law of thermodynamics is not limited exclusively to thermal machines but deals, in general, with all natural processes that occur spontaneously (entropy).

System: when a definite amount of gas is filled in a cylinder fitted with a piston, it constitutes a system, which is classified into 3 categories (open, closed, and isolated).

Thermodynamic processes — isothermal process: $T = \text{constant}$, so $dT = 0$ and $\Delta T = 0$.

The temperature of the calorimeter was found to increase from 298.0 K to 298.45 K due to the combustion process.

Thermodynamics is also very important for a bachelor's degree in Electrical, Mechanical, Civil, or Chemical engineering, or even Biotechnology. The JEE Main syllabus is almost the same as the syllabus of Class 11 and Class 12 of the CBSE curriculum.

Subtopics of JEE Mains Thermodynamics and Thermochemistry: thermodynamic system, state and type of a system | laws of thermodynamics | work, heat, energy and thermodynamic equilibrium | different processes and the Carnot cycle | relation between internal energy and enthalpy | specific heat capacity and the relation between Cp and Cv | entropy and Gibbs free energy | Hess's law | …

I have listed above some good books for JEE Mains which will help you prepare better for Physics.
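To make the first-law bookkeeping concrete for the isothermal case above, here is a minimal sketch; the amounts n = 1 mol, T = 300 K, and the doubling of volume are illustrative values, not from the notes.

```python
import math

# First-law bookkeeping for a reversible isothermal ideal-gas expansion.
# Isothermal means dT = 0, so for an ideal gas dU = 0 and q = -w in the
# chemistry sign convention dU = q + w (w = work done ON the gas).
R = 8.314            # gas constant, J mol^-1 K^-1
n, T = 1.0, 300.0    # 1 mol at 300 K (illustrative)
V1, V2 = 1.0, 2.0    # volume doubles

w_on_gas = -n * R * T * math.log(V2 / V1)   # J (negative: gas does work)
dU = 0.0                                    # isothermal ideal gas
q = dU - w_on_gas                           # heat absorbed by the gas, J
print(round(q, 1))
```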
Q1: "Heat cannot by itself flow from a body at a lower temperature to a body at a higher temperature" is a statement or consequence of (a) the second law of thermodynamics, (b) conservation of momentum, (c) conservation of mass, (d) the first law of thermodynamics.

Thermodynamics for JEE Chemistry can be broadly studied under three main energy-exchange headings. Chemical reactions: different chemical reactions have different energy exchanges.
https://amrvac.org/md_doc_vacpp.html
MPI-AMRVAC 3.0 — The MPI - Adaptive Mesh Refinement - Versatile Advection Code

The VAC preprocessor

This document briefly describes the use of the VAC PreProcessor (VACPP), which converts dimension-independent notation into Fortran 90. VACPP is a specialized implementation of the general LASY Preprocessor and is implemented in Perl in the src/vacpp.pl script. The main variables of VACPP, such as the number of dimensions $ndim, are normally modified by another Perl script, src/setup.pl. Based on these variables and the LASY patterns in the .t source files, VACPP generates the Fortran 90 source code of the output .f files.

The preprocessor is mainly used via the makefile, but one can translate a single file directly, or even use the preprocessor interactively, where the dimension-independent notation is typed in line by line from the keyboard and the expanded code appears on the screen. Interactive use is a very efficient way of checking the syntax of complex dimension-independent notation.

You may use vacpp.pl for translation directly:

    vacpp.pl -d=3 FILENAME.t > FILENAME.f

You may change the maximum line length to e.g. 72 directly on the command line:

    vacpp.pl -d=2 -maxlen=72 FILENAME.t > FILENAME.f

You can call vacpp.pl interactively with:

    vacpp.pl -d=2 -

and then type a line of code to see how it is translated.
https://pos.sissa.it/390/093/
Volume 390 - 40th International Conference on High Energy Physics (ICHEP2020) - Posters: Higgs Physics

Higgs boson measurements in the H$\rightarrow$WW$\rightarrow\ell\nu\ell\nu$ decay channel

A. Aggarwal* on behalf of the ATLAS collaboration (*corresponding author)

Pre-published on: December 01, 2020

Abstract: Due to its large branching fraction, Higgs boson decays into pairs of W bosons are among the most promising signatures for measuring the CP and coupling properties of the Higgs boson, as well as its inclusive and differential cross-sections. The leptonic final state $\ell\nu\ell\nu$ provides a clean signature and is efficiently selected with lepton triggers. The combination of a high rate and a clean signature provides an opportunity to measure all the major production modes (ggF, VBF, WH, ZH) in a single decay channel. The studies presented here are based on the proton-proton collision data recorded by the ATLAS detector at the LHC at a centre-of-mass energy of 13 TeV, with an integrated luminosity of up to 139 fb$^{-1}$. All the measurements are found to be in agreement with the Standard Model.

Open Access
https://www.biogeosciences.net/15/2819/2018/
Biogeosciences — an interactive open-access journal of the European Geosciences Union

Biogeosciences, 15, 2819-2834, 2018. https://doi.org/10.5194/bg-15-2819-2018

Reviews and syntheses | 09 May 2018

# Reviews and syntheses: Revisiting the boron systematics of aragonite and their application to coral calcification

Thomas M. DeCarlo1,2, Michael Holcomb1,2, and Malcolm T. McCulloch1,2

• 1 Oceans Institute and Oceans Graduate School, The University of Western Australia, 35 Stirling Hwy, Crawley 6009, Australia
• 2 ARC Centre of Excellence for Coral Reef Studies, The University of Western Australia, 35 Stirling Hwy, Crawley 6009, Australia

Abstract. The isotopic and elemental systematics of boron in aragonitic coral skeletons have recently been developed as a proxy for the carbonate chemistry of the coral extracellular calcifying fluid. With knowledge of the boron isotopic fractionation in seawater and the B/Ca partition coefficient (KD) between aragonite and seawater, measurements of coral skeleton δ11B and B/Ca can potentially constrain the full carbonate system. Two sets of abiogenic aragonite precipitation experiments designed to quantify KD have recently made possible the application of this proxy system. However, while different KD formulations have been proposed, there has not yet been a comprehensive analysis that considers both experimental datasets and explores the implications for interpreting coral skeletons. Here, we evaluate four potential KD formulations: three previously presented in the literature and one newly developed. We assess how well each formulation reconstructs the known fluid carbonate chemistry from the abiogenic experiments, and we evaluate the implications for deriving the carbonate chemistry of coral calcifying fluid.
Three of the KD formulations performed similarly when applied to abiogenic aragonites precipitated from seawater and to coral skeletons. Critically, we find that some uncertainty remains in understanding the mechanism of boron elemental partitioning between aragonite and seawater, and addressing this question should be a target of additional abiogenic precipitation experiments. Despite this, boron systematics can already be applied to quantify the coral calcifying fluid carbonate system, although uncertainties associated with the proxy system should be carefully considered for each application. Finally, we present a user-friendly computer code that calculates coral calcifying fluid carbonate chemistry, including propagation of uncertainties, given inputs of boron systematics measured in coral skeleton.

1 Introduction

Quantifying the carbonate chemistry of the fluid from which corals accrete their skeletons is essential for understanding the mechanisms of skeletal growth and the sensitivity of skeletal composition to environmental variability. It is generally thought that corals precipitate aragonite (CaCO3) crystals within an extracellular fluid-filled space between the living polyp and the skeleton (Barnes, 1970). Evidence from skeletal geochemistry and fluorescent dye experiments suggests that while seawater is the initial source of the calcifying fluid, its carbonate chemistry is subject to substantial modifications (i.e., of pH and dissolved inorganic carbon, or DIC) that enhance the rapid nucleation and growth of aragonite crystals. Because the isolation and small size of the calcifying fluid make it difficult to sample directly, a variety of techniques have been employed to characterize its composition. These include microelectrodes inserted into tissue incisions or through the mouth, pH-sensitive dyes (e.g., Comeau et al., 2017b), Raman spectroscopy, and a variety of skeletal-based geochemical proxies.
Although microelectrodes and pH-sensitive dyes are arguably the most direct methods, their utility is limited by the difficulty of applying them to corals living in their natural environment or of developing seasonally resolved time series. Geochemical proxies, although indirect, can be readily applied to the skeletons of corals living in both laboratory and natural environments, and to skeletons accreted years or even centuries ago. In recent years, boron systematics (including δ11B and B/Ca) have become one of the most commonly applied proxies for the carbonate chemistry of coral calcifying fluid (e.g., Comeau et al., 2017a). The sensitivity of boron isotopes to seawater pH arises from the borate versus boric acid speciation being pH dependent and the isotopic fractionation between these species being constant. Since the δ11B composition of aragonite precipitating from seawater reflects that of the borate species, the δ11B composition of the skeletal carbonate records the pH of the calcifying fluid. Furthermore, the B/Ca ratio depends inversely on the concentration of carbonate ion ($[\mathrm{CO}_3^{2-}]$) if borate substitutes for carbonate ion in the aragonite lattice. Use of combined boron isotopic (δ11B) and elemental (B/Ca) systematics has several advantages relative to other geochemical proxies. For example, while stable carbon and oxygen isotopes are sensitive to carbonate chemistry, they are complicated by kinetic effects, strong sensitivities to the photosynthetic activity of coral symbionts, and variable compositions in seawater, which together have precluded their utility as acceptable carbonate system proxies.
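The δ11B–pH relation invoked above is conventionally written as follows (a standard form from the boron-isotope proxy literature, not reproduced from this paper; δ11Bsw and δ11Bcarb are the seawater and carbonate compositions, αB ≈ 1.0272 is the boric acid–borate fractionation factor, and pKB is the dissociation constant of boric acid):

```latex
\mathrm{pH} = \mathrm{p}K_\mathrm{B} - \log\left(
\frac{\delta^{11}\mathrm{B_{sw}} - \delta^{11}\mathrm{B_{carb}}}
{\alpha_\mathrm{B}\,\delta^{11}\mathrm{B_{carb}} - \delta^{11}\mathrm{B_{sw}} + 1000\,(\alpha_\mathrm{B} - 1)}
\right)
```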
The U/Ca ratio of aragonite is also sensitive to $[\mathrm{CO}_3^{2-}]$, but the amount of U in coral skeleton relative to its concentration in seawater suggests that [U]cf is depleted substantially, complicating its utility as a direct $[\mathrm{CO}_3^{2-}]$cf proxy. Conversely, the B/Ca and δ11B compositions of seawater are homogeneous and likely not modified substantially by photosynthetic activity. Further, depletion in the calcifying fluid due to incorporation into the skeleton is less important for B/Ca than for U/Ca, because the partition coefficient of B relative to $[\mathrm{CO}_3^{2-}]$ is at least 2 orders of magnitude smaller than that of U, meaning that [B]cf is depleted much less than [U]cf as skeletal aragonite precipitates. While a low partition coefficient causes Rayleigh fractionation for elements in a closed system (e.g., coral Mg/Ca of the calcifying fluid), $[\mathrm{CO}_3^{2-}]$cf is elevated relative to seawater and is modified by CO2 diffusion and pH up-regulation (i.e., it is not in a closed system), meaning that $[\mathrm{B}]/[\mathrm{CO}_3^{2-}]$cf is likely not changed substantially by skeletal aragonite precipitation. Therefore, boron-based proxies are thought to be largely dependent on carbonate chemistry alone. Finally, the combination of two carbonate system proxies (pH and $[\mathrm{CO}_3^{2-}]$) derived from boron systematics allows for computation of the full carbonate system. Abiogenic laboratory experiments provide the underlying quantitative foundation necessary to apply these proxies to aragonitic coral skeletons. The fractionation factor (αB3−B4) between boric acid and borate in seawater has been determined experimentally, which allows δ11B of carbonates to be used as a pH proxy when combined with knowledge of pKB (Dickson, 1990) and seawater δ11B.
Although there is potential for B isotopic fractionation between aragonite and seawater, the veracity of the δ11B proxy has been largely confirmed by comparison with direct in situ measurements using either pH microelectrodes or confocal microscopy of pH-sensitive dyes in the calcifying fluid. Additionally, results from two sets of abiogenic precipitation experiments can be used to constrain the partitioning of B/Ca between fluid and aragonite. Thus, while all the information theoretically required to constrain the full seawater carbonate system from boron systematics is now available, a variety of different approaches have been presented, especially regarding the interpretation of B/Ca partitioning. Here, we assess the abiogenic partitioning data and the subsequent fitting of those data. We consider which mechanisms of B incorporation and sensitivities of B/Ca partitioning are plausible, and the implications for interpreting coral skeletons. Our focus is on the combined application of δ11B and B/Ca because it is only when the two are used in tandem that it is possible to calculate the full calcifying fluid carbonate system. Finally, we present a user-friendly computer code to calculate coral calcifying fluid carbonate chemistry from measurements of δ11B and B/Ca. The code also propagates known uncertainties for deriving calcifying fluid $[\mathrm{CO}_3^{2-}]$cf and DICcf, and allows for evaluating the effects of using different constants and partition coefficient formulations.

2 Partitioning of B/Ca between aragonite and seawater

The main discrepancy among various applications of boron systematics to coral skeletons relates to the partition coefficient of boron between aragonite and seawater. Given the variety of possible exchange reactions and partition coefficients that have been proposed, we begin with a brief review of how partition coefficients are derived.
In general, the substitution of minor elements into a solid is described by an exchange reaction such that

$$X^{\mathrm{solid}} + Y^{\mathrm{fluid}} = Y^{\mathrm{solid}} + X^{\mathrm{fluid}}. \quad (1)$$

For example, the substitution of Sr2+ for Ca2+ in aragonite follows

$$\mathrm{Ca^{aragonite}} + \mathrm{Sr^{fluid}} = \mathrm{Sr^{aragonite}} + \mathrm{Ca^{fluid}}. \quad (2)$$

Element distribution described by this exchange is quantified through a partition coefficient, expressed as the concentration ratio of products over reactants:

$$K_D^{\mathrm{Sr/Ca}} = \frac{[\mathrm{Sr}]^{\mathrm{aragonite}}\,[\mathrm{Ca}]^{\mathrm{fluid}}}{[\mathrm{Sr}]^{\mathrm{fluid}}\,[\mathrm{Ca}]^{\mathrm{aragonite}}}. \quad (3)$$

Equation (3) is typically rearranged as

$$K_D^{\mathrm{Sr/Ca}} = \left(\frac{[\mathrm{Sr}]^{\mathrm{aragonite}}}{[\mathrm{Sr}]^{\mathrm{fluid}}}\right) \left(\frac{[\mathrm{Ca}]^{\mathrm{aragonite}}}{[\mathrm{Ca}]^{\mathrm{fluid}}}\right)^{-1} = \frac{(\mathrm{Sr/Ca})^{\mathrm{aragonite}}}{(\mathrm{Sr/Ca})^{\mathrm{fluid}}}. \quad (4)$$

The case of Sr2+ substituting for Ca2+ is straightforward in that the exchange reaction (Eq. 2) is charge balanced.
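Eq. (4) reduces the partition coefficient to a ratio of ratios, which is trivial to evaluate; the Sr/Ca numbers below are illustrative magnitudes only, not measured values.

```python
# Evaluating Eq. (4): K_D^(Sr/Ca) = (Sr/Ca)_aragonite / (Sr/Ca)_fluid.
# The two ratios below are illustrative magnitudes, not measurements.
sr_ca_aragonite = 9.0e-3   # mol/mol
sr_ca_fluid = 8.5e-3       # mol/mol, seawater-like
K_D = sr_ca_aragonite / sr_ca_fluid
print(round(K_D, 3))       # prints 1.059
```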
Boron is more complicated because it is commonly thought that the singly charged $\mathrm{B(OH)_4^-}$ ion is incorporated into aragonite in place of the doubly charged $\mathrm{CO_3^{2-}}$. There are at least two possible exchange reactions for $\mathrm{B(OH)_4^-}$ to substitute for $\mathrm{CO_3^{2-}}$ that maintain charge balance:

$$0.5\,\mathrm{CaCO_3} + \mathrm{B(OH)_4^-} \leftrightarrow \mathrm{Ca_{0.5}B(OH)_4} + 0.5\,\mathrm{CO_3^{2-}} \tag{5}$$

following , or

$$\mathrm{CaCO_3} + \mathrm{B(OH)_4^-} \leftrightarrow \mathrm{CaH_3BO_4} + \mathrm{H^+} + \mathrm{CO_3^{2-}} \tag{6}$$

following . The $K_D$ for Eq. (5) is

$$K_D^{\mathrm{B/Ca}} = \frac{\left[\mathrm{B(OH)_4^-}/[\mathrm{CO_3^{2-}}]^{0.5}\right]^{\mathrm{aragonite}}}{\left[\mathrm{B(OH)_4^-}/[\mathrm{CO_3^{2-}}]^{0.5}\right]^{\mathrm{fluid}}} = \frac{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}{\left[\mathrm{B(OH)_4^-}/[\mathrm{CO_3^{2-}}]^{0.5}\right]^{\mathrm{fluid}}} \tag{7}$$

and for Eq. (6) is

$$K_D^{\mathrm{B/Ca}} = \frac{\left[\mathrm{B(OH)_4^-}/\mathrm{CO_3^{2-}}\right]^{\mathrm{aragonite}}}{\left[\mathrm{B(OH)_4^-}/\mathrm{CO_3^{2-}}\right]^{\mathrm{fluid}}} = \frac{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}{\left[\mathrm{B(OH)_4^-}/\mathrm{CO_3^{2-}}\right]^{\mathrm{fluid}}}, \tag{8}$$

where $[\mathrm{CO_3^{2-}}]^{\mathrm{aragonite}}$ is assumed equal to $[\mathrm{Ca^{2+}}]^{\mathrm{aragonite}}$, and Eqs. (7) and (8) differ by whether or not the square root of $[\mathrm{CO_3^{2-}}]$ is used. Since Eq. (6) includes H+ in the products, this reaction implies that the $K_D$ may be pH dependent. Incorporation of B into aragonite may also involve adsorption of $\mathrm{B(OH)_4^-}$ onto crystal surfaces, incorporation at defect sites, or local charge balance by Na+. Conversely, and considered exchange reactions in which borate substitutes for bicarbonate ($\mathrm{HCO_3^-}$), with the partition coefficient

$$K_D^{\mathrm{B/Ca}} = \frac{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}{\left[\mathrm{B(OH)_4^-}/\mathrm{HCO_3^-}\right]^{\mathrm{fluid}}}. \tag{9}$$

This approach resolves the issue of charge balance and would account for $\mathrm{CO_3^{2-}}$ reacting with H+, thus removing the pH dependence expected from Eq. (8). However, Eq.
(9) implies that aragonite forms via the reaction

$$\mathrm{Ca^{2+}} + \mathrm{HCO_3^-} \leftrightarrow \mathrm{CaCO_3} + \mathrm{H^+} \tag{10}$$

rather than

$$\mathrm{Ca^{2+}} + \mathrm{CO_3^{2-}} \leftrightarrow \mathrm{CaCO_3}. \tag{11}$$

Whether aragonite precipitates via Eq. (10) or Eq. (11) is testable because the rate of the net forward reaction should depend on the concentrations of the reactants. demonstrated that the rate of aragonite precipitation increases as a function of $\Omega_{\mathrm{Ar}}$ (where $\Omega_{\mathrm{Ar}} = [\mathrm{Ca^{2+}}][\mathrm{CO_3^{2-}}]/K_{\mathrm{sp}}$) and temperature, although they did not explicitly consider the relationship between $[\mathrm{HCO_3^-}]$ and precipitation rate. reported bulk precipitation rates for aragonites precipitated from seawater with various $[\mathrm{CO_3^{2-}}]$ and $[\mathrm{HCO_3^-}]$, with independence between these two variables achieved by manipulating pH and DIC. While the bulk precipitation rates were not normalized to surface area as in , the experimental vessels used by were of consistent dimensions and material. Thus, the bulk precipitation rate data of should be comparable among their experiments, allowing us to evaluate between the reactions of Eqs. (10) and (11). The aragonite precipitation rates reported by at 25 °C are significantly correlated with both $[\mathrm{CO_3^{2-}}]$ ($r^2$ = 0.56, p < 0.01) and $\Omega_{\mathrm{Ar}}$ ($r^2$ = 0.62, p < 0.01) (Fig. 1a, b). Experiments conducted at 20, 33, and 40 °C are consistent with this trend (Fig. 1a, b), and with previous observations that precipitation rate increases with temperature, although we do not attempt to quantify temperature effects on the order of the reaction (as performed by Burton and Walter, 1987) since only two experiments were conducted at each temperature other than 25 °C. Conversely, there are no significant correlations between aragonite precipitation rate at 25 °C and either $[\mathrm{HCO_3^-}]$ ($r^2$ = 0.00, p = 0.95) or $[\mathrm{Ca^{2+}}][\mathrm{HCO_3^-}]$ ($r^2$ = 0.01, p = 0.54) as would be expected based on Eq. (10). Other possibilities include precipitation reactions involving both $\mathrm{CO_3^{2-}}$ and $\mathrm{HCO_3^-}$, or total DIC. However, there are no significant correlations between precipitation rate and either $[\mathrm{CO_3^{2-}}] + [\mathrm{HCO_3^-}]$ ($r^2$ = 0.01, p = 0.59) or DIC ($r^2$ = 0.01, p = 0.59) (Fig. 1e, f). Together, these data lead us to conclude that aragonite precipitates from seawater via Eq. (11). Therefore, since B∕Ca partition coefficients expressed with $[\mathrm{HCO_3^-}]$ do not have a chemical reaction basis, we do not consider them further. Rather, we consider only the B∕Ca partition coefficients that are based on borate substituting for $\mathrm{CO_3^{2-}}$ (Eqs. 7–8).

Figure 1. Aragonite precipitation rates as functions of fluid chemistry based on data from . Each point represents a separate abiogenic aragonite precipitation experiment conducted at 20 °C (blue), 25 °C (black), 33 °C (green), and 40 °C (red).
Bulk aragonite precipitation rates (R) are plotted against mean fluid $[\mathrm{CO_3^{2-}}]$ (a), $\Omega_{\mathrm{Ar}}$ (b), $[\mathrm{HCO_3^-}]$ (c), $[\mathrm{Ca^{2+}}][\mathrm{HCO_3^-}]$ (d), $[\mathrm{CO_3^{2-}}] + [\mathrm{HCO_3^-}]$ (e), and DIC (f). Solid lines show regression fits at each temperature (note that there are only two experiments at each temperature other than 25 °C, and thus line fits for these temperatures should be interpreted with caution).

3 Fitting the experimental B∕Ca partitioning data

The second source of discrepancies between various applications of boron systematics to coral skeletons is the dependence of the $K_D$ on fluid chemistry. fit the $K_D$ as either a function of $[\mathrm{CO_3^{2-}}]$ or $\Omega_{\mathrm{Ar}}$, refit the data as a function of [H+], and fit data from both and as a function of $\Omega_{\mathrm{Ar}}$. At the outset, it is important to recognize that there are two key differences between the abiogenic experiments of and . Firstly, precipitated aragonite from NaCl solutions, whereas used filtered seawater. Secondly, $[\mathrm{CO_3^{2-}}]$ and $\Omega_{\mathrm{Ar}}$ are lower in the experiments of relative to . Potentially as a result of one or both of these differences, found much lower $K_D$ values than . Here, we consider four possible $K_D$ dependencies based on these two experimental datasets (Fig. 2).

Figure 2. B∕Ca $K_D$ formulations. Abiogenic B∕Ca partitioning data from (red circles) and (blue triangles) fit as functions of fluid chemistry: $[\mathrm{CO_3^{2-}}]$ (a, c), [H+] (b), and $\Omega_{\mathrm{Ar}}$ (d) (Allison, 2017). Note that $K_D$ in (a) is defined with Eq. (7) and in (b–d) is defined with Eq. (8). We use only the data with [B] < 1000 µmol kg−1 due to the apparent effect of [B] on $K_D$.
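Fitting a single continuous function to partitioning data of this kind amounts to an ordinary least-squares regression of $K_D$ against $\ln[\mathrm{CO_3^{2-}}]$. The Python sketch below uses synthetic, noise-free $K_D$ values generated from an assumed logarithmic law (the coefficients are illustrative, not fitted experimental values) and shows that the regression recovers the generating coefficients:

```python
import math

# Synthetic (illustrative) partitioning data: K_D rising with carbonate ion,
# generated from an assumed logarithmic law K_D = a*ln([CO3]) + b
co3 = [50.0, 100.0, 300.0, 800.0, 2000.0, 4000.0]   # [CO3^2-], umol/kg
a_true, b_true = 8.0e-4, -3.0e-3                     # illustrative coefficients
kd = [a_true * math.log(c) + b_true for c in co3]

# Ordinary least squares of K_D against x = ln([CO3^2-])
x = [math.log(c) for c in co3]
n = len(x)
mx, my = sum(x) / n, sum(kd) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, kd)) \
        / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
```

With real experimental data the fit would additionally carry confidence intervals on the slope and intercept, as quoted for the logarithmic relationship introduced below.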
The first two formulations assume that there are substantial compositional effects on B∕Ca partitioning, and thus the offsets in $K_D$ between and arise due to the use of NaCl versus seawater solutions, respectively (Fig. 2a, b). If this is correct, the data are more appropriate for application to corals based on evidence that they precipitate their skeletons from seawater-based solutions . precipitated their aragonites from seawater solutions modified with [Ca2+] and [Mg2+] ranging between 6–20 and 48–98 mmol kg−1, respectively, without any apparent effects on the B∕Ca $K_D$. While this suggests that the $K_D$ is not highly sensitive to seawater elemental chemistry, it is still possible that there are subtle compositional effects that have little influence on $K_D$ in modified seawater, but become apparent in the NaCl solutions used by . Assuming some compositional effects do exist, we are left with the two plausible $K_D$ expressions (Eqs. 7–8), and the previously presented dependencies on either $[\mathrm{CO_3^{2-}}]$ or [H+]. Alternatively, it is possible that there are negligible effects from using NaCl or seawater solutions and, therefore, the data from both and should be fit by a single, continuous function. There are again two plausible formulations: $K_D$ increases as a function of $[\mathrm{CO_3^{2-}}]$ or $\Omega_{\mathrm{Ar}}$ (Fig. 2c, d). Allison (2017) proposed a linear fit between $K_D$ and $\Omega_{\mathrm{Ar}}$ that includes both the and data. From a practical standpoint, however, this latter approach is problematic in that it requires an independent proxy for [Ca2+] (see Sect. 8) and the linear fit effectively precludes its use for deriving coral calcifying fluid chemistry (see Sect. 4). In an attempt to avoid these issues, we introduce a logarithmic relationship between $K_D$ and $[\mathrm{CO_3^{2-}}]$, which fits both the and data (Fig. 2c):

$$K_D^{\mathrm{B/Ca}} = 0.00077\,(\pm 0.00007)\cdot\ln\!\left([\mathrm{CO_3^{2-}}]\right) - 0.0028\,(\pm 0.0004), \tag{12}$$

where parentheses indicate 95 % confidence, $K_D$ is defined by Eq. (8), and $[\mathrm{CO_3^{2-}}]$ is in units of micromoles per kilogram. Mechanistically, the increase in $K_D$ with $[\mathrm{CO_3^{2-}}]$ or $\Omega_{\mathrm{Ar}}$ (or precipitation rate) is consistent with the surface entrapment model proposed by . In this model, minor element impurities, such as B, are incorporated in the near-surface layer of a growing crystal. Slower-growing crystals allow these impurities to diffuse out of the near-surface region into the fluid, whereas faster-growing crystals bury the near-surface impurities into the bulk crystal. The sensitivity of $K_D$ to $[\mathrm{CO_3^{2-}}]$ or $\Omega_{\mathrm{Ar}}$ is also consistent with a surface kinetic model (DePaolo, 2011), in which trace element partitioning depends on the net rate of precipitation relative to dissolution. Thus, both the surface entrapment and kinetic models offer potential explanations as to why the low-$\Omega_{\mathrm{Ar}}$ experiments of produced lower $K_D$ than the higher-$\Omega_{\mathrm{Ar}}$ experiments of .

Figure 3. Reconstructing experimental fluid $[\mathrm{CO_3^{2-}}]$ using the $K_D$ formulations presented in Fig. 2. Symbols are the same as Fig. 2. In panel (d), negative $[\mathrm{CO_3^{2-}}]$ solutions have been excluded (see Appendix). Calculations using the $K_D$ formulation have been performed both assuming seawater [Ca2+] (blue) and using the [Ca2+] reported from the experiments (cyan).

4 Back-application of partition coefficient formulations to abiogenic datasets

We conducted a simple test to evaluate the utility of the four $K_D$ dependencies considered above.
For each $K_D$ formulation, we used the reported aragonite B∕Ca, fluid [B(OH)$_4^-$], and pH data of and to calculate the fluid $[\mathrm{CO_3^{2-}}]$, and then we compared the predicted $[\mathrm{CO_3^{2-}}]$ to the concentrations measured during the experiments (Fig. 3) (see also for a similar analysis). The basis for this approach is to assess how well the experimental fluid $[\mathrm{CO_3^{2-}}]$ can be reconstructed using boron systematics alone. When boron systematics are applied to coral skeletons, $[\mathrm{CO_3^{2-}}]$ is predicted from only B∕Ca and δ11B. However, since δ11B was not reported by , we instead use the measured pH for the $K_D$ formulation. Additionally, since [B] was manipulated in some experiments, we use reported fluid [B(OH)$_4^-$] instead of calculating it from pH as is performed in applications to corals . Nevertheless, since pH (and thus seawater [B(OH)$_4^-$]) is readily calculated from δ11B, our approach is suitable for evaluating the utility of each $K_D$ formulation for reconstructing $[\mathrm{CO_3^{2-}}]$ with B∕Ca. Since three of the $K_D$ formulations (, , and our new Eq. 12) themselves depend on $[\mathrm{CO_3^{2-}}]$, we solved for $[\mathrm{CO_3^{2-}}]$ as follows. An initial guess of $[\mathrm{CO_3^{2-}}]$ was used to calculate an initial $K_D$, and this $K_D$ was used to solve for $[\mathrm{CO_3^{2-}}]$ by rearranging Eq. (7) to

$$[\mathrm{CO_3^{2-}}] = \left(K_D^{\mathrm{B/Ca}}\,\frac{[\mathrm{B(OH)_4^-}]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}\right)^{2} \tag{13}$$

and Eq. (8) to

$$[\mathrm{CO_3^{2-}}] = K_D^{\mathrm{B/Ca}}\,\frac{[\mathrm{B(OH)_4^-}]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}, \tag{14}$$

where Eq. (14) is used for and our new Eq. (12), and Eq. (13) is used for . We then calculated the residual between the calculated (Eqs. 13–14) and initially estimated $[\mathrm{CO_3^{2-}}]$. Finally, we iteratively adjusted the initial $[\mathrm{CO_3^{2-}}]$ estimate for each data point until it equaled the $[\mathrm{CO_3^{2-}}]$ derived from Eqs. (13)–(14). Both the fit (their Eq. 7) and the refit perform similarly, effectively reconstructing the fluid $[\mathrm{CO_3^{2-}}]$ of the experimental data (root mean square error, RMSE = 151 and 163 µmol kg−1, respectively), but performing poorly for the data (RMSE = 1370 and 1385 µmol kg−1, respectively) (Fig. 3a, b). This is not surprising because these $K_D$ dependencies are offset from the data (Fig. 2a, b). Our new logarithmic equation performs well for both datasets (RMSE = 42 and 204 µmol kg−1 for and , respectively). The formulation (assuming [Ca2+] of 10 mmol kg−1) performs well for the data (RMSE = 51 µmol kg−1), but creates a trend opposite that expected for the data (RMSE = 1375 µmol kg−1) (Fig. 3d). Using the reported [Ca2+] and $K_{\mathrm{sp}}$ from the experiments in the formulation improves the results slightly and generates more positive solutions, but the RMSE is still 950 µmol kg−1.
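The iterative scheme described above is straightforward to implement. The following Python sketch is an independent illustration (the code accompanying this note is provided in MATLAB and R): it solves Eq. (14) by fixed-point iteration using a logarithmic $K_D$ of the form of Eq. (12), for an illustrative fluid-to-solid boron ratio. The input ratio of 0.44 mol kg−1 (here 440 000 µmol kg−1) is chosen only as an example near the coral range discussed below:

```python
import math

def kd_log(co3):
    """Logarithmic K_D dependence on [CO3^2-] (umol/kg), as in Eq. (12)."""
    return 0.00077 * math.log(co3) - 0.0028

def solve_co3(b4_over_bca, guess=200.0, tol=1e-10, max_iter=1000):
    """Iterate Eq. (14), [CO3^2-] = K_D * [B(OH)4-]^fluid / [B/Ca]^aragonite,
    updating K_D with each new [CO3^2-] estimate until the residual vanishes.
    Note: a logarithmic K_D can admit zero or two roots; this iteration
    converges to the stable (upper) root when it exists."""
    co3 = guess
    for _ in range(max_iter):
        co3_new = kd_log(co3) * b4_over_bca
        if co3_new <= 0.0:
            raise ValueError("no positive solution from this starting guess")
        if abs(co3_new - co3) < tol:
            return co3_new
        co3 = co3_new
    raise RuntimeError("fixed-point iteration did not converge")

# Illustrative fluid [B(OH)4-] / solid B/Ca of 0.44 mol/kg = 440000 umol/kg
co3_cf = solve_co3(440000.0)
```

For this example the iteration converges in a few tens of steps to roughly 1200 µmol kg−1, and the returned value satisfies Eq. (14) to within the tolerance.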
An alternative way to understand these patterns is to investigate the relationship between $[\mathrm{CO_3^{2-}}]$ and the ratio of fluid [B(OH)$_4^-$] to solid B∕Ca (Fig. 4). Following Eqs. (13)–(14), $[\mathrm{CO_3^{2-}}]$ should be positively related to $[\mathrm{B(OH)_4^-}]^{\mathrm{fluid}} / [\mathrm{B/Ca}]^{\mathrm{aragonite}}$, and this behavior is clearly evident in the abiogenic aragonites of (blue triangles in Fig. 4). The $K_D$ formulations of , , and our new Eq. (12) all closely track the abiogenic data, especially for $[\mathrm{CO_3^{2-}}]$ < 2000 µmol kg−1. Conversely, the fit (assuming [Ca2+] of 10 mmol kg−1) produces the opposite trend and is invalid or negative below a $[\mathrm{B(OH)_4^-}]^{\mathrm{fluid}} / [\mathrm{B/Ca}]^{\mathrm{aragonite}}$ of approximately 0.44 mol kg−1 (see Appendix for derivation of an analytical solution). Figure 4. Experimental fluid $[\mathrm{CO_3^{2-}}]$ as a function of $[\mathrm{B(OH)_4^-}]^{\mathrm{fluid}} / [\mathrm{B/Ca}]^{\mathrm{aragonite}}$. The $K_D$ formulations of (dotted black line), (black crosses), and Eq. (12) (dashed black line) all capture the trend of increasing $[\mathrm{CO_3^{2-}}]$ with increasing $[\mathrm{B(OH)_4^-}]^{\mathrm{fluid}} / [\mathrm{B/Ca}]^{\mathrm{aragonite}}$ that is apparent in the abiogenic data (blue triangles).
A constant KD (solid grey line) underestimates the slope between $\frac{\left[\mathrm{B}\left(\mathrm{OH}{\right)}_{\mathrm{4}}^{-}{\right]}^{\mathrm{fluid}}}{\left[\mathrm{B}/\mathrm{Ca}{\right]}^{\mathrm{aragonite}}}$ and $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$. The pink shaded region shows the range of $\frac{\left[\mathrm{B}\left(\mathrm{OH}{\right)}_{\mathrm{4}}^{-}{\right]}^{\mathrm{fluid}}}{\left[\mathrm{B}/\mathrm{Ca}{\right]}^{\mathrm{aragonite}}}$ derived for Porites corals by . The behavior of the KD formulations can be understood by inspecting the residuals between initial $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$ estimates and those derived from Eqs. (13)–(14) (Fig. 5). The KD formulation generates unique $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$ solutions (i.e., where the residual equals zero) that increase with $\frac{\left[\mathrm{B}\left(\mathrm{OH}{\right)}_{\mathrm{4}}^{-}{\right]}^{\mathrm{fluid}}}{\left[\mathrm{B}/\mathrm{Ca}{\right]}^{\mathrm{aragonite}}}$ (Fig. 5a), which is the ideal behavior. Our new Eq. (12) also produces increasing $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$ solutions with increasing $\frac{\left[\mathrm{B}\left(\mathrm{OH}{\right)}_{\mathrm{4}}^{-}{\right]}^{\mathrm{fluid}}}{\left[\mathrm{B}/\mathrm{Ca}{\right]}^{\mathrm{aragonite}}}$ (Fig. 5b); however, a major issue of this formulation is that there may be two $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$ solutions for each $\frac{\left[\mathrm{B}\left(\mathrm{OH}{\right)}_{\mathrm{4}}^{-}{\right]}^{\mathrm{fluid}}}{\left[\mathrm{B}/\mathrm{Ca}{\right]}^{\mathrm{aragonite}}}$. Finally, although the KD formulation produces unique $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$ solutions, they increase with decreasing $\frac{\left[\mathrm{B}\left(\mathrm{OH}{\right)}_{\mathrm{4}}^{-}{\right]}^{\mathrm{fluid}}}{\left[\mathrm{B}/\mathrm{Ca}{\right]}^{\mathrm{aragonite}}}$ (Fig. 
5c), opposite to that expected (Fig. 4). The reason for the poor behavior of the formulation is the linear fit between KD and ΩAr with an intercept near the origin. When using this formulation to predict $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$ from boron systematics alone, we must assume [Ca2+] is approximately equal to seawater (10 mmol kg−1), meaning that ΩAr is directly related to $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$. Since the intercept in the KD formulation is close to the origin, any change in $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$ results in an almost proportional change in KD. It can be seen why this is problematic by inspecting how $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$ is derived from Eq. (14). The $\frac{\left[\mathrm{B}\left(\mathrm{OH}{\right)}_{\mathrm{4}}^{-}{\right]}^{\mathrm{fluid}}}{\left[\mathrm{B}/\mathrm{Ca}{\right]}^{\mathrm{aragonite}}}$ is derived from pH (or δ11B) and measured B∕Ca, so this ratio remains constant while we find the appropriate KD that minimizes the residual $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$, as in Fig. 5. Therefore, Eq. (14) is effectively reduced to $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$ being a function of KD multiplied by a constant. However, since KD changes almost directly proportionally to $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$ according to , it is difficult to find a $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$ that explains different $\frac{\left[\mathrm{B}\left(\mathrm{OH}{\right)}_{\mathrm{4}}^{-}{\right]}^{\mathrm{fluid}}}{\left[\mathrm{B}/\mathrm{Ca}{\right]}^{\mathrm{aragonite}}}$ values. Although recognized the difficulty of explaining the range of B∕Ca observed in corals (see their Fig. 8g), the implication of applying this KD formulation to predict $\left[{\mathrm{CO}}_{\mathrm{3}}^{\mathrm{2}-}\right]$ was not discussed. 
Our analysis suggests that this $K_D$ formulation is poorly suited for accurately reconstructing fluid $[\mathrm{CO_3^{2-}}]$ from boron systematics (Figs. 3d, 4). Another approach presented by is to use a constant $K_D$. We selected a $K_D$ value of 0.02 as an example that fits the abiogenic data near the low-$[\mathrm{CO_3^{2-}}]$ range of the data (Fig. 4). While a constant $K_D$ performs better than the linear fit to $\Omega_{\mathrm{Ar}}$, it underestimates the slope of the relationship between $[\mathrm{B(OH)_4^-}]^{\mathrm{fluid}} / [\mathrm{B/Ca}]^{\mathrm{aragonite}}$ and $[\mathrm{CO_3^{2-}}]$ (Fig. 4). This is not surprising because the abiogenic data clearly show the $K_D$ does not remain constant as $[\mathrm{CO_3^{2-}}]$ changes (Fig. 2). Since using a constant $K_D$ will underestimate variability in $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ when applied to corals, we do not recommend this approach.

Figure 5. Predicting $[\mathrm{CO_3^{2-}}]$ from the $K_D$ formulations that themselves depend on $[\mathrm{CO_3^{2-}}]$: (a), Eq. (12) (b), and (c). Each panel shows the residual between a guess of $[\mathrm{CO_3^{2-}}]$ used to calculate $K_D$ and that calculated from Eqs. (13)–(14), plotted against the $[\mathrm{CO_3^{2-}}]$ guess. The final $[\mathrm{CO_3^{2-}}]$ is derived by finding where the residual is minimized for a particular $[\mathrm{B(OH)_4^-}]^{\mathrm{fluid}} / [\mathrm{B/Ca}]^{\mathrm{aragonite}}$ (three of which are plotted as examples in red, blue, and black).
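Applying any of these formulations to coral skeletons also requires converting measured δ11B to calcifying fluid pH. A minimal Python sketch of the standard boron-isotope pH equation follows; the constants (δ11Bsw = 39.61 ‰, αB3−B4 = 1.0272, and pKB ≈ 8.60 at 25 °C and salinity 35) are commonly used literature values and are assumptions of this illustration, not values specified in this note:

```python
import math

# Assumed constants (typical literature values; illustrative only)
D11B_SW = 39.61    # seawater delta-11B (permil)
ALPHA_B = 1.0272   # boron isotope fractionation factor alpha_B3-B4
PKB = 8.597        # borate pK_B at 25 C, salinity 35

def d11b_to_ph(d11b_carb, pkb=PKB, d11b_sw=D11B_SW, alpha=ALPHA_B):
    """Convert measured carbonate delta-11B to fluid pH, assuming only
    borate is incorporated (standard boron-isotope pH equation)."""
    return pkb - math.log10(
        -(d11b_sw - d11b_carb)
        / (d11b_sw - alpha * d11b_carb - 1000.0 * (alpha - 1.0))
    )

# Illustrative coral skeletal delta-11B of 24 permil
ph_cf = d11b_to_ph(24.0)
```

With these assumed constants, a skeletal δ11B of 24 ‰ yields a calcifying fluid pH of roughly 8.5, i.e., elevated above typical open-ocean seawater, consistent with the pH up-regulation discussed in the following sections.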
5 Application to deriving coral calcifying fluid carbonate chemistry

The ability of boron systematics to predict two independent carbonate chemistry parameters allows for calculation of the full carbonate system. This has prompted several recent applications deriving the carbonate chemistry of coral calcifying fluids (e.g., Comeau et al., 2017a). Here, we investigate the differences in derived coral calcifying fluid $[\mathrm{CO_3^{2-}}]$ that arise from the choice of $K_D$ formulation. We use the paired δ11B and B∕Ca data of the “Davies 2” coral from as an example. Derived $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ shows similar seasonality when using the $K_D$ formulations of , , or our new Eq. (12) (Fig. 6). Regardless of which of these three $K_D$ formulations is used, $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ is highest in summer and lowest in winter over a multiyear time series. This is consistent with other reports of B∕Ca seasonality in coral skeletons , and with an independent approach based on Rayleigh modeling of minor elements in coral skeleton . The primary difference among the derived values is that the $K_D$ formulations from and our Eq. (12) produce seasonal cycles with approximately 50 % greater amplitude relative to the $K_D$ formulation. The absolute values of derived $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ are approximately equal for all three formulations at the summertime maxima, but are lower during winter when using the $K_D$ formulations from or our Eq. (12), relative to . Conversely, using the $K_D$ formulation produces the opposite seasonal pattern with an amplitude several times greater than the other $K_D$ formulations. This large discrepancy is not surprising given the behavior of the $K_D$ formulation when retrospectively applied to the fluid composition of abiogenic aragonites (Fig. 3). Figure 6. Application of the four $K_D$ formulations for the “Davies 2” Porites coral data from .
Derived $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ is plotted over multiple years using the $K_D$ formulations of (black), (red), Eq. (12) (blue), and (dashed grey line). Shading represents 1 standard deviation of the systematic errors due to uncertainty in each $K_D$ formulation. Note that (1) the and the Eq. (12) lines plot nearly on top of each other, and (2) $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ derived from the $K_D$ formulation corresponds to the right y axis.

6 A computer code for applying boron systematics to coral skeletons

We present here a user-friendly computer code for deriving $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ and DICcf from boron systematics (Supplement). The function is provided in both MATLAB and R formats, and it calculates $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ and DICcf given inputs of δ11B, B∕Ca, temperature, salinity, and water depth. It allows easy toggling between what we consider the three plausible $K_D$ formulations (, , and our new Eq. 12). Furthermore, the code permits a choice of [B]sw functions since and used the relation between salinity and [B]sw from , whereas and used that of . The carbonate dissociation constants can also be toggled between and . The code follows the calculations of CO2SYS for converting between pH scales and accounting for pressure effects on equilibrium constants, and uses the δ11Bsw of and the αB3−B4 of . Perhaps most importantly, the code propagates known uncertainties into the derivation of $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ and DICcf. These uncertainties are estimated using a Monte Carlo scheme, in which random errors (assuming Gaussian distributions) are added to parameters while repeating the calculations many times. The non-systematic uncertainty of derived values depends on the measurement precisions of δ11B, B∕Ca, temperature, and salinity.
These will depend on the instruments and protocols used, and for δ11B and B∕Ca should be estimated by each laboratory, for example by repeated measurements of an external consistency standard. The systematic errors of derived values depend on the uncertainties of the various $K_D$ formulations; on uncertainties associated with δ11Bsw , [B]sw , αB3−B4 , and pKB (Dickson, 1990); and, if known, on any uncertainties in the accuracy of δ11B, B∕Ca, temperature, and salinity measurements.

7 Relationships among coral calcifying fluid carbonate chemistry parameters

With our code, the parameter space of $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ derived from δ11B and B∕Ca, and the differences among $K_D$ formulations, can be readily visualized (Fig. 7). This enables future applications of boron systematics to coral skeletons to consider how the choice of $K_D$ formulation affects the particular question being investigated. We also apply the code to calculate carbonate system parameters using published δ11B and B∕Ca datasets (Fig. 8). Coral δ11B is tightly related to pHcf, varying only slightly with changes in seawater temperature and salinity (Fig. 8b). Likewise, B∕Ca is primarily a function of $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$, but also depends in part on borate concentration, and hence on pHcf (Fig. 8c). For this reason, it is difficult to interpret coral B∕Ca directly, and instead we recommend pairing δ11B and B∕Ca to calculate the full calcifying fluid carbonate chemistry. Interestingly, this analysis shows that coral calcifying fluid $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ and DIC are consistently positively correlated across studies (Fig. 8f), whereas the sign of correlations between pH and both $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ and DIC varies (Fig. 8d–e).
Assuming $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ is the carbonate system parameter most important for aragonite precipitation, these patterns may suggest that elevating DICcf is critical to the coral calcification process, although up-regulating pH is still important for shifting the carbonate system to favor $\mathrm{CO_3^{2-}}$ over $\mathrm{HCO_3^-}$. In addition, the large changes in pH, DIC, and $[\mathrm{CO_3^{2-}}]$ that occur within the calcifying fluid relative to natural variability in the open ocean likely preclude the utility of boron systematics for reconstructing seawater carbonate chemistry, reinforcing previous conclusions made for both corals and foraminifera . Rather, the boron systematics of coral skeletons are primarily useful for investigating calcifying fluid dynamics and understanding coral biomineralization.

Figure 7. Application of our computer code to visualizing the parameter space of $[\mathrm{CO_3^{2-}}]$ (µmol kg−1) derived from B∕Ca and δ11B at 25 °C and salinity 35. The upper left panel shows absolute $[\mathrm{CO_3^{2-}}]$ derived with the $K_D$ of (“H16”), whereas the other panels show the differences in $[\mathrm{CO_3^{2-}}]$ among the $K_D$ formulations of H16, (“M17”), and our new Eq. (12). The black dots show coral data from the literature (see Fig. 8 legend below). Note that the actual $[\mathrm{CO_3^{2-}}]$ derived for the coral data will also depend on variations of the in situ temperature and salinity, which are not accounted for in the plots.
In contrast to boron systematics, which consistently show elevated DICcf, microelectrode measurements of $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ and pHcf imply that DICcf is depleted relative to that of seawater (typically < 2000 µmol kg−1). The reason for this discrepancy is not yet clear, and resolving it should be a high priority because whether DICcf is greater or less than that of seawater implies different calcification strategies. Reducing DICcf may be an efficient strategy to increase pHcf because the reduced buffering capacity means that less energy is required to elevate pH via proton pumping . Alternatively, increasing DICcf means that a higher $\Omega_{\mathrm{Ar}}$ is achieved for a given pHcf. Distinguishing between these possibilities has key implications for whether calcification is limited by DICcf (or CO2 diffusion into the calcifying fluid) or by $\Omega_{\mathrm{Ar}}$. Two independent approaches to quantifying calcifying fluid carbonate chemistry are consistent with the high-DICcf scenario. First, coral U∕Ca ratios imply that DICcf is between 2600 and 6100 µmol kg−1 , which is a similar range to that derived from boron systematics (Fig. 8). Second, boron systematics-derived $[\mathrm{CO_3^{2-}}]$ is consistent with a combination of Raman-spectroscopy-derived $\Omega_{\mathrm{Ar}}$ and trace element ratios (Mg∕Ca and Sr∕Ca) . Nevertheless, since low DICcf has been derived from microelectrodes in several species , studies combining multiple approaches (i.e., geochemistry and microelectrodes) on the same specimens will be essential for resolving the DICcf discrepancy.
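The Monte Carlo scheme described in Sect. 6 can be sketched compactly: perturb each input with Gaussian noise and repeat the calculation many times. The Python illustration below propagates an assumed δ11B measurement uncertainty through the standard δ11B-to-pH conversion as the forward model; the constants are typical literature values, and both the δ11B value and its uncertainty are illustrative, not data from this note:

```python
import math
import random

def d11b_to_ph(d11b, pkb=8.597, d11b_sw=39.61, alpha=1.0272):
    """Standard boron-isotope pH equation (assumed illustrative constants)."""
    return pkb - math.log10(
        -(d11b_sw - d11b) / (d11b_sw - alpha * d11b - 1000.0 * (alpha - 1.0))
    )

def monte_carlo_ph(d11b_mean, d11b_sd, n=20000, seed=1):
    """Propagate a Gaussian delta-11B measurement error into derived pH by
    perturbing the input and repeating the calculation n times."""
    rng = random.Random(seed)
    phs = [d11b_to_ph(rng.gauss(d11b_mean, d11b_sd)) for _ in range(n)]
    mean = sum(phs) / n
    sd = math.sqrt(sum((p - mean) ** 2 for p in phs) / (n - 1))
    return mean, sd

# Illustrative: delta-11B = 24.0 +/- 0.2 permil (2-sigma would be reported
# by each laboratory from replicate standards, as described in Sect. 6)
mean_ph, sd_ph = monte_carlo_ph(24.0, 0.2)
```

The same pattern extends to the full calculation: temperature, salinity, B∕Ca, and the $K_D$ parameters are each perturbed within their uncertainties, and the spread of the resulting $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ and DICcf values gives the propagated error.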
Figure 8. Correlations among coral calcifying fluid carbonate system parameters based on published boron systematics datasets: (a) B∕Ca and δ11B, (b) pHcf and δ11B, (c) $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ and B∕Ca, (d) pH and $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$, (e) pHcf and DICcf, and (f) $[\mathrm{CO_3^{2-}}]_{\mathrm{cf}}$ and DICcf. Colors show different studies, and lines are plotted for significant (p < 0.05) correlations using all the data within each study. The grey area shows the convex hull of the parameter space covered in the abiogenic experiments of . Calculations are performed using the $K_D$ formulation.

8 Which KD formulation to use?

Despite the availability of abiogenic B∕Ca partitioning data from two experiments , and several attempts to fit the data , it is important to recognize that uncertainties remain, in particular regarding the controlling factors and thus the appropriate $K_D$ formulation to apply. From a mechanistic viewpoint, the key fundamental question that remains is whether the abiogenic data of and are directly comparable and thus should be fit with a continuous function (e.g., Eq. 12), or if they are incomparable because used NaCl solutions and used seawater. If they are comparable, then our new Eq. (12) or a similar fit to both datasets is the most appropriate $K_D$ formulation. Calcite precipitation studies provide some support for the hypothesis that crystal growth rate or $\Omega_{\mathrm{Ar}}$ influences B∕Ca partitioning , but it is not yet known if these results can be extended to aragonite precipitation from seawater. Alternatively, if the solution chemistry makes the two experiments incomparable, the $K_D$ data are most likely the more suitable choice for corals because the experiments were conducted with seawater at $\Omega_{\mathrm{Ar}}$ comparable to that of coral calcifying fluids , and they can be fit as a function of either $[\mathrm{CO_3^{2-}}]$ or [H+].
However, it is important to recognize that the parameter space of CO2 system parameters covered in the experiments includes some, but not all, of the published coral data (Fig. 8). Further, since we are unable to conclusively determine whether the two abiogenic datasets are directly comparable, all three KD formulations may be considered equally valid until proven otherwise. Additional abiogenic experiments aimed at this question will clearly be useful in refining the boron systematics proxies. From a practical standpoint, the two previously published KD formulations may be the most appropriate. Both produce unique solutions of $[\mathrm{CO}_3^{2-}]$ that increase with $\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}$, and they effectively reconstruct fluid $[\mathrm{CO}_3^{2-}]$ using the abiogenic aragonites precipitated from seawater. While our Eq. (12) produces $[\mathrm{CO}_3^{2-}]$cf estimates that are nearly identical, under most δ11B and B/Ca combinations, to those derived using the earlier KD formulation (Fig. 7), Eq. (12) can have nonunique solutions, which could complicate interpretations of $[\mathrm{CO}_3^{2-}]$cf in some cases. A final consideration is that two of the KD formulations (including our new Eq. 12) are fit to $[\mathrm{CO}_3^{2-}]$. Fitting Eq. (12) to a wider range of $[\mathrm{CO}_3^{2-}]$ helps to account for the different solution chemistries and associated growth rates of the two abiogenic precipitation studies, but ΩAr or crystal growth rate may be the true controlling factor. However, no temperature dependence of B/Ca partitioning has been found in the abiogenic experiments, as would be expected if precipitation rate influenced KD.
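Whichever KD formulation is chosen, the two boron steps that precede it are the same: convert skeletal δ11B to pHcf, then convert pHcf to the fluid borate concentration. A minimal Python sketch of those two steps is given below. This is an illustrative sketch, not the paper's Supplement code; the constants (δ11Bsw = 39.61 ‰, αB = 1.0272, pKB ≈ 8.60 at 25 °C, BT ≈ 432.6 µmol kg−1 at S = 35) are standard literature values, and the example δ11B input is hypothetical:

```python
import math

# Standard literature constants (assumed here, not taken from this paper's code):
D11B_SW = 39.61     # seawater d11B, permil (Foster et al., 2010)
ALPHA_B = 1.0272    # boron isotope fractionation factor (Klochko et al., 2006)
PK_B = 8.60         # approximate pK_B at 25 C, S = 35 (Dickson, 1990)
B_TOTAL = 432.6e-6  # total dissolved boron, mol/kg at S = 35 (Lee et al., 2010)

def ph_from_d11b(d11b_carb):
    """Calcifying fluid pH from skeletal d11B, assuming only borate is incorporated."""
    num = -(D11B_SW - d11b_carb)
    den = D11B_SW - ALPHA_B * d11b_carb - 1000.0 * (ALPHA_B - 1.0)
    return PK_B - math.log10(num / den)

def borate_from_ph(ph, b_total=B_TOTAL):
    """[B(OH)4-] (mol/kg) from pH via the boric acid/borate equilibrium."""
    return b_total / (1.0 + 10.0 ** (PK_B - ph))
```

For a typical skeletal δ11B of 24 ‰ this gives pHcf ≈ 8.49 and [B(OH)4−] ≈ 190 µmol kg−1; the ratio of that borate concentration to the measured B/Ca is then the input common to all of the KD formulations discussed above.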
While growth rate is likely related to $[\mathrm{CO}_3^{2-}]$, the two could decouple with changes in temperature or if coral calcifying fluid [Ca2+]cf departs from seawater levels. Recent evidence combining Raman spectroscopy with boron systematics suggests [Ca2+]cf is within ∼25 % of seawater (DeCarlo et al., 2018), but this has yet to be tested on a range of coral species and locations. Thus, future abiogenic experiments designed to test under what conditions $[\mathrm{CO}_3^{2-}]$ or crystal growth rates control B/Ca partitioning, as well as development of proxies for [Ca2+]cf, may improve the accuracy of deriving calcifying fluid carbonate chemistry from boron systematics.

9 Conclusions

Recent abiogenic aragonite precipitation experiments have made possible the application of boron systematics to quantifying the full carbonate system of coral calcifying fluid. However, a number of approaches to doing so have been utilized without a comprehensive analysis of which KD formulations are plausible (i.e., can reproduce the experimental fluid chemistry) or of the implications for interpreting coral skeletons. We evaluated four potential B/Ca KD formulations involving $\mathrm{B(OH)}_4^-$ substituting for $\mathrm{CO}_3^{2-}$ in the aragonite lattice. Our analysis suggests that there are at least three plausible formulations, including our new Eq. (12), that can be used to determine the KD and its dependence on fluid chemistry. Despite the differences among plausible approaches, we show that all three produce similar patterns in derived coral calcifying fluid carbonate chemistry. Nevertheless, subtle differences in derived carbonate chemistry remain among the approaches, and addressing these differences should be the target of future abiogenic aragonite precipitation experiments.
Finally, we present a code that computes coral calcifying fluid carbonate chemistry from boron systematics and allows for comparison among different KD formulations.

Code availability. Codes are available in the Supplement.

Appendix A

In the main text, we used numerical solutions to predict $[\mathrm{CO}_3^{2-}]$ based on Eq. (14). Here, we show an analytical solution to Eq. (14) for the Ω-linear KD formulation, in which KD is fit to ΩAr with a linear regression of the form

$$K_D^{\mathrm{B/Ca}} = a\Omega + b, \qquad\text{(A1)}$$

where

$$\Omega = \frac{[\mathrm{CO}_3^{2-}][\mathrm{Ca}^{2+}]}{K_{\mathrm{sp}}}, \qquad\text{(A2)}$$

with concentrations in units of moles per kilogram and where Ksp is the solubility product. Inserting Eq. (A2) into Eq. (A1) yields

$$K_D^{\mathrm{B/Ca}} = a\,\frac{[\mathrm{CO}_3^{2-}][\mathrm{Ca}^{2+}]}{K_{\mathrm{sp}}} + b, \qquad\text{(A3)}$$

and then inserting Eq. (A3) into Eq. (14) of the main text yields

$$[\mathrm{CO}_3^{2-}] = \left(a\,\frac{[\mathrm{CO}_3^{2-}][\mathrm{Ca}^{2+}]}{K_{\mathrm{sp}}} + b\right)\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}, \qquad\text{(A4)}$$

where $[\mathrm{CO}_3^{2-}]$ is in units of moles per kilogram. We must now solve Eq. (A4) for $[\mathrm{CO}_3^{2-}]$.
First, expand the right-hand side:

$$[\mathrm{CO}_3^{2-}] = a\,\frac{[\mathrm{CO}_3^{2-}][\mathrm{Ca}^{2+}]}{K_{\mathrm{sp}}}\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}} + b\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}. \qquad\text{(A5)}$$

Multiply both sides by Ksp:

$$[\mathrm{CO}_3^{2-}]\,K_{\mathrm{sp}} = a\,[\mathrm{CO}_3^{2-}][\mathrm{Ca}^{2+}]\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}} + b\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}\,K_{\mathrm{sp}}. \qquad\text{(A6)}$$

Collect all the $[\mathrm{CO}_3^{2-}]$ terms on the left-hand side:

$$[\mathrm{CO}_3^{2-}]\,K_{\mathrm{sp}} - a\,[\mathrm{CO}_3^{2-}][\mathrm{Ca}^{2+}]\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}} = b\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}\,K_{\mathrm{sp}}. \qquad\text{(A7)}$$

Factor out $[\mathrm{CO}_3^{2-}]$:

$$[\mathrm{CO}_3^{2-}]\left(K_{\mathrm{sp}} - a\,[\mathrm{Ca}^{2+}]\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}\right) = b\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}\,K_{\mathrm{sp}}. \qquad\text{(A8)}$$

Solve for $[\mathrm{CO}_3^{2-}]$:

$$[\mathrm{CO}_3^{2-}] = \frac{b\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}\,K_{\mathrm{sp}}}{K_{\mathrm{sp}} - a\,[\mathrm{Ca}^{2+}]\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}}. \qquad\text{(A9)}$$

In seawater at 25 °C and salinity 34, [Ca2+] is approximately 0.01 mol kg−1 and Ksp is $6.54\times 10^{-7}$. For the Ω-linear fit, $a = 1.48\times 10^{-4}$ and $b = -1.30\times 10^{-4}$. Inserting these values in Eq. (A9) yields

$$[\mathrm{CO}_3^{2-}] = \frac{(-1.30\times 10^{-4})\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}\,(6.54\times 10^{-7})}{(6.54\times 10^{-7}) - (1.48\times 10^{-4})(0.01)\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}} = \frac{-8.50\times 10^{-11}\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}}{6.54\times 10^{-7} - 1.48\times 10^{-6}\,\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}}. \qquad\text{(A10)}$$

The denominator equals zero (i.e., the solution is undefined) when $\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}} = \frac{6.54\times 10^{-7}}{1.48\times 10^{-6}} = 0.44$. If $\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}} < 0.44$, then the denominator is positive, and since the numerator is always negative, the predicted $[\mathrm{CO}_3^{2-}]$ will be negative.
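The closed-form result in Eq. (A9), with the numerical constants substituted as in Eq. (A10), is easy to check in a few lines of Python. This is an independent sketch, not the paper's Supplement code, and the example input values are hypothetical, although chosen to be in the range of typical coral measurements:

```python
def co3_closed_form(boh4_fluid, b_ca_arag,
                    a=1.48e-4, b=-1.30e-4, ksp=6.54e-7, ca=0.01):
    """[CO3^2-] (mol/kg) from Eq. (A9) for the Omega-linear KD fit.

    boh4_fluid : [B(OH)4-] of the fluid, mol/kg
    b_ca_arag  : B/Ca of the aragonite, mol/mol
    """
    r = boh4_fluid / b_ca_arag          # the ratio appearing in Eq. (A9)
    return (b * r * ksp) / (ksp - a * ca * r)

# e.g. 200 umol/kg borate and B/Ca = 50 umol/mol gives r = 4,
# well above the r = 0.44 singularity, so the result is positive:
co3 = co3_closed_form(2.0e-4, 5.0e-5)   # roughly 6.5e-5 mol/kg
```

For r below approximately 0.44 the formula returns a negative (unphysical) concentration, matching the sign analysis above.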
Predicted $[\mathrm{CO}_3^{2-}]$ will be highest when the denominator is a small negative number, which occurs when $\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}$ is slightly greater than 0.44. As $\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}$ increases above 0.44, the absolute value of the denominator increases more than that of the numerator because the coefficient multiplying $\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}$ is of order 10−6 in the denominator but only of order 10−11 in the numerator. The implication is that predicted $[\mathrm{CO}_3^{2-}]$ will decrease as $\frac{[\mathrm{B(OH)}_4^-]^{\mathrm{fluid}}}{[\mathrm{B/Ca}]^{\mathrm{aragonite}}}$ increases beyond 0.44. This is the same conclusion reached in the main text, and it is the opposite of the trend observed in the abiogenic aragonites (Fig. 4).

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. The authors thank Glenn Gaetani for valuable comments. This study was funded by an ARC Laureate Fellowship (FL120100049) awarded to Malcolm T. McCulloch, and the ARC Centre of Excellence for Coral Reef Studies (CE140100020).

Edited by: Markus Kienast. Reviewed by: two anonymous referees.

References

Adkins, J. F., Boyle, E. A., Curry, W. B., and Lutringer, A.: Stable isotopes in deep-sea corals and a new mechanism for "vital effects", Geochim. Cosmochim. Ac., 67, 1129–1143, 2003.

Al-Horani, F. A., Al-Moghrabi, S.
M., and De Beer, D.: The mechanism of calcification and its relation to photosynthesis and respiration in the scleractinian coral Galaxea fascicularis, Mar. Biol., 142, 419–426, https://doi.org/10.1007/s00227-002-0981-8, 2003.

Allen, K. A. and Hönisch, B.: The planktic foraminiferal B/Ca proxy for seawater carbonate chemistry: A critical evaluation, Earth Planet. Sc. Lett., 345, 203–211, 2012.

Allison, N.: Reconstructing coral calcification fluid dissolved inorganic carbon chemistry from skeletal boron: An exploration of potential controls on coral aragonite B/Ca, Heliyon, 3, e00387, https://doi.org/10.1016/j.heliyon.2017.e00387, 2017.

Allison, N., Cohen, I., Finch, A. A., Erez, J., and Tudhope, A. W.: Corals concentrate dissolved inorganic carbon to facilitate calcification, Nat. Commun., 5, 5741, https://doi.org/10.1038/ncomms6741, 2014.

Balan, E., Noireaux, J., Mavromatis, V., Saldi, G. D., Montouillout, V., Blanchard, M., Pietrucci, F., Gervais, C., Rustad, J. R., Schott, J., and Gaillardet, J.: Theoretical isotopic fractionation between structural boron in carbonates and aqueous boric acid and borate ion, Geochim. Cosmochim. Ac., 222, 117–129, https://doi.org/10.1016/j.gca.2017.10.017, 2018.

Barnes, D. J.: Coral skeletons: an explanation of their growth and structure, Science, 170, 1305–1308, https://doi.org/10.1126/science.170.3964.1305, 1970.

Burton, E. A. and Walter, L. M.: Relative precipitation rates of aragonite and Mg calcite from seawater: Temperature or carbonate ion control?, Geology, 15, 111–114, 1987.

Cai, W.-J., Ma, Y., Hopkinson, B. M., Grottoli, A. G., Warner, M. E., Ding, Q., Hu, X., Yuan, X., Schoepf, V., Xu, H., Han, C., Melman, T. F., Hoadley, K. D., Pettay, D. T., Matsui, Y., Baumann, J.
H., Levas, S., Ying, Y., and Wang, Y.: Microelectrode characterization of coral daytime interior pH and carbonate chemistry, Nat. Commun., 7, 11144, https://doi.org/10.1038/ncomms11144, 2016.

Cohen, A. L. and McConnaughey, T. A.: Geochemical Perspectives on Coral Mineralization, Rev. Mineral. Geochem., 54, 151–187, https://doi.org/10.2113/0540151, 2003.

Comeau, S., Cornwall, C. E., and McCulloch, M. T.: Decoupling between the response of coral calcifying fluid pH and calcification to ocean acidification, Sci. Rep., 7, 7573, https://doi.org/10.1038/s41598-017-08003-z, 2017a.

Comeau, S., Tambutté, E., Carpenter, R. C., Edmunds, P. J., Evensen, N. R., Allemand, D., Ferrier-Pagès, C., Tambutté, S., and Venn, A. A.: Coral calcifying fluid pH is modulated by seawater carbonate chemistry not solely seawater pH, P. Roy. Soc. Lond. B. Bio., 284, 20161669, https://doi.org/10.1098/rspb.2016.1669, 2017b.

DeCarlo, T. M., Gaetani, G. A., Holcomb, M., and Cohen, A. L.: Experimental determination of factors controlling U/Ca of aragonite precipitated from seawater: implications for interpreting coral skeleton, Geochim. Cosmochim. Ac., 162, 151–165, https://doi.org/10.1016/j.gca.2015.04.016, 2015.

DeCarlo, T. M., Gaetani, G. A., Cohen, A. L., Foster, G. L., Alpert, A. E., and Stewart, J.: Coral Sr-U Thermometry, Paleoceanography, 31, 626–638, https://doi.org/10.1002/2015PA002908, 2016.

DeCarlo, T. M., D'Olivo, J. P., Foster, T., Holcomb, M., Becker, T., and McCulloch, M. T.: Coral calcifying fluid aragonite saturation states derived from Raman spectroscopy, Biogeosciences, 14, 5253–5269, https://doi.org/10.5194/bg-14-5253-2017, 2017.

DeCarlo, T. M., Comeau, S., Cornwall, C. E., and McCulloch, M. T.: Coral resistance to ocean acidification linked to increased calcium at the site of calcification, Proc. R. Soc. B, 285, 20180564, https://doi.org/10.1098/rspb.2018.0564, 2018.

DePaolo, D.
J.: Surface kinetic model for isotopic and trace element fractionation during precipitation of calcite from aqueous solutions, Geochim. Cosmochim. Ac., 75, 1039–1056, https://doi.org/10.1016/j.gca.2010.11.020, 2011.

Dickson, A. G.: Standard potential of the reaction: AgCl(s) + 1/2 H2(g) = Ag(s) + HCl(aq), and the standard acidity constant of the ion HSO4− in synthetic sea water from 273.15 to 318.15 K, J. Chem. Thermodyn., 22, 113–127, 1990.

Dickson, A. G. and Millero, F. J.: A comparison of the equilibrium constants for the dissociation of carbonic acid in seawater media, Deep-Sea Res., 34, 1733–1743, https://doi.org/10.1016/0198-0149(87)90021-5, 1987.

D'Olivo, J. P. and McCulloch, M. T.: Response of coral calcification and calcifying fluid composition to thermally induced bleaching stress, Sci. Rep., 7, 2207, https://doi.org/10.1038/s41598-017-02306-x, 2017.

Foster, G. L., Pogge von Strandmann, P. A. E., and Rae, J. W. B.: Boron and magnesium isotopic composition of seawater, Geochem. Geophys. Geosys., 11, Q08015, https://doi.org/10.1029/2010GC003201, 2010.

Gaetani, G. A. and Cohen, A. L.: Element partitioning during precipitation of aragonite from seawater: A framework for understanding paleoproxies, Geochim. Cosmochim. Ac., 70, 4617–4634, https://doi.org/10.1016/j.gca.2006.07.008, 2006.

Gaetani, G. A., Cohen, A. L., Wang, Z., and Crusius, J.: Rayleigh-Based, Multi-Element Coral Thermometry: a Biomineralization Approach to Developing Climate Proxies, Geochim. Cosmochim. Ac., 75, 1920–1932, https://doi.org/10.1016/j.gca.2011.01.010, 2011.

Gagnon, A. C., Adkins, J. F., and Erez, J.: Seawater transport during coral biomineralization, Earth Planet. Sc. Lett., 329, 150–161, 2012.

Holcomb, M., Venn, A. A., Tambutté, E., Tambutté, S., Allemand, D., Trotter, J., and McCulloch, M.: Coral calcifying fluid pH dictates response to ocean acidification, Sci. Rep., 4, 5207, https://doi.org/10.1038/srep05207, 2014.
Holcomb, M., DeCarlo, T., Gaetani, G., and McCulloch, M.: Factors affecting B/Ca ratios in synthetic aragonite, Chem. Geol., 437, 67–76, https://doi.org/10.1016/j.chemgeo.2016.05.007, 2016.

Hönisch, B., Hemming, N. G., Grottoli, A. G., Amat, A., Hanson, G. N., and Bijma, J.: Assessing scleractinian corals as recorders for paleo-pH: Empirical calibration and vital effects, Geochim. Cosmochim. Ac., 68, 3675–3685, 2004.

Inoue, M., Suwa, R., Suzuki, A., Sakai, K., and Kawahata, H.: Effects of seawater pH on growth and skeletal U/Ca ratios of Acropora digitifera coral polyps, Geophys. Res. Lett., 38, L12809, https://doi.org/10.1029/2011GL047786, 2011.

Klochko, K., Kaufman, A. J., Yao, W., Byrne, R. H., and Tossell, J. A.: Experimental measurement of boron isotope fractionation in seawater, Earth Planet. Sci. Lett., 248, 276–285, https://doi.org/10.1016/j.epsl.2006.05.034, 2006.

Kubota, K., Yokoyama, Y., Ishikawa, T., Suzuki, A., and Ishii, M.: Rapid decline in pH of coral calcification fluid due to incorporation of anthropogenic CO2, Sci. Rep., 7, 7694, https://doi.org/10.1038/s41598-017-07680-0, 2017.

Lee, K., Kim, T.-W., Byrne, R., Millero, F., Feely, R., and Liu, Y.-M.: The universal ratio of boron to chlorinity for the North Pacific and North Atlantic oceans, Geochim. Cosmochim. Ac., 74, 1801–1811, https://doi.org/10.1016/j.gca.2009.12.027, 2010.

Lewis, E., Wallace, D., and Allison, L. J.: Program developed for CO2 system calculations, Tech. rep., Brookhaven Natl. Lab., Dep. of Appl. Sci., Upton, New York, 1998.

Lueker, T. J., Dickson, A. G., and Keeling, C. D.: Ocean pCO2 calculated from dissolved inorganic carbon, Mar. Chem., 70, 105–119, 2000.
Mavromatis, V., Montouillout, V., Noireaux, J., Gaillardet, J., and Schott, J.: Characterization of boron incorporation and speciation in calcite and aragonite from co-precipitation experiments under controlled pH, temperature and precipitation rate, Geochim. Cosmochim. Ac., 150, 299–313, https://doi.org/10.1016/j.gca.2014.10.024, 2015.

McConnaughey, T.: 13C and 18O isotopic disequilibrium in biological carbonates: I. Patterns, Geochim. Cosmochim. Ac., 53, 151–162, https://doi.org/10.1016/0016-7037(89)90282-2, 1989.

McCulloch, M., Trotter, J., Montagna, P., Falter, J., Dunbar, R., Freiwald, A., Försterra, G., López Correa, M., Maier, C., and Rüggeberg, A.: Resilience of cold-water scleractinian corals to ocean acidification: Boron isotopic systematics of pH and saturation state up-regulation, Geochim. Cosmochim. Ac., 87, 21–34, 2012a.

McCulloch, M. T., Falter, J., Trotter, J., and Montagna, P.: Coral resilience to ocean acidification and global warming through pH up-regulation, Nat. Clim. Change, 2, 623–627, 2012b.

McCulloch, M. T., D'Olivo Cordero, J. P., Falter, J., Holcomb, M., and Trotter, J. A.: Coral calcification in a changing World: the interactive dynamics of pH and DIC up-regulation, Nat. Commun., 8, 15686, https://doi.org/10.1038/ncomms15686, 2017.

Noireaux, J., Mavromatis, V., Gaillardet, J., Schott, J., Montouillout, V., Louvat, P., Rollion-Bard, C., and Neuville, D.: Crystallographic control on the boron isotope paleo-pH proxy, Earth Planet. Sci. Lett., 430, 398–407, https://doi.org/10.1016/j.epsl.2015.07.063, 2015.

Rae, J. W., Foster, G. L., Schmidt, D. N., and Elliott, T.: Boron isotopes and B/Ca in benthic foraminifera: Proxies for the deep ocean carbonate system, Earth Planet. Sci.
Lett., 302, 403–413, https://doi.org/10.1016/j.epsl.2010.12.034, 2011.

Ries, J. B.: A physicochemical framework for interpreting the biological calcification response to CO2-induced ocean acidification, Geochim. Cosmochim. Ac., 75, 4053–4064, 2011.

Riley, J. P. and Tongudai, M.: The major cation/chlorinity ratios in sea water, Chem. Geol., 2, 263–269, 1967.

Rollion-Bard, C., Blamart, D., Cuif, J. P., and Dauphin, Y.: In situ measurements of oxygen isotopic composition in deep-sea coral, Lophelia pertusa: Re-examination of the current geochemical models of biomineralization, Geochim. Cosmochim. Ac., 74, 1338–1349, 2010.

Rollion-Bard, C., Blamart, D., Trebosc, J., Tricot, G., Mussi, A., and Cuif, J. P.: Boron isotopes as pH proxy: A new look at boron speciation in deep-sea corals using 11B MAS NMR and EELS, Geochim. Cosmochim. Ac., 75, 1003–1012, 2011.

Ross, C. L., Falter, J. L., and McCulloch, M. T.: Active modulation of the calcifying fluid carbonate chemistry (δ11B, B/Ca) and seasonally invariant coral calcification at sub-tropical limits, Sci. Rep., 7, 13830, https://doi.org/10.1038/s41598-017-14066-9, 2017.

Ruiz-Agudo, E., Putnis, C., Kowacz, M., Ortega-Huertas, M., and Putnis, A.: Boron incorporation into calcite during growth: Implications for the use of boron in carbonates as a pH proxy, Earth Planet. Sci. Lett., 345–348, 9–17, https://doi.org/10.1016/j.epsl.2012.06.032, 2012.

Schoepf, V., Levas, S. J., Rodrigues, L. J., McBride, M. O., Aschaffenburg, M. D., Matsui, Y., Warner, M. E., Hughes, A. D., and Grottoli, A. G.: Kinetic and metabolic isotope effects in coral skeletal carbon isotopes: A re-evaluation using experimental coral bleaching as a case study, Geochim. Cosmochim. Ac., 146, 164–178, https://doi.org/10.1016/j.gca.2014.09.033, 2014.

Schoepf, V., Jury, C. P., Toonen, R. J., and McCulloch, M. T.: Coral calcification mechanisms facilitate adaptive responses to ocean acidification, P. Roy. Soc. Lond. B.
Bio., 284, 20172117, https://doi.org/10.1098/rspb.2017.2117, 2017.

Sinclair, D. J.: Correlated trace element "vital effects" in tropical corals: a new geochemical tool for probing biomineralization, Geochim. Cosmochim. Ac., 69, 3265–3284, 2005.

Stewart, J. A., Anagnostou, E., and Foster, G. L.: An improved boron isotope pH proxy calibration for the deep-sea coral Desmophyllum dianthus through sub-sampling of fibrous aragonite, Chem. Geol., 447, 148–160, https://doi.org/10.1016/j.chemgeo.2016.10.029, 2016.

Tambutté, E., Tambutté, S., Segonds, N., Zoccola, D., Venn, A., Erez, J., and Allemand, D.: Calcein labelling and electrophysiology: insights on coral tissue permeability and calcification, P. Roy. Soc. Lond. B. Bio., 279, 19–27, https://doi.org/10.1098/rspb.2011.0733, 2012.

Trotter, J., Montagna, P., McCulloch, M., Silenzi, S., Reynaud, S., Mortimer, G., Martin, S., Ferrier-Pagès, C., Gattuso, J. P., and Rodolfo-Metalpa, R.: Quantifying the pH "vital effect" in the temperate zooxanthellate coral Cladocora caespitosa: Validation of the boron seawater pH proxy, Earth Planet. Sci. Lett., 303, 163–173, 2011.

Uchikawa, J., Penman, D. E., Zachos, J. C., and Zeebe, R. E.: Experimental evidence for kinetic effects on B/Ca in synthetic calcite: Implications for potential B(OH)4− and B(OH)3 incorporation, Geochim. Cosmochim. Ac., 150, 171–191, https://doi.org/10.1016/j.gca.2014.11.022, 2015.

Uchikawa, J., Harper, D. T., Penman, D. E., Zachos, J. C., and Zeebe, R. E.: Influence of solution chemistry on the boron content in inorganic calcite grown in artificial seawater, Geochim. Cosmochim. Ac., 218, 291–307, https://doi.org/10.1016/j.gca.2017.09.016, 2017.

Uppstrom, L.: The boron/chlorinity ratio of deep-sea water from the Pacific Ocean, Deep Sea Research and Oceanographic Abstracts, 21, 161–162, 1974.

van der Weijden, C.
and van der Weijden, R.: Calcite growth: Rate dependence on saturation, on ratios of dissolved calcium and (bi)carbonate and on their complexes, J. Cryst. Growth, 394, 137–144, https://doi.org/10.1016/j.jcrysgro.2014.02.042, 2014.

Venn, A., Tambutte, E., Holcomb, M., Allemand, D., and Tambutte, S.: Live tissue imaging shows reef corals elevate pH under their calcifying tissue relative to seawater, PLoS One, 6, e20013, https://doi.org/10.1371/journal.pone.0020013, 2011.

Venn, A. A., Tambutté, E., Holcomb, M., Laurent, J., Allemand, D., and Tambutté, S.: Impact of seawater acidification on pH at the tissue-skeleton interface and calcification in reef corals, P. Natl. Acad. Sci. USA, 110, 1634–1639, https://doi.org/10.1073/pnas.1216153110, 2013.

Watson, E. B.: A conceptual model for near-surface kinetic controls on the trace-element and stable isotope composition of abiogenic calcite crystals, Geochim. Cosmochim. Ac., 68, 1473–1488, 2004.

Wu, H. C., Dissard, D., Le Cornec, F., Thil, F., Tribollet, A., Moya, A., and Douville, E.: Primary Life Stage Boron Isotope and Trace Elements Incorporation in Aposymbiotic Acropora millepora Coral under Ocean Acidification and Warming, Frontiers Mar. Sci., 4, 129, https://doi.org/10.3389/fmars.2017.00129, 2017.

Yu, J., Foster, G. L., Elderfield, H., Broecker, W. S., and Clark, E.: An evaluation of benthic foraminiferal B/Ca and δ11B for deep ocean carbonate ion and pH reconstructions, Earth Planet. Sci. Lett., 293, 114–120, https://doi.org/10.1016/j.epsl.2010.02.029, 2010.

Zeebe, R. E. and Wolf-Gladrow, D. A.: CO2 in seawater: equilibrium, kinetics, isotopes, vol. 65, Elsevier Science Limited, Amsterdam, 2001.
# 2.2 Assignment Statement and Variables - Mathematics

#### Assignments

Anything can be stored as a variable using the single equals sign, as in `x = 6`. This is an assignment operator, which creates the number 6 and stores it under the name `x`. Now that the variable is stored, we can use it in calculations.

Variables in Julia, much like in other languages, are primarily sequences of alphanumeric characters as well as the underscore `_`. A variable needs to start with an alphabetic character (or an underscore), and after the first character it can also contain numbers. Julia also allows many Unicode symbols in variable names, though not everything. For example, all of the Greek letters are allowed, so `α = 45` is valid. To get a Greek letter in Jupyter or the REPL, type `\alpha`, hit the TAB key, and it will be turned into an `α`.

##### Storing Variables in a Virtual Whiteboard

The details of how variables are stored in computer hardware aren't necessary here; however, thinking of storage as writing variables and values on a whiteboard is a helpful paradigm. Imagine a whiteboard with a column of variable names and a column of values. For example, after the assignments `x = 6`, `y = -1`, and `z = 8.5`, you can think of the whiteboard looking like:

| variable | value |
| -------- | ----- |
| x        | 6     |
| y        | -1    |
| z        | 8.5   |

If we evaluate any expression containing any of these variables, the value is looked up and substituted into the expression. For example, `2y` looks up the value of `y` (which is -1) and multiplies it by 2. As you can see, the result is -2.

If we change one of the values with a new assignment to `y`, the right-hand side is evaluated first by looking up the current value of `y`; suppose the result is 4. The 4 is then placed on the whiteboard, which will now look like:

| variable | value |
| -------- | ----- |
| x        | 6     |
| y        | 4     |
| z        | 8.5   |

If you are thinking through how a piece of code works, you will often need to get to the point of writing down a version of this whiteboard.
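The whiteboard walk-through above can be reproduced in a Julia session. The reassignment `y = y + 5` below is an illustrative choice (any right-hand side that evaluates to 4 would produce the second whiteboard):

```julia
x = 6        # assignment: store 6 under the name x
y = -1
z = 8.5

2y           # looks up y and doubles it, giving -2

y = y + 5    # the RHS uses the current value of y (-1), so y is now 4
```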
# Lipschitz continuity is equivalent to absolute continuity with bounded derivative

I am trying to show that a function is Lipschitz continuous with constant $M$ if and only if it is absolutely continuous and $|f'(x)| \leq M$. I think I am on the right track:

Proof: ($\Rightarrow$) Let $f$ be Lipschitz continuous with constant $M$, i.e. $|f(x)-f(y)| \leq M|x-y|$ for all $x,y \in E=[a,b]$. Now we want to show absolute continuity: $\sum^n_{i=1}|f(x'_i)-f(x_i)|< \epsilon$ whenever $\sum^n_{i=1}|x'_i-x_i|< \delta$, for any finite collection of disjoint intervals $(x_i,x'_i)$. Suppose $\sum^n_{i=1}|x'_i-x_i|< \delta$; then we observe that $\sum^n_{i=1}|f(x'_i)-f(x_i)| \leq M \sum^n_{i=1}|x'_i-x_i|$ by Lipschitz continuity and the triangle inequality. Then define $\epsilon = M \delta$ and we are done.

To see that $|f'(x)| \leq M$, we can just let $x'_i=x_i+h$ and obtain $|f(x_i+h)-f(x_i)| \leq M|x_i+h-x_i|$, thus $\frac{|f(x_i+h)-f(x_i)|}{|h|}\leq M$, and if we take the limit and take it inside the absolute values we are done.

($\Leftarrow$) Now assume absolute continuity and $|f'(x)|\leq M$. By another theorem we know that $f$ is absolutely continuous if and only if it is an indefinite integral, $f(x)= \int_a ^x f'(t)\,dt +f(a)$. We can manipulate this to $f(x)-f(a) \leq \int_a ^x M\,dt$. Integrating the RHS gives $\int_a ^x M\,dt = M (x-a)$, so we have $f(x)-f(a) \leq M (x-a)$; we can take the absolute values of both sides and we are done.

Does this seem correct?

• There is a minor technicality concerning a.e. vs everywhere. Otherwise, everything looks good. The derivative of an absolutely continuous function exists a.e., and the bound $|f'(x)| \le M$ will hold a.e. if the function is Lipschitz with Lipschitz constant $M$. You might want to try to clean up the a.e. issues. – DisintegratingByParts Dec 6 '15 at 2:43

Mostly correct. However, in the proof of $\Rightarrow$,

• "define $\epsilon=M\delta$" is not logical, since $\epsilon$ is given. You define $\delta=\epsilon/M$.
• Before proving $|f'(x)|\le M$ one has to discuss the existence of $f'(x)$. Cite the theorem saying that an absolutely continuous function is differentiable almost everywhere. The argument for $|f'(x)|\le M$ applies at the points of differentiability.

And in the proof of $\Leftarrow$, "we can take the absolute values of both sides" is too hasty: $A\le B$ does not imply $|A|\le |B|$. Instead, follow the chain of inequalities again, starting with $f'(x)\ge -M$ and arriving at $f(x)-f(a) \geq -M (x-a)$.
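To make the corrected reverse direction concrete, one way to write out the two-sided chain of inequalities the answer describes (assuming $|f'(t)|\le M$ a.e. and the fundamental theorem of calculus for absolutely continuous functions) is:

```latex
\[
  -M(x-a) = \int_a^x (-M)\,dt \;\le\; f(x)-f(a) = \int_a^x f'(t)\,dt \;\le\; \int_a^x M\,dt = M(x-a),
\]
\[
  \text{hence } |f(x)-f(a)| \le M(x-a) \qquad (a \le x \le b).
\]
% The same argument on any subinterval [y,x] gives |f(x)-f(y)| <= M|x-y|,
% which is exactly Lipschitz continuity with constant M.
```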
https://byjus.com/jee-questions/the-unit-vector-along-i-j-is/
The unit vector along $$\hat{i}+\hat{j}$$ is

1) $$\hat{k}$$

2) $$\hat{i}+\hat{j}$$

3) $$\frac{\hat{i}+\hat{j}}{\sqrt{2}}$$

4) $$\frac{\hat{i}+\hat{j}}{2}$$

Answer: 3) $$\frac{\hat{i}+\hat{j}}{\sqrt{2}}$$

Solution:

$$\hat{R}=\frac{\vec{R}}{\left | \vec{R} \right |}=\frac{\hat{i}+\hat{j}}{\sqrt{1^{2}+1^{2}}}= \frac{\hat{i}+\hat{j}}{\sqrt{2}}$$
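The same normalization works for any nonzero vector: divide each component by the Euclidean length. A quick sketch in Python (the helper name `unit_vector` is ours, not from the source):

```python
import math

def unit_vector(v):
    """Scale the vector v (a sequence of numbers) to length 1."""
    length = math.sqrt(sum(c * c for c in v))  # |R| = sqrt(1^2 + 1^2) here
    if length == 0:
        raise ValueError("the zero vector has no direction")
    return [c / length for c in v]

# i + j  ->  (i + j)/sqrt(2): each component becomes 1/sqrt(2)
print(unit_vector([1, 1]))  # [0.7071067811865475, 0.7071067811865475]
```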
https://proxieslive.com/tag/accepting/
## Why has my Mac stopped accepting mouse clicks?

My Mac mini suddenly stopped responding to mouse clicks. The cursor moves around and I can switch applications with the keyboard, but neither my Magic Trackpad nor my cheap USB mouse works for any type of click. As I move the mouse cursor along the dock, icons are not 'popping', and no mouse-hover actions seem to work either. I never saw this before; what is going on? I don't know how to use my Mac without a mouse, so I'm a bit stuck!

## Dovetailing questions about accepting language

$$L = \{M \mid M \text{ is a TM and there exists some string } w \text{ that contains five 1's such that } M \text{ halts}\}$$

where $$\Sigma =\{0,1\}$$. Let $$w_1, w_2, \cdots \in \Sigma^*$$ be an effective enumeration. We give a TM $$R$$ that recognizes $$L$$:

    R = "On input <M>
        for s = 1 to infinity:
            for i = 1 to s:
                run M on input w_i for s steps
                if M halts on w_i within s steps then accept"

Can I assume $$M$$ knows to halt only if there are five $$1$$'s? Or do I need another if-statement, such as in line 4:

    if M halts on w_i within s steps then
        if w_i.count("1") is 5 then accept

## How to define accepting states in a finite automaton?

I can build a finite automaton from a dataset and would be glad to expand that to a deterministic finite automaton (DFA). A DFA requires accepting states. How can one identify them, or how can these be defined? Is there any proper definition of accepting states? What is the difference between accepting states and all other states?

## Is the language of all TMs *not* accepting a given string, enumerable?

Is the following language in RE?

$$L = \{\langle M\rangle : M\text{ is a TM that does not accept }010\}$$

I could use Rice's Theorem with the property $$P = \{L : 010\text{ is not in }L\}$$ to show it isn't in R, but how do I show it is in RE?

## Turing machine different accepting states

I want to design a Turing machine that accepts at most 3 0s.
Now, I have designed one which goes to an accept state each time it sees one, two or three 0s, and rejects any further 0s. I wanted to know: is it okay for a TM to reach an accepting state from 3 different states?

## DFA multiple accepting states to regular expression

I am trying to find the regular expression that defines this DFA; I am finding this particular case difficult since it has multiple accepting states. If I understand this DFA correctly, it recognises:

- empty strings or strings with any number of b => b*
- or a followed by any number of b => ab*
- or aa followed by any number of b => aa(b*)

So the closest I have got is b*+(a+aa)+(a+aa)b*, but I know this is not correct, since it doesn't recognise strings such as aabaabab. I have been using http://ivanzuzak.info/noam/webapps/fsm_simulator/ so I can see I don't know how to make the transition back to Q0 from Q1 or Q2 when there is a b. Could anybody help me find where I'm going wrong and how I could fix it?

## Gmail app on iOS is not accepting below font-size:12px for my Emailer

How can I add a specific fix to correct my Gmail iOS app problem?
My code is below:

    <style>
      table { border-collapse: collapse; }
      * { -webkit-text-size-adjust: none; }
      body { margin: 0 !important; padding: 0; background-color: #ffffff; }
      table { border-spacing: 0; }
      td { padding: 0; }
      img { border: 0; }
    </style>

    <!--logo-->
    <table width="600" border="0" cellspacing="0" cellpadding="0" align="center" style="width:600px; background-color: #fff;">
      <tr><td height="20"></td></tr>
      <tr style="background-color: #fff;">
        <td width="5%"></td>
        <td width="42%">
          <table>
            <tr>
              <td style="font-family: Times New Roman, sans-serif; font-size: 24px; color:#61117F">STAY FOR 3 NIGHTS, PAY FOR 2</td>
            </tr>
            <tr>
              <td style="font-family: Arial, sans-serif; font-size: 14px; color:#000000;">At The Leela Raviz Kovalam</td>
            </tr>
          </table>
        </td>
        <td width="35%"></td>
        <td width="13%"><img src="https://msstatic.theleela.com/images/marketing/logokovalam_26042019.png" width="96"></td>
        <td width="5%"></td>
      </tr>
      <tr><td height="20"></td></tr>
    </table>
    <!--logo-->

    <!--Banner-->
    <table>
      <tr>
        <td>
          <img src="https://msstatic.theleela.com/images/marketing/bannerimg_26042019.jpg" width="600">
        </td>
      </tr>
    </table>
    <!--Banner-->

    <!--Content-->
    <table>
      <tr><td height="20"></td></tr>
      <tr><td style="font-family: Arial, sans-serif; font-size: 12px; color:#58585B; text-align: center;">Book your stay for 3 nights and pay only for 2. Also enjoy complimentary <br>daily breakfast and a host of other services.</td></tr>
      <tr><td height="20"></td></tr>
      <tr><td style="font-family: Arial, sans-serif; font-size: 12px; color:#58585B; text-align: center;">Offer valid for new bookings, for stays up to September 30, 2019. To know more, please click <em><a href="https://www.theleela.com/en_us/hotels-in-kovalam/the-leela-raviz-kovalam-hotel/offers/offer-detail/pay-2-stay-3/?utm_source=knowmore&utm_medium=email&utm_campaign=TLKPay2Stay3Apr19" target="_blank">here</a></em>.
      </td></tr>
      <tr><td height="20"></td></tr>
      <tr><td style="font-family: Arial, sans-serif; font-size: 12px; color:#58585B; text-align: center;">Book online at <a href="https://www.theleela.com/en_us/?utm_source=eNewsFooter&utm_medium=email&utm_campaign=TheLeelaeNewsApr2019">www.theleela.com</a> or by calling The Leela Reservation Worldwide*.</td></tr>
      <tr><td height="20"></td></tr>
    </table>

    <table>
      <tr><td height="25" style="background: #61117F; color: #ffffff; padding: 2px 20px; border: none; cursor: pointer;"><a style="font-family: Arial, sans-serif; font-size: 14px; background: none; color: #fff; text-decoration:none;" href="https://www.theleela.com/spring/booking/step1?hotelName=LEELAKOV&noofRooms=1&utm_source=Booknow&utm_medium=email&utm_campaign=TLKPay2Stay3Apr19" target="_blank">BOOK NOW</a></td></tr>
      <tr><td height="20"></td></tr>
    </table>

    <table>
      <tr>
        <td style="font-family: Arial, sans-serif; font-size:10px; text-align: center; color:#231F20;">*The Leela Reservations Worldwide (Toll Free): India 1 800 1031 444 | USA 8556 703 444 | UK 08000 261 111 <br> Hong Kong 800 906 444 Singapore 1800 223 4444 | Others +91 124 4425 444 | Email: reservations@theleela.com</td>
      </tr>
      <tr><td height="20"></td></tr>
    </table>

    <table style="background-color: #F0E4F7;" width="600">
      <tr>
        <td width="2%"></td>
        <td width="78%" style="font-family: Arial, sans-serif; font-size:9px; color:#2A1720; padding: 10px 10px;"></td>
        <td width="2%"></td>
      </tr>
    </table>

## Define the finite automaton accepting the language below

$$\{ w\in\{a,b\}^* \mid w \text{ does not contain } ab \text{ as a subword}\}$$

About questions like this, I always want to construct the regular expression for it, then convert the regular expression to a finite automaton. Is there an easier way? Actually, I don't know the regular expression for it either.
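For the last question, a brute-force check can build intuition before constructing the automaton: a word over {a,b} avoids the subword "ab" exactly when no a is ever followed by a b, i.e. all b's come before all a's. The sketch below tests that guess, the regex b*a*, against every short string; the code and the candidate regex are our illustration, not part of the original question.

```python
import re
from itertools import product

def avoids_ab(s):
    """True iff s does not contain 'ab' as a (contiguous) subword."""
    return "ab" not in s

candidate = re.compile(r"b*a*\Z")  # guess: all b's before all a's

# Exhaustively compare the two definitions on every string of length <= 8.
for n in range(9):
    for tup in product("ab", repeat=n):
        s = "".join(tup)
        assert avoids_ab(s) == bool(candidate.match(s)), s

print("b*a* matches exactly the ab-free strings up to length 8")
```

From the regex a small DFA can then be read off: an accepting start state that loops on b and moves on a to a second accepting state, which loops on a and sends any b to a dead state.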
## VS Debug not accepting the command line argument provided in project properties [on hold]

A C++ project my team is working on in Visual Studio 2017 Community (v15.5.1) compiles to an .exe that can be run from a terminal with a command line argument pointing to a set of inputs. Normally, as I work on the code, I run it in debug mode within VS, setting the command line argument by right-clicking the main project, going to Properties > Debugging > Command Argument, and entering the path to the input set there, the same as I would after the .exe name if running it in a terminal.

The problem is that when I compile in Debug mode, my command argument is not used; instead an older command argument is used: whatever argument was last committed in our version control for the solution's .vcsproj.user file. I check out the latest copy, modify the command argument to my own path, run debug, and it uses the command argument from whatever I had last checked out, ignoring the path I provide. What could be going wrong here? How do I get VS debug mode to use the path I provide?

The only diff I find from version control is my change to the .vcsproj.user file, in the tag . My working copy has the path I provided in VS Project Properties for Command Argument, and yet running Debug mode doesn't use that path. To confirm the EXE is working as expected: if I compile it for release and then run it from a terminal, it accepts whatever command line argument I pass it in the terminal correctly.

## Ubuntu 18.16 – Keyboards (Laptop, USB & Virtual) Not Accepting Input on Logon

I can't log back in to my laptop because no keyboard, mouse or touchpad input is being accepted at logon. For the keyboard this includes the laptop keyboard, an external USB keyboard and the virtual onscreen keyboard, the latter not being accessible because the Tab key, Enter key, mouse and touchpad movements/input do not work to select onscreen numbers and letters.
Specs: Ubuntu 18.16, full USB installation that has been working for ~2 months, HP EliteBook laptop.

1. I am not a complete Unix/Ubuntu newbie at all but have come back to it recently after 2 years away, so general debug or recovery mode advice would be appreciated.

2. I don't think it's a hardware or BIOS issue, because I'm typing this question on the same laptop using Ubuntu on another USB key. So the USB stick I'm using now has the same hardware and BIOS settings as the problematic installation on the other USB stick. The other stick is a full installation that has all my settings, customizations and data for two months on the USB.

3. I suspect changes I made yesterday to fix a brightness issue are causing the current issue. I do not recall all the steps, but I'll describe in general what I did to successfully fix the issue of the brightness not being adjustable on the full USB installation.

a. Installed brightness-controller, brightness-controller-simple and xbacklight. (Not sure if the package names are exact.)

b. The xbacklight installation in particular had some dependencies that required installing additional packages, which I don't have the names of right now (hoping to research to find the pages of instructions I followed, but even that won't be complete, because I installed some things that weren't in the instructions based on messages I received on dependencies when doing the install). Edit: these are the packages I installed:

    sudo apt install xbacklight xorg xserver-xorg-video-intel

The last package had dependencies requiring additional installs.

c. I had made the following grub change and updated grub about a week ago and have rebooted many times since. It didn't fix the brightness issue then, but this change is part of the xbacklight install instructions I followed. Since the change had already been made to grub a week ago, I did not run grub update again yesterday:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_backlight=vendor"

d.
Edit: I created and added the following to /etc/X11/xorg.conf as part of the xbacklight install:

    Section "Device"
        Identifier "Device0"
        Driver "intel"
        Option "Backlight" "intel_backlight"
    EndSection

    Section "Monitor"
        Identifier "Monitor0"
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Monitor "Monitor0"
        Device "Device0"
    EndSection

Any help or leads are very much appreciated.
https://www.gwern.net/Replication
# The Replication Crisis: Flaws in Mainstream Science

2013 discussion of how systemic biases in science, particularly medicine and psychology, have resulted in a research literature filled with false positives and exaggerated effects, called 'the Replication Crisis'.

psychology, statistics, meta-analysis, sociology, causality

2010-10-27–2019-12-09 · finished · certainty: highly likely

Long-standing problems in standard scientific methodology have exploded as the "Replication Crisis": the discovery that many results in fields as diverse as psychology, economics, medicine, biology, and sociology are in fact false or quantitatively highly inaccurately measured. I cover here a handful of the issues and publications on this large, important, and rapidly developing topic up to about 2013, at which point the Replication Crisis became too large a topic to cover more than cursorily. (A compilation of some additional links is provided for post-2013 developments.) The crisis is caused by methods & publishing procedures which interpret random noise as important results, far too small datasets, selective analysis by an analyst trying to reach expected/desired results, publication bias, poor implementation of existing best-practices, nontrivial levels of research fraud, software errors, philosophical beliefs among researchers that false positives are acceptable, neglect of known confounders like genetics, and skewed incentives (financial & professional) to publish 'hot' results. Thus, any individual piece of research typically establishes little. Scientific validation comes not from small p-values, but from discovering a regular feature of the world which disinterested third parties can discover with straightforward research done independently on new data with new procedures—replication.

Mainstream science is flawed: seriously mistaken statistics combined with poor incentives have led to masses of misleading research.
Not that this problem is exclusive to psychology—economics, certain genetics subfields (principally candidate-gene research), biomedical science, and biology in general are often on shaky ground.

# NHST and Systematic Biases

Statistical background on p-value problems: Against null-hypothesis statistical-significance testing

The basic nature of statistical significance being usually defined as p < 0.05 means we should expect something like >5% of studies or experiments to be bogus (optimistically), but that only considers "false positives"; reducing "false negatives" requires statistical power (weakened by small samples), and the two combine with the base rate of true underlying effects into a total error rate. Ioannidis points out that considering the usual p-values, the underpowered nature of many studies, the rarity of underlying effects, and a little bias, even large randomized trials may wind up with only an 85% chance of having yielded the truth. An analysis of reported p-values in medicine yields a lower bound of false positives of 17%. Yet, there are too many positive results (psychiatry, neurobiology, biomedicine, biology, ecology & evolution, psychology, economics, gene-disease correlations) given the statistical power of studies (and positive results correlate with per capita publishing rates & vary by discipline—apparently chance is kind to scientists who must publish a lot and recently!); then there come the inadvertent errors which might cause retraction, which is rare, but the true retraction rate may be 0.1–1% ("How many scientific papers should be retracted?"), is increasing & seems to positively correlate with journal prestige metrics (modulo the confounding factor that famous papers/journals get more scrutiny), not that anyone pays any attention to such things; then there are basic statistical errors in >11% of papers (based on the high-quality papers in Nature and the British Medical Journal; "Incongruence between test statistics and P values in medical papers", García-Berthou 2004) or 50% in neuroscience.
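The "total error rate" arithmetic above is easy to reproduce. A Python sketch of the standard positive-predictive-value calculation: given a base rate of true hypotheses, statistical power, and a significance threshold α, what fraction of "significant" findings are actually true? The numbers chosen are illustrative, and the `bias` term is a simplified stand-in for the bias parameter in Ioannidis's model, not his exact formula.

```python
def ppv(prior, power, alpha, bias=0.0):
    """Positive predictive value of a 'statistically-significant' finding.

    prior: fraction of tested hypotheses that are actually true
    power: probability a true effect is detected (1 - beta)
    alpha: false-positive rate for a null hypothesis
    bias:  fraction of would-be-negative analyses nudged to
           significance anyway (analytic flexibility, p-hacking)
    """
    true_pos = prior * (power + (1 - power) * bias)
    false_pos = (1 - prior) * (alpha + (1 - alpha) * bias)
    return true_pos / (true_pos + false_pos)

# An idealized field: half of tested hypotheses true, 80% power, no bias.
print(round(ppv(0.5, 0.80, 0.05), 3))            # 0.941

# A more realistic field: 10% true, 35% power, a little bias.
print(round(ppv(0.1, 0.35, 0.05, bias=0.1), 3))  # 0.241
```

Even without any fraud, a field testing mostly-false hypotheses at low power produces a literature where most "significant" findings are wrong.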
And only then can we get into replicating at all. See for example the article "Lies, Damned Lies, and Medical Science" on research showing 41% of the most cited medical research failed to be replicated—were wrong. For details, you can see Ioannidis's papers, or Begley's failed attempts to replicate 47 of 53 articles in top cancer journals (leading to Booth's "Begley's Six Rules"; see also the Nature Biotechnology editorial, & note that full details have not been published because the researchers of the original studies demanded secrecy from Begley's team), or Kumar & Nash 2011's "Health Care Myth Busters: Is There a High Degree of Scientific Certainty in Modern Medicine?", who write 'We could accurately say, "Half of what physicians do is wrong," or "Less than 20% of what physicians do has solid research to support it."'

Nutritional epidemiology is something of a fish in a barrel; after Ioannidis, is anyone surprised that when Young & Karr 2011 followed up on 52 correlations tested in 12 RCTs, 0⁄52 replicated and the RCTs found the opposite of 5? Attempts to use animal models to infer anything about humans suffer from all the methodological problems previously mentioned, and add in interesting new forms of error such as mice simply being irrelevant to humans, leading to cases like <150 clinical trials all failing—because the drugs worked in mice but humans have a completely different set of genetic reactions to inflammation. 'Hot' fields tend to be new fields, which brings problems of its own.

(Failure to replicate in larger studies seems to be a hallmark of biological/medical research. Ioannidis performs the same trick with biomarkers, finding less than half of the most-cited biomarkers were even statistically-significant in the larger studies. Likewise for 12 of the more prominent gene–IQ correlations on a larger dataset.)
As we know now, almost the entire candidate-gene literature, most things reported from 2000–2010 before large-scale GWASes started to be done (and completely failing to find the candidate-genes), is nothing but false positives! The replication rates of candidate-genes for things like intelligence, personality, gene-environment interactions, psychiatric disorders—the whole schmeer—are literally ~0%.

On the plus side, the parlous state of affairs means that there are some cheap heuristics for detecting unreliable papers—simply asking for data & being refused/ignored correlates strongly with the original paper having errors in its statistics.

This epidemic of false positives is apparently deliberately and knowingly accepted by epidemiologists; Young's 2008 "Everything is Dangerous" remarks that 80–90% of epidemiology's claims do not replicate (eg. the NIH ran 20 randomized-controlled-trials of claims, and only 1 replicated) and that the lack of multiple-testing correction (either Bonferroni or Benjamini-Hochberg) is taught: "Rothman (1990) says no correction for multiple testing is necessary and Vandenbroucke, PLoS Med (2008) agrees" (see also Perneger 1998, who explicitly understands that correction increases type 2 errors as it reduces type 1 errors). Multiple correction is necessary because its absence does, in fact, result in the overstatement of medical benefit (Godfrey 1985, Pocock et al 1987, Smith 1987).

The average effect size for interventions in psychology/education is d = 0.53 (well below several effect sizes from n-back/IQ studies); when moving from laboratory to non-laboratory settings, replicated findings correlate ~0.7, but for social psychology the replication correlation falls to ~0.5, with >14% of findings actually turning out to be the opposite (see Anderson et al 1999 and Mitchell 2012; for exaggeration due to non-blinding or poor randomization, Wood et al 2008). (Meta-analyses also give us a starting point for understanding how unusual medium or large effect sizes are.)
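Both multiple-testing corrections named above are mechanical to apply. As an illustrative sketch (on a made-up batch of p-values, not data from any study), here is the Benjamini-Hochberg step-up procedure next to the Bonferroni cutoff:

```python
def bonferroni(pvals, alpha=0.05):
    """Reject p_i iff p_i <= alpha/m: controls the family-wise error rate."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up false-discovery-rate control: sort the p-values, find the
    largest rank k with p_(k) <= (k/m)*alpha, reject everything up to k."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    cutoff_rank = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            cutoff_rank = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= cutoff_rank:
            reject[i] = True
    return reject

ps = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.74]
print(sum(bonferroni(ps)))           # 1: only p=0.001 clears 0.05/7
print(sum(benjamini_hochberg(ps)))   # 2: BH is less conservative
```

Skipping the correction entirely, as the epidemiologists quoted above recommend, would "reject" five of these seven nulls at p < 0.05.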
Psychology does have many challenges, but practitioners also handicap themselves; an older overview is the entertaining "What's Wrong With Psychology, Anyway?", which mentions the obvious point that statistics & experimental design are flexible enough to reach significance as desired. In an interesting example of how methodological reforms are no panacea in the presence of continued perverse incentives, an earlier methodological improvement in psychology (reporting multiple experiments in a single publication, as a check against results not being generalizable) has merely demonstrated widespread p-value hacking, manipulation, or publication bias, when one notes that, given the low statistical power of each experiment, even if the underlying phenomena were real, it would still be wildly improbable that all n experiments in a paper would turn up statistically-significant results, since power is usually extremely low in experiments (eg. in neuroscience, "between 20–30%"). These problems are pervasive enough that I believe they entirely explain any "decline effects". The failures to replicate "statistically significant" results have led one blogger to caustically remark (see also "Parapsychology: the control group for science" and "Using degrees of freedom to change the past for fun and profit"):

Parapsychology, the control group for science, would seem to be a thriving field with "statistically significant" results aplenty…Parapsychologists are constantly protesting that they are playing by all the standard scientific rules, and yet their results are being ignored—that they are unfairly being held to higher standards than everyone else. I'm willing to believe that. It just means that the standard statistical methods of science are so weak and flawed as to permit a field of study to sustain itself in the complete absence of any subject matter.
With two-thirds of medical studies in prestigious journals failing to replicate, getting rid of the entire actual subject matter would shrink the field by only 33%. …Let me draw the moral [about publication bias]. Even if the community of inquiry is both too clueless to make any contact with reality and too honest to nudge borderline findings into significance, so long as they can keep coming up with new phenomena to look for, the mechanism of the file-drawer problem alone will guarantee a steady stream of new results. There is, so far as I know, no Journal of Evidence-Based Haruspicy filled, issue after issue, with methodologically-faultless papers reporting the ability of sheep's livers to predict the winners of sumo championships, the outcome of speed dates, or real estate trends in selected suburbs of Chicago. But the difficulty can only be that the evidence-based haruspices aren't trying hard enough, and some friendly rivalry with the plastromancers is called for. It's true that none of these findings will last forever, but this constant overturning of old ideas by new discoveries is just part of what makes this such a dynamic time in the field of haruspicy. Many scholars will even tell you that their favorite part of being a haruspex is the frequency with which a new sacrifice over-turns everything they thought they knew about reading the future from a sheep's liver! We are very excited about the renewed interest on the part of policy-makers in the recommendations of the mantic arts…

And this is when there is enough information to replicate at all; open access to the data for a paper is rare. The economics journal Journal of Money, Credit and Banking, which required researchers to provide the data & software which could replicate their statistical analyses, discovered that <10% of the submitted materials were adequate for repeating the paper (see Lessons from the JMCB Archive).
In one cute economics example, replication failed because the dataset had been altered to make participants look better (there are more economics-specific critiques as well). Availability of data is often low, and many studies never get published regardless of whether publication is legally mandated. Transcription errors in papers seem to be common (possibly due to constantly changing analyses & p-hacking?), and as software and large datasets become more inherent to research, the problem of replicability will get worse, because even mature commercial software libraries can disagree majorly in their computed results for the same mathematical specification (see also Anda et al 2009). And spreadsheets are especially bad, with error rates in the 88% range ("What we know about spreadsheet errors", Panko 1998); spreadsheets are used in all areas of science, including biology and medicine (see "Error! What biomedical computing can learn from its mistakes"; famous examples of coding errors include Reinhart-Rogoff), not to mention regular business (eg. the London Whale). Psychology is far from being perfect either; look at the examples in The New Yorker's "The Truth Wears Off" article (or look at some excerpts from that article). Computer scientist Peter Norvig has written a must-read essay on interpreting statistics, "Warning Signs in Experimental Design and Interpretation"; a number of the warning signs apply to many psychological studies.
There may be incentive problems: a transplant researcher discovered that the only way to publish in Nature his inability to replicate his earlier Nature paper was to officially retract it; another interesting example is when, after Daryl Bem got a paper demonstrating precognition published in the top journal JPSP, the journal refused to publish any replications (failed or successful) because… "'We don't want to be the Journal of Bem Replication', he says, pointing out that other high-profile journals have similar policies of publishing only the best original research." (Quoted in New Scientist.) One doesn't need to be a genius to understand why psychologist Andrew D. Wilson might snarkily remark "…think about the message JPSP is sending to authors. That message is 'we will publish your crazy story if it's new, but not your sensible story if it's merely a replication'." (You get what you pay for.)

In one large test of the most famous psychology results, 10 of 13 (77%) replicated. The replication rate is under 1⁄3 in studies touching on genetics. This despite the simple point that replications reduce the risk of publication bias and increase statistical power, so that a replicated result is considerably more trustworthy. And the small samples of n-back studies and chemicals are especially problematic. Quoting from Sandberg & Bostrom's 2006 "Converging Cognitive Enhancements":

The reliability of research is also an issue. Many of the cognition-enhancing interventions show small effect sizes, which may necessitate very large epidemiological studies possibly exposing large groups to unforeseen risks.

Particularly troubling is the slowdown in drug discovery & medical technology during the 2000s, even as genetics in particular was expected to produce earth-shaking new treatments. One biotech venture capitalist writes:

The company spent $7M ($5M in 2011) or so trying to validate a platform that didn't exist. When they tried to directly repeat the academic founder's data, it never worked.
Upon re-examination of the lab notebooks, it was clear the founder’s lab had at the very least massaged the data and shaped it to fit their hypothesis. Essentially, they systematically ignored every piece of negative data. Sadly this “failure to repeat” happens more often than we’d like to believe. It has happened to us at Atlas [Venture] several times in the past decade…The unspoken rule is that at least 50% of the studies published even in top tier academic journals—Science, Nature, Cell, PNAS, etc…—can’t be repeated with the same conclusions by an industrial lab. In particular, key animal models often don’t reproduce. This 50% failure rate isn’t a data-free assertion: it’s backed up by dozens of experienced R&D professionals who’ve participated in the (re)testing of academic findings. This is a huge problem for translational research and one that won’t go away until we address it head on. Half the respondents to a survey at one cancer research center reported 1 or more incidents where they could not reproduce published research; two-thirds of those were never “able to explain or resolve their discrepant findings”, half had trouble publishing results contradicting previous publications, and two-thirds failed to publish contradictory results. An internal Bayer survey of 67 projects found that “only in ~20–25% of the projects were the relevant published data completely in line with our in-house findings”, and as far as assessing the projects went: …despite the low numbers, there was no apparent difference between the different research fields. Surprisingly, even publications in prestigious journals or from several independent groups did not ensure reproducibility. Indeed, our analysis revealed that the reproducibility of published data did not significantly correlate with journal impact factors, the number of publications on the respective target or the number of independent groups that authored the publications.
Our findings are mirrored by ‘gut feelings’ expressed in personal communications with scientists from academia or other companies, as well as published observations. [apropos of above] An unspoken rule among early-stage venture capital firms that “at least 50% of published studies, even those in top-tier academic journals, can’t be repeated with the same conclusions by an industrial lab” has been recently reported (see Further information) and discussed4. Physics has relatively small sins; see “Assessing uncertainty in physical constants” (Henrion & Fischhoff 1985); Hanson’s summary: Looking at 306 estimates for particle properties, 7% were outside of a 98% confidence interval (where only 2% should be). In seven other cases, each with 14 to 40 estimates, the fraction outside the 98% confidence interval ranged from 7% to 57%, with a median of 14%. Scientists who win the Nobel Prize find their other work suddenly being heavily cited, suggesting either that the community badly failed in recognizing the work’s true value or that researchers are now sucking up & attempting to look better. (A mathematician once told me that often, to boost a paper’s acceptance chance, they would add citations to papers by the journal’s editors—a practice that will surprise none familiar with the use of citation metrics in tenure & grants.) The former BMJ editor Richard Smith amusingly recounts his doubts about the merits of peer review as practiced, and one physicist points out that peer review is historically rare (just one of Einstein’s 300 papers was peer reviewed; the famous journal Nature did not institute peer review until 1967), has been poorly studied & not shown to be effective, is nationally biased, erroneously rejects many historic discoveries (one study lists “34 Nobel Laureates whose awarded work was rejected by peer review”; Horrobin 1990 lists others), and catches only a small fraction of errors. And questionable research practices or outright fraud? Fanelli 2009:
A pooled weighted average of 1.97% (N = 7, 95% CI: 0.86–4.45) of scientists admitted to have fabricated, falsified or modified data or results at least once—a serious form of misconduct by any standard—and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices…When these factors were controlled for, misconduct was reported more frequently by medical/pharmacological researchers than others. And John et al 2012: We surveyed over 2,000 psychologists about their involvement in questionable research practices, using an anonymous elicitation format supplemented by incentives for honest reporting. The impact of incentives on admission rates was positive, and greater for practices that respondents judge to be less defensible. Using three different estimation methods, we find that the proportion of respondents that have engaged in these practices is surprisingly high relative to respondents’ own estimates of these proportions. Some questionable practices may constitute the prevailing research norm. In short, the secret sauce of science is not ‘peer review’. It is replication!

# Systemic Error Doesn’t Go Away

“Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.” (John Tukey)

Why isn’t the solution as simple as eliminating datamining by methods like larger n or preregistered analyses? Because once we have eliminated the random error in our analysis, we are still left with a (potentially arbitrarily large) systematic error, leaving us with a large total error.
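The point can be illustrated with a small simulation (all numbers here are hypothetical, chosen only for illustration): averaging many studies that share a common bias converges, very precisely, to the wrong answer.

```python
import random

random.seed(0)

TRUE_EFFECT = 0.0   # the real effect (here: no effect at all)
BIAS = 0.3          # systematic error shared by every study (hypothetical)
NOISE_SD = 1.0      # per-study random error

def meta_estimate(n_studies):
    """Average of n_studies biased, noisy measurements of TRUE_EFFECT."""
    return sum(TRUE_EFFECT + BIAS + random.gauss(0, NOISE_SD)
               for _ in range(n_studies)) / n_studies

# More studies give a tighter estimate -- but of the wrong number:
for n in (10, 1_000, 100_000):
    print(n, round(meta_estimate(n), 3))
```

With 100,000 studies the random error is essentially gone, yet the pooled estimate sits at 0.3, not at the true effect of 0.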
None of these systematic problems should be considered minor or methodological quibbling or foolish idealism: they are systematic biases, and as such they force an upper bound on how accurate a corpus of studies can be even if there were thousands upon thousands of studies, because the total error in the results is made up of random error and systematic error; but while random error shrinks as more studies are done, systematic error remains the same. A thousand biased studies merely result in an extremely precise estimate of the wrong number. This is a point appreciated by statisticians and experimental physicists, but it doesn’t seem to be frequently discussed. Andrew Gelman has a fun demonstration of selection bias involving candy, or see Chapter 8, “Sufficiency, Ancillarity, And All That”, of Probability Theory: The Logic of Science by E. T. Jaynes: The classical example showing the error of this kind of reasoning is the fable about the height of the Emperor of China. Supposing that each person in China surely knows the height of the Emperor to an accuracy of at least ±1 meter, if there are N = 1,000,000,000 inhabitants, then it seems that we could determine his height to an accuracy at least as good as 1⁄√N meters ≈ 0.03 mm (8-49) merely by asking each person’s opinion and averaging the results. The absurdity of the conclusion tells us rather forcefully that the √N rule is not always valid, even when the separate data values are causally independent; it requires them to be logically independent. In this case, we know that the vast majority of the inhabitants of China have never seen the Emperor; yet they have been discussing the Emperor among themselves and some kind of mental image of him has evolved as folklore. Then knowledge of the answer given by one does tell us something about the answer likely to be given by another, so they are not logically independent.
Indeed, folklore has almost surely generated a systematic error, which survives the averaging; thus the above estimate would tell us something about the folklore, but almost nothing about the Emperor. We could put it roughly as follows: error in estimate = S + R⁄√N (8-50), where S is the common systematic error in each datum and R is the ‘random’ error in the individual data values. Uninformed opinions, even though they may agree well among themselves, are nearly worthless as evidence. Therefore sound scientific inference demands that, when this is a possibility, we use a form of probability theory (i.e. a probabilistic model) which is sophisticated enough to detect this situation and make allowances for it. As a start on this, equation (8-50) gives us a crude but useful rule of thumb; it shows that, unless we know that the systematic error is less than about 1⁄3 of the random error, we cannot be sure that the average of a million data values is any more accurate or reliable than the average of ten6. As Poincaré put it: “The physicist is persuaded that one good measurement is worth many bad ones.” This has been well recognized by experimental physicists for generations; but warnings about it are conspicuously missing in the “soft” sciences whose practitioners are educated from those textbooks. Or pg1019–1020, Chapter 10, “Physics of ‘Random Experiments’”: …Nevertheless, the existence of such a strong connection is clearly only an ideal limiting case unlikely to be realized in any real application. For this reason, the laws of large numbers and limit theorems of probability theory can be grossly misleading to a scientist or engineer who naively supposes them to be experimental facts, and tries to interpret them literally in his problems. Here are two simple examples: 1. Suppose there is some random experiment in which you assign a probability p for some particular outcome A. It is important to estimate accurately the fraction f of times A will be true in the next million trials.
If you try to use the laws of large numbers, it will tell you various things about f; for example, that it is quite likely to differ from p by less than a tenth of one percent, and enormously unlikely to differ from p by more than one percent. But now, imagine that in the first hundred trials, the observed frequency of A turned out to be entirely different from p. Would this lead you to suspect that something was wrong, and revise your probability assignment for the 101st trial? If it would, then your state of knowledge is different from that required for the validity of the law of large numbers. You are not sure of the independence of different trials, and/or you are not sure of the correctness of the numerical value of p. Your prediction of f for a million trials is probably no more reliable than for a hundred. 2. The common sense of a good experimental scientist tells him the same thing without any probability theory. Suppose someone is measuring the velocity of light. After making allowances for the known systematic errors, he could calculate a probability distribution for the various other errors, based on the noise level in his electronics, vibration amplitudes, etc. At this point, a naive application of the law of large numbers might lead him to think that he can add three significant figures to his measurement merely by repeating it a million times and averaging the results. But, of course, what he would actually do is to repeat some unknown systematic error a million times. It is idle to repeat a physical measurement an enormous number of times in the hope that “good statistics” will average out your errors, because we cannot know the full systematic error. This is the old “Emperor of China” fallacy… Indeed, unless we know that all sources of systematic error—recognized or unrecognized—contribute less than about one-third the total error, we cannot be sure that the average of a million measurements is any more reliable than the average of ten.
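Jaynes’s rule of thumb (error ≈ S + R⁄√N) can be tabulated directly; a quick sketch using his example values of S = 1⁄3 and R = 1 shows why the millionth measurement buys almost nothing:

```python
def total_error(n, S=1/3, R=1.0):
    """Jaynes's crude rule of thumb: the systematic error S survives
    averaging, while the random component R shrinks as 1/sqrt(n)."""
    return S + R / n ** 0.5

for n in (10, 100, 1_000_000, 1_000_000_000):
    print(f"n = {n:>13,}   error ~ {total_error(n):.6f}")
# The error never drops below S = 1/3, no matter how large n gets.
```

After only ten observations the sampling component is already smaller than the systematic component; everything after that is wasted effort unless the bias itself is reduced.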
Our time is much better spent in designing a new experiment which will give a lower probable error per trial. As Poincaré put it, “The physicist is persuaded that one good measurement is worth many bad ones.”7 In other words, the common sense of a scientist tells him that the probabilities he assigns to various errors do not have a strong connection with frequencies, and that methods of inference which presuppose such a connection could be disastrously misleading in his problems. Schlaifer much earlier made the same point in Probability and Statistics for Business Decisions: an Introduction to Managerial Economics Under Uncertainty, Schlaifer 1959, pg488–489: 31.4.3 Bias and Sample Size In Section 31.2.6 we used a hypothetical example to illustrate the implications of the fact that the error of the mean of a sample in which bias is suspected is the sum of two terms, a fixed bias term and a sampling-error term, so that only the second term decreases as the sample size increases and the total can never be less than the fixed value of the first term. To emphasize the importance of this point by a real example we recall the most famous sampling fiasco in history, the 1936 Literary Digest presidential poll. Over 2 million registered voters filled in and returned the straw ballots sent out by the Digest, so that there was less than one chance in 1 billion of a sampling error as large as 2⁄10 of one percentage point8, and yet the poll was actually off by nearly 18 percentage points: it predicted that 54.5 per cent of the popular vote would go to Landon, who in fact received only 36.7 per cent.9 10 Since sampling error cannot account for any appreciable part of the 18-point discrepancy, it is virtually all actual bias. A part of this total bias may be measurement bias due to the fact that not all people voted as they said they would vote; the implications of this possibility were discussed in Section 31.3. The larger part of the total bias, however, was almost certainly selection bias.
The straw ballots were mailed to people whose names were selected from lists of owners of telephones and automobiles, and the subpopulation which was effectively sampled was even more restricted than this: it consisted only of those owners of telephones and automobiles who were willing to fill out and return a straw ballot. The true mean of this subpopulation proved to be entirely different from the true mean of the population of all United States citizens who voted in 1936. It is true that there was no evidence at the time this poll was planned which would have suggested that the bias would be as great as the 18 percentage points actually realized, but experience with previous polls had shown biases which would have led any sensible person to assign to the bias a distribution with standard deviation equal to at least 1 percentage point. A sample of only 23,760 returned ballots, 1⁄100th the size actually used, would have given a value of σ(ε) of only 1⁄3 percentage point, so that the standard deviation of x would have been √(1² + (1⁄3)²) ≈ 1.05 percentage points. Using a sample 100 times this large reduced σ(ε) from 1⁄3 point to virtually zero, but it could not affect the bias term and thus on the most favorable assumption could reduce σ(x) only from 1.05 points to 1 point. To collect and tabulate over 2 million additional ballots when this was the greatest gain that could be hoped for was obviously ridiculous before the fact and not just in the light of hindsight. What’s particularly sad is when people read something like this and decide to rely on anecdotes, personal experiments, and alternative medicine where there are even more systematic errors and no way of reducing random error at all!
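Schlaifer’s Literary Digest arithmetic is easy to check with the figures he gives (n = 2,376,523 returned ballots; predicted 54.5% vs actual 36.7% for Landon):

```python
import math

n = 2_376_523                  # straw ballots returned
p = 0.5                        # worst case for the sampling variance
# standard error of a sample proportion, in percentage points:
sampling_sd_pp = 100 * math.sqrt(p * (1 - p) / n)
bias_pp = 54.5 - 36.7          # predicted minus actual Landon share

print(f"max sampling SD : {sampling_sd_pp:.4f} percentage points")
print(f"observed error  : {bias_pp:.1f} percentage points "
      f"(~{bias_pp / sampling_sd_pp:,.0f} sampling SDs)")
```

The sampling standard deviation is about 0.03 percentage point, so the 17.8-point miss is hundreds of standard deviations of pure selection bias; no amount of extra ballots could have helped.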
Science may be the lens that sees its own flaws, but if other epistemologies do not boast such long detailed self-critiques, it’s not because they are flawless… It’s like that old quote: Some people, when faced with the problem of mainstream medicine & epidemiology having serious methodological weaknesses, say “I know, I’ll turn to non-mainstream medicine & epidemiology. After all, if only some medicine is based on real scientific method and outperforms placebos, why bother?” (Now they have two problems.) Or perhaps Isaac Asimov: “John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

# Appendix

A bibliography of additional links to papers/blogs/articles on the Replication Crisis, primarily post-2013 and curated from my newsletter, as a followup to the main article text describing the Replication Crisis.
## Datamining

Some examples of how ‘datamining’ or ‘data dredging’ can manufacture correlations on demand from large datasets by comparing enough variables: rates of autism diagnoses in children correlate with age—or should we blame organic food sales?; height & vocabulary or foot size & math skills may correlate strongly (in children); national chocolate consumption correlates with Nobel prizes12, as do borrowing from commercial banks & buying luxury cars & serial-killers/mass-murderers/traffic-fatalities13; moderate alcohol consumption predicts increased lifespan and earnings; the role of storks in delivering babies may have been underestimated; children and people with high self-esteem have higher grades & lower crime rates etc, so “we all know in our gut that it’s true” that raising people’s self-esteem “empowers us to live responsibly and that inoculates us against the lures of crime, violence, substance abuse, teen pregnancy, child abuse, chronic welfare dependency and educational failure”—unless perhaps high self-esteem is caused by high grades & success, boosting self-esteem has no experimental benefits, and may backfire? Those last can be generated ad nauseam: Shaun Gallagher’s Correlated (also a book) surveys users & compares against all previous surveys, with 1k+ correlations. Tyler Vigen’s “spurious correlations” catalogues 35k+ correlations, many with r > 0.9, based primarily on US Census & CDC data.
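How easy is it to manufacture such correlations? A minimal sketch: compare enough independent random walks (which mimic trending time series) and some pair will almost certainly show a spuriously high r.

```python
import random

random.seed(1)

def random_walk(length=50):
    """A cumulative sum of Gaussian steps: a trending but meaningless series."""
    walk, total = [], 0.0
    for _ in range(length):
        total += random.gauss(0, 1)
        walk.append(total)
    return walk

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

walks = [random_walk() for _ in range(50)]
best = max(abs(pearson_r(a, b))
           for i, a in enumerate(walks) for b in walks[i + 1:])
print(f"highest |r| among {50 * 49 // 2} variable pairs: {best:.2f}")
```

Fifty meaningless series yield 1,225 pairs, and the best-looking pair will typically correlate far more strongly than most genuine effects in psychology.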
Google Correlate “finds Google search query patterns which correspond with real-world trends” based on geography or user-provided data, which offers endless fun (“Facebook”/“tapeworm in humans”, r = 0.8721; “Superfreakonomic”/“Windows 7 advisor”, r = 0.9751; Irish electricity prices/“Stanford webmail”, r = 0.83; “heart attack”/“pink lace dress”, r = 0.88; US states’/“booty models”, r = 0.92; US states’ family ties/“how to swim”; /“Is Lil’ Wayne gay?”, r = 0.89; /“prnhub”, r = 0.9784; “accident”/“itchy bumps”, r = 0.87; “migraine headaches”/“sciences”, r = 0.77; “Irritable Bowel Syndrome”/“font download”, r = 0.94; interest-rate-index/“pill identification”, r = 0.98; “advertising”/“medical research”, r = 0.99; 2012 vote-share/“Top Chef”, r = 0.88; “losing weight”/“houses for rent”, r = 0.97; “Bieber”/tonsillitis, r = 0.95; … And on less secular themes, do churches cause obesity & do Welsh rugby victories predict papal deaths? Financial data-mining offers some fun examples: there’s the Super Bowl indicator, which worked well for several decades; and it’s not very elegant, but a 3-variable model (Bangladeshi butter, American cheese, joint sheep population) reaches R² = 0.99 on 20 years of the S&P 500.

## Animal models

On the general topic of animal model external validity & translation to humans, a number of op-eds, reviews, and meta-analyses have been done; reading through some of the literature up to March 2013, I would summarize them as indicating that the animal research literature in general is of considerably lower quality than human research, and that for those methodological and intrinsic biological reasons, the probability of meaningful transfer from animal to human can be astoundingly low, far below 50% and in some categories of results, 0%.
The primary reasons identified for this poor performance are generally: small samples (much smaller than the already underpowered norms in human research), lack of blinding in taking measurements, pseudo-replication due to animals being correlated by genetic relatedness/living in the same cage/same room/same lab, extensive non-normality in data14, large differences between labs due to local differences in reagents/procedures/personnel illustrating the importance of “tacit knowledge”, publication bias (small cheap samples + little perceived ethical need to publish + no preregistration norms), unnatural & unnaturally easy lab environments (more naturalistic environments both offer more realistic measurements & challenge animals), large genetic differences due to inbreeding/engineering/drift of lab strains meaning the same treatment can produce dramatically different results in different strains (or sexes) of the same species, different species can have different responses, and none of them may be like humans in the relevant biological way in the first place. So it is no wonder that “we can cure cancer in mice but not people” and almost all amazing breakthroughs in animals never make it to human practice; medicine & biology are difficult. The bibliography:

1. Publication bias can come in many forms, and seems to be severe. For example, the 2008 version of a Cochrane review finds “Only 63% of results from abstracts describing randomized or controlled clinical trials are published in full. ‘Positive’ results were more frequently published than not ‘positive’ results.”↩︎

2. For a second, shorter take on the implications of low prior probabilities & low power: “Is the Replicability Crisis Overblown? Three Arguments Examined”, Pashler & Harris 2012: So what is the truth of the matter?
To put it simply, adopting an alpha level of, say, 5% means that about 5% of the time when researchers test a null hypothesis that is true (i.e., when they look for a difference that does not exist), they will end up with a statistically significant difference (a Type 1 error or false positive).1 Whereas some have argued that 5% would be too many mistakes to tolerate, it certainly would not constitute a flood of error. So what is the problem? Unfortunately, the problem is that the alpha level does not provide even a rough estimate, much less a true upper bound, on the likelihood that any given positive finding appearing in a scientific literature will be erroneous. To estimate what the literature-wide false positive likelihood is, several additional values, which can only be guessed at, need to be specified. We begin by considering some highly simplified scenarios. Although artificial, these have enough plausibility to provide some eye-opening conclusions. For the following example, let us suppose that 10% of the effects that researchers look for actually exist, which will be referred to here as the prior probability of an effect (i.e., the null hypothesis is true 90% of the time). Given an alpha of 5%, Type 1 errors will occur in 4.5% of the studies performed (90% × 5%). If one assumes that studies all have a power of, say, 80% to detect those effects that do exist, correct rejections of the null hypothesis will occur 8% of the time (80% × 10%). If one further imagines that all positive results are published then this would mean that the probability any given published positive result is erroneous would be equal to the proportion of false positives divided by the sum of the proportion of false positives plus the proportion of correct rejections. Given the proportions specified above, then, we see that more than one third of published positive findings would be false positives [4.5% / (4.5% + 8%) = 36%].
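The arithmetic in this passage generalizes to a short function (a sketch of the calculation, not code from the paper):

```python
def false_positive_share(prior, power, alpha=0.05):
    """Proportion of positive results that are false, assuming every
    positive result is published: FP / (FP + TP)."""
    false_pos = (1 - prior) * alpha   # true nulls wrongly rejected
    true_pos = prior * power          # real effects correctly detected
    return false_pos / (false_pos + true_pos)

print(false_positive_share(0.10, 0.80))  # 0.36   -- over a third false
print(false_positive_share(0.10, 0.35))  # 0.5625 -- a majority false
```

With the more realistic 35% power, most published positives are false, exactly as the text goes on to note.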
In this example, the errors occur at a rate approximately seven times the nominal alpha level (row 1 of Table 1). Table 1 shows a few more hypothetical examples of how the frequency of false positives in the literature would depend upon the assumed probability of the null hypothesis being false and the statistical power. An 80% power likely exceeds any realistic assumptions about psychology studies in general. For example, Bakker, van Dijk, and Wicherts (2012, this issue) estimate .35 as a typical power level in the psychological literature. If one modifies the previous example to assume a more plausible power level of 35%, the likelihood of positive results being false rises to 56% (second row of the table). John Ioannidis (2005b) did pioneering work to analyze (much more carefully and realistically than we do here) the proportion of results that are likely to be false, and he concluded that it could very easily be a majority of all reported effects.

Table 1. Proportion of Positive Results That Are False Given Assumptions About Prior Probability of an Effect and Power.

| Prior probability of effect | Power | Proportion of studies yielding true positives | Proportion of studies yielding false positives | Proportion of total positive results which are false |
|---|---|---|---|---|
| 10% | 80% | 10% × 80% = 8% | (100%−10%) × 5% = 4.5% | 4.5% / (4.5% + 8%) = 36% |
| 10% | 35% | 10% × 35% = 3.5% | 4.5% | 4.5% / (4.5% + 3.5%) = 56.25% |
| 50% | 35% | 50% × 35% = 17.5% | (100%−50%) × 5% = 2.5% | 2.5% / (2.5% + 17.5%) = 12.5% |
| 75% | 35% | 75% × 35% = 26.3% | 1.6% | 1.6% / (1.6% + 26.3%) = 5.73% |

↩︎

3.
So for example, if we imagined that a Jaeggi effect size of 0.8 were completely borne out by a meta-analysis of many studies and turned in a point estimate of d = 0.8, this data would imply that the strength of the n-back effect was ~1 standard deviation above the average effect (of things which get studied enough to be meta-analyzable & have published meta-analyses etc), or to put it another way, that n-back was stronger than ~84% of all reliable well-substantiated effects that psychology/education had discovered as of 1992.↩︎

4. We can infer empirical priors from field-wide collections of effect sizes, in particular, highly reliable meta-analytic effect sizes. For example, Lipsey & Wilson 1993, which finds for various kinds of therapy a mean effect of d = 0.5 based on >300 meta-analyses; or better yet, “One Hundred Years of Social Psychology Quantitatively Described”, Bond et al 2003: This article compiles results from a century of social psychological research, more than 25,000 studies of 8 million people. A large number of social psychological conclusions are listed alongside meta-analytic information about the magnitude and variability of the corresponding effects. References to 322 meta-analyses of social psychological phenomena are presented, as well as statistical effect-size summaries. Analyses reveal that social psychological effects typically yield a value of r equal to .21 and that, in the typical research literature, effects vary from study to study in ways that produce a standard deviation in r of .15. Uses, limitations, and implications of this large-scale compilation are noted. Only 5% of the effects were greater than .50; only 34% yielded an r of .30 or more; for example, Jaeggi 2008’s 15-day group racked up an IQ increase of d = 1.53, which converts to an r of 0.61 and is 2.6 standard deviations above the overall mean, implying that the DNB effect is greater than ~99% of previously known effects in psychology!
(Schönbrodt & Perugini 2013 observe that their sampling simulations imply that, given Bond’s mean effect of r = .21, a psychology study would require n = 238 for reasonable accuracy in estimating effects; most studies are far smaller.)↩︎

5. One might be aware that the writer of that essay, Jonah Lehrer, was fired after making up materials for one of his books, and wonder if this work can be trusted; I believe it can, as the New Yorker is famous for rigorous fact-checking (and no one has cast doubt on this article), Lehrer’s scandals involved his books, I have not found any questionable claims in the article besides Lehrer’s belief that known issues like publication bias are insufficient to explain the decline effect (which reasonable men may differ on), and Virginia Hughes ran the finished article against 7 people quoted in it like Ioannidis without any disputing facts/quotes & several somewhat praising it (see also Andrew Gelman).↩︎

6. If I am understanding this right, Jaynes’s point here is that the random error shrinks towards zero as N increases, but this error is added onto the “common systematic error” S, so the total error approaches S no matter how many observations you make, and this can force the total error up as well as down (variability, in this case, actually being helpful for once). So for example, 1⁄3 + 1⁄√10 ≈ 0.65; with N = 100, it’s 0.43; with N = 1,000,000 it’s 0.334; and with N = 1,000,000,000 it equals 0.333365, etc, never going below the original systematic error of 1⁄3—that is, after 10 observations, the portion of error due to sampling error is less than that due to the systematic error, so one has hit severely diminishing returns in the value of any additional (biased) data, and to meaningfully improve the estimate one must obtain unbiased data.
This leads to the unfortunate consequence that the likely error of N = 10 is 0.017 < x < 0.64956 while for N = 1,000,000 it is the similar range 0.017 < x < 0.33433—so it is possible that the estimate could be exactly as good (or bad) for the tiny sample as compared with the enormous sample, since neither can do better than 0.017!↩︎

7. Possibly this is what Lord Rutherford meant when he said, “If your experiment needs statistics you ought to have done a better experiment”.↩︎

8. Neglecting the finite-population correction, the standard deviation of the mean sampling error is √(p(1−p)⁄n), and this quantity is largest when p = 0.5. The number of ballots returned was 2,376,523, and with a sample of this size the largest possible value of this standard deviation is √(0.25⁄2,376,523), or 0.0324 percentage point, so that an error of 0.2 percentage point is 0.2⁄0.0324 ≈ 6.17 times the standard deviation. The total area in the two tails of the normal distribution below u = −6.17 and above u = +6.17 is .0000000007.↩︎

9. Over 10 million ballots were sent out. Of the 2,376,523 ballots which were filled in and returned, 1,293,669 were for Landon, 972,897 for Roosevelt, and the remainder for other candidates. The actual vote was 16,679,583 for Landon and 27,476,673 for Roosevelt out of a total of 45,647,117.↩︎

10. Readers curious about modern election forecasting’s systematic vs random error should see Shirani-Mehr et al 2018, “Disentangling Bias and Variance in Election Polls”: the systematic error turns out to be almost identical in size to the random error, i.e. half the total error. Hence, anomalies like Donald Trump or Brexit are not particularly anomalous at all. –Editor.↩︎

11. Johnson, interestingly, like Bouchard, was influenced by (and also ).↩︎

12. I should mention this one is not quite as silly as it sounds, as there is experimental evidence for cocoa improving cognitive function.↩︎

13.
The same authors offer up a number of country-level correlations such as “Linguistic Diversity/Traffic accidents”, alcohol consumption/morphological complexity, and acacia trees vs tonality, which feed into their paper “Constructing knowledge: nomothetic approaches to language evolution” on the dangers of naive approaches to cross-country comparisons due to the high intercorrelation of cultural traits. More sophisticated approaches might be better; they derive a fairly plausible-looking graph of the relationships between variables.↩︎

14. Lots of data is not exactly normal, but, particularly in human studies, this is not a big deal because the n are often large enough, eg n > 20, that the asymptotics have started to work & model misspecification doesn’t produce too large a false-positive-rate inflation or mis-estimation. Unfortunately, in animal research, it’s perfectly typical to have sample sizes more like n = 5, which in an idealized power analysis of a normally-distributed variable might be fine because one is (hopefully) exploiting the freedom of animal models to get a large effect size / precise measurements—except that with n = 5 the data won’t be even close to approximately normal or fitting other model assumptions, and a single biased or selected or outlier datapoint can mess it up further.↩︎
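Footnote 14’s worry can be checked by simulation; a sketch (assuming lognormal data as a stand-in for skewed biological measurements): run a nominal-5% one-sample t-test with n = 5 when the null is actually true, and count how often it falsely rejects.

```python
import math
import random

random.seed(2)

def one_sample_t(xs, mu0):
    """One-sample t statistic for H0: mean == mu0."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return (mean - mu0) / (sd / math.sqrt(n))

TRUE_MEAN = math.exp(0.5)   # true mean of a lognormal(0, 1) variate
CRIT = 2.776                # two-sided 5% critical value of t with df = 4

trials = 20_000
rejections = sum(
    abs(one_sample_t([random.lognormvariate(0, 1) for _ in range(5)],
                     TRUE_MEAN)) > CRIT
    for _ in range(trials))
print(f"empirical false-positive rate: {rejections / trials:.3f}")  # well above 0.05
```

With heavily skewed data and n = 5, the nominal 5% test rejects a true null far more often than advertised, which is exactly the inflation the footnote warns about.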
# Announcing typeful-redux

published on February 26th, 2018

I am proud to announce the publication of typeful-redux, a fully type-safe, low-boilerplate redux wrapper for TypeScript. To my knowledge, this is the first redux wrapper which achieves end-to-end type safety. In particular, the dispatch function / object is fully typed, and these types are maintained when using react-redux's connect function to connect a component to a redux store.

## Elevator pitch

This is how you create a reducer and a store with typeful-redux. Note that all calls are fully type-safe and will trigger type errors when used incorrectly.

    interface TodoItem {
        task: string;
        completed: boolean;
    }

    // Create a new reducer with initial state [], then add three actions
    const TodoReducer = createReducer([] as TodoItem[])
        ('clear', s => [])
        ('add', (s, newItem: TodoItem) => [...s, newItem])
        ('toggle', (s: TodoItem[], i: number) => [
            ...s.slice(0, i),
            { ...s[i], completed: !s[i].completed },
            ...s.slice(i + 1)
        ]);

    // Create the store
    const store = new StoreBuilder()
        .build();

Both the getState function and all functions on the dispatch object are now fully typechecked - using them incorrectly will trigger a type error:

    // The result has type: { todos: TodoItem[] }
    const state = store.getState();

    // All dispatches are fully type checked

    // Dispatches { type: 'todos/clear' }
    store.dispatch.todos.clear();

    // Dispatches
    // { type: 'todos/add',
    //   payload: { task: 'Provide a fully type-safe interface to redux',
    //              completed: false } }
    store.dispatch.todos.add({
        task: 'Provide a fully type-safe interface to redux',
        completed: false
    });

    // Dispatches { type: 'todos/toggle', payload: 0 }
    store.dispatch.todos.toggle(0);

In addition, typeful-redux also provides a typesafe wrapper for react-redux's connect method. This means that any type mismatch when connecting a component to a store will be detected and produce a type error. A very simple, runnable example app can be found here. A TodoMVC implementation with slightly more features is available here.
## Motivation

redux is a fantastic approach to manage state in single-page applications. Unfortunately, vanilla redux requires some boilerplate and is hard to use in a type-safe way. typeful-redux's primary goal is to provide a fully type-safe interface to redux. This means the redux getState and dispatch functions need to have the right types, and these types should be maintained when using the react-redux connect function. All type-incorrect usages of getState or dispatch should trigger a type error. More specifically, typeful-redux seeks to address the following challenges when using redux:

• Full type safety: redux makes it hard to fully type the dispatch method, to guarantee that only actions are dispatched which are handled by the store or that the dispatched actions are type correct (i.e. have the right payload). typeful-redux creates a store that gives a fully type-safe dispatch object, where every action is available as a function expecting the right payload. The getState method is also fully typed and returns a state with the right type.

• Low boilerplate: redux needs actions, possibly action creators and reducers. When trying to set this up in a type-safe way, many things need to be written down twice (or more). This introduces an opportunity for inconsistencies and errors. In typeful-redux, actions and their reducers are defined simultaneously, reducing the amount of code that needs to be written and maintained.

• Avoid inconsistencies: When actions and reducers are defined separately, there is the potential to forget to handle an action (or to misspell a type in a reducer's switch statement). typeful-redux makes this impossible by requiring the simultaneous definition of an action with its reducing code.

• Modularity: In redux, each action type must be unique - so the action type namespace is 'global'. This is non-modular: e.g., the same actions and reducers can't be used for two parts of the store.
typeful-redux namespaces reducers when combining them in a store. This means action types only need to be unique for any single reducer. Additionally, the same reducer can be used multiple times, for several parts of a store.

Besides these differences and a different surface appearance, typeful-redux is not an alternative redux implementation; it is just a thin wrapper around reducer and store creation. The resulting runtime objects are plain redux reducers and stores equipped with the right type definitions (and sometimes some tiny convenience wrappers). All of the existing redux ecosystem should be usable with this library. Please file an issue if you have trouble using a redux library with typeful-redux.

## Overview of the library

typeful-redux exports two functions and one class (and a few supporting type definitions). The purpose of the functions is described here. Also see the examples for example usages. If you find the documentation insufficient, please file an issue or complain to me via email.

### createReducer

This function allows creating a reducer by adding action names and the code 'reducing' the action simultaneously. While adding actions, the type of the reducer is refined so that the right type of the dispatch object can be inferred. Actions and their handlers can be added either by calling the function with the action's type name and a function handling the reduction, or by using the addSetter method (for creating an action without payload) or the addHandler method (for creating an action with payload).
The initial example uses the call syntax to create three actions:

    const TodoReducer = createReducer([] as TodoItem[])
        ('clear', s => [])
        ('add', (s: TodoItem[], newItem: TodoItem) => [...s, newItem])
        ('toggle', (s: TodoItem[], i: number) => [
            ...s.slice(0, i),
            { ...s[i], completed: !s[i].completed },
            ...s.slice(i + 1)
        ]);

There is an alternative syntax to create a reducer, using the addSetter and addHandler methods to add new actions and reduction cases. This looks as follows:

    const TodoReducer = createReducer([] as TodoItem[])
        .addHandler('toggle', (s, i: number) => [
            ...s.slice(0, i),
            { ...s[i], completed: !s[i].completed },
            ...s.slice(i + 1)
        ]);

### StoreBuilder

The StoreBuilder class is used to assemble a store using one or multiple reducers and redux middlewares. It extracts the reducers from the objects created by createReducer and returns a redux store, where the dispatch function is extended by fully typed functions which dispatch the actions created via createReducer. Otherwise the result of the .build() method is a plain redux store where getState() has the right return type inferred.

    // Create the store
    const store = new StoreBuilder()
        .build();

store is a plain redux store with getState, subscribe and dispatch methods. The only difference is that dispatch now also holds objects with methods to enable a type-safe dispatch, and that getState has the right return type.

    // This is fully typed
    store.dispatch.todos.clear();
    store.dispatch.todos.toggle(0);

The type of store.dispatch.todos is

    {
        clear(): void;
        toggle(index: number): void;
    }

Each method dispatches the right action on the store with the passed argument as the payload. Actions are namespaced - so store.dispatch.todos.toggle(0) dispatches a { type: 'todos/toggle', payload: 0 } action. This means that action types no longer have to be globally unique - they just have to be unique for their reducer. This enables using the same reducer for multiple parts of the store.
### connect

This is a re-export of the redux connect function, with a more restricted type to ensure that the typing of the dispatch object is known in the mapDispatchToProps function. This makes it possible to propagate type errors through connect, which is not possible with the current type definition of react-redux's connect.

To explain how connect can be used to its full benefit, we must understand the type of the produced store. In general store will have the following type:

    type Store<STATE, DISPATCH> = {
        getState(): STATE;
        subscribe(): void;
        dispatch: DISPATCH;
    };

where STATE is a map from the reducer names (here: todos) to the state types and DISPATCH is a map from the reducer names (again todos) to functions which dispatch the respective actions.

Now the connect function is set up so that, given a mapStateToProps which accepts a STATE and a mapDispatchToProps which accepts a DISPATCH, it produces a container which needs to have a { store: Store<STATE, DISPATCH>; } as part of its properties. This way the types from the store can be propagated all the way to the components, and changing the type of an action-reducer triggers a type error in all the right places.

    interface State {
        todos: TodoItem[];
    }

    interface Dispatch {
        todos: {
            clear(): void;
            toggle(index: number): void;
        };
    }

    // Let's say we have a TodoListComponent which wants the following
    // properties
    interface TodoListProps {
        todos: TodoItem[];
        clear(): void;
        toggle(index: number): void;
    }

    class TodoListComponent extends React.Component<TodoListProps> {
        // ...
    }

    const mapStateToProps = (state: State) => state;
    const mapDispatchToProps = (dispatch: Dispatch) => dispatch.todos;

    // TodoListContainer is inferred to have a type which requires a property
    // { store: Store<State, Dispatch> }
    //
    const TodoListContainer =
        connect(mapStateToProps, mapDispatchToProps)(TodoListComponent);

connect can also be used with a mapStateToProps and mapDispatchToProps that take a second argument; these second arguments become part of the required properties of the connected container.

## How does it all work?

In the next post, we'll explore in more depth how typeful-redux is implemented, as it uses some tricks to pick up the types from the reducer and transform them to give the dispatch object the right type.
http://physics.aps.org/synopsis-for/10.1103/PhysRevB.79.094521
# Synopsis: A route to large pnictide single crystals The addition of tellurium helps to grow large single crystals of an iron-based superconductor. Iron-based superconductors with transition temperatures as high as 55 K have generated a significant tide of interest. Yet considerable sensitivity to synthesis conditions makes the growth of large single crystals for many of these compounds difficult, and this in turn makes crucial experiments either infeasible or hard to interpret. In a paper appearing in Physical Review B, Brian Sales and David Mandrus of Oak Ridge National Laboratory, and collaborators in the US and Canada, report the successful growth of very large single crystals of ${\text{Fe}}_{1+y}{\text{Te}}_{x}{\text{Se}}_{1-x}$. These compounds are a recent addition to the still expanding family of iron-based superconductors, and they exhibit superconducting transition temperatures as high as 14 K at ambient pressure, and up to 27 K under pressure. They consist of alternating layers of iron and $\text{Te/Se}$, with any excess iron accommodated in the $\text{Te/Se}$ layers. It is interesting to note that the binary compounds ${\text{Fe}}_{1+y}\text{Se}$ and ${\text{Fe}}_{1+y}\text{Te}$ are quite different: the former is superconducting in a narrow range of temperature and composition, but does not form single crystals easily, while the latter does form large single crystals, but superconductivity is absent. Clear evidence for bulk superconductivity in ${\text{Fe}}_{1+y}{\text{Te}}_{x}{\text{Se}}_{1-x}$ appears at $x$ = 0.5. Obtaining such large single crystals opens the way for experiments that will elucidate the electronic and structural properties and the interplay between them in these materials. 
– Alex Klironomos
http://tex.stackexchange.com/questions/163279/how-can-i-type-formula-cosine-of-two-vectors-nice
# How can I typeset the formula for the cosine of two vectors nicely? I want to find the cosine of two vectors, and I have defined the command \cross for the cross product of two vectors. I tried \documentclass[12pt,a4paper]{article} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{fourier} \usepackage{esvect} \usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry} \newcommand{\cross}[2]{\biggl[\vv{#1},\vv{#2} \biggr]} \begin{document} $\cos \varphi =\dfrac{\cross{CA'}{CB} \cdot \cross{CA'}{CD}}{\left \vert \cross{CA'}{CB} \right\vert \cdot \left \vert \cross{CA'}{CD} \right\vert}.$ \end{document} I feel the brackets in the command \cross are not good. How can I repair them? - The square brackets seem to be needlessly tall. Specifically, I don't think it's necessary to make the square brackets sufficiently tall to have them enclose the arrows. Nobody should be confused by the arrows "sticking out" above the brackets. Hence, using \big instead of \bigg for the size of the brackets should be fine. Where I also see room for improvement, typographically speaking, is in the uneven heights of the arrows that are produced by \vv. Since the uneven heights are caused by the presence of the "primes" in the first argument of the \cross macro, one way to address this issue is to automatically add a "vertical phantom" (composed of #1...) to the second argument of the \cross macro. \documentclass[12pt,a4paper]{article} \usepackage[utf8]{inputenc} \usepackage{mathtools} \DeclarePairedDelimiter{\abs}{\lvert}{\rvert} \usepackage{fourier,esvect} \usepackage[margin=2cm]{geometry} \newcommand{\cross}[2]{\bigl[ \vv{#1},\vv{#2\vphantom{#1}} \bigr]} \newcommand\z{\vphantom{{}'}} % insert a vertical phantom as tall as a superscript prime \begin{document} $\cos \varphi =\dfrac{\cross{CA'}{CB} \cdot \cross{CA'}{CD}} {\abs*{\cross{CA'}{CB}} \cdot \abs*{\cross{CA'}{CD}} }\,.$ \end{document} - Another option is to use bold letters for vectors.
I have also changed the brackets to parentheses, hoping that won't change the meaning in your subject. The physics package is used for making vectors bold with the \vb* macro. If you want upright letters for vectors, use \vb without the star. Since \cross is already defined by physics, I have changed it to \Cross. \documentclass[12pt,a4paper]{article} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{fourier} \usepackage{physics} \usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry} \newcommand{\Cross}[2]{(\vb*{#1},\vb*{#2})} \begin{document} $\cos \varphi =\dfrac{\Cross{CA'}{CB} \cdot \Cross{CA'}{CD}}{\vert \Cross{CA'}{CB} \vert \cdot \vert \Cross{CA'}{CD} \vert}$ \end{document} I have also removed \left and \right from \vert. - One possible solution (since you haven't told us how you would like it to be, it's a guess): \documentclass{article} \usepackage{mathtools} \usepackage{fourier} \DeclarePairedDelimiter{\abs}{\lvert}{\rvert} \newcommand*\cross[2]{\left[\overrightarrow{#1},\overrightarrow{#2}\right]} \begin{document} $$\cos\varphi = \frac{\cross{CA'}{CB} \cdot \cross{CA'}{CD}}{\abs*{\cross{CA'}{CB}} \cdot \abs*{\cross{CA'}{CD}}}.$$ \end{document} In this way the brackets will automatically scale relative to the material. Note that I have used \overrightarrow instead of \vv to avoid loading the esvect package. (I don't think it's a 'bad' package but I just prefer to load as few packages as possible.)
Update In case you always have vectors like in the example, you can make the code simpler: \documentclass{article} \usepackage{mathtools} \usepackage{fourier} \DeclarePairedDelimiter{\abs}{\lvert}{\rvert} \newcommand*\cross[2]{\left[\overrightarrow{#1},\overrightarrow{#2}\right]} \newcommand*\crossProduct[3]{ \frac{\cross{#1}{#2} \cdot \cross{#1}{#3}}% numerator {\abs*{\cross{#1}{#2}} \cdot \abs*{\cross{#1}{#3}}}% denominator } \begin{document} $$\cos\varphi = \crossProduct{CA'}{CB}{CD}$$ \end{document} - One must be careful when disregarding the math axis, but it seems from your question that you are unhappy with the extra space below the vectors that the brackets enclose. That extra space is there to give symmetry to the over-arrow vector notation. Many would say that should not be disturbed, even if it looks odd. However, since you were looking for alternatives, here is one such solution that removes that space below the brackets. But see what it does: it keeps the \cdot centered on the letters and therefore asymmetrical with respect to the height of the brackets. So, this is an option, but many would not say it is an improvement. \documentclass[12pt,a4paper]{article} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{fourier} \usepackage{esvect} \usepackage{scalerel} \usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry} \newcommand{\cross}[2]{{\stretchleftright{[}{\vv{#1},\vv{#2}}{]}}} \begin{document} $\cos \varphi =\dfrac{\cross{CA'}{CB} \cdot \cross{CA'}{CD}} {\stretchleftright{\vert}{\protect\cross{CA'}{CB}}{\vert} \cdot \stretchleftright{ \vert}{\cross{CA'}{CD}}{\vert}}.$ \end{document} - "many would not say it is an improvement" I'm one of them. :) – Svend Tveskæg Mar 2 '14 at 17:04 @SvendTveskæg Having received "the heat of the flame" when disturbing the math axis in the past, I have grown more sensitive to the sanctity of the notion 8^O.
If nothing else, the answer can visualize, for the OP, the negatives associated with a notional "fix". – Steven B. Segletes Mar 2 '14 at 17:14 I'm not saying your answer is bad at all! You are simply coming up with one way of doing it, so I think it's absolutely fine. (I'm just saying that I'm not fond of the solution ... just as you aren't, if I'm not totally mistaken.) – Svend Tveskæg Mar 2 '14 at 17:18 @SvendTveskæg As with all things, there are tradeoffs. On this particular one, I really have no preference, and would myself therefore stick with the conventional solution. All too often, one gets an idea of the advantages that arise from a different approach only to find, upon implementation, that there are significant negatives, too. Only then does the wisdom of the original approach become truly manifest. – Steven B. Segletes Mar 2 '14 at 17:28 I know exactly what you mean. :) – Svend Tveskæg Mar 2 '14 at 17:35
https://economics.stackexchange.com/questions/25507/the-effects-of-demonetisation
# The Effects of Demonetisation I do not wish to get into the details of the demonetisation implemented by the Indian Government on 8th November, 2016. Endless political debates have already been conducted advocating or berating this step. As an Economics student, I would like to get some technical insight into this action and the economic effects of demonetisation in general. • Stack Exchange is not for general essay prompts. Unless you edit this to be a specific question, it will likely be closed. – Acccumulation Nov 13 '18 at 15:58 • It is a fairly technical question. It discusses what happens when you pull all the liquidity out of the economy; it actually tested the Keynesian thought that money is not neutral. – DrStrangeLove Nov 14 '18 at 3:06 • @goodbookandcoffee Yes. This is the kind of insight I needed. I wanted to know about some underlying theories in Economics that support such a seemingly bizarre action. – S.Rana Nov 14 '18 at 4:12 I am not an expert in biology, but let me give you some intuition through it. Suppose you have some impurities in your blood. There are two ways to remove that impure blood from your body. 1) To give you some medicine that enters your system to cure you. This is a slow procedure and may not guarantee concrete results. 2) To artificially suck all the blood out of your body, attempt to purify it, and then refill it into your body. This is a fast procedure and will produce certain concrete results, which can be either positive or negative. Now, if the second method seems lucrative to you, make sure you think of what will happen to you in the time interval when your body is without blood. Now let us come to economics. Money is a medium of exchange and a store of value. Let me write a simple equation of money. $$MV = PY$$ $$M$$ ~ Amount of money in circulation. $$V$$ ~ Velocity of money.
Velocity of money is nothing but the number of times the nominal money changes hands. $$P$$ ~ Price level. $$Y$$ ~ Output in the economy. $$PY$$ hence is the value of output, or GDP. You can make intuitive sense of this equation: the GDP of the country is a given quantity of money times how fast it circulates in the economy. Suppose the GDP (aggregate expenditure) of a country is $1,000 and the amount of money in circulation is $500. Then money should change hands two times for expenditure worth $1,000 to occur, which makes the velocity of money 2. Now once you have digested this relationship: demonetisation in the Republic of India was not an attack on the quantity of money but on the velocity of money in circulation. It did not make the existing money worthless, since the old cash could be exchanged for new cash, but it choked off the flow of money, i.e. the velocity of money in the economy, since old cash, which made up 86% of the entire money supply, ceased to be legal tender. Now coming back to our equation: $$MV = PY$$ There was an artificial decrease in velocity (V), so the right-hand side of the equation, $$PY$$, must fall to maintain the equality. That is, GDP must fall temporarily, which is the stagnation of the economy. Understand it this way: you temporarily decrease the purchasing power of the people by pulling all the money out of the system. People stop buying stuff and sellers cannot sell their stuff. Producers will stop producing and people will lose jobs. Hence this is an artificially created recession. This was perfectly evident in the growth rate of the country, which slumped by about 2% at the beginning of 2017. 2% of India's GDP amounts to many billions of dollars, which India lost and which is the actual monetary cost of this decision. Now the fundamental question at the heart of economic decisions: do the benefits justify the costs? That is a debatable topic, as you mentioned, so I am not going into that.
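The arithmetic of the identity above can be sketched in a few lines, using the answer's own toy numbers (M = $500, V = 2, PY = $1,000); the 30% velocity shock below is an illustrative number of my own, not an estimate of the actual Indian shock.

```python
# The exchange identity MV = PY: nominal GDP implied by a money stock
# circulating at a given velocity.
def nominal_gdp(money, velocity):
    """Nominal GDP (PY) implied by the identity MV = PY."""
    return money * velocity

print(nominal_gdp(500, 2))        # -> 1000

# Demonetisation modelled (crudely) as a pure velocity shock: with M
# held fixed, a 30% fall in V forces the same proportional fall in PY.
print(nominal_gdp(500, 2 * 0.7))  # -> 700.0
```

The point of the sketch is only that, with M fixed, any fall in V maps one-for-one into a fall in nominal spending PY.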
But I personally believe that India should have thought it out more rigorously and thoroughly before making such an unconventional move.
http://www.chegg.com/homework-help/questions-and-answers/rate-constant-certain-reaction-k-680-10-3-s-1--initial-reactant-concentration-0800-m-conce-q2121283
The rate constant for a certain reaction is k = 6.80×10⁻³ s⁻¹. If the initial reactant concentration was 0.800 M, what will the concentration be after 9.00 minutes? A zero-order reaction has a constant rate of 2.80×10⁻⁴ M/s. If after 45.0 seconds the concentration has dropped to 8.00×10⁻² M, what was the initial concentration?
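A short worked sketch of both parts, using the standard integrated rate laws (first order: [A] = [A]₀·e^(−kt); zero order: [A] = [A]₀ − kt), which are textbook formulas rather than anything given in the problem statement:

```python
import math

# First-order kinetics: [A] = [A]0 * exp(-k t)
k1 = 6.80e-3                       # rate constant, 1/s
t1 = 9.00 * 60                     # 9.00 minutes converted to seconds
conc = 0.800 * math.exp(-k1 * t1)  # concentration remaining after 9.00 min
print(round(conc, 4))              # -> 0.0203 (M)

# Zero-order kinetics: [A] = [A]0 - k t, so [A]0 = [A] + k t
k0 = 2.80e-4                       # rate constant, M/s
initial = 8.00e-2 + k0 * 45.0      # initial concentration
print(round(initial, 4))           # -> 0.0926 (M)
```

Note the unit conversion in the first part: the rate constant is per second, so the 9.00 minutes must be expressed as 540 s before exponentiating.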
https://eprint.iacr.org/2019/638
## Cryptology ePrint Archive: Report 2019/638 On the Distribution of Quadratic Residues and Non-residues Modulo Composite Integers and Applications to Cryptography Ferucio Laurentiu Tiplea and Sorin Iftene and George Teseleanu and Anca-Maria Nica Abstract: We develop exact formulas for the distribution of quadratic residues and non-residues in sets of the form $a+X=\{(a+x)\bmod n\mid x\in X\}$, where $n$ is a prime or the product of two primes and $X$ is a subset of integers with given Jacobi symbols modulo prime factors of $n$. We then present applications of these formulas to Cocks' identity-based encryption scheme and statistical indistinguishability. Category / Keywords: public-key cryptography / Jacobi symbol, probability distribution, statistical distance, identity-based encryption Original Publication (with minor differences): Applied Mathematics and Computation Date: received 2 Jun 2019, last revised 16 Dec 2019 Contact author: ferucio tiplea at uaic ro, siftene2013@gmail com, george teseleanu@yahoo com, meinsta@yahoo com Available format(s): PDF | BibTeX Citation Short URL: ia.cr/2019/638 [ Cryptology ePrint archive ]
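The abstract counts quadratic residues and non-residues in shifted sets via Jacobi symbols. As a concrete illustration (my own toy example using the standard binary Jacobi-symbol algorithm, not the paper's formulas), here is how one can tabulate Jacobi symbols over a set of the form a + X for a small modulus n = pq:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:          # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):    # (2/n) = -1 when n = 3, 5 (mod 8)
                result = -result
        a, n = n, a                # reciprocity swap
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0  # 0 when gcd(a, n) > 1

n = 15                             # toy modulus: product of two primes, 3 * 5
X = [x for x in range(1, n) if jacobi(x, n) == 1]
print(X)                           # -> [1, 2, 4, 8]
shifted = [(2 + x) % n for x in X] # the set 2 + X from the abstract's notation
print([jacobi(y, n) for y in shifted])  # -> [0, 1, 0, 0]
```

As the output shows, shifting a set of Jacobi-symbol-1 elements scrambles the symbols (and can even hit elements sharing a factor with n); the paper's contribution is exact formulas for how such symbols distribute.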
https://nbviewer.jupyter.org/github/sagemanifolds/SageManifolds/blob/master/Notebooks/SM_black_hole_rendering.ipynb
Black Hole rendering with SageMath¶ Introduction¶ This notebook is a step-by-step implementation of a basic rendering engine in curved spacetime. The objective is to obtain a somewhat realistic image of the accretion disk around a black hole. The technique consists in launching lightlike geodesics toward the past from a single point (the virtual camera), using the geodesic integrator of SageMath. To reduce computation time, the spacetime is assumed to be spherically symmetric; this reduces the number of required geodesics to produce an image of $n_x\times n_y$ pixels from about $O\left(n_x n_y\right)$ to $O\left(\sqrt{n_x^2+n_y^2}\right)$. This work relies heavily on the SageManifolds Project. Advanced SageMath notions will also be used throughout this notebook, like Cython compilation and multithreading. This notebook requires a version of SageMath at least equal to 8.5: In [1]: version() Out[1]: 'SageMath version 8.7, Release Date: 2019-03-23' Overview¶ The code is separated into 9 parts. • Declaring the spacetime • Launching a geodesic • Launching a lot of geodesics! • Figuring out where it intersects with the accretion disk • Adding thickness to the disk • Using black-body radiation and converting spectra to RGB • First relativistic effect: Doppler effect • Second relativistic effect: aberration (forward focalisation) • Conclusion Configuration¶ This notebook can be quite resource-hungry to run. For that reason, different configuration options are provided. It is recommended to start with the lowest one to check that everything works properly. You can of course adapt the number of CPUs to your needs. First configuration: will run in less than a minute on a 4-core laptop. Produces tiny images with no details (no secondary image).
In [2]: # n_cpu = 4  # 4 GB RAM minimum
# n_geod = 100
# nx, ny = 180, 90

Second configuration: will run in about 5 minutes on a workstation, and produces a reasonably sized image:

In [3]: n_cpu = 8  # 8 GB RAM minimum
n_geod = 1000
nx, ny = 720, 360

Third configuration: will run in 30 minutes on the Google Cloud Compute Engine. Produces a 4K image showing tiny details on the secondary disk images.

In [4]: # n_cpu = 36  # 144 GB RAM minimum
# n_geod = 30000
# nx, ny = 4000, 2000

Additional preliminaries: display objects with $\LaTeX$ where possible, and silence deprecation warnings that arise from a few third-party packages:

In [5]: %display latex
import warnings
warnings.simplefilter('ignore', DeprecationWarning)

Declaring the spacetime

Let's start slow by declaring the spacetime we'll use for rendering: the Schwarzschild spacetime. It is important to use a coordinate system that is regular at the horizon; here we use the Eddington-Finkelstein coordinates. Let $m$ be the mass of the black hole (which we'll take equal to 2 later). We also add a restriction to ensure that nothing touches the central singularity, and we set the metric $g$.

In [6]: M = Manifold(4, 'M', structure='Lorentzian')

In [7]: C.<t, r, th, ph> = M.chart(r't r:(1,+oo) th:(0,pi):\theta ph:\phi')
C.coord_range()
Out[7]:

In [8]: m = var('m')

In [9]: g = M.metric()
g[0,0] = -(1-2*m/r)
g[0,1] = 2*m/r
g[1,1] = 1+2*m/r
g[2,2] = r^2
g[3,3] = (r*sin(th))^2
g[:]
Out[9]:

In [10]: g.display()
Out[10]:

We also define a 3-dimensional Euclidean space $E$ to plot some results, using a map $\phi: M \rightarrow E$:

In [11]: E.<x, y, z> = EuclideanSpace()
phi = M.diff_map(E, [r*sin(th)*cos(ph), r*sin(th)*sin(ph), r*cos(th)])
phi.display()
Out[11]:

Launching a geodesic

Geodesic integration was first implemented in SageMath in 2017 and perfected in 2018 to support fast integration and event handling (used to detect the singularity in our case). To introduce the method, let's plot an orbit around a black hole.
To do that, we need to find a starting point $p$ as well as an initial velocity vector $v$. It can be quite troublesome to find a suitable one, but here is a free one:

In [12]: p = M((0, 14.98, pi/2, 0))
Tp = M.tangent_space(p)
v = Tp((2, 0, 0.005, 0.05))
v = v / sqrt(-g.at(p)(v, v))

$v$ is defined as a member of the tangent space at $p$. The last line normalizes $v$ as a unit timelike vector.

Next is the definition of the geodesic. We need to pass a symbolic variable for the proper time (which will not be used). The starting point is deduced from the velocity vector (as the point where the velocity vector is defined).

In [13]: tau = var('tau')
curve = M.integrated_geodesic(g, (tau, 0, 3000), v)

The integration should be very fast. Don't forget to give some numerical value to $m$ here.

In [14]: sol = curve.solve(step = 1, method="ode_int", parameters_values={m: 2})
# sol = curve.solve(step = 1, parameters_values={m: 2})

Plotting the solution requires an interpolation. This is automatically done in the next line.

In [15]: interp = curve.interpolate()

The following cell plots the result using the mapping we provided previously. We also add a grey sphere at $r_s = 2m = 4$ (the event horizon) to give a scale.

In [16]: P = curve.plot_integrated(mapping=phi, color="red", thickness=2, plot_points=3000)
P = P + sage.plot.plot3d.shapes.Sphere(4, color='grey')
P.show(aspect_ratio=[1, 1, 1], viewer='threejs', online=True)

You can see that it looks nothing like an ellipse, as we are used to in classical celestial mechanics.

At this step, you can try adding an angular momentum to the black hole (in other words, going from Schwarzschild to Kerr) by setting a non-zero angular momentum in the definition of the manifold ($J=1$ works fine). When this is the case, the orbits are not even contained in a plane. Don't forget to revert your changes before proceeding to the next part.
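As a sanity check on the normalization step in cell In [12], the same arithmetic can be reproduced outside Sage with plain NumPy. This is only an illustrative sketch, not the notebook's API: the metric components are copied from cell In [9] with $m = 2$, and the helper name `ef_metric` is made up here.

```python
import numpy as np

def ef_metric(r, th, m=2.0):
    """Schwarzschild metric in Eddington-Finkelstein coordinates (t, r, th, ph)."""
    g = np.zeros((4, 4))
    g[0, 0] = -(1 - 2*m/r)          # g_tt
    g[0, 1] = g[1, 0] = 2*m/r       # g_tr
    g[1, 1] = 1 + 2*m/r             # g_rr
    g[2, 2] = r**2                  # g_thth
    g[3, 3] = (r*np.sin(th))**2     # g_phph
    return g

g = ef_metric(14.98, np.pi/2)
v = np.array([2.0, 0.0, 0.005, 0.05])
norm2 = v @ g @ v                   # g(v, v); negative means timelike
v_unit = v / np.sqrt(-norm2)        # same normalization as cell In [12]
print(norm2, v_unit @ g @ v_unit)   # g(v_unit, v_unit) is -1 up to rounding
```

The sign convention matches the $(-,+,+,+)$ signature of the metric declared above, which is why a timelike vector has a negative squared norm.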
Launching a lot of geodesics!

Of course, one geodesic is not enough for us; we'll need at least a few hundred of them. Because we don't need to compute the equation again each time, we simply copy the previous declaration of the geodesic while changing the initial point and velocity.

It will be useful here to introduce the Python module multiprocessing and progress bars as widgets:

In [17]: import multiprocessing
from ipywidgets import FloatProgress
from IPython.display import display

It wouldn't be a great idea to set "1 job = 1 geodesic integration". Indeed, that would mean copying the geodesic declaration a few hundred times, which would be quite slow. What is done instead is separating the geodesics into batches using the following function:

In [18]: def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]

The number of batches per CPU is not very important. If set to 1, some CPUs may run faster than others and stay idle at the end. If too high, too much time will be spent copying the curve settings. I found 3 to be a good value.

In [19]: n_batches_per_cpu = 3

We also redefine the previous geodesic to our new needs: fewer steps and the ability to check for chart boundaries when integrating. The $v$ in this case will not be used; it will always be overwritten before starting any integration.

In [20]: curve = M.integrated_geodesic(g, (tau, 0, 200), v, across_charts=True)

When using multiprocessing, functions can only accept a single argument. To overcome this limitation, each argument will be a tuple (curve, start index, number of curves to integrate).

In [21]: args = []
start_index = 0
for chunk in chunks(range(n_geod), n_geod//(n_batches_per_cpu*n_cpu)):
    args += [(curve, start_index, len(chunk))]
    start_index += len(chunk)

The next line prints the last of the arguments. We can check that the geodesics are correctly set.
Our little trick allowed us to only define a handful of geodesic batches (about 3 per core, as we wanted; note that the exact result here will depend on what you used for n_cpu at the beginning).

In [22]: print(args[-1])
print(len(args))
(Integrated geodesic in the 4-dimensional Lorentzian manifold M, 984, 16)
25

Now comes a question: which vector can be used as the starting 4-velocity? We need a past-oriented lightlike vector pointing toward the center but with a linearly increasing angle. The 3 space components are already imposed. The time component must then be chosen so that the total vector is lightlike. Let $p$ be the initial point and $v$ the initial 4-velocity, with an unknown time coordinate $dt$ ($y$ depends on the angle; it is a known quantity).

In [23]: dt, y, r0 = var('dt, y, r0')

In [24]: p = M((0, r0, pi/2, 0))
Tp = M.tangent_space(p)
v = Tp((dt, -1, 0, y))

The norm of $v$ is currently given by:

In [25]: g.at(p)(v, v)
Out[25]:

We need to find $dt$ so that this expression is equal to 0 (lightlike condition). This is easy:

In [26]: sol = g.at(p)(v, v).solve(dt)
sol
Out[26]:

As expected, there are two solutions: one past-oriented and one future-oriented. In fact, in our case it does not matter, given that the Schwarzschild spacetime is static.

The next cell defines the function that will be called by multiprocessing. It starts by unpacking the arguments, setting an empty dictionary as the result, and defining the starting position. The initial velocity is then overwritten using the formula above, the integration is performed, and the result is added to the dictionary.
In [27]: def calc_some_geodesics(args):
    """
    Compute nb geodesics starting at index n0
    """
    curve, n0, nb = args
    res = {}
    r = 100
    posi = [0, r, pi/2, 0]
    p = M(posi)
    Tp = M.tangent_space(p)
    for i in range(n0, n0+nb):
        # starting vector
        dy = i*0.006/n_geod
        v = Tp([sol[0].rhs()(r0=r, y=dy, m=2).n(), -1, 0, dy])
        # overwrite the starting vector
        curve._initial_tangent_vector = v
        # integration with m=2
        curve.solve_across_charts(step=0.2, parameters_values={m:2})
        # copy and clear the solution
        res[i] = (p.coord(), curve._solutions.copy())
        curve._solutions.clear()
    return res

geo will keep the numerical solutions. I like to see pool as a hole in which I can throw some jobs; multiprocessing will then magically do them for me, using every resource available on the computer.

In [28]: geo = {}
pool = multiprocessing.Pool(n_cpu)
# progress bar display
f = FloatProgress(min=0, max=n_geod)
display(f)
for i, some_res in enumerate(pool.imap_unordered(calc_some_geodesics, args)):  # do and wait
    # progress bar update
    f.value += len(some_res)
    # update the result
    geo.update(some_res)
# clean exit
pool.close()
pool.join()

If, for any reason, you don't want to use parallel computing, you can replace the previous cell with this one:

In [29]: # geo = calc_some_geodesics((curve, 0, n_geod))

We can now try to visualize those geodesics. The next cell will plot 20 of them.

In [30]: # add the sphere
P = sage.plot.plot3d.shapes.Sphere(4, color='grey')
# cycle through the solutions
for i in range(0, n_geod, 5*n_geod//100):
    # set the solution
    curve._solutions = geo[i][1]
    # do the interpolation
    interp = curve.interpolate()
    # plot the curve
    P += curve.plot_integrated(mapping=phi, color=["red"], thickness=2, plot_points=150, label_axes=False, across_charts=True)
# show the result
P.show(aspect_ratio=[1, 1, 1], viewer='threejs', online=True)

We can see that some fall inside the black hole toward the singularity.
That's not an issue, because the integration is automatically stopped when the geodesic leaves the chart domain defined in part 1.

Intersection with the accretion disk

Time to transform those simulated light-rays into an image. To do this, we first need to compute the intersection between each geodesic and the accretion disk. For this example, the disk spans from $r=12$ to $r=50$, and is tilted by an angle $\alpha = - \frac{\pi}{20}$.

In [31]: disk_min = 12
disk_max = 50
alpha = -pi/20

Let's plot the disk on top of the last figure. (We cheat a little bit here and use a flattened torus.)

In [32]: D = sage.plot.plot3d.shapes.Torus((disk_min+disk_max)/2, (disk_max-disk_min)/2).scale(1,1,0.01).rotateY(-pi/20)

In [33]: (P+D).show(aspect_ratio=[1, 1, 1], viewer='threejs', online=True)

The same, but tilted on the X-axis by an angle $\beta=\frac{\pi}{3}$. As explained earlier, the final image will be obtained by computing, for each pixel:
• which geodesic best describes the light-ray
• the angle $\beta$ by which the disk should be tilted
• the intersection between the disk and that geodesic

In [34]: (P+D.rotateX(pi/3)).show(aspect_ratio=[1, 1, 1], viewer='threejs', online=True)
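The notebook goes on to compute these intersections with Sage. Purely as an illustration of the geometric step, here is a plain-NumPy sketch of the same idea; the helper `disk_crossings` and the sample path are made up for this example and are not the notebook's code. It rotates the sampled curve into the tilted disk frame, detects sign changes of the out-of-plane coordinate, and interpolates linearly between the two bracketing samples.

```python
import numpy as np

def disk_crossings(points, beta, r_min=12.0, r_max=50.0):
    """Find where a sampled curve crosses the plane of a disk tilted by
    angle beta about the x-axis, keeping hits with r_min <= r <= r_max.
    `points` is an (N, 3) array of Cartesian samples along the curve."""
    c, s = np.cos(beta), np.sin(beta)
    rot = np.array([[1, 0, 0], [0, c, s], [0, -s, c]])  # into the disk frame
    q = points @ rot.T
    z = q[:, 2]                          # out-of-plane coordinate
    hits = []
    for i in np.flatnonzero(np.sign(z[:-1]) != np.sign(z[1:])):
        lam = z[i] / (z[i] - z[i+1])     # linear interpolation weight
        p = q[i] + lam * (q[i+1] - q[i])
        r = np.hypot(p[0], p[1])
        if r_min <= r <= r_max:
            hits.append(p)
    return np.array(hits)

# a straight sample path crossing the equatorial plane (beta = 0)
path = np.stack([np.full(10, 20.0), np.zeros(10), np.linspace(-5, 5, 10)], axis=1)
print(disk_crossings(path, beta=0.0))    # one crossing near (20, 0, 0)
```

The real geodesics are of course curved, but since they are returned as dense samples by the integrator, the same sign-change test applies segment by segment.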
http://www.aimsciences.org/article/doi/10.3934/jimo.2019037
# American Institute of Mathematical Sciences

July 2020, 16(4): 1943-1965. doi: 10.3934/jimo.2019037

## Impact of risk aversion on two-echelon supply chain systems with carbon emission reduction constraints

1 Institute of Operations Research, School of Management, Qufu Normal University, Rizhao, Shandong 276826, China
2 Department of Health Services and Outcomes Research, National Healthcare Group, 138543, Singapore

* Corresponding author

Received June 2018. Revised November 2018. Published May 2019.

Fund Project: The research is partly supported by the National Natural Science Foundation of China under grant 71771138, the Humanities and Social Sciences Youth Foundation of the Ministry of Education of China under grant 17YJC630004, the Natural Science Foundation of Shandong Province, China under grant ZR2017MG009, and the Special Foundation for Taishan Scholars of Shandong Province, China under grant tsqn201812061.

This study examines a two-echelon supply chain consisting of two competing manufacturers and one retailer that has the channel power, in which one manufacturer is engaged in sustainable technology to curb carbon emissions under the cap-and-trade regulation while the other operates its business as usual in a traditional manner. Two different supply chain configurations concerning the risk attributes of the agents are considered, that is, (i) two risk-neutral manufacturers with one risk-averse retailer; and (ii) two risk-averse manufacturers with one risk-neutral retailer. Under the mean-variance framework, we use a retailer-leader game optimization approach to study the operational decisions of these two systems. Specifically, optimal operational decisions of the agents are established in closed-form expressions and the corresponding profits and carbon emissions are assessed. Numerical experiments are conducted to analyze the impact of risk aversion on the underlying supply chains. The results show that each risk-averse agent would benefit from a low level of risk aversion.
Further, low carbon emissions could be attainable if the risk aversion of the underlying manufacturer is of small or moderate scale. In addition, the carbon emissions might increase when the risk aversion of the traditional manufacturer or the retailer is of small or moderate scale.

Citation: Qingguo Bai, Fanwen Meng. Impact of risk aversion on two-echelon supply chain systems with carbon emission reduction constraints. Journal of Industrial & Management Optimization, 2020, 16 (4) : 1943-1965. doi: 10.3934/jimo.2019037

Figures: Effects of $\lambda_{r}$ on DM$_{1}$; Effects of $\lambda_{m_{1}}$ on DM$_{2}$; Effects of $\lambda_{m_{2}}$ on DM$_{2}$

The optimal solutions for DM$_{1}$

| Decentralized Model 1 | $w^{*}_{1}$ | $w^{*}_{2}$ | $s^{*}$ | $p^{*}_{1}$ | $p^{*}_{2}$ |
|---|---|---|---|---|---|
| $C = 9000$ | 422.5471 | 207.3770 | 8.0263 | 483.9910 | 297.6065 |
| $C = 12569$ | 422.5471 | 207.3770 | 8.0263 | 483.9910 | 297.6065 |
| $C = 15000$ | 422.5471 | 207.3770 | 8.0263 | 483.9910 | 297.6065 |

The optimal profits and carbon emissions for DM$_{1}$

| Decentralized Model 1 | $U^{*}(\pi_{r})$ | $E^{*}(\pi_{m_{1}})$ | $E^{*}(\pi_{m_{2}})$ | $J(s^{*})$ |
|---|---|---|---|---|
| $C = 9000$ | 17,627 | 57,170 | 31,166 | 12,569 |
| $C = 12569$ | 17,627 | 67,877 | 31,166 | 12,569 |
| $C = 15000$ | 17,627 | 75,170 | 31,166 | 12,569 |

The optimal solutions for DM$_{2}$

| Decentralized Model 2 | $w^{**}_{1}$ | $w^{**}_{2}$ | $s^{**}$ | $p^{**}_{1}$ | $p^{**}_{2}$ |
|---|---|---|---|---|---|
| $C = 9000$ | 298.4721 | 67.4862 | 2.1871 | 506.5056 | 311.8028 |
| $C = 11416$ | 298.4721 | 67.4862 | 2.1871 | 506.5056 | 311.8028 |
| $C = 15000$ | 298.4721 | 67.4862 | 2.1871 | 506.5056 | 311.8028 |

The optimal profits and carbon emissions for DM$_{2}$

| Decentralized Model 2 | $E^{**}(\pi_{r})$ | $U^{**}(\pi_{m_{1}})$ | $U^{**}(\pi_{m_{2}})$ | $J(s^{**})$ |
|---|---|---|---|---|
| $C = 9000$ | 66,868 | 31,809 | 5617.9 | 11,416 |
| $C = 11416$ | 66,868 | 39,057 | 5617.9 | 11,416 |
| $C = 15000$ | 66,868 | 49,809 | 5617.9 | 11,416 |
http://exxamm.com/blog/Blog/13135/zxcfghfgvbnm4?Class%2012
Chemistry: General Properties of the Transition Elements (d-Block) - 3

### Topics Covered :

● Chemical Reactivity and E^⊖ Values
● Magnetic Properties
● Formation of Coloured Ions

### Chemical Reactivity and E^⊖ Values :

=> Transition metals vary widely in their chemical reactivity. Many of them are sufficiently electropositive to dissolve in mineral acids, although a few are 'noble', that is, they are unaffected by simple acids.

=> The metals of the first series, with the exception of copper, are relatively more reactive and are oxidised by 1M H^+, though the actual rate at which these metals react with oxidising agents like the hydrogen ion (H^+) is sometimes slow.

● For example, titanium and vanadium are, in practice, passive to dilute non-oxidising acids at room temperature.

=> The E^⊖ values for M^(2+)|M (Table 8.2) indicate a decreasing tendency to form divalent cations across the series. This general trend towards less negative E^⊖ values is related to the increase in the sum of the first and second ionisation enthalpies.

Note :

(i) The E^⊖ values for Mn, Ni and Zn are more negative than expected from the general trend.

(ii) Whereas the stabilities of the half-filled d subshell (d^5) in Mn^(2+) and the completely filled d subshell (d^10) in zinc are related to their E^⊖ values, for nickel the E^⊖ value is related to the highest negative enthalpy of hydration.

=> An examination of the E^⊖ values for the redox couple M^(3+)|M^(2+) (Table 8.2) shows that Mn^(3+) and Co^(3+) ions are the strongest oxidising agents in aqueous solutions.

=> The ions Ti^(2+), V^(2+) and Cr^(2+) are strong reducing agents and will liberate hydrogen from a dilute acid, e.g.,

2 Cr^(2+) (aq) + 2 H^(+) (aq) → 2 Cr^(3+) (aq) + H_2 (g)

Q 3081201127 For the first row transition metals the E^⊖ values are:

| E^⊖ | V | Cr | Mn | Fe | Co | Ni | Cu |
|---|---|---|---|---|---|---|---|
| (M^(2+)/M) | -1.18 | -0.91 | -1.18 | -0.44 | -0.28 | -0.25 | +0.34 |

Explain the irregularity in the above values.

Solution: The E^⊖ (M^(2+)/M) values are not regular, which can be explained from the irregular variation of the ionisation enthalpies (Δ_iH_1 + Δ_iH_2) and also the sublimation enthalpies, which are relatively much less for manganese and vanadium.

Q 3011401320 Why is the E^⊖ value for the Mn^(3+)/Mn^(2+) couple much more positive than that for Cr^(3+)/Cr^(2+) or Fe^(3+)/Fe^(2+)? Explain.

Solution: The much larger third ionisation energy of Mn (where the required change is d^5 to d^4) is mainly responsible for this. This also explains why the +3 state of Mn is of little importance.

### Magnetic Properties :

=> When a magnetic field is applied to substances, two main types of magnetic behaviour are observed: (i) diamagnetism and (ii) paramagnetism.

Diamagnetic substances : These substances are repelled by the applied magnetic field.

Paramagnetic substances : These substances are attracted by the applied magnetic field.

Ferromagnetic substances : Substances which are attracted very strongly are said to be ferromagnetic. In fact, ferromagnetism is an extreme form of paramagnetism.

=> Many of the transition metal ions are paramagnetic.

=> Paramagnetism arises from the presence of unpaired electrons, each such electron having a magnetic moment associated with its spin angular momentum and orbital angular momentum.

● For the compounds of the first series of transition metals, the contribution of the orbital angular momentum is effectively quenched and hence is of no significance.

● For these, the magnetic moment is determined by the number of unpaired electrons and is calculated by using the 'spin-only' formula, i.e.,

μ = sqrt(n(n + 2))

where n is the number of unpaired electrons and μ is the magnetic moment in units of the Bohr magneton (BM).

● A single unpaired electron has a magnetic moment of 1.73 Bohr magnetons (BM).

=> The magnetic moment increases with the increasing number of unpaired electrons.

● Therefore, the observed magnetic moment gives a useful indication of the number of unpaired electrons present in the atom, molecule or ion.

=> The magnetic moments calculated from the 'spin-only' formula and those derived experimentally for some ions of the first row transition elements are given in Table 8.7. The experimental data are mainly for hydrated ions in solution or in the solid state.

Q 3071401326 Calculate the magnetic moment of a divalent ion in aqueous solution if its atomic number is 25.

Solution: With atomic number 25, the divalent ion in aqueous solution will have a d^5 configuration (five unpaired electrons). The magnetic moment is μ = sqrt(5(5+2)) = 5.92 BM.

### Formation of Coloured Ions :

=> When an electron from a lower-energy d orbital is excited to a higher-energy d orbital, the energy of excitation corresponds to the frequency of light absorbed. This frequency generally lies in the visible region.

=> The colour observed corresponds to the complementary colour of the light absorbed.

=> The frequency of the light absorbed is determined by the nature of the ligand.

=> In aqueous solutions where water molecules are the ligands, the colours of the ions observed are listed in Table 8.8.

=> A few coloured solutions of d-block elements are illustrated in Fig. 8.5.
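As a quick illustration of the 'spin-only' formula above, a small Python sketch (not part of the source page) tabulating μ for n = 1 to 5 unpaired electrons:

```python
from math import sqrt

def spin_only_moment(n):
    """Spin-only magnetic moment, in Bohr magnetons, for n unpaired electrons."""
    return sqrt(n * (n + 2))

for n in range(1, 6):
    print(n, round(spin_only_moment(n), 2))
# 1 -> 1.73, 2 -> 2.83, 3 -> 3.87, 4 -> 4.9, 5 -> 5.92
```

The n = 1 and n = 5 rows reproduce the 1.73 BM quoted above for a single electron and the 5.92 BM found in the worked example for the d^5 ion.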
http://mathhelpforum.com/calculus/92884-bound-error-lagrange-interpolation.html
# Math Help - Bound of error (Lagrange Interpolation)

1. ## Bound of error (Lagrange Interpolation)

Hello everybody,

Given the Lagrange polynomial with the table:

$\begin{tabular} {|c|c|c|c|} \hline x & 8.3 & 8.6 & 8.7 \\ \hline f(x) & 17.56492 & 18.50515 & 18.82091 \\ \hline \end{tabular}$

Find $P_2(x)$ for $x = 8.4$, the actual error and the bound of the error for the function $f(x) = x \ln x$.

OK, now I've found $P_2(x)$ and I'm approximating $P_2(8.4)$, which from my polynomial is 17.877155 (close enough, I think). But I don't know how to find the bound of the error using the remainder term! Anyone please show me the steps at least. Thanks in advance.

3. Ok thanks... I want to know if this works. Applying the formula:

$|f(x)-P_2(x)|\leq \frac{\sup_{[a;b]}|f^{(n+1)}(x)|}{(n+1)!}\left|\prod_{i=0}^{n}{(x-x_i)}\right|$

now setting $a=8.3, b=8.7$, I've got $f^{(3)}(x)=-\frac{1}{x^2}$.

Is it okay to choose $\sup_{[a;b]}|f^{(3)}(x)| = \frac{1}{8.3^2}$ since $|f^{(3)}(x)|$ is decreasing on $[a, b]$?

4. Originally Posted by javax
Is it okay to choose $\sup_{[a;b]}|f^{(3)}(x)| = \frac{1}{8.3^2}$ since $|f^{(3)}(x)|$ is decreasing on $[a, b]$?
I think so.
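For what it's worth, the numbers in this thread can be checked with a quick Python sketch (mine, not from the thread; the helper name `lagrange` is made up, and `math.prod` needs Python 3.8+). It evaluates the interpolating polynomial at x = 8.4, with node values recomputed from f(x) = x ln x, and compares the actual error against the bound discussed above.

```python
from math import log, prod

xs = [8.3, 8.6, 8.7]
ys = [x * log(x) for x in xs]        # node values of f(x) = x ln x

def lagrange(x, xs, ys):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = prod((x - xj) / (xi - xj) for j, xj in enumerate(xs) if j != i)
        total += yi * li
    return total

x = 8.4
p2 = lagrange(x, xs, ys)             # close to the 17.877155 reported above
actual_err = abs(x * log(x) - p2)

# |f'''(x)| = 1/x^2 is decreasing on [8.3, 8.7], so its sup is at x = 8.3
bound = (1 / 8.3**2) / 6 * abs(prod(x - xi for xi in xs))
print(p2, actual_err, bound)
```

The actual error indeed comes out below the bound, consistent with choosing the sup of |f'''| at the left endpoint.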
https://src.rampantmonkey.com/dcphr/tree/cyoa/walkthrough/book_burning.tex?id=6225b26b95f12adbdc2cd3fe69d1c209704f8670
\section{Book Burning}

\subsection*{Upon Arrival}
\begin{itemize}
\item Insert bookmark into book on page 140
\item Open book to page 68
\item Make items and posters available to team
\end{itemize}

\subsection*{Upon Departure}
\begin{itemize}
\item Ensure that the correct answer is entered on page 140
\item Add the following sticker to page 140 \\
\begin{mdframed}
How terrible! I don’t believe for a second that this wasn’t the work of those vile librarians. It’s time to bring them to justice. Turn to page 41. \\
\end{mdframed}
\item Tear out page 45 \& 46
\item Return book open to page 140
\end{itemize}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999645948410034, "perplexity": 1510.4989955606193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738960.69/warc/CC-MAIN-20200813043927-20200813073927-00403.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/ipi.2009.3.155
# American Institute of Mathematical Sciences

May 2009, 3(2): 155-172. doi: 10.3934/ipi.2009.3.155

## On the existence of transmission eigenvalues

1 University of Karlsruhe, Department of Mathematics, 76128 Karlsruhe

Received June 2008. Revised February 2009. Published May 2009.

The investigation of the far field operator and the Factorization Method in inverse scattering theory leads naturally to the study of corresponding interior transmission eigenvalue problems. In contrast to the classical Dirichlet or Neumann eigenvalue problem for $-\Delta$ in bounded domains, these interior transmission eigenvalue problems fail to be selfadjoint. In general, the existence of eigenvalues is an open problem. In this paper we prove the existence of eigenvalues for the scalar Helmholtz equation (isotropic and anisotropic cases) and for Maxwell's equations, under the condition that the contrast of the scattering medium is large enough.

Citation: Andreas Kirsch. On the existence of transmission eigenvalues. Inverse Problems & Imaging, 2009, 3 (2) : 155-172. doi: 10.3934/ipi.2009.3.155
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6977435350418091, "perplexity": 4232.340641227132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585209.43/warc/CC-MAIN-20211018190451-20211018220451-00010.warc.gz"}
https://www.enotes.com/homework-help/qd-1000-10p-qs-200-10p-identify-q-intercept-demand-536202
# Qd = 1000 - 10P, Qs = -200 + 10P. Identify the Q intercept for the demand curve and the P intercept for the supply curve.

The y intercept of a function y = f(x) can be found by substituting x = 0. Similarly, the x intercept can be found by substituting y = 0. An intercept is the value at which the curve crosses that particular axis.

In this case, the demand curve is given as:

Qd = 1000 - 10P

The Q intercept is the value of Q where the demand curve crosses the Q axis. It can be found by substituting P = 0. In other words,

Qd = 1000 - 10(0) = 1000

Similarly, for the supply curve,

Qs = -200 + 10P

the P intercept is the value of P where the supply curve crosses the P axis. It can be found by substituting Qs = 0. Thus,

-200 + 10P = 0, or P = 200/10 = 20.

Hope this helps.
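As a sketch of the arithmetic, the two intercept calculations can be checked directly (the function names `qd` and `qs` are illustrative, not from any textbook):

```python
def qd(p):
    """Quantity demanded at price p: Qd = 1000 - 10P."""
    return 1000 - 10 * p

def qs(p):
    """Quantity supplied at price p: Qs = -200 + 10P."""
    return -200 + 10 * p

q_intercept = qd(0)      # demand crosses the Q axis where P = 0
p_intercept = 200 / 10   # supply crosses the P axis where Qs = 0

print(q_intercept, p_intercept)  # 1000 20.0
```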
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9398553371429443, "perplexity": 1315.653254180546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141743438.76/warc/CC-MAIN-20201204193220-20201204223220-00462.warc.gz"}
http://www.exampleproblems.com/wiki/index.php/Normal_distribution
# Normal distribution

[Figure: probability density function for the normal distribution; the green line is the standard normal distribution. Figure: cumulative distribution function for the normal distribution; colors match the pdf above.]

Parameters: μ, location (real); σ² > 0, squared scale (real)
Support: $x \in (-\infty,+\infty)$
pdf: $\frac{1}{\sigma\sqrt{2\pi}}\; \exp\left(-\frac{\left(x-\mu\right)^2}{2\sigma^2} \right)$
cdf: $\frac{1}{2} \left(1 + \operatorname{erf}\,\frac{x-\mu}{\sigma\sqrt{2}}\right)$
Mean: μ
Median: μ
Mode: μ
Variance: σ²
Skewness: 0
Excess kurtosis: 0
Entropy: $\ln\left(\sigma\sqrt{2\,\pi\,e}\right)$
mgf: $M_X(t) = \exp\left(\mu\,t+\frac{\sigma^2 t^2}{2}\right)$
Char. func.: $\phi_X(t) = \exp\left(\mu\,i\,t-\frac{\sigma^2 t^2}{2}\right)$

The normal distribution, also called the Gaussian distribution, is an extremely important probability distribution in many fields. It is a family of distributions of the same general form, differing in their location and scale parameters: the mean ("average") and standard deviation ("variability"), respectively. The standard normal distribution is the normal distribution with a mean of zero and a standard deviation of one (the green curves in the plots to the right). It is often called the bell curve because the graph of its probability density resembles a bell.

## Overview

The normal distribution is a convenient model of quantitative phenomena in the natural and behavioral sciences. A variety of psychological test scores and physical phenomena like photon counts have been found to approximately follow a normal distribution. While the underlying causes of these phenomena are often unknown, the use of the normal distribution can be theoretically justified in situations where many small effects are added together into a score or variable that can be observed.
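As a numerical aside, the density and cumulative distribution function given above can be evaluated with nothing beyond the standard library (a sketch; the function names are mine, not from any particular package):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) at x, per the pdf formula above."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for X ~ N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Standard normal: density peaks at 1/sqrt(2*pi), cdf at the mean is 1/2.
print(round(normal_pdf(0.0), 4))  # 0.3989
print(normal_cdf(0.0))            # 0.5
```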
The normal distribution also arises in many areas of statistics: for example, the sampling distribution of the mean is approximately normal, even if the distribution of the population the sample is taken from is not normal. In addition, the normal distribution maximizes information entropy among all distributions with known mean and variance, which makes it the natural choice of underlying distribution for data summarized in terms of sample mean and variance. The normal distribution is the most widely used family of distributions in statistics and many statistical tests are based on the assumption of normality. In probability theory, normal distributions arise as the limiting distributions of several continuous and discrete families of distributions. ## History The normal distribution was first introduced by de Moivre in an article in 1733 (reprinted in the second edition of his The Doctrine of Chances, 1738) in the context of approximating certain binomial distributions for large n. His result was extended by Laplace in his book Analytical Theory of Probabilities (1812), and is now called the Theorem of de Moivre-Laplace. Laplace used the normal distribution in the analysis of errors of experiments. The important method of least squares was introduced by Legendre in 1805. Gauss, who claimed to have used the method since 1794, justified it rigorously in 1809 by assuming a normal distribution of the errors. The name "bell curve" goes back to Jouffret who used the term "bell surface" in 1872 for a bivariate normal with independent components. The name "normal distribution" was coined independently by Charles S. Peirce, Francis Galton and Wilhelm Lexis around 1875. This terminology is unfortunate, since it reflects and encourages the fallacy that many or all probability distributions are "normal". (See the discussion of "occurrence" below.) 
That the distribution is called the normal or Gaussian distribution is an instance of Stigler's law of eponymy: "No scientific discovery is named after its original discoverer."

## Specification of the normal distribution

There are various ways to specify a random variable. The most visual is the probability density function (plot at the top), which represents how likely each value of the random variable is. The cumulative distribution function is a conceptually cleaner way to specify the same information, but to the untrained eye its plot is much less informative (see below). Equivalent ways to specify the normal distribution are: the moments, the cumulants, the characteristic function, the moment-generating function, and the cumulant-generating function. Some of these are very useful for theoretical work, but not intuitive. See probability distribution for a discussion. All of the cumulants of the normal distribution are zero, except the first two.

### Probability density function

[Figure: probability density function for 4 different parameter sets; the green line is the standard normal.]

The probability density function of the normal distribution with mean μ and variance σ² (equivalently, standard deviation σ) is an example of a Gaussian function,

$f(x;\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}} \, \exp \left( -\frac{(x- \mu)^2}{2\sigma^2} \right).$

If a random variable X has this distribution, we write X ~ N(μ, σ²). If μ = 0 and σ = 1, the distribution is called the standard normal distribution and the probability density function reduces to

$f(x) = \frac{1}{\sqrt{2\pi}} \, \exp\left(-\frac{x^2}{2} \right).$

The image to the right gives the graph of the probability density function of the normal distribution for various parameter values. Some notable qualities of the normal distribution:

• The density function is symmetric about its mean value.
• The mean is also its mode and median.
• 68.27% of the area under the curve is within one standard deviation of the mean.
• 95.45% of the area is within two standard deviations.
• 99.73% of the area is within three standard deviations.
• The inflection points of the curve occur at one standard deviation away from the mean.

### Cumulative distribution function

[Figure: cumulative distribution function of the above pdf.]

The cumulative distribution function (cdf) is defined as the probability that a variable X has a value less than or equal to x, and it is expressed in terms of the density function as

$F(x;\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^x \exp\left( -\frac{(u - \mu)^2}{2\sigma^2} \right) \, du .$

The standard normal cdf, conventionally denoted Φ, is just the general cdf evaluated with μ = 0 and σ = 1,

$\Phi(x) = F(x;0,1) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x \exp\left(-\frac{u^2}{2}\right) \, du .$

The standard normal cdf can be expressed in terms of a special function called the error function, as

$\Phi(z) = \frac{1}{2} \left[ 1 + \operatorname{erf} \left( \frac{z}{\sqrt{2}} \right) \right] .$

The inverse cumulative distribution function, or quantile function, can be expressed in terms of the inverse error function:

$\Phi^{-1}(p) = \sqrt{2} \; \operatorname{erf}^{-1} \left(2p - 1 \right) .$

This quantile function is sometimes called the probit function. There is no elementary primitive for the probit function. This is not to say merely that none is known, but rather that the non-existence of such a function has been proved. Values of Φ(x) may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, or asymptotic series.

### Generating functions

#### Moment generating function

The moment generating function is defined as the expected value of exp(tX).
For a normal distribution, it can be shown that the moment generating function is

$M_X(t) = \mathrm{E} \left[ \exp(tX) \right] = \int_{-\infty}^{\infty} \frac{1}{\sigma \sqrt{2\pi}} \exp \left( -\frac{(x - \mu)^2}{2 \sigma^2} \right) \exp(tx) \, dx = \exp \left( \mu t + \frac{\sigma^2 t^2}{2} \right)$

as can be seen by completing the square in the exponent.

#### Characteristic function

The characteristic function is defined as the expected value of exp(itX), where $i = \sqrt{-1}$ is the imaginary unit. For a normal distribution, the characteristic function is

$\phi_X(t;\mu,\sigma) = \mathrm{E} \left[ \exp(itX) \right] = \int_{-\infty}^{\infty} \frac{1}{\sigma \sqrt{2\pi}} \exp \left(- \frac{(x - \mu)^2}{2\sigma^2} \right) \exp(itx) \, dx = \exp \left( i \mu t - \frac{\sigma^2 t^2}{2} \right) .$

The characteristic function is obtained by replacing t with it in the moment-generating function.

## Properties

Some of the properties of the normal distribution:

1. If X ~ N(μ, σ²) and a and b are real numbers, then aX + b ~ N(aμ + b, (aσ)²) (see expected value and variance).
2. If $X \sim N(\mu_X, \sigma^2_X)$ and $Y \sim N(\mu_Y, \sigma^2_Y)$ are independent normal random variables, then:
   • Their sum is normally distributed, with $U = X + Y \sim N(\mu_X + \mu_Y, \sigma^2_X + \sigma^2_Y)$.
   • Their difference is normally distributed, with $V = X - Y \sim N(\mu_X - \mu_Y, \sigma^2_X + \sigma^2_Y)$.
   • U and V are independent of each other if and only if $\sigma^2_X = \sigma^2_Y$.
3. If $X \sim N(0, \sigma^2_X)$ and $Y \sim N(0, \sigma^2_Y)$ are independent normal random variables, then:
   • Their product XY follows a distribution with density p given by $p(z) = \frac{1}{\pi\,\sigma_X\,\sigma_Y} \; K_0\left(\frac{|z|}{\sigma_X\,\sigma_Y}\right),$ where K0 is a modified Bessel function.
   • Their ratio follows a Cauchy distribution: X / Y ~ Cauchy(0, σX / σY).
4.
If $X_1, \cdots, X_n$ are independent standard normal variables, then $X_1^2 + \cdots + X_n^2$ has a chi-square distribution with n degrees of freedom.

### Standardizing normal random variables

As a consequence of Property 1, it is possible to relate all normal random variables to the standard normal. If X ~ N(μ, σ²), then

$Z = \frac{X - \mu}{\sigma}$

is a standard normal random variable: Z ~ N(0, 1). An important consequence is that the cdf of a general normal distribution is therefore

$\Pr(X \le x) = \Phi \left( \frac{x-\mu}{\sigma} \right) = \frac{1}{2} \left( 1 + \operatorname{erf} \left( \frac{x-\mu}{\sigma\sqrt{2}} \right) \right) .$

Conversely, if Z ~ N(0, 1), then X = σZ + μ is a normal random variable with mean μ and variance σ². The standard normal distribution has been tabulated, and the other normal distributions are simple transformations of the standard one. Therefore, one can use tabulated values of the cdf of the standard normal distribution to find values of the cdf of a general normal distribution.

### Moments

Some of the first few moments of the normal distribution are:

| Number | Raw moment | Central moment | Cumulant |
|--------|------------|----------------|----------|
| 0 | 1 | 1 | 0 |
| 1 | μ | 0 | μ |
| 2 | μ² + σ² | σ² | σ² |
| 3 | μ³ + 3μσ² | 0 | 0 |
| 4 | μ⁴ + 6μ²σ² + 3σ⁴ | 3σ⁴ | 0 |

All cumulants of the normal distribution beyond the second cumulant are zero.

### Generating normal random variables

For computer simulations, it is often useful to generate values that have a normal distribution. There are several methods; the most basic is to invert the standard normal cdf. More efficient methods are also known, one such method being the Box-Muller transform. The Box-Muller transform takes two uniformly distributed values as input and maps them to two normally distributed values. This requires generating values from a uniform distribution, for which many methods are known. See also random number generators.
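A minimal sketch of the Box-Muller mapping just described, using only the standard library (the sanity check of the sample mean and variance is my addition):

```python
import math
import random

def box_muller(u1, u2):
    """Map two Uniform(0,1) draws to two independent N(0,1) draws."""
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

random.seed(0)
draws = []
for _ in range(50_000):
    u1 = 1.0 - random.random()  # in (0, 1]; avoids log(0)
    z1, z2 = box_muller(u1, random.random())
    draws.extend((z1, z2))

# The output should look standard normal: mean near 0, variance near 1.
mean = sum(draws) / len(draws)
var = sum((z - mean) ** 2 for z in draws) / len(draws)
print(round(mean, 2), round(var, 2))
```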
The Box-Muller transform is a consequence of the fact that the chi-square distribution with two degrees of freedom (see Property 4 above) is an easily-generated exponential random variable.

### The central limit theorem

[Figure: plot of the pdf of a normal distribution with μ = 12 and σ = 3, approximating the pmf of a binomial distribution with n = 48 and p = 1/4.]

The normal distribution has the very important property that under certain conditions, the distribution of a sum of a large number of independent variables is approximately normal. This is the central limit theorem. The practical importance of the central limit theorem is that the normal distribution can be used as an approximation to some other distributions.

• A binomial distribution with parameters n and p is approximately normal for large n and p not too close to 1 or 0 (some books recommend using this approximation only if np and n(1 − p) are both at least 5; in this case, a continuity correction should be applied). The approximating normal distribution has mean μ = np and variance σ² = np(1 − p).
• A Poisson distribution with parameter λ is approximately normal for large λ. The approximating normal distribution has mean μ = λ and variance σ² = λ.

Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution.

### Infinite divisibility

The normal distributions are infinitely divisible probability distributions.

### Stability

The normal distributions are strictly stable probability distributions.

### Standard deviation

[Figure: dark blue is less than one standard deviation from the mean. For the normal distribution, this accounts for 68% of the set, while two standard deviations from the mean (blue and brown) account for 95%, and three standard deviations (blue, brown and green) account for 99.7%.]
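The 68/95/99.7 coverage figures above follow directly from the error-function form of the standard normal cdf; a quick check (a sketch, standard library only):

```python
import math

def central_mass(k):
    """P(|X - mu| <= k*sigma) for any normal X: equals erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2.0))

for k in (1, 2, 3):
    print(k, round(100.0 * central_mass(k), 2))  # 68.27, 95.45, 99.73
```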
In practice, one often assumes that data are from an approximately normally distributed population. If that assumption is justified, then about 68% of the values lie within one standard deviation of the mean, about 95% of the values lie within two standard deviations, and about 99.7% lie within three standard deviations. This is known as the "68-95-99.7 rule".

## Normality tests

Normality tests check a given set of data for similarity to the normal distribution. The null hypothesis is that the data set is similar to the normal distribution; therefore a sufficiently small P-value indicates non-normal data.

## Related distributions

• R ~ Rayleigh(σ²) is a Rayleigh distribution if $R = \sqrt{X^2 + Y^2}$ where X ~ N(0, σ²) and Y ~ N(0, σ²) are two independent normal distributions.
• $Y \sim \chi_{\nu}^2$ is a chi-square distribution with ν degrees of freedom if $Y = \sum_{k=1}^{\nu} X_k^2$ where the Xk ~ N(0, 1) for $k=1,\cdots,\nu$ are independent.
• Y ~ Cauchy(μ = 0, θ = 1) is a Cauchy distribution if Y = X1 / X2 for X1 ~ N(0, 1) and X2 ~ N(0, 1) two independent normal distributions.
• Y ~ Log-N(μ, σ²) is a log-normal distribution if Y = exp(X) and X ~ N(μ, σ²).
• Relation to the Lévy skew alpha-stable distribution: if $X\sim \textrm{Levy-S}\alpha\textrm{S}(2,\beta,\sigma/\sqrt{2},\mu)$ then $X \sim N(\mu,\sigma^2)$.

## Occurrence

Approximately normal distributions occur in many situations, as a result of the central limit theorem. When there is reason to suspect the presence of a large number of small effects acting additively and independently, it is reasonable to assume that observations will be normal. There are statistical methods to empirically test that assumption, for example the Kolmogorov-Smirnov test. Effects can also act as multiplicative (rather than additive) modifications. In that case, the assumption of normality is not justified, and it is the logarithm of the variable of interest that is normally distributed.
The distribution of the directly observed variable is then called log-normal. Finally, if there is a single external influence which has a large effect on the variable under consideration, the assumption of normality is not justified either. This is true even if, when the external variable is held constant, the resulting marginal distributions are indeed normal. The full distribution will be a superposition of normal variables, which is not in general normal. This is related to the theory of errors (see below).

To summarize, here is a list of situations where approximate normality is sometimes assumed. For a fuller discussion, see below.

• In counting problems (so the central limit theorem includes a discrete-to-continuum approximation) where reproductive random variables, such as binomial or Poisson counts, are involved;
• In physiological measurements of biological specimens:
  • The logarithm of measures of size of living tissue (length, height, skin area, weight);
  • The length of inert appendages (hair, claws, nails, teeth) of biological specimens, in the direction of growth; presumably the thickness of tree bark also falls under this category;
  • Other physiological measures may be normally distributed, but there is no reason to expect that a priori;
• Measurement errors are assumed to be normally distributed, and any deviation from normality must be explained;
• Financial variables:
  • The logarithm of interest rates, exchange rates, and inflation; these variables behave like compound interest, not like simple interest, and so are multiplicative;
  • Stock-market indices are supposed to be multiplicative too, but some researchers claim that they are Levy-distributed variables instead of lognormal;
  • Other financial variables may be normally distributed, but there is no reason to expect that a priori;
• Light intensity:
  • The intensity of laser light is normally distributed;
  • Thermal light has a Bose-Einstein distribution on very short time scales, and a normal distribution on longer timescales
due to the central limit theorem.

Of relevance to biology and economics is the fact that complex systems tend to display power laws rather than normality.

### Photon counting

Light intensity from a single source varies with time, as thermal fluctuations can be observed if the light is analyzed at sufficiently high time resolution. The intensity is usually assumed to be normally distributed. In the classical theory of optical coherence, light is modelled as an electromagnetic wave, and correlations are observed and analyzed up to the second order, consistently with the assumption of normality. (See Gaussian stochastic process.) However, non-classical correlations are sometimes observed.

Quantum mechanics interprets measurements of light intensity as photon counting. The natural assumption in this setting is the Poisson distribution. When light intensity is integrated over times longer than the coherence time and is large, the Poisson-to-normal limit is appropriate. Correlations are interpreted in terms of "bunching" and "anti-bunching" of photons with respect to the expected Poisson behaviour. Anti-bunching requires a quantum model of light emission.

Ordinary light sources producing light by thermal emission display a so-called blackbody spectrum (of intensity as a function of frequency), and the number of photons at each frequency follows a Bose-Einstein distribution (a geometric distribution). The coherence time of thermal light is exceedingly low, and so a Poisson distribution is appropriate in most cases, even when the intensity is so low as to preclude the approximation by a normal distribution. The intensity of laser light has an exactly Poisson distribution and long coherence times. The large intensities make it appropriate to use the normal distribution. It is interesting that the classical model of light correlations applies only to laser light, which is a macroscopic quantum phenomenon.
On the other hand, "ordinary" light sources do not follow the "classical" model or the normal distribution. ### Measurement errors Normality is the central assumption of the mathematical theory of errors. Similarly, in statistical model-fitting, an indicator of goodness of fit is that the residuals (as the errors are called in that setting) be independent and normally distributed. Any deviation from normality needs to be explained. In that sense, both in model-fitting and in the theory of errors, normality is the only observation that need not be explained, being expected. Repeated measurements of the same quantity are expected to yield results which are clustered around a particular value. If all major sources of errors have been taken into account, it is assumed that the remaining error must be the result of a large number of very small additive effects, and hence normal. Deviations from normality are interpreted as indications of systematic errors which have not been taken into account. ### Physical characteristics of biological specimens The overwhelming biological evidence is that bulk growth processes of living tissue proceed by multiplicative, not additive, increments, and that therefore measures of body size should at most follow a lognormal rather than normal distribution. Despite common claims of normality, the sizes of plants and animals is approximately lognormal. The evidence and an explanation based on models of growth was first published in the classic book Huxley, Julian: Problems of Relative Growth (1932) Differences in size due to sexual dimorphism, or other polymorphisms like the worker/soldier/queen division in social insects, further make the joint distribution of sizes deviate from lognormality. 
The assumption that linear size of biological specimens is normal leads to a non-normal distribution of weight (since weight/volume is roughly the 3rd power of length, and Gaussian distributions are only preserved by linear transformations), and conversely assuming that weight is normal leads to non-normal lengths. This is a problem, because there is no a priori reason why one of length or body mass, and not the other, should be normally distributed. Lognormal distributions, on the other hand, are preserved by powers, so the "problem" goes away if lognormality is assumed.

On the other hand, there are some biological measures where normality is assumed or expected:

• Blood pressure of adult humans is supposed to be normally distributed, but only after separating males and females into different populations (each of which is normally distributed).
• The length of inert appendages such as hair, nails, teeth, claws and shells is expected to be normally distributed if measured in the direction of growth. This is because the growth of inert appendages depends on the size of the root, and not on the length of the appendage, and so proceeds by additive increments. Hence, we have an example of a sum of very many small increments (possibly lognormal) approaching a normal distribution. Another plausible example is the width of tree trunks, where a new thin ring is produced every year whose width is affected by a large number of factors.

### Financial variables

Because of the exponential nature of interest and inflation, financial indicators such as interest rates, stock values, or commodity prices make good examples of multiplicative behavior. As such, they should not be expected to be normal, but lognormal. Benoît Mandelbrot, the popularizer of fractals, has claimed that even the assumption of lognormality is flawed, and advocates the use of log-Levy distributions. It is accepted that financial indicators deviate from lognormality.
The distribution of price changes on short time scales is observed to have "heavy tails", so that very small or very large price changes are more likely to occur than a lognormal model would predict. Deviation from lognormality indicates that the assumption of independence of the multiplicative influences is flawed.

Other examples of variables that are not normally distributed include the lifetimes of humans and mechanical devices. Examples of distributions used in this connection are the exponential distribution (memoryless) and the Weibull distribution. In general, there is no reason that waiting times should be normal, since they are not directly related to any kind of additive influence.

### Test scores

A great deal of confusion exists over whether or not IQ test scores and intelligence are normally distributed. As a deliberate result of test construction, IQ scores are normally distributed for the majority of the population; whether intelligence itself is normally distributed is less clear. The difficulty and number of questions on an IQ test are chosen so that the combination yields a normal distribution of scores. This does not mean, however, that the information is in any way being misrepresented, or that there is any kind of "true" distribution that is being artificially forced into the shape of a normal curve. Intelligence tests can be constructed to yield any desired score distribution. All true IQ tests have a normal distribution of scores as a result of test design; otherwise IQ scores would be meaningless without knowing which test produced them. Intelligence tests in general, however, can produce any kind of distribution. For an example of how arbitrary the distribution of intelligence test scores really is, imagine a 20-item multiple-choice test composed entirely of problems about finding the areas of circles.
Such a test, if given to a population of high-school students, would likely yield a U-shaped distribution, with the bulk of the scores being very high or very low, instead of a normal curve. A student who understands how to find the area of a circle can likely do so repeatedly and with few errors, and would thus get a perfect or near-perfect score, whereas a student who has never had geometry lessons would likely get every question wrong, perhaps getting a few right by lucky guessing. If a test is composed mostly of easy questions, then most test-takers will have high scores and very few will have low scores. If a test is composed entirely of questions so easy or so hard that every person gets either a perfect score or a zero, it fails to make any statistical discrimination at all. These are just a few examples of the many varieties of distributions that could, in principle, be produced by carefully designed intelligence tests.

Whether intelligence itself is normally distributed has at times been a matter of some debate. Some critics maintain that the choice of a normal distribution is entirely arbitrary. Brian Simon once claimed that the normal distribution was specifically chosen by psychometricians to falsely support the idea that superior intelligence is held only by a small minority, thus legitimizing the rule of a privileged elite over the masses of society. Historically, though, intelligence tests were designed without any concern for producing a normal distribution, and scores came out approximately normally distributed anyway. American educational psychologist Arthur Jensen claims that any test that contains "a large number of items," "a wide range of item difficulties," "a variety of content or forms," and "items that have a significant correlation with the sum of all other scores" will inevitably produce a normal distribution.
Furthermore, there exist a number of correlations between IQ scores and other human characteristics that are more provably normally distributed, such as nerve conduction velocity and the glucose metabolism rate of a person's brain, supporting the idea that intelligence is normally distributed. Some critics, such as Stephen Jay Gould in his book The Mismeasure of Man, question the validity of intelligence tests in general, not just whether intelligence is normally distributed. For further discussion see the article IQ. The Bell Curve is a controversial book on the topic of the heritability of intelligence; despite its title, however, the book does not primarily address whether IQ is normally distributed.

## Estimation of parameters

### Maximum likelihood estimation of parameters

Suppose $X_1,\dots,X_n$ are independent and identically distributed, each normally distributed with expectation μ and variance σ². In the language of statisticians, the observed values of these random variables make up a "sample from a normally distributed population." It is desired to estimate the "population mean" μ and the "population standard deviation" σ, based on the observed values of this sample. The joint probability density function of these random variables is

$f(x_1,\dots,x_n;\mu,\sigma) \propto \sigma^{-n} \prod_{i=1}^n \exp\left({-1 \over 2} \left({x_i-\mu \over \sigma}\right)^2\right).$

(Nota bene: here the proportionality symbol $\propto$ means proportional as a function of μ and σ, not as a function of $x_1,\dots,x_n$. That may be considered one of the differences between the statistician's point of view and the probabilist's. The reason why this is important will appear below.)
As a function of μ and σ this is the likelihood function

$L(\mu,\sigma) \propto \sigma^{-n} \exp\left({-\sum_{i=1}^n (x_i-\mu)^2 \over 2\sigma^2}\right).$

In the method of maximum likelihood, the values of μ and σ that maximize the likelihood function are taken as estimates of the population parameters μ and σ.

Usually in maximizing a function of two variables one would consider partial derivatives. But here we exploit the fact that the value of μ that maximizes the likelihood function with σ fixed does not depend on σ. We can therefore find that value of μ first, substitute it for μ in the likelihood function, and finally find the value of σ that maximizes the resulting expression.

It is evident that the likelihood function is a decreasing function of the sum

$\sum_{i=1}^n (x_i-\mu)^2. \,\!$

So we want the value of μ that minimizes this sum. Let

$\overline{x}=(x_1+\cdots+x_n)/n$

be the "sample mean". Observe that

$\sum_{i=1}^n (x_i-\mu)^2=\sum_{i=1}^n((x_i-\overline{x})+(\overline{x}-\mu))^2$
$=\sum_{i=1}^n(x_i-\overline{x})^2 + 2\sum_{i=1}^n (x_i-\overline{x})(\overline{x}-\mu) + \sum_{i=1}^n (\overline{x}-\mu)^2$
$=\sum_{i=1}^n(x_i-\overline{x})^2 + 0 + n(\overline{x}-\mu)^2,$

where the middle term vanishes because $\sum_{i=1}^n (x_i-\overline{x})=0$. Only the last term depends on μ, and it is minimized by

$\hat{\mu}=\overline{x}.$

That is the maximum-likelihood estimate of μ. Substituting it for μ in the sum above makes the last term vanish.
Consequently, when we substitute that estimate for μ in the likelihood function, we get

$L(\overline{x},\sigma) \propto \sigma^{-n} \exp\left({-\sum_{i=1}^n (x_i-\overline{x})^2 \over 2\sigma^2}\right).$

It is conventional to denote the "loglikelihood function", i.e., the logarithm of the likelihood function, by a lower-case $\ell$, and we have

$\ell(\hat{\mu},\sigma)=[\mathrm{constant}]-n\log(\sigma)-{\sum_{i=1}^n(x_i-\overline{x})^2 \over 2\sigma^2}$

and then

${\partial \over \partial\sigma}\ell(\hat{\mu},\sigma) ={-n \over \sigma} +{\sum_{i=1}^n (x_i-\overline{x})^2 \over \sigma^3} ={-n \over \sigma^3}\left(\sigma^2-{1 \over n}\sum_{i=1}^n (x_i-\overline{x})^2 \right).$

This derivative is positive, zero, or negative according as σ² is less than, equal to, or greater than

${1 \over n}\sum_{i=1}^n(x_i-\overline{x})^2.$

Consequently this average of the squared residuals is the maximum-likelihood estimate of σ², and its square root is the maximum-likelihood estimate of σ.

#### Surprising generalization

The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is perhaps surprisingly subtle and elegant. It involves the spectral theorem and the reason why it can be better to view a scalar as the trace of a 1×1 matrix than as a mere scalar. See estimation of covariance matrices.

### Unbiased estimation of parameters

The maximum-likelihood estimator of the population mean μ from a sample is an unbiased estimator of the mean, as is the maximum-likelihood estimator of the variance when the population mean is known a priori. However, if we are faced with a sample and have no knowledge of the mean or the variance of the population from which it is drawn, the unbiased estimator of the variance σ² is

$s^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2 .$
https://ericmarcon.github.io/entropart/reference/MetaCommunity.html
Methods for objects of type "MetaCommunity".

MetaCommunity(Abundances, Weights = rep(1, ncol(Abundances)))

is.MetaCommunity(x)

# S3 method for MetaCommunity
summary(object, ...)

# S3 method for MetaCommunity
plot(x, ...)

## Arguments

Abundances: A dataframe containing the numbers of observations (lines are species, columns are communities). The first column of the dataframe may contain the species names.

Weights: A vector of positive numbers equal to the community weights, or a dataframe containing a vector named Weights. It does not have to be normalized. Weights are equal by default.

x: An object to be tested or plotted.

object: A MetaCommunity object to be summarized.

...: Additional arguments to be passed to the generic methods.

## Details

In the entropart package, individuals of different "species" are counted in several "communities", which are aggregated to define a "metacommunity". This is a naming convention, which may correspond to plots in a forest inventory or any data organized the same way. Alpha and beta entropies of communities are summed according to Weights, and the probability of finding a species in the metacommunity is the weighted average of its probabilities in the communities.

The simplest way to import data is to organize it into two text files. The first file should contain the abundance data: a first column named Species for the species names, and one column per community. The second file should contain the community weights in two columns: the first, named Communities, should contain their names, and the second, named Weights, their weights. Files can be read and data imported by code such as:

Abundances <- read.csv(file="Abundances.csv", row.names = 1)
MC <- MetaCommunity(Abundances, Weights)

## Value

An object of class MetaCommunity is a list:

Nsi: A matrix containing the abundance data, species in line, communities in column.
Ns: A vector containing the number of individuals of each species.
Ni: A vector containing the number of individuals of each community.
N: The total number of individuals.
Psi: A matrix whose columns are the probability vectors of the communities (each of them sums to 1).
Wi: A vector containing the normalized community weights (they sum to 1).
Ps: A vector containing the probability vector of the metacommunity.
Nspecies: The number of species.
Ncommunities: The number of communities.
SampleCoverage: The sample coverage of the metacommunity.
SampleCoverage.communities: A vector containing the sample coverage of each community.

is.MetaCommunity returns TRUE if the object is of class MetaCommunity. summary.MetaCommunity returns a summary of the object's value. plot.MetaCommunity plots it.

## Author

Eric Marcon <Eric.Marcon@ecofog.gf>

## Examples

# Use BCI data from the vegan package
if (require(vegan, quietly = TRUE)) {
  # Load BCI data (number of trees per species in each 1-ha plot of a tropical forest)
  data(BCI)
  # The BCI dataframe must be transposed (its lines are plots, not species)
  BCI.df <- as.data.frame(t(BCI))
  # Create a metacommunity object from a matrix of abundances and a vector of weights
  # (here, all plots have a weight equal to 1)
  MC <- MetaCommunity(BCI.df)
}
#> This is vegan 2.5-7
http://codeforces.com/blog/entry/9199
### Nerevar's blog

By Nerevar, 6 years ago, translation

Hi all. Today there is a school regional team programming competition in Saratov. We've decided to make a round using tasks from this competition. The problems were prepared by Gerald (Gerald Agapov), Fefer_Ivan (Ivan Fefer), HolkinPV (Pavel Kholkin), KudryashovIA (Igor Kudryashov), IlyaLos (Ilya Los) and Nerevar (Dmitry Matov). The problem statements were translated into English by Mary Belova (Delinur).

The round starts today, on the 15th of October, at 16:00 MSK. Participants from both divisions are welcome to take part in it. The scoring is standard: 500-1000-1500-2000-2500.

Congratulations to the winners!

Division I:

Division II:

UPD: The tutorial is published.

» Today is "Eid al-Adha". Happy feast to all muslims :D

» » Happy feast :) A happy Eid to you, God willing :)

» » Okay, okay, just don't blow anything up.

» » A special day for a special round. Happy feast to you too :)

» » Happy Kurban Bayram!

» » Happy "Eid Al-Adha" to you. :)

» » Happy Eid al-Adha! Just want to remind you that you CANNOT fast in the next 3 days (4 if you count Eid al-Adha), for those of you who fast routinely.

» » Happy feast to all of you :D

» » happy feast to you too :D

» » Really funny... It is now 30 minutes after the contest, but still: "The scoring will be published later."...

» » 4 hours for system testing... 30 seconds for ratings!! thanks!

» » » You should respect this rule: "You may edit your comment only for fixing grammar mistakes or small changes. Do not change the main idea of your comment."
» » » » Sometimes you make a mistake and only understand it after making it... What should you do then?

» Just curious, why is the scoring published only just before the start?

» » Maybe because it is different from "Scoring will be dynamic. Problems are sorted by increasing order of difficulty."

» » 4 mins to go, yet no scoring!

» What does 'school regional team competition' mean?

» » ACM regional competition???

» » In Russia, we have team programming contests for schoolchildren: one All-Russia contest and several regional contests. Our region includes the southern part of Russia.

» » » middle school students, or college students?

» » » » I'm not sure what you mean, but I would say middle school.

» » » » » yalishanda, thanks.

» » » O~ thank you

» it's very early!!!

» [user:wo...] is a little bit mysterious, can anybody see his personal information?

» This is one of the earliest contests on CF (at 8:00pm in China). The timetable shows that students have to finish this contest at school, don't they? I don't know about contests in Russia, perhaps it's an important one. Do they feel excited? All in all, wish them good luck.

» » If each round were arranged like the timetable of this round, it would be perfect for us Chinese participants.

» » » You know it's an international website, so jet lag is a serious problem :) That's okay, for we can cherish each chance.

» How can I find the problems from the last school regional team competition?
» » I think you can search for them on Timus :)

» I hope everyone fails :D

» » That's very unkind of you!

» » » I lost a bet and I had to post it :D

» Finally, a contest that's not too late.

» This round was arranged ahead of the usual schedule, maybe because of the TC.

» "The scoring will be published later". 4 minutes before the contest, yet not published. Well, "later" does include after the contest :P

» » Score distribution is standard. The author of the post is involved in our Olympiad, so he wasn't in time with the announcement.

» No hacks available?

» The queue is really long!

» I think some time should be added because of all this "In queue"..

» » And the site was down too for a few minutes... :(

» How to solve problem D?

» It's very, very hard to understand today's problem descriptions!!!

» » I agree. I spent almost one hour trying to understand the problems; this is my second contest where I could say it has very poor descriptions.

» » » yeah i agree with u....

» My decision on opening a new problem depends on the current problem's result: I can't start another problem until I know the result (pretests) of the current one, and the queue is toooooooooo long :( :( and it takes a long time to know whether it passes pretests or not.

» » It definitely affects a coder's coding... He has to keep an eye on his recent submission to know whether it passed or not... :/

» Forget about last round's C (div1), this round's C is much more deadly!
Well, at least there are abundant hacks :D

A funny thing happened to me: I sent a hack on a solution 1 minute before the end of the contest, and waited for the queue to settle (around 1-2 minutes after the end). But I found out it was ignored, because there was a successful hack around a minute before mine, and that hack was still in the queue when I sent mine :D

» » Well, C div1 is not very hard, but you just need to be very careful to handle all the cases.

» » » But being careful is hard! (for me, at least...)

» » » That's exactly the point. With problems that rely on you finding a general algorithm, passing pretests usually equals passing the system test (as with A div1 this time). But it's easy to miss a special case (I hacked one guy on "5 1 1 1 1 1", for example).

» » » » So THAT was the case I missed! I was dying here trying to figure it out... :)

» The scoring will be published later. later = never ever? looooooooooooooooong queue! :|

» Today's div-2 contest was slightly harder than usual, but the problems were very interesting to solve! Next time, try to increase the possibility of hacks! :)

» I think the system testing can't be completed before TC starts...

» » Even before TC's end

» » » That's why they began the contest so early.

» » dunno why, but i think this is the record for the slowest testing ever on Codeforces!

» Hi, I want to ask about my A submission [LINK]. Why didn't my submission pass the time limit? I saw that my idea is the same as other submissions that got accepted. Is it that using (*it) many times makes my submission slower, or is there some other factor? thanks

» » What is the purpose of the hold; statement at the end?
» » » It's just the same as calling getchar(); twice. I'm sure it's not the problem, because I got TLE on pretest 11.

» » Probably because of the lower_bound(s.begin(), s.end(), l) call. From the docs: "On non-random-access iterators, the iterator advances produce themselves an additional linear complexity in N on average." If you want log complexity, you have to call s.lower_bound(l) instead.

» » I think that this line:

while ((*it) <= r)

should be

while (it != s.end() && (*it) <= r)

» » I think it's because you are erasing while you increment the iterator. IMO you should erase the whole range after you have assigned the winner.

» » ffao gave the correct answer. I got TLE for using lower_bound(s.begin(), s.end(), l); later I replaced it with s.lower_bound(l) and got accepted! So it's really something to keep in mind.

» 25% system test = 30 min. 100% system test = ???

» » Supposing linear behavior you can simply calculate it by a proportion xD

» » 430 min?

» » 50% system test = 115 minutes!

» » » 25% system test = 30 min, 50% system test = 115 minutes. What about 100% system test??? :P

» » » » 400 mins

» » » » » What's that? Quadratic interpolation?

» » » » » » I hope that when I wake up tomorrow the system test will have finished.

» LOL, at least 140 test cases for div1 C; well, at most 140 seconds (= more than 2 min) are spent on each user.

» Why is today's judging so slow?

» » Because there are many test cases to run (20-150 cases) for each submission, and each case is about a 1-3s time limit..
and there are many submissions too.

EDIT: here is a picture

» Is anybody else facing a problem opening the TopCoder arena? The SRM registration closes in 2 minutes, but I am unable to launch the arena! :(

» » Yes, me too

» » » Just when registration closed, the arena opened! How unlucky we are! :(

» » Did you update your Java version? I faced that problem some days ago.

» » » I re-downloaded the .jnlp file from the website just before trying to launch the arena, but still it wasn't opening!

» » » » I did the same, but it was a problem with a jar file (logging.jar). So I updated from Java 6 to Java 7 and it worked.

» » » » » Same here.

» » » » Always delete your cache by typing javaws -viewer in a terminal and then restart the arena .. Even then if it doesn't work, restart your OS and then open the arena .. It has happened a lot of times to me.

» Ahh... Why is the system testing so slow? I really don't like having to wait a few minutes for my solution to be checked on pretests, especially when I have a stupid bug and have to resubmit many times. Today I got RE because I'd written ios_base::sync_with_stdio(0); and later used scanf. I submitted 6 times and had to wait a few minutes each time. Is it really necessary to put so many big pretests? And now 40 minutes have already passed and system testing is at 20% of its progress...

» Hi, I want to ask about my submission 4791878: why did I get WA on pretest 1? Thank you very much

» » Your check function doesn't return true.

» » » Thankkk youu !

» May somebody explain C, D, E (div. 2), please? C requires a segment tree, yep?
» » There's no need to implement a segment tree for C in Div 2; a disjoint set or simply a linked list would suffice.

» » » Or a regular STL set. 4789501

» » » » Oh God, really. Thanks. And could you give some little tips for D & E, please? I don't know how to solve them :\

» » » » » D: let gcd(length(x), length(y)) = d; the i-th character of x is paired (equally often) with exactly those characters of y whose positions are congruent to i modulo d. Count how many characters equal to c there are in x and in y at each remainder modulo d; the answer is then N·length(x) minus the number of all pairs of equal characters at every remainder modulo d (those are the zeroes in the Hamming sum).

E is ugly, I don't want to write anything about it...

» » » » » » There is a non-ugly solution for d2 E (= d1 C)

» » » » » » » Could you explain your Div1 C solution? I think it's better than handling different cases.

» » » » » » » » Basically, there are not that many ways to distribute that many people between n compartments with 4, 3 and 0 in each. For each such variant we can use a simple greedy.

» » » » » For E (C div1), iterate the number x of final happy compartments from 1 to n and check whether it can be our answer (and how many swaps it needs).
The final number of happy compartments x is possible if and only if:

- 3*x <= number of students <= 4*x
- x <= number of compartments that already have some student in them (if the answer had more happy compartments than that, it would mean we waste swaps)

Now we find the minimum number of swaps needed to end up with exactly x happy compartments, as follows. Let cnt[i] be the number of compartments with i students in them, S the number of students, C the number of compartments with students in them, and final[i] the number of compartments with i students in them at the end. Then:

final[4] = S - 3*x
final[3] = x - final[4]

Let ans = 0, and let D = C - x be the number of occupied compartments we want to get rid of. If there are extra compartments with 4 students, we move the extra student out of each: ans += max(0, cnt[4] - final[4]). Then we choose the D compartments with the fewest students and move all their students out: ans += min(cnt[1], D) + 2*max(0, D - cnt[1]).

Our answer is the minimum ans over all suitable x. You can look at my submission 4801911.

» » » » » » Could you explain why final[4] = S - 3*x for a valid x?

» » » » » » » We have x compartments, and among these x there are final[4] compartments with 4 students. 3*x + final[4] = S (the total number of students must stay the same), so final[4] = S - 3*x.

» » » » » For E, first combine 1s and 2s into 3s; then some 1s OR 2s are left. If 1s are left, brute-force how many 1s will stay put while the other 1s must move. That's easy to reason about. If 2s are left, just do the same thing as with the 1s. Hope my code is easy to read and understand: 4801885

» » » » A nice intuition, the STL set indeed. How can we deduce whether a set is enough, in this problem or in general? What is the border between a set or a list being enough, compared to a segment tree?
I know the question is not very specific, but I think some advice (not necessarily related to this specific problem) from more experienced coders would be well appreciated by a big part of the audience. Thank you.

PS: Congratulations on becoming a red coder for the first time! :)

» » » » » It's hard to say when sets are "enough"; all these data structures have their own uses. STL sets can do element-existence queries in O(log n) time, and support insertion and deletion with the same complexity; they can also find the element before or after a given element. Linked lists can do insertion, deletion, and moving to the next element in O(1) time, but take O(n) to find an element. And as for segment trees, they have a totally different usage. For more information maybe you can check out Wikipedia.

» » » » » » *Takes O(log N) to find an element

» » » » » » » ... Sorry, my mistake
» Slowest system testing I've ever seen.......

» » Most probably, fastest update of ratings ... :D

» What a system test!!! Why doesn't Codeforces make system testing permanently faster?

» Eid mubarak to all muslims .. system testing too slow !!!

» I think system testing went to participate in the TC SRM and will come back.

» Topcoder's contest and system testing will complete before CF's today :P

» » I think TC and this system testing finished together :D :D

» » » TC finished and CF is still running.

» » The TopCoder SRM started at 2300 IST and has finished system testing and rating updates, but the Codeforces round that finished at 2000 IST has only finished 90% of system testing, and has probably broken the record for the slowest system testing ever! I know this doesn't always happen, but Codeforces should really improve the speed of judging on both pretests and system tests!

» [picture]

» » lol nice pic

» It would be so much better if the Codeforces people could use extra servers for load distribution .. Come on, professionals, help Mike with the funds .. After all, we all benefit from the platform ..

» The TC SRM finished and Codeforces still has not finished system testing

» I haven't seen system testing like that...! thanks!

» First, happy feast to all muslims! Second, I liked all of the problems. But can you tell me why the results come so late??

» Time limit exceeded on test 61 [final tests] → 4793492. What's the problem with BFS?

» » I have seen some other BFS solutions fail too. I don't know the exact reason.
My dfs solution passed but I have seen some other bfs solutions fail. The reason could be (not sure) that the same nodes are being queued multiple times. If a node is marked as visited as soon as it is pushed into the queue then it could be a little bit faster. You can try that. But TLE on the CF server with bfs where another solution passes with dfs!!! well i am surprised ! Thank God I dont know bfs :p • » » 6 years ago, # ^ |   0 It could be because you use memset every time you expand a node. • » » 6 years ago, # ^ | ← Rev. 2 →   0 I do no DFS/BFS .. Its AC .. http://codeforces.com/contest/357/submission/4798719 » 6 years ago, # |   -10 I just leave it here : Yeputons comment about codeforces down » 6 years ago, # |   +95 Actually, right now I don't know why judging is so slow. The possible reasons are: we are hosting school regional and ACM-ICPC subregional contests, that's why we use most of our machines for teams but not for Codeforces testing (right now we use 8 instead of 20 of them), the problems are really slow to test, something goes wrong and I'll investigate it. Sorry for really slow testing. • » » 6 years ago, # ^ |   0 Current submits are all 'in queue', any mistake? • » » » 6 years ago, # ^ |   0 seems it continues to work, ok • » » 6 years ago, # ^ |   +12 hey, if its not too much to ask, can u implement a way (if it doesn't already exist) by which we can sort the users in the standings by score in a particular problem, or by score on hacks? thanks in advance! » 6 years ago, # |   +5 Happy feast! Could someone explain me the solution of problem C in Div 2 or A in Div 1? • » » 6 years ago, # ^ |   0 You can do the operations using Map, Linked list, disjoint set or Segment tree .. All will suffice .. • » » » 6 years ago, # ^ |   0 Could you explain one of them?
• » » » » 6 years ago, # ^ |   0 At first put all knights into the TreeSet, then sequentially for each fight take the corresponding subset (for TreeSet it takes R*logN, where R is the number of items in the subset) and remove all elements of the subset except the winner. The total complexity will be N*logN • » » » » » 6 years ago, # ^ |   0 I got it. Thanks. • » » » » 6 years ago, # ^ | ← Rev. 2 →   0 See maintain a map/set of all those who aren't defeated .. initially it will consist of all the knights as none is killed . now as you get the queries .. find the lower bound for l and delete elements from the map/set except the one index which won .. and go on updating the loser array for those who have lost .. searching requires logn for set/map since map/set are a balanced rb tree and you will go to n-1 elements exactly once .. hence your complexity O(nlogn) == 5*10^5 which is within limits for 1 sec .. Apart from that you could have used the Disjoint set DS as it is also amortized O(nlogn) or in the same way a segment tree .. » 6 years ago, # |   +1 System testing was too slow...but ratings have been updated very fast instead! » 6 years ago, # |   0 Wow ratings got updated so quickly.. great!! » 6 years ago, # |   0 Hi everyone. I don't understand why my code for Div2-C gets TLE. Here is my solution --> 4796407 I kept all the elements in a vector and when the time came, erased them. I think it has complexity MlogN + M + N . Please tell me where am I going wrong so that I can be careful not to make such mistakes in the future. • » » 6 years ago, # ^ |   +3 Erasing an element from a vector takes O(vectorsize) time; imagine that it's because you need to re-number all elements after it. So your complexity is O(N^2). • » » 6 years ago, # ^ | ← Rev. 2 →   +1 Hi! Your solution would have worked perfectly with another container like set. Unfortunately the erase function from vector is (almost) linear in the size of the vector. So your complexity is not MlogN and that's why you are getting TLE.
Try with set and you will get AC! • » » » 6 years ago, # ^ |   0 I don't usually use set. But I realize it's a pretty handy structure to use at times. I'll try what you said and get back to you. Thanks mate. • » » » 6 years ago, # ^ |   0 Yay! Coded with set and got AC. Runtime less than 1 sec. :) Thanks a lot! • » » 6 years ago, # ^ |   +1 I have a confusion with your bsearch function, specifically with this line hi = v.size()-1; Your M calls to bsearch will cost MlogN if and only if hi = v.size()-1 is an O(1) operation. But I am not sure that it is O(1). Also in the case of erasing vector elements: the erase operation is O(n) and M times O(n) is bad. • » » 6 years ago, # ^ |   -8 Thanks. I must have counted the complexity wrong. I'll try fixing the solution. » 6 years ago, # | ← Rev. 4 →   -34 del • » » 6 years ago, # ^ |   0 Nice joke. » 6 years ago, # |   0 I love this problemset, especially problem 1D ;) • » » 6 years ago, # ^ |   +8 Could you explain how to solve D? Thanks • » » » 6 years ago, # ^ |   0 I got TLE #82, but assuming my solution was correct, the problem asks you to make a nesting with the given bags, such that the sum of the values of all top-level bags is s. The key observation is to note that any selection of top-level bags is actually possible, as long as the biggest bag is top-level. The trick is to nest each non-selected bag within the next-highest bag, which is always possible. In other words, you can reduce the problem to a subset sum problem. • » » » » 6 years ago, # ^ |   0 Why "any selection of top-level bags is actually possible, as long as the biggest bag is top-level"? Under the condition that the total number of coins is fixed, I cannot understand this property. Could you elaborate on it? Thanks. • » » » » » 6 years ago, # ^ |   0 Let's sort the bags by decreasing ai. The necessary condition is: there's a set S of bags such that the values ai of the bags in S sum to s, and the 1st bag (the largest one) is in S. We can construct the solution now.
Let's just take the bags in S and put ai coins into bag i. Then, we have some bags left over, and we want to place those bags so that the constraints are satisfied. Let's process the remaining bags by decreasing ai. When adding the ith bag into the configuration, we know that there's a bag j which contains aj ≥ ai coins and no bags (for the largest one of the remaining bags, it's the largest bag in total; for any later one, it's the bag that was added to the configuration just before it), so we remove ai coins from bag j, put them into bag i and put bag i into bag j. Notice that this doesn't change the total number of coins, and that there were enough coins in bag j for it to happen (there will be aj - ai left over directly in j after this). In this way, we can construct a solution. The condition is also necessary, because the largest bag (if there are more of them, choose an arbitrary one) can't be nested in any other, and the number of coins (directly or indirectly) in bags of S (those that aren't nested in any other) must be equal to s. • » » » 6 years ago, # ^ |   0 DP and use bit set to make it faster. » 6 years ago, # |   0 why is my C problem MLE on test 19? (div 2) can someone help me? • » » 6 years ago, # ^ |   0 Probably, stack overflow because of the infinite recursion. » 6 years ago, # |   +2 problem: Round #207 B. Flag Day example : 7 3 / 1 2 3 / 4 5 6 / 5 2 7 Who can tell me the answer ? Why can't many people who are accepted pass it ? • » » 6 years ago, # ^ |   +3 This is an illegal case — in the third dance, you have two dancers who already participated in dances before; 5 and 7. The problem statement indicates that in each dance, at most one dancer who has already danced can participate. This makes the problem way easier actually! • » » » 6 years ago, # ^ |   +3 Without this constraint it would be a 3-Colorability problem which in fact is NP-Complete :) • » » » 6 years ago, # ^ |   0 Thank you very much, I see.
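The "DP and use bit set" suggestion for problem D above can be sketched in Python, where arbitrary-precision integers act as bitsets (an illustrative sketch, not a submitted solution):

```python
def subset_sum_reachable(values, target):
    """Bitset subset-sum DP: bit k of 'reach' is set iff some subset
    of 'values' sums to k. Shifting by v adds v to every reachable sum."""
    reach = 1  # only the empty sum 0 is reachable at the start
    for v in values:
        reach |= reach << v
    return bool((reach >> target) & 1)

subset_sum_reachable([3, 5, 7], 12)  # True, since 5 + 7 = 12
```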
» 6 years ago, # |   0 pls explain the scoring system ,i solved a question(A) in prev round at the same time as this time(in this contest) but last time i had an increase of +28 points but received a -48 this time • » » 6 years ago, # ^ | ← Rev. 2 →   0 In previous contest, you were 1284th from 2158 participants. This contest, you took the same place, but number of participants were less than previous, so you lost rating. • » » » 6 years ago, # ^ |   0 Thank u » 6 years ago, # |   0 Excuse me, may I ask when will the tutorial be published? Thanks • » » 6 years ago, # ^ |   0 Found the tutorial was updated here http://codeforces.com/blog/entry/9210 » 6 years ago, # |   0 Well, nice problem, especially C » 6 years ago, # |   -6 I made WA on problem C, on the test 141...I coded for(int i=0;i • » » 6 years ago, # ^ |   0 All of us have made a silly mistake in some contest caused losing the ratings » 6 years ago, # |   0 Versatile problem set !! #loved it again <<>> #CF, the best. » 5 years ago, # |   +8 0xCF round : ))
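The ordered-set / disjoint-set elimination discussed for problem C (Div 1 A) above can also be written as a "next alive knight" pointer array with path compression. This is an illustrative Python sketch, not any commenter's submission:

```python
def knight_tournament(n, fights):
    """For each knight 1..n, report who knocked them out (0 for the
    overall winner). Each fight (l, r, x) eliminates every alive knight
    in [l, r] except x; nxt[i] jumps to the smallest alive knight >= i."""
    conqueror = [0] * (n + 1)
    nxt = list(range(n + 2))
    def find(i):
        while nxt[i] != i:
            nxt[i] = nxt[nxt[i]]  # path compression
            i = nxt[i]
        return i
    for l, r, x in fights:
        i = find(l)
        while i <= r:
            if i != x:
                conqueror[i] = x
                nxt[i] = i + 1  # knight i is eliminated; skip it from now on
            i = find(i + 1)
    return conqueror[1:]

knight_tournament(4, [(1, 2, 1), (1, 3, 3), (1, 4, 4)])  # [3, 1, 4, 0]
```

Each knight is eliminated at most once, and path compression keeps the pointer jumps cheap, which is the amortized near-linear behavior mentioned in the thread.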
https://www.jobilize.com/online/course/0-18-game-theory-applied-finite-mathematics-by-openstax?qcr=www.quizover.com&page=2
# 0.18 Game theory  (Page 3/4)

Suppose in [link], Robert decides to show a dime with $.20$ probability and a quarter with $.80$ probability, and Carol decides to show a dime with $.70$ probability and a quarter with $.30$ probability. What is the expected payoff for Robert?

Let $R$ denote Robert's strategy and $C$ denote Carol's strategy. Since Robert is a row player and Carol is a column player, their strategies are written as follows: $R=\left[\begin{array}{cc}.20& .80\end{array}\right]$ and $C=\left[\begin{array}{c}.70\\ .30\end{array}\right]$.

To find the expected payoff, we use the following reasoning. Since Robert chooses to play row 1 with $.20$ probability and Carol chooses to play column 1 with $.70$ probability, the move row 1, column 1 will be chosen with $(.20)(.70)=.14$ probability. Since this move has a payoff of 10 cents for Robert, Robert's expected payoff for this move is $(.14)(10)=1.4$ cents. Similarly, we compute Robert's expected payoffs for the other cases. The table below lists expected payoffs for all four cases.
| Move | Probability | Payoff | Expected Payoff |
| --- | --- | --- | --- |
| Row 1, Column 1 | $(.20)(.70)=.14$ | 10 cents | 1.4 cents |
| Row 1, Column 2 | $(.20)(.30)=.06$ | -10 cents | -.6 cents |
| Row 2, Column 1 | $(.80)(.70)=.56$ | -25 cents | -14 cents |
| Row 2, Column 2 | $(.80)(.30)=.24$ | 25 cents | 6.0 cents |
| Totals | 1 | | -7.2 cents |

The above table shows that if Robert plays the game with the strategy $R=\left[\begin{array}{cc}.20& .80\end{array}\right]$ and Carol plays with the strategy $C=\left[\begin{array}{c}.70\\ .30\end{array}\right]$, Robert can expect to lose 7.2 cents for every game.

Alternatively, if we call the game matrix $G$, then the expected payoff for the row player can be determined by multiplying matrices $R$, $G$ and $C$. Thus, the expected payoff $P$ for Robert is as follows:

$P = RGC = \left[\begin{array}{cc}.20& .80\end{array}\right]\left[\begin{array}{cc}10& -10\\ -25& 25\end{array}\right]\left[\begin{array}{c}.70\\ .30\end{array}\right] = -7.2 \text{ cents}$

which is the same as the one obtained from the table.

For the following game matrix $G$, determine the optimal strategy for both the row player and the column player, and find the value of the game.

$G=\left[\begin{array}{cc}1& -2\\ -3& 4\end{array}\right]$

Let us suppose that the row player uses the strategy $R=\left[\begin{array}{cc}r& 1-r\end{array}\right]$. Now if the column player plays column 1, the expected payoff $P$ for the row player is $P(r)=1(r)+(-3)(1-r)=4r-3$. This can also be computed as follows: $P(r)=\left[\begin{array}{cc}r& 1-r\end{array}\right]\left[\begin{array}{c}1\\ -3\end{array}\right]$ or $4r-3$.
If the row player plays the strategy $\left[\begin{array}{cc}r& 1-r\end{array}\right]$ and the column player plays column 2, the expected payoff $P$ for the row player is $P(r)=\left[\begin{array}{cc}r& 1-r\end{array}\right]\left[\begin{array}{c}-2\\ 4\end{array}\right]=-6r+4$.

We have two equations, $P(r)=4r-3$ and $P(r)=-6r+4$. The row player is trying to improve upon his worst scenario, and that only happens when the two lines intersect. Any point other than the point of intersection will not result in an optimal strategy, as one of the expectations will fall short. Solving for $r$ algebraically, we get $4r-3=-6r+4$, so $r=7/10$. Therefore, the optimal strategy for the row player is $\left[\begin{array}{cc}.7& .3\end{array}\right]$.

Alternatively, we can find the optimal strategy for the row player by first multiplying the row matrix with the game matrix as shown below:

$\left[\begin{array}{cc}r& 1-r\end{array}\right]\left[\begin{array}{cc}1& -2\\ -3& 4\end{array}\right]=\left[\begin{array}{cc}4r-3& -6r+4\end{array}\right]$

and then equating the two entries in the product matrix. Again, we get $r=.7$, which gives us the optimal strategy $\left[\begin{array}{cc}.7& .3\end{array}\right]$.

We use the same technique to find the optimal strategy for the column player. Suppose the column player's optimal strategy is represented by $\left[\begin{array}{c}c\\ 1-c\end{array}\right]$. We first multiply the game matrix by the column matrix as shown below:

$\left[\begin{array}{cc}1& -2\\ -3& 4\end{array}\right]\left[\begin{array}{c}c\\ 1-c\end{array}\right]=\left[\begin{array}{c}3c-2\\ -7c+4\end{array}\right]$

and then equate the entries in the product matrix. We get $3c-2=-7c+4$, so $c=.6$. Therefore, the column player's optimal strategy is $\left[\begin{array}{c}.6\\ .4\end{array}\right]$.
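Both worked examples can be checked numerically. In the sketch below, the first game's payoff matrix (in cents) is read off the earlier payoff table; the code is an illustrative check added for this note, not part of the original lesson:

```python
def expected_payoff(R, G, C):
    """Row player's expected payoff: sum over all moves of
    P(row i) * payoff(i, j) * P(column j)."""
    return sum(R[i] * G[i][j] * C[j]
               for i in range(len(R)) for j in range(len(C)))

# Robert vs Carol: payoffs in cents, strategies from the first example
p = expected_payoff([0.20, 0.80], [[10, -10], [-25, 25]], [0.70, 0.30])
# p is approximately -7.2, matching the table's total

# Second example: optimal strategies for G = [[1, -2], [-3, 4]]
v = expected_payoff([0.7, 0.3], [[1, -2], [-3, 4]], [0.6, 0.4])
# v is approximately -0.2, the value of the game
```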
To find the expected value, $V$, of the game, we find the product of the matrices $R$, $G$ and $C$:

$V=\left[\begin{array}{cc}.7& .3\end{array}\right]\left[\begin{array}{cc}1& -2\\ -3& 4\end{array}\right]\left[\begin{array}{c}.6\\ .4\end{array}\right] = -.2$

That is, if both players play their optimal strategies, the row player can expect to lose $.2$ units for every game.
https://open.kattis.com/contests/na19warmup3/problems/howtopaint
## 2019 NAIPC Practice Contest 3

#### Start 2019-01-26 18:00 UTC
#### End 2019-01-26 23:00 UTC

# Problem H: How to Paint?

Being a Pokenom trainer is very stressful. You always need to take good care of all your Pokenoms and always be prepared to battle other Pokenom trainers. To cope with the stressful life, Bash learns to paint in his free time. Today Bash is working on his masterpiece: 'The two staircases'.

The painting is divided into a grid with $M$ rows and $N$ columns. Rows and columns are numbered starting from $1$, from bottom to top and from left to right, respectively. Let $(i, j)$ denote the cell at the $i$-th row and $j$-th column. Bash colors each cell red or blue, such that the bottom part forms a blue 'staircase' and the top part forms a red 'inverted staircase'. More formally, Bash's picture has the following properties:

• For every column $i$ $(1 \leq i \leq N)$, there exist two integers $b_{i}, r_{i}$ satisfying:
  • $0 \leq b_{i}, 0 \leq r_{i}, b_{i} + r_{i} \leq M$.
  • The $b_{i}$ bottommost cells (i.e., cells $(1, i), (2, i), \ldots , (b_{i}, i)$) are blue.
  • The $r_{i}$ topmost cells (i.e., cells $(M, i), (M - 1, i), \ldots , (M - r_{i} + 1, i)$) are red.
  • All other cells are not painted.
• $M \geq b_{1} \geq b_{2} \geq \ldots \geq b_{N} \geq 0$.
• $0 \leq r_{1} \leq r_{2} \leq \ldots \leq r_{N} \leq M$.

Hence, Bash's picture can be uniquely determined by two sequences $b = (b_{1}, b_{2}, \ldots , b_{N})$ and $r = (r_{1}, r_{2}, \ldots , r_{N})$. This is an example of a valid picture with $M=5$, $N=4$, $b = (4, 2, 2, 0)$ and $r = (1, 1, 2, 3)$: Below are three examples of invalid pictures: After a few hours of hard work, Bash has finished his painting, and shows it to his best friend Cee.
The picture satisfies all the above properties, with parameters $b = (c_{1}, c_{2}, \ldots , c_{N})$ and $r = (M - c_{1}, M - c_{2}, \ldots , M - c_{N})$. No cells are left unpainted in this picture. Cee wants to know, step by step, how Bash created such a beautiful painting. Bash cannot remember the order in which he painted the cells, but Bash remembers that he always followed these rules:

• Bash starts with an empty picture.
• First, Bash paints the bottom-left cell $(1, 1)$ blue and the top-right cell $(M, N)$ red.
• In each step, Bash chooses some unpainted cell, paints it either red or blue, such that the picture after this step satisfies all the above properties.
• The process stops when the picture is completed.

Cee tries to follow Bash's rules to replicate the painting. But first Cee wants to know how many ways Cee can create the painting. Two ways are considered different if there exists a step where the painted cells are different. Represent the result as $100\, 003^ X \times Y$, where $Y$ is not divisible by $100\, 003$, and output $X \; Y_ m$ where $Y_ m$ is the result of $Y \bmod 100\, 003$.

## Input

• Line $1$: Two integers $N$ and $M$ $(1 \le M, N \le 1\, 000)$.
• Line $2$: $N$ integers $c_1, c_2, \cdots , c_ N$ $(M \ge c_1 \ge c_2 \ge \cdots \ge c_ N \ge 0)$ describing the blue parameters in Bash's final picture (see description above).

## Output

Two integers $X$ and $Y_ m$ as described above.

## Sample clarification

Bash's pictures in the $2$ sample cases:

Sample Input 1:
3 3
3 2 1

Sample Output 1:
0 672

Sample Input 2:
4 4
4 3 1 0

Sample Output 2:
0 16296
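The validity conditions in the statement translate directly into a checker. The sketch below only verifies a picture; it is not a solution to the counting problem:

```python
def is_valid_picture(M, b, r):
    """Check the staircase conditions: for every column,
    0 <= b_i, 0 <= r_i, b_i + r_i <= M; b is non-increasing;
    r is non-decreasing."""
    N = len(b)
    if len(r) != N:
        return False
    if any(bi < 0 or ri < 0 or bi + ri > M for bi, ri in zip(b, r)):
        return False
    if any(b[i] < b[i + 1] for i in range(N - 1)):
        return False
    if any(r[i] > r[i + 1] for i in range(N - 1)):
        return False
    return True

is_valid_picture(5, [4, 2, 2, 0], [1, 1, 2, 3])  # the valid example: True
```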
https://linguaplus.org.ge/disturbing-the-moju/b5db65-stepwise-linear-regression
To do so, we plot the actual values (targets) of the output variable "Log-Price" in the X-axis and the predicted values of the output variable "Log-Price" in the Y-axis. Suppose both $$x_{1}$$ and $$x_{2}$$ made it into the two-predictor stepwise model and remained there. Consider an analyst who wishes to establish a linear relationship between the daily change in … Stepwise regression. I am trying to understand the basic difference between stepwise and backward regression in R using the step function. Omit any previously added predictors if their p-value exceeded $$\alpha_R$$. The good news is that most statistical software — including Minitab — provides a stepwise regression procedure that does all of the dirty work for us. Then, here, we would prefer the model containing the three predictors $$x_{1}$$, $$x_{2}$$, and $$x_{4}$$, because its adjusted $$R^{2} \text{-value}$$ is 97.64%, which is higher than the adjusted $$R^{2} \text{-value}$$ of 97.44% for the final stepwise model containing just the two predictors $$x_{1}$$ and $$x_{2}$$. •You want to interactively explore which predictors seem to provide a good fit. Stepwise Linear Regression is a method by which you leave it up to a statistical model to test each predictor variable in a stepwise fashion, meaning one is inserted into the model and kept if it "improves" the model. Run PIQ vs Brain, Height, Weight - weight is the only 3rd predictor. Our final regression model, based on the stepwise procedure, contains only the predictors $$x_1 \text{ and } x_2 \colon$$ A regression equation is a polynomial regression equation if the power of … I show how they can be implemented in SAS (PROC GLMSELECT) and offer pointers to how they can be done in R and Python. Stepwise methods are also problematic for other types of regression, but we do not discuss these. The results of each of Minitab's steps are reported in a column labeled by the step number.
Now, fit each of the three-predictor models that include $$x_{1}$$ and $$x_{2}$$ as predictors — that is, regress $$y$$ on $$x_{1}$$ , $$x_{2}$$ , and $$x_{3}$$ , regress $$y$$ on $$x_{1}$$ , $$x_{2}$$ , and $$x_{4}$$ , ..., and regress $$y$$ on $$x_{1}$$ , $$x_{2}$$ , and $$x_{p-1}$$ . Also continuous variables nested … This formula will be applied to each data point in every feature individually. That took a lot of work! Include the predictor with the smallest p-value < $$\alpha_E = 0.15$$ and largest |T| value. Stepwise regression is a systematic method for adding and removing terms from a linear or generalized linear model based on their statistical significance in explaining the response variable. Stepwise regression is a method that iteratively examines the statistical significance of each independent variable in a linear regression model. converting the values of numerical variables into values within a specific interval. The null model has no predictors, just one intercept (The mean over Y). For example, if you toss a coin ten times and get ten heads, then you are pretty sure that something weird is going on. Do not add weight since its p-value $$p = 0.998 > \alpha_E = 0.15$$. The simplest of probabilistic models is the straight line model: where 1. y = Dependent variable 2. x = Independent variable 3. That is, check the, a stepwise regression procedure was conducted on the response $$y$$ and four predictors $$x_{1}$$ , $$x_{2}$$ , $$x_{3}$$ , and $$x_{4}$$, the Alpha-to-Enter significance level was set at $$\alpha_E = 0.15$$ and the Alpha-to-Remove significance level was set at $$\alpha_{R} = 0.15$$, Just as our work above showed, as a result of Minitab's. Through backward elimination, we can successfully eliminate all the least significant features and build our model based on only the significant features. It can be useful in the following situations: •There is little theory to guide the selection of terms for a model. 
sklearn.linear_model.LinearRegression¶ class sklearn.linear_model.LinearRegression (*, fit_intercept=True, normalize=False, copy_X=True, n_jobs=None) [source] ¶. Though it might look very easy and simple to understand, it is very important to get the basics right, and this knowledge will help tackle even complex machine learning problems that one comes across. Nearly 50% of the variance in the forest fire occurrence data was explained using linear stepwise regression. Here, we are given the size of houses (in sqft) and we need to predict the sale price. The value of ‘d’ is the error, which has to be minimized. •You want to improve a model’s prediction performance by reducing the variance caused by estimating unnecessary terms. Stepwise regression is a regression technique that uses an algorithm to select the best grouping of predictor variables that account for the most variance in the outcome (R-squared). A magazine wants to improve their customer satisfaction. ... For example, you can enter one block of variables into the regression model using stepwise selection and a second block using forward selection. mdl = stepwiselm(ingredients,heat,'PEnter',0.06) As a result of the second step, we enter $$x_{1}$$ into our stepwise model. This data set includes the variables ingredients and heat. While we will soon learn the finer details, the general idea behind the stepwise regression procedure is that we build our regression model from a set of candidate predictor variables by entering and removing predictors — in a stepwise manner — into our model until there is no justifiable reason to enter or remove any more. Improve is defined by the type of stepwise regression being done, this can be defined by AIC, BIC, or any other variables. The test data values of Log-Price are predicted using the predict() method from the Statsmodels package, by using the test inputs. 
Of course, we also need to set a significance level for deciding when to remove a predictor from the stepwise model. Again, many software packages — Minitab included — set this significance level by default to $$\alpha_{R} = 0.15$$. This problem can be solved by creating a new variable by taking the natural logarithm of Price to be the output variable. Let's learn how the stepwise regression procedure works by considering a data set that concerns the hardening of cement. Stepwise regression is useful in an exploratory fashion or when testing for associations. Here, we have been given several features of used-cars and we need to predict the price of a used-car. Brain size and body size. Stepwise regression is an approach to selecting a subset of effects for a regression model. The t-statistic for $$x_{4}$$ is larger in absolute value than the t-statistic for $$x_{2}$$ — 4.77 versus 4.69 — and therefore the P-value for $$x_{4}$$ must be smaller. We also remove the Model feature because it is an approximate combination of Brand, Body and Engine Type and will cause redundancy. It may be necessary to force the procedure to include important predictors. That's because what is commonly known as 'stepwise regression' is an algorithm based on p-values of coefficients of linear regression, and scikit-learn deliberately avoids inferential approach to model learning (significance testing etc). stepwise — Stepwise ... performs a backward-selection search for the regression model y1 on x1, x2, d1, d2, d3, x4, and x5. Let us understand this through an example. stepwise, pr(.10): regress y1 x1 x2 (d1 d2 d3) (x4 x5) One should not jump to the conclusion that all the important predictor variables for predicting $$y$$ have been identified, or that all the unimportant predictor variables have been eliminated. That is, regress PIQ on Brain, regress PIQ on Height, and regress PIQ on Weight. This equation will be of the form y = m*x + c. 
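The straight-line model y = m*x + c mentioned above has closed-form least-squares estimates. A minimal pure-Python sketch (the data points are made up for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = m*x + c:
    m = cov(x, y) / var(x), c = mean(y) - m * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

m, c = fit_line([1, 2, 3], [3, 5, 7])  # data lies exactly on y = 2x + 1
```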
Linear regression answers a simple question: can you measure an exact relationship between one target variable and a set of predictors? To find the best line, the method calculates the square of the vertical distance between each data point and a candidate line (the distance is squared because it can be either positive or negative, but we only need its magnitude) and minimizes the sum of these squares. The straight-line model is y = b0 + b1*x + ε, where y is the dependent variable, x is the independent variable, b0 is the intercept, b1 is the slope (the coefficient of x), and ε is the random error component. Once the line is fitted, predicting the price of a house of size 1100 sqft amounts to plotting 1100 on the X-axis and taking the corresponding Y-axis value on the line. Taking the logarithm of a skewed output, as we did with Price, is one of many tricks to overcome the non-linearity problem while still performing linear regression.

In multiple linear regression, you have one output variable but many input variables, so it is not possible to visualize all the data together in a 2-D chart to get a sense of how it is distributed. This is where stepwise regression, a technique for feature selection in multiple linear regression, becomes useful. It is a procedure we can use to build a regression model from a set of predictor variables by entering and removing predictors in a stepwise manner into the model until there is no statistically valid reason to enter or remove any more. The procedure is a linear sequence of selections based on t-test p-values: continue the steps described below until adding an additional predictor does not yield a p-value below \(\alpha_E = 0.15\), removing along the way any entered predictor whose p-value rises above \(\alpha_R\). Instead of using every available feature, a subset of those features is thereby selected which can predict the output accurately. Cautions about the procedure are delineated in a later section.
Before any selection can happen on the used-car data, categorical features must be encoded as dummy variables and insignificant terms removed one at a time. After one such fit, it was observed that the dummy variable Brand_Mercedes-Benz had a p-value = 0.857 > 0.01; this variable is eliminated and the regression is performed again. (Stepwise techniques are widespread in applied work: one study built a sugarcane above-ground fresh weight (AFW) model using six regression algorithms, among them multiple linear regression (MLR), stepwise multiple regression (SMR), and generalized linear regression.)

Multiple linear regression (MLR), also known simply as multiple regression, is a statistical technique that uses several explanatory variables to predict the outcome of a response variable. Forward stepwise selection can be run in this context whether n is less than p or n is greater than p, and it is a very attractive approach because it is both tractable and gives a good sequence of models: to create a small model, you start from a constant model and add terms. This is how SPSS's stepwise method behaves: it starts with zero predictors and then adds the strongest predictor, sat1, to the model if its b-coefficient is statistically significant (p < 0.05); it then adds the second strongest predictor (sat3), and so on. In R, the same search can be run with stepwise regression (direction = "both") on a variable-by-variable basis, or wrapped in a loop over many models. Keep in mind that the procedure yields a single final model, although there are often several equally good models.
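The dummy encoding that produces columns like Brand_Mercedes-Benz can be sketched with pandas. The brand values below are invented for illustration, and `drop_first=True` is one common convention rather than necessarily what the original analysis used:

```python
import pandas as pd

# Hypothetical categorical column; get_dummies expands it into 0/1 columns,
# one per category level.
cars = pd.DataFrame({"Brand": ["Audi", "Mercedes-Benz", "Toyota", "Audi"]})
encoded = pd.get_dummies(cars, columns=["Brand"], drop_first=True)

# drop_first=True removes one level (here "Audi") so the dummies are not
# perfectly collinear with the intercept.
```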
Let's see how this works on the cement data. Researchers recorded, for 13 batches of cement (Cement dataset), four composition predictors \(x_{1}\) through \(x_{4}\) and the response \(y\); they were interested in learning how the composition of the cement affected the heat evolved during the hardening of the cement. Studying the scatter plot matrix of the data already gives a hunch of which predictors are good candidates for being the first to enter the stepwise model. In Minitab, select Stat > Regression > Regression > Fit Regression Model, click the Stepwise button in the resulting Regression dialog, select Stepwise for Method, and select Include details for each step under Display the table of model selection details.

First, fit each of the possible simple linear regression models. The t-statistic for \(x_{4}\) is larger in absolute value than the t-statistic for \(x_{2}\) (4.77 versus 4.69), and therefore the p-value for \(x_{4}\) must be smaller; as a result of the first step, we enter \(x_{4}\) into our stepwise model. Following step #2, we fit each of the two-predictor models that include \(x_{4}\) as a predictor; that is, we regress \(y\) on \(x_{4}\) and \(x_{1}\), regress \(y\) on \(x_{4}\) and \(x_{2}\), and regress \(y\) on \(x_{4}\) and \(x_{3}\). The predictor \(x_{2}\) is not eligible for entry into the stepwise model because its t-test p-value (0.687) is greater than \(\alpha_E = 0.15\), while \(x_{1}\) and \(x_{3}\) tie for having the smallest t-test p-value (it is < 0.001 in each case, though the tie is an artifact of Minitab rounding to three decimal places); as a result of the second step, we enter \(x_{1}\) into our stepwise model. At each step we also omit any previously added predictors if their p-value exceeds \(\alpha_R = 0.15\). It took Minitab 4 steps before the procedure was stopped; note that Minitab considers a step any addition or removal of a predictor, so adding one predictor and removing another counts as two of its steps. The equivalent search in R is, for example, step(lm(mpg ~ wt + drat + disp + qsec, data = mtcars), direction = "both").
To summarize the setup: a stepwise regression procedure was conducted on the response \(y\) and four predictors \(x_{1}\), \(x_{2}\), \(x_{3}\), and \(x_{4}\), with the Alpha-to-Enter significance level set at \(\alpha_E = 0.15\) and the Alpha-to-Remove significance level set at \(\alpha_R = 0.15\). The search starts from the null model, which has no predictors, just one intercept (the mean of \(y\)). Stepwise linear regression is thus a method by which you leave it up to a statistical test to evaluate each predictor variable in a stepwise fashion: a predictor is inserted into the model and kept if it "improves" the model. What is the final model identified by your stepwise regression procedure?

The same machinery applies elsewhere. Some researchers observed data (Blood pressure dataset) on 20 individuals with high blood pressure, and were interested in determining whether a relationship exists between blood pressure and age, weight, body surface area, duration, pulse rate, and/or stress level. In the brain size and body size example, one first regresses PIQ on Brain, regresses PIQ on Height, and regresses PIQ on Weight; Brain is included as the first predictor since its p-value = 0.019 is the smallest, Height is added next since its p-value = 0.009 is then the smallest, and the previously added predictor Brain is retained since its p-value stays below \(\alpha_R\).
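The forward-selection idea above can be sketched in a few lines. This simplified version adds, at each step, the column that most reduces the residual sum of squares rather than running formal t-tests, and the data is synthetic, so it illustrates the search strategy rather than reproducing any example in this article:

```python
import numpy as np

def rss(cols, X, y):
    """Residual sum of squares of an OLS fit of y on the given columns."""
    A = np.column_stack([np.ones(len(y)), X[:, cols]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ beta) ** 2))

def forward_select(X, y, n_keep):
    """Greedy forward selection: start from the intercept-only (null) model
    and repeatedly add the column that lowers the RSS the most."""
    chosen = []
    while len(chosen) < n_keep:
        remaining = [j for j in range(X.shape[1]) if j not in chosen]
        best = min(remaining, key=lambda j: rss(chosen + [j], X, y))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
# Column 2 carries the strongest signal, column 0 a weaker one.
y = 3.0 * X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.1, size=60)
order = forward_select(X, y, 2)
```

Real stepwise procedures additionally test each candidate's p-value against the entry threshold and re-check entered terms against the removal threshold.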
Continuing the cement search: as a result of the third step, we enter \(x_{2}\) into our stepwise model. A predictor rejected earlier can become eligible again, which is exactly why the procedure re-checks every variable at every step, and the stepping continues until you cannot justify entering or removing any more predictors. (Behind every such decision is a p-value, which quantifies exactly how unlikely the observed result is if the predictor were useless, just as you can quantify exactly how unlikely a long run of heads is, given that the probability of heads on any one toss is 0.5.)

Now, let us explore what backward elimination is, using the used-car data. Upon completion of all the preprocessing steps, we are ready to execute the backward elimination multiple linear regression algorithm on the data, by setting a significance level of 0.01: fit the model on all input variables, find the feature with the highest p-value, eliminate that variable, and perform the regression again, repeating until every remaining term is significant. The test data values of Log-Price are then predicted using the predict() method from the Statsmodels package, by using the test inputs. On plotting predicted against actual values, we can see that they have a linear relationship that resembles the y = x line, indicating a good fit. One caution: the use of stepwise regression has been questioned, since the results of stepwise analyses are often unstable (Pedhazur, 1982). This brings us to the end of our regression.
Real Statistics Data Analysis Tool: we can use the Stepwise Regression option of the Linear Regression data analysis tool to carry out the stepwise regression process, with the results explained at each step. In MATLAB, specify 0.06 as the threshold for the criterion to add a term to the model (the 'PEnter' argument of stepwiselm shown earlier). For the used-car exercise, we got very good correlations in the data we were using, and got the maximum value out of it using the log transformation and stepwise confirmation: one more term with a p-value = 0.022 > 0.01 was dropped, after which all input variables remaining in the model were significant.
Now, we can clearly see that all features remaining in the model have a p-value < 0.01, so it can be concluded that our multiple linear regression backward elimination algorithm has accurately fit the given data and is able to predict new values. To recap what we illustrated: the data was read with the Pandas read method, descriptive statistics were generated, the output was transformed, and the model was built based on a backward-elimination algorithm, a method of regressing multiple variables while simultaneously removing those that are not important. Stepwise regression in general is a popular data-mining tool that uses statistical significance to select the explanatory variables to be used in a multiple-regression model (Mike Fritz and Paul D. Berger discuss it in this regard). Its three flavors are: forward selection, which starts from a constant model and at each step adds the predictor with the largest |t| value (equivalently, the smallest p-value); backward elimination, which starts with all input variables in the model and removes the feature with the highest p-value; and bidirectional (stepwise) selection, in which terms are automatically added to or trimmed from the model, continuing until you cannot justify entering or removing any more predictors. In R this is conveniently available through the function stepAIC() from the MASS package; it is not offered by the Scikit-learn library. Whichever variant is used, the single-predictor result is still the best-fit line equation y = b0 + b1*x, and the final model deserves scrutiny: one should not jump to the conclusion that all the important predictor variables for predicting \(y\) have been identified, or that all the unimportant predictor variables have been eliminated, and there is a risk of committing a Type II error along the way.
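A self-contained sketch of the backward-elimination loop described above. It uses a normal approximation in place of exact t-distribution p-values and synthetic data, so it is an illustration of the idea rather than the article's exact procedure:

```python
import numpy as np
from math import erf, sqrt

def slope_pvalues(X, y):
    """Two-sided p-values for OLS slope coefficients, using a normal
    approximation to the t distribution (fine for moderate sample sizes)."""
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    sigma2 = (resid @ resid) / (n - k - 1)
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(A.T @ A)))[1:]
    t = beta[1:] / se
    return [2.0 * (1.0 - 0.5 * (1.0 + erf(abs(ti) / sqrt(2.0)))) for ti in t]

def backward_eliminate(X, y, alpha=0.01):
    """Drop the least significant column until all p-values are <= alpha."""
    cols = list(range(X.shape[1]))
    while cols:
        ps = slope_pvalues(X[:, cols], y)
        worst = max(range(len(cols)), key=lambda i: ps[i])
        if ps[worst] <= alpha:
            break
        cols.pop(worst)
    return cols

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))
# Columns 0 and 2 carry real signal; columns 1 and 3 are pure noise.
y = 2.0 * X[:, 0] + 3.0 * X[:, 2] + rng.normal(scale=0.5, size=120)
kept = backward_eliminate(X, y, alpha=0.01)
```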
https://homework.cpm.org/category/CON_FOUND/textbook/gc/chapter/2/lesson/2.2.4/problem/2-93
### Home > GC > Chapter 2 > Lesson 2.2.4 > Problem 2-93

2-93. On graph paper, plot quadrilateral $ABCD$ if $A(2, 7)$, $B(4, 8)$, $C(4, 2)$, and $D(2, 3)$.

1. What is the best name for this shape? Justify your conclusion.

   This shape is an isosceles trapezoid.

2. Quadrilateral $A^\prime B^\prime C^\prime D^\prime$ is formed by rotating $ABCD$ $90°$ clockwise about the origin. Name the coordinates of the vertices.

   $A^\prime(7, -2)$, $B^\prime(8, -4)$, $C^\prime(2, -4)$, $D^\prime(3, -2)$

3. Find the area of $ABCD$. Show all work.

   $\text{Area of a trapezoid }=\frac{1}{2}(b_{1}+b_{2})h$

Use the eTool below to plot the quadrilateral. Click the link at right for the full version of the eTool: 2-93 HW eTool (Desmos)
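As a worked evaluation of the hinted formula (computed here from the given coordinates; the parallel bases are the vertical sides $\overline{AD}$ and $\overline{BC}$, and this computation is not part of the original hint):

```latex
b_1 = AD = 7 - 3 = 4, \qquad b_2 = BC = 8 - 2 = 6, \qquad h = 4 - 2 = 2
\qquad\Rightarrow\qquad
\text{Area} = \tfrac{1}{2}(b_1 + b_2)\,h = \tfrac{1}{2}(4 + 6)(2) = 10 \text{ square units}
```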
https://www.dlubal.com/en-US/support-and-learning/support/knowledge-base/001388
# Modeling of Point-Supported Glass Systems

### Technical Article 001388 | 01/05/2017

The transparency of glass should not be missing from any building. In addition to typical applications such as windows, this building material is being increasingly used for facades, canopies, or even as bracing for stairways. The planning architects often place very high transparency demands on the fixing of the glass panes, which requires special glass fittings that couple the panes.

#### Background of Design

In addition to the general technical approvals of the individual producers, the design of point-supported fittings is regulated in DIN 18008 [1]. This German standard specifies two different ways:

- Appendix B - Verification/validation of finite element models
- Appendix C - Simplified method

Apart from the various design options, there are constructional provisions (especially for plate fittings) specifying the geometrical arrangement on a glass pane and the formation in the edge area.

#### Output Data for Analysis

Laminated safety glass made of heat-strengthened float glass, 2 × 8 mm:

- Point fixing PH 793 by Glassline GmbH (Approval Z-70.2-99 [3])
- Cylindrical head Ø 52 mm
- Drilling Ø 25 mm
- Design load qd = 4.5 kN/m²

#### Modeling in RFEM According to the Simplified Method

In the case of the glass pane design according to the simplified method described in DIN 18008, Annex C [1], the pane may be analyzed without the drilled holes. The existing glass fittings are represented by springs, whose stiffnesses are specified in the technical approval.
In our example, combining the glass and fitting stiffnesses in series gives the following result:

$$\begin{array}{l}{\mathrm C}_{\mathrm Z,\max}\;=\;\left(\frac1{24{,}372}\;+\;\frac1{3{,}015}\right)^{-1}\;=\;2{,}683\;\mathrm N/\mathrm{mm}\\{\mathrm C}_{\mathrm Z,\min}\;=\;\left(\frac1{15{,}386}\;+\;\frac1{1{,}592}\right)^{-1}\;=\;1{,}443\;\mathrm N/\mathrm{mm}\\{\mathrm C}_{\mathrm Z,\mathrm{sel}}\;=\;2{,}000\;\mathrm N/\mathrm{mm}\\{\mathrm C}_{\mathrm V;\mathrm x,\mathrm y}\;=\;344\;\mathrm N/\mathrm{mm}\end{array}$$

Based on these parameters, the following result values are obtained. By using the formulas and parameters provided by DIN 18008, Annex C [1], all relevant stress ratios can now be calculated.

##### Stress component FZ

$${\mathrm\sigma}_{\mathrm{Fz}}\;=\;\frac{{\mathrm b}_{\mathrm{Fz}}}{\mathrm d^2}\;\cdot\;\frac{\mathrm t_{\mathrm{ref}}^2}{\mathrm t_{\mathrm i}^2}\;\cdot\;{\mathrm F}_{\mathrm Z}\;\cdot\;{\mathrm\delta}_{\mathrm Z}\;=\;\frac{15.8}{25^2}\;\cdot\;\frac{10^2}{8^2}\;\cdot\;1{,}964\;\cdot\;0.5\;=\;38.8\;\mathrm N/\mathrm{mm}^2$$

##### Stress component Fres

$$\begin{array}{l}{\mathrm F}_{\mathrm{res}}\;=\;\sqrt{\mathrm F_{\mathrm x}^2\;+\;\mathrm F_{\mathrm y}^2}\;=\;\sqrt{11^2\;+\;4^2}\;=\;12\;\mathrm N\\{\mathrm\sigma}_{\mathrm F,\mathrm{res}}\;=\;\frac{{\mathrm b}_{\mathrm F,\mathrm{res}}}{\mathrm d^2}\;\cdot\;\frac{{\mathrm t}_{\mathrm{ref}}}{{\mathrm t}_{\mathrm i}}\;\cdot\;{\mathrm F}_{\mathrm{res}}\;\cdot\;{\mathrm\delta}_{\mathrm F,\mathrm{res}}\;=\;\frac{3.92}{25^2}\;\cdot\;\frac{10}8\;\cdot\;12\;\cdot\;0.5\;=\;0.1\;\mathrm N/\mathrm{mm}^2\end{array}$$

##### Stress component Mres

Due to the hinged support about the axes x, y, and z, there is no additional moment Mres.

##### Stress concentration in the drilling hole area

$${\mathrm\sigma}_{\mathrm g}\;=\;{\mathrm\sigma}_{\mathrm g}(3\mathrm d)\;\cdot\;{\mathrm\delta}_{\mathrm g}\;\cdot\;\mathrm k\;=\;\frac{9.6\;\cdot\;8\;\cdot\;1.6}{10.8}\;=\;11.4\;\mathrm N/\mathrm{mm}^2$$

The governing design stress value in the fitting area then results from the sum of the individual components.
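The spring stiffnesses above combine like springs in series (1/C = 1/C₁ + 1/C₂); a quick numerical check of the two quoted values:

```python
def series_stiffness(*stiffnesses):
    """Combined stiffness of springs connected in series: 1/C = sum(1/C_i)."""
    return 1.0 / sum(1.0 / c for c in stiffnesses)

# Values quoted from the approval data above, in N/mm.
c_z_max = series_stiffness(24_372, 3_015)
c_z_min = series_stiffness(15_386, 1_592)
```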
$${\mathrm E}_{\mathrm d}\;=\;38.8\;+\;0.1\;+\;11.4\;=\;50.3\;\mathrm N/\mathrm{mm}^2$$

As the final step, the moment in the span must be considered; in this case, the moment has to be determined on a statically determinate system. The governing stress in the span area is Ed = 16.5 N/mm². The allowable stress for laminated safety glass is calculated as

$${\mathrm R}_{\mathrm d}\;=\;1.1\;\cdot\;\frac{{\mathrm f}_{\mathrm k,\mathrm{TVG}}}{{\mathrm\gamma}_{\mathrm M}}\;=\;1.1\;\cdot\;\frac{70}{1.5}\;=\;51.3\;\mathrm N/\mathrm{mm}^2$$

and thus the total design ratio of the glass is η = 50.3/51.3 = 0.98. In addition to the general stress analysis performed here, further verifications for the exact dimensions of the glass pane can be carried out by following the standard.

#### Summary

Appendix C of the German standard DIN 18008 provides very simple tools for the design of point-supported glass fittings. By using the table values, you can very quickly estimate the structural behavior of the glass pane and determine the design ratio. Another possibility is specified in Appendix B of the standard; this design method, based on a finite element model, will be explained in the next part of this article.

#### References

[1] DIN 18008-3:2013-07
[2] Weller, B., Engelmann, M., Nicklisch, F., & Weimar, T. (2013). Glasbau-Praxis: Konstruktion und Bemessung, Band 2: Beispiele nach DIN 18008 (3rd ed.). Berlin: Beuth.
[3] General Technical Approval Z-70.2-99 (2014)
http://ieeexplore.ieee.org/xpl/tocresult.jsp?reload=true&isnumber=4012365&punumber=8920
By Topic # IEEE Transactions on Circuits and Systems II: Express Briefs ## Filter Results Displaying Results 1 - 25 of 41 Publication Year: 2006, Page(s):C1 - C4 | PDF (45 KB) • ### IEEE Transactions on Circuits and Systems—II: Express Briefs publication information Publication Year: 2006, Page(s): C2 | PDF (37 KB) • ### Design of High-Speed Power-Efficient MOS Current-Mode Logic Frequency Dividers Publication Year: 2006, Page(s):1165 - 1169 Cited by:  Papers (20)  |  Patents (1) | | PDF (188 KB) | HTML A methodology to design high-speed power-efficient MOS current-mode logic (MCML) static frequency dividers is proposed. Analytical criteria to exploit the speed potential of MCML gates are first introduced. Then, an analytical strategy is formulated to progressively reduce the bias currents through the stages without affecting the divider operation speed, thereby reducing the overall power consump... View full abstract» • ### Exploiting Hysteresys in MCML Circuits Publication Year: 2006, Page(s):1170 - 1174 Cited by:  Papers (7) | | PDF (156 KB) | HTML In this brief, hysteresis is introduced to improve the noise margin of positive-feedback source-coupled logic (PFSCL) gates, that are a modification of MOS current-mode logic recently proposed by the same authors. To better understand the effect of hysteresis on the performance and the design of these circuits, a simple analytical model of the noise margin is developed. Extensive simulations on a ... View full abstract» • ### Quasi Rail-to-Rail Very Low-Voltage OPAMP With a Single pMOS Input Differential Pair Publication Year: 2006, Page(s):1175 - 1179 Cited by:  Papers (18) | | PDF (384 KB) | HTML In this brief, a quasi-rail-to-rail low-voltage operational amplifier (VDD-VSS-VDSATP-VDSATN ) is introduced. A common-mode adapter that uses the common-mode voltage present at the common-source node of the available differential pair to accommodate the large common-mode input signal is proposed. 
The common-mode adapter operates properly at 300 kHz while... View full abstract» • ### Accurate, Compact, and Power-Efficient Li-Ion Battery Charger Circuit Publication Year: 2006, Page(s):1180 - 1184 Cited by:  Papers (81)  |  Patents (1) | | PDF (938 KB) | HTML A novel, accurate, compact, and power-efficient lithium-ion (Li-Ion) battery charger designed to yield maximum capacity, cycle life, and therefore runtime is presented and experimentally verified. The proposed charger uses a diode to smoothly (i.e., continuously) transition between two high-gain linear feedback loops and control a single power MOS device, automatically charging the battery with co... View full abstract» • ### Some Simple Synchronization Criteria for Complex Dynamical Networks Publication Year: 2006, Page(s):1185 - 1189 Cited by:  Papers (61) | | PDF (202 KB) | HTML Based on the concept of matrix measure, some simple synchronization criteria for complex dynamical networks are provided. If the coupling strength and the largest nonzero eigenvalue of the coupling matrix satisfy certain conditions, the stability of the synchronization manifold can be ensured. Furthermore, the proposed criteria are less conservative than some existing criteria View full abstract» • ### Compensation of Loudspeaker Nonlinearity in Acoustic Echo Cancellation Using Raised-Cosine Function Publication Year: 2006, Page(s):1190 - 1194 Cited by:  Papers (25)  |  Patents (3) | | PDF (477 KB) | HTML The nonlinearity of a power amplifier or loudspeaker in a large-signal situation gives rise to a nonlinear distortion of acoustic signal. A conventional acoustic echo canceller using linear adaptive filters is not able to eliminate the nonlinear echo component. In this brief, a novel nonlinear echo cancellation technique is presented by using a nonlinear transformation in conjunction with a conven... 
View full abstract» • ### Markov Chains-Based Derivation of the Phase Detector Gain in Bang-Bang PLLs Publication Year: 2006, Page(s):1195 - 1199 Cited by:  Papers (53) | | PDF (281 KB) | HTML Due to the presence of a binary phase detector (BPD) in the loop, bang-bang phase-locked loops (BBPLLs) are hard nonlinear systems. Since the BPD is usually also the only nonlinear element in the loop, in practical applications, BBPLLs are commonly analyzed by first linearizing the BPD and then using the traditional mathematical techniques for linear systems. To the author's knowledge, in the lite... View full abstract» • ### Gain Calibration Technique for Increased Resolution in FRC Data Converters Publication Year: 2006, Page(s):1200 - 1204 | | PDF (451 KB) | HTML A feedforward residue compensation (FRC) data converter combines the benefits of a Nyquist rate analog-to-digital converter (ADC) with an oversampled converter. In this brief, the authors introduce a digital calibration technique that allows the FRC architecture to provide high resolution at high input signal frequencies. A high-performance pipeline ADC is used as an auxiliary converter to measure... View full abstract» • ### A Low-Phase Noise, Anti-Harmonic Programmable DLL Frequency Multiplier With Period Error Compensation for Spur Reduction Publication Year: 2006, Page(s):1205 - 1209 Cited by:  Papers (40) | | PDF (618 KB) | HTML A low phase noise, delay-locked loop-based programmable frequency multiplier, with the multiplication ratio from 13 to 20 and output frequency range from 900 MHz to 2.9 GHz, is reported in this brief. A new switching control scheme is employed in the circuit to enable the capability of locking to frequencies either above or below the start-up frequency without initialization. To reduce the spuriou... 
View full abstract» • ### CMOS Image Sensors With Self-Powered Generation Capability Publication Year: 2006, Page(s):1210 - 1214 Cited by:  Papers (30)  |  Patents (4) | | PDF (941 KB) | HTML Considerations for CMOS image sensors with self-power generation capability design are presented. Design of CMOS imagers, utilizing self-powered sensors (SPS) is a new approach for ultra low-power CMOS active pixel sensors (APS) implementations. The SPS architecture allows generation of electric power by employing a light sensitive device, located on the same silicon die with an APS and thus reduc... View full abstract» • ### Subthreshold Operation of a Monolithically Integrated Strained-Si Current Mirror at Low Temperatures Publication Year: 2006, Page(s):1215 - 1219 Cited by:  Papers (1) | | PDF (294 KB) | HTML The dc operation of a simple current mirror built with two monolithically integrated strained-Si (s-Si) MOSFETs operating in the subthreshold region is studied as a function of temperature. At room temperature, the log-log current relationship is linear over 4 dec. The consumed power is approximately 100 muW at 300 K but only 1 nW at 160 K. The cost of this reduction in power is a reduced linear l... View full abstract» • ### Finite Horizon ${H}^{infty}$ Filtering With Initial Condition Publication Year: 2006, Page(s):1220 - 1224 Cited by:  Papers (4) | | PDF (175 KB) | HTML We consider the problem of finite horizon Hinfin filtering with uncertain initial conditions. An Hinfin norm-like performance measure that explicitly accounts for the effect of initial condition is proposed. 
Necessary and sufficient conditions are derived for the existence of an estimator that achieves a pre-specified value of this performance measure.

### The Analytic Determination of the PPV for Second-Order Oscillators Via Time-Varying Eigenvalues
Publication Year: 2006, Page(s): 1225–1229
Time-varying eigenvalues may be used to formulate a set of linearly independent solutions for an arbitrary dynamical linear time-varying system. In this brief, it is shown how these quantities are used to determine analytically the perturbation projection vector (PPV) associated to a given oscillator. The PPV can be further used to estimate the spectral and timing properties of the oscillator outp...

### A Study of the Optimal Data Rate for Minimum Power of I/Os
Publication Year: 2006, Page(s): 1230–1234
Power dissipation of multi-gigabit-per-second parallel input-output (I/O) links is an integral part of total integrated circuit (IC) power dissipation. This brief presents an optimal data rate per I/O link at which the power dissipation is minimized. The data rate is expressed as a function of the transmission channel's frequency response. The impact of considering the power due to on-chip electr...

### An Improved H∞ Filter Design for Systems With Time-Varying Interval Delay
Publication Year: 2006, Page(s): 1235–1239
This brief is concerned with H∞ filter design for systems with time-varying interval delay (i.e., the time delay varies within an interval). An appropriate type of Lyapunov functionals is proposed to investigate the delay-dependent H∞ filter design problem. Improved delay-dependent results are presented by taking into account the interval range. Finally, a numerical e...
### Nonlinear Behaviors of Bandpass Sigma–Delta Modulators With Stable System Matrices
Publication Year: 2006, Page(s): 1240–1244
It has been established that a class of bandpass sigma-delta modulators may exhibit state space dynamics which are represented by elliptical or fractal patterns confined within trapezoidal regions when the system matrices are marginally stable. In this brief, it is found that fractal or irregular chaotic patterns may also be exhibited in the phase plane when the system matrices are strictly stable.

### Area-Efficient VLSI Design of Reed–Solomon Decoder for 10GBase-LX4 Optical Communication Systems
Publication Year: 2006, Page(s): 1245–1249
The Reed-Solomon (RS) code is a widely used forward error correction technique to cope with the channel impairments in fiber communication systems. The typical parallel RS architecture requires a huge hardware cost to achieve a very high transmission data rate for optical systems. This brief presents an area-efficient VLSI architecture of the RS decoder by using a novel just-in-time folding modi...

### A VLIW Processor With Hardware Functions: Increasing Performance While Reducing Power
Publication Year: 2006, Page(s): 1250–1254
This brief presents a heterogeneous multicore embedded processor architecture designed to exceed the performance of traditional embedded processors while reducing the power consumed compared to low-power embedded processors. At the heart of this architecture is a multicore very long instruction word (VLIW) containing homogeneous execution cores/functional units. Additionally, heterogeneous combinatio...
### On the Conversion Between Number Systems
Publication Year: 2006, Page(s): 1255–1258
This brief revisits the problem of conversion between number systems and asks the following question: given a nonnegative decimal number d, what is the value of the digit at position j in the corresponding base-b number? Thus, we do not require the knowledge of other digits except the one we are interested in. Accordingly, we present a conversion function that relates each digit in a base-b system...

### Adaptive Ratio-Size Gates for Minimum-Energy Operation
Publication Year: 2006, Page(s): 1259–1263
Minimum-energy operation is a function of the gate-size ratio at different values of the power supply. In this brief, circuit designs that are capable of responding to changes in the power supply voltage and adjusting the gate-size ratio accordingly for minimum-energy operation are presented. The dynamically adjustable gate-size ratio allows the gate to preserve a symmetric voltage transfer charac...

### Nyquist-Rate Current-Steering Digital-to-Analog Converters With Random Multiple Data-Weighted Averaging Technique and Q^N Rotated Walk Switching Scheme
Publication Year: 2006, Page(s): 1264–1268
In this brief, Nyquist-rate current-steering digital-to-analog converters (DACs) applying the random multiple data-weighted averaging (RMDWA) technique and the Q^N rotated walk switching scheme are proposed such that high spurious-free dynamic range (SFDR) and small maximum output error can be achieved without calibrations, which are area- and power-consuming. RMDWA suppresses the harmoni...
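The question posed in "On the Conversion Between Number Systems" above has a well-known closed form: the digit at position j (counting from 0 at the least-significant end) of d written in base b is floor(d / b^j) mod b. Whether this matches the brief's exact construction is a guess, since only the abstract is shown; a sketch:

```python
def digit_at(d: int, b: int, j: int) -> int:
    """Digit at position j (j = 0 is least significant) of the
    nonnegative integer d written in base b, without computing
    any of the other digits."""
    return (d // b ** j) % b

print(digit_at(1928, 10, 2))  # 9  (1928 -> digits 1, 9, 2, 8)
print(digit_at(255, 16, 1))   # 15 (255 = 0xFF)
```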
### A Low-Complexity Synchronizer for OFDM-Based UWB System
Publication Year: 2006, Page(s): 1269–1273
In current ultra-wideband (UWB) baseband synchronizer approaches, a parallel architecture is used to meet the over-500-MSamples/s throughput requirement. Therefore, achieving low power and small area becomes the challenge of UWB baseband design. In this paper, a low-complexity synchronizer combining data-partition-based correlation algorithms and a dynamic-threshold design is proposed for orthogonal ...

### Current-Mode Monostable Multivibrators Using OTRAs
Publication Year: 2006, Page(s): 1274–1278
Three nonretriggerable current-mode monostable multivibrators constructed of one operational transresistance amplifier (OTRA) and a few passive elements are presented in this brief. Two of these circuits are operated respectively under positive and negative triggering modes. However, the recovery time cannot be adjusted once the pulsewidth is decided. The third topology, which can work in either t...

## Aims & Scope

Part I will now contain regular papers focusing on all matters related to fundamental theory, applications, analog and digital signal processing. Part II will report on the latest significant results across all of these topic areas.

## Meet Our Editors

Editor-in-Chief: Chi K. Michael Tse, Dept. of Electronic and Information Engineering, Hong Kong Polytechnic University, Hunghom, Hong Kong (cktse@ieee.org)
https://web2.0calc.com/questions/help-please_70458
1. What is the equation of a line that is parallel to −3x + 4y = 4 and passes through the point (4, 0)?

2. In △ABC, the coordinates of vertices A and B are A(1, −1) and B(3, 2). For each of the given coordinates of vertex C, is △ABC a right triangle? Select Right Triangle or Not a Right Triangle for each set of coordinates.

Nov 2, 2018

#1

$$-3x+4y=4 \\ y = \dfrac 3 4 x + 1 \\ \text{slope} = \dfrac 3 4 \\ \text{now we construct a line with slope }\dfrac 3 4 \text{ passing through }(4,0)\\ \text{using point-slope form}\\ (y-0) = \dfrac 3 4 (x-4) \\ y = \dfrac 3 4 x - 3 \\ -3x + 4y = -12$$

Nov 4, 2018

#2

2) I guess the easiest way to do this is to check the Pythagorean theorem for a right triangle. For each given vertex C, find the squared length of each of the three sides of the triangle and sort them. If the triangle is a right triangle, the sum of the two smaller squared sides will equal the largest squared side. I'm going to let you slog through the arithmetic. I suggest using a spreadsheet or something.

Nov 4, 2018
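The check suggested in answer #2 can be scripted instead of done in a spreadsheet. A sketch (the candidate points for C below are made up for illustration, since the question's answer choices aren't shown):

```python
def is_right_triangle(a, b, c):
    """Sort the three squared side lengths; the triangle is right
    iff the two smaller ones sum to the largest (Pythagoras).
    Integer coordinates keep the comparison exact."""
    d2 = sorted(
        (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        for p, q in ((a, b), (b, c), (c, a))
    )
    return d2[0] + d2[1] == d2[2]

A, B = (1, -1), (3, 2)
print(is_right_triangle(A, B, (4, -3)))  # True: right angle at A
print(is_right_triangle(A, B, (0, 3)))   # False
```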
https://www.hackmath.net/en/math-problem/1549
# Five members

Write the first 5 members of a geometric sequence and determine whether it is increasing or decreasing:

a1 = 3, q = −2

Result:
a1 = 3
a2 = −6
a3 = 12
a4 = −24
a5 = 48

### Step-by-step explanation:

a1 = 3
a2 = 3 · (−2) = −6
a3 = 3 · (−2)² = 12
a4 = 3 · (−2)³ = −24
a5 = 3 · (−2)⁴ = 48

Because q = −2 is negative, the terms alternate in sign, so the sequence is neither increasing nor decreasing.

## Related math problems and questions:

• Geometric sequence 3: In a geometric sequence, a8 = 312500 and a11 = 39062500, with sn = 1953124. Calculate the first term a1, the quotient q, and the number of members n from their sum sn.
• Geometric sequence 5: About the members of a geometric sequence we know: ? ? Calculate a1 (first member) and q (common ratio or q-coefficient).
• Sequence: Write the first 6 members of this sequence: a1 = 5, a2 = 7, an+2 = an+1 + 2an.
• Geometric sequence 4: Given a geometric sequence with a3 = 7 and a12 = 3, calculate s23 (the sum of the first 23 members of the sequence).
• Sequence: Write the first 7 members of an arithmetic sequence: a1 = −3, d = 6.
• Geometric sequence: In a geometric sequence, a4 = 20 and a9 = −160. Calculate the first member a1 and the quotient q.
• Sequence - 5 members: Write the first five members of the sequence ?
• GP - 8 items: Determine the first eight members of a geometric progression if a9 = 512 and q = 2.
• Tenth member: Calculate the tenth member of a geometric sequence given a1 = 1/2 and q = 2.
• Five element: A geometric sequence is given by the quotient q = 1/2 and the sum of the first six members S6 = 63. Find the fifth element a5.
• GP members: A geometric sequence has 10 members. The last two members are 2 and −1. Which member is −1/16?
• Geometric progression: In a geometric progression, a1 = 7 and q = 5. Find the condition on n so that the sum of the first n members satisfies sn ≤ 217.
• Sequence 3: Write the first 5 members of an arithmetic sequence: a4 = −35, a11 = −105.
• AS sequence: In an arithmetic sequence, the difference d = −3 and a71 = 455 are given. a) Determine the value of a62. b) Determine the sum of the first 71 members.
• Sequence 2: Write the first 5 members of an arithmetic sequence: a11 = −14, d = −1.
• Gp - 80: One of the first four members of a geometric progression is 80. Find it if we know that the fourth member is nine times greater than the second.
• Sequences AP + GP: Three numbers that make up an arithmetic sequence have a sum of 30. If we subtract 5 from the first, 4 from the second, and keep the third, we get a geometric sequence. Find the AP and GP members.
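The worked problem above (a1 = 3, q = −2) is easy to check with a short script; a sketch:

```python
def geometric_terms(a1, q, n):
    """First n terms of the geometric sequence a_k = a1 * q**(k-1)."""
    return [a1 * q ** k for k in range(n)]

terms = geometric_terms(3, -2, 5)
print(terms)  # [3, -6, 12, -24, 48]

# With q < 0 the signs alternate, so the sequence is neither
# increasing nor decreasing.
```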
https://motls.blogspot.com/2009/05/steven-chu-will-paint-world-white.html?m=1
## Wednesday, May 27, 2009

### Steven Chu will paint the world white

As the U.K. Times has informed us, Steven Chu, a physics Nobel prize winner who became the U.S. secretary of energy, was inspired by his wife's hair and has found a new Al Gore Rhythm to save the world: just paint the world white! ;-)

While darkish surfaces absorb about 80% of the solar radiation, lightish surfaces absorb only 20%. The difference is over 1/2 of the solar radiation. The majority of those 342 watts per square meter can't be eliminated everywhere, because much of the Earth's surface is oceans or forests that are hard to change. But it's enough for the humans to apply the new "light" policy and transform about 3% of the land, i.e. 1% of the planetary surface, from 80% to 20% absorption: the absorbed energy then decreases by 50% * 1% * 342 W/m^2, which is over 1.5 W/m^2. That's enough to compensate for the CO2 warming accumulated in a few centuries.

*A white roof. All cheered as the tarps came tumbling down for the last time. The final roof panels are in place, this historic day, 27 July 2008. We're weathered in! Hallelujah!*

**Using your roof as a fridge of the Earth**

And you know, such a policy would indeed be a much cheaper method to cool the planet than the attempts to regulate carbon dioxide, as we will see. On the other hand, this method shows how ludicrous the attempts to cool the Earth are.

A typical landlord in a warm city doesn't pay much attention to the color of the roof even when it comes to the temperature of her own house. Such a change of the roof color may reduce the average annual-daily equilibrium temperature in the building by a few degrees or so.

Now divide the energy difference produced by one roof over the whole surface of the Earth, 510 million square kilometers. A roof may be 100 square meters, which is 2 x 10^{-13} times the surface of the Earth. The corresponding cooling of the Earth bought by a different color of your roof may be around 10^{-12} °C.
The idea that someone would choose an uglier color in order to help the world to cool down by one picodegree Celsius is laughable. Even if billions of landlords did the same thing - and there are not billions of owners of such houses in the world! - the change would still be a tiny fraction of one degree: millidegrees or centidegrees Celsius. Houses are clearly not enough. We would have to play with the roads and introduce racism into our appraisal of plants: dark plants suck while the white plants are great! (But the dark plants arguably consume more CO2 on average, so it's a very subtle trade-off.) The results would still be absurdly small even if the whole world happened to co-operate with this childish game.

This game may sound like a joke, but if you do the same calculation with the CO2 consumption, you will end up with results that are even more absurd and tiny.

**Carbon dioxide**

You know, cars emit something like 150 grams of CO2 per kilometer. About one half of it, or 0.08 kilograms of CO2, remains in the atmosphere for decades or centuries. The total mass of CO2 in the atmosphere is 3 x 10^{15} kilograms. About one quarter of it, let's say 0.8 x 10^{15} kilograms, was added by the humans since 1800, and it has warmed the planet up by 0.8 °C or so. So 10^{15} kilograms of CO2 adds about 1 °C to the temperature. You can see that one kilometer with a car, or 0.08 kilograms of CO2, adds roughly 8 x 10^{-17} °C. Let me write the number in non-scientific notation because it is more revealing. If you drive your car and add one kilometer, you should feel like a mass killer because you raise the temperature of the Earth by Delta T = 0.00000 00000 00000 08 °C. Especially if you realize that this warming could actually be a (tiny) good thing, you must really feel like another Adolf Hitler who is building new concentration camps by using the car, as James Hansen "teaches" us.
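The back-of-the-envelope numbers above can be reproduced in a few lines. A sketch using the post's own round figures (order-of-magnitude estimates only, not precise climatology):

```python
# Round figures quoted in the post.
solar_flux = 342.0            # W/m^2, average incident solar radiation
albedo_gain = 0.8 - 0.2       # dark roof absorbs 80%, white roof 20%
earth_surface = 510e6 * 1e6   # m^2 (510 million square kilometers)
roof_area = 100.0             # m^2

# Globally averaged forcing removed by whitening one roof:
forcing_per_roof = albedo_gain * solar_flux * roof_area / earth_surface
print(f"{forcing_per_roof:.1e} W/m^2")  # a few times 1e-11

# CO2 side: ~0.08 kg of CO2 stays airborne per km driven, and
# ~1e15 kg of added CO2 corresponds to roughly 1 deg C of warming.
warming_per_km = 0.08 / 1e15  # ~8e-17 deg C per km

# Driving distance whose warming matches one picodegree:
print(round(1e-12 / warming_per_km))  # 12500 km
```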
In the text above, we had a lot of fun with those 10^{-12} °C that a different color of your roof may subtract from the global mean temperature. But once we have looked at the CO2 numbers, this picodegree of cooling (or warming) suddenly looks like a gigantic temperature change. You must drive 12,500 kilometers with your car to warm the Earth by the same picodegree that you "save" by painting your roof white. Indeed, painting the roofs seems as a more efficient way to cool the planet than attempts to reduce our consumption of fossil fuels. Even if a brunette changes her hair color to blonde, the effect on the energy budget of the Earth is equivalent to many kilometers of driving her car. One needs billions of cars driving millions of miles (each) to accumulate one degree. An epilogue by Feynman But none of those "concerned" people has ever made any calculation of this kind. Here they are, slowly coming to life, only to better interpret An Inconvenient Truth. Imagine! In modern times like this, guys are studying to go into society and do something - to be a climate alarmist - and the only way they think that science might be interesting is because their ancient, provincial, medieval problems concerning the judgment day are being confounded slightly by some new phenomena such as white roofs... They don't understand technology; they don't understand their time. And I thank R.P. Feynman for the last two paragraphs from "Is Electricity Fire?", originally written about rabbis, that I could reproduce without any substantial modification. The main difference I see is that the young rabbis would be unable to double the prostitutes' income in Copenhagen, unlike the literally f**king climate policymakers. And that's the memo. #### 1 comment: 1. If you have a dark roof in Florida in the summer time your electricity bill will double to keep the air conditioning running at a comfortable level. 
For six months of the year, you will save up to $900 in Florida for electricity if you have a white roof.
https://hal.inria.fr/hal-01093272
# Fast Rendezvous on a Cycle by Agents with Different Speeds

GANG - Networks, Graphs and Algorithms; LIAFA - Laboratoire d'informatique Algorithmique : Fondements et Applications; Inria Paris-Rocquencourt

Abstract: The difference between the speed of the actions of different processes is typically considered as an obstacle that makes the achievement of cooperative goals more difficult. In this work, we aim to highlight potential benefits of such asynchrony phenomena to tasks involving symmetry breaking. Specifically, in this paper, identical (except for their speeds) mobile agents are placed at arbitrary locations on a (continuous) cycle of length n and use their speed difference in order to rendezvous fast. We normalize the speed of the slower agent to be 1, and fix the speed of the faster agent to be some c > 1. (An agent does not know whether it is the slower agent or the faster one.) The straightforward distributed-race (DR) algorithm is the one in which both agents simply start walking until rendezvous is achieved. It is easy to show that, in the worst case, the rendezvous time of DR is n/(c − 1). Note that in the interesting case, where c is very close to 1 (e.g., c = 1 + 1/n^k), this bound becomes huge. Our first result is a lower bound showing that, up to a multiplicative factor of 2, this bound is unavoidable, even in a model that allows agents to leave arbitrary marks (the whiteboard model), even assuming sense of direction, and even assuming n and c are known to the agents. That is, we show that under such assumptions, the rendezvous time of any algorithm is at least n/(2(c − 1)) if c ≤ 3 and slightly larger (specifically, n/(c + 1)) if c > 3. We then manage to construct an algorithm that precisely matches the lower bound for the case c ≤ 2, and almost matches it when c > 2.
Moreover, our algorithm performs under weaker assumptions than those stated above, as it does not assume sense of direction, and it allows agents to leave only a single mark (a pebble) and only at the place where they start the execution. Finally, we investigate the setting in which no marks can be used at all, and show tight bounds for c ≤ 2, and almost tight bounds for c > 2.

Document type: Conference paper. ICDCN 2014 - 15th International Conference on Distributed Computing and Networking, Jan 2014, Coimbatore, India. DOI: 10.1007/978-3-642-45249-9_1. https://hal.inria.fr/hal-01093272
Contributor: Amos Korman. Submitted on: Wednesday, December 10, 2014, 14:25:11. Last modified on: Thursday, November 15, 2018, 20:27:23.

### Citation

Ofer Feinerman, Amos Korman, Shay Kutten, Yoav Rodeh. Fast Rendezvous on a Cycle by Agents with Different Speeds. ICDCN 2014 - 15th International Conference on Distributed Computing and Networking, Jan 2014, Coimbatore, India. DOI: 10.1007/978-3-642-45249-9_1. hal-01093272.
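The n/(c − 1) worst case quoted in the abstract for the distributed-race algorithm is easy to verify numerically. A sketch covering only the same-direction case (the paper's model is more general):

```python
def dr_meeting_time(n: float, c: float, gap: float) -> float:
    """Time for the faster agent (speed c) to catch the slower agent
    (speed 1) when both walk the same way around a cycle of length n:
    the gap shrinks at rate c - 1."""
    assert 0 <= gap < n and c > 1
    return gap / (c - 1)

def dr_worst_case(n: float, c: float) -> float:
    # Worst case: the initial gap approaches the full cycle length n.
    return n / (c - 1)

print(dr_meeting_time(100, 2.0, 50))  # 50.0
print(dr_worst_case(100, 1 + 1/100))  # huge compared to n when c is near 1
```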
https://www.adv-stat-clim-meteorol-oceanogr.net/4/53/2018/ascmo-4-53-2018.html
Advances in Statistical Climatology, Meteorology and Oceanography: an international open-access journal on applied statistics
Adv. Stat. Clim. Meteorol. Oceanogr., 4, 53-63, 2018. https://doi.org/10.5194/ascmo-4-53-2018. 06 Dec 2018

# An integration and assessment of multiple covariates of nonstationary storm surge statistical behavior by Bayesian model averaging

Tony E. Wong
• Department of Computer Science, University of Colorado, Boulder, CO 80309, USA

Abstract

Projections of coastal storm surge hazard are a basic requirement for effective management of coastal risks. A common approach for estimating hazards posed by extreme sea levels is to use a statistical model, which may use a time series of a climate variable as a covariate to modulate the statistical model and account for potentially nonstationary storm surge behavior (e.g., North Atlantic Oscillation index). Previous works using nonstationary statistical approaches to assess coastal flood hazard have demonstrated the importance of accounting for many key modeling uncertainties. However, many assessments have typically relied on a single climate covariate, which may leave out important processes and lead to potential biases in the projected flood hazards. Here, I employ a recently developed approach to integrate stationary and nonstationary statistical models, and characterize the effects of choice of covariate time series on projected flood hazard. Furthermore, I expand upon this approach by developing a nonstationary storm surge statistical model that makes use of multiple covariate time series, namely, global mean temperature, sea level, the North Atlantic Oscillation index and time.
Using Norfolk, Virginia, as a case study, I show that a storm surge model that accounts for additional processes raises the projected 100-year storm surge return level by up to 23 cm relative to a stationary model or one that employs a single covariate time series. I find that the total model posterior probability associated with each candidate covariate, as well as a stationary model, is about 20 %. These results shed light on how including a wider range of physical process information and considering nonstationary behavior can better enable modeling efforts to inform coastal risk management.

## 1 Introduction

Reliable estimates of storm surge return levels are critical for effective management of flood risks (Nicholls and Cazenave, 2010). Extreme value statistical modeling offers an avenue for estimating these return levels (Coles, 2001). In this approach, a statistical model is used to describe the distribution of extreme sea levels. Modeling uncertainties, however, include whether or not the chosen statistical model appropriately characterizes the sea levels and whether or not the distribution changes over time, that is, nonstationarity (Lee et al., 2017). Process-based modeling offers a mechanistically motivated alternative to statistical modeling (e.g., Fischbach et al., 2017; Johnson et al., 2013; Orton et al., 2016) and carries its own distinct set of modeling uncertainties. Recent efforts to manage coastal flood risk have relied heavily on statistical modeling (e.g., Lempert et al., 2012; Lopeman et al., 2015; Moftakhari et al., 2017; Oddo et al., 2017). In particular, environmental extremes can often carry high risks in terms of widespread damages and economic losses (e.g., Oddo et al., 2017), but extremes are by definition rare, imposing strict limitations on the available data. Extreme value statistical models offer an avenue for estimating extremes, with relatively fewer parameters to constrain as compared to process-based models.
The importance of statistical modeling in managing coastal risk motivates the focus of the present study on characterizing some of the relevant uncertainties in extreme value statistical modeling of flood hazards. Common extreme value distributions for modeling coastal storm surges include generalized extreme value (GEV) models (e.g., Grinsted et al., 2013; Karamouz et al., 2017; Wong and Keller, 2017) and a hybrid Poisson process and generalized Pareto distribution (PP/GPD) model (e.g., Arns et al., 2013; Buchanan et al., 2017; Bulteau et al., 2015; Cid et al., 2016; Hunter et al., 2017; Marcos et al., 2015; Tebaldi et al., 2012; Wahl et al., 2017; Wong et al., 2018). Approaches based on the joint probability method (for example) are another alternative to analyze extreme sea levels, although the focus of the present study is restricted to extreme value distributions (Haigh et al., 2010b; McMillan et al., 2011; Pugh and Vassie, 1978; Tawn and Vassie, 1989). The GEV distribution is the limiting distribution of a convergent sequence of independent and identically distributed sample maxima (Coles, 2001). In extreme sea level analyses, data are frequently binned into sample blocks, and a GEV distribution is assumed as the distribution of the appropriately detrended and processed sample block maxima (where this processing serves to achieve independent and identically distributed sample block maxima). Depending on block sizes (typically annual or monthly), this approach may lead to a rather limited set of data for analysis. By contrast, the PP/GPD modeling approach can yield a richer set of data by making use of all extreme sea level events above a specified threshold (e.g., Arns et al., 2013; Knighton et al., 2017). Additionally, previous studies have demonstrated the difficulties in making robust modeling and processing choices using a GEV and block maxima approach (Ceres et al., 2017; Lee et al., 2017). 
These relative strengths and weaknesses of the GEV versus PP/GPD approaches motivate the present study to focus on constraining uncertainties within the PP/GPD model. Recent works have demonstrated the importance of accounting for nonstationarity in extreme sea levels (Vousdoukas et al., 2018; Wong et al., 2018). To address the question of how the distribution of extreme sea levels is changing over time, many previous studies have employed nonstationary statistical models for storm surge return levels. The typical approach is to fit a spatiotemporal statistical model (e.g., Menéndez and Woodworth, 2010) or to allow some climate index or variable to serve as a covariate that modulates the statistical model parameters (e.g., Ceres et al., 2017; Cid et al., 2016; Grinsted et al., 2013; Haigh et al., 2010a; Lee et al., 2017; Wong et al., 2018). The present study follows and expands upon the modeling approach of Wong et al. (2018) by incorporating nonstationarity into the PP/GPD statistical model and providing a comparison of the projected return levels and a quantification of model goodness of fit under varying degrees of nonstationarity. Relatively few studies, however, have examined the use of multiple covariates or compared the use of several candidate covariates for a particular model application (Grinsted et al., 2013). The present study tackles this issue by considering several potential covariates for extreme value models that have been used previously: global mean surface temperature (Ceres et al., 2017; Grinsted et al., 2013; Lee et al., 2017), global mean sea level (Arns et al., 2013; Vousdoukas et al., 2018), the North Atlantic Oscillation (NAO) index (Haigh et al., 2010a; Wong et al., 2018) and time (i.e., a linear change) (Grinsted et al., 2013). 
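The peaks-over-threshold idea behind the PP/GPD model can be illustrated with a toy script. This sketch fixes the GPD shape at xi = 0 (an exponential tail) and uses synthetic data, so it is a simplification of the paper's full Bayesian calibration:

```python
import math
import random

random.seed(42)
# Synthetic daily surge residuals (meters) standing in for tide-gauge data.
years = 30
surges = [random.expovariate(5.0) for _ in range(365 * years)]

threshold = 0.8  # POT threshold (an arbitrary choice for this sketch)
excesses = [x - threshold for x in surges if x > threshold]

# For GPD shape xi = 0 the excess distribution is exponential, whose
# maximum-likelihood scale parameter is simply the mean excess.
scale = sum(excesses) / len(excesses)
rate = len(excesses) / years  # Poisson rate of exceedances per year

# Level exceeded on average once per T years:
# rate * exp(-(z - threshold) / scale) = 1/T  =>  z = u + scale * ln(rate * T)
T = 100.0
z100 = threshold + scale * math.log(rate * T)
print(f"{len(excesses)} exceedances; 100-year level ~ {z100:.2f} m")
```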
To avoid potential representation uncertainties as much as possible, the attention of the present study is restricted to the Sewells Point tide-gauge site in Norfolk, Virginia, USA (NOAA, 2017b), which is within the region of study of Grinsted et al. (2013). The present study employs a Bayesian model averaging (BMA) approach to integrate and compare various modeling choices for potential climate covariates for the statistical model for extreme sea levels (Wong et al., 2018). The use of BMA permits a quantification of model posterior probability associated with each of the four candidate covariates and illuminates important areas for future modeling efforts. BMA also enables the generation of a new model that incorporates information from all of the candidate covariates and model nonstationarity structures. The main contribution of this work is to demonstrate the ability of the BMA approach to incorporate multiple covariate time series into flood hazard projections and to examine the impacts of different choices of covariate time series. The candidate covariates used here are by no means an exhaustive treatment of the problem domain but rather serve as a proof of concept for further exploration and to provide a characterization of the structural uncertainties inherent in modeling nonstationary extreme sea levels. To summarize, the main questions addressed by the present study are as follows:

1. Which covariates that have been used in previous works to modulate extreme value statistical models for storm surges are favored by the BMA weighting?
2. How do these structural uncertainties affect our projections of storm surge return levels?

The remainder of this work is organized as follows. Section 2 describes the extreme value statistical model used here, the data sets and processing methods employed, the model calibration approach, and the experimental design for projecting flood hazards.
Section 3 presents a comparison of modeling results under the assumptions of the above four candidate covariates, as well as when all four are integrated using BMA. Section 4 interprets the results and discusses the implications for future study, and Sect. 5 provides a concluding summary of the present findings.

# 2 Methods

## 2.1 Data

The tide-gauge station selected for this study is Sewells Point (Norfolk), Virginia, United States (NOAA, 2017b). Norfolk was selected for two reasons. First, the Norfolk tide-gauge record is long and nearly continuous (89 years). Second, Norfolk is within the southeastern region of the United States considered by Grinsted et al. (2013), so the application of global mean surface temperature as a covariate for changes in storm surge statistical characterization is reasonable. This assumption should be examined more closely if the results of the present work are to be interpreted outside this region. It is important to make clear that the assumption of a model structure in which storm surge parameters covary with some time series φ does not imply the assumption of any direct causal relationship. Rather, the use of a covariate φ to modulate the storm surge is meant to take advantage of dependence relationships between the covariate time series and the storm surge. For example, an unknown mechanism could lead to changes in both global mean temperature and storm surge return levels. The fact that temperature does not directly cause the change in storm surge does not mean that temperature is not a useful indicator of changes in storm surge. That is why this work has chosen the term “covariate” for these time series. I consider four candidate covariate time series, φ(t): time, global mean sea level, global mean temperature and the winter mean NAO index. The time covariate is simply the identity function; for example, φ(1928) = 1928 for the year y1 = 1928.
The nonstationary model assuming a time covariate corresponds to the linear trend model considered by Grinsted et al. (2013). For the NAO index covariate time series, I use the historical monthly NAO index data from Jones et al. (1997), and as projections I use the MPI-ECHAM5 sea level pressure projection under the Special Report on Emissions Scenarios (SRES) A1B as part of the ENSEMBLES project (https://www.ensembles-eu.org, last access: 26 March 2018; Roeckner et al., 2003). As forcing to the nonstationary models, I calculate the winter mean (DJF) NAO index following Stephenson et al. (2006). For the temperature time series, I use historical annual global mean surface temperature data from the National Centers for Environmental Information data portal (NOAA, 2017a), and as projections I use the CNRM-CM5 simulation (member 1) under Representative Concentration Pathway 8.5 (RCP8.5) as part of the CMIP5 multi-model ensemble (http://cmip-pcmdi.llnl.gov/cmip5/, last access: 7 July 2017). The time series are provided as anomalies relative to their 20th century mean. For the sea level time series, I use the global mean sea level data set of Church and White (2011) as historical data. For projecting future flood hazard, I use the simulation from Wong and Keller (2017) yielding the ensemble median global mean sea level in 2100 under RCP8.5. Each of the covariate data records and the tide-gauge calibration data record are trimmed to 1928–2013 (86 years), because this is the time period for which all of the historical time series are available. I normalize all of the covariate time series so that the range between the minimum and maximum for the historical period is 0 to 1; the projection period (to 2065) may lie outside of the 0 to 1 range. Thus, all candidate models are calibrated to the same set of observational data, and the covariate time series are all on the same scale, making for a cleaner comparison.
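The min-max normalization of the covariates described above can be sketched as follows. This is a minimal illustration with names of my own choosing (it is not the author's released code), assuming the covariates are stored as NumPy arrays:

```python
import numpy as np

def normalize_covariate(series, hist_mask):
    """Min-max normalize a covariate so its historical span maps to [0, 1].

    Values outside the historical period (e.g., the projection period)
    may fall outside [0, 1], as described in the text.
    """
    series = np.asarray(series, dtype=float)
    lo = series[hist_mask].min()
    hi = series[hist_mask].max()
    return (series - lo) / (hi - lo)

# Hypothetical covariate spanning 1928-2065; historical period 1928-2013.
years = np.arange(1928, 2066)
phi = years.astype(float)          # the "time" covariate is just the year
hist = years <= 2013
phi_norm = normalize_covariate(phi, hist)
```

Applied to the monotone time covariate, the historical values land exactly on [0, 1] and the projection years extend above 1, which is the intended behavior.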
## 2.2 Extreme value model

First, to detrend the raw hourly tide-gauge sea level time series, I subtract a moving window 1-year average (e.g., Arns et al., 2013; Wahl et al., 2017). Next, I compute the time series of detrended daily maximum sea levels. I use a PP/GPD statistical modeling approach, which requires selection of a threshold, above which all data are considered as part of an extreme sea level event. In an effort to maintain independence in the final data set for analysis, these events are declustered such that only the maximal event among multiple events within a given timescale is retained in the final data set. Following many previous studies, I use a declustering timescale of 3 days and a constant threshold matching the 99th percentile of the time series of detrended daily maximum sea levels (e.g., Wahl et al., 2017). The interested reader is directed to Wong et al. (2018) for further details on these methods and to Wong et al. (2018), Wahl et al. (2017) and Arns et al. (2013) for deeper discussion of the associated modeling uncertainties. The probability density function (pdf, f) for the GPD is given by

$$f(x(t) \mid \mu(t), \sigma(t), \xi(t)) = \frac{1}{\sigma(t)} \left( 1 + \xi(t) \, \frac{x(t) - \mu(t)}{\sigma(t)} \right)^{-(1 + 1/\xi(t))}, \tag{1}$$

where μ(t) is the threshold for the GPD model (which does not depend on time t here), σ(t) is the GPD scale parameter (m), ξ(t) is the GPD shape parameter (unitless) and x(t) is sea level height at time t (processed as described above). Note that f only has support when x(t) ≥ μ(t), i.e., for exceedances of the threshold μ.
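The declustering step described above can be sketched as follows. This is a simplified, hypothetical implementation (not the author's released code): it assumes the series has already been detrended and reduced to daily maxima, uses a toy threshold in place of the 99th percentile, and merges exceedances closer than the declustering window into one cluster, keeping only the cluster maximum:

```python
import numpy as np

def decluster(times, values, threshold, window=3):
    """Keep only the largest exceedance within each `window`-day cluster.

    Consecutive exceedances separated by fewer than `window` days are
    treated as one event; only the maximal value is retained.
    """
    exceed = values > threshold
    t, v = times[exceed], values[exceed]
    kept_t, kept_v = [], []
    for ti, vi in zip(t, v):
        if kept_t and ti - kept_t[-1] < window:
            if vi > kept_v[-1]:          # replace the cluster maximum
                kept_t[-1], kept_v[-1] = ti, vi
        else:                            # start a new cluster
            kept_t.append(ti)
            kept_v.append(vi)
    return np.array(kept_t), np.array(kept_v)

# Hypothetical detrended daily-maximum sea levels (meters), one per day.
days = np.arange(10)
sl = np.array([0.1, 1.2, 1.5, 0.2, 0.1, 1.1, 0.1, 0.1, 1.3, 0.1])
u = 1.0                                  # stand-in for the 99th percentile
t_ex, x_ex = decluster(days, sl, u)      # days 1-2 merge; 1.5 survives
```

The surviving exceedances (here on days 2, 5 and 8) form the data set to which the PP/GPD model is fit.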
A Poisson process is assumed to govern the frequency of threshold exceedances:

$$g(n(t) \mid \lambda(t)) = \frac{(\lambda(t) \, \Delta t)^{n(t)}}{n(t)!} \exp\left(-\lambda(t) \, \Delta t\right), \tag{2}$$

where n(t) is the number of exceedances in the time interval t to t + Δt and λ(t) is the Poisson rate parameter (exceedances day⁻¹). Following previous works, nonstationarity is incorporated into the PP/GPD parameters as

$$\begin{cases} \lambda(t) = \lambda_0 + \lambda_1 \, \phi(t), \\ \sigma(t) = \exp\left[\sigma_0 + \sigma_1 \, \phi(t)\right], \\ \xi(t) = \xi_0 + \xi_1 \, \phi(t), \end{cases} \tag{3}$$

where λ0, λ1, σ0, σ1, ξ0 and ξ1 are all unknown constant parameters and φ(t) is a time-series covariate that modulates the behavior of the storm surge PP/GPD distribution (Grinsted et al., 2013; Wong et al., 2018). As in these previous works, I assume that the parameters are stationary within a calendar year.
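The parameter maps of Eq. (3) translate directly into code. The sketch below is illustrative (the function name and tuple ordering are my own, not the author's); setting the slope coefficients λ1, σ1 and ξ1 to zero recovers the stationary model:

```python
import numpy as np

def ppgpd_params(theta, phi):
    """Map constant coefficients and a covariate phi(t) to the
    time-varying PP/GPD parameters of Eq. (3)."""
    l0, l1, s0, s1, x0, x1 = theta
    lam = l0 + l1 * phi            # Poisson rate (exceedances per day)
    sig = np.exp(s0 + s1 * phi)    # GPD scale, kept positive via exp
    xi = x0 + x1 * phi             # GPD shape
    return lam, sig, xi

# Stationary check: zero slopes -> parameters constant in phi.
phi = np.linspace(0.0, 1.0, 5)
lam, sig, xi = ppgpd_params((0.5, 0.0, -1.0, 0.0, 0.1, 0.0), phi)
```

The exponential link for σ(t) guarantees a positive scale parameter for any covariate value, which is one reason for that particular functional form.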
Assuming that each element of the processed data set x is independent, and given a full set of model parameters θ = (λ0, λ1, σ0, σ1, ξ0, ξ1), the joint likelihood function is

$$L(\boldsymbol{x} \mid \boldsymbol{\theta}) = \prod_{i=1}^{N} \left[ g\left(n(y_i) \mid \lambda(y_i)\right) \cdot \prod_{j=1}^{n(y_i)} f\left(x_j(y_i) \mid \mu(y_i), \sigma(y_i), \xi(y_i)\right) \right], \tag{4}$$

where yi denotes the year indexed by i, xj(yi) is the jth threshold exceedance in year yi, n(yi) is the total number of exceedances in year yi and there are N total years of data. The product over exceedances in year yi in this equation is replaced by one for any year with no exceedances. Note that if λ1 = σ1 = ξ1 = 0, the PP/GPD parameters λ(t), σ(t) and ξ(t) are constant, yielding a stationary statistical model. This model is denoted “ST”. If the frequency of threshold exceedances is permitted to be nonstationary, then σ1 = ξ1 = 0, but λ1 is not necessarily equal to zero. This model permits one parameter, λ, to be nonstationary, and it is denoted “NS1”. Similar models are constructed by permitting both λ and σ to be nonstationary while holding ξ1 = 0 (NS2) and permitting all three parameters to be nonstationary (NS3). I consider these four potential model structures for each of the four candidate covariates φ(t) (time, sea level, temperature and the NAO index; see Sect. 2.1).
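A hedged sketch of the joint log-likelihood of Eq. (4) is given below, using only NumPy. All function and argument names are illustrative (not the author's code); the GPD density assumes ξ ≠ 0, and a toy data set stands in for the processed exceedances:

```python
import numpy as np
from math import lgamma

def gpd_logpdf(x, mu, sigma, xi):
    """Log of the GPD density of Eq. (1), for exceedances x >= mu (xi != 0)."""
    z = (x - mu) / sigma
    return -np.log(sigma) - (1.0 + 1.0 / xi) * np.log1p(xi * z)

def poisson_logpmf(n, rate):
    """Log of the Poisson pmf of Eq. (2), with rate = lambda * dt."""
    return n * np.log(rate) - rate - lgamma(n + 1)

def log_likelihood(theta, years, exceedances, mu, phi_by_year, dt=365.25):
    """Log of the joint PP/GPD likelihood of Eq. (4).

    `exceedances` maps each year to its declustered exceedances;
    `phi_by_year` gives the normalized covariate value for each year.
    """
    l0, l1, s0, s1, x0, x1 = theta
    ll = 0.0
    for y in years:
        p = phi_by_year[y]
        lam = l0 + l1 * p                # Eq. (3) parameter maps
        sig = np.exp(s0 + s1 * p)
        xi = x0 + x1 * p
        x = exceedances.get(y, np.array([]))
        ll += poisson_logpmf(len(x), lam * dt)
        if len(x) > 0:                   # product over exceedances j
            ll += gpd_logpdf(np.asarray(x), mu, sig, xi).sum()
    return ll

# Toy example: two years, one with two exceedances above mu = 1.0 m.
theta = (0.01, 0.0, np.log(0.2), 0.0, 0.1, 0.0)   # stationary coefficients
ll = log_likelihood(theta, [2000, 2001],
                    {2000: np.array([1.1, 1.3])},
                    mu=1.0, phi_by_year={2000: 0.0, 2001: 0.5})
```

Years with no exceedances contribute only the Poisson term, mirroring the convention in the text that the empty product over exceedances is replaced by one.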
This yields a set of 13 total candidate models; model ST is the same for all covariates. For each of the 13 candidate models, I use ensembles of PP/GPD parameters, calibrated using observational data and forced using time series for the appropriate covariate, to estimate the 100-year storm surge return level for Norfolk in 2065 (the surge height corresponding to a 100-year return period). Projections for other return periods are available in Appendix A, following this work.

## 2.3 Model calibration

I calibrate the model parameters using a Bayesian parameter calibration approach (e.g., Higdon et al., 2004). As prior information p(θ) for the model parameters, I select 27 tide-gauge sites with at least 90 years of data available from the University of Hawaii Sea Level Center data portal (Caldwell et al., 2015). I process each of these 27 tide-gauge data sets and the Norfolk data that are the focus of this study as described in Sect. 2.2. Then, I fit maximum likelihood parameter estimates for each of the 13 candidate model structures. For each model structure and for each parameter, I fit either a normal or gamma prior distribution to the set of 28 maximum likelihood parameter estimates, based on whether the parameter support is infinite (in the case of λ1, σ1, ξ0 and ξ1) or half-infinite (in the case of λ0 and σ0). The essence of the Bayesian calibration approach is to use Bayes' theorem to combine the prior information p(θ) with the likelihood function L(x|θ) (Eq.
4) as the posterior distribution of the model parameters θ, given the data x:

$$p(\boldsymbol{\theta} \mid \boldsymbol{x}) \propto L(\boldsymbol{x} \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta}). \tag{5}$$

I use a robust adaptive Metropolis–Hastings algorithm to generate Markov chains whose stationary distribution is this posterior distribution (Vihola, 2012), for each of the 13 distinct model structures (combinations of level of nonstationarity and parameter covariate time series). For each distinct model structure, I initialize each Markov chain at maximum likelihood parameter estimates and iterate the Metropolis–Hastings algorithm 100 000 times, for 10 parallel Markov chains (Hastings, 1970; Metropolis et al., 1953). I use Gelman and Rubin diagnostics to assess convergence and remove a burn-in period of 10 000 iterates (Gelman and Rubin, 1992). From the remaining set of 900 000 Markov chain iterates (pooling all 10 parallel chains), I draw a thinned sample of 10 000 sets of parameters for each of the distinct model structures to serve as the final ensembles for analysis.

## 2.4 Bayesian model averaging

In the context of using statistical modeling for estimating flood hazards, there has been some debate over how best to use the limited available information to constrain projections. More complex model structures can incorporate potentially nonstationary behavior (i.e., models NS1–3), but the additional parameters for estimation come at the cost of requiring more data (Wong et al., 2018). Some works have focused on the timescale on which nonstationary behavior may be detected (Ceres et al., 2017), and others have focused on the ability of modern calibration methods to identify correct storm surge statistical model structure (Lee et al., 2017).
Methods such as processing and pooling tide-gauge data into a surge index permit a much richer set of data with which to constrain additional parameters (Grinsted et al., 2013), but the “best” way to reliably process data and make projections remains unclear (Lee et al., 2017). Indeed, Lee et al. (2017) demonstrated that even the surge index methodology of Grinsted et al. (2013), which assimilates data from six tide-gauge stations, likely cannot appropriately identify a fully nonstationary (NS3) model with a global mean temperature covariate. In summary, there is a large amount of model structural uncertainty surrounding model choice (Lee et al., 2017) and the model covariate time series (Grinsted et al., 2013). Bayesian model averaging (BMA; Hoeting et al., 1999) offers an avenue for handling these concerns by combining information across candidate models and weighting the estimates from each model by the degree to which that model is persuasive relative to the others. Using BMA, each candidate model Mk is assigned a weight that is its posterior model probability, p(Mk|x). Each model Mk yields an estimated return level in year yi, RL(yi|Mk). The BMA estimate of the return level can then be written as an average of the return levels as estimated by each candidate model, weighted by each model's BMA weight:

$$\mathrm{RL}(y_i \mid \boldsymbol{x}) = \sum_{k=1}^{m} \mathrm{RL}(y_i \mid M_k) \, p(M_k \mid \boldsymbol{x}), \tag{6}$$

where m is the total number of models under consideration.
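Equation (6) amounts to a weighted average of per-model return levels, with weights given by the posterior model probabilities. A small sketch follows; the log marginal likelihoods and return levels below are invented purely for illustration (they are not values from this study), and a uniform model prior is assumed:

```python
import numpy as np

def bma_weights(log_marglik, log_prior=None):
    """Posterior model probabilities from log marginal likelihoods
    (e.g., as estimated by bridge sampling), via Bayes' theorem.

    Uses the log-sum-exp trick for numerical stability; a uniform
    model prior is assumed when `log_prior` is None.
    """
    log_marglik = np.asarray(log_marglik, dtype=float)
    if log_prior is None:
        log_prior = np.zeros_like(log_marglik)   # uniform prior cancels
    log_w = log_marglik + log_prior
    log_w -= log_w.max()                         # stabilize exponentiation
    w = np.exp(log_w)
    return w / w.sum()

# Hypothetical log marginal likelihoods for four candidate models.
w = bma_weights([-100.0, -100.8, -101.3, -102.4])
rl_by_model = np.array([2.13, 2.20, 2.25, 2.30])  # illustrative 100-year levels (m)
rl_bma = float(np.dot(rl_by_model, w))            # Eq. (6)
```

Because the weights sum to one, the BMA estimate is always bounded by the smallest and largest per-model return levels, while being pulled toward the better-supported models.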
The BMA weights for each model Mk are given by Bayes' theorem and the law of total probability as

$$p(M_k \mid \boldsymbol{x}) = \frac{p(\boldsymbol{x} \mid M_k) \, p(M_k)}{\sum_{j=1}^{m} p(\boldsymbol{x} \mid M_j) \, p(M_j)}. \tag{7}$$

The prior distribution over the candidate models is assumed to be uniform (p(Mi) = p(Mj) for all i, j). The probabilities p(x|Mk) are estimated using bridge sampling and the posterior ensembles from the Markov chain Monte Carlo analysis (Meng and Wong, 1996). For each of the four covariate time series, in addition to the ensembles of 100-year storm surge return levels for each of the four candidate models, I produce a BMA ensemble of 100-year return levels as outlined above. In a final experiment, I pool all 13 distinct model structures to create a BMA ensemble in consideration of all levels of nonstationarity and covariate time series. This BMA-weighted ensemble constitutes a new model structure that takes into account more mechanisms for modulating storm surge behavior: time, temperature, sea level and the NAO index. This experiment has two aims:

1. to assess the degree to which the Norfolk data set informs our choice of covariate time series and
2. to quantify the impacts of single-model or single-covariate choice in the projection of flood hazards.

# 3 Results

## 3.1 Integrating across model structures

The BMA weights for the stationary model (ST) and each of the three nonstationary models (NS1–3) are robust across the changes in the covariate time series employed to modulate the storm surge model parameters (Fig. 1). The ST model receives about 55 % weight, the NS1 model (where the Poisson rate parameter λ is nonstationary) receives about 25 % weight, the NS2 model (where both λ and σ are nonstationary) receives about 15 % weight, and the fully nonstationary NS3 model receives about 5 % weight.
While the stationary model consistently has the highest model posterior probability, the fact that the nonstationary models have appreciable weight associated with them is a clear signal that these processes should not be ignored. In light of these results, it becomes rather unclear which is the “correct” model choice and which covariate is the most appropriate. The latter question will be addressed in Sect. 3.3 and 3.4. The former question is addressed using BMA to combine the information across all of the candidate model structures, for each covariate individually. In this way, BMA permits the use of model structures which may have large uncertainties but are still useful to inform risk management strategies.

Figure 1. Bar plots showing the Bayesian model averaging weight for each of the four candidate models (ST, NS1, NS2 and NS3) using the following as a covariate: (a) time, (b) temperature, (c) sea level and (d) NAO index.

## 3.2 Return levels for individual models

When BMA is used to combine all four candidate ST and nonstationary models for each candidate covariate, the ensemble median projected 100-year return level in 2065 increases by between 4 and 23 cm, depending on the covariate used (Fig. 2). Interestingly, the use of BMA with a global mean temperature or sea level covariate widens the uncertainty range relative to the stationary model (Fig. 2b, c), whereas the BMA-weighted ensembles using time or the NAO index as a covariate tighten the uncertainty range. This is likely attributable to the larger signal in sea level or temperature projections, relative to time or the NAO index. By considering nonstationarity in the PP/GPD shape parameter, model NS3 consistently displays the widest uncertainty range for the 100-year return level and a lower posterior median than a stationary model. This indicates the large uncertainty associated with the GPD shape parameter.
Figure 2. Empirical probability density functions for the 100-year storm surge return level (meters) at Norfolk, Virginia, as estimated using one of the four candidate model structures and using the Bayesian model averaging ensemble. Shown are nonstationary models where the statistical model parameters covary with (a) time, (b) global mean surface temperature, (c) global mean sea level and (d) winter mean NAO index. The bar plots provide the 5 %–95 % credible range (lightest shading), the interquartile range (moderate shading) and the ensemble medians (dark vertical line), for both the stationary model (green) and Bayesian model averaging ensemble (gray).

Figure 3. Bar plot showing the Bayesian model averaging weights for each of the 13 distinct candidate model structures, when all are simultaneously considered.

## 3.3 Integrating time-series information and across model structures

When all 13 distinct model structures are simultaneously considered in the BMA weighting, the models' BMA weights display a clear trend in favor of less-complex structures (Fig. 3). If one wishes to use these results to select a single model for projecting storm surge hazard, then, based on BMA weights, a stationary model would be the appropriate choice. In light of the results of Sect. 3.1, it is not surprising that the fully nonstationary models (NS3) are the poorest choices as measured by BMA weight. The models are assumed to all have uniform prior probability of 1/13 (about 0.077). Therefore, these results indicate stronger evidence for the use of the stationary and NS1 models for modulating storm surges, and weaker evidence for incorporating nonstationarity in, for example, the GPD shape parameter (NS3). One interpretation of these results is that a stationary model (ST) receives about 23 % of the total model posterior probability, which is much more than the next largest BMA weight (about 10 %), so a stationary model is the “correct” choice.
But an alternative and interesting question is raised: how important is each covariate, in consideration of all three nonstationary model structures (NS1–3)? A quantification of the total model posterior probability for each candidate covariate time series is given by adding up the BMA weights associated with each covariate's nonstationary models (Table 1). A stationary model still has the highest total BMA weight (0.23) but is followed closely by a simple linear change in PP/GPD parameters (0.21) as well as temperature, sea level and the NAO index (0.19). Taking this view, the fact that a stationary model has an underwhelming 23 % of the total model weight highlights the importance of accounting for the other 77 %, attributable to nonstationarity.

Table 1. Total BMA weight associated with each candidate covariate's nonstationary models and a stationary model (ST) for the full set of 13 candidate models, all considered simultaneously in the BMA weighting.

## 3.4 Accounting for more processes raises projected return levels

The BMA model that considers all of the covariate time series and levels of (non)stationarity has a median projected 2065 storm surge 100-year return level of 2.36 m (5 %–95 % credible range of 2.14 to 3.07 m; Fig. 4). This more detailed model has a higher central estimate (2.36 m) of flood hazard than all of the single-covariate models' BMA ensembles except for sea level (2.17 m for the NAO index, 2.21 m for time, 2.29 m for temperature and 2.37 m for sea level). When considering the set of multi-model, single-covariate BMA ensembles (Fig. 4, dashed colored lines) and the multi-model, multi-covariate BMA ensemble (Fig. 4, solid black line), there is a substantial amount of uncertainty in these projections of flood hazard attributable to model structure, in particular, with regard to the choice of covariate time series.
Figure 4. Empirical probability density functions for the 100-year storm surge return level (meters) at Norfolk, Virginia, as estimated using Bayesian model averaging with one of the four candidate covariates and using the overall Bayesian model averaging ensemble with all 13 distinct candidate model structures. The bar plots provide the 5 %–95 % credible range (lightest shading), the interquartile range (moderate shading) and the ensemble medians (dark vertical line). The shaded bar plots follow the same order as the legend.

# 4 Discussion

This study has presented and expanded upon an approach to integrate multiple streams of information to modulate storm surge statistical behavior (covariate time series) and to account for the fact that the “correct” model choice is almost always unknown. This approach improves the current status of storm surge statistical modeling by accounting for more processes (multiple covariates and model structures), thereby raising the upper tail of flood hazard (by up to 23 cm for Norfolk) while constraining these additional processes using BMA. These methods will be useful, for example, in rectifying disagreement between previous assessments using nonstationary statistical models for storm surges (e.g., Grinsted et al., 2013; Lee et al., 2017). The results presented here are consistent with those of Wong et al. (2018), who employed a single-covariate BMA model based on the NAO index. Both studies demonstrate that the neglect of model structural uncertainties surrounding model choices leads to the underestimation of flood hazard. These results are in agreement with the work of Lee et al. (2017) and highlight the importance of carefully considering the balance of model complexity against data availability.
Incorporating more complex physical mechanisms into model structures (i.e., nonstationary storm surges) is often important for decision-making, but additional model processes and parameters require more data to constrain them (Wong et al., 2018). If a single-model choice is to be made, then a stationary model may be the natural choice (Table 1). However, this work provides guidance on incorporating nonstationary processes to a degree informed by the model posterior probabilities, in light of the available data. Importantly, the (non)stationary model BMA weights were robust against the changes in the covariate time series used to modulate the storm surge model parameters (Fig. 1). By contrast, the largest uncertainty arises in the projections of flood hazard to 2065, depending on which covariate time series is used (Figs. 2 and 4). The primary contribution of this work is to present an approach to integrate across these covariate time series and overcome this issue of uncertainty in the “correct” covariate time series to use (Fig. 3). Using the stationary model leads to a distribution of the 100-year flood level with a median of 2.13 m and upper tail (95th percentile) of 2.58 m. Using the full multi-model, multi-covariate BMA model, however, substantially raises both the projected center (2.36 m) and upper tail (3.07 m) of the distribution of 100-year flood hazard in 2065, relative to using a stationary model. For Norfolk and the surrounding area, a difference of about 23 cm in the estimated return level can lead to millions of dollars in potential damages (Fugro Consultants, 2016). Thus, the present work also serves to demonstrate the potential risks associated with the selection of a single model structure. Of course, some caveats accompany this analysis. The covariate time series are all deterministic model inputs.
In particular, the temperature and sea level time series do not include the sizable uncertainties in projections of these time series, which in turn depend largely on deeply uncertain future emission pathways. The accounting and propagation of uncertainty and correlation in the covariate time series would be an interesting avenue for future study, but this is beyond the scope of this work. For example, temperature may drive changes in both sea levels and the NAO index, so future works might consider disentangling the effects of the multiple covariate time series. Furthermore, this study only considers derivatives of PP/GPD model structures and does not address the deep uncertainty surrounding the choice of statistical model (Wahl et al., 2017). This study also focuses on a single tide-gauge station (Sewells Point in Norfolk, Virginia). This choice was made in light of the deep uncertainty surrounding how best to process and combine information across stations into a surge index (Lee et al., 2017) and because the Norfolk site is within the region studied by Grinsted et al. (2013), so the application of global mean surface temperature as a covariate is a reasonable extension of those authors' work. Extending these results to regions outside the southeastern part of the United States, however, is an important area for future study. A key strength of the fully nonstationary multi-covariate BMA model is that the methods presented here can be applied to any site, and the model posterior probabilities will allow the data to inform the weight placed on the different (non)stationary models and covariates, in light of the local tide-gauge information. As demonstrated by Grinsted et al. (2013), the use of local temperature or other covariate information may also lead to better constraint on storm surge return levels but also presents challenges for process models to reproduce potentially complex spatial patterns. 
The present study is not intended to be the final word on model selection or projecting storm surge return levels. Rather, this work is intended to present a new approach for generating a model that accounts for more processes and modeling uncertainties and to demonstrate its application to an important area for flood risk management. This study only presents a handful of many potentially useful covariates for storm surge statistical modeling (e.g., Grinsted et al., 2013). Future work should build on the methods presented here and can incorporate other mechanisms known to be important local climate drivers for specific applications.

# 5 Conclusions

This study has presented a case study for Norfolk, Virginia, that demonstrates the use of BMA to integrate flood hazard information across models of varying complexity (stationary versus nonstationary) and modulating model parameters using multiple covariate time series. This work finds that for the Norfolk site, all of the candidate covariates yield similar degrees of confidence in the (non)stationary model structures, and the overall BMA model that employs all four candidate covariates projects a higher flood hazard in 2065. These results provide guidance on how best to incorporate nonstationary processes into flood hazard projections, and a framework to integrate other locally important climate variables, to better inform coastal risk management practices.

Code availability. All codes are freely available from https://github.com/tonyewong/covariates.

Data availability. All data and modeling and analysis codes are freely available from https://github.com/tonyewong/covariates (https://doi.org/10.5281/zenodo.1718069).

Appendix A

Table A1. CMIP5 models employed in the present study.

Table A2. Quantiles of the estimated storm surge return levels (meters) for Norfolk (Sewells Point) in 2065 using the full nonstationary multi-covariate BMA model.
Figure A1. Storm surge return periods (years) and associated return levels (meters) in 2065 for Norfolk, using the full nonstationary multi-covariate BMA model. The solid line indicates the ensemble median, the darkest shaded region denotes the 25th–75th percentile range, the medium shaded region denotes the 5th–95th percentile range and the lightest shaded region denotes the 2.5th–97.5th percentile range.

Author contributions. TW did all of the work.

Competing interests. The author declares that there is no conflict of interest.

Acknowledgements. I gratefully acknowledge Klaus Keller, Vivek Srikrishnan and Dale Jennings for fruitful conversations. This work was co-supported by the National Science Foundation (NSF) through the Network for Sustainable Climate Risk Management (SCRiM) under NSF cooperative agreement GEO-1240507 and the Penn State Center for Climate Risk Management. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation. I acknowledge the World Climate Research Programme's Working Group on Coupled Modelling, which is responsible for CMIP, and I thank the climate modeling groups (listed in Table A1 of this paper) for producing their model output and making it available. For CMIP the US Department of Energy's Program for Climate Model Diagnosis and Intercomparison provided coordinating support and led development of software infrastructure in partnership with the Global Organization for Earth System Science Portals. The ENSEMBLES data used in this work were funded by the EU FP6 Integrated Project ENSEMBLES (contract number 505539), whose support is gratefully acknowledged.

Edited by: Michael Wehner
Reviewed by: two anonymous referees

References

Arns, A., Wahl, T., Haigh, I.
D., Jensen, J., and Pattiaratchi, C.: Estimating extreme water level probabilities: A comparison of the direct methods and recommendations for best practise, Coast. Eng., 81, 51–66, https://doi.org/10.1016/j.coastaleng.2013.07.003, 2013. Buchanan, M. K., Oppenheimer, M., and Kopp, R. E.: Amplification of flood frequencies with local sea level rise and emerging flood regimes, Environ. Res. Lett., 12, 064009, https://doi.org/10.1088/1748-9326/aa6cb3, 2017. Bulteau, T., Idier, D., Lambert, J., and Garcin, M.: How historical information can improve estimation and prediction of extreme coastal water levels: application to the Xynthia event at La Rochelle (France), Nat. Hazards Earth Syst. Sci., 15, 1135–1147, https://doi.org/10.5194/nhess-15-1135-2015, 2015. Caldwell, P. C., Merrfield, M. A., and Thompson, P. R.: Sea level measured by tide gauges from global oceans – the Joint Archive for Sea Level holdings (NCEI Accession 0019568), Version 5.5, NOAA Natl. Centers Environ. Information, Dataset, https://doi.org/10.7289/V5V40S7W, 2015. Ceres, R., Forest, C. E., and Keller, K.: Understanding the detectability of potential changes to the 100-year peak storm surge, Clim. Change, 145, 221–235, https://doi.org/10.1007/s10584-017-2075-0, 2017. Church, J. A. and White, N. J.: Sea-level rise from the late 19th to the early 21st century, Surv. Geophys., 32, 585–602, https://doi.org/10.1007/s10712-011-9119-1, 2011. Cid, A., Menéndez, M., Castanedo, S., Abascal, A. J., Méndez, F. J., and Medina, R.: Long-term changes in the frequency, intensity and duration of extreme storm surge events in southern Europe, Clim. Dynam., 46, 1503–1516, https://doi.org/10.1007/s00382-015-2659-1, 2016. Coles, S. G.: An introduction to Statistical Modeling of Extreme Values, Springer, 208, 2001. Fischbach, J. R., Johnson, D. 
R., and Molina-Perez, E.: Reducing Coastal Flood Risk with a Lake Pontchartrain Barrier, Santa Monica, CA, USA, available at: https://www.rand.org/pubs/research_reports/RR1988.html (last access: 10 August 2017), 2017. Fugro Consultants, Inc.: Lafayette River Tidal Protection Alternatives Evaluation, Work Order No. 7, Fugro Project No. 04.8113009, City of Norfolk City-wide Coastal Flooding Contract, available at: https://www.norfolk.gov/DocumentCenter/View/25170 (last access: 15 May 2017), 2016. Gelman, A. and Rubin, D. B.: Inference from Iterative Simulation Using Multiple Sequences, Stat. Sci., 7, 457–511, https://doi.org/10.1214/ss/1177011136, 1992. Grinsted, A., Moore, J. C., and Jevrejeva, S.: Projected Atlantic hurricane surge threat from rising temperatures., P. Natl. Acad. Sci. USA, 110, 5369–5373, https://doi.org/10.1073/pnas.1209980110, 2013. Haigh, I., Nicholls, R., and Wells, N.: Assessing changes in extreme sea levels: Application to the English Channel, 1900–2006, Cont. Shelf Res., 30, 1042–1055, https://doi.org/10.1016/j.csr.2010.02.002, 2010a. Haigh, I. D., Nicholls, R., and Wells, N.: A comparison of the main methods for estimating probabilities of extreme still water levels, Coast. Eng., 57, 838–849, https://doi.org/10.1016/j.coastaleng.2010.04.002, 2010b. Hastings, W. K.: Monte Carlo sampling methods using Markov chains and their applications, Biometrika, 57, 97–109, 1970. Higdon, D., Kennedy, M., Cavendish, J. C., Cafeo, J. A., and Ryne, R. D.: Combining Field Data and Computer Simulations for Calibration and Prediction, SIAM J. Sci. Comput., 26, 448–466, https://doi.org/10.1137/S1064827503426693, 2004. Hoeting, J. A., Madigan, D., Raftery, A. E., and Volinsky, C. T.: Bayesian Model Averaging: A Tutorial, Stat. Sci., 14, 382–417, https://www.jstor.org/stable/2676803 (last access: 20 May 2017), 1999. Hunter, J. R., Woodworth, P. L., Wahl, T., and Nicholls, R. 
J.: Using global tide gauge data to validate and improve the representation of extreme sea levels in flood impact studies, Global Planet. Change, 156, 34–45, https://doi.org/10.1016/j.gloplacha.2017.06.007, 2017. Johnson, D. R., Fischbach, J. R., and Ortiz, D. S.: Estimating Surge-Based Flood Risk with the Coastal Louisiana Risk Assessment Model, J. Coast. Res., 67, 109–126, https://doi.org/10.2112/SI_67_8, 2013. Jones, P. D., Jonsson, T., and Wheeler, D.: Extension to the North Atlantic oscillation using early instrumental pressure observations from Gibraltar and south-west Iceland, Int. J. Climatol., 17, 1433–1450, https://doi.org/10.1002/(SICI)1097-0088(19971115)17:13<1433::AID-JOC203>3.0.CO;2-P, 1997. Karamouz, M., Ahmadvand, F., and Zahmatkesh, Z.: Distributed Hydrologic Modeling of Coastal Flood Inundation and Damage: Nonstationary Approach, J. Irrig. Drain. Eng., 143, 1–14, https://doi.org/10.1061/(ASCE)IR.1943-4774.0001173, 2017. Knighton, J., Steinschneider, S., and Walter, M. T.: A Vulnerability-Based, Bottom-up Assessment of Future Riverine Flood Risk Using a Modified Peaks-Over-Threshold Approach and a Physically Based Hydrologic Model, Water Resour. Res., 53, 1–22, https://doi.org/10.1002/2017WR021036, 2017. Lee, B. S., Haran, M., and Keller, K.: Multi-decadal scale detection time for potentially increasing Atlantic storm surges in a warming climate, Geophys. Res. Lett., 44, 10617–10623, https://doi.org/10.1002/2017GL074606, 2017. Lempert, R., Sriver, R. L., and Keller, K.: Characterizing Uncertain Sea Level Rise Projections to Support Investment Decisions, California Energy Commission, publication number: CEC-500-2012-056, Santa Monica, CA, USA, 2012. Lopeman, M., Deodatis, G., and Franco, G.: Extreme storm surge hazard estimation in lower Manhattan: Clustered separated peaks-over-threshold simulation (CSPS) method, Nat. Hazards, 78, 355–391, https://doi.org/10.1007/s11069-015-1718-6, 2015. Marcos, M., Calafat, F. 
M., Berihuete, Á., and Dangendorf, S.: Long-term variations in global sea level extremes, J. Geophys. Res. Ocean., 120, 8115–8134, https://doi.org/10.1002/2015JC011173, 2015. McMillan, A., Batstone, C., Worth, D., Tawn, J., Horsburgh, K., and Lawless, M.: Coastal Flood Boundary Conditions for UK Mainland and Islands, Bristol, UK, available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/291216/scho0111btki-e-e.pdf (last access: 5 January 2018), 2011. Menéndez, M. and Woodworth, P. L.: Changes in extreme high water levels based on a quasi-global tide-gauge data set, J. Geophys. Res. Ocean, 115, 1–15, https://doi.org/10.1029/2009JC005997, 2010. Meng, X. L. and Wing, H. W.: Simulating ratios of normalizing constants via a simple identity: a theoretical exploration, Stat. Sin., 6, 831–860, 1996. Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E.: Equation of state calculations by fast computing machines, J. Chem. Phys., 21, 1087, https://doi.org/10.1063/1.1699114, 1953. Moftakhari, H. R., AghaKouchak, A., Sanders, B. F., and Matthew, R. A.: Cumulative hazard: The case of nuisance flooding, Earth's Futur., 5, 214–223, https://doi.org/10.1002/2016EF000494, 2017. Nicholls, R. J. and Cazenave, A.: Sea Level Rise and Its Impact on Coastal Zones, Science, 328, 1517–1520, https://doi.org/10.1126/science.1185782, 2010. NOAA: National Centers for Environmental Information, Climate at a Glance: Global Time Series, available at: http://www.ncdc.noaa.gov/cag/ (last access: 7 June 2017), 2017a. NOAA: NOAA Tides and Currents: Sewells Point, VA – Station ID: 8638610, National Oceanic and Atmospheric Administration (NOAA), available at: https://tidesandcurrents.noaa.gov/stationhome.html?id=8638610 (last access: 17 February 2017), 2017b. Oddo, P. C., Lee, B. S., Garner, G. G., Srikrishnan, V., Reed, P. M., Forest, C. 
E., and Keller, K.: Deep Uncertainties in Sea-Level Rise and Storm Surge Projections: Implications for Coastal Flood Risk Management, Risk Anal., https://onlinelibrary.wiley.com/doi/full/10.1111/risa.12888, 2017. Orton, P. M., Hall, T. M., Talke, S. A., Blumberg, A. F., Georgas, N., and Vinogradov, S.: A validated tropical-extratropical flood hazard assessment for New York Harbor, J. Geophys. Res.-Ocean., 121, 8904–8929, https://doi.org/10.1002/2016JC011679, 2016. Pugh, D. T. and Vassie, J. M.: Extreme Sea Levels From Tide and Surge Probability, Coast. Eng., 911–930, https://doi.org/10.1061/9780872621909.054, 1978. Roeckner, E., Bäuml, G., Bonaventura, L., Brokopf, R., Esch, M., Giorgetta, M., Hagemann, S., Kornblueh, L., Schlese, U., Schulzweida, U., Kirchner, I., Manzini, E., Rhodin, A., Tompkins, A., Giorgetta, Hagemann, S., Kirchner, I., Kornblueh, L., Manzini, E., Rhodin, A., Schlese, U., Schulzweida, U., and Tompkins, A.: The atmospheric general circulation model ECHAM5 Part I, Max-Planck-Institute for Meteorology, 349, 2003. Stephenson, D. B., Pavan, V., Collins, M., Junge, M. M., and Quadrelli, R.: North Atlantic Oscillation response to transient greenhouse gas forcing and the impact on European winter climate: A CMIP2 multi-model assessment, Clim. Dynam., 27, 401–420, https://doi.org/10.1007/s00382-006-0140-x, 2006. Tawn, J. A. and Vassie, J. M.: Extreme Sea Levels: the Joint Probabilities Method Revisited and Revised, Proc. Inst. Civ. Eng., 87, 429–442, https://doi.org/10.1680/IICEP.1989.2975, 1989. Tebaldi, C., Strauss, B. H., and Zervas, C. E.: Modelling sea level rise impacts on storm surges along US coasts, Environ. Res. Lett., 7, 014032, https://doi.org/10.1088/1748-9326/7/1/014032, 2012. Vihola, M.: Robust adaptive Metropolis algorithm with coerced acceptance rate, Stat. Comput., 22, 997–1008, https://doi.org/10.1007/s11222-011-9269-5, 2012. Vousdoukas, M. I., Mentaschi, L., Voukouvalas, E., Verlaan, M., Jevrejeva, S., Jackson, L. 
P., and Feyen, L.: Global probabilistic projections of extreme sea levels show intensification of coastal flood hazard, Nat. Commun., 9, 2360, https://doi.org/10.1038/s41467-018-04692-w, 2018. Wahl, T., Haigh, I. D., Nicholls, R. J., Arns, A., Dangendorf, S., Hinkel, J., and Slangen, A. B. A.: Understanding extreme sea levels for broad-scale coastal impact and adaptation analysis, Nat. Commun., 8, 16075, https://doi.org/10.1038/ncomms16075, 2017. Wong, T. E. and Keller, K.: Deep Uncertainty Surrounding Coastal Flood Risk Projections: A Case Study for New Orleans, Earth's Future, 5, 1015–1026, https://doi.org/10.1002/2017EF000607, 2017. Wong, T. E., Klufas, A., Srikrishnan, V., and Keller, K.: Neglecting model structural uncertainty underestimates upper tails of flood hazard, Environ. Res. Lett., 13, 074019, https://doi.org/10.1088/1748-9326/aacb3d, 2018.
https://k12.libretexts.org/Bookshelves/Mathematics/Geometry/03%3A_Lines/3.06%3A_Alternate_Exterior_Angles
# 3.6: Alternate Exterior Angles

Angles on opposite sides of a transversal, but outside the lines it intersects.

Alternate exterior angles are two angles that are on the exterior of $$l$$ and $$m$$, but on opposite sides of the transversal.

Alternate Exterior Angles Theorem: If two parallel lines are cut by a transversal, then the alternate exterior angles are congruent. If $$l \parallel m$$, then $$\angle 1\cong \angle 2$$.

Converse of the Alternate Exterior Angles Theorem: If two lines are cut by a transversal and the alternate exterior angles are congruent, then the lines are parallel. If $$\angle 1\cong \angle 2$$, then $$l \parallel m$$.

What if you were presented with two angles that are on the exterior of two parallel lines cut by a transversal but on opposite sides of the transversal? How would you describe these angles and what could you conclude about their measures?

For Examples $$\PageIndex{1}$$ and $$\PageIndex{2}$$, use the following diagram:

Example $$\PageIndex{1}$$

Give an example of a pair of alternate exterior angles.

Solution

$$\angle 1$$ and $$\angle 14$$ (many other possibilities)

Example $$\PageIndex{2}$$

Give another example of a pair of alternate exterior angles.

Solution

$$\angle 2$$ and $$\angle 13$$ (many other possibilities, must be different than answer to Example 1)

Example $$\PageIndex{3}$$

Find the measure of each angle and the value of $$y$$.

Solution

The angles are alternate exterior angles. Because the lines are parallel, the angles are equal.

\begin{align*} (3y+53)^{\circ} &=(7y−55)^{\circ} \\ 108 &=4y \\ 27 &=y \end{align*}

If $$y=27$$, then each angle is $$[3(27)+53]^{\circ}=134^{\circ}$$.

Example $$\PageIndex{4}$$

The map below shows three roads in Julio’s town. Julio used a surveying tool to measure two angles at the intersections in this picture he drew (NOT to scale). Julio wants to know if Franklin Way is parallel to Chavez Avenue.

Solution

The $$130^{\circ}$$ angle and $$\angle a$$ are alternate exterior angles.
If $$m \angle a=130^{\circ}$$, then the lines are parallel.

\begin{align*} \angle a+40^{\circ} &=180^{\circ} && \text{by the Linear Pair Postulate} \\ \angle a &=140^{\circ} \end{align*}

$$140^{\circ}\neq 130^{\circ}$$, so Franklin Way and Chavez Avenue are not parallel streets.

Example $$\PageIndex{5}$$

Which lines are parallel if $$\angle AFG\cong \angle IJM$$?

Solution

These two angles are alternate exterior angles, so if they are congruent it means that $$\overleftrightarrow{CG}\parallel \overleftrightarrow{HK}$$.

## Review

1. Find the value of $$x$$ if $$m \angle 1=(4x+35)^{\circ}$$ and $$m \angle 8=(7x−40)^{\circ}$$.
2. Are lines 1 and 2 parallel? Why or why not?

For 3-6, what does the value of $$x$$ have to be to make the lines parallel?

3. $$m \angle 2=(8x)^{\circ}$$ and $$m \angle 7=(11x−36)^{\circ}$$
4. $$m \angle 1=(3x+5)^{\circ}$$ and $$m \angle 8=(4x−3)^{\circ}$$
5. $$m \angle 2=(6x−4)^{\circ}$$ and $$m \angle 7=(5x+10)^{\circ}$$
6. $$m \angle 1=(2x−5)^{\circ}$$ and $$m \angle 8=(x)^{\circ}$$

For 7-10, determine whether the statement is true or false.

7. Alternate exterior angles are always congruent.
8. If alternate exterior angles are congruent then lines are parallel.
9. Alternate exterior angles are on the interior of two lines.
10. Alternate exterior angles are on opposite sides of the transversal.

## Vocabulary

Term | Definition
--- | ---
alternate exterior angles | Alternate exterior angles are two angles that are on the exterior of two different lines, but on the opposite sides of the transversal.
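The arithmetic in Example 3 is easy to sanity-check mechanically. The short script below (plain Python, no libraries) solves the linear equation that the Alternate Exterior Angles Theorem produces and confirms that both angle expressions give the same measure.

```python
# Example 3 check: alternate exterior angles are congruent when the lines
# are parallel, so 3y + 53 = 7y - 55.  Solve the linear equation directly.
def solve_linear(a1, b1, a2, b2):
    """Solve a1*y + b1 = a2*y + b2 for y (assumes a1 != a2)."""
    return (b2 - b1) / (a1 - a2)

y = solve_linear(3, 53, 7, -55)
print(y)            # 27.0
print(3 * y + 53)   # 134.0 degrees, matching 7*27 - 55 = 134
```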
https://www.physicsforums.com/threads/implicit-explicit-differenciation-and-inflection-point-of-a-graph.153100/
Implicit/Explicit Differenciation, and Inflection point of a Graph 1. Jan 25, 2007 clragon I have two questions, one about differentiating, and the other is about finding the inflection point on a function. Any help would be greatly appreciated. Question 1 1. The problem statement, all variables and given/known data If ay^3 = x^4, show that d^2y/dx^2 = 4x^2/9ay^2 now, this question can be done by both implicit or explicit differentiation. I can leave the ay^3 = x^4 as it is and start simplifying right away, or I could isolate y by making the equation y = (x^4a^-1)^(1/3) 2. Relevant equations knowledge of the rules of differentiating. 3. The attempt at a solution I am going to list two different attempts, one implicit and one explicit. Implicit ay^3 = x^4 a3y^2 dy/dx = 4x^3 dy/dx = 4x^3/ a3y^2 d^2y/dx^2 = (12x^2((a3y^2)-6ya(4x^3)(dy/dx)) / (3y^2a)^2 d^2y/dx^2 = (36x^2y^2a - 24yax^3(dy/dx) ) / (3y^2a)^2 *I then subbed the previous found value for dy/dx into dy/dx in this equation. d^2y/dx^2 = (36x^2y^2a - 96yax^6/3y^2a) / (3y^2a)^2 d^2y/dx^2 = (36x^2y^2a - 32x^6/y) / 9y^4a^2 I stopped here because I could find no way to make the a^2 at the bottom become just a, similarily the y^4 could not be reduced down to y^2 like in the question. Explicit y = (x^4a^-1)^(1/3) y' = (1/3)(x^4a^-1)^(-2/3)(4x^3a^-1) y' = [(x^4a^-1)^(-2/3)(4x^3a^-1)]/3 y'' = [3(12x^2a^-1)^(-2/3)(x^4a^-1)^(-5/3)(4x^3a^-1)]/9 y'' = [-2(48x^5(a^-2)(x^4a^-1)^(-5/3)]/9 y'' = [-96xa^-1(x^4a^-1)(x^4a^-1)^(-5/3)]/9 y'' = -96x / 9a [(x^4/a^1)^(1/3)]^2 since (x^4/a^1)^(1/3) = y y'' = -96x / 9ay^2 as you can see, I got a lot further by the final answer is still wrong... I have part of the answer but have a -96x instead of 4x... could someone please be kind enough to solve this question, or tell me where I went wrong... if it is possible please also provide a solution to the implicit way of solving this question. Question 2 1. 
Find the point of inflection of y = x^4 - 4x^3 + 6x^2 + 12x at a glance, this question may seem very easy, but for some reason I can't get the right answer (-0.4 and 2.4) 2. to find inflection point, you find the 2nd derivative of the function and solve for x/b] 3. The attempt at a solution y = x^4 - 4x^3 + 6x^2 + 12x y' = 4x^3 - 12x^2 + 12x + 12 y'' = 12x^2 - 24x + 12 let y'' = 0 0 = 12(x^2 - 2x + 1) 0 = 12(x-1)(x-1) x = 1 I put this function into a graphing calculator, it showed no apparent change in concavity at x = 1. I set the min y = -100 and max y = 100 while keeping min x = -20 and max x = 20 to get a better view. the graph seemed to be a parabola but like the answer suggested, the concavity is slightly different between -0.4 and 2.4. anyone wishing to see the graph can just use http://www.coolmath.com/graphit/index.html" [Broken] and paste in the function. remember the set the min y, max y, min x, max x values so that you can see the graph as described by me. Thanks for you time. Last edited by a moderator: May 2, 2017 2. Jan 26, 2007 chanvincent Question 1: For your first attempts, you are stucked in the last step: $$\frac{d^2y}{dx^2} = \frac{(36x^2y^2a - 32x^6/y)}{ 9y^4a^2}$$ You could simplify it one step further to get $$\frac{d^2y}{dx^2} = \frac{4x^2}{ay^2} - \frac{32x^6}{9a^2y^5}$$ The first term on the RHS is very similar to what you wanna prove, but the second term mess up the whole thing, therefore you must find a way to get rid of the second term... Try to expand the $$x^6$$ into $$x^2x^4$$ and substitude the $$x^4$$ in the second term by the original equation, $$x^4 = ay^3$$ This will solve your problem.... For the second attempt, try to do it in a simpler maner... 
$$y = (\frac{x^4}{a})^{1/3} = a^{-1/3}x^{4/3}$$ $$\frac{dy}{dx} = \frac{4}{3}a^{-1/3} x^{1/3}$$ $$\frac{d^2y}{dx^2} = \frac{4}{3} \frac{1}{3} a^{-1/3}x^{-2/3} = \frac{4}{9} a^{-1/3}\frac{x^2}{x^{8/3}} = \frac{4}{9} a^{-1/3}\frac{x^2}{a^{2/3}y^2} = \frac{4x^2}{9ay^2}$$

For question 2, your solution is absolutely correct; maybe the answer is wrong or you misunderstand the question. Good luck!

Last edited: Jan 26, 2007

3. Jan 26, 2007

clragon

Thank you so much :D Your second solution was so much easier than mine, but I'm curious where I went wrong in my solution. Did I make an arithmetic mistake? Or does using the power rule in this equation eventually give me -96x at the top?
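Both results in this thread can also be checked numerically, without any symbolic algebra. The sketch below uses a central finite-difference approximation of the second derivative to confirm d²y/dx² = 4x²/(9ay²) for an arbitrary choice of a, and then confirms that y'' = 12(x − 1)² never changes sign, consistent with chanvincent's suggestion that the printed inflection-point answer is wrong.

```python
# Question 1: for a*y^3 = x^4, i.e. y = (x^4/a)^(1/3), verify numerically
# that d2y/dx2 equals 4x^2 / (9*a*y^2) at a sample point.
a = 2.0  # arbitrary positive constant

def f(x):
    return (x ** 4 / a) ** (1.0 / 3.0)

def second_derivative(g, x, h=1e-4):
    """Central finite-difference approximation of g''(x)."""
    return (g(x + h) - 2.0 * g(x) + g(x - h)) / h ** 2

x0 = 1.7
lhs = second_derivative(f, x0)
rhs = 4.0 * x0 ** 2 / (9.0 * a * f(x0) ** 2)
print(abs(lhs - rhs))  # small: the closed form matches the numerics

# Question 2: y'' = 12x^2 - 24x + 12 = 12(x - 1)^2 >= 0 everywhere, so the
# quartic never changes concavity and has no inflection point.
def ypp(x):
    return 12 * x ** 2 - 24 * x + 12

assert all(ypp(t / 10.0) >= 0.0 for t in range(-100, 101))
```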
http://gmatclub.com/blog/category/blog/gmat-tests/page/6/
# GMAT Question of the Day (May 17) - May 17, 02:00 AM   Comments [0]

Math

Which of the following is closest to ? A. 0.50 B. 0.89 C. 0.98 D. 1.02 E. 1.05

Question Discussion & Explanation Correct Answer - C - (click and drag your mouse to see the answer)

GMAT Daily Deals Manhattan GMAT: 99 percentile teachers and superior curriculum at a more affordable price. Save @...

# GMAT Question of the Day (May 16) - May 16, 02:00 AM   Comments [0]

Math

The function is defined by for all nonzero numbers . If and , what is the value of ? A. B. C. D. $2$

# GMAT Question of the Day (May 13) - May 13, 02:00 AM   Comments [0]

Math

Sequence is defined as follows: If what is ? A. -3 B. -1 C. 1 D. 2 E. 3

Question Discussion & Explanation Correct Answer - B

# GMAT Question of the Day (May 12) - May 12, 02:00 AM   Comments [0]

Math

Is an even integer? (1) is an even integer. (2) is an even integer.

Question Discussion & Explanation Correct Answer - E

# GMAT Question of the Day (May 11) - May 11, 02:00 AM   Comments [0]

Math

Which of the following points is not on the line ? A. B. C. D. E.

Question Discussion & Explanation Correct Answer - E

# GMAT Question of the Day (May 10) - May 10, 02:00 AM   Comments [0]

Math

If among 20 students in a group, 5 study math, 10 study physics, and 6 study chemistry, are there any students who do not study any of the above-mentioned subjects? (1) There are no students studying all of the three subjects. (2) None of those who study...

# GMAT Question of the Day (May 9) - May 9, 02:00 AM   Comments [0]

Math

A right cylindrical tank with radius of the base 1 and height 2 is half full of water. If all the water is poured into another cylinder tank with height 1 and radius of the base 2, to what percent of its capacity will this...

# GMAT Question of the Day (May 6) - May 6, 02:00 AM   Comments [0]

Math

There are three lamps in a hall. If each lamp can be switched on and off independently, in how many ways can the hall be illuminated? (The hall is illuminated when at least one of the lamps is on.) A. 5 B. 6 C. 7 D. 8 E. 9

Question Discussion...

# GMAT Question of the Day (May 5) - May 5, 02:00 AM   Comments [0]

Math

Set consists of all prime integers less than 10. If a number is selected from set at random and then another number, not necessarily different, is selected from set at random, what is the probability that...

# GMAT Question of the Day (May 4) - May 4, 02:00 AM   Comments [0]

Math

How many odd three-digit integers greater than 800 are there such that all their digits are different? A. 40 B. 56 C. 72 D. 81 E. 104

Question Discussion & Explanation Correct Answer - C
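Two of the questions above can be verified by direct computation. The snippet below brute-forces the May 4 counting question, and also checks the cylinder problem, reading the truncated May 9 question as asking what percent of the second tank's capacity the water fills (that completion of the wording is an assumption).

```python
import math

# May 4: odd three-digit integers greater than 800 with all digits different.
# range(801, 1000, 2) enumerates exactly the odd integers from 801 to 999.
count = sum(1 for n in range(801, 1000, 2) if len(set(str(n))) == 3)
print(count)  # 72, matching answer C

# May 9 (assumed completion): a radius-1, height-2 cylinder is half full,
# so the water volume is pi * 1^2 * 1; the second tank holds pi * 2^2 * 1.
water = math.pi * 1 ** 2 * (2 / 2)
capacity = math.pi * 2 ** 2 * 1
print(100 * water / capacity)  # 25.0 percent of the new tank's capacity
```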
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-11th-edition/chapter-4-section-4-3-logarithmic-functions-4-3-exercises-page-424/76
## College Algebra (11th Edition) $\log_b\dfrac{km}{a}$ $\bf{\text{Solution Outline:}}$ Use the Laws of Logarithms to write the given expression, $\log_bk+\log_bm-\log_ba ,$ as a single logarithm. $\bf{\text{Solution Details:}}$ Using the Product Rule of Logarithms, which is given by $\log_b (xy)=\log_bx+\log_by,$ the expression above is equivalent to \begin{array}{l}\require{cancel} \log_b(km)-\log_ba .\end{array} Using the Quotient Rule of Logarithms, which is given by $\log_b \dfrac{x}{y}=\log_bx-\log_by,$ the expression above is equivalent to \begin{array}{l}\require{cancel} \log_b\dfrac{km}{a} .\end{array}
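A quick numeric spot-check of the combined identity derived above, using base 10 and arbitrary positive values chosen here for $k$, $m$, and $a$:

```python
import math

# Verify log_b(k) + log_b(m) - log_b(a) = log_b(k*m/a) with b = 10.
k, m, a = 7.0, 3.0, 5.0  # any positive values work
lhs = math.log10(k) + math.log10(m) - math.log10(a)
rhs = math.log10(k * m / a)
print(abs(lhs - rhs) < 1e-12)  # True
```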
http://science.sciencemag.org/content/347/6226/news-summaries
# News this Week Science  06 Mar 2015: Vol. 347, Issue 6226, pp. 1048 1. # This week's section ### Asia's cities swell as population surges Over the past decade, East and Southeast Asia have experienced an urbanization boom unlike any the world has ever seen. From China and Japan to the Philippines and Indonesia, the urban population of 17 countries in East and Southeast Asia increased from 738 million people in 2000 to 969 million in 2010. But the rate of expansion of urban land area—2% annually, on average, over that period—did not keep up with the rate of population change, which was about 2.8% per year, according to a 4 March report in Environmental Research Letters. Instead, Asia's teeming metropolises are cramming ever more humanity within existing city limits—confounding predictions that the cities will greatly expand their footprints as migrants flood in. “The assumption from past research has been that cities of all sizes will eventually decline in density,” says author Annemarie Schneider, a geographer at University of Wisconsin, Madison. “This study reveals the opposite.” The trend may seem obvious to Asian cities straining to provide basic services for burgeoning populations. But for urban planners, the findings, Schneider says, could change “how officials plan and adapt to urbanization in the future.” ### Seeing a virus in 3D Physicists can take pictures of tiny things from chemical nanostructures to proteins to living cells. But 3D biological particles, like viruses, have proved elusive. To take a 2D image, scientists send pulses of high-energy x-rays through the particle and record the resulting diffraction patterns. Theoretically, they could stitch together multiple 2D images, each taken at a different angle, to create a 3D picture—but they'd need to know how the particle was oriented in space when each picture was taken. 
Now, researchers working with the Linac Coherent Light Source at SLAC National Accelerator Laboratory in Menlo Park, California, have devised an algorithm that can figure out how hundreds of such diffraction patterns fit together to form a complete 3D image of a sample—a technique, they reported this week in Physical Review Letters, that can reveal both the external shape and internal structure of a single particle. They tested their technique by imaging mimivirus (shown), a rather large virus that is probably not infectious. But the algorithm should be able to handle much smaller and more dangerous viruses, including influenza, herpes, and HIV. http://scim.ag/3Dvirus ### Rise in U.S. lab animals The number of animals used by the top U.S.-funded biomedical research institutions has risen 73% over 15 years, a “dramatic increase,” according to an analysis by People for the Ethical Treatment of Animals (PETA). Although federal law requires that research labs report their use of cats, dogs, and nonhuman primates, smaller vertebrates—including rodents—are exempt. To get a sense of the trends, PETA obtained data from inventories submitted to the National Institutes of Health (NIH) every 4 years. The top 25 NIH-funded institutions housed a daily average of 74,600 animals from 1997 to 2003; that leaped to an average of 128,800 a day by 2008 to 2012, a 73% increase, PETA reports in the Journal of Medical Ethics. Most of the animals were mice. This parallels a rise in the use of transgenic mice internationally, PETA says. NIH cautioned that using the inventory data to track animal numbers is “inappropriate” because the data don't show usage, but are only a “snapshot” that NIH uses to make sure institutions have adequate veterinary care. http://scim.ag/labanirise “Long before being nerdy was cool, there was Leonard Nimoy.” President Barack Obama, in a tribute to Nimoy, who played Star Trek's beloved Mr. Spock. Nimoy died last week at age 83. 
$41.5 million—Amount dedicated last week by the National Institutes of Health to the Human Placenta Project to study the mass of tissue that sustains a developing fetus. 4.1—Average number of Oriental rat fleas—known to carry plague and typhus in the past—per New York City rat in a Journal of Medical Entomology survey. Values below 1 indicate minimal risk of epidemic disease spread. 1—Number of physicists now on the U.S. House of Representatives' science committee as of last week, when Representative Bill Foster (D–IL) joined. ## Around the world ### Brussels Push for E.U. energy bloc The European Commission announced a plan on 25 February to create a unified energy market, where “energy flows freely across borders,” according to the so-called Energy Union proposal. The plan calls for more research and innovation on energy efficiency and renewable energy technologies to help transform energy systems, maintain Europe's technological leadership, and boost export prospects. This would help wean the bloc from fraught gas imports, hitting Russian President Vladimir Putin “where it hurts most,” says Guy Verhofstadt, a liberal member of the European Parliament from Belgium. But green groups have criticized the plan for putting too much emphasis on fossil fuels and nuclear energy—“yesterday's instead of tomorrow's technologies,” says Rebecca Harms, a Green member of the European Parliament from Germany. The proposal will next be discussed by the European Parliament and member states. ### Minneapolis, Minnesota Trials under scrutiny A damning report released last week on how the University of Minnesota protects volunteers in its clinical trials charged the university with inadequate review of research studies and failure to sufficiently protect the most vulnerable subjects. 
Examining protocols from 20 active trials and meeting minutes from the institutional review board (IRB), the reviewers found “little discussion of the risks and benefits” to volunteers, and noted that there were often no IRB members with expertise in a protocol present during its review. The report comes after years of complaints by academics inside and outside the school, who claimed the school failed to protect 27-year-old Dan Markingson, who died by suicide in 2004 while enrolled in a psychiatric drug trial. At press time, the Faculty Senate was preparing to meet with University President Eric Kaler and the authors of the report. Senior administrators say they hope to develop a plan to respond to the report within 60 days. http://scim.ag/Minntrials

### Greenwich, Connecticut: New database for oldest fossils

Hoping to help scientists understand the origin and evolution of life on Earth, a new repository of data about the world's oldest fossils was launched last week. The Fossil Calibration Database (http://fossilcalibrations.org/), funded by the National Evolutionary Synthesis Center, will offer scientists a reliable anchor point from which they can accurately date new fossils and determine when species branched off from their family tree. New fossils are discovered all the time, but until now there was no centralized list of the oldest, so many estimates of evolutionary change rely on “really outdated information,” says paleontologist Daniel Ksepka of the Bruce Museum in Greenwich, Connecticut. He co-led the team of more than 20 paleontologists, molecular biologists, and computer programmers behind the project. To ensure the new resource remains a gold standard, new finds will be regularly added after careful vetting by specialists.

### Argonne, Illinois: Ask A Scientist shuts down

One of the Internet's oldest sources of science information for the public is closing its virtual doors.
Argonne National Laboratory announced last month that it will discontinue its Newton – Ask A Scientist program on 1 March. Argonne created the service in 1991 as a way for students and teachers to connect with scientists. Volunteer scientists have answered 20,000 questions over the years, from “Why does steel rust?” to “What happens to light in a black hole?” But the website was outdated and its use was declining, says Meridith Bruozas, Argonne's manager of educational programs and outreach. “As technology has advanced … it kind of doesn't serve its purpose anymore.” Instead, the lab has shifted to using Twitter, Facebook, reddit, and Google Hangouts to give students a way to quiz scientists.

## Newsmakers

### HIV researcher admits fraud

In an unusual turn for a scientific misconduct case, a former HIV researcher at Iowa State University (ISU) has pleaded guilty to federal fraud charges. Dong-Pyou Han resigned in 2013, shortly before the federal Office of Research Integrity (ORI) found he had faked data in a rabbit study of an HIV vaccine for a National Institutes of Health (NIH) grant proposal. ORI barred Han from seeking grants for 3 years, but Senator Chuck Grassley (R–IA) complained that the punishment was too light for a study that cost taxpayers millions of dollars. ISU later returned $500,000 and NIH withheld a $1.4 million award. Han faces up to 10 years in prison on two felony counts of making false statements; his sentencing is set for 29 May.

### Three Q's

After 42 years at the Massachusetts Institute of Technology in Cambridge, including 16 years as an administrator, physicist Marc Kastner knows the value of basic research—and how to convince rich people to support it at a premier research institution. Last week he announced he was leaving to become the first president of the Science Philanthropy Alliance—a job that will give him the chance to make the case on a national scale. http://scim.ag/_Kastner

Q: How will the alliance operate?
A: It will not raise any money for itself. Instead, we're trying to increase gifts to universities or help create new foundations that will fund basic research.

Q: Why is that so important today?

A: There's been a tilt in federal funding toward things that are more applied and more translational. My task is to explain to potential donors the enormous opportunities for doing exciting things in basic science and the satisfaction they will get out of that.

Q: Is it OK if the well-endowed universities simply get richer?

A: Absolutely. If foundations choose to be concerned about geography, that's their business. But my experience with these foundations is that they really want to fund the best people to do the best research. And that's fine with me.

# To catch a wave

Adrian Cho*

After decades of work, physicists say they are a year or two away from detecting ripples in spacetime.

This patch of woodland just north of Livingston, Louisiana, population 1893, isn't the first place you'd go looking for a breakthrough in physics. Standing on a small overpass that crosses an odd arching tunnel, Joseph Giaime, a physicist at Louisiana State University (LSU), 55 kilometers west in Baton Rouge, gestures toward an expanse of spindly loblolly pine, parts of it freshly reduced to stumps and mud. “It's a working forest,” he says, “so they come in here to harvest the logs.” On a quiet late fall morning, it seems like only a logger or perhaps a hunter would ever come here. Yet it is here that physicists may fulfill perhaps the most spectacular prediction of Albert Einstein's theory of gravity, or general relativity. The tunnel runs east to west for 4 kilometers and meets a similar one running north to south in a nearby warehouselike building. The structures house the Laser Interferometer Gravitational-Wave Observatory (LIGO), an ultrasensitive instrument that may soon detect ripples in space and time set off when neutron stars or black holes merge.
Einstein himself predicted the existence of such gravitational waves nearly a century ago. But only now is the quest to detect them coming to a culmination. The device in Livingston and its twin in Hanford, Washington, ran from 2002 to 2010 and saw nothing. But those Initial LIGO instruments aimed only to prove that the experiment was technologically feasible, physicists say. Now, they're finishing a $205 million rebuild of the detectors, known as Advanced LIGO, which should make them 10 times more sensitive and, they say, virtually ensure a detection. “It's as close to a guarantee as one gets in life,” says Peter Saulson, a physicist at Syracuse University in New York, who works on LIGO.

Detecting those ripples would open a new window on the cosmos. But it won't come easy. Each tunnel contains a pair of mirrors that form an “optical cavity,” within which infrared light bounces back and forth. To look for the stretching of space, physicists will compare the cavities' lengths. But they'll have to sense that motion through the din of other vibrations. Glancing at the pavement on the overpass, Giaime says that the ground constantly jiggles by about a millionth of a meter, shaken by seismic waves, the rumble of nearby trains, and other things. LIGO physicists have to shield the mirrors from such vibrations so that they can see the cavities stretch or shorten by distances 10 trillion times smaller—just a billionth the width of an atom.

In 1915, Einstein explained that gravity arises when mass and energy warp space and time, or spacetime. A year later, he predicted that massive objects undergoing the right kind of oscillating motion should emit ripples in spacetime—gravitational waves that zip along at light speed. For decades that prediction remained controversial, in part because the mathematics of general relativity is so complicated.
Einstein himself at first made a technical error, says Rainer Weiss, a physicist at the Massachusetts Institute of Technology (MIT) in Cambridge. “Einstein had it right,” he says, “but then he [messed] up.” Some theorists argued that the waves were a mathematical artifact and shouldn't actually exist. In 1936, Einstein himself briefly took that mistaken position. Even if the waves were real, detecting them seemed impossible, Weiss says. At a time when scientists knew nothing of the cosmos's gravitational powerhouses—neutron stars and black holes—the only obvious source of waves was a pair of stars orbiting each other. Calculations showed that they would produce a signal too faint to be detected.

By the 1950s, theorists were speculating about neutron stars and black holes, and they finally agreed that the waves should exist. In 1969, Joseph Weber, a physicist at the University of Maryland, College Park, even claimed to have discovered them. His setup included two massive aluminum cylinders 1.5 meters long and 0.6 meters wide, one of them in Illinois. A gravitational wave would stretch a bar and cause it to vibrate like a tuning fork, and electrical sensors would then detect the stretching. Weber saw signs of waves pinging the bars together. But other experimenters couldn't reproduce Weber's published results, and theorists argued that his claimed signals were implausibly strong.

Still, Weber's efforts triggered the development of LIGO. In 1969, Weiss, a laser expert, had been assigned to teach general relativity. “I knew bugger all about it,” he says. In particular, he couldn't understand Weber's method. So he devised his own optical method, identifying the relevant sources of noise. “I worked it out for myself, and I gave it to the students as a homework problem,” he says. Weiss's idea, which he published in 1972 in an internal MIT publication, was slow to catch on.
“It was obvious to me that this was pie in the sky and it would never work,” recalls Kip Thorne, a theorist at the California Institute of Technology (Caltech) in Pasadena, California. Thorne recorded his skepticism in Gravitation, the massive textbook that he co-wrote and published in 1973. “I had an exercise that said ‘Show that this technology will never work to detect gravitational waves,’” Thorne says.

### Video

Take an aerial tour of LIGO at http://scim.ag/aerialLIGO.
http://docs.glueviz.org/en/stable/gui_guide/components.html
# Defining New Components

New components of data items can be easily created from mathematical operations on existing components. In this section, we define new components for the W5 Point Source catalog from the tutorial.

Right-click on the w5_psc item in the Data Collection window and select Define new component: A new window will appear for defining components. Double-clicking on any of the Available Components will add it to the expression line. You can also type the name of the component – it will appear in blue if it is valid and in red if not, when separated by spaces from other parts of the expression. Here we define a new component "24 - 3.6" to be the difference between 24 micron and 3.6 micron magnitudes. Remember to select the data item on the Add to window (here, w5_psc). After clicking OK, the new component is available for plotting and other uses.

Furthermore, the expression line can include Numpy functions (prefaced with np.), and anything else you import in your config.py file for Glue. For example, if you wished to define a component expressing the 24 micron flux density in Janskys, you could use the np.power function:
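The flux-density conversion hinted at above can be sketched in plain numpy outside of Glue. This is a hedged illustration: the 7.17 Jy zero point for the 24 micron band is an assumption (a commonly quoted Spitzer/MIPS value), not something stated on this page, and `mag_to_jy` is a name invented here for the sketch.

```python
import numpy as np

# Hedged sketch: magnitude -> flux density in Janskys, F = F0 * 10^(-m / 2.5).
# The zero point F0 = 7.17 Jy is an ASSUMED value for the 24 micron band;
# substitute the zero point appropriate to your catalog.
F0_JY = 7.17

def mag_to_jy(mag, zero_point=F0_JY):
    # np.power is the same function the Glue expression line would use
    return zero_point * np.power(10.0, -np.asarray(mag) / 2.5)

print(mag_to_jy(0.0))   # the zero point itself, 7.17 Jy
print(mag_to_jy(2.5))   # 2.5 magnitudes is one decade fainter, about 0.717 Jy
```

Inside the define-component dialog, the same arithmetic would be typed on the expression line using np.power and the 24 micron magnitude component.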
https://me.gateoverflow.in/676/gate2016-3-25
# GATE2016-3-25

In a PERT chart, the activity time distribution is

1. Normal
2. Binomial
3. Poisson
4. Beta

## Related questions

A firm uses a turning center, a milling center and a grinding machine to produce two parts. The table below provides the machining time required for each part and the maximum machining time available on each machine. The profit per unit on parts $I$ and $II$ are $Rs. 40$ ...

| Machine | Part $I$ (minutes) | Part $II$ (minutes) | Maximum machining time available per week (minutes) |
|---|---|---|---|
| Turning machine | $12$ | $6$ | $6000$ |
| Milling center | $4$ | $10$ | $4000$ |
| Grinding machine | $2$ | $3$ | $1800$ |

The demand for a two-wheeler was $900$ units and $1030$ units in April $2015$ and May $2015$, respectively. The forecast for the month of April $2015$ was $850$ units. Considering a smoothing constant of $0.6$, the forecast for the month of June $2015$ is

1. $850$ units
2. $927$ units
3. $965$ units
4. $970$ units

In a single-channel queuing model, the customer arrival rate is $12$ per hour and the serving rate is $24$ per hour. The expected time that a customer is in queue is _______ minutes.

In the notation $(a/b/c) : (d/e/f)$ for summarizing the characteristics of queuing situation, the letters $‘b’$ and $‘d’$ stand respectively for

1. service time distribution and queue discipline
2. number of servers and size of calling source
3. number of servers and queue discipline
4. service time distribution and maximum number allowed in system
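Two of the related questions reduce to short arithmetic. Here is a sketch of the standard formulas with my own worked numbers (the page above carries no answer key, so treat these as a check, not an official solution):

```python
# Simple exponential smoothing: F_next = F_current + alpha * (D - F_current).
def smooth(forecast, demand, alpha=0.6):
    return forecast + alpha * (demand - forecast)

f_may = smooth(850, 900)      # April forecast 850, April demand 900
f_june = smooth(f_may, 1030)  # May demand 1030
print(round(f_may), round(f_june))   # 880 970, pointing at the 970-unit option

# M/M/1 queue: expected time in queue Wq = lam / (mu * (mu - lam)).
lam, mu = 12.0, 24.0                 # arrivals and services per hour
wq_minutes = lam / (mu * (mu - lam)) * 60
print(round(wq_minutes, 2))          # 2.5 minutes
```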
https://here.isnew.info/python-in-arcgis-pro-exercises.html
# Python in ArcGIS Pro exercises

Institute for Environmental and Spatial Analysis, University of North Georgia

## 1   How to practice these exercises

Please type every single letter when you write code. Do NOT copy and paste any code from my slides. Remember your eyes and fingers learn. Copy and paste is like trying to learn a new natural language without speaking it.

Will there be any new materials? Yes, but most of them are not important at this point. They are just there to complete our exercises. I added NEW SYNTAX WARNINGs and tried to explain new syntax, but don’t worry about it even if you don’t understand what it means. Still, type it.

## 2   Folder structure for these exercises

If you’re not familiar with file management (e.g., where to locate certain files and how to copy files to a certain folder), please use the same folder structure in this exercise and follow my instructions as is.

1. Open File Explorer.
   1. Windows Key + E
2. Move to the root of the C drive.
   1. Alt+D
   2. Type C:\
   3. Hit Enter
3. Create a new folder called PythonExercises.
   1. Right click inside the main panel of File Explorer where you can see the list of folders and files in the C drive
   2. New ⇒ Folder
   3. Rename New Folder to PythonExercises
4. Now, do you see PythonExercises under C:\?

## 3   Add field script

In this exercise, we want to create a Python script that can add a new field to an existing Shapefile.

### 3.1   How to create add_field.py

Do you know how to export history to a Python script?

1. Using the same technique from this section, create a new folder called Field inside C:\PythonExercises.
2. Download Counties_Georgia.zip and extract the Counties_Georgia Shapefile into C:\PythonExercises\Field.
   • You should have C:\PythonExercises\Field\Counties_Georgia.shp and its auxiliary files in the same folder. I didn’t mean to extract all into C:\PythonExercises\Field\Counties_Georgia.
3. Add C:\PythonExercises\Field\Counties_Georgia.shp to the map.
4.
Find the “Add Field” geoprocessing tool. Remember we’re not using the Field ⇒ Add icon in the attribute table.
   • Input Table: Counties_Georgia
   • Field Name: Test
   • Field Type: Double
5. Run.
6. Go to Analysis ⇒ History.
7. Right click on the “Add Field” history.
8. Save As Python Script to C:\PythonExercises\Field\add_field.py.
9. Open C:\PythonExercises\Field\add_field.py in a text editor. Do you see this code?

import arcpy
arcpy.management.AddField("Counties_Georgia", "Test", "DOUBLE", None, None, None, '', "NULLABLE", "NON_REQUIRED", '')

### 3.2   How to modify add_field.py

Now, let’s try to make the script more useful.

# we need the sys module to access arguments from the command line
import sys
# we need the arcpy module to run ArcGIS Pro tools
import arcpy
# path to a feature class as the first argument
fc_path = sys.argv[1]
# field name as the second argument
field_name = sys.argv[2]
# field type as the third argument
field_type = sys.argv[3]
# run Add Field
arcpy.management.AddField(fc_path, field_name, field_type, None, None, None, '', "NULLABLE", "NON_REQUIRED", '')

### 3.3   How to run add_field.py

1. Make sure to close ArcGIS Pro (preferred) or remove the layer from the map (sometimes, ArcGIS Pro may still lock the Shapefile).
2. Open a cmd window.
   1. Windows Key + R
   2. Type cmd
   3. Hit Enter
3. Change the drive if you’re not already in C:.

   C:

4. Change the directory to your script folder C:\PythonExercises\Field.

   cd \PythonExercises\Field

5. Run the script.

   python add_field.py Counties_Georgia.shp TestText Text

6. You should know how to check whether or not the script worked.

## 4   Calculate Field script

In this exercise, we want to create a Python script that can populate a field using simple algebra without code blocks.

### 4.1   How to create calc_field.py

Oh! You still don’t know? Here is how.

1. Add the already extracted Counties_Georgia Shapefile to the map.
2. Find the Calculate Field tool from the Geoprocessing tab, pane, panel (?), whatever...
   • Input Table: Counties_Georgia
   • Field Name: TestText (from the Section 3 exercise)
   • TestText = “Test”
3. Run.
4. Go to Analysis ⇒ History.
5. Right click on the “Calculate Field” history.
6. Save As Python Script to C:\PythonExercises\Field\calc_field.py.
7. Open C:\PythonExercises\Field\calc_field.py in a text editor. Do you see this code?

import arcpy
arcpy.management.CalculateField("Counties_Georgia", "TestText", '"Test"', "PYTHON3", '', "TEXT")

### 4.2   How to modify calc_field.py

Again, let’s make it more useful.

# we need the sys module to access arguments from the command line
import sys
# we need the arcpy module to run ArcGIS Pro tools
import arcpy
# path to a feature class as the first argument
fc_path = sys.argv[1]
# field name as the second argument
field_name = sys.argv[2]
# algebraic expression as the third argument
expr = sys.argv[3]
# optional code block as the fourth argument
# NEW SYNTAX WARNING!!!
# ternary operator
# <true_expression> if <condition> else <false_expression>
# if you get it, great! if not, just type it for now
block = sys.argv[4] if len(sys.argv) > 4 else ''
# the last argument is used only when the specified field doesn't exist
# according to Esri's help at
# https://pro.arcgis.com/en/pro-app/tool-reference/data-management/calculate-field.htm
# set it to None
# arcpy.management.CalculateField(fc_path, field_name, expr, "PYTHON3", block, None)
# or simply remove it
arcpy.management.CalculateField(fc_path, field_name, expr, "PYTHON3", block)

### 4.3   How to run calc_field.py

1. Make sure to close ArcGIS Pro (preferred) or remove the layer from the map (sometimes, ArcGIS Pro may still lock the Shapefile).
2. Open a cmd window.
   1. Windows Key + R
   2. Type cmd
   3. Hit Enter
3. Change the drive if you’re not already in C:.

   C:

4. Change the directory to your script folder C:\PythonExercises\Field.

   cd \PythonExercises\Field

5. Run the script.
   python calc_field.py Counties_Georgia.shp TestText "'Test Test'"

   • In the above example, make sure to quote Test Test twice with different quotes. Why? You want to pass 'Test Test' to Calculate Field, not Test Test, because the TestText field is in Text, so you need to quote its value. The first quote will be stripped off by the command line.

6. Again, you know how to check whether or not the script worked, right?

## 5   Summary statistics script

In this exercise, let’s see how to extract field statistics.

### 5.1   How to create field_stats.py

You know how, right?

1. Add the same Counties_Georgia Shapefile to the map.
2. Find the Summary Statistics tool.
   • Input Table: Counties_Georgia
   • Output Table: Counties_Georgia_Statistics
   • Field and Statistic Type:
     - totpop10: Mean
     - totpop10: Standard deviation
3. Run.
4. Go to Analysis ⇒ History.
5. Right click on the “Summary Statistics” history.
6. Save As Python Script to C:\PythonExercises\Field\field_stats.py.
7. Open C:\PythonExercises\Field\field_stats.py in a text editor. Do you see this code?

import arcpy
arcpy.analysis.Statistics("Counties_Georgia", r"C:\Users\geni\AppData\Local\Temp\ArcGISProTemp30748\bfb83c98-7e80-4565-8e78-fc5db62d0f00\Default.gdb\Counties_Georgia_Statistics", "PopDens MEAN;PopDens STD", None)

### 5.2   How to modify field_stats.py

# we need the sys module to access arguments from the command line
import sys
# we need the arcpy module to run ArcGIS Pro tools
import arcpy
# overwrite existing files
arcpy.env.overwriteOutput = True
# path to a feature class as the first argument
fc_path = sys.argv[1]
# path to the output table as the second argument
table_path = sys.argv[2]
# field statistics to calculate as the third argument
field_stats = sys.argv[3]
arcpy.analysis.Statistics(fc_path, table_path, field_stats, None)

# NEW SYNTAX WARNING!!!
# open file for reading
f = open(table_path)
# print each line
for line in f:
    # each line from f already contains a new line, so don't print a new line
    # again (end='')
    print(line, end='')
# close file
f.close()

### 5.3   How to run field_stats.py

For output, you have to type .\ to create it in the current folder and use the .csv extension to be able to read it without using Excel.

python field_stats.py Counties_Georgia.shp .\totpop10_stats.csv "totpop10 MEAN; totpop10 STD"

## 6   Revisit Homework 1

### 6.1   Calculate the areas of counties in $\text{km}^2$

Use the same Counties_Georgia Shapefile from the above exercises, not the one you submitted for Homework 1 because the latter already has the SqKm field.

python add_field.py Counties_Georgia.shp SqKm Double
python calc_field.py Counties_Georgia.shp SqKm "2.58999 * !Sq_Miles!"

Did it work for you?

### 6.2   Calculate the population density in $\text{people}/\text{km}^2$

python add_field.py Counties_Georgia.shp PopDens Double
python calc_field.py Counties_Georgia.shp PopDens "!totpop10! / !SqKm!"

Did it (not?) work again?

### 6.3   Calculate the mean and standard deviation of the population density

python field_stats.py Counties_Georgia.shp .\PopDens_stats.csv "PopDens MEAN; PopDens STD"

Did you get this output?

FID,FREQUENCY,MEAN_PopDens,STD_PopDens
,159,72.526758495576033,143.460174992006614

FID is empty, FREQUENCY is 159, and the mean and standard deviation of the population density are $72.53\ \text{people}/\text{km}^2$ and $143.46\ \text{people}/\text{km}^2$, respectively.

### 6.4   Calculate the population density category

python add_field.py Counties_Georgia.shp PopDensCat Text
python calc_field.py Counties_Georgia.shp PopDensCat "'Low' if !PopDens! < 72.53 - 143.46 else 'Medium' if !PopDens! < 72.53 + 143.46 else 'High'"

Did it work?

NEW SYNTAX WARNING!!! Nested ternary operators!

Here, we want to assign ‘Low’ if !PopDens! is less than $72.53 - 143.46$.

'Low' if !PopDens! < 72.53 - 143.46

Else we want to assign ‘Medium’ if !PopDens! is less than $72.53 + 143.46$.

else 'Medium' if !PopDens! < 72.53 + 143.46

Otherwise, we want to assign ‘High’.

else 'High'
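The nested ternary is plain Python, so you can try it outside ArcGIS. A minimal sketch using the Section 6.3 cutoffs (the function name `pop_dens_cat` is mine; the expression body mirrors the PopDensCat expression, with a regular argument standing in for !PopDens!):

```python
MEAN, STD = 72.53, 143.46   # population density mean and std from Section 6.3

def pop_dens_cat(pop_dens):
    # same nested ternary as the Calculate Field expression:
    # 'Low' below mean - std, 'Medium' below mean + std, else 'High'
    return ('Low' if pop_dens < MEAN - STD
            else 'Medium' if pop_dens < MEAN + STD
            else 'High')

print(pop_dens_cat(10.0))    # Medium (mean - std is negative, so no real county is Low)
print(pop_dens_cat(500.0))   # High
```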
https://techwhiff.com/learn/write-a-progress-note-for-this-scenario-wound/143700
# Write a progress note for this Scenario

###### Question:

Write a progress note for this Scenario.

Wound Assessment: Mr. Santiago, a 62 year old gentleman, is transferred to your ward from the Emergency department. Mr. Santiago had a Left Total Knee replacement done 2 weeks ago. He presented to ED with fever and severe pain in his Right knee. He stated that he could not stand or walk for the last 2 days due to pain and weakness in his knee. He has a history of Diabetes Mellitus, osteoarthritis and is an average smoker. You have been called in to assess the wound and remove alternate staples (as per doctor's advice).
https://www.physicsforums.com/threads/minimization-operator-problem.968399/
# Minimization Operator Problem

#### joshmccraney

**1. The problem statement, all variables and given/known data**

Given a Hilbert space $$V = \left\{ f\in L_2[0,1] \;\Big|\; \int_0^1 f(x)\, dx = 0\right\}, \qquad B(f,g) = \langle f,g\rangle, \qquad l(f) = \int_0^1 x f(x) \, dx,$$ find the minimum of $$B(u,u)+2l(u).$$

**2. Relevant equations**

In my text I found a variational theorem stating this minimization problem is equivalent to solving $$B(f,g)+l(f) = 0$$ for fixed $g$ and all $f$.

**3. The attempt at a solution**

Plugging in values yields $$\int_0^1 f(g+x)\, dx = 0.$$ But from here I'm not sure how to handle the answer. Any suggestions? I'm guessing either $g=-x$ or else $f$ must be orthogonal to the function $g+x$ on the interval $[0,1]$. Unsure how to proceed.

#### Ray Vickson (Homework Helper)

You have $$B(u,u) + 2 l(u) = \int_0^1 u^2(x) \, dx + 2\int_0^1 x u(x) \, dx = \int_0^1[ u(x)^2 + 2 x u(x)] \, dx.$$ The integrand equals $(u(x)+x)^2 - x^2$.

#### joshmccraney

Right, this is what I must minimize (this integral). I'm not sure how to do that.
But the theorem I wrote only requires the integral be zero. Am I missing something?

#### fresh_42 (Mentor)

The integral over $x^2$ is a constant, so the minimization is the one of the integral of $(u(x)+x)^2$. Now the question is, where is $u$ from to make it minimal?

#### joshmccraney

Okay, I see what you're saying. Isn't $u \in L_2[0,1]$ with $\int_0^1 u\, dx = 0$? Since the integrand is squared, it must be true that the integral can be no less than zero. I'd say this implies $u = -x$, except that $\langle -x,1\rangle \neq 0$. Ideas?

#### fresh_42 (Mentor)

That's the question. If $u \in L^2([0,1])$ then $u=-x$ is the solution. If $u \in V$ then it's probably $u=0$, since $-x\notin V$, but that needs to be proven. It's the question: which element of $V$ is closest to $x$ in $L^2([0,1])$.

#### joshmccraney

Okay yeah, I agree with you. But how do you know $u\in V \implies u=0$? The closest is likely $u=1/2-x$?

#### fresh_42 (Mentor)

You're right, my guess - and it was one - isn't closer than yours. That's why I said it needs a proof. Btw, yours as well.
However, we have Euclidean spaces here, so the closest distance is a straight line and the minimization should be easy.

#### joshmccraney

Can you elaborate on the straight line concept? I think you see something I don't.

#### fresh_42 (Mentor)

What we have is a hyperplane $V$ and a point $x$ outside. The shortest distance is realized at the foot of a normal vector of $V$ pointing to $x$, i.e. a vector perpendicular to $V$. (I would look up the Gram-Schmidt procedure, but maybe it's easier, e.g. a Lagrange multiplier, a basis of $V$, a power series for $u$ - just to name a few ideas.) Your solution is probably correct, but why?

#### joshmccraney

You lost me. I am familiar with Gram-Schmidt (reconstructing orthonormal basis functions from ones that are not necessarily orthonormal), Lagrange multipliers (optimization with a constraint, setting the gradients equal, yielding a system of algebraic equations), and of course power series, which for you would perhaps be $u = 1/2 - x + \text{HOT}$ (higher-order terms). But how are these relevant?

#### fresh_42 (Mentor)

I don't know. How do we find $u \in V$ such that the distance to $x$ is minimal? The major difficulty is to describe $u$ somehow. A basis is a possibility.
Legendre polynomials work on $L^2([-1,1])$, but I'm sure there is also a known basis for $V$. Or you prove that any other vector $u(x)= (\frac{1}{2}-x) + v(x)$ is necessarily further away, unless $v(x)=0$. We could write $u(x)=(\frac{1}{2}-x) + \lambda v(x)$ and treat $\lambda$ as a Lagrange multiplier. Since I don't have a solution in mind, I just name ideas.

Edit: $\dfrac{1}{n!} \dfrac{d^n}{dx^n} (x^2-x)^n\; , \;n>0$ looks like a basis.

Edit$^2$: The first basis vector is $2x-1$, and if we solve $\langle u,u\rangle + \langle u,x\rangle = 0$ (the condition that $u$ in $V$ is the foot of the perpendicular from $x$) with $u=\lambda\cdot (2x-1)$, then we get $\lambda \in \{\,0,-\frac{1}{2}\,\}$, and $u(x)=-\frac{1}{2}(2x-1)$ is exactly your solution.

#### Ray Vickson (Homework Helper)

Just to clarify: is your problem (1) or (2) below? $$\begin{array}{cc} (1) & \min B(u,u) + 2 l(u)\\ &\text{s.t.}\; u \in L_2[0,1] \\ \\ (2) & \min B(u,u) + 2 l(u) \\ &\text{s.t.} \; u \in V \end{array}$$ If it is (2), then it becomes $$\min \int_0^1 (u(x) + x)^2 \, dx \quad \text{subject to} \quad \int_0^1 u(x) \, dx = 0.$$

#### joshmccraney
How did you know the basis function was $\dfrac{1}{n!} \dfrac{d^n}{dx^n} (x^2-x)^n\; , \;n>0$? Otherwise I think I follow you, but let me summarize: given the aforementioned basis, we take the first term to approximate $u$: $u=\lambda(2x-1)$, substitute this into the equation $\langle u,u+x\rangle = 0$, and solve for the coefficient $\lambda$ (note $\langle u,u+x\rangle = 0$ is equivalent to $B(u,u) + l(u) = 0$). The linearly independent choice is $\lambda \neq 0 \implies \lambda = -1/2$. But I don't see how this minimizes $B(u,u) + 2l(u)$.

#### joshmccraney

(2) is the correct problem (though it seems fresh_42 solved (1) too). In general I am not sure how to solve this sort of Lagrange multiplier integral equation. I'm happy to learn another technique, though this seems similar to what fresh_42 wrote. My guess is the Lagrange multiplier implies solving the two equations $$\int_0^1 u(x) \, dx = 0, \qquad (u(x) + x)^2 = \lambda u(x),$$ but this doesn't feel right since the integrals have boundaries.

#### fresh_42 (Mentor)

I looked at the Legendre polynomials and modified a formula for them (Rodrigues formula).
It doesn't show that's the optimum, it only indicates a way to do so. You have to contribute a little, too. Say we name the polynomials $P_n=\dfrac{1}{n!} \dfrac{d^n}{dx^n} (x^2-x)^n$. Then we already know that the shortest distance between the line $\lambda P_1$ and the point $x$ is attained at $w$, for $\lambda = \frac{1}{2}$. The situation is as follows (sketch in attachment 240529):

Now which possibilities do we have? First of all, I think the basis polynomials are orthogonal, so this might help in calculations. I haven't calculated their norm either, but I don't think we have to normalize them in case they aren't already. I also don't know whether they form a basis. They should be linearly independent, but do they span $V$? Do we even need this? Anyway, we want to show $w=u$. So our options are e.g.

• Calculate the angles, as $u$ is the unique perpendicular to all vectors in $V$. Does $w$ have this property?
• Vary $w + \mu P$ for some vector $P\neq P_1$ and show that the distance becomes larger for $\mu \neq 0$.

#### joshmccraney
Wow, that was a lot. I'm trying to follow you: so are you saying the space $V$ is constructed from the basis you present, and that the shortest distance from a point $x$ to $V$ is a straight line with shortest length, and therefore must be orthogonal to all other basis elements for $n \neq 1$? Then it seems like $\int_0^1 P_n P_1 \, dx = 0$ implies $P_1$ is orthogonal to the rest of $V$?

#### fresh_42 (Mentor)

I edited my previous post while it was in your cache. We don't know enough yet about the properties of the $P_n$. I think orthogonality is easy to show, but I haven't done any property checks for them. I would try to show that $w$ is optimal (2nd option). We know that $P_1$ can be expanded to an orthonormal system, which should do.

#### Ray Vickson (Homework Helper)
What you have is similar to a constrained calculus-of-variations problem, but without any $u'$ terms present in the integral. You could write $I = \int_0^1 [(x + u(x))^2 - \lambda u(x) ] \, dx$ and look for an unconstrained minimum of $I$. However, if I were doing it I would proceed much differently, as follows. If you think of "discretizing" the problem (approximating the integrals by finite sums) you will have a problem of the form $$\begin{array}{cl} \min & \sum_{i=1}^n (x_i + u_i)^2 \\ \text{such that}& \sum_{i=1}^n u_i = 0 \end{array}$$ Here, $x_1, x_2, \ldots, x_n \in [0,1]$ are equally-spaced points we use to approximate the integral by a sum, and the $u_i$ are supposed to be $u(x_i)$. Basically, we are replacing the integral $\int_0^1 F(x) \, dx$ by $\sum_{i=1}^n \Delta x\, F(x_i),$ then dropping the (constant) $\Delta x,$ which will not affect the position of the minimum. Note that the $u_i$ are just some independent variables to be determined. Before we optimize they are independent of the $x_i$, but after optimizing they might become functions of the $x_i$. After we determine the $u_i$ we can get information about $u(x)$ by setting $u(x_i) = u_i.$ The finite optimization problem above has an elementary solution in terms of the Lagrange multiplier method, so one can easily obtain a discrete version of $u(x).$ This is then easy to extend to the continuous domain.

#### joshmccraney

Thank you both for responding. I am going to submit what I have so far (just don't have time to look further into what you both suggested, though I will when I get time, as I love learning new methods). As such I'm marking this thread solved. I will post the professor's solution (if he gives one). Thanks again for taking the time to help! I really appreciate your expertise!
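Ray Vickson's discretization is easy to sanity-check numerically (a sketch in NumPy; the grid size is arbitrary). Stationarity of $\sum_i (x_i + u_i)^2 - \lambda \sum_i u_i$ gives $u_i = \lambda/2 - x_i$, and the constraint $\sum_i u_i = 0$ forces $\lambda/2 = \bar x$, so $u_i = \bar x - x_i \approx \tfrac{1}{2} - x_i$, the discrete version of the $u(x) = \tfrac{1}{2} - x$ conjectured earlier in the thread:

```python
import numpy as np

# Discretize [0,1]; the u_i play the role of u(x_i).
n = 1001
x = np.linspace(0.0, 1.0, n)

# Lagrange solution of  min sum_i (x_i + u_i)^2  s.t.  sum_i u_i = 0:
# 2(x_i + u_i) = lam  =>  u_i = lam/2 - x_i, and the constraint
# then gives lam/2 = mean(x_i).
u = x.mean() - x                      # equals 1/2 - x on this grid

# Evaluate B(u,u) + 2 l(u) = int u^2 + 2xu dx by the trapezoid rule.
dx = x[1] - x[0]
f = u**2 + 2.0 * x * u
F = dx * (f.sum() - 0.5 * (f[0] + f[-1]))
print(F)                              # about -1/12, the minimum value
```

The value $-\tfrac{1}{12}$ matches the closed form: with $u = \tfrac12 - x$ one gets $\int_0^1 (u+x)^2\,dx - \int_0^1 x^2\,dx = \tfrac14 - \tfrac13 = -\tfrac{1}{12}$, and perturbing $u$ by any zero-mean function (so that it stays in $V$) only increases the value.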
2019-05-21 17:06:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9078412055969238, "perplexity": 308.94594469460657}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256494.24/warc/CC-MAIN-20190521162634-20190521184634-00110.warc.gz"}
http://www.diva-portal.org/smash/record.jsf?pid=diva2:988452
Interpolation between $L_1$ and $L_p$, $1 < p < \infty$

Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science.

2004 (English) In: Proceedings of the American Mathematical Society, ISSN 0002-9939, E-ISSN 1088-6826, Vol. 132, no 10, 2929-2938 p. Article in journal (Refereed) Published

##### Abstract [en]

The main result of this paper is that if $X$ is an interpolation rearrangement invariant space on $[0,1]$ between $L_1$ and $L_\infty$, for which the Boyd index $\alpha(X)>1/p$, $1 < p < \infty$, …

##### Place, publisher, year, edition, pages

2004. Vol. 132, no 10, 2929-2938 p.

Mathematics

##### Identifiers

Local ID: efead670-a7d1-11db-aeba-000ea68e967b
OAI: oai:DiVA.org:ltu-15478
DiVA: diva2:988452

##### Note

Validerad; 2004; 20070117 (kani). Available from: 2016-09-29. Created: 2016-09-29. Bibliographically approved.

#### Open Access in DiVA

##### File information

File name: FULLTEXT01.pdf
File size: 178 kB
Checksum SHA-512: d7b6ba822fe337418293589e33908df93369b94a061cf946a2b223bc18469a1a496d4fb2f52342833506e133607d144ec4c7f40ba536feb3e82dbc99771d0175
Type: fulltext
Mimetype: application/pdf

Publisher's full text

#### Search in DiVA

##### By author/editor

Astashkin, Sergey
Maligranda, Lech

##### By organisation

Mathematical Science

##### In the same journal

Proceedings of the American Mathematical Society
2016-10-25 09:07:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45024576783180237, "perplexity": 12790.18472337558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720000.45/warc/CC-MAIN-20161020183840-00215-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.biostars.org/p/9471758/
# Make map file with recombination rate using plink?

9 weeks ago · curious ▴ 570

Trying to run germline, which uses ped and map files as input. In their example the 3rd column of the map file (recombination rate) is not empty. However, if I convert a bfile to ped and map:

    plink --bfile filename --recode --tab --out myfavpedfile

the 3rd column of the resulting map file is all 0. I have not been able to get germline to work and I am starting to suspect it needs this 3rd column of the map file filled with values. How do I do this?

Comment: btw, germline is old and slow and not really that accurate. It's much better to use something like hap-ibd https://www.sciencedirect.com/science/article/pii/S0002929720300525

Comment: Thanks. Do you have any recommendations for relationship inference software that is not KING? I am really looking to determine relationship status (parent offspring, full sibling, 2nd degree, 3rd degree) between members of a large cohort. I have tried a few programs that feed into ERSA (germline, fastIBD); I am also trying raffi. Unfortunately some of these seem not really documented or are difficult to get running. I am trying to check the output of KING against some other relationship inference program for troubleshooting.

Comment: Why not use the inbuilt methods in plink2? https://www.cog-genomics.org/plink/2.0/distance#make_king should (probably) give you what you need. You can keep all your files in plink binary format as well.
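If germline really does require the genetic-distance column, one common workaround (a sketch only; the reference map and all numbers below are made-up placeholders, and in practice you would use a published genetic map matching your chromosome and genome build) is to linearly interpolate cM values at each SNP's physical position and write them into the 3rd column:

```python
import numpy as np

# Hypothetical reference genetic map for one chromosome:
# physical position (bp) -> cumulative genetic distance (cM).
ref_bp = np.array([1_000_000, 2_000_000, 5_000_000, 10_000_000])
ref_cm = np.array([0.0, 1.2, 4.8, 11.5])

# Physical positions of the SNPs, from column 4 of the .map file.
snp_bp = np.array([1_500_000, 3_000_000, 7_000_000])

# Piecewise-linear interpolation of cM at each SNP position; these
# values would go into column 3 of the .map file.
snp_cm = np.interp(snp_bp, ref_bp, ref_cm)
print(snp_cm)   # [0.6  2.4  7.48]
```

Note that `np.interp` clamps SNPs outside the reference map's range to the endpoint cM values, which may or may not be what you want for telomeric markers.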
2021-07-30 16:54:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30346953868865967, "perplexity": 3955.3562100221366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153971.20/warc/CC-MAIN-20210730154005-20210730184005-00642.warc.gz"}
http://embedblog.eu/?p=712
# Efficiency & noise of an MT3608 boost module

As part of theoretical research for a possible future project I wanted to measure the efficiency, noise and also max output current of an MT3608-based board, powered by a single 18650 battery. As my test subject I chose one of those cheap boards from Ebay, which you can get for about a dollar a piece.

### Efficiency

The datasheet for the driving chip (MT3608) already has an efficiency chart for boosting to 12 V, but it's from a 5 V supply and well, who believes Chinese datasheets? So I set the module's output voltage to 12.4 V and used my electronic load to increase the load current until I hit a 2 A draw on the input. Voltages were measured directly at the module's input and output terminals, and a linear lab bench power supply was used instead of a battery, in order to supply the varying input voltages. And here are the results (notice that the efficiency axis starts at 70 %; also, click on the image for a larger version):

Practically speaking, the MT3608 is capable of boosting LiIon up to 12 V and 400 mA at all conditions. And at higher cell voltages you can draw up to 600 mA. Sure, you could push the module even more, but the IC and diode will start heating up a lot. Even during my tests it reached temperatures between 60 and 70 °C.

### Output noise

I was also interested in the output noise – this is of course a switching regulator, so it will be noisy, but exactly how much? As far as the test setup goes, I used two channels in AC mode (with a 20 MHz BW limiter) of my oscilloscope as a pseudodifferential probe (a setup recommended by Dave Jones in one of his videos) directly on the output pads of the module. The output was also connected to a power resistor, drawing about 400 mA. Power was provided by a LiIon battery thru a classical DW01 protection circuit. Now this is the waveform I got:

Here the purple waveform is the differential (math) channel.
Numerically speaking, the RMS ripple was 450 mV and peak-to-peak ripple was 1.7 V! Of course, boost converters are much more prone to noise, because the switching happens after the inductor – in a classical buck converter, on the contrary, the switching happens before the inductor, thus the inductor forms an LC filter on the output, greatly smoothing any ripple.

So that's what I tried next – I added an LC filter on the output, consisting of an 8.2 µH inductor and a 220 µF electrolytic capacitor. Such an LC filter should have a cutoff frequency of 3.7 kHz, thus smoothing the large 30 kHz ripple on the output. Here is the result:

The filter greatly smoothed the HF ripple superimposed on the LF sinewave. Even the sinewave got smoothed – notice that the vertical axis now has half as many volts per division. So the resulting noise is now 200 mV RMS and 600 mV peak to peak. In other words, assuming we have just the 29 kHz sinewave, we achieved a voltage gain of -9.35 dB, according to the classical formula:

A_V = 20\log_{10}\left(\frac{V_{OUT}}{V_{IN}}\right)\quad[\mathrm{dB}]

(Yeah, I really wanted to test out the KaTeX plugin for WordPress.) In my project, I'll probably use a 22 µH inductor to optimize the BOM – the same value is used in the DC-DC regulator itself – so this bigger inductor should reduce noise even more.

### Heat dissipation tip

This board contains two components which tend to heat up – the diode and the IC itself. There isn't much you can do about the diode, other than provide a large copper area with thermal vias around the cathode (the anode is switching at HF and it is recommended to keep the copper area as small as possible here). The IC is in the SOT23-6 package, which isn't really suitable for any sort of heat dissipation. But, if you look at the pinout below, you can see that the right side of the IC contains pins IN, EN and NC.
If you do not need the EN functionality (which is a little bit useless on a boost converter anyways, since it does not disconnect the load), you can connect all three of those pins together to the input. I also made a huge copper area with one large opening in the silkscreen for all three pins, hoping to make one large blob of solder connecting all three pins together. This should again improve heat dissipation (but don't expect miracles, it will still be limited by the bonding wires inside the IC itself). I'll update this with the actual picture once the PCB arrives.

### Conclusion

I am currently building a simple, portable power supply – it's nothing fancy, just an 18650 with charging and protection, a digitally adjustable MT3608 boost circuit with an output filter and an OLED screen for displaying voltage and current. It will be part of a test setup for a certain commercial project.

## 2 Replies to “Efficiency & noise of an MT3608 boost module”

1. Peter says:

   Hey, I like the project you did here. But I am wondering, why did you use a differential probing method to measure the output noise? Why did you not use the AC-coupling on the oscilloscope, and measure the noise using one scope channel connected over the output of the converter?

   1. martin says:

      Hi, actually, I had the probes in AC mode. But what I learned the hard way when measuring the output noise of my linear lab power supply is that the amount of noise superimposed on the resulting waveform with a single-ended measurement is just too much, so I had to use the differential method to get rid of the common-mode noise. Sure, here it is probably an overkill, but I wanted to be on the safe side…
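The filter numbers quoted above are easy to reproduce (a quick sketch using the values from the article):

```python
import math

# Output LC filter from the article: 8.2 uH series inductor, 220 uF capacitor.
L = 8.2e-6    # henries
C = 220e-6    # farads
f_c = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(round(f_c))        # 3747 -> about 3.7 kHz, as stated

# Peak-to-peak ripple dropped from 1.7 V to 0.6 V after the filter:
A_v = 20.0 * math.log10(0.6 / 1.7)
print(round(A_v, 2))     # -9.04 -> roughly -9 dB of attenuation
```

With the 22 µH inductor mentioned in the conclusion and the same 220 µF capacitor, the same formula gives a cutoff of roughly 2.3 kHz, consistent with the expectation that the bigger inductor filters even better.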
2021-03-02 20:04:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5386775135993958, "perplexity": 1605.9797616915625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364764.57/warc/CC-MAIN-20210302190916-20210302220916-00327.warc.gz"}
https://www.proofwiki.org/wiki/Category:Matrix_Products
# Category:Matrix Products

This category contains results about Matrix Products. Definitions specific to this category can be found in Definitions/Matrix Products.

### Matrix Product (Conventional)

Let $\struct {R, +, \circ}$ be a ring. Let $\mathbf A = \sqbrk a_{m n}$ be an $m \times n$ matrix over $R$, and let $\mathbf B = \sqbrk b_{n p}$ be an $n \times p$ matrix over $R$.

Then the matrix product of $\mathbf A$ and $\mathbf B$ is written $\mathbf A \mathbf B$ and is defined as follows. Let $\mathbf A \mathbf B = \mathbf C = \sqbrk c_{m p}$. Then: $\ds \forall i \in \closedint 1 m, j \in \closedint 1 p: c_{i j} = \sum_{k \mathop = 1}^n a_{i k} \circ b_{k j}$

Thus $\sqbrk c_{m p}$ is the $m \times p$ matrix where each entry $c_{i j}$ is built by forming the (ring) product of each entry in the $i$'th row of $\mathbf A$ with the corresponding entry in the $j$'th column of $\mathbf B$ and adding up all those products. This operation is called matrix multiplication, and $\mathbf C$ is the matrix product of $\mathbf A$ with $\mathbf B$.

### Matrix Scalar Product

Let $\GF$ denote one of the standard number systems. Let $\map \MM {m, n}$ be the $m \times n$ matrix space over $\GF$. Let $\mathbf A = \sqbrk a_{m n} \in \map \MM {m, n}$, and let $\lambda \in \GF$ be any element of $\GF$.

The operation of scalar multiplication of $\mathbf A$ by $\lambda$ is defined as follows. Let $\lambda \mathbf A = \mathbf C$. Then: $\forall i \in \closedint 1 m, j \in \closedint 1 n: c_{i j} = \lambda a_{i j}$

$\lambda \mathbf A$ is the scalar product of $\lambda$ and $\mathbf A$. Thus $\mathbf C = \sqbrk c_{m n}$ is the $m \times n$ matrix composed of the product of $\lambda$ with the corresponding elements of $\mathbf A$.

### Kronecker Product

Also known as the matrix direct product: Let $\mathbf A = \sqbrk a_{m n}$ and $\mathbf B = \sqbrk b_{p q}$ be matrices.
The Kronecker product of $\mathbf A$ and $\mathbf B$ is denoted $\mathbf A \otimes \mathbf B$ and is defined as the block matrix: $\mathbf A \otimes \mathbf B = \begin{bmatrix} a_{11} \mathbf B & a_{12} \mathbf B & \cdots & a_{1n} \mathbf B \\ a_{21} \mathbf B & a_{22} \mathbf B & \cdots & a_{2n} \mathbf B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} \mathbf B & a_{m2} \mathbf B & \cdots & a_{mn} \mathbf B \end{bmatrix}$ Writing this out in full: $\mathbf A \otimes \mathbf B = \begin{bmatrix} a_{11} b_{11} & a_{11} b_{12} & \cdots & a_{11} b_{1q} & \cdots & \cdots & a_{1n} b_{11} & a_{1n} b_{12} & \cdots & a_{1n} b_{1q} \\ a_{11} b_{21} & a_{11} b_{22} & \cdots & a_{11} b_{2q} & \cdots & \cdots & a_{1n} b_{21} & a_{1n} b_{22} & \cdots & a_{1n} b_{2q} \\ \vdots & \vdots & \ddots & \vdots & & & \vdots & \vdots & \ddots & \vdots \\ a_{11} b_{p1} & a_{11} b_{p2} & \cdots & a_{11} b_{pq} & \cdots & \cdots & a_{1n} b_{p1} & a_{1n} b_{p2} & \cdots & a_{1n} b_{pq} \\ \vdots & \vdots & & \vdots & \ddots & & \vdots & \vdots & & \vdots \\ \vdots & \vdots & & \vdots & & \ddots & \vdots & \vdots & & \vdots \\ a_{m1} b_{11} & a_{m1} b_{12} & \cdots & a_{m1} b_{1q} & \cdots & \cdots & a_{mn} b_{11} & a_{mn} b_{12} & \cdots & a_{mn} b_{1q} \\ a_{m1} b_{21} & a_{m1} b_{22} & \cdots & a_{m1} b_{2q} & \cdots & \cdots & a_{mn} b_{21} & a_{mn} b_{22} & \cdots & a_{mn} b_{2q} \\ \vdots & \vdots & \ddots & \vdots & & & \vdots & \vdots & \ddots & \vdots \\ a_{m1} b_{p1} & a_{m1} b_{p2} & \cdots & a_{m1} b_{pq} & \cdots & \cdots & a_{mn} b_{p1} & a_{mn} b_{p2} & \cdots & a_{mn} b_{pq} \end{bmatrix}$ Thus, if: $\mathbf A$ is a matrix with order $m \times n$ $\mathbf B$ is a matrix with order $p \times q$ then $\mathbf A \otimes \mathbf B$ is a matrix with order $m p \times n q$. Also known as Matrix Entrywise Product or Schur Product: Let $\struct {S, \cdot}$ be an algebraic structure. Let $\mathbf A = \sqbrk a_{m n}$ be an $m \times n$ matrix over $S$. 
Let $\mathbf B = \sqbrk b_{m n}$ be an $m \times n$ matrix over $S$.

The Hadamard product of $\mathbf A$ and $\mathbf B$ is written $\mathbf A \circ \mathbf B$ and is defined as follows: $\mathbf A \circ \mathbf B := \mathbf C = \sqbrk c_{m n}$ where: $\forall i \in \closedint 1 m, j \in \closedint 1 n: c_{i j} = a_{i j} \cdot b_{i j}$

## Subcategories

This category has the following 3 subcategories, out of 3 total.
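As a concrete numerical illustration, the four products defined in this category map onto distinct NumPy operations (a small sketch over the integers):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])                 # 2 x 2
B = np.array([[0, 1],
              [1, 0]])                 # 2 x 2

# Conventional matrix product: c_ij = sum_k a_ik * b_kj
print(A @ B)                           # [[2 1] [4 3]]

# Scalar product: every entry multiplied by lambda
print(3 * A)                           # [[3 6] [9 12]]

# Hadamard (entrywise / Schur) product: c_ij = a_ij * b_ij
print(A * B)                           # [[0 2] [3 0]]

# Kronecker product: block matrix of a_ij * B, with order mp x nq
K = np.kron(A, B)
print(K.shape)                         # (4, 4)
```

The shape of the Kronecker product, $(m p) \times (n q)$, follows the rule stated above: here $m = n = 2$ and $p = q = 2$, giving a $4 \times 4$ result whose top-right block is $a_{1 2} \mathbf B = 2 \mathbf B$.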
2023-03-25 15:03:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9640594720840454, "perplexity": 152.64365736764526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945333.53/warc/CC-MAIN-20230325130029-20230325160029-00729.warc.gz"}