http://mathhelpforum.com/differential-geometry/102331-pathological-example.html
# Math Help - Pathological example

1. ## Pathological example

Will someone help me on this problem? Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be continuous but nowhere differentiable (for example, the Weierstrass function). Define $F(x)=\int_0^x f(t)\,dt$. Why does $F(x)$ have a first derivative everywhere? Does $F(x)$ have a second derivative? I think $F(x)$ has a first derivative since that's what part 2 of the FTOC says, but I'm stuck on the second part.

2. By definition of 'integral function', if

$F(x)= \int_{0}^{x} f(\tau)\, d\tau$ (1)

... then $F(\cdot)$ has a first derivative everywhere $f(\cdot)$ is continuous, and

$F'(x)= f(x)$ (2)

But $f(\cdot)$ is continuous $\forall x \in \mathbb{R}$, so that $F'(\cdot)$ exists $\forall x \in \mathbb{R}$...

Now from (2) we derive that

$F''(x) = f'(x)$ (3)

... so that, because $f'(\cdot)$ exists nowhere, the same holds for $F''(\cdot)$...

Kind regards

$\chi$ $\sigma$

3. According to Weierstrass Function -- from Wolfram MathWorld, the 'Weierstrass function of degree a' is defined as

$f_{a}(x) = \sum_{k=1}^{\infty} \frac{\sin(\pi k^{a} x)}{\pi k^{a}}$ (1)

... and it is everywhere continuous, but its derivative exists only on a set of measure zero. More precisely, in recent years it has been demonstrated that the derivative exists and equals $f'_{a}(x)= \frac{1}{2}$ only for $x= \frac{2A+1}{2B+1}$, with $A$ and $B$ integers.

The Weierstrass function written in the form (1) is an example of a fractal Fourier series...

Kind regards

$\chi$ $\sigma$
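To see the FTC at work numerically, here is a small Python sketch (my addition, not part of the thread). It uses a truncated Weierstrass-type series, which is actually smooth, so the contrast below only illustrates behavior down to finite scales; the parameter names and values are my own choices.

```python
import numpy as np

A, B, TERMS = 0.5, 3.0, 20          # my choice of parameters; A*B > 1
k = np.arange(TERMS)

def f(x):
    # truncated Weierstrass-type series: continuous but very rough
    return float(np.sum(A**k * np.cos(B**k * np.pi * x)))

def F(x):
    # termwise antiderivative of the truncation, with F(0) = 0 (exact)
    return float(np.sum(A**k * np.sin(B**k * np.pi * x) / (B**k * np.pi)))

x0 = 0.3
for h in (1e-2, 1e-4, 1e-6, 1e-8):
    dq_f = (f(x0 + h) - f(x0)) / h  # grows without bound as TERMS increases
    dq_F = (F(x0 + h) - F(x0)) / h  # settles toward f(x0), as the FTC predicts
    print(h, dq_f, dq_F)
print("f(x0) =", f(x0))
```

The difference quotients of $f$ blow up as more terms are added, while those of $F$ approach $f(x_0)$: exactly the first-derivative claim in post 2.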
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9772472977638245, "perplexity": 1140.1968067434175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678705742/warc/CC-MAIN-20140313024505-00034-ip-10-183-142-35.ec2.internal.warc.gz"}
http://pipingdesigner.co/index.php/mathematics/geometry/plane-geometry/2539-circle-corner
# Circle Corner

Written by Jerry Ratzlaff. Posted in Plane Geometry

• Circle corner (a two-dimensional figure) is a right triangle having its acute vertices on a circle, with the hypotenuse outside the circle.
• Chord is a line segment in the interior of a circle.
• Segment of a circle is an interior part of a circle bounded by a chord and an arc.

## Formulas that use Area of a Circle Corner

$$\large{ A_{area} = \frac{a\,b \;-\; r\,l \;+\; s\,\left(r \;-\; h\right) }{2} }$$

### Where:

$$\large{ A_{area} }$$ = area
$$\large{ l }$$ = arc length
$$\large{ s }$$ = chord length
$$\large{ a, b }$$ = edges
$$\large{ r }$$ = radius
$$\large{ h }$$ = segment height

## Formulas that use Arc Length of a Circle Corner

$$\large{ l = r \, \theta }$$

### Where:

$$\large{ l }$$ = arc length
$$\large{ \theta }$$ = angle
$$\large{ r }$$ = radius

## Formulas that use Chord Length of a Circle Corner

$$\large{ s = \sqrt{a^2 + b^2} }$$

### Where:

$$\large{ s }$$ = chord length
$$\large{ a, b }$$ = edges

## Formulas that use Height of a Circle Corner

$$\large{ h = r \, \left( 1 - \cos \frac{\theta}{2} \right) }$$

### Where:

$$\large{ h }$$ = segment height
$$\large{ \theta }$$ = segment angle
$$\large{ r }$$ = radius

## Formulas that use Perimeter of a Circle Corner

$$\large{ p = a + b + l }$$

### Where:

$$\large{ p }$$ = perimeter
$$\large{ l }$$ = arc length
$$\large{ a, b }$$ = edges

## Formulas that use Segment Angle of a Circle Corner

$$\large{ \theta = \arccos \frac{ 2\,r^2 \;-\; s^2 }{2\,r^2} }$$

### Where:

$$\large{ \theta }$$ = segment angle
$$\large{ s }$$ = chord length
$$\large{ r }$$ = radius
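To make the relationships concrete, here is a small Python helper that evaluates the formulas above for given inputs. The function and argument names are mine; the formulas are taken as stated, with the chord length computed from the Pythagorean theorem since the chord joins the triangle's acute vertices.

```python
import math

def circle_corner(r, theta, a, b):
    """Evaluate the circle-corner quantities listed above.

    r: radius; theta: segment angle in radians; a, b: edge lengths.
    """
    l = r * theta                          # arc length
    s = math.sqrt(a**2 + b**2)             # chord length (Pythagorean theorem)
    h = r * (1 - math.cos(theta / 2))      # segment height
    area = (a * b - r * l + s * (r - h)) / 2
    p = a + b + l                          # perimeter
    return {"arc": l, "chord": s, "height": h, "area": area, "perimeter": p}
```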
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9116400480270386, "perplexity": 4333.41574687944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655911092.63/warc/CC-MAIN-20200710144305-20200710174305-00045.warc.gz"}
http://cms.math.ca/cmb/kw/decomposition
Canadian Mathematical Society www.cms.math.ca

Search results

Search: All articles in the CMB digital archive with keyword decomposition

Results 1 - 8 of 8

1. CMB Online first. Parlier, Hugo: A short note on short pants. It is a theorem of Bers that any closed hyperbolic surface admits a pants decomposition consisting of curves of bounded length, where the bound depends only on the topology of the surface. The question of the quantification of the optimal constants has been well studied, and the best upper bounds to date are linear in genus, a theorem of Buser and Seppälä. The goal of this note is to give a short proof of a linear upper bound which slightly improves the best known bound. Keywords: hyperbolic surfaces, geodesics, pants decompositions. Categories: 30F10, 32G15, 53C22

2. CMB Online first. Levy, Jason: Rationality and the Jordan-Gatti-Viniberghi decomposition. We verify our earlier conjecture and use it to prove that the semisimple parts of the rational Jordan-Kac-Vinberg decompositions of a rational vector all lie in a single rational orbit. Keywords: reductive group, $G$-module, Jordan decomposition, orbit closure, rationality. Categories: 20G15, 14L24

3. CMB 2012 (vol 56 pp. 606). Mazorchuk, Volodymyr; Zhao, Kaiming: Characterization of Simple Highest Weight Modules. We prove that for simple complex finite-dimensional Lie algebras, affine Kac-Moody Lie algebras, the Virasoro algebra and the Heisenberg-Virasoro algebra, simple highest weight modules are characterized by the property that all positive root elements act on these modules locally nilpotently. We also show that this is not the case for higher rank Virasoro and for Heisenberg algebras. Keywords: Lie algebra, highest weight module, triangular decomposition, locally nilpotent action. Categories: 17B20, 17B65, 17B66, 17B68

4. CMB 2011 (vol 56 pp. 442). Zelenyuk, Yevhen: Closed Left Ideal Decompositions of $U(G)$. Let $G$ be an infinite discrete group and let $\beta G$ be the Stone--Čech compactification of $G$. We take the points of $\beta G$ to be the ultrafilters on $G$, identifying the principal ultrafilters with the points of $G$. The set $U(G)$ of uniform ultrafilters on $G$ is a closed two-sided ideal of $\beta G$. For every $p\in U(G)$, define $I_p\subseteq\beta G$ by $I_p=\bigcap_{A\in p}\operatorname{cl}(GU(A))$, where $U(A)=\{p\in U(G):A\in p\}$. We show that if $|G|$ is a regular cardinal, then $\{I_p:p\in U(G)\}$ is the finest decomposition of $U(G)$ into closed left ideals of $\beta G$ such that the corresponding quotient space of $U(G)$ is Hausdorff. Keywords: Stone--Čech compactification, uniform ultrafilter, closed left ideal, decomposition. Categories: 22A15, 54H20, 22A30, 54D80

5. CMB 2011 (vol 55 pp. 303). Han, Yongsheng; Lee, Ming-Yi; Lin, Chin-Cheng: Atomic Decomposition and Boundedness of Operators on Weighted Hardy Spaces. In this article, we establish a new atomic decomposition for $f\in L^2_w\cap H^p_w$, where the decomposition converges in $L^2_w$-norm rather than in the distribution sense. As applications of this decomposition, assuming that $T$ is a linear operator bounded on $L^2_w$ ... Keywords: $A_p$ weights, atomic decomposition, Calderón reproducing formula, weighted Hardy spaces. Categories: 42B25, 42B30

6. CMB 2009 (vol 53 pp. 278). Galego, Elói M.: Cantor-Bernstein Sextuples for Banach Spaces. Let $X$ and $Y$ be Banach spaces isomorphic to complemented subspaces of each other with supplements $A$ and $B$. In 1996, W. T. Gowers solved the Schroeder--Bernstein (or Cantor--Bernstein) problem for Banach spaces by showing that $X$ is not necessarily isomorphic to $Y$. In this paper, we obtain a necessary and sufficient condition on the sextuples $(p, q, r, s, u, v)$ in $\mathbb N$ with $p+q \geq 1$, $r+s \geq 1$ and $u, v \in \mathbb N^*$, to provide that $X$ is isomorphic to $Y$, whenever these spaces satisfy the following decomposition scheme $$A^u \sim X^p \oplus Y^q, \quad B^v \sim X^r \oplus Y^s.$$ Namely, $\Phi=(p-u)(s-v)-(q+u)(r+v)$ is different from zero and $\Phi$ divides $p+q$ and $r+s$. These sextuples are called Cantor--Bernstein sextuples for Banach spaces. The simplest case $(1, 0, 0, 1, 1, 1)$ indicates the well-known Pełczyński decomposition method in Banach spaces. On the other hand, by interchanging some Banach spaces in the above decomposition scheme, refinements of the Schroeder--Bernstein problem become evident. Keywords: Pełczyński's decomposition method, Schroeder-Bernstein problem. Categories: 46B03, 46B20

7. CMB 2007 (vol 50 pp. 504). Dukes, Peter; Ling, Alan C. H.: Asymptotic Existence of Resolvable Graph Designs. Let $v \ge k \ge 1$ and $\lambda \ge 0$ be integers. A block design $BD(v,k,\lambda)$ is a collection $\mathcal{A}$ of $k$-subsets of a $v$-set $X$ in which every unordered pair of elements from $X$ is contained in exactly $\lambda$ elements of $\mathcal{A}$. More generally, for a fixed simple graph $G$, a graph design $GD(v,G,\lambda)$ is a collection $\mathcal{A}$ of graphs isomorphic to $G$ with vertices in $X$ such that every unordered pair of elements from $X$ is an edge of exactly $\lambda$ elements of $\mathcal{A}$. A famous result of Wilson says that for a fixed $G$ and $\lambda$, there exists a $GD(v,G,\lambda)$ for all sufficiently large $v$ satisfying certain necessary conditions. A block (graph) design as above is resolvable if $\mathcal{A}$ can be partitioned into partitions of (graphs whose vertex sets partition) $X$. Lu has shown asymptotic existence in $v$ of resolvable $BD(v,k,\lambda)$, yet for over twenty years the analogous problem for resolvable $GD(v,G,\lambda)$ has remained open. In this paper, we settle asymptotic existence of resolvable graph designs. Keywords: graph decomposition, resolvable designs. Categories: 05B05, 05C70, 05B10

8. CMB 2003 (vol 46 pp. 356). Ishiwata, Makiko; Przytycki, Józef H.; Yasuhara, Akira: Branched Covers of Tangles in Three-balls. We give an algorithm for a surgery description of a $p$-fold cyclic branched cover of $B^3$ branched along a tangle. We generalize constructions of Montesinos and Akbulut-Kirby. Keywords: tangle, branched cover, surgery, Heegaard decomposition. Categories: 57M25, 57M12

© Canadian Mathematical Society, 2013 : http://www.cms.math.ca/
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9834227561950684, "perplexity": 922.1471022748145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052034/warc/CC-MAIN-20131204131732-00070-ip-10-33-133-15.ec2.internal.warc.gz"}
http://math.stackexchange.com/users/8/noah-snyder?tab=activity
Noah Snyder

Reputation 5,863. Badges: 2 gold, 24 silver, 49 bronze. Impact: ~172k people reached, 8 helpful flags, 472 votes cast.

# 487 Actions

- Jul 20: awarded Yearling
- Apr 3: awarded Great Answer
- Jan 12: awarded Revival
- Jan 1: awarded Good Answer
- Dec 15: awarded Caucus
- Dec 5: comment on "The degree of antipodal map, composition of reflections?": possible duplicate of "The degree of antipodal map".
- Nov 13: awarded Notable Question
- Sep 30: awarded Explainer
- Aug 20: reviewed Close: "gcd and lcm from prime factorization proof"
- Aug 20: reviewed Close: "Injective hull of $\mathbb{Z}_n$"
- Aug 19: reviewed Close: "How can I resolve this limit without L'Hopital's Rule?"
- Aug 19: reviewed Leave Open: "How does $A_n$ look in Aut$(X)$?"
- Aug 19: reviewed Close: "distribution of books among students"
- Aug 7: comment on "Is $.\overline{9} = 1$?": @Matteo: That's a great way of putting it. Of course it's equivalent to the other definition, but if you only want to talk about infinite decimals you're right that your definition is easier.
- Aug 4: reviewed Close: "Cardinality of the real numbers"
- Jul 20: awarded Yearling
- May 17: reviewed Reject: "Is $.\overline{9} = 1$?"
- May 9: awarded Nice Answer
- Mar 10: comment on "Proving that $\dim(\mathrm{span}({I_n,A,A^2,\dots})) \leq n$": Now that the typo is sorted out, if you have a math question that you still need answered then you should edit the question to fix the typos and get it reopened.
- Feb 28: comment on "Topology - interval homeomorphic to another interval": @SKA: For bounded intervals you're going to approach it similarly to the way you approached the other ones: find an explicit continuous function going each way. In this case you can use pretty simple functions. As for why $[0,1]$ and $(-\infty, \infty)$ are not homeomorphic, that's a bit trickier. Once you know about compactness it's easy. The first other approach I can think of is to identify some property that the endpoints have which no interior points can have. For example, any connected open set containing 1 has the property that when you remove 1 it's still connected.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36540526151657104, "perplexity": 1983.7771422824578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987228.91/warc/CC-MAIN-20150728002307-00193-ip-10-236-191-2.ec2.internal.warc.gz"}
http://mathematica.stackexchange.com/tags/chemistry/hot
# Tag Info 22 You can "preload" all the data to your computer so that it doesn't have to look it up each time. An added advantage is that it'll also be available when you're offline. This is covered in this support article on wolfram.com. In your case, you would do: ChemicalData[All,"Preload"] RebuildPacletData[] and you should be all set. Note that it will take a ... 18 Preload all chemical data: ChemicalData[All, "Preload"]; RebuildPacletData[]; (* the latter should not really be necessary *) Get all names: cd = ChemicalData[]; Get their molecular formulae: l = ChemicalData[#, "MolecularFormulaString"] & /@ cd; By counting the Cs, Os and Hs in the tattooed diagram we know we have to find ... 18 Simple version using a variant of memoization While part of the answer I was going to give was already posted by Istvan, I will still post mine since the self-precomputing part was not part of Istvan's answer. The following will use the variant of memoization to precompute the dispatch table: ClearAll[elem]; elem[chem_, element_] := With[{dispatchTable = ... 17 It is a nice application for the Graph[] features in Mma. We can calculate quickly all possible decays for all known isotopes, and then let VertexComponent[] look for the chains ending in {"Iridium191", "Iridium193"}. g = Graph@Union@Flatten[Thread[DirectedEdge @@ ##] & /@ Select[{#, IsotopeData[#, "DaughterNuclides"]} & /@ IsotopeData[], ... 16 Use a dispatch table. It is an optimized element -> value lookup table that can be used to replace an element any time with its value. Now it does matching-and-finding every time, but if your list is not too big, this is pretty fast. dispatch = Dispatch[Thread[elements -> chemistry]]; ratio[elemA_, elemB_, disp_] := (elemA/elemB) /. disp; ratio[elemA_, ... 15 myAtoms = {"H", "Li", "Na"}; defCols = myAtoms /. ColorData["Atoms", "ColorRules"]; newCols = {Pink, Yellow, LightBlue}; ColorData["Atoms", "Panel"] /. Thread[defCols -> newCols] Edit: Changing the font color isn't related to the ColorRules, but to the special formatting used by the Panel. So it's cumbersome, but you can see that Mma uses a similar ... 15 I think that this question is too localized as it concerns the physics of a specific scientific instrument. Nonetheless, it is upvoted, so here I provide an answer for the benefit of the voters. I would still be happy to discuss this in the chat. The mathematics of the quadrupole mass filter is more complicated than you might think. Basically, your ... 12 A quick way of showing how the two structures can be positioned relative to each other in a single Graphics3D is as follows: With[{rMax = 500}, Manipulate[ Graphics3D[ {First@ChemicalData["Acetone", "MoleculePlot"], GeometricTransformation[ First@ChemicalData["Chloroform", "MoleculePlot"], Composition[ ... 12 I know you said you didn't want to reinvent the wheel, but sometimes, it's fun to do so. The code below creates a palette with a Periodic Table and a few buttons to make useful tool tips. It shows how one might change the colors based on properties grabbed from ElementData. Note that this code was written for version 9, and if you wish to use it in ... 11 A convenient resource for the Miller Indices can be found here. This ref provides sufficient information for us to draw the (111) and (110) planes. First, reproduce the graphic from the demonstration. I just made the necessary changes to make it run outside of a Manipulate and did not try to optimize it. tet = PolyhedronData["Tetrahedron", "Faces"]; tetv ... 
9 There is a space between every data pair, which Mathematica apparently interprets as a multiplication. I assume these spaces should have been returns. The following code imports the file as a string, replaces the offending spaces with returns and imports the result as JCAMP-DX. ImportString[ StringReplace[ ... 9 Maybe this will help a little (adapting the documentation example for Slider2D): DynamicModule[{p = {2 π, 0}}, Row @ {Slider2D[Dynamic[p], {{2 Pi, 0}, {0, Pi}}], Plot3D[Exp[-(x^2 + y^2)], {x, -3, 3}, {y, -3, 3}, ImageSize -> {700, 700}, PlotRange -> All, ViewAngle -> .0015, ViewPoint -> Dynamic[1200 {Cos[p[[1]]] ... 8 To answer the question about accessing the function that does the plotting: the hints are here and in the SystemFiles/Formats/XYZ directory. In[20]:= stream = OpenRead["ExampleData/caffeine.xyz"] Out[20]= InputStream["ExampleData/caffeine.xyz", 194] In[22]:= data = System`Convert`XYZDump`ImportXYZ[stream] Out[22]= {"VertexTypes" -> {"H", "N", "C", ... 7 I think the only way to do this is by dynamically resetting the ViewMatrix to be an orthographic projection. It was beyond my ability, patience, or inclination to figure out how to decompose the ViewMatrix that is created when the graphic is moved into the components ViewPoint, ViewVertical, etc. It seemed to me that the front end usually makes a discontinuous ... 7 One can use the (undocumented?) option ColorRules: Import["ExampleData/caffeine.xyz", ColorRules -> {"H" -> Red, "C" -> Black, "N" -> Darker@Green, "O" -> White}] Addendum: Other options may be found here: Options[Graphics`MoleculePlotDump`iMoleculePlot3D]. Note: The option ColorFunction seems to be unimplemented. 6 The question has changed so much since the last time I read it that what I had worked out hardly seems relevant anymore -- esp. to someone who doesn't understand chemistry jargon. Many website data servers can provide their data in JSON or XML format. These formats are easily parsed. The WebBook site seems not to, but I couldn't even find a description of ... 6 I've tried to do some scrubbing of web pages with chemical information, and even when I think that a website has some consistent structure I find that there are always special cases, or the web site provider chooses that particular time to revamp the website! Nonetheless, here's a way to get at some of your information. casno = "19431-79-9" ... 6 This example picks the colors according to atomic weight, which are loaded from ElementData[]. Like belisarius's answer, it generates a list of rules to replace colors which is then applied to the pane. Rule @@@ Transpose[{ColorData["Atoms", "ColorList"], ColorData["NeonColors"][QuantityMagnitude@ElementData[#,"AtomicMass"]/200] & /@ ... 6 You can also use ReadList, which is usually much faster than Import for large files: readJCAMP[filename_String] := Module[{data, file = OpenRead[filename]}, ReadList[file, String, 25]; data = ReadList[file, {Record, Record}, RecordSeparators -> {" ", ",", "\n"}]; Close[file]; ToExpression[ data[[1 ;; -2]] ] ] Usage: ... 6 With the new Association data structure introduced in the Wolfram Language / Mathematica 10 (you can try it now on the Raspberry Pi), this becomes extremely simple to write, and lookups are highly efficient as well.
property = AssociationThread[elements -> chemistry] property["Ni"] (* 0.06 *) 6 One can colorize bonds by finding 5-cycles in the bounding graph: pts = QuantityMagnitude@ChemicalData["FullereneC60", "AtomPositions"]; graph = UndirectedGraph@Graph@ChemicalData["FullereneC60", "EdgeRules"]; ring5 = List @@@ Flatten@FindCycle[graph, {5}, 12]; remain = Complement[List @@@ EdgeList[graph], ring5, ring5[[All, {2, 1}]]]; ... 5 It seems to me that there are two natural approaches: (1) modifying the color rules before the panel is created, or (2) post-processing the output to replace recognizable colors. belisarius already showed a method for the second, so I shall address the first. This method is more robust than the post-processing one. See the final example below. Modifying the ... 5 You might use Assumptions in one form or another. Block[{$Assumptions = Liters > 0 && Mols > 0 && Kelvins > 0}, Simplify[Wideal[5 Liters, 10 Liters, 1 Mols, 298 Kelvins]] ] (* -> 298 Kelvins Mols R Log[2] *) One disappointment for me is that the following doesn't work: Block[{$Assumptions = Liters > 0 && Mols ... 5 Here is a less impressive take on the question. positionsA = ChemicalData["Acetone", "AtomPositions"] /. {x_, y_, z_} -> {x + 400, y + 400, z + 400}; positionsC = ChemicalData["Chloroform", "AtomPositions"]; {atomsA, atomsC} = {ChemicalData["Acetone", "VertexTypes"], ChemicalData["Chloroform", "VertexTypes"]}; {colorsA, colorsC} ... 5 I found the problem. Mathematica interprets the input string as a list of all available molecules that contain the input string as a substring. ChemicalData["Alanine"] {"DAlanine", "DLAlanine", "LAlanine"} 5 Let's first pre-process the data you need for the output file to be used in your quantum simulation software: pos = ChemicalData["Fructose", "AtomPositions"][[1]]/100.; atype = ChemicalData["Fructose", "VertexTypes"][[1]]; data = Transpose[{atype, pos}]; Here's a function to give you the output file the way you specified in your question: ... 5 I am not familiar with the specific output format you need, but I think I can show you how to proceed. dat = Import["ExampleData/caffeine.xyz", {{"VertexTypes", "VertexCoordinates"}}]; dat2 = {#[[1, 1]], #[[All, 2]]} & /@ GatherBy[dat\[Transpose], First]; dat3 = {#, Length@#2, #2} & @@@ dat2; dat3 has this format: dat3 // TableForm ... 5 Try this: t = Import["c:\\work\\temp\\1.xyz"]; allColors = {}; (*get all colors from Graphics3D*) Scan[If[MatchQ[#, RGBColor[__]], AppendTo[allColors, #]] &, t, Infinity]; allColors = DeleteDuplicates[allColors] (*{RGBColor[0.65, 0.7, 0.7], RGBColor[0.4, 0.4, 0.4]}*) (*replace all obtained colors with any other*) replaceRules = MapThread[Rule, ... 4 You can use indexed variables with a shortcut function to define a long list of indices and values, like so: indexedVariable[var_, indicesValues__] := With[{dict = List[indicesValues]}, Scan[(var[#[[1]]] = #[[2]]) &, dict]]; indexedVariable[chemistry, {"C", 0.032}, {"Mn", 1.2}, {"S", 0.0259}]; chemistry["C"]; This will let the engine handle ... 4 By using ViewVector you are rotating the point of view about the object, not the object itself. While these are similar, it might be easier to understand what's happening by rotating the object. This can be done: rotAbout={0, 0, 1}; Manipulate[rot = RotationMatrix[a, rotAbout]; Graphics3D[{Specularity[GrayLevel[1], 100], ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3576749861240387, "perplexity": 4313.212600592638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802773066.29/warc/CC-MAIN-20141217075253-00088-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/electrical-potential-energy.87612/
# Homework Help: Electrical Potential Energy

1. Sep 5, 2005

### ussrasu

I have no idea what principles I am supposed to use in this question. Could someone show me how to do this question please? Thanks.

Calculate the electrical potential energy of two protons separated by 1 nm, and compare it with their gravitational potential energy. Estimate how heavy hypothetical particles (with the same charge e) must be in order not to repel each other.

2. Sep 5, 2005

### whozum

Find the EPE and find the GPE, and compare how strong they are. Then find the mass that each proton would need to have so that their gravitational attraction would equal their electromagnetic repulsion.

3. Sep 7, 2005

### ussrasu

I don't know how to do it with the small amount of info given. Can someone show me how to work it out please? Thanks!

4. Sep 7, 2005

### whozum

You should try doing your own homework... that problem is very doable; all the information is given.

5. Sep 7, 2005

### HallsofIvy

"Potential energy" is equal to the work done in separating the objects. Imagine one of the protons and calculate the work done in moving the other from a distance of infinity to 1 nm (I started to say from 0 to infinity, but that's the wrong way; the force between them is infinite at 0 distance!). The work is, of course, $$\int_{\infty}^{r} F(x)\,dx$$ with $r$ = 1 nm, where F(x) is the force at distance x. For gravity that is $$F(x)= \frac{-Gm^2}{x^2}$$ and for the electrical force that is $$F(x)= \frac{q^2}{x^2}$$ where m and q are the mass and charge of the proton. (Am I missing a constant in the electrical force formula? This isn't my area of expertise!)
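For readers checking the numbers, here is a quick back-of-the-envelope Python script (my addition, using standard SI constants; the Coulomb constant k answers HallsofIvy's aside):

```python
k = 8.9875517923e9     # Coulomb constant, N m^2 / C^2
G = 6.67430e-11        # gravitational constant, N m^2 / kg^2
e = 1.602176634e-19    # elementary charge, C
m_p = 1.67262192e-27   # proton mass, kg
r = 1e-9               # separation, m

U_e = k * e**2 / r         # electrostatic PE, ~2.3e-19 J
U_g = -G * m_p**2 / r      # gravitational PE, ~-1.9e-55 J
print(U_e, U_g, U_e / abs(U_g))   # ratio ~1.2e36

# Mass needed for gravity to balance the Coulomb repulsion:
# G m^2 / r^2 = k e^2 / r^2  =>  m = e * sqrt(k / G)
m_crit = e * (k / G) ** 0.5
print(m_crit)              # ~1.9e-9 kg, roughly 1e18 proton masses
```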
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9421842694282532, "perplexity": 669.3576918531724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945669.54/warc/CC-MAIN-20180423011954-20180423031954-00377.warc.gz"}
https://docs.openvino.ai/latest/omz_models_model_image_retrieval_0001.html
# image-retrieval-0001

## Use Case and High-Level Description

Image retrieval model based on the MobileNetV2 architecture as a backbone.

| Metric | Value |
| --- | --- |
| Top 1 accuracy | 0.834 |
| GFlops | 0.613 |
| MParams | 2.535 |
| Source framework | TensorFlow* |

## Inputs

Image, name: `input`, shape: `1, 3, 224, 224` in the format `B, C, H, W`, where:

• B - batch size
• C - number of channels
• H - image height
• W - image width

Expected color order: BGR.

## Outputs

Tensor with the name `model/tf_op_layer_l2_normalize/l2_normalize` and the shape `1, 256`: an image embedding vector.
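Below is a minimal inference sketch (my addition, not from the model card). It assumes the OpenVINO runtime Python API of recent releases and placeholder file names; the preprocessing follows the input spec above (BGR, `1, 3, 224, 224`). Adjust calls to match your OpenVINO version.

```python
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("image-retrieval-0001.xml")    # placeholder path
compiled = core.compile_model(model, "CPU")

img = cv2.imread("query.jpg")                          # OpenCV loads BGR, as required
blob = cv2.resize(img, (224, 224)).transpose(2, 0, 1)  # HWC -> CHW
blob = blob[np.newaxis].astype(np.float32)             # -> 1, 3, 224, 224

result = compiled([blob])                              # run inference
embedding = result[compiled.output(0)]                 # shape (1, 256)
print(embedding.shape)
```

Since the output is already L2-normalized, similarity between two images can be scored with a plain dot product of their embeddings.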
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15983907878398895, "perplexity": 22092.489540744773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363134.25/warc/CC-MAIN-20211205005314-20211205035314-00463.warc.gz"}
https://www.physicsforums.com/threads/need-help-evaluating-double-integrals.384574/
# Need help evaluating double integrals.

1. Mar 7, 2010

### bthomas872

The question states: If a is a positive number, what is the value of the following double integral?

integral from 0 to 2a [ integral from -sqrt(2ay - y^2) to 0 of sqrt(x^2 + y^2) ] dy dx

My first thought is that we should change to polar coordinates so I do not have to do a trigonometric substitution. Then the integrand would be sqrt(r^2) r dr d(theta), which simplifies to r^2 dr d(theta). What I am confused about is how to change my limits of integration to polar. Can someone help please?

Last edited: Mar 7, 2010

2. Mar 8, 2010

### Staff: Mentor

I think you have the order of integration switched. Your inner integral is shown as being done with respect to y. I believe the inner integral should be done with respect to x, and that the limits of integration are from x = -sqrt(2ay - y^2) to x = 0. The outer integral is with respect to y, I believe, and the limits are y = 0 to y = 2a.

Possibly the hardest thing about double and triple integrals is understanding the nature of the region over which integration is being done. If you don't understand that, you don't have any chance of being able to change from Cartesian (rectangular) to polar or vice versa. What does the region look like? In particular, what does the graph of x = -sqrt(2ay - y^2) look like?
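For completeness, here is one way the polar limits work out (my sketch, assuming the corrected dx dy order). The curve $x = -\sqrt{2ay - y^2}$ is the left half of $x^2 + y^2 = 2ay$, i.e. the circle $x^2 + (y-a)^2 = a^2$. In polar coordinates $x^2 + y^2 = 2ay$ becomes $r = 2a\sin\theta$, and the left half ($x \le 0$, $y \ge 0$) corresponds to $\pi/2 \le \theta \le \pi$. Hence

$$\int_{\pi/2}^{\pi}\int_0^{2a\sin\theta} r \cdot r\,dr\,d\theta = \int_{\pi/2}^{\pi} \frac{8a^3\sin^3\theta}{3}\,d\theta = \frac{8a^3}{3}\cdot\frac{2}{3} = \frac{16a^3}{9}.$$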
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.984908401966095, "perplexity": 393.53481968764487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645943.23/warc/CC-MAIN-20180318184945-20180318204945-00187.warc.gz"}
http://math.stackexchange.com/questions/443295/show-mapping-involving-tensor-product-is-well-defined
Show mapping involving tensor product is well defined.

Let $R$ be a subring of $S$, let $N$ be a left $R$-module and let $\iota : N \to S \otimes_R N$ be the $R$-module homomorphism defined by $\iota(n) = 1 \otimes n$. Suppose that $L$ is any left $S$-module and that $\varphi : N \to L$ is an $R$-module homomorphism from $N$ to $L$. Then there is a unique $S$-module homomorphism $\Phi : S \otimes_R N \to L$ such that $\varphi = \Phi \circ \iota$.

My approach so far is: Define $\Phi : S \otimes_R N \to L$ by $\Phi(s \otimes n) = s\varphi(n)$. Show $\Phi$ is well defined. Show $\Phi$ is in fact an $S$-module homomorphism. Show $\Phi$ is unique.

I am having trouble with showing that $\Phi$ is well defined. I suppose $s \otimes n = s' \otimes n'$; by the coset description of the tensor product, $(s, n) - (s', n')$ then lies in the submodule of relations. We want to show that $\Phi(s \otimes n) = s\varphi(n) = s'\varphi(n') = \Phi(s' \otimes n')$. So if I could show that $s \varphi(n) = s'\varphi(n')$ I would be done with showing well-definedness. Can someone give me a hint on how to do this? Thanks!

Use the universal property of the tensor product (which should be seen as its definition). Also recall that not every element of the tensor product is a pure tensor (your approach suggests that you assume this to be true). But anyway you don't need this. –  Martin Brandenburg Jul 14 '13 at 9:35

The universal property is what I'm trying to show (I think). I understand what you mean about every tensor not necessarily being a pure tensor; the tensor product is the set of all finite sums of pure tensors. How do I show well-definedness of the mapping then? –  Robert Cardona Jul 14 '13 at 9:40

I assume that you use the definition of tensor product that starts with a free module $F$ generated by elements of the form $(s, n)$, then defines the tensor product as $F/A$, where $A$ is the submodule of $F$ generated by the bilinearity conditions. What you want to show is that the map $\overline\Phi : F \to L$ defined by $\overline\Phi((s, n)) = s\varphi(n)$ gives the same value for elements in the same coset of $A$ in $F$, i.e., that $\overline\Phi(A) = 0$. This can be verified easily by testing generators of $A$.
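To spell out the generator check suggested in the last reply (my sketch, using the usual bilinearity generators of $A$):

$$\overline\Phi\big((s+s',n)-(s,n)-(s',n)\big) = (s+s')\varphi(n)-s\varphi(n)-s'\varphi(n)=0,$$
$$\overline\Phi\big((s,n+n')-(s,n)-(s,n')\big) = s\varphi(n+n')-s\varphi(n)-s\varphi(n')=0,$$
$$\overline\Phi\big((sr,n)-(s,rn)\big) = sr\,\varphi(n)-s\,\varphi(rn)=0,$$

using that $\varphi$ is additive and $R$-linear (note $\varphi(rn)=r\varphi(n)$ makes sense in $L$ because $R\subseteq S$ acts on the $S$-module $L$). Hence $\overline\Phi(A)=0$, so $\Phi$ is well defined on $F/A = S \otimes_R N$.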
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9937499761581421, "perplexity": 48.415686052370376}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663739.33/warc/CC-MAIN-20140930004103-00214-ip-10-234-18-248.ec2.internal.warc.gz"}
http://codereview.stackexchange.com/questions/21659/criticize-my-first-python-module-please
# Criticize my first Python module please

This is a Python module I have just finished writing, which I plan to use at Project Euler. Please let me know how I have done and what I could do to improve it.

    # This constant is more or less an overestimate for the range in which
    # n primes exist. Generally 100 primes exist well within 100 * CONST numbers.
    CONST = 20

    def primeEval(limit):
        ''' This function yields primes using the Sieve of Eratosthenes. '''
        if limit:
            opRange = [True] * limit
            opRange[0] = opRange[1] = False
            for (ind, primeCheck) in enumerate(opRange):
                if primeCheck:
                    yield ind
                    for i in range(ind*ind, limit, ind):
                        opRange[i] = False

    def listToNthPrime(termin):
        ''' Returns a list of primes up to the nth prime. '''
        primeList = []
        for i in primeEval(termin * CONST):
            primeList.append(i)
            if len(primeList) >= termin:
                break
        return primeList

    def nthPrime(termin):
        ''' Returns the value of the nth prime. '''
        primeList = []
        for i in primeEval(termin * CONST):
            primeList.append(i)
            if len(primeList) >= termin:
                break
        return primeList[-1]

    def listToN(termin):
        ''' Returns a list of primes up to the number termin. '''
        return list(primeEval(termin))

    def lastToN(termin):
        ''' Returns the prime which is both less than n and nearest to n. '''
        return list(primeEval(termin))[-1]

- It is a good idea to always follow the PEP-0008 style guidelines: python.org/dev/peps/pep-0008 –  Mischa Arefiev Feb 15 '13 at 8:24

    def primeEval(limit):

Python convention says that functions should be named lowercase_with_underscores.

    if limit:

What is this for? You could be trying to avoid erroring out when limit=0, but it seems to me that you still get an error for limit=1.

    opRange = [True] * limit

As with functions, lowercase_with_underscores.

    for (ind, primeCheck) in enumerate(opRange):

You don't need the parens around ind, primeCheck.

    def listToNthPrime(termin):
        ''' Returns a list of primes up to the nth prime. '''
        primeList = []
        for i in primeEval(termin * CONST):
            primeList.append(i)
            if len(primeList) >= termin:
                break
        return primeList

You are actually probably losing out by attempting to stop the generator once you pass the number you wanted. You could write this as:

    return list(primeEval(termin * CONST))[:termin]

Chances are that you gain more by letting list() drive the loop than you gain by stopping early.

All of your functions will recalculate all the primes. For any sort of practical use you'll want to avoid that and keep the primes you've calculated.

- Are you suggesting that I just omit the loop within the functions nthPrime and listToNthPrime, or that I change the function primeEval to include a parameter which will allow it to terminate early? I just tested the calling functions without the loop; I was surprised the loop provides only a 3% increase in speed. Also, what do you mean by "keep primes you've calculated"? –  Jack J Feb 13 '13 at 23:39
- @JackJ, I'm suggesting you omit the loop. By keeping the primes, I mean that you should run the Sieve of Eratosthenes once to find all the primes you need and use that data over and over again. –  Winston Ewert Feb 13 '13 at 23:43
- @JackJ, store it in a variable. –  Winston Ewert Feb 14 '13 at 0:57
- Sorry, but how would I reuse that data? By saving the output of the sieve to a file? Also, is the reason for omitting the loop to increase readability? –  Jack J Feb 14 '13 at 0:58

General programming issues (non-Python specific):

• Avoiding duplicated code: listToNthPrime() and nthPrime() are identical aside from the indexing. The latter could be changed to def nthPrime(termin): return listToNthPrime(termin)[-1]. But it is debatable whether such a function is needed at all, because indexing the first or last element of a list is such basic usage that usually no further abstraction is necessary. So you could just replace your calls to nthPrime() with listToNthPrime()[-1]. The same obviously also holds for listToN() and lastToN(), in both senses. In fact you can just omit these functions.

• Naming: all identifier names should capture their purpose as precisely as the abstraction level allows (clunky names usually signal a need to refactor or to change the abstraction). In that sense the name primeEval could be improved: "Eval" is often too general to be really meaningful. iterPrimes() would work; it also makes clear that the result is not a list.
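One way to act on the "keep the primes you've calculated" advice is a small module-level cache (a sketch of mine; `prime_eval` stands for the PEP-8 rename of `primeEval` above):

```python
_primes = []        # primes found so far, in increasing order
_sieve_limit = 0    # upper bound covered by the cached sieve run

def primes_up_to(n):
    """Return all primes below n, re-running the sieve only when n grows."""
    global _primes, _sieve_limit
    if n > _sieve_limit:
        _primes = list(prime_eval(n))
        _sieve_limit = n
    return [p for p in _primes if p < n]
```

Repeated calls with non-increasing bounds then cost only a list scan instead of a fresh sieve.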
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5325338244438171, "perplexity": 3419.6661947274556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422120394037.54/warc/CC-MAIN-20150124172634-00140-ip-10-180-212-252.ec2.internal.warc.gz"}
https://arxiv.org/abs/1109.5231
cs.LG

# Noise Tolerance under Risk Minimization

Abstract: In this paper we explore noise-tolerant learning of classifiers. We formulate the problem as follows. We assume that there is an **unobservable** training set which is noise-free. The actual training set given to the learning algorithm is obtained from this ideal data set by corrupting the class label of each example. The probability that the class label of an example is corrupted is a function of the feature vector of the example. This accounts for most kinds of noisy data one encounters in practice. We say that a learning method is noise-tolerant if the classifiers learnt with the ideal noise-free data and with noisy data both have the same classification accuracy on the noise-free data. In this paper we analyze the noise-tolerance properties of risk minimization (under different loss functions), which is a generic method for learning classifiers. We show that risk minimization under the 0-1 loss function has impressive noise-tolerance properties, and that under the squared-error loss it is tolerant only to uniform noise; risk minimization under other loss functions is not noise-tolerant. We conclude the paper with some discussion on the implications of these theoretical results.

Subjects: Machine Learning (cs.LG)
DOI: 10.1109/TSMCB.2012.2223460
Cite as: arXiv:1109.5231 [cs.LG] (or arXiv:1109.5231v4 [cs.LG] for this version)

## Submission history

From: Naresh Manwani [view email]
[v1] Sat, 24 Sep 2011 04:50:55 GMT (93kb)
[v2] Mon, 23 Jan 2012 16:17:35 GMT (93kb)
[v3] Mon, 21 May 2012 12:56:04 GMT (83kb)
[v4] Sat, 13 Oct 2012 11:14:22 GMT (77kb)
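To see the flavor of the claimed contrast, here is a toy experiment (my own construction, not from the paper): a 1-D threshold classifier is fit by empirical risk minimization under 0-1 loss and under squared loss, on labels corrupted with a feature-dependent noise rate that stays below 1/2, and both are scored on the clean labels. The 0-1 minimizer typically lands near the true threshold, while the squared-loss fit drifts.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 4000)
y = np.where(x > 0.0, 1, -1)                    # clean labels; true threshold 0

flip = rng.random(x.size) < 0.3 * (x + 1) / 2   # noise rate: 0 at x=-1, 0.3 at x=1
y_noisy = np.where(flip, -y, y)

# Empirical 0-1 risk minimization over a grid of thresholds.
thresholds = np.linspace(-1, 1, 401)
risks = [np.mean(np.where(x > t, 1, -1) != y_noisy) for t in thresholds]
t_01 = thresholds[int(np.argmin(risks))]

# Squared-error loss: least-squares linear fit, classify by sign(w*x + b).
w, b = np.polyfit(x, y_noisy, 1)
t_sq = -b / w

for name, t in [("0-1 loss", t_01), ("squared loss", t_sq)]:
    acc = np.mean(np.where(x > t, 1, -1) == y)  # accuracy on clean labels
    print(name, "threshold:", round(float(t), 3), "clean accuracy:", round(float(acc), 3))
```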
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.728351354598999, "perplexity": 1451.6096729888475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589350.19/warc/CC-MAIN-20180716135037-20180716155037-00302.warc.gz"}
http://en.wikipedia.org/wiki/Beta_Corvi
# Beta Corvi

Observation data (Epoch J2000.0, Equinox J2000.0 (ICRS)):

• Constellation: Corvus
• Right ascension: 12h 34m 23.23484s[1]
• Declination: −23° 23′ 48.3374″[1]
• Apparent magnitude (V): 2.647[2]
• Spectral type: G5 II[3]
• U−B color index: +0.586[2]
• B−V color index: +0.898[2]
• R−I color index: +0.44[4]
• Radial velocity (Rv): −7.6[5] km/s
• Proper motion (μ): RA +1.11[1] mas/yr, Dec. −56.56[1] mas/yr
• Parallax (π): 22.39 ± 0.18[1] mas
• Distance: 146 ± 1 ly (44.7 ± 0.4 pc)
• Absolute magnitude (MV): −0.61[6]
• Mass: 3.7 ± 0.1[3] M☉
• Radius: 16[7] R☉
• Luminosity: 164[8] L☉
• Surface gravity (log g): 2.52 ± 0.03[3] cgs
• Temperature: 5,100 ± 80[3] K
• Metallicity [Fe/H]: −0.01[6] dex
• Rotational velocity (v sin i): 8[9] km/s
• Age: 2.06 × 10⁸[3] years

(Image: location of β Corvi (circled) in Corvus.)

Other designations: Kraz, β Crv, Beta Corvi, Beta Crv, 9 Corvi, 9 Crv, BD−22 3401, CD−22 3401, CD−22 9505, CPD−22 5388, FK5 471, GC 17133, HD 109379, HIP 61359, HR 4786, PPM 260512, SAO 180915.[10][4] SIMBAD data.

Beta Corvi (Beta Crv, β Corvi, β Crv) is the second-brightest star in the southern constellation of Corvus. It has the traditional name Kraz, whose origin and meaning remain uncertain.[11][12]

In Chinese, 軫宿 (Zhěn Sù), meaning Chariot (asterism), refers to an asterism consisting of β Corvi, γ Corvi, ε Corvi and δ Corvi.[13] Consequently, β Corvi itself is known as 軫宿四 (Zhěn Sù sì, English: the Fourth Star of Chariot).[14]

## Structure

Beta Corvi has about 3.7 times the Sun's mass and is roughly 206 million years old,[3] which is old enough for a star of this mass to consume the hydrogen at its core and evolve away from the main sequence. The stellar classification is G5 II,[3] with the luminosity class of 'II' indicating this is a bright giant. The effective temperature of the star's outer envelope is about 5,100 K,[3] which produces the yellow hue common to G-type stars.[15] The measured angular diameter of this star is 3.30 ± 0.17 mas.[7] At an estimated distance of 146 light-years (45 parsecs),[1] this yields a physical size of about 16 times the radius of the Sun.[16][11] Because of the star's mass and radius, it emits about 164 times the luminosity of the Sun.[8] The abundance of elements other than hydrogen or helium, what astronomers term metallicity, is similar to the proportions in the Sun.[6] This is a variable star that ranges in apparent visual magnitude from a low of 2.66 to a high of 2.60.[17]

## References

1. van Leeuwen, F. (November 2007). "Validation of the new Hipparcos reduction". Astronomy and Astrophysics 474 (2): 653–664. arXiv:0708.1752. Bibcode:2007A&A...474..653V. doi:10.1051/0004-6361:20078357.
2. Gutierrez-Moreno, Adelina et al. (1966). "A System of photometric standards". Publicaciones Universidad de Chile, Department de Astronomy 1: 1–17. Bibcode:1966PDAUC...1....1G.
3. Lyubimkov, Leonid S. et al. (February 2010). "Accurate fundamental parameters for A-, F- and G-type Supergiants in the solar neighbourhood". Monthly Notices of the Royal Astronomical Society 402 (2): 1369–1379. arXiv:0911.1335. Bibcode:2010MNRAS.402.1369L. doi:10.1111/j.1365-2966.2009.15979.x.
4. HR 4786, database entry, The Bright Star Catalogue, 5th Revised Ed. (Preliminary Version), D. Hoffleit and W. H. Warren, Jr., CDS ID V/50. Accessed on line September 9, 2008.
5. Evans, D. S. (June 20–24, 1966). In Batten, Alan Henry; Heard, John Frederick (eds.). Determination of Radial Velocities and their Applications, Proceedings from IAU Symposium no. 30. University of Toronto: International Astronomical Union. Bibcode:1967IAUS...30...57E.
6. Takeda, Yoichi; Sato, Bun'ei; Murata, Daisuke (August 2008). "Stellar Parameters and Elemental Abundances of Late-G Giants". Publications of the Astronomical Society of Japan 60 (4): 781–802. arXiv:0805.2434. Bibcode:2008PASJ...60..781T. doi:10.1093/pasj/60.4.781.
7. Richichi, A.; Percheron, I.; Khristoforova, M. (February 2005). "CHARM2: An updated Catalog of High Angular Resolution Measurements". Astronomy and Astrophysics 431: 773–777. Bibcode:2005A&A...431..773R. doi:10.1051/0004-6361:20042039.
8. Mallik, Sushma V. (December 1999). "Lithium abundance and mass". Astronomy and Astrophysics 352: 495–507. Bibcode:1999A&A...352..495M.
9. Bernacca, P. L.; Perinotto, M. (1970). "A catalogue of stellar rotational velocities". Contributi Osservatorio Astronomico di Padova in Asiago 239 (1). Bibcode:1970CoAsi.239....1B.
10. SV* ZI 946 -- Variable Star, database entry, SIMBAD. Accessed on line September 9, 2008.
11. Kaler, James B. "KRAZ (Beta Corvi)". Stars, University of Illinois. Retrieved 2012-12-28.
12. Falkner, David E. (2011). The Mythology of the Night Sky: An Amateur Astronomer's Guide to the Ancient Greek and Roman Legends. Patrick Moore's Practical Astronomy. Springer. p. 81. ISBN 1-4614-0136-4.
13. (Chinese) 中國星座神話, written by 陳久金. Published by 台灣書房出版有限公司, 2005, ISBN 978-986-7332-25-7.
14. (Chinese) 香港太空館 - 研究資源 - 亮星中英對照表, Hong Kong Space Museum. Accessed on line November 23, 2010.
15. "The Colour of Stars". Australia Telescope, Outreach and Education, Commonwealth Scientific and Industrial Research Organisation. December 21, 2004. Retrieved 2012-01-16.
16. Lang, Kenneth R. (2006). Astrophysical Formulae. Astronomy and Astrophysics Library 1 (3rd ed.). Birkhäuser. ISBN 3-540-29692-1. The radius (R*) is given by:
$$\begin{align} 2\cdot R_* & = \frac{(10^{-3}\cdot 44.7\cdot 3.30)\ \text{AU}}{0.0046491\ \text{AU}/R_{\odot}} \\ & \approx 32\cdot R_{\odot} \end{align}$$
17. Kukarkin, B. V. et al. (1981). Nachrichtenblatt der Vereinigung der Sternfreunde e.V. (Catalogue of suspected variable stars). Moscow: Academy of Sciences USSR Shternberg. Bibcode:1981NVS...C......0K.
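As a quick sanity check of the formula in note [16] (my addition), the small-angle relation gives the physical diameter in AU directly from the angular diameter in mas and the distance in pc:

```python
# diameter[AU] = 1e-3 * theta[mas] * distance[pc]; then convert AU -> solar radii
theta_mas, dist_pc = 3.30, 44.7
AU_PER_RSUN = 0.0046491

diameter_rsun = (1e-3 * theta_mas * dist_pc) / AU_PER_RSUN
print(diameter_rsun / 2)   # ~15.9, i.e. a radius of about 16 solar radii
```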
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9183462858200073, "perplexity": 16024.981819351926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769374.67/warc/CC-MAIN-20141217075249-00067-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/what-if-colour-didnt-exist.60609/
# What if colour didn't exist?

1. Jan 21, 2005

### babtridge

Hi there,
Could anybody please explain how electron-positron annihilation supports the conjecture that 3 colour states exist for quarks? Plus, in the baryon 1/2 octet and 3/2 decuplet, what would be the corresponding set of states if colour did not exist, such that quarks of the same flavour were identical? Thanks a lot in advance guys and gals.....

2. Jan 21, 2005

### misogynisticfeminist

Uhhh, I don't know how leptons can prove that 3 colour states exist.

3. Jan 21, 2005

### marlon

This has nothing to do with the existence of the colour quantum number. This number was introduced because certain wavefunctions (like that of the $$\Delta^{++}$$ particle) could exhibit "all equal" quantum numbers. In the case of the above-mentioned particle, the wavefunction consists of three up quarks, three spin-up states, and three 1s energy levels. Such a wavefunction is symmetric and so violates the Pauli exclusion rule; since the total wavefunction must be antisymmetric, colours were introduced to supply the antisymmetry.

regards

marlon

Last edited: Jan 21, 2005

4. Jan 21, 2005

### anti_crank

Draw a Feynman diagram as follows. Going in you have an electron and a positron, which annihilate into a virtual photon; the photon then turns into a quark-antiquark pair that go out. The idea is that for high enough energies, this diagram contributes to the total electron-positron cross section. Let's assume only the light quarks contribute; then you can sum up the amplitudes for annihilation into the light quark/antiquark pairs and calculate the fraction of the time quarks are produced as opposed to a photon pair or a muon-antimuon pair. If color exists, then there are 3 diagrams as far as the EM interaction is concerned for each quark flavor, one for each color. This increases the fraction of baryons or mesons produced compared to the no-color case.

I do not have the time to work this out now; for the second question, the idea is that you are now looking for quark wavefunctions that are antisymmetric with respect to flavor and spin instead of symmetric. The spatial wavefunction is always symmetric in the ground state, and the color singlet wavefunction is always antisymmetric when it is included.

Edit: if you have Griffiths' book on elementary particles, he touches on this somewhere in the chapter on building baryon wavefunctions.

5. Jan 24, 2005

### babtridge

Thanks for taking the time to help me out on this one... you've all made stuff a lot clearer. Tekiteasy peeps!
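A standard way to make post #4 quantitative (textbook material, e.g. Griffiths; sketched here for reference) is the ratio

$$R = \frac{\sigma(e^+e^-\to \text{hadrons})}{\sigma(e^+e^-\to \mu^+\mu^-)} = N_c \sum_q e_q^2,$$

where the sum runs over the quark flavors accessible at the given energy and $N_c$ is the number of colours. With only $u, d, s$ contributing this gives $N_c\left(\tfrac{4}{9}+\tfrac{1}{9}+\tfrac{1}{9}\right) = \tfrac{2}{3}N_c$, so measurements of $R$ near 2 below the charm threshold favor $N_c = 3$ over the colourless prediction of $2/3$.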
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8808212280273438, "perplexity": 1044.508043471867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719677.59/warc/CC-MAIN-20161020183839-00054-ip-10-171-6-4.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions?sort=newest
# All Questions

- (14 views) Like the question says: is there any way to download a compilable tex source from a wikipedia article? Or maybe a tool that generates a tex source from a wikipedia article.
- **Footnote symbol bold, small in text, normalsize in footnote** (9 views): I should know this... I'd like to have the footnote figure in bold face and small (textsuperscript) in the running text, and bold face and normal size in the footnote text at the bottom of the page. ...
- **glossaries abbreviation in abbreviation - missing footnote** (3 views): Out of this, I managed to have footnotes (glossaries at first use) within the description of glossary and acronyms using the glossaries package. Now, I referred in an acronym to another abbreviation. ...
- **dvi output fonts blurred after upgrading to Windows 10 - pdf perfect** (7 views): Using MiKTeX 2.9 and amsbook, the dvi output fonts have become blurred or furry precisely after upgrading to Windows 10 (a clean install onto a formatted disc). In contrast, the pdf output fonts ...
- **How should I avoid orphan and widow lines in ConTeXt** (5 views): Typesetting a very simple document (no maths etc, just text, quotations and images), I am stuck with several orphan and widow lines. Is there a way to make ConTeXt avoid doing this? I can't find any ...
- **TikZ node newcommand: pass color as argument** (23 views): I need to create a row of circles (colored markers) followed by some text. See the image below. I achieve this using the following code. \begin{tikzpicture}[scale=2] \matrix[nodes={minimum ...
- **font consistency in environments (theorem…) and in general text** (16 views): I am using a template, and it uses the font \usefont{T1}{bch}{m}{n}\selectfont{}, the third font in the code and output, for the text in general. But this font is not used inside environments, like ...
- **Pgfplots: \addplot3 with all axes in logarithmic scale** (14 views): I would like to generate a surf plot and set the mode of all axes to "log". For instance \documentclass[tikz]{standalone} \usepackage{pgfplots} \begin{document} \begin{tikzpicture} \begin{axis} ...
- **Simpsons function illustration in TikZ** (34 views): I have this code: \documentclass[border=2pt]{standalone} \usepackage[utf8]{inputenc} \usepackage{animate} \usepackage{calc} \usepackage{fp} \usepackage{pgfplots} \usepackage{tikz} ...
- **Courier font just not working (LaTeX, Mac OS X)** (25 views): Imagine we want to use the courier font. Start from this: \texttt{Wanna nice monospace font here} {\ttfamily TeleType And here} \begin{Verbatim}[fontfamily=courier] And here please ...
- **moving from old plain TeX to XeTeX** (39 views): I have a style file I used for a project in the pre-Unicode age. Now I want to convert it for use in XeTeX. I used Computer Modern and now would like to use CMU Serif. Pointers on what to do with the ...
- **PDFLaTeXify, PDFTeXify, Texify, and all other -ify options stopped working in WinEdt** (11 views): I'm working on creating a CV/resume in LaTeX and therefore needed to switch to another compiling engine, and I was also fiddling around with using bibtex or biber to parse the bibliography and create ...
- **Overfull \hbox in subsection heading when using fontspec** (11 views): In the following example I get an Overfull \hbox in the subsection heading when loading fontspec: \documentclass[11pt,a4paper]{memoir} \usepackage{fontspec} \makeatletter ...
- **Changing graphicx default float positioning [duplicate]** (16 views): I have the parameters [htbpH] to tell the figure environment which float positions are allowed. Afaik, [tbp] is the default setup. I'd like to change it to [tp] by default; can you help me to achieve ...
- **-synctex=1 doesn't work** (18 views): I'm using Texmaker, and to show the pdf file (internal), I used -synctex=1 to show it at the position where I'm writing the code. It worked pretty well for me before, but it has stopped working for 2 ...
- **How to modify a PostScript file to use the same font as the main document** (30 views): I have to include a PostScript picture in a LaTeX document. The ps file is generated by another program. I would like to modify the font within the ps so that it corresponds to the same main font of ...
- **Use of "hyperref" package does not let \ContinuedFloat increment the figure number** (18 views): I need to have figures with several subfigures. In some cases, the subfigures of a figure have to go to the next page. For this purpose, I have to use \ContinuedFloat, but then the figure number of ...
- **How to use onslide in a listing?** (12 views): I need to animate single contiguous code listings by showing and hiding parts of code using \onslide. However, TeX is complaining. How can I achieve this? I tried the advice here but it didn't work.
- **Build LuaLaTeX in Sublime** (16 views): My Sublime Text build settings are now set to pdflatex; I want to build a lualatex file. How can I change the setting?
- **Drawing smooth path of particle through magnet system** (35 views): I need to get a plot with the trajectory of a particle through a system of magnets. I know the coordinate and angle of the particle on the edge of each magnet (like y=0.5, alpha=0.3 rad), so how can I draw a ...
- **\smile underneath the exponent** (46 views): I'd like to write a formula where there is a $\smile$ underneath the exponent, like in this example which works fine: $2^{\overset{\text{blabla}}{\smile}}$ But when I want to replace the blabla ...
- **Automatic numbering in multiple documents** (26 views): When typesetting something like exercise sheets, I have a separate document for every sheet (because of performance reasons while compiling; sometimes also because I compile some sheets not with ...
- **Including a reledpar/reledmac file** (18 views): I have a main file and some included files. The first included file has a structure similar to the example given in the reledmac/reledpar documentation whose name is ...
- **Unexpected behavior of cuted/strip** (25 views): When I use cuted/strip, I get no output. The MWE is below: \documentclass[twocolumn]{article} \usepackage{cuted} \begin{document} \section{Foo} foo \section{Bar} bar \begin{strip} baz \end{strip} ...
- **Arabic script not displaying in chapter headings, even though body text is fine** (12 views): In the MWE below (thanks to @maïeul), the Arabic chapter heading and chapter number do not display the Arabic font, but the body text displays fine. It seems to be a problem with Arabic, because ...
- **Errors with the moderncv template used with w32tex** (21 views): I want to prepare my resume using the moderncv package. I am using w32tex (installed this September), which includes the moderncv package. I downloaded the template from CTAN ...
- **Extract all arguments of a certain command from a document** (72 views): I have a command \foo which takes two arguments and which I use about a hundred times throughout my document (but never within an argument of another instance of \foo). I now want to spellcheck every ...
- **entries in toc separated with same distances** (22 views): I am using the Easy Thesis template to write my thesis. I want to add a 'chapter-like page' with a list of conferences and posters I did during my studies. So below I explain what I did to create a ...
- **Problem with tables and side captions** (29 views): I'm working on putting captions beside a table. When I used SCtable to do so, it messed up the top text on my page and made it appear below the table. Is there any other way I can do this? Also, how ...
- **How to create a command to make a custom indicator function** (27 views): So, I've found this package for common probability items in LaTeX, and the very last entry is a "customizable" indicator function in which one simply inputs the variable (e.g. x) and it returns ...
- **PGFOrnaments don't center properly** (24 views): I would like to have nice fleurons and decorations at the end of my chapters to mimic the style of a book from the late 1800's. Some searches brought pgfornament to my attention (a more user ...
- **Simple way to align text within tabbing environment** (23 views): I want to align text within a tabbing environment without using a new environment. That means flushleft and flushright are not options. Ideally something like this would be nice: \begin{tabbing} ...
- **How do you reference using APA style in LaTeX?** (15 views): I googled and searched a lot, or maybe I'm just missing something, I don't know. My problem is I want to do a reference page using APA style. I kept on trying but it doesn't seem to give any output. Anyone ...
- **Undefined control sequence and "Something's wrong--perhaps a missing \item"** (22 views): I am using Windows 7 and getting a LaTeX error in an algorithm; I am new to LaTeX and have already spent a lot of time trying to figure this out, but no luck. Could you please help me here? \begin{algorithm} ...
- **How to turn on url boxes for links in LaTeX?** (18 views): I have several links in a document. However, I would like to highlight these links with boxes so that they do not pass unnoticed. Any idea how to do this?
- **positioning a node relative to multiple nodes** (77 views): I would like to place a node relative to multiple nodes. In this post, the calc library is suggested in one of the answers. In the picture below, the centered node (E) is not placed at the exact ...
- **Using newtxmath for letters but not symbols** (36 views): I'd like to use newtxmath for the letters in my equations. Unfortunately, if I just enable newtxmath, I find that the \lesssim and \lnsim symbols look too much alike. Here's an example: ...
- **Can my algorithm description be improved? [on hold]** (22 views): I'm writing a pseudocode description of the FISTA (with line search) algorithm from convex optimization. (I'm using slides 17-18 here as a source.) Can anyone suggest any improvements to my latex ...
- **How to create marginal notes without 'opening up' the paragraph** (35 views): This is an example with the \marginalstar macro from the TeXbook. Without \marginalstar the paragraph is normal; with \marginalstar the paragraph is "opened up". \def\strutdepth{\dp\strutbox} ...
- **Force numeric citations in RevTeX** (14 views): I am using the Review of Modern Physics (rmp) style for a report. By default, it uses author/year citations. I want to force revtex to use numerical citations. Adding the ...
- **Two vertically aligned subfigures not lining up** (18 views): I've got two subfigures below each other. They have the same aspect ratio but don't line up as they should. This is my LaTeX code, and below is the produced image. unity-slide.png is 1174 x 660 pixels ...
- **xelatex: Minted code block represents tabs as ^^I** (15 views): I am producing LaTeX output with org mode. My code snippets wind up being indented with tabs because of my c-mode settings. Changing these would be inconvenient. Although the snippet I am posting ...
- **How can I vertically center one element, instead of the whole group** (31 views): I can vertically and horizontally center a group with the vplace environment from memoir. \begin{vplace}[0.7] \begin{center} some text \\ more text \\ a third line of ...
- **Add decoration at end of chapter** (137 views): A half-full page at the end of a chapter nicely signals that the chapter ends. However, sometimes the last page of a chapter looks a bit too empty if there are only, say, 5-10 lines. In some older ...
- **cube folding with tikz** (33 views): I would like to fold 12 cubes into one. Please help me with this. \documentclass[11pt]{article} \usepackage{tikz} %optional libraries \usetikzlibrary{decorations,arrows,automata,positioning} ...
- **Change coordinate order in TikZ** (29 views): I want to change the order of entering coordinates in TikZ. In 2D coordinates it is (x,y), as in (right/left, up/down). As soon as I enter a third coordinate, TikZ just adds the third coordinate at ...
- **IPA characters with tipa in beamerposter switch to small font size, cannot be set to normal font size** (24 views): I am using the package tipa in order to have IPA characters at my disposal. However, when used in conjunction with beamerposter, the tipa characters switch to a small font size. Setting them to another ...
- **Looking for a LaTeX directory class** (29 views): I'm looking for a LaTeX directory class, in order to print an alphabetical directory of persons and enterprises, with rubrics for geography, professional functions or enterprises, etc. Printing to A5 or ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9240004420280457, "perplexity": 3989.2306433239924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645315227.83/warc/CC-MAIN-20150827031515-00323-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/explanation-of-terminology-electroweak.862235/
# A Explanation of terminology: electroweak

1. Mar 15, 2016

### dextercioby

I'm not a specialist in this subject, so bear with me. I've always wondered why one claims that the electromagnetic and weak interactions are unified, but the strong one is not unified with the other two. Mathematically, I'm aware that the full gauge group of the SM is $U(1) \times SU(2) \times SU(3)$, so one perceives all three interactions separately and on an equal footing. In what exact sense are the weak and the em unified, but the strong not? Thank you!

2. Mar 15, 2016

Staff Emeritus

Since this is an A, what do you think the U(1) and SU(2) are?

3. Mar 15, 2016

### nrqed

U(1) has one generator and SU(2) has three generators. The key questions are: the E&M gauge field is associated to which of these generators? The $Z_0$ is associated to which ones? What about the $W^\pm$?

4. Mar 15, 2016

### dextercioby

U(1) is the gauge group of electromagnetism, while SU(2)w is the gauge group of the weak interactions. The fields are all 4-vectors (co-vectors actually): the e-m potential $A$ and $W^a$, $a = 1, 2, 3$. SU(2) (just as SU(3) in QCD) enters through the adjoint representation of dimension 3. One more time: why does one claim that A and W are "unified", if there's no single coupling constant and no global compact and connected gauge group which has the direct product U(1) x SU(2) as a subgroup?

5. Mar 15, 2016

### nrqed

Actually, U(1) is not the gauge group of electromagnetism and SU(2) is not the gauge group of the weak interaction. That was the point I wanted to make. That U(1) is the weak hypercharge, not the electromagnetic U(1). What happens is that electromagnetism corresponds to a linear combination of the weak hypercharge generator and of the $T_3$ diagonal generator of the weak isospin SU(2), so that the electric charge is given by $Q = T_3 + Y/2$ where (by abuse of notation) here $T_3$ is the eigenvalue of the diagonal generator of the weak isospin SU(2), and Y is the hypercharge. The orthogonal linear combination corresponds to the $Z_0$ boson, while the $T_\pm$ correspond to the $W^\pm$. It is in that sense that the weak interaction and the electromagnetic force are deeply linked.

Now, you are right that they do not have the same coupling constant, since the coupling constants of the hypercharge U(1) and the weak isospin SU(2) are not equal, and this shows up through the Weinberg angle. So I agree with you that saying they are "unified" is a stretch. But they are definitely deeply linked together, in the sense I just described.

6. Mar 15, 2016

### dextercioby

Thanks for pointing that essential aspect out to me. Now I understand.

7. Mar 15, 2016

### samalkhaiat

8. Mar 16, 2016

Staff Emeritus

And there's your problem. That's a broken symmetry. The real U(1) x SU(2) has the U(1) of weak hypercharge and the SU(2) of weak isospin. The physical photon and the W and Z are mixtures of these (and the Higgs), which is where unification comes in.
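In one common sign convention (standard textbook material, not quoted from the thread), the mixing nrqed and the staff member describe reads:

```latex
% Photon A and Z boson as orthogonal mixtures of the hypercharge boson B
% and the third weak-isospin boson W^3, with Weinberg angle theta_W:
\begin{aligned}
A_\mu &= \cos\theta_W\, B_\mu + \sin\theta_W\, W^3_\mu,\\
Z_\mu &= -\sin\theta_W\, B_\mu + \cos\theta_W\, W^3_\mu,\\
e &= g \sin\theta_W = g' \cos\theta_W, \qquad Q = T_3 + \tfrac{Y}{2}.
\end{aligned}
```

The two independent couplings $g$ (isospin) and $g'$ (hypercharge) survive as a single electric charge $e$ plus one mixing angle, which is the precise sense in which the two forces are "linked" rather than fully unified.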
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9401739239692688, "perplexity": 849.2850401182533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889542.47/warc/CC-MAIN-20180120083038-20180120103038-00569.warc.gz"}
http://www.exampleproblems.com/wiki/index.php/Derivative
# Derivative

In mathematics, the derivative is one of the two central concepts of calculus. (The other is the integral; the two are related via the fundamental theorem of calculus.)

The simplest type of derivative is the derivative of a real-valued function of a single real variable. It has several interpretations:

• The derivative gives the slope of a tangent to the graph of the function at a point. In this way, derivatives can be used to determine many geometrical properties of the graph, such as concavity or convexity.
• The derivative provides a mathematical formulation of rate of change; it measures the rate at which the function's value changes as the function's argument changes.

This derivative is the kind usually encountered in a first course on calculus, and historically was the first to be discovered. However, there are also many generalizations of the derivative. The remainder of this article discusses only the simplest case (real-valued functions of real numbers).

## Differentiation and differentiability

In physical terms, differentiation expresses the rate at which a quantity, y, changes with respect to the change in another quantity, x, on which it has a functional relationship. Using the symbol Δ to refer to change in a quantity, this rate is defined as a limit of difference quotients

${\displaystyle {\frac {\Delta y}{\Delta x}}}$

as Δx approaches 0. In Leibniz's notation for derivatives, the derivative of y with respect to x is written

${\displaystyle {\frac {dy}{dx}}}$

suggesting the ratio of two infinitesimal quantities. The above expression is pronounced in various ways such as "dy by dx" or "dy over dx". The form "dy dx" is also used conversationally, although it may be confused with the notation for element of area.

Modern mathematicians do not bother with "dependent quantities", but simply state that differentiation is a mathematical operation on functions. The precise definition of this operation (which therefore need not deal with infinitesimal quantities) is given as:

${\displaystyle \lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}.}$

A function is differentiable at a point x if its derivative exists at that point; a function is differentiable on an interval if it is differentiable at every x within the interval. If a function is not continuous at x, then there is no tangent line and the function is therefore not differentiable at x; however, even if a function is continuous at x, it may not be differentiable there. In other words, differentiability implies continuity, but not vice versa. One famous example of a function that is continuous everywhere but differentiable nowhere is the Weierstrass function.

The derivative of a differentiable function can itself be differentiable. The derivative of a derivative is called a second derivative. Similarly, the derivative of a second derivative is a third derivative, and so on.

## Newton's difference quotient

The derivative of a function f at x is geometrically the slope of the tangent line to the graph of f at x. Without the concept which we are about to define, it is impossible to directly find the slope of the tangent line to a given function, because we only know one point on the tangent line, namely (x, f(x)). Instead, we will approximate the tangent line with multiple secant lines that have progressively shorter distances between the two intersecting points. When we take the limit of the slopes of the nearby secant lines in this progression, we will get the slope of the tangent line.
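A quick numerical illustration of this convergence (not part of the original article; the choice f(x) = x² at x = 1, where the tangent slope is 2, is arbitrary):

```python
# Secant slopes (f(x+h) - f(x)) / h approach the tangent slope as h shrinks.
f = lambda x: x ** 2
x = 1.0
for h in (1.0, 0.1, 0.01, 0.001):
    print(h, (f(x + h) - f(x)) / h)
# prints slopes 3.0, 2.1, 2.01, 2.001 (up to float rounding), approaching 2
```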
The derivative is then defined by taking the limit of the slope of secant lines as they approach the tangent line.

[Figure: Tangent line at (x, f(x)).]
[Figure: Secant to curve y = f(x) determined by points (x, f(x)) and (x+h, f(x+h)).]

To find the slopes of the nearby secant lines, choose a small number h. h represents a small change in x, and it can be either positive or negative. The slope of the line through the points (x, f(x)) and (x+h, f(x+h)) is

${\displaystyle {f(x+h)-f(x) \over h}.}$

This expression is Newton's difference quotient. The derivative of f at x is the limit of the value of the difference quotient as the secant lines get closer and closer to being a tangent line:

${\displaystyle f'(x)=\lim _{h\to 0}{f(x+h)-f(x) \over h}.}$

[Figure: Tangent line as limit of secants.]

If the derivative of f exists at every point x in the domain, we can define the derivative of f to be the function whose value at a point x is the derivative of f at x.

Since immediately substituting 0 for h results in division by zero, calculating the derivative directly can be unintuitive. One technique is to simplify the numerator so that the h in the denominator can be cancelled. This happens easily for polynomials; see calculus with polynomials. For almost all functions, however, the result is a mess. Fortunately, many guidelines exist.

## Notations for differentiation

### Lagrange's notation

The simplest notation for differentiation that is in current use is due to Joseph Louis Lagrange and uses the prime mark:

${\displaystyle f'(x)\;}$ for the first derivative,
${\displaystyle f''(x)\;}$ for the second derivative,
${\displaystyle f'''(x)\;}$ for the third derivative, and
${\displaystyle f^{(n)}(x)\;}$ for the nth derivative, provided n > 3.

### Leibniz's notation

The other common notation is Leibniz's notation for differentiation, which is named after Leibniz. For the function whose value at x is the derivative of f at x, we write:

${\displaystyle {\frac {d\left(f(x)\right)}{dx}}.}$

We can write the derivative of f at the point a in two different ways:

${\displaystyle {\frac {d\left(f(x)\right)}{dx}}\left.{\!\!{\frac {}{}}}\right|_{x=a}=\left({\frac {d\left(f(x)\right)}{dx}}\right)(a).}$

If the output of f(x) is another variable, for example if y = f(x), we can write the derivative as:

${\displaystyle {\frac {dy}{dx}}.}$

Higher derivatives are expressed as

${\displaystyle {\frac {d^{n}\left(f(x)\right)}{dx^{n}}}}$ or ${\displaystyle {\frac {d^{n}y}{dx^{n}}}}$

for the n-th derivative of f(x) or y respectively. Historically, this came from the fact that, for example, the 3rd derivative is:

${\displaystyle {\frac {d\left({\frac {d\left({\frac {d\left(f(x)\right)}{dx}}\right)}{dx}}\right)}{dx}}}$

which we can loosely write as:

${\displaystyle \left({\frac {d}{dx}}\right)^{3}\left(f(x)\right)={\frac {d^{3}}{\left(dx\right)^{3}}}\left(f(x)\right).}$

Dropping brackets gives the notation above.

Leibniz's notation allows one to specify the variable for differentiation (in the denominator). This is especially relevant for partial differentiation. It also makes the chain rule easy to remember, because the "du" terms appear symbolically to cancel:

${\displaystyle {\frac {dy}{dx}}={\frac {dy}{du}}\cdot {\frac {du}{dx}}.}$

(In the popular formulation of calculus in terms of limits, the "du" terms cannot literally cancel, because on their own they are undefined; they are only defined when used together to express a derivative.
In nonstandard analysis, however, they can be viewed as infinitesimal numbers that cancel.)

### Newton's notation

Newton's notation for differentiation (also called the dot notation for differentiation) requires placing a dot over the function name:

${\displaystyle {\dot {x}}={\frac {dx}{dt}}=x'(t)}$
${\displaystyle {\ddot {x}}=x''(t)}$

and so on. Newton's notation is mainly used in mechanics, normally for time derivatives such as velocity and acceleration, and in ODE theory. It is usually only used for first and second derivatives.

### Euler's notation

Euler's notation uses a differential operator, denoted as D, which is prefixed to the function with the variable as a subscript of the operator:

${\displaystyle D_{x}f(x)\;}$ for the first derivative,
${\displaystyle {D_{x}}^{2}f(x)\;}$ for the second derivative, and
${\displaystyle {D_{x}}^{n}f(x)\;}$ for the nth derivative, provided n > 1.

This notation can also be abbreviated when taking derivatives of expressions that contain a single variable. The subscript to the operator is dropped, and the variable is assumed to be the only one present in the expression. In the following examples, u represents any expression of a single variable:

${\displaystyle Du\;}$ for the first derivative,
${\displaystyle D^{2}u\;}$ for the second derivative, and
${\displaystyle D^{n}u\;}$ for the nth derivative, provided n > 1.

Euler's notation is useful for stating and solving linear differential equations.

## Critical points

Points on the graph of a function where the derivative is undefined or equals zero are called critical points or sometimes stationary points (in the case where the derivative equals zero). If the second derivative is positive at a critical point, that point is a local minimum; if negative, it is a local maximum; if zero, it may or may not be a local minimum or local maximum. Taking derivatives and solving for critical points is often a simple way to find local minima or maxima, which can be useful in optimization. In fact, local minima and maxima can only occur at critical points. This is related to the extreme value theorem.

## Physics

Arguably the most important application of calculus to physics is the concept of the "time derivative"—the rate of change over time—which is required for the precise definition of several important concepts. In particular, the time derivatives of an object's position are significant in Newtonian physics:

• Velocity (instantaneous velocity; the concept of average velocity predates calculus) is the derivative (with respect to time) of an object's position.
• Acceleration is the derivative (with respect to time) of an object's velocity.
• Jerk is the derivative (with respect to time) of an object's acceleration.

For example, if an object's position is ${\displaystyle p(t)=-16t^{2}+16t+32}$, then the object's velocity is ${\displaystyle {\dot {p}}(t)=p'(t)=-32t+16}$; the object's acceleration is ${\displaystyle {\ddot {p}}(t)=p''(t)=-32}$; and the object's jerk is ${\displaystyle p'''(t)=0.}$

If the velocity of a car is given as a function of time, then the derivative of that function with respect to time describes the acceleration of the car as a function of time.

## Algebraic manipulation

Messy limit calculations can be avoided, in certain cases, because of differentiation rules which allow one to find derivatives via algebraic manipulation rather than by direct application of Newton's difference quotient. One should not infer that the definition of derivatives, in terms of limits, is unnecessary.
Rather, that definition is the means of proving the following "powerful differentiation rules"; these rules are derived from the difference quotient.

• Constant rule: The derivative of any constant is zero.
• Constant multiple rule: If c is some real number, then the derivative of ${\displaystyle cf(x)}$ equals c multiplied by the derivative of f(x) (a consequence of linearity below).
• Linearity: (af + bg)' = af ' + bg' for all functions f and g and all real numbers a and b.
• General power rule (polynomial rule): If ${\displaystyle f(x)=x^{r}}$ for some real number r, then ${\displaystyle f'(x)=rx^{r-1}.}$
• Product rule: ${\displaystyle (fg)'=f'g+fg'}$ for all functions f and g.
• Quotient rule: ${\displaystyle (f/g)'=(f'g-fg')/(g^{2})}$ unless g is zero.
• Chain rule: If ${\displaystyle f(x)=h(g(x))}$, then ${\displaystyle f'(x)=h'(g(x))\cdot g'(x)}$.
• Inverse functions and differentiation: If ${\displaystyle y=f(x)}$ and ${\displaystyle x=f^{-1}(y)}$, and f(x) and its inverse are differentiable, with ${\displaystyle dy/dx}$ non-zero, then ${\displaystyle dx/dy=1/(dy/dx).}$
• Derivative of one variable with respect to another when both are functions of a third variable: Let ${\displaystyle x=f(t)}$ and ${\displaystyle y=g(t)}$. Then ${\displaystyle dy/dx=(dy/dt)/(dx/dt).}$
• Implicit differentiation: If ${\displaystyle f(x,y)=0}$ defines y implicitly as a function of x, then dy/dx = −(∂f/∂x)/(∂f/∂y).

In addition, the derivatives of some common functions are useful to know. See the table of derivatives.

As an example, the derivative of

${\displaystyle f(x)=2x^{4}+\sin(x^{2})-\ln(x)\;e^{x}+7}$

is

${\displaystyle f'(x)=8x^{3}+2x\cos(x^{2})-{\frac {1}{x}}\;e^{x}-\ln(x)\;e^{x}.}$

## Using derivatives to graph functions

Derivatives are a useful tool for examining the graphs of functions. In particular, the points in the interior of the domain of a real-valued function which take that function to local extrema will all have a first derivative of zero. However, not all critical points are local extrema; for example, ${\displaystyle f(x)=x^{3}}$ has a critical point at x = 0, but it has neither a maximum nor a minimum there. The first derivative test and the second derivative test provide ways to determine if the critical points are maxima, minima or neither.

In the case of multidimensional domains, the function will have a partial derivative of zero with respect to each dimension at local extrema. In this case, the second derivative test can still be used to characterize critical points, by considering the eigenvalues of the Hessian matrix of second partial derivatives of the function at the critical point. If all of the eigenvalues are positive, then the point is a local minimum; if all are negative, it is a local maximum. If there are some positive and some negative eigenvalues, then the critical point is a saddle point, and if none of these cases hold then the test is inconclusive (e.g., eigenvalues of 0 and 3).

Once the local extrema have been found, it is usually rather easy to get a rough idea of the general graph of the function, since (in the single-dimensional domain case) it will be uniformly increasing or decreasing except at critical points, and hence (assuming it is continuous) will have values in between its values at the critical points on either side.

## Generalizations

Where a function depends on more than one variable, the concept of a partial derivative is used.
Partial derivatives can be thought of informally as taking the derivative of the function with all but one variable held temporarily constant near a point. Partial derivatives are represented as ∂/∂x (where ∂ is a rounded 'd' known as the 'partial derivative symbol'). Some people pronounce the partial derivative symbol as 'der' rather than the 'dee' used for the standard derivative symbol, 'd'.

The concept of derivative can be extended to more general settings. The common thread is that the derivative at a point serves as a linear approximation of the function at that point. Perhaps the most natural situation is that of functions between differentiable manifolds; the derivative at a certain point then becomes a linear transformation between the corresponding tangent spaces, and the derivative function becomes a map between the tangent bundles.

In order to differentiate all continuous functions and much more, one defines the concept of distribution.

For complex functions of a complex variable, differentiability is a much stronger condition than the requirement that the real and imaginary parts of the function be differentiable with respect to the real and imaginary parts of the argument. For example, the function f(x + iy) = x + 2iy satisfies the latter but not the former. See also Holomorphic function.
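To make the last claim concrete (a standard check not spelled out above): writing f = u + iv with u(x, y) = x and v(x, y) = 2y, complex differentiability would require the Cauchy–Riemann equations to hold, but

```latex
% Cauchy-Riemann check for f(x + iy) = x + 2iy:
\frac{\partial u}{\partial x} = 1 \neq 2 = \frac{\partial v}{\partial y},
\qquad
\frac{\partial u}{\partial y} = 0 = -\frac{\partial v}{\partial x},
```

so the first Cauchy–Riemann equation fails everywhere, and f is nowhere complex differentiable even though u and v are smooth.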
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 46, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9797729849815369, "perplexity": 240.8281308689251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401583556.73/warc/CC-MAIN-20200928010415-20200928040415-00435.warc.gz"}
http://www.anuncommonlab.com/doc/starkf/unique_figure.html
unique_figure

Creates a figure with the given ID (tag) or selects it if it already exists, allowing one to easily reuse the same figure window identified with text instead of handles. This is useful when, e.g., running a script many times after clearing between runs, without having to hard-code figure numbers (which can become hard to keep track of).

h = unique_figure(id, varargin)

Any additional arguments are passed along to the figure's set method.

Inputs

id        A unique text name used to refer to the figure
varargin  Any additional arguments to pass to the figure's set method

Outputs

h         Figure handle

Example

Create a figure with a hidden tag called 'trajectory' and plot something in it. Also, give the figure window the name "Various Trajectories" and make the background white.

h = unique_figure('trajectory', ...
                  'Name', 'Various Trajectories', ...
                  'Color', 'w');
clf();
theta = linspace(0, 2*pi, 1000);
r = cos(5*theta);
plot(r.*cos(theta), r.*sin(theta), 'b', ...
     0.8 * r.*sin(theta) + 0.2, 0.8 * r.*cos(theta) + 0.1, 'r');
axis off equal;

% ... Do some more things....

Later on in the script, get the figure back and add to it.

unique_figure('trajectory');
hold on;
plot(0.7 * r.*sin(theta+pi/4) + 0.3, 0.7 * r.*cos(theta+pi/4) - 0.1, ...
     'Color', [0.1 0.8 0.1]);
hold off;
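For comparison only (my sketch, not part of this toolbox): Python's matplotlib supports a similar name-based workflow out of the box, since figure() accepts a string label and reuses the existing window with that label.

```python
import numpy as np
import matplotlib.pyplot as plt

plt.figure("trajectory")                 # creates the figure on first use
theta = np.linspace(0, 2 * np.pi, 1000)
r = np.cos(5 * theta)
plt.plot(r * np.cos(theta), r * np.sin(theta), "b")

# ... later in the script ...
plt.figure("trajectory")                 # selects the same figure by name
plt.plot(0.8 * r * np.sin(theta) + 0.2, 0.8 * r * np.cos(theta) + 0.1, "r")
plt.show()
```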
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18484826385974884, "perplexity": 6217.506789948129}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608686.22/warc/CC-MAIN-20170526222659-20170527002659-00426.warc.gz"}
http://mathhelpforum.com/advanced-statistics/214603-discrete-random-variables-pmf-print.html
# Discrete Random Variables - pmf

• March 11th 2013, 01:13 PM
Matt1993

Discrete Random Variables - pmf

The discrete random variable R takes the values in S = { -3, -1, 1, 3 } with probabilities, respectively, (1 - theta)/4, (1 - 3*theta)/4, (1 + 3*theta)/4, (1 + theta)/4, where theta is a real constant. Find the range of values of theta for which this is a valid probability mass function.

I think the answer is -1/3 <= theta <= 1/3, BUT HOW DO I PROVE IT!!!!!!! help!!

• March 11th 2013, 01:21 PM
ILikeSerena

Re: Discrete Random Variables - pmf

Quote: Originally Posted by Matt1993

Hi Matt1993! :)

What is the reason you think that -1/3 <= theta <= 1/3? That is likely the key to the proof...

• March 11th 2013, 01:36 PM
Matt1993

Re: Discrete Random Variables - pmf

See, all I did was set the pmf values to zero and then I just tried values until I came up with that answer. That's the problem.

• March 11th 2013, 01:47 PM
ILikeSerena

Re: Discrete Random Variables - pmf

That's close. The axioms of probability (see wiki) require in particular 2 things from the probabilities:
1. Each probability is at least 0: $p \ge 0$.
2. The sum of all probabilities is 1: $\sum p = 1$.

$( 1 - \theta )/4 \ge 0$
$( 1 - 3\theta )/4 \ge 0$
$( 1 + 3\theta )/4 \ge 0$
$( 1 + \theta )/4 \ge 0$
$( 1 - \theta )/4 + ( 1 - 3\theta )/4 + ( 1 + 3\theta )/4 + ( 1 + \theta )/4 = 1$
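Intersecting those constraints gives exactly the interval Matt1993 guessed; the sum condition holds for every theta, so only the nonnegativity conditions bind. A quick SymPy check (mine, not from the thread):

```python
import sympy as sp

theta = sp.symbols("theta", real=True)
probs = [(1 - theta) / 4, (1 - 3 * theta) / 4,
         (1 + 3 * theta) / 4, (1 + theta) / 4]

print(sp.simplify(sum(probs)))  # 1 -- the sum condition holds for any theta
print(sp.reduce_inequalities([p >= 0 for p in probs], theta))
# (-1/3 <= theta) & (theta <= 1/3), i.e. -1/3 <= theta <= 1/3
```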
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.904011070728302, "perplexity": 1369.3724183663405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500823528.84/warc/CC-MAIN-20140820021343-00383-ip-10-180-136-8.ec2.internal.warc.gz"}
https://research-portal.uea.ac.uk/en/publications/a-photosynthetic-antenna-complex-foregoes-unity-carotenoid-to-bac
# A photosynthetic antenna complex foregoes unity carotenoid-to-bacteriochlorophyll energy transfer efficiency to ensure photoprotection

Dariusz M. Niedzwiedzki, David J.K. Swainsbury, Daniel P. Canniffe, C. Neil Hunter, Andrew Hitchcock

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

## Abstract

Carotenoids play a number of important roles in photosynthesis, primarily providing light-harvesting and photoprotective energy dissipation functions within pigment-protein complexes. The carbon-carbon double bond (C=C) conjugation length of carotenoids (N), generally between 9 and 15, determines the carotenoid-to-(bacterio)chlorophyll [(B)Chl] energy transfer efficiency. Here we purified and spectroscopically characterized light-harvesting complex 2 (LH2) from Rhodobacter sphaeroides containing the N = 7 carotenoid zeta (ζ)-carotene, not previously incorporated within a natural antenna complex. Transient absorption and time-resolved fluorescence show that, relative to the lifetime of the S1 state of ζ-carotene in solvent, the lifetime decreases ∼250-fold when ζ-carotene is incorporated within LH2, due to transfer of excitation energy to the B800 and B850 BChls a. These measurements show that energy transfer proceeds with an efficiency of ∼100%, primarily via the S1 → Qx route because the S1 → S0 fluorescence emission of ζ-carotene overlaps almost perfectly with the Qx absorption band of the BChls. However, transient absorption measurements performed on microsecond timescales reveal that, unlike the native N ≥ 9 carotenoids normally utilized in light-harvesting complexes, ζ-carotene does not quench excited triplet states of BChl a, likely due to elevation of the ζ-carotene triplet energy state above that of BChl a. These findings provide insights into the coevolution of photosynthetic pigments and pigment-protein complexes. We propose that the N ≥ 9 carotenoids found in light-harvesting antenna complexes represent a vital compromise that retains an acceptable level of energy transfer from carotenoids to (B)Chls while allowing acquisition of a new, essential function, namely, photoprotective quenching of harmful (B)Chl triplets.

Original language: English
Pages (from-to): 6502-6508
Number of pages: 7
Journal: Proceedings of the National Academy of Sciences of the United States of America
Volume: 117
Issue number: 12
DOI: https://doi.org/10.1073/pnas.1920923117
Publication status: Published - 24 Mar 2020

## Keywords

• Carotenoids
• Light-harvesting
• Photoprotection
• Photosynthesis
• Ultrafast spectroscopy
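As a gloss (mine, not stated in the abstract): assuming the standard donor-quenching relation for transfer efficiency, the quoted ~250-fold lifetime reduction corresponds to

```latex
\eta \;=\; 1 - \frac{\tau_{\mathrm{LH2}}}{\tau_{\mathrm{solvent}}}
\;\approx\; 1 - \frac{1}{250} \;\approx\; 0.996,
```

consistent with the reported ∼100% carotenoid-to-BChl energy transfer efficiency.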
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8010364770889282, "perplexity": 10679.701867634654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337531.3/warc/CC-MAIN-20221005011205-20221005041205-00397.warc.gz"}
https://socratic.org/questions/how-do-you-solve-the-equation-2-7k-1-14k-3
How do you solve the equation 2/7k - 1/14k = -3?

Mar 29, 2018

Rewrite the fractional coefficients over the lowest common denominator and combine to find $k = - 14$.

Explanation:

The first thing we want to do is make sure both of $k$'s coefficients have the same denominator. Once this is done, we can add the coefficients together:

$\frac{2}{7} k - \frac{1}{14} k = k \left(\frac{2}{7} - \frac{1}{14}\right) = - 3$

$k \left(\frac{2}{7} \cdot \frac{2}{2} - \frac{1}{14}\right) = k \left(\frac{4}{14} - \frac{1}{14}\right) = - 3$

$\frac{3}{14} k = - 3$

Finally, we'll divide through by $k$'s coefficient to find our solution. Remember, when you divide by a fraction, you can simply multiply by its inverse to get the same effect!

$\cancel{\frac{3}{14} \times \frac{14}{3}} k = - \cancel{3} \times \frac{14}{\cancel{3}}$

$\Rightarrow k = - 14$
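A one-line check of the result with SymPy (mine, not part of the original answer):

```python
import sympy as sp

k = sp.symbols("k")
# (2/7)k - (1/14)k = -3  <=>  (2/7)k - (1/14)k + 3 = 0
print(sp.solve(sp.Rational(2, 7) * k - sp.Rational(1, 14) * k + 3, k))  # [-14]
```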
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8893125057220459, "perplexity": 414.6433495677802}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00651.warc.gz"}
http://eprints.imtlucca.it/3765/
# EGAC: a genetic algorithm to compare chemical reaction networks

Tognazzi, Stefano and Tribastone, Mirco and Tschaikowski, Max and Vandin, Andrea
EGAC: a genetic algorithm to compare chemical reaction networks. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '17). ACM, pp. 833-840. ISBN 978-1-4503-4920-8 (2017)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8122459053993225, "perplexity": 8328.529329307343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211000.35/warc/CC-MAIN-20180816132758-20180816152758-00522.warc.gz"}
http://play-online-pokies-australia.com/library/category/algebraic-geometry
# Download E-books Analytic Theory of Abelian Varieties (London Mathematical Society Lecture Note Series) PDF

By H. P. F. Swinnerton-Dyer

The study of abelian manifolds forms a natural generalization of the theory of elliptic functions, that is, of doubly periodic functions of one complex variable. When an abelian manifold is embedded in a projective space it is termed an abelian variety in an algebraic geometrical sense. This introduction presupposes little more than a basic course in complex variables. The notes contain all the material on abelian manifolds needed for application to geometry and number theory, although they do not include an exposition of either application. Some geometrical results are included however.

By Larry C. Grove

"Classical groups", so named by Hermann Weyl, are groups of matrices or quotients of matrix groups by small normal subgroups. Thus the story begins, as Weyl suggested, with "Her All-embracing Majesty", the general linear group $GL_n(V)$ of all invertible linear transformations of a vector space $V$ over a field $F$. All further groups discussed are either subgroups of $GL_n(V)$ or closely related quotient groups. Most of the classical groups consist of invertible linear transformations that respect a bilinear form having some geometric significance, e.g., a quadratic form, a symplectic form, etc. Accordingly, the author develops the required geometric notions, albeit from an algebraic point of view, since the end results should apply to vector spaces over more-or-less arbitrary fields, finite or infinite. The classical groups have proved to be important in a wide variety of venues, ranging from physics to geometry and far beyond. In recent years, they have played a prominent role in the classification of the finite simple groups. This text provides a single source for the basic facts about the classical groups and also includes the required geometrical background information from first principles. It is intended for graduate students who have completed standard courses in linear algebra and abstract algebra. The author, L. C. Grove, is a well-known expert who has published extensively in the subject area.

# Download E-books Complex Projective Geometry: Selected Papers (London Mathematical Society Lecture Note Series) PDF

Algebraic geometers have renewed their interest in the interplay between algebraic vector bundles and projective embeddings. New methods have been developed for questions such as: what is the geometric content of syzygies and of bundles derived from them? How can they be used for giving good compactifications of natural families? Which differential techniques are needed for the study of families of projective varieties? These questions are addressed in this cohesive volume, where results, work in progress, conjectures, and modern accounts of classical ideas are presented.

# Download E-books Stable Homotopy over the Steenrod Algebra (Memoirs of the American Mathematical Society) PDF

By John H. Palmieri

We apply the tools of stable homotopy theory to the study of modules over the mod $p$ Steenrod algebra $A^{*}$. More precisely, let $A$ be the dual of $A^{*}$; then we study the category $\mathsf{stable}(A)$ of unbounded cochain complexes of injective comodules over $A$, in which the morphisms are cochain homotopy classes of maps. This category is triangulated. Indeed, it is a stable homotopy category, which allows us to use Brown representability, Bousfield localization, Brown-Comenetz duality, and other homotopy-theoretic tools to study it. One focus of attention is the analogue of the stable homotopy groups of spheres, which in this setting is the cohomology of $A$, $\mathrm{Ext}_A^{**}(\mathbf{F}_p,\mathbf{F}_p)$. We also have nilpotence theorems, periodicity theorems, a convergent chromatic tower, and several other results.

# Download E-books Rigid Analytic Geometry and Its Applications (Progress in Mathematics) PDF

By Jean Fresnel

Rigid (analytic) spaces were invented to describe degenerations, reductions, and moduli of algebraic curves and abelian varieties. This work, a revised and greatly expanded new English edition of an earlier French text by the same authors, presents important new developments and applications of the theory of rigid analytic spaces to abelian varieties, "points of rigid spaces," étale cohomology, Drinfeld modular curves, and Monsky-Washnitzer cohomology. The exposition is concise, self-contained, rich in examples and exercises, and will serve as an excellent graduate-level text for the classroom or for self-study.

# Download E-books Homotopy Theoretic Methods in Group Cohomology (Advanced Courses in Mathematics - CRM Barcelona) PDF

This book consists essentially of notes which were written for an advanced course on Classifying Spaces and Cohomology of Groups. The course took place at the Centre de Recerca Matemàtica (CRM) in Bellaterra from May 27 to June 2, 1998 and was part of an emphasis semester on Algebraic Topology. It consisted of two parallel series of 6 lectures of 90 minutes each and was intended as an introduction to new homotopy theoretic methods in group cohomology. The first part of the book is concerned with methods of decomposing the classifying space of a finite group into pieces made of classifying spaces of appropriate subgroups. Such decompositions have been used with great success in the last 10-15 years in the homotopy theory of classifying spaces of compact Lie groups and p-compact groups in the sense of Dwyer and Wilkerson. For simplicity the emphasis here is on finite groups and on homological properties of various decompositions known as the centralizer resp. normalizer resp. subgroup decomposition. A unified treatment of the various decompositions is given and the relations between them are explored. This is preceded by a detailed discussion of basic notions such as classifying spaces, simplicial complexes and homotopy colimits.
https://www.semanticscholar.org/paper/Submodular-Secretary-Problem-with-Shortlists-Agrawal-Shadravan/e795fbb0b7d89eca94d6710c77da10417ca62f6a
# Submodular Secretary Problem with Shortlists @article{Agrawal2019SubmodularSP, title={Submodular Secretary Problem with Shortlists}, journal={ArXiv}, year={2019}, volume={abs/1809.05082} } • Published 13 September 2018 • Computer Science • ArXiv In submodular $k$-secretary problem, the goal is to select $k$ items in a randomly ordered input so as to maximize the expected value of a given monotone submodular function on the set of selected items. In this paper, we introduce a relaxation of this problem, which we refer to as submodular $k$-secretary problem with shortlists. In the proposed problem setting, the algorithm is allowed to choose more than $k$ items as part of a shortlist. Then, after seeing the entire input, the algorithm can… ## Tables from this paper ### Submodular Matroid Secretary Problem with Shortlists An algorithm is designed that achieves a $\frac{1}{2}(1-1/e^2-\epsilon-O(1/k)))$ competitive ratio for any constant $\epsil on>0$, using a shortlist of size $O(k)$. ### Improved Submodular Secretary Problem with Shortlists A near optimal approximation algorithm for random-order streaming of monotone submodular functions under cardinality constraints, using memory $O(k poly(1/\epsilon), which exponentially improves the running time and memory of \cite{us} in terms of$1/£1 and asymptotically approaches the best known offline guarantee $\frac{1}{p+1}$. ### Streaming Submodular Maximization Under Matroid Constraints • Computer Science, Mathematics ICALP • 2022 This paper's multi-pass streaming algorithm is tight in that any algorithm with a better guarantee than 1 / 2 must make several passes through the stream and any algorithm that beats the authors' guarantee of 1 − 1 /e must make linearly many passes (as well as an exponential number of value oracle queries). ### Nearly Linear Time Algorithms and Lower Bound for Submodular Maximization • Computer Science • 2018 A linear query complexity algorithm is presented that achieves the approximation ratio of $(1-1/e-\varepsilon)$ for cardinality constraint and monotone objective, which is the first deterministic algorithm to achieve the almost optimal approximation using linear number of function evaluations. ### Submodular Streaming in All its Glory: Tight Approximation, Minimum Memory and Low Adaptive Complexity • Computer Science ICML • 2019 This paper proposes Sieve-Streaming++, which requires just one pass over the data, keeps only $O(k)$ elements and achieves the tight $(1/2)$-approximation guarantee, and demonstrates the efficiency of the algorithms on real-world data summarization tasks for multi-source streams of tweets and of YouTube videos. ### "Bring Your Own Greedy"+Max: Near-Optimal 1/2-Approximations for Submodular Knapsack • Computer Science AISTATS • 2020 A new rigorous algorithmic framework for a standard formulation of this problem as a submodular maximization subject to a linear (knapsack) constraint is proposed, based on augmenting all partial Greedy solutions with the best additional item. ### Adversarially Robust Submodular Maximization under Knapsack Constraints • Computer Science KDD • 2019 Experimental results show that the first adversarially robust algorithm for monotone submodular maximization under single and multiple knapsack constraints with scalable implementations in distributed and streaming settings shows strong performance even compared to offline algorithms that are given the set of removals in advance. 
### Tight Trade-offs for the Maximum k-Coverage Problem in the General Streaming Model

• Computer Science PODS • 2019

A single-pass algorithm is designed that reports an $\alpha$-approximate solution in $\tilde{O}(m/\alpha^2 + k)$ space; it heavily exploits data stream sketching techniques, which could lead to further connections between vector sketching methods and streaming algorithms for combinatorial optimization tasks.

### Maximum Coverage in the Data Stream Model: Parameterized and Generalized

• Computer Science ICDT • 2021

The goal is to design single-pass algorithms in the data stream model that use space sublinear in the input size; an algorithm is also obtained for the parameterized version of the streaming SetCover problem.

### Cardinality constrained submodular maximization for random streams

• Computer Science NeurIPS • 2021

This work simplifies both the algorithm and the analysis, obtaining an exponential improvement in the $\varepsilon$-dependence, and gives a simple $(1/e-\varepsilon)$-approximation for non-monotone functions in $O(k/\varepsilon)$ memory.

## References (showing 1-10 of 35)

### Submodular Secretary Problems: Cardinality, Matching, and Linear Constraints

• Computer Science, Mathematics APPROX-RANDOM • 2017

This work studies various generalizations of the secretary problem with submodular objective functions and improves over previously best known competitive ratios, using a generalization of the algorithm for the classic secretary problem.

### The submodular secretary problem under a cardinality constraint and with limited resources

• Computer Science ArXiv • 2017

This work proposes a $0.1933$-competitive anytime algorithm, which performs only a single evaluation of the marginal contribution for each observed item, and requires a memory of order only $k$ (up to logarithmic factors), where $k$ is the cardinality constraint.

### Submodular secretary problem and extensions

• Mathematics, Computer Science TALG • 2013

This article considers a very general setting of the classic secretary problem, in which the goal is to select $k$ secretaries so as to maximize the expectation of a submodular function which defines the efficiency of the selected secretarial group based on their overlapping skills, and presents the first constant-competitive algorithm for this case.

### Constrained Non-monotone Submodular Maximization: Offline and Secretary Algorithms

• Mathematics, Computer Science WINE • 2010

These ideas are extended to give simple greedy-based constant factor algorithms for non-monotone submodular maximization subject to a knapsack constraint, and for the (online) secretary setting subject to a uniform matroid or a partition matroid constraint.

### Improved algorithms and analysis for secretary problems and generalizations

• Computer Science, Mathematics Proceedings of IEEE 36th Annual Foundations of Computer Science • 1995

The methods are very intuitive and apply to some generalizations of the classical secretary problem; a lower bound is also derived on the trade-off between the probability of selecting the best object and its expected rank.

### Submodular maximization meets streaming: matchings, matroids, and more

• Mathematics, Computer Science Math. Program. • 2015

A general pattern for algorithms that maximize linear weight functions over "independent sets" is identified, and it is proved that such algorithms can be adapted to maximize a submodular function.
### The Submodular Secretary Problem Goes Linear

• Computer Science, Mathematics 2015 IEEE 56th Annual Symposium on Foundations of Computer Science • 2015

It is shown that any $O(1)$-competitive algorithm for MSP, even restricted to a particular matroid class, can be transformed in a black-box way to an $O(1)$-competitive algorithm for SMSP over the same matroid class, which implies that SMSP is not harder than MSP.

### Optimal approximation for the submodular welfare problem in the value oracle model

A randomized continuous greedy algorithm is developed which achieves a $(1-1/e)$-approximation for the Submodular Welfare Problem in the value oracle model and is shown to have potential for wider applicability, as illustrated on the Generalized Assignment Problem and the AdWords Assignment Problem.

### The adwords problem: online keyword matching with budgeted bidders under random permutations

• Economics, Education EC '09 • 2009

The problem of a search engine trying to assign a sequence of search keywords to a set of competing bidders, each with a daily spending limit, is considered, and the current literature on this problem is extended by considering the setting where the keywords arrive in a random order.

### Online submodular maximization: beating 1/2 made simple

• Computer Science IPCO • 2019

An upper bound of $0.574$ is proved on the competitive ratio of the greedy algorithm, ruling out the possibility that the competitiveness of this natural algorithm matches the optimal offline approximation ratio of $1-1/e$.
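For orientation (an illustration, not code from any of the papers above): the offline benchmark that these competitive ratios are measured against is the classic greedy rule for monotone submodular maximization under a cardinality constraint, which attains a $(1-1/e)$-approximation. A minimal Python sketch, assuming value-oracle access to the set function f:

```python
def greedy_submodular(f, items, k):
    """Offline greedy for monotone submodular f under the constraint |S| <= k.

    f is a value oracle: it maps a frozenset of items to a float.
    This rule attains the classic (1 - 1/e) offline approximation guarantee.
    """
    S = frozenset()
    for _ in range(k):
        candidates = [e for e in items if e not in S]
        if not candidates:
            break
        # Pick the item with the largest marginal gain f(S + e) - f(S).
        best = max(candidates, key=lambda e: f(S | {e}) - f(S))
        S = S | {best}
    return S

# Toy example: f is a (monotone, submodular) coverage function.
sets = {1: {1, 2}, 2: {2, 3}, 3: {4}}
f = lambda S: len(set().union(*(sets[e] for e in S)))
print(greedy_submodular(f, sets, k=2))  # e.g. frozenset({1, 2})
```

The secretary variants above differ from this sketch in that items arrive in random order and decisions are made online, possibly with a shortlist buffer; the greedy rule is only the offline yardstick.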
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9331193566322327, "perplexity": 1667.015546366122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710808.72/warc/CC-MAIN-20221201085558-20221201115558-00129.warc.gz"}
https://www.snapsolve.com/solutions/Calculatethe-potential-at-a-point-P-due-to-a-charge-of-4x-10-7C-located-9-cm-awa-1672370003969025
## Question (Physics, Class 12)

Calculate the potential at a point $$P$$ due to a charge of $$4\times 10^{-7}C$$ located $$9\text{ cm}$$ away. Hence obtain the work done in bringing a charge of $$2\times 10^{-9}C$$ from infinity to the point $$P$$. Does the answer depend on the path along which the charge is brought?

## Solution

Given: $$Q=4\times 10^{-7}C$$, $$r=0.09\;m$$.

Potential at $$P$$ due to the source charge $$Q$$:

$$V_P=\left(\frac 1{4\pi \varepsilon _0}\right)\left(\frac Q r\right)=\frac{\left(9\times 10^9\right)\left(4\times 10^{-7}\right)}{0.09}=4\times 10^4\;V$$

Test charge: $$q=2\times 10^{-9}C$$. The potential at a point is the work done in bringing a unit charge from infinity to that point, so the work done on $$q$$ is

$$W_{{\infty}P}=qV_P=2\times 10^{-9}\times 4\times 10^4=8\times 10^{-5}\;J$$

No, this work done does not depend on the path followed. The reason is that the work is done against the electrostatic force, which is a conservative force.
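A quick numerical check of the arithmetic above (a sketch; it uses the same rounded Coulomb constant $$9\times 10^9\;N\,m^2/C^2$$ as the solution):

```python
k = 9e9      # Coulomb constant (rounded), N*m^2/C^2
Q = 4e-7     # source charge, C
r = 0.09     # distance from the charge to P, m
q = 2e-9     # test charge, C

V = k * Q / r   # potential at P
W = q * V       # work done bringing q from infinity to P
print(V, W)     # 40000.0 V and 8e-05 J
```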
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9925765991210938, "perplexity": 3420.9229598933543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104655865.86/warc/CC-MAIN-20220705235755-20220706025755-00680.warc.gz"}
https://en.m.wiktionary.org/wiki/Lipschitz
# Lipschitz

## English

### Etymology

Named after Rudolf Lipschitz.

### Adjective

1. (mathematics) (Of a real-valued real function $f$) Such that there exists a constant $K$ such that whenever $x_1$ and $x_2$ are in the domain of $f$, $|f(x_1)-f(x_2)|\leq K|x_1-x_2|$.
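As an illustration of the definition (not part of the entry): the smallest such $K$ can be estimated numerically by maximizing the difference quotient over sample pairs. A sketch for $f(x)=\sin(x)$, whose true Lipschitz constant is 1:

```python
import numpy as np

f = np.sin
x = np.linspace(-10, 10, 801)
i, j = np.triu_indices(len(x), k=1)   # all distinct index pairs (i, j)
K_est = np.max(np.abs(f(x[i]) - f(x[j])) / np.abs(x[i] - x[j]))
print(K_est)  # close to 1, the Lipschitz constant of sin
```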
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9868919253349304, "perplexity": 398.3360119174283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464053252010.41/warc/CC-MAIN-20160524012732-00193-ip-10-185-217-139.ec2.internal.warc.gz"}
http://hangar19.com.br/7z04w/97399a-python-latex-html
## PyLaTeX

PyLaTeX is a Python library for creating and compiling LaTeX files and LaTeX snippets. The goal of this library is to be an easy, but extensible, interface between Python and LaTeX. PyLaTeX has two quite different usages: generating full PDFs and generating LaTeX snippets. Snippets are useful when some text still needs to be written by hand, but some parts can be generated automatically, for instance writing a table from data in a database. PyLaTeX works on Python 2.7 and 3.3+ and is simply installed using pip. Some of the features require other libraries as well; the dependencies for these extra features can be installed separately, which is mostly the case when converting a datatype of that library to LaTeX (for example, LaTeX matrices require NumPy). The example helper fill_document(doc) adds a section, a subsection and some text to the document, which generates the PDF below. This library is being developed in and for Python 3, although the current version also works in Python 2.7. If you find a bug for Python 2 and it is fixable without ugly hacks, feel free to send a pull request; pull requests that fix those issues are always welcome, though for future versions no such promise will be made, and Python 3 features that are useful but incompatible with Python 2 will be used. (For comparison, beginning with version 6.0, IPython stopped supporting compatibility with Python versions lower than 3.3, including all versions of Python 2.7.) Issues reported on the mailing list have been fixed for Windows, and it seems that compiling to PDF is currently working there.

## LaTeX text rendering in matplotlib

Matplotlib has the option to use LaTeX to manage all text layout. The LaTeX option is activated by setting text.usetex: True in your rc settings, and is available with the pdf and svg backends, as well as the *Agg and PS backends. The results can be striking, especially when you take care to use the same fonts in your figures as in the main document. Text handling with matplotlib's LaTeX support is slower than matplotlib's very capable mathtext, but is more flexible, since different LaTeX packages (font packages, math packages, etc.) can be used. Make sure that your LaTeX syntax is valid and that you are using raw strings if necessary to avoid unintended escape sequences. Also make sure LaTeX, dvipng and ghostscript (GPL Ghostscript 9.0 or later is required) are each working and on your PATH; the executables for these external dependencies must all be located there. Note that there are 72.27 pt in an inch (in [La]TeX), so in the original example this means the figure width should be 3.40390 inches; the height depends on the content of the figure, but the golden mean may be used to make a pleasing figure.

If the fonts are not specified, the Computer Modern fonts are used by default. Times and Palatino each have their own accompanying math fonts, while the other Adobe serif fonts make use of the Computer Modern math fonts (see the PSNFSS documentation for more details); the first valid font in each family is the one that will be loaded. To use LaTeX and select Helvetica as the default font without editing matplotlibrc, use the rc API; the standard example is tex_demo.py:

    from matplotlib import rc
    ## for Palatino and other serif fonts use:
    # rc('font', **{'family': 'serif', 'serif': ['Palatino']})

Note that display math mode ($$e=mc^2$$) is not supported, but adding the command \displaystyle, as in tex_demo.py, will produce the same results. It is also possible to use unicode strings with the LaTeX text manager. When including figures in a LaTeX document, the default behavior of matplotlib is to distill the PostScript output. Because of this conversion step the results may be unacceptable to some users, since the text is coarsely rasterized and converted to bitmaps, which are not scalable like standard PostScript, and the text is not searchable. One workaround is to set ps.distiller.res to a higher value (perhaps 6000) in your rc settings, which will produce larger files but may look better and scale reasonably. A better workaround, which requires Poppler or Xpdf, can be activated by changing the ps.usedistiller rc setting to xpdf; this produces PostScript that scales properly, can be edited in Adobe Illustrator, and has searchable text in PDF documents. Some progress has been made so that matplotlib uses the dvi files directly for text layout. Expect to have to track down platform-specific bugs with every update.

## latex2mathml

latex2mathml is a pure Python library for LaTeX to MathML conversion.

Installation: pip install latex2mathml

Usage:

    import latex2mathml.converter
    latex_input = "<your_latex_string>"
    mathml_output = latex2mathml.converter.convert(latex_input)

A command-line converter is also provided. If it is useful to you, show your support by buying the author a coffee.

## Writing and including LaTeX

LaTeX provides commands for including external files; there are two very similar commands for telling LaTeX to embed an external file. It is also possible to create files with information obtained at typesetting time, so that it can be reprocessed by other programs. Enclose LaTeX code in dollar signs $ ... $ to display math inline, and in double dollar signs $$ ... $$ to display expressions in a centered paragraph; for example, $f'(a)=\lim_{x\to a}\frac{f(x)-f(a)}{x-a}$ renders as a limit. See the LaTeX WikiBook for more information (especially the section on mathematics). Verbatim-like text can also be used in a paragraph by means of the \verb command; text enclosed inside a verbatim* environment is printed directly and all LaTeX commands are ignored (in this case white spaces are emphasized with a special symbol). The listings package supports highlighting of all the most common languages and is highly customizable; if you wish to include pseudocode or algorithms, you may find Algorithms and Pseudocode useful as well. For long tables, use a longtable environment instead of tabular, which requires adding \usepackage{longtable} to your LaTeX preamble.

## Other notes

The Python language has a substantial body of documentation, much of it contributed by various authors. The markup used for the Python documentation is reStructuredText, developed by the Docutils project, amended by custom directives and using a toolset named Sphinx to post-process the HTML output. If you want to use reStructuredText (reST) as your source documents and to translate reST source files into Python LaTeX files before processing with the Python LaTeX system, then you will also need Docutils. For database access from Python, a collection of modules named "DB-API" has been developed; the purpose of this library is to create a single interface for database access, independent of the type of database system used. General comments and questions regarding that documentation should be sent by email to docs@python.org, and specific errors in content or presentation should be reported in the Python Bug Tracker at SourceForge. Text can also be converted into multiple formats such as html, docx and pdf; for instance, convert text to docx with the module python-docx, or make tables in Python with Plotly (import plotly.graph_objects as go). The Python beginners' tutorial is a document intended as an introduction to programming in Python, aimed at readers with no programming experience, and "Distributing Python Modules" describes the Python distribution utilities ("Distutils") from the module developer's point of view, describing how to use Distutils to make Python modules and extensions easily available to a wide audience with only a small addition to the build/release/install mechanisms.

## Solving a linear equation with more than one variable

In this article, we will discuss how to solve a linear equation having more than one variable. In Python, we use the Eq() method to create an equation from an expression. For example, suppose we have two variables in the equations. The equations are as follows: x + y = 1 and x - y = 1. When we solve these equations we get x = 1, y = 0 as the solution.
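A minimal sketch of that worked example with SymPy (the snippet itself was not part of the original page; it assumes the sympy package is installed):

```python
from sympy import Eq, solve, symbols

x, y = symbols("x y")
eq1 = Eq(x + y, 1)   # x + y = 1
eq2 = Eq(x - y, 1)   # x - y = 1
print(solve((eq1, eq2), (x, y)))  # {x: 1, y: 0}
```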
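Returning to the PyLaTeX notes above, a minimal document in the spirit of the fill_document example looks roughly like this (a sketch; class and method names follow PyLaTeX's own quickstart, but treat the details as illustrative):

```python
from pylatex import Document, Section, Subsection

def fill_document(doc):
    """Add a section, a subsection and some text to the document."""
    with doc.create(Section("A section")):
        doc.append("Some regular text.")
        with doc.create(Subsection("A subsection")):
            doc.append("Text inside a subsection.")

doc = Document("basic")
fill_document(doc)
doc.generate_pdf(clean_tex=False)  # requires a working LaTeX installation
```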
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6844695210456848, "perplexity": 8209.169400659035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585424.97/warc/CC-MAIN-20211021133500-20211021163500-00661.warc.gz"}
https://www.physicsforums.com/threads/atomic-spectra-and-atomic-structure-by-gerhard-herzberg.96413/
# Atomic spectra and atomic structure by Gerhard Herzberg

1. Oct 24, 2005

### Galaxy33

Hi! to everyone on the forum. I am new and did not really know where I should have posted this thread; it's not homework, it's just a question I have. A friend of mine asked me a question about a book he read (Atomic Spectra and Atomic Structure by Gerhard Herzberg). Gerhard Herzberg said that, in reality, the electron revolves not about the nucleus itself, but about a common center of gravity; the nucleus also revolves about that center. The question is, in 2005 is this view by Gerhard Herzberg still true? I did not know the answer so I posted it here.....

Galaxy......

2. Oct 24, 2005

### Conehead

Great question. Some physicists, during the advent of quantum mechanics, spent some time double checking, but Herzberg is still correct. Though the atomic nucleus greatly outweighs the electrons, the electrons still do some pulling -- enough anyway to yank the whole atom (electrons and all) out of whack, even if just the smallest bit.

3. Oct 24, 2005

### nbo10

If you view the system classically, then yes, the electron does revolve around the CM. If you take the quantum view, the correct representation, then the question is ill-formed and cannot be answered.

4. Oct 24, 2005

### ZapperZ

Staff Emeritus

And in what kind of observation/experiment/phenomenon does this "smallest bit" effect manifest itself?

Zz.

5. Oct 24, 2005

I'm assuming you're talking about the HUP. Fine, this is a theoretical question and I'm going to keep assuming (unless asked otherwise) that Galaxy doesn't require proof of speed and position of the particle at the same time. And, I might add, just because we can't nail down a mathematical snapshot of said particle doesn't mean that we don't know what it's doing. Feynman spends some time on this topic.

6. Oct 24, 2005

### nbo10

It has nothing to do with the uncertainty principle. The electron, described by its wavefunction, has a probability distribution that doesn't "revolve" around anything.

7. Oct 24, 2005

### ZapperZ

Staff Emeritus

What "mathematical snapshot" did you have in mind? The mathematical description of an atom has no such trajectory. One only needs to look at the solution of the hydrogen atom to know this. And these were not derived out of the HUP either.

Zz.

8. Oct 24, 2005

Where is the electron distributed? Does it not have a position? Based on the responses I'm way off on this, somehow. If that's the case, give the proper answer. I'd hate to argue bad info any longer than I have to.

Last edited: Oct 24, 2005

9. Oct 24, 2005

### ZapperZ

Staff Emeritus

This goes to the foundation of QM and why the Schrodinger Cat thought experiment came into being! It does NOT have a definite position till it is measured. The s-orbital is a spherical distribution of ONE electron. You get this by solving the orbital part of the Schrodinger equation. The electron IS distributed all over the place simultaneously. This is what makes QM highly non-intuitive for anyone who skips the mathematical formalism. How do you know such a description is valid? Besides the fact that QM gave unbelievably accurate energy spectra of many atoms and molecules (something classical mechanics could not), we also have evidence from how bonding forms, especially in the formation of bonding and antibonding states. Such a phenomenon has no intuitive counterpart in classical mechanics.

Zz.

10. Oct 24, 2005

So, I'm assuming that a thrown photon's trajectory is also a matter of probability?

11.
Oct 24, 2005

### Gokul43201

Staff Emeritus

To say something about the OP's question: Yes, you can decompose the Schrodinger Equation for the H-atom into two parts - one dealing with the center of mass motion and the other dealing with the relative motion - using the standard change of variables:

$$\mu = \frac{m_1 m_2}{m_1+m_2}~;~~M = m_1 + m_2~;~~r = |r_1 - r_2|~;~~R = \frac{m_1 r_1 + m_2 r_2}{m_1+m_2}$$

This makes - in the case of the H-atom - a very tiny change to the Hamiltonian.

Last edited: Oct 24, 2005

12. Oct 24, 2005

### ZapperZ

Staff Emeritus

I wouldn't know, since I have no idea what a "thrown photon" is.

Zz.

13. Oct 24, 2005

When an electron drops to a lower energy level it releases a photon. Did I do something to irritate you?

14. Oct 24, 2005

### ZapperZ

Staff Emeritus

No. I'm responding to what you have said. Is it wrong for me to get clarification of what you are saying or claiming? I have never seen the phrase "thrown photon" in all my years in this field. I also haven't seen any experimental evidence to back your earlier claim of ".... electrons still do some pulling--enough anyway to yank the whole atom (electrons and all) out of whack, even if just the smallest bit.."

So a photon emitted by an atomic transition is what you called a "thrown photon"? I'm not sure why this would be relevant for this thread. Once photons are emitted, one can very much invoke classical optics.

Zz.

15. Oct 24, 2005

### nbo10

Don't take things personally. Life is in the details. When you ask a vague question, you're going to get a vague answer. When you ask a general, non-specific, or ill-worded question, you'll find the answer you get isn't going to be helpful. Photons are very different from electrons.

16. Oct 24, 2005

Oh, I wasn't taking anything personally. Just making sure someone else wasn't. I have very colloquial conversations with working physicists every day, with far better results than just now. But I have a way of not making sense sometimes. Thanks for the information. Ciao.

17. Oct 24, 2005

### Galaxy33

Thanks to everyone who answered my question, but can I assume that the answer was yes, as Conehead stated ("some physicists, during the advent of quantum mechanics, spent some time double checking, but Herzberg is still correct")?

Titana.............

Last edited: Oct 25, 2005
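For a concrete sense of how small the correction in post #11 is, here is a quick numerical sketch (masses are rounded standard values, not figures from the thread):

```python
m_e = 9.109e-31   # electron mass, kg (rounded)
m_p = 1.673e-27   # proton mass, kg (rounded)

mu = m_e * m_p / (m_e + m_p)   # reduced mass governing the relative motion
print(mu / m_e)                # ~0.99946, i.e. a ~0.05% shift from m_e
```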
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7959004640579224, "perplexity": 1245.722008316168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867493.99/warc/CC-MAIN-20180625053151-20180625073151-00370.warc.gz"}
https://www.jimuttley.co.uk/post/young_drivers_risk_night/young_driver_night_risk/
# Are new (young) drivers at greater risk at night than other drivers?

The Department for Transport recently announced that they were considering introducing graduated driver licences, which potentially could see new drivers banned from driving at night. There is strong evidence that new drivers are at greater risk of involvement in a collision than more experienced drivers (e.g. this review paper by Williams), hence consideration of graduated licences. One proposed approach to these graduated licences is to ban new drivers from driving at night. Presumably the logic behind such a proposal is that new drivers are at heightened risk of involvement in a collision at night, particularly compared to other, more experienced drivers.

## STATS19 data

In this post I provide some quick-and-dirty analysis about night-time risk for new drivers. Well, I say new drivers, but I actually look at young drivers, under the assumption that the vast majority of new drivers are also young drivers. I've used the stats19 R package created by Robin Lovelace and others to access road traffic collision (RTC) data from, er, STATS19 (the UK's national database of police-recorded collisions resulting in an injury). I've looked at 3 years of data between 2015 and 2017. All code for this analysis is available here.

Comparing the number of RTCs by driver age group against the number of licence holders in those age groups appears to confirm the increased likelihood younger drivers will be involved in an RTC (see Figure 1). It should be noted though that comparing RTC frequencies against the number of licence holders does not account for possible differences between age groups in the amount of driving they do. It is possible, for example, that younger people drive more than older people, increasing their exposure to risk of an RTC.

## Temporal distribution of RTCs

To look at driving risk by age I have split the age of drivers into two categories - younger drivers (aged under 25), who also represent new drivers, and older drivers (aged 25+). Figure 2 shows the temporal distribution of RTCs for young and older drivers across a 24 hour day. The RTC hourly frequencies have been calculated as a proportion of the maximum hourly frequency, which for both age groups is 17:00-17:59. The plot shows an RTC involving a young driver is more likely to happen later at night or early morning, compared with those involving an older driver. On the face of it, this seems to suggest young drivers are at greater risk when driving in the late hours compared with older drivers.

## Daylight vs Darkness

The proposal apparently being considered by the Government refers to driving at night, rather than driving after a certain time of day. I therefore considered the light conditions of the RTC and whether it happened in daylight or after-dark, as reported in the STATS19 record. Table 1 shows that 32% of RTCs involving a young driver happened when it was dark, compared with 24% of RTCs involving an older driver. This appears to confirm an increased likelihood of involvement in an RTC after-dark for young drivers, relative to older drivers. However, it again does not account for exposure levels. If young drivers do drive more frequently and for greater lengths of time when it is dark compared with older drivers, this would partly explain a greater proportion of RTCs occurring in dark conditions.
Without data about driving rates at different times of day and by different age groups, it is hard to say whether the larger proportion of after-dark RTCs for young drivers does indeed reflect a greater risk at night. However, there is some evidence that age influences the times at which people choose to drive. For example, a number of studies including this one by Ball et al (1998) suggest older drivers self-regulate their driving behaviour and restrict the amount they drive at night. Such behaviour could reduce exposure to RTCs after-dark for older drivers.

Table 1. Proportion of RTCs in daylight and dark conditions by age group of driver, for all RTCs between 2015 and 2017.

| Light conditions during RTC | Young drivers (< 25 years) | Older drivers (25+ years) |
| --- | --- | --- |
| Daylight | 68% | 76% |
| Darkness | 32% | 24% |

## Isolating the effect of darkness

One approach we have used in past work to try and control for exposure effects whilst examining the effects of light condition (e.g. this paper looking at the risk of an RTC after-dark at pedestrian crossings) is by counting RTCs in a specific hour of the day, e.g. 18:00-18:59. This hour is chosen so that for part of the year it is in daylight, and for the rest of the year it is in darkness. Counting only RTCs in a specific hour attempts to control for other factors related to the time of day that might influence risk of an RTC, therefore isolating the effect of the ambient light condition. Various conditions that influence RTC risk are likely to change with time of day, not just the light condition: driving behaviours, traffic conditions and volumes, and the types of drivers on the roads.

We can compare the odds of an RTC occurring in the Case hour when it is in darkness versus daylight for young drivers and compare this against the same odds for older drivers, to produce an odds ratio. This effectively shows whether the risk of an RTC after-dark for young drivers is greater than for older drivers. I've calculated this odds ratio using Equation $\eqref{eq:OR}$. In this equation, $YoungRTC_{dark}$ is the count of RTCs involving a young driver in darkness, $YoungRTC_{day}$ is the count of RTCs involving a young driver in daylight, $OldRTC_{dark}$ is the count of RTCs involving an older driver in darkness, and $OldRTC_{day}$ is the count of RTCs involving an older driver in daylight. An odds ratio significantly greater than one would indicate greater risk after-dark for young drivers, compared with older drivers.

$$OddsRatio = \frac{YoungRTC_{dark}/YoungRTC_{day}}{OldRTC_{dark}/OldRTC_{day}} \label{eq:OR} \tag{1}$$

I used 18:00-18:59 as my 'Case' hour. I filtered the RTC records from STATS19 between 2015 and 2017 to only include those that occurred in this hour. I used the time of sunset at the location and on the date of the RTC to define whether it occurred in darkness or daylight. Time of sunset was calculated using the 'suncalc' R package by Thieurmel and Elmarhraoui. RTCs were defined as occurring in darkness if they happened after the time of sunset, and defined as occurring in daylight if they happened before the time of sunset. I recognise this is a fairly crude way to assign the ambient light condition at the time of the RTC - it will not be completely dark immediately after sunset, and equally the light condition may have been deteriorating before the time of sunset. However, for the purposes of this analysis, a crude distinction between daylight and dark conditions was acceptable.
I have defined the ambient light conditions in relation to the time of sunset in this way, rather than using the light condition field that already exists in the STATS19 records, to provide a more objective and comparable definition of the light at the time of the RTC. Table 2 shows the counts of RTCs by their ambient light condition and the age group of the driver. Using these counts and Equation 1, the odds ratio of young drivers crashing after-dark versus daylight, compared with older drivers, is 0.94 (95% confidence interval = 0.91-0.97).

Table 2. RTC counts 2015-2017 during the Case hour of 18:00-18:59, by age group of driver and whether the RTC occurred in daylight or darkness (defined as before or after sunset).

| Light conditions during RTC | Young drivers (< 25 years) | Older drivers (25+ years) |
| --- | --- | --- |
| Daylight | 9,069 | 20,329 |
| Darkness | 7,363 | 17,574 |

This odds ratio of 0.94 suggests that rather than the risk after-dark for young drivers being greater than for older drivers, the after-dark risk is actually slightly lower for young drivers compared with older drivers.

## Confounding between young and new drivers?

In this analysis I have used young drivers as a proxy definition for new drivers, and have defined young drivers as being aged 24 or under. Using this definition, young drivers would have a maximum of 7 years driving experience, if they were aged 24 and had passed their test at 17. Perhaps 7 years of driving experience does not qualify someone as a 'new driver', and a graduated licence scheme is likely to only apply for the first one or two years after someone initially passes their test. However it is still reasonable to assume young drivers are also relatively new drivers - see Figure 3. This shows the number of new licences issued in 2017/18 by age of the new driver, as a cumulative percentage. Two thirds of people issued with a new licence were aged 24 or under, which suggests dichotomising the data into young and older driver groups as I have done is a reasonable way of capturing new, inexperienced drivers.

Ok, but let's assume the odds ratio we calculated above is being confounded by drivers in their early and mid-twenties, who fall into my 'young' age group of drivers but who may have a number of years' experience of driving and are pulling down the after-dark risk of this young age group. To check this, we can change the age threshold for our young and older age groups and calculate the odds ratio again. If we now call only drivers aged under 20 'young', and anyone aged 20 and over 'older', we can be more sure our young drivers are also new drivers, having a maximum of just two years of driving experience. Table 3 shows the RTC counts using this new threshold of young and older drivers. The associated odds ratio is 0.93 (95% confidence interval = 0.89-0.97), virtually the same as when using 25 as the age threshold for the young and older age groups.

Table 3. RTC counts 2015-2017 during the Case hour of 18:00-18:59, by age group of driver (under 20 years or 20+ years) and whether the RTC occurred in daylight or darkness (defined as before or after sunset).

| Light conditions during RTC | Young drivers (< 20 years) | Older drivers (20+ years) |
| --- | --- | --- |
| Daylight | 5,642 | 23,756 |
| Darkness | 4,506 | 20,431 |
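As a check on the arithmetic, both odds ratios and their confidence intervals can be reproduced from the table counts. A Python sketch (assuming the intervals were computed with the standard Wald method on the log odds ratio, which matches the reported figures):

```python
import math

def odds_ratio_ci(young_dark, young_day, old_dark, old_day):
    # Equation (1): (young dark/day odds) over (older dark/day odds).
    or_ = (young_dark / young_day) / (old_dark / old_day)
    se = math.sqrt(1/young_dark + 1/young_day + 1/old_dark + 1/old_day)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return round(or_, 2), round(lo, 2), round(hi, 2)

print(odds_ratio_ci(7363, 9069, 17574, 20329))   # Table 2: (0.94, 0.91, 0.97)
print(odds_ratio_ci(4506, 5642, 20431, 23756))   # Table 3: (0.93, 0.89, 0.97)
```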
## Conclusion

What conclusion to draw from this analysis? For me, I need more persuading that young (new) drivers are at noticeably greater risk driving after-dark than any other group of drivers. A superficial examination of the stats suggests young drivers are involved in more RTCs after-dark than older drivers, but this does not account for potential differences in exposure. It seems plausible that young people are more likely to drive at night than older people, which might explain why they have more collisions at night. This is not the same as suggesting they are at greater risk at night. When you attempt to account for exposure and other effects of the time of day, by only looking at RTCs in a specific 'Case' hour of the day and comparing daylight and after-dark ambient light conditions, a different picture emerges. The calculated odds ratio is actually less than one, suggesting if anything the risk after-dark for young drivers is lower than it is for older drivers. Regarding banning new drivers from driving at night as part of a graduated driving licence: the evidence needs closer examination before any such proposal is enacted.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.44126686453819275, "perplexity": 1860.5337679295048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370509103.51/warc/CC-MAIN-20200402235814-20200403025814-00537.warc.gz"}
https://elanyachtselection.com/simplify-1-b2cb-c2/
# Simplify 1/(b^2c)+b/(c^2)

To write $\frac{1}{b^2c}$ as a fraction with a common denominator, multiply by $\frac{c}{c}$.

To write $\frac{b}{c^2}$ as a fraction with a common denominator, multiply by $\frac{b^2}{b^2}$.

Write each expression with a common denominator of $b^2c^2$, by multiplying each by an appropriate factor of $1$:

$$\frac{1}{b^2c}+\frac{b}{c^2}=\frac{c}{b^2c^2}+\frac{b\cdot b^2}{b^2c^2}$$

Multiply $b$ and $b^2$: raise $b$ to the power of $1$, raise $b$ to the power of $2$, and use the power rule $b^m b^n = b^{m+n}$ to combine exponents, giving $b^{1+2}=b^3$. The simplified result is therefore:

$$\frac{c}{b^2c^2}+\frac{b^3}{b^2c^2}=\frac{b^3+c}{b^2c^2}$$
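The result can be checked with SymPy (an illustrative sketch, not part of the original walkthrough):

```python
from sympy import symbols, together

b, c = symbols("b c")
# together() combines the two terms over their common denominator.
print(together(1/(b**2*c) + b/c**2))  # (b**3 + c)/(b**2*c**2)
```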
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9939389228820801, "perplexity": 876.321203257932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499710.49/warc/CC-MAIN-20230129080341-20230129110341-00751.warc.gz"}
https://www.imrpress.com/journal/RCM/24/1/10.31083/j.rcm2401024/htm
Open Access | Systematic Review

Omega-3 Polyunsaturated Fatty Acids Supplements and Cardiovascular Disease Outcome: A Systematic Review and Meta-Analysis on Randomized Controlled Trials

1 Department of Critical Rehabilitation, Shanghai Third Rehabilitation Hospital, 200436 Shanghai, China
2 Department of Critical Care Medicine, Huashan Hospital, Fudan University, 200040 Shanghai, China
*Correspondence: Huanghaosankang@126.com (Hao Huang)

Academic Editors: Brian Tomlinson and Vincent Figueredo

Rev. Cardiovasc. Med. 2023, 24(1), 24; https://doi.org/10.31083/j.rcm2401024

Submitted: 14 October 2022 | Revised: 10 November 2022 | Accepted: 14 November 2022 | Published: 12 January 2023

This is an open access article under the CC BY 4.0 license.

Abstract

Background: Many meta-analyses and randomized controlled trials (RCTs) on the use of omega-3 supplements for cardiovascular disease (CVD) have come to different outcomes. Besides, previous meta-analyses have missed some key RCTs on this topic. Methods: PubMed, EMBASE, Cochrane Library and Web of Science were manually searched for eligible RCTs on omega-3 polyunsaturated fatty acid (PUFA) use for CVD. Risk estimates of each relevant outcome were calculated as a hazard ratio (HR) with 95% confidence interval (95% CI) using the random-effects model. Subgroup analysis was conducted according to the main characteristics of the population; sensitivity analysis would be performed if there was significant heterogeneity among analyses on relevant outcomes. Statistical heterogeneity was assessed using chi-square tests and quantified using I-square statistics. Results: Nineteen eligible RCTs incorporating 116,498 participants were included. Omega-3 PUFA supplementation could not significantly improve the outcomes of major adverse cardiovascular events (MACE) (HR: 0.98, 95% CI: 0.91-1.06), myocardial infarction (MI) (HR: 0.86, 95% CI: 0.70-1.05), coronary heart disease (CHD) (HR: 0.90, 95% CI: 0.80-1.01), stroke (HR: 1.00, 95% CI: 0.91-1.10), sudden cardiac death (SCD) (HR: 0.90, 95% CI: 0.80-1.02), all-cause mortality (HR: 0.96, 95% CI: 0.89-1.04), hospitalization (HR: 0.99, 95% CI: 0.81-1.20), hospitalization for all heart disease (HR: 0.91, 95% CI: 0.83-1.00), or hospitalization for heart failure (HR: 0.97, 95% CI: 0.91-1.04). Although omega-3 PUFA significantly reduced revascularization (HR: 0.90, 95% CI: 0.81-1.00) and cardiovascular mortality (CV mortality) (HR: 0.91, 95% CI: 0.85-0.97), the risk for atrial fibrillation (AF) was also increased (HR: 1.56, 95% CI: 1.27-1.91). Subgroup analysis results kept consistent with the main results. Conclusions: Omega-3 PUFA supplementation could reduce the risk for CV mortality and revascularization, but it also increased the AF incidence. No obvious benefits on other CVD outcomes were identified. Overall, the potential CVD benefits and the harm for AF should be balanced when using omega-3 PUFA for patients or populations at high risk.

Keywords: polyunsaturated fatty acids; cardiovascular disease; randomized controlled trial; meta-analysis

1. Introduction

Omega-3 polyunsaturated fatty acids (n-3 PUFA) include $\alpha$-linolenic acid (ALA), eicosapentaenoic acid (EPA), and docosahexaenoic acid (DHA) [1]; among these, ALA is abundant in plants, while EPA and DHA are abundant in marine animals.
Fish oil derived from marine animals is likewise rich in EPA and DHA. Over the past several decades, numerous population-based epidemiological studies have shown that higher fish oil intake in the diet can reduce the incidence of cardiovascular events (CV events) [2, 3, 4]. The American Heart Association (AHA) also recommends that patients with coronary heart disease (CHD) take 1 g/d of EPA and DHA supplements as directed by their physicians, and that patients with hypertriglyceridemia (HTG) take 2–4 g/d of EPA and DHA capsules under the guidance of their family doctors [5]. Accordingly, n-3 PUFA is sought by patients with cardiovascular disease (CVD) to treat their disease and by populations with high-risk factors to prevent CVD. Despite the availability of abundant evidence, the conclusions derived from it are still inconsistent [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24].
From a mechanistic perspective, n-3 PUFA confers protection against a wide range of CVD states, including modulating cell membrane function, regulating cardiac rhythm, improving endothelial function, and inhibiting the inflammatory, oxidative and thrombotic pathways implicated in atherosclerosis [25, 26, 27]. N-3 PUFA also favorably modulates triglyceride-rich lipoprotein metabolism [28]. From a clinical perspective, however, there is still a great deal of controversy over the protective role of n-3 PUFA. Some clinical trials displayed a considerable beneficial profile of n-3 PUFA for reducing all-cause mortality, CV mortality, sudden cardiac death (SCD), CHD, and stroke [10, 29, 30], while others failed to confirm the protective effect [31]. A recent meta-analysis on this topic included 16 randomized controlled trials (RCTs) and concluded that n-3 PUFA could significantly improve CVD outcomes, especially for secondary prevention at the 1 g/d level with EPA alone [32]. To the best of our knowledge, however, that meta-analysis failed to report results on some other key CV outcomes, such as the hospitalization rate among participants, and omitted several essential trials [13, 14, 21, 23, 24]. Importantly, no previous meta-analysis has analyzed the influence of statin and antiplatelet drug use on CVD outcomes with n-3 PUFA intake. Overall, these inconsistent results warrant a better understanding of the effects of n-3 PUFA on the full range of CVD states, and the limitations of previous meta-analyses on this topic should be overcome and updated. To this end, the current study aimed to: (1) conduct a systematic review and meta-analysis incorporating all eligible RCTs; (2) report results on CVD outcomes in a more comprehensive manner; and (3) analyze the influence of statin and antiplatelet drug use on the final results.
2. Methods
This study was conducted based on the Cochrane Handbook and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Supplementary Table 1) [33]. The study protocol is consistent with a previous meta-analysis [32] and has been registered on the INPLASY website (https://inplasy.com/) with the reference ID INPLASY2022110027 (doi: 10.37766/inplasy2022.11.0027) (Supplementary Table 2).
2.1 Search Strategy
We searched the PubMed, EMBASE, Cochrane Library and Web of Science databases for eligible studies from inception to August 15, 2022.
The combined search strategy of relevant keywords and Medical Subject Headings (MeSH) terms used in the current study included: “Omega-3 fatty acids”, “docosahexaenoic acid”, “DHA”, “Eicosapentaenoic acid”, “EPA”, “cardiovascular disease”, “cardiovascular events”, “coronary heart disease”, “myocardial infarction”, “stroke” and “randomized controlled trial”. The detailed search strategy is given in Table 1. No restrictions were applied to language. Reference lists of the retrieved literature were also searched manually.
Table 1. Literature search strategy for the relevant databases.
1. Omega-3 fatty acids: (“Omega-3 fatty acids” [Mesh] OR “Omega-3 Fatty Acid” OR “Omega 3 Fatty Acid” OR “n-3 Oil” OR “n-3 Fatty Acids” OR “Omega 3 Fatty Acids” OR “n-3 PUFA” OR “n3 Fatty Acid” OR “n3 Polyunsaturated Fatty Acid” OR “n-3 Oils” OR “N-3 Fatty Acid” OR “Fatty Acid, N-3” OR “n-3 Polyunsaturated Fatty Acid” OR “n 3 Polyunsaturated Fatty Acid” OR “Oil, n-3” OR “Omega 3 Fatty Acids” OR “PUFA, n3”)
2. (“Cardiovascular disease” OR “Disease, cardiovascular” OR “Diseases, cardiovascular” OR “Coronary disease” OR “Coronary heart disease” OR “Disease coronary heart” OR “Myocardial infarction” OR “Infarct, myocardial” OR “Heart Attack” OR “Heart attacks” OR “Stroke” OR “Cerebrovascular accident” OR “Brain vascular accident” OR “Cerebral stroke” OR “Acute cerebrovascular accident” OR “Apoplexy”)
3. (Randomized controlled trials[pt] OR Randomized controlled trial[pt] OR Clinical Trials, Randomized[pt] OR Trials, Randomized Clinical[pt] OR Controlled Clinical Trials, Randomized[pt])
4. (animals[mh] NOT humans[mh])
5. 1 AND 2 AND 3
6. 5 NOT 4
2.2 Selection Criteria
All retrieved articles went through a two-step review process. They were initially screened by title and abstract. Then, the full texts of potentially eligible studies were reviewed by two independent authors (Xue Qi and Hao Huang). Any disagreements were resolved by discussion in a group panel with another author (Ru Ya), who is familiar with cardiology and evidence-based medicine. The eligibility criteria, following the PICOS principles, were:
Populations: Adult populations (≥18 yr) with CVD or with high-risk factors for CVD (e.g., smoking, obesity, lack of physical activity, etc.); no restrictions on gender, race, nationality or CV-related comorbidities (e.g., diabetes, hypertension, kidney circulation dysfunction).
Intervention/comparison: Omega-3 PUFA from dietary supplements, capsules or drug prescriptions. Considering the difficulty in quantifying n-3 PUFA intake from marine fish food sources, Omega-3 PUFA derived directly from such sources was not considered eligible.
Outcomes: At least one of the following outcomes reported with data available for calculation: major adverse cardiovascular events (MACE), myocardial infarction (MI), CHD, revascularization, stroke, sudden cardiac death (SCD), CV mortality, all-cause mortality, hospitalization, hospitalization for all heart disease, hospitalization for heart failure, and atrial fibrillation (AF).
Study design: Randomized controlled trial (RCT). For the same trial, the report with the longer follow-up period was included to avoid duplication. Eligible RCTs had to have a registered protocol and provide ethics approval and participant consent. Observational studies, reviews, case reports, conference abstracts and experimental studies were excluded. Studies without essential data were also excluded.
2.3 Data Extraction and Outcome of Interest
Data extraction was performed by two independent authors (Xue Qi and Ru Ya) following a pre-specified protocol. The extracted information included characteristics of the eligible studies (year of publication, first author name, country, trial name, follow-up period, etc.), characteristics of the populations (gender (proportion of male), mean age (SD) and sample size (in experimental and control groups), etc.), and characteristics of the program (interventions in the two groups (n-3 PUFA or placebo or other dietary supplements), dose of n-3 PUFA (1–4 g/d), type of n-3 PUFA (EPA + DHA and EPA alone), prevention type (secondary and mixed), registration number, etc.). Risk estimates of hazard ratio (HR), relative risk (RR) and odds ratio (OR) were taken from fully adjusted models if available; if not, unadjusted models were used and specially noted. Intention-to-treat (ITT) principles were applied where available. The authors contacted the primary authors for missing data to facilitate the current analysis; the analysis was still undertaken without these data if no response was received. Herein, the outcomes of MACE, MI, CHD, revascularization, stroke, SCD, CV mortality, all-cause mortality, hospitalization, hospitalization for all heart disease, hospitalization for heart failure and AF were analyzed. Details of the definitions of these outcomes are summarized in Supplementary Table 3. Briefly, MACE indicated a composite of MI, stroke, cardiac death or any revascularization; MI included fatal and non-fatal MI; stroke included fatal and non-fatal stroke; and AF meant new AF events.
2.4 Quality Assessment
To evaluate the quality of the included studies, we applied the Cochrane Risk of Bias Tool, which has been widely used for assessing the methodological quality of RCTs in meta-analyses [34]. Seven specific domains in the Cochrane Risk of Bias Tool were objectively evaluated by two independent authors (Xue Qi and Ru Ya): generation of randomized sequences, concealment of allocation protocols, blinding of study participants and related persons, blinding of outcome evaluators, incomplete outcome data, selective reporting of results and other sources of bias. If a domain of the Cochrane Risk of Bias Tool was not reported or was improperly conducted, that domain was rated as high risk.
2.5 Statistical Analysis
Fully adjusted HRs and the corresponding 95% confidence intervals (95% CIs) for the outcomes of interest, obtained from Cox proportional-hazards regression analysis, were estimated mainly with the DerSimonian-Laird (D-L) random-effects model, because its assumptions allow for the presence of within-study and between-study heterogeneity. The adjusted/unadjusted RRs and ORs in the primary studies were approximately treated as HRs. HRs and standard errors (SEs) derived from the corresponding 95% CIs were logarithmically transformed to stabilize the variance and normalize the distribution. Between-study heterogeneity was assessed with the Cochran Q chi-square test and quantified with I². An I² > 50% or a p value for the Q test < 0.1 was regarded as indicating significant heterogeneity [35]. In the case of significant heterogeneity, sensitivity analysis was performed by removing one study at a time to identify and account for the causes of the heterogeneity.
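To make the pooling procedure concrete, the following is a minimal Python sketch of the DerSimonian-Laird random-effects computation and the I² statistic described above; the hazard ratios and confidence intervals are hypothetical placeholders, not values from the included trials.

```python
import numpy as np

# Hypothetical per-study HRs with 95% CI bounds (illustrative only).
hr = np.array([0.91, 1.02, 0.75, 0.98])
ci_low = np.array([0.80, 0.90, 0.68, 0.85])
ci_high = np.array([1.04, 1.15, 0.83, 1.13])

# Log-transform; the SE follows from the width of the 95% CI on the log scale.
y = np.log(hr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
w = 1.0 / se**2                      # inverse-variance (fixed-effect) weights

# Cochran's Q and I-squared quantify between-study heterogeneity.
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)
k = len(y)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird estimate of the between-study variance tau^2,
# then random-effects weights and the pooled HR with its 95% CI.
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (se**2 + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled HR = {np.exp(y_re):.2f} "
      f"(95% CI: {np.exp(y_re - 1.96 * se_re):.2f}-{np.exp(y_re + 1.96 * se_re):.2f}), "
      f"I^2 = {I2:.1f}%")
```

When tau² is estimated as zero, the random-effects result collapses to the fixed-effect one; a large tau² widens the pooled confidence interval, which is why the text reports wider intervals for the more heterogeneous outcomes.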
Post-hoc subgroup analyses were also conducted to ascertain the influence of other risk factors on the results for MACE, CV mortality and all-cause mortality, since abundant studies were included on those outcomes. According to the main characteristics of the populations and trials, the subgroups were defined as follows: proportion of statin use (<50% vs. ≥50%) in each trial, proportion of antiplatelet drug use (<50% vs. ≥50%) in each trial, n-3 PUFA formulation (EPA + DHA vs. EPA) in each trial, actual amount of n-3 PUFA intake (<2 g/d vs. ≥2 g/d) in each trial, and prevention type (primary vs. secondary vs. mixed prevention) in each trial. The subgroup analysis results were visualized with forest plots. Publication bias was estimated using Begg's correlation test and Egger's linear regression test, with p < 0.10 indicating significant publication bias [36]. All analyses were performed using Stata software version 12.0 (https://www.stata.com/); two-sided p < 0.05 was considered statistically significant. When the 95% CI of an HR touched 1.00, the p value for the HR was checked, with p < 0.05 indicating statistical significance.
3. Results
3.1 Study Selection and Characteristics of the Included Studies
Of the 4772 studies retrieved from the databases, 1865 came from PubMed, 1424 from EMBASE, 515 from Cochrane Library, and 968 from Web of Science. An additional 33 were obtained from other available literature. Then, 4634 records were excluded after initial screening, 12 new records were added by reviewing reference lists during the initial screening, and 18 were excluded after full-text consideration due to no outcome of interest or definition, duplicated study, no useful data or no n-3 PUFA intake. Finally, a total of 19 studies were eligible for the systematic review and meta-analysis; the selection process and exclusion reasons are shown in Fig. 1.
Fig. 1. The flow chart for study screening and selection.
In total, 19 RCTs incorporating 116,498 participants were considered eligible and included in the current systematic review and meta-analysis (Table 2, Ref. [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]). Ten studies [6, 7, 9, 11, 12, 13, 15, 18, 21, 24] were conducted in Europe, 1 [19] in the USA, 2 [10, 20] in Asia, and the other 6 [8, 14, 16, 17, 22, 23] were multicenter trials. Only one study [12] was conducted on an all-male population. Among all 19 clinical trials, a proportion of statin use ≥50% was observed in 10 studies [10, 13, 14, 15, 16, 20, 21, 22, 23, 24] and <50% in 8 studies [6, 7, 8, 9, 11, 12, 18, 19]; a proportion of antiplatelet drug use ≥50% was observed in 11 studies [6, 9, 11, 13, 14, 15, 16, 17, 20, 23, 24] and <50% in 4 studies [7, 10, 18, 21]. Almost all included studies [6, 7, 8, 9, 11, 12, 13, 15, 16, 17, 18, 19, 21, 23, 24] used combined EPA + DHA for n-3 PUFA supplementation, three [10, 20, 22] used EPA only, and one [14] used EPA + DHA + ALA. Six studies [7, 8, 12, 22, 23, 24] provided ≥2 g/d n-3 PUFA to the included populations, and 9 [6, 9, 10, 11, 15, 16, 17, 18, 21] provided <2 g/d.
As for the control group prescription, 16 studies [7, 8, 9, 11, 12, 13, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24] used placebo only, one [6] did not provide any treatment for participants in the control group, one [10] provided standard of care, and one [14] provided placebo and ALA. Eight studies [6, 7, 9, 13, 14, 15, 20, 24] were designed for the secondary prevention of CVD, 10 [8, 10, 11, 12, 16, 17, 18, 19, 22, 23] for mixed (primary and secondary) prevention, and only one [21] for primary prevention. The mean follow-up period was 3.4 yr. In addition, three studies [15, 18, 19] reported unadjusted results, while the rest reported fully adjusted results based on adjustment for variables such as age, serum glucose, body mass index, systolic blood pressure, and smoking status at baseline.
Table 2. Characteristics of included studies.
Study | Trial name | Study population | Total population (Experimental/Control group) | Male (%) | Mean age (yr) | Statin use (%) | Antiplatelet drugs use (%) | n-3 FA formulations | Actual amount of free fatty acid | Control | Median follow-up (yr) | Randomization time | Prevention | Outcomes
Marchioli et al. (1999); Italy [6] | GISSI-P | participants with MI | 11,324 (5666/5658) | 85 | 59.4 | 4.7 | 91.7 | 460 mg EPA + 380 mg DHA (capsule) | 1 g/d PUFA | no treatment | 3.5 | 1993–1995 | Secondary prevention | ⑤⑥⑦⑧
Nilsen et al. (2001); Norway [7] | NA | participants with MI | 300 (150/150) | 79 | 64 | 7.2 | 20.7 | 850–882 mg EPA + DHA (capsule) | 4 g/d PUFA | placebo (corn oil) | 2 | ended at 1997 | Secondary prevention | ①②③④⑥⑦⑧
Brouwer et al. (2006); Multicenter [8] | SOFA (NCT00110838) | participants with ICDs and malignant VT or VF | 546 (273/273) | 85 | 61.5 | 45.5 | NA | 464 mg EPA + 335 mg DHA (capsule) | 2 g/d PUFA | placebo (sunflower oil) | 1 | 2001–2004 | Mixed prevention | ②⑧
Svensson et al. (2006); Denmark [9] | OPACH | participants with CVD and chronic HD | 206 (103/103) | 65 | 67 | 19.5 | 71.4 | EPA + DHA (capsule) | 1.7 g/d PUFA | placebo (olive oil) | 2 | 2002–2003 | Secondary prevention | ①②③⑤⑧
Yokoyama et al. (2007); Japan [10] | JELIS (NCT00231738) | participants with 6.5 mmol/L total cholesterol (4.4 mmol/L LDL) | 18,645 (9326/9319) | 32 | 61 | 100 | 13.9 | 1800 mg EPA (capsule) | 1.8 g/d PUFA | standard of care | 5 | 1996–1999 | Mixed prevention | ②③⑤⑥⑦⑧
Tavazzi et al. (2008); Italy [11] | GISSI-HF (NCT00336336) | participants with HF | 6975 (3494/3481) | 78 | 67 | 22.3 | 58.4 | 850–882 mg EPA + DHA (capsule) | 1 g/d PUFA | placebo | 3.9 | 2002–2005 | Mixed prevention | ②⑤⑥⑦⑨⑩⑪
Einvik et al. (2010); Norway [12] | DOIT | participants at high risk for atherosclerosis | 563 (282/281) | 100 | 70.1 | 19 | NA | EPA + DHA (capsule) | 2.4 g/d PUFA | placebo (corn oil) | 2.4 | 1997–1998 | Mixed prevention | ①⑧
Galan et al. (2010); France [13] | SU.FOL.OM3 (ISRCTN41926726) | participants with CVD | 2501 (1253/1248) | 79.2 | 60.7 | 86.4 | 94 | 600 mg EPA + DHA (capsule) | NA | placebo | 4.7 | 2003–2007 | Secondary prevention | ①③④⑤⑧
Kromhout et al. (2010); Multicenter [14] | Alpha Omega (NCT00127452) | participants with MI | 4837 (2404/2433) | 78.2 | 69 | 86 | 97.5 | 226 mg EPA + 150 mg DHA + 1.9 g ALA/d (margarine) | NA | placebo + ALA | 3.4 | 2002–2006 | Secondary prevention | ①⑦⑧
Rauch et al. (2010); Germany [15] | OMEGA (NCT00251134) | participants with MI | 3818 (1925/1893) | 74 | 64 | 94.2 | 81.5 | 425 mg EPA + 345 mg DHA (capsule) | 1 g/d PUFA | placebo (olive oil) | 1 | 2003–2007 | Secondary prevention | ①④⑥⑦⑧
Bosch et al. (2012); Multicenter [16] | ORIGIN (NCT00069784) | participants with or at high risk for CVD and diabetes | 12,536 (6281/6255) | 65 | 63.5 | 53 | 69.1 | 425 mg EPA + 345 mg DHA (capsule) | 1 g/d PUFA | placebo (olive oil) | 6.2 | 2003–2005 | Mixed prevention | ①②④⑤⑦⑧⑩⑪
Macchia et al. (2013); Multicenter [17] | FORWARD (NCT00597220) | participants with symptomatic AF | 586 (289/297) | 55 | 66.1 | NA | 50.9 | 850–882 mg EPA + DHA | 1 g/d PUFA | placebo (olive oil) | 1 | 2008–2011 | Mixed prevention | ①⑧⑨⑪⑫
Roncaglioni et al. (2013); Italy [18] | R&P study (NCT00317707) | participants with or at high risk for CVD without MI | 12,513 (6244/6269) | 62 | 64 | 42.5 | 41.3 | 425 mg EPA + 345 mg DHA (capsule) | 1 g/d PUFA | placebo (olive oil) | 5 | 2004–2007 | Mixed prevention | ①③⑥⑦⑩
Bonds et al. (2014); USA [19] | AREDS2 | participants with ophthalmological disease, with or without CVD | 3159 (2074/1012) | 43 | 74 | 44 | NA | 650 mg EPA + 350 mg DHA | NA | placebo | 4.8 | 2006–2008 | Mixed prevention | ⑦
Nosaka et al. (2017); Japan [20] | NA (UMIN000016723) | participants with ACS | 238 (119/119) | 76 | 70.5 | 100 | 100 | 1800 mg EPA | NA | placebo | 1.8 | 2010–2014 | Secondary prevention | ④⑦⑩⑪
Bowman et al. (2018); UK [21] | ASCEND (NCT00135226) | participants with diabetes, without CVD | 15,480 (7740/7740) | 63.3 | 62.6 | 75.3 | 35.6 | 425 mg EPA + 345 mg DHA (capsule) | 1 g/d PUFA | placebo (olive oil) | 7.4 | 2005–2011 | Primary prevention | ①④⑤⑦⑧
Bhatt et al. (2019); Multicenter [22] | REDUCE-IT (NCT01492361) | participants with or at high risk for CVD | 8179 (4089/4090) | 71 | 64 | 100 | NA | 3500 mg EPA (IPE) | 4 g/d PUFA | placebo (mineral oil) | 4.9 | 2011–2016 | Mixed prevention | ①②④⑤⑦⑧⑩
Nicholls et al. (2020); Multicenter [23] | STRENGTH (NCT02104817) | participants with or at high risk for CVD | 13,078 (6539/6539) | 62.5 | 65 | 100 | 71.3 | 300 mg EPA + DHA (capsule) | 4 g/d PUFA | placebo (corn oil) | 3.2 | 2014–2017 | Mixed prevention | ①③④⑦⑧⑪⑫
Kalstad et al. (2021); Norway [24] | OMEMI (NCT01841944) | participants with ACS | 1014 (505/509) | 74 | 71 | 96.4 | 100 | 930 mg EPA + 660 mg DHA (capsule) | 3.8 g/d PUFA | placebo | 2 | 2012–2018 | Secondary prevention | ①②④⑤⑧
Abbreviations: FA, fatty acids; MI, myocardial infarction; ICD, implantable cardioverter-defibrillator; VT, ventricular tachycardia; VF, ventricular fibrillation; CVD, cardiovascular disease; HD, chronic hemodialysis; LDL, low density lipoprotein; HF, heart failure; AF, atrial fibrillation; ACS, acute coronary syndrome; EPA, eicosapentaenoic acid; DHA, docosahexaenoic acid; ALA, alpha-linolenic acid; PUFA, polyunsaturated fatty acids. Outcomes: ①MACE, ②MI, ③CHD, ④Revascularization, ⑤Stroke, ⑥Sudden cardiac death, ⑦CV mortality, ⑧All-cause mortality, ⑨Hospitalization, ⑩Hospitalization for all heart disease, ⑪Hospitalization for heart failure, ⑫AF.
In terms of methodological quality, both Nilsen et al. [7] and Einvik et al. [12] failed to provide detailed descriptions of the blinding methods. Nilsen et al. [7] did not provide evidence to support the process of randomization and allocation concealment; Einvik et al. [12] did not provide any information about allocation concealment either. Bowman et al. [21] carried out an open-label study without blinding of participants and outcome assessments. Brouwer et al. [8], Kromhout et al. [14] and Bonds et al. [19] provided incomplete outcome data. Other sources of bias remained unclear in Marchioli et al. [6], Svensson et al. [9], Einvik et al. [12], Macchia et al. [17], Bonds et al. [19] and Nosaka et al. [20] (Supplementary Table 4).
3.2 MACE
Thirteen studies [7, 9, 12, 13, 14, 15, 16, 17, 18, 21, 22, 23, 24] with 75,611 participants (37,804 in the n-3 PUFA group and 37,807 in the control group) reported the MACE outcome and showed that the risk for MACE could not be significantly reduced by n-3 PUFA (HR: 0.98, 95% CI: 0.91–1.06; p = 0.592), with significant heterogeneity (I² = 62.7%; p = 0.001) (Fig. 2A).
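The leave-one-out sensitivity analysis applied to outcomes such as this one can be sketched as follows; the study-level data are again hypothetical placeholders, and the dl_pool helper is an assumed re-implementation of the DerSimonian-Laird pool shown in Section 2.5.

```python
import numpy as np

def dl_pool(y, se):
    """DerSimonian-Laird random-effects pool of log-HRs y with standard errors se."""
    w = 1.0 / se**2
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)
    return np.sum(w_re * y) / np.sum(w_re), np.sqrt(1.0 / np.sum(w_re))

# Hypothetical log-HRs and SEs (illustrative only, not the MACE trial data).
y = np.log(np.array([0.91, 1.02, 0.75, 0.98, 1.05]))
se = np.array([0.07, 0.06, 0.05, 0.08, 0.06])

# Leave-one-out: re-pool after dropping each study in turn and watch
# how much the pooled estimate moves.
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    m, s = dl_pool(y[keep], se[keep])
    print(f"dropping study {i + 1}: pooled HR = {np.exp(m):.2f} "
          f"(95% CI: {np.exp(m - 1.96 * s):.2f}-{np.exp(m + 1.96 * s):.2f})")
```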
After removing one heterogeneous study [22] (8179 participants), the HR became 1.00 (95% CI: 0.96–1.05; p = 0.872) with little heterogeneity (I² = 0.0%; p = 0.955). In subgroup analysis, n-3 PUFA could not significantly reduce MACE by prevention type (secondary prevention (n = 6 [6, 7, 13, 14, 15, 24]): 1.07, 95% CI: 0.97–1.18; mixed prevention (n = 6 [12, 16, 17, 18, 22, 23]): 0.92, 95% CI: 0.82–1.04; only one study [21] for primary prevention: 0.97, 95% CI: 0.87–1.08), statin use proportion in the trial (<50% (n = 4 [7, 9, 12, 18]): 0.99, 95% CI: 0.90–1.09; ≥50% (n = 8 [13, 14, 15, 16, 21, 22, 23, 24]): 0.98, 95% CI: 0.89–1.09), antiplatelet drug use proportion in the trial (<50% (n = 3 [7, 18, 21]): 0.98, 95% CI: 0.91–1.05; ≥50% (n = 8 [9, 13, 14, 15, 16, 17, 23, 24]): 1.02, 95% CI: 0.96–1.07) or actual n-3 PUFA intake amount (<2 g/d (n = 6 [9, 15, 16, 17, 18, 21]): 1.00, 95% CI: 0.96–1.05; ≥2 g/d (n = 5 [7, 12, 22, 23, 24]): 0.93, 95% CI: 0.78–1.12). Only one study [22] was included in the EPA-only analysis (HR: 0.75, 95% CI: 0.68–0.83), so more evidence is still required for this result (Fig. 2B).
Fig. 2. Forest plots and subgroup analyses for MACE. (A) Forest plot of the main result on MACE. Each bar with its middle diamond represents the HR and 95% CI for each included study, with the detailed values given on the right. The bottom diamond represents the synthesized result: if the whole diamond lies to the left of the vertical solid line (at 1.00), the risk of MACE was significantly reduced; if the whole diamond lies to the right of the vertical solid line, the risk of MACE was significantly increased; otherwise there was no statistically significant association. The vertical dashed line in red marks the location of the synthesized HR, from which the trend of the association can be inferred. (B) Forest plot of the subgroup analysis. Each outcome includes a couple of subgroups; the reference list of studies in each subgroup is marked in the forest plot, and the synthesized result with 95% CI for each subgroup is shown on the right. MACE, major adverse cardiovascular events; EPA, eicosapentaenoic acids; DHA, docosahexaenoic acids; HR, hazard ratio; 95% CI, 95% confidence interval.
3.3 MI
Eight studies [7, 8, 9, 10, 11, 16, 22, 24] involving 48,401 participants (24,221 in the n-3 PUFA group and 24,180 in the control group) reported the MI outcome. The results indicated that the risk for MI could not be significantly reduced by n-3 PUFA (HR: 0.86, 95% CI: 0.70–1.05; p = 0.137), with significant heterogeneity (I² = 70.5%; p = 0.001) (Fig. 3). After removing 3 heterogeneous studies [9, 16, 22] (20,921 participants), the HR remained stable, moving from 0.86 (95% CI: 0.70–1.05; p = 0.137) to 0.86 (95% CI: 0.72–1.03; p = 0.112) with a narrower confidence interval and little heterogeneity (I² = 15.5%; p = 0.316). The main heterogeneity in the MI analysis was thus found among these three studies [9, 16, 22].
Fig. 3. Forest plot for MI. Forest plot of the main result on MI. Each bar with its middle diamond represents the HR and 95% CI for each included study, with the detailed values given on the right. The bottom diamond represents the synthesized result: if the whole diamond lies to the left of the vertical solid line (at 1.00), the risk of MI was significantly reduced; if the whole diamond lies to the right of the vertical solid line, the risk of MI was significantly increased; otherwise there was no statistically significant association.
The vertical dashed line in red marks the location of the synthesized HR, from which the trend of the association can be inferred. MI, myocardial infarction; HR, hazard ratio; 95% CI, 95% confidence interval.
3.4 CHD
Six studies [7, 9, 10, 13, 18, 23] involving 47,243 participants (23,615 in the n-3 PUFA group and 23,628 in the control group) reported results on CHD. N-3 PUFA showed a trend toward reducing the incidence of CHD, but the result was not statistically significant (HR: 0.90, 95% CI: 0.80–1.01; p = 0.079), with little heterogeneity (I² = 37.3%; p = 0.158) (Fig. 4). Given the low between-study heterogeneity, no sensitivity analysis was performed.
Fig. 4. Forest plot for CHD. Forest plot of the main result on CHD. Each bar with its middle diamond represents the HR and 95% CI for each included study, with the detailed values given on the right. The bottom diamond represents the synthesized result: if the whole diamond lies to the left of the vertical solid line (at 1.00), the risk of CHD was significantly reduced; if the whole diamond lies to the right of the vertical solid line, the risk of CHD was significantly increased; otherwise there was no statistically significant association. The vertical dashed line in red marks the location of the synthesized HR, from which the trend of the association can be inferred. CHD, coronary heart disease; HR, hazard ratio; 95% CI, 95% confidence interval.
3.5 Revascularization
Nine studies [7, 13, 15, 16, 20, 21, 22, 23, 24] were included in the analysis of revascularization, with a total of 57,144 participants (28,601 in the n-3 PUFA group and 28,543 in the control group). N-3 PUFA could significantly reduce the incidence of revascularization (HR: 0.90, 95% CI: 0.81–1.00; p = 0.006); although the upper bound of the 95% CI sat at 1.00, the p value for the HR was 0.006 (<0.05). Significant heterogeneity was observed (I² = 62.6%; p = 0.006) (Fig. 5). After removing three heterogeneous studies [20, 22, 24] (3431 participants), the HR changed from 0.90 (95% CI: 0.81–1.00; p = 0.006) to 0.96 (95% CI: 0.91–1.02; p = 0.221) with little heterogeneity (I² = 0.0%; p = 0.914). The synthesized result on revascularization was therefore not robust, since it changed after the heterogeneous studies were removed; more relevant studies are still needed to confirm this controversial result.
Fig. 5. Forest plot for revascularization. Forest plot of the main result on revascularization. Each bar with its middle diamond represents the HR and 95% CI for each included study, with the detailed values given on the right. The bottom diamond represents the synthesized result: if the whole diamond lies to the left of the vertical solid line (at 1.00), the risk of revascularization was significantly reduced; if the whole diamond lies to the right of the vertical solid line, the risk of revascularization was significantly increased; otherwise there was no statistically significant association. The vertical dashed line in red marks the location of the synthesized HR, from which the trend of the association can be inferred. HR, hazard ratio; 95% CI, 95% confidence interval.
3.6 Stroke
Nine trials [6, 9, 10, 11, 13, 16, 21, 22, 24] involving 76,860 participants (38,457 and 38,403 in the n-3 PUFA and control groups, respectively) were eligible for the stroke outcome analysis; n-3 PUFA exerted little effect on stroke incidence (HR: 1.00, 95% CI: 0.91–1.10; p = 0.967), with little heterogeneity (I² = 34.6%; p = 0.141) (Fig. 6).
Fig. 6. Forest plot for stroke. Forest plot of the main result on stroke.
Each bar with its middle diamond represents the HR and 95% CI for each included study, with the detailed values given on the right. The bottom diamond represents the synthesized result: if the whole diamond lies to the left of the vertical solid line (at 1.00), the risk of stroke was significantly reduced; if the whole diamond lies to the right of the vertical solid line, the risk of stroke was significantly increased; otherwise there was no statistically significant association. The vertical dashed line in red marks the location of the synthesized HR, from which the trend of the association can be inferred. HR, hazard ratio; 95% CI, 95% confidence interval.
3.7 SCD
Six studies [6, 7, 10, 11, 15, 18] involving 53,575 participants (26,805 and 26,770 in the n-3 PUFA group and the control group, respectively) were analyzed for SCD. It was found that n-3 PUFA could not improve the outcome of SCD (HR: 0.90, 95% CI: 0.80–1.02; p = 0.111), with little heterogeneity (I² = 3.2%; p = 0.396) (Fig. 7).
Fig. 7. Forest plot for SCD. Forest plot of the main result on SCD. Each bar with its middle diamond represents the HR and 95% CI for each included study, with the detailed values given on the right. The bottom diamond represents the synthesized result: if the whole diamond lies to the left of the vertical solid line (at 1.00), the risk of SCD was significantly reduced; if the whole diamond lies to the right of the vertical solid line, the risk of SCD was significantly increased; otherwise there was no statistically significant association. The vertical dashed line in red marks the location of the synthesized HR, from which the trend of the association can be inferred. SCD, sudden cardiac death; HR, hazard ratio; 95% CI, 95% confidence interval.
3.8 CV Mortality
Thirteen studies [6, 7, 10, 11, 14, 15, 16, 18, 19, 20, 21, 22, 23] were selected to calculate CV mortality. In total, 111,082 participants were included, of whom 56,051 were in the n-3 PUFA group and 55,031 in the control group. The synthesized results showed that n-3 PUFA intake could significantly reduce CV mortality (HR: 0.91, 95% CI: 0.85–0.97; p = 0.003), with little heterogeneity (I² = 20.5%; p = 0.236) (Fig. 8A). In the subgroups of secondary prevention (HR: 0.85, 95% CI: 0.75–0.97; n = 5 [6, 7, 14, 15, 20]), <50% (HR: 0.90, 95% CI: 0.84–0.97; n = 5 [6, 7, 11, 18, 19]) and ≥50% (HR: 0.90, 95% CI: 0.81–1.00; n = 8 [10, 14, 15, 16, 20, 21, 23]) statin use, <50% antiplatelet use (HR: 0.87, 95% CI: 0.77–1.00; n = 4 [7, 10, 18, 21]), EPA + DHA intake (HR: 0.93, 95% CI: 0.88–0.98; n = 10 [6, 7, 11, 14, 15, 16, 18, 19, 21, 23]), EPA-only intake (HR: 0.78, 95% CI: 0.66–0.91; n = 3 [10, 20, 22]), and <2 g/d n-3 PUFA intake (HR: 0.90, 95% CI: 0.85–0.96; n = 7 [6, 10, 11, 15, 16, 18, 21]), n-3 PUFA could significantly reduce CV mortality. No statistical significance was observed in the other subgroups (Fig. 8B), and most subgroup results were consistent with the overall result on CV mortality.
Fig. 8. Forest plot for CV mortality. (A) Forest plot of the main result on CV mortality. Each bar with its middle diamond represents the HR and 95% CI for each included study, with the detailed values given on the right.
The bottom diamond represents the synthesized result: if the whole diamond lies to the left of the vertical solid line (at 1.00), the risk of CV mortality was significantly reduced; if the whole diamond lies to the right of the vertical solid line, the risk of CV mortality was significantly increased; otherwise there was no statistically significant association. The vertical dashed line in red marks the location of the synthesized HR, from which the trend of the association can be inferred. (B) Forest plot of the subgroup analysis. Each outcome includes a couple of subgroups; the reference list of studies in each subgroup is marked in the forest plot, and the synthesized result with 95% CI for each subgroup is shown on the right. EPA, eicosapentaenoic acids; DHA, docosahexaenoic acids; HR, hazard ratio; 95% CI, 95% confidence interval.
3.9 All-Cause Mortality
Fifteen studies [6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17, 21, 22, 23, 24] involving 93,613 participants (46,825 in the n-3 PUFA group and 46,788 in the control group) were included in the meta-analysis of all-cause mortality; n-3 PUFA could not significantly reduce the risk for all-cause mortality (HR: 0.96, 95% CI: 0.89–1.04; p = 0.339). Significant heterogeneity was found (I² = 54.2%; p = 0.006) (Fig. 9A). After removing three heterogeneous studies [12, 22, 24] (9756 participants), the HR became 0.98 (95% CI: 0.94–1.04; p = 0.544) with little heterogeneity (I² = 8.4%; p = 0.363) and a narrower confidence interval. The results on all-cause mortality remained robust through sensitivity analysis. As for subgroup analysis, in trials with statin use in <50% of the population (HR: 0.78, 95% CI: 0.66–0.91; n = 5 [6, 7, 8, 9, 12]), n-3 PUFA could reduce the risk for all-cause mortality. However, the results were not statistically significant in the other subgroups (Fig. 9B).
Fig. 9. Forest plot for all-cause mortality. (A) Forest plot of the main result on all-cause mortality. Each bar with its middle diamond represents the HR and 95% CI for each included study, with the detailed values given on the right. The bottom diamond represents the synthesized result: if the whole diamond lies to the left of the vertical solid line (at 1.00), the risk of all-cause mortality was significantly reduced; if the whole diamond lies to the right of the vertical solid line, the risk of all-cause mortality was significantly increased; otherwise there was no statistically significant association. The vertical dashed line in red marks the location of the synthesized HR, from which the trend of the association can be inferred. (B) Forest plot of the subgroup analysis. Each outcome includes a couple of subgroups; the reference list of studies in each subgroup is marked in the forest plot, and the synthesized result with 95% CI for each subgroup is shown on the right. EPA, eicosapentaenoic acids; DHA, docosahexaenoic acids; HR, hazard ratio; 95% CI, 95% confidence interval.
3.10 Hospitalization
Two studies [11, 17] involving a total of 7571 participants were included in the analysis of hospitalization (3783 in the n-3 PUFA group and 3788 in the control group); n-3 PUFA could not significantly reduce the hospitalization incidence (HR: 0.99, 95% CI: 0.81–1.20; p = 0.884), with little heterogeneity (I² = 33.1%; p = 0.221) (Supplementary Fig. 1).
3.11 Hospitalization for All Heart Disease
Five studies [11, 16, 18, 20, 22] were used to synthesize the results on hospitalization for all heart diseases.
In detail, 40,441 participants were included in total, with 20,227 in the n-3 PUFA group and 20,214 in the control group. N-3 PUFA showed a signal toward reducing the risk of hospitalization for all heart diseases (HR: 0.91, 95% CI: 0.83–1.00; p = 0.059) with significant heterogeneity (I² = 70.9%; p = 0.008), but the result was not statistically significant (Supplementary Fig. 2). After removing two heterogeneous studies [20, 22] (8417 participants), the HR became 0.96 (95% CI: 0.92–1.00; p = 0.048) with little heterogeneity (I² = 0.0%; p = 0.475). More evidence on the outcome of hospitalization for all heart diseases is still needed, because the upper bounds of the 95% CIs sit at 1.00 and the p values for the HRs are close to 0.05.
3.12 Hospitalization for Heart Failure
Six studies [11, 16, 17, 20, 23, 24] reported outcomes on hospitalization for heart failure. A total of 34,427 participants, 17,227 in the n-3 PUFA group and 17,200 in the control group, were analyzed, and the pooled results showed that n-3 PUFA could not decrease the incidence of hospitalization for heart failure (HR: 0.97, 95% CI: 0.91–1.04; p = 0.450), with little heterogeneity (I² = 0.0%; p = 0.715) (Supplementary Fig. 3).
3.13 AF
Three studies [6, 23, 24] with 14,678 participants (7333 in the n-3 PUFA group and 7345 in the control group) reported AF results, and the incidence of AF was significantly increased with n-3 PUFA intake (HR: 1.56, 95% CI: 1.27–1.91; p < 0.001), with little heterogeneity (I² = 0.0%; p = 0.407) (Supplementary Fig. 4).
3.14 Publication Bias
No publication bias was found across the analyses on MACE (Begg's test p = 0.428, Egger's test p = 0.427), MI (Begg's test p = 1.000, Egger's test p = 0.776), CHD (Begg's test p = 1.000, Egger's test p = 0.858), revascularization (Begg's test p = 0.048, Egger's test p = 0.282), stroke (Begg's test p = 0.348, Egger's test p = 0.451), SCD (Begg's test p = 1.000, Egger's test p = 0.510), CV mortality (Begg's test p = 0.760, Egger's test p = 0.532), and all-cause mortality (Begg's test p = 0.428, Egger's test p = 0.598).
4. Discussion
Considerable interest has focused on the potential protection from n-3 PUFA against CVD. Omega-3 PUFA supplements confer favorable effects on lipoprotein metabolism and on the inflammatory, oxidative, thrombotic, vascular, and arrhythmogenic factors present in CVD [37, 38]. Marine-derived n-3 PUFA has been investigated for decades in patients with CVD or in patients with high-risk factors for CVD, yielding conflicting results on its effects on CV events. In the current systematic review and meta-analysis, we included 19 RCTs with 116,498 participants taking n-3 PUFA or placebo. We found that n-3 PUFA intake could significantly reduce the risk for revascularization and CV mortality, but it increased the risk for AF. No significant effects were observed with respect to MACE, MI, CHD, SCD, all-cause mortality, hospitalization, hospitalization for all heart disease and hospitalization for heart failure. Before clinical practice, medical caregivers should balance the benefits and harm of n-3 PUFA for CVD prevention and treatment.
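As a companion to the publication-bias tests reported in Section 3.14 above, here is a minimal sketch of Egger's linear regression test; the study-level values are hypothetical placeholders, and the intercept_stderr attribute assumes SciPy 1.7 or later.

```python
import numpy as np
from scipy import stats

# Hypothetical study-level log-HRs and standard errors (illustrative only,
# not the values from the included trials).
log_hr = np.array([-0.09, 0.02, -0.29, -0.02, -0.11])
se = np.array([0.06, 0.07, 0.05, 0.09, 0.12])

# Egger's regression: the standardized effect (y/se) is regressed on
# precision (1/se); an intercept far from zero suggests small-study effects.
res = stats.linregress(1 / se, log_hr / se)
t_int = res.intercept / res.intercept_stderr  # intercept_stderr needs scipy >= 1.7
p_int = 2 * stats.t.sf(abs(t_int), df=len(se) - 2)
print(f"Egger intercept = {res.intercept:.3f}, two-sided p = {p_int:.3f}")
```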
In 2017, a scientific statement was issued by the AHA to assess the impact of n-3 PUFA supplements on CVD based on several RCTs and meta-analyses. It confirmed that no consensus had been reached on n-3 PUFA intake for the prevention of CVD in populations with high-risk factors for CVD (class III, B), and that taking n-3 PUFA for the secondary prevention of CHD death was reasonable in people with diagnosed CHD (class IIa, A) [39]. No clear beneficial effects of n-3 PUFA on MACE and CHD were revealed in the current meta-analysis, and no analysis of CHD death was conducted because the existing data and evidence on that outcome were limited. In subgroup analysis, intake of EPA alone had a strong effect on reducing MACE and CV mortality, a finding consistent with previous studies. Two previous clinical trials have revealed the potential benefit of purified formulations of EPA alone [22, 40]. The open-label JELIS study prescribed 1.8 g/d EPA in combination with a statin over a median 4.6 yr of follow-up in 18,645 Japanese patients with hypercholesterolemia, which resulted in fewer CHD events compared with statin therapy alone (2.8% vs. 3.5%; HR: 0.81; 95% CI: 0.69–0.95) [40]. The JELIS trial enrolled patients with a mean low density lipoprotein cholesterol (LDL-C) level of 180 mg/dL, but these patients were treated with a rather low dose of statins (pravastatin 10 mg or simvastatin 5 mg), and revascularization was included in a broad composite clinical endpoint [13]. In the REDUCE-IT trial, the primary authors reported the administration of 4 g/d EPA compared with mineral oil over a median 4.9 yr of follow-up in 8179 statin-treated patients with a high triglyceride level between 135–499 mg/dL. CV events were significantly reduced in the EPA group (17.2% vs. 22.0%; HR: 0.75, 95% CI: 0.68–0.83) [22]. Additional analyses of both EPA-only studies suggested an inverse association between the plasma EPA concentration during treatment and the rate of CV events [41].
Early RCTs in the 1990s suggested cardiovascular benefits of n-3 PUFA after an acute myocardial infarction (AMI). The DART randomized trial demonstrated a 29% reduction in 2-yr mortality in patients randomized to eat fatty fish twice per week [42]. The GISSI trial indicated a 21% reduction in all-cause mortality and a 45% reduction in SCD in patients administered 850 mg EPA/DHA compared to placebo over a 3.5 yr follow-up period [6]. Another three clinical trials administering 400–840 mg/d EPA + DHA reported non-significant results [13, 14, 15]. Similarly, no risk reduction was observed with 1 g/d EPA + DHA intake in diabetic patients free of CVD in the ASCEND trial [21]. Given that the results on this topic are controversial across RCTs and meta-analyses, a new meta-analysis was conducted with all eligible RCTs included, and comprehensive cardiovascular outcomes were provided. A comparison of the included studies points to some possible sources of the inconsistent results: n-3 PUFA was used for the secondary prevention of CVD in some studies, and considerable statin and antiplatelet drug use, capable of influencing the efficacy of n-3 PUFA, occurred during the treatment process. Additionally, the different amounts of n-3 PUFA administered and the different demographic baseline characteristics, such as populations with different risk factors, would also interact with n-3 PUFA intake. The use of n-3 PUFA was observed to potentially reduce the risk of CV mortality.
However, little impact was noted on all-cause mortality. The protection conferred by n-3 PUFA on CV mortality could be explained by low-dose n-3 PUFA intake controlling SCD through an antiarrhythmic effect [43]. Sensitivity analysis on CV mortality showed consistent results, which could be explained by the high proportion of deaths caused by cardiac reasons. In addition, supplementation with n-3 PUFA failed to reduce the risk of MI and stroke, which could be influenced by the dose and duration of n-3 PUFA intake. Results from the subgroup analysis showed a significantly reduced risk for CV mortality when n-3 PUFA was used for secondary prevention, suggesting that populations exposed to higher CV risks seemed to benefit most from n-3 PUFA. This hypothesis is also supported by a previous meta-analysis claiming that the benefits of n-3 PUFA in reducing CHD were more evident in participants with elevated triglyceride or elevated LDL-C levels [44]. In the FOURIER trial, MACE was observed in 14.4% of diabetic patients after 36 months of statin-PCSK9 inhibitor therapy [45]. Moreover, possible beneficial effects of n-3 PUFA were simply more likely to be detectable because of the greater number of CV events. Similar results were also detected in the JELIS trial [10].
Additionally, there are previous meta-analyses on this controversial topic (Table 3, Ref. [6, 7, 8, 9, 12, 14, 17, 19, 20, 21, 32, 46]). Casula et al. [32] included 16 RCTs involving 81,073 participants and revealed that the risk of CV mortality, MACE and MI could be significantly reduced by n-3 PUFA intake. To the best of our knowledge, the results on CV mortality, MACE and MI were likely to be influenced by the dosage and duration of n-3 PUFA intake, and the results on MACE were not stable in the study of Casula et al. [32] because the 95% CI was close to 1. With more updated results included, the synthesized results might change. Additionally, sensitivity analysis, publication bias, sources of heterogeneity and study limitations were not well presented or discussed, all of which are important for providing informative findings. This study included 19 RCTs involving 116,498 participants, and the findings from the current study are therefore stronger and more powerful. Yan et al. [46] included 15 RCTs with 141,164 participants, and 10 CVD-related outcomes were mainly reported; however, hospitalization for all heart diseases and hospitalization for heart failure were absent. They revealed that n-3 PUFA could reduce MACE, MI and CV mortality, but publication bias was not reported, and the synthesized results might not be as stable as expected. Yan et al. [46] also performed a meta-analysis on bleeding events and cancer. However, only a few included studies provided outcomes on bleeding events or disorders, and significant heterogeneity was observed across the studies reporting bleeding events. The incidence of cancer should not be listed as a main aim in a meta-analysis focusing on CVD outcomes, because cardiologist authors may not be as familiar with cancer outcomes as with CVD outcomes. Thus, the evaluation of cancer incidence was questionable, and all these concerns about Yan et al. [46] need to be addressed with more high-quality evidence.
Table 3. Comparison to previous meta-analyses.
Item | Casula et al. [32] (PMID: 32634581) | Yan et al. [46] (PMID: 36103100) | Current study
Year | 2020 | 2022 | 2022
Published journal | Pharmacological Research (ISSN: 1043-6618) | Cardiovascular Drugs and Therapy (ISSN: 0920-3206) | Reviews in Cardiovascular Medicine (ISSN: 1530-6550)
Populations | 81,073 participants, mean age from 49–74 yr | 141,164 participants | 116,498 participants
N-3 PUFA intake | EPA + DHA or EPA | EPA + DHA or EPA | EPA + DHA or EPA
Prevention type | secondary or mixed | secondary or mixed | secondary or mixed
Included study type | RCTs | RCTs | RCTs
Included study number | 16 | 15 | 19
Included study quality | all trials are of high quality (assessed by Jadad score) | all trials are of high quality | Nilsen et al. [7] (2001) and Einvik et al. [12] (2010) lacked detailed descriptions of blinding methods. Nilsen et al. [7] (2001) failed to provide evidence to support the process of randomization and allocation concealment; allocation concealment was also absent in Einvik et al. [12] (2010). Bowman et al. [21] (2018) was an open-label study without blinding of participants and outcome assessments. Incomplete outcome data existed in Brouwer et al. [8] (2006), Kromhout et al. [14] (2010) and Bonds et al. [19] (2014). Other sources of bias were unclear in Marchioli et al. [6] (1999), Svensson et al. [9] (2006), Einvik et al. [12] (2010), Macchia et al. [17] (2013), Bonds et al. [19] (2014) and Nosaka et al. [20] (2017)
Analyzed outcomes | 6 outcomes (all-cause mortality, CV mortality, non-CV mortality, MACE, MI and stroke) | 10 outcomes (MACE, MI, HF, stroke, AF, CV mortality, all-cause mortality, gastrointestinal problems, bleeding-related disorders and cancer) | 12 outcomes (MACE, MI, CHD, revascularization, stroke, SCD, CV mortality, all-cause mortality, hospitalization, hospitalization for all heart disease, hospitalization for heart failure and AF)
Total findings | risk of CV mortality, MACE and MI was reduced | risk of MACE, MI and CV mortality was reduced; risk of AF was increased | risk of revascularization and CV mortality was reduced; risk of AF was increased
Sensitivity analysis | not reported | performed on MACE, MI, HF, AF, stroke, all-cause mortality and cancer; results on MI changed | performed on MACE, MI, revascularization, all-cause mortality, hospitalization for all heart disease; results on revascularization changed
Findings from subgroup analysis | (1) risk of CV mortality and MI was reduced in secondary prevention trials; (2) risk reduction on CV mortality, MACE and MI was effective for more than 1 g/d n-3 PUFA intake; (3) EPA + DHA was only effective on CV mortality over EPA | (1) EPA alone seemed to be more effective than EPA + DHA; (2) risk of MACE was reduced in secondary prevention trials; (3) risk of MI was reduced in primary prevention trials; (4) risk of stroke in patients with MI was increased; (5) EPA was associated with the risk of bleeding | (1) n-3 PUFA was not associated with MACE outcomes across subgroup analyses; (2) risk of CV mortality was reduced across subgroup analyses except in mixed prevention trials and ≥2 g/d n-3 PUFA intake trials; (3) risk of all-cause mortality was reduced in trials with statin use in <50% of the population
Heterogeneity | not reported | (1) no significant heterogeneity: stroke, CV mortality, cancer; (2) mild heterogeneity on MI, HF; (3) slight heterogeneity on all-cause mortality; (4) moderate heterogeneity on MACE, AF, bleeding-related disorders; (5) significant heterogeneity on gastrointestinal problems | (1) little heterogeneity: CHD, stroke, SCD, CV mortality, hospitalization, hospitalization for all heart disease, AF; (2) significant heterogeneity: MACE, MI, revascularization, all-cause mortality
Publication bias | not reported | not reported | no publication bias on MACE, MI, CHD, revascularization, stroke, SCD, CV mortality and all-cause mortality
Conclusion | n-3 PUFA significantly improves cardiovascular outcomes, with higher benefit in secondary CV prevention, using more than 1 g/d and taking EPA alone | n-3 PUFA may reduce the risk of MACE, MI and CV mortality; EPA alone seems to be effective; n-3 PUFA does not increase gastrointestinal problems, bleeding-related disorders, or cancer | n-3 PUFA could reduce the risk of CV mortality and revascularization, but it also increases AF incidence; the benefits and harm of n-3 PUFA should be balanced when used for patients or high-risk populations
Study limitation | not reported | lack of discussion on dietary supplement type; heterogeneity among included studies; limitations from the JELIS and REDUCE-IT studies; small number of studies on the AF outcome | inconsistent outcome definitions; high heterogeneity across some analyses; inconsistent n-3 PUFA formulation; small number of studies on AF and hospitalization
Abbreviations: MACE, major adverse cardiovascular events; MI, myocardial infarction; HF, heart failure; CHD, coronary heart disease; SCD, sudden cardiac death; AF, atrial fibrillation; CVD, cardiovascular disease; n-3 PUFA, Omega-3 polyunsaturated fatty acids; EPA, eicosapentaenoic acids; DHA, docosahexaenoic acids; RCT, randomized controlled trial.
The current meta-analysis has some merits compared to previous meta-analyses. First, it is the most comprehensive and most recently updated study to date: 19 related RCTs with 116,498 participants were included, and 12 outcomes of interest on CVD were specially analyzed. Generally speaking, abundant evidence brings more robust and reliable conclusions, and the pooled CVD outcomes offer more insights for clinical practice. Second, detailed subgroup analyses were performed on variables concerning statin use and antiplatelet drug use, and the subgroup results also supported that n-3 PUFA seems to be more effective and might bring more benefits to populations at high risk for potential CV events.
There were also several limitations to the current meta-analysis. First, the definitions of the outcomes of interest were not consistent across the included studies, which might introduce some bias into the pooled results and potential heterogeneity among the studies analyzed. Second, heterogeneity appeared high in some analyses; for this reason, all analyses were performed using random-effects models, and sensitivity analyses were performed on the outcomes of interest with significant heterogeneity. Third, the n-3 PUFA formulation was not consistent across the included studies, which potentially introduced some bias into the analysis; given that, an additional subgroup analysis was performed based on the different n-3 PUFA formulations. Finally, although 12 outcomes of interest were pre-specified for clinical reference, the number of studies on AF and hospitalization was relatively limited. Thus, more evidence on the outcomes of AF and hospitalization with n-3 PUFA intake is still required to provide robust and conclusive results in the future.
5. Conclusions
In this meta-analysis, n-3 PUFA intake was found to significantly reduce the risk for revascularization and CV mortality; however, it also increased the risk for AF. In addition, n-3 PUFA seems to be more effective in populations with CV events for secondary prevention, especially with EPA intake only. Neutral results were observed with respect to MACE, MI, CHD, SCD, all-cause mortality, hospitalization, hospitalization for all heart disease and hospitalization for heart failure. Before clinical practice, medical caregivers should balance the benefits and harm of n-3 PUFA for CVD prevention and treatment.
Author Contributions
All authors designed and conducted this study. XQ wrote the paper. HH, HZ and RY helped design the study. XQ and HH completed data collection and analysis. HZ and RY revised the statistical methodology. HH assumed primary responsibility for the final content. All authors read and approved the final manuscript.
Ethics Approval and Consent to Participate
Not applicable.
Acknowledgment
Not applicable.
Funding
This research received no external funding.
Conflict of Interest
The authors declare no conflict of interest.
Publisher's Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 53, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5540436506271362, "perplexity": 11700.055706361343}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500140.36/warc/CC-MAIN-20230204142302-20230204172302-00122.warc.gz"}
http://mathbio.colorado.edu/index.php/MBW:Big_Bacteria
MBW:Big Bacteria
Overview
Executive Summary
Typically, bacteria are assumed to have a nominal size of about 1 micron. In truth, bacteria span a large range of sizes, from 0.2 micron nano-bacteria to the largest known bacterium, Thiomargarita namibiensis, which has a diameter of 750 microns. Consequently, the biomass of different bacteria can vary by over 10 orders of magnitude. The fact that these microorganisms rely on diffusion to receive food and nutrient molecules can pose a problem. As the concentration of substrate in the surrounding fluid increases, larger bacteria can be sustained. In natural environments there is an advantage for micron-sized bacteria; however, as previously mentioned, certain bacteria can grow to be hundreds of times larger. The mathematical reasoning for the size limit of bacteria does not take into account strategies to overcome diffusion-limited growth. In the natural world we find several strategies that bacteria have developed to allow them to grow beyond their diffusion-limited size. Two general mechanisms that have been found are chemotaxis, in which a motile microorganism moves toward higher concentrations of substrate, and reducing the effective volume of the microorganism with vesicles for internal energy storage.
History and Context
Prior to the early 1990s, the size scale of eukaryotes was thought to be roughly 100 times that of typical prokaryotes. This size distinction had been the most common way to visually differentiate between eukaryotes and prokaryotes. In the gut of the surgeonfish, bacteria have been discovered on the order of 100 microns in size, two to three orders of magnitude larger than typical bacteria. [2] The largest of these, which have been measured at 80 by 600 microns, are roughly the same size as a typical eukaryote, making their classification as prokaryotes much more difficult. Additionally, previous theories on prokaryotic structure and development, most specifically regarding the expected maximum size of bacteria, must be reconsidered. Further disproving prior theories, Heide Schulz discovered an even larger bacterium, Thiomargarita namibiensis, off the coast of Namibia in 1997. [3] These bacteria grow to around 750 microns in diameter, but instead of living within the gut of a fish, they feed off of sulfur-rich sediment. Several other gigantobacteria were discovered to also live in these sulfur-rich environments, leading Schulz and others to further investigate how they apparently violate the laws of diffusion, as they do in "Big Bacteria."
Mathematics, Model Type & Biological System Studied
This article uses the mathematical relationship between the integral of the particle concentration over a region and the flux of particles moving in and out of that region. From this it derives the corresponding differential equation describing the change in the particle distribution in a 1, 2 or 3 dimensional space. The equation is solved to obtain the distribution of particles in space and the diffusion time of particles. The article also examines formulas for the concentration of substrate molecules surrounding a bacterium as a function of distance from the bacterium. These are used to find the maximum specific metabolic rate (the rate at which a bacterial cell metabolizes substrate molecules per unit volume), which scales as the inverse square of the radius of the cell. It then outlines further mathematics for predicting bacterial size from diffusion times and the substrate environment (densities).
The article also considers the extra effects of chemotaxis, the movement of bacteria caused by a chemical gradient, which supplies another method for bacteria to get the substrates they need.

The following derivation is taken from Britton's Essential Mathematical Biology. [4] It is based on the conservation of mass, but includes a source/sink term to account for the reactions that often occur in biological applications. The number of particles inside a volume at time $t+\delta t$ equals the number of particles in the volume at time $t$, plus the net particles entering, plus the net creation of particles in the volume. Mathematically this can be written as:

$$\int_V u(\mathbf{x}, t+\delta t)\,dV = \int_V u(\mathbf{x}, t)\,dV - \delta t\int_{\partial V} \mathbf{J}\cdot\mathbf{n}\,dS + \delta t\int_V f(\mathbf{x},t)\,dV$$

where $u$ is the particle concentration, $\mathbf{J}$ is the flux, and $f(\mathbf{x},t)$ is the sink/source density at $(\mathbf{x},t)$. Now by using the divergence theorem, subtracting the first term on the right side of our equation from both sides, dividing by $\delta t$, and taking the limit as $\delta t\rightarrow 0$, we get:

$$\int_V \left(\frac{\partial u}{\partial t} + \nabla\cdot\mathbf{J} - f\right) dV = 0$$

This is true for arbitrary volumes $V$, so the integrand must be zero:

$$\frac{\partial u}{\partial t} + \nabla\cdot\mathbf{J} = f$$

This is the conservation of matter with a source term. Because $\mathbf{J}_{adv}=\mathbf{v}u$, the advection equation with a source term is:

$$\frac{\partial u}{\partial t} + \nabla\cdot(\mathbf{v}u) = f$$

The net flow of particles down a concentration gradient is proportional to its magnitude,

$$\mathbf{J}_{diff} = -D\nabla u,$$

which is of course Fick's law. The diffusion equation becomes:

$$\frac{\partial u}{\partial t} = D\nabla^2 u + f$$

If both advection and diffusion are present, then it can be written as the advection-diffusion equation:

$$\frac{\partial u}{\partial t} + \nabla\cdot(\mathbf{v}u) = D\nabla^2 u + f$$

It can be solved in whichever coordinate system you prefer, in 1, 2, or 3D. In spherical coordinates (without the source/sink term) this becomes

$$\frac{\partial u}{\partial t} = \frac{D}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial u}{\partial r}\right)$$

To use this equation we assume that the system is in steady state, i.e. $\frac{\partial u}{\partial t}=0$. Next we plug in the appropriate boundary conditions. In 1D, typically $u(0)=u_0$, $u(L)=0$. In 2D and 3D, $u(a)=0$, $u(b)=u_0$, where $a<b$. These then give the following results:

In 1D: $u(x)=u_0\left(1-\frac{x}{L}\right)$

In 2D: $u(R)=u_0\,\frac{\log(R/a)}{\log(b/a)}$

In 3D spherically symmetric flow: $u(r)=u_0\,\frac{b(r-a)}{r(b-a)}$

One can then find the total particles, $N$, by integrating the function $u$ over the appropriate limits, and then divide by the flux, $J=-D\frac{\partial u}{\partial r}$, to find the diffusion time, given here as $\tau$:

In 1D: $\tau = \frac{L^2}{2D}$

In 2D: $\tau \approx \frac{b^2}{2D}\log(b/a)$

In 3D: $\tau \approx \frac{b^3}{3aD}$

This is the time it takes for diffusion to move a particle a distance $L$ (or $b-a$) and can be used to predict the size of bacteria.

Mathematical Model

Schulz and Jorgensen take the diffusion equation derived above and, applying it in three dimensions over the boundary conditions of a bacterium of set size, determine the result discussed below. It is not clear in the paper where their extra factor of $\frac{\pi}{2}$ comes from exactly; it has been left in for consistency with the original publication. The amount of time, $t$, required for a molecule to diffuse through an average distance $L$ through liquid is described by

$$t=\frac{\pi L^{2}}{4D}$$

In terms of the length, this becomes

$$L=\left(\frac{4Dt}{\pi}\right)^{1/2}$$

The constant $D$ is the unique diffusion coefficient for a specific molecule at a given temperature. This relationship between diffusion length and time implies that the velocity is not linear in time as with typical spatial translation.
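To make these time scales concrete, here is a minimal Python sketch (mine, not part of the article) evaluating Schulz and Jorgensen's $t=\pi L^2/(4D)$ and Britton's 3D diffusion time at the micron scale; the diffusion coefficient is the typical value for a small molecule in water.

```python
# A minimal sketch (mine, not from the article) evaluating the diffusion
# times derived above for an oxygen-sized molecule in water.
import math

D = 1e-5  # diffusion coefficient, cm^2/s (typical small molecule in water)

def t_schulz(L):
    """Schulz & Jorgensen's form, t = pi L^2 / (4 D)."""
    return math.pi * L**2 / (4 * D)

def tau_3d(a, b):
    """3D diffusion time from Britton, tau ~ b^3 / (3 a D)."""
    return b**3 / (3 * a * D)

# A 1-micron bacterium (lengths in cm):
print(f"t over 1 micron      : {t_schulz(1e-4):.1e} s")        # ~8e-4 s
print(f"tau, a=0.5um, b=1um  : {tau_3d(0.5e-4, 1e-4):.1e} s")  # same order
```

Both forms give times on the order of a millisecond at the micron scale, consistent with the text.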
Instead, the average velocity is greatly affected by the timescale of the observation:

$$\frac{L}{t}=\left(\frac{4D}{\pi t}\right)^{1/2}$$

According to this equation, an oxygen molecule would require on the order of decades to diffuse one meter. However, because we are dealing with bacteria on the order of 1 micron in size, the diffusion time is only on the order of one-thousandth of a second. As a result, at this scale diffusion is much faster than any external transport of molecules. One consequence of this is that molecules within a cell can interact very rapidly via diffusion.

In the typical deep-sea environment where the bacteria of interest might be found, one would expect turbulence to disrupt the diffusion. The Kolmogorov scale, which describes fluctuations of a fluid due to turbulence, states that given the viscosity of water, the smallest scale affected by turbulence would be on the order of a few millimeters. Even bacteria tens of microns in diameter have a small enough diffusion sphere to be completely unaffected by even the most violent turbulence.

Specific Metabolic Rate

The above model fails to consider the typically very low concentrations of substrate molecules surrounding a bacterium. In general, the concentration of substrate increases with the distance away from the bacterium:

$$C(r)=\frac{R}{r}(C_{0}-C_{\infty})+C_{\infty}$$

where $r$ is the radial distance from the bacterium, $R$ is the radius of the bacterium, and $C_{0}$ and $C_{\infty}$ represent the concentrations at the cell wall and in the water respectively. This equation holds true for regions outside of the body of the bacterium. It then follows that the total diffusion flux to the surface of a cell of radius $R$ is:

$$J=4\pi DR(C_{\infty}-C_{0})$$

The maximum uptake for a diffusion-limited cell takes place when $C_{0}=0$, yielding a maximum diffusion gradient and flux of:

$$C(r)_{max}=C_{\infty}\left(1-\frac{R}{r}\right) \qquad\text{and}\qquad J_{max}=4\pi DR\,C_{\infty}$$

The maximum specific metabolic rate is then given by the maximum flux per unit volume. Assuming a spherical cell shape of volume $\frac{4}{3}\pi R^{3}$, the specific metabolic rate is:

$$\frac{J}{V}=\frac{3D}{R^{2}}C_{\infty}$$

This result implies that diffusion limitations should only affect larger cells, as the specific metabolic rate describes the ability of a cell to metabolize substrate molecules (as is done during diffusion) and is related to the inverse square of the radius of the cell.

Results

Bacteria Size Estimation

The model described by Schulz and Jorgensen successfully explains the general mechanisms by which bacteria absorb substrate. It predicts the time for particles to diffuse and can be used to predict the size of bacteria:

$$L=(2D\tau)^{\frac{1}{2}} \qquad\text{where}\qquad \tau=\frac{[O_{2}]}{\rho_{bac}\,O_{uptake}}$$

One can use the above equations to calculate the expected size of a bacterium of a particular mass density living in a certain environment. The following numbers are taken from Gikas and Livingston. [5]

The diffusion constant of oxygen in water is: $D_{H_{2}O}=10^{-5}\,\frac{cm^{2}}{s}$

The concentration of oxygen in typical ocean water is: $[O_{2}]=6\,\frac{mg}{L}$

The typical uptake of oxygen by bacteria is: $O_{uptake}=135\,\frac{mg\,O_{2}}{g_{bac}\cdot hr}$

And the density of typical bacteria is: $\rho_{bac}=\rho_{drybac}+\rho_{water}=(50+1000)\,\frac{kg}{m^{3}}$

The product of these numbers and some dimensional analysis yields an expected maximum size for bacteria of about 17 microns.
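The 17-micron figure can be checked directly. The sketch below (mine, not from the paper) plugs the quoted numbers into $\tau=[O_2]/(\rho_{bac}\,O_{uptake})$ and $L=(2D\tau)^{1/2}$, with all units converted to SI.

```python
# A minimal sketch (mine, not from the paper) reproducing the ~17 micron
# size estimate from the numbers quoted above (Gikas & Livingston).
D       = 1e-5 / 1e4        # O2 diffusion constant: 1e-5 cm^2/s -> m^2/s
O2_conc = 6e-3              # 6 mg/L = 6 g/m^3 -> kg/m^3
rho_bac = 50.0 + 1000.0     # dry matter + water, kg/m^3
uptake  = 135e-3 / 3600.0   # 135 mg O2/(g hr) = 0.135 kg/(kg hr) -> per second

tau = O2_conc / (rho_bac * uptake)   # time to consume the local O2, s
L   = (2 * D * tau) ** 0.5           # diffusion length in that time, m

print(f"tau = {tau:.3f} s, L = {L * 1e6:.1f} microns")   # ~17 microns
```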
Note that the uptake time unit must be converted to seconds instead of hours.

Bacteria have been found to be tens of times larger than the size limit set by diffusion alone. Although overarching principles governing the maximum size of bacteria remain elusive, there are several examples of big bacteria whose specific mechanisms for increasing their metabolic rate have been observed. The table in Schulz and Jorgensen shows the wide range of possible sizes of known bacteria. The bulk of the remainder of the paper focuses mainly on specific examples of giant bacteria, and the mechanisms by which they increase metabolic rates and attain larger sizes. The main mechanism by which this is possible is called chemotaxis.

Overcoming the Diffusion Size Limit: Chemotaxis

Taxis just means a movement toward or away from an external stimulus. Chemotaxis refers to movement in response to a chemical gradient. Depending on the type of bacterium, this movement can come through a variety of mechanical systems. For example, Escherichia coli has a "run-and-tumble" mechanism: it makes a sudden movement in a given direction, then tumbles in place, and then moves in a new direction. It tumbles less when moving up an attractive gradient and more when moving down one. This effectively keeps it generally moving in its desired direction.

The nitrate-storing sulfur bacterium Thioploca lives in sediment layers in the ocean. It forms pathways in the sediment so that it can move from a nitrate-rich top layer to a hydrogen sulfide-rich lower layer. It moves in order to take up nitrate at the top and then use it to oxidize the sulfide, storing the product as elemental sulfur.

Many bacteria anchor themselves to a solid surface in water, such as sedimentary rock. In this region we find what is called a diffusive boundary layer. The bacteria try to keep within an optimal range of this layer. In some cases the bacteria will form a membrane of many bacteria that all work together to effectively pump water through the membrane, maintaining the chemical gradient.

The main consistency among all of these giant bacteria is that by using chemotaxis, they are no longer solely reliant on molecular diffusion to get the substrates they need. The added movement changes the mathematics; a simple way to adapt the diffusion equation is to change the flux. Keeping in mind that the average velocity with which the bacteria respond to the gradient is itself proportional to the gradient, the flux should be proportional to the gradient and the density of bacteria:

$$J_{chemo}=\chi\, u\, \nabla c$$

where $\chi>0$ is the chemotactic coefficient, $u$ is the density of bacteria, and $c$ is the chemical concentration.

There is another strategy discovered to help overcome the size limit set by diffusion. The trick is that the bacteria do not have the volume of cytoplasm that they appear to have; instead, they have many storage vesicles inside the cell which store substrate for future energy needs when substrate is sparse. This effectively reduces the volume of the bacterium that needs substrate to survive and gives them an advantage over other bacteria that do not have these energy stores.

Conclusion

In general, prokaryotes such as bacteria are significantly smaller than eukaryotes because the low substrate concentrations delivered by diffusion limit how large they can grow.
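As a small illustration of the modified flux, the sketch below (mine, not from the article) evaluates the combined diffusive and chemotactic flux $J=-D\nabla u+\chi u\nabla c$ on a 1D grid; the coefficients and profiles are purely illustrative.

```python
# A minimal sketch (mine, not from the article) of the combined
# diffusive + chemotactic flux on a 1D grid: J = -D du/dx + chi * u * dc/dx.
import numpy as np

D, chi = 1e-5, 2e-5                    # diffusion / chemotactic coefficients
x = np.linspace(0.0, 1.0, 101)
u = np.exp(-((x - 0.3) ** 2) / 0.01)   # bacterial density, a bump at x = 0.3
c = x                                  # linear attractant gradient, dc/dx = 1

J_diff  = -D * np.gradient(u, x)       # Fick's law term
J_chemo = chi * u * np.gradient(c, x)  # chemotactic drift up the gradient
J_total = J_diff + J_chemo

print(J_total[:5])                     # net flux near the left boundary
```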
Some bacteria overcome this limitation in different ways, such as using chemotaxis to orient properly and deliver as many nutrients as possible to the entire surface of the cell, as the gigantic bacteria in the gut of the surgeonfish do, or growing at the interface of sulfur-rich sediment, like the bacteria discovered off the coast of Namibia. However, the advantages of such extreme sizes remain mostly unclear, and as an overwhelming majority of bacteria are of the expected size, studying the long-term development of gigantic bacteria can be very difficult. The oxidation of hydrogen sulfide carried out by the bacteria living at sulfur-rich sediment interfaces has obvious energetic benefits, but most other types of bacteria do not yet seem to have adapted in ways that make larger size advantageous. This, in addition to the issue of classifying bacteria when they are on the same size scale as eukaryotes, should make future studies of big bacteria both difficult and enlightening.

Citations of The Paper ("Big Bacteria")

This paper has been cited in over 100 different papers. One such paper is The Selective Value of Bacterial Shape, which considers why certain bacteria have certain shapes (and sizes). It claims that bacterial shapes are important biologically, since specific morphologies are consistently chosen from many possibilities, since some cells can change shape when necessary, and since morphology can be tracked through evolutionary lineages. It aims to explore the conditions that favor specific morphologies. It considers 8 different environmental or behavioral conditions that can affect bacteria. The authors believe that, in a way, bacteria "integrate" over (some of) these factors to produce an optimal shape given their circumstances. "Big Bacteria" is cited when discussing the first of the 8 conditions that could affect bacterial shape, nutrient access. It explains that diffusion (toward areas of greater nutrient concentration) is the main factor that determines how well bacteria can "eat", and that diffusion is affected greatly by cell size, and possibly by shape. It refers to the large bacteria that appear in "Big Bacteria", as well as the analysis of the effect of diffusion on cell size. It also mentions the results found for the relationship between nutrient transport rate and surface area of the cell, or how smaller and less spherical cells with specific shapes will have better nutrient transfer. It mentions the "diffusion sphere" and the time for molecules to "meet", which are also introduced in "Big Bacteria".

References

1. Schulz, Heide N. and Bo Barker Jorgensen. "Big Bacteria," Annual Review of Microbiology, 55(1): 105–137, 2001.
2. Sogin, Mitchell L. "Giants Among the Prokaryotes," Nature 362, 207 (18 March 1993).
3. Travis, J. "Pearl-like bacteria are largest ever found," Science News Online, Volume 155, Number 16 (April 17, 1999).
4. Britton, Nicholas F. Essential Mathematical Biology. London: Springer-Verlag, 2003 (148–151).
5. Gikas, P. and A.G. Livingston. "Specific ATP and specific oxygen uptake rate in immobilized cell aggregates: Experimental results and theoretical analysis using a structured model of immobilized cell growth," Biotechnology and Bioengineering, Volume 55, Issue 4, pages 660–673, 20 August 1997.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 40, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7984793782234192, "perplexity": 1815.7088509590615}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864186.38/warc/CC-MAIN-20180521122245-20180521142245-00414.warc.gz"}
https://www.physicsforums.com/threads/superheated-steam-turbine-need-help-on-a-question-please.808142/
# Superheated Steam Turbine - need help on a question please

1. Apr 12, 2015

### Christy001

1. Superheated steam enters a turbine with a pressure of 10 bar and a temperature of 300 degrees Celsius, and exits with a pressure of 0.046 bar. As the steam flows through the turbine it undergoes adiabatic expansion according to the law $pv^{1.131} = \text{constant}$.

From steam tables I have to calculate:

a) specific volume of steam at the turbine input and output (I assume that's just v straight from the superheated tables at the relevant pressures?)
b) saturated vapour specific volume for the steam output pressure, and state the steam condition?
c) dryness fraction of the steam at the turbine exit?
d) theoretical specific work output of the turbine?

Can anyone help me on the above please, as I've scoured the internet for a solution and got part of the way in understanding, but then can't follow how the things are found in the steam tables and what equations are used - any help would be greatly appreciated.

2. Apr 12, 2015

### SteamKing

Staff Emeritus

Since this is a homework question, you should have filled out the HW Template. Those are the Rules. In order to receive help, you must also show some attempt at solving these questions.

For a) You have sufficient info for the inlet conditions to find spec. volume. The same is not true for the exit, as seen by questions b), c), and d), where you must determine how the steam changes condition as it flows thru the turbine. As stated in the problem, PVγ = Constant, where γ = 1.131. This should be enough information for you to work thru questions b), c), and d). If you get stuck, repost to this thread with the details and we'll see if these blocks can't be overcome.

3. Apr 12, 2015

### Christy001

Hi - this is what I have worked out so far:

a) specific volume of steam at the turbine input and output: At state 1 call the values S1, H1 and at state 2 call the values S2, H2. From the superheated steam tables I've looked up 300 degrees C and 1000 kPa to give S1 (Sg) = 7.125 kJ/kg-K and H1 (Hg) = 3,052.15 kJ/kg. I know that as the process is adiabatic then S1 = S2, therefore I think the answer to part (a) is 7.125 kJ/kg-K for both answers.

(b) I know for this part I need to use the formula: S2 = Sf + x times Sfg - where x = dryness fraction (but I don't know what Sf and Sfg are so I can solve for x, so I need a pointer here please).

(c) x is the dryness fraction, so I have found this out in part (b) I believe.

(d) to find the work done I know that for an adiabatic process this is the change in H, so it's H2 - H1 - I know H1 from above. I know I need to use the formula: H2 = Hf + x times Hfg (at 0.046 bar). I have worked out x - so do I just get Hf and Hfg from the superheated steam tables?

thanks.

4. Apr 12, 2015

### BvU

You want to reconsider the meaning of "adiabatic". The case $\Delta S=0$ goes under the name "isentropic". The given expression for $pv$ should help you further to find the exit temperature.

5. Apr 12, 2015

### SteamKing

Staff Emeritus

S generally denotes the entropy of a substance, not specific volume, which is usually denoted by v. Also, the units of specific volume are m3/kg. The steam tables also use g to denote the property in the vapor phase and f in the fluid phase; hence Sg is the entropy of the steam in the vapor phase. You must pay attention to the units listed for each property. You must correct your work on part a) first. Not according to what you wrote in the previous section.
If you've worked out the quality of the exhaust steam x, you haven't shared it with us yet. The steam tables will list the enthalpy, entropy, and specific volume for the vapor and fluid phases of water/steam. 6. Apr 13, 2015 ### Staff: Mentor You know the exit pressure, and you know that the exit stream is a combination of liquid water and water vapor. What does that tell you about whether it is saturated or not? If it's saturated, what is the entropy of the saturated liquid, and what is the entropy of the saturated vapor? What is the temperature? What is the enthalpy of the saturated liquid and the saturated vapor. What is the enthalpy of the stream? Chet 7. Apr 13, 2015 ### Christy001 Hi. Thanks for the reply. This is where I get stuck as I can understand that when the steam enters at 450 it is superheated but at exit do I need to go back to the Saturated tables to obtain the temperature ? 8. Apr 13, 2015 ### SteamKing Staff Emeritus By nature, superheated steam has a vapor quality of 100%, so yes, for the turbine exhaust, you'll have to go to the saturated vapor tables and use the vapor quality to calculate the properties of the exhaust steam. http://en.wikipedia.org/wiki/Vapor_quality
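For readers following along, here is a rough numerical sketch (mine, not from the thread) of the polytropic route discussed above. The two steam-table entries below are approximate values I have assumed for illustration; in the actual exercise they must be read from the tables.

```python
# A rough sketch (mine, not from the thread) of the polytropic expansion
# p v^n = const. The steam-table values below are approximate assumptions:
# v1 at 10 bar / 300 C, and vg at 0.046 bar, would normally be read off tables.
n = 1.131
p1, p2 = 10e5, 0.046e5    # Pa
v1 = 0.2579               # m^3/kg, superheated steam at 10 bar, 300 C (assumed)
vg = 30.9                 # m^3/kg, saturated vapour at 0.046 bar (assumed)

v2 = v1 * (p1 / p2) ** (1 / n)      # exit specific volume from p v^n = const
x  = v2 / vg                        # dryness fraction: v2 ~ x * vg for wet steam
w  = (p1 * v1 - p2 * v2) / (n - 1)  # polytropic work, J/kg

print(f"v2 = {v2:.1f} m^3/kg, dryness x = {x:.2f}, w = {w/1e3:.0f} kJ/kg")
```

With these assumed table values the exit steam comes out slightly wet (x just below 1), which is why the dryness fraction in parts b) and c) matters.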
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7626456618309021, "perplexity": 842.2524727040998}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647530.92/warc/CC-MAIN-20180320185657-20180320205657-00012.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=129&t=20717
## Midterm 2013 Q4A

$w=-P\Delta V$ and $w=-\int_{V_{1}}^{V_{2}}P\,dV=-nRT\ln\frac{V_{2}}{V_{1}}$

anatorres1B Posts: 13 Joined: Wed Nov 18, 2015 3:00 am

### Midterm 2013 Q4A

To calculate the work from B to C, it appears a variation of the work equation was used. Instead of volume, pressure values were used. Why is pressure used instead of volume? They both change, so how would I go about determining which variation of the work equation to use? Also, in the solution it appears that the equation is w = -nRTln(p1/p2); why are the pressure values not the other way around (p2/p1)?

anatorres1B Posts: 13 Joined: Wed Nov 18, 2015 3:00 am

### Re: Midterm 2013 Q4A

Also, is it still possible to use the volume variation instead? I used it the first time around and came out with the same answer.

Marie_Bae_3M Posts: 25 Joined: Wed Sep 21, 2016 2:57 pm

### Re: Midterm 2013 Q4A

Yes, I believe you can use either one. w = -nRTln(p1/p2) is basically found from the original w = -nRTln(v2/v1). This goes back to Boyle's law, p1v1 = p2v2. In other words, v2/v1 = p1/p2 (pressure and volume are inversely related to each other) and you plug this into the original equation.

anatorres1B Posts: 13 Joined: Wed Nov 18, 2015 3:00 am

### Re: Midterm 2013 Q4A

Thank you so much!
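A quick numerical check of the equivalence (my sketch, not from the course materials): for an isothermal ideal gas, the volume and pressure forms of the reversible work give the same number.

```python
# A small sketch (mine, not from the thread) checking numerically that
# -nRT ln(V2/V1) and -nRT ln(p1/p2) agree for an isothermal ideal gas.
import math

n, R, T = 1.0, 8.314, 298.0   # mol, J/(mol K), K
p1, p2 = 2.0e5, 1.0e5         # Pa, expansion from 2 bar to 1 bar
V1 = n * R * T / p1           # ideal gas law
V2 = n * R * T / p2           # Boyle's law: p1 V1 = p2 V2 at fixed T

w_vol  = -n * R * T * math.log(V2 / V1)
w_pres = -n * R * T * math.log(p1 / p2)
print(w_vol, w_pres)          # identical, ~ -1717 J (work done by the gas)
```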
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8169021606445312, "perplexity": 1360.6123807495646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371620338.63/warc/CC-MAIN-20200406070848-20200406101348-00008.warc.gz"}
https://formulasearchengine.com/wiki/Dixon%27s_Q_test
# Dixon's Q test

In statistics, Dixon's Q test, or simply the Q test, is used for identification and rejection of outliers. This assumes normal distribution and, per Dean and Dixon and others, this test should be used sparingly and never more than once in a data set. To apply a Q test for bad data, arrange the data in order of increasing values and calculate Q as defined:

$Q=\frac{\text{gap}}{\text{range}}$

where gap is the absolute difference between the outlier in question and the closest number to it. If Q > Qtable, where Qtable is a reference value corresponding to the sample size and confidence level, then reject the questionable point. Note that only one point may be rejected from a data set using a Q test.

## Example

Consider the data set:

$0.189,\ 0.167,\ 0.187,\ 0.183,\ 0.186,\ 0.182,\ 0.181,\ 0.184,\ 0.181,\ 0.177$

Now rearrange in increasing order:

$0.167,\ 0.177,\ 0.181,\ 0.181,\ 0.182,\ 0.183,\ 0.184,\ 0.186,\ 0.187,\ 0.189$

We hypothesize that 0.167 is an outlier. Calculate Q:

$Q=\frac{\text{gap}}{\text{range}}=\frac{0.177-0.167}{0.189-0.167}=0.455.$

With 10 observations and at 90% confidence, Q = 0.455 > 0.412 = Qtable, so we conclude 0.167 is an outlier. However, at 95% confidence, Q = 0.455 < 0.466 = Qtable, so 0.167 is not considered an outlier. This means that for this example we can be 90% sure that 0.167 is an outlier, but we cannot be 95% sure.

McBane[1] notes: Dixon provided related tests intended to search for more than one outlier, but they are much less frequently used than the r10 or Q version that is intended to eliminate a single outlier.

## Table

This table summarizes the limit values of the test.

| Number of values | 3     | 4     | 5     | 6     | 7     | 8     | 9     | 10    |
|------------------|-------|-------|-------|-------|-------|-------|-------|-------|
| Q90%             | 0.941 | 0.765 | 0.642 | 0.560 | 0.507 | 0.468 | 0.437 | 0.412 |
| Q95%             | 0.970 | 0.829 | 0.710 | 0.625 | 0.568 | 0.526 | 0.493 | 0.466 |
| Q99%             | 0.994 | 0.926 | 0.821 | 0.740 | 0.680 | 0.634 | 0.598 | 0.568 |
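A minimal implementation of the test as described above (my sketch, not from the source), reproducing the worked example at the 90% confidence level:

```python
# A minimal sketch (mine, not from the article) of Dixon's Q test for the
# smallest value of a sample, using the 90% critical values from the table.
Q90 = {3: 0.941, 4: 0.765, 5: 0.642, 6: 0.560, 7: 0.507,
       8: 0.468, 9: 0.437, 10: 0.412}

def dixon_q_low(data, qtable=Q90):
    """Return (Q, is_outlier) for the smallest value in `data`."""
    xs = sorted(data)
    gap = xs[1] - xs[0]       # distance to the nearest neighbour
    rng = xs[-1] - xs[0]      # full range of the data
    q = gap / rng
    return q, q > qtable[len(xs)]

data = [0.189, 0.167, 0.187, 0.183, 0.186, 0.182, 0.181, 0.184, 0.181, 0.177]
print(dixon_q_low(data))      # (0.4545..., True): outlier at 90% confidence
```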
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7873981595039368, "perplexity": 550.1517104503049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606975.49/warc/CC-MAIN-20200122101729-20200122130729-00151.warc.gz"}
https://lists.boost.org/boost-build/2017/12/29712.php
# Boost-Build : Subject: Re: [Boost-build] configuring b2 via appveyor

From: Stefan Seefeld (stefan_at_[hidden]) Date: 2017-12-07 02:56:42

On 06.12.2017 19:45, Steven Watanabe via Boost-build wrote: [...] > I tried pasting it into a batch file and > it seems to work. Obviously appveyor is > doing something subtly different, but it > isn't exactly clear what. It seems like it's > getting one extra round of evaluation, somehow.

Yeah, I couldn't reproduce the error outside appveyor either.

>> files in appveyor.yml files, so I could simply learn from that experience, >> rather than stumbling around in the dark as I do now. >> > You can avoid using user-config.jam for this > by setting the environmental variables, > ZLIB_INCLUDE/ZLIB_LIBRARY_PATH (and similar > variables for the other libraries). It will > probably also work if you use > b2 ... include=%VCPKG%\include library-path=%VCPKG%\lib > although I don't think I've actually tested this.

OK, that seems to work fine indeed. Thanks !

Stefan

-- ...ich hab' noch einen Koffer in Berlin...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9109076261520386, "perplexity": 15489.158916781043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141753148.92/warc/CC-MAIN-20201206002041-20201206032041-00574.warc.gz"}
http://kratos-wiki.cimne.upc.edu/index.php/Poisson%27s_Equation_in_Electrostatics
# Poisson's Equation in Electrostatics

A detailed form of Poisson's equation [1] in electrostatics is:

$\frac{\partial}{\partial x}\left( \varepsilon_{x} \frac{\partial V(x,y,z)}{\partial x}\right) + \frac{\partial}{\partial y}\left(\varepsilon_{y} \frac{\partial V(x,y,z)}{\partial y} \right) + \frac{\partial}{\partial z}\left(\varepsilon_{z} \frac{\partial V(x,y,z)}{\partial z} \right) + \rho_v(x,y,z)=0$

with:

$D_x = -\varepsilon_{x} \frac{\partial V(x,y,z)}{\partial x} \quad D_y = -\varepsilon_{y} \frac{\partial V(x,y,z)}{\partial y} \quad D_z = -\varepsilon_{z} \frac{\partial V(x,y,z)}{\partial z}$

$E_x = -\frac{\partial V(x,y,z)}{\partial x} \qquad E_y = -\frac{\partial V(x,y,z)}{\partial y} \qquad E_z = -\frac{\partial V(x,y,z)}{\partial z}$
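As an illustration of how this equation is used, here is a small sketch (mine, not from the wiki) solving the 1D special case $\frac{d}{dx}\left(\varepsilon\frac{dV}{dx}\right)+\rho_v=0$ with uniform permittivity and grounded endpoints by central finite differences; the grid size and source values are arbitrary.

```python
# A minimal sketch (mine, not from the wiki) solving the 1D special case of
# the equation above, d/dx(eps dV/dx) + rho = 0, with uniform eps and V = 0
# at both ends, by central finite differences.
import numpy as np

N, Lx = 101, 1.0
eps, rho = 8.854e-12, 1e-8      # uniform permittivity, uniform charge density
x = np.linspace(0.0, Lx, N)
h = x[1] - x[0]

# Interior equations: eps * (V[i-1] - 2 V[i] + V[i+1]) / h^2 = -rho
A = np.zeros((N - 2, N - 2))
np.fill_diagonal(A, -2.0)
np.fill_diagonal(A[1:], 1.0)     # subdiagonal
np.fill_diagonal(A[:, 1:], 1.0)  # superdiagonal
b = -rho * h**2 / eps * np.ones(N - 2)

V = np.zeros(N)                  # boundary values V(0) = V(Lx) = 0
V[1:-1] = np.linalg.solve(A, b)
print(V.max())                   # analytic maximum: rho*Lx^2/(8*eps) ~ 141 V
```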
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9813281893730164, "perplexity": 15795.850097302642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334591.19/warc/CC-MAIN-20220925162915-20220925192915-00527.warc.gz"}
https://cs.stackexchange.com/questions/109974/how-to-solve-the-optimization-of-bin-packing-using-the-decision-version
How to solve the optimization of bin packing using the decision version

Let us say the optimization version of the bin packing problem asks you to give a packing using the fewest bins possible, and the decision version asks whether it is possible to pack the items into $k$ bins. How can you reduce the optimization version to the decision version? If you only have to answer what the fewest possible number of bins is, then it is clear you can use binary search. But if you actually have to give the packing, I can't see how to do it.

If you could solve the decision problem (do these items fit in $k$ bins?) you can obviously solve the optimisation problem (what is the minimum number of bins?) using binary search. But you actually want to know what to put into each box. That's easily done by solving the decision problem at most $n^2$ times.

Let's say you know $k$ boxes are required and sufficient. Sort the items by descending size. Replace the biggest and second biggest items with one item of their combined size and check if the new items fit into $k$ bins. If not, try the largest and third largest item, and so on. After $n$ attempts, you either know two items that are in the same bin in an optimal solution, or you know the largest item is on its own. And you repeat that; each time, you find a pair of items that can go in the same bin by solving at most $n$ decision problems.

• This looks like a very nice trick. Thank you. – Anush May 29 '19 at 8:12

This answer will probably be quite annoying... To be explicit, your bin packing problem is: find a way of assigning items of size $s_1, \dots, s_k$ to the minimum number of bins such that no bin contains items of total size more than $t$.

Consider the following problem, Non-Uniform Bin Packing: the input is a list of bin sizes and item sizes and we want to know if we can put all the items in the bins so no bin is overflowing. This problem is clearly in NP: an assignment of items to bins is of polynomial size with respect to the input, and we can check in polynomial time that none of the bins is overflowing. Since this problem is in NP, it can be reduced to Bin Packing, which is NP-complete. (This is the annoying part: I've not said how to perform this reduction, because I've not managed to figure it out.)

We can use Non-Uniform Bin Packing to solve your problem in the usual way. We build up the solution by asking, for each item in turn, whether that item can go in each bin. First, as you describe, use binary search to find out how many bins we need; call this $d$. Then, suppose we have a partial solution, in which we have placed items $1, \dots, i$ into bins, and the remaining capacities of the bins are $c_1, \dots, c_d$ (our initial partial solution is that $i=0$, i.e., we've placed no items, and the remaining capacities are $t, \dots, t$). Then, just try placing the $(i+1)$st item in each bin in turn until the Non-Uniform Bin Packing algorithm says that there is a solution for items $i+2, \dots, k$ in the remaining bin capacities. That is, we ask in turn if items $i+2, \dots, k$ can be placed in bins with capacities $c_1-s_{i+1}, c_2, \dots, c_d$, then $c_1, c_2-s_{i+1}, c_3, \dots, c_d$, then ... and finally $c_1, \dots, c_{d-1}, c_d-s_{i+1}$. Do this until you've placed each item, and you have your assignment.

• Doesn't this suggest that optimal solutions to non-uniform bin packing are nested, in the sense that the optimal allocation of $i$ items is the same in the $i$-item problem as it is in the $(i+1)$-item problem?
– LarrySnyder610 May 29 '19 at 0:16 • Thank you although, as you say, this does still leave a large gap. I hope someone can fill it. – Anush May 29 '19 at 6:36 • @LarrySnyder610 I'm not sure what you mean. The algorithm I describe uses different bin sizes for the $i$-item and $(i+1)$-item problems. Essentially, it's saying "If I put this item in this bin, can I fit the remaining items in the remaining space? If it's possible to solve the $(i+1)$-item problem by putting item $1$ in bin $j$, it must be possible to put items $2, \dots, i+1$ in bins of size $c_1, \dots, c_{j-1}, (c_j-s_i), c_{j+1}, \dots, c_d$. – David Richerby May 29 '19 at 8:36 • I agree that in that case it's possible to put items $2, \ldots, i+1$ in those bins, but my question is whether it is optimal to do so. I believe your algorithm is greedy -- once we put item $i$ in bin $j$, we will never consider taking it out and putting it into a different bin. But a greedy algorithm should not be optimal for NUBP. So if you don't solve the $i$-item problem optimally, you might run out of room for the remaining items, even though a solution with $d$ bins exists. – LarrySnyder610 May 29 '19 at 18:52 There is some inconsistency about whether, in the context of complexity analysis, an optimization problem is defined as asking for the optimal solution or the optimal objective function value. For example, Wikipedia's entry on decision problems says: There are standard techniques for transforming function and optimization problems into decision problems. For example, in the traveling salesman problem, the optimization problem is to produce a tour with minimal weight. The associated decision problem is: for each $$N$$, to decide whether the graph has any tour with weight less than $$N$$. By repeatedly answering the decision problem, it is possible to find the minimal weight of a tour. The second sentence defines the optimization problem as producing the optimal tour, whereas the last sentence implicitly defines it as producing the weight of the optimal tour. Similarly, the entry on optimization problems says: For example, if there is a graph $$G$$ which contains vertices $$u$$ and $$v$$, an optimization problem might be "find a path from $$u$$ to $$v$$ that uses the fewest edges". This problem might have an answer of, say, 4. But 4 is not an "answer" to the problem of "find[ing] a path". So, this quote, too, is inconsistent about whether the optimization problem means finding the optimal solution or finding the optimal objective function value. I am not trying to wade into a debate here about the quality of Wikipedia entries; I'm just trying to make the point that in common usage, there is an inconsistency about the way "optimization problem" is used, and that inconsistency can lead to confusion about the relationship between decision and optimization problems. Now, back to your original question. If what you are asking is: How can you reduce the "find the optimal solution" version of the optimization problem to the decision problem? then I don't know of a way to do this, though @David Richerby gives one in his answer. If what you are asking is: How can you reduce the "find the optimal objective function" version of the optimization problem to the decision problem? then the answer is, using binary search, as you have pointed out. And if what you are really asking is: Why does everyone say you can reduce the optimization problem to the decision problem, when it seems impossible? 
then my answer is, when people say you can reduce the optimization problem to the decision problem, they mean the "find the optimal objective function" version of the optimization problem, but they are not always sufficiently clear in articulating that. Now, from a practical perspective, if you are actually going to solve the decision problem, you usually run some algorithm that determines whether the items can be packed into $$k$$ bins. That algorithm is most likely going to do the packing, so in the course of your binary search, you will find the actual packings. • The question explicitly asks about finding which items go in which bin, not just computing the minimum number of bins. – David Richerby May 28 '19 at 22:47 • True, but I guess I am saying that’s an inconsistent pair of definitions (of the two problems), and that that’s part of where the confusion stems from. – LarrySnyder610 May 28 '19 at 22:51 • It's not inconsistent or confused. The question gives two natural problems and asks how to reduce one to the other. I get that you'd rather reduce between two different problems, but that's not what the question is about, and the asker already mentions reducing the your version of the optimization problem to the decision problem. – David Richerby May 28 '19 at 22:53 • Revised to better explain my argument, and why I think the question arises because of an inconsistency in the way the term "optimization problem" is used. – LarrySnyder610 May 28 '19 at 23:38
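To make the reconstruction concrete, here is a sketch (mine, not from either answer) of the merging trick from the first answer above. The decision oracle here is a simple exponential-time backtracking search, standing in for any decision procedure; `pack` recovers an actual packing using only yes/no queries.

```python
# A sketch (mine, not from the thread) of the merging trick: recover an
# actual packing from a yes/no decision oracle with O(n^2) oracle calls.
def fits(items, k, cap):
    """Decision problem: can `items` be packed into k bins of capacity cap?"""
    bins = [0.0] * k
    items = sorted(items, reverse=True)

    def place(i):
        if i == len(items):
            return True
        tried = set()                    # skip bins with identical loads
        for j in range(k):
            if bins[j] not in tried and bins[j] + items[i] <= cap + 1e-9:
                tried.add(bins[j])
                bins[j] += items[i]
                if place(i + 1):
                    return True
                bins[j] -= items[i]
        return False

    return place(0)

def pack(items, cap):
    """Recover an optimal packing using only the decision oracle `fits`."""
    n = len(items)
    k = next(k for k in range(1, n + 1) if fits(items, k, cap))  # optimum
    fixed = []                       # groups committed to their own bin
    groups = [[s] for s in items]    # each group ends up sharing one bin
    while groups:
        groups.sort(key=sum, reverse=True)
        for j in range(1, len(groups)):
            # tentatively merge the largest group with group j
            trial = [groups[0] + groups[j]] + groups[1:j] + groups[j + 1:]
            sizes = [sum(g) for g in trial]
            if sizes[0] <= cap + 1e-9 and fits(sizes, k - len(fixed), cap):
                groups = trial
                break
        else:
            # no merge fits: the largest group is alone in an optimal packing
            fixed.append(groups.pop(0))
    return fixed

print(pack([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5], 1.0))
```

On the example input this prints four groups, e.g. `[[0.7, 0.2], [0.5, 0.5], [0.5, 0.4], [0.2]]`, each fitting in one unit-capacity bin.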
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7541046738624573, "perplexity": 303.6891875220423}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621519.32/warc/CC-MAIN-20210615180356-20210615210356-00345.warc.gz"}
https://www.natureof3laws.co.in/self-inductance-definition-formula-units/
# Self-inductance | definition, formula, units, and dimensions

Self-inductance is usually just called inductance. From the study of inductance, we know that whenever the current changes, an EMF is induced in the conductor such that it opposes the change in the current. From Lenz's law, the voltage induced by the changing current has the effect of resisting the change in current, and this voltage is called back-EMF. In this article, we will talk about self-inductance: its definition, formula, units, and dimensions. So let's get started…

## What is self-inductance?

Self-inductance is a phenomenon by which a changing electric current produces an induced emf across the coil itself. Self-inductance is the property of a current-carrying coil that resists or opposes the change in the current flowing through it. This happens mainly due to the self-induced emf generated in the coil itself. In simplified terms, one can also say that self-inductance is a phenomenon in which a voltage is induced in a current-carrying wire.

The self-induced emf in the coil resists the rise of the current when the current increases, and resists the fall of the current when the current decreases. Essentially, the direction of the induced emf is opposite to the applied voltage when the current is increasing, and in the same direction as the applied voltage when the current is decreasing.

The above property of the coil exists only for changing current, i.e. alternating current, and not for direct (constant) current. Self-inductance always opposes the change in current and is measured in henry (the SI unit). The induced current always opposes the change of current in the circuit, whether that change is an increase or a decrease. Self-induction is a type of electromagnetic induction.

## Self-inductance formula

Self-inductance is the ratio of the induced electromotive force (EMF) across a coil to the rate of change of electric current through the coil. We denote the coefficient of self-induction by the letter $L$. Its unit is the henry (H). Since the EMF $E$ is proportional to the rate of change of current, we can write,

$$\begin{gathered} E \propto \frac{\mathrm{d} i}{\mathrm{~d} t} \Rightarrow E=L \frac{\mathrm{d} i}{\mathrm{~d} t} \\ \Rightarrow L=\frac{E}{\frac{\mathrm{d} i}{\mathrm{~d} t}}=\text { self inductance }\end{gathered}$$

But the full equation is

$$E=-L \frac{\mathrm{d} i}{\mathrm{~d} t}$$

The negative sign appears because, according to Lenz's law, the induced emf is in the direction that opposes the change of current.

## Coefficient of self-inductance

At any instant, the magnetic flux $\phi$ linked with the coil is proportional to the current $I$ through it, i.e.

$$\phi \propto I \quad\text{or}\quad \phi=LI \qquad (1)$$

where $L$ is a constant for the given coil and is called the self-inductance or, more often, simply the inductance. It is also called the coefficient of self-inductance of the coil. Any change in current sets up an induced emf in the coil given by

$$\mathcal{E}=-\frac{d\phi}{dt}=-L\frac{dI}{dt} \qquad (2)$$

If in equation (1) we set $I=1$, then $\phi=L$. Thus, the self-inductance of a coil is numerically equal to the magnetic flux linked with the coil when a unit current flows through it. Again, from equation (2), if $\frac{dI}{dt}=1$, then $\mathcal{E}=-L$. Thus, the self-inductance of a coil may be defined as the induced emf set up in the coil due to a unit rate of change of the current through it.
## Units of self-inductance

In the SI system, the unit of self-inductance is the henry (H). From equation (2), the self-inductance of a coil is said to be one henry if an induced emf of one volt is set up in it when the current in it changes at the rate of one ampere per second. From equation (1), one can note that self-inductance is the ratio of magnetic flux to current, so it can also be expressed in webers per ampere. Hence

$$1\,\mathrm{V\,s\,A^{-1}}=1\,\mathrm{Wb\,A^{-1}}=1\ \text{henry}\ (\mathrm{H})$$

## Dimensions of self-inductance

We know that $L=\dfrac{\mathcal{E}}{dI/dt}$. The dimensions of emf are $[ML^2T^{-3}A^{-1}]$ and those of $\frac{dI}{dt}$ are $[AT^{-1}]$.

$\therefore$ Dimensions of $L$ = $\dfrac{[ML^2T^{-3}A^{-1}]}{[AT^{-1}]}=[ML^2T^{-2}A^{-2}]$

## Experiment to demonstrate self-inductance

Consider a solenoid that has a large number of turns of insulated wire wrapped around a soft iron core. Such a solenoid is called a choke coil. Connect the solenoid in series with a battery, rheostat, and tapping switch. Connect a bulb in parallel with the solenoid. Press the tapping switch and use the rheostat to adjust the current so that the bulb glows dimly. When the switch is released, the bulb glows brightly for a moment and then goes out. This is because when the circuit is suddenly broken, the magnetic flux linked with the coil suddenly disappears, i.e. the rate of change of the magnetic flux linked with the coil is very large. This sets up a large self-induced emf and current in the coil, causing the bulb to glow brighter for a moment.

## Factors affecting self-inductance

The self-inductance of a solenoid depends on its geometry and the magnetic permeability of the core material; a numerical sketch combining these relations follows the FAQ below.

1. The number of turns. The larger the number of turns in the solenoid, the larger its self-inductance. $$L \propto N^{2}$$
2. Area of the cross-section. The larger the area of the cross-section of the solenoid, the larger its self-inductance. $$L \propto A$$
3. Permeability of the core material. The self-inductance of a solenoid increases $\mu_{r}$ times if it is wound over an iron core of relative permeability $\mu_{r}$.

## Uses of self-inductance

The main function of an inductor is to store electrical energy in the form of a magnetic field. Inductors are used in:

• Tuning circuits
• Sensors
• Energy storage devices
• Induction motors
• Transformers
• Filters
• Chokes
• Relays

## Frequently Asked Questions – FAQs

##### What is self-inductance?

Self-inductance is the property of a current-carrying coil that resists or opposes the change in the current flowing through it. This happens mainly due to the self-induced emf generated in the coil itself.

##### What are the factors upon which the self-inductance of a coil depends?

The self-inductance of a coil depends upon its length, number of turns, area of cross-section, and the permeability of the material of its core.

##### What is electromagnetic induction?

The process by which an EMF is induced in an electrical conductor by a changing magnetic field linked to the conductor is known as electromagnetic induction.

##### Define 1 henry.

The self-inductance of a coil is said to be one henry if an induced emf of one volt is set up in it when the current in it changes at the rate of one ampere per second.

##### Why is self-inductance called inertia?

The self-inductance of a coil is the property by which it tends to keep the magnetic flux linked with it unchanged, and resists any change in flux by inducing a current in it. This property of a coil is analogous to mechanical inertia. That is why self-inductance is called the inertia of electricity.
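The sketch below (mine, not from the article) ties these relations together. It uses the standard solenoid formula $L=\mu_0\mu_r N^2 A/l$, which is consistent with the proportionalities listed under "Factors affecting self-inductance"; all component values are illustrative.

```python
# A small sketch (mine, not from the article) combining the relations above:
# solenoid inductance L = mu0 * mur * N^2 * A / l, and back-emf E = -L dI/dt.
import math

mu0 = 4 * math.pi * 1e-7      # permeability of free space, H/m
mur = 1000.0                  # relative permeability of a soft-iron core
N, A, l = 500, 1e-4, 0.1      # turns, cross-section (m^2), length (m)

L = mu0 * mur * N**2 * A / l  # shows L ~ N^2, L ~ A, L ~ mur
dI_dt = 2.0                   # current changing at 2 A/s
emf = -L * dI_dt              # Lenz's law: the emf opposes the change

print(f"L = {L:.3f} H, emf = {emf:.3f} V")   # ~0.314 H, ~-0.628 V
```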
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9408418536186218, "perplexity": 429.1636409417719}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571987.60/warc/CC-MAIN-20220813202507-20220813232507-00180.warc.gz"}
https://infoscience.epfl.ch/record/265683
## Two-fluid plasma model for radial Langmuir probes as a converging nozzle with sonic choked flow, and sonic passage to supersonic flow

Using the Lambert function, Guittienne et al. [Phys. Plasmas 25, 093519 (2018)] derived two-fluid solutions for radial Langmuir probes in collisionless and isothermal plasma. In this Brief Communication, we point out the close analogy with classical compressible fluid dynamics, where the simultaneous flows of the ion and electron fluids experience opposite electrostatic body forces in the inward radial flow of the plasma, which behaves as a converging nozzle. Hence, the assumed boundary condition of sonic flow of the repelled species at the probe is explained as choked flow. The sonic passage from subsonic to supersonic flow of the attracted species at the sonic radius is also interpreted using classical fluid dynamics. Moreover, the Lambert function can provide a general solution for one-dimensional, isothermal compressible fluids, with several applications.

Published in: Physics of Plasmas, 26, 4, 044502. Year: 2019.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.837198793888092, "perplexity": 2492.9428063396686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886516.43/warc/CC-MAIN-20200704170556-20200704200556-00185.warc.gz"}
http://mathhelpforum.com/calculus/71170-can-find-shape-curve.html
## Can't find the shape of the curve

A tank truck hauls milk in a 6 ft diameter horizontal right circular cylindrical tank. How much force does the milk exert on each end of the tank if the tank is half full? The weight density w of milk is 64.5 lb/ft³. I can't find the shape of the curve — any help please?
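The curve is the boundary of the wetted semicircular end of the tank: with the origin at the centre of the milk surface and $y$ measured downward, a horizontal strip at depth $y$ has width $2\sqrt{r^2-y^2}$, with $r=3$ ft. A short sketch (mine, not from the thread) of the resulting fluid-force integral, checked against the closed form $F=\frac{2}{3}wr^3$:

```python
# A sketch (mine, not from the thread) of the fluid-force integral on one
# end of the half-full tank: F = w * integral_0^r y * 2*sqrt(r^2 - y^2) dy.
from scipy.integrate import quad
import math

w, r = 64.5, 3.0   # weight density of milk (lb/ft^3), tank radius (ft)

force, _ = quad(lambda y: w * y * 2 * math.sqrt(r * r - y * y), 0, r)
print(force)                  # numerical quadrature
print(2 / 3 * w * r**3)       # closed form: 1161 lb
```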
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9110612273216248, "perplexity": 7427.0703014146775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119637.34/warc/CC-MAIN-20170423031159-00598-ip-10-145-167-34.ec2.internal.warc.gz"}
http://linear.ups.edu/jsmath/0211/fcla-jsmath-2.11li18.html
### Section RREF  Reduced Row-Echelon Form

From A First Course in Linear Algebra, Version 2.11, http://linear.ups.edu/

After solving a few systems of equations, you will recognize that it doesn't matter so much what we call our variables, as opposed to what numbers act as their coefficients. A system in the variables $x_1, x_2, x_3$ would behave the same if we changed the names of the variables to $a, b, c$ and kept all the constants the same and in the same places. In this section, we will isolate the key bits of information about a system of equations into something called a matrix, and then use this matrix to systematically solve the equations. Along the way we will obtain one of our most important and useful computational tools.

#### Subsection MVNSE: Matrix and Vector Notation for Systems of Equations

Definition M Matrix An $m \times n$ matrix is a rectangular layout of numbers from $\mathbb{C}$ having $m$ rows and $n$ columns. We will use upper-case Latin letters from the start of the alphabet ($A, B, C, \dots$) to denote matrices and squared-off brackets to delimit the layout. Many use large parentheses instead of brackets — the distinction is not important. Rows of a matrix will be referenced starting at the top and working down (i.e. row 1 is at the top) and columns will be referenced starting from the left (i.e. column 1 is at the left). For a matrix $A$, the notation $[A]_{ij}$ will refer to the complex number in row $i$ and column $j$ of $A$. (This definition contains Notation M.) (This definition contains Notation MC.)

Be careful with this notation for individual entries, since it is easy to think that $[A]_{ij}$ refers to the whole matrix. It does not. It is just a number, but is a convenient way to talk about the individual entries simultaneously. This notation will get a heavy workout once we get to Chapter M.

Example AM A matrix

$$B = \begin{bmatrix} -1 & 2 & 5 & 3 \\ 1 & 0 & -6 & 1 \\ -4 & 2 & 2 & -2 \end{bmatrix}$$

is a matrix with $m = 3$ rows and $n = 4$ columns. We can say that $[B]_{2,3} = -6$ while $[B]_{3,4} = -2$.

Some mathematical software is very particular about which types of numbers (integers, rationals, reals, complexes) you wish to work with. See: Computation R.SAGE. A calculator or computer language can be a convenient way to perform calculations with matrices. But first you have to enter the matrix. See: Computation ME.MMA, Computation ME.TI86, Computation ME.TI83, Computation ME.SAGE.

When we do equation operations on systems of equations, the names of the variables really aren't very important. $x_1, x_2, x_3$, or $a, b, c$, or $x, y, z$ — it really doesn't matter. In this subsection we will describe some notation that will make it easier to describe linear systems, solve the systems and describe the solution sets. Here is a list of definitions, laden with notation.

Definition CV Column Vector A column vector of size $m$ is an ordered list of $m$ numbers, which is written in order vertically, starting at the top and proceeding to the bottom. At times, we will refer to a column vector as simply a vector. Column vectors will be written in bold, usually with lower case Latin letters from the end of the alphabet such as $\mathbf{u}, \mathbf{v}, \mathbf{w}, \mathbf{x}, \mathbf{y}, \mathbf{z}$. Some books like to write vectors with arrows, such as $\vec{u}$. Writing by hand, some like to put arrows on top of the symbol, or a tilde underneath the symbol, as in $\underset{\sim}{u}$.
To refer to the entry or component that is number $i$ in the list that is the vector $\mathbf{v}$ we write $[\mathbf{v}]_i$. (This definition contains Notation CV.) (This definition contains Notation CVC.)

Be careful with this notation. While the symbols $[\mathbf{v}]_i$ might look somewhat substantial, as an object this represents just one component of a vector, which is just a single complex number.

Definition ZCV Zero Column Vector The zero vector of size $m$ is the column vector of size $m$ where each entry is the number zero,

$$\mathbf{0} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

or defined much more compactly, $[\mathbf{0}]_i = 0$ for $1 \le i \le m$. (This definition contains Notation ZCV.)

Definition CM Coefficient Matrix For a system of linear equations,

\begin{align*}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n &= b_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + \cdots + a_{3n}x_n &= b_3 \\
&\,\,\vdots \\
a_{m1}x_1 + a_{m2}x_2 + a_{m3}x_3 + \cdots + a_{mn}x_n &= b_m
\end{align*}

the coefficient matrix is the $m \times n$ matrix

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \dots & a_{3n} \\ \vdots & & & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \dots & a_{mn} \end{bmatrix}$$

Definition VOC Vector of Constants For a system of linear equations as above, the vector of constants is the column vector of size $m$

$$\mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_m \end{bmatrix}$$

Definition SOLV Solution Vector For a system of linear equations as above, the solution vector is the column vector of size $n$

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix}$$

The solution vector may do double-duty on occasion. It might refer to a list of variable quantities at one point, and subsequently refer to values of those variables that actually form a particular solution to that system.
Definition MRLS Matrix Representation of a Linear System If A is the coefficient matrix of a system of linear equations and b is the vector of constants, then we will write ℒS\kern -1.95872pt \left (A,\kern 1.95872pt b\right ) as a shorthand expression for the system of linear equations, which we will refer to as the matrix representation of the linear system. (This definition contains Notation MRLS.) Example NSLE Notation for systems of linear equations The system of linear equations \eqalignno{ 2{x}_{1} + 4{x}_{2} − 3{x}_{3} + 5{x}_{4} + {x}_{5} & = 9 & & \cr 3{x}_{1} + {x}_{2} + \quad \quad {x}_{4} − 3{x}_{5} & = 0 & & \cr − 2{x}_{1} + 7{x}_{2} − 5{x}_{3} + 2{x}_{4} + 2{x}_{5} & = −3 & & } has coefficient matrix A = \left [\array{ 2 &4&−3&5& 1\cr 3 &1 & 0 &1 &−3 \cr −2&7&−5&2& 2 } \right ] and vector of constants b = \left [\array{ 9\cr 0 \cr −3 } \right ] and so will be referenced as ℒS\kern -1.95872pt \left (A,\kern 1.95872pt b\right ). Definition AM Augmented Matrix Suppose we have a system of m equations in n variables, with coefficient matrix A and vector of constants b. Then the augmented matrix of the system of equations is the m × (n + 1) matrix whose first n columns are the columns of A and whose last column (number n + 1) is the column vector b. This matrix will be written as \left [\left .A\kern 1.95872pt \right \vert \kern 1.95872pt b\right ]. (This definition contains Notation AM.) The augmented matrix represents all the important information in the system of equations, since the names of the variables have been ignored, and the only connection with the variables is the location of their coefficients in the matrix. It is important to realize that the augmented matrix is just that, a matrix, and not a system of equations. In particular, the augmented matrix does not have any “solutions,” though it will be useful for finding solutions to the system of equations that it is associated with. (Think about your objects, and review Technique L.) However, notice that an augmented matrix always belongs to some system of equations, and vice versa, so it is tempting to try and blur the distinction between the two. Here’s a quick example. Example AMAA Augmented matrix for Archetype A Archetype A is the following system of 3 equations in 3 variables. \eqalignno{ {x}_{1} − {x}_{2} + 2{x}_{3} & = 1 & & \cr 2{x}_{1} + {x}_{2} + {x}_{3} & = 8 & & \cr {x}_{1} + {x}_{2} & = 5 & & } Here is its augmented matrix. \eqalignno{ \left [\array{ 1&−1&2&1\cr 2& 1 &1 &8 \cr 1& 1 &0&5 } \right ] & & } #### Subsection RO: Row Operations An augmented matrix for a system of equations will save us the tedium of continually writing down the names of the variables as we solve the system. It will also release us from any dependence on the actual names of the variables. We have seen how certain operations we can perform on equations (Definition EO) will preserve their solutions (Theorem EOPSS). The next two definitions and the following theorem carry over these ideas to augmented matrices. Definition RO Row Operations The following three operations will transform an m × n matrix into a different matrix of the same size, and each is known as a row operation. 1. Swap the locations of two rows. 2. Multiply each entry of a single row by a nonzero quantity. 3. Multiply each entry of one row by some quantity, and add these values to the entries in the same columns of a second row. Leave the first row the same after this operation, but replace the second row by the new values. 
We will use a symbolic shorthand to describe these row operations: 1. {R}_{i} ↔ {R}_{j}: Swap the location of rows i and j. 2. α{R}_{i}: Multiply row i by the nonzero scalar α. 3. α{R}_{i} + {R}_{j}: Multiply row i by the scalar α and add to row j. (This definition contains Notation RO.) Definition REM Row-Equivalent Matrices Two matrices, A and B, are row-equivalent if one can be obtained from the other by a sequence of row operations. Example TREM Two row-equivalent matrices The matrices \eqalignno{ A = \left [\array{ 2&−1& 3 &4\cr 5& 2 &−2 &3 \cr 1& 1 & 0 &6 } \right ] & &B = \left [\array{ 1& 1 & 0 & 6\cr 3& 0 &−2 &−9 \cr 2&−1& 3 & 4 } \right ] & & & & } are row-equivalent as can be seen from \eqalignno{ \left [\array{ 2&−1& 3 &4\cr 5& 2 &−2 &3 \cr 1& 1 & 0 &6 } \right ]\mathop{\longrightarrow}\limits_{}^{{R}_{1} ↔ {R}_{3}} &\left [\array{ 1& 1 & 0 &6\cr 5& 2 &−2 &3 \cr 2&−1& 3 &4 } \right ] & & \cr \mathop{\longrightarrow}\limits_{}^{ − 2{R}_{1} + {R}_{2}} &\left [\array{ 1& 1 & 0 & 6\cr 3& 0 &−2 &−9 \cr 2&−1& 3 & 4 } \right ] & & } We can also say that any pair of these three matrices are row-equivalent. Notice that each of the three row operations is reversible (Exercise RREF.T10), so we do not have to be careful about the distinction between “A is row-equivalent to B” and “B is row-equivalent to A.” (Exercise RREF.T11) The preceding definitions are designed to make the following theorem possible. It says that row-equivalent matrices represent systems of linear equations that have identical solution sets. Theorem REMES Row-Equivalent Matrices represent Equivalent Systems Suppose that A and B are row-equivalent augmented matrices. Then the systems of linear equations that they represent are equivalent systems. Proof   If we perform a single row operation on an augmented matrix, it will have the same effect as if we did the analogous equation operation on the corresponding system of equations. By exactly the same methods as we used in the proof of Theorem EOPSS we can see that each of these row operations will preserve the set of solutions for the corresponding system of equations. So at this point, our strategy is to begin with a system of equations, represent it by an augmented matrix, perform row operations (which will preserve solutions for the corresponding systems) to get a “simpler” augmented matrix, convert back to a “simpler” system of equations and then solve that system, knowing that its solutions are those of the original system. Here’s a rehash of Example US as an exercise in using our new tools. Example USR Three equations, one solution, reprised We solve the following system using augmented matrices and row operations. This is the same system of equations solved in Example US using equation operations. 
\eqalignno{ {x}_{1} + 2{x}_{2} + 2{x}_{3} & = 4 & & \cr {x}_{1} + 3{x}_{2} + 3{x}_{3} & = 5 & & \cr 2{x}_{1} + 6{x}_{2} + 5{x}_{3} & = 6 & & } Form the augmented matrix, \eqalignno{ A = \left [\array{ 1&2&2&4\cr 1&3 &3 &5 \cr 2&6&5&6 } \right ] & & } and apply row operations, \eqalignno{ \mathop{\longrightarrow}\limits_{}^{ − 1{R}_{1} + {R}_{2}} &\left [\array{ 1&2&2&4\cr 0&1 &1 &1 \cr 2&6&5&6 } \right ] &\mathop{\longrightarrow}\limits_{}^{ − 2{R}_{1} + {R}_{3}} &\left [\array{ 1&2&2& 4\cr 0&1 &1 & 1 \cr 0&2&1&−2 } \right ] & & & & \cr \mathop{\longrightarrow}\limits_{}^{ − 2{R}_{2} + {R}_{3}} &\left [\array{ 1&2& 2 & 4\cr 0&1 & 1 & 1 \cr 0&0&−1&−4 } \right ] &\mathop{\longrightarrow}\limits_{}^{ − 1{R}_{3}} &\left [\array{ 1&2&2&4\cr 0&1 &1 &1 \cr 0&0&1&4 } \right ] & & & & } So the matrix B = \left [\array{ 1&2&2&4\cr 0&1 &1 &1 \cr 0&0&1&4 } \right ] is row equivalent to A and by Theorem REMES the system of equations below has the same solution set as the original system of equations. \eqalignno{ {x}_{1} + 2{x}_{2} + 2{x}_{3} & = 4 & & \cr {x}_{2} + {x}_{3} & = 1 & & \cr {x}_{3} & = 4 & & } Solving this “simpler” system is straightforward and is identical to the process in Example US. #### Subsection RREF: Reduced Row-Echelon Form The preceding example amply illustrates the definitions and theorems we have seen so far. But it still leaves two questions unanswered. Exactly what is this “simpler” form for a matrix, and just how do we get it? Here’s the answer to the first question, a definition of reduced row-echelon form. Definition RREF Reduced Row-Echelon Form A matrix is in reduced row-echelon form if it meets all of the following conditions: 1. A row where every entry is zero lies below any row that contains a nonzero entry. 2. The leftmost nonzero entry of a row is equal to 1. 3. The leftmost nonzero entry of a row is the only nonzero entry in its column. 4. Consider any two different leftmost nonzero entries, one located in row i, column j and the other located in row s, column t. If s > i, then t > j. A row of only zero entries will be called a zero row and the leftmost nonzero entry of a nonzero row will be called a leading 1. The number of nonzero rows will be denoted by r. A column containing a leading 1 will be called a pivot column. The set of column indices for all of the pivot columns will be denoted by D = \left \{{d}_{1},\kern 1.95872pt {d}_{2},\kern 1.95872pt {d}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {d}_{r}\right \} where {d}_{1} < {d}_{2} < {d}_{3} < \mathrel{⋯} < {d}_{r}, while the columns that are not pivot columns will be denoted as F = \left \{{f}_{1},\kern 1.95872pt {f}_{2},\kern 1.95872pt {f}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {f}_{n−r}\right \} where {f}_{1} < {f}_{2} < {f}_{3} < \mathrel{⋯} < {f}_{n−r}. (This definition contains Notation RREFA.) The principal feature of reduced row-echelon form is the pattern of leading 1’s guaranteed by conditions (2) and (4), reminiscent of a flight of geese, or steps in a staircase, or water cascading down a mountain stream. There are a number of new terms and notation introduced in this definition, which should make you suspect that this is an important definition. Given all there is to digest here, we will save the use of D and F until Section TSS. However, one important point to make here is that all of these terms and notation apply to a matrix. Sometimes we will employ these terms and sets for an augmented matrix, and other times it might be a coefficient matrix. 
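Whatever kind of matrix is at hand, the four conditions of Definition RREF are mechanical enough to test by program. As a hedged illustration (this little checker is our own Python sketch, not part of the text or its computation notes), the function below walks the rows once and rejects a matrix the moment any condition fails.

```python
def is_rref(M):
    """Check the four conditions of Definition RREF for a matrix M,
    given as a list of rows.  Returns True exactly when M is in
    reduced row-echelon form."""
    lead = []               # (row, column) of each leading 1, top to bottom
    seen_zero_row = False
    for i, row in enumerate(M):
        nz = [j for j, x in enumerate(row) if x != 0]
        if not nz:
            seen_zero_row = True
            continue
        # Condition 1: every zero row lies below the nonzero rows.
        if seen_zero_row:
            return False
        j = nz[0]
        # Condition 2: the leftmost nonzero entry of a row equals 1.
        if row[j] != 1:
            return False
        # Condition 3: a leading 1 is the only nonzero entry in its column.
        if any(M[k][j] != 0 for k in range(len(M)) if k != i):
            return False
        # Condition 4: leading 1's move strictly rightward going down.
        if lead and j <= lead[-1][1]:
            return False
        lead.append((i, j))
    return True

assert is_rref([[1, 0, 3], [0, 1, 2], [0, 0, 0]])
assert not is_rref([[1, 2], [0, 2]])   # leftmost nonzero entry is 2, not 1
```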
So always give some thought to exactly which type of matrix you are analyzing. Example RREF A matrix in reduced row-echelon form The matrix C is in reduced row-echelon form. \eqalignno{ C & = \left [\array{ 1&−3&0&6&0&0&−5& 9\cr 0& 0 &0 &0 &1 &0 & 3 &−7 \cr 0& 0 &0&0&0&1& 7 & 3\cr 0& 0 &0 &0 &0 &0 & 0 & 0 \cr 0& 0 &0&0&0&0& 0 & 0 } \right ] & & } This matrix has two zero rows and three leading 1’s. r = 3. Columns 1, 5, and 6 are pivot columns. Example NRREF A matrix not in reduced row-echelon form The matrix E is not in reduced row-echelon form, as it fails each of the four requirements once. \eqalignno{ E & = \left [\array{ 1&0&−3&0&6&0&7&−5& 9\cr 0&0 & 0 &5 &0 &1 &0 & 3 &−7 \cr 0&0& 0 &0&0&0&0& 0 & 0\cr 0&1 & 0 &0 &0 &0 &0 &−4 & 2 \cr 0&0& 0 &0&0&0&1& 7 & 3\cr 0&0 & 0 &0 &0 &0 &0 & 0 & 0 } \right ] & & } Our next theorem has a “constructive” proof. Learn about the meaning of this term in Technique C Theorem REMEF Row-Equivalent Matrix in Echelon Form Suppose A is a matrix. Then there is a matrix B so that 1. A and B are row-equivalent. 2. B is in reduced row-echelon form. Proof   Suppose that A has m rows and n columns. We will describe a process for converting A into B via row operations. This procedure is known as Gauss–Jordan elimination. Tracing through this procedure will be easier if you recognize that i refers to a row that is being converted, j refers to a column that is being converted, and r keeps track of the number of nonzero rows. Here we go. 1. Set j = 0 and r = 0. 2. Increase j by 1. If j now equals n + 1, then stop. 3. Examine the entries of A in column j located in rows r + 1 through m. If all of these entries are zero, then go to Step 2. 4. Choose a row from rows r + 1 through m with a nonzero entry in column j. Let i denote the index for this row. 5. Increase r by 1. 6. Use the first row operation to swap rows i and r. 7. Use the second row operation to convert the entry in row r and column j to a 1. 8. Use the third row operation with row r to convert every other entry of column j to zero. 9. Go to Step 2. The result of this procedure is that the matrix A is converted to a matrix in reduced row-echelon form, which we will refer to as B. We need to now prove this claim by showing that the converted matrix has the requisite properties of Definition RREF. First, the matrix is only converted through row operations (Step 6, Step 7, Step 8), so A and B are row-equivalent (Definition REM). It is a bit more work to be certain that B is in reduced row-echelon form. We claim that as we begin Step 2, the first j columns of the matrix are in reduced row-echelon form with r nonzero rows. Certainly this is true at the start when j = 0, since the matrix has no columns and so vacuously meets the conditions of Definition RREF with r = 0 nonzero rows. In Step 2 we increase j by 1 and begin to work with the next column. There are two possible outcomes for Step 3. Suppose that every entry of column j in rows r + 1 through m is zero. Then with no changes we recognize that the first j columns of the matrix has its first r rows still in reduced-row echelon form, with the final m − r rows still all zero. Suppose instead that the entry in row i of column j is nonzero. Notice that since r + 1 ≤ i ≤ m, we know the first j − 1 entries of this row are all zero. Now, in Step 5 we increase r by 1, and then embark on building a new nonzero row. In Step 6 we swap row r and row i. In the first j columns, the first r − 1 rows remain in reduced row-echelon form after the swap. 
In Step 7 we multiply row r by a nonzero scalar, creating a 1 in the entry in column j of row r, and not changing any other rows. This new leading 1 is the first nonzero entry in its row, and is located to the right of all the leading 1’s in the preceding r − 1 rows. With Step 8 we ensure that every entry in the column with this new leading 1 is now zero, as required for reduced row-echelon form. Also, rows r + 1 through m are now all zeros in the first j columns, so we now only have one new nonzero row, consistent with our increase of r by one. Furthermore, since the first j − 1 entries of row r are zero, the employment of the third row operation does not destroy any of the necessary features of rows 1 through r − 1 and rows r + 1 through m, in columns 1 through j − 1. So at this stage, the first j columns of the matrix are in reduced row-echelon form. When Step 2 finally increases j to n + 1, then the procedure is completed and the full n columns of the matrix are in reduced row-echelon form, with the value of r correctly recording the number of nonzero rows.

The procedure given in the proof of Theorem REMEF can be more precisely described using a pseudo-code version of a computer program, as follows (a runnable sketch of this procedure appears after the proof of Theorem RREFU below):

    input m, n and A
    r ← 0
    for j ← 1 to n
        i ← r + 1
        while i ≤ m and [A]_{ij} = 0
            i ← i + 1
        if i ≠ m + 1
            r ← r + 1
            swap rows i and r of A                               (row op 1)
            scale entry in row r, column j of A to a leading 1   (row op 2)
            for k ← 1 to m, k ≠ r
                zero out entry in row k, column j of A           (row op 3 using row r)
    output r and A

Notice that as a practical matter the “and” used in the conditional statement of the while statement should be of the “short-circuit” variety so that the array access that follows is not out-of-bounds.

So now we can put it all together. Begin with a system of linear equations (Definition SLE), and represent the system by its augmented matrix (Definition AM). Use row operations (Definition RO) to convert this matrix into reduced row-echelon form (Definition RREF), using the procedure outlined in the proof of Theorem REMEF. Theorem REMEF also tells us we can always accomplish this, and that the result is row-equivalent (Definition REM) to the original augmented matrix. Since the system represented by the matrix in reduced row-echelon form has the same solution set as the original system (Theorem REMES), we can analyze the row-reduced version instead of the original matrix, viewing it as the augmented matrix of a different system of equations.

The beauty of augmented matrices in reduced row-echelon form is that the solution sets to their corresponding systems can be easily determined, as we will see in the next few examples and in the next section.

We will see through the course that almost every interesting property of a matrix can be discerned by looking at a row-equivalent matrix in reduced row-echelon form. For this reason it is important to know that the matrix B guaranteed to exist by Theorem REMEF is also unique. Two proof techniques are applicable here, so head out and read Technique CD and Technique U first.

Theorem RREFU Reduced Row-Echelon Form is Unique Suppose that A is an m × n matrix and that B and C are m × n matrices that are row-equivalent to A and in reduced row-echelon form. Then B = C.

Proof   We need to begin with no assumptions about any relationships between B and C, other than they are both in reduced row-echelon form, and they are both row-equivalent to A. If B and C are both row-equivalent to A, then they are row-equivalent to each other.
Repeated row operations on a matrix combine the rows with each other using operations that are linear, and are identical in each column. A key observation for this proof is that each individual row of B is linearly related to the rows of C. This relationship is different for each row of B, but once we fix a row, the relationship is the same across columns. More precisely, there are scalars {δ}_{ik}, 1 ≤ i,k ≤ m such that for any 1 ≤ i ≤ m, 1 ≤ j ≤ n,

\eqalignno{ {\left [B\right ]}_{ij} & ={ \mathop{∑ }}_{k=1}^{m}{δ}_{ik}{\left [C\right ]}_{kj} & & }

You should read this as saying that an entry of row i of B (in column j) is a linear function of the entries of all the rows of C that are also in column j, and the scalars ({δ}_{ik}) depend on which row of B we are considering (the i subscript on {δ}_{ik}), but are the same for every column (no dependence on j in {δ}_{ik}). This idea may be complicated now, but will feel more familiar once we discuss “linear combinations” (Definition LCCV) and more so when we discuss “row spaces” (Definition RSM). For now, spend some time carefully working Exercise RREF.M40, which is designed to illustrate the origins of this expression. This completes our exploitation of the row-equivalence of B and C.

We now repeatedly exploit the fact that B and C are in reduced row-echelon form. Recall that a pivot column is all zeros, except a single one. More carefully, if R is a matrix in reduced row-echelon form, and {d}_{ℓ} is the index of a pivot column, then {\left [R\right ]}_{k{d}_{ℓ}} = 1 precisely when k = ℓ and is otherwise zero. Notice also that any entry of R that is both below the entry in row ℓ and to the left of column {d}_{ℓ} is also zero (with below and left understood to include equality). In other words, look at examples of matrices in reduced row-echelon form and choose a leading 1 (with a box around it). The rest of the column is also zeros, and the lower left “quadrant” of the matrix that begins here is totally zeros.

Assuming nothing about the form of B and C, let B have r nonzero rows and denote the pivot columns as D = \left \{{d}_{1}, {d}_{2}, {d}_{3}, …, {d}_{r}\right \}. For C let {r}^{′} denote the number of nonzero rows and denote the pivot columns as {D}^{′} = \left \{{{d}^{′}}_{1}, {{d}^{′}}_{2}, {{d}^{′}}_{3}, …, {{d}^{′}}_{{r}^{′}}\right \} (Notation RREFA). There are four steps in the proof, and the first three are about showing that B and C have the same number of pivot columns, in the same places. In other words, the “primed” symbols are a necessary fiction.

First Step. Suppose that {d}_{1} < {d}_{1}^{′}. Then

\eqalignno{ 1 & ={ \left [B\right ]}_{1{d}_{1}} & &\text{Definition RREF} & & \cr & ={ \mathop{∑ }}_{k=1}^{m}{δ}_{1k}{\left [C\right ]}_{k{d}_{1}} & & & & \cr & ={ \mathop{∑ }}_{k=1}^{m}{δ}_{1k}(0) & &{d}_{1} < {d}_{1}^{′} & & \cr & = 0 & & & & }

The entries of C in column {d}_{1} are all zero, since they lie at or below row 1 and strictly to the left of the leading 1 in row 1 and column {d}_{1}^{′} of C. This is a contradiction, so we know that {d}_{1} ≥ {d}_{1}^{′}. By an entirely similar argument, reversing the roles of B and C, we could conclude that {d}_{1} ≤ {d}_{1}^{′}. Together this means that {d}_{1} = {d}_{1}^{′}.

Second Step. Suppose that we have determined that {d}_{1} = {d}_{1}^{′}, {d}_{2} = {d}_{2}^{′}, {d}_{3} = {d}_{3}^{′}, …, {d}_{p} = {d}_{p}^{′}.
Let’s now show that {d}_{p+1} = {d}_{p+1}^{′}. Working towards a contradiction, suppose that {d}_{p+1} < {d}_{p+1}^{′}. For 1 ≤ ℓ ≤ p,

\eqalignno{ 0 & ={ \left [B\right ]}_{p+1,{d}_{ℓ}} & &\text{Definition RREF} & & \cr & ={ \mathop{∑ }}_{k=1}^{m}{δ}_{p+1,k}{\left [C\right ]}_{k{d}_{ℓ}} & & & & \cr & ={ \mathop{∑ }}_{k=1}^{m}{δ}_{p+1,k}{\left [C\right ]}_{k{d}_{ℓ}^{′}} & & & & \cr & = {δ}_{p+1,ℓ}{\left [C\right ]}_{ℓ{d}_{ℓ}^{′}} +{ \mathop{∑ }}_{\begin{array}{c}k=1 \\ k≠ℓ \end{array}}^{m}{δ}_{p+1,k}{\left [C\right ]}_{k{d}_{ℓ}^{′}} & &\text{Property CACN} & & \cr & = {δ}_{p+1,ℓ}(1) +{ \mathop{∑ }}_{\begin{array}{c}k=1 \\ k≠ℓ \end{array}}^{m}{δ}_{p+1,k}(0) & &\text{Definition RREF} & & \cr & = {δ}_{p+1,ℓ} & & & & }

Now,

\eqalignno{ 1 & ={ \left [B\right ]}_{p+1,{d}_{p+1}} & &\text{Definition RREF} & & \cr & ={ \mathop{∑ }}_{k=1}^{m}{δ}_{p+1,k}{\left [C\right ]}_{k{d}_{p+1}} & & & & \cr & ={ \mathop{∑ }}_{k=1}^{p}{δ}_{p+1,k}{\left [C\right ]}_{k{d}_{p+1}} +{ \mathop{∑ }}_{k=p+1}^{m}{δ}_{p+1,k}{\left [C\right ]}_{k{d}_{p+1}} & &\text{Property AACN} & & \cr & ={ \mathop{∑ }}_{k=1}^{p}(0){\left [C\right ]}_{k{d}_{p+1}} +{ \mathop{∑ }}_{k=p+1}^{m}{δ}_{p+1,k}{\left [C\right ]}_{k{d}_{p+1}} & & & & \cr & ={ \mathop{∑ }}_{k=p+1}^{m}{δ}_{p+1,k}{\left [C\right ]}_{k{d}_{p+1}} & & & & \cr & ={ \mathop{∑ }}_{k=p+1}^{m}{δ}_{p+1,k}(0) & &{d}_{p+1} < {d}_{p+1}^{′} & & \cr & = 0 & & & & }

This contradiction shows that {d}_{p+1} ≥ {d}_{p+1}^{′}. By an entirely similar argument, we could conclude that {d}_{p+1} ≤ {d}_{p+1}^{′}, and therefore {d}_{p+1} = {d}_{p+1}^{′}.

Third Step. Now we establish that r = {r}^{′}. Suppose that {r}^{′} < r. By the arguments above we know that {d}_{1} = {d}_{1}^{′}, {d}_{2} = {d}_{2}^{′}, {d}_{3} = {d}_{3}^{′}, …, {d}_{{r}^{′}} = {d}_{{r}^{′}}^{′}.
For 1 ≤ ℓ ≤ {r}^{′} < r,

\eqalignno{ 0 & ={ \left [B\right ]}_{r{d}_{ℓ}} & &\text{Definition RREF} & & \cr & ={ \mathop{∑ }}_{k=1}^{m}{δ}_{rk}{\left [C\right ]}_{k{d}_{ℓ}} & & & & \cr & ={ \mathop{∑ }}_{k=1}^{{r}^{′}}{δ}_{rk}{\left [C\right ]}_{k{d}_{ℓ}} +{ \mathop{∑ }}_{k={r}^{′}+1}^{m}{δ}_{rk}{\left [C\right ]}_{k{d}_{ℓ}} & &\text{Property AACN} & & \cr & ={ \mathop{∑ }}_{k=1}^{{r}^{′}}{δ}_{rk}{\left [C\right ]}_{k{d}_{ℓ}} +{ \mathop{∑ }}_{k={r}^{′}+1}^{m}{δ}_{rk}(0) & &\text{Definition RREF} & & \cr & ={ \mathop{∑ }}_{k=1}^{{r}^{′}}{δ}_{rk}{\left [C\right ]}_{k{d}_{ℓ}} & & & & \cr & ={ \mathop{∑ }}_{k=1}^{{r}^{′}}{δ}_{rk}{\left [C\right ]}_{k{d}_{ℓ}^{′}} & & & & \cr & = {δ}_{rℓ}{\left [C\right ]}_{ℓ{d}_{ℓ}^{′}} +{ \mathop{∑ }}_{\begin{array}{c}k=1 \\ k≠ℓ \end{array}}^{{r}^{′}}{δ}_{rk}{\left [C\right ]}_{k{d}_{ℓ}^{′}} & &\text{Property CACN} & & \cr & = {δ}_{rℓ}(1) +{ \mathop{∑ }}_{\begin{array}{c}k=1 \\ k≠ℓ \end{array}}^{{r}^{′}}{δ}_{rk}(0) & &\text{Definition RREF} & & \cr & = {δ}_{rℓ} & & & & }

Now examine the entries of row r of B,

\eqalignno{ {\left [B\right ]}_{rj} & ={ \mathop{∑ }}_{k=1}^{m}{δ}_{rk}{\left [C\right ]}_{kj} & & & & \cr & ={ \mathop{∑ }}_{k=1}^{{r}^{′}}{δ}_{rk}{\left [C\right ]}_{kj} +{ \mathop{∑ }}_{k={r}^{′}+1}^{m}{δ}_{rk}{\left [C\right ]}_{kj} & &\text{Property AACN} & & \cr & ={ \mathop{∑ }}_{k=1}^{{r}^{′}}{δ}_{rk}{\left [C\right ]}_{kj} +{ \mathop{∑ }}_{k={r}^{′}+1}^{m}{δ}_{rk}(0) & &\text{Definition RREF} & & \cr & ={ \mathop{∑ }}_{k=1}^{{r}^{′}}{δ}_{rk}{\left [C\right ]}_{kj} & & & & \cr & ={ \mathop{∑ }}_{k=1}^{{r}^{′}}(0){\left [C\right ]}_{kj} & & & & \cr & = 0 & & & & }

So row r is a totally zero row, contradicting that this should be the bottommost nonzero row of B. So {r}^{′} ≥ r. By an entirely similar argument, reversing the roles of B and C, we would conclude that {r}^{′} ≤ r and therefore r = {r}^{′}. Thus, combining the first three steps we can say that D = {D}^{′}. In other words, B and C have the same pivot columns, in the same locations.

Fourth Step. In this final step, we will not argue by contradiction. Our intent is to determine the values of the {δ}_{ij}. Notice that we can use the values of the {d}_{i} interchangeably for B and C.
Here we go,

\eqalignno{ 1 & ={ \left [B\right ]}_{i{d}_{i}} & &\text{Definition RREF} & & \cr & ={ \mathop{∑ }}_{k=1}^{m}{δ}_{ik}{\left [C\right ]}_{k{d}_{i}} & & & & \cr & = {δ}_{ii}{\left [C\right ]}_{i{d}_{i}} +{ \mathop{∑ }}_{\begin{array}{c}k=1 \\ k≠i \end{array}}^{m}{δ}_{ik}{\left [C\right ]}_{k{d}_{i}} & &\text{Property CACN} & & \cr & = {δ}_{ii}(1) +{ \mathop{∑ }}_{\begin{array}{c}k=1 \\ k≠i \end{array}}^{m}{δ}_{ik}(0) & &\text{Definition RREF} & & \cr & = {δ}_{ii} & & & & }

and for ℓ ≠ i

\eqalignno{ 0 & ={ \left [B\right ]}_{i{d}_{ℓ}} & &\text{Definition RREF} & & \cr & ={ \mathop{∑ }}_{k=1}^{m}{δ}_{ik}{\left [C\right ]}_{k{d}_{ℓ}} & & & & \cr & = {δ}_{iℓ}{\left [C\right ]}_{ℓ{d}_{ℓ}} +{ \mathop{∑ }}_{\begin{array}{c}k=1 \\ k≠ℓ \end{array}}^{m}{δ}_{ik}{\left [C\right ]}_{k{d}_{ℓ}} & &\text{Property CACN} & & \cr & = {δ}_{iℓ}(1) +{ \mathop{∑ }}_{\begin{array}{c}k=1 \\ k≠ℓ \end{array}}^{m}{δ}_{ik}(0) & &\text{Definition RREF} & & \cr & = {δ}_{iℓ} & & & & }

Finally, having determined the values of the {δ}_{ij}, we can show that B = C. For 1 ≤ i ≤ m, 1 ≤ j ≤ n,

\eqalignno{ {\left [B\right ]}_{ij} & ={ \mathop{∑ }}_{k=1}^{m}{δ}_{ik}{\left [C\right ]}_{kj} & & & & \cr & = {δ}_{ii}{\left [C\right ]}_{ij} +{ \mathop{∑ }}_{\begin{array}{c}k=1 \\ k≠i \end{array}}^{m}{δ}_{ik}{\left [C\right ]}_{kj} & &\text{Property CACN} & & \cr & = (1){\left [C\right ]}_{ij} +{ \mathop{∑ }}_{\begin{array}{c}k=1 \\ k≠i \end{array}}^{m}(0){\left [C\right ]}_{kj} & & & & \cr & ={ \left [C\right ]}_{ij} & & & & }

So B and C have equal values in every entry, and so are the same matrix.

We will now run through some examples of using these definitions and theorems to solve some systems of equations. From now on, when we have a matrix in reduced row-echelon form, we will mark the leading 1’s with a small box. In your work, you can box ’em, circle ’em or write ’em in a different color — just identify ’em somehow. This device will prove very useful later and is a very good habit to start developing right now.
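Before the examples, here is the runnable version of the Gauss–Jordan pseudo-code promised after the proof of Theorem REMEF. It is our own sketch in Python (not part of the text), using the standard fractions module for exact rational arithmetic and 0-indexed rows and columns in place of the pseudo-code’s 1-indexing. Running it on the two row-equivalent matrices of Example TREM also illustrates Theorem RREFU: both reach the same reduced row-echelon form.

```python
from fractions import Fraction

def row_reduce(A):
    """Bring A (a list of rows) to reduced row-echelon form by
    Gauss-Jordan elimination, following the pseudo-code from the proof
    of Theorem REMEF.  Returns (r, B): the number of nonzero rows and
    the reduced matrix."""
    B = [[Fraction(x) for x in row] for row in A]
    m, n = len(B), len(B[0])
    r = 0
    for j in range(n):                       # columns, left to right
        i = r
        while i < m and B[i][j] == 0:        # look for a nonzero entry
            i += 1                           # at or below row r
        if i == m:
            continue                         # column j has no pivot
        B[i], B[r] = B[r], B[i]              # row op 1: swap rows i and r
        B[r] = [x / B[r][j] for x in B[r]]   # row op 2: scale to a leading 1
        for k in range(m):                   # row op 3: clear the rest
            mult = B[k][j]                   # of column j
            if k != r and mult != 0:
                B[k] = [a - mult * b for a, b in zip(B[k], B[r])]
        r += 1
    return r, B

# Theorem RREFU in action: the two row-equivalent matrices of
# Example TREM arrive at the same reduced row-echelon form.
A = [[2, -1, 3, 4], [5, 2, -2, 3], [1, 1, 0, 6]]
C = [[1, 1, 0, 6], [3, 0, -2, -9], [2, -1, 3, 4]]
assert row_reduce(A) == row_reduce(C)
```

The design choice that matters here is Fraction rather than floating point: row operation 2 introduces fractions such as 1∕7, and rounding them would spoil the exact zeros that reduced row-echelon form demands.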
Example SAB Solutions for Archetype B Let’s find the solutions to the following system of equations, \eqalignno{ − 7{x}_{1} − 6{x}_{2} − 12{x}_{3} & = −33 & & \cr 5{x}_{1} + 5{x}_{2} + 7{x}_{3} & = 24 & & \cr {x}_{1} + 4{x}_{3} & = 5 & & } First, form the augmented matrix, \eqalignno{ \left [\array{ −7&−6&−12&−33\cr 5 & 5 & 7 & 24 \cr 1 & 0 & 4 & 5 } \right ] & & } and work to reduced row-echelon form, first with i = 1, \eqalignno{ \mathop{\longrightarrow}\limits_{}^{{R}_{1} ↔ {R}_{3}} &\left [\array{ 1 & 0 & 4 & 5\cr 5 & 5 & 7 & 24 \cr −7&−6&−12&−33 } \right ] &\mathop{\longrightarrow}\limits_{}^{ − 5{R}_{1} + {R}_{2}} &\left [\array{ 1 & 0 & 4 & 5\cr 0 & 5 &−13 & −1 \cr −7&−6&−12&−33 } \right ] & & & & \cr \mathop{\longrightarrow}\limits_{}^{7{R}_{1} + {R}_{3}} &\left [\array{ \text{1}& 0 & 4 & 5\cr 0& 5 &−13 &−1 \cr 0&−6& 16 & 2 } \right ] & & & & } Now, with i = 2, \eqalignno{ \mathop{\longrightarrow}\limits_{}^{{1\over 5}{R}_{2}} &\left [\array{ \text{1}& 0 & 4 & 5 \cr 0& 1 &{−13\over 5} &{−1\over 5} \cr 0&−6& 16 & 2 } \right ] &\mathop{\longrightarrow}\limits_{}^{6{R}_{2} + {R}_{3}} &\left [\array{ \text{1}&0& 4 & 5 \cr 0&\text{1}&{−13\over 5} &{−1\over 5} \cr 0&0& {2\over 5} & {4\over 5} } \right ] & & & & } And finally, with i = 3, \eqalignno{ \mathop{\longrightarrow}\limits_{}^{{5\over 2}{R}_{3}} &\left [\array{ \text{1}&0& 4 & 5 \cr 0&\text{1}&{−13\over 5} &{−1\over 5} \cr 0&0& 1 & 2 } \right ] &\mathop{\longrightarrow}\limits_{}^{{13\over 5} {R}_{3} + {R}_{2}} &\left [\array{ \text{1}&0&4&5\cr 0&\text{1 } &0 &5 \cr 0&0&1&2 } \right ] & & & & \cr \mathop{\longrightarrow}\limits_{}^{ − 4{R}_{3} + {R}_{1}} &\left [\array{ \text{1}&0&0&−3\cr 0&\text{1 } &0 & 5 \cr 0&0&\text{1}& 2 } \right ] & & & & } This is now the augmented matrix of a very simple system of equations, namely {x}_{1} = −3, {x}_{2} = 5, {x}_{3} = 2, which has an obvious solution. Furthermore, we can see that this is the only solution to this system, so we have determined the entire solution set, \eqalignno{ S & = \left \{\left [\array{ −3\cr 5 \cr 2 } \right ]\right \} & & } You might compare this example with the procedure we used in Example US. Archetypes A and B are meant to contrast each other in many respects. So let’s solve Archetype A now. Example SAA Solutions for Archetype A Let’s find the solutions to the following system of equations, \eqalignno{ {x}_{1} − {x}_{2} + 2{x}_{3} & = 1 & & \cr 2{x}_{1} + {x}_{2} + {x}_{3} & = 8 & & \cr {x}_{1} + {x}_{2} & = 5 & & } First, form the augmented matrix, \eqalignno{ \left [\array{ 1&−1&2&1\cr 2& 1 &1 &8 \cr 1& 1 &0&5 } \right ] & & } and work to reduced row-echelon form, first with i = 1, \eqalignno{ \mathop{\longrightarrow}\limits_{}^{ − 2{R}_{1} + {R}_{2}} &\left [\array{ 1&−1& 2 &1\cr 0& 3 &−3 &6 \cr 1& 1 & 0 &5 } \right ] &\mathop{\longrightarrow}\limits_{}^{ − 1{R}_{1} + {R}_{3}} &\left [\array{ \text{1}&−1& 2 &1\cr 0& 3 &−3 &6 \cr 0& 2 &−2&4 } \right ] & & & & } Now, with i = 2, \eqalignno{ \mathop{\longrightarrow}\limits_{}^{{1\over 3}{R}_{2}} &\left [\array{ \text{1}&−1& 2 &1\cr 0& 1 &−1 &2 \cr 0& 2 &−2&4 } \right ] &\mathop{\longrightarrow}\limits_{}^{1{R}_{2} + {R}_{1}} &\left [\array{ \text{1}&0& 1 &3\cr 0&1 &−1 &2 \cr 0&2&−2&4 } \right ] & & & & \cr \mathop{\longrightarrow}\limits_{}^{ − 2{R}_{2} + {R}_{3}} &\left [\array{ \text{1}&0& 1 &3\cr 0&\text{1 } &−1 &2 \cr 0&0& 0 &0 } \right ] & & & & } The system of equations represented by this augmented matrix needs to be considered a bit differently than that for Archetype B. 
First, the last row of the matrix is the equation 0 = 0, which is always true, so it imposes no restrictions on our possible solutions and therefore we can safely ignore it as we analyze the other two equations. These equations are, \eqalignno{ {x}_{1} + {x}_{3} & = 3 & & \cr {x}_{2} − {x}_{3} & = 2. & & } While this system is fairly easy to solve, it also appears to have a multitude of solutions. For example, choose {x}_{3} = 1 and see that then {x}_{1} = 2 and {x}_{2} = 3 will together form a solution. Or choose {x}_{3} = 0, and then discover that {x}_{1} = 3 and {x}_{2} = 2 lead to a solution. Try it yourself: pick any value of {x}_{3} you please, and figure out what {x}_{1} and {x}_{2} should be to make the first and second equations (respectively) true. We’ll wait while you do that. Because of this behavior, we say that {x}_{3} is a “free” or “independent” variable. But why do we vary {x}_{3} and not some other variable? For now, notice that the third column of the augmented matrix does not have any leading 1’s in its column. With this idea, we can rearrange the two equations, solving each for the variable that corresponds to the leading 1 in that row. \eqalignno{ {x}_{1} & = 3 − {x}_{3} & & \cr {x}_{2} & = 2 + {x}_{3} & & } To write the set of solution vectors in set notation, we have \eqalignno{ S & = \left \{\left [\array{ 3 − {x}_{3} \cr 2 + {x}_{3} \cr {x}_{3} } \right ]\mathrel{∣}{x}_{3} ∈ {ℂ}^{}\right \} & & } We’ll learn more in the next section about systems with infinitely many solutions and how to express their solution sets. Right now, you might look back at Example IS. Example SAE Solutions for Archetype E Let’s find the solutions to the following system of equations, \eqalignno{ 2{x}_{1} + {x}_{2} + 7{x}_{3} − 7{x}_{4} & = 2 & & \cr − 3{x}_{1} + 4{x}_{2} − 5{x}_{3} − 6{x}_{4} & = 3 & & \cr {x}_{1} + {x}_{2} + 4{x}_{3} − 5{x}_{4} & = 2 & & } First, form the augmented matrix, \eqalignno{ \left [\array{ 2 &1& 7 &−7&2\cr −3 &4 &−5 &−6 &3 \cr 1 &1& 4 &−5&2 } \right ] & & } and work to reduced row-echelon form, first with i = 1, \eqalignno{ \mathop{\longrightarrow}\limits_{}^{{R}_{1} ↔ {R}_{3}} &\left [\array{ 1 &1& 4 &−5&2\cr −3 &4 &−5 &−6 &3 \cr 2 &1& 7 &−7&2 } \right ] &\mathop{\longrightarrow}\limits_{}^{3{R}_{1} + {R}_{2}} &\left [\array{ 1&1&4& −5 &2\cr 0&7 &7 &−21 &9 \cr 2&1&7& −7 &2 } \right ] & & & & \cr \mathop{\longrightarrow}\limits_{}^{ − 2{R}_{1} + {R}_{3}} &\left [\array{ \text{1}& 1 & 4 & −5 & 2\cr 0& 7 & 7 &−21 & 9 \cr 0&−1&−1& 3 &−2 } \right ] & & & & } Now, with i = 2, \eqalignno{ \mathop{\longrightarrow}\limits_{}^{{R}_{2} ↔ {R}_{3}} &\left [\array{ \text{1}& 1 & 4 & −5 & 2\cr 0&−1 &−1 & 3 &−2 \cr 0& 7 & 7 &−21& 9 } \right ] &\mathop{\longrightarrow}\limits_{}^{ − 1{R}_{2}} &\left [\array{ \text{1}&1&4& −5 &2\cr 0&1 &1 & −3 &2 \cr 0&7&7&−21&9 } \right ] & & & & \cr \mathop{\longrightarrow}\limits_{}^{ − 1{R}_{2} + {R}_{1}} &\left [\array{ \text{1}&0&3& −2 &0\cr 0&1 &1 & −3 &2 \cr 0&7&7&−21&9 } \right ] &\mathop{\longrightarrow}\limits_{}^{ − 7{R}_{2} + {R}_{3}} &\left [\array{ \text{1}&0&3&−2& 0\cr 0&\text{1 } &1 &−3 & 2 \cr 0&0&0& 0 &−5 } \right ] & & & & } And finally, with i = 3, \eqalignno{ \mathop{\longrightarrow}\limits_{}^{ −{1\over 5}{R}_{3}} &\left [\array{ \text{1}&0&3&−2&0\cr 0&\text{1 } &1 &−3 &2 \cr 0&0&0& 0 &1 } \right ] &\mathop{\longrightarrow}\limits_{}^{ − 2{R}_{3} + {R}_{2}} &\left [\array{ \text{1}&0&3&−2&0\cr 0&\text{1 } &1 &−3 &0 \cr 0&0&0& 0 &\text{1} } \right ] & & & & } Let’s analyze the equations in the system represented by this 
augmented matrix. The third equation will read 0 = 1. This is patently false, all the time. No choice of values for our variables will ever make it true. We’re done. Since we cannot even make the last equation true, we have no hope of making all of the equations simultaneously true. So this system has no solutions, and its solution set is the empty set, ∅ = \left \{\right \} (Definition ES).

Notice that we could have reached this conclusion sooner. After performing the row operation − 7{R}_{2} + {R}_{3}, we can see that the third equation reads 0 = −5, a false statement. Since the system represented by this matrix has no solutions, none of the systems represented by the row-equivalent matrices in this sequence has any solutions. However, for this example, we have chosen to bring the matrix fully to reduced row-echelon form for the practice.

These three examples (Example SAB, Example SAA, Example SAE) illustrate the full range of possibilities for a system of linear equations — no solutions, one solution, or infinitely many solutions. In the next section we’ll examine these three scenarios more closely.

Definition RR Row-Reducing To row-reduce the matrix A means to apply row operations to A and arrive at a row-equivalent matrix B in reduced row-echelon form. So the term row-reduce is used as a verb. Theorem REMEF tells us that this process will always be successful and Theorem RREFU tells us that the result will be unambiguous. Typically, the analysis of A will proceed by analyzing B and applying theorems whose hypotheses include the row-equivalence of A and B.

After some practice by hand, you will want to use your favorite computing device to do the computations required to bring a matrix to reduced row-echelon form (Exercise RREF.C30).  See: Computation RR.MMA Computation RR.TI86 Computation RR.TI83 Computation RR.SAGE

1. Is the matrix below in reduced row-echelon form? Why or why not?
\left [\array{ 1&5&0&6&8\cr 0&0&1&2&0 \cr 0&0&0&0&1} \right ]

2. Use row operations to convert the matrix below to reduced row-echelon form and report the final matrix.
\left [\array{ 2 &1& 8\cr −1&1&−1 \cr −2&5& 4} \right ]

3. Find all the solutions to the system below by using an augmented matrix and row operations. Report your final matrix in reduced row-echelon form and the set of solutions.
\eqalignno{ 2{x}_{1} + 3{x}_{2} − {x}_{3} & = 0 & & \cr {x}_{1} + 2{x}_{2} + {x}_{3} & = 3 & & \cr {x}_{1} + 3{x}_{2} + 3{x}_{3} & = 7 & & }

#### Subsection EXC: Exercises

C05 Each archetype below is a system of equations. Form the augmented matrix of the system of equations, convert the matrix to reduced row-echelon form by using row operations and then describe the solution set of the original system of equations.
Archetype A Archetype B Archetype C Archetype D Archetype E Archetype F Archetype G Archetype H Archetype I Archetype J
Contributed by Robert Beezer

For problems C10–C19, find all solutions to the system of linear equations. Use your favorite computing device to row-reduce the augmented matrices for the systems, and write the solutions as a set, using correct set notation.
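For the machine checks that Definition RR and these instructions encourage, any system with exact arithmetic will do. As one possibility (our own choice, not one of the text’s computation notes), SymPy’s Matrix.rref reports both the reduced row-echelon form and the pivot columns; here it reproduces the analysis of Archetype A from Example SAA.

```python
from sympy import Matrix, symbols, linsolve

# Archetype A (Example SAA): row-reduce its augmented matrix by machine.
aug = Matrix([[1, -1, 2, 1],
              [2,  1, 1, 8],
              [1,  1, 0, 5]])
R, pivots = aug.rref()
# R has rows [1, 0, 1, 3], [0, 1, -1, 2], [0, 0, 0, 0], matching
# Example SAA; pivots == (0, 1), i.e. columns 1 and 2 (0-indexed here).
print(R, pivots)

# linsolve accepts an augmented matrix and expresses the solution set
# using the free variable x3, just as in Example SAA:
# x1 = 3 - x3 and x2 = 2 + x3.
x1, x2, x3 = symbols('x1 x2 x3')
print(linsolve(aug, x1, x2, x3))
```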
C10
\eqalignno{ 2{x}_{1} − 3{x}_{2} + {x}_{3} + 7{x}_{4} & = 14 & & \cr 2{x}_{1} + 8{x}_{2} − 4{x}_{3} + 5{x}_{4} & = −1 & & \cr {x}_{1} + 3{x}_{2} − 3{x}_{3} & = 4 & & \cr − 5{x}_{1} + 2{x}_{2} + 3{x}_{3} + 4{x}_{4} & = −19 & & }
Contributed by Robert Beezer Solution [126]

C11
\eqalignno{ 3{x}_{1} + 4{x}_{2} − {x}_{3} + 2{x}_{4} & = 6 & & \cr {x}_{1} − 2{x}_{2} + 3{x}_{3} + {x}_{4} & = 2 & & \cr 10{x}_{2} − 10{x}_{3} − {x}_{4} & = 1 & & }
Contributed by Robert Beezer Solution [127]

C12
\eqalignno{ 2{x}_{1} + 4{x}_{2} + 5{x}_{3} + 7{x}_{4} & = −26 & & \cr {x}_{1} + 2{x}_{2} + {x}_{3} − {x}_{4} & = −4 & & \cr − 2{x}_{1} − 4{x}_{2} + {x}_{3} + 11{x}_{4} & = −10 & & }
Contributed by Robert Beezer Solution [127]

C13
\eqalignno{ {x}_{1} + 2{x}_{2} + 8{x}_{3} − 7{x}_{4} & = −2 & & \cr 3{x}_{1} + 2{x}_{2} + 12{x}_{3} − 5{x}_{4} & = 6 & & \cr − {x}_{1} + {x}_{2} + {x}_{3} − 5{x}_{4} & = −10 & & }
Contributed by Robert Beezer Solution [128]

C14
\eqalignno{ 2{x}_{1} + {x}_{2} + 7{x}_{3} − 2{x}_{4} & = 4 & & \cr 3{x}_{1} − 2{x}_{2} + 11{x}_{4} & = 13 & & \cr {x}_{1} + {x}_{2} + 5{x}_{3} − 3{x}_{4} & = 1 & & }
Contributed by Robert Beezer Solution [129]

C15
\eqalignno{ 2{x}_{1} + 3{x}_{2} − {x}_{3} − 9{x}_{4} & = −16 & & \cr {x}_{1} + 2{x}_{2} + {x}_{3} & = 0 & & \cr − {x}_{1} + 2{x}_{2} + 3{x}_{3} + 4{x}_{4} & = 8 & & }
Contributed by Robert Beezer Solution [131]

C16
\eqalignno{ 2{x}_{1} + 3{x}_{2} + 19{x}_{3} − 4{x}_{4} & = 2 & & \cr {x}_{1} + 2{x}_{2} + 12{x}_{3} − 3{x}_{4} & = 1 & & \cr − {x}_{1} + 2{x}_{2} + 8{x}_{3} − 5{x}_{4} & = 1 & & }
Contributed by Robert Beezer Solution [132]

C17
\eqalignno{ − {x}_{1} + 5{x}_{2} & = −8 & & \cr − 2{x}_{1} + 5{x}_{2} + 5{x}_{3} + 2{x}_{4} & = 9 & & \cr − 3{x}_{1} − {x}_{2} + 3{x}_{3} + {x}_{4} & = 3 & & \cr 7{x}_{1} + 6{x}_{2} + 5{x}_{3} + {x}_{4} & = 30 & & }
Contributed by Robert Beezer Solution [133]

C18
\eqalignno{ {x}_{1} + 2{x}_{2} − 4{x}_{3} − {x}_{4} & = 32 & & \cr {x}_{1} + 3{x}_{2} − 7{x}_{3} − {x}_{5} & = 33 & & \cr {x}_{1} + 2{x}_{3} − 2{x}_{4} + 3{x}_{5} & = 22 & & }
Contributed by Robert Beezer Solution [134]

C19
\eqalignno{ 2{x}_{1} + {x}_{2} & = 6 & & \cr − {x}_{1} − {x}_{2} & = −2 & & \cr 3{x}_{1} + 4{x}_{2} & = 4 & & \cr 3{x}_{1} + 5{x}_{2} & = 2 & & }
Contributed by Robert Beezer Solution [135]

For problems C30–C33, row-reduce the matrix without the aid of a calculator, indicating the row operations you are using at each step using the notation of Definition RO.

C30
\eqalignno{ \left [\array{ 2& 1 & 5 &10\cr 1&−3&−1&−2 \cr 4&−2& 6 &12 } \right ] & & }
Contributed by Robert Beezer Solution [137]

C31
\eqalignno{ \left [\array{ 1 & 2 &−4\cr −3&−1&−3 \cr −2& 1 &−7 } \right ] & & }
Contributed by Robert Beezer Solution [137]

C32
\eqalignno{ \left [\array{ 1 & 1 & 1\cr −4&−3&−2 \cr 3 & 2 & 1 } \right ] & & }
Contributed by Robert Beezer Solution [138]

C33
\eqalignno{ \left [\array{ 1 & 2 &−1&−1\cr 2 & 4 &−1& 4 \cr −1&−2& 3 & 5 } \right ] & & }
Contributed by Robert Beezer Solution [139]

M40 Consider the two 3 × 4 matrices below
\eqalignno{ B & = \left [\array{ 1 & 3 &−2& 2\cr −1&−2&−1&−1 \cr −1&−5& 8 &−3 } \right ] &C & = \left [\array{ 1 & 2 & 1 &2\cr 1 & 1 & 4 &0 \cr −1&−1&−4&1 } \right ] & & & & }
(a) Row-reduce each matrix and determine that the reduced row-echelon forms of B and C are identical. From this argue that B and C are row-equivalent.
(b) In the proof of Theorem RREFU, we begin by arguing that entries of row-equivalent matrices are related by way of certain scalars and sums.
In this example, we would write that entries of B from row i that are in column j are linearly related to the entries of C in column j from all three rows

\eqalignno{ {\left [B\right ]}_{ij} & = {δ}_{i1}{\left [C\right ]}_{1j} + {δ}_{i2}{\left [C\right ]}_{2j} + {δ}_{i3}{\left [C\right ]}_{3j} & &1 ≤ j ≤ 4 & & }

For each 1 ≤ i ≤ 3 find the corresponding three scalars in this relationship. So your answer will be nine scalars, determined three at a time.
Contributed by Robert Beezer Solution [139]

M45 You keep a number of lizards, mice and peacocks as pets. There are a total of 108 legs and 30 tails in your menagerie. You have twice as many mice as lizards. How many of each creature do you have?
Contributed by Chris Black Solution [142]

M50 A parking lot has 66 vehicles (cars, trucks, motorcycles and bicycles) in it. There are four times as many cars as trucks. The total number of tires (4 per car or truck, 2 per motorcycle or bicycle) is 252. How many cars are there? How many bicycles?
Contributed by Robert Beezer Solution [143]

T10 Prove that each of the three row operations (Definition RO) is reversible. More precisely, if the matrix B is obtained from A by application of a single row operation, show that there is a single row operation that will transform B back into A.
Contributed by Robert Beezer Solution [145]

T11 Suppose that A, B and C are m × n matrices. Use the definition of row-equivalence (Definition REM) to prove the following three facts.
1. A is row-equivalent to A.
2. If A is row-equivalent to B, then B is row-equivalent to A.
3. If A is row-equivalent to B, and B is row-equivalent to C, then A is row-equivalent to C.
A relationship that satisfies these three properties is known as an equivalence relation, an important idea in the study of various algebras. This is a formal way of saying that a relationship behaves like equality, without requiring the relationship to be as strict as equality itself. We’ll see it again in Theorem SER.
Contributed by Robert Beezer

T12 Suppose that B is an m × n matrix in reduced row-echelon form. Build a new, likely smaller, k × ℓ matrix C as follows. Keep any collection of k adjacent rows, k ≤ m. From these rows, keep columns 1 through ℓ, ℓ ≤ n. Prove that C is in reduced row-echelon form.
Contributed by Robert Beezer

T13 Generalize Exercise RREF.T12 by just keeping any k rows, and not requiring the rows to be adjacent. Prove that any such matrix C is in reduced row-echelon form.
Contributed by Robert Beezer

#### Subsection SOL: Solutions

C10 Contributed by Robert Beezer Statement [114]
The augmented matrix row-reduces to
\left [\array{ \text{1}&0&0&0& 1\cr 0&\text{1}&0&0&−3 \cr 0&0&\text{1}&0&−4\cr 0&0&0&\text{1}& 1 } \right ]
and we see from the locations of the leading 1’s that the system is consistent (Theorem RCLS) and that n − r = 4 − 4 = 0, so the system has no free variables (Theorem CSRN) and hence has a unique solution. This solution is
\eqalignno{ S & = \left \{\left [\array{ 1\cr −3 \cr −4\cr 1 } \right ]\right \} & & }

C11 Contributed by Robert Beezer Statement [115]
The augmented matrix row-reduces to
\left [\array{ \text{1}&0& 1 & 4∕5 &0 \cr 0&\text{1}&−1&−1∕10&0 \cr 0&0& 0 & 0 &\text{1} } \right ]
and a leading 1 in the last column tells us that the system is inconsistent (Theorem RCLS). So the solution set is ∅ = \left \{\right \}.
C12 Contributed by Robert Beezer Statement [115]
The augmented matrix row-reduces to
\left [\array{ \text{1}&2&0&−4& 2\cr 0&0&\text{1}& 3 &−6 \cr 0&0&0& 0 & 0 } \right ]
Theorem RCLS tells us the system is consistent, and Theorem CSRN tells us the solution set can be described with n − r = 4 − 2 = 2 free variables, namely {x}_{2} and {x}_{4}. Solving for the dependent variables {x}_{1} and {x}_{3}, the first and second equations represented in the row-reduced matrix yield,
\eqalignno{ {x}_{1} & = 2 − 2{x}_{2} + 4{x}_{4} & & \cr {x}_{3} & = −6 − 3{x}_{4} & & }
As a set, we write this as
\left \{\left [\array{ 2 − 2{x}_{2} + 4{x}_{4} \cr {x}_{2} \cr −6 − 3{x}_{4} \cr {x}_{4} } \right ]\mathrel{∣}{x}_{2}, {x}_{4} ∈ ℂ\right \}

C13 Contributed by Robert Beezer Statement [116]
The augmented matrix of the system of equations is
\left [\array{ 1 &2& 8 &−7& −2\cr 3 &2&12&−5& 6 \cr −1&1& 1 &−5&−10 } \right ]
which row-reduces to
\left [\array{ \text{1}&0&2& 1 &0\cr 0&\text{1}&3&−4&0 \cr 0&0&0& 0 &\text{1} } \right ]
With a leading one in the last column Theorem RCLS tells us the system of equations is inconsistent, so the solution set is the empty set, ∅ = \left \{\right \}.

C14 Contributed by Robert Beezer Statement [116]
The augmented matrix of the system of equations is
\left [\array{ 2& 1 &7&−2& 4\cr 3&−2&0&11&13 \cr 1& 1 &5&−3& 1 } \right ]
which row-reduces to
\left [\array{ \text{1}&0&2& 1 & 3\cr 0&\text{1}&3&−4&−2 \cr 0&0&0& 0 & 0 } \right ]
Then D = \left \{1, 2\right \} and F = \left \{3, 4, 5\right \}, so the system is consistent (5∉D) and can be described by the two free variables {x}_{3} and {x}_{4}. Rearranging the equations represented by the two nonzero rows to gain expressions for the dependent variables {x}_{1} and {x}_{2} yields the solution set,
S = \left \{\left [\array{ 3 − 2{x}_{3} − {x}_{4} \cr −2 − 3{x}_{3} + 4{x}_{4} \cr {x}_{3} \cr {x}_{4} } \right ]\mathrel{∣}{x}_{3}, {x}_{4} ∈ ℂ\right \}

C15 Contributed by Robert Beezer Statement [117]
The augmented matrix of the system of equations is
\left [\array{ 2 &3&−1&−9&−16\cr 1 &2& 1 & 0 & 0 \cr −1&2& 3 & 4 & 8 } \right ]
which row-reduces to
\left [\array{ \text{1}&0&0& 2 & 3\cr 0&\text{1}&0&−3&−5 \cr 0&0&\text{1}& 4 & 7 } \right ]
Then D = \left \{1, 2, 3\right \} and F = \left \{4, 5\right \}, so the system is consistent (5∉D) and can be described by the one free variable {x}_{4}. Rearranging the equations represented by the three nonzero rows to gain expressions for the dependent variables {x}_{1}, {x}_{2} and {x}_{3} yields the solution set,
S = \left \{\left [\array{ 3 − 2{x}_{4} \cr −5 + 3{x}_{4} \cr 7 − 4{x}_{4} \cr {x}_{4} } \right ]\mathrel{∣}{x}_{4} ∈ ℂ\right \}

C16 Contributed by Robert Beezer Statement [117]
The augmented matrix of the system of equations is
\left [\array{ 2 &3&19&−4&2\cr 1 &2&12&−3&1 \cr −1&2& 8 &−5&1 } \right ]
which row-reduces to
\left [\array{ \text{1}&0&2& 1 &0\cr 0&\text{1}&5&−2&0 \cr 0&0&0& 0 &\text{1} } \right ]
With a leading one in the last column Theorem RCLS tells us the system of equations is inconsistent, so the solution set is the empty set, ∅ = \left \{\right \}.
C17 Contributed by Robert Beezer Statement [118]
We row-reduce the augmented matrix of the system of equations,
\eqalignno{ \left [\array{ −1& 5 &0&0&−8\cr −2& 5 &5&2& 9 \cr −3&−1&3&1& 3\cr 7 & 6 &5&1& 30 } \right ] &\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0&0& 3\cr 0&\text{1}&0&0&−1 \cr 0&0&\text{1}&0& 2\cr 0&0&0&\text{1}& 5 } \right ] & & }
The reduced row-echelon form of the matrix is the augmented matrix of the system {x}_{1} = 3, {x}_{2} = −1, {x}_{3} = 2, {x}_{4} = 5, which has a unique solution. As a set of column vectors, the solution set is
\eqalignno{ S & = \left \{\left [\array{ 3\cr −1 \cr 2\cr 5 } \right ]\right \} & & }

C18 Contributed by Robert Beezer Statement [118]
We row-reduce the augmented matrix of the system of equations,
\eqalignno{ \left [\array{ 1&2&−4&−1& 0 &32\cr 1&3&−7& 0 &−1&33 \cr 1&0& 2 &−2& 3 &22 } \right ] &\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0& 2 &0& 5 & 6\cr 0&\text{1}&−3&0&−2& 9 \cr 0&0& 0 &\text{1}& 1 &−8 } \right ] & & }
With no leading 1 in the final column, we recognize the system as consistent (Theorem RCLS). Since the system is consistent, we compute the number of free variables as n − r = 5 − 3 = 2 (Theorem FVCS), and we see that columns 3 and 5 are not pivot columns, so {x}_{3} and {x}_{5} are free variables. We convert each row of the reduced row-echelon form of the matrix into an equation, and solve it for the lone dependent variable, as an expression in the two free variables.
\eqalignno{ {x}_{1} + 2{x}_{3} + 5{x}_{5} = 6\quad & →\quad {x}_{1} = 6 − 2{x}_{3} − 5{x}_{5} & & \cr {x}_{2} − 3{x}_{3} − 2{x}_{5} = 9\quad & →\quad {x}_{2} = 9 + 3{x}_{3} + 2{x}_{5} & & \cr {x}_{4} + {x}_{5} = −8\quad & →\quad {x}_{4} = −8 − {x}_{5} & & }
These expressions give us a convenient way to describe the solution set, S.
\eqalignno{ S & = \left \{\left [\array{ 6 − 2{x}_{3} − 5{x}_{5} \cr 9 + 3{x}_{3} + 2{x}_{5} \cr {x}_{3} \cr −8 − {x}_{5} \cr {x}_{5} } \right ]\mathrel{∣}{x}_{3}, {x}_{5} ∈ ℂ\right \} & & }

C19 Contributed by Robert Beezer Statement [119]
We form the augmented matrix of the system,
\eqalignno{ &\left [\array{ 2 & 1 & 6\cr −1&−1&−2 \cr 3 & 4 & 4\cr 3 & 5 & 2 } \right ] & & }
which row-reduces to
\eqalignno{ &\left [\array{ \text{1}&0& 4\cr 0&\text{1}&−2 \cr 0&0& 0\cr 0&0& 0 } \right ] & & }
With no leading 1 in the final column, this system is consistent (Theorem RCLS). There are n = 2 variables in the system and r = 2 non-zero rows in the row-reduced matrix. By Theorem FVCS, there are n − r = 2 − 2 = 0 free variables and we therefore know the solution is unique. Forming the system of equations represented by the row-reduced matrix, we see that {x}_{1} = 4 and {x}_{2} = −2.
Written as a set of column vectors,

$$S = \left\{ \begin{bmatrix} 4 \\ -2 \end{bmatrix} \right\}$$

C30 Contributed by Robert Beezer Statement [120]

$$\begin{bmatrix} 2 & 1 & 5 & 10 \\ 1 & -3 & -1 & -2 \\ 4 & -2 & 6 & 12 \end{bmatrix} \xrightarrow{R_1 \leftrightarrow R_2} \begin{bmatrix} 1 & -3 & -1 & -2 \\ 2 & 1 & 5 & 10 \\ 4 & -2 & 6 & 12 \end{bmatrix} \xrightarrow{-2R_1 + R_2} \begin{bmatrix} 1 & -3 & -1 & -2 \\ 0 & 7 & 7 & 14 \\ 4 & -2 & 6 & 12 \end{bmatrix}$$

$$\xrightarrow{-4R_1 + R_3} \begin{bmatrix} 1 & -3 & -1 & -2 \\ 0 & 7 & 7 & 14 \\ 0 & 10 & 10 & 20 \end{bmatrix} \xrightarrow{\frac{1}{7}R_2} \begin{bmatrix} 1 & -3 & -1 & -2 \\ 0 & 1 & 1 & 2 \\ 0 & 10 & 10 & 20 \end{bmatrix} \xrightarrow{3R_2 + R_1} \begin{bmatrix} 1 & 0 & 2 & 4 \\ 0 & 1 & 1 & 2 \\ 0 & 10 & 10 & 20 \end{bmatrix}$$

$$\xrightarrow{-10R_2 + R_3} \begin{bmatrix} \boxed{1} & 0 & 2 & 4 \\ 0 & \boxed{1} & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

C31 Contributed by Robert Beezer Statement [120]

$$\begin{bmatrix} 1 & 2 & -4 \\ -3 & -1 & -3 \\ -2 & 1 & -7 \end{bmatrix} \xrightarrow{3R_1 + R_2} \begin{bmatrix} 1 & 2 & -4 \\ 0 & 5 & -15 \\ -2 & 1 & -7 \end{bmatrix} \xrightarrow{2R_1 + R_3} \begin{bmatrix} 1 & 2 & -4 \\ 0 & 5 & -15 \\ 0 & 5 & -15 \end{bmatrix}$$

$$\xrightarrow{\frac{1}{5}R_2} \begin{bmatrix} 1 & 2 & -4 \\ 0 & 1 & -3 \\ 0 & 5 & -15 \end{bmatrix} \xrightarrow{-2R_2 + R_1} \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & -3 \\ 0 & 5 & -15 \end{bmatrix} \xrightarrow{-5R_2 + R_3} \begin{bmatrix} \boxed{1} & 0 & 2 \\ 0 & \boxed{1} & -3 \\ 0 & 0 & 0 \end{bmatrix}$$

C32 Contributed by Robert Beezer Statement [121]

Following the algorithm of Theorem REMEF, and working to create pivot columns from left to right, we have

$$\begin{bmatrix} 1 & 1 & 1 \\ -4 & -3 & -2 \\ 3 & 2 & 1 \end{bmatrix} \xrightarrow{4R_1 + R_2} \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 3 & 2 & 1 \end{bmatrix} \xrightarrow{-3R_1 + R_3} \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 0 & -1 & -2 \end{bmatrix}$$

$$\xrightarrow{-1R_2 + R_1} \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 0 & -1 & -2 \end{bmatrix} \xrightarrow{1R_2 + R_3} \begin{bmatrix} \boxed{1} & 0 & -1 \\ 0 & \boxed{1} & 2 \\ 0 & 0 & 0 \end{bmatrix}$$

C33 Contributed by Robert Beezer Statement [121]

Following the algorithm of Theorem REMEF, and working to create pivot columns from left to right, we have

$$\begin{bmatrix} 1 & 2 & -1 & -1 \\ 2 & 4 & -1 & 4 \\ -1 & -2 & 3 & 5 \end{bmatrix} \xrightarrow{-2R_1 + R_2} \begin{bmatrix} 1 & 2 & -1 & -1 \\ 0 & 0 & 1 & 6 \\ -1 & -2 & 3 & 5 \end{bmatrix} \xrightarrow{1R_1 + R_3} \begin{bmatrix} 1 & 2 & -1 & -1 \\ 0 & 0 & 1 & 6 \\ 0 & 0 & 2 & 4 \end{bmatrix}$$

$$\xrightarrow{1R_2 + R_1} \begin{bmatrix} 1 & 2 & 0 & 5 \\ 0 & 0 & 1 & 6 \\ 0 & 0 & 2 & 4 \end{bmatrix} \xrightarrow{-2R_2 + R_3} \begin{bmatrix} 1 & 2 & 0 & 5 \\ 0 & 0 & 1 & 6 \\ 0 & 0 & 0 & -8 \end{bmatrix} \xrightarrow{-\frac{1}{8}R_3} \begin{bmatrix} 1 & 2 & 0 & 5 \\ 0 & 0 & 1 & 6 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$\xrightarrow{-6R_3 + R_2} \begin{bmatrix} 1 & 2 & 0 & 5 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \xrightarrow{-5R_3 + R_1} \begin{bmatrix} \boxed{1} & 2 & 0 & 0 \\ 0 & 0 & \boxed{1} & 0 \\ 0 & 0 & 0 & \boxed{1} \end{bmatrix}$$

M40 Contributed by Robert Beezer Statement [121]

(a) Let R be the common reduced row-echelon form of B and C. A sequence of row operations converts B to R, and a second sequence of row operations converts C to R. If we "reverse" the second sequence's order, and reverse each individual row operation (see Exercise RREF.T10), then we can begin with B, convert to R with the first sequence, and then convert to C with the reversed sequence. Satisfying Definition REM, we can say B and C are row-equivalent matrices.

(b) We will work this carefully for the first row of B and just give the solution for the next two rows. For row 1 of B take i = 1 and we have

$$[B]_{1j} = \delta_{11}[C]_{1j} + \delta_{12}[C]_{2j} + \delta_{13}[C]_{3j}, \qquad 1 \le j \le 4$$

If we substitute the four values for j, we arrive at four linear equations in the three unknowns $\delta_{11}, \delta_{12}, \delta_{13}$:

$$\begin{aligned} (j = 1) \qquad 1 &= \delta_{11}(1) + \delta_{12}(1) + \delta_{13}(-1) \\ (j = 2) \qquad 3 &= \delta_{11}(2) + \delta_{12}(1) + \delta_{13}(-1) \\ (j = 3) \qquad -2 &= \delta_{11}(1) + \delta_{12}(4) + \delta_{13}(-4) \\ (j = 4) \qquad 2 &= \delta_{11}(2) + \delta_{12}(0) + \delta_{13}(1) \end{aligned}$$

We form the augmented matrix of this system and row-reduce to find the solutions,

$$\begin{bmatrix} 1 & 1 & -1 & 1 \\ 2 & 1 & -1 & 3 \\ 1 & 4 & -4 & -2 \\ 2 & 0 & 1 & 2 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} \boxed{1} & 0 & 0 & 2 \\ 0 & \boxed{1} & 0 & -3 \\ 0 & 0 & \boxed{1} & -2 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

So the unique solution is $\delta_{11} = 2$, $\delta_{12} = -3$, $\delta_{13} = -2$. Entirely similar work will lead you to

$$\delta_{21} = -1 \qquad \delta_{22} = 1 \qquad \delta_{23} = 1$$

and

$$\delta_{31} = -4 \qquad \delta_{32} = 8 \qquad \delta_{33} = 5$$

M45 Contributed by Chris Black Statement [123]

Let l, m, p denote the number of lizards, mice and peacocks. Then the statements from the problem yield the equations:

$$\begin{aligned} 4l + 4m + 2p &= 108 \\ l + m + p &= 30 \\ 2l - m &= 0 \end{aligned}$$

The augmented matrix for this system is

$$\begin{bmatrix} 4 & 4 & 2 & 108 \\ 1 & 1 & 1 & 30 \\ 2 & -1 & 0 & 0 \end{bmatrix}$$

which row-reduces to

$$\begin{bmatrix} \boxed{1} & 0 & 0 & 8 \\ 0 & \boxed{1} & 0 & 16 \\ 0 & 0 & \boxed{1} & 6 \end{bmatrix}$$

From the row-reduced matrix, we see that we have an equivalent system l = 8, m = 16, and p = 6, which means that you have 8 lizards, 16 mice and 6 peacocks.

M50 Contributed by Robert Beezer Statement [123]

Let c, t, m, b denote the number of cars, trucks, motorcycles, and bicycles. Then the statements from the problem yield the equations:

$$\begin{aligned} c + t + m + b &= 66 \\ c - 4t &= 0 \\ 4c + 4t + 2m + 2b &= 252 \end{aligned}$$

The augmented matrix for this system is

$$\begin{bmatrix} 1 & 1 & 1 & 1 & 66 \\ 1 & -4 & 0 & 0 & 0 \\ 4 & 4 & 2 & 2 & 252 \end{bmatrix}$$

which row-reduces to

$$\begin{bmatrix} \boxed{1} & 0 & 0 & 0 & 48 \\ 0 & \boxed{1} & 0 & 0 & 12 \\ 0 & 0 & \boxed{1} & 1 & 6 \end{bmatrix}$$

The first row of the matrix represents the equation c = 48, so there are 48 cars. The second row of the matrix represents the equation t = 12, so there are 12 trucks. The third row of the matrix represents the equation m + b = 6, so there are anywhere from 0 to 6 bicycles. We can also say that b is a free variable, but the context of the problem limits it to the 7 integer values 0 through 6, since you cannot have a negative number of motorcycles.

T10 Contributed by Robert Beezer Statement [123]

If we can reverse each row operation individually, then we can reverse a sequence of row operations. The operations that reverse each operation are listed below, using our shorthand notation:

$$\begin{aligned} R_i \leftrightarrow R_j &\qquad R_i \leftrightarrow R_j \\ \alpha R_i,\ \alpha \ne 0 &\qquad \tfrac{1}{\alpha}R_i \\ \alpha R_i + R_j &\qquad -\alpha R_i + R_j \end{aligned}$$
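These reductions are entirely mechanical, so they are easy to double-check by machine. A minimal sketch using SymPy (an illustrative aside, not part of the original solutions), applied to the matrix from C30:

```python
from sympy import Matrix

# Coefficient matrix from exercise C30
A = Matrix([[2,  1,  5, 10],
            [1, -3, -1, -2],
            [4, -2,  6, 12]])

# rref() returns the reduced row-echelon form and the pivot column indices
R, pivots = A.rref()
print(R)       # Matrix([[1, 0, 2, 4], [0, 1, 1, 2], [0, 0, 0, 0]])
print(pivots)  # (0, 1)
```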
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9801023006439209, "perplexity": 1851.596560803639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808742.58/warc/CC-MAIN-20171124180349-20171124200349-00258.warc.gz"}
http://aimpl.org/sheavemodular/1/
## 1. Rational Representations: Induction, Cohomology Vanishing

#### Problem 1.1.

[Williamson] What does the induction theorem of Achar–Riche/Hodge–Karuppuchamy–Scott imply about the relationship between representations of $G(\mathbb{F}_q)$ and $B(\mathbb{F}_q)$ in defining characteristic?

#### Problem 1.2.

[Williamson] What representations of $G(\mathbb{F}_q)$ arise by restricting a tilting module of $G$?

#### Problem 1.3.

[Scott] What rational representations of $B$ arise by restricting a tilting module of $G$?

#### Problem 1.4.

[Achar] What objects of the derived category of representations of $B$ correspond to tilting modules of $G$?

#### Problem 1.5.

[Nakano] Let $P$ be a parabolic subgroup of $G$. Is it true in characteristic $p$ that $R^i\operatorname{Ind}_P^G\operatorname{Sym}(\mathfrak{n}_P^\bullet) = 0$ for all $i > 0$? (Known for $P = B$ and for certain other parabolics. Possible connection with normality of nilpotent orbit closures.)

Cite this as: AimPL: Sheaves and modular representations of reductive groups, available at http://aimpl.org/sheavemodular.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.550241231918335, "perplexity": 1399.3632666777107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823303.28/warc/CC-MAIN-20181210034333-20181210055833-00172.warc.gz"}
https://gottwurfelt.com/2013/07/17/the-probability-of-catching-four-foul-balls/
# The probability of catching four foul balls

Greg Van Niel caught four foul balls at Sunday's Cleveland Indians game. ESPN reported that this is a one-in-a-trillion event – a number due to Ideal Seat, which I'll take to mean that this guy had a one-in-a-trillion chance of catching four fouls.

This is immediately suspicious to me. Total MLB attendance last year was about 75 million, so a one-in-a-trillion event should happen once every thirteen thousand years. The fact that it happened, given that we've had way less than thirteen thousand years of baseball, is evidence that this computation was done incorrectly.

Somewhat surprisingly, given how small the number is, it actually seems to be an overestimate. I'll assume that their numbers are correct: 30 balls enter the stands in an average game, and there are 30,000 fans at that game. Say I'm one of those fans. Let's assume that all foul balls are hit independently, and that they're equally likely to be caught by any person in the stands. The probability that exactly four balls will be hit to me is ${30 \choose 4} p^4 (1-p)^{26}$, where $p = 1/30000$. This is about $3.38 \times 10^{-14}$, or one in thirty trillion. (The probability that five or more balls will be hit to me is orders of magnitude lower than that.)

IdealSeat also claims that two fans caught two foul balls in the same game last year. I suspect that there's some massive underreporting going on here, because the same analysis gives that the probability that I'll get two balls is ${30 \choose 2} p^2 (1-p)^{28}$, which is about one in two million. So this should have happened 35 to 40 times last year – it's just that most of the people it happened to didn't bother telling anybody! (Other than their friends, who probably didn't believe them.)

What's wrong with the one-in-a-trillion, or one-in-thirty-trillion, numbers?

• They assume that all foul balls are uniformly distributed over all the seats. This is patently untrue. Some seats by definition can't receive a foul ball, because they're in fair territory. Some seats, although they can theoretically receive a foul ball, just won't. Ideal Seat has a heatmap of foul ball locations at Safeco Field in Seattle: basically, the closer you are to home plate, the better your chances. Your chances of getting a foul ball drop off much faster with height than with horizontal distance. In addition, aisle seats are more likely to be the closest seat to where a ball lands than adjacent non-aisle seats.

• They assume that all foul ball locations are independent. I don't know if there's data on this, but batters have tendencies on where they hit balls in play; they should have tendencies on where they hit foul balls as well.

• They assume that a person can only get foul balls hit to their seat. This might be true in, say, San Francisco (where most games sell out), but it's not true in Oakland (where there are plenty of empty seats). Van Niel's section looks pretty full in the pictures, though. But Van Niel himself admits at least one of the balls wasn't hit right to him.

All I can say for sure is that these drive the chances up – so the probability of catching four foul balls in a single game is probably a good deal higher than one in a trillion.

## 2 thoughts on "The probability of catching four foul balls"

1. I think this probability should be calculated like the probability that at least two people in a room of n have the same birthday.
The probability that the same person captures more than two balls in a given game (yes, the probability that this happens in one game) should be calculated using 30000 instead of 365 and n = 30. However, I don't know how to proceed with more than four balls. What do you think? Hope I am not terribly wrong.
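For reference, a quick numerical check of the figures in the post (a sketch under the post's own simplified model of 30 balls and 30,000 equally likely fans):

```python
from math import comb

p = 1 / 30000   # chance a given ball reaches one particular fan
n = 30          # foul balls entering the stands per game

def prob_exactly(k):
    """P(exactly k of the n balls reach one particular fan)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(prob_exactly(4))        # ~3.4e-14, about one in thirty trillion
print(prob_exactly(2))        # ~4.8e-07, about one in two million

# Expected number of two-ball fans across ~75 million season attendees
print(75e6 * prob_exactly(2)) # ~36, matching the "35 to 40 times" estimate
```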
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6600604057312012, "perplexity": 866.3928184395583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998943.53/warc/CC-MAIN-20190619083757-20190619105757-00524.warc.gz"}
https://standards.globalspec.com/std/10338732/nen-iso-13317-4
# NEN-ISO 13317-4

## Determination of particle size distribution by gravitational liquid sedimentation methods - Part 4: Balance method

Status: active, most current

Organization: NEN
Publication Date: 1 December 2014
Page Count: 25
ICS Code (Particle size analysis. Sieving): 19.120

##### Scope:

NEN-ISO 13317-4 specifies the method for the determination of particle size distribution by the mass of particles settling under gravity in liquid. This method is based on a direct mass measurement and gives the mass distribution of equivalent spherical particle diameter. Typically, the gravitational liquid sedimentation method applies to samples in the 1 μm to 100 μm size range and where the sedimentation condition for particle Reynolds number less than 0,25 is satisfied.

### Document History

NEN-ISO 13317-4, December 1, 2014: Determination of particle size distribution by gravitational liquid sedimentation methods - Part 4: Balance method.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9669942855834961, "perplexity": 3398.4219116155864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540529745.80/warc/CC-MAIN-20191211021635-20191211045635-00400.warc.gz"}
http://devmaster.net/posts/18843/struct-in-multiple-files
Dec 02, 2010 at 08:59

#### 10 Replies

Dec 02, 2010 at 09:06

First off, you can use more than one #include line, so if you have something that's needed everywhere, just put it in a separate include file and include that from all the header files that need it. To avoid issues with multiply-included files, you can use the common include guards:

    #ifndef THISHEADER
    #define THISHEADER
    ... the header stuff here ...
    #endif

That way, if the file gets included several times (from various header files, for instance), the contents will only get used once. THISHEADER has to be unique for every header file; a common way is to name it HEADERFILE_H for headerfile.h, OTHERFILE_H for otherfile.h, etc.

This is the easy way to do it. More preferable ways include source-side include guards, forward declarations in case the data structure is only used via a pointer, and preferably never including from include files, but those really become issues only when your project grows so large you start to worry about build times.

Dec 02, 2010 at 13:31

I think his problem has more to do with circular dependencies:

    // A.h
    struct A
    {
        B * ptr;
    };

    // B.h
    struct B
    {
        A * a;
    };

Of course you could include A.h from B.h, and include B.h from A.h, but if you include A.h, it in turn includes B.h, and there you have a struct definition of B which needs A but which isn't defined at that point.

The solution is: only include what you actually need. The fact of the matter is, A doesn't need the definition of B, and neither does B need the definition of A. Actual definitions are only needed when accessing their members or when the compiler needs to know their size (which is the case if you use such a struct as a member of another struct). In the above example, you only need pointers. And all pointers are represented in the same way, so the only thing the compiler needs to know is that there exists an A when you're using a pointer to A. So, the solution:

    // A.h
    struct B;  // forward declaration of B: the compiler now knows that B
               // exists, but does not know what it contains
    struct A
    {
        B * ptr;
    };

    // B.h
    struct A;
    struct B
    {
        A * a;
    };

It now turns out that neither A nor B needs to include the other header.

That said:

> If the vector struct is nested within another struct definition then it seems to need the vector struct redefined in that header. That's ridiculous! Is this correct?

If you actually mean structs defined within other structs, then you have a problem. As these nested types belong to their enclosing structs, you'll need the encompassing struct definition:

    // A.h
    struct A
    {
        struct NestedA
        {
        };

        B::NestedB * ptr;
    };

    // B.h
    struct B
    {
        struct NestedB
        {
        };

        A::NestedA * ptr;
    };

Ok, now what? You can't forward-declare B in A because you'll need B's definition in order to access B::NestedB. Basically, you're screwed. This is an unsolvable problem in C++*, and you should avoid these kinds of designs.

* Well, not literally unsolvable; you can apply some template trickery and make use of the two-phase name lookup paradigm to delay the lookup of the nested members until the actual use of the type, at which point they're both fully defined, but I don't recommend it.

Dec 02, 2010 at 18:02

@Sol_HSA

> To avoid issues with multiply-included files, you can use the common include guards:

Dec 02, 2010 at 20:58

@Reedbeta I might be wrong but I was under the impression that #pragma once wasn't technically standard. Not that it's a big deal since it seems to be supported a lot.
Dec 02, 2010 at 21:00

It's quite widely supported, and if it isn't part of the standard it ought to be, as it's obviously better than #include guards when available. :yes:

Dec 03, 2010 at 03:35

That's a great help, thanks guys.

Dec 03, 2010 at 07:52

@Reedbeta I was certain someone would find something to whine about, hence the last paragraph. But yeah, I think include guards plus #pragma once is a better solution than external include guards. Will work everywhere, and where the pragma is supported, faster compiles.

Dec 03, 2010 at 15:53

I have never actually seen benchmarks between regular guards and #pragma once, but I thought I read somewhere that some compilers simply recognize the standard include guards and act as if a #pragma once is in that header (or more specifically, they don't reparse it when that macro is still defined when it's reincluded).

@Reedbeta

> It's quite widely supported, and if it isn't part of the standard it ought to be, as it's obviously better than #include guards when available. :yes:

Since it's a #pragma directive it's by definition not in the standard :)

Dec 06, 2010 at 09:04

@.oisyn

> I have never actually seen benchmarks between regular guards and #pragma once, but I thought I read somewhere that some compilers simply recognize the standard include guards and act as if a #pragma once is in that header.

I'd be very surprised if any major compilers didn't do this.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6049867868423462, "perplexity": 2364.014564820186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010303377/warc/CC-MAIN-20140305090503-00007-ip-10-183-142-35.ec2.internal.warc.gz"}
https://misportal.jlab.org/ul/publications/view_pub.cfm?pub_id=15958
Publications

Publication Information

Title: Azimuthal asymmetries in unpolarized SIDIS and Drell-Yan processes: a case study towards TMD factorization at subleading twist
Authors: Alexei Prokudin, Alessandro Bacchetta, Marco Radici, Miguel Echevarria, Christian Pisano, Giuseppe Bozzi
JLAB number: JLAB-THY-19-2963
LANL number: arXiv:1906.07037
Other number: DOE/OR/23177-4710
Document Type(s): Journal Article
Supported by U.S. Naval Research: No
Supported by Jefferson Lab LDRD Funding: No
Funding Source: Nuclear Physics (NP)
Other Funding: NSF ERC
Journal: Physics Letters B
Volume: 797
Page(s): 134850
Refereed Publication

Abstract: We consider the azimuthal distribution of the final observed hadron in semi-inclusive deep-inelastic scattering and the lepton pair in the Drell-Yan process. In particular, we focus on the $\cos \phi$ modulation of the unpolarized cross section and on its dependence upon transverse momentum. At low transverse momentum, for these observables we propose a factorized expression based on a tree-level approach and conjecture that the same formula is valid in transverse-momentum-dependent (TMD) factorization when written in terms of subtracted TMD parton distributions. Our formula correctly matches the collinear factorization results at high transverse momentum, solves a long-standing problem, and is a necessary step towards the extension of the TMD factorization theorems up to subleading twist.

Experiment Numbers: other
Group: THEORY CENTER
DOI: https://doi.org/10.1016/j.physletb.2019.134850
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9177160859107971, "perplexity": 7471.3587066438895}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657138752.92/warc/CC-MAIN-20200712144738-20200712174738-00285.warc.gz"}
https://git.hacknology.de/kaqu/private_mirror/src/branch/master/static/vortraege/nwp.md
# Numerical Weather Prediction (NWP)

Raziel & tecer

Contact: tecer@hacknology.de - tecer@jabber.square-wave.de - DECT 4950

# Intro

This course is...

...a very brief introduction to NWP
...an even shorter introduction to GIS
...all about running an NWP model, i.e.
...pre-processing the input data
...running the model
...post-processing the output data

This course is not...

...complete by almost all means
...about fancy weather visualizations
...providing a production-ready setup
...about data assimilation and other special topics
...too much about maths/meteorology/physics/...

# Prelude

## NWP model

Input data → pre-processor (data assimilation) → model → post-processor → output data

model := simulate processes in the atmosphere / boundary transitions

Evolve the current condition (several key parameters at many levels) to the next time step using physical rules (ODEs/PDEs)

## WRF

• Developed by NCAR @Boulder, CO
• 2 main components:
-- WPS (preprocessor)
-- WRF (main integration)
• For post-processing: UPP (or NCL)
• Mostly Fortran
• 300+ pages manual
• Impossible to use for the not-inaugurated → NCAR offers tutorials twice a year

# Part 1

## Preparing the software

Strongly recommended: Use the VirtualBox image provided at https://www.hacknology.de/vortrag/2016/wrf/ as it contains all the required software and data sets.

Alternatively, set up the software manually, preferably in a Ubuntu 16.04 VM (username "wrf"); the software and data package is also provided at https://www.hacknology.de/vortrag/2016/wrf/

## Compiling NetCDF

WRF uses NetCDF as the main data format, hence the NetCDF libraries and the Fortran bindings are required. Unfortunately, the distribution packages usually do not work, so NetCDF must be compiled manually before compiling WRF.

> cd /home/wrf/src
> tar -xf ../packages/netcdf-4.4.1.1.tar.gz ; cd netcdf-4.4.1.1
> ./configure --prefix=/home/wrf/netcdf --disable-dap \
  --disable-netcdf-4 --disable-shared
> make && make install

## Compiling NetCDF Fortran bindings

Most of WRF is Fortran code, so the NetCDF Fortran bindings are required as well.

> cd /home/wrf/src
> tar -xf ../packages/netcdf-fortran-4.4.4.tar.gz ; cd netcdf-fortran-4.4.4
> CPPFLAGS=-I/home/wrf/netcdf/include ./configure --prefix=/home/wrf/netcdf \
  --disable-shared
> make && make install

Now, NetCDF and the NetCDF Fortran bindings are installed at /home/wrf/netcdf.

## Compiling WRF

The compilation of WRF takes some time.

> cd /home/wrf/src
> tar -xf ../packages/wrf-3.8.1.tar.bz2 ; cd WRFV3
> export NETCDF=/home/wrf/netcdf
> export WRFIO_NCD_LARGE_FILE_SUPPORT=1
> ./configure

In the following dialogue, select GNU / dmpar (7) (requires OpenMPI; alternatively, you can select GNU / serial, but then only 1 core will be used for the actual computation).

> ./compile -j1 em_real >& compile.log

(for some reason, the OpenMPI version does not like to be compiled in parallel, hence the -j1 option)

Sit back and enjoy the interlude...

# Interlude 1

## Map projection recommendations

• Lambert: mid latitudes
• Mercator: low latitudes
• Polar-stereographic: high latitudes
• Lon-Lat: global
• Generally: minimize the distortion!
• Conformity: Locally, angles are preserved
• Areas are not preserved

# Back to the preparation...

## Compiling WPS

WPS is the WRF preprocessing system.
> cd /home/wrf/src > tar -xf ../packages/wps-3.8.1.tar.bz2 ; cd WPS > export NETCDF=/home/wrf/netcdf > ./configure In the dialogue, select Linux / gfortran / serial (OpenMPI does not improve anything here). > ./compile >& compile.log ## Compiling UPP UPP is the only actually working and usable post-processing system. > cd /home/wrf/src > tar -xf ../packages/DTC_upp_v3.1.tar.gz ; cd UPPV3.1 > ./configure In the dialogue, select Linux / gfortran / serial (dmpar won't work - and doesn't improve much anyway!) > ./compile >& compile.log Now, the software is set up. Next... # Part 2 ## WPS WPS is the pre-processing package for WRF. Main settings: • timespan • area of interest (map projection + extents) It prepares • static data • initial data by extracting the relevant parts and interpolating the data to the area of interest (horizontally). ## WPS configuration Configuration file: WPS/namelist.wps Important settings: start_date and end_date: forecast time span interval_seconds: output time step duration e_sn, e_we: number of grid cells south-north/west-east geog_data_res: resolution of static data dx, dy: grid cell dimension in map units (f.e. meters) map_proj: map projection ("lambert", "mercator", "polar", "lat-lon") Projection parameters: ref_lat, ref_lon: grid center true_lat{1,2}, stand_lon: Lambert-parameters geog_data_path: absolute path to the static data (/home/wrf/data/WPS_GEOG) ## Pre-processing static data Static data := Digital elevation model (DEM), land-use, land-sea-mask, (SST),... > cd /home/wrf/src/WPS > ./geogrid.exe Result: geo_em file(s) containing static data horizontally interpolated to AOI. ## Initial data / BC Several sources, here: GFS (NOAA/NCEP global model), 4 runs per day. Temporal resolution: hourly forecast up to 120 hours, 3-hourly 120-240 hours, 12-hourly to 384 hours Spatial resolution: 0.5 degrees or 0.25 degrees (lon-lat) Download (freely available) from http://www.ftp.ncep.noaa.gov/data/nccf/com/gfs/prod/ File format: Grib2, one file per time step, containing several hundred bands (=many parameters at many levels) ## Pre-processing initial/BC data > cd /home/wrf/src/WPS > ./link_grib.csh /home/wrf/data/gfs.2016121700/gfs.*.gb2 > ln -sf ungrib/Variable_Tables/Vtable.GFS Vtable > ./ungrib.exe > ./metgrid.exe Result: met_em file(s) containing all the static and initial condition/boundary condition data for all time steps, horizontally interpolated to the AOI. # Part 3 ## WRF Main settings: • timespan • area of interest • physics options Initial condition is evolved using physical rules while respecting the boundary conditions. ## WRF configuration Configuration file: WRFV3/test/em_real/namelist.input Important settings: Start and end dates, time interval and domain definition as in WPS configuration history_interval: output data every N minutes Settings for physics schemes ## Vertical interpolation > cd /home/wrf/src/WRFV3/test/em_real > ln -sf ../../../WPS/met_em.* . > ./real.exe Result: wrfinput and wrfbdy files. Strictly speaking: another pre-processing step. ## WRF Model run > ./wrf.exe if WRF was compiled with OpenMPI support (strongly recommended), it can use multiple cores via > mpirun -n 2 wrf.exe Sit back and have a drink... 
# Equations and how to solve them

## ODE example

$\ddot{x} = 2$, $t \in [0,1]$, $x(0) = x(1) = 3$

Approximate the 2nd derivative by

$\ddot{x}(t) \approx \frac{x(t-h) - 2x(t) + x(t+h)}{h^2}$

("Finite difference method")

## Numerical approach

$\frac{1}{h^2}(x_{i-1} - 2x_i + x_{i+1}) = 2$, $i = 1,\ldots,N$

for the $N$ inner grid points $x_i := x(ih)$, $h := \frac{1}{N+1}$.

## Solving the equation at given points in time

We solve this equation, e.g., for $N = 3$ inner grid points, thus

$i=1$: $\frac{1}{h^2}(-2x_1 + x_2) = 2 - \frac{1}{h^2}x_0$

$i=2$: $\frac{1}{h^2}(x_1 - 2x_2 + x_3) = 2$

$i=3$: $\frac{1}{h^2}(x_2 - 2x_3) = 2 - \frac{1}{h^2}x_4$

Linear equations $\rightarrow$ $Ax = b$ with a $3 \times 3$ matrix $A$ and $x = (x_1, x_2, x_3)$ (a small numerical sketch of this system appears at the end of these notes).

Of course, we are considering large $N$ (plus more complicated and higher-dimensional equations), hence large systems of linear equations / (typically sparse) matrices.

# Post-processing

## Running UPP

UPP is the Unified Post Processing system, to my knowledge the only feasible way of producing usable output.

> cd /home/wrf/src/WRFV3/test/em_real/postprd
> ln -s ../wrfout_d01_* .
> cd /home/wrf/src/UPPV3.1/scripts
> ./run_unipost_xmas

## Transforming the output

The final output files are called WRFPRS_d01.00..WRFPRS_d01.180; they are in the Grib1 format. To see what is inside:

> gdalinfo WRFPRS_d01.00 | less

You'll see there are many parameters at many different levels (>300 bands). To transform the grib files into something more useful:

> gdal_translate -b 294 WRFPRS_d01.00 snowc.000.tiff

(This will take band number 294, which is the snow cover, and convert the file into a GeoTIFF, which is much easier to handle than a grib file.)

Finally, reproject the data from the original (Lambert) projection to a lon-lat grid:

> gdalwarp -t_srs epsg:4326 -dstnodata "-1" snowc.000.tiff snowc.4326.000.tiff

(you could also use epsg:3857 for web mercator as used in most web map applications, or any other projection).

## Visualisation

Start QGis.

Add vector layer: Ctrl+Shift+V, select ~/data/ne/ne_10m_admin_0_countries.shp

Add raster layer: Ctrl+Shift+R, select snowc.4326.000.tiff

# Congratulations!

You've completed a WRF model run, forecasting the weather from December 17, 2016 for 180 hours, and you visualized some of the output data! Hope you enjoyed the tutorial!
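The small boundary-value example from the interlude is easy to solve directly. A minimal NumPy sketch (not part of the WRF toolchain; it just illustrates the finite-difference system above). Since the exact solution is x(t) = t² − t + 3, the interior values can be checked:

```python
import numpy as np

N = 3                      # interior grid points
h = 1.0 / (N + 1)
x0, x4 = 3.0, 3.0          # boundary values x(0) = x(1) = 3

# Tridiagonal finite-difference matrix for d^2x/dt^2 = 2
A = (np.diag(-2.0 * np.ones(N)) +
     np.diag(np.ones(N - 1), 1) +
     np.diag(np.ones(N - 1), -1)) / h**2

b = 2.0 * np.ones(N)
b[0]  -= x0 / h**2         # move the boundary terms to the right-hand side
b[-1] -= x4 / h**2

x = np.linalg.solve(A, b)
t = h * np.arange(1, N + 1)
print(x)                   # matches t**2 - t + 3 at the interior points
print(t**2 - t + 3)
```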
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4988623857498169, "perplexity": 29658.722485398404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710918.58/warc/CC-MAIN-20221203011523-20221203041523-00808.warc.gz"}
http://clay6.com/qa/49075/if-the-functions-f-r-rightarrow-r-and-g-r-rightarrow-r-are-defined-by-f-y-l
# If the functions f: R − {1} $\rightarrow$ R and g: R − {−2} $\rightarrow$ R are defined by $f(y) = \large\frac{y}{y-1}$ and $g(y) = \large\frac{y+4}{y+2}$, then (f − g): R − {1, −2} $\rightarrow$ R is given by

$(f-g)(y) = \large\frac{4-y}{(y-1)(y+2)}$
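The stated answer follows directly by putting the difference over a common denominator:

$$(f-g)(y) = \frac{y}{y-1} - \frac{y+4}{y+2} = \frac{y(y+2) - (y+4)(y-1)}{(y-1)(y+2)} = \frac{(y^2+2y) - (y^2+3y-4)}{(y-1)(y+2)} = \frac{4-y}{(y-1)(y+2)}$$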
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9704678058624268, "perplexity": 3064.6930278676273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719033.33/warc/CC-MAIN-20161020183839-00357-ip-10-171-6-4.ec2.internal.warc.gz"}
https://stagedoor.dk/product/ben-nye-clear-liquid-latex-1oz/
# Ben Nye Clear Liquid Latex 1oz

kr. 75,00

Ben Nye Liquid Latex is one of the most essential products of Special FX kits with it being so fundamental and multi-functional. It is used to stretch skin for aging, extreme blisters and wounds. Latex can also be used as an adhesive for prosthetics. The colourless equivalent to Ben Nye's Liquid Latex, Clear Latex dries without colour for increased versatility. Works excellently for sealing Ben Nye Nose & Scar Wax as well as for molding various prosthetic accents.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9739137887954712, "perplexity": 23171.548060660745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949025.18/warc/CC-MAIN-20230329182643-20230329212643-00692.warc.gz"}
https://ch.mathworks.com/help/risk/overview-of-var-backtesting.html
## Overview of VaR Backtesting

Market risk is the risk of losses in positions arising from movements in market prices. Value-at-risk (VaR) is one of the main measures of financial risk. VaR is an estimate of how much value a portfolio can lose in a given time period with a given confidence level. For example, if the one-day 95% VaR of a portfolio is 10MM, then there is a 95% chance that the portfolio loses less than 10MM the following day. In other words, only 5% of the time (or about once in 20 days) do the portfolio losses exceed 10MM.

For many portfolios, especially trading portfolios, VaR is computed daily. At the closing of the following day, the actual profits and losses for the portfolio are known and can be compared to the VaR estimated the day before. You can use this daily data to assess the performance of VaR models, which is the goal of VaR backtesting.

The performance of VaR models can be measured in different ways. In practice, many different metrics and statistical tests are used to identify VaR models that are performing poorly or performing better. As a best practice, use more than one criterion to backtest the performance of VaR models, because all tests have strengths and weaknesses.

Suppose that you have VaR limits and corresponding returns or profits and losses for days t = 1,…,N. Use VaRt to denote the VaR estimate for day t (determined on day t − 1). Use Rt to denote the actual return or profit and loss observed on day t. Profits and losses are expressed in monetary units and represent value changes in a portfolio. The corresponding VaR limits are also given in monetary units. Returns represent the change in portfolio value as a proportion (or percentage) of its value on the previous day. The corresponding VaR limits are also given as a proportion (or percentage). The VaR limits must be produced from existing VaR models. Then, to perform a VaR backtesting analysis, provide these limits and their corresponding returns as data inputs to the VaR backtesting tools in Risk Management Toolbox™.

The toolbox supports these VaR backtests:

• Binomial test
• Traffic light test
• Kupiec's tests
• Christoffersen's tests
• Haas's tests

### Binomial Test

The most straightforward test is to compare the observed number of exceptions, x, to the expected number of exceptions. From the properties of the binomial distribution, you can build a confidence interval for the expected number of exceptions, using either exact binomial probabilities or a normal approximation; the `bin` function uses the normal approximation. By computing the probability of observing x exceptions, you can compute the probability of wrongly rejecting a good model when x exceptions occur. This is the p-value for the observed number of exceptions x. For a given test confidence level, a straightforward accept-or-reject result in this case is to fail the VaR model whenever x is outside the test confidence interval for the expected number of exceptions. "Outside the confidence interval" can mean too many exceptions, or too few exceptions. Too few exceptions might be a sign that the VaR model is too conservative.

The test statistic is

$Z_{bin} = \frac{x - Np}{\sqrt{Np(1-p)}}$

where x is the number of failures, N is the number of observations, and p = `1` − VaR level. The binomial test is approximately distributed as a standard normal distribution.

For more information, see References for Jorion and `bin`.
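As an illustration, a quick sketch of the binomial test statistic in Python (an assumption-laden re-implementation for exposition, not the toolbox's `bin` function):

```python
from math import sqrt
from scipy.stats import norm

def binomial_z(num_obs, num_fail, var_level=0.95):
    """Normal-approximation z-score and two-sided p-value
    for the observed exception count."""
    p = 1.0 - var_level
    z = (num_fail - num_obs * p) / sqrt(num_obs * p * (1 - p))
    return z, 2 * norm.sf(abs(z))

print(binomial_z(num_obs=250, num_fail=20))  # z ~ 2.18, p ~ 0.03
```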
### Traffic Light Test

A variation on the binomial test proposed by the Basel Committee is the traffic light test, or three-zones test. For a given number of exceptions x, you can compute the probability of observing up to x exceptions. That is, any number of exceptions from 0 to x, or the cumulative probability up to x. The probability is computed using a binomial distribution.

The three zones are defined as follows:

• The "red" zone starts at the number of exceptions where this probability equals or exceeds 99.99%. It is unlikely that this many exceptions come from a correct VaR model.

• The "yellow" zone covers the number of exceptions where the probability equals or exceeds 95% but is smaller than 99.99%. Even though there is a high number of violations, the violation count is not exceedingly high.

• Everything below the yellow zone is "green." If you have too few failures, they fall in the green zone. Only too many failures lead to model rejections.

For more information, see References for Basel Committee on Banking Supervision and `tl`.

### Kupiec's POF and TUFF Tests

Kupiec (1995) introduced a variation on the binomial test called the proportion of failures (POF) test. The POF test works with the binomial distribution approach. In addition, it uses a likelihood ratio to test whether the probability of exceptions is consistent with the probability p implied by the VaR confidence level. If the data suggests that the probability of exceptions is different than p, the VaR model is rejected. The POF test statistic is

$LR_{POF} = -2\log\left(\frac{(1-p)^{N-x} p^x}{\left(1 - \frac{x}{N}\right)^{N-x} \left(\frac{x}{N}\right)^x}\right)$

where x is the number of failures, N the number of observations, and p = `1` − VaR level. This statistic is asymptotically distributed as a chi-square variable with 1 degree of freedom. The VaR model fails the test if this likelihood ratio exceeds a critical value. The critical value depends on the test confidence level.

Kupiec also proposed a second test called the time until first failure (TUFF). The TUFF test looks at when the first rejection occurred. If it happens too soon, the test fails the VaR model. Checking only the first exception leaves much information out; specifically, whatever happened after the first exception is ignored. (The TBFI test extends the TUFF approach to include all the failures; see `tbfi`.)

The TUFF test is also based on a likelihood ratio, but the underlying distribution is a geometric distribution. If n is the number of days until the first rejection, the test statistic is given by

$LR_{TUFF} = -2\log\left(\frac{p(1-p)^{n-1}}{\left(\frac{1}{n}\right)\left(1 - \frac{1}{n}\right)^{n-1}}\right)$

This statistic is asymptotically distributed as a chi-square variable with 1 degree of freedom.

For more information, see References for Kupiec, `pof`, and `tuff`.
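A minimal sketch of the POF statistic in Python (again an illustrative re-implementation of the formula above, not the toolbox's `pof` function; it assumes 0 < x < N so the logarithms are defined):

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(num_obs, num_fail, var_level=0.95):
    """Kupiec proportion-of-failures likelihood ratio and p-value."""
    p = 1.0 - var_level              # expected failure probability
    phat = num_fail / num_obs        # observed failure rate
    lr = -2.0 * ((num_obs - num_fail) * np.log(1 - p) + num_fail * np.log(p)
                 - (num_obs - num_fail) * np.log(1 - phat)
                 - num_fail * np.log(phat))
    return lr, chi2.sf(lr, df=1)     # asymptotically chi-square, 1 dof

lr, pval = kupiec_pof(num_obs=250, num_fail=20, var_level=0.95)
print(lr, pval)   # a large LR (small p-value) rejects the VaR model
```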
### Christoffersen's Interval Forecast Tests

Christoffersen (1998) proposed a test to measure whether the probability of observing an exception on a particular day depends on whether an exception occurred on the previous day. Unlike the unconditional probability of observing an exception, Christoffersen's test measures the dependency between consecutive days only. The test statistic for independence in Christoffersen's interval forecast (IF) approach is given by

$LR_{CCI} = -2\log\left(\frac{(1-\pi)^{n_{00}+n_{10}} \pi^{n_{01}+n_{11}}}{(1-\pi_0)^{n_{00}} \pi_0^{n_{01}} (1-\pi_1)^{n_{10}} \pi_1^{n_{11}}}\right)$

where

• $n_{00}$ = number of periods with no failures followed by a period with no failures
• $n_{10}$ = number of periods with failures followed by a period with no failures
• $n_{01}$ = number of periods with no failures followed by a period with failures
• $n_{11}$ = number of periods with failures followed by a period with failures

and

• $\pi_0$: probability of having a failure on period t, given that no failure occurred on period t − 1; $\pi_0 = n_{01}/(n_{00} + n_{01})$
• $\pi_1$: probability of having a failure on period t, given that a failure occurred on period t − 1; $\pi_1 = n_{11}/(n_{10} + n_{11})$
• $\pi$: probability of having a failure on period t; $\pi = (n_{01} + n_{11})/(n_{00} + n_{01} + n_{10} + n_{11})$

This statistic is asymptotically distributed as a chi-square with 1 degree of freedom. You can combine this statistic with the frequency POF test to get a conditional coverage (CC) mixed test:

$LR_{CC} = LR_{POF} + LR_{CCI}$

This test is asymptotically distributed as a chi-square variable with 2 degrees of freedom.

For more information, see References for Christoffersen, `cc`, and `cci`.

### Haas's Time Between Failures or Mixed Kupiec's Test

Haas (2001) extended Kupiec's TUFF test to incorporate the time information between all the exceptions in the sample. Haas's test applies the TUFF test to each exception in the sample and aggregates the time-between-failures (TBF) test statistic:

$LR_{TBFI} = -2\sum_{i=1}^{x} \log\left(\frac{p(1-p)^{n_i - 1}}{\left(\frac{1}{n_i}\right)\left(1 - \frac{1}{n_i}\right)^{n_i - 1}}\right)$

In this statistic, p = `1` − VaR level and $n_i$ is the number of days between failures i − 1 and i (or until the first exception for i = 1). This statistic is asymptotically distributed as a chi-square variable with x degrees of freedom, where x is the number of failures.

Like Christoffersen's test, you can combine this test with the frequency POF test to get a TBF mixed test, sometimes called Haas's mixed Kupiec's test:

$LR_{TBF} = LR_{POF} + LR_{TBFI}$

This test is asymptotically distributed as a chi-square variable with x + 1 degrees of freedom.

For more information, see References for Haas, `tbf`, and `tbfi`.

## References

[1] Basel Committee on Banking Supervision. Supervisory Framework for the Use of "Backtesting" in Conjunction with the Internal Models Approach to Market Risk Capital Requirements. January 1996, https://www.bis.org/publ/bcbs22.htm.

[2] Christoffersen, P. "Evaluating Interval Forecasts." International Economic Review. Vol. 39, 1998, pp. 841–862.

[3] Cogneau, P. "Backtesting Value-at-Risk: How Good is the Model?" Intelligent Risk, PRMIA, July 2015.

[4] Haas, M. "New Methods in Backtesting." Financial Engineering, Research Center Caesar, Bonn, 2001.

[5] Jorion, P. Financial Risk Manager Handbook. 6th Edition, Wiley Finance, 2011.

[6] Kupiec, P. "Techniques for Verifying the Accuracy of Risk Management Models." Journal of Derivatives. Vol. 3, 1995, pp. 73–84.

[7] McNeil, A., Frey, R., and Embrechts, P. Quantitative Risk Management. Princeton University Press, 2005.

[8] Nieppola, O. "Backtesting Value-at-Risk Models." Master's Thesis, Helsinki School of Economics, 2009.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8560911417007446, "perplexity": 1250.9517525697795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251687958.71/warc/CC-MAIN-20200126074227-20200126104227-00053.warc.gz"}
https://www.physicsforums.com/threads/pressure-on-table-question.401604/
# Pressure on table Question

1. May 7, 2010

1. The problem statement: A vertical force of 30 N is applied uniformly to a flat button with a radius of 1 cm that is lying on a table. Estimate the pressure applied to the button.

2. Relevant equations: P = F/A

3. The attempt at a solution: P = F/A = 30/((3.14)(r)^2)

2. May 7, 2010

### tiny-tim

Welcome to PF!

(try using the X2 tag just above the Reply box)

That's the correct formula … so what answer do you get (and remember to be careful about the units)?
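For reference, the arithmetic tiny-tim is hinting at (a quick check, not part of the original thread; note that the radius must first be converted from centimetres to metres):

```python
import math

F = 30.0    # applied force in newtons
r = 0.01    # radius: 1 cm expressed in metres

P = F / (math.pi * r**2)   # pressure = force / area of the circular button
print(P)                   # ~95493 Pa, i.e. roughly 9.5e4 Pa
```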
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8080083727836609, "perplexity": 3229.140180400785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822822.66/warc/CC-MAIN-20171018070528-20171018090528-00871.warc.gz"}
http://www.ck12.org/book/CK-12-Algebra-I-Second-Edition/r1/section/3.1/
# 3.1: One-Step Equations

## Learning Objectives

• Solve an equation using addition.
• Solve an equation using subtraction.
• Solve an equation using multiplication.
• Solve an equation using division.

## Introduction

Nadia is buying a new mp3 player. Peter watches her pay for the player with a $100 bill. She receives $22.00 in change, and from only this information, Peter works out how much the player cost. How much was the player?

In algebra, we can solve problems like this using an equation. An equation is an algebraic expression that involves an equals sign. If we use the letter $x$ to represent the cost of the mp3 player, we can write the equation $x + 22 = 100$. This tells us that the value of the player plus the value of the change received is equal to the $100 that Nadia paid.

Another way we could write the equation would be $x = 100 - 22$. This tells us that the value of the player is equal to the total amount of money Nadia paid minus the change, $(100 - 22)$. This equation is mathematically equivalent to the first one, but it is easier to solve.

In this chapter, we will learn how to solve for the variable in a one-variable linear equation. Linear equations are equations in which each term is either a constant, or a constant times a single variable (raised to the first power). The term linear comes from the word line, because the graph of a linear equation is always a line.

We'll start with simple problems like the one in the last example.

## Solving Equations Using Addition and Subtraction

When we work with an algebraic equation, it's important to remember that the two sides have to stay equal for the equation to stay true. We can change the equation around however we want, but whatever we do to one side of the equation, we have to do to the other side. In the introduction above, for example, we could get from the first equation to the second equation by subtracting 22 from both sides:

$$\begin{aligned} x + 22 &= 100 \\ x + 22 - 22 &= 100 - 22 \\ x &= 100 - 22 \end{aligned}$$

Similarly, we can add numbers to each side of an equation to help solve for our unknown.

Example 1

Solve $x - 3 = 9$.

Solution

To solve an equation for $x$, we need to isolate $x$; that is, we need to get it by itself on one side of the equals sign. Right now our $x$ has a 3 subtracted from it. To reverse this, we'll add 3, but we must add 3 to both sides.

$$\begin{aligned} x - 3 &= 9 \\ x - 3 + 3 &= 9 + 3 \\ x + 0 &= 9 + 3 \\ x &= 12 \end{aligned}$$

Example 2

Solve $z - 9.7 = -1.026$.

Solution

It doesn't matter what the variable is; the solving process is the same.

$$\begin{aligned} z - 9.7 &= -1.026 \\ z - 9.7 + 9.7 &= -1.026 + 9.7 \\ z &= 8.674 \end{aligned}$$

Make sure you understand the addition of decimals in this example!

Example 3

Solve $x + \frac{4}{7} = \frac{9}{5}$.

Solution

To isolate $x$, we need to subtract $\frac{4}{7}$ from both sides.

$$\begin{aligned} x + \frac{4}{7} &= \frac{9}{5} \\ x + \frac{4}{7} - \frac{4}{7} &= \frac{9}{5} - \frac{4}{7} \\ x &= \frac{9}{5} - \frac{4}{7} \end{aligned}$$

Now we have to subtract fractions, which means we need to find the LCD. Since 5 and 7 are both prime, their lowest common multiple is just their product, 35.

$$\begin{aligned} x &= \frac{9}{5} - \frac{4}{7} = \frac{7 \cdot 9}{7 \cdot 5} - \frac{4 \cdot 5}{7 \cdot 5} = \frac{63}{35} - \frac{20}{35} = \frac{63 - 20}{35} = \frac{43}{35} \end{aligned}$$

Make sure you're comfortable with decimals and fractions!
To master algebra, you'll need to work with them frequently.

## Solving Equations Using Multiplication and Division

Suppose you are selling pizza for $1.50 a slice and you can get eight slices out of a single pizza. How much money do you get for a single pizza? It shouldn't take you long to figure out that you get $8 \times \$1.50 = \$12.00$. You solved this problem by multiplying. Here's how to do the same thing algebraically, using $x$ to stand for the cost in dollars of the whole pizza.

Example 4

Solve $\frac{1}{8} \cdot x = 1.5$.

Our $x$ is being multiplied by one-eighth. To cancel that out and get $x$ by itself, we have to multiply by the reciprocal, 8. Don't forget to multiply both sides of the equation.

$$\begin{aligned} 8\left(\frac{1}{8} \cdot x\right) &= 8(1.5) \\ x &= 12 \end{aligned}$$

Example 5

Solve $\frac{9x}{5} = 5$.

$\frac{9x}{5}$ is equivalent to $\frac{9}{5} \cdot x$, so to cancel out that $\frac{9}{5}$, we multiply by the reciprocal, $\frac{5}{9}$.

$$\begin{aligned} \frac{5}{9}\left(\frac{9x}{5}\right) &= \frac{5}{9}(5) \\ x &= \frac{25}{9} \end{aligned}$$

Example 6

Solve $0.25x = 5.25$.

0.25 is the decimal equivalent of one fourth, so to cancel out the 0.25 factor we would multiply by 4.

$$\begin{aligned} 4(0.25x) &= 4(5.25) \\ x &= 21 \end{aligned}$$

Solving by division is another way to isolate $x$. Suppose you buy five identical candy bars, and you are charged $3.25. How much did each candy bar cost? You might just divide $3.25 by 5, but let's see how this problem looks in algebra.

Example 7

Solve $5x = 3.25$.

To cancel the 5, we divide both sides by 5.

$$\begin{aligned} \frac{5x}{5} &= \frac{3.25}{5} \\ x &= 0.65 \end{aligned}$$

Example 8

Solve $7x = \frac{5}{11}$.

Divide both sides by 7.

$$\begin{aligned} x &= \frac{5}{11 \cdot 7} \\ x &= \frac{5}{77} \end{aligned}$$

Example 9

Solve $1.375x = 1.2$.

Divide by 1.375.

$$\begin{aligned} x &= \frac{1.2}{1.375} \\ x &= 0.8\overline{72} \end{aligned}$$

Notice the bar above the final two decimals; it means that those digits recur, or repeat. The full answer is 0.872727272727272....

To see more examples of one- and two-step equation solving, watch the Khan Academy video series starting at http://www.youtube.com/watch?v=bAerID24QJ0.

## Solve Real-World Problems Using Equations

Example 10

In the year 2017, Anne will be 45 years old. In what year was Anne born?

The unknown here is the year Anne was born, so that's our variable $x$. Here's our equation:

$$\begin{aligned} x + 45 &= 2017 \\ x + 45 - 45 &= 2017 - 45 \\ x &= 1972 \end{aligned}$$

Anne was born in 1972.

Example 11

A mail order electronics company stocks a new mini DVD player and is using a balance to determine the shipping weight. Using only one-pound weights, the shipping department found that the following arrangement balances:

(Figure: two DVD players on one side of the balance, five one-pound weights on the other.)

How much does each DVD player weigh?

Solution

Since the system balances, the total weight on each side must be equal. To write our equation, we'll use $x$ for the weight of one DVD player, which is unknown. There are two DVD players, weighing a total of $2x$ pounds, on the left side of the balance, and on the right side are 5 one-pound weights, weighing a total of 5 pounds. So our equation is $2x = 5$. Dividing both sides by 2 gives us $x = 2.5$.

Each DVD player weighs 2.5 pounds.

Example 12

In 2004, Takeru Kobayashi of Nagano, Japan, ate 53.5 hot dogs in 12 minutes. This was 3 more hot dogs than his own previous world record, set in 2002. Calculate:

a) How many minutes it took him to eat one hot dog.
b) How many hot dogs he ate per minute.
c) What his old record was.

Solution

a) We know that the total time for 53.5 hot dogs is 12 minutes. We want to know the time for one hot dog, so that's $x$. Our equation is $53.5x = 12$.
Then we divide both sides by 53.5 to get $x = \frac{12}{53.5}$, or $x = 0.224$ minutes. We can also multiply by 60 to get the time in seconds; 0.224 minutes is about 13.5 seconds. So that's how long it took Takeru to eat one hot dog.

b) Now we're looking for hot dogs per minute instead of minutes per hot dog. We'll use the variable $y$ instead of $x$ this time so we don't get the two confused. 12 minutes, times the number of hot dogs per minute, equals the total number of hot dogs, so $12y = 53.5$. Dividing both sides by 12 gives us $y = \frac{53.5}{12}$, or $y = 4.458$ hot dogs per minute.

c) We know that his new record is 53.5, and we know that's three more than his old record. If we call his old record $z$, we can write the following equation: $z + 3 = 53.5$. Subtracting 3 from both sides gives us $z = 50.5$. So Takeru's old record was 50.5 hot dogs in 12 minutes.

## Lesson Summary

• An equation in which each term is either a constant or the product of a constant and a single variable is a linear equation.
• We can add, subtract, multiply, or divide both sides of an equation by the same value and still have an equivalent equation.
• To solve an equation, isolate the unknown variable on one side of the equation by applying one or more arithmetic operations to both sides.

## Review Questions

1. Solve the following equations for $x$.
   1. $x + 11 = 7$
   2. $x - 1.1 = 3.2$
   3. $7x = 21$
   4. $4x = 1$
   5. $\frac{5x}{12} = \frac{2}{3}$
   6. $x + \frac{5}{2} = \frac{2}{3}$
   7. $x - \frac{5}{6} = \frac{3}{8}$
   8. $0.01x = 11$
2. Solve the following equations for the unknown variable.
   1. $q - 13 = -13$
   2. $z + 1.1 = 3.0001$
   3. $21s = 3$
   4. $t + \frac{1}{2} = \frac{1}{3}$
   5. $\frac{7f}{11} = \frac{7}{11}$
   6. $\frac{3}{4} = -\frac{1}{2} - y$
   7. $6r = \frac{3}{8}$
   8. $\frac{9b}{16} = \frac{3}{8}$
3. Peter is collecting tokens on breakfast cereal packets in order to get a model boat. In eight weeks he has collected 10 tokens. He needs 25 tokens for the boat. Write an equation and determine the following information.
   1. How many more tokens he needs to collect, $n$.
   2. How many tokens he collects per week, $w$.
   3. How many more weeks remain until he can send off for his boat, $r$.
4. Juan has baked a cake and wants to sell it in his bakery. He is going to cut it into 12 slices and sell them individually. He wants to sell it for three times the cost of making it. The ingredients cost him $8.50, and he allowed $1.25 to cover the cost of electricity to bake it. Write equations that describe the following statements.
   1. The amount of money that he sells the cake for $(u)$.
   2. The amount of money he charges for each slice $(c)$.
   3. The total profit he makes on the cake $(w)$.
5. Jane is baking cookies for a large party. She has a recipe that will make one batch of two dozen cookies, and she decides to make five batches. To make five batches, she finds that she will need 12.5 cups of flour and 15 eggs.
   1. How many cookies will she make in all?
   2. How many cups of flour go into one batch?
   3. How many eggs go into one batch?
   4. If Jane only has a dozen eggs on hand, how many more does she need to make five batches?
   5. If she doesn't go out to get more eggs, how many batches can she make? How many cookies will that be?
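For readers who like to check their answers by machine, a short SymPy sketch (an optional aside, not part of the CK-12 lesson) verifying two of the worked examples:

```python
from sympy import symbols, Eq, solve, Rational

x = symbols('x')

# Example 3: x + 4/7 = 9/5
print(solve(Eq(x + Rational(4, 7), Rational(9, 5)), x))  # [43/35]

# Example 8: 7x = 5/11
print(solve(Eq(7 * x, Rational(5, 11)), x))              # [5/77]
```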
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 79, "texerror": 0, "math_score": 0.8693649768829346, "perplexity": 831.3629226580035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929023.5/warc/CC-MAIN-20150521113209-00131-ip-10-180-206-219.ec2.internal.warc.gz"}
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/08%3A_The_Hydrogen_Atom/8.04%3A_Magnetic_Properties_and_the_Zeeman_Effect
# 8.4: Magnetic Properties and the Zeeman Effect

Magnetism results from the circular motion of charged particles. This property is demonstrated on a macroscopic scale by making an electromagnet from a coil of wire and a battery. Electrons moving through the coil produce a magnetic field (Figure $$\PageIndex{1}$$), which can be thought of as originating from a magnetic dipole or a bar magnet. Electrons in atoms also are moving charges with angular momentum, so they too produce a magnetic dipole, which is why some materials are magnetic. A magnetic dipole interacts with an applied magnetic field, and the energy of this interaction is given by the scalar product of the magnetic dipole moment and the magnetic field, $$\vec{B}$$:

$E_B = - \vec{\mu} _m \cdot \vec{B} \label {8.4.1}$

Magnets are acted on by forces and torques when placed within an external applied magnetic field (Figure $$\PageIndex{2}$$). In a uniform external field, a magnet experiences no net force, but a net torque. The torque tries to align the magnetic moment ($$\vec{\mu} _m$$) of the magnet with the external field $$\vec{B}$$. The magnetic moment of a magnet points from its south pole to its north pole. In a non-uniform magnetic field a current loop, and therefore a magnet, experiences a net force, which tries to pull an aligned dipole into regions where the magnitude of the magnetic field is larger and push an anti-aligned dipole into regions where the magnitude of the magnetic field is smaller.

## Quantum Effects

As expected, the quantum picture is different. Pieter Zeeman was one of the first to observe the splittings of spectral lines in a magnetic field caused by this interaction. Consequently, such splittings are known as the Zeeman effect. Let's now use our current knowledge to predict what the Zeeman effect for the 2p to 1s transition in hydrogen would look like, and then compare this prediction with a more complete theory.

To understand the Zeeman effect, which uses a magnetic field to remove the degeneracy of different angular momentum states, we need to examine how an electron in a hydrogen atom interacts with an external magnetic field, $$\vec{B}$$. Since magnetism results from the circular motion of charged particles, we should look for a relationship between the angular momentum $$\vec{L}$$ and the magnetic dipole moment $$\vec{\mu} _m$$.
The relationship between the magnetic dipole moment $$\vec{\mu} _m$$ (also referred to simply as the magnetic moment) and the angular momentum $$\vec{L}$$ of a particle with mass $$m$$ and charge $$q$$ is given by

$\vec{\mu} _m = \dfrac {q}{2m} \vec{L} \label {8.4.2}$

For an electron, this equation becomes

$\vec{\mu} _m = - \dfrac {e}{2m_e} \vec{L} \label {8.4.3}$

where the specific charge and mass of the electron have been substituted for $$q$$ and $$m$$. The magnetic moment for the electron is a vector pointing in the direction opposite to $$\vec{L}$$, both of which classically are perpendicular to the plane of the rotational motion.

Exercise $$\PageIndex{1}$$

Will an electron in the ground state of hydrogen have a magnetic moment? Why or why not?

The relationship between the angular momentum of a particle and its magnetic moment is commonly expressed as a ratio, called the gyromagnetic ratio, $$\gamma$$. Gyro is Greek for turn, so gyromagnetic simply relates turning (angular momentum) to magnetism. Now you also know why the Greek sandwiches made with meat cut from a spit turning over a fire are called gyros.

$\gamma = \dfrac {\mu _m}{L} = \dfrac {q}{2m} \label {8.4.4}$

In the specific case of an electron,

$\gamma _e = - \dfrac {e}{2m_e} \label {8.4.5}$

Exercise $$\PageIndex{2}$$

Calculate the magnitude of the gyromagnetic ratio for an electron.

To determine the energy of a hydrogen atom in a magnetic field we need to include the operator form of the hydrogen atom Hamiltonian. The Hamiltonian always consists of all the energy terms that are relevant to the problem at hand.

$\hat {H} = \hat {H} ^0 + \hat {H} _m \label {8.4.6}$

where $$\hat {H} ^0$$ is the Hamiltonian operator in the absence of the field and $$\hat {H} _m$$ is written using the operator forms of Equations $$\ref{8.4.1}$$ and $$\ref{8.4.3}$$,

$\hat {H}_m = - \hat {\mu}_m \cdot \vec{B} = \dfrac {e}{2m_e} \hat {L} \cdot \vec{B} \label {8.4.7}$

The scalar product

$\hat {L} \cdot \vec{B} = \hat {L}_x B_x + \hat {L}_y B_y + \hat {L}_z B_z \label {8.4.8}$

simplifies if the z-axis is defined as the direction of the external field, because then $$B_x$$ and $$B_y$$ are automatically 0, and Equation \ref{8.4.6} becomes

$\hat {H} = \hat {H}^0 + \dfrac {eB_z}{2m_e} \hat {L} _z \label {8.4.9}$

where $$B_z$$ is the magnitude of the magnetic field, which is along the z-axis.

We now can ask, "What is the effect of a magnetic field on the energy of the hydrogen atom orbitals?" To answer this question, we will not solve the Schrödinger equation again; we simply calculate the expectation value of the energy, $$\left \langle E \right \rangle$$, using the existing hydrogen atom wavefunctions and the new Hamiltonian operator.

$\left \langle E \right \rangle = \left \langle \hat {H}^0 \right \rangle + \dfrac {eB_z}{2m_e} \left \langle \hat {L} _z \right \rangle \label {8.4.10}$

where

$\left \langle \hat {H}^0 \right \rangle = \int \psi ^*_{n,l,m_l} \hat {H}^0 \psi _{n,l,m_l} d \tau = E_n \label {8.4.11}$

and

$\left \langle \hat {L}_z \right \rangle = \int \psi ^*_{n,l,m_l} \hat {L}_z \psi _{n,l,m_l} d \tau = m_l \hbar \label {8.4.12}$

Exercise $$\PageIndex{3}$$

Show that the expectation value $$\left \langle \hat {L}_z \right \rangle = m_l \hbar$$.

The expectation value approach provides an exact result in this case because the hydrogen atom wavefunctions are eigenfunctions of both $$\hat {H} ^0$$ and $$\hat {L}_z$$.
If the wavefunctions were not eigenfunctions of the operator associated with the magnetic field, then this approach would provide a first-order estimate of the energy. First and higher order estimates of the energy are part of a general approach to developing approximate solutions to the Schrödinger equation. This approach, called perturbation theory, is discussed in the next chapter.

The expectation value calculated for the total energy in this case is the sum of the energy in the absence of the field, $$E_n$$, plus the Zeeman energy, $$\dfrac {e \hbar B_z m_l}{2m_e}$$:

\begin{align} \left \langle E \right \rangle &= E_n + \dfrac {e \hbar B_z m_l}{2m_e} \\[4pt] &= E_n + \mu _B B_z m_l \label {8.4.13} \end{align}

The factor

$\dfrac {e \hbar}{2m_e} = - \gamma _e \hbar = \mu _B \label {8.4.14}$

defines the constant $$\mu _B$$, called the Bohr magneton, which is taken to be the fundamental magnetic moment. Its value is $$9.2732 \times 10^{-21}$$ erg/Gauss or $$9.2732 \times 10^{-24}$$ Joule/Tesla. This factor will help you to relate magnetic fields, measured in Gauss or Tesla, to energies, measured in ergs or Joules, for any particle with the same charge and mass as an electron.

Equation \ref{8.4.13} shows that the $$m_l$$ quantum number degeneracy of the hydrogen atom is removed by the magnetic field. For example, the three states $$\psi _{211}$$, $$\psi _{21-1}$$, and $$\psi _{210}$$, which are degenerate in zero field, have different energies in a magnetic field, as shown in Figure $$\PageIndex{3}$$.

The $$m_l = 0$$ state, for which the component of angular momentum (and hence also the magnetic moment) in the external field direction is zero, experiences no interaction with the magnetic field. The $$m_l = +1$$ state, for which the angular momentum in the z-direction is $$+\hbar$$ and the magnetic moment is in the opposite direction, against the field, experiences an increase in energy in the presence of the field. Maintaining the magnetic dipole against the external field direction is like holding a small bar magnet with its poles aligned exactly opposite to the poles of a large magnet (Figure $$\PageIndex{5}$$). It is a higher energy situation than when the magnetic moments are aligned with each other.

Exercise $$\PageIndex{4}$$

Carry out the steps going from Equation $$\ref{8.4.10}$$ to Equation $$\ref{8.4.13}$$.

Exercise $$\PageIndex{5}$$

Consider the effect of changing the magnetic field on the magnitude of the Zeeman splitting. Sketch a diagram where the magnetic field strength is on the x-axis and the energy of the three 2p orbitals is on the y-axis to show the trend in splitting magnitudes with increasing magnetic field. Be quantitative: calculate and plot the exact numerical values using a software package of your choice.

Exercise $$\PageIndex{6}$$

Based on your calculations in Exercise $$\PageIndex{2}$$, sketch a luminescence spectrum for the hydrogen atom in the n = 2 level in a magnetic field of 1 Tesla. Provide the numerical value for each of the transition energies. Use cm⁻¹ or electron volts for the energy units.

This page titled 8.4: Magnetic Properties and the Zeeman Effect is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
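Exercise $$\PageIndex{5}$$ asks for quantitative values. As a minimal sketch of that calculation (assuming nothing beyond the Bohr magneton value quoted above and standard values of Planck's constant and the speed of light), the Zeeman shifts $$E = \mu_B B_z m_l$$ of the three 2p states at 1 Tesla can be computed like this:

```python
# Zeeman shift E = mu_B * B_z * m_l for the 2p (l = 1) states of hydrogen.
mu_B = 9.2732e-24        # Bohr magneton, J/T (value quoted in the text above)
h = 6.62607e-34          # Planck constant, J s
c = 2.99792458e10        # speed of light, cm/s (so E/(h*c) is in cm^-1)

B_z = 1.0                # magnetic field strength, Tesla
for m_l in (-1, 0, +1):
    E = mu_B * B_z * m_l                       # energy shift in Joules
    print(f"m_l = {m_l:+d}: {E:+.3e} J = {E / (h * c):+.5f} cm^-1")
```

At 1 Tesla the splitting between adjacent $$m_l$$ levels works out to roughly 0.47 cm⁻¹, which sets the scale for the spectrum requested in Exercise $$\PageIndex{6}$$.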
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9737071394920349, "perplexity": 255.73314287387063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337428.0/warc/CC-MAIN-20221003164901-20221003194901-00608.warc.gz"}
http://science.sciencemag.org/content/156/3777/939
Reports

# Forces between Lecithin Bimolecular Leaflets Are Due to a Disordered Surface Layer

Science 19 May 1967: Vol. 156, Issue 3777, pp. 939-942
DOI: 10.1126/science.156.3777.939

## Abstract

The long-range repulsion observed between bileaflets of lecithin cannot be explained either with the usual view that the polar groups are arrayed coplanar with the bileaflet surface or by the assumption that charges protrude straight into the aqueous environment. Statistical-thermodynamic analysis of experimental data suggests rather that structure of the leaflet surface is better described as a diffuse charge layer. Forces between leaflets are caused largely by entropy changes in the surface with leaflet separation.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9760428667068481, "perplexity": 5334.450336272998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500550969387.94/warc/CC-MAIN-20170728143709-20170728163709-00454.warc.gz"}
http://forum.math.toronto.edu/index.php?topic=698.0;prev_next=next
### Author Topic: HA7-P2

#### Victor Ivrii
##### HA7-P2
« on: November 01, 2015, 05:08:53 PM »

#### Xi Yue Wang
##### Re: HA7-P2
« Reply #1 on: November 03, 2015, 12:30:16 PM »

Note that the Fourier transform of $f(x)$ is given by the formula
$$\hat{f}(\omega) = \frac{k}{2\pi}\int_{-\infty}^{\infty} f(x)e^{-i\omega x}dx$$
where $k = 1$, $\omega = \frac{\pi n}{l}$.

WTH? There is no $l$ or $n$; $\omega\in \mathbb{R}$ is continuous.

For part (a), given $\alpha > 0$ and $f(x) = e^{-\alpha |x|}$:
\begin{align*}
\hat{f}(\omega) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-\alpha |x|}e^{-i\omega x}\, dx \\
&= \frac{1}{2\pi}\left[\int_{-\infty}^{0} e^{\alpha x - i\omega x}\, dx + \int_{0}^{\infty} e^{-\alpha x - i\omega x}\, dx\right]\\
&= \frac{1}{2\pi}\left[\int_{0}^{\infty} e^{-\alpha x + i\omega x}\, dx + \int_{0}^{\infty} e^{-\alpha x - i\omega x}\, dx\right]\\
&= \frac{1}{2\pi}\left[\frac{1}{\alpha - i\omega}+\frac{1}{\alpha + i\omega}\right]\\
&= \frac{\alpha}{\pi(\alpha^2 + \omega^2)}
\end{align*}

#### Xi Yue Wang
##### Re: HA7-P2
« Reply #2 on: November 03, 2015, 01:01:08 PM »

For part (b), let $f(x)=e^{-\alpha|x|}$ and $g(x) = e^{-\alpha |x|}\cos(\beta x)$, where $\beta >0$. Then
$$\cos(\beta x) = \frac{1}{2}\left[e^{i\beta x} + e^{-i\beta x}\right],\qquad g(x) = \frac{1}{2}\left[f(x)e^{i\beta x }+f(x)e^{-i\beta x}\right],$$
$$\hat{g}(\omega) = \frac{1}{2}\left[\hat{f}(\omega - \beta)+\hat{f}(\omega +\beta)\right].$$
Combining this with the solution that we found in part (a), we have
$$\hat{g}(\omega) = \frac{1}{2}\left[\frac{\alpha}{\pi(\alpha^2+(\omega - \beta)^2)}+\frac{\alpha}{\pi(\alpha^2 + (\omega +\beta)^2)}\right].$$
Similarly, for $g(x) = e^{-\alpha |x|}\sin(\beta x)$, where $\beta >0$:
$$\sin(\beta x) = \frac{1}{2i}\left[e^{i\beta x} - e^{-i\beta x}\right],\qquad \hat{g}(\omega) = \frac{1}{2i}\left[\hat{f}(\omega - \beta)-\hat{f}(\omega +\beta)\right],$$
$$\hat{g}(\omega) = \frac{1}{2i}\left[\frac{\alpha}{\pi(\alpha^2+(\omega - \beta)^2)}-\frac{\alpha}{\pi(\alpha^2 + (\omega +\beta)^2)}\right].$$

#### Xi Yue Wang
##### Re: HA7-P2
« Reply #3 on: November 03, 2015, 05:33:18 PM »

For part (c), we are given $g(x) = xe^{-\alpha |x|}$, where $f(x) = e^{-\alpha |x|}$.
Hence $g(x) = xf(x)$, and so $\hat{g}(\omega) = i\hat{f}'(\omega)$. We have
$$\hat{f}(\omega) = \frac{\alpha}{\pi(\alpha^2 +\omega^2)},\qquad \hat{f}'(\omega) = -\frac{2\alpha\omega}{\pi(\alpha^2+\omega^2)^2}.$$
Then we get
$$\hat{g}(\omega) = -\frac{2i\alpha\omega}{\pi(\alpha^2+\omega^2)^2}.$$

#### Xi Yue Wang
##### Re: HA7-P2
« Reply #4 on: November 03, 2015, 06:24:13 PM »

For part (d), given $g(x) = xe^{-\alpha |x|}\cos(\beta x)$, where $f(x) = e^{-\alpha |x|}\cos(\beta x)$: by the same theorem, $\hat{g}(\omega) = i \hat{f}'(\omega)$.

By part (b), we have $\hat{f}(\omega) = \frac{1}{2}\left[\frac{\alpha}{\pi(\alpha^2+(\omega - \beta)^2)}+\frac{\alpha}{\pi(\alpha^2+(\omega + \beta)^2)}\right]$. Then
$$\hat{f}'(\omega) = -\frac{\alpha}{\pi}\left[\frac{(\omega - \beta)}{(\alpha^2 + (\omega - \beta)^2)^2} + \frac{(\omega + \beta)}{(\alpha^2 + (\omega + \beta)^2)^2}\right].$$
Hence,
$$\hat{g}(\omega) = -\frac{i\alpha}{\pi}\left[\frac{(\omega - \beta)}{(\alpha^2 + (\omega - \beta)^2)^2} + \frac{(\omega + \beta)}{(\alpha^2 + (\omega + \beta)^2)^2}\right].$$
Similarly, for $g(x) = xe^{-\alpha |x|}\sin(\beta x)$, where $f(x) = e^{-\alpha |x|}\sin(\beta x)$: by part (b), $\hat{f}(\omega) = \frac{1}{2i}\left[\frac{\alpha}{\pi(\alpha^2+(\omega - \beta)^2)} - \frac{\alpha}{\pi(\alpha^2+(\omega + \beta)^2)}\right]$. Then
$$\hat{f}'(\omega) = -\frac{\alpha}{i\pi}\left[\frac{(\omega - \beta)}{(\alpha^2 + (\omega - \beta)^2)^2} - \frac{(\omega + \beta)}{(\alpha^2 + (\omega + \beta)^2)^2}\right].$$
Hence,
$$\hat{g}(\omega) = -\frac{\alpha}{\pi}\left[\frac{(\omega - \beta)}{(\alpha^2 + (\omega - \beta)^2)^2} - \frac{(\omega + \beta)}{(\alpha^2 + (\omega + \beta)^2)^2}\right].$$

#### Catch Cheng
##### Re: HA7-P2
« Reply #5 on: November 05, 2015, 06:23:48 PM »

For the $\hat{f}(\omega)$ formula in (a), I think there should be a negative sign before $i\omega x$; however, the final answer is right.

#### Xi Yue Wang
##### Re: HA7-P2
« Reply #6 on: November 05, 2015, 06:42:00 PM »

> For the $\hat{f}(\omega)$ formula in (a), I think there should be a negative sign before $i\omega x$; however, the final answer is right.
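As a quick numerical sanity check on part (a), and not part of the original thread, the closed form $\frac{\alpha}{\pi(\alpha^2+\omega^2)}$ can be compared against direct numerical integration (using $\hat{f}(\omega) = \frac{1}{\pi}\int_0^\infty e^{-\alpha x}\cos(\omega x)\,dx$, since the odd part of the integrand vanishes):

```python
import numpy as np
from scipy.integrate import quad

alpha = 2.0
for omega in (0.0, 0.7, 3.0):
    # (1/2pi) * Integral of exp(-alpha|x|) exp(-i omega x) over the real line
    # reduces to (1/pi) * Integral_0^inf exp(-alpha x) cos(omega x) dx.
    numeric, _ = quad(lambda x: np.exp(-alpha * x) * np.cos(omega * x) / np.pi,
                      0, np.inf)
    closed = alpha / (np.pi * (alpha**2 + omega**2))
    print(f"omega={omega}: numeric={numeric:.6f}, closed form={closed:.6f}")
```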
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9982362389564514, "perplexity": 18403.81176415248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141204453.65/warc/CC-MAIN-20201130004748-20201130034748-00708.warc.gz"}
https://brilliant.org/discussions/thread/the-factor-problem/
# The factor problem

Let us denote each number as the product of primes in a special way; for example, {2 3 5 7 11 ...} {2 0 0 1 ...} represents 28 (line up the primes in the first parentheses with the second one). Notice that in the prime factorization of 28, there are two 2's and one 7.

Now, a further observation is that whenever we multiply two numbers, we add the number of each prime in that representation. For example, 28 = [2 0 0 1] and 4 = [2 0 0 ...]; when we multiply them, we get 112, which is equal to [4 0 0 1]. Similarly, when we raise a number to a power, we multiply each number in that "prime table" by that exponent. For example, 28^2 = [4 0 0 2], which is equal to 2 x [2 0 0 1].

To summarize, multiplying numbers corresponds to adding the corresponding prime tables, and raising a number to a power corresponds to multiplying every number in the prime table by the exponent. Now, the question is what addition of numbers translates to in the prime tables. All answers are appreciated.

Note by Sri Prasanna
1 year, 6 months ago
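To make the observation concrete, here is a short illustrative Python sketch (the function names are mine, not the note's), using SymPy's `factorint` as the "prime table": a map from each prime to its exponent.

```python
from sympy import factorint

def prime_table(n):
    # exponent of each prime in n, e.g. 28 -> {2: 2, 7: 1}
    return dict(factorint(n))

def add_tables(t1, t2):
    # multiplication of numbers = entrywise addition of the tables
    return {p: t1.get(p, 0) + t2.get(p, 0) for p in sorted(set(t1) | set(t2))}

print(prime_table(28))                                # {2: 2, 7: 1}
print(add_tables(prime_table(28), prime_table(4)))    # {2: 4, 7: 1}, same as prime_table(112)
print(prime_table(28**2))                             # {2: 4, 7: 2}, each entry doubled
```

Multiplication and exponentiation act linearly on these tables; addition of numbers has no comparably simple description, which is exactly what makes the closing question interesting.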
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9352273941040039, "perplexity": 481.0625205280415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121778.66/warc/CC-MAIN-20170423031201-00334-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/ZiLLxaLB5CCofrzPp
In my last post, I argued that interaction between the human and the AI system was necessary in order for the AI system to “stay on track” as we encounter new and unforeseen changes to the environment. The most obvious implementation of this would be to have an AI system that keeps an estimate of the reward function. It acts to maximize its current estimate of the reward function, while simultaneously updating the reward through human feedback. However, this approach has significant problems.

Looking at the description of this approach, one thing that stands out is that the actions are chosen according to a reward that we know is going to change. (This is what leads to the incentive to disable the narrow value learning system.) This seems clearly wrong: surely our plans should account for the fact that our rewards will change, without treating such a change as adversarial?

This suggests that we need to have our action selection mechanism take the future rewards into account as well. While we don’t know what the future reward will be, we can certainly have a probability distribution over it. So what if we had uncertainty over reward functions, and took that uncertainty into account while choosing actions?

## Setup

We’ve drilled down on the problem sufficiently far that we can create a formal model and see what happens. So, let’s consider the following setup:

• The human, Alice, knows the “true” reward function that she would like to have optimized.
• The AI system maintains a probability distribution over reward functions, and acts to maximize the expected sum of rewards under this distribution.
• Alice and the AI system take turns acting. Alice knows that the AI learns from her actions, and chooses actions accordingly.
• Alice’s action space is such that she cannot take the action “tell the AI system the true reward function” (otherwise the problem would become trivial).
• Given these assumptions, Alice and the AI system act optimally.

This is the setup of Cooperative Inverse Reinforcement Learning (CIRL). The optimal solution to this problem typically involves Alice “teaching” the AI system by taking actions that communicate what she does and does not like, while the AI system “asks” about parts of the reward by taking actions that would force Alice to behave in different ways for different rewards.

## Does this solve our problems?

Two of the problems we identified in the last post are simply assumed away:

• Alice does not know the “true” reward function, but we assumed that she does.
• Alice may be unable to optimally give feedback to the AI system, but we assume that she is optimal here.

So this particular kind of reward uncertainty does not fix either of these problems. What about convergent instrumental subgoals?

Utility preservation. One major worry we had with the original setup was that the AI system would disable its narrow value learning system, as a manifestation of the instrumental goal of protecting its utility function. This is reversed in our setup: the AI system has a positive incentive to continue doing narrow value learning, since it helps it hone in on the true reward function, which in turn allows it to optimize the reward better. (We might worry that this prevents us from fixing any problems in the narrow value learning system, but that is a robustness problem: in the world where everything is working correctly, this is the correct incentive.)

Survival incentive, aka shutdown incorrigibility.
Another worry is that the AI system has a survival incentive that causes it to prevent us from shutting it down. With reward uncertainty, the fact that we are trying to shut the AI system down is itself strong evidence about the reward function. The AI system should reason that its operation leads to worse outcomes, and so allow itself to be turned off. The Off-Switch Game formalizes this reasoning in a simple setting. The AI system can either directly take action a (perhaps by disabling an off switch), can shut down, or can allow Alice to choose between these options. If the AI system allows Alice to choose, Alice then decides whether or not to shut down the AI system. Assuming that Alice is optimal, the AI system reasons that when Alice chooses to shut it down, the true reward function must have been one which makes the action a bad, and so it does better by deferring to Alice. However, when Alice is modeled as noisily rational instead of optimal, the AI system might reason that Alice might make a mistake when deciding to shut the AI system down, and so it might take action a directly without deferring to her. So, the AI system becomes shutdown corrigible, as long as it assumes that Alice is sufficiently rational. Should robots be obedient? makes a similar point, arguing that an AI system that learns preferences and then acts to maximize their satisfaction can perform better than an AI system that simply obeys instructions, because humans are not perfectly rational. This creates a tradeoff between performance and obedience (which shutdown corrigibility is an instance of). Of course, these simple models exclude many actions that a realistic AI system could take. In particular, it seems likely that an AI system would prefer to disable the shutdown button, gather information about the reward until it has fully updated, and optimize the resulting set of rewards. If the space of reward functions is misspecified, as it likely will be, this will lead to bad behavior. (This is the point made by Incorrigibility in the CIRL Framework.) Note though that while this cuts against shutdown corrigibility (since the AI system would prefer to disable the shutdown button), I would frame the problem differently. If the space of rewards is well-specified and has sufficient weight on the true reward function and the AI system is sufficiently robust and intelligent, then the AI system must update strongly on us attempting to shut it down. This should cause it to stop doing the bad thing it was doing. When it eventually narrows down on the reward it will have identified the true reward, which by definition is the right thing to optimize. So even though the AI system might disable its off switch, this is simply because it is better at knowing what we want than we are, and this leads to better outcomes for us. So, really the argument is that since we want to be robust (particularly to reward misspecification), we want shutdown corrigibility, and reward uncertainty is an insufficient solution for that. ## A note on CIRL There has been a lot of confusion on what CIRL is and isn’t trying to do, so I want to avoid adding to the confusion. CIRL is not meant to be a blueprint for a value-aligned AI system. It is not the case that we could create a practical implementation of CIRL and then we would be done. 
If we were to build a practical implementation of CIRL and use it to align powerful AI systems, we would face many problems: • As mentioned above, Alice doesn’t actually know the true reward function, and she may not be able to give optimal feedback. • As mentioned above, in the presence of reward misspecification the AI system may end up optimizing the wrong thing, leading to catastrophic outcomes. • Similarly, if the model of Alice’s behavior is incorrect, as it inevitably will be, the AI system will make incorrect inferences about Alice’s reward, again leading to bad behavior. As an example that is particularly easy to model, should the AI system model Alice as thinking about the robot thinking about Alice, or should it model Alice as thinking about the robot thinking about Alice thinking about the robot thinking about Alice? How many levels of pragmatics is the “right” level? • Lots of other problems have not been addressed: the AI system might not deal with embeddedness well, or it might not be robust and could make mistakes, etc. CIRL is supposed to bring conceptual clarity to what we could be trying to do in the first place with a human-AI system. In Dylan’s own words, “what cooperative IRL is, it’s a definition of how a human and a robot system together can be rational in the context of fixed preferences in a fully observable world state”. In the same way that VNM rationality informs our understanding of humans even though humans are not expected utility maximizers, CIRL can inform our understanding of alignment proposals, even though CIRL itself is unsuitable as a solution to alignment. Note also that this post is about reward uncertainty, not about CIRL. CIRL makes other points besides reward uncertainty, that are well explained in this blog post, and are not mentioned here. While all of my posts have been significantly influenced by many people, this post is especially based on ideas I heard from Dylan Hadfield-Menell. However, besides the one quote, the writing is my own, and may not reflect Dylan’s views.
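Returning to the Off-Switch Game described earlier in the post, the incentive to defer to a rational overseer can be illustrated with a toy Monte Carlo sketch (my own illustration, not taken from the papers): if the robot is uncertain about the utility $u$ of its action $a$, deferring to a rational human earns $\mathbb{E}[\max(u, 0)]$, which always weakly beats both acting directly ($\mathbb{E}[u]$) and shutting down ($0$).

```python
import numpy as np

rng = np.random.default_rng(0)
# The robot's belief over the (unknown) utility of taking action a.
u = rng.normal(loc=0.2, scale=1.0, size=100_000)

act   = u.mean()                  # take a directly
off   = 0.0                       # shut down
defer = np.maximum(u, 0).mean()   # a *rational* human picks the better of {a, off}

print(f"act={act:.3f}, shut down={off:.3f}, defer={defer:.3f}")
# defer >= max(act, off) by Jensen's inequality: keeping the off switch
# enabled is weakly optimal when the overseer is rational, which is the
# incentive the post describes. With a noisy overseer the gap can flip.
```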
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8445805311203003, "perplexity": 860.8290603038737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370509103.51/warc/CC-MAIN-20200402235814-20200403025814-00297.warc.gz"}
https://puzzling.stackexchange.com/questions/83856/surely-they-can-fit
# Surely they can fit?

Suppose you have a grid of squares that has even dimensions, with at least one dimension greater than or equal to 4 squares, and from one corner you remove a 1x4 rectangle of those squares, for example:

    □□□□□□
    □□□□□□
    □□□□□□
    XXXX□□

Can you fill in that grid using as many copies of the following shapes as you like? (Each shape can be rotated any of the four ways, and can be flipped/mirrored.)

    □□
    □□

    □□□□
    □□□□

If you can, provide an example solution. If you cannot, then you should provide a reasonable argument as to why it can't be done.
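One way to attack small cases of this puzzle is brute force. Below is a hedged Python sketch assuming, as the reconstructed shape diagrams above suggest, that the pieces are the 2x2 square and the 2x4 rectangle. It backtracks over placements anchored at the first empty cell, which is sufficient for rectangular pieces, since any rectangle covering that cell must have its top-left corner there.

```python
def can_tile(rows, cols, removed):
    """Exhaustive search: can the board minus `removed` be tiled by
    2x2 squares and 2x4 rectangles (any rotation)? Small boards only."""
    shapes = {(2, 2), (2, 4), (4, 2)}  # (height, width) orientations
    free = frozenset((r, c) for r in range(rows) for c in range(cols)) - removed

    def solve(free):
        if not free:
            return True
        r, c = min(free)  # first empty cell in row-major order
        for h, w in shapes:
            cells = {(r + dr, c + dc) for dr in range(h) for dc in range(w)}
            if cells <= free and solve(free - cells):
                return True
        return False

    return solve(free)

# The 4x6 example above, with the 1x4 strip removed from the bottom-left:
removed = {(3, 0), (3, 1), (3, 2), (3, 3)}
print(can_tile(4, 6, removed))  # expected: False; a colouring argument rules it out
```

Under these assumed pieces, the search agrees with a parity argument: marking one cell per 2x2 block of the board, every 2x2 piece covers exactly one mark and every 2x4 piece exactly two, and the deficient board's mark count cannot be matched by any combination with the right total area.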
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5975762009620667, "perplexity": 351.4859707332909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670162.76/warc/CC-MAIN-20191119172137-20191119200137-00419.warc.gz"}
https://www.omnimaga.org/ti-z80-calculator-projects/(to-be-named)-dungeon-crawlercollection-game/?prev_next=prev
### Author Topic: Hacked Lists!

#### Xeda112358
##### Hacked Lists!
« on: November 04, 2018, 10:56:56 am »

So, fun fact: floating point numbers can be changed to a variable reference instead, kind of like a shortcut on your computer, and the OS doesn't verify it.

In assembly, when you are looking for a variable, you put a name in OP1 and use FindSym or ChkFindSym. Those names are 9 bytes, which is also the size of a float (not a coincidence; it's part of why names are limited to 8 bytes). You can actually change the contents of a Real variable to such a name, and when you try to read its value, it will instead return the value of the variable it points to. For example, change the bytes of the Real variable, A, from 008000000000000000 (the number 0) to 015D01000000000000 and when you read A, it will return the contents of L2. Or, since lists are just stored as a bunch of Real (or Complex) numbers, you can modify elements of the list to be pointers. In this way, you could read Str1, Str2, Str3, ..., Str0 by reading an element of L1, which could be useful, occasionally.

Some things to note:
You can't modify the contents of the original variable in this way. If A points to Str2, then "33"+A→A does not modify Str2. However, "33"+A will work like "33"+Str2.
Storing the names of programs and appvars and whatnot isn't useful.

Attached is a program that turns L1 into a 4-element list {L2,[A],Str1,L1}. Here is the source. You need to delete L1 first; my code wasn't reliably deleting it.

Code: [Select]
```
#define bcall(x) rst 28h \ .dw x
rMOV9TOOP1  = 20h
_ChkFindSym = 42F1h
_CreateRList= 4315h
.db $BB,$6D            ; AsmPrgm header token
.org $9D95
 ld hl,name
 rst rMOV9TOOP1        ; copy the name (L1) into OP1
 bcall(_ChkFindSym)
 ret c
 ld hl,name
 rst rMOV9TOOP1
 ld hl,4
 bcall(_CreateRList)   ; create L1 as a 4-element real list
 inc de
 inc de                ; skip the 2-byte element count
 ld hl,data
 ld bc,31
 ldir                  ; copy 27 bytes of elements, plus the 4 bytes at name:
 ret
data:
 .db 1,$5D,1,0,0,0,0,0,0   ; element 1 -> L2   (list type,   token 5D 01)
 .db 2,$5C,0,0,0,0,0,0,0   ; element 2 -> [A]  (matrix type, token 5C 00)
 .db 4,$AA,0,0,0,0,0,0,0   ; element 3 -> Str1 (string type, token AA 00)
name:
 .db 1,$5D,0,0             ; L1's name; its bytes also seed element 4 -> L1
```

#### TIfanx1999
##### Re: Hacked Lists!
« Reply #1 on: November 04, 2018, 12:35:16 pm »
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2101345807313919, "perplexity": 7447.3707681521955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943589.10/warc/CC-MAIN-20230321002050-20230321032050-00272.warc.gz"}
http://www.zora.uzh.ch/id/eprint/119144/
# Age, Action Orientation, and Self-Regulation during the Pursuit of a Dieting Goal Hennecke, Marie; Freund, Alexandra M (2016). Age, Action Orientation, and Self-Regulation during the Pursuit of a Dieting Goal. Applied Psychology: Health and Well-Being, 8(1):19-43. ## Abstract Two studies tested the hypotheses that (1) action orientation (vs. state orientation) is positively correlated with age across adulthood and (2) action orientation aids the self-regulation of one's feelings, thoughts, and behavior during the pursuit of a dieting goal. Hypotheses were partly confirmed. In Study 1, N = 126 overweight women (age: 19–77 years) intended to lose weight by means of a low-calorie diet. In Study 2, N = 322 adults (age: 18–82 years) reported on their action orientation to replicate the association of age and action orientation found in Study 1. Study 2 corroborated only the expected positive association of age and decision-related action orientation. In Study 1, decision-related action orientation predicted higher affective well-being during the diet as well as less self-reported deviations from the diet; failure-related action orientation predicted lower levels of rumination in response to dieting failures. Action orientation partially mediated the negative effects of age on deviations and rumination (see Hennecke & Freund, 2010). Weight loss was not predicted by action orientation. We discuss action orientation as one factor of increased motivational competence in older adulthood. ## Abstract Two studies tested the hypotheses that (1) action orientation (vs. state orientation) is positively correlated with age across adulthood and (2) action orientation aids the self-regulation of one's feelings, thoughts, and behavior during the pursuit of a dieting goal. Hypotheses were partly confirmed. In Study 1, N = 126 overweight women (age: 19–77 years) intended to lose weight by means of a low-calorie diet. In Study 2, N = 322 adults (age: 18–82 years) reported on their action orientation to replicate the association of age and action orientation found in Study 1. Study 2 corroborated only the expected positive association of age and decision-related action orientation. In Study 1, decision-related action orientation predicted higher affective well-being during the diet as well as less self-reported deviations from the diet; failure-related action orientation predicted lower levels of rumination in response to dieting failures. Action orientation partially mediated the negative effects of age on deviations and rumination (see Hennecke & Freund, 2010). Weight loss was not predicted by action orientation. We discuss action orientation as one factor of increased motivational competence in older adulthood. ## Statistics ### Citations 2 citations in Web of Science® 3 citations in Scopus®
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8456752300262451, "perplexity": 12135.784322455524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891543.65/warc/CC-MAIN-20180122213051-20180122233051-00679.warc.gz"}
http://blog.qvsource.com/?page=1
The QVSource Blog is Closing

It's been a year since Industrial CodeBox was acquired by Qlik and QVSource turned into the Qlik Web Connectors. As such, we will be closing down this blog at the end of May 2017, since the latest information can be found over at www.qlik.com.

We'd once again like to thank you for your support of QVSource and the continued success of the Qlik Web Connectors.

Thanks,
The Qlik Web Connector Team

QVSource is now part of Qlik Connectors

We are excited to announce that Qlik Web Connectors will launch in early July: the release of the QVSource portfolio within the Qlik Connectors product suite. After Industrial CodeBox was acquired by Qlik in April of 2016, we have spent the last couple of months getting our product ready to be delivered and sold as a standalone Qlik product for both QlikView and Qlik Sense. This release is focused on the rebranding of the product and moving to Qlik's infrastructure, including licensing, documentation and support, with limited updates to the functionality from the QVSource Web Edition 1.0.3 release on April 27.

The pricing structure will be simplified as a part of this release. The available connectors will continue to be divided into Standard and Premium connectors. But Standard connectors will be made available for free, and the Premium connectors on a flat per connector annual subscription. Note that beta connectors will no longer be shipped with the product. We will be announcing a revisited program for beta connectors shortly.

In addition, it's important to note that as of July 5th 2016 Qlik support will officially take over support for existing QVSource Customers and Partners. Here is what you need to know:

• All support inquiries should be managed using the Qlik Support Portal, starting on July 5th 2016.
• Support at QVSource should no longer be used; instead, use the support portal.
• QVSource will eventually be retired and there will be no further development to this product; with this, we strongly recommend that you upgrade to Qlik Web Connectors, which will replace QVSource.
• If you have an open support ticket with Industrial CodeBox, it will not be transferred to Qlik Support, but it will be managed by ICB until it's closed.

We are now turning our attention to work closely with the rest of Qlik's product management team to jointly develop an updated roadmap. There are many exciting possibilities we will be considering, including a tighter integration with the core Qlik Sense product and related capabilities as well as deploying our connector capabilities into the Cloud. Stay connected via Qlik's blog moving forward!

QVSource Web Edition 1.0.3 Now Available

(This blog post originally appeared on the QVSource blog here.)

We are delighted to announce the availability of a new version of QVSource - The QlikView and Qlik Sense API Connector!

We have only released an update of the Web Edition this time (version 1.0.3), as the WinForms edition is now deprecated and will no longer be supported after the 22nd May 2016. If for some reason you are unable to upgrade to the Web Edition please contact us and explain; otherwise, please now use the new and improved Web Edition of QVSource if you are not already.

Updates Since 26 Feb 2016 (Previous 1.0.1 Release)

The following updates have been made to the QVSource Core and Web UI as well as the connectors listed. Please pay particular attention to any connectors which may have breaking changes highlighted below.
Core Engine
• Improved error messages for errors during string decryption (e.g. from stored settings or URL parameters). (29 Mar 2016)
• Fixed bug with Version element in log files being set to 0.0 rather than the actual 'engine' version. (10 Mar 2016)
• File extension for error logs changed from .xml to .txt. (10 Mar 2016)
• Added response_time column to all Google Connector CanAuthenticate tables. (03 Mar 2016)
• application/xml rather than text/xml now used throughout for tables which write a raw XML response. (26 Feb 2016)

General & Web User Interface
• The Instagram Connector has been removed (applications now have to apply for API access and our initial application was unsuccessful). The current connector will stop working in June 2016. (22 Apr 2016)
• The Yahoo! Placemaker Connector has been removed (the underlying API is being shut down). (22 Apr 2016)
• Improved warning and error messages for licence file upload. (17 Mar 2016)
• More helpful error messages now for various invalid table requests (e.g. loadAccessToken missing). (13 Mar 2016)
• File extension for error logs changed from .xml to .txt. (13 Mar 2016)
• Fixed user management API caching bug. (11 Mar 2016)
• OAuth tokens in UI are now obfuscated. (02 Mar 2016)
• Upgraded to NancyFX 1.4.3. (29 Feb 2016)
• Upgraded to Angular JS 1.5.0. (29 Feb 2016)
• Fixed table rendering bug in IE9. (29 Feb 2016)

Azure Data Marketplace Connector
• BREAKING CHANGE: We have updated/changed the OAuth Client ID. You will need to reauthenticate with the connector and update any embedded tokens you may have in your request URLs. (12 Apr 2016)

bitly Connector (v2)
• Fix missing permissions. (09 Mar 2016)

Box Connector
• Fix UserEvents source_path; add in a few missing columns for UserEvents and EnterpriseEvents. (16 Mar 2016)

Dropbox Connector (v2)
• Updated Metadata table to use dropboxPath parameter instead of dir parameter. (22 Mar 2016)
• Added CanAuthenticate table. (03 Mar 2016)
• Added recursive, include_media_info, include_deleted params to List table. (03 Mar 2016)
• Added notes for id:XXX rev:XXX values for the Path parameter. (03 Mar 2016)
• Added mode, auto rename and mute params to UploadFile table. (03 Mar 2016)
• Added Usage table. (03 Mar 2016)
• Added Revisions table. (03 Mar 2016)
• Added Search table. (03 Mar 2016)

Exact Online Connector
• Removed note about save/run before authenticating. (07 Mar 2016)

Facebook Fan Pages & Groups Connector (v2)
• NOTE: this version (V2) of this connector is now deprecated. New users should use V3 of this connector and existing users should begin upgrading to V3. V2 will no longer work after 7th August 2016. (31 Mar 2016)
• Max Items Per API Request (maxItemsPerPage) parameter added to most tables which return paged results. We would only recommend setting this to a lower value than the default of 100 if you are receiving an error. (31 Mar 2016)
• Max API Calls/Second (maxAPICallsPerSecond) will now accept a decimal number. So for example you can enter 0.5 to allow only one API request every two seconds. (31 Mar 2016)

Facebook Fan Pages & Groups Connector (v3)
• Drop down of pages should no longer be limited to first 25. (08 Apr 2016)

File Transfer Connector (FTP/SFTP)
• Upgraded to latest (2016 R1.1) of RebEx. (17 Mar 2016)
• Added new EventAttendees table. (08 Apr 2016)
• GetSpreadsheet table now downloads data as CSV instead of TSV internally which was used previously. This may resolve occasional issues with parsing cells which contain combinations of ,'s and "'s. (19 Mar 2016)
• GetSpreadsheet table will now retry once after a 10s wait in the case of a 429 error response (and also still wait 1s between successive successful requests). (10 Mar 2016)
• GetSpreadsheet table will now retry once after a 3s wait in the case of a 429 error response and also wait 1s between successive successful requests. (03 Mar 2016)
• Throw exception if dimensionFilterGroups is included in dimensionFilter parameter. (31 Mar 2016)
• Added notes to SearchAnalyticsQuery table. (31 Mar 2016)
• Added responseAggregationType column to SearchAnalyticsQuery table. (31 Mar 2016)
• Changed parameter mode. (31 Mar 2016)
• Bump up default row limit to 5000. (31 Mar 2016)

MailBox Connector (IMAP/POP3)
• Added logging to API call logs. (07 Apr 2016)
• Fixed memory consumption issue in Web Edition for this connector. (07 Apr 2016)
• Removed feature to save latest log to cache. (07 Apr 2016)
• Fixed bug where emails could not be retrieved (XML deserialised) from cache if they contained certain invalid characters (e.g. 0x0B, 0x0C, 0x01). (07 Apr 2016)
• Removed unnecessary code to get count of emails in ImapMessagesInFolder table. (31 Mar 2016)
• Added 'Before' input parameter to ImapMessagesInFolder table. (23 Mar 2016)

MS CRM Connector
• Added OrganisationID, BusinessUnitId, Version, and response_time_ms columns to CanAuthenticate table. (01 Mar 2016)
• Tweaked progress reporting code. (01 Mar 2016)
• Warn message now also logged if the .TotalRecordCount is not returned (which is not expected). (01 Mar 2016)
• Added new maxRows parameter. (01 Mar 2016)
• Added API call logging. (26 Feb 2016)
• Improved progress feedback (including total count) and cancellability of running table. (26 Feb 2016)

Office 365 Sharepoint Connector
• Removed note about save/run before authenticating. (07 Mar 2016)
• Fixed bug with duplicated logging of API calls. (03 Mar 2016)
• Fixed bug in code to download lists (ListItemsFromID and GetDataFeed tables). (02 Mar 2016)
• ListItemsFromID table now defaults to downloading data in pages of 1000 items at a time (instead of 100) and we have also added a note recommending that ?$top=1000 is added to the end of the feed parameter for the GetDataFeed table. (02 Mar 2016)
• Removed max results parameter from GetFile table. (02 Mar 2016)
• BREAKING CHANGE: ListResources renamed to Lists. (29 Feb 2016)
• BREAKING CHANGE: GetData renamed to ListItemsFromID. (29 Feb 2016)
• Added ListFolders and ListFiles tables. (29 Feb 2016)
• Added GetFile table. (29 Feb 2016)

OneDrive Connector
• BREAKING CHANGE: We have updated/changed the OAuth Client ID. You will need to reauthenticate with the connector and update any embedded tokens you may have in your request URLs. (12 Apr 2016)
• Minor update to OAuth redirect URL. (29 Feb 2016)

Salesforce Connector (v2)
• Use specific instance if available for APIVersions. (04 Apr 2016)
• Removed XPATH parameter from RequestRawXml table (where it was never used). (26 Mar 2016)
• RequestRawXml no longer errors when application/json is returned from the API request but converts this to XML. (23 Mar 2016)

SurveyMonkey Connector
• Add GetResponses table. (15 Mar 2016)
• Fix missing permissions. (09 Mar 2016)

Sentiment Analysis & Text Analytics Connector (v2)
• Added EmotionAnalysis table to AlchemyAPI provider. (08 Apr 2016)

Web Connector for General JSON/XML/SOAP APIs (v2)
• The JsonToXmlRaw and JsonToTable tables should now better handle cases where JSON contains double colons and $'s in element names (e.g. {"\$", "value"} or {"name1::name2", "value"}). (06 Apr 2016)

Where do I find this new release?

If you are a QVSource customer or have requested a trial in the past you should see this new release in the personalised download link we should have sent you via email. If you are new to QVSource you can download a fully functional free trial from our website. As noted above though, we would strongly recommend you use the new Web Edition download (which contains the above updates also).

QVSource Web Edition 1.0.1 Now Available

(This blog post originally appeared on the QVSource blog here.)

We are delighted to announce the availability of a new version of QVSource - The QlikView and Qlik Sense API Connector!

We have released an update of both the WinForms Edition (version 1.6.9) and the new Web Edition (version 1.0.1). However, as noted in an earlier post, the WinForms edition is now officially deprecated and will no longer be supported after the 22nd May 2016. Therefore we would strongly recommend you now start working with the Web Edition if you are not already.

IMPORTANT - Upgrade to .NET 4.5.2

There has been a key update to these latest releases - you will now need to ensure you have the .NET Framework 4.5.2 installed on your machine (rather than .NET 4.0 which was required previously). If you do not already have it you can access it here.

New Connectors

Note that we have the first version of a brand new connector to Office 365 Sharepoint. We also have a new V3 version of the Facebook Fan Pages & Groups Connector; V2 is now deprecated and we would advise all users to start working with the new V3, which works with the latest version of the Facebook Graph API. Finally, we have a new V2 of the DropBox Connector which replaces V1 (although it is essentially the same, it simply uses their latest API).

All Updates (Since 24th Nov 2015)

The following updates have been made to the QVSource Core and Web UI as well as the connectors listed. Please pay particular attention to any connectors which may have breaking changes highlighted below.

Core Engine
• application/xml rather than text/xml now used throughout for tables which write a raw XML response. (26 Feb 2016)
• Updated WinForms version of Web Server so that the connector ID can now be passed in as a query parameter instead of part of the path. So for example http://localhost:5555/QVSource/Connect/?connectorID=TwitterConnectorV2&table=Search&... can be used in place of http://localhost:5555/QVSource/TwitterConnectorV2/?table=Search&... This has been done so that we can explore working via the new Qlik REST Connector. (23 Feb 2016)
• Added new format=json option. (23 Feb 2016)
• Fixed bug in processParamsSync feature (see http://wiki.qvsource.com/Synchronous-Asynchronous-And-Batch-Requests-Explained.ashx). (16 Feb 2016)
• RC3 for new public release. (11 Feb 2016)
• Short URL Expander removed. (11 Feb 2016)
• Added first version of Office 365 Sharepoint Connector. (08 Feb 2016)
• Added DropBox V2 Connector. (08 Feb 2016)
• IMPORTANT: Upgraded to .NET version 4.5.2 (previously 4.0 was used). (07 Feb 2016)
• Connectors which use one or more SOAP requests should now correctly pick up the configured Proxy (e.g. RPS Connector, DFP Connector and AdWords Connector). (07 Feb 2016)
• POTENTIALLY BREAKING CHANGE: We had some custom code to replace certain special characters which are valid JSON but invalid XML with safe values for XML. We have now removed this code and internally use some system level .NET code which should account for all potential invalid values but now uses different replacements. These replaced values might ultimately end up being table column names depending on whether you are using a connector table which provides raw XML or a structured table, and you may need to adjust your scripts accordingly. This is actually a QVSource wide change but the most likely place it will impact is in certain specialist uses of the General Web Connector. (22 Dec 2015)
• Fixed threading issue which may cause QVSource to crash if the status page (e.g. at http://localhost:5555/qvsource) was requested while the application was still starting up and registering connectors. (22 Dec 2015)
• All multi line inputs (e.g. JSON and XML inputs and some string inputs, e.g. the POST parameter on the Web Connector) can now accept an @file=c:\path\to\file.txt input so that the contents can be more easily managed in an external file. This can be used in conjunction with the ReplaceTokensInFile table of the Helper Connector (http://wiki.qvsource.com/Helper-Connector.ashx#ReplaceTokensInFile_Table_0) which can be used to modify this file before it is read. (11 Dec 2015)
• RC for new release. (11 Dec 2015)
• Updated to reference Newtonsoft v7 externally. (05 Dec 2015)
• Added deprecated message to WinForm title text and also about and licence text areas. (02 Dec 2015)

Web User Interface
• If there is an error response from the API in XML or JSON format when running a table in the UI, this should now be shown to the user. (25 Feb 2016)
• RC4 for new public release. (14 Feb 2016)
• Column aliases in generated load script should now have _ after table name. (13 Feb 2016)
• RC3 for new public release. (11 Feb 2016)
• Short URL Expander removed. (11 Feb 2016)
• IMPORTANT: Upgraded to .NET version 4.5.2 (previously 4.0 was used). (08 Feb 2016)
• Added first version of Office 365 Sharepoint Connector. (08 Feb 2016)
• Added DropBox V2 Connector. (08 Feb 2016)
• RC2 for new public release. (08 Feb 2016)
• RC for new public release. (03 Feb 2016)
• : characters in table column names and generated script (which break QlikView/Qlik Sense load script) should now be replaced with underscores _. (14 Jan 2016)
• @file=c:\path\to\file.txt style inputs can now be used on multi line inputs to have their values populated from file. See http://wiki.qvsource.com/Multi-Line-Input-Parameters.ashx for more information. (17 Dec 2015)
• Removed (beta) text from footer. (26 Nov 2015)
• Fixed check for updates feature on about tab. (26 Nov 2015)
• Changed MIME type for QVX response which means it should now be rendered in browser again (as in earlier beta versions) rather than being downloaded as a file. (26 Nov 2015)
• Added warning for IE9 or earlier. (26 Nov 2015)
• Fixed bug with boolean input parameters not saving correctly on tables. (26 Nov 2015)
• Fixed issue with CSV file download not always correctly escaping commas and quotes. (26 Nov 2015)
• The 'Show All' connector settings (on the My Settings tab) should no longer show empty values. (26 Nov 2015)
• Minor fix to startup code which logs errors instantiating connectors. (24 Nov 2015)
• Minor refactoring of web exception handling. (07 Dec 2015)

Amazon S3 Connector
• Added paging to ListObjects table. (25 Nov 2015)

Blue Yonder Connector
• Minor internal note renaming. (13 Feb 2016)
• Minor update so that there is now a maximum of 3 concurrent requests across all instances of the connector. (01 Dec 2015)

Box Connector
• Updated EnterpriseEvents table in line with Box API updates. (17 Feb 2016)

Dropbox Connector (v2)
• Updated to use V2 Dropbox API for all tables. (10 Feb 2016)

Facebook Fan Pages & Groups Connector (v2)
• This has now been tagged as deprecated as we have started work on a new V3 of this Connector. There is no intention to remove this connector in the next 6 months; tagging it as deprecated at this stage is primarily intended to encourage any new users of the product to start with the new V3 of the Connector. (04 Feb 2016)
• Added more helpful error message when connector not authenticated. (07 Jan 2016)
• Minor refactoring of web exception handling. (07 Dec 2015)

Facebook Fan Pages & Groups Connector (v3)
• Added 'status' column to User, Page and UserOrPage tables. If the table runs successfully this column will contain the string 'OK', otherwise it will contain the error message. (13 Feb 2016)
• Removed maxPerRequest param. (09 Feb 2016)
• Added first version of UserOrPage table. (09 Feb 2016)
• Internal refactoring and caching enhancements. (09 Feb 2016)
• Added caching for Type table. (09 Feb 2016)
• Pictures should now be rendered in table in Web Edition (if enabled in settings). (08 Feb 2016)
• Added Type table. (31 Jan 2016)
• Added more helpful error message when connector not authenticated. (07 Jan 2016)
• Initial release of V3. (18 Dec 2015)
• Minor internal refactoring/performance improvements (for Web Edition). (20 Feb 2016)
• Removed dependency on Microsoft.VisualBasic assembly on startup. (02 Feb 2016)
• Added more helpful error message when connector not authenticated. (07 Jan 2016)
• Upgraded to version 2.5 of the Facebook Graph API. (17 Dec 2015)
• Minor refactoring of web exception handling. (07 Dec 2015)
• Updated Page ID drop down to show "global_brand_page_name (id)" in dropdown rather than just "name" (which is not always unique). (03 Dec 2015)
• Web Edition now shows page picker dropdown. (03 Dec 2015)

FreeAgent Connector (v2)
• Minor refactoring of web exception handling. (07 Dec 2015)
• Minor internal refactoring. (07 Feb 2016)
• Added extra note to Report tables about looking up territory names from numerical IDs. (16 Dec 2015)
• Minor refactoring of web exception handling. (07 Dec 2015)
• Default timeout increased to 30 minutes. (22 Feb 2016)
• Added TableSchemaAsXml table. (22 Feb 2016)

Google DoubleClick for Publishers (DFP) Connector
• Minor internal refactoring. (15 Feb 2016)
• Removed dependency on Microsoft.VisualBasic assembly. (02 Feb 2016)
• Fixed encoding of siteUrl by targeting .NET 4.5 and removing previous workaround that did not work for https sites. (12 Feb 2016)

Helper Connector
• Added a Tokens File input parameter to the ReplaceTokensInFile table which allows you to also specify a list of tokens and replacement values in a separate file (which could, for example, be created using the Qlik store (txt) command). (22 Dec 2015)
• Added ReplaceTokensInFile table. (15 Dec 2015)

Instagram Connector
• User Name is now required for UserSearch table. (11 Feb 2016)

Klout Connector (v2)
• Internal refactoring and performance improvements (minor). (01 Dec 2015)
• Table names are now case sensitive (as with most connectors). (01 Dec 2015)
• Minor refactoring of web exception handling. (01 Dec 2015)
• Removed old table name mappings (from 2012 version of Connector). (01 Dec 2015)

MailBox Connector (IMAP/POP3)
• Mail IDs used in cache keys are now hashed. (16 Feb 2016)
• html columns are now rendered as encoded html when running in Web Edition UI (previously the html was rendered inside the data table preview). (29 Jan 2016)
• Added additional notes to the Username and Password parameters to explain that these can be deleted from the generated URL to have the values stored in settings picked up by default instead. (29 Jan 2016)
• Added seen and flags columns to the ImapMessagesInFolder table. (04 Jan 2016)
• Added a new checkbox input parameter 'Allow Self Signed Client Certificate'. You might wish to check this if you are receiving a 'The remote certificate is invalid according to the validation procedure' error or other certificate related error, as long as you are sure you are connecting to the correct server. (08 Dec 2015)

MailChimp Connector (v2)
• BREAKING CHANGE: The API Key input parameter is now set to be encrypted. If you are using this connector you should regenerate a load request and copy the value of the newly updated/encrypted APIKey= parameter to your load URLs. Alternatively, if you simply remove this parameter from your load URLs the value stored in settings will be used. Note that you can also use the Encrypt table in the Helper Connector to create the encrypted value on the fly. (13 Feb 2016)

MongoDB Connector (v2)
• Please note that as a new official BI Connector (https://docs.mongodb.org/manual/products/bi-connector/) is now available for MongoDB 3.2 and later, you may wish to first consider this. Because of this we are also now considering retiring this connector if it does not offer any advantages over it. (10 Dec 2015)
• Upgraded to MongoDB Driver 1.11.0. This should now support version 3.0 of MongoDB. (10 Dec 2015)
• Fixed bug where if the 'Max Number Of Rows' input parameter was set, the connector would still first inspect all documents in the collection to establish the column structure for the table; now it will only inspect the first 'Max Number Of Rows' documents. (10 Dec 2015)

MS CRM Connector
• Added API call logging. (26 Feb 2016)
• Improved progress feedback (including total count) and cancellability of running table. (26 Feb 2016)
• BREAKING CHANGE: The connection string input parameter is now set to be encrypted. If you are using this connector you should regenerate a load request and copy the value of the newly updated/encrypted connectionString= parameter to your load URLs. Alternatively, if you simply remove this parameter from your load URLs the value stored in settings will be used. Note that you can also use the Encrypt table in the Helper Connector to create the encrypted value on the fly. (12 Feb 2016)
• Input parameter "Check All Data For Columns" set as optional now. (02 Feb 2016)
• Minor refactoring. (27 Jan 2016)
• Added new input parameter "Check All Data For Columns". Previously, if certain columns did not appear in the first 5000 rows of data they would not be included in the results at all. Checking this new option causes the connector to analyse all pages of results before building the columns, which fixes this issue. However, it does mean that all of the data has to be loaded into memory in one go rather than streaming the data in pages of up to 5000 results at a time (which is the behaviour if this new option is not checked). (27 Jan 2016)
• Minor refactoring.
(09 Dec 2015) • CanAuthenticate table now gives hint to try https instead of http if the connection throws a null reference exception and URL=http://... is being used rather than URL=https://.... (09 Dec 2015) • Minor internal code refactoring. (17 Dec 2015) OData Connector (V2) • Updated auth parameters to be included in the generated load URL by default. If you delete the parameter values from the generated load URL it will fall back to the value stored in settings. (23 Feb 2016) • Connector now attempts to sanitise XML before loading following a bug report where one OData source included invalid content in XML. (05 Feb 2016) • Fix CleanupUrl bug when using file paths. (29 Jan 2016) • Removed optional paging type parameter as it is not being used. (30 Nov 2015) Office 365 Sharepoint Connector • Added SubSites table. (25 Feb 2016) • Added Sub Site Path parameter to ListResources table. (25 Feb 2016) • Added GetData table. (25 Feb 2016) • Initial release. (08 Feb 2016) OneDrive Connector • Minor update to improve efficiency for Drives and Items tables. (08 Feb 2016) SugarCRM Connector • Minor refactoring of web exception handling. (08 Dec 2015) • Fixed bug where request would hang in certain circumstances for CustomRequest table. (08 Dec 2015) SurveyMonkey Connector • Added other available columns to GetRespondentList table. (22 Feb 2016) • Added custom_id column to GetRespondentList table. (12 Jan 2016) • Added date_start and date_modified columns to GetRespondentList table. (07 Jan 2016) • Connector should now use app wide proxy settings. (16 Dec 2015) Sentiment Analysis & Text Analytics Connector (v2) • Fix to Repustate SentimentChunked table. (13 Feb 2016) • BREAKING CHANGES: The embedded API key for AlchemyAPI which was included in previous versions has now been removed. (26 Nov 2015) • Minor refactoring of web exception handling. (26 Nov 2015) • Fixed bug in Followers table where it made unnecessary API requests and used up all of current quota unnecessarily. (15 Feb 2016) • Fixed issue in RateLimit table where _x002F_ was shown in data instead of / character. (08 Feb 2016) • Minor internal refactoring. (01 Dec 2015) • Fixed issue where Post_XXXX tables would error in web edition. (01 Dec 2015) Web Connector for General JSON/XML/SOAP APIs (v2) • Some parameters no longer unnecessarily appear in the generated URL if they have their default values. (14 Feb 2016) • Fixed bug with Encrypt URL input parameter not working in Web Edition. (14 Feb 2016) • POSSIBLE BREAKING CHANGE: Removed support for using cleanXML parameter in request URL (see earlier in change log) as this has now been superceded by removeInvalidChars parameter which should be used instead. (04 Feb 2016) • Removed 'Remove Invalid Characters' input parameter from JsonToTable and RawXmlToTable tables. (04 Feb 2016) • Added code to sanitise XML before processing in the case it might contain invalid characters. (04 Feb 2016) • POTENTIALLY BREAKING CHANGE: We had some custom code to replace certain special characters which are valid JSON but invalid XML with safe values for XML. We have now removed this code and internally use some system level .NET code which should account for all potential invalid values but now uses different replacements. These replaced values might ultimately end up being table column names depending on whether you are using a connector table which provides raw XML or a structured table and you may need to adjust your scripts accordingly according to the following. 
This is actually a QVSource wide change but the most likely place it will impact is in certain specialist uses of the General Web Connector. (22 Jan 2016) • Added feature allowing @file=drive:\path\to\file.txt for the POST parameter value so that the connector will pick up the text from here instead of the parameter value itself. This is because occasionally (particularly for SOAP requests) the URL generated to QVSource can be very long and exceed the length supported by QlikView or Qlik Sense. (14 Dec 2015) • Last request time (for the minimum time between requests feature) is now stored in settings rather than in memory against connector instance (which fixes compatibility for new web edition). (30 Nov 2015) Yahoo! Placemaker Connector • Minor refactoring of web exception handling. (07 Dec 2015) • Added quotaUser parameter to Report and MyChannels tables which helps enforce per user quota limits. (16 Dec 2015) • Fixed empty table bug when there are 0 results. (15 Dec 2015) • Fixed VideoStatistics API error when using more than 50 video ids. (07 Dec 2015) • Fixed ChannelStatistics API error when using more than 50 channel ids. (07 Dec 2015) • Return early if no nodes found or all nodes found (in a single API request) are duplicates. (07 Dec 2015) • Added optional status info to VideoStatistics table. (02 Dec 2015) • Update quota info for VideoStatistics table. (02 Dec 2015) Where do I find this new release? If you are a QVSource customer or have requested a trial in the past you should see this new release in the personalised download link we should have sent you via email. If you are new to QVSource you can download a fully functional free trial from our website. As noted above though, we would strongly recommend you use the new Web Edition download (which contains the above updates also). QVSource Reaches 100 Five Star Reviews! We were super excited this week to see QVSource receive it's 100th Five Star rating on Qlik Market and just wanted to mark the occasion with a blog post. We're really proud of the product and it's amazing to see how much value it's bringing to QlikView and Qlik Sense users around the globe. Thanks for all of your support! QVSource (WinForms) 1.6.7 Now Available (This blog post originally appeared on the QVSource blog here.) We are delighted to announce the availability of a new version of QVSource - The QlikView and Qlik Sense API Connector! IMPORTANT Please note, as per our previous blog post, QVSource Web Edition is now available and this is the recommended version now if you are new to QVSource. We are releasing this update to the WinForms edition to help existing commercial users who are not yet ready to transition to the new Web Edition. However, it is likely this will be the last major release of the WinForms edition of QVSource. The WinForms edition is now officially deprecated and will no longer be supported after the 22nd May 2016. As always, in addition to reading the following for the highlighted updates and changes to QVSource we would also recommend you read the notes on the "Change Log" tab of any connectors you are using for a comprehensive overview of changes as there may have been other minor bug fixes and improvements made which are not listed below. • The QVSource Windows Service (QVSourceService.exe) is now set to compile to any CPU (previously it was forced to 32 bit only). This was causing it to run out of memory for a couple of users (whereas the desktop version behaved differently). 
This is now resolved and it should run as 64 bit on 64 bit architectures.

Updated Connectors

• Text Analytics Connector:
  • BREAKING CHANGE: In order to make the Text Analytics Connector compatible with the new Web Edition of QVSource, we have had to make a breaking change carefully detailed here. This means making a slight adjustment to any load scripts which use this connector.
  • We have also made a fix to the Repustate connector to keep up to date with their API.
  • POSSIBLE BREAKING CHANGE: We have removed the embedded Alchemy API Key from this connector; you must sign up for your own Alchemy API key and enter it into the connector if you are using this (rather than one of the other sentiment API options the connector offers).
• General Web Connector: Added a cookie container which maintains any cookies set for the lifetime of the running application (e.g. resends them with new requests) and a new Cookies table which allows these to be extracted. This further opens up the number of APIs which this connector can work with.
• Twitter Connector: For UserTimeline table:
• Google Drive Connector: Added two new inputs to GetSpreadsheet table to rename column headers (Column1, Column2 etc.) and to skip first X rows.
• FTP Connector: FTPGetRawFile/SFTPGetRawFile tables now stream contents of file as binary (rather than first converting to text).
• Google DFA Connector Back!: We marked this as deprecated, mistakenly thinking that the API it uses was due to be sunset by Google in September 2015. On realising our mistake we have removed the deprecated flag and hope to bring it out of beta in the future.
• For certain types of errors returned from the Facebook API, the connector will now wait a few seconds and retry the request a second time. This will also be logged.
• BREAKING CHANGE: Removed the GroupMembers table (unfortunately we no longer have permission to make this API request (https://developers.facebook.com/docs/graph-api/reference/v2.4/group/members) as it requires the user_groups permission (https://developers.facebook.com/docs/facebook-login/permissions/v2.2) which is only 'granted to apps building a Facebook-branded client on platforms where Facebook is not already available'.)
• MailBox Connector:
  • Added a new checkbox input parameter 'Allow Self Signed Client Certificate'. You might wish to check this if you are receiving a 'The remote certificate is invalid according to the validation procedure' error or other certificate related error, as long as you are sure you are connecting to the correct server.
• MongoDB Connector:
  • Upgraded to MongoDB Driver 1.11.0. This should now support version 3.0 of MongoDB as well as have various other fixes compared to the previous version 1.9.2.235 which the connector used.
  • Please note that as a new official BI Connector is now available for MongoDB 3.2 and later, you may wish to first consider this. Because of this we are also now considering retiring this connector if it does not offer any advantages over it.
• Box Connector:
  • ListFilesAndFolders table now requests results in batches of 1000 instead of 100.
  • Added recurse depth input parameter to ListFilesAndFolders.

Removed Connectors

Where do I find this new release?

If you are a QVSource customer or have requested a trial in the past you should see this new release in the personalised download link we should have sent you via email. If you are new to QVSource you can download a fully functional free trial from our website.
As noted above though, we would strongly recommend you use the new Web Edition download (which contains the above updates also).

QVSource Web Edition Now Commercially Available!

We are absolutely delighted to announce that the Web Edition of QVSource is now commercially available! Coincidentally this is also happening almost 4 years to the day since the original WinForms edition of QVSource was commercially released, which has gone on to win awards and gather nearly 100 Five Star reviews on Qlik Market!

There is also one major new feature in this release as compared to the recent beta announced and detailed here - a brand new user management UI!

Although we slipped our planned release date by a few weeks, the product is actually now more polished than we had ever envisioned for a first version and we are really pleased with the result. We have also received some great feedback from early users and even have a few customers who are getting great value from it. Having said that, we do appreciate that it is still relatively new and we look forward to your feedback and fixing any issues which may arise over the next few months.

There is one important change which all current users of QVSource should note: the current (as of today, previous!) 'WinForms' edition of QVSource is now officially deprecated and the next release of this will state this in the title bar. Our current plan is to end support of the WinForms edition of QVSource in six months - so on the 22nd May 2016, after which there will be no new builds or releases of the QVSource WinForms edition.

This shouldn't be of concern; the new Web Edition is an almost seamless swap-in replacement for the WinForms edition and operates in a very similar manner. There are just a few new things to learn if you wish to deploy it on a central server in your organisation. We would recommend reading the new doc pages here to get an overview of the new version and detailed information on the differences.

One final thing for users who have been working with the previous beta(s) of the web edition: we have made a slight modification to the paths used to store user settings, log, cache files etc., so you will need to re-authenticate with any connectors you have been using.

QVSource Web Edition Beta Update 0.9.8

We are pleased to announce that a new beta of the new Web Edition of QVSource is now available - this will likely be the final beta before commercial availability in a few weeks. We have some customers already working with the beta and have received some really great feedback, so if you have not tried it already, please do! Once the Web edition is launched, the current WinForms edition of QVSource will be deprecated and eventually removed.

NOTE - If you are a customer or trial user of QVSource you will find the download available on the same page where you accessed the current WinForms edition of QVSource.

In this blog post we will outline the main updates in this version.

Copy Script Button

We are pleased to finally have a button to copy the generated load script or request URL to your clipboard - a big shout-out to the clipboard.js library, which we are using to provide the functionality. This should work from desktop OSes but might not work from all tablet devices.
Download Button

A new Download button is now available under the data grid preview which allows you to download your data as a QVX (or CSV, TSV, XML or HTML) file - this might be useful if you would like to send the data to a colleague to load into their QlikView or Qlik Sense application, or to have a simple backup. Note that when you click the download it does re-run the query to the API in the background, so you might get slightly different or more up to date data.

Larger Multiline Input

Some connectors have an input parameter where the user needs to enter an XML, JSON or other arbitrary text string - for example the POST parameter on the General Web Connector. This was always a little too small for any serious editing, so now, clicking the new button at the top right opens up a much larger popup window.

Many of the APIs that QVSource connects to use OAuth, and in the past we have only shown the token in the UI after authentication. Often users may have a number of different accounts they use for a service and it was difficult to remember which account the connector had been authenticated for. We are now starting to show this in the UI. This is currently available for all of the Google Connectors, Twitter and the Facebook Connectors, and we plan to add this to all connectors where possible.

Text Analytics Connector

The Sentiment Analysis & Text Analytics Connector is finally available in the Web Edition - we have had to make a breaking change to the way this works which is explained here (but briefly, the Sentiment API is now selected via the Table Category dropdown). Note that for consistency we have also had to make this change in the WinForms edition, but we will mention this in the notes for that release soon. The main thing to note is that you will have to make a minor change to your load script if you are moving from the current WinForms edition to this new Web Edition.

New Connectors

We have new connectors to Amazon S3 and Exact Online! These will also be in the next release of the WinForms edition.

Coming Soon

The last major feature we are working on before release is a user manager/editor so you will no longer need to edit an XML file - we are really pleased with how it both looks and works but unfortunately it just missed making it into this release - something to look forward to for the first commercial release! Here is a sneak preview.

QVSource 1.6.3 Now Available

We have just released version 1.6.3 of QVSource - The QlikView & Qlik Sense API Connector! This is quite a substantial release under the covers; as we prepare to make the first beta download of the new web edition of QVSource available, we have now upgraded all of the (non deprecated) connectors to use the new UI style. This has allowed us to remove a lot of legacy code and unify all the connectors under one system.

As always, in addition to reading the following for the highlighted updates and changes to QVSource we would also recommend you read the notes on the "Change Log" tab of any connectors you are using for a comprehensive overview of changes.

New Connectors

• We have a brand new Survey Monkey Connector.

Updated Connectors

• Fixed bug where search query for Twitter search table(s) was not being URL encoded. This bug appears to have been introduced 09/03/15. Note that this was also fixed in the recent intermediate 1.6.1.8 release made available on 10th June.
• Added a 'manual' authentication option which helped a user who was running a very old version of IE and could not authenticate using the default option in QVSource. The manual auth option is now available for all OAuth1 and OAuth2 based connectors.
• Our AdWords Developer application has been approved and this connector can now be used by everyone.
• Text Analytics & Sentiment Analysis Connector:
  • Polish is now available as an option with the Repustate API.
  • Fixed bug in Alchemy API Sentiment table which gave an error if <mixed> element was not present in the response.
• BREAKING CHANGE: Removed page_storytellers table/metric. This metric is deprecated after the deprecate_PTAT migration.
• Fixed bug where data from requests for different object IDs was getting mixed up.
• Insights tables (excluding application insights) now have empty tables shown (i.e. with column headers) even if no data returned for selected date range.
• Fixed bug where real-time tables (e.g. those with * at the end of their name) were no longer working. Note that this was also fixed in the recent intermediate 1.6.1.8 release made available on 10th June.
• JIRA Connector:
  • Added AllProjects table and optional projectIdOrKey for AllIssues table.
  • Added option to first download all data then write table to minimise issues with missing columns.
• OData Connector:
  • Updated code to set HTTP Basic authentication which did not appear to work in some scenarios.
• Added duration_timespan and duration_total_seconds columns to VideoStatistics table.
• Updated 'Include Content Details' quota info, conditionally include extra columns.
• Added url encoded comment and comment response text.
• BREAKING CHANGE - The MyId table has been removed as it relied on the now deprecated YouTube API V2.
• We have updated the number of API calls which can be made per second through the connector.
• Azure Data Marketplace Connector - This has been updated to the new UI style.
• OneDrive Connector:
  • BREAKING CHANGE: Updated to use new API. This is a breaking change as available tables and columns have changed.

Removed Connectors

V1 of the OData Connector and Salesforce Connectors have been removed. These both have new and improved V2 versions which you should use instead.

Where do I find this new release?

If you are a QVSource customer or have requested a trial in the past you should see this new release in the personalised download link we should have sent you via email. You can also use this link to register your interest in the new beta download for the web edition of QVSource and being notified when it's available. If you are new to QVSource you can download a fully functional free trial from our website.

QVSource 1.6.1 Now Available

We have just released version 1.6.1 of QVSource - The QlikView & Qlik Sense API Connector! There are a lot of updates, so please scan this blog post for any connectors you are using or thinking of using. As always, we would also recommend you read the notes on the "Change Log" tab of any connectors you are using for a comprehensive overview of changes.

New Connectors

• We have a V2 of the OData Connector (V1 is now deprecated, see below). This uses the new UI style and has some additional functionality, in particular the ability to extract referenced/linked tables using the columns with the _Feed and _Entry suffixes (the values of which can be used as new OData Resource Path inputs). You should regenerate your QlikView and Qlik Sense load scripts for the new version.
Updated Connectors

• V3 of the Facebook Insights Connector now has the first version of application insights available and now uses version 2.3 of the Facebook Insights and Facebook Application Insights API.
• Added fix for "WebClient does not support concurrent I/O operations" error which you may have experienced if running multiple reloads simultaneously.
• The PQLAsXml table now returns the raw SOAP response from the DFP API (formerly we were re-serialising the deserialised object model back to XML and the result was wrong).
• Reports are no longer saved to a temporary file in an intermediate step but rather streamed directly to the output.
• Added new 'raw' ReportRaw table which returns the exact contents returned from the API (tab separated values file at time of writing).
• Upgraded to new OAuth 2 framework.
• Added 'Include Statistics Details' to VideoStatistics table.
• The Google Analytics Connector now has support for Unsampled Reports (note these are available to Google Analytics Premium Customers only) via the UnsampledReportsInsert, UnsampledReportsList and UnsampledReportsGet tables. Note at present we only support customers who have set this up to export the reports to Google Drive, and the Google Drive Connector now has a new GetRawFileByIdAsText table to retrieve these more easily.
• SugarCRM Connector
• JIRA Connector: We have added created/updated date filter on AllIssues table and removed the maximum items cap of 500, which was a bug. NOTE - We expect this will be the last release where this connector will be in beta, so if you are making use of this please contact us about pricing.
• DWCheckRequest table now always returns a single row, even in the case of an error message being returned from the API.
• Fixed bug in ReportSuiteGetSegments table. Also, it now checks for both segments and sc_segments elements (a new element_name column specifies which).
• Expanded notes for some tables.
• Fixed bug with ReportGetQueue table.
• Added ReportQueueAndWait and ReportQueueFromFileAndWait tables. This means you no longer need to construct logic to check the queue and wait for the report to complete in your QlikView and Qlik Sense load script.
• We have added inReplyTo column to IMAP and POP messages tables in the MailBox Connector.
• We have added a new max API calls per second input parameter to the Facebook Fan Pages Connector and V3 of the Facebook Insights Connector. This might help users who are receiving API quota exceeded errors.
• We have also noted that, with respect to Groups, the Facebook Fan Pages Connector will now only work with open/public groups. This appears to have been a change since the deprecated v1.0 of the Facebook API.
• The Short URL Expander Connector has been updated to use the new UI style.
• The Yahoo Placemaker Connector has been updated to use the new UI style.
• Klout Connector:
  • Upgraded to new UI style.
  • Added a new KloutIDFromInstagramId table. You could now use the Instagram Connector to find Instagram users and then this connector to get data on their social influence.
• BREAKING CHANGE - The sendemail_status, sendemail_result and sendemail_filesattached columns are now named status, result and filesattached.
• Added 'Use SSL' input parameter.
• From field no longer mandatory (username will be used if not set).
• Upgraded to new UI style.
• Mashape Connector (BREAKING CHANGES - You should regenerate your load scripts when updating to this version):
  • Upgraded to latest UI style.
  • Added content type input parameter.
  • QVX is now the default response format (previously it was HTML). Ensure you upgrade to use QVX format (recommended) or use &format=html in your requests (not recommended).
  • Cache time in hours will now accept integers only.

Deprecated Connectors

V1 of the OData Connector is now deprecated and planned for removal from QVSource in August 2015. Please upgrade to V2 of the Connector (see above).
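The change logs above describe connectors being driven entirely by URL - the connector ID, table name and response format passed as query parameters (e.g. http://localhost:5555/QVSource/Connect/?connectorID=TwitterConnectorV2&table=Search&... with format=qvx or format=json). As an illustration only - this is not an official QVSource client, it simply assumes a QVSource server listening on the default localhost:5555 shown in the change log, and the trailing `q` parameter in the usage comment is hypothetical - fetching such a table from outside QlikView could look like this minimal Python sketch:

```python
# Minimal sketch of calling the query-parameter style endpoint described
# in the change log above (connectorID/table as query params, format=json).
# Assumes a QVSource server on localhost:5555; not an official client.
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:5555/QVSource/Connect/"

def fetch_table(connector_id: str, table: str, **extra) -> bytes:
    """Build the request URL and return the raw response body."""
    params = {"connectorID": connector_id, "table": table, "format": "json"}
    params.update(extra)  # further connector-specific inputs (hypothetical)
    url = BASE_URL + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# e.g. fetch_table("TwitterConnectorV2", "Search", q="qlik")
```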
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16800422966480255, "perplexity": 6155.9079021642265}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690268.19/warc/CC-MAIN-20170925002843-20170925022843-00369.warc.gz"}
https://www.physicsforums.com/threads/questions-about-the-structure-of-the-universe.749538/
# Questions about the structure of the universe

1. Apr 19, 2014

### tolove

In general, what's a good way to understand the structure of the universe? I'm especially curious about the nature of the edge of the universe. Relativistic concepts dealing with the expanding universe confuse me greatly.

2. Apr 19, 2014

### Mordred

First off, there is no edge of the universe; what physicists refer to as our universe is in reference to our observable universe. We simply have no means of measuring beyond that, so we make no definite statements. It could be infinite or finite.

Here is an article covering expansion and redshift: https://www.physicsforums.com/showpos...6&postcount=10
Here is one on universe geometry: https://www.physicsforums.com/showpost.php?p=4720016&postcount=86
Here is what observational cosmology covers: "What we have learned from Observational Cosmology", http://arxiv.org/abs/1304.4446
My signature contains more articles to understand basic cosmology at http://cosmology101.wikidot.com/main
http://cosmocalc.wikidot.com/start is a manual for the http://www.einsteins-theory-of-relativity-4engineers.com/LightCone7/LightCone.html lightcone calculator, showing the expansion history and future expansion of the universe.

Those should get you started on the hot big bang model represented by the $\Lambda$CDM model.

3. Apr 20, 2014

### tolove

I probably have a lot of misconceptions about the edge of the universe. We have an edge of our observable universe, but no "new" things will ever be observed, correct? That means our universe is expanding at least at the speed of light? So, thinking with the Lorentz transformations, the edge of our observable universe, from our perspective, is standing still in time. Which already doesn't make sense to me. But I can move on from here: this portrays an onion layer structure to our universe. From any one perspective, anywhere in space, we observe a bubble of space-time?

What significance even comes from the universe being infinite or finite if we're already existing in an inescapable bubble?

4. Apr 20, 2014

### bapowell

The universe does not expand at the speed of light. This is a very common misconception. Instead, objects can recede from each other at light speed and beyond. This is a consequence of Hubble's Law, which says that recession velocities scale linearly with separation. The edge of the observable universe is determined by how far light has traveled since the big bang. The size of the observable universe is increasing at greater than light speed because photons receding from Earth are traveling at speeds greater than c on account of the expansion. There is no problem with special relativity here, because locally photons always travel at the speed of light -- any superluminal recession velocity observed is due to the expansion of space and is not constrained by Einstein's speed limit.

5. Apr 20, 2014

### Mordred

Not quite: "the greater the distance, the greater the recessive velocity", $V = HD$. This is an observer-location, distance-dependent relation. If you change the observer to the edge of the observable then the expansion is the same as here and you would see a different region for the observable universe. What constitutes your observable universe depends on your location.

6. Apr 21, 2014

### Chronos

Observing the universe from a different reference frame provides no additional information. Viewed from another galaxy 'now' you would merely see a universe older and cooler than the MW [by as many years as its light travel time]. A view 'then' [when photons we now observe were emitted] provides a view of the universe as it appeared when younger and hotter than 'now'. It otherwise has absolutely no effect on the extent of the observable universe.

7. Apr 21, 2014

### tolove

How does recession versus expansion make any difference? (edit) Recession allows for an infinite-centers kind of view, while expansion dictates a single origin, right?

If we existed on those universes as seen by us 'today,' then we would be however many billion years younger. But what if we existed on those universes 'now,' without the looking-back-in-time effect? We would still see a 14 billion year old universe, right? Just... shifted? But in order to reach that universe in that time, we would break the speed of light, which is an impossibility. So, our particular inertial frame is finite (but infinitely expanding), right?

edit edit: Wouldn't this mean that, for our frame, an infinite universe might as well be considered to be compressed into the shell of our observable universe? Which is standing still in time?

8. Apr 21, 2014

### Mordred

Recessive velocity is a measurement of the rate of expansion and is observer and distance dependent, for the reasons already provided. There is no way to observe the entire universe without looking back into time; the redshift is included in our measurements - see the redshift and expansion article I posted above. No, this makes little sense: we cannot determine if our universe is infinite or finite. We can only see a finite portion (the observable universe). Those observations include the speed of light plus expansion.

Think of it this way: a star emits light, say, 13 billion years ago. The universe was smaller then. As the light approaches us the universe expands both ahead of and along the path of the light beam. However, the rate of expansion is LOCALLY minuscule compared to the beam of light's speed. So it keeps approaching us, decreasing the distance between us and the beam. As the expansion occurs, the increase in distance along the path the beam has already travelled causes the frequency of the light beam to decrease (cosmological redshift). However, expansion ahead of the beam does not affect the beam, other than to increase the distance between us and the beam. As the beam approaches us, having already travelled part of the way, its recessive velocity decreases. The closer the light beam gets, the less recessive velocity it will have (assuming you had some magical means to measure the leading edge of the beam lol); also, the less distance between us and the beam, the less the rate of expansion will delay it from arriving. However, the expansion history during its travel will have already reduced the frequency.

edit: just a side note - the amount of expansion per megaparsec is minuscule. When people say the expansion is faster than the speed of light they are referring to a far larger unit of distance; for example, the amount of expansion between us and the CMB is roughly 3c, but per megaparsec it is far, far slower than the speed of light: $H = 67.3\ \text{km/s/Mpc}$, so $H = 67300\ \text{m/s/Mpc}$, as opposed to $c = 2.99792458 \times 10^{8}\ \text{m s}^{-1}$. One Mpc is $3.08567758 \times 10^{22}$ meters.

9. Apr 21, 2014

### bapowell

Hubble's Law, $v = Hr$, relates the recession velocity of a distant object, $v$, to its distance from us, $r$. The expansion rate is given by $H$. From this, you can see that for a given expansion rate, the recession velocity depends on separation. There is no origin necessary: imagine blowing up a balloon and consider the balloon's surface. Where is the center? There is none -- all points on the surface uniformly recede from all others with a speed proportionate to their separations.

10. Apr 22, 2014

### tolove

Let our universe have two objects. Consider "us" to be at rest. As the second object "moves" away, we take the size of the universe to be the space between the two objects. I'd prefer this to be imagined on a 1D line, but I don't know if that breaks things.

Where is the difference between expansion and... recession? Space is created between the two objects, rather than considering space to be traversed by the "moving" object? Space can be created in amounts that exceed the distance light can cover in the same interval? This feels like a wrong view...

11. Apr 22, 2014

### Mordred

Recessive velocity is a measurement of expansion, so in that sense there is no difference. The rate of expansion per Mpc is 67.3 km/s/Mpc. Does this sound like it's anywhere close to being faster than the speed of light? I explained earlier that the point where we describe recessive velocity as being faster than the speed of light is due to a far larger unit of distance - which, quite frankly, is a poor descriptive, as it does depend on the unit of measure, i.e. a very large unit. The point where this occurs is called the Hubble sphere, and it is a cumulative of the expansion per Mpc between us and the Hubble sphere (67.3 + 67.3 + 67.3... keep adding up till it finally exceeds light speed's value lol). However, the rate of expansion in the individual Mpcs between us and the Hubble sphere is still 67.3 km/s/Mpc.

12. Apr 22, 2014

### bapowell

Space expands. Objects in space recede on account of the expansion.

13. Apr 22, 2014

### phinds

Recession and expansion are the same. Space is not a thing and does not stretch or expand; it's just that things IN space get farther apart (see the link in my signature). This is called recession. MOVING is a whole 'nother thing. This gets a bit weird because we are using the English language to describe stuff that it wasn't quite made to describe. Things cannot move greater than the speed of light, but they can, and do, recede at any speed. Objects at the edge of our observable universe are receding from us at about 3c, but their proper motion relative to us is negligible by comparison.

Chronos's comment about "no new information" is misleading. If you could magically move instantaneously to a galaxy 5 billion light years away, you would, as he says, see an observable universe that has the same characteristics as ours, in that your observable universe would be the same radius as ours. You WOULD, however, see things in that observable universe that are not in our observable universe, and you would see the things in our observable universe (those that you could see anyway) as being, as Chronos says, a different age than what we see them as being.

14. Apr 22, 2014

### bapowell

The reason that I draw a distinction between expansion and recession is because the former is a rate, determined by $H$, that applies to the growth of length scales in the universe (I say space expands, but I mean that distances increase). Meanwhile, points in space recede from each other on account of this expansion with a speed given by Hubble's Law. In words, then: recession speed = expansion rate x separation.

15. Apr 22, 2014

### tolove

Okay, I think I have a more accurate picture now; however, this new picture is equally confusing. So, space is created? Is there any way to better understand this concept? I haven't touched quantum mechanics yet.

In other words, the balloon model is serious business. We're inside an inflating balloon and cannot leave our little subset of space. The balloon might have a center with respect to the 'actual' edge, but we'll never know, and, for us, it is a meaningless question.

I'm wanting to view the universe as a lower dimensional bubble contained in something else, such as a soap bubble in a bucket of water. The 2D-ish soap bubble thins out in a uniform manner as it is absorbed into the 3D bucket. Is there any actual logic to this kind of perspective? I'm wanting to use this perspective because the idea of an infinite expansion seems as arbitrary to me as claiming to be the center of the universe.

edit: The big bang implies a point, right? But this doesn't imply an actual center for the universe?

16. Apr 22, 2014

### phinds

I'm not sure whether you've got it or not, but I think your wording is confusing. The fact that no matter where we are in the universe we are at the center of our own OBSERVABLE universe has nothing whatsoever to do with the balloon analogy. We are not inside an inflating balloon, because a balloon has a material, physical boundary (edge) whereas the observable universe has a horizon but there is nothing material marking it. The balloon analogy is best understood if you read my discussion of it linked to in my signature. The "balloon" actually goes away in the most realistic version of the analogy, as I explain there.

Space is not created; things IN space just get farther apart.

17. Apr 22, 2014

### bapowell

No, nothing is inside the balloon. We are on the 2D surface -- it is analogous to the real 3D space of the universe. The balloon nicely illustrates that no one single point on the balloon is central; however, from its vantage point, it sure appears to be. As long as the balloon is homogeneously inflated, all points recede from all others according to Hubble's Law.

No, the big bang implies a beginning (a point in time only).

18. Apr 22, 2014

### phinds

Fair enough. Thanks for pointing that out.

19. Apr 22, 2014

### Mordred

http://www.astro.ucla.edu/~wright/balloons.gif - just a visual aid.

edit: not a very good one though lol, it tends to have the directions of expansion wrong. I'll see if I can locate a better one. This one isn't bad; they give three different animations: https://www.e-education.psu.edu/astro801/content/l10_p4.html. Ignore the descriptive explosion though lol; some of their wording on the site isn't precise, and other descriptives are just plain wrong. So just use the animations.

Ignore this as well: "The idea is that we live in a universe with three spatial dimensions that we can perceive, but that there exist "extra" dimensions (maybe one, maybe more than one) that contain the center of the expansion. Just like the two-dimensional beings that inhabit the surface of the balloon universe, we cannot observe the center of our universe. We can tell that it is expanding, but we cannot identify a location in our 3D space that is the center of the expansion."

It's too bad the article itself is so poorly worded. I wanted to post the animations merely as a visual aid showing the 1D rubber band analogy, the raisin bread/balloon analogy and the redshift. It's the only site I found that had all 3. Unfortunately I couldn't just copy the animations, so ignore all the descriptives on the site - far too many errors.

20. Apr 22, 2014

### tolove

My wording is very poor; thank you all for helping me fix it. I'm trying to focus on one thought at a time. Sorry for not replying to everything that's being said!

This particularly is trouble for me: I asked two cosmology professors "does space grow" today. I got both a yes and a no. I guess the next step is to ask for more detail?

Here's my thinking: Things become further apart if the space in between them increases. The increase in space of the universe isn't explained by relative velocities alone. So we have extra space that was never traversed. Therefore... empty space grows..? But this also confuses me because the idea of space growing is rather crazy as well. What happens to space that is grown inside a galaxy? Galaxies are held together with a ridiculous amount of energy. Does grown space leak out like some kind of gas? Problems!

Haha, maybe that would make sense. Galaxies leaking space and all.

edit: I like that a lot actually. Is matter turning into space a thing that's been studied?
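Mordred's arithmetic in posts #8 and #11 is easy to reproduce. Using only the figures quoted in the thread (H = 67.3 km/s/Mpc, c = 2.99792458 × 10^8 m/s), a short Python sketch shows Hubble's Law v = HD, the distance at which the cumulative recession velocity reaches c (the Hubble sphere), and a distance around 14,000 Mpc giving roughly the 3c quoted in the thread for the CMB (that distance value is an assumption chosen to match the quoted figure):

```python
# Hubble's Law v = H * D with the numbers quoted in the thread.
H = 67.3            # expansion rate, km/s per Mpc
C = 2.99792458e5    # speed of light in km/s

def recession_velocity(distance_mpc: float) -> float:
    """Recession velocity in km/s for a given separation in Mpc."""
    return H * distance_mpc

# Hubble sphere: the separation at which the cumulative v reaches c.
print(f"Hubble sphere at ~{C / H:.0f} Mpc")  # ~4455 Mpc

# Per-Mpc expansion is tiny; only the cumulative effect exceeds c.
for d_mpc in (1, 100, 4455, 14000):  # ~14000 Mpc assumed for the CMB (~3c)
    v = recession_velocity(d_mpc)
    print(f"D = {d_mpc:>6} Mpc -> v = {v:>9.1f} km/s ({v / C:.2f} c)")
```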
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6964940428733826, "perplexity": 700.717121268664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806316.80/warc/CC-MAIN-20171121040104-20171121060104-00468.warc.gz"}
https://www.physicsforums.com/threads/maxwell-equations-what-are-all-the-assumptions-used-in-derivation.6223/
# Maxwell Equations: What are all the assumptions used in derivation?

Thread starter: tommy555

#1 tommy555

[SOLVED] Maxwell Equations: What are all the assumptions used in derivation?

I am trying to refute some of the theories of Tesla which are based on his idea that electromagnetic energy is also transmitted via a longitudinal wave. As far as I know Maxwell's equations do not support a longitudinal wave solution. I would like to understand exactly what are all the assumptions that go into their derivation -- so that essentially, for Tesla's theory to be correct, one or more of those assumptions must be violated. Thank you.

## Answers and Replies

#2

Hi there,

To refute Tesla is very simple: Suppose you have a plane wave travelling in the z-direction. In that case the solutions to the wave equations (derived from Maxwell's laws) have to be of the form

$\vec{E} = \vec{E}_0\, e^{i(kz - \omega t)}$ and $\vec{B} = \vec{B}_0\, e^{i(kz - \omega t)}$.

That is, we have a plane wave of monochromatic light with frequency $\omega$ travelling in the z direction; $\vec{E}_0$ and $\vec{B}_0$ are the complex amplitudes of the wave. Plugging these solutions into $\nabla \cdot \vec{E} = 0$ (there is no charge) and $\nabla \cdot \vec{B} = 0$, we see that we have to have $(\vec{E}_0)_z = (\vec{B}_0)_z = 0$. Since there are only components in the x/y directions, we have a transverse wave.

#3 tommy555

While this certainly makes sense, if I were trying to advocate Tesla then I would just assert that Maxwell's eqns are false under certain conditions. Can you help me to get behind what went into Maxwell's eqns to begin with? Your insight would be greatly appreciated.

#4

I've looked at this a bit myself. If you had longitudinal EM waves it would force charged particles "off mass shell", i.e. electrons would change their rest mass under the influence of such waves. The fact that charged particles have to stay "on mass shell" determines the form of Maxwell's equations.

#5

Maxwell's equations are a set of laws that were already known. For instance $\nabla \cdot \vec{E} = \rho/\epsilon_0$ is known as Gauss's law. Then there is Faraday's law and Ampere's law (the last one was adjusted by Maxwell with an extra term). The only one without a name is $\nabla \cdot \vec{B} = 0$ (an interesting one, since it tells you there is no magnetic charge). What went into these equations is a lot of research and experimenting. As you might know, Faraday was one of the best experimenters of his age. These laws can all be derived from elementary facts. You can look that up in any textbook on electrodynamics (you should look for electrostatics though). For instance David J. Griffiths' "Introduction to Electrodynamics" is a good book. It would take too much time to derive all that here (it takes Griffiths 285 pages to get to Maxwell's equations). Point is that you can derive these things from first principles (basic mathematics) and experimental fact.

#6

The following rules and guesses informed the Maxwell theory:

1. Coulomb rules - static electric charges and static magnetic poles have inverse square forces on static charges and static poles, respectively
2. Poisson rules - electrostatic and magnetostatic force fields can be derived from potential functions
3. Gauss rule - sum of signed electric charges in a volume is conserved
4. Biot-Savart rule - a small current element induces an inverse square magnetic field around it
5. Ampere rule - two small current elements have an inverse square force between them, dependent on their mutual orientation
6. Lenz rule - a flux of a magnetic field induces an opposing electromotive force
7. Faraday rule - a conductor moving across a magnetic field experiences an electromotive force
8. Maxwell guess - a flux of an electric field induces a magnetic field, without requiring actual current elements

Mathematically, Maxwell presumes that the fields can be represented by three-dimensional electric and magnetic field components, subject to three-dimensional vector calculus rules. Actual vector notation came later from Heaviside. If longitudinal waves put in no appearance, it is because the Maxwell theory doesn't predict any.

#7 tommy555

Thank you for all of the very interesting information. Is it possible for Maxwell's equations to be traced back to only laws like conservation of energy, charge, etc?

#8

Obviously it is necessary to have some kind of interplay between electric and magnetic phenomena in order to produce the Maxwell equations. This can be direct or this can be subtle, but it must come in somehow. Heinrich Hertz (who added charges and current vectors to the Maxwell equations) thought the result so fundamental and so total in consequences that he declared that these equations, along with Newton's laws, formed a complete foundation for classical physics, requiring no antecedent physical facts.

#9

Originally posted by Tyger: "I've looked at this a bit myself. If you had longitudinal EM waves it would force charged particles 'off mass shell' ..."

Tyger - Why would longitudinal waves result in "off mass shell" charges? Please explain.

Creator

P.S. Nice 8 point list, Quartodeciman.

#10

Originally posted by tommy555: "I would like to understand exactly what are all the assumptions that go into their derivation -- ...."

Quartodeciman gave a fairly good list. It is interesting to realize that even though we quote his 4 eqns. in terms of the field vectors B and E, Maxwell himself derived many (if not most) in terms of what we call today the vector potential A (which he referred to as electrodynamic momentum). However, unbeknownst to many, there is one assumption that is usually omitted (or ignored) when considering Maxwell's derivation: Maxwell derived his eqns. upon the firm conviction that the ether is the MEDIUM in which electromagnetic phenomena take place.

Creator

"The works of the Lord are great, studied by all who have pleasure therein" -- Inscribed in the archway of the entrance to James Clerk Maxwell's Cavendish Laboratory
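The transversality argument in post #2 can be written out as a one-line computation: substituting the plane-wave ansatz into the two source-free divergence equations kills the z-components of both amplitudes. A worked version of that substitution follows (nothing beyond what the post states; the x and y derivatives vanish because the fields depend only on z):

```latex
\vec{E} = \vec{E}_0\, e^{i(kz-\omega t)}
\;\Longrightarrow\;
\nabla \cdot \vec{E}
= \partial_x E_x + \partial_y E_y + \partial_z E_z
= ik\,(\vec{E}_0)_z\, e^{i(kz-\omega t)} = 0
\;\Longrightarrow\; (\vec{E}_0)_z = 0
```

and identically $\nabla \cdot \vec{B} = 0 \Rightarrow (\vec{B}_0)_z = 0$, so both amplitudes lie entirely in the $x$-$y$ plane and the wave is transverse - which is the sense in which Maxwell's equations exclude free-space longitudinal waves.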
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9202190637588501, "perplexity": 1073.920695627066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587719.64/warc/CC-MAIN-20211025154225-20211025184225-00513.warc.gz"}
https://socratic.org/questions/12-00-moles-of-naclo3-will-produce-how-many-grams-of-o2-2-naclo3-2-nacl-3-o2#103582
# 12.00 moles of NaClO3 will produce how many grams of O2? 2 NaClO3 ---> 2 NaCl + 3 O2

May 14, 2014

How many grams of $O_2$ are produced from 12.00 moles of $NaClO_3$?

We begin with the balanced chemical equation that is provided in the question:

$2\,NaClO_3 \to 2\,NaCl + 3\,O_2$

Next we determine what we have and what we want. We have 12.00 moles of $NaClO_3$ and we want grams of $O_2$. We set up a roadmap to solve the problem:

$\text{mol}\ NaClO_3 \to \text{mol}\ O_2 \to \text{grams}\ O_2$

We need the molar mass (gram formula mass) of $O_2$: $O_2$ = 32.0 g/mol. We need the mole ratio between $NaClO_3$ : $O_2$, which is 2 : 3. This comes from the coefficients of the balanced chemical equation.

Now we set up conversion factors following the roadmap from above - the unit we want in the numerator, the unit to cancel in the denominator:

$12.00\ \text{mol}\ NaClO_3 \times \frac{3\ \text{mol}\ O_2}{2\ \text{mol}\ NaClO_3} \times \frac{32.0\ \text{g}\ O_2}{1\ \text{mol}\ O_2} = 576.0\ \text{g}\ O_2$

Multiply the numerators and divide by the denominators. The outcome is that $576.0\ \text{g}$ of $O_2$ will be produced from 12.00 moles of $NaClO_3$.

Apr 25, 2016

The number of moles of NaClO3 = 12 moles
The number of moles of O2 = 18 moles
Molar mass of O2 = 32 g/mol

#### Explanation:

If 2 moles of NaClO3 give you 3 moles of O2, it's because the mole ratio between NaClO3 : O2 is 2 : 3. Then 12 moles of NaClO3 must give you (12 ÷ 2) × 3 = 18 moles of O2. You have found the number of moles (n) of O2.

Note that number of moles (n) = mass of substance (m) ÷ molar mass (M): $n = m \div M$.

The next step is to find out the molar mass (M) of O2. If you look in your periodic table, the atomic weight is equivalent to the molar mass. The molar mass of one O is 16 g/mol. Since there are two O present in the reaction (refer to the equation), the molar mass of O2 is 16 g/mol × 2 = 32 g/mol. You've now found the molar mass (M).

The final step is to find out the mass (m) of O2. Since you found the number of moles (n) and the molar mass (M), you can now find the mass of the substance. Looking back at the formula $n = m \div M$, to find m we rearrange it to $m = n \times M$. The mass of O2 is 18 moles × 32 g/mol = 576 grams.

Overall, 12 moles of NaClO3 will produce 576 grams of O2.
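The roadmap in the first answer (mol NaClO3 → mol O2 → grams O2) maps directly onto two lines of arithmetic. Here is the same computation as a small Python sketch, using only the values given in the answers (the 3:2 mole ratio from the balanced equation and 32.0 g/mol for O2):

```python
# mol NaClO3 -> mol O2 -> grams O2, per the roadmap in the answer above.
O2_PER_NACLO3 = 3 / 2    # mole ratio from 2 NaClO3 -> 2 NaCl + 3 O2
MOLAR_MASS_O2 = 32.0     # g/mol, from the periodic table

def grams_o2(mol_naclo3: float) -> float:
    mol_o2 = mol_naclo3 * O2_PER_NACLO3  # 12.00 mol -> 18.00 mol O2
    return mol_o2 * MOLAR_MASS_O2        # 18.00 mol -> 576.0 g O2

print(grams_o2(12.00))  # 576.0
```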
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 21, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8908429145812988, "perplexity": 2113.6029710246744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358208.31/warc/CC-MAIN-20211127163427-20211127193427-00559.warc.gz"}
https://www.projecteuclid.org/euclid.involve/1513775062
## Involve: A Journal of Mathematics

• Involve
• Volume 11, Number 2 (2018), 271-282.

### On computable classes of equidistant sets: finite focal sets

#### Abstract

The equidistant set of two nonempty subsets $K$ and $L$ in the Euclidean plane is the set of all points that have the same distance from $K$ and $L$. Since the classical conics can be also given in this way, equidistant sets can be considered as one of their generalizations: $K$ and $L$ are called the focal sets. The points of an equidistant set are difficult to determine in general because there are no simple formulas to compute the distance between a point and a set. As a simplification of the general problem, we are going to investigate equidistant sets with finite focal sets. The main result is the characterization of the equidistant points in terms of computable constants and parametrization. The process is presented by a Maple algorithm. Its motivation is a kind of continuity property of equidistant sets. Therefore we can approximate the equidistant points of $K$ and $L$ with the equidistant points of finite subsets $K_n$ and $L_n$. Such an approximation can be applied to the computer simulation, as some examples show in the last section.

#### Article information

Source
Involve, Volume 11, Number 2 (2018), 271-282.

Dates
Revised: 26 January 2017
Accepted: 4 February 2017
First available in Project Euclid: 20 December 2017

https://projecteuclid.org/euclid.involve/1513775062

Digital Object Identifier
doi:10.2140/involve.2018.11.271

Mathematical Reviews number (MathSciNet)
MR3733957

Zentralblatt MATH identifier
06817020

Subjects
Primary: 51M04: Elementary problems in Euclidean geometries

#### Citation

Vincze, Csaba; Varga, Adrienn; Oláh, Márk; Fórián, László; Lőrinc, Sándor. On computable classes of equidistant sets: finite focal sets. Involve 11 (2018), no. 2, 271--282. doi:10.2140/involve.2018.11.271. https://projecteuclid.org/euclid.involve/1513775062
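The approximation idea in the abstract, replacing $K$ and $L$ by finite subsets and computing the equidistant points of those, is easy to experiment with numerically. Below is a minimal Python sketch (not the paper's Maple algorithm) that marks grid points whose distances to two finite focal sets are nearly equal; the focal sets, grid, and tolerance are illustrative assumptions.

```python
import numpy as np

# Finite focal sets (illustrative choices)
K = np.array([(0.0, 0.0), (1.0, 0.0)])
L = np.array([(0.0, 2.0)])

def dist_to_set(p, S):
    """Distance from point p to the finite set S: minimum over members."""
    return np.min(np.linalg.norm(S - p, axis=1))

# Scan a grid and keep points nearly equidistant from K and L
pts = []
for x in np.linspace(-3, 3, 601):
    for y in np.linspace(-3, 3, 601):
        p = np.array([x, y])
        if abs(dist_to_set(p, K) - dist_to_set(p, L)) < 1e-2:
            pts.append((x, y))

print(len(pts), "approximate equidistant points found")
```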
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6113656759262085, "perplexity": 2142.472813705051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573053.13/warc/CC-MAIN-20190917061226-20190917083226-00294.warc.gz"}
http://www.purplemath.com/learning/viewtopic.php?f=7&t=646
## [MOVED] roots and the least common denominator

Simplification, evaluation, linear equations, linear graphs, linear inequalities, basic word problems, etc.

Tennise
Posts: 7
Joined: Sat Jun 06, 2009 7:12 am
Contact:

### [MOVED] roots and the least common denominator

To find the least common denominator of $\frac{2}{3a}+\frac{4}{a^2}$ the book I'm reading says to multiply them to get a common denominator of $3a^2$: $\frac{2a}{3a^2}+\frac{12}{3a^2} = \frac{2a+12}{3a^2}$

But wouldn't it be more efficient to find the square roots of the second fraction before multiplying? $\frac{2}{3a}+\sqrt{\frac{4}{a^2}} = \frac{2}{3a}+\frac{2}{a}\cdot\frac{3}{3} = \frac{2}{3a} + \frac{6}{3a} = \frac{8}{3a}$

jaybird0827
Posts: 24
Joined: Tue May 26, 2009 6:31 pm
Location: NC
Contact:

### Re: roots and the least common denominator

Tennise wrote: To find the least common denominator of $\frac{2}{3a}+\frac{4}{a^2}$ the book I'm reading says to multiply them to get a common denominator of $3a^2$: $\frac{2a}{3a^2}+\frac{12}{3a^2} = \frac{2a+12}{3a^2}$ But wouldn't it be more efficient to find the square roots of the second fraction before multiplying? $\frac{2}{3a}+\sqrt{\frac{4}{a^2}} = \frac{2}{3a}+\frac{2}{a}\cdot\frac{3}{3} = \frac{2}{3a} + \frac{6}{3a} = \frac{8}{3a}$

Where do you get the idea that the square root of a fraction is equal to the fraction? Please compare the value of $\frac{4}{a^2}$ with $\frac{2}{a}$ by plugging a random value such as $7$ into each of these expressions for $a$. Are you sure that $\frac{4}{a^2}\,=\,\frac{2}{a}$ for all $a$?

Also, you may find it helpful to review lcm and gcf here: http://www.purplemath.com/modules/lcm_gcf.htm

Tennise
Posts: 7
Joined: Sat Jun 06, 2009 7:12 am
Contact:

### Re: roots and the least common denominator

Oh, I see now... Thought I could cheat my way through and remove the exponent by squaring, guess not! Thanks for your help, and the link, although my problem wasn't LCM/GCF here :P

Edit: I just noticed you moved this thread, sorry for posting in the wrong area; arithmetic's description listed fractions...
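jaybird0827's suggestion, plugging in a concrete value to see that $\frac{4}{a^2}$ and $\frac{2}{a}$ differ, is easy to carry out; here is a quick Python check (my own illustration, not part of the thread):

```python
# Compare 4/a^2 with 2/a at a sample value, as the reply suggests
a = 7
print(4 / a**2)   # 0.0816...
print(2 / a)      # 0.2857...  -> not equal, so a fraction cannot be
                  # replaced by its own square root mid-calculation
```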
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 12, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8636319041252136, "perplexity": 763.4345489514468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297689.58/warc/CC-MAIN-20150323172137-00221-ip-10-168-14-71.ec2.internal.warc.gz"}
https://pixel-druid.com/mean-value-theorem-and-taylors-theorem-todo.html
## § Mean value theorem and Taylor's theorem. (TODO)

I realise that there are many theorems that I learnt during my preparation for JEE that I simply don't know how to prove. This is one of them. Here I exhibit the proof of Taylor's theorem from Tu's introduction to smooth manifolds.

Taylor's theorem: Let $f: \mathbb R \rightarrow \mathbb R$ be a smooth function, and let $n \in \mathbb N$ be an "approximation cutoff". Then for all $x_0 \in \mathbb R$ there exists a smooth function $r \in C^{\infty}(\mathbb R)$ such that:

$$f(x) = f(x_0) + (x - x_0) \frac{f'(x_0)}{1!} + (x - x_0)^2 \frac{f''(x_0)}{2!} + \dots + (x - x_0)^n \frac{f^{(n)}(x_0)}{n!} + (x - x_0)^{n+1} r(x)$$

We prove this by induction on $n$. For $n = 0$, we need to show that there exists an $r$ such that $f(x) = f(x_0) + (x - x_0)\, r(x)$. We begin by parametrising the path from $x_0$ to $x$ as $p(t) \equiv (1 - t) x_0 + tx$. Then we consider $(f \circ p)'$:

$$\frac{d f(p(t))}{dt} = \frac{d f((1 - t) x_0 + tx)}{dt} = (x - x_0)\, f'((1 - t)x_0 + tx)$$

Integrating on both sides with limits $t=0, t=1$ yields:

$$\int_0^1 \frac{d f(p(t))}{dt}\, dt = \int_0^1 (x - x_0)\, f'((1 - t)x_0 + tx)\, dt$$
$$f(p(1)) - f(p(0)) = (x - x_0) \int_0^1 f'((1 - t)x_0 + tx)\, dt$$
$$f(x) - f(x_0) = (x - x_0)\, g_1(x)$$

where we define $g_1(x) \equiv \int_0^1 f'((1 - t)x_0 + tx)\, dt$; the subscript in $g_1$ witnesses that we have the first derivative of $f$ in its expression. By rearranging, we get:

$$f(x) = f(x_0) + (x - x_0)\, g_1(x)$$

If we want higher derivatives, then we simply notice that $g_1(x) \equiv \int_0^1 f'((1 - t)x_0 + tx)\, dt$ has the same shape as the expression we started from, with $f$ replaced by $f'$, so the same construction can be iterated.
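The base-case identity $f(x) = f(x_0) + (x - x_0)\, g_1(x)$ is easy to sanity-check numerically. Here is a small Python sketch (my addition, not from the original note) that evaluates $g_1$ by quadrature for $f = \sin$:

```python
import math
from scipy.integrate import quad

f = math.sin
fprime = math.cos

def g1(x, x0):
    # g1(x) = integral over t in [0, 1] of f'((1 - t) x0 + t x) dt
    val, _ = quad(lambda t: fprime((1 - t) * x0 + t * x), 0.0, 1.0)
    return val

x0, x = 0.3, 1.7
lhs = f(x)
rhs = f(x0) + (x - x0) * g1(x, x0)
print(lhs, rhs)   # the two values agree to quadrature precision
```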
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 21, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.000008463859558, "perplexity": 848.864823703007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500041.2/warc/CC-MAIN-20230202232251-20230203022251-00123.warc.gz"}
https://hub.cognite.com/open-industrial-data-211/power-quality-frequency-measurement-1083?postid=2039
Solved

# Power Quality/Frequency Measurement

Hi All. I am in search of a meter that measures either the power quality of the power supplied to the Valhall platform or a measurement of the power frequency (Hz) at the Valhall platform. To me, it does not look like such data exists here on the OID. Am I correct?

Best answer by Anonymous, 5 September 2022, 19:47
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.909824013710022, "perplexity": 3477.995237923806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499695.59/warc/CC-MAIN-20230128220716-20230129010716-00842.warc.gz"}
http://mathhelpforum.com/advanced-statistics/204357-valid-joint-probability-density-function-print.html
# Valid joint probability density function?

• Sep 30th 2012, 11:17 AM
drabbie
Valid joint probability density function?
I have to show that this is a valid joint probability density function. My understanding is that this is valid when the integral is equal to 1. My question is, how do I do a joint integration when only one of the variables is in play?

f(x,y) = 1/y for 0<x<y, 0<y<1

I appreciate the help!
• Sep 30th 2012, 11:27 AM
Plato
Re: Valid joint probability density function?
Quote:

Originally Posted by drabbie
I have to show that this is a valid joint probability density function. My understanding is that this is valid when the integral is equal to 1. My question is, how do I do a joint integration when only one of the variables is in play?

f(x,y) = 1/y for 0<x<y, 0<y<1

Is the function non-negative everywhere? What is $\int_0^1 \int_0^y \frac{1}{y}\,dx\,dy =~?$
• Sep 30th 2012, 12:05 PM
drabbie
Re: Valid joint probability density function?
They did not specifically say 0 elsewhere, but I assume it is zero outside of those limits. I will try it like that, thank you.
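Plato's double integral can be confirmed symbolically; this short sympy check (added here for illustration, not part of the thread) shows the density integrates to 1:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
# Integrate 1/y over 0 < x < y first, then over 0 < y < 1
total = sp.integrate(sp.integrate(1/y, (x, 0, y)), (y, 0, 1))
print(total)  # 1, so f is a valid joint pdf (it is also non-negative on its support)
```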
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9528499841690063, "perplexity": 436.17962232218963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822747.22/warc/CC-MAIN-20171018051631-20171018071631-00884.warc.gz"}
http://proceedings.mlr.press/v80/chen18a.html
Deep multitask networks, in which one neural network produces multiple predictive outputs, can offer better speed and performance than their single-task counterparts but are challenging to train properly. We present a gradient normalization (GradNorm) algorithm that automatically balances training in deep multitask models by dynamically tuning gradient magnitudes. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting across multiple tasks when compared to single-task networks, static baselines, and other adaptive multitask loss balancing techniques. GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter $\alpha$. Thus, what was once a tedious search process that incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we will demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.
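The abstract does not spell out the update rule, so the following PyTorch-style sketch of a gradient-normalization balancing loss is my reading of the general idea described above (equalizing per-task gradient magnitudes via a single asymmetry hyperparameter $\alpha$), not the authors' reference implementation; all names and the exact form are illustrative assumptions.

```python
import torch

def gradnorm_loss(task_losses, weights, shared_params, initial_losses, alpha=1.5):
    """Sketch of a GradNorm-style balancing loss.

    task_losses: list of scalar task losses L_i
    weights: learnable per-task weights w_i (1-D tensor, requires_grad=True)
    shared_params: a weight tensor of the last shared layer
    initial_losses: L_i recorded at the start of training (floats)
    """
    # Norm of the gradient of each weighted task loss w.r.t. the shared layer
    g_norms = []
    for i, L in enumerate(task_losses):
        g = torch.autograd.grad(weights[i] * L, shared_params,
                                retain_graph=True, create_graph=True)[0]
        g_norms.append(g.norm())
    g_norms = torch.stack(g_norms)

    # Relative inverse training rates r_i, normalised by their mean
    loss_ratios = torch.stack([L.detach() / L0
                               for L, L0 in zip(task_losses, initial_losses)])
    r = loss_ratios / loss_ratios.mean()

    # Target gradient norms: mean norm scaled by r_i ** alpha, held constant
    target = (g_norms.mean() * r ** alpha).detach()
    return torch.abs(g_norms - target).sum()
```

The returned scalar would be minimized with respect to the task weights only, alongside the usual weighted sum of task losses; that separation of optimizers is also an assumption of this sketch.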
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37884312868118286, "perplexity": 1747.86975760433}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571911.5/warc/CC-MAIN-20220813081639-20220813111639-00469.warc.gz"}
http://soft-matter.seas.harvard.edu/index.php?title=Liquids_on_topologically_nano-patterned_surfaces&oldid=5524
# Liquids on topologically nano-patterned surfaces

"Liquids on Topologically Nanopatterned Surfaces"

Oleg Gang, Kyle J. Alvine, Masafumi Fukuto, Peter S. Pershan, Charles T. Black, and Benjamin M. Ocko

Physical Review Letters 95(21) 217801 (2005)

## Soft Matter Keywords

liquid thin film, wetting, laterally heterogeneous surface

## Summary

Gang, et al. present experimental work performed to verify theory on the wetting of thin films. Working with a nanopatterned silicon surface and a film of methyl-cyclohexane (MCH), the authors show that film thickness has two different power law dependences on the chemical potential offset from the bulk liquid-vapor equilibrium. These two dependences delineate two regimes of film growth. The first regime is characterized by constant film thickness on the surface and filling of the nanowells, while the second regime is characterized by growth of the surface film after the wells have been completely filled. The behavior of the film is studied using x-ray reflectivity (XR) and grazing incidence diffraction (GID).

## Practical Application of Research

Though not all results from this paper are fully explained by theory, the authors have shown that film thickness on a heterogeneous surface is controllable. Deposition of coatings will benefit from this work, and it is feasible that nanopatterning a surface before deposition can give an extra bit of control over the liquid thin film.

## Thin Film Thickness

With their patterned silicon/MCH system, the authors set out to validate theories on the dependence of liquid thin film thickness, $d$, on the chemical potential difference, $\Delta \mu$. Based on previous work in the Pershan lab and others, the authors expect the thickness of a film on a flat surface with van der Waals adsorption to follow the power law $d \propto \Delta \mu^{-1/3}$. However, for a surface containing isolated, infinitely deep parabolic cavities, the power law is expected to be roughly $d \propto \Delta \mu^{-3.4}$. From XR data taken on their samples, the authors calculate the electron density in the well and on the flat surface (see figure 1).
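Power-law exponents like the $-1/3$ above are typically extracted from a straight-line fit on log-log axes; here is a generic numpy sketch of that procedure (my illustration with made-up sample data, not the paper's analysis):

```python
import numpy as np

# Hypothetical (delta_mu, thickness) measurements
delta_mu = np.array([0.1, 0.2, 0.5, 1.0, 2.0])
d = np.array([2.2, 1.75, 1.28, 1.02, 0.81])

# Fit log d = m * log(delta_mu) + c; the slope m estimates the exponent
m, c = np.polyfit(np.log(delta_mu), np.log(d), 1)
print("fitted exponent:", m)   # expect ~ -1/3 for flat-surface van der Waals wetting
```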
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7509741187095642, "perplexity": 2250.352398540938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103639050.36/warc/CC-MAIN-20220629115352-20220629145352-00583.warc.gz"}
https://math.stackexchange.com/questions/176937/prerequisite-reading-before-studying-the-collatz-3x1-problem
# Prerequisite reading before studying the Collatz $3x+1$ Problem

Let's assume I am starting college and have just finished calculus. I've been reading a bit online about the Collatz $3x+1$ Problem and find it to be very intriguing. However, a lot of what I'm reading uses terms and techniques that I have not seen before. I'm wondering what prerequisite (text book) reading is required before starting to study this problem?

Put another way: I'm thinking about reading The Ultimate Challenge: The 3x+1 Problem by Jeffrey C. Lagarias. What areas of mathematics will I need to understand first before being able to fully understand this book?

• Almost the same: math.stackexchange.com/questions/158291/… Aug 7, 2012 at 15:56
• @Zander I think the difference is that in that question the OP claims he has "enough knowledge to work on a singular facet of the problem". Whereas I am perhaps a step or two behind this person. I'll update my post and see if I can clarify. Aug 14, 2012 at 22:25
• Making any progress, Brett? Jun 27, 2016 at 23:42
• I made a little back in 2012 when I was first looking, but since then have not had much time to work on it. In fact, I had forgotten I asked this question and now am thinking I should start studying again. Jun 29, 2016 at 21:31

The Lagarias book is a compilation of papers by various authors about various aspects of the problem. Different papers have different prerequisites. Some of the more expository papers have essentially no prerequisites at all; for others, you'll want to know about dynamical systems, Markov chains, ergodic theory, $p$-adic numbers, Turing machines and undecidability, and, of course, elementary Number Theory. And each of these has prerequisites, e.g., ergodic theory is based on measure theory, Markov chains involve Linear Algebra, etc., etc., etc.

But don't be disheartened! You don't need all these for every paper, not by any means, and a well-written paper will teach you something useful in its introductory paragraphs even if the rest of the paper is beyond you. I think the best thing is to jump in, start reading something you find interesting, and then, if you get stuck, come back here to ask something like, "What do I need to know to understand the proof that all furbles are craginacs, as given on page 977 of Peeble and Zimp, The Elephant and the $3x+1$ Problem?" It's much easier to give prerequisites when you have a narrowly-focussed problem in mind, than when it's as broad as "I want to learn about the $3x+1$ problem".

• All furbles and craginacs are second-order zomboliod perfuncts. Every beginning student of tolopography should know that. – John Jun 27, 2016 at 18:57
There is something more lingering there around, for instance a connection to a detail in the Waring-problem (see for instance mathworld, "power fractional parts") - so I recommend to take a deep look into that subject. • Thanks for the answer. However, I was actually looking for an even earlier start than the m-cycles, transcendent numbers, power fractional parts, etc. I'm still lost and wondering actually, for someone with the usual maths you might study for a B.S. in engineering, physics, or applied math, what should I read next? I'm guessing something in Number Theory, but not sure where to start? Jul 30, 2012 at 20:12 • Hmm, the path to the Steiner/Simons-approach begins with modular consideration. So the modularity of exponential forms, a simple example $\small 3^n-1$ divisible by 5 dependent on n etc was a good start for me. Long time I did not go into the problem deeply, so I do not know the currently best introductions. Perhaps my own two treatizes might be a door for the absolute beginner? Try go.helms-net.de and click one of the two links for the Collatz-problem. They also give further links (I remember "Ken Conrow", and others) Jul 30, 2012 at 20:23 • Alternatively you might mail me privately and I could send you my set of articles which I collected over 2002-2009 from internet-discussions and also links to digitized versions of journal-articles which I've found in online-libraries in that time. Jul 30, 2012 at 20:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6742606163024902, "perplexity": 799.4130283879329}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662552994.41/warc/CC-MAIN-20220523011006-20220523041006-00203.warc.gz"}
https://physics.stackexchange.com/questions/440142/does-it-really-make-sense-to-talk-about-the-color-of-gluons/440149
# Does it really make sense to talk about the color of gluons?

It is my understanding that by enforcing SU(3) gauge invariance on our lagrangian of 3-colored quark fields, we are forced to accept the existence of 8 new massless vector fields, the gluons. The 8 here comes directly from the dimension of SU(3). That being said, I often see discussions about the gluons in terms of linear combinations of $$r\bar r$$, $$b\bar b$$, etc. This simply can't be the nature of the gluons though, can it? Because it seems to imply that the number of colors and the number of gluon fields are not independent, while they clearly are. Certainly gluons are not singlets in color space and so they must have color, but it doesn't make sense to me that this color of the gluons would be some mapping directly from quark color.

Thanks to anyone with the insight and time to share it!

The quarks transform according to the fundamental representation $$\mathbf{3}$$ of SU(3), and the antiquarks according to the conjugate representation $$\mathbf{\overline 3}$$. The gluons transform according to the adjoint representation $$\mathbf{8}$$. The adjoint representation is contained in the product of the fundamental representation and its conjugate:

$$\mathbf{3}\otimes \mathbf{\overline 3} = \mathbf{8}\oplus \mathbf{1}$$

Therefore gluons are conventionally labeled using color-anticolor combinations, avoiding the color singlet combination $$(r\overline{r}+b\overline{b}+g\overline{g})/\sqrt{3}$$.

• Awesome, do you have any sources/papers/books where I could find more about this? – Craig Nov 10 '18 at 18:53
• As well; in a universe where we had, say, 4 or 5 colors and 8 gluons, how would this work? – Craig Nov 10 '18 at 18:57
• @Craig, are you familiar with representation theory? A big concept is that the same group can be "represented" with larger or smaller vector spaces. In this case, the quarks have the minimum number of vectors needed to realize SU(3) symmetry, and the gluons have (very loosely) the maximum number of vectors that can have the SU(3) symmetry, one for each dimension. If we had 4 colors, then we would instead want $4\otimes \bar 4 = 15\oplus 1$, and we would have 15 gluon fields for the 4 colors. – Alex Meiburg Nov 10 '18 at 19:01
• The adjoint representation of SU(n) has dimension $n^2-1$. So if you have 4 colors there need to be 15 gluons and if you have 5 colors there need to be 24 gluons. – G. Smith Nov 10 '18 at 19:05
• I'll let others suggest the best references. But you need to clarify what your main interest is. The mathematics of representation theory? (For example, how do you figure out how an arbitrary product of irreducible representations decomposes into irreducible representations?) The physics of QCD, or the whole Standard Model? The reason why quantum field theory involves group representations? Etc. – G. Smith Nov 10 '18 at 20:25

The gluons are generators of the SU(3) gauge group; whatever notation is used to describe the fundamental representation can be extended to higher representations through their embedding in tensor products of the fundamental (and its dual.) [Also, by "sums" of $$r\bar r$$, $$b\bar b$$, etc., do you really mean products?]

• Oops, I mean linear combinations of products of $r\bar r$, $b\bar b$, etc – Craig Nov 10 '18 at 18:54
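G. Smith's dimension count is easy to tabulate; this tiny Python loop (added purely for illustration) checks that $n \otimes \bar n$ has dimension $n^2 = (n^2-1) + 1$, i.e. adjoint plus singlet:

```python
# Dimension of the adjoint representation of SU(n) is n^2 - 1
for n in (3, 4, 5):
    adjoint = n * n - 1
    print(f"SU({n}): {n} x {n}-bar has dimension {n*n} "
          f"= {adjoint} (adjoint) + 1 (singlet)")
```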
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8571043014526367, "perplexity": 380.12686090443225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385984.79/warc/CC-MAIN-20210309030723-20210309060723-00096.warc.gz"}
http://link.springer.com/article/10.1007%2Fs001480050043
Volume 10, Issue 3, pp 273-283

# Modeling household fertility decisions with generalized Poisson regression

## Abstract.

This paper models household fertility decisions by using a generalized Poisson regression model. Since the fertility data used in the paper exhibit under-dispersion, the generalized Poisson regression model has statistical advantages over both standard Poisson and negative binomial regression models, and is suitable for analysis of count data that exhibit either over-dispersion or under-dispersion. The model is estimated by the method of maximum likelihood. Approximate tests for the dispersion and goodness-of-fit measures for comparing alternative models are discussed. Based on observations from the Panel Study of Income Dynamics of 1989 interviewing year, the empirical results support the fertility hypothesis of Becker and Lewis (1973).

Received January 7, 1997 /Accepted April 3, 1997
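For readers who want to try this model class themselves: statsmodels ships a generalized Poisson estimator that handles both over- and under-dispersion. The snippet below is a generic usage sketch with synthetic data, not the paper's PSID analysis; the data-generating coefficients are arbitrary.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import GeneralizedPoisson

# Synthetic count data (placeholder for the fertility counts in the paper)
rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(500, 2)))
y = rng.poisson(np.exp(X @ np.array([0.2, 0.5, -0.3])))

# Fit by maximum likelihood; the dispersion parameter alpha is estimated too
res = GeneralizedPoisson(y, X).fit(disp=0)
print(res.summary())   # a significantly negative alpha indicates under-dispersion
```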
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8103579878807068, "perplexity": 2028.377658728646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007832.1/warc/CC-MAIN-20141125155647-00018-ip-10-235-23-156.ec2.internal.warc.gz"}
https://www.koreascience.or.kr/search.page?keywords=%EC%97%90%EB%84%88%EC%A7%80+%EC%86%8C%EC%82%B0&lang=en
• Title, Summary, Keyword: 에너지 소산 (energy dissipation)

### Behavioral Characteristics and Energy Dissipation Capacity of Short Coupling Beams with Various Reinforcement Layouts (다양한 배근상세를 갖는 짧은 연결보의 주기거동 특성과 에너지소산능력의 평가)

• Eom, Tae-Sung; Park, Hong-Gun; Kang, Su-Min
• Journal of the Korea Concrete Institute, v.20 no.2, pp.203-212, 2008

The cyclic behavior and energy dissipation mechanism of short coupling beams with various reinforcement layouts were studied. For numerical analysis of coupling beams, a nonlinear truss model was used. The results of the numerical analysis showed that the coupling beams with a conventional reinforcement layout exhibited pinched cyclic behavior without significant energy dissipation, whereas the coupling beams with diagonal reinforcement exhibited stable cyclic behavior without pinching. The energy dissipation of the coupling beams was developed mainly by diagonal reinforcing bars developing large plastic strains rather than by concrete, which is a brittle material. Based on this result, simplified equations for evaluating the energy dissipation of coupling beams were developed. For verification, the predicted energy dissipation was compared with the test results. The results showed that the simplified equations can predict the energy dissipation of short coupling beams with shear span-to-depth ratio less than 1.25 with reasonable precision, addressing various design parameters such as reinforcement layout, shear span-to-depth ratio, and the magnitude of inelastic displacement. The proposed energy equations can be easily applied to performance-based seismic evaluation and design of reinforced concrete structures and members.

### An Energy-Dissipation-Ratio Based Structural Health Monitoring System (에너지소산률을 이용한 구조물의 건전도 모니터링에 관한 연구)

• Heo, Gwang-Hee; Shin, Heung-Chul; Shin, Jae-Chul
• Journal of the Korea institute for structural maintenance and inspection, v.8 no.1, pp.165-174, 2004

This research develops a technique which uses the energy dissipation ratio in order to monitor structural health on a real-time basis. For real-time monitoring, we employ the NExT and the ERA, which enable us to obtain real-time data. The energy dissipation ratio is calculated from those data only with the damping and natural frequency of the structure, and from the calculated values we develop an algorithm (energy dissipation method) which decides the damage degree of the structure. The energy dissipation method developed in this research is proved to be valid by comparison with other methods like the eigenparameter method and the MAC. Especially, this method enables us to save measuring time and data, which are the most important in real-time monitoring, and its use of ambient vibration also makes it easy to monitor the whole structure and its damage points.

### Equations for Estimating Energy Dissipation Capacity of Flexure-Dominated RC Members (철근콘크리트 휨재에 대한 에너지 소산능력 산정식의 개발)

• 엄태성; 박홍근
• Journal of the Korea Concrete Institute, v.14 no.6, pp.989-1000, 2002

As advanced earthquake design methods using nonlinear static analysis are developed, it is required to estimate precisely the cyclic behavior of reinforced concrete members, which is characterized by strength, deformability, and energy dissipation. In a recent study, a simplified method which can accurately estimate the energy dissipation capacity of flexure-dominated RC members subjected to repeated cyclic load was developed.
Based on the previously developed method, in the present study, simple equations that can be used for calculating the energy dissipation capacity were derived and verified by comparison with experimental results. Through a parametric study using the proposed equations, the effects of axial load, reinforcement ratio, rebar arrangement, and ductility on the dissipated energy were investigated. The proposed equations can accurately estimate the energy dissipation capacity compared with the existing empirical equations, and therefore they will be useful for nonlinear static analysis/design methods.

### Seismic Fragility Analysis of a Cable-stayed Bridge with Energy Dissipation Devices (에너지 소산장치를 장착한 사장교의 지진 취약도 해석)

• Park, Won-Suk; Kim, Dong-Seok; Choi, Hyun-Sok; Koh, Hyun-Moo
• Journal of the Earthquake Engineering Society of Korea, v.10 no.3, pp.1-11, 2006

This paper presents a seismic fragility analysis method for a cable-stayed bridge with energy dissipation devices. Model uncertainties represented by random variables include input ground motions, characteristics of energy dissipation devices and the stiffness of the cable-stayed bridge. Using linear regression, we established demand models for the fragility analysis from the relationship between maximum responses and the intensity of input ground motions. For capacity models, we considered the moment and shear force of the main tower, longitudinal displacement of the girder, deviation of the stay cables tension and the local buckling of the main steel tower as the limit states for the cable-stayed bridge. As a numerical example, fragility analysis results for the 2nd Jindo bridge are presented. The effect of energy dissipation devices is also briefly discussed.

### Earthquake Design Method for Structural Walls Based on Energy Dissipation Capacity (에너지 소산능력을 고려한 전단벽의 내진설계)

• 박홍근; 엄태성
• Journal of the Earthquake Engineering Society of Korea, v.7 no.6, pp.25-34, 2003

Recently, performance-based analysis/design methods such as the capacity spectrum method and the direct displacement-based design method were developed. In these methods, estimation of the energy dissipation capacity of RC structures depends on empirical equations which are not sufficiently accurate. On the other hand, in a recent study, a simplified method for evaluating energy dissipation capacity was developed. In the present study, based on the evaluation method, a new seismic design method for flexure-dominated RC walls was developed. In the determination of earthquake load, the proposed design method can address variations of energy dissipation capacity with design parameters such as dimensions and shapes of cross-sections, axial force, and reinforcement ratio and arrangement. The proposed design method was compared with the current performance-based design methods. The applicability of the proposed method was discussed.

### Pinching and Energy Dissipation Capacity of Flexure-Dominated RC Members (휨지배 철근콘크리트 부재의 핀칭과 에너지 소산능력)

• Park, Hong-Gun; Eom, Tae-Sung
• Journal of the Korea Concrete Institute, v.15 no.4, pp.594-605, 2003

Pinching is an important property of a reinforced concrete member which characterizes its cyclic behavior. In the present study, numerical studies were performed to investigate the characteristics of pinching behavior and the energy dissipation capacity of flexure-dominated reinforced concrete members.
By investigating existing experiments and numerical results, it was found that flexural pinching, which has no relation with shear action, appears in RC members subject to axial compression force. However, members with a specific arrangement and amount of re-bars have the same energy dissipation capacity regardless of the magnitude of the axial force applied, even though the shape of the cyclic curve varies due to the effect of the axial force. This indicates that concrete as a brittle material does not significantly contribute to the energy dissipation capacity, though its effect on the behavior increases as the axial force increases, and that energy dissipation occurs primarily through re-bars. Therefore, the energy dissipation capacity of a flexure-dominated member can be calculated by analysis of the cross-section subject to pure bending, regardless of the actual compressive force applied. Based on the findings, a practical method and the related design equations for estimating energy dissipation capacity and the damping modification factor were developed, and their validity was verified by comparisons with existing experiments. The proposed method can be conveniently used in design practice because it accurately estimates energy dissipation capacity with general design parameters.

### Application of Energy Dissipation Capacity to Earthquake Design (내진 설계를 위한 에너지 소산량 산정법의 활용)

• 임혜정; 박홍근; 엄태성
• Journal of the Earthquake Engineering Society of Korea, v.7 no.6, pp.109-117, 2003

Traditional nonlinear static and dynamic analyses do not accurately estimate the energy dissipation capacity of reinforced concrete structures. Recently, simple equations which can accurately calculate the energy dissipation capacity of flexure-dominated RC members were developed in the companion study. In the present study, nonlinear static and dynamic analytical methods improved using the energy-evaluation method were developed. For nonlinear static analysis, the Capacity Spectrum Method was improved by using the newly developed energy-spectrum curve. For nonlinear dynamic analysis, a simplified energy-based cyclic model of reinforced concrete members was developed. Unlike the existing cyclic models, which are stiffness-based models, the proposed cyclic model can accurately estimate the energy dissipated during complete load-cycles. The procedure of the proposed methods was established and a computer program incorporating the analytical method was developed. The proposed analytical methods can accurately estimate the energy dissipation capacity varying with design parameters such as shape of cross-section, reinforcement ratio and arrangement, and can address the effect of the energy dissipation capacity on the structural performance under earthquake load.

### Effects of Energy-Dissipation by Stepped Gabion Slope in Rapidly Varied Flow (계단식 Gabion의 경사에 따른 급변류의 에너지 소산효과)

• Kuem, Do-Hun; Lee, Chang-Yun; Bae, Sang-Soo; Lee, Seung-Yun; Jee, Hong-Kee
• Proceedings of the Korea Water Resources Association Conference, pp.1605-1610, 2006

[Translated from Korean:] Stepped gabion drop structures are porous structures that are easy to construct, stable, and resistant to river flow, so they are widely and frequently used as river structures. As a porous body, a gabion readily absorbs the force of flowing water and is therefore very effective at dissipating the potential energy on the stepped surface of the stilling basin. Stephenson carried out overflow experiments on stepped gabions at 1/10 scale (permeable, with a permeable upstream face applicable only to river drop structures, up to 4 m in height), and those results have been cited in practice. In this study, however, in order to investigate the energy-dissipation effect of rapidly varied flow, the Froude similarity law was used, since gravity dominates the other forces, and steps with 1/1, 1/2, and 1/3 slopes were applied.
[Translated from Korean:] In the experiments, a 4 m high stepped weir with sloped steps and a gabion stilling basin were tested, together with step model tests (plain structure, layered structure, structure with raised ends, structure with a lip), isolated-nappe flow, and partial-nappe flow, and results on the energy-dissipation effect of rapidly varied flow according to slope were obtained.

### Simplified Method for Estimating Energy-Dissipation Capacity of Flexure-Dominant RC Members (휨지배 철근콘크리트 부재의 에너지소산성능 평가 방법)

• 엄태성; 박흥근
• Journal of the Korea Concrete Institute, v.14 no.4, pp.566-577, 2002

As advanced earthquake analysis/design methods such as nonlinear static analysis are developed, it is required to estimate precisely the cyclic behavior of reinforced concrete members, which is characterized by strength, deformability, and capacity of energy dissipation. However, currently, estimation of energy dissipation depends on empirical equations that are not sufficiently accurate, or on experiment and sophisticated numerical analysis, which are difficult to use in practice. In the present study, nonlinear finite element analysis was performed to investigate the behavioral characteristics of flexure-dominant RC members under cyclic load. The effects of axial force, arrangement of reinforcing bars, and reinforcement ratio on the cyclic behavior were studied. Based on the investigation, a simplified method to estimate the capacity of energy dissipation was proposed, and it was verified by comparison with the finite element analyses and experiments. The proposed method can estimate the energy dissipation of RC members more precisely than currently used empirical equations, and it is easily applicable in practice.

### Plastic collapse behaviour and energy dissipation capacity of square aluminium pipe under transverse load (사각형 알루미늄 파이프의 횡하중하에서의 소성파괴 거동과 에너지 소산 능력)

• Lee Jong-Woo
• Proceedings of the KSME Conference, pp.131.2-131.2, 2005
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.881130039691925, "perplexity": 6072.769986063059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.96/warc/CC-MAIN-20210507025943-20210507055943-00424.warc.gz"}
http://www.ni.com/documentation/en/daqexpress/latest/m-ref/any/
# any

Version:

Determines whether any non-zero input elements exist. any is equivalent to not(all(not(.))).

## Syntax

status = any(a)
status = any(a, dim)

## a

Scalar or array of any dimension.

## dim

Dimension along which MathScript determines whether any input elements are non-zero. dim is a positive integer. For example, if a is a matrix and dim has the value one, the function works column-wise and returns a row vector containing one Boolean element for each column of a. If dim is not specified, the function uses the first non-singleton dimension of a.

## status

0 (False), if a contains no non-zero elements.

1 (True), if a contains non-zero elements.

status is a Boolean.

A = [0, 4; 9, 6];
STATUS = any(A)
% any works column-wise by default, so STATUS is [1, 1] here:
% each column of A contains at least one non-zero element.

Where This Node Can Run:

Desktop OS: Windows

FPGA: DAQExpress does not support FPGA devices
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21768419444561005, "perplexity": 3866.13529459056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948596115.72/warc/CC-MAIN-20171217152217-20171217174217-00652.warc.gz"}
https://mail.haskell.org/pipermail/haskell-cafe/2003-October/005395.html
# fixed point

Josef Svenningsson josefs at cs.chalmers.se
Tue Oct 28 11:56:21 EST 2003

Sorry about replying to my own mail.

On Mon, 27 Oct 2003, Josef Svenningsson wrote:

> On Mon, 27 Oct 2003, Paul Hudak wrote:
>
> > Thomas L. Bevan wrote:
> > > Is there a simple transformation that can be applied to all
> > > recursive functions to render them non-recursive with fix.
> >
> > Suppose you have a LET expression with a set of (possibly mutually
> > recursive) equations such as:
> >
> > let f1 = e1
> >     f2 = e2
> >     ...
> >     fn = en
> > in e
> >
> > The following is then equivalent to the above, assuming that g is not
> > free in e or any of the ei:
> >
> > let (f1,...,fn) = fix g
> >     g ~(f1,...,fn) = (e1,...,en)
> > in e
> >
> > Note that all recursion introduced by the top-most LET has been removed
> > (or, if you will, concentrated into the one application of fix). This
> > transformation will work even if there is no recursion, and even if some
> > of the fi are not functions (for example they could be recursively
> > defined lazy data structures).
> >
> This is a very nice technique. As an exercise to the reader I suggest the
> following program:
>
> \begin{code}
> data Tree a = Branch a (Tree (a,a)) | Leaf
>
> cross f (a,b) = (f a, f b)
>
> main1 =
>   let mapTree :: (a -> b) -> Tree a -> Tree b
>       mapTree = \f tree -> case tree of
>                   Branch a t -> Branch (f a) (mapTree (cross f) t)
>                   Leaf -> Leaf
>   in mapTree id (Branch 42 Leaf)
> \end{code}
>
I realise I was perhaps a bit dense in my previous mail. It was not my intention to try to sound smart. Sorry for that.

Does anyone know how to apply the transformation suggested by Paul Hudak to my program and make it typecheck? Does there exist a type system where the transformed program typechecks? I suppose so but I don't quite know what it would look like.

All the best,

/Josef
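Hudak's transformation is about replacing explicit recursion with a single fixed-point operator. As a cross-language illustration (mine, not from the thread), here is the same idea in Python for a single function; it sidesteps the polymorphic-recursion issue Josef runs into with mapTree, since Python has no static types:

```python
# fix(g) returns a function f satisfying f = g(f): recursion concentrated in fix
def fix(g):
    def f(*args):
        return g(f)(*args)
    return f

# A non-recursive "step" function; fix ties the knot
fact = fix(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))  # 120
```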
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.95903480052948, "perplexity": 5639.591280602277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721387.11/warc/CC-MAIN-20161020183841-00413-ip-10-171-6-4.ec2.internal.warc.gz"}
https://brilliant.org/discussions/thread/chain-length-problem-in-radical-polymerization/
# Chain Length Problem in Radical Polymerization

In this note, I describe a statistical problem concerning the degree of polymerization of polymers formed through free radical polymerization, a possible model of the reaction mechanism, a simulation, and why it fails. Suggesting ideas on the improvement of the problem-solving strategy or the model will be greatly appreciated.

# Chain Length Problem

How does the distribution of the degree of polymerization (number of monomer units) of polyethene vary with the initial concentration of ethenes and the free radicals?

# Reaction Mechanism

## Chain Initiation

This is benzoyl peroxide. When heated, it breaks up into two benzoyl free radicals. The benzoyl radical further breaks up into a phenyl radical and carbon dioxide.

## Chain Propagation

The free radicals keep attacking the ethenes, and the chains keep growing.

## Chain Termination

The chain reaction terminates when the free radicals die out. For the purpose of the spherical cow model, we will assume that it is also possible that two phenyl radicals join to form biphenyl. It is evident that the reaction stops when there are no more free radicals.

# Agnishom's Spherical Cow Model of Polyethene Radical Polymerization

The spherical cow model is a simulation-friendly model that tries to mimic the process of radical polymerization.

Parameters: The number of free radicals ($$n_1$$) and the number of ethylene molecules ($$n_2$$) in the system.

The model only distinguishes between three kinds of species:

1. Free Radicals: Each free radical has a variable called chainLength, which is the number of monomer units attached to it.
2. Closed Chains: They also have the chainLength variable. However, they are incapable of further change.
3. Ethenes: They are just simple ethenes. All ethenes are identical and can be potentially attacked by a free radical.

The process of polymerization is modelled here as a process with discrete time steps, during each of which two objects, chosen with uniform probability from the system, interact. The interaction rules are as follows:

| Species | Species | Result |
| --- | --- | --- |
| Ethene | Ethene | Nothing happens |
| Closed Chain (m) | Closed Chain (n) | Nothing happens |
| Free Radical (m) | Closed Chain (n) | Nothing happens |
| Ethene | Closed Chain (n) | Nothing happens |
| Free Radical (m) | Free Radical (n) | Both radicals are removed. A new Closed Chain of length (m+n) is formed |
| Free Radical (m) | Ethene | The ethene is removed. The size of the free radical increases by 1 |

# Monte-Carlo Simulation Approach

So far, the Monte Carlo Simulation (Code) was not very helpful because of the large number of steps involved in the actual reaction. (A minimal reconstruction of the simulation appears as a sketch at the end of this thread.)

9 months ago

Correct me if I'm wrong. Is the purpose of this code to calculate the number of types of phenyl - n chain - phenyl compounds that one can get from a given quantity of benzoyl peroxide? If so, can one try a more "kinetic" approach using rate laws to solve the question? · 7 months, 3 weeks ago

I think a continuous kinetic approach will work much better than my discretized approach. However, because my knowledge of chemistry is limited, I couldn't try that. · 7 months, 3 weeks ago

I tried the case where biphenyls are the only product that can be formed. I think the rate of reaction for a 3-step biphenyl formation reaction is r = k[benzoyl peroxide] (where the square brackets indicate concentration). This was under the assumption that the unstable phenyl radical doesn't react with carbon dioxide.
I tried the case where (1,4-biphenyl butane), (1,2-biphenyl ethane), and biphenyl are the final compounds that are obtained. I believe for the above case (the 1,4-biphenyl butane case) the rate law is

$$r = K\left(\sqrt{(k[\text{ethene}])^{2} + 4q[\text{benzoyl peroxide}]} - k[\text{ethene}]\right)^{2}$$

where K, q, k are constants. This equation is for the rate at which biphenyl is formed in the second case. This was done just to see how the rate law will change as the chain terminates at different chain lengths. When k[ethene] nears zero (the entire reaction resembles the 3-step reaction for the biphenyl), the rate law will accordingly change to match the first scenario's rate law. · 7 months, 3 weeks ago

I think I am misconveying the question I want to ask. What I am interested in here is an idea of the length of the chains formed based on the initial concentrations of the materials. · 7 months, 3 weeks ago

My idea was to use kinetics to determine the rate at which each chain is formed. So under certain conditions your program can eliminate the possibility of a certain product. For example: Let's say you use your program for the second case, when the concentration of ethene is very low (near zero). The rate at which biphenyl is produced is very large compared to the rates at which the other products are formed. Biphenyl will be the major product and its "chain_length" is 0. In the same case, when the concentration of ethene is very, very large compared to the concentration of benzoyl peroxide, the rate at which biphenyl is produced is almost negligible compared to the rates of the other two products. If one can determine their rate laws, your program can hopefully eliminate another compound (this might not be so; I don't know the rate laws, for all you know both of the other products might be favored). Hence the "chain_length" is 2 or 4 or both. My idea was if we build more rate laws for cases where larger chains are involved and we see a pattern in them, we can assume the same situation for cases like phenyl - 100 carbon - phenyl. This generalized rate law for the entire mechanism can be applied in your program. I worked out the rate laws for the case when (phenyl - 4 carbon - phenyl), (phenyl - 2 carbon - phenyl), and biphenyl are formed.

If you define

$$\alpha = \frac{1}{K}\left[\sqrt{(k[\text{ethene}])^{2} + 4q[\text{benzoyl peroxide}]} - k[\text{ethene}]\right]$$

then the rate at which biphenyl forms is:

$$r_1 = K\alpha^{2}$$

The rate at which phenyl - 2 carbon - phenyl forms is:

$$r_2 = p\alpha\left(\sqrt{r[\text{benzoyl peroxide}]} - l\alpha\right)$$

The rate at which phenyl - 4 carbon - phenyl forms is:

$$r_3 = m\left(\sqrt{r[\text{benzoyl peroxide}]} - l\alpha\right)^{2}$$

where k, q, K, m, r, l are constants. These equations suggest that in the second sub-case the chain_length will be 4. Now for this to really help (not just for specific cases) you need to integrate this expression. This is where we have to introduce the spherical cow. For the sake of simplicity, assume that the concentration of benzoyl peroxide is equal to the concentration of phenyl radicals. After integration you can vary the concentrations of the reactants and see how the concentrations of products behave at a given instant. · 7 months, 3 weeks ago

I see. The integration idea might be helpful here. Thank you! · 7 months, 3 weeks ago

The integration seems like a difficult task; I tried it but I didn't get anywhere. If you want the derivation for the rate laws, I'll email them. I didn't include them because I don't know how to add images in the comments.
· 7 months, 3 weeks ago

The syntax is ![description](URL) · 7 months, 3 weeks ago

I created a note and have posted the pictures there. Here is the link: I hope this helped. · 7 months, 3 weeks ago

Could you modify the existing code to stop the reaction at phenyl-(4 carbon)-phenyl and see how the results compare with the rate laws? I found an interesting article related to this: A quicker approach. · 7 months, 2 weeks ago
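For reference, here is a minimal Python sketch of the spherical cow model described in the note above. This is not the original linked code; the function and variable names (`simulate`, `radicals`, `closed`) are illustrative, and the interaction rules follow the table given earlier.

```python
import random

def simulate(n_radicals, n_ethenes, seed=0):
    """One run of the spherical cow model; returns the closed-chain lengths."""
    rng = random.Random(seed)
    radicals = [0] * n_radicals   # chainLength of each live free radical
    closed = []                   # chainLength of each terminated chain
    ethenes = n_ethenes
    while radicals:
        total = len(radicals) + len(closed) + ethenes
        if total < 2:             # a single leftover radical can never react
            break
        i, j = rng.sample(range(total), 2)  # two distinct objects, uniformly

        def kind(k):
            # indices 0..len(radicals)-1 are radicals, the next len(closed)
            # indices are closed chains, and the rest are ethenes
            if k < len(radicals):
                return ("radical", k)
            if k < len(radicals) + len(closed):
                return ("closed", None)
            return ("ethene", None)

        a, b = kind(i), kind(j)
        pair = sorted([a[0], b[0]])
        if pair == ["radical", "radical"]:
            # both radicals are removed; a closed chain of length m+n forms
            m, n = radicals[a[1]], radicals[b[1]]
            for idx in sorted((a[1], b[1]), reverse=True):
                del radicals[idx]
            closed.append(m + n)
        elif pair == ["ethene", "radical"]:
            # the ethene is consumed; the radical grows by one monomer unit
            grower = a if a[0] == "radical" else b
            radicals[grower[1]] += 1
            ethenes -= 1
        # all other pairings: nothing happens

    return closed

# Example: 10 initial radicals, 1000 ethenes; print the chain-length distribution
print(sorted(simulate(10, 1000)))
```

Because each step touches only two objects, a run with realistic particle counts needs an enormous number of steps, which matches the note's observation that the discrete simulation was not very helpful.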
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9148477911949158, "perplexity": 1632.624458645905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660548.33/warc/CC-MAIN-20160924173740-00036-ip-10-143-35-109.ec2.internal.warc.gz"}
https://www.electricalidea.com/varly-loop-test-for-finding-location-of-faults-in-underground-cables/
# Varley Loop Test for Finding the Location of Faults in Underground Cables

The Varley loop test is also used for locating short-circuit and earth faults in underground cables. Like the Murray loop test, it employs the principle of the Wheatstone bridge. The key difference between the Murray loop test and the Varley loop test is that in the Varley loop test, provision is made for measuring the total loop resistance directly instead of obtaining it from the relation used in the Murray loop test.

In this test, the ratio arms P and Q are fixed, and the balance position is obtained by varying a known variable resistance (a rheostat). If the fault resistance is high, the sensitivity of the Murray loop test is reduced, and the Varley loop test may be more suitable. The Varley loop test thus differs slightly from the Murray loop test in that the ratio arms P and Q are fixed resistances and a variable resistance S is connected to the test end of the faulty cable. The connection diagram for locating an earth fault is shown in the figure.

The key K2 is first connected to position 1, and the variable resistance S is varied until the galvanometer shows zero deflection. Let the value of the variable resistance be S1 when the galvanometer shows zero deflection. Then, writing R1 and R2 for the resistances of the faulty and sound conductors of the loop, the balance condition can be written as

P/Q = (R1 + R2)/S1, so that the loop resistance is R1 + R2 = (P/Q) S1

Now the switch K2 is moved to position 2, and the variable resistance S is varied again until the galvanometer shows zero deflection. Let the value of the variable resistance be S2 when the galvanometer shows zero deflection. Writing Rx for the resistance of the faulty conductor between the test end and the fault, the new balance condition is

P/Q = (R1 + R2 - Rx)/(Rx + S2), which gives Rx = [Q(R1 + R2) - P S2]/(P + Q) = P(S1 - S2)/(P + Q)

Since the values of P, Q, S1, and S2 are known, we can easily obtain the value of the loop resistance and of Rx. If the resistance of the cable per meter length is r, then the distance of the fault from the test end is

x = Rx / r

A short-circuit fault in an underground cable can be located by the same process as above, using the corresponding circuit diagram for the short-circuit fault.
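To make the arithmetic concrete, here is a small Python sketch (not part of the original article; the function name and argument names are illustrative) that evaluates the Varley loop formulas above for a given set of bridge readings.

```python
def varley_fault_distance(P, Q, S1, S2, r):
    """Locate an earth fault using the Varley loop formulas above.

    P, Q : fixed ratio-arm resistances (ohms)
    S1   : variable resistance at balance, switch position 1 (ohms)
    S2   : variable resistance at balance, switch position 2 (ohms)
    r    : conductor resistance per metre (ohm/m)
    """
    loop = (P / Q) * S1                  # total loop resistance R1 + R2
    Rx = (Q * loop - P * S2) / (P + Q)   # resistance from test end to fault
    return Rx / r                        # distance of fault from test end (m)

# Example: equal ratio arms, S1 = 10 ohm, S2 = 4 ohm, r = 0.001 ohm/m.
# Then loop = 10 ohm, Rx = (10 - 4)/2 = 3 ohm, so the fault is 3000 m away.
print(varley_fault_distance(100, 100, 10.0, 4.0, 0.001))
```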
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8505558967590332, "perplexity": 1937.2372918815418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204790.78/warc/CC-MAIN-20190326034712-20190326060712-00440.warc.gz"}
http://math.stackexchange.com/questions/91437/how-to-find-all-rational-points-on-the-elliptic-curves-like-y2-x3-2/91549
# How to find all rational points on the elliptic curves like $y^2=x^3-2$

Reading the book by Diophantus, one may be led to consider curves like $y^2=x^3+1$, $y^2=x^3-1$, $y^2=x^3-2$. The first two are easy to solve (after calculating some eight curves to be solved under certain conditions, one can directly derive the ranks), while the last, although simple enough to be solved by some elementary consideration of the factorization of algebraic integers, is at present beyond my ability, as my knowledge of the topic is so far limited to some reading of the book Rational Points on Elliptic Curves by Silverman and Tate, where the authors do not investigate the case in which the polynomial has no visible rational points. By the theorem of Mordell, one can determine the structure of its rational points if the rank is at hand. So, according to my imagination, if some hints about how to compute ranks of elliptic curves of this kind were offered, it would certainly be appreciated.

There are two pretty visible rational points on $y^2 = x^3 - 2$, namely $(x,y) = (3,\pm 5)$. It is a theorem of Fermat that these are the only integral points. The double of this point is $(129/100,-383/1000)$, so by Nagell-Lutz the point $(3,5)$ has infinite order. That the group of rational points has rank 1, and is generated by $(3,5)$, is a much deeper result. –  KCd Dec 14 '11 at 14:36

@KCd: I mean that the polynomial on the right-hand side has no rational roots. And indeed, on this curve I can as yet do nothing to tell the rational points, except the visible ones. If the result be deep, where could it possibly be found? Thanks very much. –  awllower Dec 14 '11 at 15:04

Given your interest in Mordell's equation, you really ought to buy or borrow Diophantine Equations by Mordell, and then the second edition of A Course in Number Theory by H. E. Rose. Rose discusses the equation starting on page 286, then gives a table of $k$ with $-50 \leq k \leq 50$ for which there are integral solutions, and a second table for which there are rational solutions. The tables are copied from J. W. S. Cassels, The rational solutions of the diophantine equation $y^2 = x^3 - D$, Acta Mathematica, volume 82 (1950), pages 243-273. Other than that, you are going to need to study Silverman and Tate far more carefully than you have done so far. From what I can see, all the necessary machinery is present. Still, check the four pages in the Bibliography; maybe you will prefer something else.

Here I genuinely thank you for the recommended books. Indeed, I am trying to study the book The Arithmetic of Elliptic Curves right now. Also, I will borrow the book by Rose mentioned above; thanks again. I look forward to the day I can solve this question! –  awllower Dec 15 '11 at 13:52

And of course (I forgot this in the last comment) the book by Mordell is already at my disposal, while I still try hard to follow his steps and the various methods used to solve those incredibly interesting equations. The study of elliptic curves really opened up a totally new door to me, for I was completely unaware of this ancient and fascinating subject. –  awllower Dec 15 '11 at 14:01
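As a practical footnote (not part of the original thread): modern computer algebra systems can confirm the facts quoted in the comments. The following is a short Sage sketch; the values in the comments are what Sage is expected to report for this curve, though the exact formatting of the output may vary.

```python
# Sage sketch: the Mordell curve y^2 = x^3 - 2
E = EllipticCurve([0, -2])     # short Weierstrass form y^2 = x^3 + 0*x - 2
print(E.rank())                # 1
print(E.gens())                # [(3 : 5 : 1)], the "visible" point (3, 5)
print(E.torsion_order())       # 1, so E(Q) is infinite cyclic
P = E([3, 5])
print(2 * P)                   # (129/100 : -383/1000 : 1), matching the comment
```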
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7961031794548035, "perplexity": 342.72705426293464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645359523.89/warc/CC-MAIN-20150827031559-00171-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.hepdata.net/search/?q=&collaboration=CMS&page=1
Showing 10 of 606 results

#### Measurement of the dependence of the hadron production fraction ratio $f_\mathrm{s}/f_\mathrm{u}$ on B meson kinematic variables in proton-proton collisions at $\sqrt{s}$ = 13 TeV

The collaboration CMS-BPH-21-001, 2022. Inspire Record 2610522

The dependence of the ratio between the $\mathrm{B}_\mathrm{s}^0$ and $\mathrm{B}^+$ hadron production fractions, $f_\mathrm{s}/f_\mathrm{u}$, on the transverse momentum ($p_\mathrm{T}$) and rapidity of the B mesons is studied using the decay channels $\mathrm{B}_\mathrm{s}^0 \to \mathrm{J}/\psi\,\phi$ and $\mathrm{B}^+ \to \mathrm{J}/\psi\,\mathrm{K}^+$. The analysis uses a data sample of proton-proton collisions at a center-of-mass energy of 13 TeV, collected by the CMS experiment in 2018 and corresponding to an integrated luminosity of 61.6 fb$^{-1}$. The $f_\mathrm{s}/f_\mathrm{u}$ ratio is observed to depend on the B $p_\mathrm{T}$ and to be consistent with becoming asymptotically constant at large $p_\mathrm{T}$. No rapidity dependence is observed. The ratio of the $\mathrm{B}^0$ to $\mathrm{B}^+$ hadron production fractions, $f_\mathrm{d}/f_\mathrm{u}$, measured using the $\mathrm{B}^0 \to \mathrm{J}/\psi\,\mathrm{K}^{*0}$ decay channel, is found to be consistent with unity and independent of $p_\mathrm{T}$ and rapidity.

5 data tables

The $\mathrm{J}/\psi\,\phi$, $\mathrm{J}/\psi\,\mathrm{K}$, and $\mathrm{J}/\psi\,\mathrm{K}^{*0}$ invariant mass distributions, for B meson candidates with 20 < $p_\mathrm{T}$ < 23 GeV, and associated fits as described in the text.

Left panel. The vertical bars (boxes) represent the statistical (bin-to-bin systematic) uncertainties, while the horizontal bars give the bin widths. The global uncertainty (of 2.3%) is not graphically represented. The blue line represents the average for $p_\mathrm{T}$ > 18 GeV. For comparison, the LHCb measurement [10.1103/PhysRevLett.124.122002] is also shown. 12 < B $p_\mathrm{T}$ < 70 GeV and 0 < $|y|$ < 2.4. Global uncertainties are not included in the table (2.3%).

Right panel. The vertical bars (boxes) represent the statistical (bin-to-bin systematic) uncertainties, while the horizontal bars give the bin widths. The global uncertainty (of 2.3%) is not graphically represented. The blue line represents the average for $p_\mathrm{T}$ > 18 GeV. For comparison, the LHCb measurement [10.1103/PhysRevLett.124.122002] is also shown. 12 < B $p_\mathrm{T}$ < 70 GeV and 0 < $|y|$ < 2.4. Global uncertainties are not included in the table (2.3%).

#### Search for high-mass exclusive $\gamma\gamma \to \mathrm{WW}$ and $\gamma\gamma \to \mathrm{ZZ}$ production in proton-proton collisions at $\sqrt{s}$ = 13 TeV

The collaboration CMS-SMP-21-014, 2022. Inspire Record 2605178

A search is performed for exclusive high-mass $\gamma\gamma \to \mathrm{WW}$ and $\gamma\gamma \to \mathrm{ZZ}$ production in proton-proton collisions using intact forward protons reconstructed in near-beam detectors, with both weak bosons decaying into boosted and merged jets. The analysis is based on a sample of proton-proton collisions collected by the CMS and TOTEM experiments at $\sqrt{s}$ = 13 TeV, corresponding to an integrated luminosity of 100 fb$^{-1}$. No excess above the standard model background prediction is observed, and upper limits are set on the $\mathrm{pp} \to \mathrm{pWWp}$ and $\mathrm{pp} \to \mathrm{pZZp}$ cross sections in a fiducial region defined by the diboson invariant mass $m(\mathrm{VV}) < 1$ TeV (with V = W, Z) and proton fractional momentum loss $0.04 < \xi < 0.20$. The results are interpreted as new limits on dimension-6 and dimension-8 anomalous quartic gauge couplings.

10 data tables

Expected and observed upper limits on the AQGC operator $a_0^\mathrm{W}/\Lambda^2$, with no unitarization.
The y axis shows the limit on the ratio of the observed cross section to the cross section predicted for each anomalous coupling value ($\sigma_\mathrm{AQGC}$).

Expected and observed upper limits on the AQGC operator $a_C^\mathrm{W}/\Lambda^2$, with no unitarization. The y axis shows the limit on the ratio of the observed cross section to the cross section predicted for each anomalous coupling value ($\sigma_\mathrm{AQGC}$).

Expected and observed upper limits on the AQGC operator $a_0^\mathrm{Z}/\Lambda^2$, with no unitarization. The y axis shows the limit on the ratio of the observed cross section to the cross section predicted for each anomalous coupling value ($\sigma_\mathrm{AQGC}$).

#### Azimuthal correlations in Z+jets events in proton-proton collisions at $\sqrt{s}$ = 13 TeV

The collaboration CMS-SMP-21-003, 2022. Inspire Record 2172990

The production of Z bosons associated with jets is measured in pp collisions at $\sqrt{s}$ = 13 TeV with data recorded with the CMS experiment at the LHC corresponding to an integrated luminosity of 36.3 fb$^{-1}$. The multiplicity of jets with transverse momentum $p_\mathrm{T} > 30$ GeV is measured for different regions of the Z boson's $p_\mathrm{T}(\mathrm{Z})$, from lower than 10 GeV to higher than 100 GeV. The azimuthal correlation $\Delta\phi$ between the Z boson and the leading jet, as well as the correlations between the two leading jets, are measured in three regions of $p_\mathrm{T}(\mathrm{Z})$. The measurements are compared with several predictions at leading and next-to-leading orders, interfaced with parton showers. Predictions based on transverse-momentum dependent parton distributions and corresponding parton showers give a good description of the measurement in the regions where multiple parton interactions and higher jet multiplicities are not important. The effects of multiple parton interactions are shown to be important to correctly describe the measured spectra in the low $p_\mathrm{T}(\mathrm{Z})$ regions.

15 data tables

The measured cross section as a function of exclusive jet multiplicity, $N_\text{jets}$, when $p_\mathrm{T} < 10$ GeV

The measured cross section as a function of exclusive jet multiplicity, $N_\text{jets}$, when $10 < p_\mathrm{T} < 30$ GeV

The measured cross section as a function of exclusive jet multiplicity, $N_\text{jets}$, when $30 < p_\mathrm{T} < 50$ GeV

#### Measurements of jet multiplicity and jet transverse momentum in multijet events in proton-proton collisions at $\sqrt{s}$ = 13 TeV

The collaboration CMS-SMP-21-006, 2022. Inspire Record 2170533

Multijet events at large transverse momentum ($p_\mathrm{T}$) are measured at $\sqrt{s}$ = 13 TeV using data recorded with the CMS detector at the LHC, corresponding to an integrated luminosity of 36.3 fb$^{-1}$. The multiplicity of jets with $p_\mathrm{T} > 50$ GeV that are produced in association with a high-$p_\mathrm{T}$ dijet system is measured in various ranges of the $p_\mathrm{T}$ of the jet with the highest transverse momentum and as a function of the azimuthal angle difference $\Delta\phi_{1,2}$ between the two highest $p_\mathrm{T}$ jets in the dijet system. The differential production cross sections are measured as a function of the transverse momenta of the four highest $p_\mathrm{T}$ jets. The measurements are compared with leading and next-to-leading order matrix element calculations supplemented with simulations of parton shower, hadronization, and multiparton interactions.
In addition, the measurements are compared with next-to-leading order matrix element calculations combined with transverse-momentum dependent parton densities and a transverse-momentum dependent parton shower.

17 data tables

Jet multiplicity measured for a leading-$p_\mathrm{T}$ jet ($p_\mathrm{T1}$) with $200 < p_\mathrm{T1} < 400$ GeV and for an azimuthal separation between the two leading jets of $0 < \Delta\Phi_{1,2} < 150^{\circ}$. The full breakdown of the uncertainties is displayed, with PU corresponding to pileup, PREF to trigger prefiring, PTHAT to the hard scale (renormalization and factorization scales), MISS and FAKE to the inefficiencies and background, and LUMI to integrated luminosity, with JES, JER, and stat. unc. following the notation in the paper.

Jet multiplicity measured for a leading-$p_\mathrm{T}$ jet ($p_\mathrm{T1}$) with $200 < p_\mathrm{T1} < 400$ GeV and for an azimuthal separation between the two leading jets of $150 < \Delta\Phi_{1,2} < 170^{\circ}$. The full breakdown of the uncertainties is displayed, with PU corresponding to pileup, PREF to trigger prefiring, PTHAT to the hard scale (renormalization and factorization scales), MISS and FAKE to the inefficiencies and background, and LUMI to integrated luminosity, with JES, JER, and stat. unc. following the notation in the paper.

Jet multiplicity measured for a leading-$p_\mathrm{T}$ jet ($p_\mathrm{T1}$) with $200 < p_\mathrm{T1} < 400$ GeV and for an azimuthal separation between the two leading jets of $170 < \Delta\Phi_{1,2} < 180^{\circ}$. The full breakdown of the uncertainties is displayed, with PU corresponding to pileup, PREF to trigger prefiring, PTHAT to the hard scale (renormalization and factorization scales), MISS and FAKE to the inefficiencies and background, and LUMI to integrated luminosity, with JES, JER, and stat. unc. following the notation in the paper.

#### Search for medium effects using jets from bottom quarks in PbPb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV

The collaboration CMS-HIN-20-003, 2022. Inspire Record 2165920

The first study of the shapes of jets arising from bottom (b) quarks in heavy ion collisions is presented. Jet shapes are studied using charged hadron constituents as a function of their radial distance from the jet axis. Lead-lead (PbPb) collision data at a nucleon-nucleon center-of-mass energy of $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV were recorded by the CMS detector at the LHC, with an integrated luminosity of 1.69 nb$^{-1}$. Compared to proton-proton collisions, a redistribution of the energy in b jets to larger distances from the jet axis is observed in PbPb collisions. This medium-induced redistribution is found to be substantially larger for b jets than for inclusive jets.

12 data tables

Jet shapes, $\rho(\Delta r)$, for inclusive and b jets as a function of $\Delta r$ from pp and PbPb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV.

Jet shapes, $\rho(\Delta r)$, for inclusive and b jets as a function of $\Delta r$ from pp and PbPb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV.

Jet shapes, $\rho(\Delta r)$, for inclusive and b jets as a function of $\Delta r$ from pp and PbPb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV.

#### Azimuthal anisotropy of dijet events in PbPb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV

The collaboration CMS-HIN-21-002, 2022. Inspire Record 2165916

The path-length dependent parton energy loss within the dense partonic medium created in lead-lead collisions at a nucleon-nucleon center-of-mass energy of $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV is studied by determining the azimuthal anisotropies for dijets with high transverse momentum.
The data were collected by the CMS experiment in 2018 and correspond to an integrated luminosity of 1.69 nb$^{-1}$. For events containing back-to-back jets, correlations in relative azimuthal angle and pseudorapidity ($\eta$) between jets and hadrons, and between two hadrons, are constructed. The anisotropies are expressed as the Fourier expansion coefficients $v_n$, $n$ = 2-4, of these azimuthal distributions. The dijet $v_n$ values are extracted from long-range ($1.5 < |\Delta\eta| < 2.5$) components of these correlations, which suppresses the background contributions from jet fragmentation processes. Positive dijet $v_2$ values are observed which increase from central to more peripheral events, while the $v_3$ and $v_4$ values are consistent with zero within experimental uncertainties.

4 data tables

The dijet $v_n$ data points factorized using different associated hadron $p_\mathrm{T}$ bins for the 0-10% centrality bin. The data points are corrected for the jet reconstruction bias effects.

The dijet $v_n$ data points factorized using different associated hadron $p_\mathrm{T}$ bins for the 10-30% centrality bin. The data points are corrected for the jet reconstruction bias effects.

The dijet $v_n$ data points factorized using different associated hadron $p_\mathrm{T}$ bins for the 30-50% centrality bin. The data points are corrected for the jet reconstruction bias effects.

#### Search for a heavy composite Majorana neutrino in events with dilepton signatures from proton-proton collisions at $\sqrt{s}$ = 13 TeV

The collaboration CMS-EXO-20-011, 2022. Inspire Record 2161685

Results are presented of a search for a heavy Majorana neutrino $\mathrm{N}_\ell$ decaying into two same-flavor leptons $\ell$ (electrons or muons) and a quark-pair jet. A model is considered in which the $\mathrm{N}_\ell$ is an excited neutrino in a compositeness scenario. The analysis is performed using a sample of proton-proton collisions at $\sqrt{s}$ = 13 TeV recorded by the CMS experiment at the CERN LHC, corresponding to an integrated luminosity of 138 fb$^{-1}$. The data are found to be in agreement with the standard model prediction. For the process in which the $\mathrm{N}_\ell$ is produced in association with a lepton, followed by the decay of the $\mathrm{N}_\ell$ to a same-flavor lepton and a quark pair, an upper limit at 95% confidence level on the product of the cross section and branching fraction is obtained as a function of the $\mathrm{N}_\ell$ mass $m_{\mathrm{N}_\ell}$ and the compositeness scale $\Lambda$. For this model the data exclude the existence of $\mathrm{N}_\mathrm{e}$ ($\mathrm{N}_\mu$) for $m_{\mathrm{N}_\ell}$ below 6.0 (6.1) TeV, at the limit where $m_{\mathrm{N}_\ell}$ is equal to $\Lambda$. For $m_{\mathrm{N}_\ell} \approx 1$ TeV, values of $\Lambda$ less than 20 (23) TeV are excluded. These results represent a considerable improvement in sensitivity, covering a larger parameter space than previous searches in pp collisions at 13 TeV.

6 data tables

Cut-flow table for $m_\mathrm{N}$ = 0.5 TeV, electron and muon channels, 2018.

Distributions of $m_{\ell\ell\mathrm{j}}$ for the data, and the post-fit backgrounds (stacked histograms), in the SRs of the eeqq channel. The template for one signal hypothesis is shown overlaid as a yellow solid line. The overflow is included in the last bin. The middle panels show ratios of the data to the pre-fit background prediction and post-fit background yield as red open squares and blue points, respectively. The gray band in the middle panels indicates the systematic component of the post-fit uncertainty. The lower panels show the distributions of the pulls, defined in the text.
Distributions of $m_{\ell\ell\mathrm{j}}$ for the data, and the post-fit backgrounds (stacked histograms), in the SRs of the $\mu\mu$qq channel. The template for one signal hypothesis is shown overlaid as a yellow solid line. The overflow is included in the last bin. The middle panels show ratios of the data to the pre-fit background prediction and post-fit background yield as red open squares and blue points, respectively. The gray band in the middle panels indicates the systematic component of the post-fit uncertainty. The lower panels show the distributions of the pulls, defined in the text.

#### Search for new heavy resonances decaying to WW, WZ, ZZ, WH, or ZH boson pairs in the all-jets final state in proton-proton collisions at $\sqrt{s}$ = 13 TeV

The collaboration CMS-B2G-20-009, 2022. Inspire Record 2159368

A search for new heavy resonances decaying to WW, WZ, ZZ, WH, or ZH boson pairs in the all-jets final state is presented. The analysis is based on proton-proton collision data recorded by the CMS detector in 2016-2018 at a centre-of-mass energy of 13 TeV at the CERN LHC, corresponding to an integrated luminosity of 138 fb$^{-1}$. The search is sensitive to resonances with masses above 1.3 TeV, decaying to bosons that are highly Lorentz-boosted such that each of the bosons forms a single large-radius jet. Machine learning techniques are employed to identify such jets. No significant excess over the estimated standard model background is observed. A maximum local significance of 3.6 standard deviations, corresponding to a global significance of 2.3 standard deviations, is observed at masses of 2.1 and 2.9 TeV. In a heavy vector triplet model, spin-1 Z' and W' resonances with masses below 4.8 TeV are excluded at the 95% confidence level (CL). These limits are the most stringent to date. In a bulk graviton model, spin-2 gravitons and spin-0 radions with masses below 1.4 and 2.7 TeV, respectively, are excluded at 95% CL. Production of heavy resonances through vector boson fusion is constrained with upper cross section limits at 95% CL as low as 0.1 fb.

6 data tables

Observed and expected 95% CL upper limits on the product of the production cross section ($\sigma$) and the branching fraction, obtained after combining all categories with 138 fb$^{-1}$ of data at $\sqrt{s}$ = 13 TeV, for the R $\to$ VV signal.

Observed and expected 95% CL upper limits on the product of the production cross section ($\sigma$) and the branching fraction, obtained after combining all categories with 138 fb$^{-1}$ of data at $\sqrt{s}$ = 13 TeV, for the $\mathrm{G}_\mathrm{bulk} \to$ VV signal.

Observed and expected 95% CL upper limits on the product of the production cross section ($\sigma$) and the branching fraction, obtained after combining all categories with 138 fb$^{-1}$ of data at $\sqrt{s}$ = 13 TeV, for the $\mathrm{V}' \to$ VV + VH signal in HVT model B.

#### Search for pair production of vector-like quarks in leptonic final states in proton-proton collisions at $\sqrt{s}$ = 13 TeV

The collaboration CMS-B2G-20-011, 2022. Inspire Record 2152227

A search is presented for vector-like T and B quark-antiquark pairs produced in proton-proton collisions at a center-of-mass energy of 13 TeV. Data were collected by the CMS experiment at the CERN LHC in 2016-2018, with an integrated luminosity of 138 fb$^{-1}$. Events are separated into single-lepton, same-sign charge dilepton, and multilepton channels.
In the analysis of the single-lepton channel a multilayer neural network and jet identification techniques are employed to select signal events, while the same-sign dilepton and multilepton channels rely on the high-energy signature of the signal to distinguish it from standard model backgrounds. The data are consistent with standard model background predictions, and the production of vector-like quark pairs is excluded at 95% confidence level for T quark masses up to 1.54 TeV and B quark masses up to 1.56 TeV, depending on the branching fractions assumed, with maximal sensitivity to decay modes that include multiple top quarks. The limits obtained in this search are the strongest limits to date for $\mathrm{T\overline{T}}$ production, excluding masses below 1.48 TeV for all decays to third generation quarks, and are the strongest limits to date for $\mathrm{B\overline{B}}$ production with B quark decays to tW.

46 data tables

Distribution of ST in the training region for the $\mathrm{T\overline{T}}$ MLP. The observed data are shown along with the predicted $\mathrm{T\overline{T}}$ signal with mass of 1.2 (1.5) TeV in the singlet scenario and the background. Statistical and systematic uncertainties in the background prediction before performing the fit to data are also shown. The signal predictions of the 1.2 TeV and 1.5 TeV signals have been scaled by factors of x300 and x600, respectively, for visibility.

Distribution of the leading jet's DEEPAK8 light quark or gluon score in the training region for the $\mathrm{T\overline{T}}$ MLP. The observed data are shown along with the predicted $\mathrm{T\overline{T}}$ signal with mass of 1.2 (1.5) TeV in the singlet scenario and the background. Statistical and systematic uncertainties in the background prediction before performing the fit to data are also shown. The signal predictions of the 1.2 TeV and 1.5 TeV signals have been scaled by factors of x300 and x600, respectively, for visibility.

Distribution of the MLP T quark score in the SR for the $\mathrm{T\overline{T}}$ search. The observed data, predicted $\mathrm{T\overline{T}}$ signal with mass of 1.2 (1.5) TeV in the singlet scenario, and the background are all shown. Statistical and systematic uncertainties in the background prediction before performing the fit to data are also shown. The signal predictions of the 1.2 TeV and 1.5 TeV signals have been scaled by factors of x10 and x20, respectively, for visibility.

#### Search for exotic Higgs boson decays $\mathrm{H} \to \mathcal{A}\mathcal{A} \to 4\gamma$ with events containing two merged diphotons in proton-proton collisions at $\sqrt{s}$ = 13 TeV

The collaboration CMS-HIG-21-016, 2022. Inspire Record 2151007

The first direct search for exotic Higgs boson decays $\mathrm{H} \to \mathcal{A}\mathcal{A}$, $\mathcal{A} \to \gamma\gamma$ in events with two photon-like objects is presented. The hypothetical particle $\mathcal{A}$ is a low-mass spin-0 particle decaying promptly to a merged diphoton reconstructed as a single photon-like object. Data collected by the CMS experiment at $\sqrt{s}$ = 13 TeV corresponding to an integrated luminosity of 136 fb$^{-1}$ are analyzed. No excess above the estimated background is found. Upper limits on the branching fraction $\mathcal{B}(\mathrm{H} \to \mathcal{A}\mathcal{A} \to 4\gamma)$ of (0.9-3.3) $\times$ 10$^{-3}$ are set at 95% confidence level for masses of $\mathcal{A}$ in the range 0.1-1.2 GeV.
1 data table

Observed and median expected 95% confidence level (CL) upper limits on $\mathcal{B}(\mathrm{H} \rightarrow \mathcal{A}\mathcal{A} \rightarrow 4\gamma)$ as a function of $m_{\mathcal{A}}$ for prompt $\mathcal{A}$ decays. The 68% and 95% confidence intervals (CIs) around the median expected upper limit are also provided.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9905425310134888, "perplexity": 2692.1854549090444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711396.19/warc/CC-MAIN-20221209112528-20221209142528-00144.warc.gz"}
http://www.sagemath.org/doc/reference/geometry/sage/geometry/lattice_polytope.html
# Lattice and reflexive polytopes

This module provides tools for work with lattice and reflexive polytopes. A convex polytope is the convex hull of finitely many points in $$\RR^n$$. The dimension $$n$$ of a polytope is the smallest $$n$$ such that the polytope can be embedded in $$\RR^n$$.

A lattice polytope is a polytope whose vertices all have integer coordinates. If $$L$$ is a lattice polytope, the dual polytope of $$L$$ is

$\{y \in \RR^n : x\cdot y \geq -1 \text{ for all } x \in L\}$

A reflexive polytope is a lattice polytope such that its polar is also a lattice polytope, i.e., it is bounded and has vertices with integer coordinates.

This Sage module uses the Package for Analyzing Lattice Polytopes (PALP), which is a program written in C by Maximilian Kreuzer and Harald Skarke, freely available under the GNU license terms at http://hep.itp.tuwien.ac.at/~kreuzer/CY/. Moreover, PALP is included standard with Sage. PALP is described in the paper arXiv: math.SC/0204356. Its distribution also contains the application nef.x, which was created by Erwin Riegler and computes nef-partitions and Hodge data for toric complete intersections.

ACKNOWLEDGMENT: the polytope.py module written by William Stein was used as an example of organizing an interface between an external program and Sage. William Stein also helped Andrey Novoseltsev with debugging and tuning of this module. Robert Bradshaw helped Andrey Novoseltsev to realize the plot3d function.

Note

IMPORTANT: PALP requires some parameters to be determined during compilation time, i.e., the maximum dimension of polytopes, the maximum number of points, etc. These limitations may lead to errors during calls to different functions of this module. Currently, a ValueError exception will be raised if the output of poly.x or nef.x is empty or contains the exclamation mark. The error message will contain the exact command that caused an error, the description and vertices of the polytope, and the obtained output.

Data obtained from PALP and some other data is cached, and most returned values are immutable. In particular, you cannot change the vertices of the polytope or their order after creation of the polytope.

If you are going to work with large sets of data, take a look at the all_* functions in this module. They precompute different data for sequences of polytopes with a few runs of external programs. This can significantly affect the time of future computations. You can also use dump/load, but not all data will be stored (currently only faces and the number of their internal and boundary points are stored, in addition to the polytope vertices and its polar).

AUTHORS:

• Andrey Novoseltsev (2007-01-11): initial version
• Andrey Novoseltsev (2007-01-15): all_* functions
• Andrey Novoseltsev (2008-04-01): second version, including:
  • dual nef-partitions and necessary convex_hull and minkowski_sum
  • built-in sequences of 2- and 3-dimensional reflexive polytopes
  • plot3d, skeleton_show
• Andrey Novoseltsev (2009-08-26): dropped maximal dimension requirement
• Andrey Novoseltsev (2010-12-15): new version of nef-partitions
• Andrey Novoseltsev (2013-09-30): switch to PointCollection.
• Maximilian Kreuzer and Harald Skarke: authors of PALP (which was also used to obtain the list of 3-dimensional reflexive polytopes)
• Erwin Riegler: the author of nef.x

sage.geometry.lattice_polytope.LatticePolytope(data, desc=None, compute_vertices=True, n=0, lattice=None)

Construct a lattice polytope.
INPUT:

• data – points spanning the lattice polytope, specified as one of:
  • a point collection (this is the preferred input and it is the quickest and the most memory efficient one);
  • an iterable of iterables (for example, a list of vectors) defining the point coordinates;
  • a file with matrix data, opened for reading, or
  • a filename of such a file, see read_palp_matrix() for the file format;
• desc – DEPRECATED (default: “A lattice polytope”) a string description of the polytope;
• compute_vertices – boolean (default: True). If True, the convex hull of the given points will be computed for determining vertices. Otherwise, the given points must be vertices;
• n – an integer (default: 0); if data is a name of a file that contains data blocks for several polytopes, the n-th block will be used;
• lattice – the ambient lattice of the polytope. If not given, a suitable lattice will be determined automatically, most likely the toric lattice $$M$$ of the appropriate dimension.

OUTPUT:

• a lattice polytope.

EXAMPLES:

sage: points = [(1,0,0), (0,1,0), (0,0,1), (-1,0,0), (0,-1,0), (0,0,-1)]
sage: p = LatticePolytope(points)
sage: p
3-d reflexive polytope in 3-d lattice M
sage: p.vertices_pc()
M( 1,  0,  0),
M( 0,  1,  0),
M( 0,  0,  1),
M(-1,  0,  0),
M( 0, -1,  0),
M( 0,  0, -1)
in 3-d lattice M

We draw a pretty picture of the polytope in 3-dimensional space:

sage: p.plot3d().show()

Now we add an extra point, which is in the interior of the polytope...

sage: points.append((0,0,0))
sage: p = LatticePolytope(points)
sage: p.nvertices()
6

You can suppress vertex computation for speed but this can lead to mistakes:

sage: p = LatticePolytope(points, compute_vertices=False)
...
sage: p.nvertices()
7

Given points must be in the lattice:

sage: LatticePolytope(matrix([1/2, 3/2]))
Traceback (most recent call last):
...
ValueError: points [(1/2), (3/2)] are not in 1-d lattice M!

But it is OK to create polytopes of non-maximal dimension:

sage: p = LatticePolytope([(1,0,0), (0,1,0), (0,0,0),
...                        (-1,0,0), (0,-1,0), (0,0,0), (0,0,0)])
sage: p
2-d lattice polytope in 3-d lattice M
sage: p.vertices_pc()
M( 1,  0, 0),
M( 0,  1, 0),
M(-1,  0, 0),
M( 0, -1, 0)
in 3-d lattice M

An empty lattice polytope can be considered as well:

sage: p = LatticePolytope([], lattice=ToricLattice(3).dual()); p
-1-d lattice polytope in 3-d lattice M
sage: p.ambient_dim()
3
sage: p.npoints()
0
sage: p.nfacets()
0
sage: p.points_pc()
Empty collection
in 3-d lattice M
sage: p.faces()
[]

class sage.geometry.lattice_polytope.LatticePolytopeClass(points, compute_vertices)

Bases: sage.structure.sage_object.SageObject, _abcoll.Hashable

Construct a lattice polytope from prepared data. In most cases you should use LatticePolytope() for constructing polytopes.

INPUT:

• points – a point collection spanning the polytope;
• compute_vertices – boolean.

affine_transform(a=1, b=0)

Return a*P+b, where P is this lattice polytope.

Note

1. While a and b may be rational, the final result must be a lattice polytope, i.e. all vertices must be integral.
2. If the transform (restricted to this polytope) is bijective, facial structure will be preserved, e.g. the first facet of the image will be spanned by the images of vertices which span the first facet of the original polytope.
INPUT: • a - (default: 1) rational scalar or matrix • b - (default: 0) rational scalar or vector, scalars are interpreted as vectors with the same components EXAMPLES: sage: o = lattice_polytope.cross_polytope(2) sage: o.vertices_pc() M( 1, 0), M( 0, 1), M(-1, 0), M( 0, -1) in 2-d lattice M sage: o.affine_transform(2).vertices_pc() M( 2, 0), M( 0, 2), M(-2, 0), M( 0, -2) in 2-d lattice M sage: o.affine_transform(1,1).vertices_pc() M(2, 1), M(1, 2), M(0, 1), M(1, 0) in 2-d lattice M sage: o.affine_transform(b=1).vertices_pc() M(2, 1), M(1, 2), M(0, 1), M(1, 0) in 2-d lattice M sage: o.affine_transform(b=(1, 0)).vertices_pc() M(2, 0), M(1, 1), M(0, 0), M(1, -1) in 2-d lattice M sage: a = matrix(QQ, 2, [1/2, 0, 0, 3/2]) sage: o.polar().vertices_pc() N(-1, 1), N( 1, 1), N(-1, -1), N( 1, -1) in 2-d lattice N sage: o.polar().affine_transform(a, (1/2, -1/2)).vertices_pc() M(0, 1), M(1, 1), M(0, -2), M(1, -2) in 2-d lattice M While you can use rational transformation, the result must be integer: sage: o.affine_transform(a) Traceback (most recent call last): ... ValueError: points [(1/2, 0), (0, 3/2), (-1/2, 0), (0, -3/2)] are not in 2-d lattice M! ambient_dim() Return the dimension of the ambient space of this polytope. EXAMPLES: We create a 3-dimensional octahedron and check its ambient dimension: sage: o = lattice_polytope.cross_polytope(3) sage: o.ambient_dim() 3 dim() Return the dimension of this polytope. EXAMPLES: We create a 3-dimensional octahedron and check its dimension: sage: o = lattice_polytope.cross_polytope(3) sage: o.dim() 3 Now we create a 2-dimensional diamond in a 3-dimensional space: sage: p = LatticePolytope([(1,0,0), (0,1,0), (-1,0,0), (0,-1,0)]) sage: p.dim() 2 sage: p.ambient_dim() 3 distances(point=None) Return the matrix of distances for this polytope or distances for the given point. The matrix of distances m gives distances m[i,j] between the i-th facet (which is also the i-th vertex of the polar polytope in the reflexive case) and j-th point of this polytope. If point is specified, integral distances from the point to all facets of this polytope will be computed. This function CAN be used for polytopes whose dimension is smaller than the dimension of the ambient space. In this case distances are computed in the affine subspace spanned by the polytope and if the point is given, it must be in this subspace. EXAMPLES: The matrix of distances for a 3-dimensional octahedron: sage: o = lattice_polytope.cross_polytope(3) sage: o.distances() [0 0 2 2 2 0 1] [2 0 2 0 2 0 1] [0 2 2 2 0 0 1] [2 2 2 0 0 0 1] [0 0 0 2 2 2 1] [2 0 0 0 2 2 1] [0 2 0 2 0 2 1] [2 2 0 0 0 2 1] Distances from facets to the point (1,2,3): sage: o.distances([1,2,3]) (1, 3, 5, 7, -5, -3, -1, 1) It is OK to use RATIONAL coordinates: sage: o.distances([1,2,3/2]) (-1/2, 3/2, 7/2, 11/2, -7/2, -3/2, 1/2, 5/2) sage: o.distances([1,2,sqrt(2)]) Traceback (most recent call last): ... TypeError: unable to convert sqrt(2) to a rational Now we create a non-spanning polytope: sage: p = LatticePolytope([(1,0,0), (0,1,0), (-1,0,0), (0,-1,0)]) sage: p.distances() [0 2 2 0 1] [2 2 0 0 1] [0 0 2 2 1] [2 0 0 2 1] sage: p.distances((1/2, 3, 0)) (7/2, 9/2, -5/2, -3/2) sage: p.distances((1, 1, 1)) Traceback (most recent call last): ... ArithmeticError: vector is not in free module dual_lattice() Return the dual of the ambient lattice of self. OUTPUT: • a lattice. If possible (that is, if lattice() has a dual() method), the dual lattice is returned. Otherwise, $$\ZZ^n$$ is returned, where $$n$$ is the dimension of self. 
EXAMPLES:

sage: LatticePolytope([(1,0)]).dual_lattice()
2-d lattice N
sage: LatticePolytope([], lattice=ZZ^3).dual_lattice()
Ambient free module of rank 3 over the principal ideal domain Integer Ring

edges()

Return the sequence of edges of this polytope (i.e. faces of dimension 1).

EXAMPLES:

The octahedron has 12 edges:

sage: o = lattice_polytope.cross_polytope(3)
sage: len(o.edges())
12
sage: o.edges()
[[1, 5], [0, 5], [0, 1], [3, 5], [1, 3], [4, 5], [0, 4], [3, 4], [1, 2], [0, 2], [2, 3], [2, 4]]

faces(dim=None, codim=None)

Return the sequence of proper faces of this polytope.

If dim or codim are specified, returns a sequence of faces of the corresponding dimension or codimension. Otherwise returns the sequence of such sequences for all dimensions.

EXAMPLES:

All faces of the 3-dimensional octahedron:

sage: o = lattice_polytope.cross_polytope(3)
sage: o.faces()
[
[[0], [1], [2], [3], [4], [5]],
[[1, 5], [0, 5], [0, 1], [3, 5], [1, 3], [4, 5], [0, 4], [3, 4], [1, 2], [0, 2], [2, 3], [2, 4]],
[[0, 1, 5], [1, 3, 5], [0, 4, 5], [3, 4, 5], [0, 1, 2], [1, 2, 3], [0, 2, 4], [2, 3, 4]]
]

Its faces of dimension one (i.e., edges):

sage: o.faces(dim=1)
[[1, 5], [0, 5], [0, 1], [3, 5], [1, 3], [4, 5], [0, 4], [3, 4], [1, 2], [0, 2], [2, 3], [2, 4]]

Its faces of codimension two (also edges):

sage: o.faces(codim=2)
[[1, 5], [0, 5], [0, 1], [3, 5], [1, 3], [4, 5], [0, 4], [3, 4], [1, 2], [0, 2], [2, 3], [2, 4]]

It is an error to specify both dimension and codimension at the same time, even if they do agree:

sage: o.faces(dim=1, codim=2)
Traceback (most recent call last):
...
ValueError: Both dim and codim are given!

The only faces of a zero-dimensional polytope are the empty set and the polytope itself, i.e. it has no proper faces at all:

sage: p = LatticePolytope([[1]])
sage: p.vertices_pc()
M(1)
in 1-d lattice M
sage: p.faces()
[]

In particular, an exception will be raised if you try to access faces of a given dimension or codimension, including edges and facets:

sage: p.facets()
Traceback (most recent call last):
...
IndexError: list index out of range

facet_constant(i)

Return the constant in the i-th facet inequality of this polytope.

The i-th facet inequality is given by self.facet_normal(i) * X + self.facet_constant(i) >= 0.

INPUT:

• i - integer, the index of the facet

OUTPUT:

• integer – the constant in the i-th facet inequality.
EXAMPLES: Let’s take a look at facets of the octahedron and some polytopes inside it: sage: o = lattice_polytope.cross_polytope(3) sage: o.vertices_pc() M( 1, 0, 0), M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1) in 3-d lattice M sage: o.facet_normal(0) N(-1, -1, 1) sage: o.facet_constant(0) 1 sage: p = LatticePolytope(o.vertices_pc()(1,2,3,4,5)) sage: p.vertices_pc() M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1) in 3-d lattice M sage: p.facet_normal(0) N(-1, 0, 0) sage: p.facet_constant(0) 0 sage: p = LatticePolytope(o.vertices_pc()(1,2,4,5)) sage: p.vertices_pc() M(0, 1, 0), M(0, 0, 1), M(0, -1, 0), M(0, 0, -1) in 3-d lattice M sage: p.facet_normal(0) N(0, -1, 1) sage: p.facet_constant(0) 1 This is a 2-dimensional lattice polytope in a 4-dimensional space: sage: p = LatticePolytope([(1,-1,1,3), (-1,-1,1,3), (0,0,0,0)]) sage: p 2-d lattice polytope in 4-d lattice M sage: p.vertices_pc() M( 1, -1, 1, 3), M(-1, -1, 1, 3), M( 0, 0, 0, 0) in 4-d lattice M sage: fns = [p.facet_normal(i) for i in range(p.nfacets())] sage: fns [N(11, -1, 1, 3), N(-11, -1, 1, 3), N(0, 1, -1, -3)] sage: fcs = [p.facet_constant(i) for i in range(p.nfacets())] sage: fcs [0, 0, 11] Now we manually compute the distance matrix of this polytope. Since it is a triangle, each line (corresponding to a facet) should have two zeros (vertices of the corresponding facet) and one positive number (since our normals are inner): sage: matrix([[fns[i] * p.vertex(j) + fcs[i] ... for j in range(p.nvertices())] ... for i in range(p.nfacets())]) [22 0 0] [ 0 22 0] [ 0 0 11] facet_constants() Return facet constants of self. OUTPUT: • an integer vector. EXAMPLES: For reflexive polytopes all constants are 1: sage: o = lattice_polytope.cross_polytope(3) sage: o.vertices_pc() M( 1, 0, 0), M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1) in 3-d lattice M sage: o.facet_constants() (1, 1, 1, 1, 1, 1, 1, 1) Here is an example of a 3-dimensional polytope in a 4-dimensional space with 3 facets containing the origin: sage: p = LatticePolytope([(0,0,0,0), (1,1,1,3), ... (1,-1,1,3), (-1,-1,1,3)]) sage: p.vertices_pc() M( 0, 0, 0, 0), M( 1, 1, 1, 3), M( 1, -1, 1, 3), M(-1, -1, 1, 3) in 4-d lattice M sage: p.facet_constants() (0, 0, 0, 10) facet_normal(i) Return the inner normal to the i-th facet of this polytope. If this polytope is not full-dimensional, facet normals will be orthogonal to the integer kernel of the affine subspace spanned by this polytope. INPUT: • i – integer, the index of the facet OUTPUT: • vectors – the inner normal of the i-th facet EXAMPLES: Let’s take a look at facets of the octahedron and some polytopes inside it: sage: o = lattice_polytope.cross_polytope(3) sage: o.vertices_pc() M( 1, 0, 0), M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1) in 3-d lattice M sage: o.facet_normal(0) N(-1, -1, 1) sage: o.facet_constant(0) 1 sage: p = LatticePolytope(o.vertices_pc()(1,2,3,4,5)) sage: p.vertices_pc() M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1) in 3-d lattice M sage: p.facet_normal(0) N(-1, 0, 0) sage: p.facet_constant(0) 0 sage: p = LatticePolytope(o.vertices_pc()(1,2,4,5)) sage: p.vertices_pc() M(0, 1, 0), M(0, 0, 1), M(0, -1, 0), M(0, 0, -1) in 3-d lattice M sage: p.facet_normal(0) N(0, -1, 1) sage: p.facet_constant(0) 1 Here is an example of a 3-dimensional polytope in a 4-dimensional space: sage: p = LatticePolytope([(0,0,0,0), (1,1,1,3), ... 
(1,-1,1,3), (-1,-1,1,3)])
sage: p.vertices_pc()
M( 0,  0, 0, 0),
M( 1,  1, 1, 3),
M( 1, -1, 1, 3),
M(-1, -1, 1, 3)
in 4-d lattice M
sage: ker = p.vertices_pc().column_matrix().integer_kernel().matrix()
sage: ker
[ 0  0  3 -1]
sage: ker * p.facet_normals()
[0 0 0 0]

Now we manually compute the distance matrix of this polytope. Since it is a simplex, each line (corresponding to a facet) should consist of zeros (indicating generating vertices of the corresponding facet) and a single positive number (since our normals are inner):

sage: matrix([[p.facet_normal(i) * p.vertex(j)
...            + p.facet_constant(i)
...            for j in range(p.nvertices())]
...           for i in range(p.nfacets())])
[ 0  0  0 20]
[ 0  0 20  0]
[ 0 20  0  0]
[10  0  0  0]

facet_normals()

Return inner normals to the facets of self.

OUTPUT:

• a point collection.

EXAMPLES:

Normals to facets of an octahedron are vertices of a cube:

sage: o = lattice_polytope.cross_polytope(3)
sage: o.vertices_pc()
M( 1,  0,  0),
M( 0,  1,  0),
M( 0,  0,  1),
M(-1,  0,  0),
M( 0, -1,  0),
M( 0,  0, -1)
in 3-d lattice M
sage: o.facet_normals()
N(-1, -1,  1),
N( 1, -1,  1),
N(-1,  1,  1),
N( 1,  1,  1),
N(-1, -1, -1),
N( 1, -1, -1),
N(-1,  1, -1),
N( 1,  1, -1)
in 3-d lattice N

Here is an example of a 3-dimensional polytope in a 4-dimensional space:

sage: p = LatticePolytope([(0,0,0,0), (1,1,1,3),
...                        (1,-1,1,3), (-1,-1,1,3)])
sage: p.vertices_pc()
M( 0,  0, 0, 0),
M( 1,  1, 1, 3),
M( 1, -1, 1, 3),
M(-1, -1, 1, 3)
in 4-d lattice M
sage: p.facet_normals()
N(-10,   0,  1,  3),
N( 10, -10,  0,  0),
N(  0,  10,  1,  3),
N(  0,   0, -1, -3)
in 4-d lattice N

facets()

Return the sequence of facets of this polytope (i.e. faces of codimension 1).

EXAMPLES:

All facets of the 3-dimensional octahedron:

sage: o = lattice_polytope.cross_polytope(3)
sage: o.facets()
[[0, 1, 5], [1, 3, 5], [0, 4, 5], [3, 4, 5], [0, 1, 2], [1, 2, 3], [0, 2, 4], [2, 3, 4]]

Facets are the same as faces of codimension one:

sage: o.facets() is o.faces(codim=1)
True

index()

Return the index of this polytope in the internal database of 2- or 3-dimensional reflexive polytopes. Databases are stored in the directory of the package.

Note

The first call to this function for each dimension can take a few seconds while the dictionary of all polytopes is constructed, but after that it is cached and fast.

Return type: integer

EXAMPLES:

We check what is the index of the “diamond” in the database:

sage: d = lattice_polytope.cross_polytope(2)
sage: d.index()
3

Note that polytopes with the same index are not necessarily the same:

sage: d.vertices_pc()
M( 1,  0),
M( 0,  1),
M(-1,  0),
M( 0, -1)
in 2-d lattice M
sage: lattice_polytope.ReflexivePolytope(2,3).vertices_pc()
M( 1,  0),
M( 0,  1),
M( 0, -1),
M(-1,  0)
in 2-d lattice M

But they are in the same $$GL(Z^n)$$ orbit and have the same normal form:

sage: d.normal_form_pc()
M( 1,  0),
M( 0,  1),
M( 0, -1),
M(-1,  0)
in 2-d lattice M
sage: lattice_polytope.ReflexivePolytope(2,3).normal_form_pc()
M( 1,  0),
M( 0,  1),
M( 0, -1),
M(-1,  0)
in 2-d lattice M

is_reflexive()

Return True if this polytope is reflexive.
EXAMPLES: The 3-dimensional octahedron is reflexive (and 4318 other 3-polytopes): sage: o = lattice_polytope.cross_polytope(3) sage: o.is_reflexive() True But not all polytopes are reflexive: sage: p = LatticePolytope([(1,0,0), (0,1,17), (-1,0,0), (0,-1,0)]) sage: p.is_reflexive() False Only full-dimensional polytopes can be reflexive (otherwise the polar set is not a polytope at all, since it is unbounded): sage: p = LatticePolytope([(1,0,0), (0,1,0), (-1,0,0), (0,-1,0)]) sage: p.is_reflexive() False lattice() Return the ambient lattice of self. OUTPUT: • a lattice. EXAMPLES: sage: lattice_polytope.cross_polytope(3).lattice() 3-d lattice M linearly_independent_vertices() Return a maximal set of linearly independent vertices. OUTPUT: A tuple of vertex indices. EXAMPLES: sage: L = LatticePolytope([[0, 0], [-1, 1], [-1, -1]]) sage: L.linearly_independent_vertices() (1, 2) sage: L = LatticePolytope([[0, 0, 0]]) sage: L.linearly_independent_vertices() () sage: L = LatticePolytope([[0, 1, 0]]) sage: L.linearly_independent_vertices() (0,) nef_partitions(keep_symmetric=False, keep_products=True, keep_projections=True, hodge_numbers=False) Return 2-part nef-partitions of self. INPUT: • keep_symmetric – (default: False) if True, “-s” option will be passed to nef.x in order to keep symmetric partitions, i.e. partitions related by lattice automorphisms preserving self; • keep_products – (default: True) if True, “-D” option will be passed to nef.x in order to keep product partitions, with corresponding complete intersections being direct products; • keep_projections – (default: True) if True, “-P” option will be passed to nef.x in order to keep projection partitions, i.e. partitions with one of the parts consisting of a single vertex; • hodge_numbers – (default: False) if False, “-p” option will be passed to nef.x in order to skip Hodge numbers computation, which takes a lot of time. OUTPUT: Type NefPartition? for definitions and notation. EXAMPLES: Nef-partitions of the 4-dimensional cross-polytope: sage: p = lattice_polytope.cross_polytope(4) sage: p.nef_partitions() [ Nef-partition {0, 1, 4, 5} U {2, 3, 6, 7} (direct product), Nef-partition {0, 1, 2, 4} U {3, 5, 6, 7}, Nef-partition {0, 1, 2, 4, 5} U {3, 6, 7}, Nef-partition {0, 1, 2, 4, 5, 6} U {3, 7} (direct product), Nef-partition {0, 1, 2, 3} U {4, 5, 6, 7}, Nef-partition {0, 1, 2, 3, 4} U {5, 6, 7}, Nef-partition {0, 1, 2, 3, 4, 5} U {6, 7}, Nef-partition {0, 1, 2, 3, 4, 5, 6} U {7} (projection) ] Now we omit projections: sage: p.nef_partitions(keep_projections=False) [ Nef-partition {0, 1, 4, 5} U {2, 3, 6, 7} (direct product), Nef-partition {0, 1, 2, 4} U {3, 5, 6, 7}, Nef-partition {0, 1, 2, 4, 5} U {3, 6, 7}, Nef-partition {0, 1, 2, 4, 5, 6} U {3, 7} (direct product), Nef-partition {0, 1, 2, 3} U {4, 5, 6, 7}, Nef-partition {0, 1, 2, 3, 4} U {5, 6, 7}, Nef-partition {0, 1, 2, 3, 4, 5} U {6, 7} ] Currently Hodge numbers cannot be computed for a given nef-partition: sage: p.nef_partitions()[1].hodge_numbers() Traceback (most recent call last): ... NotImplementedError: use nef_partitions(hodge_numbers=True)! But they can be obtained from nef.x for all nef-partitions at once. 
Partitions will be exactly the same:

sage: p.nef_partitions(hodge_numbers=True)  # long time (2s on sage.math, 2011)
[
Nef-partition {0, 1, 4, 5} U {2, 3, 6, 7} (direct product),
Nef-partition {0, 1, 2, 4} U {3, 5, 6, 7},
Nef-partition {0, 1, 2, 4, 5} U {3, 6, 7},
Nef-partition {0, 1, 2, 4, 5, 6} U {3, 7} (direct product),
Nef-partition {0, 1, 2, 3} U {4, 5, 6, 7},
Nef-partition {0, 1, 2, 3, 4} U {5, 6, 7},
Nef-partition {0, 1, 2, 3, 4, 5} U {6, 7},
Nef-partition {0, 1, 2, 3, 4, 5, 6} U {7} (projection)
]

Now it is possible to get Hodge numbers:

sage: p.nef_partitions(hodge_numbers=True)[1].hodge_numbers()
(20,)

Since nef-partitions are cached, their Hodge numbers are accessible after the first request, even if you do not specify hodge_numbers=True anymore:

sage: p.nef_partitions()[1].hodge_numbers()
(20,)

We illustrate removal of symmetric partitions on a diamond:

sage: p = lattice_polytope.cross_polytope(2)
sage: p.nef_partitions()
[
Nef-partition {0, 2} U {1, 3} (direct product),
Nef-partition {0, 1} U {2, 3},
Nef-partition {0, 1, 2} U {3} (projection)
]
sage: p.nef_partitions(keep_symmetric=True)
[
Nef-partition {0, 1, 3} U {2} (projection),
Nef-partition {0, 2, 3} U {1} (projection),
Nef-partition {0, 3} U {1, 2},
Nef-partition {1, 2, 3} U {0} (projection),
Nef-partition {1, 3} U {0, 2} (direct product),
Nef-partition {2, 3} U {0, 1},
Nef-partition {0, 1, 2} U {3} (projection)
]

Nef-partitions can be computed only for reflexive polytopes:

sage: p = LatticePolytope([(1,0,0), (0,1,0), (0,0,2),
...                        (-1,0,0), (0,-1,0), (0,0,-1)])
sage: p.nef_partitions()
Traceback (most recent call last):
...
ValueError: The given polytope is not reflexive!
Polytope: 3-d lattice polytope in 3-d lattice M

nef_x(keys)

Run nef.x with given keys on vertices of this polytope.

INPUT:

• keys - a string of options passed to nef.x. The key “-f” is added automatically.

OUTPUT: the output of nef.x as a string.

EXAMPLES:

This call is used internally for computing nef-partitions:

sage: o = lattice_polytope.cross_polytope(3)
sage: s = o.nef_x("-N -V -p")
sage: s  # output contains random time
M:27 8 N:7 6  codim=2 #part=5
3 6 Vertices of P:
   1   0   0  -1   0   0
   0   1   0   0  -1   0
   0   0   1   0   0  -1
P:0 V:2 4 5   0sec  0cpu
P:2 V:3 4 5   0sec  0cpu
P:3 V:4 5   0sec  0cpu
np=3 d:1 p:1   0sec   0cpu

nfacets()

Return the number of facets of this polytope.

EXAMPLES:

The number of facets of the 3-dimensional octahedron:

sage: o = lattice_polytope.cross_polytope(3)
sage: o.nfacets()
8

The number of facets of an interval is 2:

sage: LatticePolytope(([1],[2])).nfacets()
2

Now consider a 2-dimensional diamond in a 3-dimensional space:

sage: p = LatticePolytope([(1,0,0), (0,1,0), (-1,0,0), (0,-1,0)])
sage: p.nfacets()
4

normal_form()

Return the normal form of self as a matrix.

EXAMPLES:

We compute the normal form of the “diamond”:

sage: o = lattice_polytope.cross_polytope(3)
sage: o.normal_form()
doctest:...: DeprecationWarning: normal_form() output will change, or consider using normal_form_pc() directly!
See http://trac.sagemath.org/15240 for details.
[ 1  0  0  0  0 -1]
[ 0  1  0  0 -1  0]
[ 0  0  1 -1  0  0]

normal_form_pc(algorithm='palp', permutation=False)

Return the normal form of vertices of self.

Two full-dimensional lattice polytopes are in the same $$GL(\ZZ)$$-orbit if and only if their normal forms are the same. Normal form is not defined and thus cannot be used for polytopes whose dimension is smaller than the dimension of the ambient space.

The original algorithm was presented in [KS98] and implemented in PALP.
A modified version of the PALP algorithm is discussed in [GK13] and available here as “palp_modified”. INPUT: • algorithm – (default: “palp”) The algorithm which is used to compute the normal form. Options are: • “palp” – Run external PALP code, usually the fastest option. • “palp_native” – The original PALP algorithm implemented in sage. Currently considerably slower than PALP. • “palp_modified” – A modified version of the PALP algorithm which determines the maximal vertex-facet pairing matrix first and then computes its automorphisms, while the PALP algorithm does both things concurrently. • permutation – (default: False) If True the permutation applied to vertices to obtain the normal form is returned as well. Note that the different algorithms may return different results that nevertheless lead to the same normal form. OUTPUT: • a point collection in the lattice() of self or a tuple of it and a permutation. REFERENCES: [KS98] Maximilian Kreuzer and Harald Skarke, Classification of Reflexive Polyhedra in Three Dimensions, arXiv:hep-th/9805190 [GK13] Roland Grinis and Alexander Kasprzyk, Normal forms of convex lattice polytopes, arXiv:1301.6641 EXAMPLES: We compute the normal form of the “diamond”: sage: d = LatticePolytope([(1,0), (0,1), (-1,0), (0,-1)]) sage: d.vertices_pc() M( 1, 0), M( 0, 1), M(-1, 0), M( 0, -1) in 2-d lattice M sage: d.normal_form_pc() M( 1, 0), M( 0, 1), M( 0, -1), M(-1, 0) in 2-d lattice M The diamond is the 3rd polytope in the internal database: sage: d.index() 3 sage: d 2-d reflexive polytope #3 in 2-d lattice M You can get it in its normal form (in the default lattice) as sage: lattice_polytope.ReflexivePolytope(2, 3).vertices_pc() M( 1, 0), M( 0, 1), M( 0, -1), M(-1, 0) in 2-d lattice M It is not possible to compute normal forms for polytopes which do not span the space: sage: p = LatticePolytope([(1,0,0), (0,1,0), (-1,0,0), (0,-1,0)]) sage: p.normal_form() Traceback (most recent call last): ... ValueError: normal form is not defined for 2-d lattice polytope in 3-d lattice M We can perform the same examples using other algorithms: sage: o = lattice_polytope.cross_polytope(2) sage: o.normal_form_pc(algorithm="palp_native") M( 1, 0), M( 0, 1), M( 0, -1), M(-1, 0) in 2-d lattice M sage: o = lattice_polytope.cross_polytope(2) sage: o.normal_form_pc(algorithm="palp_modified") M( 1, 0), M( 0, 1), M( 0, -1), M(-1, 0) in 2-d lattice M npoints() Return the number of lattice points of this polytope. EXAMPLES: The number of lattice points of the 3-dimensional octahedron and its polar cube: sage: o = lattice_polytope.cross_polytope(3) sage: o.npoints() 7 sage: cube = o.polar() sage: cube.npoints() 27 nvertices() Return the number of vertices of this polytope. EXAMPLES: The number of vertices of the 3-dimensional octahedron and its polar cube: sage: o = lattice_polytope.cross_polytope(3) sage: o.nvertices() 6 sage: cube = o.polar() sage: cube.nvertices() 8 origin() Return the index of the origin in the list of points of self. OUTPUT: • integer if the origin belongs to this polytope, None otherwise. EXAMPLES: sage: p = lattice_polytope.cross_polytope(2) sage: p.origin() 4 sage: p.point(p.origin()) M(0, 0) sage: p = LatticePolytope(([1],[2])) sage: p.points_pc() M(1), M(2) in 1-d lattice M sage: print p.origin() None Now we make sure that the origin of non-full-dimensional polytopes can be identified correctly (Trac #10661): sage: LatticePolytope([(1,0,0), (-1,0,0)]).origin() 2 parent() Return the set of all lattice polytopes. 
EXAMPLES:
sage: o = lattice_polytope.cross_polytope(3)
sage: o.parent()
Set of all Lattice Polytopes
plot3d(show_facets=True, facet_opacity=0.5, facet_color=(0, 1, 0), facet_colors=None, show_edges=True, edge_thickness=3, edge_color=(0.5, 0.5, 0.5), show_vertices=True, vertex_size=10, vertex_color=(1, 0, 0), show_points=True, point_size=10, point_color=(0, 0, 1), show_vindices=None, vindex_color=(0, 0, 0), vlabels=None, show_pindices=None, pindex_color=(0, 0, 0), index_shift=1.1)
Return a 3d-plot of this polytope.
Polytopes with ambient dimension 1 and 2 will be plotted along the x-axis or in the xy-plane, respectively. Polytopes of dimension 3 and less with ambient dimension 4 and greater will be plotted in some basis of the spanned space. By default, everything is shown with a more or less pretty combination of size and color parameters.
INPUT: Most of the parameters are self-explanatory:
• show_facets - (default:True)
• facet_opacity - (default:0.5)
• facet_color - (default:(0,1,0))
• facet_colors - (default:None) if specified, must be a list of colors for each facet separately, used instead of facet_color
• show_edges - (default:True) whether to draw edges as lines
• edge_thickness - (default:3)
• edge_color - (default:(0.5,0.5,0.5))
• show_vertices - (default:True) whether to draw vertices as balls
• vertex_size - (default:10)
• vertex_color - (default:(1,0,0))
• show_points - (default:True) whether to draw other points as balls
• point_size - (default:10)
• point_color - (default:(0,0,1))
• show_vindices - (default:same as show_vertices) whether to show indices of vertices
• vindex_color - (default:(0,0,0)) color for vertex labels
• vlabels - (default:None) if specified, must be a list of labels for each vertex, default labels are vertex indices
• show_pindices - (default:same as show_points) whether to show indices of other points
• pindex_color - (default:(0,0,0)) color for point labels
• index_shift - (default:1.1) if 1, labels are placed exactly at the corresponding points. Otherwise the label position is computed as a multiple of the point position vector.
EXAMPLES: The default plot of a cube:
sage: c = lattice_polytope.cross_polytope(3).polar()
sage: c.plot3d()
Plot without facets and points, shown without the frame:
sage: c.plot3d(show_facets=false,show_points=false).show(frame=False)
Plot with facets of different colors:
sage: c.plot3d(facet_colors=rainbow(c.nfacets(), 'rgbtuple'))
It is also possible to plot lower dimensional polytopes in 3D (let's also change labels of vertices):
sage: lattice_polytope.cross_polytope(2).plot3d(vlabels=["A", "B", "C", "D"])
TESTS:
sage: p = LatticePolytope([[0,0,0],[0,1,1],[1,0,1],[1,1,0]])
sage: p.plot3d()
point(i)
Return the i-th point of this polytope, i.e. the i-th column of the matrix returned by points().
EXAMPLES: First few points are actually vertices:
sage: o = lattice_polytope.cross_polytope(3)
sage: o.vertices_pc()
M( 1, 0, 0), M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1) in 3-d lattice M
sage: o.point(1)
M(0, 1, 0)
The only other point in the octahedron is the origin:
sage: o.point(6)
M(0, 0, 0)
sage: o.points_pc()
M( 1, 0, 0), M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1), M( 0, 0, 0) in 3-d lattice M
points()
Return all lattice points of this polytope as columns of a matrix.
EXAMPLES:
sage: o = lattice_polytope.cross_polytope(3)
sage: o.points()
doctest:...: DeprecationWarning: points() output will change, consider using points_pc() directly! See http://trac.sagemath.org/15240 for details.
[ 1 0 0 -1 0 0 0]
[ 0 1 0 0 -1 0 0]
[ 0 0 1 0 0 -1 0]
points_pc()
Return all lattice points of self.
OUTPUT:
• a point collection.
EXAMPLES: Lattice points of the octahedron and its polar cube:
sage: o = lattice_polytope.cross_polytope(3)
sage: o.points_pc()
M( 1, 0, 0), M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1), M( 0, 0, 0) in 3-d lattice M
sage: cube = o.polar()
sage: cube.points_pc()
N(-1, -1, 1), N( 1, -1, 1), N(-1, 1, 1), N( 1, 1, 1), N(-1, -1, -1), N( 1, -1, -1), N(-1, 1, -1), N( 1, 1, -1), N(-1, -1, 0), N(-1, 0, -1), N(-1, 0, 0), N(-1, 0, 1), N(-1, 1, 0), N( 0, -1, -1), N( 0, -1, 0), N( 0, -1, 1), N( 0, 0, -1), N( 0, 0, 0), N( 0, 0, 1), N( 0, 1, -1), N( 0, 1, 0), N( 0, 1, 1), N( 1, -1, 0), N( 1, 0, -1), N( 1, 0, 0), N( 1, 0, 1), N( 1, 1, 0) in 3-d lattice N
Lattice points of a 2-dimensional diamond in a 3-dimensional space:
sage: p = LatticePolytope([(1,0,0), (0,1,0), (-1,0,0), (0,-1,0)])
sage: p.points_pc()
M( 1, 0, 0), M( 0, 1, 0), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, 0) in 3-d lattice M
We check that points of a zero-dimensional polytope can be computed:
sage: p = LatticePolytope([[1]])
sage: p.points_pc()
M(1) in 1-d lattice M
polar()
Return the polar polytope, if this polytope is reflexive.
EXAMPLES: The polar polytope to the 3-dimensional octahedron:
sage: o = lattice_polytope.cross_polytope(3)
sage: cube = o.polar()
sage: cube
3-d reflexive polytope in 3-d lattice N
The polar polytope "remembers" the original one:
sage: cube.polar()
3-d reflexive polytope in 3-d lattice M
sage: cube.polar().polar() is cube
True
Only reflexive polytopes have polars:
sage: p = LatticePolytope([(1,0,0), (0,1,0), (0,0,2),
... (-1,0,0), (0,-1,0), (0,0,-1)])
sage: p.polar()
Traceback (most recent call last):
...
ValueError: The given polytope is not reflexive! Polytope: 3-d lattice polytope in 3-d lattice M
poly_x(keys, reduce_dimension=False)
Run poly.x with given keys on vertices of this polytope.
INPUT:
• keys - a string of options passed to poly.x. The key "f" is added automatically.
• reduce_dimension - (default: False) if True and this polytope is not full-dimensional, poly.x will be called for the vertices of this polytope in some basis of the spanned affine space.
OUTPUT: the output of poly.x as a string.
EXAMPLES: This call is used for determining if a polytope is reflexive or not:
sage: o = lattice_polytope.cross_polytope(3)
sage: print o.poly_x("e")
8 3 Vertices of P-dual <-> Equations of P
-1 -1 1
1 -1 1
-1 1 1
1 1 1
-1 -1 -1
1 -1 -1
-1 1 -1
1 1 -1
Since PALP has limits on different parameters determined during compilation, the following code is likely to fail, unless you change default settings of PALP:
sage: BIG = lattice_polytope.cross_polytope(7)
sage: BIG
7-d lattice polytope in 7-d lattice M
sage: BIG.poly_x("e") # possibly different output depending on your system
Traceback (most recent call last):
...
ValueError: Error executing 'poly.x -fe' for the given polytope!
Output:
Please increase POLY_Dmax to at least 7
You cannot call poly.x for polytopes that don't span the space (if you could, it would crash anyway):
sage: p = LatticePolytope([(1,0,0), (0,1,0), (-1,0,0), (0,-1,0)])
sage: p.poly_x("e")
Traceback (most recent call last):
...
ValueError: Cannot run PALP for a 2-dimensional polytope in a 3-dimensional space!
But if you know what you are doing, you can call it for the polytope in some basis of the spanned space:
sage: print p.poly_x("e", reduce_dimension=True)
4 2 Equations of P
-1 1 0
1 1 2
-1 -1 0
1 -1 2
show3d()
Show a 3d picture of the polytope with default settings and without axes or frame. See self.plot3d? for more details.
EXAMPLES:
sage: o = lattice_polytope.cross_polytope(3)
sage: o.show3d()
skeleton()
Return the graph of the one-skeleton of this polytope.
EXAMPLES: We construct the one-skeleton graph for the “diamond”:
sage: d = lattice_polytope.cross_polytope(2)
sage: g = d.skeleton()
sage: g
Graph on 4 vertices
sage: g.edges()
[(0, 1, None), (0, 3, None), (1, 2, None), (2, 3, None)]
skeleton_points(k=1)
Return the increasing list of indices of lattice points in the k-skeleton of the polytope (k is 1 by default).
EXAMPLES: We compute all skeleton points for the cube:
sage: o = lattice_polytope.cross_polytope(3)
sage: c = o.polar()
sage: c.skeleton_points()
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 15, 19, 21, 22, 23, 25, 26]
The default is the 1-skeleton:
sage: c.skeleton_points(k=1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 15, 19, 21, 22, 23, 25, 26]
0-skeleton just lists all vertices:
sage: c.skeleton_points(k=0)
[0, 1, 2, 3, 4, 5, 6, 7]
2-skeleton lists all points except for the origin (point #17):
sage: c.skeleton_points(k=2)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18, 19, 20, 21, 22, 23, 24, 25, 26]
3-skeleton includes all points:
sage: c.skeleton_points(k=3)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]
It is OK to compute higher-dimensional skeletons - you will get the list of all points:
sage: c.skeleton_points(k=100)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]
skeleton_show(normal=None)
Show the graph of the one-skeleton of this polytope. Works only for polytopes in a 3-dimensional space.
INPUT:
• normal - a 3-dimensional vector (can be given as a list), which should be perpendicular to the screen. If not given, will be selected randomly (new each time and it may be far from “nice”).
EXAMPLES: Show a pretty picture of the octahedron:
sage: o = lattice_polytope.cross_polytope(3)
sage: o.skeleton_show([1,2,4])
Does not work for a diamond at the moment:
sage: d = lattice_polytope.cross_polytope(2)
sage: d.skeleton_show()
Traceback (most recent call last):
...
NotImplementedError: skeleton view is implemented only in 3-d space
traverse_boundary()
Return a list of indices of vertices of a 2-dimensional polytope in their boundary order. Needed for the plot3d function of polytopes.
EXAMPLES:
sage: p = lattice_polytope.cross_polytope(2).polar()
sage: p.traverse_boundary()
[0, 1, 3, 2]
vertex(i)
Return the i-th vertex of this polytope, i.e. the i-th column of the matrix returned by vertices().
EXAMPLES: Note that numeration starts with zero:
sage: o = lattice_polytope.cross_polytope(3)
sage: o.vertices_pc()
M( 1, 0, 0), M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1) in 3-d lattice M
sage: o.vertex(3)
M(-1, 0, 0)
vertex_facet_pairing_matrix()
Return the vertex facet pairing matrix $$PM$$.
Return a matrix whose $$(i, j)^\text{th}$$ entry is the height of the $$j^\text{th}$$ vertex over the $$i^\text{th}$$ facet. The ordering of the vertices and facets is as in vertices() and facets().
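Each entry of $$PM$$ can also be recovered from the facet data of the polytope: the height of a vertex over a facet is the pairing of the corresponding inner facet normal with the vertex, shifted by the facet constant. The following consistency check is only a sketch; it assumes the facet_normal() and facet_constant() accessors of this class behave as just described:
sage: L = lattice_polytope.cross_polytope(3)
sage: PM = L.vertex_facet_pairing_matrix()
sage: all(PM[i, j] == L.facet_normal(i) * L.vertex(j) + L.facet_constant(i)
....:     for i in range(L.nfacets()) for j in range(L.nvertices()))
True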
EXAMPLES: sage: L = lattice_polytope.cross_polytope(3) sage: L.vertex_facet_pairing_matrix() [0 0 2 2 2 0] [2 0 2 0 2 0] [0 2 2 2 0 0] [2 2 2 0 0 0] [0 0 0 2 2 2] [2 0 0 0 2 2] [0 2 0 2 0 2] [2 2 0 0 0 2] vertices() Return vertices of this polytope as columns of a matrix. EXAMPLES: The lattice points of the 3-dimensional octahedron and its polar cube: sage: o = lattice_polytope.cross_polytope(3) sage: o.vertices() doctest:...: DeprecationWarning: vertices() output will change, consider using vertices_pc() directly! See http://trac.sagemath.org/15240 for details. [ 1 0 0 -1 0 0] [ 0 1 0 0 -1 0] [ 0 0 1 0 0 -1] sage: cube = o.polar() sage: cube.vertices() [-1 1 -1 1 -1 1 -1 1] [-1 -1 1 1 -1 -1 1 1] [ 1 1 1 1 -1 -1 -1 -1] vertices_pc() Return vertices of self. OUTPUT: • a point collection. EXAMPLES: Vertices of the octahedron and its polar cube are in dual lattices: sage: o = lattice_polytope.cross_polytope(3) sage: o.vertices_pc() M( 1, 0, 0), M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1) in 3-d lattice M sage: cube = o.polar() sage: cube.vertices_pc() N(-1, -1, 1), N( 1, -1, 1), N(-1, 1, 1), N( 1, 1, 1), N(-1, -1, -1), N( 1, -1, -1), N(-1, 1, -1), N( 1, 1, -1) in 3-d lattice N class sage.geometry.lattice_polytope.NefPartition(data, Delta_polar, check=True) Bases: sage.structure.sage_object.SageObject, _abcoll.Hashable Create a nef-partition. INPUT: • data – a list of integers, the $$i$$-th element of this list must be the part of the $i$-th vertex of Delta_polar in this nef-partition; • Delta_polar – a lattice polytope; • check – by default the input will be checked for correctness, i.e. that data indeed specify a nef-partition. If you are sure that the input is correct, you can speed up construction via check=False option. OUTPUT: • a nef-partition of Delta_polar. Let $$M$$ and $$N$$ be dual lattices. Let $$\Delta \subset M_\RR$$ be a reflexive polytope with polar $$\Delta^\circ \subset N_\RR$$. Let $$X_\Delta$$ be the toric variety associated to the normal fan of $$\Delta$$. A nef-partition is a decomposition of the vertex set $$V$$ of $$\Delta^\circ$$ into a disjoint union $$V = V_0 \sqcup V_1 \sqcup \dots \sqcup V_{k-1}$$ such that divisors $$E_i = \sum_{v\in V_i} D_v$$ are Cartier (here $$D_v$$ are prime torus-invariant Weil divisors corresponding to vertices of $$\Delta^\circ$$). Equivalently, let $$\nabla_i \subset N_\RR$$ be the convex hull of vertices from $$V_i$$ and the origin. These polytopes form a nef-partition if their Minkowski sum $$\nabla \subset N_\RR$$ is a reflexive polytope. The dual nef-partition is formed by polytopes $$\Delta_i \subset M_\RR$$ of $$E_i$$, which give a decomposition of the vertex set of $$\nabla^\circ \subset M_\RR$$ and their Minkowski sum is $$\Delta$$, i.e. the polar duality of reflexive polytopes switches convex hull and Minkowski sum for dual nef-partitions: $\begin{split}\Delta^\circ &= \mathrm{Conv} \left(\nabla_0, \nabla_1, \dots, \nabla_{k-1}\right), \\ \nabla^{\phantom{\circ}} &= \nabla_0 + \nabla_1 + \dots + \nabla_{k-1}, \\ & \\ \Delta^{\phantom{\circ}} &= \Delta_0 + \Delta_1 + \dots + \Delta_{k-1}, \\ \nabla^\circ &= \mathrm{Conv} \left(\Delta_0, \Delta_1, \dots, \Delta_{k-1}\right).\end{split}$ See Section 4.3.1 in [CK99] and references therein for further details, or [BN08] for a purely combinatorial approach. REFERENCES: [BN08] (1, 2) Victor V. Batyrev and Benjamin Nill. Combinatorial aspects of mirror symmetry. 
In Integer points in polyhedra — geometry, number theory, representation theory, algebra, optimization, statistics, volume 452 of Contemp. Math., pages 35–66. Amer. Math. Soc., Providence, RI, 2008. arXiv:math/0703456v2 [math.CO]. [CK99] David A. Cox and Sheldon Katz. Mirror symmetry and algebraic geometry, volume 68 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 1999. EXAMPLES: It is very easy to create a nef-partition for the octahedron, since for this polytope any decomposition of vertices is a nef-partition. We create a 3-part nef-partition with the 0-th and 1-st vertices belonging to the 0-th part (recall that numeration in Sage starts with 0), the 2-nd and 5-th vertices belonging to the 1-st part, and 3-rd and 4-th vertices belonging to the 2-nd part: sage: o = lattice_polytope.cross_polytope(3) sage: np = NefPartition([0,0,1,2,2,1], o) sage: np Nef-partition {0, 1} U {2, 5} U {3, 4} The octahedron plays the role of $$\Delta^\circ$$ in the above description: sage: np.Delta_polar() is o True The dual nef-partition (corresponding to the “mirror complete intersection”) gives decomposition of the vertex set of $$\nabla^\circ$$: sage: np.dual() Nef-partition {4, 5, 6} U {1, 3} U {0, 2, 7} sage: np.nabla_polar().vertices_pc() N( 1, 1, 0), N( 0, 0, 1), N( 0, 1, 0), N( 0, 0, -1), N(-1, -1, 0), N( 0, -1, 0), N(-1, 0, 0), N( 1, 0, 0) in 3-d lattice N Of course, $$\nabla^\circ$$ is $$\Delta^\circ$$ from the point of view of the dual nef-partition: sage: np.dual().Delta_polar() is np.nabla_polar() True sage: np.Delta(1).vertices_pc() N(0, 0, 1), N(0, 0, -1) in 3-d lattice N sage: np.dual().nabla(1).vertices_pc() N(0, 0, 1), N(0, 0, -1) in 3-d lattice N Instead of constructing nef-partitions directly, you can request all 2-part nef-partitions of a given reflexive polytope (they will be computed using nef.x program from PALP): sage: o.nef_partitions() [ Nef-partition {0, 1, 3} U {2, 4, 5}, Nef-partition {0, 1, 3, 4} U {2, 5} (direct product), Nef-partition {0, 1, 2} U {3, 4, 5}, Nef-partition {0, 1, 2, 3} U {4, 5}, Nef-partition {0, 1, 2, 3, 4} U {5} (projection) ] Delta(i=None) Return the polytope $$\Delta$$ or $$\Delta_i$$ corresponding to self. INPUT: • i – an integer. If not given, $$\Delta$$ will be returned. OUTPUT: See nef-partition class documentation for definitions and notation. EXAMPLES: sage: o = lattice_polytope.cross_polytope(3) sage: np = o.nef_partitions()[0] sage: np Nef-partition {0, 1, 3} U {2, 4, 5} sage: np.Delta().polar() is o True sage: np.Delta().vertices_pc() N(-1, -1, 1), N( 1, -1, 1), N(-1, 1, 1), N( 1, 1, 1), N(-1, -1, -1), N( 1, -1, -1), N(-1, 1, -1), N( 1, 1, -1) in 3-d lattice N sage: np.Delta(0).vertices_pc() N( 1, -1, 0), N( 1, 0, 0), N(-1, -1, 0), N(-1, 0, 0) in 3-d lattice N Delta_polar() Return the polytope $$\Delta^\circ$$ corresponding to self. OUTPUT: See nef-partition class documentation for definitions and notation. EXAMPLE: sage: o = lattice_polytope.cross_polytope(3) sage: np = o.nef_partitions()[0] sage: np Nef-partition {0, 1, 3} U {2, 4, 5} sage: np.Delta_polar() is o True Deltas() Return the polytopes $$\Delta_i$$ corresponding to self. OUTPUT: See nef-partition class documentation for definitions and notation. 
EXAMPLES: sage: o = lattice_polytope.cross_polytope(3) sage: np = o.nef_partitions()[0] sage: np Nef-partition {0, 1, 3} U {2, 4, 5} sage: np.Delta().vertices_pc() N(-1, -1, 1), N( 1, -1, 1), N(-1, 1, 1), N( 1, 1, 1), N(-1, -1, -1), N( 1, -1, -1), N(-1, 1, -1), N( 1, 1, -1) in 3-d lattice N sage: [Delta_i.vertices_pc() for Delta_i in np.Deltas()] [N( 1, -1, 0), N( 1, 0, 0), N(-1, -1, 0), N(-1, 0, 0) in 3-d lattice N, N(0, 1, 1), N(0, 0, 1), N(0, 0, -1), N(0, 1, -1) in 3-d lattice N] sage: np.nabla_polar().vertices_pc() N( 1, -1, 0), N( 0, 1, 1), N( 1, 0, 0), N( 0, 0, 1), N( 0, 0, -1), N(-1, -1, 0), N( 0, 1, -1), N(-1, 0, 0) in 3-d lattice N dual() Return the dual nef-partition. OUTPUT: See the class documentation for the definition. ALGORITHM: See Proposition 3.19 in [BN08]. EXAMPLES: sage: o = lattice_polytope.cross_polytope(3) sage: np = o.nef_partitions()[0] sage: np Nef-partition {0, 1, 3} U {2, 4, 5} sage: np.dual() Nef-partition {0, 2, 5, 7} U {1, 3, 4, 6} sage: np.dual().Delta() is np.nabla() True sage: np.dual().nabla(0) is np.Delta(0) True hodge_numbers() Return Hodge numbers corresponding to self. OUTPUT: • a tuple of integers (produced by nef.x program from PALP). EXAMPLES: Currently, you need to request Hodge numbers when you compute nef-partitions: sage: p = lattice_polytope.cross_polytope(5) sage: np = p.nef_partitions()[0] # long time (4s on sage.math, 2011) sage: np.hodge_numbers() # long time Traceback (most recent call last): ... NotImplementedError: use nef_partitions(hodge_numbers=True)! sage: np = p.nef_partitions(hodge_numbers=True)[0] # long time (13s on sage.math, 2011) sage: np.hodge_numbers() # long time (19, 19) nabla(i=None) Return the polytope $$\nabla$$ or $$\nabla_i$$ corresponding to self. INPUT: • i – an integer. If not given, $$\nabla$$ will be returned. OUTPUT: See nef-partition class documentation for definitions and notation. EXAMPLES: sage: o = lattice_polytope.cross_polytope(3) sage: np = o.nef_partitions()[0] sage: np Nef-partition {0, 1, 3} U {2, 4, 5} sage: np.Delta_polar().vertices_pc() M( 1, 0, 0), M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1) in 3-d lattice M sage: np.nabla(0).vertices_pc() M( 1, 0, 0), M( 0, 1, 0), M(-1, 0, 0) in 3-d lattice M sage: np.nabla().vertices_pc() M( 1, 0, 1), M( 1, -1, 0), M( 1, 0, -1), M( 0, 1, 1), M( 0, 1, -1), M(-1, 0, 1), M(-1, -1, 0), M(-1, 0, -1) in 3-d lattice M nabla_polar() Return the polytope $$\nabla^\circ$$ corresponding to self. OUTPUT: See nef-partition class documentation for definitions and notation. EXAMPLES: sage: o = lattice_polytope.cross_polytope(3) sage: np = o.nef_partitions()[0] sage: np Nef-partition {0, 1, 3} U {2, 4, 5} sage: np.nabla_polar().vertices_pc() N( 1, -1, 0), N( 0, 1, 1), N( 1, 0, 0), N( 0, 0, 1), N( 0, 0, -1), N(-1, -1, 0), N( 0, 1, -1), N(-1, 0, 0) in 3-d lattice N sage: np.nabla_polar() is np.dual().Delta_polar() True nablas() Return the polytopes $$\nabla_i$$ corresponding to self. OUTPUT: See nef-partition class documentation for definitions and notation. EXAMPLES: sage: o = lattice_polytope.cross_polytope(3) sage: np = o.nef_partitions()[0] sage: np Nef-partition {0, 1, 3} U {2, 4, 5} sage: np.Delta_polar().vertices_pc() M( 1, 0, 0), M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1) in 3-d lattice M sage: [nabla_i.vertices_pc() for nabla_i in np.nablas()] [M( 1, 0, 0), M( 0, 1, 0), M(-1, 0, 0) in 3-d lattice M, M(0, 0, 1), M(0, -1, 0), M(0, 0, -1) in 3-d lattice M] nparts() Return the number of parts in self. OUTPUT: • an integer. 
EXAMPLES:
sage: o = lattice_polytope.cross_polytope(3)
sage: np = o.nef_partitions()[0]
sage: np
Nef-partition {0, 1, 3} U {2, 4, 5}
sage: np.nparts()
2
part(i)
Return the i-th part of self.
INPUT:
• i – an integer.
OUTPUT:
• a tuple of integers, indices of vertices of $$\Delta^\circ$$ belonging to $$V_i$$.
See nef-partition class documentation for definitions and notation.
EXAMPLES:
sage: o = lattice_polytope.cross_polytope(3)
sage: np = o.nef_partitions()[0]
sage: np
Nef-partition {0, 1, 3} U {2, 4, 5}
sage: np.part(0)
(0, 1, 3)
part_of(i)
Return the index of the part containing the i-th vertex.
INPUT:
• i – an integer.
OUTPUT:
• an integer $$j$$ such that the i-th vertex of $$\Delta^\circ$$ belongs to $$V_j$$.
See nef-partition class documentation for definitions and notation.
EXAMPLES:
sage: o = lattice_polytope.cross_polytope(3)
sage: np = o.nef_partitions()[0]
sage: np
Nef-partition {0, 1, 3} U {2, 4, 5}
sage: np.part_of(3)
0
sage: np.part_of(2)
1
part_of_point(i)
Return the index of the part containing the i-th point.
INPUT:
• i – an integer.
OUTPUT:
• an integer $$j$$ such that the i-th point of $$\Delta^\circ$$ belongs to $$\nabla_j$$.
Note
Since a nef-partition induces a partition on the set of boundary lattice points of $$\Delta^\circ$$, the value of $$j$$ is well-defined for all $$i$$ but the one that corresponds to the origin, in which case this method will raise a ValueError exception. (The origin always belongs to all $$\nabla_j$$.)
See nef-partition class documentation for definitions and notation.
EXAMPLES: We consider a relatively complicated reflexive polytope #2252 (easily accessible in Sage as ReflexivePolytope(3, 2252); we create it here explicitly to avoid loading the whole database):
sage: p = LatticePolytope([(1,0,0), (0,1,0), (0,0,1), (0,1,-1),
... (0,-1,1), (-1,1,0), (0,-1,-1), (-1,-1,0), (-1,-1,2)])
sage: np = p.nef_partitions()[0]
sage: np
Nef-partition {1, 2, 5, 7, 8} U {0, 3, 4, 6}
sage: p.nvertices()
9
sage: p.npoints()
15
We see that the polytope has 6 more points in addition to vertices. One of them is the origin:
sage: p.origin()
14
sage: np.part_of_point(14)
Traceback (most recent call last):
...
ValueError: the origin belongs to all parts!
But the remaining 5 are partitioned by np:
sage: [n for n in range(p.npoints())
... if p.origin() != n and np.part_of_point(n) == 0]
[1, 2, 5, 7, 8, 9, 11, 13]
sage: [n for n in range(p.npoints())
... if p.origin() != n and np.part_of_point(n) == 1]
[0, 3, 4, 6, 10, 12]
parts()
Return all parts of self.
OUTPUT:
• a tuple of tuples of integers. The $$i$$-th tuple contains indices of vertices of $$\Delta^\circ$$ belonging to $$V_i$$.
See nef-partition class documentation for definitions and notation.
EXAMPLES:
sage: o = lattice_polytope.cross_polytope(3)
sage: np = o.nef_partitions()[0]
sage: np
Nef-partition {0, 1, 3} U {2, 4, 5}
sage: np.parts()
((0, 1, 3), (2, 4, 5))
sage.geometry.lattice_polytope.ReflexivePolytope(dim, n)
Return the n-th reflexive polytope from the database of 2- or 3-dimensional reflexive polytopes.
Note
1. Numeration starts with zero: $$0 \leq n \leq 15$$ for $${\rm dim} = 2$$ and $$0 \leq n \leq 4318$$ for $${\rm dim} = 3$$.
2. During the first call, all reflexive polytopes of requested dimension are loaded and cached for future use, so the first call for 3-dimensional polytopes can take several seconds, but all consecutive calls are fast.
3. Equivalent to ReflexivePolytopes(dim)[n] but checks bounds first.
EXAMPLES: The 3rd 2-dimensional polytope is “the diamond:”
sage: ReflexivePolytope(2, 3)
2-d reflexive polytope #3 in 2-d lattice M
sage: lattice_polytope.ReflexivePolytope(2,3).vertices_pc()
M( 1, 0), M( 0, 1), M( 0, -1), M(-1, 0) in 2-d lattice M
There are 16 reflexive polygons and numeration starts with 0:
sage: ReflexivePolytope(2,16)
Traceback (most recent call last):
...
ValueError: there are only 16 reflexive polygons!
It is not possible to load a 4-dimensional polytope in this way:
sage: ReflexivePolytope(4,16)
Traceback (most recent call last):
...
NotImplementedError: only 2- and 3-dimensional reflexive polytopes are available!
sage.geometry.lattice_polytope.ReflexivePolytopes(dim)
Return the sequence of all 2- or 3-dimensional reflexive polytopes.
Note
During the first call the database is loaded and cached for future use, so repetitive calls will return the same object in memory.
Parameters: dim (2 or 3) – dimension of required reflexive polytopes
Returns: a list of lattice polytopes
EXAMPLES: There are 16 reflexive polygons:
sage: len(ReflexivePolytopes(2))
16
It is not possible to load 4-dimensional polytopes in this way:
sage: ReflexivePolytopes(4)
Traceback (most recent call last):
...
NotImplementedError: only 2- and 3-dimensional reflexive polytopes are available!
class sage.geometry.lattice_polytope.SetOfAllLatticePolytopesClass
Base class for all parents.
Parents are the Sage/mathematical analogues of container objects in computer science.
INPUT:
• base – An algebraic structure considered to be the “base” of this parent (e.g. the base field for a vector space).
• category – a category or list/tuple of categories. The category in which this parent lies (or list or tuple thereof). Since categories support more general super-categories, this should be the most specific category possible. If category is a list or tuple, a JoinCategory is created out of them. If category is not specified, the category will be guessed (see CategoryObject), but won't be used to inherit parent's or element's code from this category.
• element_constructor – A class or function that creates elements of this Parent given appropriate input (can also be filled in later with _populate_coercion_lists_())
• gens – Generators for this object (can also be filled in later with _populate_generators_())
• names – Names of generators.
• normalize – Whether to standardize the names (remove punctuation, etc.)
• facade – a parent, or tuple thereof, or True
Internal invariants:
• self._element_init_pass_parent == guess_pass_parent(self, self._element_constructor) Ensures that __call__() passes down the parent properly to _element_constructor(). See trac ticket #5979.
Todo
Eventually, category should be Sets by default.
TESTS: We check that the facade option is compatible with specifying categories as a tuple:
sage: class MyClass(Parent): pass
sage: P.category()
Join of Category of monoids and Category of commutative additive monoids and Category of facade sets
__call__(x)
TESTS:
sage: o = lattice_polytope.cross_polytope(3)
sage: lattice_polytope.SetOfAllLatticePolytopesClass().__call__(o)
3-d reflexive polytope in 3-d lattice M
_populate_coercion_lists_(coerce_list=[], action_list=[], convert_list=[], embedding=None, convert_method_name=None, element_constructor=None, init_no_parent=None, unpickling=False)
This function allows one to specify coercions, actions, conversions and embeddings involving this parent.
IT SHOULD ONLY BE CALLED DURING THE __INIT__ method, often at the end.
INPUT: • coerce_list – a list of coercion Morphisms to self and parents with canonical coercions to self • action_list – a list of actions on and by self • convert_list – a list of conversion Maps to self and parents with conversions to self • embedding – a single Morphism from self • convert_method_name – a name to look for that other elements can implement to create elements of self (e.g. _integer_) • element_constructor – A callable object used by the __call__ method to construct new elements. Typically the element class or a bound method (defaults to self._element_constructor_). • init_no_parent – if True omit passing self in as the first argument of element_constructor for conversion. This is useful if parents are unique, or element_constructor is a bound method (this latter case can be detected automatically). __mul__(x) This is a multiplication method that more or less directly calls another attribute _mul_ (single underscore). This is because __mul__ can not be implemented via inheritance from the parent methods of the category, but _mul_ can be inherited. This is, e.g., used when creating twosided ideals of matrix algebras. See trac ticket #7797. EXAMPLE: sage: MS = MatrixSpace(QQ,2,2) This matrix space is in fact an algebra, and in particular it is a ring, from the point of view of categories: sage: MS.category() Category of algebras over quotient fields sage: MS in Rings() True However, its class does not inherit from the base class Ring: sage: isinstance(MS,Ring) False Its _mul_ method is inherited from the category, and can be used to create a left or right ideal: sage: MS._mul_.__module__ 'sage.categories.rings' sage: MS*MS.1 # indirect doctest Left Ideal ( [0 1] [0 0] ) of Full MatrixSpace of 2 by 2 dense matrices over Rational Field sage: MS*[MS.1,2] Left Ideal ( [0 1] [0 0], [2 0] [0 2] ) of Full MatrixSpace of 2 by 2 dense matrices over Rational Field sage: MS.1*MS Right Ideal ( [0 1] [0 0] ) of Full MatrixSpace of 2 by 2 dense matrices over Rational Field sage: [MS.1,2]*MS Right Ideal ( [0 1] [0 0], [2 0] [0 2] ) of Full MatrixSpace of 2 by 2 dense matrices over Rational Field __contains__(x) True if there is an element of self that is equal to x under ==, or if x is already an element of self. Also, True in other cases involving the Symbolic Ring, which is handled specially. For many structures we test this by using __call__() and then testing equality between x and the result. The Symbolic Ring is treated differently because it is ultra-permissive about letting other rings coerce in, but ultra-strict about doing comparisons. EXAMPLES: sage: 2 in Integers(7) True sage: 2 in ZZ True sage: Integers(7)(3) in ZZ True sage: 3/1 in ZZ True sage: 5 in QQ True sage: I in RR False sage: SR(2) in ZZ True sage: RIF(1, 2) in RIF True sage: pi in RIF # there is no element of RIF equal to pi False sage: sqrt(2) in CC True sage: pi in RR True sage: pi in CC True sage: pi in RDF True sage: pi in CDF True TESTS: Check that trac ticket #13824 is fixed: sage: 4/3 in GF(3) False sage: 15/50 in GF(25, 'a') False sage: 7/4 in Integers(4) False sage: 15/36 in Integers(6) False _coerce_map_from_(S) Override this method to specify coercions beyond those specified in coerce_list. If no such coercion exists, return None or False. Otherwise, it may return either an actual Map to use for the coercion, a callable (in which case it will be wrapped in a Map), or True (in which case a generic map will be provided). 
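As an illustration, a custom parent might override this hook roughly as follows. This is only a schematic sketch; MyParent is hypothetical and not part of this module:
sage: class MyParent(Parent):
....:     def _coerce_map_from_(self, S):
....:         if S is ZZ:
....:             return True   # request a generic coercion map from ZZ
....:         return None       # no coercion from any other parent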
_convert_map_from_(S)
Override this method to provide additional conversions beyond those given in convert_list.
This function is called after coercions are attempted. If there is a coercion morphism in the opposite direction, one should consider adding a section method to that.
This MUST return a Map from S to self, or None. If None is returned then a generic map will be provided.
_get_action_(S, op, self_on_left)
Override this method to provide an action of self on S or S on self beyond what was specified in action_list.
This must return an action which accepts an element of self and an element of S (in the order specified by self_on_left).
_an_element_()
Returns an element of self. Want it in sufficient generality that poorly-written functions won't work when they're not supposed to. This is cached, so it doesn't have to be super fast.
EXAMPLES:
sage: QQ._an_element_()
1/2
sage: ZZ['x,y,z']._an_element_()
x
TESTS: Since Parent comes before the parent classes provided by categories in the hierarchy of classes, we make sure that this default implementation of _an_element_() does not override some provided by the categories. Eventually, this default implementation should be moved into the categories to avoid this workaround:
sage: S = FiniteEnumeratedSet([1,2,3])
sage: S.category()
Category of facade finite enumerated sets
sage: super(Parent, S)._an_element_
Cached version of <function _an_element_from_iterator at ...>
sage: S._an_element_()
1
sage: S = FiniteEnumeratedSet([])
sage: S._an_element_()
Traceback (most recent call last):
...
EmptySetError
_repr_option(key)
INPUT:
• key – string. A key for the different kinds of metadata information that can be inquired about.
Valid key arguments are:
• 'ascii_art': The _repr_() output is multi-line ascii art and each line must be printed starting at the same column, or the meaning is lost.
• 'element_ascii_art': same but for the output of the elements. Used in sage.misc.displayhook.
• 'element_is_atomic': the elements print atomically, that is, parentheses are not required when printing out any of $$x - y$$, $$x + y$$, $$x^y$$ and $$x/y$$.
OUTPUT: Boolean.
EXAMPLES:
sage: ZZ._repr_option('ascii_art')
False
sage: MatrixSpace(ZZ, 2)._repr_option('element_ascii_art')
True
_init_category_(category)
Initialize the category framework.
Most parents initialize their category upon construction, and this is the recommended behavior. For example, this happens when the constructor calls Parent.__init__() directly or indirectly. However, some parents defer this for performance reasons. For example, sage.matrix.matrix_space.MatrixSpace does not.
EXAMPLES:
sage: P = Parent()
sage: P.category()
Category of sets
sage: class MyParent(Parent):
....:     def __init__(self):
....:         self._init_category_(Groups())
sage: MyParent().category()
Category of groups
sage.geometry.lattice_polytope.all_cached_data(polytopes)
Compute all cached data for all given polytopes and their polars.
This function does it MUCH faster than the member functions of LatticePolytope during the first run. So it is recommended to use this function if you work with big sets of data.
None of the polytopes in the given sequence should be constructed as the polar polytope to another one.
INPUT: a sequence of lattice polytopes.
EXAMPLES: This function has no output; it is just a fast way to work with long sequences of polytopes.
Of course, you can use short sequences as well:
sage: o = lattice_polytope.cross_polytope(3)
sage: lattice_polytope.all_cached_data([o])
sage: o.faces()
[
[[0], [1], [2], [3], [4], [5]],
[[1, 5], [0, 5], [0, 1], [3, 5], [1, 3], [4, 5], [0, 4], [3, 4], [1, 2], [0, 2], [2, 3], [2, 4]],
[[0, 1, 5], [1, 3, 5], [0, 4, 5], [3, 4, 5], [0, 1, 2], [1, 2, 3], [0, 2, 4], [2, 3, 4]]
]
However, you cannot use it for polytopes that are constructed as polar polytopes of others:
sage: lattice_polytope.all_cached_data([o.polar()])
Traceback (most recent call last):
...
ValueError: Cannot read face structure for a polytope constructed as polar, use _compute_faces!
sage.geometry.lattice_polytope.all_faces(polytopes)
Compute faces for all given polytopes.
This function does it MUCH faster than the member functions of LatticePolytope during the first run. So it is recommended to use this function if you work with big sets of data.
INPUT: a sequence of lattice polytopes.
EXAMPLES: This function has no output; it is just a fast way to work with long sequences of polytopes. Of course, you can use short sequences as well:
sage: o = lattice_polytope.cross_polytope(3)
sage: lattice_polytope.all_faces([o])
sage: o.faces()
[
[[0], [1], [2], [3], [4], [5]],
[[1, 5], [0, 5], [0, 1], [3, 5], [1, 3], [4, 5], [0, 4], [3, 4], [1, 2], [0, 2], [2, 3], [2, 4]],
[[0, 1, 5], [1, 3, 5], [0, 4, 5], [3, 4, 5], [0, 1, 2], [1, 2, 3], [0, 2, 4], [2, 3, 4]]
]
However, you cannot use it for polytopes that are constructed as polar polytopes of others:
sage: lattice_polytope.all_faces([o.polar()])
Traceback (most recent call last):
...
ValueError: Cannot read face structure for a polytope constructed as polar, use _compute_faces!
sage.geometry.lattice_polytope.all_facet_equations(polytopes)
Compute polar polytopes for all reflexive polytopes and equations of facets for all non-reflexive polytopes. all_facet_equations and all_polars are synonyms.
This function does it MUCH faster than the member functions of LatticePolytope during the first run. So it is recommended to use this function if you work with big sets of data.
INPUT: a sequence of lattice polytopes.
EXAMPLES: This function has no output; it is just a fast way to work with long sequences of polytopes. Of course, you can use short sequences as well:
sage: o = lattice_polytope.cross_polytope(3)
sage: lattice_polytope.all_polars([o])
sage: o.polar()
3-d reflexive polytope in 3-d lattice N
sage.geometry.lattice_polytope.all_nef_partitions(polytopes, keep_symmetric=False)
Compute nef-partitions for all given polytopes.
This function does it MUCH faster than the member functions of LatticePolytope during the first run. So it is recommended to use this function if you work with big sets of data.
Note: the member function is_reflexive will be called separately for each polytope. It is strictly recommended to call all_polars on the sequence of polytopes before using this function.
INPUT: a sequence of lattice polytopes.
EXAMPLES: This function has no output; it is just a fast way to work with long sequences of polytopes.
Of course, you can use short sequences as well:
sage: o = lattice_polytope.cross_polytope(3)
sage: lattice_polytope.all_nef_partitions([o])
sage: o.nef_partitions()
[
Nef-partition {0, 1, 3} U {2, 4, 5},
Nef-partition {0, 1, 3, 4} U {2, 5} (direct product),
Nef-partition {0, 1, 2} U {3, 4, 5},
Nef-partition {0, 1, 2, 3} U {4, 5},
Nef-partition {0, 1, 2, 3, 4} U {5} (projection)
]
You cannot use this function for non-reflexive polytopes:
sage: p = LatticePolytope([(1,0,0), (0,1,0), (0,0,2),
... (-1,0,0), (0,-1,0), (0,0,-1)])
sage: lattice_polytope.all_nef_partitions([o, p])
Traceback (most recent call last):
...
ValueError: nef-partitions can be computed for reflexive polytopes only
sage.geometry.lattice_polytope.all_points(polytopes)
Compute lattice points for all given polytopes.
This function does it MUCH faster than the member functions of LatticePolytope during the first run. So it is recommended to use this function if you work with big sets of data.
INPUT: a sequence of lattice polytopes.
EXAMPLES: This function has no output; it is just a fast way to work with long sequences of polytopes. Of course, you can use short sequences as well:
sage: o = lattice_polytope.cross_polytope(3)
sage: lattice_polytope.all_points([o])
sage: o.points_pc()
M( 1, 0, 0), M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1), M( 0, 0, 0) in 3-d lattice M
sage.geometry.lattice_polytope.all_polars(polytopes)
Compute polar polytopes for all reflexive polytopes and equations of facets for all non-reflexive polytopes. all_facet_equations and all_polars are synonyms.
This function does it MUCH faster than the member functions of LatticePolytope during the first run. So it is recommended to use this function if you work with big sets of data.
INPUT: a sequence of lattice polytopes.
EXAMPLES: This function has no output; it is just a fast way to work with long sequences of polytopes. Of course, you can use short sequences as well:
sage: o = lattice_polytope.cross_polytope(3)
sage: lattice_polytope.all_polars([o])
sage: o.polar()
3-d reflexive polytope in 3-d lattice N
sage.geometry.lattice_polytope.always_use_files(new_state=None)
Set or get the way of using PALP for lattice polytopes.
INPUT:
• new_state - (default: None) if specified, must be True or False.
OUTPUT: The current state of using PALP. If True, files are used for all calls to PALP, otherwise pipes are used for single polytopes. While the latter may have some advantage in speed, the first method is more reliable when working with large outputs. The initial state is True.
EXAMPLES:
sage: lattice_polytope.always_use_files()
doctest:...: DeprecationWarning: using PALP via pipes is deprecated and will be removed, if you have a use case for this, See http://trac.sagemath.org/15240 for details.
True
sage: p = LatticePolytope(([1], [20]))
sage: p.npoints()
20
Now let's use pipes instead of files:
sage: lattice_polytope.always_use_files(False)
False
sage: p = LatticePolytope(([1], [20]))
sage: p.npoints()
20
sage.geometry.lattice_polytope.convex_hull(points)
Compute the convex hull of the given points.
Note
The points might not span the space. Also, it fails for large numbers of vertices in dimension 4 or greater.
INPUT:
• points - a list that can be converted into vectors of the same dimension over ZZ.
OUTPUT: list of vertices of the convex hull of the given points (as vectors).
EXAMPLES: Let's compute the convex hull of several points on a line in the plane:
sage: lattice_polytope.convex_hull([[1,2],[3,4],[5,6],[7,8]])
[(1, 2), (7, 8)]
sage.geometry.lattice_polytope.cross_polytope(dim)
Return a cross-polytope of the given dimension.
INPUT:
• dim – an integer.
OUTPUT:
• a lattice polytope.
EXAMPLES:
sage: o = lattice_polytope.cross_polytope(3)
sage: o
3-d reflexive polytope in 3-d lattice M
sage: o.vertices_pc()
M( 1, 0, 0), M( 0, 1, 0), M( 0, 0, 1), M(-1, 0, 0), M( 0, -1, 0), M( 0, 0, -1) in 3-d lattice M
sage.geometry.lattice_polytope.filter_polytopes(f, polytopes, subseq=None, print_numbers=False)
Use the function f to filter polytopes in a list.
INPUT:
• f - filtering function; it must take one argument, a lattice polytope, and return True or False.
• polytopes - list of polytopes.
• subseq - (default: None) list of integers. If it is specified, only polytopes with these numbers will be considered.
• print_numbers - (default: False) if True, the number of the current polytope will be printed on the screen before calling f.
OUTPUT: a list of integers – numbers of polytopes in the given list that satisfy the given condition (i.e. function f returns True) and are elements of subseq, if it is given.
EXAMPLES: Consider a sequence of cross-polytopes:
sage: polytopes = Sequence([lattice_polytope.cross_polytope(n)
....:     for n in range(2, 7)], cr=True)
sage: polytopes
[
2-d reflexive polytope #3 in 2-d lattice M,
3-d reflexive polytope in 3-d lattice M,
4-d reflexive polytope in 4-d lattice M,
5-d reflexive polytope in 5-d lattice M,
6-d reflexive polytope in 6-d lattice M
]
This filters polytopes of dimension at least 4:
sage: lattice_polytope.filter_polytopes(lambda p: p.dim() >= 4, polytopes)
doctest:...: DeprecationWarning: filter_polytopes is deprecated, See http://trac.sagemath.org/15240 for details.
[2, 3, 4]
For long tests you can see the current progress:
sage: lattice_polytope.filter_polytopes(lambda p: p.nvertices() >= 10, polytopes, print_numbers=True)
0 1 2 3 4
[3, 4]
Here we consider only some of the polytopes:
sage: lattice_polytope.filter_polytopes(lambda p: p.nvertices() >= 10, polytopes, [2, 3, 4], print_numbers=True)
2 3 4
[3, 4]
sage.geometry.lattice_polytope.integral_length(v)
Compute the integral length of a given rational vector.
INPUT:
• v - any object which can be converted to a list of rationals
OUTPUT: Rational number r such that v = r u, where u is the primitive integral vector in the direction of v.
EXAMPLES:
sage: lattice_polytope.integral_length([1, 2, 4])
1
sage: lattice_polytope.integral_length([2, 2, 4])
2
sage: lattice_polytope.integral_length([2/3, 2, 4])
2/3
sage.geometry.lattice_polytope.is_LatticePolytope(x)
Check if x is a lattice polytope.
INPUT:
• x – anything.
OUTPUT:
• True if x is a lattice polytope and False otherwise.
EXAMPLES:
sage: from sage.geometry.lattice_polytope import is_LatticePolytope
sage: is_LatticePolytope(1)
False
sage: p = LatticePolytope([(1,0), (0,1), (-1,-1)])
sage: p
2-d reflexive polytope #0 in 2-d lattice M
sage: is_LatticePolytope(p)
True
sage.geometry.lattice_polytope.is_NefPartition(x)
Check if x is a nef-partition.
INPUT:
• x – anything.
OUTPUT:
• True if x is a nef-partition and False otherwise.
EXAMPLES:
sage: from sage.geometry.lattice_polytope import is_NefPartition
sage: is_NefPartition(1)
False
sage: o = lattice_polytope.cross_polytope(3)
sage: np = o.nef_partitions()[0]
sage: np
Nef-partition {0, 1, 3} U {2, 4, 5}
sage: is_NefPartition(np)
True
sage.geometry.lattice_polytope.minkowski_sum(points1, points2)
Compute the Minkowski sum of two convex polytopes.
Note
Polytopes might not be of maximal dimension.
INPUT:
• points1, points2 - lists of objects that can be converted into vectors of the same dimension, treated as vertices of two polytopes.
OUTPUT: list of vertices of the Minkowski sum, given as vectors.
EXAMPLES: Let's compute the Minkowski sum of two line segments:
sage: lattice_polytope.minkowski_sum([[1,0],[-1,0]],[[0,1],[0,-1]])
[(1, 1), (1, -1), (-1, 1), (-1, -1)]
sage.geometry.lattice_polytope.positive_integer_relations(points)
Return relations between given points.
INPUT:
• points - lattice points given as columns of a matrix
OUTPUT: matrix of relations between given points with non-negative integer coefficients
EXAMPLES: This is a 3-dimensional reflexive polytope:
sage: p = LatticePolytope([(1,0,0), (0,1,0),
... (-1,-1,0), (0,0,1), (-1,0,-1)])
sage: p.points_pc()
M( 1, 0, 0), M( 0, 1, 0), M(-1, -1, 0), M( 0, 0, 1), M(-1, 0, -1), M( 0, 0, 0) in 3-d lattice M
We can compute linear relations between its points in the following way:
sage: p.points_pc().matrix().kernel().echelonized_basis_matrix()
[ 1 0 0 1 1 0]
[ 0 1 1 -1 -1 0]
[ 0 0 0 0 0 1]
However, the above relations may contain negative and rational numbers. This function transforms them in such a way that all coefficients are non-negative integers:
sage: lattice_polytope.positive_integer_relations(p.points_pc().column_matrix())
[1 0 0 1 1 0]
[1 1 1 0 0 0]
[0 0 0 0 0 1]
sage: lattice_polytope.positive_integer_relations(ReflexivePolytope(2,1).vertices_pc().column_matrix())
[2 1 1]
sage.geometry.lattice_polytope.projective_space(dim)
Return a simplex of the given dimension, corresponding to $$P_{dim}$$.
EXAMPLES: We construct 3- and 4-dimensional simplexes:
sage: p = lattice_polytope.projective_space(3)
doctest:...: DeprecationWarning: this function is deprecated, perhaps toric_varieties.P(n) is what you are looking for? See http://trac.sagemath.org/15240 for details.
sage: p
3-d reflexive polytope in 3-d lattice M
sage: p.vertices_pc()
M( 1, 0, 0), M( 0, 1, 0), M( 0, 0, 1), M(-1, -1, -1) in 3-d lattice M
sage: p = lattice_polytope.projective_space(4)
sage: p
4-d reflexive polytope in 4-d lattice M
sage: p.vertices_pc()
M( 1, 0, 0, 0), M( 0, 1, 0, 0), M( 0, 0, 1, 0), M( 0, 0, 0, 1), M(-1, -1, -1, -1) in 4-d lattice M
sage.geometry.lattice_polytope.read_all_polytopes(file_name)
Read all polytopes from the given file.
INPUT:
• file_name – a string with the name of a file with VERTICES of polytopes.
OUTPUT:
• a sequence of polytopes.
EXAMPLES: We use poly.x to compute two polar polytopes and read them:
sage: d = lattice_polytope.cross_polytope(2)
sage: o = lattice_polytope.cross_polytope(3)
sage: result_name = lattice_polytope._palp("poly.x -fe", [d, o])
sage: with open(result_name) as f:
....:     print f.read()
4 2 Vertices of P-dual <-> Equations of P
-1 1
1 1
-1 -1
1 -1
8 3 Vertices of P-dual <-> Equations of P
-1 -1 1
1 -1 1
-1 1 1
1 1 1
-1 -1 -1
1 -1 -1
-1 1 -1
1 1 -1
sage: lattice_polytope.read_all_polytopes(result_name)
[
2-d reflexive polytope #14 in 2-d lattice M,
3-d reflexive polytope in 3-d lattice M
]
sage: os.remove(result_name)
sage.geometry.lattice_polytope.read_palp_matrix(data, permutation=False)
Read and return an integer matrix from a string or an opened file.
First input line must start with two integers m and n, the number of rows and columns of the matrix. The rest of the first line is ignored. The next m lines must contain n numbers each.
If m > n, returns the transposed matrix. If the string is empty or EOF is reached, returns the empty matrix, constructed by matrix().
INPUT:
• data – Either a string containing the filename or the file itself containing the output by PALP.
• permutation – (default: False) If True, try to retrieve the permutation output by PALP.
This parameter makes sense only when PALP computed the normal form of a lattice polytope.
OUTPUT: A matrix or a tuple of a matrix and a permutation.
EXAMPLES:
sage: lattice_polytope.read_palp_matrix("2 3 comment \n 1 2 3 \n 4 5 6")
[1 2 3]
[4 5 6]
sage: lattice_polytope.read_palp_matrix("3 2 Will be transposed \n 1 2 \n 3 4 \n 5 6")
[1 3 5]
[2 4 6]
sage.geometry.lattice_polytope.sage_matrix_to_maxima(m)
Convert a Sage matrix to the string representation of Maxima.
EXAMPLE:
sage: m = matrix(ZZ,2)
sage: lattice_polytope.sage_matrix_to_maxima(m)
matrix([0,0],[0,0])
sage.geometry.lattice_polytope.set_palp_dimension(d)
Set the dimension for PALP calls to d.
INPUT:
• d – an integer from the list [4,5,6,11] or None.
OUTPUT:
• none.
PALP has many hard-coded limits, which must be specified before compilation; one of them is the dimension. Sage includes several versions with different dimension settings (which may also affect other limits and enable certain features of PALP). You can change the version which will be used by calling this function. Such a change is not done automatically for each polytope based on its dimension, since depending on what you are doing it may be necessary to use dimensions higher than that of the input polytope.
EXAMPLES: By default, it is not possible to create the 7-dimensional simplex with vertices at the basis of the 8-dimensional space:
sage: LatticePolytope(identity_matrix(8))
Traceback (most recent call last):
...
ValueError: Error executing 'poly.x -fv' for the given polytope!
Output:
Please increase POLY_Dmax to at least 7
However, we can work with this polytope by changing the PALP dimension to 11:
sage: lattice_polytope.set_palp_dimension(11)
sage: LatticePolytope(identity_matrix(8))
7-d lattice polytope in 8-d lattice M
Let's go back to default settings:
sage: lattice_polytope.set_palp_dimension(None)
sage.geometry.lattice_polytope.skip_palp_matrix(data, n=1)
Skip matrix data in a file.
INPUT:
• data - opened file with blocks of matrix data in the following format: A block consisting of m+1 lines has the number m as the first element of its first line.
• n - (default: 1) integer, specifies how many blocks should be skipped.
If EOF is reached during the process, a ValueError exception is raised.
EXAMPLE: We create a file with vertices of the square and the cube, but read only the second set:
sage: d = lattice_polytope.cross_polytope(2)
sage: o = lattice_polytope.cross_polytope(3)
sage: result_name = lattice_polytope._palp("poly.x -fe", [d, o])
sage: with open(result_name) as f:
....:     print f.read()
4 2 Vertices of P-dual <-> Equations of P
-1 1
1 1
-1 -1
1 -1
8 3 Vertices of P-dual <-> Equations of P
-1 -1 1
1 -1 1
-1 1 1
1 1 1
-1 -1 -1
1 -1 -1
-1 1 -1
1 1 -1
sage: f = open(result_name)
sage: lattice_polytope.skip_palp_matrix(f)
sage: lattice_polytope.read_palp_matrix(f)
[-1 1 -1 1 -1 1 -1 1]
[-1 -1 1 1 -1 -1 1 1]
[ 1 1 1 1 -1 -1 -1 -1]
sage: f.close()
sage: os.remove(result_name)
sage.geometry.lattice_polytope.write_palp_matrix(m, ofile=None, comment='', format=None)
Write m into ofile in PALP format.
INPUT:
• m – a matrix over integers or a point collection.
• ofile – a file opened for writing (default: stdout)
• comment – a string (default: empty) see output description
• format – a format string used to print matrix entries.
OUTPUT: nothing is returned, output written to ofile has the format
• First line: number_of_rows number_of_columns comment
• Next number_of_rows lines: rows of the matrix.
EXAMPLES:
sage: o = lattice_polytope.cross_polytope(3)
sage: lattice_polytope.write_palp_matrix(o.vertices_pc(), comment="3D Octahedron")
3 6 3D Octahedron
1 0 0 -1 0 0
0 1 0 0 -1 0
0 0 1 0 0 -1
sage: lattice_polytope.write_palp_matrix(o.vertices_pc(), format="%4d")
3 6
1 0 0 -1 0 0
0 1 0 0 -1 0
0 0 1 0 0 -1
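Since write_palp_matrix emits exactly the header-plus-rows layout that read_palp_matrix parses, the two functions form a round trip. A small sketch (the exact spacing of the printed matrix may vary):
sage: s = "3 6 3D Octahedron\n 1 0 0 -1 0 0\n 0 1 0 0 -1 0\n 0 0 1 0 0 -1"
sage: lattice_polytope.read_palp_matrix(s)
[ 1  0  0 -1  0  0]
[ 0  1  0  0 -1  0]
[ 0  0  1  0  0 -1]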
http://sciforums.com/threads/anniversary-ideas.160019/
# Anniversary Ideas

Discussion in 'About the Members' started by Thoreau, Oct 10, 2017.

1. ### Thoreau (Valued Senior Member)

Messages: 3,376

Okay, SF community...I need your help. I am, by far, the most unromantic person on the planet. It's not for lack of trying because I certainly do try. It's just that each time I try my hand at doing/getting something nice for my partner, I usually end up getting something he doesn't like or something that I feel isn't special. Maybe it's the Asperger's that gets in the way of my ability to think of thoughtful gifts. But to be fair, he is kind of picky (not in a materialistic way). So anyway, here's the deal...

Our one-year anniversary is coming up here in a few weeks. We are driving a few hours north and going to rent a cabin in the mountains for a few days. We're splitting the cost, so that's not really a gift to each other. I'm not sure what he has planned for me, but I'm struggling to find ideas of what I can do for him while we're there. I want to do something really special...something memorable and romantic (and fairly cheap because I'm also living on a budget).

Initially, I had booked us a private helicopter ride to fly us over the mountains for a little while (I got an amazingly cheap deal on it). But then, thinking about it, I realized I didn't even know if he had ever been on a helicopter. So, a few days ago, I asked in passing if he had ever ridden on one. He said no and that he'd never want to do that. So, boom, there goes that idea...

Now I don't know what to do. I want to do something sweet that goes beyond just a nice dinner or something typical. But, along with being a fairly unromantic person, I'm a horribly uncreative person as well. So, guys and gals, I need your help. Just throw out ideas, any ideas (that cost less than about $150). There's no wrong answer here..........well, that's not true. But still....I need something to go off of. Bear in mind, it's a small mountain town with not much in it. So, I really have to be creative here.

THINGS HE DOESN'T LIKE:
- Hiking
- Pretty much any outdoor activities in the sun.

OTHER RESTRICTIONS:
- Dietary - We're both vegan.
- Does not cost more than $150.
https://www.math-only-math.com/worksheet-on-simplification-of-the-product-of-a-plus-b-and-a-minus-b.html
# Worksheet on Simplification of (a + b)(a – b)

Practice the questions given in the worksheet on simplification of (a + b)(a – b).

1. Simplify by applying the standard formula.

(i) (5x – 9)(5x + 9)

(ii) (2x + 3y)(2x – 3y)

(iii) (a + b – c)(a – b + c)

(iv) (x + y – 3)(x + y + 3)

(v) (1 + a)(1 – a)(1 + a$$^{2}$$)

[Hint: Given expression = (1 - a$$^{2}$$)(1 + a$$^{2}$$) = 1 - (a$$^{2}$$)$$^{2}$$.]

(vi) (a + $$\frac{2}{a}$$ – 1)(a - $$\frac{2}{a}$$ – 1)

2. If a - $$\frac{1}{a}$$ = 3, find the value of a$$^{2}$$ - $$\frac{1}{a^{2}}$$.

[Hint: (a + $$\frac{1}{a}$$)$$^{2}$$ = (a - $$\frac{1}{a}$$)$$^{2}$$ + 4a ∙ $$\frac{1}{a}$$ = 3$$^{2}$$ + 4 = 13. Therefore, a + $$\frac{1}{a}$$ = ±$$\sqrt{13}$$. Now (a + $$\frac{1}{a}$$)(a - $$\frac{1}{a}$$) = ±$$\sqrt{13}$$ × 3 = ±3$$\sqrt{13}$$]

3. If x - $$\frac{1}{x}$$ = $$\frac{3}{2}$$, find the value of

(i) x + $$\frac{1}{x}$$

(ii) x$$^{2}$$ + $$\frac{1}{x^{2}}$$

(iii) x$$^{2}$$ - $$\frac{1}{x^{2}}$$

(iv) x$$^{4}$$ + $$\frac{1}{x^{4}}$$

(v) x$$^{4}$$ - $$\frac{1}{x^{4}}$$

4. (i) Simplify: (1 – x)(1 + x)(1 + x$$^{2}$$)(1 + x$$^{4}$$).

[Hint: Given expression = (1 - x$$^{2}$$)(1 + x$$^{2}$$)(1 + x$$^{4}$$) = (1 - x$$^{4}$$)(1 + x$$^{4}$$) = 1 - (x$$^{4}$$)$$^{2}$$ = 1 - x$$^{8}$$]

(ii) Express: (x$$^{2}$$ + 5x + 12)(x$$^{2}$$ – 5x + 12) as a difference of two squares.

(iii) If $$\frac{a}{b}$$ = $$\frac{b}{c}$$, prove that (a + b + c)(a – b + c) = a$$^{2}$$ + b$$^{2}$$ + c$$^{2}$$.

[Hint: (a + b + c)(a – b + c) = {(a + c) + b}{(a + c) - b)} = (a + c)$$^{2}$$ - b$$^{2}$$ = a$$^{2}$$ + 2ac + c$$^{2}$$ - b$$^{2}$$ = a$$^{2}$$ + 2b$$^{2}$$ + c$$^{2}$$ - b$$^{2}$$ (Since, $$\frac{a}{b}$$ = $$\frac{b}{c}$$ implies ac = b$$^{2}$$)]

Answers for the worksheet on simplification of (a + b)(a – b) are given below.

1. (i) 25x$$^{2}$$ - 81

(ii) 4x$$^{2}$$ – 9y$$^{2}$$

(iii) a$$^{2}$$ – b$$^{2}$$ – c$$^{2}$$ + 2bc

(iv) x$$^{2}$$ + 2xy + y$$^{2}$$ - 9

(v) 1 – a$$^{4}$$

(vi) a$$^{2}$$ – 2a + 1 - $$\frac{4}{a^{2}}$$

2. ±3$$\sqrt{13}$$

3. (i) ±$$\frac{5}{2}$$ (ii) $$\frac{17}{4}$$ (iii) ±$$\frac{15}{4}$$ (iv) $$\frac{257}{16}$$ (v) ±$$\frac{255}{16}$$

4. (i) 1 - x$$^{8}$$ (ii) (x$$^{2}$$ + 12)$$^{2}$$ – (5x)$$^{2}$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7751948237419128, "perplexity": 2220.6507390799898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662572800.59/warc/CC-MAIN-20220524110236-20220524140236-00350.warc.gz"}
http://math.stackexchange.com/questions/473601/evaluating-int-infty-infty-frac-log21ix21ix2-dx-u
# Evaluating $\int_{-\infty}^{\infty} \frac{\log^{2}(1+ix^{2})}{1+ix^{2}} \ dx$ using contour integration

It was suggested to me to evaluate $\displaystyle \int_{-\infty}^{\infty} \frac{\log^{2}(1+ix^{2})}{1+ix^{2}} \ dx$ by integrating $\displaystyle f(z) = \frac{\log^{2}(1+z^{2})}{1+z^{2}}$ around a semicircle with its diameter along the line $z= e^{\frac{i \pi}{4}}t$. Of course we need to take a detour around the branch point at $z=i$. Choosing the principal branch of $\log (1+z^{2})$, there are cuts on the imaginary axis from $i$ to $i \infty$ and from $-i$ to $-i \infty$. So letting the radius of the semicircle go to $\infty$ and the radius of the indentation around the branch point at $z=i$ go to $0$, I get

$$e^{ \frac{i \pi }{4}} \int_{-\infty}^{\infty} \frac{\log^{2}(1+it^{2})}{1+it^{2}} \ dt + \int^{e^{\frac{i \pi}{2}}}_{i \infty} \frac{(\log |1+z^{2}| + \pi i)^{2}}{1+z^{2}} dz + \int_{e^{\frac{i \pi}{2}}}^{i \infty} \frac{(\log |1+z^{2}| - \pi i)^{2}}{1+z^{2}} dz = 0$$

$$\displaystyle \implies e^{ \frac{i \pi }{4}} \int_{-\infty}^{\infty} \frac{\log^{2}(1+ix^{2})}{1+ix^{2}} \ dx = 4 \pi i \int_{e^{\frac{i \pi}{2}}}^{i \infty} \frac{\log|1+z^{2}|}{1+z^{2}} \ dz = -4 \pi \int_{1}^{\infty} \frac{\log(1-t^{2})}{1-t^{2}} \ dt$$

But $\displaystyle \int_{1}^{\infty} \frac{\log(1-t^{2})}{1-t^{2}} \ dt$ does not converge.

EDIT: The problem is the indentation around the branch point. Its contribution is neither zero nor finite.

- Could you also take the semicircle in the opposite direction? But this way should still produce something meaningful. How did you get the last equalities? –  Evan Aug 22 '13 at 14:52
- What's wrong with integrating along the real axis here? Deform a semicircular contour to avoid the branch point at $z=e^{i \pi/4}$, but that's about as complicated as it needs to be. I think. –  Ron Gordon Aug 22 '13 at 14:55
- Do you know what the answer is? –  Mhenni Benghorbal Aug 22 '13 at 14:58
- I'm a bit confused: sometimes you have log, and sometimes log squared. –  user8268 Aug 22 '13 at 15:54

Differentiate

$$2\int_0^{\infty}(1+i x^2)^s dx=-e^{3i\pi/4}\sqrt{\pi}\,\frac{\Gamma(-1/2-s)}{\Gamma(-s)}$$

twice with respect to $s$ and set $s=-1$.
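This closed form can be sanity-checked numerically (a sketch assuming mpmath is available; the helper name `F` is mine):

```python
import mpmath as mp

def F(s):
    # Right-hand side above: -e^{3*pi*i/4} * sqrt(pi) * Gamma(-1/2 - s) / Gamma(-s)
    return (-mp.expjpi(mp.mpf(3)/4) * mp.sqrt(mp.pi)
            * mp.gamma(-mp.mpf(1)/2 - s) / mp.gamma(-s))

closed = mp.diff(F, -1, 2)   # second derivative with respect to s, at s = -1
direct = 2 * mp.quad(lambda x: mp.log(1 + 1j*x**2)**2 / (1 + 1j*x**2), [0, mp.inf])
print(closed)
print(direct)                # the two printed values should agree
```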
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9047350883483887, "perplexity": 424.4757575656941}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021791079/warc/CC-MAIN-20140305121631-00095-ip-10-183-142-35.ec2.internal.warc.gz"}
https://maths-challenges.blogspot.com/2017/05/seven.html
## 26 May 2017

### Seven

$7$ is a prime number. $7$ is a Mersenne number, since it is prime and $7=2^3-1$. It is also a double Mersenne number, since $3$ is itself a Mersenne number ($3=2^2-1$). $7$ is the number of pieces of the Tangram. $7$ is the number of notes in the diatonic scale. The heptagon is a polygon with $7$ sides. There are $7$ days in a week.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8761376142501831, "perplexity": 394.35024321958474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865595.47/warc/CC-MAIN-20180523102355-20180523122355-00535.warc.gz"}
https://forum.allaboutcircuits.com/threads/lm317-adj-current-limiter-how-do-i-add-an-indication-led.123874/
# LM317 adj. current limiter, how do I add an indication LED?

#### Hamlet

I've built this, but would like to add an LED that lights up when current limiting is active... I took the schematic from the data sheet, found a 150 ohm pot on ebay, and bread-boarded it every which way, using LM317, LM338, LM350, etc. (I tried a 500 ohm, a 200 ohm, a 100 ohm pot, etc., but the 150 ohm worked best.) I soldered it up, and it's been a really nifty addition to my workbench. However, I'd like to add an indicator to let me know when it is actively limiting current. Any ideas?

#### dannyf

The 2nd regulator (LM117 here) generates a constant current of 100 mA. The LM338 maintains a constant voltage across R1 (= R1 * load current) + R2 (= a portion of R2 * 100 mA). When the load current goes up, the LM338 will attempt to lower its output voltage, thus lowering the potential on its Adj pin. How "violently" that kicks in depends on the ratio of R1 and R2 (the portion of R2 that's between the Adj pin and the output).

So one obvious way to indicate activation of the current limit is to measure the voltage across the regulator. For example, a resistor + LED / LEDs between the input pin and the output pin of the regulator, or between the input pin of the regulator and the current output node, would work. You may need to experiment with the number of LEDs needed. The maximum voltage there would be the input voltage minus the negative / ground.
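A quick back-of-the-envelope sizing sketch for that LED string (all component values below are illustrative assumptions, not numbers from this thread):

```python
# Size the series resistor for an LED string placed across the regulator's
# input-to-output drop, which grows when the limiter kicks in.
v_limit_drop = 6.0   # assumed in-to-out voltage when limiting hard, volts
v_led = 2.0          # assumed forward drop per red LED, volts
i_led = 0.010        # assumed target LED current, 10 mA
n_leds = 2           # LEDs in series; keep n_leds * v_led safely below v_limit_drop

r_series = (v_limit_drop - n_leds * v_led) / i_led
print(f"series resistor ~ {r_series:.0f} ohms")   # ~200 ohms with these numbers
```

The LEDs stay dark at the normal (small) dropout voltage and only light once the drop across the regulator exceeds the stacked forward voltages, which is the behavior dannyf describes.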
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8442369699478149, "perplexity": 3513.2020298026223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402127397.84/warc/CC-MAIN-20200930172714-20200930202714-00262.warc.gz"}
http://www.reliawiki.org/index.php/The_Weibull_Distribution
The Weibull Distribution

The Weibull distribution is one of the most widely used lifetime distributions in reliability engineering. It is a versatile distribution that can take on the characteristics of other types of distributions, based on the value of the shape parameter, ${\beta} \,\!$. This chapter provides a brief background on the Weibull distribution, presents and derives most of the applicable equations, and presents examples calculated both manually and by using ReliaSoft's Weibull++ software.

Weibull Probability Density Function

The 3-Parameter Weibull

The 3-parameter Weibull pdf is given by:

$f(t)={ \frac{\beta }{\eta }}\left( {\frac{t-\gamma }{\eta }}\right) ^{\beta -1}e^{-\left( {\frac{t-\gamma }{\eta }}\right) ^{\beta }} \,\!$

where:

$f(t)\geq 0,\text{ }t\geq \gamma \,\!$

$\beta\gt 0\ \,\!$

$\eta \gt 0 \,\!$

$-\infty \lt \gamma \lt +\infty \,\!$

and:

$\eta= \,\!$ scale parameter, or characteristic life

$\beta= \,\!$ shape parameter (or slope)

$\gamma= \,\!$ location parameter (or failure free life)

The 2-Parameter Weibull

The 2-parameter Weibull pdf is obtained by setting $\gamma=0 \,\!$, and is given by:

$f(t)={ \frac{\beta }{\eta }}\left( {\frac{t}{\eta }}\right) ^{\beta -1}e^{-\left( { \frac{t}{\eta }}\right) ^{\beta }} \,\!$

The 1-Parameter Weibull

The 1-parameter Weibull pdf is obtained by again setting $\gamma=0 \,\!$ and assuming that the shape parameter takes a constant, assumed value, $\beta=C \,\!$:

$f(t)={ \frac{C}{\eta }}\left( {\frac{t}{\eta }}\right) ^{C-1}e^{-\left( {\frac{t}{ \eta }}\right) ^{C}} \,\!$

where the only unknown parameter is the scale parameter, $\eta\,\!$. Note that in the formulation of the 1-parameter Weibull, we assume that the shape parameter $\beta \,\!$ is known a priori from past experience with identical or similar products. The advantage of doing this is that data sets with few or no failures can be analyzed.

Weibull Distribution Functions

The Mean or MTTF

The mean, $\overline{T} \,\!$ (also called MTTF), of the Weibull pdf is given by:

$\overline{T}=\gamma +\eta \cdot \Gamma \left( {\frac{1}{\beta }}+1\right) \,\!$

where $\Gamma \left( {\frac{1}{\beta }}+1\right) \,\!$ is the gamma function evaluated at the value of $\left( { \frac{1}{\beta }}+1\right) \,\!$. The gamma function is defined as:

$\Gamma (n)=\int_{0}^{\infty }e^{-x}x^{n-1}dx \,\!$

For the 2-parameter case, this can be reduced to:

$\overline{T}=\eta \cdot \Gamma \left( {\frac{1}{\beta }}+1\right) \,\!$

Note that some practitioners erroneously assume that $\eta \,\!$ is equal to the MTTF, $\overline{T}\,\!$. This is only true for the case of $\beta=1 \,\!$, since then:

\begin{align} \overline{T} &= \eta \cdot \Gamma \left( {\frac{1}{1}}+1\right) \\ &= \eta \cdot \Gamma \left( {2}\right) \\ &= \eta \cdot 1\\ &= \eta \end{align} \,\!
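As a quick numerical illustration of the pdf and the MTTF formula (a sketch in Python; the parameter values are arbitrary assumptions):

```python
import math

def weibull_pdf(t, beta, eta, gamma=0.0):
    """3-parameter Weibull pdf; gamma = 0 gives the 2-parameter form."""
    if t < gamma:
        return 0.0
    z = (t - gamma) / eta
    return (beta / eta) * z ** (beta - 1) * math.exp(-(z ** beta))

def weibull_mttf(beta, eta, gamma=0.0):
    """Mean (MTTF): gamma + eta * Gamma(1/beta + 1)."""
    return gamma + eta * math.gamma(1.0 / beta + 1.0)

beta, eta = 1.5, 100.0            # assumed shape and scale
print(weibull_pdf(50.0, beta, eta))
print(weibull_mttf(beta, eta))    # ~90.27: eta equals the MTTF only when beta = 1
print(weibull_mttf(1.0, eta))     # 100.0, the beta = 1 special case
```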
The Median

The median, $\breve{T}\,\!$, of the Weibull distribution is given by:

$\breve{T}=\gamma +\eta \left( \ln 2\right) ^{\frac{1}{\beta }} \,\!$

The Mode

The mode, $\tilde{T} \,\!$, is given by:

$\tilde{T}=\gamma +\eta \left( 1-\frac{1}{\beta }\right) ^{\frac{1}{\beta }} \,\!$

The Standard Deviation

The standard deviation, $\sigma _{T}\,\!$, is given by:

$\sigma _{T}=\eta \cdot \sqrt{\Gamma \left( {\frac{2}{\beta }}+1\right) -\Gamma \left( {\frac{1}{ \beta }}+1\right) ^{2}} \,\!$

The Weibull Reliability Function

The equation for the 3-parameter Weibull cumulative distribution function, cdf, is given by:

$F(t)=1-e^{-\left( \frac{t-\gamma }{\eta }\right) ^{\beta }} \,\!$

This is also referred to as unreliability and designated as $Q(t) \,\!$ by some authors. Recalling that the reliability function of a distribution is simply one minus the cdf, the reliability function for the 3-parameter Weibull distribution is then given by:

$R(t)=e^{-\left( { \frac{t-\gamma }{\eta }}\right) ^{\beta }} \,\!$

The Weibull Conditional Reliability Function

The 3-parameter Weibull conditional reliability function is given by:

$R(t|T)={ \frac{R(T+t)}{R(T)}}={\frac{e^{-\left( {\frac{T+t-\gamma }{\eta }}\right) ^{\beta }}}{e^{-\left( {\frac{T-\gamma }{\eta }}\right) ^{\beta }}}} \,\!$

or:

$R(t|T)=e^{-\left[ \left( {\frac{T+t-\gamma }{\eta }}\right) ^{\beta }-\left( {\frac{T-\gamma }{\eta }}\right) ^{\beta }\right] } \,\!$

This gives the reliability for a new mission of duration $t \,\!$ for a unit that has already accumulated $T \,\!$ hours of operation up to the start of this new mission, and that has been checked out to assure that it will start the next mission successfully. It is called conditional because the reliability of the new mission is calculated conditional on the unit having already successfully accumulated $T \,\!$ hours of operation.

The Weibull Reliable Life

The reliable life, $T_{R}\,\!$, of a unit for a specified reliability, $R\,\!$, starting the mission at age zero, is given by:

$T_{R}=\gamma +\eta \cdot \left\{ -\ln ( R ) \right\} ^{ \frac{1}{\beta }} \,\!$

This is the life for which the unit/item will be functioning successfully with a reliability of $R\,\!$. If $R = 0.50\,\!$, then $T_{R}=\breve{T} \,\!$, the median life, or the life by which half of the units will survive.

The Weibull Failure Rate Function

The Weibull failure rate function, $\lambda(t) \,\!$, is given by:

$\lambda \left( t\right) = \frac{f\left( t\right) }{R\left( t\right) }=\frac{\beta }{\eta }\left( \frac{ t-\gamma }{\eta }\right) ^{\beta -1} \,\!$

Characteristics of the Weibull Distribution

The Weibull distribution is widely used in reliability and life data analysis due to its versatility. Depending on the values of the parameters, the Weibull distribution can be used to model a variety of life behaviors. We will now examine how the values of the shape parameter, $\beta\,\!$, and the scale parameter, $\eta\,\!$, affect such distribution characteristics as the shape of the curve, the reliability and the failure rate. Note that in the rest of this section we will assume the most general form of the Weibull distribution (i.e., the 3-parameter form). The appropriate substitutions to obtain the other forms, such as the 2-parameter form where $\gamma = 0,\,\!$ or the 1-parameter form where $\beta = C = \,\!$ constant, can easily be made.
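Before examining the parameter effects, note that the distribution functions above are easy to evaluate numerically. A minimal sketch (Python, with arbitrary assumed parameter values):

```python
import math

def reliability(t, beta, eta, gamma=0.0):
    """3-parameter Weibull reliability R(t)."""
    return math.exp(-(((t - gamma) / eta) ** beta))

def cond_reliability(t, T, beta, eta, gamma=0.0):
    """Reliability of a new mission of length t for a unit that has survived T."""
    return reliability(T + t, beta, eta, gamma) / reliability(T, beta, eta, gamma)

def reliable_life(R, beta, eta, gamma=0.0):
    """Age at which reliability equals R; R = 0.5 gives the median life."""
    return gamma + eta * (-math.log(R)) ** (1.0 / beta)

beta, eta = 1.5, 100.0                           # assumed shape and scale
print(reliability(50.0, beta, eta))              # R(50)
print(cond_reliability(50.0, 50.0, beta, eta))   # 50-hour mission after 50 hours
print(reliable_life(0.90, beta, eta))            # life at 90% reliability
```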
Effects of the Shape Parameter, beta

The Weibull shape parameter, $\beta\,\!$, is also known as the slope. This is because the value of $\beta\,\!$ is equal to the slope of the regressed line in a probability plot. Different values of the shape parameter can have marked effects on the behavior of the distribution. In fact, some values of the shape parameter will cause the distribution equations to reduce to those of other distributions. For example, when $\beta = 1\,\!$, the pdf of the 3-parameter Weibull distribution reduces to that of the 2-parameter exponential distribution or:

$f(t)={\frac{1}{\eta }}e^{-{\frac{t-\gamma }{\eta }}} \,\!$

where $\frac{1}{\eta }=\lambda = \,\!$ failure rate. The parameter $\beta\,\!$ is a pure number (i.e., it is dimensionless). The following figure shows the effect of different values of the shape parameter, $\beta\,\!$, on the shape of the pdf. As you can see, the shape can take on a variety of forms based on the value of $\beta\,\!$.

For $0\lt \beta \leq 1 \,\!$:

• As $t \rightarrow 0\,\!$ (or $\gamma\,\!$), $f(t)\rightarrow \infty.\,\!$
• As $t\rightarrow \infty\,\!$, $f(t)\rightarrow 0\,\!$.
• $f(t)\,\!$ decreases monotonically and is convex as it increases beyond the value of $\gamma\,\!$.
• The mode is non-existent.

For $\beta \gt 1 \,\!$:

• $f(t) = 0\,\!$ at $t = 0\,\!$ (or $\gamma\,\!$).
• $f(t)\,\!$ increases as $t\rightarrow \tilde{T} \,\!$ (the mode) and decreases thereafter.
• For $\beta \lt 2.6\,\!$ the Weibull pdf is positively skewed (has a right tail); for $2.6 \lt \beta \lt 3.7\,\!$ its coefficient of skewness approaches zero (no tail), and consequently it may approximate the normal pdf; and for $\beta \gt 3.7\,\!$ it is negatively skewed (left tail).

The way the value of $\beta\,\!$ relates to the physical behavior of the items being modeled becomes more apparent when we observe how its different values affect the reliability and failure rate functions. Note that for $\beta = 0.999\,\!$, $f(0) = \infty\,\!$, but for $\beta = 1.001\,\!$, $f(0) = 0.\,\!$ This abrupt shift is what complicates MLE estimation when $\beta\,\!$ is close to 1.

The Effect of beta on the cdf and Reliability Function

The above figure shows the effect of the value of $\beta\,\!$ on the cdf, as manifested in the Weibull probability plot. It is easy to see why this parameter is sometimes referred to as the slope. Note that the models represented by the three lines all have the same value of $\eta\,\!$. The following figure shows the effects of these varied values of $\beta\,\!$ on the reliability plot, which is a linear analog of the probability plot.

• $R(t)\,\!$ decreases sharply and monotonically for $0 \lt \beta \lt 1\,\!$ and is convex.
• For $\beta = 1\,\!$, $R(t)\,\!$ decreases monotonically but less sharply than for $0 \lt \beta \lt 1\,\!$ and is convex.
• For $\beta \gt 1\,\!$, $R(t)\,\!$ decreases as $t\,\!$ increases. As wear-out sets in, the curve goes through an inflection point and decreases sharply.

The Effect of beta on the Weibull Failure Rate

The value of $\beta\,\!$ has a marked effect on the failure rate of the Weibull distribution, and inferences can be drawn about a population's failure characteristics just by considering whether the value of $\beta\,\!$ is less than, equal to, or greater than one. As indicated by the above figure, populations with $\beta \lt 1\,\!$ exhibit a failure rate that decreases with time, populations with $\beta = 1\,\!$ have a constant failure rate (consistent with the exponential distribution) and populations with $\beta \gt 1\,\!$ have a failure rate that increases with time.
All three life stages of the bathtub curve can be modeled with the Weibull distribution and varying values of $\beta\,\!$. The Weibull failure rate for $0 \lt \beta \lt 1\,\!$ is unbounded at $T = 0\,\!$ (or $\gamma \,\!$). The failure rate, $\lambda(t),\,\!$ decreases thereafter monotonically and is convex, approaching the value of zero as $t\rightarrow \infty\,\!$ or $\lambda (\infty) = 0\,\!$. This behavior makes it suitable for representing the failure rate of units exhibiting early-type failures, for which the failure rate decreases with age. When encountering such behavior in a manufactured product, it may be indicative of problems in the production process, inadequate burn-in, substandard parts and components, or problems with packaging and shipping.

For $\beta = 1\,\!$, $\lambda(t)\,\!$ yields a constant value of ${ \frac{1}{\eta }} \,\!$ or:

$\lambda (t)=\lambda ={\frac{1}{\eta }} \,\!$

This makes it suitable for representing the failure rate of chance-type failures and the useful life period failure rate of units.

For $\beta \gt 1\,\!$, $\lambda(t)\,\!$ increases as $t\,\!$ increases and becomes suitable for representing the failure rate of units exhibiting wear-out type failures. For $1 \lt \beta \lt 2,\,\!$ the $\lambda(t)\,\!$ curve is concave; consequently, the failure rate increases at a decreasing rate as $t\,\!$ increases.

For $\beta = 2\,\!$ there emerges a straight line relationship between $\lambda(t)\,\!$ and $t\,\!$, starting at a value of $\lambda(t) = 0\,\!$ at $t = \gamma\,\!$, and increasing thereafter with a slope of ${ \frac{2}{\eta ^{2}}} \,\!$. Consequently, the failure rate increases at a constant rate as $t\,\!$ increases. Furthermore, if $\eta = 1\,\!$ the slope becomes equal to 2, and when $\gamma = 0\,\!$, $\lambda(t)\,\!$ becomes a straight line which passes through the origin with a slope of 2. Note that at $\beta = 2\,\!$, the Weibull distribution equations reduce to that of the Rayleigh distribution.

When $\beta \gt 2,\,\!$ the $\lambda(t)\,\!$ curve is convex, with its slope increasing as $t\,\!$ increases. Consequently, the failure rate increases at an increasing rate as $t\,\!$ increases, indicating wearout life.
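The three regimes can be demonstrated with a few evaluations of the failure rate function (a sketch; the scale value and time points are arbitrary assumptions):

```python
import math

def failure_rate(t, beta, eta, gamma=0.0):
    """Weibull hazard: (beta/eta) * ((t - gamma)/eta)**(beta - 1)."""
    return (beta / eta) * ((t - gamma) / eta) ** (beta - 1)

eta = 100.0                                   # assumed scale
for beta in (0.5, 1.0, 3.0):                  # early-life, useful-life, wear-out
    rates = [failure_rate(t, beta, eta) for t in (10.0, 50.0, 200.0)]
    if rates[0] > rates[-1]:
        trend = "decreasing"
    elif rates[0] < rates[-1]:
        trend = "increasing"
    else:
        trend = "constant"
    print(beta, trend, [round(r, 5) for r in rates])
```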
Effects of the Scale Parameter, eta

A change in the scale parameter $\eta\,\!$ has the same effect on the distribution as a change of the abscissa scale. Increasing the value of $\eta\,\!$ while holding $\beta\,\!$ constant has the effect of stretching out the pdf. Since the area under a pdf curve is a constant value of one, the "peak" of the pdf curve will also decrease with the increase of $\eta\,\!$, as indicated in the above figure.

• If $\eta\,\!$ is increased while $\beta\,\!$ and $\gamma\,\!$ are kept the same, the distribution gets stretched out to the right and its height decreases, while maintaining its shape and location.
• If $\eta\,\!$ is decreased while $\beta\,\!$ and $\gamma\,\!$ are kept the same, the distribution gets pushed in towards the left (i.e., towards its beginning or towards 0 or $\gamma\,\!$), and its height increases.
• $\eta\,\!$ has the same units as $t\,\!$, such as hours, miles, cycles, actuations, etc.

Effects of the Location Parameter, gamma

The location parameter, $\gamma\,\!$, as the name implies, locates the distribution along the abscissa. Changing the value of $\gamma\,\!$ has the effect of sliding the distribution and its associated function either to the right (if $\gamma \gt 0\,\!$) or to the left (if $\gamma \lt 0\,\!$).

• When $\gamma = 0,\,\!$ the distribution starts at $t=0\,\!$ or at the origin.
• If $\gamma \gt 0,\,\!$ the distribution starts at the location $\gamma\,\!$ to the right of the origin.
• If $\gamma \lt 0,\,\!$ the distribution starts at the location $\gamma\,\!$ to the left of the origin.
• $\gamma\,\!$ provides an estimate of the earliest time-to-failure of such units.
• The life period 0 to $+ \gamma\,\!$ is a failure-free operating period of such units.
• The parameter $\gamma\,\!$ may assume all values and provides an estimate of the earliest time a failure may be observed. A negative $\gamma\,\!$ may indicate that failures have occurred prior to the beginning of the test, namely during production, in storage, in transit, during checkout prior to the start of a mission, or prior to actual use.
• $\gamma\,\!$ has the same units as $t\,\!$, such as hours, miles, cycles, actuations, etc.

Estimation of the Weibull Parameters

The estimates of the parameters of the Weibull distribution can be found graphically via probability plotting paper, or analytically, using either least squares (rank regression) or maximum likelihood estimation (MLE).

Probability Plotting

One method of calculating the parameters of the Weibull distribution is by using probability plotting. To better illustrate this procedure, consider the following example from Kececioglu [20]. Assume that six identical units are being reliability tested at the same application and operation stress levels. All of these units fail during the test after operating the following number of hours: 93, 34, 16, 120, 53 and 75. Estimate the values of the parameters for a 2-parameter Weibull distribution and determine the reliability of the units at a time of 15 hours.

Solution

The steps for determining the parameters of the Weibull representing the data, using probability plotting, are outlined in the following instructions. First, rank the times-to-failure in ascending order as shown next.

| Time-to-failure, hours | Failure order number (out of a sample size of 6) |
| --- | --- |
| 16 | 1 |
| 34 | 2 |
| 53 | 3 |
| 75 | 4 |
| 93 | 5 |
| 120 | 6 |

Obtain their median rank plotting positions. Median rank positions are used instead of other ranking methods because median ranks are at a specific confidence level (50%). Median ranks can be found tabulated in many reliability books. They can also be estimated using the following equation:

$MR \sim { \frac{i-0.3}{N+0.4}}\cdot 100 \,\!$

where $i\,\!$ is the failure order number and $N\,\!$ is the total sample size. The exact median ranks are found in Weibull++ by solving:

$\sum_{k=i}^N{\binom{N}{k}}{MR^k}{(1-MR)^{N-k}}=0.5=50% \,\!$

for $MR\,\!$, where $N\,\!$ is the sample size and $i\,\!$ the order number. The times-to-failure, with their corresponding median ranks, are shown next.

| Time-to-failure, hours | Median rank, % |
| --- | --- |
| 16 | 10.91 |
| 34 | 26.44 |
| 53 | 42.14 |
| 75 | 57.86 |
| 93 | 73.56 |
| 120 | 89.10 |
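Both median-rank routes can be reproduced numerically. A sketch (Python; the exact ranks are obtained here as the 50% point of a beta distribution, which is equivalent to solving the cumulative binomial equation above; using scipy for this is my own shortcut, not a statement about Weibull++ internals):

```python
from scipy.stats import beta as beta_dist

N = 6
for i in range(1, N + 1):
    bernard = (i - 0.3) / (N + 0.4) * 100.0           # approximation above
    exact = beta_dist.ppf(0.5, i, N - i + 1) * 100.0  # exact median rank
    print(f"i={i}: Bernard {bernard:5.2f}%,  exact {exact:5.2f}%")
```

The exact column reproduces the table values (10.91%, 26.44%, ..., 89.10%).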
On a Weibull probability paper, plot the times and their corresponding ranks. A sample of a Weibull probability paper is given in the following figure. The points of the data in the example are shown in the figure below. Draw the best possible straight line through these points, as shown below, then obtain the slope of this line by drawing a line, parallel to the one just obtained, through the slope indicator. This value is the estimate of the shape parameter $\hat{\beta } \,\!$, in this case $\hat{\beta }=1.4 \,\!$. At the $Q(t)=63.2%\,\!$ ordinate point, draw a straight horizontal line until this line intersects the fitted straight line. Draw a vertical line through this intersection until it crosses the abscissa. The value at the intersection of the abscissa is the estimate of $\hat{\eta } \,\!$. For this case, $\hat{\eta }=76 \,\!$ hours. This is always at 63.2% since:

$Q(t)=1-e^{-(\frac{t}{\eta })^{\beta }}=1-e^{-1}=0.632=63.2% \,\!$

Now any reliability value for any mission time $t\,\!$ can be obtained. For example, the reliability for a mission of 15 hours, or any other time, can now be obtained either from the plot or analytically. To obtain the value from the plot, draw a vertical line from the abscissa, at $t=15\,\!$ hours, to the fitted line. Draw a horizontal line from this intersection to the ordinate and read $Q(t)\,\!$, in this case $Q(t)=9.8%\,\!$. Thus, $R(t)=1-Q(t)=90.2%\,\!$. This can also be obtained analytically from the Weibull reliability function, since the estimates of both of the parameters are known:

$R(t=15)=e^{-\left( \frac{15}{\eta }\right) ^{\beta }}=e^{-\left( \frac{15}{76 }\right) ^{1.4}}=90.2% \,\!$

Probability Plotting for the Location Parameter, Gamma

The third parameter of the Weibull distribution is utilized when the data do not fall on a straight line, but fall on either a concave up or down curve. The following statements can be made regarding the value of $\gamma \,\!$:

• Case 1: If the curve for MR versus ${{t}_{j}}\,\!$ is concave down and the curve for MR versus ${({t}_{j}-{t}_{1})}\,\!$ is concave up, then there exists a $\gamma \,\!$ such that $0\lt \gamma \lt t_{1}\,\!$, or $\gamma \,\!$ has a positive value.
• Case 2: If the curves for MR versus ${{t}_{j}}\,\!$ and MR versus ${({t}_{j}-{t}_{1})}\,\!$ are both concave up, then there exists a negative $\gamma \,\!$ which will straighten out the curve of MR versus ${{t}_{j}}\,\!$.
• Case 3: If neither one of the previous two cases prevails, then either reject the Weibull as one capable of representing the data, or proceed with the multiple population (mixed Weibull) analysis.

To obtain the location parameter, $\gamma \,\!$:

• Subtract the same arbitrary value, $\gamma \,\!$, from all the times to failure and replot the data.
• If the initial curve is concave up, subtract a negative $\gamma \,\!$ from each failure time.
• If the initial curve is concave down, subtract a positive $\gamma \,\!$ from each failure time.
• Repeat until the data plots on an acceptable straight line.
• The value of $\gamma \,\!$ is the subtracted (positive or negative) value that places the points in an acceptable straight line.

The other two parameters are then obtained using the techniques previously described. Also, it is important to note that we used the term subtract a positive or negative gamma, where subtracting a negative gamma is equivalent to adding it. Note that when adjusting for gamma, the x-axis scale for the straight line becomes ${({t}-\gamma)}\,\!$.

Rank Regression on Y

Performing rank regression on Y requires that a straight line be mathematically fitted to a set of data points such that the sum of the squares of the vertical deviations from the points to the line is minimized. This is in essence the same methodology as the probability plotting method, except that we use the principle of least squares to determine the line through the points, as opposed to just eyeballing it. The first step is to bring our function into a linear form.
For the two-parameter Weibull distribution, the cumulative distribution function (cdf) is:

$F(t)=1-e^{-\left( \frac{t}{\eta }\right) ^{\beta }} \,\!$

Taking the natural logarithm of both sides of the equation yields:

$\ln[ 1-F(t)] =-( \frac{t}{\eta }) ^{\beta } \,\!$

$\ln \{ -\ln[ 1-F(t)]\} =\beta \ln ( \frac{t}{ \eta }) \,\!$

or:

\begin{align} \ln \{ -\ln[ 1-F(t)]\} =-\beta \ln (\eta )+\beta \ln (t) \end{align}\,\!

Now let:

\begin{align} y = \ln \{ -\ln[ 1-F(t)]\} \end{align}\,\!

\begin{align} a = - \beta \ln(\eta) \end{align}\,\!

and:

\begin{align} b= \beta \end{align}\,\!

which results in the linear equation of:

\begin{align} y=a+bx \end{align}\,\!

The least squares parameter estimation method (also known as regression analysis) was discussed in Parameter Estimation, and the following equations for regression on Y were derived:

$\hat{a}=\frac{\sum\limits_{i=1}^{N}y_{i}}{N}-\hat{b}\frac{ \sum\limits_{i=1}^{N}x_{i}}{N}=\bar{y}-\hat{b}\bar{x} \,\!$

and:

$\hat{b}={\frac{\sum\limits_{i=1}^{N}x_{i}y_{i}-\frac{\sum \limits_{i=1}^{N}x_{i}\sum\limits_{i=1}^{N}y_{i}}{N}}{\sum \limits_{i=1}^{N}x_{i}^{2}-\frac{\left( \sum\limits_{i=1}^{N}x_{i}\right) ^{2}}{N}}} \,\!$

In this case the equations for ${{y}_{i}}\,\!$ and ${{x}_{i}}\,\!$ are:

$y_{i}=\ln \left\{ -\ln [1-F(t_{i})]\right\} \,\!$

and:

\begin{align} x_{i}=\ln(t_{i}) \end{align}\,\!

The $F(t_{i})\,\!$ values are estimated from the median ranks. Once $\hat{a} \,\!$ and $\hat{b} \,\!$ are obtained, then $\hat{\beta } \,\!$ and $\hat{\eta } \,\!$ can easily be obtained from the previous equations.

The Correlation Coefficient

The correlation coefficient is defined as follows:

$\rho ={\frac{\sigma _{xy}}{\sigma _{x}\sigma _{y}}} \,\!$

where $\sigma_{xy}\,\!$ = covariance of $x\,\!$ and $y\,\!$, $\sigma_{x}\,\!$ = standard deviation of $x\,\!$, and $\sigma_{y}\,\!$ = standard deviation of $y\,\!$. The estimator of $\rho\,\!$ is the sample correlation coefficient, $\hat{\rho} \,\!$, given by:

$\hat{\rho}=\frac{\sum\limits_{i=1}^{N}(x_{i}-\overline{x})(y_{i}-\overline{y} )}{\sqrt{\sum\limits_{i=1}^{N}(x_{i}-\overline{x})^{2}\cdot \sum\limits_{i=1}^{N}(y_{i}-\overline{y})^{2}}}\,\!$

RRY Example

Consider the same data set from the probability plotting example given above (with six failures at 16, 34, 53, 75, 93 and 120 hours). Estimate the parameters and the correlation coefficient using rank regression on Y, assuming that the data follow the 2-parameter Weibull distribution.

Solution

Construct a table as shown next.
Least Squares Analysis

| $N\,\!$ | $T_{i}\,\!$ | $\ln(T_{i})\,\!$ | $F(T_i)\,\!$ | $y_{i}\,\!$ | $(\ln{T_i})^2\,\!$ | ${y_i}^2\,\!$ | $(\ln{T_i})y_i\,\!$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 16 | 2.7726 | 0.1091 | -2.1583 | 7.6873 | 4.6582 | -5.9840 |
| 2 | 34 | 3.5264 | 0.2645 | -1.1802 | 12.4352 | 1.3930 | -4.1620 |
| 3 | 53 | 3.9703 | 0.4214 | -0.6030 | 15.7632 | 0.3637 | -2.3943 |
| 4 | 75 | 4.3175 | 0.5786 | -0.1460 | 18.6407 | 0.0213 | -0.6303 |
| 5 | 93 | 4.5326 | 0.7355 | 0.2851 | 20.5445 | 0.0813 | 1.2923 |
| 6 | 120 | 4.7875 | 0.8909 | 0.7955 | 22.9201 | 0.6328 | 3.8083 |
| $\sum\,\!$ | | 23.9068 | | -3.0070 | 97.9909 | 7.1502 | -8.0699 |

Utilizing the values from the table, calculate $\hat{a} \,\!$ and $\hat{b} \,\!$ using the following equations:

$\hat{b} =\frac{\sum\limits_{i=1}^{6}(\ln t_{i})y_{i}-(\sum\limits_{i=1}^{6}\ln t_{i})(\sum\limits_{i=1}^{6}y_{i})/6}{ \sum\limits_{i=1}^{6}(\ln t_{i})^{2}-(\sum\limits_{i=1}^{6}\ln t_{i})^{2}/6} \,\!$

$\hat{b}=\frac{-8.0699-(23.9068)(-3.0070)/6}{97.9909-(23.9068)^{2}/6} \,\!$

or:

$\hat{b}=1.4301 \,\!$

and:

$\hat{a}=\overline{y}-\hat{b}\overline{T}=\frac{\sum \limits_{i=1}^{N}y_{i}}{N}-\hat{b}\frac{\sum\limits_{i=1}^{N}\ln t_{i}}{N } \,\!$

or:

$\hat{a}=\frac{(-3.0070)}{6}-(1.4301)\frac{23.9068}{6}=-6.19935 \,\!$

Therefore:

$\hat{\beta }=\hat{b}=1.4301 \,\!$

and:

$\hat{\eta }=e^{-\frac{\hat{a}}{\hat{b}}}=e^{-\frac{(-6.19935)}{ 1.4301}} \,\!$

or:

$\hat{\eta }=76.318\text{ hr} \,\!$

The correlation coefficient can be estimated as:

$\hat{\rho }=0.9956 \,\!$

This example can be repeated in the Weibull++ software. The following plot shows the Weibull probability plot for the data set (with 90% two-sided confidence bounds). If desired, the Weibull pdf representing the data set can be written as:

$f(t)={\frac{\beta }{\eta }}\left( {\frac{t}{\eta }}\right) ^{\beta -1}e^{-\left( {\frac{t}{\eta }}\right) ^{\beta }} \,\!$

or:

$f(t)={\frac{1.4302}{76.317}}\left( {\frac{t}{76.317}}\right) ^{0.4302}e^{-\left( {\frac{t}{76.317}}\right) ^{1.4302}} \,\!$

You can also plot this result in Weibull++, as shown next. From this point on, different results, reports and plots can be obtained.
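The whole RRY computation takes only a few lines of code (a sketch using Bernard's approximation for the median ranks, so the last digits differ slightly from the exact-rank table above):

```python
import math

times = [16, 34, 53, 75, 93, 120]
N = len(times)
x = [math.log(t) for t in times]
mr = [(i - 0.3) / (N + 0.4) for i in range(1, N + 1)]   # Bernard's approximation
y = [math.log(-math.log(1.0 - F)) for F in mr]

xbar, ybar = sum(x) / N, sum(y) / N
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

beta_hat, eta_hat = b, math.exp(-a / b)
print(beta_hat, eta_hat)   # close to the 1.43 and 76 hr obtained above
```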
Rank Regression on X

Performing a rank regression on X is similar to the process for rank regression on Y, with the difference being that the horizontal deviations from the points to the line are minimized rather than the vertical. Again, the first task is to bring the reliability function into a linear form. This step is exactly the same as in the regression on Y analysis and all the equations apply in this case too. The derivation from the previous analysis begins on the least squares fit part, where in this case we treat $x\,\!$ as the dependent variable and $y\,\!$ as the independent variable. The best-fitting straight line to the data, for regression on X (see Parameter Estimation), is the straight line:

$x= \hat{a}+\hat{b}y \,\!$

The corresponding equations for $\hat{a} \,\!$ and $\hat{b} \,\!$ are:

$\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\sum\limits_{i=1}^{N}x_{i}}{N} -\hat{b}\frac{\sum\limits_{i=1}^{N}y_{i}}{N} \,\!$

and:

$\hat{b}={\frac{\sum\limits_{i=1}^{N}x_{i}y_{i}-\frac{\sum \limits_{i=1}^{N}x_{i}\sum\limits_{i=1}^{N}y_{i}}{N}}{\sum \limits_{i=1}^{N}y_{i}^{2}-\frac{\left( \sum\limits_{i=1}^{N}y_{i}\right) ^{2}}{N}}} \,\!$

where:

$y_{i}=\ln \left\{ -\ln [1-F(t_{i})]\right\} \,\!$

and:

\begin{align} x_{i}=\ln (t_{i}) \end{align}\,\!

and the $F({{t}_{i}})\,\!$ values are again obtained from the median ranks.

Once $\hat{a} \,\!$ and $\hat{b} \,\!$ are obtained, solve the linear equation for $y\,\!$, which corresponds to:

$y=-\frac{\hat{a}}{\hat{b}}+\frac{1}{\hat{b}}x \,\!$

Solving for the parameters from the above equations, we get:

$a=-\frac{\hat{a}}{\hat{b}}=-\beta \ln (\eta )\,\!$

and

$b=\frac{1}{\hat{b}}=\beta\,\!$

The correlation coefficient is evaluated as before.

RRX Example

Again using the same data set from the probability plotting and RRY examples (with six failures at 16, 34, 53, 75, 93 and 120 hours), calculate the parameters using rank regression on X.

Solution

The same table constructed above for the RRY example can also be applied for RRX. Using the values from this table we get:

$\hat{b} ={\frac{\sum\limits_{i=1}^{6}(\ln T_{i})y_{i}-\frac{ \sum\limits_{i=1}^{6}\ln T_{i}\sum\limits_{i=1}^{6}y_{i}}{6}}{ \sum\limits_{i=1}^{6}y_{i}^{2}-\frac{\left( \sum\limits_{i=1}^{6}y_{i}\right) ^{2}}{6}}} \,\!$

$\hat{b} =\frac{-8.0699-(23.9068)(-3.0070)/6}{7.1502-(-3.0070)^{2}/6} \,\!$

or:

$\hat{b}=0.6931 \,\!$

and:

$\hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\sum\limits_{i=1}^{6}\ln T_{i} }{6}-\hat{b}\frac{\sum\limits_{i=1}^{6}y_{i}}{6} \,\!$

or:

$\hat{a}=\frac{23.9068}{6}-(0.6931)\frac{(-3.0070)}{6}=4.3318 \,\!$

Therefore:

$\hat{\beta }=\frac{1}{\hat{b}}=\frac{1}{0.6931}=1.4428 \,\!$

and:

$\hat{\eta }=e^{\frac{\hat{a}}{\hat{b}}\cdot \frac{1}{\hat{ \beta }}}=e^{\frac{4.3318}{0.6931}\cdot \frac{1}{1.4428}}=76.0811\text{ hr} \,\!$

The correlation coefficient is:

$\hat{\rho }=0.9956 \,\!$

The results and the associated graph using Weibull++ are shown next. Note that the slight variation in the results is due to the number of significant figures used in the estimation of the median ranks. Weibull++ by default uses double precision accuracy when computing the median ranks.

3-Parameter Weibull Regression

When the MR versus ${{t}_{j}}\,\!$ points plotted on the Weibull probability paper do not fall on a satisfactory straight line and the points fall on a curve, then a location parameter, $\gamma\,\!$, might exist which may straighten out these points. The goal in this case is to fit a curve, instead of a line, through the data points using nonlinear regression. The Gauss-Newton method can be used to solve for the parameters, $\beta\,\!$, $\eta\,\!$ and $\gamma\,\!$, by performing a Taylor series expansion on $F(t{_{i}};\beta ,\eta, \gamma )\,\!$. Then the nonlinear model is approximated with linear terms and ordinary least squares are employed to estimate the parameters. This procedure is iterated until a satisfactory solution is reached. (Note that other shapes, particularly S shapes, might suggest the existence of more than one population. In these cases, the multiple population (mixed Weibull) distribution may be more appropriate.)

When you use the 3-parameter Weibull distribution, Weibull++ calculates the value of $\gamma\,\!$ by utilizing an optimized Nelder-Mead algorithm and adjusts the points by this value of $\gamma\,\!$ such that they fall on a straight line, and then plots both the adjusted and the original unadjusted points. To draw a curve through the original unadjusted points, if so desired, select Weibull 3P Line Unadjusted for Gamma from the Show Plot Line submenu under the Plot Options menu. The returned estimations of the parameters are the same when selecting RRX or RRY.
To display the unadjusted data points and line along with the adjusted data points and line, select Show/Hide Items under the Plot Options menu and include the unadjusted data points and line as follows: The results and the associated graph for the previous example using the 3-parameter Weibull case are shown next:

Maximum Likelihood Estimation

As outlined in Parameter Estimation, maximum likelihood estimation works by developing a likelihood function based on the available data and finding the values of the parameter estimates that maximize the likelihood function. This can be achieved by using iterative methods to determine the parameter estimate values that maximize the likelihood function, but this can be rather difficult and time-consuming, particularly when dealing with the three-parameter distribution. Another method of finding the parameter estimates involves taking the partial derivatives of the likelihood function with respect to the parameters, setting the resulting equations equal to zero and solving simultaneously to determine the values of the parameter estimates. (Note that MLE asymptotic properties do not hold when estimating $\gamma\,\!$ using MLE, as discussed in Meeker and Escobar [27].) The log-likelihood functions and associated partial derivatives used to determine maximum likelihood estimates for the Weibull distribution are covered in Appendix D.

MLE Example

One last time, use the same data set from the probability plotting, RRY and RRX examples (with six failures at 16, 34, 53, 75, 93 and 120 hours) and calculate the parameters using MLE.

Solution

In this case, we have non-grouped data with no suspensions or intervals (i.e., complete data). The equations for the partial derivatives of the log-likelihood function are derived in an appendix and given next:

$\frac{\partial \Lambda }{\partial \beta }=\frac{6}{\beta } +\sum_{i=1}^{6}\ln \left( \frac{T_{i}}{\eta }\right) -\sum_{i=1}^{6}\left( \frac{T_{i}}{\eta }\right) ^{\beta }\ln \left( \frac{T_{i}}{\eta }\right) =0 \,\!$

and:

$\frac{\partial \Lambda }{\partial \eta }=\frac{-\beta }{\eta }\cdot 6+\frac{ \beta }{\eta }\sum\limits_{i=1}^{6}\left( \frac{T_{i}}{\eta }\right) ^{\beta }=0 \,\!$

Solving the above equations simultaneously we get:

$\hat{\beta }=1.933,\,\!$ $\hat{\eta }=73.526 \,\!$

The variance/covariance matrix is found to be:

$\left[ \begin{array}{cc} \hat{Var}\left( \hat{\beta }\right) =0.4211 & \hat{Cov}( \hat{\beta },\hat{\eta })=3.272 \\ \hat{Cov}(\hat{\beta },\hat{\eta })=3.272 & \hat{Var} \left( \hat{\eta }\right) =266.646 \end{array} \right] \,\!$

The results and the associated plot using Weibull++ (MLE) are shown next. You can view the variance/covariance matrix directly by clicking the Analysis Summary table in the control panel. Note that the decimal accuracy displayed and used is based on your individual Application Setup.
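For a quick cross-check, the same complete-data MLE fit can be reproduced with scipy (a sketch; scipy's weibull_min uses c for the shape $\beta\,\!$ and scale for $\eta\,\!$, and floc=0 pins the location parameter at zero):

```python
from scipy.stats import weibull_min

times = [16, 34, 53, 75, 93, 120]
beta_hat, loc, eta_hat = weibull_min.fit(times, floc=0)  # location fixed at 0
print(beta_hat, eta_hat)   # approximately 1.933 and 73.526, as in the example
```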
Unbiased MLE $\beta \,\!$

It is well known that the MLE $\beta \,\!$ is biased. This bias affects the accuracy of reliability predictions, especially when the number of failures is small. Weibull++ provides a simple way to correct the bias of the MLE $\beta \,\!$. When there are no right censored observations in the data, the following equation provided by Hirose [39] is used to calculate the unbiased $\beta \,\!$:

${{\beta }_{U}}=\frac{\beta }{1.0115+\frac{1.278}{r}+\frac{2.001}{{{r}^{2}}}+\frac{20.35}{{{r}^{3}}}-\frac{46.98}{{{r}^{4}}}}$

where $r\,\!$ is the number of failures. When there are right censored observations in the data, the following equation provided by Ross [40] is used to calculate the unbiased $\beta\,\!$:

${{\beta }_{U}}=\frac{\beta }{1+\frac{1.37}{r-1.92}\sqrt{\frac{n}{r}}}$

where $n\,\!$ is the number of observations. The software will use the above equations only when there are more than two failures in the data set.

Fisher Matrix Confidence Bounds

One of the methods used by the application in estimating the different types of confidence bounds for Weibull data, the Fisher matrix method, is presented in this section. The complete derivations were presented in detail (for a general function) in Confidence Bounds.

Bounds on the Parameters

One of the properties of maximum likelihood estimators is that they are asymptotically normal, meaning that for large samples they are normally distributed. Additionally, since both the shape parameter estimate, $\hat{\beta } \,\!$, and the scale parameter estimate, $\hat{\eta }, \,\!$ must be positive, $\ln \beta \,\!$ and $\ln \eta \,\!$ are treated as being normally distributed as well. The lower and upper bounds on the parameters are estimated from Nelson [30]:

$\beta _{U} =\hat{\beta }\cdot e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \beta })}}{\hat{\beta }}}\text{ (upper bound)} \,\!$

$\beta _{L} =\frac{\hat{\beta }}{e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \beta })}}{\hat{\beta }}}} \text{ (lower bound)} \,\!$

and:

$\eta _{U} =\hat{\eta }\cdot e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \eta })}}{\hat{\eta }}}\text{ (upper bound)} \,\!$

$\eta _{L} =\frac{\hat{\eta }}{e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \eta })}}{\hat{\eta }}}}\text{ (lower bound)} \,\!$

where $K_{\alpha}\,\!$ is defined by:

$\alpha =\frac{1}{\sqrt{2\pi }}\int_{K_{\alpha }}^{\infty }e^{-\frac{t^{2}}{2} }dt=1-\Phi (K_{\alpha }) \,\!$

If $\delta \,\!$ is the confidence level, then $\alpha =\frac{1-\delta }{2} \,\!$ for the two-sided bounds and $\alpha = 1 - \delta \,\!$ for the one-sided bounds. The variances and covariances of $\hat{\beta }\,\!$ and $\hat{\eta }\,\!$ are estimated from the inverse local Fisher matrix, as follows:

$\left( \begin{array}{cc} \hat{Var}\left( \hat{\beta }\right) & \hat{Cov}\left( \hat{ \beta },\hat{\eta }\right) \\ \hat{Cov}\left( \hat{\beta },\hat{\eta }\right) & \hat{Var} \left( \hat{\eta }\right) \end{array} \right) =\left( \begin{array}{cc} -\frac{\partial ^{2}\Lambda }{\partial \beta ^{2}} & -\frac{\partial ^{2}\Lambda }{\partial \beta \partial \eta } \\ -\frac{\partial ^{2}\Lambda }{\partial \beta \partial \eta } & -\frac{ \partial ^{2}\Lambda }{\partial \eta ^{2}} \end{array} \right) _{\beta =\hat{\beta },\text{ }\eta =\hat{\eta }}^{-1} \,\!$

Fisher Matrix Confidence Bounds and Regression Analysis

Note that the variance and covariance of the parameters are obtained from the inverse Fisher information matrix as described in this section. The local Fisher information matrix is obtained from the second partials of the likelihood function, by substituting the solved parameter estimates into the particular functions. This method is based on maximum likelihood theory and is derived from the fact that the parameter estimates were computed using maximum likelihood estimation methods. When one uses least squares or regression analysis for the parameter estimates, this methodology is then theoretically not applicable. However, if one assumes that the variance and covariance of the parameters will be similar (one also assumes similar properties for both estimators)
regardless of the underlying solution method, then the above methodology can also be used in regression analysis. The Fisher matrix is one of the methodologies that Weibull++ uses for both MLE and regression analysis. Specifically, Weibull++ uses the likelihood function and computes the local Fisher information matrix based on the estimates of the parameters and the current data. This gives consistent confidence bounds regardless of the underlying method of solution (i.e., MLE or regression). In addition, Weibull++ checks this assumption and proceeds with it if it considers it to be acceptable. In some instances, Weibull++ will prompt you with an "Unable to Compute Confidence Bounds" message when using regression analysis. This is an indication that these assumptions were violated.

Bounds on Reliability

The bounds on reliability can easily be derived by first looking at the general extreme value distribution (EVD). Its reliability function is given by:

$R(t)=e^{-e^{\left( \frac{t-p_{1}}{p_{2}}\right) }} \,\!$

By replacing $t \,\!$ with $\ln t\,\!$ and setting $p_{1}=\ln \eta \,\!$ and $p_{2}=\frac{1}{ \beta } \,\!$, the above equation becomes the Weibull reliability function:

$R(t)=e^{-e^{\beta \left( \ln t-\ln \eta \right) }}=e^{-e^{\ln \left( \frac{t }{\eta }\right) ^{\beta }}}=e^{-\left( \frac{t}{\eta }\right) ^{\beta }} \,\!$

with:

$R(T)=e^{-e^{\beta \left( \ln t-\ln \eta \right) }}\,\!$

Now set:

$u=\beta \left( \ln t-\ln \eta \right) \,\!$

The reliability function now becomes:

$R(T)=e^{-e^{u}} \,\!$

The next step is to find the upper and lower bounds on $u\,\!$. Using the equations derived in Confidence Bounds, the bounds on $u\,\!$ are then estimated from Nelson [30]:

$u_{U} =\hat{u}+K_{\alpha }\sqrt{Var(\hat{u})} \,\!$

$u_{L} =\hat{u}-K_{\alpha }\sqrt{Var(\hat{u})} \,\!$

where:

$Var(\hat{u}) =\left( \frac{\partial u}{\partial \beta }\right) ^{2}Var( \hat{\beta })+\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta }) +2\left( \frac{\partial u}{\partial \beta }\right) \left( \frac{\partial u }{\partial \eta }\right) Cov\left( \hat{\beta },\hat{\eta }\right) \,\!$

or:

$Var(\hat{u}) =\frac{\hat{u}^{2}}{\hat{\beta }^{2}}Var(\hat{ \beta })+\frac{\hat{\beta }^{2}}{\hat{\eta }^{2}}Var(\hat{\eta }) -\left( \frac{2\hat{u}}{\hat{\eta }}\right) Cov\left( \hat{\beta }, \hat{\eta }\right). \,\!$

The upper and lower bounds on reliability are:

$R_{U} =e^{-e^{u_{L}}}\text{ (upper bound)}\,\!$

$R_{L} =e^{-e^{u_{U}}}\text{ (lower bound)}\,\!$

Other Weibull Forms

Weibull++ makes the following assumptions/substitutions when using the three-parameter or one-parameter forms:

• For the 3-parameter case, substitute $t=\ln (t-\hat{\gamma }) \,\!$ (and by definition $\gamma \lt t \,\!$), instead of $\ln t\,\!$. (Note that this is an approximation since it eliminates the third parameter and assumes that $Var( \hat{\gamma })=0. \,\!$)
• For the 1-parameter case, $Var(\hat{\beta })=0, \,\!$ thus:

$Var(\hat{u})=\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta })=\left( \frac{\hat{\beta }}{\hat{\eta }}\right) ^{2}Var(\hat{\eta }) \,\!$

Also note that the time axis (x-axis) in the three-parameter Weibull plot in Weibull++ is not ${t}\,\!$ but $t - \gamma\,\!$. This means that one must be cautious when obtaining confidence bounds from the plot. If one desires to estimate the confidence bounds on reliability for a given time ${{t}_{0}}\,\!$ from the adjusted plotted line, then these bounds should be obtained for a ${{t}_{0}} - \gamma\,\!$ entry on the time axis.
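Putting these formulas together with the variance/covariance matrix from the MLE example gives concrete numbers (a sketch; 90% two-sided bounds, so $K_{\alpha }\,\!$ is the 95th standard normal percentile):

```python
import math
from scipy.stats import norm

beta_hat, eta_hat = 1.933, 73.526
var_beta, var_eta, cov_be = 0.4211, 266.646, 3.272   # from the MLE example
K = norm.ppf(0.95)                                   # K_alpha for 90% two-sided

# Bounds on the parameters (log-transformed, so they stay positive)
beta_U = beta_hat * math.exp(K * math.sqrt(var_beta) / beta_hat)
beta_L = beta_hat / math.exp(K * math.sqrt(var_beta) / beta_hat)
eta_U = eta_hat * math.exp(K * math.sqrt(var_eta) / eta_hat)
eta_L = eta_hat / math.exp(K * math.sqrt(var_eta) / eta_hat)

# Bounds on reliability at t = 15 via u = beta*(ln t - ln eta)
t = 15.0
u = beta_hat * (math.log(t) - math.log(eta_hat))
var_u = (u / beta_hat) ** 2 * var_beta + (beta_hat / eta_hat) ** 2 * var_eta \
        - (2.0 * u / eta_hat) * cov_be
R_U = math.exp(-math.exp(u - K * math.sqrt(var_u)))
R_L = math.exp(-math.exp(u + K * math.sqrt(var_u)))
print((beta_L, beta_U), (eta_L, eta_U), (R_L, R_U))
```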
Bounds on Time

The bounds around the time estimate or reliable life estimate, for a given Weibull percentile (unreliability), are estimated by first solving the reliability equation with respect to time, as discussed in Lloyd and Lipow [24] and in Nelson [30]:

$\ln R =-\left( \frac{t}{\eta }\right) ^{\beta } \,\!$

$\ln (-\ln R) =\beta \ln \left( \frac{t}{\eta }\right) \,\!$

\begin{align} \ln (-\ln R) =\beta (\ln t-\ln \eta ) \end{align}\,\!

or:

$u=\frac{1}{\beta }\ln (-\ln R)+\ln \eta \,\!$

where $u = \ln t\,\!$. The upper and lower bounds on $u\,\!$ are estimated from:

$u_{U} =\hat{u}+K_{\alpha }\sqrt{Var(\hat{u})} \,\!$

$u_{L} =\hat{u}-K_{\alpha }\sqrt{Var(\hat{u})} \,\!$

where:

$Var(\hat{u})=\left( \frac{\partial u}{\partial \beta }\right) ^{2}Var( \hat{\beta })+\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta })+2\left( \frac{\partial u}{\partial \beta }\right) \left( \frac{\partial u}{\partial \eta }\right) Cov\left( \hat{\beta },\hat{ \eta }\right) \,\!$

or:

$Var(\hat{u}) =\frac{1}{\hat{\beta }^{4}}\left[ \ln (-\ln R)\right] ^{2}Var(\hat{\beta })+\frac{1}{\hat{\eta }^{2}}Var(\hat{\eta })+2\left( -\frac{\ln (-\ln R)}{\hat{\beta }^{2}}\right) \left( \frac{1}{ \hat{\eta }}\right) Cov\left( \hat{\beta },\hat{\eta }\right) \,\!$

The upper and lower bounds are then found by:

$T_{U} =e^{u_{U}}\text{ (upper bound)} \,\!$

$T_{L} =e^{u_{L}}\text{ (lower bound)} \,\!$

Likelihood Ratio Confidence Bounds

As covered in Confidence Bounds, the likelihood confidence bounds are calculated by finding values for ${{\theta}_{1}}\,\!$ and ${{\theta}_{2}}\,\!$ that satisfy:

$-2\cdot \text{ln}\left( \frac{L(\theta _{1},\theta _{2})}{L(\hat{\theta }_{1}, \hat{\theta }_{2})}\right) =\chi _{\alpha ;1}^{2} \,\!$

This equation can be rewritten as:

$L(\theta _{1},\theta _{2})=L(\hat{\theta }_{1},\hat{\theta } _{2})\cdot e^{\frac{-\chi _{\alpha ;1}^{2}}{2}} \,\!$

For complete data, the likelihood function for the Weibull distribution is given by:

$L(\beta ,\eta )=\prod_{i=1}^{N}f(x_{i};\beta ,\eta )=\prod_{i=1}^{N}\frac{ \beta }{\eta }\cdot \left( \frac{x_{i}}{\eta }\right) ^{\beta -1}\cdot e^{-\left( \frac{x_{i}}{\eta }\right) ^{\beta }} \,\!$

For a given value of $\alpha\,\!$, values for $\beta\,\!$ and $\eta\,\!$ can be found which represent the maximum and minimum values that satisfy the above equation. These represent the confidence bounds for the parameters at a confidence level $\delta\,\!$, where $\alpha = \delta\,\!$ for two-sided bounds and $\alpha = 2\delta - 1\,\!$ for one-sided. Similarly, the bounds on time and reliability can be found by substituting the Weibull reliability equation into the likelihood function so that it is in terms of $\beta\,\!$ and time or reliability, as discussed in Confidence Bounds.
The likelihood ratio equation used to solve for bounds on time (Type 1) is:

$L(\beta ,t)=\prod_{i=1}^{N}\frac{\beta }{\left( \frac{t}{(-\text{ln}(R))^{ \frac{1}{\beta }}}\right) }\cdot \left( \frac{x_{i}}{\left( \frac{t}{(-\text{ln}(R))^{\frac{1}{\beta }}}\right) }\right) ^{\beta -1}\cdot \text{exp}\left[ -\left( \frac{x_{i}}{\left( \frac{t}{(-\text{ln}(R))^{\frac{1}{\beta }}} \right) }\right) ^{\beta }\right] \,\!$

The likelihood ratio equation used to solve for bounds on reliability (Type 2) is:

$L(\beta ,R)=\prod_{i=1}^{N}\frac{\beta }{\left( \frac{t}{(-\text{ln}(R))^{ \frac{1}{\beta }}}\right) }\cdot \left( \frac{x_{i}}{\left( \frac{t}{(-\text{ln}(R))^{\frac{1}{\beta }}}\right) }\right) ^{\beta -1}\cdot \text{exp}\left[ -\left( \frac{x_{i}}{\left( \frac{t}{(-\text{ln}(R))^{\frac{1}{\beta }}} \right) }\right) ^{\beta }\right] \,\!$

Bayesian Confidence Bounds

Bounds on Parameters

Bayesian bounds use non-informative prior distributions for both parameters. From Confidence Bounds, we know that if the prior distributions of $\eta\,\!$ and $\beta\,\!$ are independent, the posterior joint distribution of $\eta\,\!$ and $\beta\,\!$ can be written as:

$f(\eta ,\beta |Data)= \dfrac{L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )}{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\eta d\beta } \,\!$

The marginal distribution of $\eta\,\!$ is:

$f(\eta |Data) =\int_{0}^{\infty }f(\eta ,\beta |Data)d\beta = \dfrac{\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\eta d\beta } \,\!$

where:

$\varphi (\beta )=\frac{1}{\beta } \,\!$ is the non-informative prior of $\beta\,\!$.

$\varphi (\eta )=\frac{1}{\eta } \,\!$ is the non-informative prior of $\eta\,\!$.

Using these non-informative prior distributions, $f(\eta|Data)\,\!$ can be rewritten as:

$f(\eta |Data)=\dfrac{\int_{0}^{\infty }L(Data|\eta ,\beta )\frac{1}{\beta } \frac{1}{\eta }d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } \,\!$

The one-sided upper bound of $\eta\,\!$ is:

$CL=P(\eta \leq \eta _{U})=\int_{0}^{\eta _{U}}f(\eta |Data)d\eta \,\!$

The one-sided lower bound of $\eta\,\!$ is:

$1-CL=P(\eta \leq \eta _{L})=\int_{0}^{\eta _{L}}f(\eta |Data)d\eta \,\!$

The two-sided bounds of $\eta\,\!$ are:

$CL=P(\eta _{L}\leq \eta \leq \eta _{U})=\int_{\eta _{L}}^{\eta _{U}}f(\eta |Data)d\eta \,\!$

The same method is used to obtain the bounds of $\beta\,\!$.

Bounds on Reliability

$CL=\Pr (R\leq R_{U})=\Pr (\eta \leq T\exp (-\frac{\ln (-\ln R_{U})}{\beta })) \,\!$

From the posterior distribution of $\eta\,\!$ we have:

$CL=\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{T\exp (-\dfrac{\ln (-\ln R_{U})}{\beta })}L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } \,\!$

The above equation is solved numerically for ${{R}_{U}}\,\!$. The same method can be used to calculate the one-sided lower bounds and two-sided bounds on reliability.
Bounds on Time

From Confidence Bounds, we know that:

$CL=\Pr (T\leq T_{U})=\Pr (\eta \leq T_{U}\exp (-\frac{\ln (-\ln R)}{\beta })) \,\!$

From the posterior distribution of $\eta\,\!$, we have:

$CL=\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{T_{U}\exp (-\dfrac{ \ln (-\ln R)}{\beta })}L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } \,\!$

The above equation is solved numerically for ${{T}_{U}}\,\!$. The same method can be applied to calculate one-sided lower bounds and two-sided bounds on time.

Bayesian-Weibull Analysis

The Bayesian methods presented next are for the 2-parameter Weibull distribution. Bayesian concepts were introduced in Parameter Estimation. This model considers prior knowledge on the shape ($\beta\,\!$) parameter of the Weibull distribution when it is chosen to be fitted to a given set of data. There are many practical applications for this model, particularly when dealing with small sample sizes and some prior knowledge for the shape parameter is available. For example, when a test is performed, there is often a good understanding about the behavior of the failure mode under investigation, primarily through historical data. At the same time, most reliability tests are performed on a limited number of samples. Under these conditions, it would be very useful to use this prior knowledge with the goal of making more accurate predictions. A common approach for such scenarios is to use the 1-parameter Weibull distribution, but this approach is too deterministic, too absolute you may say (and you would be right). The Bayesian-Weibull model in Weibull++ (which is actually a true "WeiBayes" model, unlike the 1-parameter Weibull that is commonly referred to as such) offers an alternative to the 1-parameter Weibull, by including the variation and uncertainty that might have been observed in the past on the shape parameter.

Applying Bayes's rule on the 2-parameter Weibull distribution and assuming the prior distributions of $\beta\,\!$ and $\eta\,\!$ are independent, we obtain the following posterior pdf:

$f(\beta ,\eta |Data)=\dfrac{L(\beta ,\eta )\varphi (\beta )\varphi (\eta )}{ \int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta } \,\!$

In this model, $\eta\,\!$ is assumed to follow a noninformative prior distribution with the density function $\varphi (\eta )=\dfrac{1}{\eta } \,\!$. This is called the Jeffreys prior, and is obtained by performing a logarithmic transformation on $\eta\,\!$. Specifically, since $\eta\,\!$ is always positive, we can assume that $\ln (\eta)\,\!$ follows a uniform distribution, $U(-\infty, +\infty)\,\!$. Applying Jeffreys' rule as given in Gelman et al. [9], which says "in general, an approximate non-informative prior is taken proportional to the square root of Fisher's information," yields $\varphi (\eta )=\dfrac{1}{\eta }\,\!$.

The prior distribution of $\beta\,\!$, denoted as $\varphi (\beta )\,\!$, can be selected from the following distributions: normal, lognormal, exponential and uniform. The procedure of performing a Bayesian-Weibull analysis is as follows:

• Collect the times-to-failure data.
• Specify a prior distribution for $\beta\,\!$ (the prior for $\eta\,\!$ is assumed to be $1/\eta\,\!$).
• Obtain the posterior pdf from the above equation.
In other words, a distribution (the posterior pdf) is obtained, rather than a point estimate as in classical statistics (i.e., as in the parameter estimation methods described previously in this chapter). Therefore, if a point estimate needs to be reported, a point of the posterior pdf needs to be calculated. Typical points of the posterior distribution used are the mean (expected value) or the median. In Weibull++, both options are available and can be chosen from the Analysis page, under the Results As area, as shown next. The expected value of $\beta\,\!$ is obtained by: $E(\beta )=\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }\beta \cdot f(\beta ,\eta |Data)d\beta d\eta \,\!$ Similarly, the expected value of $\eta\,\!$ is obtained by: $E(\eta )=\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }\eta \cdot f(\beta ,\eta |Data)d\beta d\eta \,\!$ The median points are obtained by solving the following equations for $\breve{\beta} \,\!$ and $\breve{\eta} \,\!$ respectively: $\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\breve{\beta}}f(\beta ,\eta |Data)d\beta d\eta =0.5 \,\!$ and: $\int\nolimits_{0}^{\breve{\eta}}\int\nolimits_{0}^{\infty }f(\beta ,\eta |Data)d\beta d\eta =0.5 \,\!$ Of course, other points of the posterior distribution can be calculated as well. For example, one may want to calculate the 10th percentile of the joint posterior distribution (w.r.t. one of the parameters). The procedure for obtaining other points of the posterior distribution is similar to the one for obtaining the median values, where instead of 0.5 the percentage of interest is used. This procedure actually provides the confidence bounds on the parameters, which in the Bayesian framework are called "credible bounds." However, since the engineering interpretation is the same, and to avoid confusion, we refer to them as confidence bounds in this reference and in Weibull++.

Posterior Distributions for Functions of Parameters

As explained in Parameter Estimation, in Bayesian analysis all of the functions of the parameters are distributed. In other words, a posterior distribution is obtained for functions such as reliability and failure rate, instead of a point estimate as in classical statistics. Therefore, in order to obtain a point estimate for these functions, a point on the posterior distribution needs to be calculated. Again, the expected value (mean) or the median value are used. It is important to note that the median value is preferable, and is the default in Weibull++, because the median always corresponds to the 50th percentile of the distribution; the mean, by contrast, does not correspond to a fixed percentile, which could cause issues, especially when comparing results across different data sets.

pdf of the Times-to-Failure

The posterior distribution of the failure time $t\,\!$ is given by: $f(T|Data)=\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }f(T,\beta ,\eta )f(\beta ,\eta |Data)d\eta d\beta \,\!$ where: $f(T,\beta ,\eta )=\dfrac{\beta }{\eta }\left( \dfrac{T}{\eta }\right) ^{\beta -1}e^{-\left( \dfrac{T}{\eta }\right) ^{\beta }} \,\!$ For the pdf of the times-to-failure, only the expected value is calculated and reported in Weibull++.

Reliability

In order to calculate the median value of the reliability function, we first need to obtain the posterior pdf of the reliability.
Since $R(T)\,\!$ is a function of $\beta\,\!$, the density functions of $\beta\,\!$ and $R(T)\,\!$ have the following relationship: \begin{align} f(R|Data,T)dR = & f(\beta |Data)d\beta\\ = & \left(\int\nolimits_{0}^{\infty }f(\beta ,\eta |Data)d{\eta}\right) d{\beta} \\ =& \dfrac{\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }d\beta \end{align}\,\! The median value of the reliability is obtained by solving the following equation w.r.t. $\breve{R}\,\!$: $\int\nolimits_{0}^{\breve{R}}f(R|Data,T)dR=0.5 \,\!$ The expected value of the reliability at time $t\,\!$ is given by: $R(T|Data)=\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }R(T,\beta ,\eta )f(\beta ,\eta |Data)d\eta d\beta \,\!$ where: $R(T,\beta ,\eta )=e^{-\left( \dfrac{T}{\eta }\right) ^{\beta }} \,\!$

Failure Rate

The failure rate at time $T\,\!$ is given by: $\lambda (T|Data)=\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }\lambda (T,\beta ,\eta )L(\beta ,\eta )\varphi (\eta )\varphi (\beta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\eta )\varphi (\beta )d\eta d\beta } \,\!$ where: $\lambda (T,\beta ,\eta )=\dfrac{\beta }{\eta }\left( \dfrac{T}{\eta }\right) ^{\beta -1} \,\!$

Bounds on Reliability for Bayesian-Weibull

The confidence bounds calculation under the Bayesian-Weibull analysis is very similar to the Bayesian confidence bounds method described in the previous section, with the exception that in the case of the Bayesian-Weibull analysis the specified prior of $\beta\,\!$ is considered instead of a non-informative prior. The Bayesian one-sided upper bound estimate for $R(T)\,\!$ is given by: $\int\nolimits_{0}^{R_{U}(T)}f(R|Data,t)dR=CL \,\!$ Using the posterior distribution, the following is obtained: $\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{t\exp (-\dfrac{\ln (-\ln R_{U})}{\beta })}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }=CL \,\!$ The above equation can be solved for ${{R}_{U}}(t)\,\!$. The Bayesian one-sided lower bound estimate for $R(t)\,\!$ is given by: $\int\nolimits_{0}^{R_{L}(t)}f(R|Data,t)dR=1-CL \,\!$ Using the posterior distribution, the following is obtained: $\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{t\exp (-\dfrac{\ln (-\ln R_{L})}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }=1-CL \,\!$ The above equation can be solved for ${{R}_{L}}(t)\,\!$. The Bayesian two-sided bounds estimate for $R(t)\,\!$ is given by: $\int\nolimits_{R_{L}(t)}^{R_{U}(t)}f(R|Data,t)dR=CL \,\!$ which is equivalent to: $\int\nolimits_{0}^{R_{U}(t)}f(R|Data,t)dR=(1+CL)/2 \,\!$ and: $\int\nolimits_{0}^{R_{L}(t)}f(R|Data,t)dR=(1-CL)/2 \,\!$ Using the same method as for the one-sided bounds, ${{R}_{U}}(t)\,\!$ and ${{R}_{L}}(t)\,\!$ can be computed.

Bounds on Time for Bayesian-Weibull

Following the same procedure described for the bounds on reliability, the bounds on time $t\,\!$ can be calculated, given $R\,\!$.
The Bayesian one-sided upper bound estimate for $t(R)\,\!$ is given by: $\int\nolimits_{0}^{T_{U}(R)}f(T|Data,R)dT=CL \,\!$ Using the posterior distribution, the following is obtained: $\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{T_{U}\exp (-\dfrac{\ln (-\ln R)}{\beta })}L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }=CL \,\!$ The above equation can be solved for ${{T}_{U}}(R)\,\!$. The Bayesian one-sided lower bound estimate for $T(R)\,\!$ is given by: $\int\nolimits_{0}^{T_{L}(R)}f(T|Data,R)dT=1-CL \,\!$ or: $\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{T_{L}\exp (\dfrac{-\ln (-\ln R)}{\beta })}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\varphi (\beta )\varphi (\eta )d\eta d\beta }=CL \,\!$ The above equation can be solved for ${{T}_{L}}(R)\,\!$. The Bayesian two-sided bounds estimate for $T(R)\,\!$ is: $\int\nolimits_{T_{L}(R)}^{T_{U}(R)}f(T|Data,R)dT=CL \,\!$ which is equivalent to: $\int\nolimits_{0}^{T_{U}(R)}f(T|Data,R)dT=(1+CL)/2 \,\!$ and: $\int\nolimits_{0}^{T_{L}(R)}f(T|Data,R)dT=(1-CL)/2 \,\!$

Bayesian-Weibull Example

A manufacturer has tested prototypes of a modified product. The test was terminated at 2,000 hours, with only 2 failures observed from a sample size of 18. The following table shows the data.

| Number in State | State (F or S) | State End Time |
|---|---|---|
| 1 | F | 1180 |
| 1 | F | 1842 |
| 16 | S | 2000 |

Because of the lack of failure data in the prototype testing, the manufacturer decided to use information gathered from prior tests on this product to increase the confidence in the results of the prototype testing. This decision was made because failure analysis indicated that the failure mode of the two failures is the same as the one that was observed in previous tests. In other words, it is expected that the shape of the distribution (beta) hasn't changed, but hopefully the scale (eta) has, indicating longer life. The 2-parameter Weibull distribution was used to model all prior test results. The estimated beta ($\beta\,\!$) parameters of the prior test results, all obtained for the same failure mode, are: 1.7, 2.1, 2.4, 3.1 and 3.5.

Solution

First, in order to fit the data to a Bayesian-Weibull model, a prior distribution for beta needs to be determined. Based on the beta values in the prior tests, the prior distribution for beta is found to be a lognormal distribution with $\mu = 0.9064\,\!$, $\sigma = 0.3325\,\!$. (The values of the parameters can be obtained by entering the beta values into a Weibull++ standard folio and analyzing it using the lognormal distribution and the RRX analysis method; a rough moment-based cross-check is sketched at the end of this example.) Next, enter the data from the prototype testing into a standard folio. On the control panel, choose the Bayesian-Weibull > B-W Lognormal Prior distribution. Click Calculate and enter the parameters of the lognormal distribution, as shown next. Click OK. The result is Beta (Median) = 2.361219 and Eta (Median) = 5321.631912 (by default Weibull++ returns the median values of the posterior distribution). Suppose that the reliability at 3,000 hours is the metric of interest in this example. Using the QCP, the reliability is calculated to be 76.97% at 3,000 hours.
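As a quick sanity check of the prior (and not the RRX regression that Weibull++ actually performs), one can take moments of the logarithms of the five historical betas; the log-mean lands on the quoted mu ~ 0.9064, while a moment-based sigma will differ somewhat from the regression-based 0.3325:

```python
# Rough moment-based cross-check of the lognormal prior on beta.
# This is NOT the RRX procedure; expect mu to match 0.9064 closely,
# while sigma from moments will not exactly reproduce 0.3325.
import numpy as np

betas = np.array([1.7, 2.1, 2.4, 3.1, 3.5])
logb = np.log(betas)
mu = logb.mean()
sigma = logb.std(ddof=1)
print(f"mu ~ {mu:.4f}, sigma ~ {sigma:.4f}")  # mu ~ 0.9064
```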
The following picture depicts the posterior pdf plot of the reliability at 3,000 hours, with the corresponding median value as well as the 10th percentile value. The 10th percentile constitutes the 90% lower 1-sided bound on the reliability at 3,000 hours, which is calculated to be 50.77%. The pdf of the times-to-failure data can be plotted in Weibull++, as shown next:

Weibull Distribution Examples

Median Rank Plot Example

In this example, we will determine the median rank value used for plotting the 6th failure from a sample size of 10. This example will use Weibull++'s Quick Statistical Reference (QSR) tool to show how the points in the plot of the following example are calculated. First, open the Quick Statistical Reference tool and select the Inverse F-Distribution Values option. In this example, n1 = 10, j = 6, m = 2(10 - 6 + 1) = 10, and n2 = 2 × 6 = 12. Thus, from the F-distribution rank equation: $MR=\frac{1}{1+\left( \frac{10-6+1}{6} \right){{F}_{0.5;10;12}}}\,\!$ Use the QSR to calculate the value of F0.5;10;12 = 0.9886, as shown next. Consequently: $MR=\frac{1}{1+\left( \frac{5}{6} \right)\times 0.9886}=0.5483=54.83\%\,\!$ Another method is to use the Median Ranks option directly, which yields MR(%) = 54.8305%, as shown next:

Complete Data Example

Assume that 10 identical units (N = 10) are being reliability tested at the same application and operation stress levels. 6 of these units fail during this test after operating the following numbers of hours, ${T}_{j}\,\!$: 150, 105, 83, 123, 64 and 46. The test is stopped at the 6th failure. Find the parameters of the Weibull pdf that represents these data.

Solution

Create a new Weibull++ standard folio that is configured for grouped times-to-failure data with suspensions. Enter the data in the appropriate columns. Note that there are 4 suspensions, as only 6 of the 10 units were tested to failure (the next figure shows the data as entered). Use the 3-parameter Weibull and MLE for the calculations. Plot the data. Note that the original data points, on the curved line, were adjusted by subtracting 30.92 hours to yield a straight line as shown above.

Suspension Data Example

ACME company manufactures widgets, and it is currently engaged in reliability testing a new widget design. 19 units are being reliability tested, but due to the tremendous demand for widgets, units are removed from the test whenever the production cannot cover the demand. The test is terminated at the 67th day when the last widget is removed from the test. The following table contains the collected data.

| Data Point Index | State (F/S) | Time to Failure |
|---|---|---|
| 1 | F | 2 |
| 2 | S | 3 |
| 3 | F | 5 |
| 4 | S | 7 |
| 5 | F | 11 |
| 6 | S | 13 |
| 7 | S | 17 |
| 8 | S | 19 |
| 9 | F | 23 |
| 10 | F | 29 |
| 11 | S | 31 |
| 12 | F | 37 |
| 13 | S | 41 |
| 14 | F | 43 |
| 15 | S | 47 |
| 16 | S | 53 |
| 17 | F | 59 |
| 18 | S | 61 |
| 19 | S | 67 |

Solution

In this example, we see that the number of failures is less than the number of suspensions. This is a very common situation, since reliability tests are often terminated before all units fail due to financial or time constraints. Furthermore, some suspensions will be recorded when a failure occurs that is not due to a legitimate failure mode, such as operator error. In cases such as this, a suspension is recorded, since the unit under test cannot be said to have had a legitimate failure. Enter the data into a Weibull++ standard folio that is configured for times-to-failure data with suspensions. The folio will appear as shown next:

We will use the 2-parameter Weibull to solve this problem. The parameters using maximum likelihood are: \begin{align} & \hat{\beta }=1.145 \\ & \hat{\eta }=65.97 \\ \end{align}\,\! Using RRX: \begin{align} & \hat{\beta }=0.914\\ & \hat{\eta }=79.38 \\ \end{align}\,\!
Using RRY: \begin{align} & \hat{\beta }=0.895\\ & \hat{\eta }=82.02 \\ \end{align}\,\!

Interval Data Example

Suppose we have run an experiment with 8 units tested and the following is a table of their last inspection times and failure times:

| Data Point Index | Last Inspection | Failure Time |
|---|---|---|
| 1 | 30 | 32 |
| 2 | 32 | 35 |
| 3 | 35 | 37 |
| 4 | 37 | 40 |
| 5 | 42 | 42 |
| 6 | 45 | 45 |
| 7 | 50 | 50 |
| 8 | 55 | 55 |

Analyze the data using several different parameter estimation techniques and compare the results.

Solution

Enter the data into a Weibull++ standard folio that is configured for interval data. The data is entered as follows:

The computed parameters using maximum likelihood are: \begin{align} & \hat{\beta }=5.76 \\ & \hat{\eta }=44.68 \\ \end{align}\,\! Using RRX or rank regression on X: \begin{align} & \hat{\beta }=5.70 \\ & \hat{\eta }=44.54 \\ \end{align}\,\! Using RRY or rank regression on Y: \begin{align} & \hat{\beta }=5.41 \\ & \hat{\eta }=44.76 \\ \end{align}\,\! The plot of the MLE solution with the two-sided 90% confidence bounds is:

Mixed Data Types Example

From Dimitri Kececioglu, Reliability & Life Testing Handbook, Page 406 [20]. Estimate the parameters for the 3-parameter Weibull, for a sample of 10 units that are all tested to failure. The recorded failure times are 200; 370; 500; 620; 730; 840; 950; 1,050; 1,160 and 1,400 hours.

Published results (using probability plotting): ${\widehat{\beta}} = 3.0\,\!$, ${\widehat{\eta}} = 1,220\,\!$, ${\widehat{\gamma}} = -300\,\!$

Weibull++ computed parameters for rank regression on X are: ${\widehat{\beta}} = 2.9013\,\!$, ${\widehat{\eta}} = 1195.5009\,\!$, ${\widehat{\gamma}} = -279.000\,\!$

The small difference between the published results and the ones obtained from Weibull++ is due to the difference in the estimation method. In the publication the parameters were estimated using probability plotting (i.e., the fitted line was "eye-balled"). In Weibull++, the parameters were estimated using non-linear regression (a more accurate, mathematically fitted line). Note that γ in this example is negative. This means that the line, when not adjusted for γ, is concave up, as shown next.

Weibull Distribution RRX Example

Assume that 6 identical units are being tested. The failure times are: 93, 34, 16, 120, 53 and 75 hours.

1. What is the unreliability of the units for a mission duration of 30 hours, starting the mission at age zero?
2. What is the reliability for a mission duration of 10 hours, starting the new mission at the age of T = 30 hours?
3. What is the longest mission that this product should undertake for a reliability of 90%?

Solution

1. First, we use Weibull++ to obtain the parameters using RRX. Then, we investigate several methods of solution for this problem. The first, and more laborious, method is to extract the information directly from the plot. You may do this with either the screen plot in RS Draw or the printed copy of the plot. (When extracting information from the screen plot in RS Draw, note that the translated axis position of your mouse is always shown on the bottom right corner.) Using this first method, enter either the screen plot or the printed plot with T = 30 hours, go up vertically to the straight line fitted to the data, then go horizontally to the ordinate, and read off the result. A good estimate of the unreliability is 23%. (Also, the reliability estimate is 1.0 - 0.23 = 0.77 or 77%.) The second method involves the use of the Quick Calculation Pad (QCP).
Select the Prob. of Failure calculation option and enter 30 hours in the Mission End Time field. Note that the results in the QCP vary according to the parameter estimation method used. The above results are obtained using RRX.

2. The conditional reliability is given by: $R(t|T)=\frac{R(T+t)}{R(T)}\,\!$ or: $\hat{R}(10hr|30hr)=\frac{\hat{R}(10+30)}{\hat{R}(30)}=\frac{\hat{R}(40)}{\hat{R}(30)}\,\!$ Again, the QCP can provide this result directly and more accurately than the plot.

3. To use the QCP to solve for the longest mission that this product should undertake for a reliability of 90%, choose Reliable Life and enter 0.9 for the required reliability. The result is 15.9933 hours.

Benchmark with Published Examples

The following examples compare published results to computed results obtained with Weibull++.

Complete Data RRY Example

From Dimitri Kececioglu, Reliability & Life Testing Handbook, Page 418 [20]. Sample of 10 units, all tested to failure. The failures were recorded at 16, 34, 53, 75, 93, 120, 150, 191, 240 and 339 hours.

Published results (using rank regression on Y): \begin{align} & \widehat{\beta }=1.20 \\ & \widehat{\eta} = 146.2 \\ & \hat{\rho }=0.998703\\ \end{align}\,\!

Computed results in Weibull++: This same data set can be entered into a Weibull++ standard data sheet. Use RRY for the estimation method. Weibull++ computed parameters for RRY are: \begin{align} & \widehat{\beta }=1.1973 \\ & \widehat{\eta} = 146.2545 \\ & \hat{\rho }=0.9999\\ \end{align}\,\!

The small difference between the published results and the ones obtained from Weibull++ is due to the difference in the median rank values between the two (in the publication, median ranks are obtained from tables to 3 decimal places, whereas in Weibull++ they are calculated and carried to the 15th decimal place). You will also notice that in the examples that follow, a small difference may exist between the published results and the ones obtained from Weibull++. This can be attributed to the difference between the computer numerical precision employed by Weibull++ and the lower number of significant digits used by the original authors. In most of these publications, no information was given as to the numerical precision used.

Suspension Data MLE Example

From Wayne Nelson, Fan Example, Applied Life Data Analysis, page 317 [30]. 70 diesel engine fans accumulated 344,440 hours in service and 12 of them failed. A table of their life data is shown next (+ denotes non-failed units or suspensions, using Dr. Nelson's nomenclature). Evaluate the parameters with their two-sided 95% confidence bounds, using MLE for the 2-parameter Weibull distribution.

Published Weibull parameters (2P-Weibull, MLE): \begin{align} & \widehat{\beta }=1.0584 \\ & \widehat{\eta} = 26,296 \\ \end{align}\,\!

Published 95% FM confidence limits on the parameters: \begin{align} & \widehat{\beta }=\lbrace 0.6441, \text{ }1.7394\rbrace \\ & \widehat{\eta} = \lbrace 10,522, \text{ }65,532\rbrace \\ \end{align}\,\!

Published variance/covariance matrix:

Note that Nelson expresses the results as multiples of 1,000 (e.g., $\widehat{\eta} = 26.297\,\!$, etc.). The published results were adjusted by this factor to correlate with Weibull++ results.

Computed results in Weibull++: This same data set can be entered into a Weibull++ standard folio, using 2-parameter Weibull and MLE to calculate the parameter estimates. You can also enter the data as given in the table, without grouping, by opening a data sheet configured for suspension data.
Then click the Group Data icon and choose Group exactly identical values. The data will be automatically grouped and put into a new grouped data sheet. Weibull++ computed parameters for maximum likelihood are: \begin{align} & \widehat{\beta }=1.0584 \\ & \widehat{\eta} = 26,297 \\ \end{align}\,\!

Weibull++ computed 95% FM confidence limits on the parameters: \begin{align} & \widehat{\beta }=\lbrace 0.6441, \text{ }1.7394\rbrace \\ & \widehat{\eta} = \lbrace 10,522, \text{ }65,532\rbrace \\ \end{align}\,\!

Weibull++ computed variance/covariance matrix:

The two-sided 95% bounds on the parameters can be determined from the QCP. Calculate and then click Report to see the results.

Interval Data MLE Example

From Wayne Nelson, Applied Life Data Analysis, Page 415 [30]. 167 identical parts were inspected for cracks. The following is a table of their last inspection times and times-to-failure:

Published results (using MLE): \begin{align} & \widehat{\beta }=1.486 \\ & \widehat{\eta} = 71.687\\ \end{align}\,\!

Published 95% FM confidence limits on the parameters: \begin{align} & \widehat{\beta }=\lbrace 1.224, \text{ }1.802\rbrace \\ & \widehat{\eta} = \lbrace 61.962, \text{ }82.938\rbrace \\ \end{align}\,\!

Published variance/covariance matrix:

Computed results in Weibull++: This same data set can be entered into a Weibull++ standard folio that's configured for grouped times-to-failure data with suspensions and interval data. Weibull++ computed parameters for maximum likelihood are: \begin{align} & \widehat{\beta }=1.485 \\ & \widehat{\eta} = 71.690\\ \end{align}\,\!

Weibull++ computed 95% FM confidence limits on the parameters: \begin{align} & \widehat{\beta }=\lbrace 1.224, \text{ }1.802\rbrace \\ & \widehat{\eta} = \lbrace 61.961, \text{ }82.947\rbrace \\ \end{align}\,\!

Weibull++ computed variance/covariance matrix:

Grouped Suspension MLE Example

From Dallas R. Wingo, IEEE Transactions on Reliability Vol. R-22, No. 2, June 1973, Pages 96-100. Wingo uses the following times-to-failure: 37, 55, 64, 72, 74, 87, 88, 89, 91, 92, 94, 95, 97, 98, 100, 101, 102, 102, 105, 105, 107, 113, 117, 120, 120, 120, 122, 124, 126, 130, 135, 138, 182. In addition, the following suspensions are used: 4 at 70, 5 at 80, 4 at 99, 3 at 121 and 1 at 150.

Published results (using MLE): \begin{align} & \widehat{\beta }=3.7596935\\ & \widehat{\eta} = 106.49758 \\ & \hat{\gamma }=14.451684\\ \end{align}\,\!

Computed results in Weibull++: \begin{align} & \widehat{\beta }=3.7596935\\ & \widehat{\eta} = 106.49758 \\ & \hat{\gamma }=14.451684\\ \end{align}\,\!

Note that you must select the Use True 3-P MLE option in the Weibull++ Application Setup to replicate these results.

3-P Probability Plot Example

Suppose we want to model a left censored, right censored, interval, and complete data set, consisting of 274 units under test of which 185 units fail. The following table contains the data.

| Data Point Index | Number in State | Last Inspection | State (S or F) | State End Time |
|---|---|---|---|---|
| 1 | 2 | 5 | F | 5 |
| 2 | 23 | 5 | S | 5 |
| 3 | 28 | 0 | F | 7 |
| 4 | 4 | 10 | F | 10 |
| 5 | 7 | 15 | F | 15 |
| 6 | 8 | 20 | F | 20 |
| 7 | 29 | 20 | S | 20 |
| 8 | 32 | 0 | F | 22 |
| 9 | 6 | 25 | F | 25 |
| 10 | 4 | 27 | F | 30 |
| 11 | 8 | 30 | F | 35 |
| 12 | 5 | 30 | F | 40 |
| 13 | 9 | 27 | F | 45 |
| 14 | 7 | 25 | F | 50 |
| 15 | 5 | 20 | F | 55 |
| 16 | 3 | 15 | F | 60 |
| 17 | 6 | 10 | F | 65 |
| 18 | 3 | 5 | F | 70 |
| 19 | 37 | 100 | S | 100 |
| 20 | 48 | 0 | F | 102 |

Solution

Since standard ranking methods for dealing with these different data types are inadequate, we will want to use the ReliaSoft ranking method. This option is the default in Weibull++ when dealing with interval data.
The filled-out standard folio is shown next: The computed parameters using MLE are: $\hat{\beta }=0.748;\text{ }\hat{\eta }=44.38\,\!$ Using RRX: $\hat{\beta }=1.057;\text{ }\hat{\eta }=36.29\,\!$ Using RRY: $\hat{\beta }=0.998;\text{ }\hat{\eta }=37.16\,\!$ The plot with the two-sided 90% confidence bounds for the rank regression on X solution is:
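The closed-form quantities used throughout the examples above (reliability, conditional reliability, and reliable life) are easy to compute directly once beta and eta are known. The helper functions below are a sketch; the parameter values at the bottom are placeholders, not the fitted values of any particular data set above:

```python
# Closed-form Weibull helpers for the quantities used in the examples.
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def conditional_reliability(t, T, beta, eta):
    """R(t|T) = R(T+t)/R(T): survive t more hours given survival to age T."""
    return weibull_reliability(T + t, beta, eta) / weibull_reliability(T, beta, eta)

def reliable_life(R, beta, eta):
    """Mission length t with R(t) = R, i.e., t = eta * (-ln R)^(1/beta)."""
    return eta * (-math.log(R)) ** (1.0 / beta)

beta, eta = 1.057, 36.29  # illustrative values only
print(f"R(30)       ~ {weibull_reliability(30, beta, eta):.3f}")
print(f"R(10|30)    ~ {conditional_reliability(10, 30, beta, eta):.3f}")
print(f"t at R=0.90 ~ {reliable_life(0.90, beta, eta):.2f} h")
```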
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 28, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9776861667633057, "perplexity": 645.180327397754}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501555.34/warc/CC-MAIN-20230209081052-20230209111052-00118.warc.gz"}
https://aerospaceanswers.com/?sort=vote
All Questions

• Poll: Supersonic flow has a Mach number (29th June 2020, Aeronautics) • 135 • 0 • 2
• Calculate the percentage of the profile drag coefficient that is due to pressure drag. (18th February 2019, Aerodynamics) • 837 • 1 • 1
• (26th December 2018, Aeronautics) • 651 • 1 • 0
• Calculate velocity on the airfoil. (28th December 2018, Aerodynamics) • 809 • 1 • 0
• What will be the pressure at downstream of the streamline. (28th December 2018, Fluid dynamics) • 517 • 1 • 0
• Find circulations around the cylinder. (28th December 2018, Aerodynamics) • 492 • 1 • 0
• How to find locations on the surface of the cylinder for the following. (29th December 2018, Aerodynamics) • 458 • 1 • 0
• Derive an expression for the pressure coefficient. (29th December 2018, Aerodynamics) • 449 • 1 • 0
• Explain if there is any change in the shape of streamlines. (29th December 2018, Aerodynamics) • 410 • 1 • 0
• Calculate the peak coefficient of pressure over a circular cylinder. (30th December 2018, Aerodynamics) • 511 • 1 • 0
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8790820240974426, "perplexity": 13223.109013867292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487625967.33/warc/CC-MAIN-20210616155529-20210616185529-00006.warc.gz"}
https://mizugadro.mydns.jp/t/index.php?title=LaguerreL&oldid=28285&printable=yes
# LaguerreL

The identifier LaguerreL is used in Mathematica to denote the Laguerre polynomials and the associated Laguerre functions.[1][2]

## Definitions

The Laguerre polynomial can be defined by

$$\displaystyle \mathrm{LaguerreL}[n,x]= L_n(x)=\sum_{m=0}^n \frac{(-1)^m}{m!} \mathrm{Binomial}(n,m) \, x^m =\sum_{m=0}^n \frac{(-1)^m}{m!} \, \frac{n!}{m! \, (n\!-\!m)!} \, x^m$$

The associated Laguerre function appears as

$$\displaystyle \mathrm{LaguerreL}[n,k,x]= L_n^k(x)=(-1)^k \, \partial_x^k L_{n+k}(x) =\sum_{m=0}^n (-1)^m \frac{ (n\!+\!k)!}{(n\!-\!m)!\, (k\!+\!m)!\, m!} \, x^m$$

For non-negative integer values of the second argument, it is also a polynomial. In particular, the 0th associated Laguerre function is just the Laguerre polynomial.

## First polynomials

TeXForm[TableForm[Table[Table[LaguerreL[n, m, z], {n, 0, 5}], {m, 0, 8}]]] gives the following table:

$$\begin{array}{cccccc} 1 & 1-z & \frac{1}{2} \left(z^2-4 z+2\right) & \frac{1}{6} \left(-z^3+9 z^2-18 z+6\right) & \frac{1}{24} \left(z^4-16 z^3+72 z^2-96 z+24\right) & \frac{1}{120} \left(-z^5+25 z^4-200 z^3+600 z^2-600 z+120\right) \\ 1 & 2-z & \frac{1}{2} \left(z^2-6 z+6\right) & \frac{1}{6} \left(-z^3+12 z^2-36 z+24\right) & \frac{1}{24} \left(z^4-20 z^3+120 z^2-240 z+120\right) & \frac{1}{120} \left(-z^5+30 z^4-300 z^3+1200 z^2-1800 z+720\right) \\ 1 & 3-z & \frac{1}{2} \left(z^2-8 z+12\right) & \frac{1}{6} \left(-z^3+15 z^2-60 z+60\right) & \frac{1}{24} \left(z^4-24 z^3+180 z^2-480 z+360\right) & \frac{1}{120} \left(-z^5+35 z^4-420 z^3+2100 z^2-4200 z+2520\right) \\ 1 & 4-z & \frac{1}{2} \left(z^2-10 z+20\right) & \frac{1}{6} \left(-z^3+18 z^2-90 z+120\right) & \frac{1}{24} \left(z^4-28 z^3+252 z^2-840 z+840\right) & \frac{1}{120} \left(-z^5+40 z^4-560 z^3+3360 z^2-8400 z+6720\right) \\ 1 & 5-z & \frac{1}{2} \left(z^2-12 z+30\right) & \frac{1}{6} \left(-z^3+21 z^2-126 z+210\right) & \frac{1}{24} \left(z^4-32 z^3+336 z^2-1344 z+1680\right) & \frac{1}{120} \left(-z^5+45 z^4-720 z^3+5040 z^2-15120 z+15120\right) \\ 1 & 6-z & \frac{1}{2} \left(z^2-14 z+42\right) & \frac{1}{6} \left(-z^3+24 z^2-168 z+336\right) & \frac{1}{24} \left(z^4-36 z^3+432 z^2-2016 z+3024\right) & \frac{1}{120} \left(-z^5+50 z^4-900 z^3+7200 z^2-25200 z+30240\right) \\ 1 & 7-z & \frac{1}{2} \left(z^2-16 z+56\right) & \frac{1}{6} \left(-z^3+27 z^2-216 z+504\right) & \frac{1}{24} \left(z^4-40 z^3+540 z^2-2880 z+5040\right) & \frac{1}{120} \left(-z^5+55 z^4-1100 z^3+9900 z^2-39600 z+55440\right) \\ 1 & 8-z & \frac{1}{2} \left(z^2-18 z+72\right) & \frac{1}{6} \left(-z^3+30 z^2-270 z+720\right) & \frac{1}{24} \left(z^4-44 z^3+660 z^2-3960 z+7920\right) & \frac{1}{120} \left(-z^5+60 z^4-1320 z^3+13200 z^2-59400 z+95040\right) \\ 1 & 9-z & \frac{1}{2} \left(z^2-20 z+90\right) & \frac{1}{6} \left(-z^3+33 z^2-330 z+990\right) & \frac{1}{24} \left(z^4-48 z^3+792 z^2-5280 z+11880\right) & \frac{1}{120} \left(-z^5+65 z^4-1560 z^3+17160 z^2-85800 z+154440\right) \\ \end{array}$$

## Applications

The Laguerre polynomials appear in many applications, in particular the following:

• The Gauss-Laguerre quadrature formula for the numerical integration of a smooth function (especially if the main trend of the integrand is a decaying exponential).

• The hydrogen radial wave function, solution of the stationary Schrödinger equation for a single particle in the Coulomb potential [3]:

$$\displaystyle \psi_{n,\ell}( r) = \sqrt{\frac{(n\!-\!\ell \!-\!1)! }{(n\!+\!\ell)!
} }\exp(-r/n) \left(\frac{2 r}{n}\right)^\ell \frac{2}{n^2} \mathrm{LaguerreL}(n\!-\ell\!-\!1, 2 \ell\! +\!1, 2r/n)$$

P[n_, l_, r_] := Sqrt[(n-l-1)!/(n+l)!] E^(-(r/n)) ((2 r)/n)^l 2/n^2 LaguerreL[n-l-1, 2l+1, (2r)/n]

With the definition above, the test for orthogonality can be written as follows:

Integrate[P[1, 0, r] P[2, 0, r] r^2, {r, 0, Infinity}]

## Orthogonality

For each non-negative integer $$k$$, the associated Laguerre functions of order $$k$$ form an orthogonal basis with respect to integration against the exponential weight:

$$\displaystyle \int_0^\infty \mathrm e^{-x}\, L_{n}^{k}(x)\, L_{m}^{k}(x) \, x^k \, \mathrm d x=\frac{(n\!+\!k)!}{n!} \delta_{n,m}$$

where $$\delta_{n,m}$$ is the Kronecker delta.

## Zeros

For applications, the zeros of LaguerreL are important. Luigi Gatteschi gives asymptotics for them; however, no simple estimate suitable for numerical implementation is supplied [4].

## References

1. Weisstein, Eric W. "Associated Laguerre Polynomial." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/AssociatedLaguerrePolynomial.html
2. "LaguerreL." Wolfram Language Documentation. http://reference.wolfram.com/language/ref/LaguerreL.html
3. Radial wave-function of the hydrogen atom.
4. Luigi Gatteschi. "Asymptotics and bounds for the zeros of Laguerre polynomials: a survey." Journal of Computational and Applied Mathematics, Volume 144, Issues 1-2, 1 July 2002, Pages 7-27. http://www.sciencedirect.com/science/article/pii/S0377042701005490
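The same checks can be run outside Mathematica. For instance, SciPy's eval_genlaguerre implements the associated Laguerre polynomials defined above, and the orthogonality relation can be verified by quadrature (small n and k are assumed here so the integrand stays tame):

```python
# Numerical sanity check of the orthogonality relation with weight
# e^{-x} x^k: expect 0 for n != m, and (n+k)!/n! for n == m.
import math
from scipy.special import eval_genlaguerre
from scipy.integrate import quad

n, m, k = 3, 5, 2

def inner(a, b):
    # <L_a^k, L_b^k> with the exponential weight on [0, infinity)
    return quad(
        lambda x: math.exp(-x) * x**k
        * eval_genlaguerre(a, k, x) * eval_genlaguerre(b, k, x),
        0, math.inf,
    )[0]

print(inner(n, m))                                             # ~ 0
print(inner(n, n), math.factorial(n + k) / math.factorial(n))  # should agree
```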
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9312299489974976, "perplexity": 2826.1618820132076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668896.47/warc/CC-MAIN-20191117064703-20191117092703-00175.warc.gz"}
https://k12.libretexts.org/Bookshelves/Mathematics/Trigonometry/03%3A_Trigonometric_Identities/3.04%3A_Double_and_Half_Angle_Identities/3.4.05%3A_Half_Angle_Formulas
# 3.4.5: Half Angle Formulas

Derivation of sine and cosine formulas for half a given angle

After all of your experience with trig functions, you are feeling pretty good. You know the values of trig functions for a lot of common angles, such as $$30^{\circ}$$, $$60^{\circ}$$, etc. And for other angles, you regularly use your calculator. Suppose someone gave you an expression like this:

$$\cos 75^{\circ}$$

Could you evaluate it without the calculator? You might notice that this is half of $$150^{\circ}$$. This might give you a hint!

### Half Angle Formulas

Here we'll derive and use formulas for trig functions of angles that are half of some particular value. To do this, we'll start with the double angle formula for cosine: $$\cos 2\theta =1−2\sin ^2\theta$$. Set $$\theta =\dfrac{\alpha}{2}$$, so the equation above becomes $$\cos 2\dfrac{\alpha}{2}=1−2\sin ^2\dfrac{\alpha}{2}$$. Solving this for $$\sin \dfrac{\alpha}{2}$$, we get:

\begin{aligned} \cos 2\dfrac{\alpha}{2}&=1−2\sin ^2\dfrac{\alpha}{2} \\ \cos \alpha&=1−2\sin ^2\dfrac{\alpha}{2} \\ 2 \sin ^2\dfrac{\alpha}{2}&=1−\cos \alpha \\ \sin ^2\dfrac{\alpha}{2}&=\dfrac{1−\cos \alpha }{ 2} \\ \sin \dfrac{\alpha}{2}&=\pm \sqrt{\dfrac{1−\cos \alpha }{2}} \end{aligned}

$$\sin \dfrac{\alpha}{2}=\sqrt{\dfrac{1−\cos \alpha }{2}}$$ if $$\dfrac{\alpha}{2}$$ is located in either the first or second quadrant.

$$\sin \dfrac{\alpha}{2}=−\sqrt{\dfrac{1−\cos \alpha }{2}}$$ if $$\dfrac{\alpha}{2}$$ is located in the third or fourth quadrant.

This formula shows how to find the sine of half of some particular angle.

One of the other formulas that was derived for the cosine of a double angle is: $$\cos 2\theta =2\cos ^2\theta −1$$. Set $$\theta =\dfrac{\alpha}{2}$$, so the equation becomes $$\cos 2\dfrac{\alpha}{2}=2\cos ^2\dfrac{\alpha}{2}−1$$. Solving this for $$\cos \dfrac{\alpha}{2}$$, we get:

\begin{aligned} \cos 2\dfrac{\alpha}{2}&=2\cos ^2\dfrac{\alpha}{2}−1 \\ \cos \alpha&=2\cos ^2\dfrac{\alpha}{2}−1 \\ 2\cos ^2\dfrac{\alpha}{2}&=1+\cos \alpha \\ \cos ^2\dfrac{\alpha}{2}&=\dfrac{1+\cos \alpha }{2} \\ \cos \dfrac{\alpha}{2} &=\pm \sqrt{\dfrac{1+\cos \alpha }{2}}\end{aligned}

$$\cos \dfrac{\alpha}{2}=\sqrt{\dfrac{1+\cos \alpha }{2}}$$ if $$\dfrac{\alpha}{2}$$ is located in either the first or fourth quadrant.

$$\cos \dfrac{\alpha}{2}=−\sqrt{\dfrac{1+\cos \alpha }{2}}$$ if $$\dfrac{\alpha}{2}$$ is located in either the second or third quadrant.

This formula shows how to find the cosine of half of some particular angle.

Let's see some examples of these two formulas (sine and cosine of half angles) in action.

1. Determine the exact value of $$\sin 15^{\circ}$$.

Using the half angle identity, $$\alpha =30^{\circ}$$, and $$15^{\circ}$$ is located in the first quadrant. Therefore, $$\sin \dfrac{\alpha}{2}=\sqrt{\dfrac{1−\cos \alpha }{2}}$$.

\begin{aligned} \sin 15^{\circ} &=\sqrt{\dfrac{1−\cos 30^{\circ} }{2}}\\&=\sqrt{\dfrac{1−\dfrac{\sqrt{3}}{2}}{2}}=\sqrt{\dfrac{\dfrac{2−\sqrt{3}}{2}}{2}}=\sqrt{\dfrac{2−\sqrt{3}}{4}} \end{aligned}

Plugging this into a calculator, $$\sqrt{\dfrac{2−\sqrt{3}}{4}}\approx 0.2588$$. Using the sine function on your calculator will validate that this answer is correct.

2. Use the half angle identity to find the exact value of $$\sin 112.5^{\circ}$$.

Since $$\sin \dfrac{225^{\circ} }{2}=\sin 112.5^{\circ}$$, use the half angle formula for sine, where $$\alpha =225^{\circ}$$. In this example, the angle $$112.5^{\circ}$$ is a second-quadrant angle, and the sine of a second-quadrant angle is positive.
\begin{aligned} \sin 112.5^{\circ} &=\sin \dfrac{225^{\circ} }{2} \\&=\pm \sqrt{\dfrac{1−\cos 225^{\circ} }{2}}\\ &=+\sqrt{\dfrac{1−\left(−\dfrac{\sqrt{2}}{2}\right)}{2}} \\ &=\sqrt{\dfrac{\dfrac{2}{2}+\dfrac{\sqrt{2}}{2}}{2}}\\ &=\sqrt{\dfrac{2+\sqrt{2}}{4}} \end{aligned}

3. Use the half angle formula for the cosine function to prove that the following expression is an identity: $$2\cos ^2 \dfrac{x}{2}−\cos x=1$$

Use the formula $$\cos \dfrac{x}{2}=\pm\sqrt{\dfrac{1+\cos x}{2}}$$ and substitute it on the left-hand side of the expression.

\begin{aligned} 2 \left(\sqrt{\dfrac{1+\cos x }{2}}\right)^2−\cos x&=1 \\ 2\left(\dfrac{1+\cos x }{2}\right)−\cos x&=1\\ 1+\cos x −\cos x&=1 \\ 1&=1 \end{aligned}

Example $$\PageIndex{1}$$

Earlier, you were asked to find $$\cos 75^{\circ}$$.

Solution

Use the half angle formula for cosine with $$\alpha =150^{\circ}$$. Since $$75^{\circ}$$ is a first-quadrant angle, the positive root is taken:

$$\cos 75^{\circ} =\cos \dfrac{150^{\circ} }{2}=\sqrt{\dfrac{1+\cos 150^{\circ} }{2}}=\sqrt{\dfrac{1−\dfrac{\sqrt{3}}{2}}{2}}=\sqrt{\dfrac{2−\sqrt{3}}{4}}=\dfrac{\sqrt{2−\sqrt{3}}}{2}$$

Example $$\PageIndex{2}$$

Prove the identity: $$\tan \dfrac{b}{2} =\dfrac{\sec b}{\sec b \csc b+\csc b}$$

Solution

Step 1: Change the right side into sine and cosine.

\begin{aligned} \dfrac{\sec b}{\sec b \csc b+\csc b} &=\dfrac{1}{\cos b} \div \csc b(\sec b+1) \\ &=\dfrac{1}{\cos b} \div \dfrac{1}{\sin b}\left(\dfrac{1}{\cos b}+1\right) \\ &=\dfrac{1}{\cos b} \div \dfrac{1}{\sin b}\left(\dfrac{1+\cos b}{\cos b}\right) \\ &=\dfrac{1}{\cos b} \div \dfrac{1+\cos b}{\sin b \cos b} \\ &=\dfrac{1}{\cos b} \cdot \dfrac{\sin b \cos b}{1+\cos b} \\ &=\dfrac{\sin b}{1+\cos b} \end{aligned}

Step 2: Now that the right side is simplified as much as possible, simplify the left side using the half angle formula, and verify that the two sides agree by cross-multiplying.

\begin{aligned} \sqrt{\dfrac{1-\cos b}{1+\cos b}} &=\dfrac{\sin b}{1+\cos b} \\ \dfrac{1-\cos b}{1+\cos b} &=\dfrac{\sin ^{2} b}{(1+\cos b)^{2}} \\ (1-\cos b)(1+\cos b)^{2} &=\sin ^{2} b(1+\cos b) \\ (1-\cos b)(1+\cos b) &=\sin ^{2} b \\ 1-\cos ^{2} b &=\sin ^{2} b \end{aligned}

Example $$\PageIndex{3}$$

Verify the identity: $$\cot \dfrac{c}{2} =\dfrac{\sin c}{1-\cos c}$$

Solution

Step 1: Change cotangent to cosine over sine, then cross-multiply.

\begin{aligned} \cot \dfrac{c}{2} &=\dfrac{\cos \dfrac{c}{2}}{\sin \dfrac{c}{2}}=\sqrt{\dfrac{1+\cos c}{1-\cos c}} \\ \sqrt{\dfrac{1+\cos c}{1-\cos c}} &=\dfrac{\sin c}{1-\cos c} \\ \dfrac{1+\cos c}{1-\cos c} &=\dfrac{\sin ^{2} c}{(1-\cos c)^{2}} \\ (1+\cos c)(1-\cos c)^{2} &=\sin ^{2} c(1-\cos c) \\ (1+\cos c)(1-\cos c) &=\sin ^{2} c \\ 1-\cos ^{2} c &=\sin ^{2} c \end{aligned}

Example $$\PageIndex{4}$$

Prove that $$\sin x \tan \dfrac{x}{2}+2\cos x=2\cos ^2 \dfrac{x}{2}$$

Solution

$$\begin{array}{l} \sin x \tan \dfrac{x}{2}+2 \cos x=\sin x\left(\dfrac{1-\cos x}{\sin x}\right)+2 \cos x \\ \sin x \tan \dfrac{x}{2}+2 \cos x=1-\cos x+2 \cos x \\ \sin x \tan \dfrac{x}{2}+2 \cos x=1+\cos x \\ \sin x \tan \dfrac{x}{2}+2 \cos x=2 \cos ^{2} \dfrac{x}{2} \end{array}$$

The last step uses the half angle result $$1+\cos x=2\cos ^2 \dfrac{x}{2}$$.

## Review

Use half angle identities to find the exact values of each expression.

1. $$\sin 22.5^{\circ}$$
2. $$\sin 75^{\circ}$$
3. $$\sin 67.5^{\circ}$$
4. $$\sin 157.5^{\circ}$$
5. $$\cos 22.5^{\circ}$$
6. $$\cos 75^{\circ}$$
7. $$\cos 157.5^{\circ}$$
8. $$\cos 67.5^{\circ}$$
9. Use the two half angle identities presented in this section to prove that $$\tan \left(\dfrac{x}{2}\right)=\pm \sqrt{\dfrac{1−\cos x}{1+\cos x}}$$.
10. Use the result of the previous problem to show that $$\tan \left(\dfrac{x}{2}\right)=\dfrac{1−\cos x}{\sin x}$$.
11. Use the result of the previous problem to show that $$\tan \left(\dfrac{x}{2}\right)=\dfrac{\sin x}{1+\cos x}$$.

Use half angle identities to help you find all solutions to the following equations in the interval $$[0,2\pi)$$.

12. $$\sin ^2x=\cos ^2\left(\dfrac{x}{2}\right)$$
13. $$\tan\left(\dfrac{x}{2}\right)=\dfrac{1−\cos x}{1+\cos x}$$
14. $$\cos ^2x=\sin ^2\left(\dfrac{x}{2}\right)$$
15. $$\sin ^2\left(\dfrac{x}{2}\right)=2\cos ^2x−1$$

To see the Review answers, open this PDF file and look for section 3.11.

## Vocabulary

| Term | Definition |
|---|---|
| Half Angle Identity | A half angle identity relates a trigonometric function of one half of an argument to a set of trigonometric functions, each containing the original argument. |
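As a final numerical check of the worked examples in this section, the exact closed forms can be compared against calculator values (plain Python; the two columns should match to rounding):

```python
# Verify the half-angle results derived above.
import math

def s(deg): return math.sin(math.radians(deg))
def c(deg): return math.cos(math.radians(deg))

# sin 15 = sqrt((2 - sqrt(3))/4)
print(s(15), math.sqrt((2 - math.sqrt(3)) / 4))
# cos 75 = sqrt(2 - sqrt(3))/2  (same value, since cos 75 = sin 15)
print(c(75), math.sqrt(2 - math.sqrt(3)) / 2)
# sin 112.5 = sqrt((2 + sqrt(2))/4)
print(s(112.5), math.sqrt((2 + math.sqrt(2)) / 4))
```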
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000098943710327, "perplexity": 3021.3412969139677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358480.10/warc/CC-MAIN-20211128073830-20211128103830-00228.warc.gz"}
https://brilliant.org/problems/ridiculous-length/
# Ridiculous length!

Level pending

In acute $$\triangle ABC$$, $$AB<AC$$ and $$\angle A = 60^{\circ}$$, $$H$$ is the orthocentre, $$I$$ is the incentre and $$D$$ is the midpoint of $$BC$$. The line passing through $$D$$ and $$I$$ intersects the straight line through $$A$$ parallel to $$BC$$ at $$P$$. Suppose $$\sin \angle B - \sin \angle C = \dfrac{3}{5}$$ and $$AH = 10$$. What is the numerical value of $$AP$$?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7601554989814758, "perplexity": 119.75076440191911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647576.75/warc/CC-MAIN-20180321043531-20180321063531-00116.warc.gz"}
https://r4ds.had.co.nz/program-intro.html
# 17 Introduction In this part of the book, you’ll improve your programming skills. Programming is a cross-cutting skill needed for all data science work: you must use a computer to do data science; you cannot do it in your head, or with pencil and paper. Programming produces code, and code is a tool of communication. Obviously code tells the computer what you want it to do. But it also communicates meaning to other humans. Thinking about code as a vehicle for communication is important because every project you do is fundamentally collaborative. Even if you’re not working with other people, you’ll definitely be working with future-you! Writing clear code is important so that others (like future-you) can understand why you tackled an analysis in the way you did. That means getting better at programming also involves getting better at communicating. Over time, you want your code to become not just easier to write, but easier for others to read. Writing code is similar in many ways to writing prose. One parallel which I find particularly useful is that in both cases rewriting is the key to clarity. The first expression of your ideas is unlikely to be particularly clear, and you may need to rewrite multiple times. After solving a data analysis challenge, it’s often worth looking at your code and thinking about whether or not it’s obvious what you’ve done. If you spend a little time rewriting your code while the ideas are fresh, you can save a lot of time later trying to recreate what your code did. But this doesn’t mean you should rewrite every function: you need to balance what you need to achieve now with saving time in the long run. (But the more you rewrite your functions the more likely your first attempt will be clear.) In the following four chapters, you’ll learn skills that will allow you to both tackle new programs and to solve existing problems with greater clarity and ease: 1. In pipes, you will dive deep into the pipe, %>%, and learn more about how it works, what the alternatives are, and when not to use it. 2. Copy-and-paste is a powerful tool, but you should avoid doing it more than twice. Repeating yourself in code is dangerous because it can easily lead to errors and inconsistencies. Instead, in functions, you’ll learn how to write functions which let you extract out repeated code so that it can be easily reused. 3. As you start to write more powerful functions, you’ll need a solid grounding in R’s data structures, provided by vectors. You must master the four common atomic vectors, the three important S3 classes built on top of them, and understand the mysteries of the list and data frame. 4. Functions extract out repeated code, but you often need to repeat the same actions on different inputs. You need tools for iteration that let you do similar things again and again. These tools include for loops and functional programming, which you’ll learn about in iteration. ## 17.1 Learning more The goal of these chapters is to teach you the minimum about programming that you need to practice data science, which turns out to be a reasonable amount. Once you have mastered the material in this book, I strongly believe you should invest further in your programming skills. Learning more about programming is a long-term investment: it won’t pay off immediately, but in the long term it will allow you to solve new problems more quickly, and let you reuse your insights from previous problems in new scenarios. 
To learn more you need to study R as a programming language, not just an interactive environment for data science. We have written two books that will help you do so:

• Hands-On Programming with R, by Garrett Grolemund. This is an introduction to R as a programming language and is a great place to start if R is your first programming language. It covers similar material to these chapters, but with a different style and different motivating examples (based in the casino). It's a useful complement if you find that these four chapters go by too quickly.

• Advanced R by Hadley Wickham. This dives into the details of R the programming language. This is a great place to start if you have existing programming experience. It's also a great next step once you've internalised the ideas in these chapters. You can read it online at http://adv-r.had.co.nz.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.279371440410614, "perplexity": 556.9450601573086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665521.72/warc/CC-MAIN-20191112101343-20191112125343-00519.warc.gz"}
https://www.studyadda.com/sample-papers/neet-sample-test-paper-32_q20/221/279529
• # question_answer 20) Chromatic aberration in lenses arises due to:

A) Dissimilarity of the principal axes of the rays
B) Dissimilarity of the radii of curvature
C) Variation of the focal length of the lens with wavelength
D) None of these

Solution: $f\propto \frac{1}{\mu -1}$ and $\mu \propto \frac{1}{\lambda }$, so the focal length of a lens varies with the wavelength of the light passing through it. The correct option is (C).
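To see option (C) numerically, one can combine the lensmaker's equation with a Cauchy-type dispersion relation. Every constant below is made up purely for illustration:

```python
# Illustration: focal length drift with wavelength for a thin lens.
# Lensmaker: 1/f = (mu - 1)(1/R1 - 1/R2); dispersion mu = A + B/lambda^2.
A, B = 1.50, 5000.0      # hypothetical Cauchy coefficients (lambda in nm)
R1, R2 = 100.0, -100.0   # hypothetical radii of curvature, mm

for lam in (450.0, 550.0, 650.0):  # blue, green, red
    mu = A + B / lam**2
    f = 1.0 / ((mu - 1.0) * (1.0 / R1 - 1.0 / R2))
    print(f"lambda = {lam:.0f} nm -> mu = {mu:.4f}, f = {f:.2f} mm")
# Blue light focuses shorter than red: the signature of chromatic aberration.
```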
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3086237907409668, "perplexity": 21175.670280077586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529737.79/warc/CC-MAIN-20190723215340-20190724001340-00309.warc.gz"}
http://tex.stackexchange.com/questions/39321/sans-math-version-of-latin-modern
# sans math version of Latin Modern

I am trying to get a sans math version of the lmodern family using \DeclareMathVersion but my approach does not work and I do not see why. But I have to admit that I am not sure whether I use the correct fonts:

\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\DeclareMathVersion{sansmath}
% Math letters from Latin Modern Sans
\SetSymbolFont{letters}{sansmath}{OML}{lmsso}{m}{it}
% Math operators
\SetSymbolFont{operators}{sansmath}{OT1}{lmss}{m}{n}
% Math symbols
\SetSymbolFont{symbols}{sansmath}{OMS}{lmsy}{m}{n}
% Large symbols
%\SetSymbolFont{largesymbols}{sansmath}{OMX}{lmssex}{m}{n}
\begin{document}
\mathversion{normal}
$a b c \alpha \beta$
\mathversion{sansmath}
$a b c \alpha \beta$
\end{document}

In both cases the result is the normal serif math font.

## 1 Answer

There is no Latin Modern Sans in OML encoding, and LaTeX tells you:

LaTeX Font Warning: Font shape `OML/lmsso/m/it' undefined
(Font)              using `OML/cmm/m/it' instead on input line 18.

You can try Computer Modern Bright instead:

\SetSymbolFont{letters}{sansmath}{OML}{cmbr}{m}{it}

So basically there are no math symbols in Latin Modern Sans? – Matthias Pospiech Dec 25 '11 at 21:38
@MatthiasPospiech That's true. – egreg Dec 25 '11 at 21:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9300486445426941, "perplexity": 3357.073947221484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257825124.55/warc/CC-MAIN-20160723071025-00291-ip-10-185-27-174.ec2.internal.warc.gz"}
http://mathhelpforum.com/differential-geometry/186244-lebesgue-measure-rxr-print.html
# Lebesgue measure on RxR

• August 16th 2011, 12:21 PM
anselmoalko
Lebesgue measure on RxR

Hi, I have the following problem: Let $m_2$ be the Lebesgue measure on $\mathbb{R} ^2$.

(i) Given $c\in\mathbb{R}$ find the measure of the set $\{(x,y)\in\mathbb{R}^2; x+y=c\}$.

My solution: I simply drew the set and computed the length using the Pythagorean Thm, so $m_2(\{(x,y)\in\mathbb{R}^2; x+y=c\})=\sqrt 2 |c|$.

(ii) Find $m_2(A)$, where $A$ consists of all pairs $(x,y)$ in $[0,\pi/2]\times[0,\pi/2]$ such that $\cos (x)\ge 1/2$ and $\sin(y)$ is irrational.

My attempt: I tried to do the same thing here, but as you can see the set is a bit more complicated. This is what I've got so far: since $\cos (x)\ge 1/2$ iff. $x\le\pi/3$ we have $0\le x\le \pi/3$. What next?

• August 16th 2011, 12:47 PM
girdav
Re: Lebesgue measure on RxR

(i) You computed the measure of the triangle whose vertices are $(0,0)$, $(0,c)$, $(c,0)$, but that is not what is asked. You can write $\left\{(x,y)\in\mathbb R^2, x+y=c\right\} =\bigcup_{n\in\mathbb Z}\left\{(x,y)\in\mathbb R^2, x+y=c, n\leq x\leq n+1\right\}$ and show that for all $n$, $m_2(\left\{(x,y)\in\mathbb R^2, x+y=c, n\leq x\leq n+1\right\})=0$. To do that, you can cover the segment of the line, for a fixed $k\in\mathbb N$, by $k$ squares each with measure $\frac 1{k^2}$.

(ii) Write $A = \left[0,\frac{\pi}3\right]\times B$ where $B =\left\{y\in\mathbb R, 0\leq y\leq \frac{\pi}2, \sin y \notin \mathbb Q\right\}$. We can compute the measure of the complement of $B$ in $\left[0,\frac{\pi}2\right]$ by writing $\left[0,\frac{\pi}2\right]\setminus B = \bigcup_{r\in \mathbb Q}\left\{x\in \left[0,\frac{\pi}2\right],\sin x=r\right\}$.

• August 17th 2011, 05:35 AM
anselmoalko
Re: Lebesgue measure on RxR

(i) Oh, I see. And then one lets $k \rightarrow \infty$ and we obtain $m_2(\{(x,y)\in\mathbb{R}^2; x+y=c, n\le x\le n+1\})=0$, and since the original set is a countable union of sets of measure zero we get that it is also of measure zero.

(ii) Here I fully understand what you mean, but I'm not sure of my result. What I get is $m_2([0,\pi/2]\setminus B) = \sum_{r\in\mathbb{Q}}m(\{y\in\mathbb{R}; 0\le y\le \pi/2, \sin(y)=r\})=0$. And then $m([0,\pi/2])=m(([0,\pi/2]\setminus B)\cup B) = m([0,\pi/2]\setminus B)+m(B)=m(B)$. So $m_2([0,\pi/3]\times B)=m([0,\pi/3])m(B)=(\pi/3)(\pi/2)=\pi^2/6$. Is this correct? What I'm not sure about is the step $m_2([0,\pi/3]\times B)=m([0,\pi/3])m(B)$; from what I've read the set needs to be a "rectangle" for that to be allowed.

• August 17th 2011, 06:24 AM
girdav
Re: Lebesgue measure on RxR

(i) It's ok.

(ii) We can write, since the measures of the following sets are finite: $m_2\left(\left[0,\frac{\pi}3\right]\times B\right) =m_2\left(\left[0,\frac{\pi}3\right]\times \left[0,\frac{\pi}2\right]\right)- m_2\left(\left[0,\frac{\pi}3\right]\times \left(\left[0,\frac{\pi}2\right]\setminus B\right)\right)$, and conclude since $\left[0,\frac{\pi}2\right]\setminus B$ is countable.
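A small numerical aside on part (ii): since the set where sin(y) is rational has measure zero, a random sample almost surely never lands on it, so a Monte Carlo estimate of m₂(A) only "sees" the condition cos(x) ≥ 1/2 and converges to π²/6:

```python
# Monte Carlo illustration that m_2(A) = (pi/3)(pi/2) = pi^2/6 ~ 1.645.
import math
import random

random.seed(0)
N = 200_000
hits = 0
for _ in range(N):
    x = random.uniform(0, math.pi / 2)
    y = random.uniform(0, math.pi / 2)  # sin(y) is irrational almost surely
    if math.cos(x) >= 0.5:              # equivalent to x <= pi/3
        hits += 1

area = (math.pi / 2) ** 2 * hits / N
print(area, math.pi**2 / 6)  # the two values should be close
```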
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 36, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951362609863281, "perplexity": 176.20098872836934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860122902.86/warc/CC-MAIN-20160428161522-00193-ip-10-239-7-51.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/107824/what-is-the-difference-between-stress-and-pressure
# What is the difference between stress and pressure?

What is the difference between stress and pressure? Are there any intuitive examples that explain the difference between the two? How about an example of when pressure and stress are not equal?

• As you can see from the answers, it is hard to guess what "intuitive" means for you and what level of explanation you expect without your being more specific. Apr 11, 2014 at 1:34
• The question came about when reading about Overburden Pressure (stress), en.wikipedia.org/wiki/Overburden_pressure. The linked article defines it as both. I had hoped to learn how to distinguish the two using an explanation a high school student or an engineering undergraduate would understand. Apr 11, 2014 at 13:39
• Stress is a valence-2 tensor (represented by a matrix). "Pressure" is a special case: a stress tensor that can be written as a "scalar" quantity (i.e. of the form $p\,\mathrm{id}_3$, where $\mathrm{id}_3$ is the $3\times3$ identity matrix and $p$ the pressure scalar). Mar 28, 2015 at 3:22

Pressure is defined as force per unit area applied to an object in a direction perpendicular to the surface, and pressure can naturally cause stress inside an object. Stress, by contrast, is a property of the body under load and is related to the internal forces. It is defined as a reaction produced by the molecules of the body under some action which may produce some deformation; the intensity of these additional forces produced per unit area is known as stress (there is a pretty picture of this on Wikipedia).

Overburden pressure or lithostatic pressure is a case where the gravity force of the object's own mass creates pressure and results in stress on the soil or rock column. This stress increases as the mass (or depth) increases. This type of stress is uniform because the gravity force is uniform. http://commons.wvc.edu/rdawes/G101OCL/Basics/earthquakes.html

Included in lithostatic pressure are the weight of the atmosphere and, if beneath an ocean or lake, the weight of the column of water above that point in the earth. However, compared to the pressure caused by the weight of rocks above, the amount of pressure due to the weight of water and air above a rock is negligible, except at the earth's surface. The only way for lithostatic pressure on a rock to change is for the rock's depth within the earth to change.

Since this is a uniform force applied throughout the substance, due mostly to the substance itself, the terms pressure and stress are somewhat interchangeable here, because pressure can be viewed as both an external and internal force. For a case where they are not equal, just look at the image of the ruler: if pressure is applied at the far end (top of image), it creates unequal stress inside the ruler, with especially high internal stress at the corners.

• Regarding fluids and pressure, the fluid within the interstitial space or pore space of the rock is typically referred to as having pore pressure or fluid pressure. Would it be incorrect to say pore stress or fluid stress? If I understand it correctly, from the other answers to the original question and from an explanation found here: en.wikipedia.org/wiki/Fluid#Physics it seems that pressure is the normal component of stress. So if all the other components are zero, you are left with just pressure making up the stress. Thus, you can say pore stress? Apr 11, 2014 at 22:16
• @jakemcgregor I'm not sure.
I don't think it would be called stress, because it is a force generated by pockets which can be non-uniform and vary throughout the substance. But I'm not a geologist; perhaps they mix the terms. Apr 12, 2014 at 0:15
• You say nothing about the tensor nature of stress as opposed to the scalar nature of pressure. Also, gas pressure, for example, is related to the internal forces of the gas molecules. Aug 5, 2015 at 20:02

Given a stress tensor $\boldsymbol{\sigma}$, which has 9 components in general, the pressure (in continuum mechanics at least) is defined as $P = \frac{1}{3}\,\mathrm{tr}(\boldsymbol{\sigma})$. So the pressure at a point in the continuum is the average of the three normal stresses at that point. The off-diagonal terms manifest as shear stress. It's hard to say "stress" without being more specific in your question, because stress is not a scalar. Pressure is always different from stress, but the two are related.

• Pressure is always different from stress because pressure is a scalar. As far as I know, pressure is defined as compressive isotropic normal stress. By this definition pressure has a direction sign (positive for compressive?) and a magnitude? Also, by this definition, it seems that pressure is defined as a special case of stress (a simple stress situation). If this is true, can fluid pressure be called fluid stress? My source: en.wikipedia.org/wiki/Stress_%28mechanics%29#Simple_stresses Apr 12, 2014 at 1:27

The difference between stress and pressure has to do with the difference between isotropic and anisotropic force. There's a Wikipedia section on the decomposition of the Cauchy stress $\boldsymbol{\sigma}$ into "hydrostatic" and "deviatoric" components, $$\boldsymbol{\sigma}=\mathbf{s}+p\mathbf{I}$$ where the pressure $p$ is $$p=\frac{1}{3}\text{tr}(\boldsymbol{\sigma})$$ where $\mathbf{I}$ is the $3\times 3$ identity matrix, and where $\mathbf{s}$ is the traceless component of $\boldsymbol{\sigma}$. The linked article actually gives a pretty good intuitive explanation of $p\mathbf{I}$:

(From article) A mean hydrostatic stress tensor $p\mathbf{I}$, which tends to change the volume of the stressed body.

This follows since the surface force experienced by a plane with normal vector $\mathbf{n}$ is given by $$\mathbf{T}^{(\mathbf{n})}=\mathbf{n}\cdot\boldsymbol{\sigma}$$ which for a purely hydrostatic stress becomes $$\mathbf{T}^{(\mathbf{n})}=\mathbf{n}\cdot p\mathbf{I}=p\mathbf{n}$$ which points in the same direction as the normal to the plane. This basically means that a cube of material will want to expand like a balloon if $p>0$, and contract if $p<0$. Meanwhile, the deviatoric component means that there are forces at play which don't just tend to expand or contract things, such as shear forces.

How about an example of when pressure and stress are not equal?

In a solid, pure shear waves can exist. Unlike in acoustic pressure waves, shear waves have constant pressure; the forces that propagate the wave are not due to pressure, but are due to shear strain.

• So if you only have compressive isotropic normal stress, it is called hydrostatic pressure or just pressure. So pressure is the normal component, acting in compression, that makes up the stress? I can say I have fluid stress of +100 psi and be equally correct as saying I have fluid pressure of 100 psi? Apr 12, 2014 at 0:41
• @jakemcgregor: Yeah, as far as I know pressure is the normal component of stress.
In an isotropic fluid, if you try to shear it, there is no restoring force (because it's a fluid, and it can move), so it's not really possible to have deviatoric (non-normal) stresses, and stress and pressure are sort of interchangeable as far as wording goes. But if you've got rocks and stuff mixed in, it might be more complicated. Apr 12, 2014 at 0:52
• You certainly can have deviatoric stresses in a fluid. Without them, the fluid would not flow! In a Newtonian fluid, for example, the deviatoric stress is proportional to the "strain rate" tensor via viscosity. May 5, 2014 at 18:51
• @TylerOlsen see my comment above to DumpsterDoofus and my comment to tpg2114's answer (below). What are your thoughts? Jun 19, 2014 at 18:44

Pressure is perpendicular to the object; it is an external force only. Pressure causes stress inside of the object, so stress is an internal force.

• You might want to include some of the requested examples that show one versus the other and what happens when they're not equal. Mar 17, 2015 at 12:39

Pressure is an external force: when it is applied to another body, its effect is seen first on the outer surface of the body. In the case of stress, molecular deformation develops in the interior of the body, and stress builds up gradually inside the object due to the load. Simply put, pressure acts on the outer surface of the body while stress acts on it internally. Stress is observed due to the load applied, whereas pressure is a sort of load on a body.

The pore pressure of a fluid in an underground reservoir is not normally related to the overburden or lithostatic pressure. There are exceptions, known as overpressured reservoirs. Typically the pore pressure at depth is equivalent to the pressure caused by a column of salt water. The vertical stress in the rock is typically a function of the column of rock to the surface. The two principal horizontal stresses are normally unequal and lower than the vertical stress. Rocks behave in a plastic fashion, and the differences in horizontal stresses are, inter alia, caused by plate tectonics.

One can get stressed by pressure, either pulling at you or pushing at you. Pressure comes before the stress, and stress can be seen as a reaction to pressure. Though they have the same unit, there is a temporal asymmetry: pressure comes prior to stress.

Whenever an external force is applied to an object, a restoring force automatically develops inside the object to resist the deformation. The ratio of the restoring force perpendicular to the surface to the area is known as stress. The ratio of the external force perpendicular to the surface to the area is known as pressure. For example, if you press a ball, you are applying pressure, and what the ball applies back to you is stress; if the two are not equal, one dominates over the other. Pressure is an external force and stress is an internal force.

• What about the requested examples? Mar 28, 2015 at 3:22
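The hydrostatic/deviatoric split quoted above is easy to see numerically. The following sketch (my own illustration with made-up numbers, not from the thread) decomposes a stress tensor using the thread's convention $p=\frac{1}{3}\mathrm{tr}(\boldsymbol{\sigma})$; the nonzero deviatoric part is exactly the content of stress that a single pressure number cannot capture:

```python
import numpy as np

# An example Cauchy stress tensor (units: MPa, values invented for illustration);
# the off-diagonal 3.0 entries are shear components, so this is not pure pressure.
sigma = np.array([[10.0, 3.0, 0.0],
                  [ 3.0, 6.0, 0.0],
                  [ 0.0, 0.0, 8.0]])

p = np.trace(sigma) / 3.0           # mean (hydrostatic) stress, p = tr(sigma)/3
hydrostatic = p * np.eye(3)         # p*I: volume-changing part
deviatoric = sigma - hydrostatic    # traceless, shape-changing (shear) part

print("p =", p)                                  # 8.0
print("deviatoric =\n", deviatoric)              # nonzero => stress != pressure
print("tr(deviatoric) =", np.trace(deviatoric))  # 0.0 by construction
```

For a fluid at rest the deviatoric part vanishes and $\boldsymbol{\sigma} = p\mathbf{I}$, which is why "pressure" and "stress" get used interchangeably in that setting.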
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8212358355522156, "perplexity": 487.53540548355676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00120.warc.gz"}
https://link.springer.com/article/10.1007%2Fs10853-018-3124-4
Journal of Materials Science, Volume 54, Issue 5, pp 4038–4048

Bandgap properties of a piezoelectric phononic crystal nanobeam based on nonlocal theory

• Denghui Qian

Electronic materials

Abstract

The aim of this paper is to investigate the bandgap properties of a piezoelectric phononic crystal (PC) nanobeam with size effect by coupling the plane wave expansion method, Euler–Bernoulli beam theory and nonlocal theory. The first four orders of band gaps were chosen to study the influences of thermo-electro coupling, size effect and geometric parameters. Temperature change and external electrical voltage were chosen as the parameters capable of influencing the thermo-electro coupling fields, and the scale coefficient was chosen as the parameter related to size effect. The lengths of PZT-4 and epoxy within a unit cell, along with the width and thickness of the PC nanobeam, were identified as the influential geometric parameters. Collectively, our results are expected to be helpful for the design of piezoelectric nanobeam-based devices.

Notes

Conflict of interest: The authors declare that they have no conflict of interest.
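The abstract's main computational tool is the plane wave expansion (PWE) method. As a rough illustration of what PWE does (this is my own minimal classical sketch for longitudinal waves in a 1D two-material periodic rod, not the paper's nonlocal piezoelectric Euler–Bernoulli beam model; the material constants are assumed stand-ins for PZT-4 and epoxy), the periodic coefficients are expanded in a Fourier basis and each Bloch wavenumber yields a generalized eigenvalue problem whose eigenvalues are the band frequencies:

```python
import numpy as np
from scipy.linalg import eigh

a = 1.0          # lattice constant of the unit cell [m] (arbitrary)
f = 0.5          # filling fraction of material A in the cell
# (Young's modulus [Pa], density [kg/m^3]); rough assumed values:
EA, rhoA = 64e9, 7500.0    # "PZT-4-like"
EB, rhoB = 4.4e9, 1180.0   # "epoxy-like"

n = 15                               # plane waves: G = 2*pi*m/a, m = -n..n
m = np.arange(-n, n + 1)
G = 2.0 * np.pi * m / a

def fourier_coeff(pA, pB, Gdiff):
    """Fourier coefficient of a step profile p(x)=pA on [0, f*a), pB elsewhere."""
    return np.where(np.isclose(Gdiff, 0.0),
                    f * pA + (1 - f) * pB,
                    (pA - pB) * (1 - np.exp(-1j * Gdiff * f * a))
                    / (1j * Gdiff * a + 1e-300))

Gdiff = G[:, None] - G[None, :]      # matrix of G - G'
Emat = fourier_coeff(EA, EB, Gdiff)  # stiffness coefficients E_{G-G'}
Rmat = fourier_coeff(rhoA, rhoB, Gdiff)  # density coefficients rho_{G-G'}

ks = np.linspace(-np.pi / a, np.pi / a, 81)   # first Brillouin zone
bands = []
for k in ks:
    kG = k + G
    K = Emat * (kG[:, None] * kG[None, :])    # E_{G-G'} (k+G)(k+G')
    w2 = eigh(K, Rmat, eigvals_only=True)     # K u = w^2 M u per Bloch k
    bands.append(np.sqrt(np.abs(w2[:4])))     # first four bands
bands = np.array(bands)
print("band frequencies at zone boundary (rad/s):", bands[0])
```

The first few eigenvalues per $k$ trace out the band structure; frequency ranges not covered by any band at any $k$ are the band gaps the paper studies (there with beam bending, piezoelectric coupling and a nonlocal scale parameter added to the model).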
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8830057382583618, "perplexity": 25675.15259123164}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203947.59/warc/CC-MAIN-20190325112917-20190325134917-00395.warc.gz"}
https://cstheory.stackexchange.com/questions/39131/have-fixed-parameter-integer-program-algorithms-ever-been-implemented-for-resear
Have fixed parameter integer program algorithms ever been implemented for research use?

Have any of the fixed parameter integer programming algorithms described in "Integer programming with a fixed number of variables" been implemented? Is there reference code that researchers can use?

One of the key steps in Lenstra's algorithm uses basis reduction (BR) to locate "thin directions" for the polytope. Branching on such a direction produces only a small (i.e., polynomial) number of subproblems. In a series of papers, Aardal and coauthors essentially applied this BR step to help solve (using standard MIP solvers) otherwise hard-to-solve IP instances (see, e.g., this paper and this paper). We (also see arXiv) have studied a similar, but arguably simpler and more general, approach (full disclosure: self-citation here!) to solve IP feasibility problems: apply BR to the constraint matrix $A$ of the IP feasibility problem (of the form $\{\mathbf{l} \leq A \mathbf{x} \leq \mathbf{u}, \mathbf{x} \in \mathbb{Z}^n\}$), and use standard techniques to solve the resulting reformulated IP feasibility problem. The BR could be performed using standard software tools such as NTL.
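For readers who have not seen basis reduction in action, here is a toy textbook LLL (floating-point, with the common choice $\delta = 0.75$), written for clarity rather than speed; it recomputes Gram-Schmidt after every update. This is my own illustrative sketch, not the NTL routine the answer points to; for research use one would call mature implementations such as NTL or fplll:

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the rows of B, plus the mu coefficients."""
    n = B.shape[0]
    Bs = B.astype(float)
    mu = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            mu[i, j] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
            Bs[i] = Bs[i] - mu[i, j] * Bs[j]
    return Bs, mu

def lll(B, delta=0.75):
    """Textbook LLL reduction of the lattice spanned by the rows of B."""
    B = np.array(B, dtype=np.int64)
    n = B.shape[0]
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size-reduce row k
            q = int(np.rint(mu[k, j]))
            if q != 0:
                B[k] -= q * B[j]
                Bs, mu = gram_schmidt(B)
        lovasz_rhs = (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1])
        if Bs[k] @ Bs[k] >= lovasz_rhs:           # Lovasz condition holds
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]         # swap rows and backtrack
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

# Demo: a deliberately skewed basis; the reduced rows are much shorter,
# near-orthogonal vectors spanning the same lattice.
print(lll([[201, 37], [1648, 297]]))
```

In the spirit of the answer, one would run such a reduction on (a lattice derived from) the constraint matrix $A$ of the IP feasibility problem and then hand the reformulated problem to a standard MIP solver.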
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7143824696540833, "perplexity": 1232.232692607857}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189928.2/warc/CC-MAIN-20200919013135-20200919043135-00648.warc.gz"}
https://123deta.com/document/y6xx7x5y-resonance-in-hypergeometric-systems-related-to-mirror-symmetry.html
# Resonance in Hypergeometric Systems related to Mirror Symmetry

### Stienstra

(Notes for a talk at the symposium on Algebraic Geometry in Kinosaki, November 14, 1996.)

In the late 1980's physicists discovered a fascinating phenomenon in Conformal Field Theory - they called it Mirror Symmetry - and pointed out that this had far reaching consequences in the enumerative geometry of Calabi-Yau threefolds; see [9] for some of the early articles about mirror symmetry and [7] for a recent survey. It is a technique mathematicians had never dreamed of: the number of rational curves of a given degree on one Calabi-Yau threefold is computed from the variation of Hodge structure on the cohomology in a family of different Calabi-Yau threefolds. One is therefore interested in an efficient computation of the variation of Hodge structure in families of Calabi-Yau varieties.

In [1] Batyrev made the observation that behind many examples of mirror symmetry one can see a simple combinatorial duality: the CY threefolds are hypersurfaces (more precisely, members of the anti-canonical linear system) in two toric varieties, constructed from a pair of dual lattice polytopes in $\mathbb{R}^4$. In [2] he analyzed the Hodge structure of Calabi-Yau hypersurfaces in toric varieties and showed that the periods of a (suitably normalized) holomorphic $d$-form on a $d$-dimensional CY hypersurface in a toric variety satisfy a system of Gel'fand-Kapranov-Zelevinskii hypergeometric differential equations with appropriate parameters ([2] thm 14.2). However, the rank of this GKZ system is larger than the rank of the period lattice. So, even if one would have all solutions for this system, one would still need a method to decide which solutions are periods. In [6] Hosono, Lian and Yau gave a method for determining the complete system of differential equations for the periods and applied this method in some examples. Their resulting system looks complicated. Fortunately, what we need for mirror symmetry are the periods, i.e. the solutions, not the differential equations! My approach is based on two observations: firstly, implicit in [2] is a variation of mixed Hodge structure which is an extension of the variation of Hodge structure for the family of CY hypersurfaces and for which the GKZ system is the complete system of differential equations; secondly, [2] does in fact tell precisely where the holomorphic $d$-form of the Calabi-Yau hypersurface lies in this extended VMHS. In this note I present a simple explicit formula for the solutions of the GKZ system for the extended VMHS. By differentiating these
The GKZ hypergeemetric system wkh pasameters A aud b is; ### (-whb, (i IE,IlÅr,[oav,]t' ww O'--1 o' : ej Åqo [oav,]-eJ ### )Åë = o fori = 1,...,v (1) fOr (ei,...,eN) E IL (2) In the situation oÅí [2] thm 14.2 matrix A is such that when we delete its first row the co}umns of the resu}ting (y - l) Å~ N-matrix are the iRtegtral }attice poiRt$cgktaiRed in the Newtog pe}ytepe A gf a LaureRt po}yRomial equatigR fer the (y - 2)-dimensioRal kypersurface iR a (y - l)-dimeRsieRai terus. The CY variety is the clo$ure of this aillne hypersutface in the toric variety asseciated with A. Parameter b for the case of an appropriately normalized holomorphic ### (v - 2)-form is (--1,O,...,O). For the GKZ system of the extended VMHS we have the same parameter A, but b = (O,O,...,O). In [4] Gel'fand-Kapranov-Zelevinskii gave solutions for the GKZ system in the form of sGcalled Pseries ### N nEL,ll=,T(c, cJ• +ej vj• + e,• + 1) (3) wkere I' is tke usual gamraa-fuRctick, e = (ii,...,eN) e L c ZN. depeRds eR additioRal parameters ei, . . . , cN E ÅqC which must satisfy aiici -I- -••+ aiN cN = bi for i =: 1, ..., u. The series (4) In order to be able to interpret (3) as a function one also needs a triaiigulation of the polytope A :=:conv {ai,...,aN}; here ai,...,aN are the columns of matrix A viewed as points in RV. The triangulation is used to formulate additional conditions on ci,...,cN G C which ensure that in (3) the coeMcient in the term for e is zero if e is not in a certain pointed cone. Kowever, the parameter b = e is resenant for niangu}atioRs with more than eRe maximal simplex and the r-series (3) dg Ret provide eReugh sglutigRs; cÅí [4]. The classical trick fer obtaiRiRg eReggk selntions for reseRaiit kypergee-metric systems is to differentiate the power series $olutiens with respect to the parameters of the hypergeometric system. This is what Hosono, Lian and Yau do for the present GKZ hypergeometric system: [6] formula (3.28). (4) In this note we take a different approach to find solutions for (1)-(2) in case b = O. First multiply the r-series (3) with fiJN•..iF(cj• + 1). The result can be written as ## 2 eEL Hpt, Åqo Hi-!'o-i(cJ' - k) nJ': t, Åro Htk'..i (c,• + k) or more elegantly, using the notation (t)o := 1, ### N ll .S•' j'=1 (t)r := t' (t + 1) '...' (t +r- 1) ### for Pochharnmer symbols, il,l2. ### fi (-cJ')-ej j:tJ ÅqO ### ll (1+cJ')ej j':tJ' ÅrO ### N ll vJC,i j'=1 ### for rE Z, rÅrO ### NN ll (-i)eJ ll vS.) • fi vs•i ### J':ejÅqO j'--1 J'=1 (5) (6) (7) The key observation in our method is that for (7) to make sense it is not neces-sary that ci , . . . , cN be complex numbers. It also works if ci , . . . , cN are taken from a ({[P-algebra in which they are nilpotent and satisfy the linear relations (4) for b == O, i.e. ### ailcl+••+aiNcN=O fori=1,...,u (8) In order to ensure that in (7) the coeMcient in the term for e is zero if e is not in a certain pointed cone we need additional conditions on ci,...,cN. Very convenient for this purpose are the relations in the definition of the Stanley-Reisner ring of the triangulation T of A (viewed as a simplicial complex): ### ci, •...•ci. =O if (9) ### conv{ai,,..•,ai.} is not a simplex in the triaiigulation T. The sum (7) will then only involve terms with e satisfying ### conv{aal eJ• Åq O} isasimplex in triaiigulation S (10) Thus we are lead to introduce the ring SL,f which is the quotient of the polyno-mial ring Q[Ci,...,CN] by the ideal corresponding to relations (8) and (9). It turns out that this ring is finite dimensional as a Qvector space. 
This implies that ci,...,cN are nilpotent. The expression vJC•j in (7) should be interpreted as exp(cJ• logvj). Thus (7) does contain powers of logarithms. The expression (7) satisfies the GKZ system (1)-(2) with b = Oi . Expanding this expression in terms of a vector space basis of SL,T one finds as coeMcients functions ofvi, . . . , vN which are solutions of the GKZ system. Expanding (7) by iThe same resonant GKZ-system, the same form of its solutions and the same interpretation of the Artinian ring was found by Givental; see [5] thm 3. However, Givental starts from Si-equivariant Floer cohomology of the space of contractible loops on the toric variety associated with the dual polytope; i.e. on the mirror side from our starting point! (5) monomials in the nilpotent c's is in fact Taylor expansion, hence differentiation, with respect to the c's. Thus in some sense our formula (7) is a systematized version of the classical trick. By looking at the logarithms appearing in these solutions of this GKZ sys-tem one can easily conclude that they are linearly independent over C. The dimension of the vector space SL,T equals the number of maximal simplices in the triaingulation S. In particular, if al1 maximal simplices have volume 1, this dimension equals the volume of A. Since according to [4] the rank of this GKZ system is vol A, we conclude that our method gives a basis for the solution space of (1)-(2) with b = O precisely if ali maximal simplices have volume 1. ### Thus we have completely determined the extended VMHS. For CY hyper-surfaces in toric varieties the next step is to apply vibe.T to (7); for this the indices are chosen such that ai is the unique lattice point in the interior of A. Something similar works for CY complete intersections in toric varieties. Details of the general theory will be published elsewhere. I finish this report with an example. ### An example Consider the Laurent polynomiaJ f := ### vl + v2xl + v3x2 + v4x3 + vsxi3xilx3-1 + v6 xi2x4-1+ v7x4 + vsx ### - 1 1 ### as a polynomial in the variables xi,x2,x3,x4. The equation f = O for generic vaiues of the coeMcients vi,...,vs a smooth hypersurface 4-dimensional torus (C')4. Matrix A for this Laurent polynomial is ### A := ### 1111 ### O I O O ### O O I O ### OOOI ### oooo ### 1 ### - 3 ### - 1 ### - 1 ### o ### 11 1 ### - ### 2 O -1 ### oo o ### oo o ### - ### 1 1 O (11) defines in the (12) Let ai,...,as denote the columns of A viewed as points in R5. Let A be the convex hull of {ai,...,as}, i.e. the Newton polytope of f (for generic values of the coeMcients vi,...,vs). A is a 4dimensional pyramid with apex a2 and base the double tetrahedron formed by the 3-simplices conv {a3,a4,as,a6} and conv{a3,a4,as,a7}. Point as is the centre of this double tetrahedron: as = (a3 + a4 + as)13 = (a6 + a7)12. Point ai is the unique lattice point in the interior of A: ai = (a2 +as)12• There are six triangulations of A. There is only one for which all maximal simplices have volume 1; namely the following triangulation S with 12 maximal simpiices ### [12346] [12341 [12356] [12357] [12456] [12457] ### [13468] [13478] [13568] [13578] [14568] [14578] (13) (6) ( here [12346] means conv {ai,a2,a3,a4,a6}, etC•) From (12) and (13) one easily computes SL,T. IFlrom (12) one gets in partic-ular c3 = c4 : cs and c6 = c7. From (13) one gets in particpartic-ular c3c4cs = O and c6c? = g. Hence cg =: c?6 = C. 
A vector space basis foT SL,f is: 1 ; cs, c6, cs ; cg, cs c6, cs cs, c6 cs ; cg c6, cg cs, cs c6 cs ; cg c6 cs One can substitute al1 the concrete information into (7). Mrom (10) one can see that for eaeh term in the sum ei is S O Emd e2,...,e7 are tr O. The sum contains terms wkh es ) e as well as terms with es Åq e. Ngw apply vi Sg.; te (7). Tke re$uk ta[kes the Åíorm ciR. I wiil give im exp}icit formula for st. One easi}y checks that cics :O, and hence cift contains only terms with ei SOand e2,...,es k O. As a basis for L we take the rows of the rnatrix ### L:=k:3 ?g8g66 ?] (i4) Then we have for e :(ei,...,es) G L ### (ei,e2,e3,e4,es,e6,e7,es) : (es,e6,es)•L (15) Simi}asly the linear relat!eRs amaeng the c's can be summarized as ### (Cl,C27C3,C4,Cs7C61C7,Cs) :: (cs,c6,cs).L (16) The chosen basis of L is also used to introduce new variables; ### M5,M6,M8-ÅrO with coeficieRtS 7ms,m6,ms = (1 + 6cs -l- 4e6 + 2Cs)(6ms+4m6+2ms) (1 + 3Cs + 2C6 + Cs)(3rns+2mes+ms)((1 + Cs)ms)3((1 + C6)m6)2(1 + Cs)ms In tki\$ formula the ds must be ikterpreted ik SL,T!AfiR(cD. Ii} pa!rticulax ## cs=e. ORe easily checks .. ### SL•f1Ann(ci) = Q[C5,C6]1(C,3 , C,2) The expression for st can be simplified further by introducing ### (l - 4zs)3 (7) This gives 9 = xl=lig .,;.li,.., i-}i ++ ,3,C)5.ii)23C(6()i(3+M2,+)2.M,6))2 (43ws)ms(42w6)m6 ivgsws6 If we now expand st in terms of the obvious basis for SL,f !Ann(ci): ### st == goo + glocs + golc6 + g2ocg + gncsc6 + g21cgc6 then goo, . . . , g2i form a basis for the period lattice of the (compact) Calabi-Yau threefold given by the Laurent polynomial f; see (11). With this basis one can compute the Yukawa coupling, and thus (assuming mirror symmetry) count numbers of rational curves, on the mirror CY threefold. Details of this computation and its results will be discussed elsewhere. I finish this note with a description of the mirror CY threefold 2. This is the double covering of P2 Å~ Pi branched along a surface of degree (6, 4). lf one de-scribes this double covering by a homogeneous equation z2 = p(xi, x2, x3; yi , y2) then the weights of the variables for the action of C" Å~C' are: z has weight (3, 2); xi,x2,x3 have weight (1,O) and yi,y2 have weight (O, 1) (compare this with the basis of L in (14)). From these weights one gets the polytope A with its marked points ai, . . . ,a7. In order to have a triangulation T of A for which all maximal simplices have volume 1, we must insert the point as. The triaJigulation gives a refinement of the outer normal fan of the dual polytope of A. It gives a toric variety SY, in which the double covering of P2 Å~ Pi sits as a hypersurface X. This construction really is Batyrev's version of mirror symmetry! SL,T is in fact the cohomology ring of XY (see [3] S 5.2) and SL,TIAnn(c,) is the image of H'(V) in H"(X). The elements cs and c6 can be identified as the pullbacks of the hyperplane classes of P2 and Pi respectively. ### References [1] V. Batyrev: Dual polyhedra and the mirror symmetry for for Calabi-Yau hypersurfaces in toric varieties, J. Alg. Geom. 3 (1994) 493-535 [2] V. Batyrev: Variations of mixed Hodge structure of afiine hypersurfaces in algebraic tori, Duke Math. J. 69 (1993) 349-409 [3] W. Fulton: Introduction to Toric Varieties, Annals of Mathematics Studies, Study 131, Princeton University Press 1993 [4] I.M. Gel'fand, A.V. Zelevinskii, M.M. Kapranov: Hypergeomein'c functions and toral varieties, Funct. Analysis and its Appl. 23 (1989) 94-106 [5] A.B. 
Givental: Homological Geometry and Mirror Symmetrlt, Proceedings ICM ZUrich 1994 p.472-480; Birkhauser Verlag (1995) 2This mirror CY 3-fold was in fact our motivation for considering this example; it was brought to my attention by Masahiko Saito. (8) [6] S. Hosono, B.H. Lian, S.-T. Yau: GKZ-Generalized Elypengeometr't'c tems in Mirror Symmetry of Calabi- Yau Hypersurfaces, alg-geom19511001; ### Comm. Math. Phys. 1996 [7] D.R. Morrison: Mathematical Aspects of Mirror Symmetr'y, ### alg-geom19609021 [8] R.P. Stanley: Combinatorics and Commutative Algebra (second edition?, Progress in Math. 41, Birkhauser, Boston, 1996 [9] S.-T. Yau (ed.): Essays on Mirror Manifolds, Hong Kong: International ### Acknowledgement. I would like to thank JSPS for its support via the Fellowship prograrn S96161 and Kobe University for its hospitality in october-december 1996. I thank in particular my host Masahiko Saito. Updating... Updating...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8594076633453369, "perplexity": 3849.379446315406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154032.75/warc/CC-MAIN-20210730220317-20210731010317-00484.warc.gz"}