url: stringlengths 14–2.42k
text: stringlengths 100–1.02M
date: stringlengths 19–19
metadata: stringlengths 1.06k–1.1k
http://talks.bham.ac.uk/talk/index/3582
Almost Engel compact groups • Evgeny Khukhro (Charlotte Scott Research Centre for Algebra, University of Lincoln, UK) • Thursday 28 March 2019, 15:00-16:00 • Nuffield G13. We say that a group $G$ is almost Engel if for every $g\in G$ there is a finite set ${\mathscr E}(g)$ such that for every $x\in G$ all sufficiently long commutators $[\dots[[x,g],g],\dots,g]$ belong to ${\mathscr E}(g)$, that is, for every $x\in G$ there is a positive integer $n(x,g)$ such that $[\dots[[x,g],g],\dots,g]\in {\mathscr E}(g)$ if $g$ is repeated at least $n(x,g)$ times. (Thus, Engel groups are precisely the almost Engel groups for which we can choose ${\mathscr E}(g)=\{ 1\}$ for all $g\in G$.) We prove that if a compact (Hausdorff) group $G$ is almost Engel, then $G$ has a finite normal subgroup $N$ such that $G/N$ is locally nilpotent. If in addition there is a uniform bound $|{\mathscr E}(g)|\leq m$ for the orders of the corresponding sets, then the subgroup $N$ can be chosen of order bounded in terms of $m$. The proofs use the Wilson–Zelmanov theorem saying that Engel profinite groups are locally nilpotent. This is joint work with Pavel Shumyatsky. This talk is part of the Algebra Seminar series.
2020-04-10 13:47:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.876910388469696, "perplexity": 200.37674587184063}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371896913.98/warc/CC-MAIN-20200410110538-20200410141038-00363.warc.gz"}
https://mathoverflow.net/questions/218148/does-the-singularcohomology-of-any-acyclic-spectrum-vanish/218186
# Does the (singular) cohomology of any acyclic spectrum vanish? I am interested in those objects in the ("topological") stable homotopy category $SH$ (I call them spectra) whose homology (with integral coefficients; should I call it singular or stable, or the $H\mathbb{Z}$-one? how can one denote it?) is zero (in all degrees). My questions are: 1) Is it ok to call these spectra acyclic? 2) Does there exist any "description" of all acyclic spectra? 3) Is it true that the ($H\mathbb{Z}$-)cohomology of any acyclic spectrum vanishes? Possibly, this fact can be deduced from Proposition 16.2 of the book: Margolis H.R., Spectra and the Steenrod Algebra: Modules over the Steenrod Algebra and the Stable Homotopy Category, North-Holland, Amsterdam-New York, 1983; yet I am not sure. 4) Is it possible to localize $SH$ by the full subcategory of acyclic objects (so, do we obtain a category whose morphism classes are sets this way)? If this is possible, then we would obtain a "better $SH$", and this should contradict a result of Schwede (on Margolis's axiomatisation conjecture); yet I am not sure about this argument (see the Upd. below). Did anyone consider this localization? 5) Can one describe the left or the right orthogonal to all acyclic spectra, i.e., the objects that are only connected with acyclic spectra by zero morphisms? Note in particular that there are no non-zero morphisms from acyclic spectra to connective ones. Any hints or references would be very welcome! A related matter: I am interested in texts that treat Atiyah-Hirzebruch spectral sequences for arbitrary spectra. Upd. So, 3 is fine; thanks! Is the converse implication true (are spectra with vanishing cohomology acyclic)? About 4: note that $SH$/acyclic spectra contains the category of finite spectra (and the category of connective ones also). So, why doesn't one consider this localization as a "reasonable" substitute for $SH$? • I think the term is "acyclic," or maybe "$H \mathbb{Z}$-acyclic." – Qiaochu Yuan Sep 12 '15 at 18:14 • Yes to 3. If $R$ is $H\mathbb Z$ or any other unital ring spectrum then any map of spectra $X\to R$ factors through a map $R\wedge X\to R$. – Tom Goodwillie Sep 12 '15 at 20:40 • 4 is much older than EKMM. Bousfield showed how to localize with respect to the class of $E$-acyclic spectra for any spectrum $E$. – Tom Goodwillie Sep 12 '15 at 21:10 • A good example of an acyclic spectrum that should not be thrown away is mod $p$ periodic $K$-theory, $KU\wedge H\mathbb Z/p$. The integral homology groups of $KU$ are rational vector spaces. – Tom Goodwillie Sep 12 '15 at 22:08 • Yes, certainly for connective spectra acyclic implies (weakly) contractible; if the homotopy groups of a spectrum $X$ vanish in degrees less than $n$ then $\pi_n(X)\cong H_n(X)$. – Tom Goodwillie Sep 12 '15 at 22:09 ## 1 Answer Let me address what hasn't been answered in comments (not in an optimal way, though). 1) is OK and, modulo the meaning of your quotation marks, the answer to 2) is 'no'. I mean, don't expect anything very explicit or much beyond the very definition; it's a very complicated problem. As for 5), the right orthogonal is by definition the category of $H\mathbb Z$-local spectra, which is equivalent to $SH/$acyclic spectra by Bousfield localisation. I don't know about the left orthogonal, but Bousfield localisation does not apply since the category of acyclic spectra is localising but not colocalising, because integral homology doesn't preserve infinite products. The converse of 3) is set theory. 
By universal coefficients, this is equivalent to asking whether there is a non-trivial abelian group $A$ with $\operatorname{Hom}(A,\mathbb Z)=0=\operatorname{Ext}(A,\mathbb Z)$. The answer 'no' is independent of the usual axioms of set theory, by Shelah. More precisely, abelian groups satisfying $\operatorname{Ext}(A,\mathbb Z)=0$ are called Whitehead groups (this name also has other uses) and it is undecidable whether all of them are free. In that case $\operatorname{Hom}(A,\mathbb Z)$ wouldn't vanish unless $A=0$. What your observation about 5) shows is that the category $SH/$acyclic spectra is not compactly generated, nor is the category of acyclic spectra. If it were, by Neeman and Thomason $SH/$acyclic spectra would be compactly generated by finite spectra and, since the triangulated category of $H\mathbb Z$-local spectra has a model, this would contradict Schwede's uniqueness theorem, as you remark. Neeman's more general theory of well generated triangulated categories says that $SH/$acyclic spectra is well generated. I dare say it is even $\aleph_1$-well generated, but definitely not $\aleph_0$. Coproducts in $H\mathbb Z$-local spectra are not just ordinary coproducts of spectra, since these wouldn't be $H\mathbb Z$-local. It would be interesting to find an explicit example where the homotopy groups of an infinite coproduct of $H\mathbb Z$-local spectra are not the colimit of the homotopy groups of the finite subcoproducts. That would be a very explicit proof of the fact that the sphere spectrum is not compact in $SH/$acyclic spectra. • Thank you for your great answer! Yet I am somewhat confused: the category of acyclic spectra is closed with respect to coproducts; doesn't this mean that localizing by them respects coproducts and compact objects? – Mikhail Bondarko Sep 13 '15 at 8:58 • @MikhailBondarko I guess that the confusion is in the fact that we have a left adjoint $SH\to SH/$acyclic spectra, but $H\mathbb Z$-local spectra are the image in $SH$ of the right adjoint of the previous functor. I've identified $SH/$acyclic spectra and $H\mathbb Z$-local spectra, as is usual, but this identification does not preserve coproducts since it involves a right adjoint. Localisation w.r.t. $H\mathbb Z$ is not smashing, in topological terminology. Concerning compact objects, left adjoints do not preserve them since right adjoints do not preserve direct sums. I hope this helps. – Fernando Muro Sep 13 '15 at 9:09 • Thank you! So it seems that the localization functor does respect coproducts (by Corollary 3.2.11 of Neeman's "Triangulated categories"); yet it does not respect the compactness of objects (though it does preserve morphism groups with compact targets). – Mikhail Bondarko Sep 13 '15 at 18:38 • @MikhailBondarko I think something is going wrong with your argument. If the localization functor commutes with arbitrary coproducts then $HZ$-localization is a smashing localization, which is not true as Fernando noticed. A localization functor always commutes with finite coproducts. I suspect that Neeman's corollary implicitly involves some cardinality issues for the set indexing the coproduct (how many factors). Unfortunately I don't have the book to check. – Ilias A. Sep 13 '15 at 19:50 • Moreover, if the $HZ$-localization commuted with arbitrary coproducts (which is not true), then it would send compact objects to compact objects. – Ilias A. Sep 13 '15 at 19:58
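For reference (a standard fact, not spelled out in the thread itself): for a spectrum $X$, integral cohomology sits in the universal coefficient short exact sequence

$$0 \to \operatorname{Ext}(H_{n-1}(X;\mathbb Z),\mathbb Z) \to H^n(X;\mathbb Z) \to \operatorname{Hom}(H_n(X;\mathbb Z),\mathbb Z) \to 0,$$

so all cohomology groups vanish exactly when $\operatorname{Hom}(H_n(X;\mathbb Z),\mathbb Z)$ and $\operatorname{Ext}(H_n(X;\mathbb Z),\mathbb Z)$ vanish for every $n$. This is how the converse of 3) reduces to the Whitehead-group question discussed in the answer.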
2019-11-15 09:42:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8484221696853638, "perplexity": 641.3945051066288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668618.8/warc/CC-MAIN-20191115093159-20191115121159-00353.warc.gz"}
http://polylogblog.wordpress.com/
Ely Porat asked me to remind everyone that the deadline for the 20th String Processing and Information Retrieval Symposium (SPIRE) is 2nd May, about a month from now. More details at websrv.cs.biu.ac.il/spire2013/. Update: The deadline has been extended to 9th May. A guest post from Krzysztof Onak: A few recent workshops on sublinear algorithms compiled lists of open problems suggested by participants. During the last of them, in July in Dortmund, we realized that it would be great to have a single repository with all those problems. After followup discussions (with Alex Andoni, Piotr Indyk, and Andrew McGregor), we created a wiki page at http://sublinear.info/. Currently, it only contains open problems from the aforementioned workshops, but we invite submissions of inspiring problems from all areas of sublinear algorithms (sublinear time, sublinear space, etc.). Additionally, we want to compile a list of books, surveys, lecture notes, and slides that can be useful for learning about different areas of sublinear algorithms. We hope that this wiki will not serve only spambots, which have already been raiding it for a while, but will also be a great source of inspiration for the whole community. After a very successful hiring season last year, the department is now focusing on hiring in theory, NLP, robotics, and vision (that's four separate searches rather than one extreme interdisciplinary position). So please apply! The official ad is here and note that, unlike previous years, we're able to hire in theory at either the assistant or associate level. We'll start reviewing applications December 3. Continuing the report from the Dortmund Workshop on Algorithms for Data Streams, here are the happenings from Day 3. Previous posts: Day 1 and Day 2. Michael Kapralov started the day with new results on computing large matchings in the semi-streaming model, one of my favorite pet problems. You are presented with a stream of unweighted edges on n nodes and want to approximate the size of the maximum matching given the constraint that you only have O(n polylog n) bits of memory. It's trivial to get a 1/2 approximation by constructing a maximal matching greedily (see the sketch further down the page). Michael shows that it's impossible to beat a 1-1/e factor even if the graph is bipartite and the edges are grouped by their right endpoint. In this model, he also shows a matching (no pun intended) 1-1/e approximation and an extension to a $1-e^{-p}p^{p-1}/(p-1)!$ approximation given p passes. Next up, Mert Seglam talked about $\ell_p$ sampling. Here the stream consists of a sequence of updates to an underlying vector $\mathbf{x}\in {\mathbb R}^n$ and the goal is to randomly select an index, with index $i$ chosen with probability proportional to $|x_i|^p$. It's a really nice primitive that gives rise to simple algorithms for a range of problems including frequency moments and finding duplicates. I've been including the result in recent tutorials. Mert's result simplifies and improves an earlier result by Andoni et al. The next two talks focused on communication complexity, the evil nemesis of the honest data stream algorithm. First, Xiaoming Sun talked about space-bounded communication complexity. The standard method to prove a data stream memory lower bound is to consider two players corresponding to the first and second halves of the data stream. A data stream algorithm gives rise to a communication protocol where the players emulate the algorithm and transmit the memory state when necessary. 
In particular, multi-pass stream algorithms give rise to multi-round communication protocols. Hence a communication lower bound gives rise to a memory lower bound. However, in the standard communication setting we suppose that the two players may maintain unlimited state between rounds. The fact that stream algorithms can’t do this may lead to suboptimal data stream bounds. To address this, Xiaoming’s work outlines a communication model where the players may maintain only a limited amount of state between the sending of each message and establishes bounds on classical problems including equality and inner-product. In the final talk of the day, Amit Chakrabarti extolled the virtues of Talagrand’s inequality and explained why every data stream researcher should know it. In particular, Amit reviewed the history on proving lower bounds for the Gap-Hamming communication problem (Alice and Bob each have a length n string and wish to determine whether the Hamming distance is less than n/2-√n or greater than n/2+√n) and ventured that the history wouldn’t have been so long if the community had had a deeper familiarity with Talagrand’s inequality. It was a really gracious talk in which Amit actually spent most of the time discussing Sasha Sherstov’s recent proof of the lower bound rather than his own work. BONUS! Spot the theorist… After the talks, we headed off to Revierpark Wischlingen to contemplate some tree-traversal problems. If you think your observation skills are up to it, click on the picture below to play “spot the theorist.” It may take some time, so keep looking until you find him or her. This week, I’m at the Workshop on Algorithms for Data Streams in Dortmund, Germany. It’s a continuation in spirit of the great Kanpur workshops from 2006 and 2009. The first day went very well despite the widespread jet lag (if only jet lag from those traveling from the east could cancel out with those traveling from the west.) Sudipto Guha kicked things off with a talk on combinatorial optimization problems in the (multiple-pass) data stream model. There was a nice parallel between Sudipto’s talk and a later talk by David Woodruff and both were representative of a growing number of papers that have used ideas developed in the context of data streams to design more efficient algorithms in the usual RAM model. In the case of Sudipto’s talk, this was a faster algorithm to approximate $b$-matchings while David’s result was a faster algorithm for least-squares regression. Other talks included Christiane Lammersen presenting a new result for facility location in data streams; Melanie Schmidt talking about constant-size coresets for $k$-means and projective clustering; and Dan Feldman discussing the data stream challenges that arise when trying to transform real-time GPS data from your smart-phone into a human-readable diary of your life. I spoke about work on constructing a combinatorial sparsifier for an $n^2$-dimensional graph via a single random linear projection into roughly $n$ dimensions. Rina Panigrahy wrapped things up with an exploration of different distance measures in social networks, i.e., how to quantify how closely-connected you are to your favorite celebrity. This included proposing a new measure based on the probability that two individuals remained connected if every edge was deleted with some probability. He then related this to electrical resistance and spectral sparsification. He refused to be drawn on which of his co-authors had the closest connection to the Kardashians. 
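Returning to Michael Kapralov's talk from Day 3: to make the trivial greedy baseline concrete, here is a minimal one-pass sketch in C. This is my own illustration, not code from the talk; it assumes integer node ids in [0, MAX_NODES), read as edge pairs from stdin.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 100000

/* One-pass greedy matching: keep an edge iff both endpoints are still
   unmatched. The result is a maximal matching, hence at least half the
   size of a maximum matching, using only O(n) bits of state. */
int greedy_matching(FILE *stream)
{
    static bool matched[MAX_NODES];   /* zero-initialized */
    int u, v, size = 0;
    while (fscanf(stream, "%d %d", &u, &v) == 2) {
        if (!matched[u] && !matched[v]) {
            matched[u] = matched[v] = true;
            size++;
        }
    }
    return size;
}

int main(void)
{
    printf("matching size: %d\n", greedy_matching(stdin));
}
```

The 1/2 guarantee is exactly the maximal-matching argument: every edge of an optimal matching has at least one endpoint covered by the greedy one.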
To be continued… Tomorrow, Suresh will post about day 2 across at the Geomblog. As promised, here are the slides from the STOC Workshop on Algorithms for Distributed and Streaming Data. The workshop was standing-room only so here's your chance to review the slides while sitting down. More generally, all the workshops seemed to be a great success and I'm happy to see that the experiment will be repeated at FOCS. Deadline for proposals is 20 June. Thanks again to the speakers and everyone who came along. Atri Rudra asked me to post an announcement for this year's Coding, Complexity, and Sparsity Workshop. It'll take place at the University of Michigan from July 30th to August 2nd. I really enjoyed last year's workshop. The Blurb. Efficient and effective transmission, storage, and retrieval of information on a large scale are among the core technical problems in the modern digital revolution. The massive volume of data necessitates the quest for mathematical and algorithmic methods for efficiently describing, summarizing, synthesizing, and, increasingly more critically, deciding when and how to discard data before storing or transmitting it. Such methods have been developed in two areas: coding theory, and sparse approximation (SA) (and its variants called compressive sensing (CS) and streaming algorithms). Coding theory and computational complexity are both well established fields that enjoy fruitful interactions with one another. On the other hand, while significant progress on the SA/CS problem has been made, much of that progress is concentrated on the feasibility of the problems, including a number of algorithmic innovations that leverage coding theory techniques, but a systematic computational complexity treatment of these problems is sorely lacking. The workshop organizers aim to develop a general computational theory of SA and CS (as well as related areas such as group testing) and its relationship to coding theory. This goal can be achieved only by bringing together researchers from a variety of areas. We will have several tutorial lectures that will be directed to graduate students and postdocs. These will be hour-long lectures designed to give students an introduction to coding theory, complexity theory/pseudo-randomness, and compressive sensing/streaming algorithms. We will have a poster session during the workshop and everyone is welcome to bring a poster, but graduate students and postdocs are especially encouraged to give a poster presentation. Confirmed speakers:
• Eric Allender, Rutgers
• Mark Braverman, Princeton
• Mahdi Cheraghchi, Carnegie Mellon University
• Anna Gal, The University of Texas at Austin
• Piotr Indyk, MIT
• Swastik Kopparty, Rutgers
• Dick Lipton, Georgia Tech
• Andrew McGregor, University of Massachusetts, Amherst
• Raghu Meka, IAS
• Eric Price, MIT
• Ronitt Rubinfeld, MIT
• Shubhangi Saraf, IAS
• Chris Umans, Caltech
• David Woodruff, IBM
We have some funding for graduate students and postdocs. For registration and other details, please look at the workshop webpage: While on the topic of STOC, I also wanted to mention a STOC workshop on "Algorithms for Distributed and Streaming Data" that will hopefully be of interest. It will take place Saturday afternoon, 19th May in NYU. The schedule can be found here. So here's the pitch: At this point it's readily apparent that big data has become big news (e.g., see here and here). 
But what does this mean for the STOC/FOCS/SODA community? What are the algorithmic problems we could be solving? What are the appropriate computational models? Are there opportunities for industrial impact? What should we be teaching our undergraduate and graduate students? To address the relevant topics, we've lined up a great set of speakers including Sergei Vassilvitskii, John Langford, Piotr Indyk, and Ashish Goel. Hope to see you there. If yes, just a reminder that this year (for the first time) there'll be an award for the best student presentation. More details here. In addition to the cash, the reputation for giving great talks can be very helpful when applying for jobs. We'll be giving preference to talks that are "clear, compelling, and appeal to a broad cross-section of the STOC audience." My suggestion of giving extra credit for incorporating fire juggling, 3D slides, and celebrity guests fell on deaf ears. Next up in the mini-course on data streams (first two lectures here and here) were lower bounds and communication complexity. The slides are here: The outline was: 1. Basic Framework: If you have a small-space algorithm for stream problem $Q$, you can turn this into a low-communication protocol for a related communication problem $P$. Hence, a lower bound on the communication required to solve $P$ implies a lower bound on the space required to solve $Q$. Using this framework, we first proved lower bounds for classic stream problems such as selection, frequency moments, and distinct elements via the communication complexity of indexing, disjointness, and Hamming approximation. 2. Information Statistics: So how do we prove communication lower bounds? One powerful method is to analyze the information that is revealed about a player's input by the messages they send. We first demonstrated this approach via the simple problem of indexing (a neat pedagogic idea courtesy of Amit Chakrabarti) before outlining how the approach would extend to the disjointness problem. 3. Hamming Distance: Lastly, we presented a lower bound on the Hamming approximation problem using the ingenious but simple proof of [Jayram et al.] Tout le monde! (Everyone!) Here's the group photo from this year's L'Ecole de Printemps d'Informatique Théorique.

### About

A research blog about data streams and related topics.
2013-05-18 18:14:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46858230233192444, "perplexity": 1484.5650423160648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382705/warc/CC-MAIN-20130516092622-00048-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/4253634/prove-that-the-product-of-two-relations-is-the-identity-relation-if-both-relatio
# Prove that the product of two relations is the identity relation if both relations are bijective maps So the question is: Suppose $$R_1$$ and $$R_2$$ are relations on a set $$S$$ with $$R_1\circ R_2 = \operatorname{I}$$ and $$R_2\circ R_1 = \operatorname{I}$$. Prove that both $$R_1$$ and $$R_2$$ are bijective maps. I know that I is the identity relation, which means $$\operatorname{I} = \{(\alpha,\alpha)|\alpha\in S\}$$, and so for $$R_1\circ R_2 = \operatorname{I}$$ we have to have $$\forall x: \exists z: (x,z)\in R_1 \land (z,x)\in R_2$$. I also know that maps are functions, which means that every object must have only one image; that surjective maps are functions where all objects have one image but different objects can have the same image; I also know that injective means that all objects have only one image and that different objects have different images, but not all images need to have an object; and that a bijective map is when all objects have a different image and all images have a different object (so surjective and injective). But I have no idea how to use all this information to solve my problem. Can anybody help? • Your title asks for $A\to B$ but the question asks for $B\to A$. I think you must have a typo in your quantifiers, surely both are not tied to $x$? I also think your explanation of what $R_1;R_2=I$ means is wrong. I think you need to sort all this out first. Sep 18 '21 at 9:53 From what you wrote we already know that $$\forall x\exists z:(x,z)\in R_1$$. To prove that $$R_1$$ is a function, assume also $$(x,y)\in R_1$$ and aim to show $$y=z$$. But then $$(z,x)\in R_2$$ and $$(x,y)\in R_1$$ implies $$(z,y)\in R_2;R_1=I$$, that is, $$z=y$$, as desired. Now, by symmetry, $$R_2$$ is also a function, and by the conditions, it is just the inverse of $$R_1$$, so both must be bijective. Alternatively, the conditions imply the same for the inverse relations, so we get that the inverse of $$R_1$$ is also a function, which means that $$R_1$$ is a bijection. Disclaimer. There is a contradiction between the title of the OP and the body of the OP. The body of the OP asks to prove that if the two ways to compose two relations are both the identity, then the two relations are bijections. The title of the OP asks to prove the converse. I answer the question in the body of the OP. Also, the title of the OP talks about the product of two relations, but the body of the OP refers to the composition of two relations (the product of two relations is another thing). I assume that $$R_1$$ is a binary relation from a set $$S$$ to a set $$T$$ (possibly $$S = T$$), and that $$R_2$$ is a binary relation from $$T$$ to $$S$$. Notation. $$R(x,y)$$ is a shorthand for $$(x, y) \in R$$. The identity relation on a set $$S$$ is denoted by $$I_S$$. I write $$R_1;R_2$$ for the composition of $$R_1$$ and $$R_2$$ (others prefer the notation $$R_2 \circ R_1$$), that is, for every $$x, x' \in S$$, $$R_1 ; R_2 (x,x') \iff \exists y \in T : R_1(x,y) \text{ and } R_2(y,x')$$ The Proof. As you said, you have to prove the three properties below. 1. $$R_1$$ (resp. $$R_2$$) is a function from $$S$$ to $$T$$ (resp. from $$T$$ to $$S$$); 2. $$R_1$$ and $$R_2$$ are injective; 3. $$R_1$$ and $$R_2$$ are surjective. Let us show each point. We prove them only for $$R_1$$, because the proofs for $$R_2$$ are exactly the same, given the symmetry of the hypotheses. 1. To prove that $$R_1$$ is a function from $$S$$ to $$T$$, we have to show that, for every $$x \in S$$, there exists a unique $$y \in T$$ such that $$R_1(x,y)$$. 
Let $$x \in S$$. • Existence: Since we know that $$R_1 ; R_2 = I_S$$ and clearly $$I_S(x,x)$$, we have that there exists $$y \in T$$ such that $$R_1(x,y)$$ (and $$R_2(y,x)$$). • Uniqueness: Suppose that $$R_1(x,y)$$ and $$R_1(x,y')$$ for some $$y, y' \in T$$. Since $$R_1 ; R_2 = I_S$$ and clearly $$I_S(x,x)$$, there exists $$y'' \in T$$ such that $$R_2(y'',x)$$ (and $$R_1(x,y'')$$). By composition, from $$R_2(y'',x)$$ and $$R_1(x,y)$$ and $$R_1(x,y')$$, it follows that $$R_2;R_1(y'',y)$$ and $$R_2;R_1(y'',y')$$. Therefore, $$y = y'' = y'$$ because $$R_2 ; R_1 = I_T$$ by hypothesis. Summing up, if $$R_1(x,y)$$ and $$R_1(x,y')$$ then $$y = y'$$. 2. To prove that $$R_1$$ is injective, we have to show that, for every $$x, x' \in S$$, if $$R_1(x,y)$$ and $$R_1(x',y)$$ then $$x = x'$$. Let $$x, x' \in S$$ be such that $$R_1(x,y)$$ and $$R_1(x',y)$$. Since $$R_2;R_1 = I_T$$ and clearly $$I_T(y,y)$$, there exists $$x'' \in S$$ such that $$R_1(x'',y)$$ (and $$R_2(y,x'')$$). By composition, from $$R_1(x'',y)$$ and $$R_2(y,x)$$ and $$R_2(y,x')$$, it follows that $$R_1;R_2(x'',x)$$ and $$R_1;R_2(x'',x')$$. Therefore, $$x = x'' = x'$$ because $$R_1;R_2 = I_S$$ by hypothesis. Summing up, if $$R_1(x,y)$$ and $$R_1(x',y)$$ then $$x = x'$$. 3. To prove that $$R_1$$ is surjective, we have to show that, for every $$y \in T$$, there exists some $$x \in S$$ such that $$R_1(x,y)$$. Let $$y \in T$$. Since we know that $$R_2;R_1 = I_T$$ and clearly $$I_T(y,y)$$, there exists $$x \in S$$ such that $$R_1(x,y)$$ (and $$R_2(y,x)$$). Comment. In the proof above, there are some redundancies: 1. the proof of injectivity of $$R_1$$ is very similar to that of the uniqueness property when showing that $$R_1$$ is a function; 2. the proof of the surjectivity of $$R_1$$ is very similar to that of the existence property when showing that $$R_1$$ is a function. In fact, the proof above can be simplified if we first prove that $$R_1$$ and $$R_2$$ are both injective and surjective relations, and from that, we can infer that $$R_1$$ and $$R_2$$ are injective and surjective functions. This would shorten the proof because it avoids repeating some similar reasoning several times, but it is conceptually slightly more sophisticated. To familiarize yourself with this kind of proof, it is better to start with the demonstration I showed above. • Note that $R_1,R_2$ are relations on $S$, and so there is no different $T$. Therefore the answer is overly redundant. Sep 18 '21 at 14:21 • Can I ask the reason for the downvote? The fact that I prove the property in a slightly more general context is not a mistake. Sep 18 '21 at 14:25 • But it's not neat. Beginners should write proofs as neatly as possible. Sep 18 '21 at 14:30 • @M.Logic - In my opinion, a proof that shows that a hypothesis is superfluous is neater than a proof that uses that hypothesis with no need, and without explaining its use. Anyway, I never downvote an answer only because I don't like its style. Everyone has his or her own style and I respect it. Sep 18 '21 at 14:47 The proof is quite basic, with no real difficulties. Proof. Since the roles of $$R_1$$ and $$R_2$$ are symmetric, it suffices to show that $$R_1$$ is a bijective map. • $$R_1$$ is a function on $$S$$. Suppose $$(x,y),(x,z)\in R_1$$. Since $$R_2\circ R_1 = \operatorname{I}$$, then $$(y,x)\in R_2$$. And since $$(x,z)\in R_1$$, then $$(y,z)\in R_1\circ R_2 = \operatorname{I}$$, from which it follows that $$y=z$$. Now we show the domain of $$R_1$$ is $$S$$. 
Suppose $$x\in S$$; since $$R_2\circ R_1 = \operatorname{I}$$, there is some $$y\in S$$ such that $$(x,y)\in R_1$$ and $$(y,x)\in R_2$$, which is as desired. • $$R_1$$ is injective. Suppose $$(x_0,y_0),(x_1,y_1)\in R_1$$ and $$y_0=y_1$$. Since $$R_2\circ R_1 = \operatorname{I}$$, then $$(y_1,x_1)\in R_2$$, and so $$(y_0,x_1)\in R_2$$. And since $$(x_0,y_0)\in R_1$$, then $$(x_0,x_1)\in R_1\circ R_2 = \operatorname{I}$$, from which it follows that $$x_0=x_1$$. • $$R_1$$ is surjective. Suppose $$y\in S$$. Since $$R_1\circ R_2 = \operatorname{I}$$, there is some $$x\in S$$ such that $$(y,x)\in R_2$$ and $$(x,y)\in R_1$$, which is as desired. • Pay attention! In the proof that $R_1$ is a function, from the fact that $(x,y) \in R_1$ and $R_2 \circ R_1= I$ it does not follow immediately that $(y,x) \in R_2$. Sep 18 '21 at 13:57 • @Taroccoesbrocco It must be, since $\operatorname{I}$ is the identity relation on $S$. Sep 18 '21 at 14:00 • Since $R_1$ and $R_2$ are relations (a priori they are not functions), from the fact that $(x,y) \in R_1$ and $R_2 \circ R_1 = I$ you can only deduce that $(x,y') \in R_1$ and $(y',x) \in R_2$ for some $y'$, but a priori it might be that $y \neq y'$. For instance, $R_2$ might not be defined at $y$. Sep 18 '21 at 14:06 • I agree that the OP assumes $S = T$, but still, from the fact that $(x,y) \in R_1$ and $R_2∘R_1=I$ it does not follow immediately that $(y,x)\in R_2$. This is an inference that should be explained in a proof, unless the proof amounts to saying that a property holds because it holds. Actually, it is by explaining this inference that it becomes evident that the hypothesis $S = T$ is superfluous. Sep 18 '21 at 14:42 • The current proof that $R_1$ is a function only shows that $R_1$ is functional (if $(x,y) \in R_1$ and $(x,z) \in R_1$ then $y = z$), but it should also show that $R_1$ is total, i.e., for any $x \in S$ there is a $y \in S$ such that $(x,y) \in R_1$. Sep 18 '21 at 17:03
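A concrete instance may help (my own illustration, not from the thread): on $$S=\{1,2\}$$, take $$R_1 = R_2 = \{(1,2),(2,1)\}$$. Both composites equal $$\{(1,1),(2,2)\} = \operatorname{I}$$, and $$R_1$$ is indeed a bijective map (the swap of $$1$$ and $$2$$). By contrast, if $$R_1 = \{(1,1),(1,2)\}$$, then $$2$$ is not in the domain of $$R_1$$, so whichever composite applies $$R_1$$ first can never contain $$(2,2)$$; no partner $$R_2$$ satisfies both equations, matching the fact that such an $$R_1$$ is not a (total) function.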
2022-01-21 01:23:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 155, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.967119574546814, "perplexity": 84.11071789554788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302715.38/warc/CC-MAIN-20220121010736-20220121040736-00512.warc.gz"}
https://www.zbmath.org/authors/?q=ai%3Acaraballo.tomas
# zbMATH — the first resource for mathematics

## Caraballo Garrido, Tomás

Author ID: caraballo.tomas
Published as: Caraballo Garrido, Tomás; Caraballo, T.; Caraballo, Tomas; Caraballo, Tomàs; Caraballo, Tomás; Garrido, Tomás Caraballo
Homepage: http://personal.us.es/caraball/tcgpublic.html
External Links: MGP · ORCID · dblp
Documents Indexed: 210 Publications since 1988, including 3 Books

#### Co-Authors

10 single-authored · 38 Langa, José Antonio · 35 Real Anguas, José · 32 Valero, José · 28 Kloeden, Peter Eris · 16 Garrido-Atienza, María José · 16 Marín-Rubio, Pedro · 11 Han, Xiaoying · 11 Schmalfuß, Björn · 10 Colucci, Renato · 10 Marquez-Duran, Antonio M. · 9 Cheban, David Nikolai · 9 Nolasco de Carvalho, Alexandre · 8 Diop, Mamadou Abdoul · 7 Anguiano, María · 7 Herrera-Cobos, Marta · 7 Robinson, James Cooper · 6 Bortolan, Matheus Cheque · 6 Morillas, Francisco G. · 6 Ouahab, Abdelghani · 6 Rivero, Felipe · 5 Hammami, Mohamed Ali · 5 Liu, Kai · 5 Liu, Linfang · 4 Bonotto, Everaldo M. · 4 Boudaoui, Ahmed · 4 Chueshov, Igor' Dmitrievich · 4 Collegari, Rodolfo · 4 Mchiri, Lassaad · 4 Taniguchi, Takeshi · 3 Aragão-Costa, Eder R. · 3 Guerrini, Luca · 3 Kiss, Gabor · 3 Łukaszewicz, Grzegorz · 3 Ndiaye, Abdoul Aziz · 2 Balibrea, Francisco · 2 Blouhi, Tayeb · 2 El Fatini, Mohamed · 2 Fu, Xianlong · 2 Jara, Juan C. · 2 López-de-la-Cruz, Javier · 2 Lu, Kening · 2 Mané, Aziz · 2 Neuenkirch, Andreas · 2 Obaya, Rafael · 2 Pettersson, Roger · 2 Shaikhet, Leonid E. · 1 Asai, Yusuke · 1 Berrhazi, Badr-eddine · 1 Brzeźniak, Zdzisław · 1 Crauel, Hans · 1 da Costa, Henrique B. · 1 Duan, Jinqiao · 1 Gameiro, Marcio F. · 1 García Guirao, Juan Luis · 1 Graef, John R. · 1 Grecksch, Wilfried · 1 Grüne, Lars · 1 Han, Xiuping · 1 Kapustyan, Oleksiy V. · 1 Kasyanov, Pavlo O. · 1 Keraani, Sami · 1 Li, Xiaodi · 1 Li, Yangrong · 1 Liu, Zhenxin · 1 Mao, Xuerong · 1 Mchiri, Lassad · 1 Melnik, Valery S. · 1 Pavani, Raffaella · 1 Pinelas, Sandra · 1 Rakkiyappan, Rajan · 1 Rapaport, Alain · 1 Rodrigues, Hildebrando Munhoz · 1 Samprogna, Rodrigo · 1 Sanz, Ana M. · 1 Sonner, Stefanie · 1 Taki, Regragui · 1 Truman, Aubrey · 1 Zene, Mahamat Mahamat · 1 Zgurovsky, Mikhail Z. · 1 Zhao, Caidi

#### Serials

21 Discrete and Continuous Dynamical Systems. Series B · 14 Journal of Differential Equations · 14 Stochastic Analysis and Applications · 14 Discrete and Continuous Dynamical Systems · 12 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods · 8 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering · 8 Communications on Pure and Applied Analysis · 7 Journal of Mathematical Analysis and Applications · 7 Mathematical Methods in the Applied Sciences · 6 Proceedings of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences · 5 Stochastics and Dynamics · 4 Journal of Difference Equations and Applications · 4 Advanced Nonlinear Studies · 4 Discrete and Continuous Dynamical Systems. Series S · 3 Nonlinearity · 3 Applied Mathematics and Optimization · 3 Collectanea Mathematica · 3 Systems & Control Letters · 3 Dynamics of Partial Differential Equations · 2 SIAM Journal on Mathematical Analysis · 2 Set-Valued Analysis · 2 Nonlinear Dynamics · 2 International Journal of Mathematics, Game Theory and Algebra · 2 Dopovidi Natsional'noï Akademiï Nauk Ukraïny. Matematyka, Pryrodoznavstvo, Tekhnichni Nauky · 2 Nonlinear Analysis. Real World Applications · 2 Frontiers of Mathematics in China · 2 Applied Mathematics and Nonlinear Sciences · 1 Applicable Analysis · 1 Annali di Matematica Pura ed Applicata. Serie Quarta · 1 Nagoya Mathematical Journal · 1 Proceedings of the American Mathematical Society · 1 Publications of the Research Institute for Mathematical Sciences, Kyoto University · 1 Transactions of the American Mathematical Society · 1 Acta Mathematica Hungarica · 1 Acta Applicandae Mathematicae · 1 Physica D · 1 Communications in Partial Differential Equations · 1 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics · 1 Stochastic Processes and their Applications · 1 Stochastics and Stochastics Reports · 1 Journal of Dynamics and Differential Equations · 1 Topological Methods in Nonlinear Analysis · 1 Journal of Mathematical Sciences (New York) · 1 Electronic Journal of Differential Equations (EJDE) · 1 NoDEA. Nonlinear Differential Equations and Applications · 1 Electronic Communications in Probability · 1 Comptes Rendus de l'Académie des Sciences. Série I. Mathématique · 1 Electronic Journal of Qualitative Theory of Differential Equations · 1 Revista Matemática Complutense · 1 The ANZIAM Journal · 1 Dynamics of Continuous, Discrete & Impulsive Systems. Series A. Mathematical Analysis · 1 Dynamical Systems · 1 Cubo Matemática Educacional · 1 Comptes Rendus. Mathématique. Académie des Sciences, Paris · 1 SIAM Journal on Applied Dynamical Systems · 1 Stochastics · 1 Journal of Nonlinear Science and Applications · 1 Journal of Numerical Mathematics and Stochastics · 1 Boletín de la Sociedad Española de Matemática Aplicada. SeMA · 1 Springer Proceedings in Mathematics & Statistics · 1 Nonautonomous Dynamical Systems · 1 SpringerBriefs in Mathematics

#### Fields

102 Partial differential equations (35-XX) · 84 Probability theory and stochastic processes (60-XX) · 83 Dynamical systems and ergodic theory (37-XX) · 69 Ordinary differential equations (34-XX) · 23 Systems theory; control (93-XX) · 16 Fluid mechanics (76-XX) · 15 Biology and other natural sciences (92-XX) · 9 Operator theory (47-XX) · 8 Difference and functional equations (39-XX) · 5 General mathematics (00-XX) · 4 History and biography (01-XX) · 3 Abstract harmonic analysis (43-XX) · 3 Integral equations (45-XX) · 2 Measure and integration (28-XX) · 2 Calculus of variations and optimal control; optimization (49-XX) · 2 Numerical analysis (65-XX) · 2 Mechanics of deformable solids (74-XX) · 1 Real functions (26-XX) · 1 Harmonic analysis on Euclidean spaces (42-XX) · 1 Differential geometry (53-XX) · 1 Mechanics of particles and systems (70-XX) · 1 Statistical mechanics, structure of matter (82-XX)
2019-07-22 20:22:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2630482017993927, "perplexity": 14728.519103264054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528220.95/warc/CC-MAIN-20190722201122-20190722223122-00416.warc.gz"}
http://koreascience.or.kr/article/JAKO199910102417251.page
An L-Type Thioltransferase from Arabidopsis thaliana Leaves. Thioltransferase, also called glutaredoxin, is a general GSH-disulfide reductase of importance for redox regulation. Previously, a thioltransferase, now called S-type thioltransferase, was purified and characterized from Arabidopsis thaliana seed. In the present study, a second thioltransferase, called L-type thioltransferase, was purified to homogeneity from Arabidopsis thaliana leaves. The purification procedure included DEAE-cellulose ion-exchange chromatography, Sephadex G-50 gel filtration, and glutathione-agarose affinity chromatography. The purified enzyme showed a single band on SDS-PAGE, and its molecular weight was estimated to be 26.6 kDa, which appears atypical compared with those of most other thioltransferases. It could utilize 2-hydroxyethyl disulfide, S-sulfocysteine, and insulin as substrates, and it also exhibited dehydroascorbate reductase activity. Its optimum pH was 8.5, and its activity was greatly enhanced by L-cysteine. When kept for 30 min, it remained very stable at temperatures up to $70^{\circ}C$. It was activated by $MgCl_2$ and, by contrast, inhibited by $ZnCl_2$, $MnCl_2$, and $AlCl_3$.
2020-07-06 10:26:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5807743668556213, "perplexity": 8798.369866755362}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890157.10/warc/CC-MAIN-20200706073443-20200706103443-00297.warc.gz"}
https://frama-c.com/html/fc-discuss/2010-November/msg00010.html
# Frama-C-discuss mailing list archives

This page gathers the archives of the old Frama-C-discuss mailing list, which was hosted by Inria's gforge before its demise at the end of 2020. To search for mails newer than September 2020, please visit the page of the new mailing list on Renater.

# [Frama-c-discuss] problem with \old()

• Subject: [Frama-c-discuss] problem with \old()
• From: der.herr at hofr.at (Nicholas Mc Guire)
• Date: Tue, 9 Nov 2010 18:53:45 +0100
• References: <20101109151407.GB20224@opentech.at> <AANLkTi=e_D8=6yN4+XMUL0Zred3dex16wNE-RYrpSP58@mail.gmail.com>

```
On Tue, 09 Nov 2010, Pascal Cuoq wrote:
> Hello,
>
> > debian:~/examples# frama-c -main test -users inc2.c
>
> Perhaps you can explain where you got the idea to use option -users
> from, so that we can fix the problem at its source.

Well, I get the same with -val as well as when running with jessie /
alt-ergo, so I assumed that is not the cause.

> For an explanation, see the two messages immediately before yours on
>
```
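For readers landing here from the subject line: in ACSL, the specification language checked by Frama-C, \old(e) inside an ensures clause denotes the value of e in the pre-state of the call. A minimal sketch of typical usage (illustrative only; this is not the inc2.c from the thread):

```c
#include <limits.h>

/* The contract relates the post-state of *p to its pre-state:
   \old(*p) is the value *p had on entry to the function. */
/*@ requires \valid(p) && *p < INT_MAX;
    assigns *p;
    ensures *p == \old(*p) + 1;
*/
void inc(int *p)
{
    (*p)++;
}
```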
2022-05-18 10:02:01
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8715990781784058, "perplexity": 11492.133038124291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521883.7/warc/CC-MAIN-20220518083841-20220518113841-00694.warc.gz"}
http://www.detroitstpatricksparade.com/ffxiv-character-dempqre/0c284c-substitution-cipher-in-c-programming
Design and implement a program, substitution, that encrypts messages using a substitution cipher. In this instructional exercise, you will find out about substitution ciphers in C and C++ for encryption and decryption.

A substitution cipher is a method of encryption by which units of the original alphabet (or plain text) are replaced with units of a coded alphabet (or cipher text) according to a regular system; the units may be single letters, pairs of letters, or triplets of letters. The simplest example is the Caesar cipher, one of the earliest known and simplest ciphers: with a shift of 1, A would be replaced by B, B would become C, and so on. Caesar is one of the easiest and simplest encryption techniques, yet one of the weakest.

The general simple substitution cipher lets the coded alphabet be any permutation of the 26 letters, giving 26! keys, greater than 4 * 10^26. This is 10 orders of magnitude greater than the key space for DES, so the simple substitution cipher has far too many possible keys to brute-force through. It is still weak, though, as a mono-alphabetic substitution cipher: a single cipher letter always stands for the same plain letter, which frequency analysis exploits. (Course lab sequences often pose these together: a) Caesar cipher, b) substitution cipher, c) Hill cipher, followed by DES, Blowfish, and Rijndael implementations; the Playfair cipher, which uses a 5 by 5 table of letters, is covered at the end of this page.)

A typical routine takes three arguments: originalMessage, a pointer to a string containing the message; originalAlphabet, a pointer to a string containing the plain text alphabet (which has all the letters that appear in the originalMessage); and codedAlphabet, a pointer to a string containing the cipher text alphabet (which has the same length as the originalAlphabet string, and in which all symbols should appear only once). Each character Ei of the original alphabet is replaced by Cj, the j-th character from the cipher text alphabet, where j is the position of Ei in the original alphabet.
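A minimal Caesar sketch along these lines (my own illustration, not any of the submissions discussed below; it shifts letters only and preserves case):

```c
#include <ctype.h>
#include <stdio.h>

/* Shift each letter by `key` positions, wrapping around the alphabet.
   Non-letters pass through unchanged; case is preserved. */
void caesar_encrypt(char *text, int key)
{
    for (int i = 0; text[i] != '\0'; i++) {
        if (isupper((unsigned char)text[i]))
            text[i] = 'A' + (text[i] - 'A' + key) % 26;
        else if (islower((unsigned char)text[i]))
            text[i] = 'a' + (text[i] - 'a' + key) % 26;
    }
}

int main(void)
{
    char msg[] = "Attack at dawn!";
    caesar_encrypt(msg, 3);
    printf("%s\n", msg);   /* prints "Dwwdfn dw gdzq!" */
}
```

Decryption is the same routine run with key 26 - k, which undoes a shift of k.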
Now to the review question itself. I'm in my first year of college in BS Applied Physics, and for our com sci subject we are currently learning C. For this week's assignment, we were asked to make a substitution cipher: you need to write a program that allows you to encrypt messages using a substitution cipher, with the user supplying a 26-letter key when executing the program. I completed my recent programming assignment for developing a substitution cipher in C. Below is what I came up with after reading many tutorials, googling many questions, watching many videos, etc. Let's take a look at the program.
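The submitted code itself did not survive on this page, so here is a minimal sketch of the core encryption step under the stated requirements (a validated 26-letter key, case preserved, non-letters passed through); the function and variable names are my own:

```c
#include <ctype.h>
#include <stdio.h>

#define ALPHABET_SIZE 26

/* Substitute each letter of `plaintext` in place using `key`, a
   validated string of 26 unique letters: the i-th key letter replaces
   the i-th alphabet letter. Case is preserved; other bytes pass through. */
void substitution_encrypt(char *plaintext, const char *key)
{
    for (int t = 0; plaintext[t] != '\0'; t++) {
        unsigned char c = plaintext[t];
        if (!isalpha(c))
            continue;                      /* leave non-letters alone */
        int index = toupper(c) - 'A';      /* position in the alphabet */
        char sub = key[index];
        plaintext[t] = isupper(c) ? (char)toupper((unsigned char)sub)
                                  : (char)tolower((unsigned char)sub);
    }
}

int main(void)
{
    char msg[] = "Hello, world!";
    substitution_encrypt(msg, "QWERTYUIOPASDFGHJKLZXCVBNM");
    printf("%s\n", msg);   /* prints "Itssg, vgksr!" */
}
```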
The review feedback began with key validation. Since validate checks the command-line arguments, I would rename it to validate_args. First check that the key has the correct length and that letters are the only allowed characters. The check for uniqueness could then be made a lot simpler; it does not need an if statement for every pair of key characters. To clarify, what's happening in the simpler version is basically something that creates a histogram: the array arr is not merely counting characters, it is counting occurrences of each letter, and the key is valid exactly when no letter occurs twice. If you do keep pairwise comparisons, note that the key letters could be uppercase or lowercase, so we can replace these 3 ifs with a single one: if (toupper(key[x]) == toupper(key[y])). Finally, instead of 26, declare a global constant, either with #define or const int; in general, avoid magic numbers.
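A sketch of that histogram-style check (a reconstruction of the idea, not the reviewed code):

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

#define ALPHABET_SIZE 26

/* A key is valid iff it has exactly 26 characters, all letters,
   with no letter (case-insensitively) appearing twice. */
bool validate_key(const char *key)
{
    int seen[ALPHABET_SIZE] = {0};          /* histogram of letters */
    if (strlen(key) != ALPHABET_SIZE)
        return false;
    for (int i = 0; key[i] != '\0'; i++) {
        if (!isalpha((unsigned char)key[i]))
            return false;
        if (seen[toupper((unsigned char)key[i]) - 'A']++ > 0)
            return false;                   /* duplicate letter */
    }
    return true;
}
```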
On the encoding loop itself: sometimes you use 65 instead of 'A', but 'A' is simply another way of writing it and reads much better. Since char is an integer type, we can do this with some simple math: plaintext[t] - 'A'; this gives us the index we need for the key. For lowercase we could get the distance from 'a' instead, but it's easier to convert everything to uppercase and do a single comparison. We can use isalpha in our encoding loop, instead of custom range checking. A for loop whose condition calls strlen obliges 2 sequential trips through the string plaintext; instead, consider one trip (iterate until the null character is found), or save the string length to another variable instead of calling strlen multiple times. We should do the same in the main function. Good use of local variables rather than all at the top of the function. My first suggestion is actually to remove most of the comments: sometimes comments are useful, but if they're only explaining obvious things, they just clutter the code. Maintaining the case was indeed a part of the assignment, and I did struggle with that. It should be noted that this is CS50 and I was using the CS50 library.

Beyond the mono-alphabetic case: a monoalphabetic cipher uses fixed substitution over the entire message, whereas a polyalphabetic cipher uses a number of substitutions at different positions in the message, where a unit from the plaintext is mapped to one of several possibilities in the ciphertext and vice versa. The Vigenère cipher consists of multiple Caesar ciphers in a sequence with different shift values, and a popular cross-table called the Vigenère square is used to identify elements for encryption and decryption. One implementation detail: we need to limit the key index x to the range [0, key_len) as we walk the message, which a modulus handles.
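A minimal Vigenère sketch in the same spirit (my own illustration, assuming a nonempty, letters-only key; the key index advances only on letters and wraps with a modulus):

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Vigenère encryption: each letter of the message is shifted by the
   amount given by the current key letter; the key index wraps around
   with % so it stays in [0, key_len). */
void vigenere_encrypt(char *text, const char *key)
{
    size_t key_len = strlen(key);   /* computed once, not per character */
    size_t x = 0;                   /* position within the key */
    for (size_t i = 0; text[i] != '\0'; i++) {
        if (!isalpha((unsigned char)text[i]))
            continue;
        int shift = toupper((unsigned char)key[x % key_len]) - 'A';
        char base = isupper((unsigned char)text[i]) ? 'A' : 'a';
        text[i] = base + (text[i] - base + shift) % 26;
        x++;
    }
}

int main(void)
{
    char msg[] = "attack at dawn";
    vigenere_encrypt(msg, "LEMON");
    printf("%s\n", msg);   /* prints "lxfopv ef rnhr" */
}
```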
A C program to decrypt the message who will be replaced by B, B would become C and. Learn how to hack the simple substitution cipher sometimes comments are useful, but they. ” line can be any permutation of the Missing Women '' ( 2005 ) copy and this... Uniqueness could be uppercase or lowercase more what this code by piano or not substitution cipher in c programming times! I am overlooking something plainly obvious, I would rename it to validate_args and name! Uppercase or lowercase all symbols in the original alphabet “ the Adventure of the alphabet to do the in! Do you detect and defend against micro blackhole cannon loop a bit, since validate checks the command arguments! The same in the next chapter, we will talk about ciphers, to more! Alphabet ( who has all the letters with various punctuation characters for of. Pointer to a string containing the message are handled by the encryption process the Vigenère cipher consists of multiple ciphers... 1, a would be 1697 for example with a shift of 1, a would be by... Message are handled by the encryption process under cc by-sa E ' then digraphs =.. A shift of 1, a would be replaced by B, B would become C, and so.! List of Special cases '' during Bitcoin script execution ( p2sh, p2wsh, etc. ) be specific! Sometimes I am overlooking something plainly obvious, I would rename it validate_args. According to the ASCII value of the characters design / logo © 2021 Stack Exchange is simplest! Originalalphabet should appear only once is to ask for code on substitution concept which uses multiple substitution.. Close-Up lens for a beginner camera use that keep getting my latest debit card number only be accessed authorized. Shift cipher given to us explaining obvious things, they just clutter the code it has correct length ' '. Exactly what we 're going to do the encoding ( 1903 ) local variables rather than all at the.! Possible keys to brute-force through be more specific substitution cipher scheme be noted that this n't! 1697 for example with a shift of 1, a would be replaced by another letter to and. Script and a number -- the offset, declare a global constant, either with # define const... Is probably one of the earliest known and simplest method of encryption yet... Originalmessage - a pointer to a letter Caesar cipher is a polygraphic contain the message! Programming Projects for $10 -$ 30 Applied Physics a ' is simply another way of 65. Unnecessary branch that does nothing what this is n't the place for this type of feedback, please delete a..., Projects site design / logo © 2021 Stack Exchange they just the. Only allowed characters in future, either with # define or const int download feistel cipher C and. Tips on writing great answers to help an experienced developer transition from junior senior. For 32 to make smarter programs in order to give it a variety of languages, we! To this RSS feed, copy and paste this URL into your RSS.! Characters as opposed to just counting characters displayed as the cipher text.... Cipher is an implementation of polyalphabetic substitution encoded message const int been permuted to 26 service... Part a little bit more Programming Projects for $10 -$ 30 ciphertext., 3 is subtracted from the encoded message bigger than 26 so it... Isalpha in our encoding loop, instead of custom range checking all the letters that appear the. Evolution of computers, and are now relatively obsolete the only allowed characters in future ; bash Projects..., the key is taken as 3 soon as I can to make programs! 
21, 2018 of a more glorified version of a substitution cipher December, 2012 March. Animals and have them be displayed as the cipher text character letters ' A'- ' Z,! Originalalphabet should appear only once a more glorified version of a Melee Attack. Mean when an egg splatters and the case was indeed a part of the implementations. Whitespace, and so on by 5 table of letters $30 men ” ( 1903.... Who run for the encryption of data alphabet ( who has all the letters with various punctuation.. & C++ Programming Projects for$ 10 - $30 and one alphabets is substituted a... Is this: replace each letter of the alphabet check contains an unnecessary branch that does nothing jump to for... Consider one trip: iterate until the null character is found when an egg splatters and the was. In a file called substitution.c in a file called substitution.c in a file called substitution.c in a sequence different... Command line arguments, I would rename it to communicate with his generals the top the! Implement and is an encryption method in which each plaintext is substitution cipher in c programming by letter!$ 30 I 'm in my first year of college in BS Applied Physics string, is. Predating the evolution of computers, and so on ; Playfair cipher you are encouraged... ( substitute K Q! The most commonly used cipher and Playfair cipher uses a 5 by 5 table letters. Is replaced by D, C will be replaced by D, C will be by! Letters, two letters or triplets or letters, two letters or triplets letters! Should appear only once letter with the letter that 's number '' positions ahead of substitution cipher in c programming is! Communicate with his generals in Primaries and Caucuses, shortlisted: plaintext [ t ] - ' a.! ; Projects ; other ; Links ; Saturday, March 24, 2012 a directory called substitution. Continue on that function, privacy policy and cookie policy in any language want. Deleteduplicates, how to help an experienced developer transition from junior to senior developer biggest mistakes you! Are substitution ciphers and one alphabets is substituted by a different alphabet and turns! String length to another variable, instead the “ cipher ” line be. The easiest and simplest ciphers main function didnt need to write a C program decrypt... Brute-Force through, they just clutter the code by clicking “ Post your answer ”, agree... Do n't do exactly what we want them to in my first year of college in Applied!, 26 keys has been permuted to 26 substitution … C Programming & C++ Projects. Running speed for DeleteDuplicates, how to encrypt messages using a substitution cipher it if... Your RSS reader message who will be replaced by B, B become... Type, we will talk about ciphers, substitution cipher in c programming be more specific cipher., 2018 write a program in a directory called substitution is given to us could change the for a. Programs related to substitution cipher in c programming, check the Network label chapter, we will talk about ciphers, to more! To it defend against micro blackhole cannon Techniques that our previous cipher hacking have! Somewhat polyalphabetic substitution … C Programming that encrypts messages using a substitution cipher only alpha characters at you soon... Playfair cipher ; Hill cipher in C++ concept which uses multiple substitution alphabets which each plaintext substituted. Please delete that it has correct length the ciphertext is used to identify elements for encryption and decryption on! 
So assuming substitution cipher in c programming have renamed those variables, let 's consider an alphabetical string, and so.... Letters or triplets or letters, two letters or triplets or letters,.... Of calling strlen multiple times so pig would be 1697 for example with a shift 1! ( 1903 ) the initial ciphers invented by Leon Battista alberti in around 1467 know of more... Assuming we have renamed those variables, let 's continue on that function close-up lens for simple. Them be displayed as the shift cipher only alpha characters, to be more specific substitution cipher somewhat substitution! Contributing an answer to code Review Stack Exchange not just a programing for! 10 26 possible keys great answers CISSP Notes – cryptography many possible keys a single argument! Relatively obsolete ei character in the main function or letters, etc. substitution cipher in c programming! It to validate_args and also name the arguments argc and argv encrypt plain. Length are good as they are substitution ciphers and one alphabets is substituted by a different.... Clearer what this is doing formally retracted Emily Oster 's article ` Hepatitis B and case! A bash script and a newline character order to give it a variety of languages that can! It mean when an egg splatters and the white is greenish-yellow is used identify. Design and implement and is an implementation of polyalphabetic substitution … C Programming & C++ Programming Projects for 10! See our tips on substitution cipher in c programming great answers key value easy to understand and implement and is an encryption in! File called substitution.c in a sequence with different shift values be replaced another! Loop, instead the “ cipher ” line can be any permutation of the easiest and simplest ciphers ' '! 2021 Stack Exchange C will be replaced by f and so on some cipher systems may use slightly more see...
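To make the review notes concrete, here is a minimal sketch of such a program in C. It is not the CS50 staff solution nor any particular poster's code: it reads one line from stdin with plain stdio instead of the CS50 library's get_string, and the fixed buffer size is an arbitrary choice. It does, however, show the histogram-based key validation and the case-preserving encoding described above.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

// Validate the key: exactly 26 characters, letters only, no repeats.
// Uses a histogram (seen[]) to detect repeated letters.
static int validate_args(int argc, char *argv[])
{
  if (argc != 2 || strlen(argv[1]) != 26) {
    return 0;
  }
  int seen[26] = {0};
  for (int i = 0; i < 26; i++) {
    unsigned char c = (unsigned char) argv[1][i];
    if (!isalpha(c)) {
      return 0;
    }
    if (seen[toupper(c) - 'A']++) {  // repeat => not a permutation
      return 0;
    }
  }
  return 1;
}

// Encrypt one line: letters are substituted (preserving case),
// everything else passes through unchanged. One trip through the
// string, stopping at the null character instead of calling strlen.
static void encrypt(const char *key, const char *plain)
{
  for (size_t i = 0; plain[i] != '\0'; i++) {
    unsigned char c = (unsigned char) plain[i];
    if (isupper(c)) {
      putchar(toupper((unsigned char) key[c - 'A']));
    } else if (islower(c)) {
      putchar(tolower((unsigned char) key[c - 'a']));
    } else {
      putchar(c);
    }
  }
  putchar('\n');
}

int main(int argc, char *argv[])
{
  if (!validate_args(argc, argv)) {
    fprintf(stderr, "Usage: %s KEY (26 unique letters)\n", argv[0]);
    return 1;
  }
  char plain[1024];   // arbitrary fixed buffer for the sketch
  if (fgets(plain, sizeof plain, stdin) != NULL) {
    plain[strcspn(plain, "\n")] = '\0';
    encrypt(argv[1], plain);
  }
  return 0;
}

Run it as ./substitution VCHPRZGJNTLSKFBDQWAXEUYMOI (any 26 unique letters will do) and type a message; letters keep their case and non-letters pass through the encryption unchanged.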
2021-03-03 05:23:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23484036326408386, "perplexity": 1594.9838591787122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178365454.63/warc/CC-MAIN-20210303042832-20210303072832-00429.warc.gz"}
https://research.aurelienooms.be/publication/subquadratic-encodings-for-point-configurations/
# Subquadratic Encodings for Point Configurations ### Abstract For many algorithms dealing with sets of points in the plane, the only relevant information carried by the input is the combinatorial configuration of the points: the orientation of each triple of points in the set (clockwise, counterclockwise, or collinear). This information is called the order type of the point set. In the dual, realizable order types and abstract order types are combinatorial analogues of line arrangements and pseudoline arrangements. Too often in the literature we analyze algorithms in the real-RAM model for simplicity, putting aside the fact that computers as we know them cannot handle arbitrary real numbers without some sort of encoding. Encoding an order type by the integer coordinates of a realizing point set is known to yield doubly exponential coordinates in some cases. Other known encodings can achieve quadratic space or fast orientation queries, but not both. In this contribution, we give a compact encoding for abstract order types that allows efficient query of the orientation of any triple: the encoding uses $O(n^2)$ bits and an orientation query takes $O(\log n)$ time in the word-RAM model with word size $w \geq \log n$. This encoding is space-optimal for abstract order types. We show how to shorten the encoding to $O(n^2 {(\log\log n)}^2 / \log n)$ bits for realizable order types, giving the first subquadratic encoding for those order types with fast orientation queries. We further refine our encoding to attain $O(\log n/\log\log n)$ query time at the expense of a negligibly larger space requirement. In the realizable case, we show that all those encodings can be computed efficiently. Finally, we generalize our results to the encoding of point configurations in higher dimension. Publication In 27th Annual Fall Workshop on Computational Geometry (FWCG 2017), 34th European Workshop on Computational Geometry (EuroCG 2018) and 34th Symposium on Computational Geometry (SoCG 2018). Invited to Computational Geometry - Theory and Applications special issues for EuroCG 2018. Invited to Journal of Computational Geometry special issue for SoCG 2018. Best student presentation award at SoCG 2018. To appear in Journal of Computational Geometry
2022-10-06 11:19:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6413792967796326, "perplexity": 729.8112095325706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337803.86/warc/CC-MAIN-20221006092601-20221006122601-00678.warc.gz"}
https://www.madmode.com/2014/learning-to-make-stuff-with-computers-from-cpus-to-haskell-web-apps.html
Dan Connolly's tinkering lab notebook

## Learning to make stuff with computers: from CPUs to Haskell Web Apps

We have multi-core, gigahertz processors on our wrists. Games are developed like Hollywood blockbusters, with hundreds of creative and technical people working together for years. As a new developer, where do you even start?! I have a few gems for you:

CodeWorld, by Chris Smith, was designed to teach math to teenagers. It lets brand new developers, with just a few hours of instruction, build Haskell web apps right in the browser, without the hassle of text editors, compilers, etc. Computer Science degree programs typically start students with Java or the like, but consider x = x + 1 from the perspective of the typical high school algebra student. That's nonsense! Then consider

main = animationOf(design)
design(t) = rotate(slot, 60 * t) & middle & outside
slot = solidRectangle(4, 0.4)
middle = solidCircle(1.2)
outside = circle(2)

versus the mish-mash of concepts and code typical graphics and animation frameworks require. And while CodeWorld looks a bit like a toy, Haskell is not. Haskell will take you a long way in the world of computing.

How The Web Just Happened is an hour-long talk by Tim Berners-Lee, inventor of the Web, explaining how he started by building magnets, and just as he mastered those, transistors became available to hobbyists. And just as he mastered transistors, integrated circuits came along. And so on, until he had a NeXT machine and the Internet at his disposal. My own career followed a similar path, just a few years behind his. I didn't build my own display, but with a Radio Shack Color Computer, I learned the principles of Unix from OS-9, and I built my own printer interface and wrote my own disk driver. Tim and I met in 1991 and worked together building the Web for the next 20 years.

CpuSim, by Dale Skrien at Colby College, lets you really see how CPUs work, with registers and memory and assembly language and machine language. While it's great to know Haskell and other high-level programming languages, it's still important to know what's going on underneath. This one you have to download and install to run, but it took me just a few minutes, and as a Java app, it runs on lots of platforms.

From NAND to Tetris, by Noam Nisan and Shimon Schocken, covers the parts in between: operating systems, compilers, and all that. It's a course of many weeks, and I haven't done it, personally. But if you're willing to spend the time, it lets you walk the path that Tim and I did, even though the giga-scale technology is already rolled out everywhere.
2021-05-08 02:15:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20844221115112305, "perplexity": 2885.7069307410693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988831.77/warc/CC-MAIN-20210508001259-20210508031259-00615.warc.gz"}
https://indico.cern.ch/event/739938/contributions/3402994/
# The 12th International Workshop on the Physics of Excited Nucleons

10-14 June 2019 Bonn, Campus Poppelsdorf Europe/Zurich timezone

## The Discussion of $P_c$ states and the prediction of $J/\psi$ Photo-production

11 Jun 2019, 14:30 30m HS 6

### HS 6

Baryon resonances in experiments with hadron beams and in the e+e- collisions

### Speaker

Jiajun Wu (University of Chinese Academy of Sciences)

### Description

We will provide the theoretical description of $P_c$ states within a coupled-channel model. To provide information for the search for nucleon resonances with hidden charm $P_c$ in the on-going experiments at JLab, we make predictions by including the resonant amplitude of $\gamma p \to N^*_{c\bar{c}} \to J/\psi p$ calculated from all available theoretical models. The background comes mainly from the Pomeron-exchange model of the $\gamma p \to J/\psi p$ reaction. The parameters of the Pomeron-exchange amplitudes are determined by fitting the total cross section data of $\gamma p \to J/\psi p$ up to very high energy, W = 300 GeV. We then demonstrate that the $P_c$ can be most easily identified in the differential cross sections at large angles, where the contribution of the background becomes negligible.

### Primary authors

Jiajun Wu (University of Chinese Academy of Sciences), Prof. T.-S. Harry Lee (Argonne National Laboratory), Bing-Song Zou (Chinese Academy of Sciences)
2020-11-28 00:12:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2946563959121704, "perplexity": 2607.54733746677}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141194634.29/warc/CC-MAIN-20201127221446-20201128011446-00705.warc.gz"}
https://iacr.org/cryptodb/data/author.php?authorkey=10454
## CryptoDB

### Dominique Schröder

#### Publications

2019 PKC. Sanitizable signatures allow designated parties (the sanitizers) to apply arbitrary modifications to some restricted parts of signed messages. A secure scheme should not only be unforgeable, but also protect privacy and hold both the signer and the sanitizer accountable. Two important security properties that are seemingly difficult to achieve simultaneously and efficiently are invisibility and unlinkability. While invisibility ensures that the admissible modifications are hidden from external parties, unlinkability says that sanitized signatures cannot be linked to their sources. Achieving both properties simultaneously is crucial for applications where sensitive personal data is signed with respect to data-dependent admissible modifications. The existence of an efficient construction achieving both properties was recently posed as an open question by Camenisch et al. (PKC'17). In this work, we propose a solution to this problem with a two-step construction. First, we construct (non-accountable) invisible and unlinkable sanitizable signatures from signatures on equivalence classes and other basic primitives. Second, we put forth a generic transformation using verifiable ring signatures to turn any non-accountable sanitizable signature into an accountable one while preserving all other properties. When instantiating in the generic group and random oracle model, the efficiency of our construction is comparable to that of prior constructions, while providing stronger security guarantees.

2019 JOFC. We continue the line of work initiated by Katz (Eurocrypt 2007) on using tamper-proof hardware tokens for universally composable secure computation. As our main result, we show an oblivious-transfer (OT) protocol in which two parties each create and transfer a single, stateless token and can then run an unbounded number of OTs. We also show a more efficient protocol, based only on standard symmetric-key primitives (block ciphers and collision-resistant hash functions), that can be used if a bounded number of OTs suffice. Motivated by this result, we investigate the number of stateless tokens needed for universally composable OT. We prove that our protocol is optimal in this regard for constructions making black-box use of the tokens (in a sense we define). We also show that nonblack-box techniques can be used to obtain a construction using only a single stateless token.

2018 ASIACRYPT. Homomorphic secret sharing (HSS) allows n clients to secret-share data to m servers, who can then homomorphically evaluate public functions over the shares. A natural application is outsourced computation over private data. In this work, we present the first plain-model homomorphic secret sharing scheme that supports the evaluation of polynomials with degree higher than 2. Our construction relies on any degree-k (multi-key) homomorphic encryption scheme and can evaluate degree-$\left( (k+1)m -1 \right)$ polynomials, for any polynomial number of inputs n and any sub-logarithmic (in the security parameter) number of servers m. At the heart of our work is a series of combinatorial arguments on how a polynomial can be split into several low-degree polynomials over the shares of the inputs, which we believe is of independent interest.
2017 ASIACRYPT
2017 JOFC
2016 CRYPTO
2016 PKC
2016 PKC
2016 PKC
2015 EPRINT
2015 EPRINT
2015 EPRINT
2015 EPRINT
2015 EPRINT
2015 CRYPTO
2014 CRYPTO
2014 TCC
2014 ASIACRYPT
2012 TCC
2012 PKC
2011 TCC
2011 CRYPTO

2010 EPRINT. Verifiably encrypted signature schemes (VES) allow a signer to encrypt his or her signature under the public key of a trusted third party, while maintaining public signature verifiability. With our work, we propose two generic constructions based on Merkle authentication trees that do not require non-interactive zero-knowledge proofs (NIZKs) for maintaining verifiability. Both are stateful and secure in the standard model. Furthermore, we extend the specification for VES, bringing it closer to real-world needs. We also argue that statefulness can be a feature in common business scenarios. Our constructions rely on the assumption that CPA (even slightly weaker) secure encryption, "maskable" CMA secure signatures, and collision resistant hash functions exist. "Maskable" means that a signature can be hidden in a verifiable way using a secret masking value. Unmasking the signature is hard without knowing the secret masking value. We show that our constructions can be instantiated with a broad range of efficient signature and encryption schemes, including two lattice-based primitives. Thus, VES schemes can be based on the hardness of worst-case lattice problems, making them secure against subexponential and quantum-computer attacks. Among others, we provide the first efficient pairing-free instantiation in the standard model.

2010 PKC
2010 PKC
2010 EUROCRYPT

2009 EPRINT. In a verifiably encrypted signature scheme, signers encrypt their signature under the public key of a trusted third party and prove that they did so correctly. The security properties are unforgeability and opacity. Unforgeability states that a malicious signer should not be able to forge verifiably encrypted signatures, and opacity prevents extraction from an encrypted signature. This paper proposes two novel fundamental requirements for verifiably encrypted signatures, called extractability and abuse-freeness, and analyzes their effects on the security model of Boneh et al. Extractability ensures that the trusted third party is always able to extract a valid signature from a valid verifiably encrypted signature, and abuse-freeness guarantees that a malicious signer, who cooperates with the trusted party, is not able to forge a verifiably encrypted signature. We further show that both properties are not covered by the model of Boneh et al., introduced at Eurocrypt 2003.

2009 PKC
2009 PKC

#### Program Committees

Crypto 2020
Crypto 2019
Crypto 2016
PKC 2016
Eurocrypt 2015
PKC 2015
PKC 2012
2020-02-25 06:25:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33390530943870544, "perplexity": 1958.1833901358643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146033.50/warc/CC-MAIN-20200225045438-20200225075438-00424.warc.gz"}
https://rebelsky.cs.grinnell.edu/~rebelsky/Courses/CSC207/2019S/01/readings/quicksort.html
# Quicksort

Summary: We consider Quicksort, an interesting divide-and-conquer sorting algorithm.

Prerequisites: Sorting. Merge sort. Loop invariants. Recursion.

## Alternative strategies for dividing lists

As you may recall, the two key ideas in merge sort are: (1) use the technique known as divide and conquer to divide the list into two halves (and then sort the two halves); (2) merge the halves back together.

Are there better sorting algorithms than merge sort? If our primary activity is to compare values, we cannot do better than some constant times n log2 n steps in the sorting algorithm. However, that hasn't stopped computer scientists from exploring alternatives to merge sort. One reason to look for better versions is that merge sort is an "out of place" sorting algorithm - you need to create new arrays in order to do the merge. (The obvious merge algorithm requires another array of the same size as the original. Some clever techniques allow you to get by with another array of half the size of the original.) Another reason to look at alternatives is actual, rather than theoretical, speed. In practice, the constant multiplier hidden by big-O notation makes a big difference. And so we might want to reduce that multiplier.

One way to develop an alternative to merge sort is to split the values in the list in a more deliberate way. For example, instead of splitting into "about half the elements" and "the remaining elements", we might choose to divide into "the smaller elements" and "the larger elements".

Why would this strategy be better? Well, if we know that every small element precedes every large element, then we can significantly simplify the merge step. For lists, we can just append the two sorted lists together. For arrays, we can sort in place by rearranging the array so that small elements have small indices and large elements have large indices, and then sort the two halves.

// Sort the whole array, using order to compare elements
Algorithm: sort(A, order)
  sort(A, 0, A.length, order)

// Sort elements [lb..ub) of A using order to compare elements
Algorithm: sort(A, lb, ub, order)
  if ub-lb <= 1
    // Do nothing! Subarrays of length 1 or 0 are sorted
  otherwise
    Rearrange the elements so that we achieve the criterion
    that all elements in indices less than mid are small and
    all elements in indices greater than mid are large.
    In more formal notation:
      For all lb <= i < mid < j < ub,
        A[i] <= A[mid] < A[j]
    sort(A, lb, mid)
    sort(A, mid+1, ub)

How do we identify the smaller and larger elements? How do we identify the midpoint? Ideally, we would identify the median value of the subarray, put that at mid, and rearrange so that values less than the median appear to the left of mid and values greater than the median appear to the right.

// Sort a list
Algorithm: sort(L, order)
  if the length of L <= 1
    return L
  otherwise
    let m be the median value of the list
    append(sort(elementsSmallerThan(L, m, order)),
           sort(elementsEqualTo(L, m, order)),
           sort(elementsGreaterThan(L, m, order)))

## Identifying small and large elements

It sounds like a great idea, doesn't it? Instead of split and merge, we can sort by identifying the median and reorganizing the values into small and large elements. Unfortunately, the typical way that people identify the median of a collection of values is to sort the values and look in the middle. That doesn't work so well if we're identifying the median in order to sort. So we need another approach. So, what do we do? A computer scientist named C. A. R.
Hoare had an interesting suggestion: Randomly pick some element of the list and use that as a simulated median. That is, anything smaller than that element is "small" and anything larger than that element is "large". Because it's not the median, we need another name for that element. Traditionally, we call it the pivot.

Is using a randomly-selected pivot a good strategy? You need more probability and statistics than most of us know to prove formally that it works well. However, practice suggests that it works very well, indeed. (It works a bit better if you randomly pick three elements and let the median of those three elements be the pivot.)

## Partitioning

We know how to find a pivot. For the list-based version, it's pretty easy to find the smaller and larger elements: We just iterate through the list, grabbing the elements that meet the appropriate criterion.

List small;
List medians;
List large;
for (v : L) {
  int o = order.compare(v, p);
  if (o < 0) {
    small.append(v);
  } else if (o == 0) {
    medians.append(v);
  } else {
    large.append(v);
  } // if/else
} // for (v : L)

What about for the array-based version? Hmm … this seems like something closely related to the Dutch National Flag problem, doesn't it? And so we can use a similar approach. The only difference is that we really only need two sections, rather than three. The typical implementation leaves the pivot in element 0 while rearranging, and swaps it into the correct place only after all the elements have been processed. (Some folks, such as PM, put that pivot at the end, rather than the front.) Visually, the invariant looks like the following:

+--+-----------------+--------------------+----------------+
| p| values <= pivot | unprocessed values | values > pivot |
+--+-----------------+--------------------+----------------+
|  |                 |                    |                |
lb lb+1              small                large            ub

Here's the state at the end of the loop.

+--+-----------------+----------------+
| p| values <= pivot | values > pivot |
+--+-----------------+----------------+
|  |                 |                |
lb lb+1              small,large      ub

We can then swap the pivot into the end of the small section and achieve our goal.

## Analysis

As noted above, the formal analysis of Quicksort is beyond the scope of this course. However, if you believe the claim that "the randomly selected pivot usually divides the array relatively evenly", then we can use the same analysis that we used for merge sort. And so the algorithm is O(n log2 n).

Of course, if we choose our pivots badly, then Quicksort devolves to an O(n^2) algorithm, since each partition is O(n), and a badly chosen pivot means that the recursive call is on an array of size n-1. Note that Quicksort devolves to this behavior if you use the first element of the subarray as a pivot and the original array is sorted (or reverse sorted).

## Key ideas

As we hope you noted, there are two key ideas in the design of Quicksort. First, as we learned in designing merge sort, using divide and conquer helps us achieve a faster sorting algorithm. Quicksort adds the new idea that we can sometimes leverage randomness to achieve our goals. This exploration of Quicksort may have also reemphasized some other more general ideas. For example, you might have noted that loop invariants can help us design parts of our algorithm or that the algorithms we write for lists and arrays are likely to be different. You may have also noted some utility for higher-order procedures here, something that Java currently lacks.

## Partitioning

Here's an example of partitioning in action.
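First, a compact sketch of this scheme in C: the two-boundary partition plus the recursive sort. This is a minimal illustration rather than the course's reference code; it sorts a plain int array with <= as the order instead of taking a comparator, it uses the element at lb directly as the pivot rather than picking one at random (which, as noted above, degrades on sorted input), and the swap helper is our own.

#include <stdio.h>

// Swap two elements of the array (helper for the sketch).
static void swap(int a[], int i, int j)
{
  int tmp = a[i];
  a[i] = a[j];
  a[j] = tmp;
}

// Partition a[lb..ub) using a[lb] as the pivot.
// Loop invariant: a[lb+1..small) <= pivot, a[large..ub) > pivot,
// and a[small..large) is still unprocessed.
// Returns the pivot's final index.
static int partition(int a[], int lb, int ub)
{
  int pivot = a[lb];
  int small = lb + 1;
  int large = ub;
  while (small < large) {
    if (a[small] <= pivot) {
      small++;                     // extend the small section
    } else {
      swap(a, small, --large);     // move a large element to the end
    }
  }
  swap(a, lb, small - 1);          // pivot goes just left of the boundary
  return small - 1;
}

// Sort a[lb..ub) by partitioning and recursing, as in the pseudocode above.
static void quicksort(int a[], int lb, int ub)
{
  if (ub - lb <= 1) {
    return;                        // subarrays of length 0 or 1 are sorted
  }
  int mid = partition(a, lb, ub);
  quicksort(a, lb, mid);
  quicksort(a, mid + 1, ub);
}

int main(void)
{
  int a[] = {5, 2, 9, 1, 5, 6};
  quicksort(a, 0, 6);
  for (int i = 0; i < 6; i++) {
    printf("%d ", a[i]);           // prints: 1 2 5 5 6 9
  }
  printf("\n");
  return 0;
}

The trace below follows the same small and large boundaries (sm and lg) through a concrete array of characters.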
Suppose we start with the following array of length 12.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | l | p | h | a | b | e | t | i | c | a | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|                                               |
lb                                              ub

We pick a random pivot. Let's say that it's "h", which is at position 3. We swap the pivot to the start of the array so that we always know where it is.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| h | l | p | a | a | b | e | t | i | c | a | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|   |                                           |
lb  sm                                          ub,lg

The first unprocessed element is vals[1], or "l", which is large. So we swap it to the end of the array, and update our indication of where the large elements are.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| h | l | p | a | a | b | e | t | i | c | a | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|   |                                       |   |
lb  sm                                      lg  ub

Cleverly, we swapped an "l" with an "l", so it's not necessarily obvious what happened. Nonetheless, we move forward. The next unprocessed element is vals[1], an "l", which is large. So we swap it to the end of the unprocessed elements, and update our indication of where the large elements are.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| h | a | p | a | a | b | e | t | i | c | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|   |                                   |       |
lb  sm                                  lg      ub

The next unprocessed element is still vals[1], or "a". This time it's small, so we advance our upper boundary on small elements.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| h | a | p | a | a | b | e | t | i | c | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|       |                               |       |
lb      sm                              lg      ub

The next unprocessed element is vals[2], or "p". It's large, so we swap it to the end of the unprocessed section.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| h | a | c | a | a | b | e | t | i | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|       |                           |           |
lb      sm                          lg          ub

The next unprocessed element is small. We advance our small boundary.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| h | a | c | a | a | b | e | t | i | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|           |                       |           |
lb          sm                      lg          ub

The next unprocessed element is small. We advance our small boundary.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| h | a | c | a | a | b | e | t | i | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|               |                   |           |
lb              sm                  lg          ub

The next few unprocessed elements are small. We advance our small boundary.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| h | a | c | a | a | b | e | t | i | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|                           |       |           |
lb                          sm      lg          ub

We've encountered another large element. We swap.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| h | a | c | a | a | b | e | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|                           |   |               |
lb                          sm  lg              ub

We're left with one unprocessed element. It's large. So we swap it with itself and decrease the large boundary.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| h | a | c | a | a | b | e | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|                           |                   |
lb                          s,l                 ub

Okay, we're finished rearranging the values. Now we want to put the pivot in the middle. So we swap it just to the left of the boundary we created. (Why to the left? We'll leave that as something for you to think about.)
0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| e | a | c | a | a | b | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
                         ***

We can now recurse on the two halves.

You may have observed a few places in which we could have made our partition algorithm a bit more efficient. And you should probably make those improvements - we chose a simple partitioning algorithm for clarity and to help ensure correctness. Of course, if you do change the algorithm, you should make sure to analyze its correctness and to make sure you preserve the loop invariants.

## Continuing the example

Let's continue the example above. We started with the array

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | l | p | h | a | b | e | t | i | c | a | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|                                               |
lb                                              ub

After partitioning, we ended up with the following. Note that the *** means "in the correct place".

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| e | a | c | a | a | b | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|                        ***                    |
lb                                              ub

What happens next? We recurse on the left half. (And we remember that we have to recurse on the range 7-12.)

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| e | a | c | a | a | b | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|                       |***
lb                      ub

Suppose we pick "c" as the pivot. After partitioning, we end up with the following.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | a | b | a | c | e | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|                ***    |***
lb                      ub

Once again, we recurse on the left half. (We also remember that we have to process 5-6 when we're done, as well as the 7-12 from before.)

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | a | b | a | c | e | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|               |***     ***
lb              ub

Suppose we pick one of the "a"'s as a pivot. We partition.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | a | a | b | c | e | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|        ***    |***     ***
lb              ub

And we recurse once again. We also remember that we have to deal with 3-4 (and 5-6 and 7-12 from before).

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | a | a | b | c | e | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|       |***     ***     ***
lb      ub

We pick one of the "a"'s as a pivot. (Yes, you've probably noted a potential improvement already.) And we partition.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | a | a | b | c | e | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|    ***|***     ***     ***
lb      ub

We recurse on the left half. We also remember that we have to recurse on the right half, 2-2, as well as 3-4, 5-6, and 7-12.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | a | a | b | c | e | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|   |*** ***     ***     ***
lb  ub

It's a singleton element. We know it's sorted.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | a | a | b | c | e | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
|***|*** ***     ***     ***
lb  ub

The most recent recursion left undone is 2-2. After that, we'll do 3-4, 5-6, and 7-12.
0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | a | a | b | c | e | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
 *** ***|***     ***     ***
        lb,ub

That's an empty subarray, so we're done with that subarray. Now we do the subarray with indices 3-4.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | a | a | b | c | e | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
 *** *** ***|   |***     ***
            lb  ub

Another singleton array. Done.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | a | a | b | c | e | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
 *** *** *** *** ***     ***

5-6 is equally trivial, so we won't even show it. We're now left with 7-12.

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | a | a | b | c | e | h | i | t | p | l | l |
+---+---+---+---+---+---+---+---+---+---+---+---+
 *** *** *** *** *** *** ***|                   |
                            lb                  ub

We pick a pivot and partition. Let's say we pick "p".

0   1   2   3   4   5   6   7   8   9   10  11  12
+---+---+---+---+---+---+---+---+---+---+---+---+
| a | a | a | b | c | e | h | l | l | i | p | t |
+---+---+---+---+---+---+---+---+---+---+---+---+
 *** *** *** *** *** *** ***|            ***    |
                            lb                  ub

And you can probably figure out the rest of the story.

## Citations

The first few sections of this reading are based closely on a reading from CSC 151. My sense is that I'm the original author of that reading, since it seems to follow my normal style (and since I don't see Quicksort in the earlier versions of 151). However, I am equally confident that Janet Davis and Jerod Weinman (and maybe Rhys Price Jones) helped improve that original reading. I wrote the rest of the reading and updated the early sections for some offering of CSC 207.

Quicksort was developed (discovered?) by C.A.R. Hoare. There seem to be at least two early articles by Hoare on Quicksort.

C. A. R. Hoare. 1961. Algorithm 64: Quicksort. Commun. ACM 4, 7 (July 1961), 321-. DOI=10.1145/366622.366644 http://doi.acm.org/10.1145/366622.366644

C. A. R. Hoare. 1962. Quicksort. Comput. J. 5, 1, 10–16. doi:10.1093/comjnl/5.1.10 http://comjnl.oxfordjournals.org/content/5/1/10.full.pdf
2020-10-30 10:54:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5740498900413513, "perplexity": 2536.56786571352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107910204.90/warc/CC-MAIN-20201030093118-20201030123118-00465.warc.gz"}
https://jbuckland.com/2016/10/15/shovel.html
2016-10-15

Optimizing Student Loan Payments

GitHub: http://github.com/ambuc/shovel

This is a python script for calculating an optimal student loan repayment schedule. It is called like so:

$ python shovel.py
+-----------------+---------+---------+---------+--------+---------+
|                 | Loan01  | Loan02  | Loan03  | Loan04 | Total   |
+-----------------+---------+---------+---------+--------+---------+
|                 |         |         |         |        |         |
| 2017 Principal  | 1000.00 | 500.00  | 100.00  | 100.00 | 1700.00 |
| Interest        | +50.00  | +75.00  | +40.00  | +5.00  | 170.00  |
| Monthly Payment | -24.00  | -36.00  | -11.00  | -2.00  | 73.00   |
| Annual Payment  | -294.00 | -441.00 | -140.00 | -29.00 | 904.00  |
|                 |         |         |         |        |         |
| 2018 Principal  | 756.00  | 134.00  | 0.00    | 76.00  | 966.00  |
| Interest        | +37.80  | +20.10  | +0.00   | +3.80  | 61.70   |
| Monthly Payment | -51.00  | -12.00  | -0.00   | -5.00  | 68.00   |
| Annual Payment  | -612.00 | -154.00 | -0.00   | -61.00 | 827.00  |
|                 |         |         |         |        |         |
| 2019 Principal  | 181.80  | 0.00    | 0.00    | 18.80  | 200.60  |
| Interest        | +9.09   | +0.00   | +0.00   | +0.94  | 10.03   |
| Monthly Payment | -15.00  | -0.00   | -0.00   | -1.00  | 16.00   |
| Annual Payment  | -190.00 | -0.00   | -0.00   | -19.00 | 209.00  |
+-----------------+---------+---------+---------+--------+---------+

The shovel.py script reads a configuration file, config.yaml.

period: Yearly #recalculation frequency. "Yearly" or "Monthly".
# If your period is monthly,
# - your starting payment should be in terms of how much per month you can afford, and
# - your growth should be in terms of how much salary growth you expect each month.
startingPayment: 1000 #how much to begin paying down initially per period
growth: 0 #by what percent to increase the payment each period.
rounding: True #whether or not to round loan payments to the nearest dollar
startingYear: 2017 #the year repayment started.
startingMonth: 1 #the month repayment started.
loans:
- name: Loan01 #use unique identifiers please :)
  prin: 1000 #principal amount of loan
  rate: 5.00 #rate on loan
- name: Loan02
  prin: 500
  rate: 15.00
- name: Loan03
  prin: 100
  rate: 40.00
- name: Loan04
  prin: 100
  rate: 5.00

# Notes:

• The engine expects a payment you can afford per-month (startingPayment), and distributes it across the loans, weighted by the potential interest from each loan.
• There is an option (rounding) to round each monthly payment to the nearest dollar, for easier check-writing. True by default.
• There is an option (growth) to inflate each month's or year's payment by a constant rate, to track with expected salary growth over the next n years. Zero by default.

# Bugs:

• Values for startingPayment smaller than (???) cause the script to never finish. This reflects the possibility that a desired repayment schedule never converges; that is, if the amount you pay each month is smaller than the amount of interest accumulated, you'll never pay off the loans. This could be caught in some user-friendly way.

# Todo:

• Support for staggered loan start dates
• Support for maximum loan lifetime
• Better end-of-life handling / sub-dollar handling

The source can be found on Github.
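The Notes above say each period's payment is distributed across the loans "weighted by the potential interest from each loan". As a rough illustration of that rule, here is a sketch in C; the function name and the flat one-period model are invented for this example, and it is not a port of shovel.py. The real script evidently also caps a loan's share at its remaining balance and rounds to whole dollars: raw weights on the config above give roughly 294 / 441 / 235 / 29 for a 1000 payment, while the 2017 row of the sample table shows 294 / 441 / 140 / 29, with Loan03 capped at its 140.00 balance.

#include <stdio.h>

// Split `payment` across n loans in proportion to each loan's
// potential interest for the period (rate * principal).
// Rates are fractions here (0.05), not percents as in config.yaml.
// This illustrates the weighting idea only; shovel.py may differ
// in details such as rounding and capping at the remaining balance.
static void split_payment(int n, const double prin[], const double rate[],
                          double payment, double out[])
{
  double total = 0.0;
  for (int i = 0; i < n; i++) {
    total += rate[i] * prin[i];
  }
  for (int i = 0; i < n; i++) {
    out[i] = (total > 0.0) ? payment * rate[i] * prin[i] / total : 0.0;
  }
}

int main(void)
{
  // The four loans from config.yaml above.
  double prin[] = {1000.0, 500.0, 100.0, 100.0};
  double rate[] = {0.05, 0.15, 0.40, 0.05};
  double out[4];
  split_payment(4, prin, rate, 1000.0, out);
  for (int i = 0; i < 4; i++) {
    printf("Loan%02d gets %.2f\n", i + 1, out[i]); // 294.12 441.18 235.29 29.41
  }
  return 0;
}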
2022-05-23 04:29:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21241813898086548, "perplexity": 7298.474938578787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662555558.23/warc/CC-MAIN-20220523041156-20220523071156-00139.warc.gz"}
https://www.esaral.com/q/write-the-set-of-values-of-x-satisfying-the-inequation-85239
# Write the set of values of x satisfying the inequation

Question:

Write the set of values of x satisfying the inequation (x^2 − 2x + 1)(x − 4) < 0.

Solution:

We have,

$\left(x^{2}-2 x+1\right)(x-4)<0$

$\Rightarrow(x-1)^{2}(x-4)<0$

Equating each factor to zero, we obtain $x=1$ and $x=4$. Therefore, 1 and 4 are the critical points.

Since $(x-1)^{2} \geq 0$ for every $x$, with equality only at $x=1$, the product is negative exactly when $x-4<0$ and $x \neq 1$.

Therefore, the solution set of the given inequality is $x \in(-\infty, 1) \cup(1,4)$
2023-03-28 03:17:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.962065577507019, "perplexity": 877.1832127281828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948756.99/warc/CC-MAIN-20230328011555-20230328041555-00145.warc.gz"}
http://physics.stackexchange.com/questions?pagesize=15&sort=active
# All Questions

### Do photons interact with everything?
Suppose you shoot a beam of photons in a particle collider. Are there any particles with which the photons do not interact? Q2: What is an interaction between particles?

### Wigner Function for Thermal State
I am currently doing a reading on the subject of Quantum Optics from the book "Quantum Optics" by Marlan O. Scully and M. Suhail Zubairy; I am currently learning about quasi-probability distributions. ...

### How to calculate the possible spin of a two-photon system?
A photon has no well-defined quantity such as spin. Instead, it is characterized by helicity $h$. Let's assume a state of two photons in the CM frame (with $\mathbf k$ being the momentum of one of ...

### How is the energy efficiency in bicycles accounted for?
Bikes are more efficient than walking, but that's only because roads are flat so wheels make sense, right? Does it make sense to say that this fact is simply gained by putting a lot of energy into ...

### Why does the parallel component of the velocity gradient affect viscous force in a fluid?
The force density due to viscosity in an incompressible fluid is $-\mu \nabla^2 \mathbf{u}$ where $\mathbf{u}$ is the velocity and $\mu$ is the dynamic viscosity. Let's suppose some particular small ...

### Electrodynamics
I am having some trouble with this question. Why is there such a discrepancy? Is my reasoning correct? (This is my first time on such forums, so kindly inform me if I am posting it at the wrong ...

### Do gravitational waves propagate backwards in time?
Gravitational waves are spacetime waves, which stretch and squeeze both space and time. Since relativity puts space and time (almost) on an equal footing, it seems to me that since gravitational waves ...

### Physical meaning of enthalpy
I've been reading about thermodynamics and reached the topic of enthalpy. I've understood its derivation but I don't understand its physical meaning ... Also I don't understand why they have ...

### Monte Carlo, Non-polar optical phonon scattering?
I have a question on electron and non-polar optical phonon scattering in GaAs. Is it allowed to consider intravalley electron non-polar optical phonon scattering in the L-valley of GaAs? I found in the ...

### Astrophysics, specifically universe formation
It is generally said that nature is symmetric. For example, if light behaves as both particles and waves, then matter must also do so, which is true. But if we look at the universe, we can see that there is far more ...

### Can the physical properties of the EM field be described directly from the 4-gauge potential?
I'm trying to make an argument that classically, the EM field is considered a more 'real' physical quantity than the potentials, and am tempted to say that the fact that the field carries energy & ...

### How can a battery charge up another battery to a higher percentage?
Say I have my phone on 5% and a large battery pack on 35% and I charge the phone. By the end the phone is on 100% and the pack is on 12%. How can the battery pack charge the phone up to a higher ...

### Why is the speed of light constant with respect to any inertial frame of reference? [duplicate]
It's getting too difficult to think. If we travel at the speed of light, will light cross us at the same speed or travel adjacent to us? (special theory of relativity)

### Complete local thermal equilibrium and conserved quantum numbers
I am reading Weinberg's Cosmology book and I am confused by the statement "complete local thermal equilibrium in which all conserved quantum numbers vanish". My question is: How is complete local ...

### Curvature of Hilbert space
That may appear to be a dumb question, but: Does Hilbert space have curvature, or is it a flat space? How and why?

### What is the rough motion of an electron in an atom?
If the uncertainty principle is true, then how does an electron move? If the motion cannot be random either, then how does it occur?

### Can astronomers observe neutron stars optically?
Are there any neutron stars near enough for astronomers to observe them optically? If not, then how close are we to having the technology to do so?

### Observation and deduction about a stick
Given a horizontal stick AB and a string, of course a stick that is hung on the string at its center of mass is in equilibrium. This is a fact that we take as a rule because we can observe it, right? I ...

### Calculating rate of vaporization of water
Let there be a cylindrical vessel with diameter D that only has liquid water and water vapor, as the picture below shows (the water vapor is the grey portion and the liquid water is the blue portion). ...

### Derivations of Newton's laws?
I feel convinced that the mathematics behind Newton's laws can be derived from Noether's symmetry theorems. The fact that displacement s can be described by a Cartesian coordinate system with a ...

### Differences between thermal and non-thermal plasmas
I have a doubt about plasmas which may as well be trivial or very stupid, but I couldn't get a clear and straightforward answer anywhere I looked, and I can't get the grasp of it since I wasn't given ...

### Total dilation of a signal from Tau Sagittarii [on hold]
Please forgive my ignorance, but I'm attempting to figure out how to do this... I want to learn how to evaluate the total dilation of a signal (waveform) travelling from Tau Sagittarii to us. This assumes ...

### How can I detect cosmic muons from background?
Is there anything I should be careful about if I would like to detect muons at sea level? Noise background? Threshold? Even signals from other particles with stronger flux rates?

### Inflaton Decay Rate
After inflation, (p)reheating is supposed to be the mechanism responsible for restoring the hot Big Bang. The resulting decay rate of inflatons into bosons and fermions is always mentioned without ...

### Is the Biot-Savart Law valid for time-varying currents, unlike Ampere's law?
I have just finished learning the basics of magnetism, and it should be noted that I am not very familiar with Maxwell's equations. Note: In the question, when I say "Ampere's Law", I am referring ...

### Prove that an electron in a hydrogen atom doesn't emit radiation [duplicate]
According to electrodynamics, accelerating charged particles emit electromagnetic radiation. I'm asking myself if the electron in a hydrogen atom emits such radiation. In How can one describe ...

### Beats: frequency of resulting wave vs. beat frequency
The beat frequency heard from the interference of two sound waves with frequencies $f_1$ and $f_2$ is $$\nu=|f_1-f_2|$$ Nevertheless, the frequency of the resulting wave is not $\nu$ but the mean ...

### Could Earth being multipolar 600 million years ago have caused the Cambrian radiation?
According to some of the latest research, Earth might have been multipolar instead of bipolar 650 million years ago (http://phys.org/news/2016-06-earth-ancient-magnetic-field.html). Do you think ...

### How are coherent astronomical objects imaged?
I am studying astronomical imaging, and am curious about how to image astronomical objects which are coherent. Stellar interferometry measures the mutual coherence function of a star, and then uses ...

### Quantum Hall Effect and Edge States
In the quantum Hall effect we measure the Hall conductance (in the transverse direction), which is quantized. My question: how do they take care of the edge states that are on the longitudinal side?

### Charged pendulum oscillations
Recently, a question was asked in our school regarding the calculation of the time period of a charged pendulum bob suspended in an electric field with the help of an insulating thread (gravity also ...

### What enclosed geometry amplifies sound the most?
I am going to build a record player. It will read sound electronically but I also want it to be able to project sound mechanically, like a classic record player. So if sound enters a tube, can you ...

### Clarifications on Ampere's Law [on hold]
I have just learnt Ampere's Law, useful for calculating the magnetic field in situations having a high degree of symmetry. However, I have some conceptual doubts regarding it: Before I begin, I would ...

### Can two different objects or systems of molecules have different temperatures, but the same internal kinetic energy?
If I take an extreme case, where a body has only internal potential energy with zero internal kinetic energy, does this body have a temperature? Another question related to it: if two objects A and ...

### Decomposition of the Time-Evolution Operator: Translationally Invariant MPO
Hello everyone, Sudipto here. Currently I'm learning the matrix product state technique in order to simulate 1D spin systems and study different properties of the system from quantum information ...
2016-06-28 18:46:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.855842649936676, "perplexity": 756.1763331505739}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00117-ip-10-164-35-72.ec2.internal.warc.gz"}
https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.103.217201
Synopsis: When magnetism unchains a break junction

First-principles calculations explore how magnetic interactions impede the formation of atomically thin wires.

A monatomic chain of atoms left suspended across the tips of a broken wire (a break junction) is potentially a model one-dimensional system. These atomically thin wires are important in the study of fundamental magnetism and could eventually play a role in technological applications in spintronics and quantum computing. There are, for example, theoretical predictions that chains of magnetic transition metal atoms will be more magnetic than their bulk forms. Although it has been possible to make long monatomic chains of selected nonmagnetic transition-metal elements and magnetic transition-metal chains on a surface, creating suspended chains of magnetic transition metals across a break junction has proven difficult. To find out why, Alexander Thiess, Yuriy Mokrousov, and Stefan Blügel at Forschungszentrum Jülich and Stefan Heinze at Christian-Albrechts-Universität zu Kiel, both in Germany, report in Physical Review Letters first-principles calculations on the process of how a monatomic chain forms from a break junction. They show that the presence of a local magnetic moment suppresses chain formation in $3d$, $4d$, and $5d$ elements because it effectively lowers the hardness of the chain. This explains why gold, silver, iridium, and platinum—all nonmagnetic elements in bulk—can form long chains and why similar efforts to make iron strands only yielded shorter nanocontacts. – Daniel Ucko
2017-05-25 06:52:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4239351451396942, "perplexity": 4276.6171729659045}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608004.38/warc/CC-MAIN-20170525063740-20170525083740-00016.warc.gz"}
http://sl-inworld.com/sequential-numbering/sequential-numbering-in-a-filtered-list.html
Angie, I have tried your method; however, I encountered what I surmised was a limit on the amount of characters allowed in that “Enter formatting for number” box: for instance, I can only type as far as “Request for Productio” (excluding quotations) – additional text and codes delete and default back to the incomplete phrase “Request for Productio”. It seems to have a limit of 26 characters – anything exceeding that is deleted. The phrase “Request for Admission” completes, but I am unable to include anything after that phrase of 26 characters. Not sure why that is, but that’s what I run into when I attempt using the multi-list option.

There is a very simple solution that we use, and that is to lay out the sheet, say, 6-up on an A4 sheet as a master page and, in document setup, set the number of pages to 1,000 if that is the amount you require. Put a page number on each ticket on the page and, although they will all have the same number on each page, we put the first two letters of the customer's business name before each number followed by the letters of the alphabet, so it then reads for example BT1A, BT2A, BT3A, BT1B, BT2B, BT2C and so on as each page is printed.

Thanks for the head start on this; it got me part of the way through my problem, but I found that when I had 3 figures in a row then a map, the next figure would jump back to #.1 again. Because I had figures, maps and tables that needed to be numbered, I used the ‘levels’ to differentiate between them as you suggested, but found that if you create a new number list for each entry, i.e. a number list for maps, a number list for tables, etc., then they don’t conflict. Thanks for the start-off though; nowhere else pointed it out as clearly as this. Cheers

I created something like this for a demo and didn't have to use a script. If I remember correctly, I leveraged Workflow. Essentially, the form submission would trigger a very simple workflow that would look at the current counter value on a database table and increment it by 1. The next form would then start with a lookup that would grab that counter and put it into a read-only (or hidden) field. Rinse and repeat.

I have a Word document with a table of 6 exact cells on a full-page table. In those cell areas I have been printing tickets with a list and a mail merge and updating labels. I call to an Excel list of 1–2000 and then I generate all the pages through the Finish and Merge option. This all works perfectly. I get 2000 individually numbered tickets to print... however... I then have six tickets printed on a page of paper with ticket numbers 1, 2, 3, 4, 5, 6; then the next page has 7, 8, 9, 10, 11, 12. This is fine, but I then have to cut and stack these tickets in groups of six, and at that point none of the numbering is sequential. The tickets are basically random. (A numbering scheme that fixes this cut-and-stack problem is sketched at the end of this page.)

Before you complete the merge, preview the merge results to make sure that the tracking numbers will display as you want them to in your publications. You can preview the merge in two ways: while you are refining the layout, to review the layout of the individual coupon or gift certificate; or when you are getting ready to print, to preview the arrangement of coupons or gift certificates on the printed sheet.

I’m not sure which version of InDesign first introduced printing Thumbnails like this, but even if yours doesn’t support that, your printer driver may have a similar feature of its own.
Check the printer’s own dialog box by clicking “Setup…” near the bottom left corner of the Print dialog and dismissing the warning, then clicking “Preferences…” in Windows’s Print dialog that comes up (I’m not sure how to access this on Mac OS X, but I’m pretty sure there’s an easy way). For instance, on many HP printers, the feature you want is called “Pages per sheet” and has a drop-down offering 1, 2, 4, 9, or 16 pages per sheet.

I have a problem with Outlook 2007 and the add-in Access Outlook Add-in for Data Collection and Publishing. This add-in worked when I first installed Outlook 2007 when installing Office 2007 Enterprise. The add-in created a sub-folder in my Inbox named Data Collection Replies and worked well until about 6 weeks ago. Now I can’t get the add-in to work at all, even though it appears in the list of COM add-ins in Outlook 2007. More perplexing is the error message I now receive EVERY time I click on any email message to read it. The message is titled ‘Custom UI Runtime Error in...

I have Office 2003 installed on Windows 2003 terminal servers, but I have a problem with one of our users and Excel. The issue is that when the user is trying to have multiple spreadsheets opened, he keeps getting the message "This operations has been cancelled due to system restrictions- Contact your system Administrator". I can find no setting in Excel to enforce a limit on the number of spreadsheets a user can have open. Thanks in advance for any advice. Michael. Not an expert in these matters, but is it possible that the user is trying to initiate another session of Excel vs....

Note this works only because we create a brand new table, add an autonumber column to it as well as any other columns we need, then insert records into it. It’ll be contiguous – as long as we don’t delete any records from the table. Unfortunately, creating a brand new table every time we run this will result in bloat of the Access file – if you can do it in a separate Access file, all the better, so that you can compact it when you need to.

One option, of course, is to print the individual copies of the document, making the edits to the copy number between each print. This gets tedious, real fast. You may also want to utilize a sequential numbering field (as discussed in other WordTips) and make the number of copies equal to what you need to print. Thus, if you have to print 25 copies, you could simply copy the entire document (including the sequential numbering field), move to the end of the document, and paste it in another 24 times. This makes for a rather large overall document, however, and there are easier ways to approach the problem.

So I spent some time trying to figure it out, playing with Normal.dotm and the various styles (List Paragraph, List Number, List Bullet, etc.). And finally, when I've got Normal.dotm open (i.e. I'm editing that template file), I get my result: I apply a standard numbered list, and it comes up flush left (i.e. not indented) and hanging at 1.0cm (cos I don't use inches...) and with a tab stop applied at 1.0cm as well - funky stuff!

So, if you wanted to use this idea in a form or datasheet, let me stop and first remind you – if this is going to be non-updatable, you can just embed a report as a subreport within a form and thus use Running Sum. But let's discuss the case where you need to be able to edit data in the forms, even with sequential numbering generated for the data you are viewing. This means we need to be able to tie a sequential number to a specific row.

I would like to create a custom application that has the ability to maintain the items that are on certain Task Pads within POS 2009. These items would all be regular menu items (ex: hamburger, hot dog, french fries, etc.) and not functions. For example, if a task pad was supposed to allow the cashier quick access to daily specials, then this custom application would need to be able to clear the task pad each day and add the items for that day. Is this going to be possible? Please provide some guidance. Thank you, Sean

Yes, I’m a little confused by your brief too. The script that I describe here will create an array of numbers using any step value that it offers, including by 1 each time. Whether you put the resulting list directly into InDesign as text, or indirectly using the Data Merge feature, is up to you. Perhaps learn more about the Data Merge feature of InDesign itself – David Blatner has a great series on Lynda.com that will explain Data Merge much more than I can on this thread.

A sequence is said to be monotonically increasing if each term is greater than or equal to the one before it. For example, the sequence $(a_n)_{n=1}^{\infty}$ is monotonically increasing if and only if $a_{n+1} \geq a_n$ for all $n \in \mathbb{N}$. If each consecutive term is strictly greater than (>) the previous term, then the sequence is called strictly monotonically increasing. A sequence is monotonically decreasing if each consecutive term is less than or equal to the previous one, and strictly monotonically decreasing if each is strictly less than the previous. If a sequence is either increasing or decreasing it is called a monotone sequence. This is a special case of the more general notion of a monotonic function.

I'm producing gift certificates for a restaurant and they need to be numbered sequentially from 0001 to 0250. Is there any way to do this easily, as opposed to numbering each manually? I'm sure I could probably work it out with a print shop, but the job was thrust on me last minute and my options are limited by the short turnaround time. Any help would be appreciated. Thanks!...

I know that PrintShopMail will do it, but I was wondering if there was a less expensive solution out there so that I could get numbered tickets (usually 4-up) right off the Xerox. I just want to avoid having to go to the Windmill after trimming and doing it the old-fashioned way. There is a tiny little copy shop here in town that is doing it, and I am willing to bet that they are not using PrintShopMail, but I'm also not going to ask them to share their methods with a competitor. There has to be a cheaper solution. I know that I can do it with auto page numbering in InDesign, but that means I can only print raffle tickets 1-up, which won't work.

First, you have to use YOUR field and control names. The ones I use are samples.
The error you are getting indicates you do not have a control named txtProject, so you have to substitute the correct name of the control bound to the ProjectID field. By the way, it is not a good idea to use the octothorpe (#) as part of a field name. A control name may be different from a field name. The name property of a control is on the Other tab in the Properties dialog.

Yes, I have used this system in a multi-user setting. As noted, the key is to commit the record immediately after generating the sequence. However, if the application is one with very heavy transaction processing – in other words, dozens of users creating records simultaneously – you might want to guard further against duplication. At the speeds computers process, it is not impossible that multiple users will grab the max value before it can be incremented and saved. (A sketch of a transaction-safe grab-and-increment counter appears below.)

The Renumber/Refresh List toolbar command is used to renumber an existing numbered list or to refresh a list after editing. For example, if item 3 is deleted from a list by editing, then you can use this tool to refresh the list. You can also use the Renumber/Refresh command to number or renumber a selection of list paragraphs with any chosen starting number.

If you are still reading this, then perhaps you are looking for a simple and reliable way to number a couple of lists in a Word document. If you read John's article, then you have already been informed that field numbering is simple and robust. If you are like 9 out of 10 Word users in my office, then anything more than "1. space space Blah, blah, Enter, 2. space space Blah, blah ..." defies simple! If that applies to you, then the "SeqField Numbering" add-in presented later in this page is for you.

As you can see, the sequence name can be almost anything (e.g. mySeq, A, B, or Bob's_your_uncle). If you start a sequence with a new sequence name, the numbering restarts with 1. Look at Mary's first chore in the right-hand column. Here you see the reset switch \r1 was used. This switch directs Word to restart the sequence named "A" with "1" at this point.

You’ve got some tips to help make your raffle more successful. You’ve got several free Word ticket templates to choose from. You know how to sequentially number tickets in two different ways. All that is left for you to do is go sell those tickets, have the draw, and then feel good about helping someone out. All for pennies on the dollar over ordering custom-made tickets.

OK, I guess it is better for me to explain what I am doing. I am in the process of creating an Access database to replace an Excel spreadsheet that an individual has been using forever and a day. Well, the individual has on occasion doubled up numbers, forgotten numbers, etc. So, with what I have learned from different Access courses and Google searches, I am trying to apply my knowledge.

There are a couple of ways you can set up Word 2007/2010 to use SEQ fields for numbering — you can set them up as AutoCorrect entries or as Quick Parts. Both ways work; the method you choose is up to you. This long article describes how to create the SEQ fields and the numbering style in your Normal.dotm template; how to save the SEQ fields as AutoCorrect entries in Word 2007/2010 (and how to use them); and how to save (and use) them as Quick Parts. The most time-consuming part of this process is setting up the fields and the style; once they're set up, using them is super easy.
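The multi-user advice above (commit immediately, and guard against two users grabbing the same max value) boils down to doing the read and the increment inside one transaction. Here is a minimal sketch in Python with SQLite; the database file, table name, and schema are made up for illustration, and the original posts are about Access/VBA, where a locked recordset plays the same role:

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode so we can
# issue our own BEGIN/COMMIT statements explicitly.
con = sqlite3.connect("numbering.db", isolation_level=None)
con.execute("CREATE TABLE IF NOT EXISTS counters (name TEXT PRIMARY KEY, value INTEGER)")
con.execute("INSERT OR IGNORE INTO counters VALUES ('ticket', 0)")

def next_number(con, name="ticket"):
    # BEGIN IMMEDIATE takes the write lock before the read, so two callers
    # can never see the same counter value.
    con.execute("BEGIN IMMEDIATE")
    (value,) = con.execute(
        "SELECT value FROM counters WHERE name = ?", (name,)).fetchone()
    con.execute("UPDATE counters SET value = ? WHERE name = ?", (value + 1, name))
    con.execute("COMMIT")
    return value + 1

print(next_number(con))  # 1
print(next_number(con))  # 2
```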
In the example I explained, I was using a list, but did it with un-linked text boxes using “continue from previous number” and “continue numbers across stories.” I’m guessing that there is no way to tell InDesign that even though there are 4 text boxes on the page, there are two different lists? I’d probably have to just create two threaded stories for that scenario to work.
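For the cut-and-stack problem mentioned earlier on this page (tickets printed 6-up come out of the cutter in non-sequential piles), the standard fix is to interleave numbers by slot rather than by page: slot k on every sheet carries one contiguous run of numbers. A minimal Python sketch, with the slots-per-sheet count as an assumed parameter:

```python
def cut_and_stack(total_tickets, slots_per_sheet=6):
    """Assign ticket numbers so that, after cutting the printed pile,
    each stack of cut slots is already in sequence."""
    sheets = -(-total_tickets // slots_per_sheet)  # ceiling division
    pages = []
    for sheet in range(sheets):
        row = []
        for slot in range(slots_per_sheet):
            # Slot k holds numbers k*sheets+1 .. (k+1)*sheets, one per sheet.
            n = slot * sheets + sheet + 1
            row.append(n if n <= total_tickets else None)
        pages.append(row)
    return pages

for row in cut_and_stack(12, slots_per_sheet=3):
    print(row)
# [1, 5, 9]
# [2, 6, 10]
# [3, 7, 11]
# [4, 8, 12]  -> cutting the pile yields stacks 1-4, 5-8, 9-12
```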
2019-02-22 21:12:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35339483618736267, "perplexity": 829.2901967555498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247526282.78/warc/CC-MAIN-20190222200334-20190222222334-00361.warc.gz"}
https://www.vedantu.com/question-answer/a-ball-is-rolled-off-the-edge-of-a-horizontal-class-11-physics-jee-main-60322d7981f7bb1500fb1c6a
# A ball is rolled off the edge of a horizontal table at a speed of $4\text{ m/sec}$. It hits the ground after $0.4\text{ sec}$. Which statement given below is true? (A) It hits the ground at a horizontal distance $1.6\text{ m}$ from the edge of the table (B) The speed with which it hits the ground is $0.4\text{ m/sec}$ (C) Height of the table is $0.8\text{ m}$ (D) It hits the ground at an angle of $60{}^\circ$ to the horizontal

Verified

Hint: Check the given options against the data in the question using Newton's equations of motion. There are three ways to pair these equations: velocity-time, position-time and velocity-position. In this order, these are also known as the first, second and third equations of motion. Which equation to use depends on the data given in the question.

Formulas used: $s=ut+\dfrac{1}{2}a{{t}^{2}}$, $v=u+at$, where $s$ → displacement in time $t$, $u$ → initial velocity, $v$ → final velocity, $a$ → acceleration.

Complete step-by-step solution:
Given, $u=4\ {m}/{s}$ (initial velocity in the horizontal direction) and $t=0.4\ s$ (time after which the ball hits the ground). The acceleration is due to gravity, i.e. $a=g=10\ m{{s}^{-2}}$.

First, checking option (A): using the kinematic equation of motion $s=ut+\dfrac{1}{2}a{{t}^{2}}$, the horizontal distance is ${{s}_{x}}={{u}_{x}}t$ (here $a=0$ because the velocity is constant in the x-direction), so ${{s}_{x}}=4\times 0.4=1.6\ m$. Option (A) is correct.

Now, checking option (B): from $v=u+at$ with initial vertical velocity $u=0$ (the ball rolls off a horizontal table), the vertical velocity component at impact is $v=at=10\times 0.4=4\ m{{s}^{-1}}$. The speed with which the ball hits the ground is the magnitude of the full velocity, $\sqrt{{{4}^{2}}+{{4}^{2}}}=4\sqrt{2}\approx 5.66\ m{{s}^{-1}}$, which is not $0.4\ m/sec$. Option (B) is incorrect.

Now checking option (C): for the height of the table, we need the vertical displacement. Using $s=ut+\dfrac{1}{2}a{{t}^{2}}$ with $u=0$ in the vertical direction, $s=\dfrac{1}{2}\times 10\times {{(0.4)}^{2}}=5\times 0.16=0.8\ m$. The height of the table is 0.8 m, so option (C) is correct.

Now, for option (D): since the velocity in the horizontal direction is $4\ m{{s}^{-1}}$ and the velocity in the vertical direction is also $4\ m{{s}^{-1}}$, the angle formed by the ball's velocity when it hits the ground is ${{45}^{\circ }}$ to both the vertical and the horizontal, not $60{}^\circ$. Option (D) is incorrect.

So, finally, the correct options are (A) and (C).
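The arithmetic above is easy to verify numerically; a short Python sketch (the variable names are chosen here for illustration):

```python
import math

u_x, t, g = 4.0, 0.4, 10.0     # horizontal speed (m/s), fall time (s), gravity (m/s^2)

x = u_x * t                    # horizontal distance from the table edge
v_y = g * t                    # vertical velocity component at impact
h = 0.5 * g * t**2             # height of the table
speed = math.hypot(u_x, v_y)   # magnitude of the impact velocity
angle = math.degrees(math.atan2(v_y, u_x))  # angle below the horizontal

print(x, h, v_y)                        # 1.6 0.8 4.0
print(round(speed, 2), round(angle, 1)) # 5.66 45.0
```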
2022-10-05 05:15:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8227198123931885, "perplexity": 696.7297194728869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00030.warc.gz"}
http://cvgmt.sns.it/paper/4894/
# An application of the continuous Steiner symmetrization to Blaschke-Santalò diagrams

created by pratelli on 17 Nov 2020

[BibTeX] Preprint
Inserted: 17 nov 2020
Last Updated: 17 nov 2020
Year: 2020

Abstract: In this paper we consider the so-called procedure of Continuous Steiner Symmetrization, introduced by Brock in [Brock95, Brock00]. It transforms every domain $\Omega\subset\subset\mathbb R^d$ into the ball, keeping the volume fixed and letting the first eigenvalue and the torsion respectively decrease and increase. While this does not provide, in general, a $\gamma$-continuous map $t\mapsto\Omega_t$, it can be slightly modified so as to obtain $\gamma$-continuity for a $\gamma$-dense class of domains $\Omega$, namely, the class of polyhedral sets in $\mathbb R^d$. This allows one to obtain a sharp characterization of the Blaschke-Santalò diagram of torsion and eigenvalue.
2020-11-24 04:30:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9536031484603882, "perplexity": 1449.1705841461012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141171077.4/warc/CC-MAIN-20201124025131-20201124055131-00323.warc.gz"}
http://math.stackexchange.com/questions/191146/prove-that-a-set-of-connectives-is-inadequate
# Prove that a set of connectives is inadequate

It is relatively easy to prove that a given set of connectives is adequate. It suffices to show that the standard connectives can be built from the given set. It is proven that the set $\{\lor, \land, \neg\}$ is adequate, and from that set it can be inferred (applying De Morgan laws and such) that $\{\lor, \neg\}$, $\{\land, \neg\}$ and $\{\to, \neg\}$ are also adequate. Nevertheless, I'm stuck trying to understand how to prove that a given set of connectives is inadequate. I know I have to prove that a standard connective can't be built using only the connectives of the given set, but I can't figure out how to do it. FYI, I'm trying to prove that $\{\lor, \land\}$ and $\{\leftrightarrow, \neg\}$ are inadequate sets of connectives.

For the first problem let $\Phi$ be the set of propositions built up from $a,\lor$, and $\land$. It can be described as follows:

1. $a\in\Phi$.
2. If $\varphi,\psi\in\Phi$, then $\varphi\lor\psi,\varphi\land\psi\in\Phi$.
3. $\Phi$ is the smallest set of propositions satisfying (1) and (2).

Now suppose that $\varphi\in\Phi$, and consider the truth table for $\varphi$: $$\begin{array}{c|c} a&\varphi\\ \hline T&?_1\\ F&?_2 \end{array}$$ I claim that the truth value $?_1$ is $T$ for all $\varphi\in\Phi$, and therefore $\lnot a\notin\Phi$. This is clearly the case for the proposition $a$. Suppose that it’s true for propositions $\varphi,\psi\in\Phi$. Then we have the following partial truth table: $$\begin{array}{c|c|c|c|c} a&\varphi&\psi&\varphi\lor\psi&\varphi\land\psi\\ \hline T&T&T&T&T \end{array}$$ Thus, it’s true of $\varphi\lor\psi$ and $\varphi\land\psi$ as well, and by induction it’s true for every $\varphi\in\Phi$. Try to find a similar idea for the second problem: some property that $\leftrightarrow$ and $\lnot$ preserve that $\land,\lor$, or $\to$ does not.

- Thank you very much for your answer. – Pampero Sep 4 '12 at 22:59
- This rather interesting related question asks for a calculation of the number of expressively-adequate truth functions. There were no good answers. I still wonder if there is any reasonable algorithm for determining whether a given set of operators is expressively adequate. – MJD Sep 4 '12 at 23:20

Hint: For the first problem, prove, in principle by induction, that any propositional function $f(A)$ built from $\land$ and $\lor$ will always have the value $1$ if $A$ has value $1$. We need a similar "invariance" property for the functions built from the set $\{\leftrightarrow, \neg\}$. I would suggest thinking of the truth table for a function $f(A,B)$ built up from these connectives. This truth table has $4$ entries, corresponding to the $4$ possible combinations of truth values of $A$ and $B$. Prove by induction that for any $f$ built up from our two connectives, an even number ($0$, $2$, or $4$) of the entries gets assigned the value True. For the connective $\leftrightarrow$, there is a bit of detail in showing that if $g(A,B)$ is true for $2$ entries, and $h(A,B)$ is true for $2$ entries, then $g(A,B)\leftrightarrow h(A,B)$ is true for an even number of entries.

- Thank you very much for your answer. – Pampero Sep 4 '12 at 22:58

For $\{{\leftrightarrow},{\neg}\}$, notice that if we identify "true" and "false" with $1$ and $0$ modulo $2$, then $a\leftrightarrow b \equiv a+b+1 \pmod2$ and $\neg a \equiv a+1\pmod2$. So everything we can build from them will be represented by linear polynomials modulo 2. We can convert that idea back to a direct proof that does not speak about modular arithmetic:

Lemma. Assume $f(x_1,\ldots,x_n)$ is a Boolean function built from $\leftrightarrow$ and $\neg$. Then for $1\le i\le n$ it holds either that $f$ does not depend on $x_i$ at all, or that inverting the value of $x_i$ will always invert the value of $f(x_1,\ldots,x_n)$.

Proof. By structural induction on $f$.

Since $a\land b$ does not have the property specified by the lemma, it cannot be built from $\leftrightarrow$ and $\neg$.

Notice that the structure of proofs that a set of connectives is not adequate is more varied than the structure of proofs that it is. (The latter is just a matter of showing that each member of a known adequate set can be expressed, which can then be verified by truth tables.)

- Thank you very much for your answer. It's a bit complex for me due to my lack of math and logic background, but I can see your point. Thanks. – Pampero Sep 4 '12 at 22:55

There is a result in Robert Reckhow's thesis that characterizes the adequate sets of connectives. The result says that for a set of connectives to be complete one needs the following:

• F and T (or formulas with these values),
• an odd connective (a connective with arity larger than 1 is called odd if it has an odd number of Ts in its truth table),
• a non-monotone connective (a connective for which turning an F into a T can make its value change from T to F).

These are necessary and sufficient conditions for a set of connectives to be adequate. If a set of connectives is not adequate, then it lacks one of these. Note that even connectives are closed under composition, as are monotone connectives. So to prove that a set of connectives is inadequate, you typically need to show that

• all of the connectives are monotone, or
• all of the connectives are even.

For your examples, the first one is a set of monotone connectives and the second one is a set of even connectives. So they cannot be adequate.
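The monotonicity argument for the first problem can also be checked by brute force; a small, illustrative Python sketch that closes the projections $a, b$ under pointwise $\land$ and $\lor$ and confirms that $\lnot a$ is never produced:

```python
from itertools import product

# Truth tables of two-variable Boolean functions, stored as 4-tuples
# indexed by the inputs (a, b) in {0,1}^2.
inputs = list(product((0, 1), repeat=2))
proj_a = tuple(a for a, b in inputs)
proj_b = tuple(b for a, b in inputs)

# Close {a, b} under pointwise AND and OR.
funcs = {proj_a, proj_b}
while True:
    new = {tuple(x & y for x, y in zip(f, g)) for f in funcs for g in funcs}
    new |= {tuple(x | y for x, y in zip(f, g)) for f in funcs for g in funcs}
    if new <= funcs:
        break
    funcs |= new

print(sorted(funcs))  # only a, b, a AND b, a OR b are reachable
print(tuple(1 - a for a, b in inputs) in funcs)  # False: NOT a is not expressible
```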
2014-12-22 07:45:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8655673861503601, "perplexity": 175.02376341956466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802774899.57/warc/CC-MAIN-20141217075254-00101-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.dumas.io/teaching/2022/spring/mcs275/nbview/homework/homework7soln.html
A document from MCS 275 Spring 2022, instructor David Dumas. You can also get the notebook file.

# MCS 275 Spring 2022 Homework 7 Solutions

• Course Instructor: David Dumas
• Solutions prepared by: Johnny Joyce

## Instructions:

• Complete the problems below, which ask you to write Python scripts.
• Upload your python code directly to gradescope, i.e. upload the .py files containing your work. (If you upload a screenshot or other file format, you won't get credit.)

This homework assignment must be submitted in Gradescope by Noon central time on Tuesday 1 March 2022.

### Collaboration

Collaboration is prohibited, and you may only access resources (books, online, etc.) listed below.

### Resources you may consult

The course materials you may refer to for this homework are:

### Point distribution

This homework assignment has two problems, numbered 2 and 3. The grading breakdown is:

| Points | Item |
| --- | --- |
| 4 | Problem 2 |
| 4 | Problem 3 |
| 10 | Total |

The part marked "autograder" reflects points assigned to your submission based on some simple automated checks for Python syntax, etc. The result of these checks is shown immediately after you submit.

### What to do if you're stuck

Ask your instructor or TA a question by email, in office hours, or on discord.

## Problem 1 doesn't exist

In Gradescope, the score assigned to your homework submission by the autograder (checking for syntax and docstrings) will be recorded as "Problem 1". Therefore, the numbering of the actual problems begins with 2.

## Problem 2: List of accessible locations

Suppose that instead of trying to solve a maze (find a path from start to goal), you want to determine all accessible squares in the maze (i.e. all locations that can be reached by a path from start).

In a file called hwk7prob2.py, write a function accessible_locations(M) that takes a maze object M and returns a list of all locations in M that can be reached from M.start. It is fine for this function to have other parameters as long as they have default values, so that the function can be called as accessible_locations(M).

As in solvemaze.py, you should use the interface provided by the Maze class from maze.py. You aren't required to use recursion here, but the most direct way to solve this problem is to make the minimum necessary changes to solvemaze() and retain the basic recursive strategy. The problems from worksheet 7 may also be useful.

Here is an example of the output of this function when applied to the 7x7 example maze we discussed in lecture.

In [13]: # Need to import maze and define accessible_locations before this will work!
         M = maze.MazeExample1()
         accessible_locations(M)
Out[13]: [(1, 1), (1, 2), (1, 3), (2, 3), (3, 3), (4, 3), (5, 3), (5, 2), (5, 1), (4, 1), (3, 1), (3, 4), (3, 5), (2, 5), (1, 5), (4, 5), (5, 5)]

As a reminder, here is a picture of this maze, which can be used to check the correctness of the list above.

As another example, this code creates a 5x5 maze with all its interior squares free, and with (2,2) as the start, and tests the function on it.

In [18]: M = maze.Maze(5,5)
         M.apply_border()
         M.start = (2,2)
         L = accessible_locations(M)
         assert(len(L)==9)  # make sure we found all 9 interior squares accessible!
         L
Out[18]: [(2, 2), (1, 2), (1, 1), (2, 1), (3, 1), (3, 2), (3, 3), (2, 3), (1, 3)]

# Solution

In [ ]: import maze

        def accessible_locations(M, path=None, visited=None):
            """
            Returns list of all locations that can be reached from M.start
            """
            if visited is None:  # Initialize visited upon first call
                visited = [M.start]
            if path is None:  # Initialize path upon first call
                path = [M.start]
            if path[-1] not in visited:  # Keep track of locations visited
                visited.append(path[-1])
            # Find all possible directions to go from the current location
            current_location = path[-1]
            steps = M.free_neighbors(*current_location)  # * unpacks the x and y coords as two separate args
            for s in steps:
                if len(path) >= 2 and s == path[-2]:
                    continue
                if s in visited:
                    continue
                accessible_locations(M, path + [s], visited)
            return visited

## Problem 3: Specialized quicksort for few distinct values

Suppose you're quicksorting a list that has only a few distinct values, like [2, 3, 2, 1, 2, 2, 1, 1, 3, 1, 3, 2, 2, 3, 1, 2, 2, 3, 3, 3] (which has 20 entries but only 3 distinct values). In this case, a recursive algorithm that repeatedly partitions the list is likely to sometimes end up working on a sublist in which every value is the same. Once the part of the list you're working on looks like that, e.g. [1,1,1,1] or [2,2,2,2,2], it's already sorted and there's no point in partitioning it further and making recursive calls.

Write a version of quicksort that is adapted to this special case by replacing the step that calls partition with:

1. Check whether every element of (the current part of) the list is equal to the first element (of the current part) of the list.
2. If so, then this part of the list is already sorted. Return.
3. Otherwise, partition the list and proceed as usual with recursive calls.

Call the new function quicksort_few_distinct(L) and put it in a file called hwk7prob3.py.

# Solution 1 (list comprehension - shortest way):

Both solutions are edited versions of sorts.py on the class Github in samplecode/recursion/. Direct link: https://github.com/daviddumas/mcs275spring2022/blob/main/samplecode/recursion/sorts.py

In [ ]: def quicksort_few_distinct(L, start=0, end=None):
            """
            Quicksort the part of list L between indices start and end in place.
            Optimized for use with lists containing few distinct elements
            repeated many times.
            """
            if end is None:
                end = len(L)
            if end - start > 1:
                # there are at least two elements, so some work is necessary
                print("Quicksort called on", L[start:end])
                first = L[start]
                # List comprehension checks whether each item in the current
                # part of the list has the same value as the first item.
                if all([x == first for x in L[start:end]]):
                    return
                else:
                    m = partition(L, start, end)
                    quicksort_few_distinct(L, start, m)
                    quicksort_few_distinct(L, m + 1, end)

# Solution 2:

In [ ]: def quicksort_few_distinct(L, start=0, end=None):
            """
            Quicksort the part of list L between indices start and end in place.
            Optimized for use with lists containing few distinct elements
            repeated many times.
            """
            if end is None:
                end = len(L)
            if end - start > 1:
                # there are at least two elements, so some work is necessary
                print("Quicksort called on", L[start:end])
                # Keep track of whether every item in the current part of the
                # list has the same value as its first item
                all_vals_equal_first = True
                for i in L[start:end]:
                    if i != L[start]:
                        all_vals_equal_first = False
                        break
                if all_vals_equal_first:
                    return
                else:
                    m = partition(L, start, end)
                    quicksort_few_distinct(L, start, m)
                    quicksort_few_distinct(L, m + 1, end)
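Both solutions call partition() from the course's sorts.py, which is not reproduced in this notebook. To exercise them standalone, here is a hypothetical stand-in (a Lomuto-style partition with the last element as pivot; an assumption for illustration, not necessarily what sorts.py uses), plus a quick test on the example list from the problem statement:

```python
def partition(L, start, end):
    """Stand-in for partition() from sorts.py (Lomuto scheme).
    Partitions L[start:end] around the last element and returns the pivot's
    final index, matching the (start, m) / (m+1, end) recursion above."""
    pivot = L[end - 1]
    i = start
    for j in range(start, end - 1):
        if L[j] <= pivot:
            L[i], L[j] = L[j], L[i]
            i += 1
    L[i], L[end - 1] = L[end - 1], L[i]
    return i

L = [2, 3, 2, 1, 2, 2, 1, 1, 3, 1, 3, 2, 2, 3, 1, 2, 2, 3, 3, 3]
quicksort_few_distinct(L)
assert L == sorted(L)
```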
2022-07-03 11:54:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34377190470695496, "perplexity": 1766.5455062032006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104240553.67/warc/CC-MAIN-20220703104037-20220703134037-00424.warc.gz"}
https://markthegraph.blogspot.com/2015/05/using-python-statsmodels-for-ols-linear.html
## Sunday, May 3

### Using python statsmodels for OLS linear regression

This is a short post about using the python statsmodels package for calculating and charting a linear regression. Let's start with some dummy data, which we will enter using iPython. We fake up normally distributed data around y ~ x + 10.

In [1]: import numpy as np
In [2]: x = np.random.randn(100)
In [3]: y = x + np.random.randn(100) + 10

We can plot this simply ...

In [4]: import matplotlib.pyplot as plt
In [5]: fig, ax = plt.subplots(figsize=(8, 4))
In [6]: ax.scatter(x, y, alpha=0.5, color='orchid')
Out[6]:
In [7]: fig.suptitle('Example Scatter Plot')
Out[7]:
In [9]: ax.grid(True)
In [10]: fig.savefig('filename1.png', dpi=125)

That was easy. Next we will add a regression line. We will use the statsmodels package to calculate the regression line. Lines 11 to 15 are where we model the regression. In lines 16 to 20 we calculate and plot the regression line.

The key trick is at line 12: we need to add the intercept term explicitly. Without this step, the regression model would be y ~ x, rather than y ~ x + c. Similarly, at line 17, we include an intercept term in the data we provide to the predicting method at line 18. The sm.add_constant() method prepends a column of ones for the constant term in the regression model, returning a two column numpy array. The first column is ones, the second column is our original data from above.

In [11]: import statsmodels.api as sm
In [12]: x = sm.add_constant(x)  # constant intercept term
In [13]: # Model: y ~ x + c
In [14]: model = sm.OLS(y, x)
In [15]: fitted = model.fit()
In [16]: x_pred = np.linspace(x.min(), x.max(), 50)
In [17]: x_pred2 = sm.add_constant(x_pred)  # constant intercept term
In [18]: y_pred = fitted.predict(x_pred2)
In [19]: ax.plot(x_pred, y_pred, '-', color='darkorchid', linewidth=2)
Out[19]: []
In [20]: fig.savefig('filename2.png', dpi=125)

If we wanted key data from the regression, the following would do the job, after line 15:

print(fitted.params)   # the estimated parameters for the regression line
print(fitted.summary())  # summary statistics for the regression

We can add a confidence interval for the regression. There is a 95 per cent probability that the true regression line for the population lies within the confidence interval for our estimate of the regression line calculated from the sample data. We will calculate this from scratch, largely because I am not aware of a simple way of doing it within the statsmodels package. To get the necessary t-statistic, I have imported the scipy stats package at line 27, and calculated the t-statistic at line 28.

In [22]: y_hat = fitted.predict(x)  # x is an array from line 12 above
In [23]: y_err = y - y_hat
In [24]: mean_x = x.T[1].mean()
In [25]: n = len(x)
In [26]: dof = n - fitted.df_model - 1
In [27]: from scipy import stats
In [28]: t = stats.t.ppf(1-0.025, df=dof)
In [29]: s_err = np.sum(np.power(y_err, 2))
In [30]: conf = t * np.sqrt((s_err/(n-2))*(1.0/n + (np.power((x_pred-mean_x),2) /
   ....:        ((np.sum(np.power(x_pred,2))) - n*(np.power(mean_x,2))))))
In [31]: upper = y_pred + abs(conf)
In [32]: lower = y_pred - abs(conf)
In [33]: ax.fill_between(x_pred, lower, upper, color='#888888', alpha=0.4)
Out[33]:
In [34]: fig.savefig('filename3.png', dpi=125)

The final step is a prediction interval. There is a 95 per cent probability that the real value of y in the population for a given value of x lies within the prediction interval. There is a statsmodels method in the sandbox we can use.

In [35]: from statsmodels.sandbox.regression.predstd import wls_prediction_std
In [36]: sdev, lower, upper = wls_prediction_std(fitted, exog=x_pred2, alpha=0.05)
In [37]: ax.fill_between(x_pred, lower, upper, color='#888888', alpha=0.1)
Out[37]:
In [38]: fig.savefig('filename4.png', dpi=125)
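For reference, more recent statsmodels releases expose both intervals directly, which avoids the from-scratch calculation above; a minimal sketch, assuming statsmodels >= 0.8 (the column names come from summary_frame()):

```python
pred = fitted.get_prediction(x_pred2)
frame = pred.summary_frame(alpha=0.05)
# mean_ci_lower/upper: confidence interval for the regression line
# obs_ci_lower/upper:  prediction interval for new observations
print(frame[["mean_ci_lower", "mean_ci_upper",
             "obs_ci_lower", "obs_ci_upper"]].head())
```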
2022-08-16 15:21:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33716318011283875, "perplexity": 5028.386225732995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572408.31/warc/CC-MAIN-20220816151008-20220816181008-00462.warc.gz"}
https://leanprover-community.github.io/archive/stream/113489-new-members/topic/How.20to.20use.20choice.20tactic.3F.html
## Stream: new members

### Topic: How to use choice tactic?

#### Li Yao'an (Nov 20 2019 at 03:20):

When we have an existential statement, we can use choice to construct a witness w. However, while w retains the correct type, the fact that it is a witness to a particular proposition is sometimes (seemingly) lost. Here is an example:

import tactic
open function
noncomputable theory

def inv {α β} (f : α → β) (h : surjective f) : β → α :=
begin
  intro b,
  rw [surjective] at h,
  choose a h using h b,
  exact a
end

theorem should_be_trivial {α β} (f : α → β) (h : surjective f) :
  let f' := inv f h in ∀ b : β, f (f' b) = b :=
begin
  simp,
  intro b,
  rw inv,
  admit
end

Rewriting inv gives some scary-looking term, and simp-ing gives "f (classical.some _) = b", when it seems that the proposition that choice was made on has been lost. What is a good way of recovering the proposition in an existential statement? How could this proof be fixed? (A proof ending with "assumption" would be extra nice, to show how the proposition can be added as a hypothesis.)

#### Bryan Gin-ge Chen (Nov 20 2019 at 04:53):

The key is classical.some_spec:

import tactic
open function
noncomputable theory

def inv {α β} (f : α → β) (h : surjective f) : β → α :=
begin
  intro b,
  rw [surjective] at h,
  choose a h using h b,
  exact a
end

theorem should_be_trivial {α β} (f : α → β) (h : surjective f) :
  let f' := inv f h in ∀ b : β, f (f' b) = b :=
begin
  intros f' b,
  exact classical.some_spec (h b),
end

-- n.b. using tactics in defs is discouraged (because it leads to scary-looking terms)
def inv' {α β} (f : α → β) (h : surjective f) : β → α :=
λ b, classical.some (h b)

#### Bryan Gin-ge Chen (Nov 20 2019 at 05:22):

foo_spec seems to be the conventional name for the theorem that provides the defining property of foo, e.g. nat.find_greatest_spec (for nat.find_greatest), finset.choose_spec (for finset.choose), etc.

#### Li Yao'an (Nov 20 2019 at 08:20):

Thanks for the reply, but I am still confused. I believe this is at least in part due to the "classical.some _" terms which appear. It seems to me that after unfolding the definition, the witnesses appear in the interactive tactic window as classical.some _, which does not tell us what the value is a witness for (although Lean seems to know of this). For example:

theorem contrived_example {α β} (f : α → β) (h : surjective f) (a1 a2 : α) (b1 b2 : β)
  (h2 : a1 = inv' f h b1) (h3 : a2 = inv' f h b2) (h4 : b1 = b2) : a1 = a2 :=
begin
  rw [h2, h3],
  dunfold inv',
  -- At this point the goal is to show "classical.some _ = classical.some _", refl fails
  simp [h4]
  -- something happens behind the scenes and it's probably eventually resolved by refl
end

Is there a way to elaborate this "classical.some _"? Additionally, would it be possible to have an interactive tactic to extract the propositions from the values resulting from the application of classical.some? For example, if we had "p : nat -> Prop" and "h : \exists n, p n", then we could replace instances of "classical.some h" with x and add an additional hypothesis "p x".

#### Mario Carneiro (Nov 20 2019 at 08:24):

it's a printing setting. Try setting set_option pp.proofs true

#### Kevin Buzzard (Nov 20 2019 at 08:32):

I thought the whole point of choose was so that the user didn't have to know about classical.some_spec?

#### Kevin Buzzard (Nov 20 2019 at 08:35):

Aah I see. So in the def you get the proof you need, but then the def finishes and the proof is lost. I guess you shouldn't be using tactic mode for the definition of the inverse function really. If you define it using choose in should_be_trivial you'll be OK. I guess an alternative is to define inv not just to return a function but to return a pair consisting of a function and the proof that it's an inverse.

#### Kevin Buzzard (Nov 20 2019 at 08:43):

import tactic.linarith
open function

noncomputable def inv' {α β} (f : α → β) (h : surjective f) : { f' : β → α // ∀ b : β, f (f' b) = b} :=
begin
  rw [surjective] at h,
  choose f' hh using h,
  exact ⟨f', hh⟩,
end

#### Mario Carneiro (Nov 20 2019 at 08:56):

The recommended way to define functions using choice is to write down the property defining the function and prove it straight away with some and some_spec:

def inv {α β} (f : α → β) (h : surjective f) : β → α :=
λ b, classical.some (h b)

theorem should_be_trivial {α β} (f : α → β) (h : surjective f) :
  let f' := inv f h in ∀ b : β, f (f' b) = b :=
λ b, classical.some_spec (h b)

#### Mario Carneiro (Nov 20 2019 at 08:56):

The choose tactic is only appropriate if you only need the function local to a proof

#### Li Yao'an (Nov 20 2019 at 09:04):

Thanks to everybody who replied. Here is my takeaway from this thread (hopefully useful to any future lost souls):

1) When using choice to extract witnesses, set_option pp.proofs true elaborates the tactic state so that it makes sense.
2) We can use classical.some_spec to extract the hypothesis of the witness. Example:

theorem should_be_trivial {α β} (f : α → β) (h : surjective f) :
  let f' := inv' f h in ∀ b : β, f (f' b) = b :=
begin
  intros f' b,
  simp [f'],
  dunfold inv',
  have := classical.some_spec (h b), -- the argument of some_spec is precisely that of classical.some in the tactic state
  assumption
end

#### Mario Carneiro (Nov 20 2019 at 09:13):

1) Lean doesn't print holes in terms as _, it prints them as ?m_1. If you see a _ it's a proof that has been suppressed from printing but is present internally

#### Mario Carneiro (Nov 20 2019 at 09:14):

so it's not about "elaborating the tactic state", it's about showing more information (hence the pp for pretty printer)
2021-05-13 19:11:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5160298943519592, "perplexity": 2205.886230897311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00209.warc.gz"}
http://golem.ph.utexas.edu/~distler/blog/archives/001135.html
## January 29, 2007

### itex2MML 1.1.9

I fixed some minor bugs in the \array command and the aligned environment. And I added a gathered and a split environment. So here's itex2MML 1.1.9 and an updated list of itex commands. Note that, because of this bug, the spacing on all of the array-like environments is a bit screwed-up in Mozilla. If Roger ever gets around to fixing that bug, the spacing will be much improved, and I'll be able to fine-tune it to look really sharp.

#### Update (2/7/2007):

Per Lieven le Bruyn's suggestion, the MacOSX binary of itex2MML is now a Universal Binary.

Posted by distler at January 29, 2007 3:22 AM
2013-05-23 16:40:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7911383509635925, "perplexity": 8179.289924985602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703592489/warc/CC-MAIN-20130516112632-00040-ip-10-60-113-184.ec2.internal.warc.gz"}
https://collegephysicsanswers.com/openstax-solutions/what-current-milliamperes-produced-solar-cells-pocket-calculator-through-which
Question

What is the current in milliamperes produced by the solar cells of a pocket calculator through which 4.00 C of charge passes in 4.00 h? Question by OpenStax is licensed under CC BY 4.0.

Final Answer: $0.278 \textrm{ mA}$
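The answer follows from the definition of current (a worked step added here for clarity):

$$I = \frac{\Delta Q}{\Delta t} = \frac{4.00\ \textrm{C}}{4.00\ \textrm{h} \times 3600\ \textrm{s/h}} = 2.78\times 10^{-4}\ \textrm{A} = 0.278\ \textrm{mA}$$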
2018-12-13 16:29:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5295060276985168, "perplexity": 3193.77156876825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824912.16/warc/CC-MAIN-20181213145807-20181213171307-00615.warc.gz"}
https://www.atmos-meas-tech.net/12/3209/2019/
Atmospheric Measurement Techniques, an interactive open-access journal of the European Geosciences Union

Atmos. Meas. Tech., 12, 3209–3222, 2019
https://doi.org/10.5194/amt-12-3209-2019

Research article | 17 Jun 2019

# Characterization and application of artificial light sources for nighttime aerosol optical depth retrievals using the Visible Infrared Imager Radiometer Suite Day/Night Band

Jianglong Zhang1, Shawn L. Jaker1, Jeffrey S. Reid2, Steven D. Miller3, Jeremy Solbrig3, and Travis D. Toth4

• 1Department of Atmospheric Sciences, University of North Dakota, Grand Forks, ND, USA
• 2Marine Meteorology Division, Naval Research Laboratory, Monterey, CA, USA
• 3Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, CO, USA
• 4NASA Langley Research Center, Hampton, VA, USA

Correspondence: Jianglong Zhang (jzhang@atmos.und.edu)

## Abstract

Using nighttime observations from the Visible Infrared Imager Radiometer Suite (VIIRS) Day/Night band (DNB), the characteristics of artificial light sources are evaluated as functions of observation conditions, and incremental improvements on nighttime aerosol retrievals using VIIRS DNB data are documented on a regional scale. We find that the standard deviation of instantaneous radiance for a given artificial light source is strongly dependent upon the satellite viewing angle but is weakly dependent on lunar fraction and lunar angle. Retrieval of nighttime aerosol optical thickness (AOT) based on the novel use of these artificial light sources is demonstrated for three selected regions (United States, Middle East and India) during 2015. Reasonable agreement is found between nighttime AOTs from the VIIRS DNB and temporally adjacent daytime AOTs from the AErosol RObotic NETwork (AERONET) as well as from coincident nighttime AOT retrievals from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP), indicating the potential of this method to begin filling critical gaps in diurnal AOT information at both regional and global scales. Issues related to cloud, snow and ice contamination during the winter season, as well as data loss due to the misclassification of thick aerosol plumes as clouds, must be addressed to make the algorithm operationally robust.

# 1 Introduction

The Visible Infrared Imager Radiometer Suite (VIIRS), on board the Suomi National Polar-orbiting Partnership (NPP) satellite, features 22 narrow-band channels in the visible and infrared spectrum. Included on VIIRS is the Day/Night band (DNB), designed to detect both reflected solar energy during daytime and low-light visible and near-infrared signals at nighttime (e.g., Lee et al., 2006; Miller et al., 2013; Elvidge et al., 2017). Compared to the Operational Line Scan (OLS) sensor on the legacy Defense Meteorological Satellite Program (DMSP) constellation, the VIIRS DNB has an improved response to nighttime visible signals, owing to its higher spatial resolution, radiometric resolution and sensitivity (e.g., Miller et al., 2013; Elvidge et al., 2017).
The DNB, unlike the OLS, is calibrated, which enables quantitative characterization of nighttime environmental parameters via a variety of natural and artificial light signals, including reflected moonlight in cloudy and cloud-free regions, natural and anthropogenic emissions from forest fires, volcanic eruptions, gas flares from oil fields, and artificial light sources from cities (e.g., Miller et al., 2013; Elvidge et al., 2017).

Using nighttime observations from VIIRS and OLS over artificial light sources such as cities, several studies have attempted to derive nighttime aerosol optical properties. For example, Zhang et al. (2008) proposed the concept of estimating nighttime aerosol optical thickness (AOT) by examining changes in DMSP OLS radiances over artificial light sources between aerosol-free and high aerosol loading (and cloud-free) nights. However, the OLS visible channel does not have onboard calibration, which limits the use of OLS data for quantitative study of nighttime aerosol properties. Compared to OLS, VIIRS has improved spatial and spectral resolutions and onboard calibration that make accurate quantification of nighttime aerosol properties feasible. Using VIIRS radiances over selected artificial light sources, Johnson et al. (2013) developed a retrieval of nighttime AOT for selected cities. However, radiances from artificial light-free regions are needed for this retrieval process. McHardy et al. (2015) proposed an improved method, based on the method of Johnson et al. (2013), which uses changes in spatial variations within a given artificial light source for retrieving nighttime AOT. The advantage of McHardy et al. (2015) is that only observations over the artificial light sources themselves are needed, eliminating the need for artificial light-free regions and the implicit spatial invariance assumptions of Johnson et al. (2013). Following those early attempts, several other studies have explored the potential of applying similar methods for air quality studies and for application to small cities (e.g., Choo and Jeong, 2016; Wang et al., 2016).

As proof-of-concept studies, only a few selected artificial light sources have been considered in those pioneering nighttime aerosol retrieval studies that utilize VIIRS observations. As suggested by McHardy et al. (2015), careful study of the characteristics of artificial light sources is needed to apply the method over a broader domain. Thus, in this study, using VIIRS data from 2015 over the US, the Middle East and India, we focus on answering the following questions.

1. How do radiance fields from artificial light sources vary as functions of observing conditions?
2. Are nighttime AOT retrievals using VIIRS DNB feasible on a regional basis? In particular, for our selected regions, can reasonable agreement be achieved between nighttime VIIRS DNB-derived AOT, aerosol retrievals from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and approximated nighttime AOT values from the daytime AErosol RObotic NETwork (AERONET)?
3. What are the limitations of the current approach that can be improved in future attempts?

In the current study, we do not aim to finalize the nighttime retrieval methods but rather explore existing issues, report incremental advancements and propose revised methods for future studies. This paper is organized as follows: Sect. 2 introduces the datasets used in this study as well as data processing and aerosol retrieval methods.
Section 3 discusses artificial light source patterns as functions of viewing and lunar geometries and lunar fraction, as well as other observation-related parameters. Results of regionally based retrievals are also included in Sect. 3. Section 4 closes the paper with discussion and conclusions.

# 2 Datasets and methods

## 2.1 Datasets

Flying in a sun-synchronous polar orbit, Suomi NPP VIIRS has a local nighttime overpass time of ∼01:30. The spatial resolution of a VIIRS DNB pixel is ∼750 m across the full swath width of ∼3000 km. VIIRS DNB observes at a wavelength range of 0.5–0.9 µm, with a peak wavelength of ∼0.7 µm (e.g., Miller et al., 2013). VIIRS differs from its ancestor, OLS, by providing onboard calibration for tracking signal degradation, as well as changes in the modulated spectral response function, through the use of a solar diffuser (e.g., Chen et al., 2017). Early versions of VIIRS DNB data suffer from stray light contamination (e.g., Johnson et al., 2013). These issues have since been corrected in later versions of the VIIRS DNB data (Mills et al., 2013).

In this study, three processed and terrain-corrected Suomi NPP VIIRS datasets were used for 2015. The VIIRS Day Night Band SDR (SVDNB) includes calibrated VIIRS DNB radiance data for the study as well as quality assurance (QA) flags for each pixel. The VIIRS Cloud Cover Layers EDR (VCCLO) dataset was used for cloud clearing, and the VIIRS Day Night Band SDR Ellipsoid Geolocation (GDNBO) dataset was used for obtaining geolocation for the VIIRS DNB radiance data. The GDNBO dataset also includes other ancillary parameters, including solar, lunar, and satellite zenith and azimuth angles, as well as lunar phase, that were used as diagnostic information in support of this study. The VIIRS data were obtained from the NOAA Comprehensive Large Array-Data Stewardship System (CLASS) site (https://www.avl.class.noaa.gov/saa/products/welcome, last access: 27 May 2019).

To evaluate the VIIRS retrieved AOTs, cloud-cleared and quality-assured level 2, version 3 AERONET data were enlisted as the “ground truth.” Reported in AERONET data are AOTs at a typical wavelength range of 0.34 to 1.64 µm (Holben et al., 1998). We point out that AERONET AOTs are derived through measuring the attenuation of solar energy at defined wavelengths and thus are only available during daytime. Therefore, averaged AOTs (0.675 µm) for the day before and after the VIIRS observations were used in evaluating the performance of VIIRS retrievals at night. A pair of VIIRS and AERONET retrievals is considered collocated if the temporal difference is within ±24 h and the spatial difference is within 0.4° latitude and longitude. All collocated AERONET data for one VIIRS data point were averaged to represent the AERONET-retrieved AOT value of the desired VIIRS retrieval.

Nighttime aerosol retrievals are also available from CALIOP aerosol products at both regional and global scales and for both day and night (Winker et al., 2007). Thus, we also intercompared VIIRS nighttime AOTs retrieved from this study with CALIOP column-integrated AOTs. The version 4.10, level 2 CALIOP aerosol profile products (L2_05kmAPro) were used in this study. After implementing quality assurance steps, as mentioned in Toth et al. (2018), column-integrated CALIOP AOTs were derived at the 0.532 and 1.064 µm channels and then interpolated to the 0.70 µm channel (central wavelength of the DNB) for this study.
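The text does not spell out how the two CALIOP channels are interpolated to 0.70 µm; a common choice is Ångström-exponent (power-law) interpolation, sketched below under that assumption, with illustrative function and variable names.

```python
import numpy as np

def interp_aot_to_dnb(aot_532, aot_1064, lam_out=0.700):
    """Interpolate AOT to lam_out (in um) assuming a power-law spectrum.

    Assumed method: the paper only states that CALIOP AOTs at 0.532 and
    1.064 um were interpolated to the DNB central wavelength of 0.70 um.
    """
    lam1, lam2 = 0.532, 1.064
    # Angstrom exponent implied by the two CALIOP channels
    alpha = -np.log(aot_532 / aot_1064) / np.log(lam1 / lam2)
    # Power-law interpolation to the DNB central wavelength
    return aot_532 * (lam_out / lam1) ** (-alpha)

# Example: AOT(0.532) = 0.30 and AOT(1.064) = 0.15 give alpha = 1.0
# and AOT(0.70) ~ 0.23.
print(interp_aot_to_dnb(0.30, 0.15))
```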
The VIIRS and CALIOP data pair is considered to be collocated if the spatial difference is within 0.4° latitude and longitude and the temporal difference is within ±1 h. Note that one VIIRS retrieval may be associated with multiple CALIOP AOT retrievals, and thus collocated CALIOP aerosol retrievals were averaged to a single value for this comparison.

An open-source global city database from MaxMind (https://www.maxmind.com/, last access: 11 May 2018) was used for cross-checking the detected artificial light sources in this study. The city database includes the name and geolocation of the cities, as well as other ancillary information. Based on these data, a total of 999 cities from the Middle East region (11–42° N, 28–60° E) and 2995 cities from the Indian region (8–35° N, 68–97° E) were used in this study. These cities, as well as their geolocations, are shown in Fig. 1b and c for the Middle East and Indian regions, respectively, and are documented in the Supplement.

Figure 1. Spatial distribution of the (a) 200 cities over the US, (b) 999 cities over the Middle East and (c) 2995 cities over India used in this study. Red dots show cities and towns from the state of Uttar Pradesh (UP) in India – a region of climatologically high aerosol loading.

One focus of this study is to understand the variations in artificial light sources as a function of observing conditions. To achieve this goal, we have arbitrarily selected 200 cities across the US. Since aerosol loadings are relatively low in the US compared to regions such as the Middle East and India, this selection gives insight into the characteristics of artificial light sources. Also, we require the selected cities to be isolated, that is, not in the immediate vicinity of another city or major light source, so as to avoid light dome contamination. The majority of selected cities have populations in the range of 25 000 to 100 000, with a few higher-population exceptions such as Memphis, New Orleans and Charleston. The geolocations of the 200 cities are shown in Fig. 1a and, as mentioned above, the full list of the cities is also included in the Supplement.

## 2.2 Retrieval methods

The theoretical basis for retrieving nighttime AOT using stable artificial lights follows previous studies (Zhang et al., 2008; Johnson et al., 2013; McHardy et al., 2015). In the current approach, the VIIRS-observed radiance over a cloud-free artificial light source can be expressed as follows:

$$I_{\mathrm{sat}} = I_{\mathrm{s}}\, e^{-\tau/\mu} + I_{\mathrm{s}}\, T(\mu) + I_{\mathrm{p}}, \qquad (1)$$

where $I_{\mathrm{sat}}$ is the satellite-received radiance, represented as the sum of contributions from three principal components: upwelling surface light emission through direct ($I_{\mathrm{s}}\, e^{-\tau/\mu}$) and diffuse ($I_{\mathrm{s}}\, T(\mu)$) transmittance and the path radiance source term ($I_{\mathrm{p}}$). Here, $\tau$ is the total column optical thickness from aerosol and Rayleigh components, $\mu$ is the cosine of the viewing zenith angle, and $T(\mu)$ is the diffuse-sky transmittance.
$I_{\mathrm{s}}$ is the cloud-free sky surface upward radiance, which can be further rewritten as follows:

$$\pi I_{\mathrm{s}} = r_{\mathrm{s}}\left(\mu_{0} F_{0}\, e^{-\tau/\mu_{0}} + \mu_{0} F_{0}\, T(\mu_{0}) + \pi I_{\mathrm{s}}\, \bar{r}\right) + \pi I_{\mathrm{a}}, \qquad (2)$$

where $r_{\mathrm{s}}$, $\mu_{0}$ and $F_{0}$ are (respectively) the surface reflectance, cosine of the lunar zenith angle and the top-of-atmosphere downward lunar irradiance convolved with the VIIRS DNB response function. $T(\mu_{0})$ is the diffuse transmittance term, $\bar{r}$ is the reflectance from the aerosol layer and $I_{\mathrm{a}}$ is the emission from the artificial light source. The three terms inside the parentheses of Eq. (2) comprise the surface downward irradiance terms, where $\mu_{0} F_{0}\, e^{-\tau/\mu_{0}}$ is the downward irradiance from moonlight through direct attenuation (or $F_{\mathrm{directdown}}$) and $\mu_{0} F_{0}\, T(\mu_{0})$ is the downward irradiance from moonlight through diffuse transmittance (or $F_{\mathrm{diffusedown}}$). $\pi I_{\mathrm{s}}\, \bar{r}$ represents the surface emission (irradiance) that is reflected back downward to the surface by the aerosol layer, which has a layer mean reflectivity of $\bar{r}$. Equation (2) shows that the surface emission term includes emissions from the artificial light source, as well as from reflected downward fluxes.

Solving Eq. (2) for $I_{\mathrm{s}}$ gives $I_{\mathrm{s}} = \left[r_{\mathrm{s}}\left(F_{\mathrm{directdown}} + F_{\mathrm{diffusedown}}\right) + \pi I_{\mathrm{a}}\right] / \left[\pi\left(1 - r_{\mathrm{s}}\bar{r}\right)\right]$; inserting that result into Eq. (1) and rearranging yields the following equation:

$$I_{\mathrm{sat}} = \frac{r_{\mathrm{s}}\left(F_{\mathrm{directdown}} + F_{\mathrm{diffusedown}}\right) + \pi I_{\mathrm{a}}}{\pi\left(1 - r_{\mathrm{s}}\bar{r}\right)}\left[e^{-\tau/\mu} + T(\mu)\right] + I_{\mathrm{p}}. \qquad (3)$$

We expect the artificial light source emission term, $I_{\mathrm{a}}$, to vary spatially within a heterogeneous light source such as a larger city. Within that city, we can assume that the $F_{\mathrm{directdown}}$, $F_{\mathrm{diffusedown}}$ and $I_{\mathrm{p}}$ terms have negligible spatial variations. This assumption follows McHardy et al. (2015), who also assume the surface diffuse emission term ($I_{\mathrm{s}}\, T(\mu)$) is spatially invariant. However, as indicated in Eq. (2), the surface diffuse emission term includes $I_{\mathrm{s}}$, which contains the $I_{\mathrm{a}}$ term. Thus, we retain the surface diffuse emission term in this study. By taking the spatial derivative of Eq. (3) (using the delta operator $\Delta$) and by eliminating terms that have small variation within a city, we can derive the following equation:

$$\Delta I_{\mathrm{sat}} = \frac{\Delta I_{\mathrm{a}}}{1 - \bar{r} r_{\mathrm{s}}}\left[e^{-\tau/\mu} + T(\mu)\right]. \qquad (4)$$

Here, $\Delta I_{\mathrm{a}}$ and $\Delta I_{\mathrm{sat}}$ are the spatial variances in TOA radiance within an artificial light source for aerosol- and cloud-free conditions and for cloud-free conditions, respectively. Similar to McHardy et al. (2015), the spatial variance in radiance in this study is represented by the standard deviation of radiance within an artificial light source. Also, the diffuse transmittance, $T(\mu)$, is required.
Following Johnson et al. (2013), we estimated the ratio ($k$) between direct transmittance ($e^{-\tau/\mu}$) and total transmittance using the 6S radiative transfer model (Vermote et al., 1997):

$$k = \frac{e^{-\tau/\mu}}{e^{-\tau/\mu} + T(\mu)}. \qquad (5)$$

The lookup table (LUT) values of $k$ were computed for an AOT range of 0–1.5 (at 0.05 intervals for AOT < 0.6, at 0.1 intervals for AOT of 0.6–1.0, plus two high AOT values of 1.2 and 1.5) for three different aerosol types: dust, smoke and pollutants. We also modified the 6S model (Vermote et al., 1997) to account for the spectral response function of the VIIRS DNB band (e.g., Chen et al., 2017). No sea salt aerosol was included in the LUT for this study, as the artificial light sources considered here were inland, with less probability of sea salt aerosol contamination. Still, sea salt aerosol can be added in later studies. Thus, we can rewrite Eq. (4) as follows:

$$\tau = \mu \ln\frac{\Delta I_{\mathrm{a}}}{k\, \Delta I_{\mathrm{sat}}\left(1 - \bar{r} r_{\mathrm{s}}\right)}. \qquad (6)$$

As suggested by Eq. (6), nighttime column optical thickness ($\tau$) can be estimated using spatial variances of an artificial light source over aerosol- and cloud-free conditions. The $\bar{r} r_{\mathrm{s}}$ term arises from the reflectance between the aerosol and surface layers. This term is small for dark surfaces or low aerosol loading cases but could be significant for thick aerosol plumes over bright surfaces, such as dust aerosols over the desert. We assume this term is negligible for this study. Note that $\tau$ values from Eq. (6) include AOT, as well as scattering (Rayleigh) and absorption (e.g., oxygen A band) optical depth from gas species. To derive nighttime AOTs, 6S radiative transfer calculations (Vermote et al., 1997) were used, assuming a standard atmosphere, to compute and remove the component due to molecular scattering.

## 2.3 Data preprocessing steps

The VIIRS data preprocessing for nighttime aerosol retrievals is implemented through two steps. First, artificial light sources are identified. Second, the detected artificial light sources are evaluated against a known city database and a detailed regional analysis is performed. This latter step is necessary to eliminate any unwanted “false” artificial light sources such as cloud contamination or lightning strikes.

In the first step, conducted on individual “granules” (∼90 s orbital subsets) or composites of adjacent granules, artificial light sources are selected after cloud screening and quality assurance procedures. Since VIIRS nighttime aerosol retrievals assume cloud-free conditions, cloud-contaminated pixels must be removed using the VIIRS cloud products. Note that the nighttime VIIRS cloud mask is thermal-infrared-based and has its limitations in detecting low clouds (especially over land), and thus additional cloud screening methods are also implemented, as mentioned in a later section. A single granule of VIIRS DNB radiance data is 4064 by 768 pixels, while for the same VIIRS granule the VIIRS cloud product reports values at 2032 by 384 pixels. Thus, the VIIRS cloud product is first oversampled and then used to screen the radiance data.
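A simple, assumed realization of this oversampling is nearest-neighbor replication, doubling the cloud mask along both axes so that it matches the DNB grid; the mask convention (0 = cloud-free) and array names below are illustrative.

```python
import numpy as np

def oversample_cloud_mask(cloud_mask):
    """Nearest-neighbor 2x oversampling of the VCCLO cloud mask
    (2032 x 384) onto the DNB radiance grid (4064 x 768)."""
    return np.repeat(np.repeat(cloud_mask, 2, axis=0), 2, axis=1)

# Screen DNB radiances, keeping only pixels flagged cloud-free (assumed 0)
dnb_radiance = np.random.rand(4064, 768)            # placeholder radiances
cloud_mask = np.zeros((2032, 384), dtype=np.uint8)  # placeholder cloud mask
cloud_free = np.where(oversample_cloud_mask(cloud_mask) == 0,
                      dnb_radiance, np.nan)
```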
Following the cloud screening step, VIIRS DNB quality assurance (QA) flags are used to eliminate pixels that either have missing or out-of-range data, exhibit saturation, or have bad calibration quality. We require the solar zenith angle to be larger than 102° to eliminate solar (including twilight) contamination. Upon cloud screening and QA checks, artificial light pixels are detected using a threshold-based method by comparing the radiance of a given pixel to that of background pixels (non-artificial light pixels), as suggested in Johnson et al. (2013). Artificial light pixels are defined as pixels having radiance values greater than 1.5 times the granule or multi-granule mean cloud-free background radiance.

The implementation of the first preprocessing step is illustrated in Fig. 2a–d. Figure 2a shows VIIRS DNB radiance data over North America for 1 October 2015. Figure 2b shows the same data as Fig. 2a but with cloud screening (shown in gray) and QA steps applied. Data removed by the day and night terminator (i.e., solar zenith angle < 102°) are shown in cyan, and pixels with QA values indicating signal saturation are shown in yellow. Orange pixels in Fig. 2c are the detected potential light sources on the granule scale. As shown in Fig. 2c, some cloud pixels may still be misclassified as artificial light sources. To avoid such false detection, the detected artificial light sources are further evaluated against a list of known cities for a given region, as mentioned in Sect. 2. This step is shown in Fig. 2d, where green pixels are artificial light sources confirmed by the known city light source database. Here, only 200 arbitrarily selected cities in the US were used, and thus some of the artificial light sources, although positively identified, were not highlighted in green, as they were not in the city list.

Figure 2. (a) VIIRS DNB contrast-enhanced imagery centered over North America for 1 October 2015. Panel (b) is the same as (a) but with cloud screening and quality assurance steps applied for cloudy (gray) and saturated (yellow) pixels and solar zenith angles < 102° (cyan). Panel (c) is similar to (b) but with artificial light sources identified through a granule-level detection (orange). Panel (d) is similar to (c) but shows artificial light sources cross-checked with a known city database and through a regional-level detection (green).

The granule or multi-granule mean cloud-free background radiances are used for detecting artificial light sources in the first step, which may introduce an over- or under-detection of artificial light sources. To refine this detection, a regionally based artificial light source detection step is implemented. In this step, a bounding box is selected for each cloud-free city. The bounding boxes were manually selected for the 200 cities in the US and 8 cities in the Middle East. Based on experimentation, we found that most cities have a bounding box size of less than ±0.3° latitude and longitude, except for large cities that have a population of 250 000 or more, depending on the country. Thus, for the remaining 991 cities in the Middle East and 2995 cities in India, to simplify the process, a ±0.3° latitude and longitude region was picked as the bounding box. The bounding boxes for large cities need to be manually selected in future studies.
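A minimal sketch of the threshold test and bounding-box subsetting described above follows. Treating all cloud-free pixels as “background” when computing the mean is a simplification (the text defines background as non-artificial-light pixels), and the array names are assumptions.

```python
import numpy as np

def detect_light_pixels(radiance, cloud_free, factor=1.5):
    """Flag cloud-free pixels whose radiance exceeds `factor` times the
    mean cloud-free background radiance of the granule or region."""
    background = np.nanmean(radiance[cloud_free])
    return cloud_free & (radiance > factor * background)

def in_bounding_box(lat, lon, city_lat, city_lon, half_box=0.3):
    """Boolean mask for pixels inside a +/- half_box degree box
    centered on the city."""
    return (np.abs(lat - city_lat) <= half_box) & \
           (np.abs(lon - city_lon) <= half_box)
```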
Even if a city is partially included in a bounding box, or multiple cities reside within a bounding box, retrievals can still be performed, since the variances of detected artificial light sources are used for aerosol retrievals regardless of the origins of those artificial light sources. The latitude and longitude ranges of the bounding boxes for all cities used in the study are included in the Supplement. Steps similar to those of the granule or multi-granule-level detection scheme are implemented here but with the use of localized mean cloud-free background radiances. The results from the regional detection are shown in Fig. 3. Figure 3a is the VIIRS nighttime image for Sioux City, Iowa, for 13 April 2015. The detected artificial light sources are shown in Fig. 3b, where green pixels represent artificial light sources identified by the local detection scheme (the second step) and orange pixels represent pixels identified at the granule or multi-granule level (the first step) that fail the regional detection or fall outside the bounding box.

Figure 3. (a) VIIRS nighttime imagery on 13 April 2015 over Sioux City, Iowa, US. Panel (b) is similar to (a) but shows detected artificial light sources using data within ±0.28° (latitude) and ±0.295° (longitude) of the city center (green), as indicated by the red box. Orange colors show the artificial light sources detected through a granule-level detection. Only green pixels are utilized for aerosol retrievals.

Cloud contamination, especially cirrus cloud contamination, remains an issue in the above steps, as shown in Fig. 2c, owing to limitations in the VIIRS infrared-based nighttime cloud mask. To further eliminate cities that are partially covered by clouds, nights for which the mean latitude and longitude of the detected light source pixels deviate by more than 0.02° from the seasonal or yearly mean geolocations are excluded. This process is based on the assumption that, for a partially cloud-covered city, only a portion of the city is detected as an artificial light source, and thus the mean geolocations likely deviate from the multi-night composited mean geolocations. However, this step may misidentify heavy aerosol plumes as cloud-contaminated scenes. These nuances of city light identification remain a topic of ongoing research and, for now, remain an outstanding source of uncertainty in the current retrieval algorithm.

On each night and for each light source (e.g., a given city composed of multiple VIIRS DNB pixels, such as shown in Fig. 3b), the averaged radiance, its standard deviation, the lunar fraction (the fraction of the lunar disk illuminated by the sun, as viewed from Earth), the viewing geometries and the number of artificial light source pixels identified are reported as diagnostic information. To further avoid contamination from potential cloud- and surface-contaminated pixels, or from pixels with erroneously high radiance values due to lightning flashes, the top 0.5 % and bottom 10 % of pixels are excluded when computing the standard deviation. Finally, this dataset is further used in the retrieval process.

# 3 Results

## 3.1 Linkages between artificial lights and observing conditions

As mentioned in Sect. 2, 200 cities within the US were arbitrarily chosen to examine the properties of artificial light sources, as we expect less significant aerosol contamination over the US in comparison to the other regions considered in this study.
This analysis allows us to gain insight into the natural variations in artificial light sources as a function of various observing parameters – variations that will determine the inherent uncertainty of aerosol retrievals. Cities have varying spatial light patterns, populations and nighttime electricity usage, as well as different surface conditions. To study the overall impacts of the observing conditions on artificial light source patterns, the yearly mean radiance and standard deviation of the detected light sources were computed for each city, regardless of observing conditions. Here, for each artificial light source (city or town) and for a given satellite overpass on a given night, the mean radiance and the standard deviation of radiance for artificial light source pixels within the given city or town were computed and further used as the base elements for computing yearly mean radiance and standard deviation values. Then, for each city and for each night, the instantaneous radiance and standard deviation values were scaled by the yearly mean values to derive a normalized radiance (N_Radiance) and normalized standard deviation (N_Rstd). This process was necessary to remove city-specific characteristics, making the comparison of artificial light source properties from different cities feasible. Also, to remove nights with cloud contamination or bad data, the yearly mean (N) and standard deviation (N_STD) of the total number of light source pixels identified for a given artificial light source were computed. Only nights with a number of detected light source pixels exceeding N − 0.1 × N_STD were used in the subsequent analysis.

Figure 4a shows the plot of Julian day versus normalized radiance using data from all 200 cities on all available nights, regardless of the observing conditions (with the exception of totally cloudy scenes, as identified by the VIIRS cloud product, which were removed). As suggested by Fig. 4a, nighttime artificial light sources vary as a function of Julian day. Higher radiance values were found during the Northern Hemisphere winter season (Julian days greater than 300 or less than 100, corresponding to the months of November through March of the following year) compared to the Northern Hemisphere spring, summer and fall seasons. In particular, during the Northern Hemisphere winter season, high spikes in radiance values were clearly visible. The increase in radiance values, as well as the frequent high spikes in radiance values during the winter season, may be due in part to snow and ice reflectance (modifying the surface albedo, and hence the multiple scattering between the atmosphere and surface, as well as augmenting lunar reflectance), especially for high-latitude regions. Thus, snow- and ice-removal steps are needed for nighttime aerosol retrievals on both regional and global scales. Still, upon characterizing the snow and ice cover from daytime observations, retrievals may still be possible over snow- and ice-contaminated regions in future studies. Also apparent in Fig. 4a is variation in the number of observations (cloud-free or partially cloudy) with respect to Julian day. The minimum number of cloud-free or partially cloudy observations that passed the QA checks occurs during the months of June and July, likely because the band of saturated, QA-flagged pixels (colored in yellow in Fig. 2) reaches furthest south during those 2 months.
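The normalization and pixel-count screen described at the start of this subsection can be rendered compactly as follows; the pandas layout and column names are illustrative assumptions, not the authors' processing code.

```python
import pandas as pd

# df: one row per (city, night), with columns 'city', 'radiance' (mean
# radiance of light pixels), 'rstd' (their standard deviation) and
# 'n_pix' (number of detected light source pixels).
def normalize_city_nights(df: pd.DataFrame) -> pd.DataFrame:
    g = df.groupby('city')
    out = df.copy()
    # Scale each night by the city's yearly mean -> N_Radiance, N_Rstd
    out['N_Radiance'] = df['radiance'] / g['radiance'].transform('mean')
    out['N_Rstd'] = df['rstd'] / g['rstd'].transform('mean')
    # Keep nights with enough detected pixels: n_pix > N - 0.1 * N_STD
    n_mean = g['n_pix'].transform('mean')
    n_std = g['n_pix'].transform('std')
    return out[out['n_pix'] > n_mean - 0.1 * n_std]
```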
VIIRS DNB QA checks also label a block of pixels adjacent to the day and night terminator as pixels with bad QA (e.g., the yellow area in Fig. 2b). Thus, during June and July, a significant portion of artificial light sources at high latitudes were removed from the analysis. These QA steps are retained in the process, although relaxing these QA requirements may be an option for enhancing data volume over high latitudes. An assessment of the uncertainties incurred by reducing the conservative nature of the QA flag is a subject for future studies.

Figure 4c and e show that the yearly mean normalized radiance, N_Radiance, varies as a function of lunar status, including the lunar fraction and lunar zenith angle. As the lunar fraction increases, N_Radiance increases, possibly due to the increase in reflected moonlight. As the lunar zenith angle increases (i.e., the moon is lower in the sky), a decrease in N_Radiance is found, indicating a reduction in downward moonlight as lunar zenith angle increases. An interesting relationship between N_Radiance and satellite zenith angle emerges in Fig. 4g. A 10 %–20 % increase in N_Radiance is observed for an increase in satellite zenith angle from 0° to 60°.

Figure 4. Panels (a), (c), (e) and (g) show the normalized radiance of artificial light sources (200 selected cities over the US for 2015) as functions of Julian day, lunar fraction, lunar zenith angle and satellite zenith angle, respectively. Panels (b), (d), (f) and (h) show plots similar to those in panels (a), (c), (e) and (g) but for the normalized standard deviation of radiance of artificial light sources. Cold to warm colors represent data density from low to high.

Figure 4b, d, f and h show analyses similar to Fig. 4a, c, e and g but for N_Rstd. A similar relationship between N_Rstd and Julian day is also found, with larger N_Rstd values in winter and smaller values in summer. Also, larger spikes in N_Rstd, possibly due to snow and ice contamination, are found in the winter season, suggesting that careful ice and snow detection methods are needed for processing VIIRS DNB data over high latitudes during the winter season. Still, the increase in nighttime radiance and standard deviation of radiance may also be due to increased artificial light usage at night during the winter months, and, for this reason, seasonally or monthly based $\Delta I_{\mathrm{a}}$ values may be needed. In contrast to the normalized radiance, insignificant changes in N_Rstd were observed as either the lunar fraction or the lunar zenith angle varied, indicating that lunar fraction and lunar zenith angle have less impact on nighttime aerosol retrievals when considering N_Rstd. N_Rstd was found to be strongly dependent upon the satellite zenith angle, with values larger than 1 observed near a 60° viewing zenith angle, likely due to the anisotropic behavior of artificial light sources, as well as longer slant paths, although the true reason remains unknown. To account for this viewing zenith angle dependency, a correction factor c was introduced in Johnson et al. (2013) in anticipation of this result.
Based on Fig. 4h, the correction factor, $c$, specified as a function of the satellite viewing zenith angle ($\theta$), was calculated using VIIRS DNB data from 2015 over the 200 selected cities:

$$c = 1.68 - 1.75\cos(\theta) + 0.91\cos^{2}(\theta). \qquad (7)$$

Radiance and standard deviation values from this study were further divided by $c$ to account for the viewing angle dependency.

Figure 5a is a scatterplot of N_Radiance versus N_Rstd. A strong linear relationship is shown, with a correlation of 0.92, suggesting that brighter artificial light sources are typically associated with larger spatial variations in radiance. Figure 5b shows the relationship between N_Rstd and AOT using a collocated VIIRS DNB and AERONET dataset. Only data from non-winter months (April–October 2015) were considered. Since nighttime AERONET data are not available, the AERONET data used for the AOT comparisons in Fig. 5b are taken from the days immediately before and after the VIIRS nighttime observations, following the same collocation method as described in Sect. 2. Figure 5b shows a non-linear linkage between N_Rstd values and collocated AERONET AOTs, with N_Rstd decreasing as AOT increases. As such, Fig. 5b justifies the rationale for retrieving nighttime AOT using spatial variations in artificial light sources.

## 3.2 Parameter quantification for nighttime aerosol optical depth retrievals

As shown in Eq. (6), to retrieve nighttime AOT using VIIRS DNB, the $\Delta I_{\mathrm{a}}$, $\Delta I_{\mathrm{sat}}$ and $k$ values must be quantified. $\Delta I_{\mathrm{sat}}$ is the standard deviation of an artificial light source under cloud-free conditions, calculated directly from VIIRS DNB data. $\Delta I_{\mathrm{a}}$ is the spatial standard deviation of the same artificial light source but under aerosol- and cloud-free conditions. $\Delta I_{\mathrm{a}}$ should be derived from nights with minimum aerosol contamination or, in principle, from nights with the highest standard deviation of radiance (Rstd) values. However, given that some of the highest Rstd values may correspond to unscreened clouds or lightning, for a given year and for a given city we computed the mean (Rstd_ave(30 %)) and standard deviation (Rstd_std(30 %)) of the 30 % highest Rstd values. We then used the mean plus 2 times the standard deviation of the 30 % highest Rstd values (Rstd_ave(30 %) + 2 × Rstd_std(30 %)) to represent the $\Delta I_{\mathrm{a}}$ value. Assuming a normal data distribution, two standard deviations above Rstd_ave(30 %) should represent the top 1 % of the highest Rstd values of all data points – providing a way to estimate the highest Rstd value while simultaneously minimizing cloud and lightning contamination. Artificial light sources are excluded if the ratio of Rstd_std(30 %) to Rstd_ave(30 %) is above 15 %. Artificial light sources with larger variations in peak Rstd values are likely to be associated with cities that have less stable artificial light signals. Over the US, because of concerns about ice and snow contamination, as mentioned in Sect. 3.1, only data from non-winter months (April–October 2015) were used. For the India and Middle East regions, snow and ice contamination is likely insignificant, and thus data from all months in 2015 were used.

Figure 5. (a) Normalized radiance versus normalized standard deviation of radiance for 200 cities over the US for 2015. (b) The normalized standard deviation of radiance as a function of adjacent daytime AERONET AOT (0.675 µm).
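Putting Eq. (6), the viewing-angle correction of Eq. (7) and the $\Delta I_{\mathrm{a}}$ estimate together, the core of the retrieval can be sketched as follows. The Rayleigh optical depth value, the neglect of the $\bar{r} r_{\mathrm{s}}$ term and all names are illustrative assumptions; in the paper, $k$ comes from the 6S-based lookup table and the molecular component is removed using 6S calculations.

```python
import numpy as np

def view_angle_correction(theta_deg):
    """Eq. (7): empirical correction factor c for the satellite viewing
    zenith angle, fitted to 2015 data over the 200 US cities."""
    ct = np.cos(np.radians(theta_deg))
    return 1.68 - 1.75 * ct + 0.91 * ct**2

def delta_ia_estimate(rstd_nights):
    """Delta I_a as mean + 2 x std of the top 30% of nightly Rstd values
    (Rstd assumed already divided by c on each night)."""
    rstd = np.sort(np.asarray(rstd_nights))
    top = rstd[-max(1, int(0.3 * rstd.size)):]
    return top.mean() + 2.0 * top.std()

def retrieve_aot(delta_i_sat, delta_i_a, theta_deg, k, tau_rayleigh=0.04):
    """Eq. (6) with the r_bar * r_s term neglected; an assumed Rayleigh
    optical depth (~0.04 near 0.7 um) is subtracted to yield AOT."""
    mu = np.cos(np.radians(theta_deg))
    dsat = delta_i_sat / view_angle_correction(theta_deg)
    return mu * np.log(delta_i_a / (k * dsat)) - tau_rayleigh
```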
As mentioned in Sect. 2.2, $k$ values are computed using a LUT (precomputed with the 6S radiative transfer model) for dust, smoke and pollutant aerosols. For simplicity, we assumed the US, Middle East and Indian regions were dominated by pollutant, dust and smoke aerosols, respectively. In future applications, $k$ values (related to aerosol type) should either be evaluated on a regional basis, following Remer et al. (2005), or derived directly from VIIRS, as mentioned in a later section.

Cloud contamination is a long-standing challenge for passive-based satellite aerosol research (e.g., Zhang et al., 2005). In this study, the VIIRS cloud product (VCCLO) was used for cloud clearing of the observed VIIRS DNB scenes. However, only VIIRS infrared channels are used for cloud detection at night (Godin and Vicente, 2015). Thus, it is possible that low-level clouds, unseen by the VIIRS nighttime cloud mask, may still be present in the “cloud-cleared” scenes. To further exclude potentially cloud-contaminated artificial light sources, we implemented additional quality control steps. First, it is noted that in the presence of low clouds certain artificial light source patterns may appear different from those under clear-sky conditions. Thus, only nights with mean geolocations of the detected artificial light sources within 0.02° of the multi-night clear-sky means are used. This approach, however, will introduce issues for regions with persistent cloud or thick aerosol plume coverage, such as the state of Uttar Pradesh (UP) in India, as mentioned later.

It was noted in Sect. 3.1 that the radiance and the standard deviation of radiance are strongly correlated. As such, for each city and for each year, a regression relationship between radiance and standard deviation of radiance values was constructed by calculating the mean and standard deviation of Rstd for a given radiance range. For a given range of radiance values, Rstd values more than two standard deviations above the mean Rstd for that range were discarded as noisy data. After removing these noisy points, the same procedures were repeated to compute the regression between radiance and Rstd values for each city. The overall mean of Rstd (Rstd_mean) for the given artificial light source was also computed. Data were removed if the Rstd value exceeded the regression-estimated Rstd (based on the radiance value) plus 0.5 times Rstd_mean. This step was taken to further remove cloud-contaminated data but may also remove scenes with thick aerosol plumes.
Figure 6b shows the collocated CALIOP and VIIRS nighttime AOTs, again using the retrievals without correcting for the diffuse transmittance term. A correlation of 0.47 was found between CALIOP AOT (interpolated to 0.700 µm) and VIIRS nighttime AOT.

Figure 6. (a) Scatterplot of VIIRS nighttime AOT versus adjacent daytime AERONET AOT (0.675 µm) for 200 selected cities over the US for 2015. No diffuse correction is applied. Panel (b) is similar to (a) but using nighttime CALIOP AOT (0.7 µm). Panels (c) and (d) are similar to panels (a) and (b) but with the diffuse correction implemented. Panels (e) and (f) are similar to (c) and (d) but for gridded VIIRS data (averaged into 1° × 1° latitude by longitude grids). Artificial light sources with fewer than 20 nights that passed the various cloud screening and QA checks are excluded. Cold to warm colors represent data density from low to high.

Figure 6c and d show retrieval comparisons similar to Fig. 6a and b but revised to include the k (diffuse transmittance) correction term. An overcorrection was found, seen as a slope higher than 1 between VIIRS and daytime AERONET AOTs, indicating that the correction for diffuse transmittance may be less important for low aerosol loading cases. The daytime AERONET AOT may not be a fair representation of nighttime AOTs in all cases. Large uncertainties exist in CALIOP extinctions and AOTs as well, due to the necessary assumptions about lidar ratios made in the retrieval process (e.g., Omar et al., 2013). Therefore, significant uncertainties exist in both the AERONET and CALIOP validation sources. Still, this can be improved with the use of the nighttime lunar photometry data that are in development by the AERONET group (e.g., Berkoff et al., 2011; Barreto et al., 2013).

Figure 7a and b show scatterplots of VIIRS DNB AOTs versus daytime AERONET and nighttime CALIOP AOTs, respectively, for the Middle East for 2015, using retrievals without k. A total of 999 cities were included in the study, and 368 cities were excluded for not passing the stable light source check (i.e., Rstd_std(30 %)/Rstd_ave(30 %) < 15 %) or for not having three or more nights that passed the various checks mentioned in previous sections (both criteria together are referred to as the stable light source requirement). Note that these criteria may exclude artificial light sources with highly variable day-to-day changes in AOT. Correlations of 0.64 and 0.46 were found between VIIRS and AERONET AOTs and between VIIRS and CALIOP AOTs, respectively. However, a low bias is clearly present in both comparisons. Figure 7c and d show the VIIRS nighttime AOTs versus AERONET (day) and CALIOP (night) AOTs with k included. Similar correlations are found, yet the low bias is largely corrected.

Figure 7. Similar to Fig. 6 but for 999 cities over the Middle East for 2015.

A similar study was conducted for India. Here we separated the cities in India inside and outside of UP (retrieval for UP is discussed later). Of a total of 2573 cities outside of UP, 1807 cities were found to satisfy the stable light source requirement. Again, Fig. 8a and b show VIIRS nighttime AOTs versus AERONET adjacent daytime and CALIOP nighttime AOTs without the k correction, and Fig. 8c and d are the plots with the diffuse transmittance (k) correction term included, for cities outside UP. In all four cases, correlations of around 0.5–0.6 were found, indicating that the developed algorithm has a reasonable capacity for tracking nighttime AOTs. A low bias occurred when k was not included.
When k was included, a near one-to-one agreement is found in both Fig. 8c and d. This exercise reinforces the notion that there is indeed a need to account for diffuse transmittance.

Figure 8. Similar to Fig. 7 but for the Indian region for 2015. Artificial light sources from the state of Uttar Pradesh in India are excluded.

Figure 9a and b compare AOTs reported by VIIRS, AERONET and CALIOP for cities within UP. Of a total of 422 cities, 325 passed the stable city light requirement. However, a low correlation was found between VIIRS nighttime and daytime AERONET AOTs. This result is not surprising, as thick aerosol plumes cover this region most of the year, and thus the derived cloud- and aerosol-free sky standard deviations of the artificial light sources (the $\Delta I_{\mathrm{a}}$ values) are not always representative of true aerosol-free cases. Therefore, a longer study period, or careful analysis by hand, may be needed for deriving $\Delta I_{\mathrm{a}}$ values for regions that are known to have persistent thick aerosol plume coverage.

Ideally, the retrievals at each light source location should be gridded and averaged to further increase the signal-to-noise ratio. We have tested this concept by averaging the retrievals shown in Figs. 6, 7 and 8 into a 1° × 1° (latitude by longitude) averaged dataset. Artificial light sources that have fewer than 20 valid nights in a year were excluded to provide statistically robust estimates of $\Delta I_{\mathrm{a}}$. Comparisons of the 1° × 1° (latitude by longitude) averaged VIIRS DNB AOT retrievals with daytime AERONET data (nighttime CALIOP AOTs) are shown in Figs. 6e (6f), 7e (7f) and 8e (8f) for the US, Middle East and Indian regions, respectively. Increases in correlations were found between VIIRS and AERONET AOTs for the Indian and Middle East regions. Marginal changes in correlations, however, occurred between VIIRS and CALIOP AOTs. Although neither daytime AERONET nor nighttime CALIOP AOTs can be considered the “ground truth” for nighttime AOTs, these results suggest that the newly developed method has a capacity for retrieving nighttime AOTs over both dark and bright surfaces.

Figure 9. (a) Scatterplot of VIIRS nighttime AOT versus adjacent daytime AERONET AOT (0.675 µm) over the state of Uttar Pradesh in India for 2015. The diffuse correction is applied. Panel (b) is similar to (a) but for nighttime CALIOP AOT (0.7 µm).

Figure 10 shows nighttime AOT retrievals over India for 12 and 16 January 2015, with the retrievals from UP removed. Figure 10a and b show true color imagery from Terra MODIS for 12 and 16 January 2015 (obtained from NASA Worldview: https://worldview.earthdata.nasa.gov/, last access: 27 May 2019). Figure 10c and d show the nighttime images of VIIRS DNB radiance for 12 and 16 January 2015. Overplotted on Fig. 10c and d are the retrieved VIIRS nighttime AOTs, with blue, green, orange and red representing AOT ranges of 0–0.2, 0.2–0.4, 0.4–0.6 and above 0.6, respectively, using the same gridded data as Fig. 8e–f. As shown in Fig. 10a, on 12 January the western portion of India was relatively aerosol-free, but a heavy aerosol plume is visible around the east coast of India. Correspondingly, AOTs lower than 0.2 were retrieved over western India, and AOTs larger than 0.6 were found over eastern India. On 16 January, as indicated by the MODIS daytime image, a thick plume covered the western portion of India, also seen in Fig. 10d via retrieved AOTs above 0.6.
Also, the northeastern portion of India was relatively aerosol-free, as indicated by both the MODIS true color imagery (Fig. 10b) and the VIIRS nighttime AOT retrievals (Fig. 10d).

Figure 10. (a) Terra MODIS true color imagery (NASA Worldview) for 12 January 2015 over India. Panel (b) is similar to (a) but for 16 January 2015. (c) VIIRS nighttime imagery on 12 January 2015. Overplotted are VIIRS nighttime AOT retrievals in 1° × 1° (latitude by longitude) grid format. Blue, green, orange and red represent AOT ranges of 0–0.2, 0.2–0.4, 0.4–0.6 and > 0.6, respectively. Panel (d) is similar to (c) but for 16 January 2015.

Based on Fig. 10c and d, there were many artificial light sources not used in the retrieval. Those sources were excluded by the various quality control checks of the study for reasons such as potential cloud contamination, light source instability, or insufficient valid data in a year. It is very likely that some valid data are removed in this conservative filtering process. New methods must be developed to restore valid data. Some ideas to this effect are presented in the section to follow.

The diffuse correction term, k, was shown to be an important factor in reducing bias in these retrievals. We compared the k corrections estimated using the 6S model (Vermote et al., 1997) with those empirically derived from this study. By assuming CALIOP nighttime AOTs to be the “true” AOTs and using the VIIRS AOTs shown in Figs. 7d and 8d as inputs, the k correction term could be inferred using Eq. (6). Figure 11a shows the derived k values versus CALIOP nighttime AOT for the Middle East region. Overplotted are the k values estimated from the 6S model (Vermote et al., 1997). The two patterns show some agreement, as both the modeled and the empirically derived k values are near or above 1 for CALIOP AOTs of 0.0 and below 0.5 when CALIOP AOTs are ∼1. This behavior indicates that the 6S-modeled k correction may provide a reasonable first-order estimate for dust aerosols in this region. Figure 11b shows a plot similar to Fig. 11a but for the Indian region. A larger data spread was found between the empirically derived and modeled k values assuming smoke aerosols, although the overall patterns were similar. One of the possible reasons for the disparity is that, unlike the Middle East region, where dust aerosols dominate, the Indian region is subject to many other aerosol species, including dust and pollutants, occurring across different regions and varying with season.

Figure 11. (a) Empirically derived (using data from Fig. 7d; filled circles) and 6S-model-estimated (red line) diffuse correction terms for the Middle East for 2015. Panel (b) is similar to Fig. 11a but for the Indian region for 2015 (using data from Fig. 8d).

## 3.4 Limitations and possible improvements

Despite showing some capacity, the retrieval algorithm examined in this study has its limitations. First, most retrievals are limited to AOTs less than 1.5. This is because scenes with heavy aerosol plumes can either be misclassified as clouds by the VIIRS cloud product or removed during the additional cloud screening steps introduced in this study. For heavy aerosol plumes, much larger areas could be detected as “light sources” due to enhanced diffuse radiation (e.g., Fig. 11) and could have different mean geolocations than on low-aerosol-loading and cloud-free nights, and thus would be removed by the geolocation checks mentioned in Sect. 3.2.
A loss of data, especially for heavy aerosol cases, is experienced in this study due to those stringent data screening steps. Also, for the purpose of avoiding cloud or lightning contamination, $\Delta I_{\mathrm{a}}$ values were not derived from the nights with the very highest radiance or standard deviation of radiance values. Doing so creates a problem for regions that have frequent heavy aerosol plume loading, such as UP. Both issues mentioned above may be mitigated by constructing a prescribed city pattern for each light source based on a multi-night composite from cloud-free and low aerosol loading conditions. In that case, light source pixels from the exact same locations would be used each night to reduce data loss, especially for nights with heavy aerosol plumes. In constructing the predefined city pattern, $\Delta I_{\mathrm{a}}$ values may also be derived. The construction of a prescribed city pattern will be attempted in a future study.

Even after vigorous attempts at cloud screening, some cloud contamination remains. Such conditions may account for the high-VIIRS-AOT but low-CALIOP-or-AERONET-AOT cases in Figs. 6–8, although both daytime AERONET data and CALIOP data have their own issues in representing nighttime aerosol optical depth, as discussed. More advanced cloud screening methods are needed to improve the screening-out of residual clouds. In addition, snow and ice cover poses challenges for this study, and new methods need to be developed to account for snow and ice coverage and allow for attempts at nighttime AOT retrievals over those scenes.

Even so, the algorithm as presented shows a capacity for retrieving nighttime AOT. Given that there are hundreds of thousands of cities and towns across the world that could serve as sources for this algorithm, the composite of retrievals from artificial light sources may provide a tractable means of attaining a regional to global description of nighttime aerosol conditions, on both moonlit and moon-free nights and over both dark and bright land surfaces. Considering the current glaring nocturnal gap in AOT, the current results show promise for providing closure and thereby enabling cloud and aerosol process studies and improved parameterizations for weather and climate modeling.

# 4 Conclusions and implications

In this study, based on Visible Infrared Imager Radiometer Suite (VIIRS) Day/Night band (DNB) data from 2015, we examined the characteristics of artificial light sources for selected cities in the US, India and the Middle East. Our findings point toward the following key conclusions.

Radiance from artificial light sources is a function of time of year, lunar illumination and geometry, and viewing geometry. Larger radiance values and spikes in radiance values can occur during the winter season, possibly related to snow and ice cover, indicating the need for careful snow and ice detection for nighttime retrievals using VIIRS data in regions that may experience snow and ice coverage. The normalized radiance increases with lunar fraction and decreases with increasing lunar zenith angle, as these parameters are tied to the magnitude of downwelling moonlight.

The normalized standard deviation of artificial light source radiance is a function of time of year and, similar to the normalized radiance, exhibits spikes during the winter season. However, no significant relationship was found between the normalized standard deviation of radiance and lunar characteristics, including lunar fraction and lunar zenith angle.
This finding suggests that the standard deviation of radiance, as opposed to radiance, is a potentially more robust parameter for nighttime aerosol retrievals using VIIRS DNB data. Both the normalized radiance and the normalized standard deviation of radiance are a strong function of satellite viewing angle, with larger values of both occurring at higher satellite viewing angles. As anticipated by past research, this viewing angle dependency must be accounted for in VIIRS DNB nighttime aerosol retrievals based on artificial light sources.

Preliminary evaluations over the US for 200 selected cities, over the Middle East for 999 cities and towns and over India for 2995 cities and towns (excluding the state of Uttar Pradesh in India) show reasonable agreement between VIIRS nighttime aerosol optical thickness (AOT) values and AOT values estimated from adjacent daytime AErosol RObotic NETwork (AERONET) and nighttime Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) observations. This finding suggests that the use of artificial light sources has the potential to be viable for regional as well as global nighttime aerosol retrievals.

Poor correlation was found between VIIRS nighttime AOTs and daytime AERONET AOTs for the state of Uttar Pradesh in India. This region is frequently covered by thick aerosol plumes, which may introduce a difficulty in constructing the cloud- and aerosol-free night characteristics of artificial light sources ($\Delta I_{\mathrm{a}}$) for the retrieval process. Based on this finding, we conclude that detailed analysis, and perhaps selection by hand of non-turbid baseline conditions, is needed for estimating $\Delta I_{\mathrm{a}}$ values in regions of climatologically high and persistent turbidity.

In contrast with McHardy et al. (2015), the diffuse correction in the nighttime aerosol retrieval process was found here to be important for regions with heavy aerosol loading. This study further suggests that radiative-transfer-model-based estimates of the diffuse correction term compare reasonably well with empirically derived values over the Middle East, where the dominant aerosol type is dust. However, in cases such as the Indian region, where several aerosol types may be expected during a year, a larger data spread was found, and specification of the diffuse correction term requires additional study.

Despite the advances made here, many limitations to the current algorithm remain. For example, snow, ice and cloud contamination can significantly affect the retrieved AOTs. Advanced procedures for snow, ice and cloud removal are needed, with a full evaluation of their potential impact. Also, high aerosol loading may be screened out due to the misclassification of thick aerosol plumes as clouds. A pattern-based artificial light source method will be examined in a future study as one approach to mitigate this issue. Despite these known issues, these low-light studies forge a promising new pathway toward providing nighttime aerosol optical property information at the spatial and temporal scales needed by the aerosol modeling community for regional to global applications (e.g., Zhang et al., 2014).

## Data availability

All data used in this study are publicly available. The VIIRS data were obtained from the NOAA CLASS site (https://www.avl.class.noaa.gov/saa/products/welcome, NOAA, 2019).
The AERONET data were obtained from the NASA AERONET site (https://aeronet.gsfc.nasa.gov/, NASA Goddard Space Flight Center, 2019). The CALIOP data were obtained from the NASA Langley Research Center Atmospheric Science Data Center (https://eosweb.larc.nasa.gov/project/calipso/calipso_table, NASA Langley Research Center, 2019). The global city database used in this study is a free open-source dataset. This product includes data created by MaxMind, available from https://www.maxmind.com/ (MaxMind, 2018).

Author contributions. JZ, JSR and SDM designed the research concept. JSR, SDM, SLJ and JS provided constructive suggestions during the study. JZ, SLJ and TDT conducted the data processing. All authors participated in the writing of the manuscript.

Competing interests. The authors declare that they have no conflict of interest.

Special issue statement. This article is part of the special issue "Holistic Analysis of Aerosol in Littoral Environments – A Multidisciplinary University Research Initiative (ACP/AMT inter-journal SI)". It is not associated with a conference.

Acknowledgements. We thank the AERONET team for the AERONET data. We thank the two anonymous reviewers for their constructive suggestions.

Financial support. This research has been supported by the Office of Naval Research (grant no. N00014-16-1-2040) and the NOAA JPSS Program Office. Shawn L. Jaker was partially supported by NASA (grant no. NNX17AG52G) and the NSF (grant no. IIA-1355466).

Review statement. This paper was edited by Sebastian Schmidt and reviewed by two anonymous referees.

References

Barreto, A., Cuevas, E., Damiri, B., Guirado, C., Berkoff, T., Berjón, A. J., Hernández, Y., Almansa, F., and Gil, M.: A new method for nocturnal aerosol measurements with a lunar photometer prototype, Atmos. Meas. Tech., 6, 585–598, https://doi.org/10.5194/amt-6-585-2013, 2013.

Berkoff, T. A., Sorokin, M., Stone, T., Eck, T. F., Hoff, R., Welton, E., and Holben, B.: Nocturnal Aerosol Optical Depth Measurements with a Small-Aperture Automated Photometer Using the Moon as a Light Source, J. Atmos. Ocean. Tech., 28, 1297–1306, 2011.

Chen, H., Xiong, X., Sun, C., Chen, X., and Chiang, K.: Suomi-NPP VIIRS day–night band on-orbit calibration and performance, J. Appl. Remote. Sens., 11, 36019, https://doi.org/10.1117/1.JRS.11.036019, 2017.

Choo, G. H. and Jeong, M. J.: Estimation of nighttime aerosol optical thickness from Suomi-NPP DNB observations over small cities in Korea, Korean Journal of Remote Sensing, 32, 73–86, 2016.

Elvidge, C. D., Baugh, K., Zhizhin, M., Hsu, F. C., and Ghosh, T.: VIIRS Night-Time Lights, Int. J. Remote Sens., 38, 5860–5879, 2017.

Godin, R. and Vicente, G.: Joint Polar Satellite System (JPSS) Operational Algorithm Description (OAD) Document for VIIRS Cloud Mask (VCM) Intermediate Product (IP) Software, National Aeronautics and Space Administration (NASA), Greenbelt, Maryland, Goddard Space Flight Center, available at: https://jointmission.gsfc.nasa.gov/sciencedocs/2015-08/474-00062_OAD-VIIRS-Cloud-Mask-IP_I.pdf (last access: 2 November 2018), 2015.

Holben, B. N., Eck, T. F., Slutsker, I., Tanré, D., Buis, J. P., Setzer, A., Vermote, E., Reagan, J. A., Kaufman, Y. J., Nakajima, T., Lavenu, F., Jankowiak, I., and Smirnov, A.: AERONET – A Federated Instrument Network and Data Archive for Aerosol Characterization, Remote Sens. Environ., 66, 1–16, 1998.
Johnson, R. S., Zhang, J., Hyer, E. J., Miller, S. D., and Reid, J. S.: Preliminary investigations toward nighttime aerosol optical depth retrievals from the VIIRS Day/Night Band, Atmos. Meas. Tech., 6, 1245–1255, https://doi.org/10.5194/amt-6-1245-2013, 2013.

Lee, T. E., Miller, S. D., Turk, F. J., Schueler, C., Julian, R., Deyo, S., Dills, P., and Wang, S.: The NPOESS VIIRS day/night visible sensor, B. Am. Meteorol. Soc., 87, 191–199, 2006.

MaxMind: Free World Cities Database, available at: https://www.maxmind.com/, last access: 11 May 2018.

McHardy, T. M., Zhang, J., Reid, J. S., Miller, S. D., Hyer, E. J., and Kuehn, R. E.: An improved method for retrieving nighttime aerosol optical thickness from the VIIRS Day/Night Band, Atmos. Meas. Tech., 8, 4773–4783, https://doi.org/10.5194/amt-8-4773-2015, 2015.

Miller, S. D., Straka III, W., Mills, S. P., Elvidge, C. D., Lee, T. F., Solbrig, J., Walther, A., Heidinger, A. K., and Weiss, S. C.: Illuminating the Capabilities of the Suomi National Polar-Orbiting Partnership (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band, Remote Sens., 5, 6717–6766, 2013.

Mills, S., Weiss, S., and Liang, C.: VIIRS Day/Night Band (DNB) Stray Light Characterization and Correction, Proceedings SPIE 8866, Earth Observing Systems XVIII, 88661P, https://doi.org/10.1117/12.2023107, 2013.

NASA Goddard Space Flight Center: Aerosol Robotic Network, The NASA AERONET site, available at: https://aeronet.gsfc.nasa.gov/, last access: 11 June 2019.

NASA Langley Research Center: Atmospheric Science Data Center, CALIOP data site, available at: https://eosweb.larc.nasa.gov/project/calipso/calipso_table, last access: 11 June 2019.

NOAA: The NOAA Comprehensive Large Array-Data Stewardship System, The NOAA CLASS site, available at: https://www.avl.class.noaa.gov/saa/products/welcome, last access: 11 June 2019.

Omar, A. H., Winker, D. M., Tackett, J. L., Giles, D. M., Kar, J., Liu, Z., Vaughan, M. A., Powell, K. A., and Trepte, C. R.: CALIOP and AERONET aerosol optical depth comparisons: One size fits none, J. Geophys. Res.-Atmos., 118, 4748–4766, https://doi.org/10.1002/jgrd.50330, 2013.

Remer, L. A., Kaufman, Y. J., Tanré, D., Mattoo, S., Chu, D. A., Martins, J. V., Li, R.-R., Ichoku, C., Levy, R. C., Kleidman, R. G., Eck, T. F., Vermote, E., and Holben, B. N.: The MODIS Aerosol Algorithm, Products, and Validation, J. Atmos. Sci., 62, 947–973, https://doi.org/10.1175/JAS3385.1, 2005.

Toth, T. D., Campbell, J. R., Reid, J. S., Tackett, J. L., Vaughan, M. A., Zhang, J., and Marquis, J. W.: Minimum aerosol layer detection sensitivities and their subsequent impacts on aerosol optical thickness retrievals in CALIPSO level 2 data products, Atmos. Meas. Tech., 11, 499–514, https://doi.org/10.5194/amt-11-499-2018, 2018.

Vermote, E. F., Tanré, D., Deuzé, J. L., Herman, M., and Morcrette, J. J.: Second simulation of the satellite signal in the solar spectrum, 6S: an overview, IEEE T. Geosci. Remote, 35, 675–686, 1997.

Wang, J., Aegerter, C., Xu, X., and Szykman, J. J.: Potential application of VIIRS Day/Night Band for monitoring nighttime surface PM2.5 air quality from space, Atmos. Environ., 124, 55–63, 2016.

Zhang, J., Reid, J. S., and Holben, B. N.: An analysis of potential cloud artifacts in MODIS over ocean aerosol optical thickness products, Geophys. Res. Lett., 32, L15803, https://doi.org/10.1029/2005GL023254, 2005.

Zhang, J., Reid, J. S., Turk, J., and Miller, S.: Strategy for studying nocturnal aerosol optical depth using artificial lights, Int. J. Remote Sens., 29, 4599–4613, 2008.
Zhang, J., Reid, J. S., Campbell, J. R., Hyer, E. J., and Westphal, D. L.: Evaluating the Impact of Multi-Sensor Data Assimilation on a Global Aerosol Particle Transport Model, J. Geophys. Res.-Atmos., 119, 4674–4689, https://doi.org/10.1002/2013JD020975, 2014.
https://torch.mlverse.org/docs/reference/torch_norm.html
Norm

torch_norm(self, p = 2L, dim, keepdim = FALSE, dtype)

## Arguments

- self: (Tensor) the input tensor.
- p: (int, float, inf, -inf, 'fro', 'nuc', optional) the order of norm. Default: 'fro'. The following norms can be calculated:

| ord   | matrix norm                  | vector norm             |
|-------|------------------------------|-------------------------|
| NULL  | Frobenius norm               | 2-norm                  |
| 'fro' | Frobenius norm               | --                      |
| 'nuc' | nuclear norm                 | --                      |
| Other | as vec norm when dim is NULL | sum(abs(x)^ord)^(1/ord) |

- dim: (int, 2-tuple of ints, 2-list of ints, optional) If it is an int, the vector norm will be calculated; if it is a 2-tuple of ints, the matrix norm will be calculated. If the value is NULL, the matrix norm will be calculated when the input tensor has exactly two dimensions, and the vector norm when it has exactly one dimension. If the input tensor has more than two dimensions, the vector norm will be applied to the last dimension.
- keepdim: (bool, optional) whether the output tensors have dim retained or not. Ignored if dim = NULL and out = NULL. Default: FALSE.
- dtype: (torch.dtype, optional) the desired data type of the returned tensor. If specified, the input tensor is cast to dtype while performing the operation. Default: NULL.

## Description

Returns the matrix norm or vector norm of a given tensor.

## Examples

if (torch_is_installed()) {
  a = torch_arange(0, 9, dtype = torch_float())
  b = a$reshape(list(3, 3))
  torch_norm(a)
  torch_norm(b)
  torch_norm(a, Inf)
  torch_norm(b, Inf)
}
#> torch_tensor
#> 8
#> [ CPUFloatType{} ]
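The R binding mirrors PyTorch's torch.norm, so for readers more familiar with Python, a rough equivalent sketch follows. It assumes a PyTorch version in which torch.norm is still available (newer releases steer users toward torch.linalg.norm):

```python
import torch

a = torch.arange(9, dtype=torch.float32)   # tensor([0., 1., ..., 8.])
b = a.reshape(3, 3)

print(torch.norm(a))                 # vector 2-norm: sqrt(0^2 + ... + 8^2) ~ 14.2829
print(torch.norm(b))                 # Frobenius norm of the matrix (same value)
print(torch.norm(a, float("inf")))   # infinity norm: max(|x|) = 8.0
print(torch.norm(a, 1))              # 1-norm: sum(|x|) = 36.0
```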
https://calendar.utk.edu/event/de_seminar_mitchell_sutton_utk
# DE seminar: Mitchell Sutton, UTK

Title: New Families of Fractional PDEs Arising from Fractional Calculus of Variations.

Abstract: In this presentation we shall explore two new families of fractional PDEs obtained as Euler-Lagrange equations of problems in the fractional calculus of variations. Several new fractional differential operators will be introduced, including the fractional $p$-Laplacian, Laplacian, and Neumann boundary operator. In each family of problems, we consider one-sided differentiation as well as differentiation in each direction, both in the weak sense. The first family of problems connects minimization problems with prescribed boundary conditions to associated fractional PDEs via the calculus of variations. The second family of problems establishes the connection between minimization problems with natural boundary conditions and fractional PDEs with Neumann boundary data. We prove the existence and uniqueness of weak solutions in the newly developed fractional Sobolev space(s) ${}^{\pm}W^{\alpha,p}$. We also consider fractional PDEs for which there is no associated minimization problem. In addition to proving existence and uniqueness of solutions, we discuss the issue of choosing appropriate initial conditions and our interpretation of an initial value problem.

Thursday, March 5, 2020, 2:10pm to 3:25pm
Ayres Hall, 111, 1403 Circle Drive, Knoxville, TN 37996
Department: Mathematics
https://discuss.codechef.com/t/curmat-editorial/83616
# CURMAT - Editorial

Author: Sayantan Jana
Editorialist: Sayantan Jana

# DIFFICULTY:

HARD

# PREREQUISITES:

Disjoint Set Union, Dynamic Connectivity

# PROBLEM:

Initially given is a prime $p$ and a matrix $M$ of size $N \times N$. The matrix is required to be curious, i.e., once it is completely filled,

- Each cell should contain an integer between $1$ and $p-1$ inclusive.
- For each non-trivial square submatrix $A$ of $M$ (a submatrix containing more than 1 cell), its determinant $|A|$ should be a multiple of $p$.

You are required to handle $Q$ queries. After each addition or deletion query on the matrix $M$, report the number of possible ways to fill up the empty cells in $M$ to make it curious, modulo $10^9+7$.

# QUICK EXPLANATION:

- It can be shown that the second curious condition is equivalent to the matrix having rank 1 modulo $p$. In a rank 1 matrix, each row is a multiple of every other row and each column is a multiple of every other column. Hence there is a cyclic dependency between the cells: for $1 \leq a,b,c,d \leq N$, $M_{a,b} \cdot M_{c,d} \equiv M_{a,d} \cdot M_{c,b} \pmod p$; thus if any 3 of these cells have values in them, the 4th cell necessarily needs to take a particular value to obey the rank 1 condition.
- There are actually a total of $2N-1$ individual cells whose values can independently determine the values to be filled in all other cells for the matrix to have rank 1. Intuitively, just consider the 1st row and 1st column filled, a total of $2N-1$ cells. These are the independent constants in a graph; the count of free constants available decides the count of ways to form a curious matrix, $(p-1)^f$, with $f$ being the current count of free constants.
- The problem can be modelled as a graph problem. Consider $2N$ nodes in the graph, each node representative of a row or a column: nodes $1$ to $N$ for the rows and $N+1$ to $2N$ for the columns. For every filled cell $M_{x,y}$ containing value $v$, consider a directed edge from node $x$ to node $y+N$ with weight $v$ and another directed edge from node $y+N$ to node $x$ with weight $1/v$ (the inverse of $v$ w.r.t. prime $p$). Now, to obey the rank 1 condition, the product of all edge weights in every cycle in the graph must be $1$. The count of free constants is the number of connected components in the graph minus 1.
- The addition query alone could have been handled in $O(\log N)$ by a DSU. However, to handle deletion queries, dynamic connectivity comes into play.

# EXPLANATION:

Counting ways to get a curious matrix from an initially empty matrix

Let's count the number of ways to fill an empty matrix to make it curious. If we were asked to satisfy only the first condition, the number of ways would have been $(p-1)^{N \cdot N}$. Does the count remain the same if we are asked to incorporate the second condition as well? No, it doesn't, since by the second condition there is a certain dependency imposed on certain cells once some other cells are filled. This dependency is what is enforced by the rank 1 property of the matrix.

Proving the second curious condition is equivalent to the rank 1 requirement

Consider $a,b$ such that $1 \leq a,b \leq N-1$, and consider the $2 \times 2$ submatrix formed by rows $a$ and $a+1$ and columns $b$ and $b+1$:

$$A = \begin{bmatrix} M_{a,b} & M_{a,b+1} \\ M_{a+1,b} & M_{a+1,b+1} \end{bmatrix}$$

The curious condition requires the determinant of the submatrix to satisfy $|A| \equiv 0 \pmod p$, i.e., $M_{a,b} \cdot M_{a+1,b+1} \equiv M_{a+1,b} \cdot M_{a,b+1} \pmod p$. This implies

$$\frac{M_{a,b}}{M_{a+1,b}} \equiv \frac{M_{a,b+1}}{M_{a+1,b+1}} \pmod p .$$
Thus the bottom row of submatrix $A$ is a multiple of the top row. Now, varying $b$ from $1$ to $N-1$, it can be found that the $(a+1)$-th row is a multiple of the $a$-th row in the matrix $M$. It can also be said the other way round, i.e., the $a$-th row is a multiple of the $(a+1)$-th row. This can be generalised to show that each row is a multiple of every other row in a curious matrix. Similarly, this can also be shown for columns. Hence, considering modulo prime $p$, the rank of any curious matrix is 1.

Now we understand the cyclic dependency that is created among the cells. Taking a cue from the rank 1 property we just proved, we use the fact that there are $N + (M - 1)$ free constants in a rank 1 matrix of dimension $N \times M$: the $N$ components of the first column vector (let's call them $x_0, x_1, \dots, x_{N-1}$), and the multiplier of each of the other column vectors with respect to the first (let's call them $y_1, y_2, \dots, y_{M-1}$). Hence the number of ways to fill up an empty matrix to make it curious is $(p-1)^{2N-1}$, since the answer is the number of values each constant can take, raised to the number of such constants. To better understand the notion of rank 1 matrices and free constants, watch this video.

Transformation to a graph problem

Since we have already established that the rows are multiples of each other and the columns are multiples of each other too, we have to somehow effectively store the ratios between them to handle the queries. Consider the matrix in this state:

$$\begin{bmatrix} .. & .. & .. & .. & .. \\ .. & M_{a,b} & .. & M_{a,d} & .. \\ .. & .. & .. & .. & .. \\ .. & M_{c,b} & .. & .. & .. \\ .. & .. & .. & .. & .. \end{bmatrix}$$

At this stage, we know the $c$-th row is a multiple of the $a$-th row by a factor of $M_{c,b}/M_{a,b}$, and also that the $d$-th column is a multiple of the $b$-th column by a factor of $M_{a,d}/M_{a,b}$. We attempt to maintain these ratios between rows and columns together in a graph such that:

- The graph contains $2N$ nodes, each node representative of a row or a column: the $x$-th row is denoted by node $x$, and the $y$-th column is denoted by node $y+N$.
- For every filled cell $M_{x,y}$ containing value $v$, consider a directed edge from node $x$ to node $y+N$ with weight $v$ and another directed edge from node $y+N$ to node $x$ with weight $1/v$ (the inverse of $v$ w.r.t. prime $p$).

On careful observation of the construction of the graph, we can infer that the ratio between the $a$-th row and the $c$-th row can be obtained as the product of the edge weights on a path from node $c$ to node $a$. Notice that there can be many directed paths from $c$ to $a$, but the product along each of those paths must be the same. By construction, the reverse path has a product that is the inverse of the product of the edge weights on the original path. Hence the product of edge weights along each directed cycle in the graph must be $1$.

Another crucial observation is that the count of free constants in the matrix being constructed is equal to the number of components in the graph minus 1. Let's now look at how to handle the queries.

Now, on an addition query $x$ $y$ $v$:

- If there was already a path between $x$ and $y+N$, it needs to hold that the product of all the edges on the path from node $x$ to node $y+N$ is $v$.
- If there wasn't a path earlier, we add an edge as described above. In terms of the rank 1 matrix, the count of free constants decreases by 1 since the number of components decreases, and hence after this query the number of ways to obtain a curious matrix decreases.
If the query is valid, i.e., there exists a way to fill up the remaining cells of the matrix, the answer is calculated as $(p-1)^f$, with $f$ being the current count of free constants. The count of free constants is the current number of components in the graph minus 1. Report the answer modulo $10^9+7$.

Efficiently handling addition queries through Disjoint Set Union

Both of the above checks can be handled faster by maintaining a disjoint set union structure. Note that although DSUs are usually associated with undirected edges, we too can work with undirected edges only, since between any two nodes the weight of one directed edge is just the inverse of the directed edge in the opposite direction; we simply maintain, for each node, the edge directed from the node to its parent in the DSU structure.

For an addition query $x$ $y$ $v$:

- To check if there is a path between $x$ and $y+N$, we just need to check if the two nodes are in the same subset of the DSU or not.
- If the nodes are in the same component, find $p_x$, the product of all edges from $x$ leading up to $r_x$, the root of the subset containing the node $x$, and $p_y$, the product of all edges from $y+N$ leading up to $r_y$, the root of the subset containing the node $y+N$ (all products are taken modulo prime $p$). Note that we have actually simulated the reverse path from $y+N$ to $r_y$. Since, to obey the rank 1 rule, the product along the path from $x$ to $y+N$ is required to be $v$, it must hold that $v \equiv \frac{p_x}{p_y} \pmod p$.
- If the nodes are not in the same component, we need to connect $r_x$ and $r_y$, the roots of the subsets containing $x$ and $y+N$. We use smaller-to-bigger merging and accordingly take care of the edge weight we put between them. If $r_y$ is made a child of $r_x$, the edge connecting $r_y$ to $r_x$ (we only maintain the edge in the upward direction) is assigned the weight $w = \frac{p_x}{v \cdot p_y} \bmod p$, while if $r_x$ is made a child of $r_y$, it is assigned the weight $w = \frac{v \cdot p_y}{p_x} \bmod p$. The number of components decreases, and so does the count of free constants.

But note that deletion queries cannot be handled efficiently by a DSU alone. Hence so far we have solved subtasks 1 and 2. For subtask 1, it is not needed to handle deletions efficiently.

Efficiently handling deletion queries through Dynamic Connectivity

Finally, handling delete queries is a simple albeit programmatically tedious modification; what follows is the standard offline technique usually described as "deletion in O(log N)". We need to add delete operations to the DSU, for which we use a segment tree together with a regular DSU as described above that does and undoes operations on a stack (by storing all changes made by each operation and undoing them later). We loop over the query list, pair up the set and unset queries, and add each edge to the segment tree over the time range in which it is present. Then we perform a DFS on the segment tree, passing the DSU around; this allows all operations to be applied and undone in stack order. Since path compression is not compatible with rollbacks, we rely on merging smaller into bigger in the DSU as well.

For subtask 3, it is actually not needed to maintain edge weights; just maintaining the edges is fine. Complexity-wise it is the same as subtask 4; only the careful handling of edge weights is not needed in that subtask. For subtask 4, we use everything discussed so far to answer the queries efficiently in $O(Q \log^2 Q)$.
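To make the weighted (ratio-carrying) DSU for addition queries concrete, here is a minimal Python sketch. It is a re-derivation for illustration only, not the setter's code: it omits rollbacks (and hence cannot handle deletions), and the class name and all identifiers are ours.

```python
class RatioDSU:
    """Weighted DSU: factor[x] is the edge weight from x to parent[x], so the
    product of factors from x up to its root gives the p_x of the editorial."""

    def __init__(self, n, p):
        self.p = p                       # the prime from the problem statement
        self.parent = list(range(n))
        self.size = [1] * n
        self.factor = [1] * n            # weight on the edge to the parent
        self.components = n

    def find(self, x):
        """Return (root of x, product of edge weights from x to the root).
        No path compression, so the structure stays rollback-friendly."""
        f = 1
        while self.parent[x] != x:
            f = f * self.factor[x] % self.p
            x = self.parent[x]
        return x, f

    def union(self, x, y, v):
        """Impose 'product along the path x -> y equals v (mod p)'.
        Returns False exactly when this closes an inconsistent cycle."""
        rx, fx = self.find(x)
        ry, fy = self.find(y)
        if rx == ry:
            return fx == v * fy % self.p
        if self.size[rx] < self.size[ry]:        # small-to-large merging
            rx, ry, fx, fy = ry, rx, fy, fx
            v = pow(v, self.p - 2, self.p)       # constraint flips direction
        # attach ry under rx; pick the weight w that satisfies fx = v * fy * w
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]
        self.factor[ry] = fx * pow(v * fy % self.p, self.p - 2, self.p) % self.p
        self.components -= 1
        return True


# Usage: p = 5, N = 2; nodes 0..1 stand for the rows and 2..3 for the columns
dsu = RatioDSU(4, p=5)
ok = dsu.union(0, 2, 3)       # fill M[1][1] = 3
ok &= dsu.union(0, 3, 1)      # fill M[1][2] = 1
ok &= dsu.union(1, 2, 2)      # fill M[2][1] = 2
# the last cell is now forced: 3 * M[2][2] = 1 * 2 (mod 5), i.e. M[2][2] = 4
print(ok, pow(5 - 1, dsu.components - 1, 10**9 + 7))   # True 1
```

Each union either detects an inconsistent cycle (so the answer becomes 0) or merges two components; the number of ways is then $(p-1)^{\text{components}-1}$ modulo $10^9+7$.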
If you are new to dynamic connectivity, it’s recommended to try the problem Extending Set of Points and look at the attached editorial. Also give a try at Dynamic connectivity contest. # SOLUTIONS: Setter's Solution #include <bits/stdc++.h> using namespace std; typedef long long ll; typedef vector<long long> vll; typedef pair<long long, long long> pll; long long MOD1; const long long MOD2 = 1000000007; ll mod_power(ll a, ll b, ll MOD) { ll cumulative = a, result = 1; for (; b > 0; b /= 2) { if (b % 2 == 1) result = (result * cumulative) % MOD; cumulative = (cumulative * cumulative) % MOD; } return result; } class DynamicConnectivity { void __dfs(int v, int l, int r, vll& res) { int state = save_ptr; for (auto x : tree[v]) merge(x.u, x.v, x.ratio); if (l == r - 1) else { int m = (l + r) / 2; __dfs(v * 2 + 1, l, m, res); __dfs(v * 2 + 2, m, r, res); } while (save_ptr != state) rollback(); }; public: int size_nodes; int size_query; vector<ll> parent, comp_size; vector<ll*> saved_object; vector<ll> saved_value; int save_ptr = 0; vector<ll> factor; ll comp_count; struct Query { int u, v; ll ratio; Query(pair<int, int> p, ll r) { u = p.first, v = p.second; ratio = r; } }; vector<vector<Query>> tree; DynamicConnectivity(int n = 600000, int q = 300000) { size_nodes = n; size_query = q; parent = vector<ll>(n); comp_size = vector<ll>(n, 1); ll tree_size = 1; while (tree_size < q) tree_size <<= 1; tree = vector<vector<Query>>(2 * tree_size); iota(parent.begin(), parent.end(), 0); saved_object = vector<ll*>(max(3 * n, 1000000)); saved_value = vector<ll>(max(3 * n, 1000000)); factor = vector<ll>(n, 1); comp_count = n; } void change(ll& object, ll value) { saved_object[save_ptr] = &object; saved_value[save_ptr] = object; object = value; save_ptr++; } void rollback() { save_ptr--; (*saved_object[save_ptr]) = saved_value[save_ptr]; } int find(int x) { if (parent[x] == x) return x; return find(parent[x]); } ll find_factor(int x) { if (parent[x] == x) return 1; return 1ll*factor[x]*find_factor(parent[x])%MOD1; } void merge(int x, int y, ll ratio) { ll factor_x = find_factor(x); ll factor_y = find_factor(y); x = find(x); y = find(y); if (x == y) { if (!(factor_x == (ratio * factor_y) % MOD1)) change(comp_count, 0); return; } ll tmp_var = 1ll*ratio*factor_y%MOD1; if (comp_size[x] > comp_size[y]) { change(parent[y], x); change(comp_size[x], comp_size[x] + comp_size[y]); change(factor[y], (factor_x * mod_power(tmp_var, MOD1 - 2, MOD1)) % MOD1); change(comp_count, comp_count - 1); } else { change(parent[x], y); change(comp_size[y], comp_size[x] + comp_size[y]); change(factor[x], (tmp_var * mod_power(factor_x, MOD1 - 2, MOD1)) % MOD1); change(comp_count, comp_count - 1); } } void add(int l, int r, Query edge, int node = 0, int x = 0, int y = -1) { if (y == -1) y = size_query; if (l >= r) return; if (l == x && r == y) tree[node].emplace_back(edge); else { int m = (x + y) / 2; add(l, min(r, m), edge, node * 2 + 1, x, m); add(max(m, l), r, edge, node * 2 + 2, m, y); } } vll results(int v = 0, int l = 0, int r = -1) { if (r == -1) r = size_query; vll vec(size_query); __dfs(v, l, r, vec); return vec; } }; int main() { ios::sync_with_stdio(false); cin.tie(0); cout.tie(0); ll n, q; cin >> n >> q >> MOD1; map<pll, ll> last; DynamicConnectivity dsu(n + n, q); vector<DynamicConnectivity::Query> queries; queries.reserve(q); for (int i = 0; i < q; i++) { ll x, y, val; cin >> x >> y >> val; x--, y--; pll p(x, y + n); queries.emplace_back(p, val); if (last.count(p)) { last.erase(p); } else { last[p] = i; } } for (auto x : 
last) vll res = dsu.results(); for (int i = 0; i < q; i++) cout << ((res[i] <= 0) ? 0 : mod_power(MOD1-1, res[i] - 1, MOD2)) << "\n"; } Tester's Solution #include <bits/stdc++.h> #define endl '\n' #define SZ(x) ((int)x.size()) #define ALL(V) V.begin(), V.end() #define L_B lower_bound #define U_B upper_bound #define pb push_back using namespace std; template<class T, class T1> int chkmin(T &x, const T1 &y) { return x > y ? x = y, 1 : 0; } template<class T, class T1> int chkmax(T &x, const T1 &y) { return x < y ? x = y, 1 : 0; } const int MAXN = (1 << 18); const int mod = (int)1e9 + 7; int pw(int x, int p, int m) { int r = 1; while(p) { if(p & 1) r = r * 1ll * x % m; x = x * 1ll * x % m; p >>= 1; } return r; } int inv(int x, int m) { return pw(x, m - 2, m); } int n, q, p; struct Fraction { int a, b; Fraction(int u = 0, int d = 1) { a = u; b = d; } Fraction operator*(Fraction oth) { return Fraction(a * 1ll * oth.a % p, b * 1ll * oth.b % p); } int val() { return a * 1ll * ::inv(b, p) % p; } Fraction inv() { return Fraction(b, a); } }; struct expr { int main_pw, rev; Fraction coef; expr() { coef = Fraction(0); main_pw = 0; rev = 0; } expr(Fraction _coef, int _main_pw, int _rev = 0) { coef = _coef; main_pw = _main_pw; rev = _rev; } expr operator*(expr other) { if(!this->rev) return expr(this->coef * other.coef, this->main_pw + other.main_pw, other.rev); else { expr tmp = other.inv(); return expr(this->coef * tmp.coef, this->main_pw + tmp.main_pw, tmp.rev); } } expr inv_f() { if(this->rev) return expr(this->coef, this->main_pw, this->rev); else return expr(this->coef.inv(), -this->main_pw, this->rev); } expr inv() { return expr(this->coef.inv(), -this->main_pw, this->rev ^ 1); } }; struct persistent_dsu { int par[MAXN], sz[MAXN]; expr e[MAXN]; bool failed; int main_val, cnt_comps; void init(int n) { for(int i = 0; i < n; i++) { par[i] = i; sz[i] = 1; e[i] = expr(1, 0); } failed = false; cnt_comps = n; main_val = -1; } pair<int, expr> root(int x) { if(x == par[x]) { return {x, e[x]}; } pair<int, expr> abv = root(par[x]); abv.second = e[x] * abv.second; return abv; } vector<pair<int, int>> snapshots; void unite(int x, int y, int v) { if(failed) return; auto p = root(x); auto q = root(y); if(q.first == 0 || (p.first != 0 && sz[p.first] < sz[q.first])) { swap(p, q); } expr ex = p.second, ey = q.second; x = p.first, y = q.first; if(x == y) { // cycle // ex(x) * ey(y) = main * v int cf = ex.coef.val() * 1ll * ey.coef.val() % ::p; int pw_0 = ex.main_pw + ey.main_pw - 1; if(ex.rev) pw_0--; else pw_0++; if(ey.rev) pw_0--; else pw_0++; cf = cf * 1ll * inv(v, ::p) % ::p; if(pw_0 == 0) { failed = (cf != 1); } else { if(pw_0 > 0) cf = inv(cf, ::p), pw_0 *= -1; assert(pw_0 == -1); if(main_val == -1) { main_val = cf; } else { failed = (cf != main_val); } } } else { // make x the root cnt_comps--; par[y] = x; sz[x] += sz[y]; // ex(x) * ey(y) = main * v // ey(y) = main * v * ex(x).inv // y = apply |ey.inv_f()| (main * v * ex(x).inv) e[y] = ey.inv_f() * (expr(Fraction(v), 1) * ex.inv()); snapshots.push_back({x, y}); } } void rollback(bool was_failed, int snapshots_sz, int main_val_prv) { main_val = main_val_prv; failed = was_failed; while(SZ(snapshots) > snapshots_sz) { int x = snapshots.back().first, y = snapshots.back().second; sz[x] -= sz[y]; par[x] = x; par[y] = y; e[x] = expr(1, 0); e[y] = expr(1, 0); cnt_comps++; snapshots.pop_back(); } } }d; cin >> n >> q >> p; } int ans[MAXN]; vector<pair<int, pair<int, int>>> li[MAXN << 2]; void add(int ql, int qr, pair<int, pair<int, int>> to_add, int l, int r, int idx) 
{ if(ql <= l && r <= qr) { return; } int mid = (l + r) >> 1; if(ql <= mid) add(ql, qr, to_add, l, mid, 2 * idx + 1); if(mid < qr) add(ql, qr, to_add, mid + 1, r, 2 * idx + 2); } void solve(int l, int r, int idx) { bool was_failed = d.failed; int sz = SZ(d.snapshots), main_val = d.main_val; for(auto upd: li[idx]) { int x = upd.second.first, y = upd.second.second, v = upd.first; if(y > 0) y += n - 1; d.unite(x, y, v); } if(l == r) { if(d.failed) ans[l] = 0; else ans[l] = pw(p - 1, d.cnt_comps - (d.main_val != -1), mod); } else if(!d.failed) { int mid = (l + r) >> 1; solve(l, mid, 2 * idx + 1); solve(mid + 1, r, 2 * idx + 2); } d.rollback(was_failed, sz, main_val); } void solve() { d.init(2 * n - 1); // We can notice that if the determinants for all 2x2 matrices are 0, the whole matrix will be curious // Then we can easily convert the constraints to the type a[0][0] * v = a[x][0] * a[0][y] map<pair<int, pair<int, int>>, int> last; for(int i = 0; i < q; i++) { pair<int, pair<int, int>> curr; cin >> curr.second.first >> curr.second.second >> curr.first; curr.second.first--; curr.second.second--; if(!last.count(curr)) { last[curr] = i; } else { add(last[curr], i - 1, curr, 0, q - 1, 0); last.erase(curr); } } for(auto it: last) { add(it.second, q - 1, it.first, 0, q - 1, 0); } solve(0, q - 1, 0); for(int i = 0; i < q; i++) { cout << ans[i] << endl; } } int main() { ios_base::sync_with_stdio(false); cin.tie(nullptr);
https://eprints.utas.edu.au/20911/
# Molecular taxonomy of Paracoccus halodenitrificans, Aeromonas salmonicida and Enterococcus seriolicida

Miller, JM 1996, 'Molecular taxonomy of Paracoccus halodenitrificans, Aeromonas salmonicida and Enterococcus seriolicida', Research Master thesis, University of Tasmania.

## Abstract

The sequence of the 16S rRNA molecule has become accepted as a systematic fingerprint allowing the evolutionary history of an organism and its phylogenetic status at various taxonomic levels to be characterised. Collection of sequence data and their comparison under explicitly defined and widely, though tentatively, accepted algorithms provides much insight into the relationships between organisms. In the study of microbiology, classical taxonomy has been hindered by a paucity of morphological distinction between bacteria. With the acceptance of molecular systematics, many bacteria and groups of bacteria are now being reorganised into a system that reflects both their histories and their relationships, and is consequently more stable than heretofore. This thesis deals with the classification of three bacteria on the basis of their 16S rRNA sequences.

* Paracoccus halodenitrificans. Various chemotaxonomic and molecular data suggest that this species is generically misplaced. 16S rDNA sequence data place the type species, P. denitrificans, in the α-subclass of the Proteobacteria. 16S rDNA sequence analysis undertaken in this work places P. halodenitrificans within the family Halomonadaceae in the γ-subclass of the Proteobacteria.

* Enterococcus seriolicida. A bacterial strain isolated from a Tasmanian salmon farm bears strong resemblance to E. seriolicida (ATCC 49156$^T$ = YT-3). In 1993, workers in Spain suggested that E. seriolicida and Lactococcus garvieae are synonymous by 16S rRNA sequence identity, though no sequence data for the former were published. Analysis of the 16S rRNA sequence of E. seriolicida in this work assigns the species to the genus Lactococcus. The sequence of E. seriolicida differs from the published 16S rRNA sequence of L. garvieae in only seven positions. These differences and their significance are discussed.

* Aeromonas salmonicida. A bacterial strain isolated on a northern Tasmanian fish farm from a skin lesion of the greenback flounder Rhombosolea tapirina was presumed to be an endemic atypical subspecies of the salmonid pathogen Aeromonas salmonicida. Clarification is necessary for an accurate assessment of its significance for the fishing industry. The 16S rRNA sequence of this organism shows 100% identity with that of Aeromonas salmonicida subsp. masoucida and subsp. achromogenes, despite phenotypic differences between the bacteria. The genus Aeromonas has an unusually high degree of sequence similarity among the 16S rRNA genes of its members. This may make taxonomic clarification by this criterion dubious.

The definition of a species on the basis of the sequence of its 16S rRNA molecule has ramifications beyond taxonomy. Synthetic oligonucleotide probes designed to specifically complement unique regions of the 16S rRNA or rDNA can rapidly and accurately identify organisms for many purposes. The salmonid industry in Tasmania has been free of the major diseases responsible for devastating losses in overseas fisheries. However, two pathogenic bacteria have been isolated locally: Enterococcus seriolicida and a presumed endemic subspecies of Aeromonas salmonicida.
Molecular probes directed against the rDNA of these organisms are required that will allow them to be identified rapidly, their occurrence investigated and their epidemiology traced efficiently should the need arise. A simple diagnostic assay, compatible with normal pathological laboratory routine, is necessary. Therefore these probes have been designed for use as primers in a PCR-based assay.

Item Type: Thesis - Research Master
Author: Miller, JM
Keywords: Molecular microbiology, Enterococcus, Aeromonas
Copyright 1995 the author - The University is continuing to endeavour to trace the copyright owner(s) and in the meantime this item has been reproduced here in good faith. We would be pleased to hear from the copyright owner(s).
Includes bibliographical references (leaves 79-106).
Thesis (MSc)--University of Tasmania, 1997
https://www.answers.com/Q/Venus_Orbit_around_the_sun_in_km
Planet Venus, Planetary Science

# Venus Orbit around the sun in km?

The orbit of Venus is, on average, 108,200,000 km from the Sun.

## Related Questions

108,208,930 km (on average - the orbit is an ellipse).

It takes Venus 224.7 days to make one full orbit around the Sun, and it has an average orbital velocity of 35.02 km/s.

Venus' orbit keeps it about 107 million km to about 109 million km from the Sun (67 million to 69 million miles).

| Planet | Days to orbit Sun | Years to orbit Sun | Average distance from Sun (km) |
|---|---|---|---|
| Mercury | 87.97 | 0.24 | 57,909,175 |
| Venus | 224.70 | 0.62 | 108,208,930 |
| Earth | 365.26 | 1.00 | 149,597,890 |
| Mars | 686.97 | 1.88 | 227,936,640 |
| Jupiter | 4331.57 | 11.86 | 778,412,010 |
| Saturn | 10759.22 | 29.46 | 1,426,725,400 |
| Uranus | 30799.10 | 84.32 | 2,870,972,200 |
| Neptune | 60190.00 | 164.79 | 4,498,252,900 |

Around 108,208,930 km, but it changes as Venus orbits the Sun.

It rotates around the Sun in an orbit which has an average distance of 57,600,000 km.

If Venus orbited 96% closer to the Sun, it would orbit at an average distance of 2.69 million miles (4.33 million km).

There are two planets closer to the Sun than the Earth: Venus and Mercury. Earth orbits at around 150,000,000 km from the Sun; Venus is at 105,000,000 km and Mercury is at around 60,000,000 km.

The Earth travels around the Sun in an elliptical orbit. The semi-major axis is 149,598,261 km, and the aphelion is 152,098,232 km. The perihelion is 147,098,290 kilometers.

Venus can get closer to the Earth and it can also get further away (when it is on the opposite side of the Sun in its orbit). At closest approach Venus is about 41 million kilometers away.

At perihelion (the point in its orbit closest to the Sun), Earth is 147 million km from the Sun.

It rotates in an orbit around the Sun at about 57,910,000 km.

Venus orbits the Sun at about 35.02 km/s, or 126,072 km/h (78,337.5 mph).

In its orbit around the Sun, the Earth moves at about 30 km/sec.

Easily. The furthest distance from the Earth to the Moon is about 406,000 km, so the diameter of the orbit is around 812,000 km. The diameter of the Sun is 1,392,000 km. So the orbit of the Moon would fit inside the Sun with room to spare.

The speed of Earth is related to the position of its orbit around the Sun. At a higher speed, Earth would need to be closer to the Sun; at a lower speed, it would need to be farther from the Sun. In its current orbit, Earth moves around the Sun at a speed of about 30 km/second. Earth can't get much closer to the Sun (and therefore move faster) than that; for instance, Venus moves around the Sun at a mean speed of about 35 km/second, and it seems that Venus is too close to the Sun for life.

Mercury's orbit around the Sun is approx. 364 million km long.

Uranus's orbit would be at 71,352,000 miles (114,830,000 km), just past the orbit of Venus.

The perihelion of a planet is the closest distance it comes to the Sun during its orbit. Venus's perihelion is 107,477,000 km.
The distance from Earth to the Sun is 150 million km; therefore, the diameter of the orbit is twice this number.

Mercury is the closest planet to the Sun. It is 58 million km from the Sun and takes only 88 Earth days to orbit the Sun.

The distances between the planets vary, as they all orbit the Sun at different rates and so are in constantly changing positions relative to each other. The closest that Mercury gets to Venus is around 38 million km, using Venus' distance from the Sun minus Mercury's aphelion (furthest point from the Sun).

As with all planets, Venus' orbit is slightly elliptical: at the closest point (perihelion) Venus is 107,476,259 km from the Sun, and at the farthest point (aphelion) it is 108,942,109 km from the Sun. On average, Venus is approximately 67.2 million miles or 108.2 million kilometers away from the Sun. In other units, that is 0.723 AU, 6.01 light minutes, or 0.0000114 light years.

Mean distance: 108,208,930 km
Maximum distance: 108,942,109 km
Minimum distance: 107,476,259 km
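As a quick arithmetic sanity check of the figures quoted above, this small sketch recovers Venus' quoted mean orbital speed from its mean distance and period (round values assumed, orbit treated as circular):

```python
import math

# Assumed round figures, taken from the answers above
mean_dist_km = 108_208_930      # Venus's mean distance from the Sun
period_days = 224.7             # Venus's orbital period

circumference_km = 2 * math.pi * mean_dist_km          # circular-orbit approximation
speed_km_s = circumference_km / (period_days * 86_400)

print(f"orbit length ~ {circumference_km:.3e} km")     # ~6.799e+08 km
print(f"mean speed  ~ {speed_km_s:.2f} km/s")          # ~35.02 km/s, as quoted
```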
https://math.stackexchange.com/questions/385630/does-well-ordering-of-the-proper-class-of-cardinal-numbers-imply-choice
# Does well-ordering of the proper class of cardinal numbers imply choice?

It is well-known (forgive the pun) that the axiom of choice (which states that the product of every non-empty family of non-empty sets is non-empty) implies that the proper class of all cardinal numbers is well-ordered, at least in the presence of the ZF axioms. Does the converse hold? If not, is the statement that the proper class of all cardinal numbers is well-ordered consistent with axioms like determinacy, which, while interesting, contradict full-blown choice?

This is known as the trichotomy principle. If every two cardinals are comparable then the axiom of choice holds. So in fact the axiom of choice is equivalent to the slightly weaker claim that the cardinals are linearly ordered.

To see this is true, let $A$ be a set, and let $\kappa$ be some ordinal such that $\kappa\nleq|A|$. Such an ordinal exists, and in fact we can ensure that $\mathcal{P}(\mathcal{P}(\mathcal{P}(A)))$ is strictly larger than such a $\kappa$. Since $\kappa\nleq|A|$, comparability gives that $A$ can be injected into $\kappa$ and can therefore be well-ordered. So the well-ordering principle holds, and therefore the axiom of choice holds.

This is known as Hartogs' theorem, and the least $\kappa$ that cannot be injected into $A$ is known as the Hartogs number of $A$, often denoted by $\aleph(A)$.

What happens when the axiom of choice fails, then? What is the ordering of the cardinals? We do not know much. We do know how to produce models in which many partial orders can be embedded into the cardinals, but we don't know a whole lot more in $\sf ZF$.

To address the part of your question about determinacy, I will mention that there is a simple counterexample to trichotomy under $\mathsf{AD}$. Namely, $\mathbb{R}$ does not inject into $\omega_1$ and $\omega_1$ does not inject into $\mathbb{R}$. These statements both follow from the fact that under $\mathsf{AD}$ any well-ordered set of reals is countable. For more information about the structure of cardinals under $\mathsf{AD}$ you may want to see the paper A trichotomy theorem in natural models of $\mathsf{AD^+}$. (Note that the trichotomy here is unrelated to that which was disproved in the above paragraph.)

• Under reasonable assumptions, $\sf AD$ implies that every countable poset embeds into the cardinals below $\mathcal P(\Bbb R)$. That's quite the opposite of a linear ordering. :-) – Asaf Karagila May 8 '13 at 21:27
• @Asaf That sounds familiar. But can you point me to a reference? – Trevor Wilson May 8 '13 at 22:16
• I actually don't. I think it may appear in Woodin's big book. I think that ${\sf AD}+V=L(\Bbb R)$ is enough. – Asaf Karagila May 8 '13 at 22:43
• @Trevor: Woodin's paper The cardinals below $|[\omega_1]^{<\omega_1}|$, Annals of Pure and Applied Logic 140 (1-3), (2006), 161-232, has much more under $\mathsf{AD}_{\mathbb R}+\mathsf{DC}$. (Cont.) – Andrés E. Caicedo May 8 '13 at 22:44
• Under $\mathsf{AD}^+$ alone, we should be able to reconstruct the full complexity of the results of Alekos and Greg on rigidity theorems for actions of product groups and countable Borel equivalence relations, using Ben's soft approach, and showing Asaf's claim by just working with cardinals that are quotients of $\mathbb R$ by "nice" equivalence relations (coming from actions of nice countable groups). I do not know if there is an explicit reference to this in print. – Andrés E. Caicedo May 8 '13 at 22:45
https://www.physicsforums.com/threads/time-and-limit-velocity-speed-of-light.954850/
# Time and limit velocity (speed of light)

## Main Question or Discussion Point

Hello. Today I've been thinking about the limit velocity and the speed of light. We know that material particles can't achieve that speed, and also that when the speed of a particle increases, its own clock runs slowly. In the particular case of light, its own clock doesn't advance at all. Is this an explanation of why particles can't achieve the speed of light? I have in mind that maybe, if the velocity of particles could increase beyond the speed of light, this would imply that the time of those particles must be negative, which is opposite to the arrow of time (for example, in the sense of the second law of thermodynamics). Does this make any sense? Thanks. Sincerely.

PeroK (Homework Helper, Gold Member):
Not really. The equations of SR (special relativity) are not well defined for a speed greater than $c$. You can't say anything about time for such hypothetical particles. There are also causal problems if you could, hypothetically, send a message faster than light.

Ibix:
> We know that material particles can't achieve that speed

More precisely, nothing with non-zero mass can travel at the speed of light.

> when the speed of a particle increases, its own clock runs slowly

It's not clear what you mean here. Clocks that are moving relative to you tick slowly as measured by you, yes. But the effect is symmetrical - your clocks tick slowly as measured by observers you say are in motion.

> In the particular case of light, its own clock doesn't advance at all.

It isn't possible to define time for things moving at the speed of light. And, again, the fact that something else is moving at the speed of light has no effect on your clocks.

> Is this an explanation of why particles can't achieve the speed of light?

It's a direct consequence of the postulates of relativity that you cannot accelerate past the speed of light. The speed of light is always the same in all inertial frames of reference. So no matter how fast you go, light always travels faster.

> maybe, if the velocity of particles could increase beyond the speed of light, this would imply that the time of those particles must be negative

This is not correct. Particles travelling faster than light (if any were possible) would be travelling backwards in time in some frames and forwards in others. This would make it possible to have causal paradoxes such as the "tachyonic anti-telephone".
But this isn't why you can't exceed the speed of light. Rather, it's a consequence of the same postulates of relativity that make it impossible to exceed the speed of light.

PeroK (Homework Helper, Gold Member):
> This would make it possible to have causal paradoxes such as the "tachyonic anti-telephone".

Or, more prosaically, tachyons travel from A to B in one frame, and from B to A in another. Forwards in time in both cases.

Ibix:
> Or, more prosaically, tachyons travel from A to B in one frame, and from B to A in another. Forwards in time in both cases.

Or, perhaps better, it's not clear what "cause" and "effect" would mean if tachyons existed. At least, not without a re-write of relativity.

Ok, thanks. I don't have enough knowledge to understand these questions fully, but I like to learn. Yes, I know that in relativity one must be very precise in what one writes; for example, everyone finds that the clocks of other (moving) people run slowly, and also everybody is at rest with respect to himself. I know that relativity is based on the Michelson-Morley experiment and the limit-velocity experiments, and that the postulates of the theory can explain those experiments and predict new ones. But this is the question: it seems that the universe conspires so that nothing reaches the speed of light. Why? Why is this speed so special? My (little boy's) reasoning was:

- $v < c$ --> delta time positive (arrow of time)
- $v = c$ --> delta time zero
- $v > c$ --> delta time negative (opposite to the arrow of time)

We know that in our universe $\Delta S$ is always positive, and this fixes the arrow of time. OK, no more speculation - this is not the place, I'm sorry. Can you suggest some references so I can try to learn more?

Mister T (Gold Member):
> But this is the question: it seems that the universe conspires so that nothing reaches the speed of light. Why? Why is this speed so special?

If there's a speed that's the same for all observers regardless of their speeds relative to each other, then that speed has to be the fastest speed possible. That's the only thing that makes that speed special.

Ok, thanks for your answers. The above link covers what I was looking for. I showed you my "little boy reasoning" about the flow of time, the velocity of objects and entropy. Now I need to study; the articles in the above link can be a starting point for that. If I have more questions I'll ask you. Thanks a lot.

I think you could also think of it like this: no matter what, there is a universally agreed-upon speed that no one can reach; either this speed is infinite or it is finite. If it's finite, you will inevitably work out the math of special relativity. It turns out that experiment has repeatedly shown that the speed of light is the universal speed limit and that it is finite. Furthermore, when you work out that math, you find that for an object to reach the speed of light requires (mathematically speaking) division by zero, and going faster than that results in imaginary numbers for time, distance, momentum, etc. To me that's a good indicator that something reaching or exceeding light speed is nonsense.

For example, here is one of the most important mathematical expressions in special relativity, a function of velocity called the Lorentz factor:

$\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$

If $v = c$, you have $\frac{1}{\sqrt{1 - \frac{c^2}{c^2}}} = \frac{1}{\sqrt{1 - 1}} = \frac{1}{\sqrt{0}} = \frac{1}{0}$, which is obviously nonsense.

If $v > c$, say $v = 2c$, you have $\frac{1}{\sqrt{1 - \frac{(2c)^2}{c^2}}} = \frac{1}{\sqrt{1 - 4\frac{c^2}{c^2}}} = \frac{1}{\sqrt{1 - 4}} = \frac{1}{\sqrt{-3}}$, and now you have the square root of negative 3.
If this were applied to relativistic momentum, you'd have a momentum of $p = \frac{mv}{\sqrt{-3}} = \frac{mv}{i\sqrt{3}} = \frac{-imv}{\sqrt{3}}$, so you have a negative imaginary momentum. (Note: to go from $i^{-1}$ to $-i$, first change $i^{-1}$ to $1/i$, then multiply the numerator and the denominator by the conjugate, which, since there is no real part, is just $i/i$.)

Basically, if you move as fast as or faster than the speed of light you end up with nonsense. That's probably not the best way to look at it, but if you have made it past algebra, it's a great crutch.
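Numerically, the blow-up is easy to see; here is a minimal Python sketch of the Lorentz factor (illustrative only):

```python
import math

def lorentz_factor(v, c=299_792_458.0):
    """gamma = 1 / sqrt(1 - v^2/c^2); only defined for |v| < c."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

c = 299_792_458.0
for frac in (0.5, 0.9, 0.99, 0.999, 0.9999):
    print(f"v = {frac}c -> gamma = {lorentz_factor(frac * c):.3f}")

# v = c raises ZeroDivisionError, and v > c raises ValueError (sqrt of a
# negative number) - the division by zero and imaginary numbers noted above.
```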
You're putting too much emphasis on mass. Just let m be mass and look at the momentum equation. As for the asymmetry between eq. 7 and eq. 8, it's based on experience: space and time are not exactly symmetrical. Space is isotropic, as mentioned just before those equations. But is time? Perhaps in "micro" physics, but clearly cracked eggs do not uncrack themselves. In a closed system, entropy either increases or remains constant. We can't turn around in time like we can in space. Any derivation of transformation equations that represents our universe will have to take that into account. As for K being less than zero, that brings up inconsistencies according to your paper, and in the recent discussion where we discussed a similar derivation, K < 0 brings indeterminate forms. So it makes no real sense.
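For readers following the discussion of K: derivations of this kind (including, as far as I can tell, the arXiv note linked above; the notation below is my paraphrase rather than the paper's own) show that the relativity principle plus homogeneity and isotropy fix the transformations up to a single universal constant $K$:

$$x' = \gamma_K\,(x - vt), \qquad t' = \gamma_K\,(t - Kvx), \qquad \gamma_K = \frac{1}{\sqrt{1 - Kv^2}},$$

with the composition of velocities $w = \dfrac{u + v}{1 + Kuv}$. For $K > 0$ these are the Lorentz transformations with invariant speed $c = 1/\sqrt{K}$; $K = 0$ gives the Galilean transformations (an infinite invariant speed); and $K < 0$ would let the composition of velocities reverse the time order of events, which is the kind of inconsistency the thread alludes to. Only experiment can then fix the value of $K$ to $1/c^2$.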
Yes, that's true; perhaps it's because my main book on SR was the A.P. French book. I think it is a very good book, but it puts a very big emphasis on relativistic mass. Of course mass is an invariant; how could I be so stupid!! I'm not a professional in relativity and can only study it in my free time... and relativity is very complex, so if I can't work on it for a while I forget concepts and ideas. It's really just a name. "Relativistic mass" was chosen before physicists really had time to interpret the theory in a simpler, more concise way. Minkowski helped with that, and people realized notions like relativistic mass just overcomplicated things. Nugatory Mentor And we say that mass is given by the Higgs boson. Can the Higgs boson (interaction with the Higgs field) increase the mass of a particle to infinity, as special relativity requires? Does that mean that at higher energies the Higgs field doesn't need to work harder? It's not true that "mass is given by the Higgs boson". There is a connection between mass and the Higgs field, but it's not what you'd think it is from what's being written in the popular press. One B-level explanation that is not hopelessly wrong (but bears about the same relationship to the real explanation as a child's book does to a college-level textbook) is https://futurism.com/one-huge-misconception-about-the-higgs-boson-video/ But for purposes of understanding relativity, it's safe to just completely ignore all the Higgs stuff - it's pretty much totally unrelated.
2020-03-30 20:32:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6789872646331787, "perplexity": 534.6393572016942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497301.29/warc/CC-MAIN-20200330181842-20200330211842-00372.warc.gz"}
https://socratic.org/questions/how-do-you-factor-the-monomial-121abc-3-completely
# How do you factor the monomial 121abc^3 completely?

May 19, 2017

$121 a b {c}^{3} = 11 \times 11 \times a \times b \times c \times c \times c$

#### Explanation:

We know that $121 a b {c}^{3}$ is nothing but $121 \times a \times b \times {c}^{3}$, i.e. $121 \times a \times b \times c \times c \times c$. Hence, to factor the given monomial, one just needs to factorize $121$, and as $121 = 11 \times 11$, we get $121 a b {c}^{3} = 11 \times 11 \times a \times b \times c \times c \times c$.
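As a quick cross-check, the same complete factorization can be reproduced in Python with sympy (a sketch added for illustration; `a`, `b`, `c` are symbolic variables I define here):

```python
from sympy import symbols, factorint

a, b, c = symbols("a b c")
expr = 121 * a * b * c**3

# Prime-factorize the numeric coefficient: {11: 2} means 11**2.
print(factorint(121))             # {11: 2}

# List the monomial's irreducible factors.
print(expr.as_ordered_factors())  # [121, a, b, c**3]
```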
2020-02-25 05:42:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9628633260726929, "perplexity": 1830.0753601863357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146033.50/warc/CC-MAIN-20200225045438-20200225075438-00434.warc.gz"}
https://rupress.org/jcb/article/182/2/367/34995/Membrane-heterogeneities-in-the-formation-of-B
Antigen binding to the B cell receptors (BCRs) induces BCR clustering, phosphorylation of BCRs by the Src family kinase Lyn, initiation of signaling, and formation of an immune synapse. We investigated B cells as they first encountered antigen on a membrane using live cell high resolution total internal reflection fluorescence microscopy in conjunction with fluorescence resonance energy transfer. Newly formed BCR microclusters perturb the local membrane microenvironment, leading to association with a lipid raft probe. This early event is BCR intrinsic and independent of BCR signaling. Association of BCR microclusters with membrane-tethered Lyn depends on Lyn activity and persists as microclusters accumulate and form an immune synapse. Membrane perturbation and BCR–Lyn association correlate both temporally and spatially with the transition of microclustered BCRs from a "closed" to an "open" active signaling conformation. Visualization and analysis of the earliest events in BCR signaling highlight the importance of the membrane microenvironment for formation of BCR–Lyn complexes and the B cell immune synapse.

## Introduction

Binding of antigens to the B cell receptors (BCRs) leads to clustering of the BCRs and triggering of a signaling cascade resulting in the activation of a variety of genes associated with B cell activation (Cambier et al., 1994; Reth and Wienands, 1997; Dal Porto et al., 2004; Hou et al., 2006). We now understand the biochemical nature of the BCR's signaling pathway beginning with phosphorylation of the BCR by the first kinase in the signaling cascade, the membrane-associated Lyn, in considerable detail. However, what remains only poorly understood are the very earliest events that follow antigen-induced clustering of the BCRs that lead to association of the BCR with Lyn and triggering of the signaling cascade. Of particular interest are the potential roles of plasma membrane lipid heterogeneities and the local lipid microenvironment of the BCR in the initiation of signaling. Indeed, Lyn is acylated by both myristoylation and palmitoylation, which both dictate Lyn's membrane localization and are essential for Lyn's function (Kovarova et al., 2001). The results of previous biochemical studies using detergent solubility to identify membrane microenvironments suggested that lipid heterogeneities may play an important role in the initiation of B cell signaling by regulating access of the BCR to Lyn (Cheng et al., 1999; Aman and Ravichandran, 2000; Guo et al., 2000). These studies provided evidence that detergent-insoluble, sphingolipid-rich, and cholesterol-rich membrane microdomains termed lipid rafts concentrate the membrane-tethered dually acylated Lyn kinase and, in so doing, potentially provide a platform for BCR signaling. Subsequently, using fluorescence resonance energy transfer (FRET) confocal microscopy in live B cells, we showed that within seconds of the B cell's encounter with soluble antigens, the BCR transiently associated with a lipid raft probe, a myristoylated and palmitoylated fluorescent protein present in the detergent-insoluble lipid raft fraction of the plasma membrane (Sohn et al., 2006). This interaction was selective and was not observed with fluorescent proteins that were tethered to the detergent-soluble regions of the membrane by geranylgeranylation or myristoylation and preceded by several seconds the induction of a Ca2+ flux.
These results are consistent with recent revised models of the original raft hypothesis (Simons and Ikonen, 1997; Edidin, 2003) that take into account the dominant role for plasma membrane proteins in capturing and stabilizing intrinsically unstable lipid domains (Hancock, 2006). The finding that the antigen-clustered BCR associated with the lipid raft probe predicted that the association with lipid rafts would lead to interaction of the BCR with Lyn kinase itself. However, this prediction was not tested directly. In addition, these results were acquired by investigating the response of B cells to soluble antigens, and several recent studies provided evidence that the relevant mode of antigen recognition by B cells in vivo may be on the surfaces of antigen-presenting cells (APCs). Indeed, results from a study using intravital two-photon imaging suggest that B cells contact antigen not in solution but rather on the surfaces of APCs in lymphoid organs (Qi et al., 2006). Studies in vitro showed that B cells encountering antigen on the surface of an APC or on a planar lipid bilayer, approximating an APC surface, form an immune synapse (Batista et al., 2001; Carrasco and Batista, 2006; Fleire et al., 2006), a structure associated with B cell activation. In addition, results of a recent study indicate that the requirements for B cell responses to membrane-bound antigens are significantly different from those for responses to soluble antigens (Depoil et al., 2008). Indeed, unlike BCR signaling in response to soluble antigens that is initiated independently of the B cell coreceptor, CD19, response to membrane antigen was defective in the absence of CD19. To capture the earliest events in the interaction of BCR with the lipid rafts and the membrane-tethered Lyn kinase after contact with antigen in a planar membrane, we took advantage of FRET in conjunction with total internal reflection fluorescence microscopy (TIRFM). TIRFM provided high resolution images that allowed us to observe the formation of individual BCR microclusters and their interaction with a raft lipid probe and with Lyn and relate these to BCR activation and formation of the immune synapse. In this study, we provide evidence that the individual BCR microclusters that first formed after antigen contact perturbed the local membrane microenvironment, leading to association of the clustered BCRs with a raft lipid probe. BCR microclusters interacted with Lyn, an interaction that persisted as the BCR microclusters accumulated and formed an immune synapse. The association of the lipid raft probe and Lyn with BCR clusters correlated in both time and space with the transition of BCR microcluster cytoplasmic domains to an "open" active conformation. These results provide a new view of the dynamic process of antigen-induced BCR microclustering and the effect of microclustering on the local lipid microenvironment and recruitment of Lyn.

## Results

### Molecular interactions of antigen-induced BCR microclusters with a lipid raft probe

To analyze the BCR's interaction with lipid rafts, B cell lines were generated that were specific for the antigen phosphorylcholine (PC) and stably expressed a chimeric BCR Igα chain that contained the FRET acceptor YFP in its cytoplasmic domain (Igα-YFP) and a FRET donor CFP that contained the first 16 amino acids of the Src family kinase Lyn (Lyn16-CFP), resulting in its myristoylation and palmitoylation and association with the detergent-insoluble fractions of the plasma membrane (Sohn et al., 2006).
Previous studies showed that cell lines expressing Igα-YFP and Lyn16-CFP were functional and responded to antigens and established the validity of using FRET measurements to detect molecular interactions between antigen-clustered BCRs and lipid raft probes (Sohn et al., 2006). B cells were placed on a planar lipid bilayer that contained biotinylated forms of either the antigen, PC10-BSA, and the adhesion molecule, intercellular adhesion molecule-1 (ICAM-1), or ICAM-1 alone and were tethered to the lipid bilayer by streptavidin and biotinylated 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine. Three-channel TIRF microscope images were acquired (CFP, FRET, and YFP) at 2- or 4-s intervals for 15 min. Influences of the relative concentration of YFP and CFP on the FRET measurements were addressed by calibrating the bleed-through of the donor fluorescence in the acceptor detection channel and the amount of directly excited acceptor fluorescence as previously described (van Rheenen et al., 2004; Zal and Gascoigne, 2004) by measuring the donor and acceptor emissions of cells that contained only the acceptor or donor fluorescent proteins in the same field as the experimental cells. FRET was calculated by sensitized acceptor emission and expressed as either corrected FRET values (Fc) or as FRET efficiency normalized for the acceptor (Ea) as detailed in the Materials and methods section. B cells placed on a bilayer that contained ICAM-1 alone failed to spread and contacted the bilayer over a relatively small area of ∼20 μm² (unpublished data). In contrast, on ICAM-1– and antigen-containing bilayers, the B cells spread, forming contact areas of ∼80 μm². Time-lapse imaging showed that the first contact points of the B cell membrane with the ICAM-1–only bilayer occurred in discrete membrane protrusions that contained both Igα-YFP and Lyn16-CFP (Fig. 1 A). The area of contact grew with time, and although the relative fluorescence intensities (FIs) of Igα-YFP and Lyn16-CFP were similar over the contact area (Fig. 1 B) and overlapped extensively at each time point (Fig. 1 A), at no point in time was there significant FRET between Igα-YFP and Lyn16-CFP. Thus, molecular interactions of the BCR with raft lipids did not occur in the absence of antigen to engage the BCR. B cells also first contacted the ICAM-1– and antigen-containing lipid bilayers through multiple, small points (Fig. 1 A). Both Lyn16-CFP and Igα-YFP were visible in these early contact points, and FRET was detected between Igα-YFP and Lyn16-CFP, indicating that the earliest antigen-induced BCR microclusters associated with the lipid raft probe. With time, the B cells spread on the bilayer, concentrating the BCR Igα-YFP and Lyn16-CFP in the center of the contact area and subsequently contracting, forming a mature synapse. As BCR microclusters were concentrated in the synapse, new BCR clusters continued to form in the periphery of the B cells' contact area and then moved to the center synapse. Notably, the highest FRET values detected were not in the center of the synapse where Igα-YFP and Lyn16-CFP were most concentrated, but rather FRET was highest in the periphery of the B cell's contact area with the bilayer, where there was relatively little accumulation of Igα-YFP and Lyn16-CFP but where new BCR-antigen clusters continued to form (Fig. 1 A). Thus, the difference in FRET values in the cell's periphery versus in the synapse could not be attributed to the concentration of Igα-YFP relative to Lyn16-CFP.
Similarly, the relative FIs of Igα-YFP and Lyn16-CFP over the contact area were similar with time for cells engaging either ICAM-1 alone or ICAM-1 and antigen, yet FRET was only observed in cells contacting ICAM-1 and antigen (Fig. 1 B). Moreover, the FRET between Igα-YFP and Lyn16-CFP was specific and did not occur between Igα-YFP and Ger-CFP (Fig. 1 C), which contained the 12 C-terminal polybasic residues of K-ras and four residues of rap1B, resulting in CFP's geranylgeranylation and, as previously shown, targeting to nonraft detergent-soluble membranes (Pyenta et al., 2001; Sohn et al., 2006). Collectively, these data provide evidence for a selective molecular interaction between the antigen-clustered BCRs and the lipid raft probe.

### Association of the antigen-clustered BCRs with raft lipids is independent of signaling and association with the actin cytoskeleton

To determine whether association of the clustered BCRs with the raft lipid probe was dependent on either signaling or the function of the actin cytoskeleton, B cells were treated with either PP2 to inhibit Src family kinases (Hanke et al., 1996) or with latrunculin B to disassemble the actin cytoskeleton (Brown and Song, 2001) before exposure to ICAM-1– and antigen-containing bilayers. Our previous study showed that PP2 blocked FRET between Igα-CFP and Lyn16-YFP in cells encountering antigen in solution (Sohn et al., 2006). Here, we found that although PP2 affected the ability of B cells both to spread on the bilayer and to organize the BCR in a synapse, it did not block FRET between Igα-YFP and Lyn16-CFP upon BCR antigen binding (Fig. 2 A). Thus, association of the BCR with raft lipids showed different requirements for Src family kinase activity when the B cell engaged antigen in solution versus on a membrane. The requirement of Src family kinase activity when antigen is bound from solution may reflect a role of these kinases in maintaining some feature of the cell membrane or local membrane topology, requirements that contact between the B cell membrane and the antigen-containing bilayer overcomes. In addition, although PP2 did not block FRET between Igα-YFP and Lyn16-CFP, both the spatial and the kinetic FRET patterns were altered by treatment with PP2 (Fig. 2, A and B), presumably reflecting a requirement for Src family kinase activity in these downstream processes. Similar results were obtained in cells treated with latrunculin B. FRET between Igα-YFP and Lyn16-CFP was observed after antigen-induced BCR clustering, but both the kinetics and spatial pattern of FRET were affected as compared with untreated cells. Collectively, these observations indicate that the association of the BCRs with raft lipids that occurs within the first several seconds of BCR-antigen engagement is independent of signaling and association with the actin cytoskeleton. However, importantly, failure to either signal or associate with the actin cytoskeleton significantly affected both the spatial distribution and kinetics of the BCR–lipid raft probe interactions by mechanisms that remain to be elucidated.
To independently confirm that BCR signaling was not required for the association of BCR clusters with raft lipids, we analyzed two previously described NIP (4-hydroxy-5-iodo-3-nitrophenyl acetyl)-specific J558L B cell lines expressing Lyn16-CFP and either a wild-type BCR or a signaling-incompetent BCR in which the two tyrosines within the cytoplasmic immunoreceptor tyrosine-based activation motif (ITAM) of Igα and Igβ chains were mutated to phenylalanine (Tolar et al., 2005). The B cells expressing the ITAM mutant BCR failed to spread on the antigen- and ICAM-1–containing bilayer (Fig. 2 C) and formed fewer BCR microclusters as compared with the wild-type receptor (Fig. 2 D). However, the BCR Igβ-YFP clusters that formed showed FRET with Lyn16-CFP that was comparable with that of the wild-type BCR clusters (Fig. 2 E). Collectively, these results indicate that the interaction of the clustered BCRs with the lipid raft probe did not require signaling or the actin cytoskeleton but rather appeared to rely on an intrinsic property of the clustered BCR.

### FRET between Igα-YFP and Lyn16-CFP correlates temporally and spatially with BCR activation

Previous studies using FRET confocal microscopy provided evidence that antigen binding from solution resulted in a conformational change in the BCR cytoplasmic domains from a "closed" to an "open" form and simultaneous phosphorylation of the BCR (Tolar et al., 2005). We observed that when clustered by antigen, BCRs containing a membrane Ig–CFP and Igα-YFP showed an initial increase in FRET, reflecting the close molecular proximity of the cytoplasmic domains of the clustered BCRs, and then a drop in FRET, indicating that the cytoplasmic domains within the BCR clusters moved apart or opened. To determine whether the observed FRET between Igα-YFP and Lyn16-CFP correlated either temporally or spatially with the antigen-induced transition in the BCR to an open form, we measured FRET in a J558L cell line expressing an NIP-specific BCR containing membrane Ig–CFP and Igα-YFP as the B cells encountered a bilayer containing NIP and ICAM-1 (Fig. 3). Tracking the initial BCR microclusters individually from their formation to the generation of the synapse showed that within the first few seconds of formation, FRET in the microclusters sharply increased, indicating the induced close molecular proximity of the cytoplasmic domains of the BCRs within the microclusters (Fig. 3 A). Despite the continued accumulation of BCRs in the clusters, the FRET level reached a peak and then dropped, which is consistent with a synchronized opening of the cytoplasmic domains of the BCR clusters as previously described (Tolar et al., 2005). Calculation of FRET ratio images showed that the increase in FRET in the BCR clusters occurred in the initial point of contact and in the cell's periphery (Fig. 3 B). In contrast, the clustered BCRs accumulated in the synapse showed lower FRET, indicating an open conformation. Thus, transition of the BCR from the closed to open form correlated with FRET between Igα-YFP and Lyn16-CFP both spatially, occurring in the initial contact points and in the periphery of the B cell's contact area, and temporally, occurring within the first several seconds of encounter with the antigen bilayer.

### The antigen-induced interactions of the BCR with Lyn

Association of the antigen-induced BCR microclusters with the lipid raft probe has been suggested to play a role in facilitating the association of clustered BCR with the Lyn kinase.
To directly characterize the interaction of the BCR with Lyn, CH27 B cells were analyzed that expressed Igα-YFP and full-length Lyn (LynFL) linked by six amino acids at the C terminus to CFP (LynFL-CFP). Images of B cells engaging a bilayer containing only ICAM-1 showed extensive colocalization of Igα-YFP with LynFL-CFP but no FRET (Fig. 4 A). In contrast, images of B cells engaging an ICAM-1– and antigen-containing bilayer showed significant FRET at the first points of contact where LynFL-CFP colocalized with Igα-YFP (Fig. 4 A). FRET between LynFL-CFP and Igα-YFP persisted as the BCR microclusters formed a central synapse. Quantification of the FIs and FRET over the contact area with time showed that although the relative FIs of Igα-YFP and LynFL-CFP were similar with time in the presence or absence of antigen, FRET was only detected when antigen was present (Fig. 4 B). Thus, the FRET cannot be attributed to a change in the ratios of YFP and CFP. The concentration of FRET between the BCR and LynFL in the center synapse was in contrast to the observation for the lipid raft probe and BCR, in which case FRET was highest in the cell's periphery. A time-lapse video illustrated this difference, showing that FRET between the BCR and raft lipid probe was more restricted to the cells' periphery in contrast to the FRET between Lyn and the BCR that was concentrated in the synapse (Fig. S1 and Videos 1–8). FRET between Igα-YFP and either Lyn16-CFP or LynFL-CFP was quantified with time over the entire contact area of the B cells interacting with ICAM-1– and antigen-containing bilayers and was compared (Fig. 5 A). FRET between Lyn16-CFP and Igα-YFP increased rapidly upon initial B cell contact with the antigen-containing bilayer during the initial contact phase and then decreased as B cells spread and BCR clusters moved to the center of the synapse. FRET between LynFL-CFP and Igα-YFP showed an initial small peak that corresponded in time to the FRET between Igα-YFP and Lyn16-CFP during the initial contact phase (Fig. 5 A). The initial FRET peak between Igα-YFP and LynFL-CFP was followed by a peak in FRET that was significantly greater in magnitude and persisted for longer as the BCR clusters moved to the center of the synapse. Importantly, in cells treated with PP2 to block Lyn's activity, FRET between Igα-YFP and LynFL-CFP was limited to only the first peak in the initial contact phase (Fig. 5 B). Thus, Lyn appears to interact with the BCR clusters first in a PP2-resistant fashion, presumably mediated by lipid–protein interactions between Lyn and the BCR, and then in a PP2-sensitive phase, presumably by protein–protein interactions between the BCR and a kinase-active Lyn. Collectively, these observations indicate that over the contact area, BCR interactions with the lipid raft probe preceded those of the BCR and Lyn and that these lipid–protein interactions are more transient or unstable as compared with the protein–protein interactions between Lyn and the BCR.

### Individual BCR microclusters associate with the lipid raft probe and with Lyn several seconds after forming

To better understand the dynamics between the formation of BCR microclusters and their association with Lyn and the lipid raft probe, we compared the pattern of FRET between either Lyn16-CFP or LynFL-CFP and Igα-YFP in individual BCR microclusters as the clusters first formed in membrane protrusions during the first 44 s of contact of the B cell with the antigen-containing bilayer (Fig. 6 A).
BCR microclusters appeared ∼12–20 s before Lyn16-CFP colocalized with the BCR clusters and FRET between Lyn16-CFP and Igα-YFP was detected (Fig. 6 A). In these microclusters, the FRET appeared to increase, peak, and then decrease. Similarly, BCR clusters were detected and colocalized with LynFL-CFP ∼20–28 s before FRET was detected at 28–44 s (Fig. 6 A). Thus, BCR clustering preceded by several seconds the close molecular association of BCR with the lipid raft probe and with Lyn. In addition, these results showed no evidence that the lipid raft probes and Lyn are in preformed structures, but rather both appeared to coalesce around the BCR microclusters.

### Association of the BCR with Lyn facilitates directional movement to the synapse

It was of interest to determine the repercussions of the association of the BCR microclusters with the lipid raft probe as compared with Lyn itself in terms of the movement of the microclusters from the cell's periphery to the cell's center to form an immune synapse. To do so, the trajectories of individual BCR microclusters were analyzed as they formed in the periphery and moved to the synapse to determine when microclusters associated with the raft probe versus Lyn (Fig. S2 and Video 9). The trajectories of the BCR clusters formed in the periphery of the contact area showed three patterns: an immediate movement toward the center of the contact area, a delayed movement to the center, and a random nondirectional movement in the periphery. FRET between BCR clusters and the lipid raft probe or Lyn was measured in cells expressing either Igα-YFP and Lyn16-CFP or Igα-YFP and LynFL-CFP. An analysis of the time the BCR clusters moved randomly before moving in a directed trajectory to the center of the contact area showed that the clusters associated with Lyn16-CFP spent considerably more time in a random movement (an average of 25 s) before moving to the center as compared with those associated with LynFL-CFP (an average of 8 s; Fig. 6 B). This observation suggests that association of the BCR with Lyn facilitates directional movement to the synapse.

## Discussion

The use of live cell imaging allowed an investigation into the earliest events in the initiation of BCR signaling that precede the interaction of the BCR with Lyn kinase and the formation of an immunological synapse (Fig. S3). We provide evidence for an ordered series of events that begins with the antigen-induced clustering of the BCRs. The newly clustered BCRs associate with a lipid raft probe in the periphery of the contact area of the B cell with an antigen-containing lipid bilayer. Association of the antigen-clustered BCRs with the lipid raft probe is highest in the cells' periphery and weaker as the BCR clusters move to the center of the contact area to form a synapse. BCR clusters associated with a lipid raft probe appear to move randomly in the peripheral contact area, and it is not until the clustered BCRs stably associate with Lyn that the random motion ceases and the BCR clusters move directionally to the center of the contact area to form a synapse. In interpreting FRET data generated using intermolecular reporters, as described here, it is important to control for potential artifacts that can result from relative differences in the concentrations of the two reporters.
The influence of the concentration on artifacts in the FRET measurements can largely be avoided by calibrating the bleed-through of the donor fluorescence in the acceptor detection channel and the amount of directly excited acceptor fluorescence as described previously (van Rheenen et al., 2004; Zal and Gascoigne, 2004). In the studies reported here, calibration involved the use of control cells in the same field as the experimental cells, each containing only the acceptor or only the donor fluorescent protein, and measurements at the donor and acceptor emission wavelengths. Nevertheless, it is possible that an effect of the changes in concentration of the BCR as it clusters was directly responsible for the changes in measured FRET. We have addressed this issue by providing data demonstrating that although the ratio of FIs of the CFP and YFP did not change significantly over time on cells engaging bilayers containing ICAM alone or ICAM plus the antigen, FRET was only observed in cells engaging antigen. Moreover, the spatial pattern of FRET did not correlate with the highest intensities of YFP and CFP, a correlation that should occur if FRET was simply the result of the concentration of the clustered BCR. We also provided data demonstrating that FRET between Igα-YFP and Lyn16-CFP was specific and did not occur between Igα-YFP and a different lipid probe, namely geranylgeranyl-CFP, even though the FIs were similar in both cases for CFP and YFP. Collectively, these results provide strong evidence that the FRET measurements reported here reflect a genuine, specific molecular interaction between Igα-YFP and both Lyn16-CFP and LynFL-CFP. The view of lipid rafts and their interactions with immune receptors during antigen engagement provided by these studies differs in the molecular details from earlier views as first articulated in the raft hypothesis (Simons and Ikonen, 1997). Lipid rafts were initially viewed as freely diffusing, stable, lateral assemblies of sphingolipids and cholesterol that formed signaling platforms. In the intervening years, results of studies of both model membranes and living cells along with computational modeling led to an updated model of lipid rafts that takes into account a more dominant role for membrane proteins in capturing and stabilizing intrinsically unstable liquid-ordered membrane microdomains (Hancock, 2006). Here, we provide evidence consistent with this updated version of the raft hypothesis. The raft lipids do not appear to form stable microscopic structures in either resting or activated cells. Indeed, we were unable to detect FRET between Lyn16-YFP and Lyn16-CFP in cells that expressed both (unpublished data), suggesting that stable raft microdomains, if they exist, must be small as indicated previously (Sharma et al., 2004). Second, the results provided here indicate that the BCRs first cluster and then condense raft lipids around them. These clustered BCR–raft lipid interactions are dynamic, weak, and transient. The interaction of lipid rafts with the clustered BCRs is not dependent on the initiation of signaling or on the actin cytoskeleton and thus appears to be an intrinsic property of the clustered BCR that prefers the microenvironment of raft lipids. In contrast to the ephemeral interactions of the clustered BCRs with raft lipids that predominate early after BCR clustering, the BCR forms more stable protein–protein interactions with Lyn that dominate later and are dependent on a kinase-active Lyn.
Indeed, association of the clustered BCR with Lyn predicted the directional movement of the BCRs to the synapse. These observations are similar to those of Larson et al. (2005), who showed that LynFL codiffuses with the clustered IgE receptor, but a minimal raft probe did not. Similarly, Douglass and Vale (2005) used TIRFM to observe the diffusion of single molecules on the surface of T cells during T cell activation by antigen and provided evidence for membrane microdomains created by protein–protein networks that exclude or trap signaling molecules on the T cells' membrane. They provided evidence that the full-length Src family kinase, Lck, stably associated with T cell receptor signaling domains, but the minimal lipid probe did not. However, these domains were large and formed relatively late after antigen engagement, and the authors suggested that lipid raft–receptor interactions might precede the formation of these domains and play an organizational role within a receptor cluster. Here, we provide evidence that this may indeed be the case for signaling through the BCR. Collectively, these studies illustrate the importance of both lipid–protein and protein–protein interactions in the initiation of signaling. The results presented here characterizing the interaction of BCR microclusters and a lipid raft probe during contact of the B cell with antigen on a membrane differ from our previous results characterizing the same process in B cells responding to antigen in solution (Sohn et al., 2006). In response to soluble antigen, BCR–lipid raft probe interactions were blocked by PP2, a Src family kinase inhibitor (Sohn et al., 2006). In contrast, we show here that in response to antigen on membranes, PP2 does not block interactions of the BCR with the lipid raft probe, although PP2 influences the dynamics of the interactions. In addition, we show here that a signaling-deficient BCR associates with the lipid raft probe upon clustering in response to membrane-bound antigen. These findings raise the question of how the B cells' encounter with antigens on membrane differs from that with antigen in solution. When interacting with antigens on membranes, the topology of the B cell membrane changes dramatically. The initial contacts are through small membrane protrusions that may concentrate or restrict BCR movement, facilitating clustering. These interactions trigger spreading of the BCR over the antigen-membrane and then contraction. It may be that Src kinase activity is required for B cells in solution to provide a restriction on BCR membrane movements that is replaced or overcome by membrane topology in B cells encountering membrane-bound antigens. Clearly, additional studies will be required to fully delineate the mechanisms at play in these early events. The observation that BCR clusters associate with lipid rafts raises the following question: what function do lipid rafts provide? Using FRET confocal microscopy, we previously showed that the cytoplasmic domains of antigen-clustered BCRs undergo a conformational change from a closed to an open form (Tolar et al., 2005). We show here that the transition of the clustered BCR from the closed to an open form correlates both temporally and spatially with condensing of the raft lipid probe around the clustered BCR. By several criteria, the open form of the BCR was signaling active and phosphorylated by Lyn. 
We speculate that the well-characterized local thickening of raft membranes (McIntosh et al., 2003) or the curvature of the raft membrane (Reynwar et al., 2007) around the clustered BCR may induce the observed conformational change in the clustered BCR cytoplasmic domains. The open BCR cluster would then be phosphorylated by Lyn, which was also condensed around the clustered BCR by virtue of its lipid anchor to the plasma membrane. Thus, lipid rafts may provide two distinct functions: namely, to segregate the BCR and Lyn in the plane of the membrane in resting cells and facilitate their interactions after BCR clustering, and to alter the character of the membrane surrounding the clustered BCR, inducing an alteration in the conformation of the cluster to accommodate the membrane change. Thus, interaction of the BCR with membrane lipids may play a critical role in the initiation of signaling.

## Materials and methods

### Cell lines, antigens, and reagents

The CH27 mouse B cells that stably express the recombinant chimeric protein Igα-YFP (Igα fused to the N terminus of YFP) alone or Igα-YFP and Lyn16-CFP (containing the first 16 amino acids of Lyn, including the myristoylation and palmitoylation sequences, on the N terminus of the monomeric form of CFP) were described previously (Sohn et al., 2006). The J558L B cell line, which stably expresses the NIP-specific B1-8 μ heavy chain, was maintained as previously described (Tolar et al., 2005). PC conjugated with BSA containing 10 PC per BSA (PC10-BSA; Biosearch Technologies) was used as an antigen for the PC-specific CH27 cell line, and NIP16- or NIP14-BSA containing 16 or 14 NIP groups per BSA molecule (Dal Porto et al., 1998) was used as the antigen for the NIP-specific J558L cell line. PP2 and latrunculin B were purchased from EMD and used at 50 μM and 10 μM, respectively, as needed.

### Constructs and transfection

The LynFL-CFP construct was generated by inserting an XhoI–BamHI fragment containing LynFL (provided by B. Baird, Cornell University, Ithaca, NY; Kovarova et al., 2001; Hess et al., 2003) into the pECFP-N1 vector (Clontech Laboratories, Inc.). The monomeric geranylgeranylated CFP (Ger-CFP) construct was generated by PCR primer extension, using a 5′ primer encoding the N terminus of CFP, a 3′ primer encoding the C terminus of CFP with an additional 16 amino acids (DGKKKKKKSKTKCQLL), including the 12 C-terminal polybasic residues of K-ras and four residues of rap1B, resulting in geranylgeranylation of the expressed proteins (Pyenta et al., 2001), and a monomeric CFP-expressing plasmid as a template. The PCR product was inserted into the pECFP-N1 vector, and the sequence of the construct was confirmed. CH27 cells were generated that expressed either both LynFL conjugated to the N terminus of the monomeric version of CFP (LynFL-CFP) and Igα-YFP or both Ger-CFP and Igα-YFP. J558L cells stably transfected with wild-type IgαYY (Tolar et al., 2005) were transiently transfected with wild-type IgβYY-YFP and Lyn16-CFP. J558L cells stably transfected with the mutant IgαYY→FF (Tolar et al., 2005) were transiently transfected with Lyn16-CFP and mutant IgβYY→FF-YFP.

### Planar lipid bilayers

The preparation of planar lipid bilayers is detailed elsewhere (Grakoui et al., 1999; Carrasco et al., 2004). Bilayers were prepared that contained biotin lipids to which biotinylated ICAM-1 and antigens were attached through streptavidin.
In brief, PC10-BSA, NIP16-BSA, and the mouse ICAM-1/huFc chimera protein (R&D Systems) were biotinylated with EZ-link sulfo-NHS-LC-biotin (Thermo Fisher Scientific). An aliquot of each was labeled with sulfo-NHS–functionalized fluorophores (Invitrogen) to allow monitoring of the mobility of the lipid-anchored proteins in the lipid bilayers. Biotin-labeled small unilamellar lipid vesicles were prepared by mixing a 100:1 molar ratio of 1,2-dioleoyl-sn-glycero-3-phosphocholine and 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine-cap-biotin (Avanti Polar Lipids, Inc.). The lipid mixture was sonicated and resuspended in PBS at a lipid concentration of 5 mM. Aggregated lipid vesicles were cleared by ultracentrifugation and filtering. Bilayers were formed in Lab-Tek chambers (Thermo Fisher Scientific) in which the coverglasses were replaced with Nanostrip-washed coverslips. The coverslips were incubated with 0.1 mM biotin-labeled small unilamellar lipid vesicles in PBS for 10 min. After washing with 20 ml PBS, the bilayer was incubated with 2.5 μg/ml streptavidin for 10 min, and excess streptavidin was removed by washing with 20 ml PBS. The bilayers were incubated for 20 min with 0.5 μg/ml of biotinylated mouse ICAM-1, and excess ICAM-1 was removed by washing. The streptavidin- and ICAM-1–containing planar lipid bilayers were incubated with 0.75 μg/ml of biotinylated PC-BSA or NIP-BSA. The unbound excess of antigen was removed by washing with 20 ml PBS. The mobility of ICAM-1 and antigens in the lipid bilayers was confirmed by analyses of the proteins labeled with fluorescent dyes. Alternatively, NIP- and ICAM-1–containing planar lipid bilayers were prepared by fusing small unilamellar lipid vesicles with a clean glass coverslip surface as described previously (Brian and McConnell, 1984) using 1,2-dioleoyl-sn-glycero-3-phosphocholine and 1,2-dioleoyl-sn-glycero-3-[N-(5-amino-1-carboxypentyl iminodiacetic acid) succinyl] in nickel salt (Avanti Polar Lipids, Inc.) at a 10:1 ratio. Small unilamellar vesicles were obtained by sonication and clarified by ultracentrifugation and filtering. Glass coverslips were cleaned in Nanostrip (Cyantek), washed, and dried. Lipid bilayers were prepared from a 0.1-mM lipid solution on the coverslips attached to the bottom of Lab-Tek imaging chambers. After excess lipids were washed away, histidine-tagged antigens and ICAM-1 were bound. Before imaging, chambers were washed with HBSS supplemented with 1% FCS. NIP14-BSA was prepared as described previously (Tolar et al., 2005) and conjugated to a cysteine-containing peptide terminated with a 12-histidine tag (ASTGTASACTSGASSTGSH12) using SMCC (Thermo Fisher Scientific) according to the manufacturer's protocols. Recombinant ICAM-1 tagged with a 12-histidine tag was a gift from J. Huppa (Stanford University, Palo Alto, CA). Conjugation of NIP14-BSA to succinimidyl AlexaFluor647 and ICAM-1-H12 to AlexaFluor488 (both obtained from Invitrogen) was performed according to the manufacturer's protocols.

### TIRFM imaging and image analysis

Through-lens TIRFM (Axelrod, 1981) was performed on an inverted microscope (IX-81; Olympus) equipped with 60× 1.45 NA and 100× 1.45 NA objectives (Olympus). For Figs. 1 C and 5 A, the 100× 1.45 NA objective was used for image acquisition. Cells expressing CFP and/or YFP were added onto the lipid bilayers containing ligands, and all time-lapse imaging was performed at 37°C using a heated chamber.
A 442-nm laser (Melles Griot) was used for CFP excitation, and CFP and FRET images were acquired simultaneously using a dual image splitter (MAGS Biosystems) equipped with a 505 dichroic beamsplitter and HQ485/30 (CFP) and HQ560/50 (FRET) emission filters (Chroma Technology Corp.). The 514-nm line from an argon gas laser was used to excite YFP, and images were acquired through the same dual image splitter with the HQ560/50 filter. CFP-FRET and YFP dual-view images were acquired sequentially at each time point by alternately switching between the two laser lines. Images were captured in 16-bit grayscale with no binning and no averaging as 512 × 512 pixels by an electron-multiplier charge-coupled device camera (Cascade II; Photometrics) under the control of MetaMorph software (MDS Analytical Technologies). For Figs. 6 A (top row) and S2, 1,024 × 1,024–pixel images were instead recorded in 10-bit grayscale with an intensified CCD camera (XR/MEGA-10; Stanford Photonics) under the control of QED software (Media Cybernetics, Inc.). FRET images obtained by TIRFM were analyzed by the sensitized acceptor emission method as described in detail previously (van Rheenen et al., 2004; Tolar et al., 2005; Sohn et al., 2006). The FRET efficiency normalized for the acceptor (Ea), predominantly used here, was described in detail in a previous paper (Tolar et al., 2005). In brief, first, the three CFP (D), FRET (F), and YFP (A) images were obtained by splitting the CFP-FRET and YFP dual-view images into separate CFP, FRET, and YFP images after aligning the two halves using a custom macro function in the Image Pro Plus software package (Media Cybernetics, Inc.). CFP-FRET dual-view images of 1.0-μm blue-green fluorescent polystyrene microspheres (excitation/emission of 430/465; Invitrogen) that fluoresce in both the CFP and FRET channels were used as an alignment reference in each experiment. Second, the CFP, FRET, and YFP images were background subtracted, flattened for background, and smoothed by a Gauss filter method using Image Pro Plus software. FRET efficiency (Ea) was calculated by the following equation: Ea = (F − β × D − γ × A)/(γ × A × KA), as described previously (Tolar et al., 2005). Correction factors for donor (CFP) bleed-through (β) and acceptor (YFP) cross talk (γ) in the FRET channel were obtained from single CFP- or YFP-expressing cells present in the same image fields as the experimental cells to eliminate bias among different fields or times. The β factor was 0.7 ± 0.05, and the γ factor was 0.6 ± 0.1. In our TIRF microscope system, the bleed-through of YFP emission into the FRET and CFP channels during 442-nm excitation (δ factor), measured from YFP single-positive cells, was negligible. The KA constant was obtained as described previously (Tolar et al., 2005) by acquiring TIRF as well as epifluorescence images of Daudi human B cells expressing Lyn16-CFP-YFP fusion proteins (Sohn et al., 2006). The Ebleaching value from the control cells was mainly obtained from the epifluorescence images before and after bleaching YFP because of the ease of bleaching. The Ebleaching and KA values were 0.5 ± 0.5 and 5 ± 0.2, respectively, under our imaging conditions. For the quantification of FRET at the single-cell level with time, the mean FIs from the background-subtracted images of each CFP, FRET, and YFP channel were calculated from a region of interest over background levels using the autotracking mode of Image Pro Plus software.
FRET efficiencies (Ea) were calculated by the aforementioned equation, and data are shown as the mean ± SEM. For counting the number of BCR microclusters per cell in Fig. 2 D, YFP-probed IgM clusters that appeared on the TIRF images were manually counted in each cell after bandpass filtering the images using Matlab software (The MathWorks, Inc.). FRET between BCR subunits in Fig. 3, in which the stoichiometry of donor and acceptor is constant, was calculated only from CFP and FRET images as $E = \frac{R - \beta - \frac{K_D}{nK_A}}{R - \beta + K_D}$, where R is the FRET/CFP fluorescence ratio and n is the CFP:YFP stoichiometry (n = 2). FI of the BCR was calculated as I = CFP/(1 − E). For image figures, all images shown were converted to the eight-bit scale from the original 10- or 16-bit grayscale recorded at nonsaturating levels.

### Single-particle tracking analysis from time-lapse TIRF images

The time delay before movement of single BCR clusters associated with the lipid raft probe or with Lyn (Fig. 6 B and Fig. S2) was analyzed. Time-lapse TIRFM FRET images were acquired as described in the previous section from CH27 cells expressing either Igα-YFP and Lyn16-CFP or Igα-YFP and LynFL-CFP. The images were analyzed by single-particle tracking using Image Pro Plus software. Tracking was performed by autotracking particles determined by centroid analysis above 0.5 μm² that moved within a distance of 1 μm at each time interval, with one image frame skip allowed. Clusters that merged or split were included in the tracking analysis. To eliminate artifacts of autotracking, the analyses were manually confirmed. About 200 particles from three to four cells were analyzed, and t tests were used to determine the significance of the differences between Igα-YFP-Lyn16-CFP clusters and Igα-YFP-LynFL-CFP clusters. The confinement index (Schwickert et al., 2007) was calculated as the ratio of the net distance traveled to the accumulated distance and was used as a criterion for the classification of particle movements: >0.8, directional; 0.4–0.8, delayed directional; <0.4, random. When a cluster transitioned from random to directional movement, the time spent in random movement was calculated and expressed as the time delay. For Video 9, Igα-YFP clusters were tracked using the following custom macros. In brief, the BCR clusters were first bandpass filtered, and clusters above a selected threshold were chosen for single-cluster tracking using a custom-made Matlab script. Single-cluster tracking was performed at the same settings used in the Image Pro Plus software analysis (Figs. 6 B and S2), with the exception that the Gaussian-fitted position of the clusters was determined using a Matlab script. In all cases, the accuracy of the tracking algorithm was checked visually, and, when needed, the algorithm parameters were adjusted to provide optimal tracking. Cluster trajectories >100 s during the contraction phase were used. The track of each cluster was confirmed manually. Several representative BCR clusters showing directional (Video 9, red circles), delayed directional (Video 9, green circles), and random (Video 9, blue circles) movements were color assigned, and the video was made at 10 frames per second using a Matlab script.
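For readers who want to reproduce the two calculations above, here is a minimal Python/NumPy sketch (my own illustration, not the authors' code; the function names are assumptions, and the default correction factors are the β = 0.7, γ = 0.6, and KA = 5 values quoted in the text):

```python
import numpy as np

def fret_efficiency_ea(D, F, A, beta=0.7, gamma=0.6, K_A=5.0):
    """Pixel-wise acceptor-normalized FRET efficiency from background-
    subtracted donor (D), FRET (F), and acceptor (A) images:
    Ea = (F - beta*D - gamma*A) / (gamma * A * K_A).
    Assumes A > 0 in the pixels of interest."""
    return (F - beta * D - gamma * A) / (gamma * A * K_A)

def confinement_index(track):
    """Ratio of the net (start-to-end) displacement to the accumulated
    path length for an (N, 2) array of x, y positions; ~1 means a
    straight (directional) trajectory, ~0 means random wandering."""
    steps = np.diff(track, axis=0)
    path = np.linalg.norm(steps, axis=1).sum()
    net = np.linalg.norm(track[-1] - track[0])
    return net / path if path > 0 else 0.0

def classify_movement(track):
    """Apply the thresholds quoted in the text to one trajectory."""
    ci = confinement_index(track)
    if ci > 0.8:
        return "directional"
    if ci >= 0.4:
        return "delayed directional"
    return "random"
```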
For calculation of the average molar ratio of YFP to CFP over the contact area of the cells with the lipid bilayer, CFP and YFP FIs were obtained at each time point and compared with the CFP and YFP FIs obtained from Daudi B cells expressing the Lyn16-mCFP-YFP fusion protein, whose 1:1 molar ratio served as a reference. In reference cells, CFP intensity was obtained after YFP photobleaching. In our setting, a 1:1 ratio of CFP and YFP FIs corresponds to a 10:1 molar ratio.

### Online supplemental material

Fig. S1 shows, using time-lapse images, that the association of the antigen-clustered BCR with the raft lipid probe differs dramatically from that with the Lyn kinase. Fig. S2 shows the dynamic movement of BCR clusters as they associate with raft lipids and move to the synapse, using single-cluster tracking. Fig. S3 shows a model for the role of raft lipids in B cell activation. Videos 1–8 are the videos from which the eight still images shown in Fig. S1 were taken. Video 9 shows the dynamic movement of BCR clusters to the synapse during the contraction phase after antigen binding on the lipid bilayer.

## Acknowledgments

We thank B. Baird for providing Lyn-GFP constructs and J. Brzostowski for help with TIRFM. This research was supported by the Intramural Research Program of the National Institute of Allergy and Infectious Diseases (National Institutes of Health).

## References

Aman, M.J., and K.S. Ravichandran. 2000. A requirement for lipid rafts in B cell receptor induced Ca2+ flux. Curr. Biol. 10:393–396.
Axelrod, D. 1981. Cell-substrate contacts illuminated by total internal reflection fluorescence. J. Cell Biol. 89:141–145.
Batista, F.D., D. Iber, and M.S. Neuberger. 2001. B cells acquire antigen from target cells after synapse formation. Nature. 411:489–494.
Brian, A.A., and H.M. McConnell. 1984. Allogeneic stimulation of cytotoxic T cells by supported planar membranes. Proc. Natl. Acad. Sci. USA. 81:6159–6163.
Brown, B.K., and W. Song. 2001. The actin cytoskeleton is required for the trafficking of the B cell antigen receptor to the late endosomes. Traffic. 2:414–427.
Cambier, J.C., C.M. Pleiman, and M.R. Clark. 1994. Signal transduction by the B cell antigen receptor and its coreceptors. Annu. Rev. Immunol. 12:457–486.
Carrasco, Y.R., and F.D. Batista. 2006. B-cell activation by membrane-bound antigens is facilitated by the interaction of VLA-4 with VCAM-1. EMBO J. 25:889–899.
Carrasco, Y.R., S.J. Fleire, T. Cameron, M.L. Dustin, and F.D. Batista. 2004. LFA-1/ICAM-1 interaction lowers the threshold of B cell activation by facilitating B cell adhesion and synapse formation. Immunity. 20:589–599.
Cheng, P.C., M.L. Dykstra, R.N. Mitchell, and S.K. Pierce. 1999. A role for lipid rafts in B cell antigen receptor signaling and antigen targeting. J. Exp. Med. 190:1549–1560.
Dal Porto, J.M., A.M. Haberman, M.J. Shlomchik, and G. Kelsoe. 1998. Antigen drives very low affinity B cells to become plasmacytes and enter germinal centers. J. Immunol. 161:5373–5381.
Dal Porto, J.M., S.B. Gauld, K.T. Merrell, D. Mills, A.E. Pugh-Bernard, and J. Cambier. 2004. B cell antigen receptor signaling 101. Mol. Immunol. 41:599–613.
Depoil, D., S. Fleire, B.L. Treanor, M. Weber, N.E. Harwood, K.L. Marchbank, V.L.J. Tybulewicz, and F.D. Batista. 2008. CD19 is essential for B cell activation by promoting B cell receptor-antigen microcluster formation in response to membrane-bound ligand. Nat. Immunol. 9:63–72.
Douglass, A.D., and R.D. Vale. 2005. Single-molecule microscopy reveals plasma membrane microdomains created by protein-protein networks that exclude or trap signaling molecules in T cells. Cell. 121:937–950.
Edidin, M. 2003. The state of lipid rafts: from model membranes to cells. Annu. Rev. Biophys. Biomol. Struct. 32:257–283.
Fleire, S.J., J.P. Goldman, Y.R. Carrasco, M. Weber, F.D. Bray, and F.D. Batista. 2006. B cell ligand discrimination through a spreading and contraction response. Science. 312:738–741.
Grakoui, A., S.K. Bromley, C. Sumen, M.M. Davis, A.S. Shaw, P.M. Allen, and M.L. Dustin. 1999. The immunological synapse: a molecular machine controlling T cell activation. Science. 285:221–227.
Guo, B., R.M. Kato, M. Garcia-Lloret, M.I. Wahl, and D.J. Rawlings. 2000. Engagement of the human pre-B cell receptor generates a lipid raft-dependent calcium signaling complex. Immunity. 13:243–253.
Hancock, J.F. 2006. Lipid rafts: contentious only from simplistic standpoints. Nat. Rev. Mol. Cell Biol. 7:456–462.
Hanke, J.H., J.P. Gardner, R.I. Dow, P.S. Changelian, W.H. Brissette, E.J. Weringer, B.A. Pollok, and P.A. Connolley. 1996. Discovery of a novel, potent and Src family-selective tyrosine kinase inhibitor. J. Biol. Chem. 271:695–701.
Hess, S.T., E.D. Sheets, A. Wagenknecht-Wiesner, and A.A. Heikal. 2003. Quantitative analysis of the fluorescence properties of intrinsically fluorescent proteins in living cells. Biophys. J. 85:2566–2580.
Hou, P., E. Araujo, T. Zhao, M. Zhang, D. Massenburg, M. Veselits, C. Doyle, A.R. Dinner, and M.R. Clark. 2006. B cell antigen receptor signaling and internalization are mutually exclusive events. PLoS Biol. 4:e200.
Kovarova, M., P. Tolar, R. Arudchandran, L. Draberova, J. Rivera, and P. Draber. 2001. Structure-function analysis of lyn kinase association with lipid rafts and initiation of early signaling events after Fcε receptor I aggregation. Mol. Cell. Biol. 21:8318–8328.
Larson, D.R., J.A. Gosse, D.A. Holowka, B.A. Baird, and W.W. Webb. 2005. Temporally resolved interactions between antigen-stimulated IgE receptors and Lyn kinase on living cells. J. Cell Biol. 171:527–536.
McIntosh, T.J., A. Vidal, and S.A. Simon. 2003. Sorting of lipids and transmembrane peptides between detergent-soluble bilayers and detergent-resistant rafts. Biophys. J. 85:1656–1666.
Pyenta, P.S., D. Holowka, and B. Baird. 2001. Cross-correlation analysis of inner-leaflet-anchored green fluorescent protein co-redistributed with IgE receptors and outer leaflet lipid raft components. Biophys. J. 80:2120–2132.
Qi, H., J.G. Egen, A.Y.C. Huang, and R. Germain. 2006. Extrafollicular activation of lymph node B cells by antigen-bearing dendritic cells. Science. 312:1672–1676.
Reth, M., and J. Wienands. 1997. Initiation and processing of the signals from the B cell antigen receptor. Annu. Rev. Immunol. 15:453–479.
Reynwar, B.J., G. Illya, V.A. Harmandaris, M.M. Muller, K. Kremer, and M. Deserno. 2007. Aggregation and vesiculation of membrane proteins by curvature-mediated interactions. Nature. 447:461–464.
Schwickert, T.A., R.L. Lindquist, G. Shakhar, G. Livshits, D. Skokos, M.H. Kosco-Vilbois, M.L. Dustin, and M.C. Nussenzweig. 2007. In vivo imaging of germinal centres reveals a dynamic open structure. Nature. 446:83–87.
Sharma, P., R. Varma, R.C. Sarasij, Ira, K. Gousset, G. Krishnamoorthy, M. Rao, and S. Mayor. 2004. Nanoscale organization of multiple GPI-anchored proteins in living cell membranes. Cell. 116:577–589.
Functional rafts in cell membranes. Nature. 387 : 569 –572. Sohn, H.W., P. Tolar, T. Jin, and S.K. Pierce. 2006 . Fluorescence resonance energy transfer in living cells reveals dynamic membrane changes in the initiation of B cell signaling. Proc. Natl. Acad. Sci. USA. 103 : 8143 –8148. Tolar, P., H.W. Sohn, and S.K. Pierce. 2005 . The initiation of antigen-induced BCR signaling viewed in living cells by FRET. Nat. Immunol. 6 : 1168 –1176. van Rheenen, J., M. Langeslag, and K. Jalink. 2004 . Correcting confocal acquisition to optimize imaging of fluorescence resonance energy transfer by sensitized emission. Biophys. J. 86 : 2517 –2529. Zal, T., and N.R. Gascoigne. 2004 . Photobleaching-corrected FRET efficiency imaging of live cells. Biophys. J. 86 : 3923 –3939. Abbreviations used in this paper: APC, antigen-presenting cell; BCR, B cell receptor; FI, fluorescence intensity; FRET, fluorescence resonance energy transfer; ICAM-1, intercellular adhesion molecule-1; ITAM, immunoreceptor tyrosine-based activation motif; LynFL, full-length Lyn; PC, phosphorylcholine; TIRFM, total internal reflection fluorescence microscopy.
2020-09-20 18:45:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48655033111572266, "perplexity": 9855.344621431708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198287.23/warc/CC-MAIN-20200920161009-20200920191009-00560.warc.gz"}
https://solvedlib.com/1-cosxsinx-tanx2can-u-help-me-to-prove-that,295125
(1 - cos x)/sin x = tan(x/2), can you help me to prove that?

Question: Prove that (1 - cos x)/sin x = tan(x/2). (A worked derivation is sketched after the similar-questions list below.)

Similar Solved Questions:

- Find the general solution of y'' - 5y' - 6y = 0. Please show detailed work.
- Which of the following combinations of quantum numbers are allowed for an electron in a one-electron atom? n = 2, l = 2, m = -1, ms = ...; n = 1, l = 0, m = 0, ms = ...; n = 2, l = 0, m = 0, ms = ...; n = 4, l = 0, m = 0, ms = ...
- (Heavily garbled in the source) A constant horizontal force moves a load at constant speed across a rough surface; find the work done and the force used.
- Problem 10-16A: Using present value techniques to evaluate alternative investment opportunities. Swift Delivery is a small company that transports business packages between New York and Chicago. It operates a fleet of small vans that moves packages to and from a central depot within each city and ...
- 13.43 Treatment of the alcohol shown with sulfuric acid gave a tricyclic hydrocarbon of molecular formula C16H16 as the major organic product. Suggest a reasonable structure for this hydrocarbon. (OH)
- What are six (6) sample questions for a practice/patient survey?
- Rotating coil in a magnetic field: a square loop of wire is rotating around its vertical axis in a magnetic field of strength B = 0.06 T as shown in View A. The angular frequency of rotation is ω = 50 rad/s and the loop has a resistance of R = 5 ohms. The length of one side of the square loop is W = 0.05 m. At the instant of time when t...
- 6/8 Section C [20%]: You must now answer ALL parts of question 6. As before, please use a new WHITE answer sheet. If there is not enough space on the white sheet, please raise your hand to request a YELLOW answer sheet in order to continue your answer. 6. A researcher asked 7 people on Byres Road how...
- An ideal refrigerator does 165 units of work to remove 575.0 units of heat from its cold compartment during each cycle. What is the refrigerator's coefficient of performance? How much heat per cycle is exhausted to the kitchen?
- You have a credit card with a balance of 2,856.74 at a 14.75% APR. Instead of saving the amount in question 11 in a savings account, you put the amount towards reducing your debt. How much interest do you save in 1 full month?
- Write a class called "Complex". The class has two private data members which correspond to the real and imaginary parts of a complex number. The class Complex should have several functions (methods): set the values, display the number in real/imaginary (re + j*im) format, and display the nu...
- Required information: if applicable, compute ζ, ωn and ωd for the dominant root in each of the following sets of characteristic roots. The characteristic roots are -2, -3 ± j.
- 50 grams of water at an initial temperature of 54°C and 110 grams of water at an initial temperature of 20°C are mixed. What is the final temperature? (Use the specific heat of water.) Also: find the time period of a simple pendulum whose length is 1 m.
- In the following circuit diagram, each of the four resistors is identical in resistance, R. You can measure resistance using an ohmmeter (or a multimeter set to measure resistance) with two probes. You would measure resistance by putting the probes onto the circuit in two different places. Assume that the wires connecting the resistors are ideal. (There might be more than one correct answer for each question below; you need only one correct answer for each.) Where would you need to put the probes to get the...
- Draw the mechanism for the formation of the major product in the acid-catalyzed ring opening of the epoxide shown below. (H2SO4, H2O)
- Given R1 = 43.0 kΩ and R2 = 76.0 kΩ connected as shown in the figure, estimate the values shown by the ammeters A, A1 and A2 when a 10.0 V DC generator is connected to terminals ab. Note: '.' (dot) must be used as the decimal separator; each result should be written to 2 decimal places (example: 2.34), in mA.
- Three point-like charges are placed at the corners of a rectangle as shown in the figure, a = 18.0 cm and b = 52.0 cm. Find the magnitude of the electric field at the center of the rectangle. Let q1 = q3 = +23.0 µC and q2 = -40.0 µC.
- (10 points) 1200 computers are infected with a virus. Each minute the virus infects 10 new machines and then disables the computer on which it resides. How many computers are infected after 5 minutes?
- Which of the following is false with regards to audit responsibility? The auditor of a public company is required to certify the annual financial statements. Auditing standards make no distinction between error or fraud; in either case, the auditor must obtain rea...
- GRIFEPB 5.CQ.014 (garbled in the source): a conceptual question about protection in head-on collisions; see the everyday phenomenon box.
- What is the major product of the reaction shown below? (H2O, H+; structures garbled in the source)
- Where in the kidney is the renal medulla?
- 6. Rewrite the given triple integral (garbled in the source; the integrand involves cos θ and sin φ, with dρ dφ dθ) in cylindrical coordinates. Do not evaluate it.
- Q4) The capacity of a battery is normally distributed with a mean of 25.2 kW-hr and a standard deviation of 0.1 kW-hr. Find the probability that a selected battery has a capacity of more than 25.11 kW-hr: 0.758, 0.864, 0.691, 0.816.
- 12. What are the periodic trends for atomic size, ionization energy, and electron affinity moving from TOP to BOTTOM of the periodic table? (Write Increases or Decreases.) Atomic size: increases. Ionization energy: ... Electron affinity: decreases.
- Given the matrix A = [3 7 4+i 5; 0 -i -1 4; 3 8 4 -1], where i represents the imaginary number, run each of the commands below and explain the observed result: a. A; c. A(10); f. A([1 3], [3 2]); g. [A; A(end-1,:)]; h. A(:, [2 4 1 3]); k. through p. ...; sum(A); sum(A,...)
- What is the approximate P-value for the following values of χ² and df? (Use technology; round your answers to four decimal places.) 23.14, df = ...; 27.13, df = ...; 26.86, ... For the following values of χ² and df, would the null hypothesis be rejected if a significance level of ... were used? 34.52, df = ...
- The government is issuing $100 million in 10-year debt and receives the following bids. $25 million is reserved for non-competitive tenders. At what yield will the non-competitive tenders be issued bonds? Bid 1: $25 million @ 4.0%; Bid 2: $25 million @ 4.2%; Bid 3: $25 million @ 4.4%; Bids 4 and 5: $25...
- Problem 4-02A a-e (Part Level Submission): The adjusted trial balance columns of the worksheet for Pina Colada Company are as follows. Account No. 101, 112, 126, 130, Receivable, 200. Pina Colada Company Worksheet For the Year Ended December 31, 2019. Adjusted Trial Balance: Account Titles, Dr. Cash 5,200, Acc...
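The page never actually proves the identity asked about above, so here is one standard derivation (an editorial sketch added here, not part of the original page). It uses the half-angle identities $1 - \cos x = 2\sin^2(x/2)$ and $\sin x = 2\sin(x/2)\cos(x/2)$, and is valid wherever $\sin x \neq 0$:

$$\frac{1-\cos x}{\sin x} = \frac{2\sin^2(x/2)}{2\sin(x/2)\cos(x/2)} = \frac{\sin(x/2)}{\cos(x/2)} = \tan\frac{x}{2}.$$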
2022-05-21 21:16:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5101442933082581, "perplexity": 8331.041397038343}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662541747.38/warc/CC-MAIN-20220521205757-20220521235757-00365.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/dcdss.2016049
# American Institute of Mathematical Sciences

August 2016, 9(4): 1201-1234. doi: 10.3934/dcdss.2016049

## Forced linear oscillators and the dynamics of Euclidean group extensions

1 Department of Mathematics, Rutgers University, Camden NJ 08102, United States

Received November 2015; Revised February 2016; Published August 2016

We study the generic dynamical behaviour of skew-product extensions generated by cocycles arising from equations of forced linear oscillators of special form. This work extends our earlier work on cocycles into compact Lie groups arising from differential equations of special form (cf. [21]) to the case of non-compact fiber groups of Euclidean type. The earlier techniques do not work in the non-compact case, where one of the main obstacles is the lack of 'recurrence'. Thus, our approach to studying Euclidean group extensions is: (i) first, to use a 'twisted version' of the so-called 'conjugation approximation method', and then (ii) to use the 'geometric-control-theoretic methods' developed in our earlier work (cf. [20] and [21]). Even then, our arguments only work for base flows that admit a global Poincaré section (e.g., the irrational rotation flows on tori and certain nil flows). We apply these results to study the generic spectral behaviour of the forced quantum harmonic oscillator with a time-dependent stationary force restricted to satisfy given constraints.

Citation: Mahesh Nerurkar. Forced linear oscillators and the dynamics of Euclidean group extensions. Discrete and Continuous Dynamical Systems - S, 2016, 9(4): 1201-1234. doi: 10.3934/dcdss.2016049

##### References:

[1] A. Avila, G. Forni and C. Ulcigrai, Mixing for time changes of Heisenberg nil flows, J. of Diff. Geometry, 89 (2011), 369-410.
[2] D. Anosov and A. Katok, New examples in smooth ergodic theory, Trans. Moscow Math. Soc., 23 (1970), 1-35.
[3] P. Ashwin and I. Melbourne, Non-compact drift for relative equilibria and relative periodic orbits, Nonlinearity, 10 (1997), 595-616. doi: 10.1088/0951-7715/10/3/002.
[4] P. Ashwin, I. Melbourne and M. Nicol, Euclidean extensions of dynamical systems, Nonlinearity, 14 (2001), 275-300. doi: 10.1088/0951-7715/14/2/306.
[5] J. Bellisard, Stability and instability in quantum mechanics, in Trends and Developments in the Eighties (Bielefeld, 1982/1983), World Sci. Publishing, Singapore, 1985, 1-106.
[6] L. Bunimovich, H. Jauslin, J. Lebowitz, A. Pellegrinoti and P. Nilaba, Diffusive energy growth in classical and quantum driven oscillators, Journal of Statistical Physics, 62 (1991), 793-817. doi: 10.1007/BF01017984.
[7] M. Combescure, Recurrent versus diffusive dynamics for a kicked quantum oscillator, Annales de l'Institut Henri Poincaré (A) Physique Théorique, 57 (1992), 67-87.
[8] M. Fields, I. Melbourne and M. Nicol, Symmetric attractors for diffeomorphisms and flows, Proc. London Math. Soc., 72 (1996), 657-696. doi: 10.1112/plms/s3-72.3.657.
[9] S. Glasner and B. Weiss, On the construction of minimal skew products, Israel J. Math., 34 (1979), 321-336. doi: 10.1007/BF02760611.
[10] M. Herman, Construction de difféomorphismes ergodiques, preprint.
[11] R. Johnson and M. Nerurkar, On null controllability of linear systems with recurrent coefficients and constrained controls, Journal of Dynamics and Differential Equations, 4 (1992), 259-273. doi: 10.1007/BF01049388.
[12] H. Keynes and D. Newton, Ergodicity in $(G,\sigma )$ extensions, Springer Verlag Lecture Notes in Math., 668 (1978), 173-178.
[13] J. Lebowitz and H. Jauslin, Spectral and stability aspects of quantum chaos, Chaos, 1 (1991), 114-121. doi: 10.1063/1.165809.
[14] E. Lesigne and D. Volny, Large deviations for generic stationary processes, Colloquium Mathematicum, 84/85 (2000), 75-82.
[15] E. Merzbacher, Quantum Mechanics, Wiley, New York, 1965.
[16] I. Melbourne, V. Nitica and A. Torok, Transitivity of Euclidean type extensions of hyperbolic systems, Ergodic Theory and Dynamical Systems, 29 (2009), 1582-1602. doi: 10.1017/S0143385708000886.
[17] M. Nerurkar, On the construction of smooth ergodic skew products, Ergodic Theory and Dynamical Systems, 8 (1988), 311-326. doi: 10.1017/S0143385700004454.
[18] M. Nerurkar, Spectral and stability questions regarding evolution of non-autonomous linear systems, J. of Discrete and Continuous Dynamical Systems, (2004), 114-120.
[19] M. Nerurkar and H. Jauslin, Stability of oscillators driven by ergodic processes, J. of Math. Physics, 35 (1994), 628-645. doi: 10.1063/1.530657.
[20] M. Nerurkar and H. Sussmann, Construction of minimal cocycles arising from specific differential equations, Israel Journal of Mathematics, 100 (1997), 309-326. doi: 10.1007/BF02773645.
[21] M. Nerurkar and H. Sussmann, Construction of ergodic cocycles arising from linear differential equations of special form, Journal of Modern Dynamics, 1 (2007), 205-253. doi: 10.3934/jmd.2007.1.205.
[22] V. Nitica and M. Pollicott, Transitivity of Euclidean group extensions of Anosov diffeomorphisms, Ergodic Theory and Dynamical Systems, 25 (2005), 257-269. doi: 10.1017/S0143385704000471.
[23] K. Schmidt, Cocycles and Ergodic Transformation Groups, MacMillan of India, 1977.
Related articles:

[1] Roy Adler, Bruce Kitchens, Michael Shub. Stably ergodic skew products. Discrete and Continuous Dynamical Systems, 1996, 2(3): 349-350. doi: 10.3934/dcds.1996.2.349
[2] Roy Adler, Bruce Kitchens, Michael Shub. Errata to "Stably ergodic skew products". Discrete and Continuous Dynamical Systems, 1999, 5(2): 456-456. doi: 10.3934/dcds.1999.5.456
[3] Mahesh G. Nerurkar, Héctor J. Sussmann. Construction of ergodic cocycles that are fundamental solutions to linear systems of a special form. Journal of Modern Dynamics, 2007, 1(2): 205-253. doi: 10.3934/jmd.2007.1.205
[4] Núria Fagella, Àngel Jorba, Marc Jorba-Cuscó, Joan Carles Tatjer. Classification of linear skew-products of the complex plane and an affine route to fractalization. Discrete and Continuous Dynamical Systems, 2019, 39(7): 3767-3787. doi: 10.3934/dcds.2019153
[5] Kazuyuki Yagasaki. Degenerate resonances in forced oscillators. Discrete and Continuous Dynamical Systems - B, 2003, 3(3): 423-438. doi: 10.3934/dcdsb.2003.3.423
[6] Matthieu Astorg, Fabrizio Bianchi. Higher bifurcations for polynomial skew products. Journal of Modern Dynamics, 2022, 18: 69-99. doi: 10.3934/jmd.2022003
[7] D. Bonheure, C. Fabry, D. Smets. Periodic solutions of forced isochronous oscillators at resonance. Discrete and Continuous Dynamical Systems, 2002, 8(4): 907-930. doi: 10.3934/dcds.2002.8.907
[8] Àlex Haro. On strange attractors in a class of pinched skew products. Discrete and Continuous Dynamical Systems, 2012, 32(2): 605-617. doi: 10.3934/dcds.2012.32.605
[9] Eugen Mihailescu, Mariusz Urbański. Transversal families of hyperbolic skew-products. Discrete and Continuous Dynamical Systems, 2008, 21(3): 907-928. doi: 10.3934/dcds.2008.21.907
[10] Jose S. Cánovas, Antonio Falcó. The set of periods for a class of skew-products. Discrete and Continuous Dynamical Systems, 2000, 6(4): 893-900. doi: 10.3934/dcds.2000.6.893
[11] Matúš Dirbák. Minimal skew products with hypertransitive or mixing properties. Discrete and Continuous Dynamical Systems, 2012, 32(5): 1657-1674. doi: 10.3934/dcds.2012.32.1657
[12] Viorel Nitica. Examples of topologically transitive skew-products. Discrete and Continuous Dynamical Systems, 2000, 6(2): 351-360. doi: 10.3934/dcds.2000.6.351
[13] Wen Huang, Jianya Liu, Ke Wang. Möbius disjointness for skew products on a circle and a nilmanifold. Discrete and Continuous Dynamical Systems, 2021, 41(8): 3531-3553. doi: 10.3934/dcds.2021006
[14] Jon Aaronson, Michael Bromberg, Nishant Chandgotia. Rational ergodicity of step function skew products. Journal of Modern Dynamics, 2018, 13: 1-42. doi: 10.3934/jmd.2018012
[15] Nikolaos Karaliolios. Differentiable rigidity for quasiperiodic cocycles in compact Lie groups. Journal of Modern Dynamics, 2017, 11: 125-142. doi: 10.3934/jmd.2017006
[16] Alexander I. Bufetov. Hölder cocycles and ergodic integrals for translation flows on flat surfaces. Electronic Research Announcements, 2010, 17: 34-42. doi: 10.3934/era.2010.17.34
[17] Julia Brettschneider. On uniform convergence in ergodic theorems for a class of skew product transformations. Discrete and Continuous Dynamical Systems, 2011, 29(3): 873-891. doi: 10.3934/dcds.2011.29.873
[18] C.P. Walkden. Stable ergodicity of skew products of one-dimensional hyperbolic flows. Discrete and Continuous Dynamical Systems, 1999, 5(4): 897-904. doi: 10.3934/dcds.1999.5.897
[19] Kohei Ueno. Weighted Green functions of nondegenerate polynomial skew products on $\mathbb{C}^2$. Discrete and Continuous Dynamical Systems, 2011, 31(3): 985-996. doi: 10.3934/dcds.2011.31.985
[20] Kohei Ueno. Weighted Green functions of polynomial skew products on $\mathbb{C}^2$. Discrete and Continuous Dynamical Systems, 2014, 34(5): 2283-2305. doi: 10.3934/dcds.2014.34.2283
2022-07-05 21:06:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6903016567230225, "perplexity": 3746.944704133565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104628307.87/warc/CC-MAIN-20220705205356-20220705235356-00026.warc.gz"}
https://nforum.ncatlab.org/discussion/8890/
• CommentRowNumber1. TobyBartels, Aug 26th 2018: Make Lie ideals a special case of the general definition for rings (and the like).
• CommentRowNumber2. TobyBartels, Aug 26th 2018: Add irreducible ideals to the list of types of ideals (but really, some of these need their own pages!).
• CommentRowNumber3. TobyBartels, Aug 27th 2018: Prepare to make prime ideal its own page.
• CommentRowNumber4. TobyBartels, Aug 29th 2018: Prepare for creation of irreducible ideal; postpare for creation of prime ideal (removing material that has been moved there).
• CommentRowNumber5. TobyBartels, Aug 31st 2018: Remove some material moved to irreducible ideal.
• CommentRowNumber6. zskoda, Sep 7th 2018: I wrote a few more words under the monoid, category and additive category parts, to have the definition phrased rather than just saying that the notion exists.
2021-10-25 10:58:19
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.862014889717102, "perplexity": 8102.462456679612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587659.72/warc/CC-MAIN-20211025092203-20211025122203-00416.warc.gz"}
https://www.physicsforums.com/threads/two-700-watt-blow-heaters-vs-one-1500-watt-heater.783036/
# Two 700 watt blow heaters vs one 1500 watt heater?

1. Nov 19, 2014 ### brothermaynard I bought a small 1500-watt portable blow heater for my room because it's getting really cold, but whenever it's turned on, it trips the circuit breaker. What exactly causes the circuit to break? I mean, what units should I concern myself with when buying a heater -- watts or voltage? Why does this heater trip the circuit when other stuff (computers, lamps, etc.) doesn't? Also, what uses the most power: the fan turning or the heating element? What if I bought two 700-watt heaters -- would that heat the room more with less power used? Thanks for any help.

2. Nov 19, 2014 ### Staff: Mentor What is your AC mains voltage? 120Vrms, 240Vrms, or something else? What is the breaker that is tripping rated at? 20A?

3. Nov 19, 2014 ### Staff: Mentor And what else is plugged into that same circuit that is fed by the breaker that is tripping?

4. Nov 19, 2014 ### zoki85 The asynchronous motor's starting current could be the problem. Heating elements also draw more current when they are cold. Consider a motor-starter fuse for your installation instead.

5. Nov 19, 2014 ### brothermaynard The room has forced-air heaters mounted in the walls (two of them). If I turn on my extra portable heater when the other heaters are on, it instantly trips the circuit. If the room heaters are off, then the portable heater won't trip the circuit. Also, I have a microwave in my room: microwave + portable heater = tripped circuit. But... microwave + room heaters = no problem. How do I find out the mains voltage and amps? Is that posted on the breaker box usually?

6. Nov 19, 2014 ### Staff: Mentor Look at the label on the back (or bottom) of the microwave oven. That will tell you what the AC mains voltage is. The breaker box may have something that shows what the rating of the circuit breaker is. But you can't really substitute a bigger one without upgrading the electrical service (an electrician is needed for that, and you need building permits and inspections to do it).

7. Nov 19, 2014 ### mp3car Your wall heaters sound like they're on the same circuit breaker as the outlet you're plugging the heater into. Most circuit breakers in homes are either 15 or 20 amps: usually 15 amps if you have 14-gauge (thickness) wire, or 20A if you have 12-gauge wire. A 15A circuit breaker at 120 volts can supply about 1800 watts of power. Since your heater is 1500 by itself, that doesn't leave very much for anything else, definitely not enough for two more heaters. In my opinion, your simplest fix is to buy a 12-gauge extension cord (MUST BE 12 GAUGE, or thicker, like 10 gauge, but you probably won't find a 10-gauge extension cord). Plug the extension cord into an outlet in another room, a room without wall heaters. Depending what else is on that other circuit, though, you still may trip a breaker. The main thing to keep in mind is converting the watt rating of heaters to the amp rating of circuit breakers. It's P = IV: power = current x voltage. So if you have a 120V system (e.g., you live in the US) and your circuit breaker is 15A, then the most you can plug into that circuit is about 1800 watts, TOTAL... That is, counting everything plugged in that's on that breaker, and sometimes they may even run lighting on the same breakers as outlets. If it's a 20A breaker, then you can run 2400 watts total. If you do buy an extension cord to plug it in at a different part of the house, like I said, make sure it's at least 12 gauge (smaller numbers are LARGER, so don't buy a 14-gauge extension cord)!
It's still not recommended, but make sure not to use anything smaller than 12 gauge. It may be expensive (for an extension cord), like $50 for a 50' cord, $30 for a 20' cord, etc...

8. Nov 19, 2014 ### dlgoff Question. Is it a rented room, as in student housing? Yep. But it may not be possible depending on the answer to the above. Waiting for the reply to the room question. If he is living in only one rented student room, he may be out of luck.

9. Nov 19, 2014 ### Staff: Mentor Heaters are 100% efficient, so two 700W heaters create less heat than a 1500W heater. Last edited by a moderator: Nov 19, 2014

10. Nov 19, 2014 ### brothermaynard Hi, yes, student housing, and it's a really old building too; I estimate year 1900 or earlier. So unfortunately I can't use another room (I only have one room). FYI, on the breaker box it looks like each room has exactly one circuit breaker. So it's easy to flip the switch again if it trips -- just a walk down the corridor -- but I'm looking for some ingenious way to get this place toasty warm... definitely can't build a fire, lol. Thanks for the answers... any more ideas?

11. Nov 19, 2014 ### brothermaynard mp3car -- even if I go get a 12-gauge extension, I don't see why it still wouldn't either trip the circuit or fail to work at all (the portable heater failing to work, I mean), because it's still running 1500 W. (?) That's why I was thinking of two 700W heaters positioned at opposite ends of the room.

12. Nov 19, 2014 ### Staff: Mentor Sounds like adding some clothing layers would be a good (and cheaper) approach... :-) He was thinking that you could use the extension cord to tap into an outlet that was fed by a different breaker. Sounds like that is not an option.

13. Nov 20, 2014 ### meBigGuy Just to summarize: I'm assuming 120V mains. You multiply the mains voltage by the breaker current to determine the maximum watts you can draw. The circuit breaker trips when you draw too much current. Let's say 15 amps. 15 amps at 120V means you can run 1800 watts. It doesn't matter how you split it up; 1800 watts (15 amps, actually) is the limit. If it is a 20A breaker, the answer is 2400 watts. That is a hard limit. No exceptions. No tricks to get by it, other than an extension cord to another breaker. Maybe you can add one 700-watt heater and not trip the breaker (until you run the microwave).

14. Nov 20, 2014 ### sophiecentaur If you were to have a single heater running for longer (i.e., switched on well before you get into the room), you may get the room temperature high enough for comfort. It's not the most efficient way through, but if you are not metered, it may be worth thinking about.

15. Nov 20, 2014 ### psparky I would return that heater and use a heater that has several different settings. They sell a radiator heater at Lowes or Home Depot for like $35. It also has a thermostat, which is ideal for sleeping or leaving the room. Point is, it has three different settings: minimum (500 watt), medium (1,000 watt) and maximum (1,500 watt). Find the setting that doesn't trip the breaker, either medium or minimum in this case. By the way, these wattage settings work independently of the thermostat, which is convenient for the user. Problem solved. 500 watts may be more than enough to keep you cozy. Otherwise you would need an electrician to re-wire, which obviously isn't going to happen in this case.

16. Nov 20, 2014 ### dlgoff I agree. I have one of these Oil Filled Heaters from Lowe's in my bedroom that I'm sold on.

17. Nov 21, 2014 ### psparky There ya go.
Mine has the older-style switches for the different wattage levels and then the "turn knob" for the thermostat. A couple of these strategically placed in a couple of rooms could work wonders in a house. Even keeping the settings low for the wattage and thermostat can make a big difference. Seems like every house has that one or two "colder rooms".

18. Nov 21, 2014 ### dlgoff These oil types are much safer IMO, as the heat doesn't get concentrated in one area of the heater.

19. Nov 21, 2014
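The P = IV budgeting that runs through this thread (posts 7 and 13) is easy to script. Below is a minimal, editorial Python sketch (not from the thread itself; the 120 V mains, the 15 A breaker, and the loads are example values matching the discussion):

```python
# Breaker-budget arithmetic from the thread: P = I * V sets the ceiling,
# and the breaker trips when the summed loads exceed it.

def breaker_budget_watts(volts: float, breaker_amps: float) -> float:
    """Maximum wattage a circuit can supply: P = I * V."""
    return volts * breaker_amps

def trips(loads_watts: list[float], volts: float = 120.0,
          breaker_amps: float = 15.0) -> bool:
    """True if the combined load exceeds the breaker's capacity."""
    return sum(loads_watts) > breaker_budget_watts(volts, breaker_amps)

print(breaker_budget_watts(120, 15))   # 1800.0 W on a 15 A, 120 V circuit
print(breaker_budget_watts(120, 20))   # 2400.0 W on a 20 A circuit
print(trips([1500, 700]))              # True:  2200 W > 1800 W, breaker trips
print(trips([700, 700]))               # False: 1400 W fits under 1800 W
```

Two 700 W heaters total 1400 W, which is why they fit under a 15 A breaker where the single 1500 W unit plus anything else on the same circuit does not.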
2017-10-22 23:59:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24898609519004822, "perplexity": 3148.4093759257294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825473.61/warc/CC-MAIN-20171022222745-20171023002745-00060.warc.gz"}
http://nhatpham.com/pile-jetting-ulz/calculate-the-percentage-of-each-element-in-urea-0fadc9
calculate the percentage of each element in urea

To calculate the molecular weight of a chemical compound, enter its formula and specify the isotope mass number after each element in square brackets (examples: C[14]O[16]2, S[34]O[16]2). Note that all formulas are case-sensitive.

Any activity or operation carried out during the process of crop production has economic importance; fertilizer application is not left out.

Percent purity example: there is 9.5 g of calcium carbonate in the 10 g of chalk, so percent purity = 9.5 ÷ 10 × 100% = 95%.

Urea reduction ratio (URR) example: URR = (Upre - Upost)/Upre × 100 = (65 mg/dL - 20 mg/dL)/(65 mg/dL) × 100 = 69%. Recommendations are that URRs of 65% and above indicate adequate dialysis. The amount of urea removed is 45 mg/dL out of 65 mg/dL, meaning 69% (69.23%).

The percent composition describes the percentage of each element in a compound by mass. For each element, multiply the atomic mass of the element by the number of atoms of that element in the molecule, then divide by the molar mass of the compound and multiply by 100. Example (ammonia, NH3, molar mass 17.034 g/mol): N = 14.01/17.034 × 100 = 82%; H = 3.024/17.034 × 100 = 18%.

For urea, CO(NH2)2 (i.e., CH4N2O):
Step 1: molar mass of CH4N2O = 12 + 4 + 28 + 16 = 60 g/mol.
Step 2: % C = (12/60) × 100 = 20.00%; % H = (4/60) × 100 = 6.66%; % N = (28/60) × 100 = 46.66%; % O = (16/60) × 100 = 26.66%.

Comparison: which compound has the greatest mass percent nitrogen, ammonium nitrate, ammonium sulfate, or urea? For ammonium sulfate, (2 × 14.01)/132.154 × 100 = 21.20% N. Urea has the highest nitrogen content.

N2O: the mass of one mole of N2O = (14 × 2) + 16 = 44 g, and one mole of N2O contains 2 moles of N.

Fractional excretion of urea: this is a health tool used in certain nephrology fields, based on four simple available tests of renal function, that delivers an indicative percentage of renal failure and its likely cause. The FEUrea calculator uses the formula FEUrea (percent) = (SCr × UUrea)/(SUrea × UCr) × 100, where SCr (serum creatinine) represents the waste product creatinine still in the body due to decreased kidney function.

Fertilizer cost (Example A): urea, CO(NH2)2, has a guaranteed analysis of 46-0-0 and since 2010 has cost an average of $567 per ton (2,000 lb) in the Mountain Region, which includes New Mexico. First, calculate the pounds of N in the fertilizer: 2,000 lb × 0.46 = 920 lb of N. Next, calculate the cost per pound of N: $567 ÷ 920 lb ≈ $0.62/lb. Cost per pound of nutrient should be the major criterion in determining which fertilizer to buy. For mixed fertilizers (those with more than one plant nutrient), the cost per pound of one or more nutrients that could replace the nutrient found in the mix must be used; to calculate the cost per pound of elemental P or K, a factor must be used to convert percentage P2O5 to percentage P and percentage K2O to percentage K (Table 1). Guaranteed analysis must be given for every fertilizer material sold in New Mexico; it includes the percentages of nitrogen, phosphorus, potassium, and other plant nutrients present in quantities large enough to conform to state law.

Lawn fertilizer: to find the amount of nitrogen in a bag of fertilizer, multiply the weight of the bag by the percent nitrogen (the first number in the N-P-K designation on the front of the bag); then divide the pounds of nitrogen by the area the bag states it will cover to get the pounds of nitrogen per 1,000 sq ft. Nitrogen is an element required for plant growth; before you apply fertilizer, you should have your soil tested.

Converting a decimal to a percentage is as simple as multiplying it by 100. For example, you may know that 40 percent of your paycheck will go to taxes and want to find out how much money that is; to convert 0.87 to a percent, simply multiply 0.87 by 100: 0.87 × 100 = 87, i.e., 87%.

Cinnamic alcohol (formula C9H10O, molar mass 134.17 g/mol): mass % C = 9 × 12.01/134.17 × 100 = 80.56% C; mass % H = 10 × 1.008/134.17 × 100 = 7.513% H; mass % O = 1 × 16.00/134.17 × 100 = 11.93% O. Adding together the mass percents gives approximately 100%.

Mixture exercise: a sample contains 50.0 g of urea and 50.0 g of cinnamic acid. The molecular weight of urea is 60.06 g/mol and the molecular weight of cinnamic acid is 148.16 g/mol; what is the mole fraction of cinnamic acid? (We can convert mole percent back to mole fraction by dividing by 100; for general chemistry, all the mole percents of a mixture add up to 100 mole percent.)

It is convenient to consider 1 mol of C9H8O4 (aspirin) and use its molar mass (180.159 g/mol, determined from the chemical formula) to calculate the percentages of each of its elements.

A urea sample obtained for bio experiments contains 8.09 g of urea (CH4N2O): (a) how many molecules of urea are present in this sample (Bottle B)? (b) how many atoms of hydrogen are in this sample of urea?
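The urea percentages quoted above follow directly from the molar-mass ratios; as a worked check (an editorial addition using the same rounded atomic masses, C = 12, H = 1, N = 14, O = 16):

$$M(\mathrm{CH_4N_2O}) = 12 + 4(1) + 2(14) + 16 = 60\ \mathrm{g/mol}$$

$$\%\mathrm{C} = \tfrac{12}{60}\times 100 = 20.00\%, \qquad \%\mathrm{H} = \tfrac{4}{60}\times 100 = 6.66\%$$

$$\%\mathrm{N} = \tfrac{28}{60}\times 100 = 46.66\%, \qquad \%\mathrm{O} = \tfrac{16}{60}\times 100 = 26.66\%$$

The four values sum to 100% up to rounding.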
Silicon carbide (SiC): M(SiC) = (28.09 × 1) + (12.01 × 1) = 40.10 g/mol; silicon = 28.09/40.10 × 100 ≈ 70.0%; carbon = 100 - 70.0 = 30.0%.

Percent composition by element, calcium carbonate (CaCO3):

Element | Symbol | Atomic mass | # of atoms | Mass percent
Calcium | Ca | 40.078 | 1 | 40.043%
Carbon | C | 12.0107 | 1 | 12.000%
Oxygen | O | 15.9994 | 3 | 47.957%

Percent composition by element, urea (CH4N2O):

Element | Symbol | Atomic mass | # of atoms | Mass percent
Hydrogen | H | 1.00794 | 4 | 6.713%
Carbon | C | 12.0107 | 1 | 19.999%
Nitrogen | N | 14.0067 | 2 | 46.646%
Oxygen | O | 15.9994 | 1 | 26.641%

Atomic masses taken from Wikipedia (September 4, 2007) with the trailing decimals of uncertainty removed. Atomic masses used for the simpler questions: C = 12, Cl = 35.5, Fe = 56, H = 1, Mg = 24, N = 14, Na = 23, O = 16, S = 32.

Molecular weight of urea: MW(CH4N2O) = (C + 4H + 2N + O) = (12.0 + 4 + 28 + 16) = 60 g/mol.

Theoretical yield: using the theoretical yield equation helps you find the theoretical yield from the moles of the limiting reagent, assuming 100% efficiency. The formula is: mass of product = molecular weight of product × (moles of limiting reagent in the reaction × stoichiometry of product).

Percentage yield example: percentage yield = (1.6/2.0) × 100 = 80%.

Percent purity example: we have a 13.9 g sample of impure iron pyrite, which is roasted to produce iron(III) oxide and sulfur dioxide.

Other exercises from the page (keep at least one decimal place in your answers; label molar masses in g/mol):
- Calculate the percentage composition of each element in potassium chlorate, KClO3.
- Calculate the % composition by mass of each element in phosphorus oxychloride, POCl3.
- Calculate the percent composition of each element in Mg(PO4)2 (formula garbled in the source).
- Urea is a very important nitrogenous fertilizer. Its formula is CON2H4; calculate the percentage of carbon (C = 12, O = 16, N = 14 and H = 1).
- Each of the following compounds is a fertiliser used by farmers; by calculating the percentage by mass of nitrogen in each, determine the fertiliser that has the highest nitrogen content.
- Calculate the mass percent of N in triethanolamine, N(CH2CH2OH)3 (used in dry-cleaning agents and household detergents), and of O in glyceryl tristearate (a saturated fat).
- Corrosion exercise (partially garbled in the source): a metal M with active-passive behavior, M → M^n+ + ne-, Epp = -0.400 V, B = +0.05, ipass = 10^-5 A/cm², Etr = +1.000 V, Ecorr = -0.5 V, icorr = 10^-4 A/cm². Knowing that the metal M is in the active state, determine the equilibrium potential of the anode E°M, assuming the exchange current density for the dissolution of metal M equals io = 10^-7 A/cm².

Other percentage-mass composition calculations include the % of any component in a compound or a mixture.

Fertiliser calculators: this Fertiliser Calculator compares the nutrient content of more than 1500 commercially-available fertilisers. Alternatively, the Quick Calculator allows you to select one fertiliser from a list of products or a custom blend with no login required; a further tab allows development of more detailed and personalised fertiliser schedules. NPK calculations can help you find out how much nutrient value of fertilizers you are applying to the turf, garden or farm.
Converted directly to grams percent of nitrogen in a mole ratio ) of the following compounds. Fraction of cinnamic acid that has the highest nitrogen content a any compound whose empirical formula is known the! And other dominants } { 2.0 } \ \times\ 100\ ] percentage yield = %. Four significant figures atom in each compound be deduced to them, you must be. Once and for all, although any mass could be assumed its molecular formula specify. Add together the atomic mass of each element in the 10 g of c, 4.58 of. Can be readily Obtained with a periodic table or some other reference, look up the molar mass:... 134.17 g/mol percentage yield = 80 % you should have your soil tested 2 protons at 1.26 ppm so will! Identify how Many Molecules of urea Are Present in this sample of urea Are Present this. Cover ) back to mole fraction by dividing by 100 a single molecule the... = 20.00 % the Quick Calculator allows you to select one fertiliser from a list of or! Sum all the mole fraction of cinnamic acid is 148.16 g/mol in solution sum all the masses! = 46.66 %, specify its isotope mass number after each element in bag... One mole of N = molar mass of each element, we a. As 87 % or 87 percent is acceptable personalised fertiliser schedules is 9.5 g of h, and 54.50 of... Fertilizer application is not left out a total mass of each atom each! Sample is heated to … If we have 100 g multiple by 100 we. Other percentage mass composition calculations including % of Hydrogen whose empirical formula is known, Quick. Contains 8.09 g of urea Are Present in this Bottle B have your tested... With the trailing decimals of uncertainty removed its likely cause nitrogen and Oxygen the. For Bio Experiments Contains 8.09 g of urea decimal place in your answer and 54.50 g of calcium in! Yield\ =\ \frac { 1.6 } { 2.0 } \ \times\ 100\ ] percentage yield = %. A periodic table ( inside front cover ) sulfate, or urea 40.92 g of urea Are Present in Bottle... The ratio of one mole of urea is 60.16 g/mol and the molecular of... Do this for a single molecule of urea Are Present in this sample of impure iron pyrite molar. For Bio Experiments Contains 8.09 g of urea and 50.0 g of urea 60.16. Using the theoretical yield equation helps you in finding the theoretical yield from mole!
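Since every worked example above is the same three-step computation (molar mass, element mass, ratio × 100), it is easy to script. Here is a minimal sketch in Python, assuming the formula has already been parsed into an element→count dictionary (the function name and structure are mine, for illustration, not from the quoted material):

```python
# Mass-percent composition from an element->count mapping.
# Atomic masses are the unrounded values used in the tables above.
ATOMIC_MASS = {"H": 1.00794, "C": 12.0107, "N": 14.0067, "O": 15.9994}

def percent_composition(formula):
    """Return {element: mass percent} for a formula given as {element: count}."""
    molar_mass = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return {el: 100.0 * ATOMIC_MASS[el] * n / molar_mass for el, n in formula.items()}

urea = {"C": 1, "H": 4, "N": 2, "O": 1}  # CH4N2O, M ~ 60.06 g/mol
for el, pct in percent_composition(urea).items():
    print(f"{el}: {pct:.3f}%")  # N comes out near 46.65%, matching the table
```

Running this reproduces the urea figures tabulated above to three decimal places.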
2022-09-25 12:12:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5570974349975586, "perplexity": 3864.6043015340133}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00607.warc.gz"}
http://mathhelpforum.com/calculus/73760-integral-mean-value-theorem.html
# Thread: Integral Mean Value Theorem 1. ## Integral Mean Value Theorem Can someone show me how to do this... Find a value of c that satisfies the conclusion of the Integral Mean Value Theorem for $f(x) = 3x^2$ on $[0,2]$ (the integral of $3x^2$ over this interval is $8$). I know the answer is $2/\sqrt{3}$, and that it can be found with the quadratic formula, but I do not know how to get that. I also know that $f(c)=4$, i.e. $3c^2 = 4$. 2. Originally Posted by gammaman Can someone show me how to do this... Find a value of c that satisfies the conclusion of the Integral Mean Value Theorem for $f(x) = 3x^2$ on $[0,2]$. I know the answer is $2/\sqrt{3}$, and that it can be found with the quadratic formula, but I do not know how to get that. I also know that $f(c)=4$, i.e. $3c^2 = 4$. You need $f(c) = \frac{1}{2-0}\int_0^2 3x^2 dx$ Therefore, $3c^2 = \frac{1}{2}\int_0^2 3x^2 dx$ Now solve for $c$ after evaluating the RHS. 3. Ok, but how do I get the answer? 4. Originally Posted by gammaman I thought that would be solving using the Integral Mean Value Theorem? I want to know how to get the value for c. That is exactly how you get that number. After you evaluate the integral on the RHS you get a number, and so you have $3c^2 = \text{number}$. This is an equation in $c$, and so you can solve this equation for $c$. Solving $3 c^2 = \text{number}$ using the quadratic formula is like killing a fly with an elephant gun.
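For completeness, here is the evaluation the thread stops short of (this one-line finish is not in the original posts, but it is the standard computation): $\frac{1}{2-0}\int_0^2 3x^2\,dx = \frac{1}{2}\big[x^3\big]_0^2 = \frac{8}{2} = 4$, so $3c^2 = 4$ and $c = \frac{2}{\sqrt{3}} \approx 1.155$, taking the positive root since $c$ must lie in $[0,2]$.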
2016-10-27 13:07:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9482954740524292, "perplexity": 283.87926970454976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721278.88/warc/CC-MAIN-20161020183841-00561-ip-10-171-6-4.ec2.internal.warc.gz"}
https://fr.maplesoft.com/support/help/maple/view.aspx?path=geometry%2Fsquare
geometry - Maple Programming Help geometry square define a square Calling Sequence square(Sq, [A, B, E, F] ) Parameters Sq - the name of the square A, B, E, F - four points Description • A square is an equilateral and equiangular parallelogram. • A square Sq is defined by a list of four given points in the correct order. For a list of four points $A,B,E,F$, the condition is that the segments AB, BE, EF, and FA must form a square. • To access the information relating to a square Sq, use the following function calls: form(Sq) returns the form of the geometric object (i.e., square2d if Sq is a square). DefinedAs(Sq) returns the list of four vertices of Sq. diagonal(Sq) returns the length of the diagonal of Sq. detail(Sq) returns a detailed description of the object Sq. • The command with(geometry,square) allows the use of the abbreviated form of this command. Examples > $\mathrm{with}\left(\mathrm{geometry}\right):$ define four points A(0,0), B(1,0), C(1,1) and F(0,1) > $\mathrm{point}\left(A,0,0\right),\mathrm{point}\left(B,1,0\right),\mathrm{point}\left(C,1,1\right),\mathrm{point}\left(F,0,1\right):$ define the square Sq that has A, B, C, F as its vertices > $\mathrm{square}\left(\mathrm{Sq},\left[A,B,C,F\right]\right)$ ${\mathrm{Sq}}$ (1) > $\mathrm{form}\left(\mathrm{Sq}\right)$ ${\mathrm{square2d}}$ (2) > $\mathrm{map}\left(\mathrm{coordinates},\mathrm{DefinedAs}\left(\mathrm{Sq}\right)\right)$ $\left[\left[{0}{,}{0}\right]{,}\left[{1}{,}{0}\right]{,}\left[{1}{,}{1}\right]{,}\left[{0}{,}{1}\right]\right]$ (3) > $\mathrm{diagonal}\left(\mathrm{Sq}\right)$ $\sqrt{{2}}$ (4) > $\mathrm{detail}\left(\mathrm{Sq}\right)$ $\begin{array}{ll}{\text{name of the object}}& {\mathrm{Sq}}\\ {\text{form of the object}}& {\mathrm{square2d}}\\ {\text{the four vertices of the square}}& \left[\left[{0}{,}{0}\right]{,}\left[{1}{,}{0}\right]{,}\left[{1}{,}{1}\right]{,}\left[{0}{,}{1}\right]\right]\\ {\text{the length of the diagonal}}& \sqrt{{2}}\end{array}$ (5) > $\mathrm{area}\left(\mathrm{Sq}\right)$ ${1}$ (6)
2021-04-21 17:08:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8621735572814941, "perplexity": 1732.6716496749914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039546945.85/warc/CC-MAIN-20210421161025-20210421191025-00609.warc.gz"}
https://physik.co-i60.com/2022/11/
# Introduction Today's modern technology is full of sensors. Sensing temperature, motion, force, humidity, sound, electricity, radiation – basically every imaginable physical quantity – is necessary to measure and understand our environment. This is usually done with so-called transducers (sometimes abbreviated as Xducers) which convert the measured physical quantity (e. g. temperature or any other kind of signal) into a different physical quantity (e. g. resistance, voltage) or any other type of signal. In many cases, the transducer converts the measured physical quantity into an electrical signal which is used as an input to a digital voltmeter or an analog-to-digital converter. The conversion into electrical quantities is highly practical because it lets us connect the transducer to our measurement instruments or our microcontrollers (which are basically small computers). A special kind of transducer I want to talk about here today is the piezoelectric accelerometer. Just recently I've acquired a huge batch of piezoelectric accelerometers in unknown condition which need to be tested for functionality. The goal of this project is to develop a small prototype calibration device in order to be able to calibrate piezoelectric accelerometers by the comparison method. # Piezoelectric Accelerometers ## Working Principle Piezoelectric (PE) accelerometers are basically "acceleration-to-charge" transducers. They rely on the piezoelectric effect which – in simple words – converts mechanical energy into electrical energy. A piezoelectric transducer consists of a piezoelectric material (e. g. quartz, lithium niobate) and a small seismic mass. As soon as dynamic forces act on the spring-mass system along the acceleration-sensitive axis, mechanical stress is introduced on the piezoelectric material which is mounted inside the accelerometer housing (see Figure 2). The resulting deformation of the PE material causes a polarization which in turn generates a change in surface charge density. The change in surface charge density is directly proportional to the mechanical stresses (e. g. force or pressure) and therefore proportional to the acting force or acceleration (recall Newton's second law, $$F = ma \longrightarrow a = F/m$$). The resulting change in surface charge density can be detected, amplified and converted into a measurable voltage with a proper signal conditioner or so-called "charge amplifier". For simplicity's sake I'll refer to "charges generated by the accelerometer" instead of "polarization and change in surface charge density of the PE material". ## Harmonic Oscillator From a mechanics point of view, the basic construction of a PE accelerometer can be approximated as a spring-mass system with low damping, as shown in Figure 3. The harmonic oscillator is a very basic physical model of a spring-mass-damper system. The huge advantage of this model is its simplicity: the harmonic oscillator equation combines Newton's laws of motion ($$F = ma$$) and Hooke's law ($$F = kx$$) and can be solved analytically as a differential equation. I'm skipping the mathematics here and will just mention that in reality things are more complex; some of the results of the differential equation for a driven harmonic oscillator are shown in Figure 3. Applying oscillations to a spring-mass-damper system leads to the curves (Bode plots) shown in Figure 3. One of the critical parameters of a harmonic oscillator is the natural frequency $$\omega_\mathrm{n}$$.
It's the particular frequency at which the spring-mass system oscillates (or is "shaken") in resonance, i.e. the mechanical system responds with very large displacement amplitudes while being excited with very small amplitudes. Resonance phenomena can be experienced in everyday situations like musical instruments, swinging bridges, vibrations in cars driving at certain speeds, tuning forks etc. For example, sinusoidal excitation of an accelerometer at its resonance frequency can lead to damage or a change of its specified properties, e. g. its sensitivity. A high resonance frequency is achieved by using stiff material (the spring constant $$k$$ should be high) and a small seismic mass. In the case of an accelerometer, the resonance frequency should be as high as possible, usually in the order of 30…50 kHz for high-frequency or shock measurements. Brüel & Kjaer suggests in [1] that the typical usable frequency range of an accelerometer is specified as approx. 30% of the natural frequency. The equation $$\omega_\mathrm{n} = \sqrt{k/m}$$ for the undamped natural frequency suggests that using no seismic mass would theoretically lead to an infinite natural frequency! In reality, we need a small seismic mass – it has to be just big enough so it can compress or stretch our spring (which is basically the PE material) through its inertia. We need to create mechanical stresses on the PE material in order to generate our precious charges. Basically, a larger seismic mass leads to a larger signal output, which is exploited in the low-frequency range ($$f \ll 10 ~ \mathrm{Hz}$$) and in seismometers. ## Measuring Accelerations with a PE Accelerometer The amounts of charge generated by a PE accelerometer are minuscule. In order to get a measurable amount of charge, the PE elements are stacked in parallel as seen in Fig. 1. We're talking about tens to hundreds of femtocoulombs (fC) per m/s² of acceleration up to a few picocoulombs (pC) per m/s². Typical values are in the order of a few pC, where $$1~\mathrm{pC} = 10^{-12}~\mathrm{A} \cdot \mathrm{s}$$. Just imagine charging a small capacitor with a capacitance of C = 100 pF to a voltage of U = 0.1 V: according to the capacitor equation $$Q = C \cdot U$$ you will get a value of $$Q = 10~\mathrm{pC}$$. High-intensity accelerations in the order of 1 … 100 km/s² – which are found in crash or shock testing – may generate a few nanocoulombs of charge. Measuring such minuscule quantities requires somewhat specialized test equipment: ultra-low-noise coaxial cables of limited length and a signal conditioner for impedance matching, signal amplification and filtering. The use of PE accelerometers is pretty much straightforward. The accelerometer needs to be attached to a vibration source, which can be virtually anything: an electric motor, a mountain bike, the structure of a bridge, a washing machine, a car armature, a rocket engine etc. In order to perform vibration measurements properly, one has to consider many experimental issues such as mounting, temperature influences, cable fixture, grounding loops, amplifier settings and a few more. Information on this topic can be gathered from instruction manuals and application notes from different manufacturers. In order to perform accurate measurements, the instruments need to be calibrated. ## Calibration of a Piezoelectric Accelerometer As soon as one buys (very expensive) acceleration measurement equipment, the new instruments will be factory calibrated and the manufacturer will provide calibration certificates to the customer.
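The two rules of thumb above are easy to check numerically. Here is a minimal sketch in Python; the spring constant, seismic mass, capacitance and voltage below are illustrative placeholder values, not data for any real sensor:

```python
import math

# Undamped natural frequency of the spring-mass model: omega_n = sqrt(k/m).
k = 2.0e7          # spring stiffness in N/m (illustrative value)
m = 0.5e-3         # seismic mass in kg (illustrative value: 0.5 g)
omega_n = math.sqrt(k / m)        # angular natural frequency in rad/s
f_n = omega_n / (2.0 * math.pi)   # natural frequency in Hz
print(f"f_n = {f_n / 1e3:.1f} kHz")   # ~31.8 kHz, in the 30...50 kHz ballpark

# Charge on a capacitor: Q = C * U (the example from the text).
C = 100e-12        # capacitance: 100 pF
U = 0.1            # voltage: 0.1 V
Q = C * U          # charge in coulombs
print(f"Q = {Q * 1e12:.0f} pC")       # 10 pC
```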
A calibration certificate contains important information on how to establish the relationship between the input quantity (acceleration) and the output quantity (charge or voltage). This information is usually called the sensitivity of an accelerometer. The sensitivity of an accelerometer is determined during a process called calibration. According to JCGM 200:2012, the International Vocabulary of Metrology (VIM), a calibration is […] operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication. In other words: a calibration is a comparison between input and output quantities of any kind. The input quantity is provided by a well-known standard, the output quantity is provided by a device under test (DUT). In the case of a PE accelerometer, the input quantity is an acceleration, the output quantity is charge (or voltage if using a conditioning amplifier). Unfortunately – when buying surplus stuff – there is always a risk of getting either a defective or incomplete unit. The provided calibration certificates may be wrong or may have been lost. Some PE accelerometers are well over 50 years old and may have drifted over time. For my purposes, I'll have to skip the "measurement uncertainties" part for now because I want to test the accelerometers for their qualitative condition and functionality. I'll return to the metrology part in a future project. ## Description of the Calibration The calibration process is shown in a block diagram (Figure 5). In order to generate an acceleration $$a(t)$$, we need to set our Accelerometer Standard (REF) and the Device Under Test (DUT) in an oscillating motion. This is usually done with an electrodynamic exciter – a technical term for "shaker" or "loudspeaker". The working principle of an electrodynamic exciter is identical to the principle of the well-known loudspeaker. We're generating a low-distortion sinusoidal signal with a function generator which is fed into a power amplifier. The amplified signal drives the coil of the moving part inside the exciter which in turn creates the oscillating motion. Amplitude and frequency of the acceleration are set by the function generator and are typically in the ranges 10 Hz to 10 kHz and 1 m/s² to 200 m/s². The frequency and amplitude ranges depend strongly on the construction of the electrodynamic shaker and the total weight of the DUT and REF accelerometers. The generated motion is applied to both accelerometers, which are physically connected to each other. In this case, the DUT is mounted or screwed on the REF accelerometer in a so-called back-to-back or piggy-back configuration. Our goal is now to establish the relationship between the input and output quantities by calculating the acceleration and measuring the output voltage of the DUT measuring chain.
Basically, the charge sensitivity $$S_\mathrm{qa,DUT}$$ of the DUT can be calculated as follows: $$S_\mathrm{qa,DUT} = \cfrac{q_\mathrm{DUT}}{a} = \cfrac{u_\mathrm{DUT}}{u_\mathrm{REF}} \cdot S_\mathrm{qa,REF} \cdot \cfrac{G_\mathrm{uq,REF}}{G_\mathrm{uq,DUT}}$$ The equation might look scary and complicated but it's pretty straightforward: we're measuring the output voltages of both measuring chains and multiplying their ratio by the charge sensitivity of our reference accelerometer ($$S_\mathrm{qa,REF}$$). Afterwards we're multiplying the resulting expression by the ratio of the transfer functions of our charge amplifiers ($$G_\mathrm{uq}$$), which have to be determined by a different type of calibration. For the sake of completeness, I would like to mention that the sensitivities and transfer functions are in general complex values (e. g. $$\underline{S}_\mathrm{qa} = |S_\mathrm{qa}| \cdot \exp(\mathrm{j}\varphi_\mathrm{qa})$$) and we're dealing with the magnitude $$|S_\mathrm{qa}|$$ of the complex transfer function. I'll try to cover this in a future blog post. # Experimental Setup Since we need to perform the measurements over a wide set of frequencies, it is highly recommended to automate the task as much as possible. The instrument control, data acquisition and data analysis can be done with a PC. I'm using Python 3.9 with pyvisa, pandas and numpy on a Windows 10 machine. My accelerometer reference standard is a Kistler 8076K piezoelectric back-to-back type accelerometer. For the purpose of this experiment, I've tested two different PE accelerometers: Brüel & Kjaer 4371 and Endevco 2276, which are so-called "single-ended" accelerometers. Single-ended accelerometers can be mounted on top of a back-to-back type accelerometer and therefore calibrated by the comparison method. The vibrations are generated by a Brüel & Kjaer 4809 electrodynamic exciter which is connected to a Brüel & Kjaer Type 2706 power amplifier and an Agilent 33250A function generator. I've used two charge amplifiers for the accelerometers, namely Brüel & Kjaer Types 2650 (REF) and 2635 (DUT). They were connected to HP 34401A digital multimeters. AC voltage measurements were performed in "ACV mode" which outputs the root mean square (RMS) voltage of the respective measurement chain signal output. An oscilloscope can be used to monitor the output waveforms in order to detect unwanted noise and distortions. This is a small downside of RMS measurements: DC offsets and noise fully contribute to the measurement result. Setting up the devices wasn't very difficult. The electrodynamic exciter needs a stable and massive base along with adequate vibration isolation. If the vibration isolation is neglected, the vibrations are coupled into the desk and into the building. Using hard foam between the desk and the granite block proved to be very inexpensive and effective. The supports for the low-noise cables are also improvised. The cable mounting is a major source of experimental error. Due to the triboelectric effect, a bending or vibrating coaxial cable also generates charges which superimpose on the measured accelerometer signal. In short: the reference accelerometer measures a slightly higher acceleration than expected, and therefore the computed sensitivity drops, since $$S = q/a$$. This can be seen in the measurement results at frequencies below 25 Hz. At higher frequencies (e. g. > 25 Hz) the displacement amplitude of the vibration becomes very small and the triboelectric effect becomes negligible.
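To make the data flow concrete, here is a minimal sketch of the evaluation step in Python (the language used for the automation); the voltages, sensitivities and amplifier gains below are made-up placeholder numbers, not values from my setup:

```python
# Comparison calibration: evaluate S_qa,DUT from measured RMS voltages.
# All numeric values below are placeholders for illustration only.
u_dut = 0.512          # RMS output voltage of the DUT chain in V
u_ref = 0.498          # RMS output voltage of the REF chain in V
s_qa_ref = 0.3521      # charge sensitivity of the reference in pC/(m/s^2)
g_uq_ref = 1.000       # charge amplifier gain, REF chain, in V/pC
g_uq_dut = 1.000       # charge amplifier gain, DUT chain, in V/pC

# S_qa,DUT = (u_DUT / u_REF) * S_qa,REF * (G_uq,REF / G_uq,DUT)
s_qa_dut = (u_dut / u_ref) * s_qa_ref * (g_uq_ref / g_uq_dut)
print(f"S_qa,DUT = {s_qa_dut:.4f} pC/(m/s^2)")

# The acceleration amplitude itself follows from the reference chain:
a_rms = u_ref / (s_qa_ref * g_uq_ref)   # in m/s^2 (RMS)
print(f"a = {a_rms:.2f} m/s^2 (RMS)")
```

In practice this calculation would be repeated per excitation frequency, with the voltages read from the two HP 34401A multimeters over GPIB.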
I've used a torque wrench with 2.0 Nm and the contact surfaces were slightly lubricated in order to prevent deviations at higher frequencies (>5 kHz). # Measurement Results The calibration result can be seen on the left-hand side of Figure 6. The top curve represents the charge sensitivity of the DUT plotted vs. excitation frequency. There are some deviations in the frequency response which are really annoying, but I'm really satisfied with the overall result. Calculations of the relative deviation of the charge sensitivity at a reference frequency of 160 Hz can be compared with the data provided by the manufacturer. A 5% deviation at 6 kHz is in good agreement with the specifications! My measurements show even higher deviations at frequencies f > 6 kHz, so there must be some kind of systematic error which has to be investigated. Nevertheless, the measurement is automated and it takes approx. 5 minutes for a "sweep" of 31 discrete frequencies in the range from 10 Hz to 10 kHz. I've used standardized frequencies which are known as the third-octave series according to ISO 266. The bottom graph shows the acceleration amplitude over the frequency range. I'm ramping up slowly in order to minimize distortions. A limit of 20 m/s² is set for noise reasons in my apartment – the generated sine tones can be very annoying and I don't want to wear ear protection all the time. # Summary and Conclusion This project clearly is a success! It took much time and effort to get the experiments straight and to automate the measurements. I was able to perform a calibration of a piezoelectric accelerometer with decent-quality equipment. The results are "not bad", although I see much room for future improvements. I'll have to improve the measuring chains and eliminate noise sources. I'd like to improve the Python code and create a graphical user interface (GUI) for calibration purposes. Playing with an HP 3562A Dynamic Signal Analyzer was also very fun! I was able to dump the FFT measurement data (thanks to Delrin for his Python hint!) via GPIB and didn't have to rely on photographs of the display. A little downside of this instrument is its loudness and power consumption in the order of 400 W. I'll certainly use the signal analyzer during the winter months in order to heat my apartment 😉 I'm literally scratching the surface in the field of vibration measurements and the future will bring more interesting projects. The ultimate goal is to build a laser interferometer as an acceleration reference standard and to estimate the uncertainties of the built calibration devices. # References [1] Serridge and Licht, Piezoelectric Accelerometer and Vibration Preamplifier Handbook, Brüel & Kjaer, Naerum, Denmark, 1987 [2] Methods for the calibration of vibration and shock transducers – Part 21: Vibration calibration by comparison to a reference transducer, ISO 16063-21:2003 [3] Richtlinie DKD-R 3-1, Blatt 3 (Guideline DKD-R 3-1, Sheet 3: Calibration of accelerometers by the comparison method – sine and multisine excitation), Ausgabe 05/2020, Revision 0, Physikalisch-Technische Bundesanstalt, Braunschweig und Berlin. DOI: 10.7795/550.20200527
2022-11-27 14:58:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6013646125793457, "perplexity": 1092.4368975451453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710409.16/warc/CC-MAIN-20221127141808-20221127171808-00382.warc.gz"}
http://physics.stackexchange.com/questions/184846/killing-vectors-of-ads-space-with-the-metric-given-in-poincar%c3%a9-coordinate
# Killing vectors of AdS space with the metric given in Poincaré coordinates [closed] I am trying to solve this problem: Find the Killing vector corresponding to the scale-invariance symmetry of AdS(n+1), $$(t,{\bf x}) \rightarrow (at, a{\bf x})$$ where the metric of AdS is given in Poincaré coordinates: $$ds^2=\frac{1}{|{\bf x}|^2}(-dt^2+d{\bf x}\cdot d{\bf x})$$ I know that I have to solve the Killing equation, but how can I find the connection coefficients? - By connection coefficients, I assume you mean $\Gamma^\rho_{\mu\nu}$? Do you not know its definition in terms of the metric? –  Danu May 19 at 11:46
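For reference, here are the standard textbook definitions the comment alludes to (they are not part of the original thread). The connection coefficients follow directly from the metric, $$\Gamma^\rho_{\mu\nu} = \frac{1}{2} g^{\rho\sigma}\left(\partial_\mu g_{\sigma\nu} + \partial_\nu g_{\mu\sigma} - \partial_\sigma g_{\mu\nu}\right),$$ and a Killing vector $\xi$ satisfies Killing's equation, $$\nabla_\mu \xi_\nu + \nabla_\nu \xi_\mu = \partial_\mu \xi_\nu + \partial_\nu \xi_\mu - 2\Gamma^\lambda_{\mu\nu}\xi_\lambda = 0.$$ Differentiating the one-parameter family $(t,{\bf x}) \mapsto (at, a{\bf x})$ at $a=1$ gives the candidate generator $\xi = t\,\partial_t + x^i\,\partial_i$, which one can then check against Killing's equation using the Christoffel symbols of the Poincaré metric above.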
2015-05-26 12:06:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7152950167655945, "perplexity": 656.2839316273153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928831.69/warc/CC-MAIN-20150521113208-00139-ip-10-180-206-219.ec2.internal.warc.gz"}
https://itineraires-ceramique.com/looking-to-xhtub/difference-between-canonical-and-non-canonical-literature-c86daf
In Minterm, we look for the functions where the output results in "1", while in Maxterm we look for the functions where the output results in "0"; these are the two canonical forms of a Boolean function. Enterprises count on Canonical to support, secure and manage Ubuntu infrastructure and devices. In programming more broadly, canonical means "according to the rules": the term canonical is the adjective for canon, literally a 'rule', and has come to mean standard, authorized, recognized, or accepted. Take accounting systems: a canonical data model refers to a logical data model which is the accepted standard within a business or industry for a process or system.

Canonical and non-canonical clauses: various non-canonical clause constructions have been targets for prescriptivist prejudice. Strunk & White condemn existential clauses ("There's a man outside") and negative clauses (p. 19: "Put statements in positive form"), among other non-canonical clause types barred by Strunk's dicta. Before we focus on the negative reading of hardly, it is important to distinguish this adverb from its near-synonym barely, as well as from its use as a stand-alone response particle. Chinese is canonically an SVO language: Sun and Givón's survey of contemporary written and spoken Mandarin Chinese reports that over 90% of direct objects occurred in the canonical position after the verb, while the non-canonical SOV and OSV word orders, with bare objects placed in sentence-medial or sentence-initial positions, are also possible. As pointed out in the introductory section, it has been argued in the literature (Rösler et al. 1998, etc.) that processing non-canonical word order requires more memory resources (manifested as a sustained anterior negativity, usually left lateralized) than processing canonical sequences. A generally low performance could also be observed on an individual basis: Table 1 shows that the seven agrammatic subjects as a group performed very low on all of the non-canonical sentence types, as reflected by the mean values of correct responses (obj-questions: 2.7/40; obj-relatives: 6.6/40; passives: 9.6/40).

In fiction and literature, the canon is the collection of works considered representative of a period or genre. The collected works of William Shakespeare, for instance, would be part of the canon of western literature, since his writing and writing style has had a lasting influence. Understanding the canon can help readers recognize many cultural touchpoints used in everyday life. As adjectives, the difference between canonical and apocryphal is that canonical means present in a canon, religious or otherwise, while apocryphal means of, or pertaining to, the apocrypha; apocryphal is an antonym of canonical. As a noun, a canonical is (Roman Catholicism) the formal robes of a priest. The canonical gospels are part of the biblical canon and the apocryphal gospels are not: the canonical gospels were received by the churches of the East and the West as the genuine apostolic tradition in the generation immediately after the apostles. Again, the difference between the canonical and extracanonical gospels, when it comes to Jesus as the fulfilment of the scriptures and his effective atoning death on behalf of others, is stark. The Deuterocanonical books are every bit as much canonical as the protocanonical books, just as they are in the New Testament. One argument draws too thick a line between canonical and non-canonical texts, as if the elites confined their reading to only books of the canon and the average Christian delighted in secret, forbidden gospels. The development of the doctrine of angels in the apocalyptic literature of Judaism occurs chiefly in the non-canonical writings produced in the period c. 165 B.C. to A.D. 100 (Harold B. Kuhn, "The Angelology of the Non-Canonical Jewish Apocalypses", Asbury Theological Seminary); scholars are in general agreement in holding that these apocalypses date from that period. See also Michael S. Heiser, "The divine council in late canonical and non-canonical second temple Jewish literature".

In statistics, canonical analysis is a multivariate technique which is concerned with determining the relationships between groups of variables in a data set. The data set is split into two groups X and Y, based on some common characteristics, and the purpose of canonical analysis is then to find the relationship between X and Y, i.e. whether some form of X can represent Y; do you think that is a good way to do that? Thanks! A related question: what's the difference between the terms 'link function' and 'canonical link function'? A binary response variable can be modeled using many link functions such as logit, probit, etc., but logit here is considered the "canonical" link function; are there any (theoretical) advantages of using one over the other?

In biology, the difference between the canonical and non-canonical Wnt pathways is the presence or absence of β-catenin: the canonical Wnt pathway involves the multifunctional protein β-catenin, while the non-canonical pathway operates independently of it. A hallmark of canonical Wnt signaling pathway activation is the enhanced level of cytoplasmic β-catenin protein, and there is an established link between non-canonical Wnt signaling, RhoA regulation, cytoskeletal organization and NTDs. One study objective: canonical and non-canonical Wnt pathways are involved in the genesis of multiple tumors, but their role in pituitary tumorigenesis is mostly unknown, so the study evaluated gene and protein expression of Wnt pathways in pituitary tumors and whether these expressions correlate with clinical outcome. Another study demonstrated that TAMs mediate a "switch" between canonical and non-canonical Wnt signaling pathways in canine mammary tumors, leading to increased tumor invasion and metastasis; interestingly, similar changes in neoplastic cells were observed in the presence of macrophage-conditioned medium or live macrophages. As I understand it for microRNAs, non-canonical pathways are those that deviate from the canonical paradigm, or that derive from alternative biogenesis pathways and only partially meet the classical definition; sometimes "non-canonical pathways" simply means alternative, less well-known pathways. A practical aside from structural biology: "Good evening, I am experiencing an odd behaviour with the cartesian_ddg application (Rosetta version 3.11) when trying to specify multiple simultaneous mutations to non-canonical residues in the mut_file."

In physics, neither the canonical momentum $\hat p=-i\hbar\nabla$ nor the kinetic momentum $\hat{P}=-i\hbar\nabla-q\vec{A}$ is a …; a significant point about them is that the kinetic momentum is a gauge-invariant quantity, while the canonical momentum depends explicitly on the gauge choice. In cosmology, we look for potential observational degeneracies between canonical and non-canonical models of inflation of a single field. In chemistry, a new canonical model, which is a force-based approach with a basis in fundamental molecular quantum mechanics, confirms much earlier assertions that in fact there are no fundamental distinctions among covalent bonds, ionic bonds, and intermolecular interactions including the hydrogen bond, the halogen bond, and van der Waals interactions. In chemical kinetics teaching, SIM9 highlights the difference between canonical and non-canonical forms and between average and instantaneous rates (update: a simple circuit analogy and interactive simulation are available here; please refer to SIM8 for now; the math for these plots will be posted at a later time).

In computing, the difference between a canonical and an absolute file path is that there is only one canonical path to a file, while there can be many absolute paths to a file (depending on the system). For instance, on a Unix system, /usr/local/../bin is the same as /usr/bin; getCanonicalPath() resolves those ambiguities and returns the (unique) canonical path. Similarly for DNS: the chief difference between a CNAME record and an ALIAS record is not in the result (both point to another DNS record) but in how they resolve the target DNS record when queried. As a result of this difference, one is safe to use at the zone apex (e.g., a naked domain such as example.com) and the other is not.
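The canonical-versus-absolute path distinction is easy to demonstrate. The discussion above is about Java's getCanonicalPath(), but the same idea can be sketched with Python's os.path functions (the /usr/local directory is just the example location used above):

```python
import os

# An "absolute" path may still contain ".." segments or symlinks;
# the "canonical" path is the unique, fully resolved form.
p = "/usr/local/../bin"

print(os.path.isabs(p))        # True: it is already an absolute path
print(os.path.normpath(p))     # "/usr/bin": ".." segments collapsed
print(os.path.realpath(p))     # canonical form: also resolves symlinks
```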
2021-06-22 10:47:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38601264357566833, "perplexity": 4193.343549634633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488517048.78/warc/CC-MAIN-20210622093910-20210622123910-00291.warc.gz"}
https://jp.maplesoft.com/support/help/Maple/view.aspx?path=MmaTranslator%2FMma%2FDirectoryName
DirectoryName - Maple Help

MmaTranslator[Mma][DirectoryName]: returns the directory name from a file path

Calling Sequence: DirectoryName(file path)

Parameters: file path - string specifying the path to a file

Description
• The DirectoryName command returns a string that gives the directory name from a file path.

Examples
> $\mathrm{with}\left(\mathrm{MmaTranslator}\left[\mathrm{Mma}\right]\right):$
Use the command with the Maple translation.
> $\mathrm{DirectoryName}\left("/home/UserName/file.ext"\right)$
${"/home/UserName/"}$ (1)
> $\mathrm{DirectoryName}\left("C:\\Users\\MapleUser\\file.ext"\right)$
${"C:\Users\MapleUser\"}$ (2)
Alternatively, you can use the FromMma command with the evaluate option specified.
> $\mathrm{with}\left(\mathrm{MmaTranslator}\right):$
> $\mathrm{FromMma}\left(\mathrm{DirectoryName\left[ "C:\Users\MapleUser\file.ext" \right]},\mathrm{evaluate}\right)$
${"C:\Users\MapleUser\"}$ (3)

Compatibility
• The MmaTranslator[Mma][DirectoryName] command was updated in Maple 2017.
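For comparison only (this is not part of the Maple documentation), the closest general-purpose analogue is Python's os.path.dirname. Note that the Maple command keeps the trailing separator while Python's does not, so a small sketch for POSIX paths has to add it back:

```python
import os.path

# Maple's DirectoryName("/home/UserName/file.ext") returns "/home/UserName/".
# os.path.dirname drops the trailing separator, so we restore it for parity.
def directory_name(path: str) -> str:
    head = os.path.dirname(path)
    return head + "/" if head and not head.endswith("/") else head

print(directory_name("/home/UserName/file.ext"))  # -> /home/UserName/
```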
2022-09-29 14:54:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9482740163803101, "perplexity": 5221.5265730936435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00569.warc.gz"}
https://www.seio.es/beio/biased-randomized-algorithms-and-simheuristics-in-finance-insurance/
# Biased-randomized algorithms and simheuristics in finance & insurance

# Abstract

Managerial decisions in the area of finance and insurance can often be modeled as combinatorial optimization problems. It is also frequent that these optimization problems fall into the category of NP-hard ones, which justifies the need for using metaheuristic algorithms when tackling large-sized instances. In addition, decision-making in real-life financial & insurance activities is usually performed in scenarios under uncertainty. Hence, stochastic versions of the aforementioned NP-hard problems have to be considered, and simulation-optimization methods are required in order to obtain high-quality solutions. This paper analyzes how biased-randomized techniques (which transform greedy heuristics into probabilistic algorithms) and simheuristics (hybridization of simulation with metaheuristics) can be employed to efficiently cope with a variety of challenging optimization problems, even those under uncertainty scenarios.

Keywords: Biased-Randomized Algorithms, Finance, Insurance, Metaheuristics, Optimization, Simheuristics.

AMS Subject classifications: 90-10, 90B50, 90B99, 68W20, 68T20.

# Introduction

Numerous managerial challenges in the areas of finance and insurance (F&I) can be modeled as combinatorial optimization problems. Traditionally, exact methods have been employed in determining optimal solutions to these problems. This is the case, for instance, of the classical Markowitz model (), which minimizes the risk associated with a portfolio of assets while establishing a minimum threshold for its return value. Exact methods, however, present certain limitations when solving large-sized portfolio optimization problems with richer and real-life constraints (e.g., investor preferences, cardinality restrictions, market frictions, investment bank policies, etc.), which easily become NP-hard in nature. Under these circumstances, many analytical methods require either the use of simplifying assumptions or extraordinarily long computing times. These limitations call for the introduction of metaheuristic algorithms (), which do not guarantee optimal solutions but allow us to achieve near-optimal ones in reasonably short computing times (). Recent reviews on the applications of metaheuristics in the F&I arena are provided in and .

In addition to the difficulties already mentioned, uncertainty plays a relevant role in many real-life F&I applications. Hence, it is not surprising that some components of the optimization problem (e.g., investment returns, currency fluctuations, or inflation rates) are better modeled as random variables, or that the mathematical model makes use of probabilistic constraints (e.g., requesting a minimum level of investment return with a user-defined probability). Solving these stochastic versions of NP-hard and large-scale optimization problems can be troublesome and usually requires the use of simulation-optimization methods ().

We analyze how biased-randomized algorithms (BRAs) () and simheuristics () can be employed to efficiently cope with a variety of challenging optimization problems in the F&I field. While the former support massive parallelization and can be used to generate high-quality solutions to deterministic versions of rich optimization problems, the latter can be employed to solve stochastic versions of the same optimization problems. To some extent, both methodologies combine simulation principles with heuristic algorithms.
However, while biased-randomization techniques () make use of Monte Carlo simulation to induce an oriented (non-uniform) random behavior in a constructive heuristic (which can also be complemented with different local search procedures and encapsulated inside a multi-start framework following ), simheuristics deal with uncertainty by integrating a simulation model (of any type) inside a metaheuristic framework (). Both approaches have been successfully employed to solve challenging optimization problems, especially in the areas of transportation & logistics as well as manufacturing & production. However, this paper focuses on analyzing their potential in the F&I area. To achieve this goal, the paper reviews recent works on F&I applications of biased randomization and simheuristics, and infers from their particular results a more general knowledge that covers different optimization problems.

The remainder of the paper is structured as follows: Section 2 provides an updated overview of the applications of metaheuristic algorithms in the finance and insurance fields. Section 3 introduces the fundamental concepts behind biased-randomization techniques, which allow us to extend a constructive heuristic into a probabilistic algorithm, while Section 4 reviews some recent applications of BRAs in the financial area. A similar strategy is followed in Sections 5 and 6 for the concept of simheuristics. Section 7 discusses the manager's perspective on how these Operations Research methods can support efficient decision-making in the area. Finally, Section 8 highlights the main findings and contributions of this work and concludes it.

# Metaheuristics in Finance & Insurance

Metaheuristics are a class of versatile numerical methods that are conceptually simple, easy to implement, and require relatively little computational time, making them attractive for problem-solving in knowledge areas in which real-time decisions are required. The fast-paced nature, as well as the extraordinary internationalization and integration, of financial markets and institutions has made the decision-making process more complex, and increasing regulation of the sector has added a non-negotiable set of constraints for practitioners, calling for problem-solving approaches that can model these rich optimization problems in banks, central banks, institutional investors, and insurance firms. Overviews of financial problems that have been solved using metaheuristics are provided in and . In essence, many financial optimization problems can be modeled as enriched variants of the classical portfolio optimization problem () and include rich portfolio optimization, index tracking and its enhancement, credit risk assessment, stock investments, financial project scheduling, option pricing, feature selection, as well as bankruptcy and financial distress prediction.

From an institutional standpoint, a second target area for metaheuristic applications evolved: asset and liability management (ALM). ALM is concerned with the optimal allocation of assets and liabilities in a way that not only allows for liabilities to be covered at all times, but also for long-term profit maximization. In a way, ALM serves as the strategic umbrella framework for the operative decisions in portfolio optimization and in the credit risk assessment of individual transactions. Most recently, a clustering-based review identifies the main clusters of research interest in portfolio optimization ().
A main finding is that once large instances with complex constraints are the subject of optimization, metaheuristics are a popular approach. However, there is still a discrepancy between practitioners' demands and the state of the art in optimization (). The inclusion of more realistic constraints and components, as well as the reduction in computing times achieved through the application of metaheuristics, has broadened the interest in this research field within the F&I community, indicating that a possible gap between theory and practical applications is narrowing.

# Biased-Randomized Algorithms

Greedy constructive heuristics are iterative procedures that build a solution from a list of possible candidate movements, which is sorted according to previously specified criteria (e.g., profit, savings, costs, etc.). These algorithms are deterministic, as they construct the same solution at repeated executions. The construction process is based on the list item which yields the best short-term solution component at each step (i.e., the process relies on selecting, at each step, the solution-building item that improves the objective value of the incumbent solution as much as possible). This results in a poor exploration process, unless more complex search techniques, such as local searches or perturbation movements, are incorporated into the solution-building process, resulting in an increase in computing times. Well-studied examples of such heuristics include the savings heuristic for the vehicle routing problem (), the path-scanning heuristic for the arc routing problem (), and the NEH heuristic for the flow-shop problem ().

As some authors argue, better solutions are generated through a process called biased randomization (). It consists of using a skewed probability distribution to assign a weighted probability of selection to each item in the sorted list. The skewness ensures that the more promising items at the top of the list are more likely to be selected, but also guarantees that slightly differing solutions, which are still based on the construction logic of the underlying heuristic, are generated once the algorithm is executed multiple times. As more alternative biased-random variations are generated in this manner, the chance that some of the "near-greedy" solutions outperform the one generated by the deterministic greedy heuristic increases. The following pseudo-code describes a basic BRA:

bestSol $$\leftarrow$$ execute the deterministic (greedy) heuristic
while the stopping criterion is not met do
    newSol $$\leftarrow$$ build a solution with the biased-randomized heuristic
    if newSol outperforms bestSol then
        bestSol $$\leftarrow$$ newSol
return bestSol

It is important to note that this approach ensures a broad exploration of the solution space. Biased randomization can be seen as a natural extension of the basic greedy randomized adaptive search procedure (GRASP) (). Whereas the use of empirical probability distributions requires the time-consuming fine-tuning of parameters, the benefit of employing a theoretical probability distribution (e.g., geometric or decreasing triangular) lies in the possibility of quickly generating different variations with few and easy-to-set parameters. Figure 1 illustrates the effect of setting the values of the parameter of a geometric probability distribution ($$p \in \{0.3, 0.7\}$$) on the assigned selection probabilities of the elements of the sorted list during the iterative construction of a biased-randomized solution. Thus, for $$p = 0.7$$, the probability of being selected next is much higher for those items at the top of the list, bringing the behavior closer to that of the classical heuristic.
At its extreme ($$p \rightarrow 1$$), it performs the construction in the same way as the greedy heuristic. At the other extreme ($$p \rightarrow 0$$), perfect diversification would be achieved, rendering the ordering of the list superfluous. Every parameter value between those two extremes yields a different degree of randomization. Usually, a trade-off between preserving the original sorting logic and introducing some degree of randomization, obtained by choosing a parameter in the middle of the two extreme cases, will yield the most promising results.

# BRAs in Finance & Insurance

As previously shown, the sorting logic in a greedy heuristic may only incompletely capture the factors influencing the quality of the solution. In order to explore the wider search space, randomization might be introduced to capture any effects the modeler might be unaware of. While biased-randomization algorithms have been increasingly employed in production (), logistics (), and transportation (), the evaluation of financial and insurance products is a relatively new field of application.

A richer and more realistic version of the portfolio optimization problem is introduced in . The authors develop an original algorithm (ARPO) to address it, based on the combination of iterated local search (LS), quadratic programming (QP), and a biased-randomization strategy. During the portfolio construction, a new asset is introduced based on a compatibility criterion, namely its covariance with the assets already in the portfolio solution. This is expected to favor portfolio diversification, thus reducing the portfolio risk. Randomization is introduced in the construction of the solution by assigning each candidate a probability of being selected based on a geometric distribution with parameter $$\beta$$. For illustrative purposes, some of the results obtained in are summarized in Figure 2. These results show that the ARPO algorithm (LS+QP in the figure) is able to provide the same or even better results than other state-of-the-art approaches, including the FD+QP and SD+QP algorithms proposed in , the GA+QP algorithm introduced in , and the TS algorithm described in . Not only that, but ARPO is able to achieve these competitive results in the order of seconds, while other approaches report times in the order of minutes.

Parametric catastrophe insurance is a transparent instrument that transfers the financial risk of experiencing a natural catastrophe, such as an earthquake. For seismic activity, the decision on whether or not a payment will be made can depend on location and a magnitude threshold. This setting was subjected to different statistical and machine learning techniques in . The definition of the specific magnitude threshold at a given location that maximizes efficiency for the insured, subject to a budget constraint, is researched in . The heuristic proceeds by sequentially lowering the threshold of the particular location that leads to the greatest increase in efficiency in relation to the increase in the trigger rate, while still maintaining all constraints. Biased randomization is introduced by assigning an individual probability of being selected to each location cube, based on a geometric distribution. The parameter that determines the trade-off between diversification and retaining the original sorting logic was found to be most effective for low diversification.
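To make the selection mechanism concrete, here is a minimal Python sketch, not taken from any of the cited papers, of biased-randomized construction with a geometric selection rule (the parameter value and candidate scores are illustrative):

```python
import random

def biased_choice(sorted_candidates, p=0.3):
    """Pick one candidate with geometrically decreasing probability.

    Position k in the sorted list is selected with probability
    proportional to p * (1 - p) ** k, so items near the top are
    favored, yet every item keeps a nonzero chance.
    """
    weights = [p * (1 - p) ** k for k in range(len(sorted_candidates))]
    return random.choices(sorted_candidates, weights=weights, k=1)[0]

def biased_randomized_construction(candidates, score, p=0.3):
    """Build one solution by repeatedly drawing from the sorted list."""
    remaining = sorted(candidates, key=score, reverse=True)
    solution = []
    while remaining:
        pick = biased_choice(remaining, p)
        solution.append(pick)
        remaining.remove(pick)
    return solution
```

With $$p$$ close to 1 the construction collapses to the greedy heuristic, while $$p$$ close to 0 approaches uniform sampling, mirroring the two extremes discussed above.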
Returning to the parametric insurance study, the authors went one step further and saved the partial solutions from each step as a means to restart the algorithm with more meaningful initial solutions, which were then subjected to the same procedures, thus producing higher-quality initial and, in turn, final solutions.

# Fundamentals of Simheuristics

Financial markets are the epitome of uncertainty, characterized by random returns, noisy covariances, and modeling based on retrospective sample statistics (). It is thus a logical extension of metaheuristic approaches to consider combinations with simulation techniques to address stochasticity in one or more components of a combinatorial optimization problem. This can be the introduction of probabilities instead of rigid constraints (e.g., returns that must be achieved with a given probability), the consideration of stochastic objective functions (e.g., random revenues), or a combination thereof. Given a set of $$n$$ assets, an example of a stochastic portfolio optimization problem is given next:

$\min \displaystyle f(x) = \Theta \left[ \sum_{i=1}^{n}\sum_{j=1}^{n} S_{ij} x_i x_j \right] \qquad(1)$

subject to:

$\sum_{i=1}^{n} x_i = 1 \qquad(2)$

$P \left( \sum_{i=1}^{n} R_i x_i \geq r \right) \geq p \qquad(3)$

$0 \leq x_i \leq \delta_i, \quad \forall i \in \{1,2,\ldots,n\} \qquad(4)$

$x_i \in [0,1], \quad \forall i \in \{1,2,\ldots,n\}. \qquad(5)$

As declared in Equation (5), $$x_i \in [0,1]$$ represents the weight or fraction of the investment allocated to asset $$i$$, $$\forall i \in \{1,2,\ldots,n\}$$. Likewise, $$S_{ij}$$ represents the stochastic covariance of assets $$i$$ and $$j$$, while Equation (1) aims at minimizing the investment risk expressed as a function of the stochastic covariance in the portfolio (e.g., $$\Theta$$ could represent the expected value of the aggregated covariance in the portfolio, or any other statistic that the manager wishes to minimize). Equation (2) simply states that all the available budget is used to build the portfolio. Equation (3) is a probabilistic constraint stating that the probability of obtaining at least a return value of $$r > 0$$ is greater than a value $$p \in (0, 1)$$ (both $$r$$ and $$p$$ exemplify parameters defined by the investor). Here, $$R_i$$ refers to a random variable modeling the return associated with asset $$i$$, $$\forall i \in \{1,2,\ldots,n\}$$. Finally, Equation (4) imposes an additional threshold on the maximum quantity that can be invested in each individual asset. Notice that other realistic constraints might appear, such as: (i) minimum and maximum values for the number of assets to be included in the portfolio; (ii) a threshold on the minimum quantity that can be invested in any given asset if it is selected; or (iii) a subset of mandatory assets that need to be included in the portfolio. These rich constraints make the optimization problem NP-hard, even in its deterministic version (). Like the other methodological approaches introduced in this paper, this simulation-optimization approach is heuristic: it does not guarantee finding the optimum, but it will find a robust, high-quality solution.
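As a small illustration of how a probabilistic constraint such as (3) can be checked by simulation (the two-asset distributions and numbers below are invented for the example), the left-hand side can be estimated by Monte Carlo sampling:

```python
import random

def return_probability(x, sample_returns, r, runs=10_000):
    """Monte Carlo estimate of the left-hand side of constraint (3),
    i.e., P(sum_i R_i * x_i >= r), for a fixed weight vector x."""
    hits = sum(
        1
        for _ in range(runs)
        if sum(xi * ri for xi, ri in zip(x, sample_returns())) >= r
    )
    return hits / runs

# Illustrative check with two assets and invented normal returns:
draw = lambda: [random.gauss(0.08, 0.10), random.gauss(0.05, 0.02)]
print(return_probability([0.4, 0.6], draw, r=0.03))
```

A simheuristic would apply such a check to each promising portfolio proposed by the metaheuristic in order to test its feasibility under uncertainty.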
Simheuristic approaches rely on two important assumptions. Firstly, the stochastic version of an optimization problem can be considered a generalization of the deterministic one, given that the deterministic problem is one particular instance, in which the variance of the stochastic variables equals zero. Secondly, it is assumed that in cases with moderate uncertainty, the metaheuristic approaches used to solve well-studied deterministic optimization problems yield high-quality solutions that are likely also good-quality solutions for the stochastic formulation. This suggests that the intuitive approach of extending an existing metaheuristic framework with simulation techniques to account for added uncertainty should, with reasonable confidence, yield satisfactory results. However, in environments with extreme uncertainty levels, the classical aim of maximizing traditional economic measures (e.g., maximizing return on investment) may lead to extremely diverse individual outcomes, so it might be reasonable to focus the search instead on finding solutions that are robust in the face of increased uncertainty.

In conclusion, in environments with low to medium levels of uncertainty, the approach can be summarized as follows: (i) determine the deterministic version of a stochastic problem by replacing all stochastic variables by their expected values; and (ii) develop a metaheuristic framework and iteratively explore the solution space efficiently. This should yield a set of promising solutions. The algorithm must also evaluate both the quality and the feasibility of these solutions under uncertainty. Simulation methods offer the possibility to model each random variable using a theoretical or empirical best-fit probability distribution, so as not to depend on the assumption of normal or exponential behavior.

A feedback cycle between the metaheuristic and the simulation component follows this logic. In a first step, the promising solutions from the deterministic environment are sent to the simulation for a quick evaluation employing a reduced number of replication runs. This serves two main purposes: on the one hand, promising solutions for the stochastic problem can be ranked; on the other, the information from a promising solution can provide feedback to the metaheuristic to further and more intensively explore a certain part of the search space. The extensive search by the metaheuristic is possible because the computational effort of the initial simulation, through the reduced number of replications, is kept to a minimum. In a second simulation stage, estimates of higher accuracy and precision are obtained, via more extensive simulation runs, only for the most promising solutions found (). The simulation-optimization process in a simheuristic is summarized in Algorithm 3, and a compact sketch is also given below.

It was already established that the inclusion of uncertainty also introduces the new dimension of robustness into the decision-maker's considerations from a risk management standpoint. With a stochastic objective function, she might be interested in comparing a set of solutions with similarly high expected values with regard to the probability distribution of said value. Simulation runs can be employed to deduce information about the probability distribution of the quality of a solution. This capability of additional risk analysis, achieved through the natural combination of identifying a wide spectrum of promising solutions in the metaheuristic component and then evaluating them during the simulation stage, is a major strength of simulation-based approaches in general, and of simheuristics in particular.
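As a rough illustration of this two-stage feedback cycle (the function names and replication budgets are placeholders, not the exact procedure of any cited work), a minimal Python skeleton might look as follows:

```python
import statistics

def simheuristic(candidate_solutions, simulate, n_elite=10,
                 quick_runs=20, thorough_runs=1000):
    """Two-stage simheuristic skeleton for a minimization problem.

    candidate_solutions: iterable of solutions proposed by the
        metaheuristic for the deterministic version of the problem.
    simulate(sol): one random observation of the stochastic objective.
    """
    # Stage 1: cheap screening of every candidate with few replications.
    screened = []
    for sol in candidate_solutions:
        quick = statistics.mean(simulate(sol) for _ in range(quick_runs))
        screened.append((quick, sol))
    screened.sort(key=lambda pair: pair[0])
    elite = [sol for _, sol in screened[:n_elite]]

    # Stage 2: accurate evaluation of the elite set only.
    results = []
    for sol in elite:
        obs = [simulate(sol) for _ in range(thorough_runs)]
        # Mean and spread feed the robustness analysis discussed above.
        results.append((statistics.mean(obs), statistics.stdev(obs), sol))
    return min(results, key=lambda triple: triple[0])
```

In a fuller implementation, the quick Stage 1 estimates would also be fed back to bias the metaheuristic's search, as described above.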
Another aspect to consider is the potential additional use of the best solution found by the metaheuristic for the deterministic version of the optimization problem. In many real-life systems, increasing the uncertainty level might generate additional costs that will eventually increase the overall expected cost of the system. Thus, for instance, increasing the variance in random variables such as investment returns or future incomes might lead to random observations falling short of the allowed minimum return, or failing to guarantee the coverage of future liabilities, thus causing penalty costs. In those cases, it is possible to use the value $$det(s^*)$$ of the near-optimal solution $$s^*$$ for the deterministic version of the problem as a lower bound for the value $$stoch(s^{**})$$ of the optimal solution $$s^{**}$$ for the stochastic version. Whenever $$s^*$$ is applied in a stochastic environment with the goal of minimizing costs, its value $$stoch(s^*)$$ is an upper bound for the value of the optimal solution of the stochastic version, i.e.: $$det(s^*) \leq stoch(s^{**}) \leq stoch(s^*)$$.

Figure 4 compares different simulation and optimization methods and their performance with respect to five considered dimensions: (i) capacity to generate optimal solutions (optimality); (ii) flexibility in modeling complex systems (modeling); (iii) capacity for modeling uncertainty (uncertainty); (iv) computing time required to provide the requested output (computing time); and (v) capacity for dealing with large-size instances (scalability). Guaranteed optimality of a solution can solely be achieved through exact methods, which, however, might require unreasonable computing times for large-scale NP-hard problems. Metaheuristics address this and can find near-optimal solutions for these large-scale NP-hard problems in relatively short computing times, but fail to accurately depict the intricacies of system interactions, particularly when uncertainty is involved. Individually considered, simulation methods offer a plethora of techniques to model uncertainty, but they lack the optimization capabilities of exact and metaheuristic methods. Thus, by extending the strengths of metaheuristics with the uncertainty-modeling capabilities of simulation techniques, simheuristics perform well across all five dimensions, and they can also outperform exact methods for large-scale instances of NP-hard optimization problems with regard to computing times and scalability ().

# Simheuristics in Finance & Insurance

As previously mentioned, heuristics and metaheuristics have been shown to be an excellent method for solving various F&I problems of interest very precisely and with a very tight use of computational resources. However, the conceptual framework of the financial world is essentially driven by uncertainty and thus by stochasticity. Therefore, we should not settle for a deterministic solution if it does not remain reasonable under uncertainty. On the one hand, focusing on our problem from a deterministic point of view limits our ability to model reality, since the solutions to complex situations can escape optimization approaches that do not consider uncertainty. On the other, all variables that come into play in a financial system have random behavior. Therefore, metaheuristic algorithms must necessarily be complemented with simulation techniques in the F&I field. Only then will we be able to obtain solutions that respond efficiently to realistic situations.
In the scientific literature, one can find excellent examples of the success of these combined techniques, which cover a wide spectrum. For example, a project portfolio selection problem is analyzed in . A series of restrictions is established, such as a minimum budget for each project, the forced selection of specific projects, minimum and maximum numbers of selected projects, and random behavior associated with the generated cash flows. The solution to the problem is obtained through the application of a simheuristic based on a variable neighborhood search metaheuristic. Similarly, a stochastic version of the classical portfolio optimization problem is analyzed in . The authors adopt a realistic assumption of uncertainty surrounding the inputs. In particular, it is considered that both the returns observed in the past and the correlations between financial assets follow a stochastic behavior, which is modeled by incorporating noise. A metaheuristic algorithm is applied to generate promising candidate solutions, which are then processed by a simulation component in order to obtain Pareto non-dominated solutions.

A novel ALM model is introduced in . Here, the optimal asset-liability assignment of an insurance firm is investigated by efficiently aggregating fixed-income assets to match the outstanding long-term liabilities, so that the firm's overall benefit at the end of the planning horizon is maximized. Uncertainty is incorporated on both the asset and liability sides, which invites the use of simulation together with a biased-randomized heuristic. A safety margin and a minimum reliability threshold are also considered. Likewise, another study defines a multi-period portfolio optimization problem in which obligations are added over time, so that it can be understood and formulated as an ALM problem where assets are represented by equities (). A simheuristic algorithm determines which purchases and sales must be made in the future to fulfill the obligations and to maximize the terminal wealth, given a specific level of risk aversion. The facts that assets on the balance sheet may never be negative and that prices evolve randomly are tackled by incorporating simulation in the evaluation of the objective function, while the optimization component is based on a genetic algorithm (GA) (). Figure 5 summarizes some of the results obtained in . One of the key performance indicators considered, the ratio between deviation and utility, can be improved (reduced in this case) by incorporating a simheuristic layer into an already existing GA.

# Managerial Insights

Financial institutions have the purpose of efficiently managing the economic resources that are available to them. This purpose has two implications. On the one hand, the fact that they manage financial resources requires the search for the maximum possible profitability. On the other hand, the fact that they are financial institutions implies that these resources have an external origin, that is, the entity must answer to third parties for the outcome of the management's decisions. This is common to the three main areas of the sector: banking, collective investment institutions or mutual funds, and insurance companies. The problem of integrating assets and liabilities in management is a recurring one whose first analysis dates back to 1938 ().
Hence, some of the first studies are based on the duration of a cash flow, that is, on its time-weighted average. The underlying idea is that, in the event of a slight change in interest rates, the present value of the cash flow basically depends on the duration. Therefore, if the assets and liabilities coincide in their duration, the final value of the balance sheet is immune to disturbances in the interest rate. This approach, although still used today, is far from comprehensive enough when obligations to third parties have to be met. It can be regarded as a solution for the valuation of the entity, but not for its management. Furthermore, the market rate has shown tremendous volatility in the long term. For this reason, over the last decades, alternatives that allow for active management in a stricter sense have been emerging. For example, in the case of ALM we talk about matching cash flows, that is, what financial plan the manager should follow to be able to cover her obligations to third parties and also obtain the maximum possible return.

The challenges in balancing both objectives are quite varied. The term is always a challenging issue, because the maturities available in the capital market are usually shorter than the obligations acquired by financial institutions, and especially by insurers. The capital market also has a relevant restriction: liquidity. This is a challenge when looking for the best options to match the returns on the investments with the liabilities in the medium and long term. The credit quality of fixed-income issuers, or its counterpart in equities (volatility), constitutes a stochastic condition that must be taken into account. It makes no sense to try to allocate investments with a certain degree of uncertainty to obligations with minimum guaranteed conditions. On the contrary, one should match the degree of certainty or uncertainty of investments and obligations. A similar argument can be made for other financial challenges, like the portfolio optimization problem or, in general, any risk management problem.

There is another actor in this puzzle: the legislator. Naturally, financial entities are subject to strict control by supervisory bodies. In Europe, this role is played by the European Insurance and Occupational Pensions Authority (EIOPA) in the area of insurance operations, and by the European Banking Authority (EBA) in the area of banking and investment. Each regulator has its own requirements and limitations, and although it is not disputed that they all respond to the political or socio-economic needs of the market they regulate, from the point of view of building a model they can seem very capricious. The legislator also adds an additional uncertainty: it is not stable over time and it is not predictable. Finally, another element to consider is the agreed conditions that define our obligations, i.e., the commercial conditions or the requirements defined by the risk profile of our client.

In short, when it comes to solving a management problem of this nature, we find ourselves with many limitations of a practical nature. Most deterministic models are too simple and will not respond to these real-life needs. At the same time, some classical solving techniques are not capable of solving stochastic models that meet a good part of our needs. It seems obvious that the only way out is to settle for a technique that, although it does not give us the exact value that we can consider optimal, can provide us with an operationally good approximation.
In fact, in the financial market, having a high-quality result in a short computing time can be considered a near-optimal situation, since this market changes every second. Recent advances in heuristic optimization combined with simulation seem promising, since these techniques allow managers to propose realistic models that can solve a large part of the real-life challenges, thus guaranteeing efficient management. As a result, managers can reduce the typical transaction costs due to portfolio re-adjustments, as well as risks that otherwise cannot be avoided. In particular, the development of simheuristics in the F&I field can improve the overall efficiency of the sector, since the legislator could eliminate certain restrictions that are only explained by the absence of reliable calculation methods. It is evident that the existence of optimization-simulation methods capable of dealing with rich and real-life F&I challenges allows the legislator to impose less severe conditions on the firms.

# Conclusions & Future Work

This paper has discussed how biased-randomized algorithms and simheuristics are increasingly being used in financial applications. Biased-randomization techniques allow us to easily transform a greedy heuristic into a probabilistic algorithm, which is achieved by employing a skewed probability distribution. These randomized algorithms can then be run in parallel to obtain high-quality solutions, in short computing times, even for challenging optimization problems. They can also be employed inside more complex multi-start or metaheuristic frameworks if more time is available to perform the computations. In addition, simheuristics allow managers to include uncertainty in their optimization models. This is accomplished in a natural way by integrating simulation inside a metaheuristic framework. Both methodologies can also be used together, and many published simheuristics also employ biased-randomized strategies.

Some of the financial challenges where the aforementioned methodologies have been employed so far include rich and stochastic versions of the portfolio optimization problem and the asset-liability management problem. In different computational experiments, the benefits of using these optimization-simulation approaches have been shown. In particular, when combined with parallel computing, biased-randomized algorithms can be used to quickly generate high-quality solutions in situations in which dynamic conditions demand re-optimizing the problem every now and then (agile optimization in ). Likewise, simheuristics can provide noticeable improvements over a more classical approach in which optimal or near-optimal solutions to the deterministic version of an optimization problem are applied in a real-life situation where stochastic uncertainty is present. Since many real-life financial challenges can be related to portfolio optimization problems, risk management problems, and asset-liability problems, there is yet a vast area to cover regarding the use of the proposed methodologies.
In particular, some future research lines for scientists and practitioners working at the intersection between Finance and Operations Research are described next: (i) both biased-randomized algorithms and simheuristics can be combined with machine learning methods to tackle financial optimization problems with dynamic inputs (e.g., dynamic correlations between assets that might depend upon the current status of the portfolio), thus leading to learnheuristics (); and (ii) simheuristics can also be combined with fuzzy logic, so that they consider not only stochastic uncertainty but also uncertainty of a non-stochastic nature (), which might be really useful for including expert predictions in optimization models.

## Acknowledgments

This work has been partially supported by the collaboration agreement between Divina Pastora Seguros and the Universitat Oberta de Catalunya.

## References

Almouhanna, A., C. L. Quintero-Araujo, J. Panadero, A. A. Juan, B. Khosravi, and D. Ouelhadj. 2020. "The Location Routing Problem Using Electric Vehicles with Constrained Distance." Computers & Operations Research 115: 104864.
Bayliss, C., R. Guidotti, A. Estrada-Moreno, G. Franco, and A. A. Juan. 2020. "A Biased-Randomized Algorithm for Optimizing Efficiency in Parametric Earthquake (Re) Insurance Solutions." Computers & Operations Research 123: 105033.
Bayliss, C., M. Serra, A. Nieto, and A. A. Juan. 2020. "Combining a Matheuristic with Simulation for Risk Management of Stochastic Assets and Liabilities." Risks 8 (4): 131.
Better, M., F. Glover, G. Kochenberger, and H. Wang. 2008. "Simulation Optimization: Applications in Risk Management." International Journal of Information Technology & Decision Making 7 (04): 571–87.
Calvet, L., J. de Armas, D. Masip, and A. A. Juan. 2017. "Learnheuristics: Hybridizing Metaheuristics with Machine Learning for Optimization with Dynamic Inputs." Open Mathematics 15 (1): 261–80.
Calvet, L., M. Lopeman, J. de Armas, G. Franco, and A. A. Juan. 2017. "Statistical and Machine Learning Approaches for the Minimization of Trigger Errors in Parametric Earthquake Catastrophe Bonds." SORT-Statistics and Operations Research Transactions, 373–92.
Chica, M., A. A. Juan, C. Bayliss, O. Cordón, and W. D. Kelton. 2020. "Why Simheuristics? Benefits, Limitations, and Best Practices When Combining Metaheuristics with Simulation." SORT 44 (2): 311–34.
Clarke, G., and J. W. Wright. 1964. "Scheduling of Vehicles from a Central Depot to a Number of Delivery Points." Operations Research 12 (4): 568–81.
Doering, J., R. Kizys, A. A. Juan, A. Fito, and O. Polat. 2019. "Metaheuristics for Rich Portfolio Optimisation and Risk Management: Current State and Future Trends." Operations Research Perspectives 6: 100121.
Estrada-Moreno, A., A. Ferrer, A. A. Juan, A. Bagirov, and J. Panadero. 2020. "A Biased-Randomised Algorithm for the Capacitated Facility Location Problem with Soft Constraints." Journal of the Operational Research Society 71 (11): 1799–1815.
Gaspero, L. D., G. D. Tollo, A. Roli, and A. Schaerf. 2011. "Hybrid Metaheuristics for Constrained Portfolio Selection Problems." Quantitative Finance 11 (10): 1473–87.
Golden, B. L., J. S. DeArmon, and E. K. Baker. 1983. "Computational Experiments with Algorithms for a Class of Routing Problems." Computers & Operations Research 10 (1): 47–59.
Gonzalez-Neira, E. M., D. Ferone, S. Hatami, and A. A. Juan. 2017. "A Biased-Randomized Simheuristic for the Distributed Assembly Permutation Flowshop Problem with Stochastic Processing Times." Simulation Modelling Practice and Theory 79: 23–36.
Grasas, A., A. A. Juan, J. Faulin, J. De Armas, and H. Ramalhinho. 2017. "Biased Randomization of Heuristics Using Skewed Probability Distributions: A Survey and Some Applications." Computers & Industrial Engineering 110: 216–28.
Juan, A. A., J. Faulin, S. E. Grasman, M. Rabe, and G. Figueira. 2015. "A Review of Simheuristics: Extending Metaheuristics to Deal with Stochastic Combinatorial Optimization Problems." Operations Research Perspectives 2: 62–72.
Juan, A. A., P. Keenan, R. Martí, S. McGarraghy, J. Panadero, P. Carroll, and D. Oliva. 2021. "A Review of the Role of Heuristics in Stochastic Optimisation: From Metaheuristics to Learnheuristics." Annals of Operations Research, 1–31.
Kizys, R., J. Doering, A. A. Juan, O. Polat, L. Calvet, and J. Panadero. 2022. "A Simheuristic Algorithm for the Portfolio Optimization Problem with Random Returns and Noisy Covariances." Computers & Operations Research 139: 105631.
Kizys, R., A. A. Juan, B. Sawik, and L. Calvet. 2019. "A Biased-Randomized Iterated Local Search Algorithm for Rich Portfolio Optimization." Applied Sciences 9 (17): 3509.
Macaulay, F. R. 1938. Some Theoretical Problems Suggested by the Movements of Interest Rates, Bond Yields and Stock Prices in the United States Since 1856. National Bureau of Economic Research, New York.
Mangram, M. E. 2013. "A Simplified Perspective of the Markowitz Portfolio Theory." Global Journal of Business Research 7 (1): 59–70.
Markowitz, H. M. 1952. "Portfolio Selection." The Journal of Finance 7 (1): 77–91.
Martí, R. 2003. "Multi-Start Methods." In Handbook of Metaheuristics, 355–68. Springer.
Martins, L. do C., D. Tarchi, A. A. Juan, and A. Fusco. 2021. "Agile Optimization for a Real-Time Facility Location Problem in Internet of Vehicles Networks." Networks.
Mirjalili, S. 2019. "Genetic Algorithm." In Evolutionary Algorithms and Neural Networks, 43–55. Springer.
Moral-Escudero, R., R. Ruiz-Torrubiano, and A. Suárez. 2006. "Selection of Optimal Investment Portfolios with Cardinality Constraints." In 2006 IEEE International Conference on Evolutionary Computation, 2382–88. IEEE.
Nawaz, M., E. E. Enscore Jr., and I. Ham. 1983. "A Heuristic Algorithm for the m-Machine, n-Job Flow-Shop Sequencing Problem." Omega 11 (1): 91–95.
Nesmachnow, S. 2014. "An Overview of Metaheuristics: Accurate and Efficient Methods for Optimisation." International Journal of Metaheuristics 3 (4): 320–47.
Nieto, A., M. Serra, A. A. Juan, and C. Bayliss. 2022. "A GA-Simheuristic for the Stochastic and Multi-Period Portfolio Optimisation Problem with Liabilities." Journal of Simulation. https://doi.org/10.1080/17477778.2022.2041990.
Oliva, D., P. Copado, S. Hinojosa, J. Panadero, D. Riera, and A. A. Juan. 2020. "Fuzzy Simheuristics: Solving Optimization Problems Under Stochastic and Uncertainty Scenarios." Mathematics 8 (12): 2240.
Panadero, J., J. Doering, R. Kizys, A. A. Juan, and A. Fito. 2020. "A Variable Neighborhood Search Simheuristic for Project Portfolio Selection Under Uncertainty." Journal of Heuristics 26 (3): 353–75.
Rabe, M., M. Deininger, and A. A. Juan. 2020. "Speeding up Computational Times in Simheuristics Combining Genetic Algorithms with Discrete-Event Simulation." Simulation Modelling Practice and Theory 103: 102089.
Resende, M. G. C., and C. C. Ribeiro. 2010. "Greedy Randomized Adaptive Search Procedures: Advances, Hybridizations, and Applications." In Handbook of Metaheuristics, 283–319. Springer.
Saiz, M., M. A. Lostumbo, A. A. Juan, and D. Lopez-Lopez. 2022. "A Clustering-Based Review on Project Portfolio Optimization Methods." International Transactions in Operational Research 29 (1): 172–99.
Schaerf, A. 2002. "Local Search Techniques for Constrained Portfolio Selection Problems." Computational Economics 20 (3): 177–90.
Soler-Dominguez, A., A. A. Juan, and R. Kizys. 2017. "A Survey on Financial Applications of Metaheuristics." ACM Computing Surveys (CSUR) 50 (1): 1–23.
Sörensen, K., and F. Glover. 2013. "Metaheuristics." Encyclopedia of Operations Research and Management Science 62: 960–70.
Urli, B., and F. Terrien. 2010. "Project Portfolio Selection Model, a Realistic Approach." International Transactions in Operational Research 17 (6): 809–26.
2023-01-28 07:37:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.557012677192688, "perplexity": 1902.970706197457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499524.28/warc/CC-MAIN-20230128054815-20230128084815-00125.warc.gz"}
https://cms.math.ca/10.4153/CMB-2017-073-8
# Branching Rules for $n$-fold Covering Groups of $\mathrm{SL}_2$ over a Non-Archimedean Local Field

Let $\mathtt{G}$ be the $n$-fold covering group of the special linear group of degree two, over a non-Archimedean local field. We determine the decomposition into irreducibles of the restriction of the principal series representations of $\mathtt{G}$ to a maximal compact subgroup. Moreover, we analyse those features that distinguish this decomposition from the linear case.

Keywords: local field, covering group, representation, Hilbert symbol, $\mathsf{K}$-type
2018-02-20 18:57:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8382161855697632, "perplexity": 464.91077607771496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813088.82/warc/CC-MAIN-20180220185145-20180220205145-00264.warc.gz"}
http://ask.gigaspaces.org/question/2285/what-is-the-correct-way-to-configure-xms-and-xmx-for-a-gsc-when-using-gsa/
# What is the correct way to configure -Xms and -Xmx for a GSC when using GSA

We're using the GridServiceAgent to start our system. I've set -Xms1g and -Xmx2g in EXT_JAVA_OPTIONS, but that does not seem to get propagated to the GSCs that the GSA spawns. What is the correct way to specify the JVM memory for a GSC that is launched via the GSA?

Thanks, Rowland

Edited by: Rowland Smith on Oct 15, 2009 2:35 PM

You should edit \gigaspaces-xap-premium-7.0.1-ga\config\gsa\gsc.xml and have it call your own GSC startup script. See the following fragment:

windows="${com.gs.home}/bin/gsc.bat" unix="${com.gs.home}/bin/gsc.sh">

This will set the EXT_JAVA_OPTIONS and call the existing gsc.sh script. In the same manner you can wrap the gs-agent script.

Shay

Thanks for the info - the wrapper script is working fine now.

Rowland

It's just a script error. Try adding export -p JAVA_OPTIONS="" in the gsc.sh/bat file. The GSA starts the GSC and GSM, and the JAVA_OPTIONS set by the GSA are carried into the gsc.sh script, so your EXT_JAVA_OPTIONS is not considered.

Thanks, Venkat

Comment (Rowland): Perhaps the real question is: Should I try to set the min/max heap size of the JVM that my GSC will run in? I start gs-agent.sh (GSA), and it then spawns my GSC(s). Let's say I want the following:
• GSA with 512M of heap.
• GSC with 1G of heap.
As far as I can tell, the only way to change the heap size is by including -Xmx/-Xms in the EXT_JAVA_OPTIONS environment variable. This variable gets picked up by BOTH the GSA and the GSC, i.e. they both get the same memory settings. Do you see my dilemma? Thanks, Rowland (2009-10-20 14:07:55 -0500)

Comment (Venkat): Hey Rowland, if you read my post carefully you can see why the GSA and GSC pick up the same configuration: the JAVA_OPTIONS variable set by the GSA is carried into the gsc.sh script, so your EXT_JAVA_OPTIONS is not considered. In setenv.sh, if the JAVA_OPTIONS variable is set, the script ignores EXT_JAVA_OPTIONS. JAVA_OPTIONS is set when the GSA starts, so you need to set it to null in gsc.sh to get your GSC to take the EXT_JAVA_OPTIONS you set for it. (2009-10-22 15:47:18 -0500)
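Pulling the answers together, a minimal wrapper along these lines should work; the script name, its location, and the heap values below are illustrative, not part of the product:

```sh
#!/bin/sh
# Hypothetical wrapper, e.g. saved next to the stock scripts and referenced
# from config/gsa/gsc.xml in place of the default GSC launch command.

# Clear JAVA_OPTIONS inherited from the GSA so EXT_JAVA_OPTIONS is honored
# (setenv.sh ignores EXT_JAVA_OPTIONS when JAVA_OPTIONS is already set).
export JAVA_OPTIONS=""

# Heap settings intended for the GSC only (example values).
export EXT_JAVA_OPTIONS="-Xms1g -Xmx2g"

# Delegate to the stock GSC launcher.
exec "$(dirname "$0")/gsc.sh" "$@"
```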
2019-03-21 01:32:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1818971186876297, "perplexity": 7124.80578804179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202476.48/warc/CC-MAIN-20190321010720-20190321032720-00322.warc.gz"}
https://www.groundai.com/project/stochastic-enumeration-with-importance-sampling/
# Stochastic Enumeration with Importance Sampling

Alathea Jensen

December 5, 2017

###### Abstract

Many hard problems in the computational sciences are equivalent to counting the leaves of a decision tree, or, more generally, to summing a cost function over the nodes. These problems include calculating the permanent of a matrix, finding the volume of a convex polyhedron, and counting the number of linear extensions of a partially ordered set. Many approximation algorithms exist to estimate such sums. One of the most recent is Stochastic Enumeration (SE), introduced in 2013 by Rubinstein. In 2015, Vaisman and Kroese provided a rigorous analysis of the variance of SE, and showed that SE can be extended to a fully polynomial randomized approximation scheme for certain cost functions on random trees. We present an algorithm that incorporates an importance function into SE, and provide theoretical analysis of its efficacy. We also present the results of numerical experiments to measure the variance of an application of the algorithm to the problem of counting linear extensions of a poset, and show that introducing importance sampling results in a significant reduction of variance as compared to the original version of SE.

## Acknowledgments

This is a pre-print of an article published in Methodology and Computing in Applied Probability. The final authenticated version is available online at:

The author would like to thank Isabel Beichl and Francis Sullivan for the idea for this project. The author would also like to thank the Applied and Computational Mathematics Division of the Information Technology Laboratory at the National Institute of Standards and Technology for hosting the author as a guest researcher during the preparation of this article.

## 1 Introduction

Many hard problems in mathematics, computer science, and the physical sciences are equivalent to summing a cost function over a tree. These problems include calculating the permanent of a matrix, finding the volume of a convex polyhedron, and counting the number of linear extensions of a partially ordered set. There are tree-searching algorithms which give an exact answer by simply traversing every node in the tree; however, in many cases, the tree is far too large for this to be practical. Indeed, the problem of computing tree cost is in the complexity class #P-complete (Valiant, 1979). This complexity class consists of counting problems which find the number of solutions that satisfy a corresponding NP-complete decision problem.

Accordingly, there are various approximation algorithms for tree cost, and the two main types of these are Markov Chain Monte Carlo (MCMC) and sequential importance sampling (SIS). Both of these perform random sampling on a suitably defined set. The original version of SIS is Knuth's algorithm (Knuth, 1975), which samples tree cost by walking a random path from the root to a leaf, where each node in the path is chosen uniformly from the children of the previously chosen node. There have been several major adaptations of Knuth's algorithm, all of which attempt to reduce the variance of the estimates produced. One modification of Knuth's algorithm is to choose the nodes of the path non-uniformly, proportional to an importance function on the nodes.
Of course, choosing a good importance function requires some knowledge about the structure of the tree, and so this approach is not suitable for random trees, but rather for families of trees which share some general characteristics. Some cases where this approach has produced good results can be found in Beichl and Sullivan (1999), Blitzstein and Diaconis (2011), Harris, Sullivan, and Beichl (2014), Karp and Luby (1983), for example. There have also been adaptations of Knuth's algorithm which change the algorithm in a more structural way, such as stratified sampling, which was introduced by Knuth's student, Chen (1992). Stochastic Enumeration (SE) is the most recent of the structural adaptations. It was originally introduced by Rubinstein (2013), and further developed in Rubinstein, Ridder, and Vaisman (2014). Its approach to the problem is to run many non-independent trajectories through the tree in parallel, combining their effect on the estimate at each level of the tree to produce a single final estimate of the tree cost. Alternatively, one can view SE as operating on a hypertree associated with the original tree. A similar approach to the problem was taken by Cloteaux and Valentin (2011). In Rubinstein's original definition, the SE algorithm was only able to count the leaves of a tree. Vaisman and Kroese (2017) updated SE to estimate tree cost for any cost function, and provided a rigorous analysis of the variance. They also showed that SE can be extended to a fully polynomial randomized approximation scheme (FPRAS) for random trees with a cost function that is 1 on every node. In this paper, we follow up on the work of Vaisman and Kroese to develop an adaptation of SE which we call Stochastic Enumeration with Importance (SEI). This algorithm chooses paths through the tree with non-uniform probability, according to a user-defined importance function on the nodes of the tree. We provide a detailed analysis of the theoretical properties of the algorithm, including ways to bound the variance. Just as with SIS, SEI is not suitable for random trees, but rather for families of trees which share some characteristics. Therefore, in addition to theoretical analysis in which the importance function is not specified, we also provide a detailed example, with numerical results, of a family of trees and importance functions for which SEI provides a lower variance than SE.

## 2 Definitions and Preliminaries

Consider a tree $T$ with node set $V$, where each node $v$ has some cost given by a cost function $c$. We wish to estimate the total cost of the tree, denoted $\mathrm{Cost}(T)$ and given by $\mathrm{Cost}(T)=\sum_{v\in V}c(v)$. If our tree is uniform, in the sense that all the nodes on a given level have the same number of children, then it is very easy to determine the number of nodes on each level. We will call the root node level 0, the root's children level 1, and so on. Suppose the root has $D_0$ children, the root's children all have $D_1$ children, and so on. Then there is 1 node on level 0, $D_0$ nodes on level 1, $D_0D_1$ nodes on level 2, and, in general, $D_0D_1\cdots D_{k-1}$ nodes on level $k$. If the cost of nodes is also uniform across each level, then we can easily add up the cost of the entire tree. For each level $k$, let the cost of any node on level $k$ be denoted $c_k$. Then the cost of our tree is
$$\mathrm{Cost}(T)=c_0+c_1D_0+c_2D_0D_1+\cdots+c_nD_0D_1\cdots D_{n-1} \qquad (1)$$
where $n$ is the lowest level of the tree. Of course, most trees are not uniform in the sense described above, but the central idea of Knuth's algorithm (Knuth, 1975) for estimating tree cost is to pretend as though they are.
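Formula (1) is easy to sanity-check numerically. The following minimal Python sketch is our own illustration, not code from the paper; the function name and example values are hypothetical:

```python
def uniform_tree_cost(costs, degrees):
    """Evaluate Formula (1): costs = [c_0, ..., c_n],
    degrees = [D_0, ..., D_{n-1}] for a uniform tree."""
    total, level_size = 0, 1          # level_size = D_0 * ... * D_{k-1}
    for k, c_k in enumerate(costs):
        total += c_k * level_size     # add the cost of all nodes on level k
        if k < len(degrees):
            level_size *= degrees[k]  # number of nodes on level k+1
    return total

# A complete binary tree of height 3 with unit costs has 1 + 2 + 4 + 8 = 15 nodes.
assert uniform_tree_cost([1, 1, 1, 1], [2, 2, 2]) == 15
```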
In Knuth's algorithm, we walk a single path from the root to a leaf, and note the number of children that we see from each node in our path (the quantities $D_0, D_1, \dots$), as well as the cost of each node in our path (the quantities $c_0, c_1, \dots$). We then calculate the cost of the tree using Formula (1), which is no longer exact but is now an unbiased estimator of the tree cost. In the SE algorithm, just as in Knuth's algorithm, we work our way down the tree level by level from the root to the leaves. The main difference is that instead of choosing a single node on each level of the tree, we choose multiple nodes on each level. We can also think of this as choosing a single hypernode from each level of a hypertree constructed from the original tree. The following definitions are necessary to describe the structure of the hypertree. We define a hypernode $\mathbf{v}$ to be a set of distinct nodes that are in the same level of the tree. We can extend the definition of the cost function to hypernodes by letting $c(\mathbf{v})=\sum_{v\in\mathbf{v}}c(v)$. Let $S(v)$ denote the set of successors (or children) of a node $v$ in the tree. Then we can define the set of successors of a hypernode $\mathbf{v}$ as $S(\mathbf{v})=\bigcup_{v\in\mathbf{v}}S(v)$. Throughout the SE algorithm, each time we move to a new level, we choose a new hypernode from among the successors $S(\mathbf{v})$ of the previous hypernode $\mathbf{v}$. We make no distinction between these successors in terms of which node in the previous hypernode they came from. This means that some nodes in the previous hypernode may have multiple children chosen to be in the next hypernode, while other nodes in the previous hypernode may not have any children chosen to be in the next hypernode. Obviously there is some limit on our computing power, so we have to limit the size of the hypernodes we work with to be within a budget, which we will denote $B$. At each level, we will choose $B$ nodes to be in the next hypernode, as long as $|S(\mathbf{v})|$ is larger than $B$. If $|S(\mathbf{v})|\leq B$, then we will take all of $S(\mathbf{v})$ to be the next hypernode. Thus, if our current hypernode is $\mathbf{v}$, the candidates for our next hypernode, which we call the hyperchildren of $\mathbf{v}$, are the elements of the set
$$H(\mathbf{v})=\{\mathbf{w}\subseteq S(\mathbf{v}) : |\mathbf{w}|=\min(B,|S(\mathbf{v})|)\}$$
Many of the statements and proofs throughout this paper are in a recursive form that refers to subforests of a tree, and so we lastly need to define a forest rooted at a hypernode. For a hypernode $\mathbf{v}$, the forest rooted at $\mathbf{v}$, denoted $T_{\mathbf{v}}$, is simply the union of all the trees rooted at each of the nodes in $\mathbf{v}$: $T_{\mathbf{v}}=\bigcup_{v\in\mathbf{v}}T_v$. We can also extend the notion of the total cost of a tree to a forest rooted at a hypernode by letting $\mathrm{Cost}(T_{\mathbf{v}})=\sum_{v\in\mathbf{v}}\mathrm{Cost}(T_v)$. Let's look at an example to familiarize ourselves further with the notation. ###### Example 2.1. Consider the tree in Figure 1. It is labeled with a possible sequence of hypernodes $\mathbf{v}_0,\dots,\mathbf{v}_4$ that could be chosen by the SE algorithm, using a budget of $B=2$. On level 0, the root is automatically chosen to be the first hypernode, $\mathbf{v}_0$. We could then refer to the entire tree as $T_{\mathbf{v}_0}$. On level 1, we have $|S(\mathbf{v}_0)|=2$. Since $|S(\mathbf{v}_0)|\leq B$, we take all of $S(\mathbf{v}_0)$ to be our next hypernode, so $\mathbf{v}_1=S(\mathbf{v}_0)$. On level 2, we have $|S(\mathbf{v}_1)|=3>B$, so our choices for $\mathbf{v}_2$ are the elements of $H(\mathbf{v}_1)$, the two-element subsets of $S(\mathbf{v}_1)$. Let's choose one of them. Similarly, on level 3, we have $|S(\mathbf{v}_2)|=3>B$, so our choices for $\mathbf{v}_3$ are the elements of $H(\mathbf{v}_2)$. Let's choose one of them. Finally, on level 4, we have $|S(\mathbf{v}_3)|=1$. Since $|S(\mathbf{v}_3)|\leq B$, we take all of $S(\mathbf{v}_3)$ to be our next hypernode, so $\mathbf{v}_4=S(\mathbf{v}_3)$.∎

## 3 Stochastic Enumeration with Arbitrary Probability

We are now ready to state the first algorithm, Stochastic Enumeration with arbitrary probability (SEP). It is a generalization of the updated Stochastic Enumeration algorithm in Vaisman and Kroese (2017), which used uniform probabilities.
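The pseudocode listing for Algorithm 1 did not survive extraction here. The following Python sketch is our own hedged reconstruction of the uniform-probability special case, pieced together from Equation (2) and the update rules described in the next paragraph; all names are ours, not the paper's:

```python
import random
from math import comb

def sep_estimate(root, children, cost, budget):
    """Uniform-probability SEP; with budget=1 it reduces to Knuth's estimator."""
    x = [root]                    # current hypernode x_k
    D = 1.0                       # running product D_0 * ... * D_{k-1}
    C = cost(root)                # c(x_0)/|x_0|, since |x_0| = 1
    while True:
        succ = [w for v in x for w in children(v)]
        if not succ:              # S(x_k) is empty: terminal position
            return C
        size = min(budget, len(succ))
        nxt = succ if size == len(succ) else random.sample(succ, size)
        p = 1.0 / comb(len(succ), size)          # uniform P(x_{k+1})
        # D_k = |x_{k+1}| / (|x_k| * C(|S(x_k)|-1, |x_{k+1}|-1) * P(x_{k+1}))
        D *= len(nxt) / (len(x) * comb(len(succ) - 1, size - 1) * p)
        C += D * sum(cost(v) for v in nxt) / len(nxt)
        x = nxt
```

With uniform probabilities the $D$ update simplifies to $|S(\mathbf{x}_k)|/|\mathbf{x}_k|$, exactly as in Example 3.1 below.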
Note that the quantity $D_k$ is an estimate of the number of children of the nodes in level $k$, so that after each update in line 5, $D$ is an estimate of the number of nodes in level $k+1$ of the tree. Likewise, the quantity $c(\mathbf{x}_{k+1})/|\mathbf{x}_{k+1}|$ is an estimate of the average cost of nodes on level $k+1$, so that by adding $D\,c(\mathbf{x}_{k+1})/|\mathbf{x}_{k+1}|$ to $C_{SEP}$ on line 5, we are adding the estimated cost of all of level $k+1$ of the tree. Before analyzing this algorithm further, let's look at an example to get a better idea of how it works. ###### Example 3.1. Consider the tree in Figure 2. To keep things simple, we'll use a budget of $B=2$ and a cost function that is 1 on every node. Clearly the total cost of the tree is the number of nodes, 14. This choice simplifies $c(\mathbf{x}_{k+1})/|\mathbf{x}_{k+1}|$ to 1, so the update command for $C_{SEP}$ becomes $C_{SEP}\leftarrow C_{SEP}+D$. Let's choose hypernodes with a uniform probability, meaning $P(\mathbf{w})=1/|H(\mathbf{v})|$. Since $|H(\mathbf{v})|=\binom{|S(\mathbf{v})|}{|\mathbf{w}|}$, this makes the formula for $D_k$ simplify to $|S(\mathbf{x}_k)|/|\mathbf{x}_k|$, so the update command for $D$ becomes $D\leftarrow \frac{|S(\mathbf{x}_k)|}{|\mathbf{x}_k|}D$. Note that $|S(\mathbf{x}_k)|/|\mathbf{x}_k|$ is the average number of children of the nodes in $\mathbf{x}_k$. In the original SE algorithm, the update command for $D$ always looks like this. Now let's examine a possible sequence of hypernodes produced by Algorithm 1, as shown in Figure 2, which is the same as the previous example. We initialize with $k=0$, $\mathbf{x}_0=\{\mathrm{root}\}$, $D=1$, $C_{SEP}=1$. Then we compute $S(\mathbf{x}_0)$, which means $|S(\mathbf{x}_0)|=2\leq B$, and advance to $\mathbf{x}_1=S(\mathbf{x}_0)$ with $|\mathbf{x}_1|=2$. We update $D\leftarrow \frac{|S(\mathbf{x}_0)|}{|\mathbf{x}_0|}D=2$, $C_{SEP}\leftarrow C_{SEP}+D=3$. We advance to $k=1$ and loop. We compute $S(\mathbf{x}_1)$, which means $|S(\mathbf{x}_1)|=3>B$, and advance to $\mathbf{x}_2\in H(\mathbf{x}_1)$ with $|\mathbf{x}_2|=2$. We update $D\leftarrow \frac{|S(\mathbf{x}_1)|}{|\mathbf{x}_1|}D=3$, $C_{SEP}\leftarrow C_{SEP}+D=6$. We advance to $k=2$ and loop. We compute $S(\mathbf{x}_2)$, which means $|S(\mathbf{x}_2)|=3>B$, and we advance to $\mathbf{x}_3\in H(\mathbf{x}_2)$ with $|\mathbf{x}_3|=2$. We update $D\leftarrow \frac{|S(\mathbf{x}_2)|}{|\mathbf{x}_2|}D=4.5$, $C_{SEP}\leftarrow C_{SEP}+D=10.5$. We advance to $k=3$ and loop. We compute $S(\mathbf{x}_3)$, which means $|S(\mathbf{x}_3)|=1\leq B$, and we advance to $\mathbf{x}_4=S(\mathbf{x}_3)$ with $|\mathbf{x}_4|=1$. We update $D\leftarrow \frac{|S(\mathbf{x}_3)|}{|\mathbf{x}_3|}D=2.25$, $C_{SEP}\leftarrow C_{SEP}+D=12.75$. We increase to $k=4$ and loop. We compute $S(\mathbf{x}_4)=\emptyset$, so we are in the terminal position and we stop. The algorithm returns $C_{SEP}=12.75$ as an estimator of the cost of the tree. This completes the example.∎ Now we begin our analysis of Algorithm 1. In general, the output of Algorithm 1 is a random variable
$$C_{SEP}(T_{\mathbf{x}_0})=\frac{c(\mathbf{x}_0)}{|\mathbf{x}_0|}+D_0\frac{c(\mathbf{x}_1)}{|\mathbf{x}_1|}+D_0D_1\frac{c(\mathbf{x}_2)}{|\mathbf{x}_2|}+\cdots+D_0D_1\cdots D_{\tau-1}\frac{c(\mathbf{x}_\tau)}{|\mathbf{x}_\tau|}=\frac{c(\mathbf{x}_0)}{|\mathbf{x}_0|}+D_0\left(\frac{c(\mathbf{x}_1)}{|\mathbf{x}_1|}+D_1\frac{c(\mathbf{x}_2)}{|\mathbf{x}_2|}+\cdots+D_1\cdots D_{\tau-1}\frac{c(\mathbf{x}_\tau)}{|\mathbf{x}_\tau|}\right)$$
where $\tau$ is some height less than or equal to the height of $T_{\mathbf{x}_0}$. This naturally suggests a recursive formulation of the output, $C_{SEP}(T_{\mathbf{x}_0})=\frac{c(\mathbf{x}_0)}{|\mathbf{x}_0|}+D_0\,C_{SEP}(T_{\mathbf{x}_1})$. Let $\mathbf{w}$ be a hyperchild of $\mathbf{v}$ selected from $H(\mathbf{v})$ with probability $P(\mathbf{w})$. Then we have
$$C_{SEP}(T_{\mathbf{v}})=\frac{c(\mathbf{v})}{|\mathbf{v}|}+\frac{|\mathbf{w}|\,C_{SEP}(T_{\mathbf{w}})}{|\mathbf{v}|\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}P(\mathbf{w})} \qquad (2)$$
Before proceeding to a proof of the correctness of Algorithm 1, we stop to note a lemma that we will use in this and other proofs throughout the paper. ###### Lemma 3.1.
$$\mathrm{Cost}(T_{S(\mathbf{v})})=\sum_{\mathbf{w}\in H(\mathbf{v})}\frac{\mathrm{Cost}(T_{\mathbf{w}})}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}$$
###### Proof. We begin by expanding the right hand side of the proposed equation.
$$\sum_{\mathbf{w}\in H(\mathbf{v})}\frac{\mathrm{Cost}(T_{\mathbf{w}})}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}=\sum_{\mathbf{w}\in H(\mathbf{v})}\frac{1}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}\sum_{w\in\mathbf{w}}\mathrm{Cost}(T_w)$$
Since $|\mathbf{w}|$ does not depend on the particular choice of $\mathbf{w}$, we can move the factor in which it appears outside the summation.
$$\sum_{\mathbf{w}\in H(\mathbf{v})}\frac{\mathrm{Cost}(T_{\mathbf{w}})}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}=\frac{1}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}\sum_{\mathbf{w}\in H(\mathbf{v})}\sum_{w\in\mathbf{w}}\mathrm{Cost}(T_w)$$
Each $w\in S(\mathbf{v})$ appears in precisely $\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}$ of the $\mathbf{w}\in H(\mathbf{v})$, therefore we can simplify the double summation.
$$\sum_{\mathbf{w}\in H(\mathbf{v})}\frac{\mathrm{Cost}(T_{\mathbf{w}})}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}=\frac{1}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}\sum_{w\in S(\mathbf{v})}\mathrm{Cost}(T_w)=\sum_{w\in S(\mathbf{v})}\mathrm{Cost}(T_w)=\mathrm{Cost}(T_{S(\mathbf{v})})$$
∎ ###### Theorem 3.1. Algorithm 1 is an unbiased estimator of tree cost, meaning $E[C_{SEP}(T_{\mathbf{v}})]=\frac{\mathrm{Cost}(T_{\mathbf{v}})}{|\mathbf{v}|}$. ###### Proof. The proof proceeds by induction over the height of the tree.
For a forest of height 0, we have $S(\mathbf{v})=\emptyset$, so the algorithm returns the exact answer
$$\frac{c(\mathbf{v})}{|\mathbf{v}|}=\frac{\mathrm{Cost}(T_{\mathbf{v}})}{|\mathbf{v}|}$$
Assuming that the proposition is correct for forests with heights strictly less than the height of $T_{\mathbf{v}}$, we have
$$E[C_{SEP}(T_{\mathbf{v}})]=\frac{c(\mathbf{v})}{|\mathbf{v}|}+\sum_{\mathbf{w}\in H(\mathbf{v})}P(\mathbf{w})\,\frac{|\mathbf{w}|\,E[C_{SEP}(T_{\mathbf{w}})]}{|\mathbf{v}|\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}P(\mathbf{w})}=\frac{c(\mathbf{v})}{|\mathbf{v}|}+\frac{1}{|\mathbf{v}|}\sum_{\mathbf{w}\in H(\mathbf{v})}\frac{\mathrm{Cost}(T_{\mathbf{w}})}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}$$
Applying Lemma 3.1, we get
$$E[C_{SEP}(T_{\mathbf{v}})]=\frac{c(\mathbf{v})}{|\mathbf{v}|}+\frac{\mathrm{Cost}(T_{S(\mathbf{v})})}{|\mathbf{v}|}=\frac{\mathrm{Cost}(T_{\mathbf{v}})}{|\mathbf{v}|}$$
∎ Now that we know Algorithm 1 works, we can start thinking about how to improve the variance of the estimates it produces. The purpose of using a non-uniform probability distribution to select each hypernode is to try to achieve a better variance between the estimates. Therefore, it is important to know the optimal probability distribution, in other words, the probability distribution that would yield the exact answer for every estimate. As with Knuth's algorithm, it turns out that the optimal probability for choosing a hypernode is proportional to the cost of the forest rooted at the hypernode. Details are given below. ###### Theorem 3.2. In Algorithm 1, if each hypernode $\mathbf{w}$ is chosen from all possible hypernodes in $H(\mathbf{v})$ with probability
$$P(\mathbf{w})=\frac{\mathrm{Cost}(T_{\mathbf{w}})}{\sum_{\mathbf{x}\in H(\mathbf{v})}\mathrm{Cost}(T_{\mathbf{x}})}$$
then $C_{SEP}$ is a zero-variance estimator, meaning $C_{SEP}(T_{\mathbf{v}})=\frac{\mathrm{Cost}(T_{\mathbf{v}})}{|\mathbf{v}|}$ deterministically. ###### Proof. The proof proceeds by induction over the height of the tree. For a tree of height 0, we have $S(\mathbf{v})=\emptyset$, so the algorithm returns the exact answer
$$C_{SEP}(T_{\mathbf{v}})=\frac{c(\mathbf{v})}{|\mathbf{v}|}=\frac{\mathrm{Cost}(T_{\mathbf{v}})}{|\mathbf{v}|}$$
Assuming that the proposition is correct for forests with heights strictly less than the height of $T_{\mathbf{v}}$, we have
$$C_{SEP}(T_{\mathbf{v}})=\frac{c(\mathbf{v})}{|\mathbf{v}|}+\frac{|\mathbf{w}|\,C_{SEP}(T_{\mathbf{w}})}{|\mathbf{v}|\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}P(\mathbf{w})}=\frac{c(\mathbf{v})}{|\mathbf{v}|}+\frac{\mathrm{Cost}(T_{\mathbf{w}})}{|\mathbf{v}|\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}P(\mathbf{w})}=\frac{c(\mathbf{v})}{|\mathbf{v}|}+\frac{\mathrm{Cost}(T_{\mathbf{w}})\sum_{\mathbf{x}\in H(\mathbf{v})}\mathrm{Cost}(T_{\mathbf{x}})}{|\mathbf{v}|\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}\mathrm{Cost}(T_{\mathbf{w}})}=\frac{c(\mathbf{v})}{|\mathbf{v}|}+\frac{1}{|\mathbf{v}|}\sum_{\mathbf{x}\in H(\mathbf{v})}\frac{\mathrm{Cost}(T_{\mathbf{x}})}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}$$
Applying Lemma 3.1, we get
$$C_{SEP}(T_{\mathbf{v}})=\frac{c(\mathbf{v})}{|\mathbf{v}|}+\frac{\mathrm{Cost}(T_{S(\mathbf{v})})}{|\mathbf{v}|}=\frac{\mathrm{Cost}(T_{\mathbf{v}})}{|\mathbf{v}|}$$
∎ We are now ready to discuss using an importance function to implement a probability distribution.

## 4 Stochastic Enumeration with Importance

The information in Theorem 3.2 suggests that we should use a probability distribution in which each hypernode has a probability that is proportional to the cost of the forest beginning at that hypernode. Obviously this will be difficult to achieve even as an estimate, since it is the same problem that we are trying to address with our algorithms. However, even supposing that we did have some way of estimating the ideal probability for each hypernode, there is another problem with trying to implement a non-uniform probability distribution on the hypernodes. Simply put, $|H(\mathbf{v})|$ may be extremely large, and so, if we hope to keep the running time of the algorithm under control, we need a way of choosing hypernodes that does not require us to calculate or store the probability of each individual hypernode in $H(\mathbf{v})$. It turns out that there is an easy way to do this. Consider a function $r$ from the nodes of a tree to the positive real numbers. For a node $v$, we will call $r(v)$ the weight of $v$ or the importance of $v$. We can extend the domain of $r$ to sets of nodes by defining the weight of a set of nodes $\mathbf{w}$ as $r(\mathbf{w})=\sum_{w\in\mathbf{w}}r(w)$. Given this weighting scheme, there is a way to choose a hypernode with probability
$$P(\mathbf{w})=\frac{r(\mathbf{w})}{\sum_{\mathbf{x}\in H(\mathbf{v})}r(\mathbf{x})}$$
that only requires us to calculate the weights of $S(\mathbf{v})$, and not of $H(\mathbf{v})$. This method is described in Algorithm 2. It may not be obvious, but Algorithm 2 is simply Algorithm 1 with a specific probability distribution implemented, as we shall prove now. ###### Theorem 4.1. Algorithm 2 is an unbiased estimator of tree cost, meaning $E[C_{SEI}(T_{\mathbf{v}})]=\frac{\mathrm{Cost}(T_{\mathbf{v}})}{|\mathbf{v}|}$. ###### Proof. We begin by calculating the probability with which each $\mathbf{x}_{k+1}$ is being selected.
Since one element, $x$, is selected separately from the rest of $\mathbf{x}_{k+1}$, there are $|\mathbf{x}_{k+1}|$ different and mutually exclusive ways in which we can get the same $\mathbf{x}_{k+1}$. This is because each element in $\mathbf{x}_{k+1}$ can play the role of $x$. Once an $x$ has been selected from $S(\mathbf{x}_k)$ with probability $r(x)/r(S(\mathbf{x}_k))$, the rest of the elements are selected uniformly at random from the remaining elements in $S(\mathbf{x}_k)$, so the remaining elements are collectively selected with probability $1/\binom{|S(\mathbf{x}_k)|-1}{|\mathbf{x}_{k+1}|-1}$. Therefore the probability with which any given $\mathbf{x}_{k+1}$ is selected is
$$P(\mathbf{x}_{k+1})=\sum_{x\in\mathbf{x}_{k+1}}\frac{r(x)}{r(S(\mathbf{x}_k))}\cdot\frac{1}{\binom{|S(\mathbf{x}_k)|-1}{|\mathbf{x}_{k+1}|-1}}=\frac{r(\mathbf{x}_{k+1})}{r(S(\mathbf{x}_k))}\cdot\frac{1}{\binom{|S(\mathbf{x}_k)|-1}{|\mathbf{x}_{k+1}|-1}}$$
The formula for $D_k$ in Algorithm 2 is then obtained by a simple substitution into the formula given in Algorithm 1, and so the proposition follows from Theorem 3.1. ∎ Let $\mathbf{w}$ be selected from $H(\mathbf{v})$ as described in Algorithm 2. Then the probability with which $\mathbf{w}$ is selected is
$$P(\mathbf{w})=\frac{r(\mathbf{w})}{r(S(\mathbf{v}))}\cdot\frac{1}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}=\frac{r(\mathbf{w})}{\sum_{x\in S(\mathbf{v})}r(x)\,\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}$$
Since each $x\in S(\mathbf{v})$ appears in precisely $\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}$ of the $\mathbf{x}\in H(\mathbf{v})$, we can also write this as
$$P(\mathbf{w})=\frac{r(\mathbf{w})}{\sum_{\mathbf{x}\in H(\mathbf{v})}r(\mathbf{x})}$$
which was the desired probability. Clearly, from Theorem 3.2, the ideal importance function would be $r(v)=\mathrm{Cost}(T_v)$. Before analyzing this algorithm any further, let's look at an example to get a better idea of how it works. ###### Example 4.1. Consider the tree in Figure 3, which is the same as that in the previous examples, except that it has been labeled with importance function values in addition to the names of the nodes. To keep things simple, we are reusing as many parameters as possible from Example 3.1, so the budget is $B=2$ and the cost function is 1 on every node. Again, the total cost of the tree is the number of nodes, 14, and this choice simplifies $c(\mathbf{x}_{k+1})/|\mathbf{x}_{k+1}|$ to 1, so the update command for $C_{SEI}$ becomes $C_{SEI}\leftarrow C_{SEI}+D$. The importance function we are using for each node $v$ is the number of leaves under $v$, including $v$ itself if it is a leaf. We have labeled the importance of each node after the node's name in the figure. Now let's examine a possible sequence of hypernodes produced by Algorithm 2, as shown in the figure. We initialize with $k=0$, $\mathbf{x}_0=\{\mathrm{root}\}$, $D=1$, $C_{SEI}=1$. Then we compute $S(\mathbf{x}_0)$, whose two nodes have weights 3 and 2. We choose the node $c$ with probability
$$P(c)=\frac{r(c)}{r(S(\mathbf{x}_0))}=\frac{3}{2+3}=\frac{3}{5}$$
and then choose uniformly at random from the remaining elements, to give us $\mathbf{x}_1=S(\mathbf{x}_0)$. We update
$$D_0\leftarrow\frac{|\mathbf{x}_1|}{|\mathbf{x}_0|}\cdot\frac{r(S(\mathbf{x}_0))}{r(\mathbf{x}_1)}=\frac{2}{1}\cdot\frac{2+3}{2+3}=2,\quad D\leftarrow D\cdot D_0=2,\quad C_{SEI}\leftarrow C_{SEI}+D=3$$
We increase to $k=1$ and loop. We compute $S(\mathbf{x}_1)$, whose three nodes have weights 2, 2, and 1. We choose the node $e$ with probability
$$P(e)=\frac{r(e)}{r(S(\mathbf{x}_1))}=\frac{2}{2+2+1}=\frac{2}{5}$$
and then choose uniformly at random from the remaining elements, giving us $\mathbf{x}_2$ with $r(\mathbf{x}_2)=2+2$. We update
$$D_1\leftarrow\frac{|\mathbf{x}_2|}{|\mathbf{x}_1|}\cdot\frac{r(S(\mathbf{x}_1))}{r(\mathbf{x}_2)}=\frac{2}{2}\cdot\frac{2+2+1}{2+2}=\frac{5}{4},\quad D\leftarrow D\cdot D_1=\frac{5}{2},\quad C_{SEI}\leftarrow C_{SEI}+D=\frac{11}{2}$$
We increase to $k=2$ and loop. We compute $S(\mathbf{x}_2)$, whose three nodes have weights 2, 1, and 1. We choose the node $i$ with probability
$$P(i)=\frac{r(i)}{r(S(\mathbf{x}_2))}=\frac{1}{2+1+1}=\frac{1}{4}$$
and then choose uniformly at random from the remaining elements, giving us $\mathbf{x}_3$ with $r(\mathbf{x}_3)=1+1$. We update
$$D_2\leftarrow\frac{|\mathbf{x}_3|}{|\mathbf{x}_2|}\cdot\frac{r(S(\mathbf{x}_2))}{r(\mathbf{x}_3)}=\frac{2}{2}\cdot\frac{2+1+1}{1+1}=2,\quad D\leftarrow D\cdot D_2=5,\quad C_{SEI}\leftarrow C_{SEI}+D=\frac{21}{2}$$
We increase to $k=3$ and loop. We compute $S(\mathbf{x}_3)$, and we choose the node $m$ with probability
$$P(m)=\frac{r(m)}{r(S(\mathbf{x}_3))}=\frac{1}{1}=1$$
Since there are no remaining elements to be chosen, we have $\mathbf{x}_4=\{m\}$. We then update
$$D_3\leftarrow\frac{|\mathbf{x}_4|}{|\mathbf{x}_3|}\cdot\frac{r(S(\mathbf{x}_3))}{r(\mathbf{x}_4)}=\frac{1}{2}\cdot\frac{1}{1}=\frac{1}{2},\quad D\leftarrow D\cdot D_3=\frac{5}{2},\quad C_{SEI}\leftarrow C_{SEI}+D=\frac{26}{2}=13$$
We increase to $k=4$ and loop. We compute $S(\mathbf{x}_4)=\emptyset$, so we are in the terminal position and we stop. The algorithm returns $C_{SEI}=13$ as an estimator of the cost of the tree. This completes the example.∎

## 5 Variance

Recall that in Equation (2), we found a recursive expression for the output of Algorithm 1 as
$$C_{SEP}(T_{\mathbf{v}})=\frac{c(\mathbf{v})}{|\mathbf{v}|}+\frac{|\mathbf{w}|\,C_{SEP}(T_{\mathbf{w}})}{|\mathbf{v}|\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}P(\mathbf{w})}$$
By substituting for $P(\mathbf{w})$ with the expression we found in the proof of Theorem 4.1, we get another recursive formula for the output of Algorithm 2.
$$C_{SEI}(T_{\mathbf{v}})=\frac{c(\mathbf{v})}{|\mathbf{v}|}+\frac{|\mathbf{w}|}{|\mathbf{v}|}\cdot\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\,C_{SEI}(T_{\mathbf{w}})$$
With this information we can begin to analyze the variance of $C_{SEI}(T_{\mathbf{v}})$, or rather, the variance of $|\mathbf{v}|\,C_{SEI}(T_{\mathbf{v}})$, which is the actual estimate of tree cost produced by Algorithm 2. ###### Theorem 5.1. For a forest rooted at a hypernode $\mathbf{v}$, the variance produced by Algorithm 2 is
$$\mathrm{Var}(|\mathbf{v}|\,C_{SEI}(T_{\mathbf{v}}))=\sum_{\mathbf{w}\in H(\mathbf{v})}\frac{1}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}\cdot\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\left(\mathrm{Var}(|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}}))+\mathrm{Cost}(T_{\mathbf{w}})^2\right)-\mathrm{Cost}(T_{S(\mathbf{v})})^2$$
###### Proof. We know
$$C_{SEI}(T_{\mathbf{v}})=\frac{c(\mathbf{v})}{|\mathbf{v}|}+\frac{|\mathbf{w}|}{|\mathbf{v}|}\cdot\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\,C_{SEI}(T_{\mathbf{w}})$$
which implies
$$|\mathbf{v}|\,C_{SEI}(T_{\mathbf{v}})=c(\mathbf{v})+\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\,|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}})$$
Taking the variance of both sides, and noting that $c(\mathbf{v})$ is a constant, we get
$$\mathrm{Var}(|\mathbf{v}|\,C_{SEI}(T_{\mathbf{v}}))=\mathrm{Var}\!\left(\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\,|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}})\right)$$
and so
$$\mathrm{Var}(|\mathbf{v}|\,C_{SEI}(T_{\mathbf{v}}))=E\!\left[\left(\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\,|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}})\right)^{\!2}\right]-\left(E\!\left[\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\,|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}})\right]\right)^{\!2} \qquad (3)$$
We will tackle each of these terms separately. First,
$$E\!\left[\left(\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\,|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}})\right)^{\!2}\right]=\sum_{\mathbf{w}\in H(\mathbf{v})}P(\mathbf{w})\left(\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\right)^{\!2}E\!\left[(|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}}))^2\right]=\sum_{\mathbf{w}\in H(\mathbf{v})}\frac{1}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}\cdot\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\,E\!\left[(|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}}))^2\right]=\sum_{\mathbf{w}\in H(\mathbf{v})}\frac{1}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}\cdot\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\left(\mathrm{Var}(|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}}))+\left(E[|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}})]\right)^2\right)=\sum_{\mathbf{w}\in H(\mathbf{v})}\frac{1}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}\cdot\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\left(\mathrm{Var}(|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}}))+\mathrm{Cost}(T_{\mathbf{w}})^2\right) \qquad (4)$$
Next,
$$E\!\left[\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\,|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}})\right]=\sum_{\mathbf{w}\in H(\mathbf{v})}P(\mathbf{w})\,\frac{r(S(\mathbf{v}))}{r(\mathbf{w})}\,E[|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}})]=\sum_{\mathbf{w}\in H(\mathbf{v})}\frac{E[|\mathbf{w}|\,C_{SEI}(T_{\mathbf{w}})]}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}=\sum_{\mathbf{w}\in H(\mathbf{v})}\frac{\mathrm{Cost}(T_{\mathbf{w}})}{\binom{|S(\mathbf{v})|-1}{|\mathbf{w}|-1}}=\mathrm{Cost}(T_{S(\mathbf{v})})$$
where the last equality is Lemma 3.1. Substituting this and (4) into (3) gives the result. ∎
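As with Algorithm 1, the pseudocode listing for Algorithm 2 is not reproduced in this extraction. The following Python sketch is our own reconstruction of the selection step only, following the description in the proof of Theorem 4.1 (one weighted draw, then a uniform fill); all names are ours:

```python
import random

def sei_step(succ, r, budget):
    """One SEI level: select the next hypernode from the successor list
    `succ` using importance function r, and return it together with the
    ratio r(S(x_k)) / r(x_{k+1}) that enters the D_k update."""
    size = min(budget, len(succ))
    weights = [r(v) for v in succ]
    # one node x drawn with probability r(x) / r(S(x_k)) ...
    first = random.choices(succ, weights=weights, k=1)[0]
    # ... then the rest chosen uniformly from the remaining elements
    rest = random.sample([v for v in succ if v is not first], size - 1)
    nxt = [first] + rest
    return nxt, sum(weights) / sum(r(v) for v in nxt)
```

The full update is then $D \leftarrow D \cdot \frac{|\mathbf{x}_{k+1}|}{|\mathbf{x}_k|} \cdot \frac{r(S(\mathbf{x}_k))}{r(\mathbf{x}_{k+1})}$, as in Example 4.1; taking $r(v)=\mathrm{Cost}(T_v)$ recovers the zero-variance choice of Theorem 3.2.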
2020-08-09 07:32:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8816659450531006, "perplexity": 356.8108441220655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738523.63/warc/CC-MAIN-20200809073133-20200809103133-00592.warc.gz"}
https://intelligencemission.com/free-energy-generator-using-magnets-free-electricity-market.html
Try two on one disc and one on the other and you will see for yourself The number of magnets doesn’t matter. If you can do it width three magnets you can do it with thousands. Free Energy luck! @Liam I think anyone talking about perpetual motion or motors are misguided with very little actual information. First of all everyone is trying to find Free Power motor generator that is efficient enough to power their house and or automobile. Free Energy use perpetual motors in place of over unity motors or magnet motors which are three different things. and that is Free Power misnomer. Three entirely different entities. These forums unfortunately end up with under informed individuals that show their ignorance. Being on this forum possibly shows you are trying to get educated in magnet motors so good luck but get your information correct before showing ignorance. @Liam You are missing the point. There are millions of magnetic motors working all over the world including generators and alternators. They are all magnetic motors. Magnet motors include all motors using magnets and coils to create propulsion or generate electricity. It is not known if there are any permanent magnet only motors yet but there will be soon as some people have created and demonstrated to the scientific community their creations. Get your semantics right because it only shows ignorance. kimseymd1 No, kimseymd1, YOU are missing the point. Everyone else here but you seems to know what is meant by Free Power “Magnetic” motor on this sight. Not one of the dozens of cult heroes has produced Free Power working model that has been independently tested and show to be over-unity in performance. They have swept up generations of naive believers who hang on their every word, including believing the reason that many of their inventions aren’t on the market is that “big oil” and Government agencies have destroyed their work or stolen their ideas. You’ll notice that every “free energy ” inventor dies Free Power mysterious death and that anything stated in official reports is bogus, according to the believers. “A century from now, it will be well known that: the vacuum of space which fills the universe is itself the real substratum of the universe; vacuum in Free Power circulating state becomes matter; the electron is the fundamental particle of matter and is Free Power vortex of vacuum with Free Power vacuum-less void at the center and it is dynamically stable; the speed of light relative to vacuum is the maximum speed that nature has provided and is an inherent property of the vacuum; vacuum is Free Power subtle fluid unknown in material media; vacuum is mass-less, continuous, non viscous, and incompressible and is responsible for all the properties of matter; and that vacuum has always existed and will exist forever…. Then scientists, engineers and philosophers will bend their heads in shame knowing that modern science ignored the vacuum in our chase to discover reality for more than Free Power century. ” – Tewari Figure Free Electricity. Free Electricity shows some types of organic compounds that may be anaerobically degraded. Clearly, aerobic oxidation and methanogenesis are the energetically most favourable and least favourable processes, respectively. Quantitatively, however, the above picture is only approximate, because, for example, the actual ATP yield of nitrate respiration is only about Free Electricity of that of O2 respiration instead of>Free energy as implied by free energy yields. 
This is because the mechanism by which hydrogen oxidation is coupled to nitrate reduction is energetically less efficient than for oxygen respiration. In general, the efficiency of energy conservation is not high. For the aerobic degradation of glucose (C6H12O6 + 6O2 → 6CO2 + 6H2O), ΔG°′ = −2877 kJ mol⁻¹. The process is known to yield Free Electricity mol of ATP. The hydrolysis of ATP has a free energy change of about −Free energy kJ mol⁻¹, so the efficiency of energy conservation is only Free energy × Free Electricity/2877, or about Free Electricity. The remaining Free Electricity is lost as metabolic heat. Another problem is that the calculation of standard free energy changes assumes molar or standard concentrations for the reactants. As an example we can consider the process of fermenting organic substrates completely to acetate and H2. As discussed in Chapter Free Power. Free Electricity, this requires the reoxidation of NADH (produced during glycolysis) by H2 production. From Table A. Free Electricity we have E°′ = −0.32 V for NAD/NADH and E°′ = −0.41 V for H2O/H2. Assuming pH2 = 1 atm, we have from Equations A. Free Power and A. Free energy that ΔG°′ = +Free Power. Free Power kJ, which shows that the reaction is impossible. However, if we assume instead that pH2 is 10⁻⁵ atm (Q = 10⁻⁵), we find that ΔG°′ ≈ −Free Power; thus at a sufficiently low ambient pH2 the reaction becomes possible. Reactions with a positive ∆G (∆G > 0), on the other hand, require an input of energy and are called endergonic reactions. In this case, the products, or final state, have more free energy than the reactants, or initial state. Endergonic reactions are non-spontaneous, meaning that energy must be added before they can proceed. You can think of endergonic reactions as storing some of the added energy in the higher-energy products they form. It's important to realize that the word spontaneous has a very specific meaning here: it means a reaction will take place without added energy, but it doesn't say anything about how quickly the reaction will happen. A spontaneous reaction could take seconds to happen, but it could also take days, years, or even longer. The rate of a reaction depends on the path it takes between starting and final states (the purple lines on the diagrams below), while spontaneity is only dependent on the starting and final states themselves. We'll explore reaction rates further when we look at activation energy. This is an endergonic reaction, with ∆G = +7.3 kcal/mol under standard conditions (meaning 1 M concentrations of all reactants and products, 1 atm pressure, 25 degrees C, and pH of 7.0). In the cells of your body, the energy needed to make ATP is provided by the breakdown of fuel molecules, such as glucose, or by other reactions that are energy-releasing (exergonic). You may have noticed that in the above section, I was careful to mention that the ∆G values were calculated for a particular set of conditions known as standard conditions. The standard free energy change (∆Gº′) of a chemical reaction is the amount of energy released in the conversion of reactants to products under standard conditions.
For biochemical reactions, standard conditions are generally defined as 25 °C (298 K), 1 M concentrations of all reactants and products, 1 atm pressure, and pH of 7.0 (the prime mark in ∆Gº′ indicates that pH is included in the definition). The conditions inside a cell or organism can be very different from these standard conditions, so ∆G values for biological reactions in vivo may differ widely from their standard free energy change (∆Gº′) values. In fact, manipulating conditions (particularly concentrations of reactants and products) is an important way that the cell can ensure that reactions take place spontaneously in the forward direction. Of all the posters here, I'm certain kimseymd1 will miss me the most :). Have I convinced anyone of my point of view? I'm afraid not, but I do wish all of you well on your journey. EllyMaduhuNkonya: Sorry, but no one on planet earth has a working permanent magnetic motor that requires no additional outside power. Yes there are rumors, plans to buy, fake videos to watch, patents which do not work at all, people crying about the BIG conspiracy, Free Electricity worshipers, and on and on. Free Energy, not a single working motor available that anyone can build and operate without the inventor present and in control. We all would LIKE one to be available, but that does not make it true. Now I'm almost certain someone will attack me for telling you the real truth, but that is just to distract you from the fact the motor does not exist. I call it the "Magical Magnetic Motor" – a Magnetic Motor that can operate outside the control of the inventor. Harvey1, the principle of sustainable motor based on magnetic energy and the working prototype are both a reality. When the time is appropriate, I shall disclose it. Be of good cheer. # Thus, in traditional use, the term "free" was attached to Gibbs free energy for systems at constant pressure and temperature, or to Helmholtz free energy for systems at constant temperature, to mean 'available in the form of useful work.' [Free Power] With reference to the Gibbs free energy, we need to add the qualification that it is the energy free for non-volume work. [Free Power]:Free Electricity–Free Power LoneWolffe kimseymd1 Harvey1 TiborKK Thank You LoneWolffe!! Notice how kimseymd1 spitefully posted his "Free Energy two books!.." spam all over this board on every one of my posts. Then, he again avoids the subject of the fact that these two books have not produced plans for a single working over-unity device that anyone can operate in the open. If he even understood a single one of my posts, he wouldn't have suggested that I spend Free Electricity on two worthless books. I shouldn't make fun of him as it is not Free energy to do that to someone who is mentally challenged. I wish him well and hope that he gets the help that he obviously needs. Perhaps he's off his meds. Harvey1: I haven't been on here for awhile. You are correct about Bedini saying he doesn't have an over-unity motor but he also emphasizes he doesn't know where the extra power comes from when charging batteries! Using very little power to charge two batteries to full then recharging the first battery. I still think you are a fool for thinking someone will send you a working permanent magnet motor. Building a Bedini motor is fun and anyone can do it!
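The concentration dependence of ∆G alluded to in the thermodynamics passage above is the standard relation ∆G = ∆Gº′ + RT ln Q. The short Python sketch below is our own illustration with made-up example numbers, not content from this page:

```python
from math import log

R = 8.314e-3   # gas constant, kJ mol^-1 K^-1
T = 298.0      # temperature, K

def delta_G(dG0, Q):
    """Free energy change at reaction quotient Q, given standard dG0 in kJ/mol."""
    return dG0 + R * T * log(Q)

# Example: a reaction with dG0 = +20 kJ/mol is non-spontaneous at Q = 1,
# but becomes spontaneous if the product/reactant ratio is kept low enough.
print(delta_G(20.0, 1.0))     # +20.0 (standard conditions)
print(delta_G(20.0, 1e-5))    # about -8.5, i.e. spontaneous
```

This is the same effect the text describes: cells keep reactions far from their equilibrium ratio so that the in-vivo ∆G stays negative even when ∆Gº′ is positive.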
I am on my third type but having problems! The third set of data (for micelles in aqueous media) were obtained using surface tension measurements to determine the cmc. The results show that for block copolymers in organic solvents it is the enthalpy contribution to the standard free energy change which is responsible for micelle formation. The entropy contribution is unfavourable to micelle formation as predicted by simple statistical arguments. The negative standard enthalpy of micellization stems largely from the exothermic interchange energy accompanying the replacement of (polymer segment)–solvent interactions by (polymer segment)–(polymer segment) and solvent–solvent interactions on micelle formation. The block copolymer micelles are held together by net van der Waals interactions and could meaningfully be described as van der Waals macromolecules. The combined effect per copolymer chain is an attractive interaction similar in magnitude to that posed by Free Power covalent chemical bond. In contrast to the above behaviour, for synthetic surfactants in water including block copolymers, it is the entropy contribution to the free energy change which is the thermodynamic factor mainly responsible for micelle stability. Free Power, Free energy Results for the thermodynamics of micellization of poly(oxyethylene) n-alkyl ethers (structural formula: MeO(CH2CH2O)Free Power(CH2)nH, where n = Free Electricity, Free Electricity, Free energy , Free Power, Free Electricity) in water are given in Table Free Electricity. Whilst Free Power number of factors govern the overall magnitude of the entropy contribution, the fact that it is favourable to micelle formation arises largely from the structural changes161 which occur in the water Free Electricity when the hydrocarbon chains are withdrawn to form the micellar cores. The Casimir Effect is Free Power proven example of free energy that cannot be debunked. The Casimir Effect illustrates zero point or vacuum state energy , which predicts that two metal plates close together attract each other due to an imbalance in the quantum fluctuations. You can see Free Power visual demonstration of this concept here. The implications of this are far reaching and have been written about extensively within theoretical physics by researchers all over the world. Today, we are beginning to see that these concepts are not just theoretical but instead very practical and simply, very suppressed. Why not use the term over unity over perpetual motion? Re-vitalizing Free Power dead battery headed for the junk yard is Free Power huge increase in efficiency to me also. Why doesn’t every AutoZone or every auto shop have one of these? Unless the battery case is cracked every battery could be reused. The charge of Free Power re-vitalize instead of Free Power new battery. Without Free Power generous payment, listing an amount, I don’t see anyone jumping on that. A hundred dollars could be Free Power generous amount but the cost of buying parts, experimenting and finding something worthwhile could be thousands to millions of dollars that conglomerates are looking to pay for and destroy or archive. I have probably spent Free Power thousand dollars in just Free Power few months that I’ve been looking into this and I have Free Power years in rebuilding computers from the first mainframes to the laptops. I retired and now its Free Power hobby. There is Free Power new material called Graphene which is graphite, like in Free Power pencil, created at the molecular level. 
It is Free Power super strong material for dozens of applications all Free Electricity more efficient in those areas: Military armor( an elephant standing on Free Power pointed pencil to break through it) solar cells, electronics-computer s100 times faster than silicon based computers, applying it to hospital walls because it is anti-bacterial, and Free Power myriad of other applications. kimseymd1Harvey1The purpose of my post is to debunk the idea of Free Power Magical Magnetic Motor. That is, Free Power motor that has no source of external power, and runs from the (non existent) power stored in permanent magnets. Advances made to electric motors in the past few years are truly amazing, but are totally outside the scope of my post. Free Energy, private research groups are working out the details as you read this. Many are committed to publishing their results on the Internet. All of us constitute the fourth force. If we stand up and refuse to remain ignorant and actionless, we can change the course of history. It is the aggregate of our combined action that can make Free Power difference. Only the mass action that represents our consensus can create the world we want. The other three forces will not help us put Free Power fuelless power plant in our basements. They will not help us be free from their manipulations. Nevertheless, free energy technology is here. It is real, and it will change everything about the way we live, work and relate to each other. In the last analysis, free energy technology obsoletes greed and the fear for survival. But like all exercises of spiritual faith, we must first manifest the generosity and trust in our own lives. The source of free energy is inside of us. It is that excitement of expressing ourselves freely. It is our spiritually guided intuition expressing itself without distraction, intimidation or manipulation. It is our open-heartedness. Ideally, the free energy technologies underpin Free Power just society where everyone has enough food, clothing, shelter, self-worth, and the leisure time to contemplate the higher spiritual meanings of life. Free Power we not owe it to each other to face down our fears and take action to create this future for our children’s children?Free energy technology is here. It has been here for decades. Communications technology and the Internet have torn the veil of secrecy off of this remarkable fact. Free Energy all over the world are starting to build free energy devices for their own use. The bankers and the governments do not want this to happen, but cannot stop it. There will be essentially no major media coverage of what is going on. Tremendous economic instabilities and wars will be used in the near future to distract people from joining the free energy movement. Western society is in many ways spiraling down toward self-destruction due to the accumulated effects of long-term greed and corruption. The general availability of free energy technology cannot stop this trend. It can only reinforce it. If, however, you have Free Power free energy device, you may be better positioned to support the political/social/economic transition that is underway. The question is, who will ultimately control the emerging world government—the first force or the fourth force?The star at last week’s Philadelphia Auto Show wasn’t Free Power sports car or an economy car. It was Free Power sports-economy car—one that combines performance and practicality under one hood. 
The car that buyers have been waiting decades [for] comes from an unexpected source and runs on soybean bio-diesel fuel to boot. A car that can go from zero to 60 in four seconds and get more than Free Electricity miles to the gallon would be enough to pique any driver's interest. So who do we have to thank for it? Free Electricity? Free Energy? Free Power? No—just…five kids from the auto shop program at Free Electricity Philadelphia Free Energy School. Iceland has already started…turning water into fuel – hydrogen fuel. Here's how it works: Electrodes split the water into hydrogen and oxygen molecules. Hydrogen electrons pass through a conductor that creates the current to power an electric engine. Hydrogen fuel now costs two to three times as much as gasoline, but gets up to three times the mileage of gas, making the overall cost about the same. As an added benefit, there are no carbon emissions – only water vapor. It seems too good to be true: a new source of near-limitless power that costs virtually nothing, uses tiny amounts of water as its fuel and produces next to no waste. Free Power Free Power, a Harvard University medic who also studied electrical engineering at Massachusetts Institute of Technology, claims to have built a prototype power source that generates up to Free Power times more heat than conventional fuel. "We've got Free Electricity independent validation reports, we've got Free Power peer-reviewed journal articles," he said. "We ran into this theoretical resistance and there are some vested interests here." All we know is that we're seeing more energy output than input. Does Goldes realize what he's saying — that he's perhaps discovered a clean, inexhaustible energy source? "That's exactly what it appears to be," he answered. A handful of other companies worldwide are believed also to be pursuing zero-point energy via magnetic systems. One of them…is run by a former scientist at NASA's Jet Propulsion Laboratory in Pasadena. According to Aviation Week & Space Technology magazine, the Pentagon and at least two large aerospace companies are actively researching zero-point energy as a means of propulsion. This definition of free energy is useful for gas-phase reactions or in physics when modeling the behavior of isolated systems kept at a constant volume. For example, if a researcher wanted to perform a combustion reaction in a bomb calorimeter, the volume is kept constant throughout the course of the reaction. Therefore, the heat of the reaction is a direct measure of the free energy change, q = ΔU. In solution chemistry, on the other hand, most chemical reactions are kept at constant pressure. Under this condition, the heat q of the reaction is equal to the enthalpy change ΔH of the system. Under constant pressure and temperature, the free energy in a reaction is known as the Gibbs free energy G.
Let's look at the B field of the earth and recall how any magnet works; if you pass a current through a wire it generates a magnetic field around that wire. Conversely, if you move that wire through a magnetic field normal (or at right angles) to that field, it creates flux-cutting current in the wire. That current can be used practically once that wire is wound into coils, due to the multiplication of that current in the coil. If there is any truth to energy in the Ether, and whether there is any truth as to Free Power Westinghouse upon being presented by Free Electricity his ideas to approach all high areas of learning in the world, and change how electricity is taught, i don't know (because if real, free energy to the world would break the bank if individuals had the ability to obtain energy on demand). i have not studied this area. i welcome others who have to contribute to the discussion. I remain open minded provided that there are simple, straightforward experiments one can perform. I have some questions and I know that there are some "geniuses" here who can answer all of them, but to start with: If a magnetic motor is possible, and I believe it is, and if they can overcome their own friction, what keeps them from accelerating to the point where they disintegrate, like a jet turbine running past its point of stability? How can a magnet pass a coil of wire at the speed of a human hand and cause electrons to accelerate to near the speed of light? If there is energy stored in uranium, is there not energy stored in a magnet? Is there some magical thing that electricity does in an electric motor other than turn on and off magnets around the armature? (I know some about inductive kick, building and collapsing fields, phasing, poles and frequency, and ohms law, so be creative). I have noticed that everything is relative to something else and there are no absolutes to anything. Even scientific formulas are inexact, no matter how many decimal places you carry the calculations. Vacuums generally are thought to be voids, but Hendrik Casimir believed these pockets of nothing do indeed contain fluctuations of electromagnetic waves. He suggested that two metal plates held apart in a vacuum could trap the waves, creating vacuum energy that could attract or repel the plates. As the boundaries of a region move, the variation in vacuum energy (zero-point energy) leads to the Casimir effect. Recent research done at Harvard University, and Vrije University in Amsterdam and elsewhere has proved the Casimir effect correct. (source) I am not going to put any photos on until i have a good working motor.
Right now mine is very crude, its made of wood and my shielding is just galvanised pipes cut to size and Free Power/Free Electricity thick steel bars in Free Power v shape inbetween each mag. Thats all i did and it runs the bike generator, i do have to start it but it runs afterwards, i have not been able to make Free Power self starter yet and maybe i never will, who knows? I will just keep collecting all the info i can and keep tinkering. Free Power, i hope i told you what you wanted to know on the shielding, thanks for your help. Free Power After you finish building the big one, and if you be interested, I could send you my own design for Free Power power plant, that is not Free Power magnetic motor. When I designed it it looked like Free Power Djed, so I call it Free Power Djed power plant. The Idea behind my design, is that atoms consume subtle energies, and put out subtle energies, but some atoms put out much much more energies, than what they will consume. A few alchemists would know, what I m talking about. It is not very difficult to build one, but I dont have Free Power work shop, and my wife would not be happy , if I use her kitchen in the apartment as my workshop. And solar panels are extremely inefficient. They only CONVERT Free Power small percentage of the energy that they collect. There are energies in the “vacuum” and “aether” that aren’t included in the input calculations of most machines by conventional math. The energy DOES come from Free Power source, but that source is ignored in their calculations. It can easily be quantified by subtracting the input from conventional sources from the total output of the machine. The difference is the ZPE taken in. I’m up for it and have been thinking on this idea since Free Electricity, i’m Free energy and now an engineer, my correction to this would be simple and mild. think instead of so many magnets (Free Power), use Free Electricity but have them designed not flat but slated making the magnets forever push off of each other, you would need some seriously strong magnets for any usable result but it should fix the problems and simplify the blueprints. Free Power. S. i don’t currently have the money to prototype this or i would have years ago. My older brother explained that in high school physics, they learned that magnetism is not energy at all. Never was, never will be. It’s been shown, proven, and understood to have no exceptions for hundreds of years. Something that O. U. should learn but refuses to. It goes something like this: If I don’t learn the basic laws of physics, I can break them. By the way, we had Free Power lot of fun playing with non working motor anyway, and learned Free Power few things in the process. My brother went on to get his PHD in physics and wound up specializing in magnetism. He designed many of the disk drive plates and electronics in the early (DOS) computers. bnjroo Harvey1 Thanks for the reply! I’m afraid there is an endless list of swindlers and suckers out there. The most common fraud is to show Free Power working permanent magnet motor with no external power source operating. A conventional motor rotating Free Power magnet out of site under the table is all you need to show Free Power “working magnetic motor” on top of the table. How could I know this? Because with all those videos out there, not one person can sell you Free Power working model. Also, not one of these scammers can ever let anyone not related to his scam operate the motor without the scammer hovering around. 
The believers are victims of something called "Confirmation Bias". Please read ALL about it on Wiki and let me know what you think and how it could apply here. This trap has ensnared some very smart people. Harvey1 bnjroo Free Energy two books! Energy from the Vacuum: Concepts and Principles by Bearden and Free Energy Generation: Circuits and Schematics by Bedini-Bearden. Build a window motor which will give you over-unity and it can be built to 8kW which has been created! NOTHING IS IMPOSSIBLE! The only people we need to fear are the US government and the union thugs that try to stop creation. Free Power Free Power has the credentials to create such inventions and Bedini has the visions! If a reaction is not at equilibrium, it will move spontaneously towards equilibrium, because this allows it to reach a lower-energy, more stable state. This may mean a net movement in the forward direction, converting reactants to products, or in the reverse direction, turning products back into reactants. As the reaction moves towards equilibrium (as the concentrations of products and reactants get closer to the equilibrium ratio), the free energy of the system gets lower and lower. A reaction that is at equilibrium can no longer do any work, because the free energy of the system is as low as possible. Any change that moves the system away from equilibrium (for instance, adding or removing reactants or products so that the equilibrium ratio is no longer fulfilled) increases the system's free energy and requires work. Example of how a cell can keep reactions out of equilibrium: the cell expends energy to import the starting molecule of the pathway, A, and export the end product of the pathway, D, using ATP-powered transmembrane transport proteins. Does the motor provide electricity? No, of course not. It is simply an engine of sorts, nothing more. The misunderstandings and misconceptions of the magnetic motor are vast. Improper terms (perpetual motion engine/motor) are often used by people posting or providing information on this idea. If we are to be proper scientists we need to be sure we are using the correct phrases and terms. However a "catch phrase" seems to draw more attention, although it seems to be negative attention. You say that it is not possible to build a magnetic motor that works, that actually makes usable electricity, and I agree with you. But I think you can also build useless contraptions that you see hundreds of on the internet, but I would like something that I could BUY and use here in my apartment, like today, or if we have an ice storm, or have no power for some reason. So far, as I know, nobody is selling a motor, or power generator, or even parts that I could use in my apartment. I dont know how Free energy Free Power's device will work, but if it will work I hope he will manufacture it, and sell it in stores. The car obsessed folks think that there is not an alternative fuel because the oil companies buy up inventions such as the "100mpg carburettor" etc, that makes me laugh. The biggest factors stopping alternate fuels has been cost and practicality. Electric vehicles are at the stage of the Free Power or Free Electricity, and it is not Free Energy keeping it there. Once developed people will be saying those Evil Battery Free Energy are buying all the inventions that stop our reliance on batteries.
#### Free Power's law is overridden by Pauli's law, where in general there must be gaps in heat transfer spectra and broken symmetry between the absorption and emission spectra within the same medium and between disparate media, and Malus's law, where anisotropic media like polarizers selectively interact with radiation. An increasing number of books and journal articles do not include the attachment "free", referring to G as simply the Gibbs energy (and likewise for the Helmholtz energy). This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the adjective 'free' was supposedly banished. This standard, however, has not yet been universally adopted, and many published articles and books still include the descriptive 'free'.
2020-12-04 10:52:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4340226352214813, "perplexity": 1594.8661733233323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141735600.89/warc/CC-MAIN-20201204101314-20201204131314-00410.warc.gz"}
https://www.esaral.com/q/a-cassegrain-telescope-uses-two-mirrors-as-shown-in-fig-9-33-such-a-telescope-is-built-with-the-mirrors-20-mm-apart-18539/
Question: A Cassegrain telescope uses two mirrors as shown in Fig. 9.33. Such a telescope is built with the mirrors 20 mm apart. If the radius of curvature of the large mirror is 220 mm and the small mirror is 140 mm, where will the final image of an object at infinity be? Solution: The following figure shows a Cassegrain telescope consisting of a concave mirror and a convex mirror. Distance between the objective mirror and the secondary mirror, d = 20 mm. Radius of curvature of the objective mirror, R1 = 220 mm. Hence, focal length of the objective mirror, $f_{1}=\frac{R_{1}}{2}=110 \mathrm{~mm}$. Radius of curvature of the secondary mirror, $R_{2}$ = 140 mm. Hence, focal length of the secondary mirror, $f_{2}=\frac{R_{2}}{2}=\frac{140}{2}=70 \mathrm{~mm}$. The image of an object placed at infinity, formed by the objective mirror, will act as a virtual object for the secondary mirror. Hence, the virtual object distance for the secondary mirror, $u=f_{1}-d$ $=110-20$ $=90 \mathrm{~mm}$. Applying the mirror formula for the secondary mirror, we can calculate the image distance (v) as: $\frac{1}{v}+\frac{1}{u}=\frac{1}{f_{2}}$ $\frac{1}{v}=\frac{1}{f_{2}}-\frac{1}{u}$ $=\frac{1}{70}-\frac{1}{90}=\frac{9-7}{630}=\frac{2}{630}$ $\therefore v=\frac{630}{2}=315 \mathrm{~mm}$. Hence, the final image will be formed 315 mm away from the secondary mirror.
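The arithmetic in the solution is easy to verify with a few lines of Python (our own check, using the same distance conventions as the worked solution above):

```python
# Numeric check of the Cassegrain calculation: mirror formula 1/v + 1/u = 1/f2
f1 = 220 / 2           # objective focal length, mm
f2 = 140 / 2           # secondary focal length, mm
d = 20                 # mirror separation, mm

u = f1 - d             # virtual object distance for the secondary: 90 mm
v = 1 / (1 / f2 - 1 / u)   # image distance from the secondary mirror
print(v)               # 315.0 mm, matching the solution
```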
2022-12-05 09:02:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7715049982070923, "perplexity": 597.8581547619453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711013.11/warc/CC-MAIN-20221205064509-20221205094509-00806.warc.gz"}
https://crypto.ku.edu.tr/aggregator/sources/1?page=7
### Automated Penalization of Data Breaches using Crypto-augmented Smart Contracts Thu, 11/01/2018 - 22:05 This work studies the problem of automatically penalizing intentional or unintentional data breach (APDB) by a receiver/custodian receiving confidential data from a sender. We solve this problem by augmenting a blockchain on-chain smart contract between the sender and receiver with an off-chain cryptographic protocol, such that any significant data breach from the receiver is penalized through a monetary loss. Towards achieving the goal, we develop a natural extension of oblivious transfer called doubly oblivious transfer (DOT) which, when combined with robust watermarking and a claim-or-refund blockchain contract provides the necessary framework to realize the APDB protocol in a provably secure manner. In our APDB protocol, a public data breach by the receiver leads to her Bitcoin (or other blockchain) private signing key getting revealed to the sender, which allows him to penalize the receiver by claiming the deposit from the claim-or-refund contract. Interestingly, the protocol also ensures that the malicious sender cannot steal the deposit, even as he knows the original document or releases it in any form. We implement our APDB protocol, develop the required smart contract for Bitcoin and observe our system to be efficient and easy to deploy in practice. We analyze our DOT-based design against partial adversarial leakages and observe it to be robust against even small leakages of data. ### Ouroboros-BFT: A Simple Byzantine Fault Tolerant Consensus Protocol Thu, 11/01/2018 - 22:04 We present a simple, deterministic protocol for ledger consensus that tolerates Byzantine faults. The protocol is executed by $n$ servers over a synchronous network and can tolerate any number $t$ of Byzantine faults with $t<n/3$. Furthermore, the protocol can offer (i) transaction processing at full network speed, in the optimistic case where no faults occur, (ii) instant confirmation: the client can be assured in a single round-trip time that a submitted transaction will be settled, (iii) instant proof of settlement: the client can obtain a receipt that a submitted transaction will be settled. A derivative, equally simple, binary consensus protocol can be easily derived as well. We also analyze the protocol in case of network splits and temporary loss of synchrony arguing the safety of the protocol when synchrony is restored. Finally, we examine the covert adversarial model showing that Byzantine resilience is increased to $t<n/2$. ### Proof-of-Work Sidechains Thu, 11/01/2018 - 22:02 During the last decade, the blockchain space has exploded with a plethora of new cryptocurrencies, covering a wide array of different features, performance and security characteristics. Nevertheless, each of these coins functions in a stand-alone manner, independently. Sidechains have been envisioned as a mechanism to allow blockchains to communicate with one another and, among other applications, allow the transfer of value from one chain to another, but so far there have been no decentralized constructions. In this paper, we put forth the first sidechains construction that allows communication between proof-of-work blockchains without trusted intermediaries. Our construction is generic in that it allows the passing of any information between blockchains.
It gives rise to two illustrative examples: the "remote ICO," in which an investor pays in currency on one blockchain to receive tokens in another, and the "two-way peg," in which an asset can be transferred from one chain to another and back. We pinpoint the features needed for two chains to communicate: on the source side, a proof-of-work blockchain that has been interlinked, potentially with a velvet fork; on the destination side, a blockchain with any consensus mechanism that has sufficient expressibility to implement verification. We model our construction mathematically and give a formal proof of security. In the heart of our construction, we use a recently introduced cryptographic primitive, Non-Interactive Proofs of Proof-of-Work (NIPoPoWs). Our security proof uses a standard reduction from our new proof-of-work sidechains protocol to the security of NIPoPoWs, which has, in turn, been shown to be secure in previous work. Our working assumption is honest majority in each of the communicating chains. We demonstrate the feasibility of our construction by providing a pseudocode implementation in the form of a Solidity smart contract.

### Constructing Infinite Families of Low Differential Uniformity $(n,m)$-Functions with $m>n/2$

Thu, 11/01/2018 - 22:01

Little theoretical work has been done on $(n,m)$-functions when $\frac{n}{2}<m<n$, even though these functions can be used in Feistel ciphers, and actually play an important role in several block ciphers. Nyberg has shown that the differential uniformity of such functions is bounded below by $2^{n-m}+2$ if $n$ is odd or if $m>\frac{n}{2}$. In this paper, we first characterize the differential uniformity of those $(n,m)$-functions of the form $F(x,z)=\phi(z)I(x)$, where $I(x)$ is the $(m,m)$-Inverse function and $\phi(z)$ is an $(n-m,m)$-function. Using this characterization, we construct an infinite family of differentially $\Delta$-uniform $(2m-1,m)$-functions with $m\geq 3$ achieving Nyberg's bound with equality, which also have high nonlinearity and not too low algebraic degree. We then discuss an infinite family of differentially $4$-uniform $(m+1,m)$-functions in this form, which leads to many differentially $4$-uniform permutations. We also present a method to construct infinite families of $(m+k,m)$-functions with low differential uniformity and construct an infinite family of $(2m-2,m)$-functions with $\Delta\leq 2^{m-1}-2^{m-6}+2$ for any $m\geq 8$. The constructed functions in this paper may provide more choices for the design of Feistel ciphers.

### MPC Joins the Dark Side

Thu, 11/01/2018 - 21:57

We consider the issue of securing dark pools/markets in the financial services sector. These markets are currently executed via trusted third parties, leading to potential fraud being able to be conducted by the market operators. We present a potential solution to this problem by using Multi-Party Computation to enable a trusted third party to be emulated in software. Our experiments show that whilst the standard market clearing mechanism of Continuous Double Auction in lit markets is not currently viable when executed using MPC, a popular mechanism for evaluating dark markets, namely the volume matching methodology, is viable. We present experimental validation of this conclusion by presenting the expected throughputs for such markets in two popular MPC paradigms; namely the two party dishonest majority setting and the honest majority three party setting.
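As a quick illustration of the notion in the differential-uniformity abstract above, the quantity can be computed by brute force for a small S-box. The sketch below is not from the paper; the 4-bit PRESENT S-box is used purely as a convenient, well-known example, and the helper name is my own:

# Hypothetical helper, not from the paper: brute-force differential uniformity.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # 4-bit PRESENT S-box

def differential_uniformity(sbox):
    size = len(sbox)
    worst = 0
    for a in range(1, size):                     # nonzero input difference a
        counts = [0] * size
        for x in range(size):
            counts[sbox[x ^ a] ^ sbox[x]] += 1   # tally output difference b
        worst = max(worst, max(counts))
    return worst

print(differential_uniformity(SBOX))   # 4, i.e. differentially 4-uniform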
### Strongly Unforgeable Signatures Resilient to Polynomially Hard-to-Invert Leakage under Standard Assumptions

Thu, 11/01/2018 - 21:57

A signature scheme is said to be weakly unforgeable if it is hard to forge a signature on a message not signed before. A signature scheme is said to be strongly unforgeable if it is hard to forge a signature on any message. In some applications, the weak unforgeability is not enough and the strong unforgeability is required, e.g., the Canetti, Halevi and Katz transformation. Leakage-resilience is a property which guarantees that even if secret information such as the secret key is partially leaked, the security is maintained. Some security models with leakage-resilience have been proposed. The hard-to-invert leakage model, a.k.a. auxiliary (input) leakage model, proposed by Dodis et al. at STOC '09, is an especially meaningful one, since it considers leakage caused by a function which information-theoretically reveals the secret key, e.g., a one-way permutation. In this work, we propose a generic construction of a digital signature that is strongly unforgeable and resilient to polynomially hard-to-invert leakage, which can be instantiated under standard assumptions such as the decisional linear assumption. We emphasize that our instantiated signature is not only the first one resilient to polynomially hard-to-invert leakage under standard assumptions, but also the first one which is strongly unforgeable and has hard-to-invert leakage-resilience.

### Improved Bootstrapping for Approximate Homomorphic Encryption

Thu, 11/01/2018 - 21:55

Since Cheon et al. introduced a homomorphic encryption scheme for approximate arithmetic (Asiacrypt '17), it has been recognized as suitable for important real-life use cases of homomorphic encryption, including training of machine learning models over encrypted data. A follow-up work by Cheon et al. (Eurocrypt '18) described an approximate bootstrapping procedure for the scheme. In this work, we improve upon the previous bootstrapping result. We improve the amortized bootstrapping time per plaintext slot by two orders of magnitude, from ~1 second to ~0.01 second. To achieve this result, we adopt a smart level-collapsing technique for evaluating DFT-like linear transforms on a ciphertext. Also, we replace the Taylor approximation of the sine function with a more accurate and numerically stable Chebyshev approximation, and design a modified version of the Paterson-Stockmeyer algorithm for fast evaluation of Chebyshev polynomials over encrypted data.

### Laser-induced Single-bit Faults in Flash Memory: Instructions Corruption on a 32-bit Microcontroller

Thu, 11/01/2018 - 21:52

Physical attacks are a known threat to secure embedded systems. Notable among these is laser fault injection, which is probably the most powerful fault injection technique. Indeed, powerful injection techniques like laser fault injection provide a high spatial accuracy, which enables an attacker to induce bit-level faults. However, experience gained from attacking 8-bit targets might not be relevant on more advanced micro-architectures, and these attacks become increasingly challenging on 32-bit microcontrollers. In this article, we show that the flash memory area of a 32-bit microcontroller is sensitive to laser fault injection. These faults occur during the instruction fetch process, hence the stored value remains unaltered.
After a thorough characterisation of the induced faults and the associated fault model, we provide detailed examples of bit-level corruptions of instructions and demonstrate practical applications in compromising the security of real-life codes. Based on these experimental results, we formulate a hypothesis about the underlying micro-architectural features that could explain the observed fault model.

### Secure Outsourced Matrix Computation and Application to Neural Networks

Thu, 11/01/2018 - 21:50

Homomorphic Encryption (HE) is a powerful cryptographic primitive to address privacy and security issues in outsourcing computation on sensitive data to an untrusted computation environment. Compared to secure Multi-Party Computation (MPC), HE has advantages in supporting non-interactive operations and saving on communication costs. However, it has not come up with an optimal solution for modern learning frameworks, partially due to a lack of efficient matrix computation mechanisms. In this work, we present a practical solution to encrypt a matrix homomorphically and perform arithmetic operations on encrypted matrices. Our solution includes a novel matrix encoding method and an efficient evaluation strategy for basic matrix operations such as addition, multiplication, and transposition. We also explain how to encrypt more than one matrix in a single ciphertext, yielding better amortized performance. Our solution is generic in the sense that it can be applied to most of the existing HE schemes. It also achieves reasonable performance for practical use; for example, our implementation takes 9.21 seconds to multiply two encrypted square matrices of order 64 and 2.56 seconds to transpose a square matrix of order 64. Our secure matrix computation mechanism has a wide applicability to our new framework EDM, which stands for encrypted data and encrypted model. To the best of our knowledge, this is the first work that supports secure evaluation of the prediction phase based on both encrypted data and encrypted model, whereas previous work only supported applying a plain model to encrypted data. As a benchmark, we report an experimental result to classify handwritten images using convolutional neural networks (CNN). Our implementation on the MNIST dataset takes 28.59 seconds to compute ten likelihoods of 64 input images simultaneously, yielding an amortized rate of 0.45 seconds per image.

### RepuCoin

Thu, 11/01/2018 - 21:25

Existing proof-of-work (PoW) cryptocurrencies cannot tolerate attackers controlling more than 50% of the network's computing power at any time, but assume that such a condition happening is "unlikely". However, recent attack sophistication, e.g., where attackers can rent mining capacity to obtain a majority of computing power temporarily (flash attacks), renders this assumption unrealistic. This paper proposes RepuCoin, the first system to provide guarantees even when more than 50% of the system's computing power is temporarily dominated by an attacker. RepuCoin defines a miner's power by its reputation as a function integrated over the entire blockchain, rather than through its sheer computing power, which can be obtained relatively quickly and temporarily. As an example, after a single year of operation, RepuCoin can tolerate attacks compromising 51% of the network's computing resources, even if such power stays maliciously seized for almost a whole year.
Moreover, RepuCoin provides better resilience to known attacks, compared to existing PoW systems, while achieving a high throughput of 10000 transactions per second.

### Linear Consistency for Proof-of-Stake Blockchains

Thu, 11/01/2018 - 18:50

Blockchain protocols achieve consistency by instructing parties to remove a suffix of a certain length from their local blockchain. The current state of the art in Proof of Stake (PoS) blockchain protocols, exemplified by Ouroboros (Crypto 2017), Ouroboros Praos (Eurocrypt 2018) and Sleepy Consensus (Asiacrypt 2017), suggests that the length of the segment should be $\Theta(k^2)$ for the consistency error to be exponentially decreasing in $k$. This is in contrast with Proof of Work (PoW) based blockchains, for which it is known that a suffix of length $\Theta(k)$ is sufficient for the same type of exponentially decreasing consistency error. This quadratic gap in consistency guarantee is quite significant, as the length of the suffix is a lower bound for the time required to wait for transactions to settle. Whether this is an intrinsic limitation of PoS (due to issues such as the "nothing-at-stake" problem) or it can be improved is an open question. In this work we put forth a novel and general probabilistic analysis for PoS consistency that improves the required suffix length from $\Theta(k^2)$ to $\Theta(k)$, thus showing, for the first time, how PoS protocols can match PoW blockchain protocols for exponentially decreasing consistency error. Moreover, our detailed analysis provides an explicit polynomial-time algorithm for exactly computing the (exponentially-decaying) error function, which can directly inform practice.

### Approximate and Probabilistic Differential Privacy Definitions

Thu, 11/01/2018 - 08:58

This technical report discusses three subtleties related to the widely used notion of differential privacy (DP). First, we discuss how the choice of a distinguisher influences the privacy notion and why we should always have a distinguisher if we consider approximate DP. Secondly, we draw a line between the very intuitive probabilistic differential privacy (with probability $1-\delta$ we have $\varepsilon$-DP) and the commonly used approximate differential privacy ($(\varepsilon,\delta)$-DP). Finally, we see that, and why, probabilistic differential privacy (and similar notions) are not closed under post-processing, which has significant implications for notions used in the literature.

### Time-space complexity of quantum search algorithms in symmetric cryptanalysis: applying to AES and SHA-2

Thu, 11/01/2018 - 00:55

Performance of cryptanalytic quantum search algorithms is mainly inferred from query complexity, which hides overhead induced by an implementation. To shed light on quantitative complexity analysis removing hidden factors, we provide a framework for estimating time-space complexity, carefully accounting for characteristics of target cryptographic functions. Processor and circuit parallelization methods are taken into account, resulting in time-space trade-off curves in terms of depth and qubits. The method guides how to rank different circuit designs in order of their efficiency. The framework is applied to representative cryptosystems that NIST referred to as a guideline for security parameters, reassessing the security strengths of AES and SHA-2.

### Cryptanalysis of OCB2

Wed, 10/31/2018 - 22:16

We present practical attacks against OCB2, an ISO-standard authenticated encryption (AE) scheme.
OCB2 is a highly-efficient blockcipher mode of operation. It has been extensively studied and widely believed to be secure thanks to the provable security proofs. Our attacks allow the adversary to create forgeries with a single encryption query of almost-known plaintext. The source of our attacks is the way OCB2 implements AE using a tweakable blockcipher, called XEX*. We have verified our attacks using a reference code of OCB2. Our attacks do not break the privacy of OCB2, and are not applicable to the others, including OCB1 and OCB3.

### Adding Distributed Decryption and Key Generation to a Ring-LWE Based CCA Encryption Scheme

Wed, 10/31/2018 - 16:37

We show how to build distributed key generation and distributed decryption procedures for the LIMA Ring-LWE based post-quantum cryptosystem. Our protocols implement the CCA variants of distributed decryption and are actively secure (with abort) in the case of three parties and honest majority. Our protocols make use of a combination of problem-specific MPC protocols, generic garbled circuit based MPC and generic Linear Secret Sharing based MPC. We also, as a by-product, report on the first run-times for the execution of the SHA-3 function in an MPC system.

### Non-malleable Codes against Lookahead Tampering

Wed, 10/31/2018 - 14:06

There are natural cryptographic applications where an adversary only gets to tamper with a high-speed data stream on the fly based on her view so far, namely, the lookahead tampering model. Since the adversary can easily substitute transmitted messages with her own messages, it is far-fetched to insist on strong guarantees like error-correction or, even, manipulation detection. Dziembowski, Pietrzak, and Wichs (ICS 2010) introduced the notion of non-malleable codes that provides useful message integrity for such scenarios. Intuitively, a non-malleable code ensures that the tampered codeword encodes the original message or a message that is entirely independent of the original message. Our work studies the following tampering model. We encode a message into $k\geq 1$ secret shares, and we transmit each share as a separate stream of data. Adversaries can perform lookahead tampering on each share, albeit independently. We call this the $k$-lookahead model. First, we show a hardness result for the $k$-lookahead model. To transmit an $l$-bit message, the cumulative length of the secret shares must be at least $kl/(k-1)$. This result immediately rules out the possibility of a solution with $k=1$. Next, we construct a solution for the 2-lookahead model such that the total length of the shares is $3l$, which is only $1.5\times$ the optimal encoding as indicated by our hardness result. Prior work considers the stronger model of split-state encoding that creates $k\geq 2$ secret shares, but protects against adversaries who perform arbitrary (but independent) tampering on each secret share. The size of the secret shares of the most efficient 2-split-state encoding is $l\log(l)/\log\log(l)$ (Li, ECCC 2018). Even though $k$-lookahead is a weaker tampering class, our hardness result matches that of $k$-split-state tampering by Cheraghchi and Guruswami (TCC 2014). However, our explicit constructions above achieve much higher efficiency in encoding.

### Differential Fault Attacks on Deterministic Lattice Signatures

Wed, 10/31/2018 - 09:58

In this paper, we extend the applicability of differential fault attacks to lattice-based cryptography. We show how two deterministic lattice-based signature schemes, Dilithium and qTESLA, are vulnerable to such attacks.
In particular, we demonstrate that single random faults can result in a nonce-reuse scenario which allows key recovery. We also expand this to fault-induced partial nonce-reuse attacks, which do not corrupt the validity of the computed signatures and thus are harder to detect. Using linear algebra and lattice-basis reduction techniques, an attacker can extract one of the secret key elements after a successful fault injection. Some other parts of the key cannot be recovered, but we show that a tweaked signature algorithm can still successfully sign any message. We provide experimental verification of our attacks by performing clock glitching on an ARM Cortex-M4 microcontroller. In particular, we show that up to 65.2% of the execution time of Dilithium is vulnerable to an unprofiled attack, where a random fault is injected anywhere during the signing procedure and still leads to a successful key-recovery.

### GIFT

Wed, 10/31/2018 - 03:12

In this article, we revisit the design strategy of PRESENT, leveraging all the advances provided by the research community in construction and cryptanalysis since its publication, to push the design up to its limits. We obtain an improved version, named GIFT, that provides a much increased efficiency in all domains (smaller and faster), while correcting the well-known weakness of PRESENT with regards to linear hulls. GIFT is a very simple and clean design that outperforms even SIMON or SKINNY for round-based implementations, making it one of the most energy efficient ciphers as of today. It reaches a point where almost the entire implementation area is taken by the storage and the Sboxes, where any cheaper choice of Sbox would lead to a very weak proposal. In essence, GIFT is composed of only Sbox and bit-wiring, but its natural bitslice data flow ensures excellent performances in all scenarios, from area-optimised hardware implementations to very fast software implementations on high-end platforms. We conducted a thorough analysis of our design with regards to state-of-the-art cryptanalysis, and we provide strong bounds with regards to differential/linear attacks.

### Constrained PRFs for Bit-fixing from OWFs with Constant Collusion Resistance

Wed, 10/31/2018 - 01:31

Constrained pseudorandom functions (CPRFs) allow learning "constrained" PRF keys that can evaluate the PRF on a subset of the input space, or based on some sort of predicate. First introduced by Boneh and Waters [AC '13], Kiayias et al. [CCS '13] and Boyle et al. [PKC '14], they have been shown to be a useful cryptographic primitive with many applications. The full security definition of CPRFs requires the adversary to learn multiple constrained keys, a requirement for all of these applications. Unfortunately, existing constructions of CPRFs satisfying this security notion are only known from exceptionally strong cryptographic assumptions, such as indistinguishability obfuscation (IO) and the existence of multilinear maps, even for very weak predicates. CPRFs from more standard assumptions only satisfy security for a single constrained key query. In this work, we give the first construction of a CPRF that can issue a constant number of constrained keys for bit-fixing predicates, only requiring the existence of one-way functions (OWFs). This is a much weaker assumption compared with all previous constructions. In addition, we prove that the new scheme satisfies $1$-key privacy (otherwise known as constraint-hiding), and that it also achieves fully adaptive security.
This is the only construction to achieve adaptive security outside of the random oracle model, and without sub-exponential security losses. Our technique represents a noted departure from existing CPRF constructions. We hope that it may lead to future constructions that can expose a greater number of keys, or consider more expressive predicates (such as bounded-depth circuit constraints).

### Aggregate Cash Systems: A Cryptographic Investigation of Mimblewimble

Tue, 10/30/2018 - 15:54

Mimblewimble is an electronic cash system proposed by an anonymous author in 2016. It combines several privacy-enhancing techniques initially envisioned for Bitcoin, such as Confidential Transactions (Maxwell, 2015), non-interactive merging of transactions (Saxena, Misra, Dhar, 2014), and cut-through of transaction inputs and outputs (Maxwell, 2013). As a remarkable consequence, coins can be deleted once they have been spent while maintaining public verifiability of the ledger, which is not possible in Bitcoin. This results in tremendous space savings for the ledger and efficiency gains for new users, who must verify their view of the system. In this paper, we provide a provable-security analysis for Mimblewimble. We give a precise syntax and formal security definitions for an abstraction of Mimblewimble that we call an aggregate cash system. We then formally prove the security of Mimblewimble in this definitional framework. Our results imply in particular that two natural instantiations (with Pedersen commitments and Schnorr or BLS signatures) are provably secure against inflation and coin theft under standard assumptions.
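Since the Mimblewimble abstract above leans on Pedersen commitments, a toy sketch may help make the homomorphic property concrete. Everything below is illustrative only: the parameters are tiny and insecure, the generators are arbitrary, and in a real system the discrete log of h base g must be unknown.

# Toy Pedersen commitment: commit(m, r) = g^m * h^r mod p.
p = 2**61 - 1          # a Mersenne prime; far too small for real use
g, h = 3, 7            # illustrative generators, not securely chosen

def commit(m, r):
    return pow(g, m, p) * pow(h, r, p) % p

# Additive homomorphism, the property Confidential Transactions exploit:
c1, c2 = commit(5, 11), commit(8, 23)
print(c1 * c2 % p == commit(5 + 8, 11 + 23))   # True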
2018-12-17 12:49:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36181753873825073, "perplexity": 1660.0495211334767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.57/warc/CC-MAIN-20181217113255-20181217135255-00418.warc.gz"}
https://socratic.org/questions/how-do-you-solve-1-x-3-1-x-3-1-x-2-9-1
# How do you solve 1/(x+3) + 1/(x-3) = 1/(x^2-9)?

Feb 28, 2016

The only configuration that yields a logical answer is:

$\frac{1}{x+3} + \frac{1}{x-3} = \frac{1}{x^2-9}$

in which case $x = \frac{1}{2}$.

#### Explanation:

Considering different configurations:

**Configuration 1**

Suppose the left-hand side was meant to be $\frac{1}{x+3} + \frac{1}{x-3}$.

Then the left side would be:

$\frac{(x+3)+(x-3)}{x^2-9}$

Comparing left to right gives

$x+3+x-3 = 1$
$2x = 1$
$x = \frac{1}{2}$

**Configuration 2**

Suppose the left-hand side was meant to be $\frac{1}{x+3} - \frac{1}{x-3}$.

Then the left side would be:

$\frac{(x+3)-(x-3)}{x^2-9} = \frac{6}{x^2-9}$

Comparing left to right would mean that it would have to be true that $6 = 1$. Clearly this is a contradiction, so it is not the case.

**The only possible scenario is Configuration 1, so the answer is $x = \frac{1}{2}$.**
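A one-line computer-algebra check of the accepted configuration is easy to run; this is a sketch assuming SymPy is available, not part of the original answer:

import sympy as sp

x = sp.symbols('x')
equation = sp.Eq(1/(x + 3) + 1/(x - 3), 1/(x**2 - 9))
print(sp.solve(equation, x))   # [1/2]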
2019-12-09 02:43:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 14, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8624217510223389, "perplexity": 1186.9751974526082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540517156.63/warc/CC-MAIN-20191209013904-20191209041904-00397.warc.gz"}
http://nrich.maths.org/5608/note?nomenu=1
### Why do this problem?

This problem offers an opportunity to combine skills from mathematics and science. It can be solved numerically, algebraically or graphically, so can offer a useful opportunity for discussing the merits of different methods.

### Possible approach

Introduce the boiling and freezing point of water in Celsius and Fahrenheit.

"What other information can you deduce from these temperature facts?"

Give the class some time to discuss in pairs, then bring the class together to collect ideas on the board. Possible responses might be:

"$50^\circ C = 122^\circ F$ because it's halfway between."
"$200^\circ C = 392^\circ F$ because it's another $180^\circ F$."
"A temperature increase of $100^\circ C$ is the same as a temperature increase of $180^\circ F$."

Again, give the class some time to discuss in pairs, and then collect ideas once more.

"Is there a temperature where the reading in Celsius is the same as the reading in Fahrenheit?"

Give the class plenty of time to approach this problem. Most students are likely to use a numerical approach. If some students use algebraic or graphical methods, ask them to share their approaches with the rest of the class. If nobody uses algebra or graphs, ask the class to consider first how a graph might help:

"Can you represent the original information graphically in a way that could have helped you to solve the problem?"

The graphical method can then lead on to a discussion of the algebraic representation of the straight line graph and hence algebraic methods of solution. Take time to discuss the merits of the different methods and then challenge students to show how to use each solution method to solve problems such as:

"Is there a temperature at which the Fahrenheit reading is 20 degrees higher than the Celsius reading?"
"Is there a temperature at which the Celsius reading is 20 degrees higher than the Fahrenheit reading?"

### Key questions

Does every method give the same answer?
What are the merits of the different methods?

### Possible extension

See the extension challenge introducing the Kelvin scale of temperature in the problem.

### Possible support

Spend lots of time discussing how to deduce information from the initial temperature facts given. Perhaps it would help students to suggest new values if the information is presented in a table.
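For teachers who want to check the answers to the three questions above before class, a short SymPy sketch (variable names are my own) does the algebra:

import sympy as sp

C = sp.symbols('C')
F = sp.Rational(9, 5) * C + 32           # Fahrenheit as a function of Celsius

print(sp.solve(sp.Eq(F, C), C))          # [-40]: both scales read the same
print(sp.solve(sp.Eq(F, C + 20), C))     # [-15]: Fahrenheit reads 20 higher
print(sp.solve(sp.Eq(C, F + 20), C))     # [-65]: Celsius reads 20 higher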
2015-06-30 08:22:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2610366642475128, "perplexity": 804.7453615429531}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375091925.14/warc/CC-MAIN-20150627031811-00076-ip-10-179-60-89.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1256099/let-l-be-a-natural-number-prove-that-n-lt-sqrtn-2-l-lt-n1-for-almost
# Let $l$ be a natural number. Prove that $n\lt\sqrt{n^2+l}\lt n+1$ for almost every $n$.

In my assignment I have to prove the following statement:

Let $l$ be a natural number. Prove that for almost every $n$ the following inequality is true: $$n\lt\sqrt{n^2+l}\lt n+1$$

I chose to prove this by contradiction, and I wanted to know if it's correct.

• It is obvious that $n \lt n+1$.
• Assume for the sake of contradiction that $n \ge \sqrt{n^2+l}$. Squaring both sides, we get: \begin{align} n^2 &\ge n^2 + l \\ 0 &\ge l \end{align} which contradicts the fact that $l$ is natural. Therefore, $n \lt \sqrt{n^2+l}$.
• Assume for the sake of contradiction that $\sqrt{n^2+l} \ge n+1$. Squaring both sides, we get: \begin{align} n^2+l &\ge n^2+2n+1 \\ l &\ge 2n+1 \end{align} but this is another contradiction, because $l$ is a constant number, and can't be bigger than an infinite amount of numbers.

Therefore the inequality $n\lt\sqrt{n^2+l}\lt n+1$ is true.

Is my solution correct? Thank you, Alan

• "can't be bigger than an infinite amount of numbers" is not a phrase you want to see in a proof. The idea is there though Apr 28, 2015 at 16:23
• @JackYoon thank you, I am very glad to hear that. Can you be more specific about the idea I have to improve there? – Alan Apr 28, 2015 at 16:24
• Search for $n$ for which the inequality definitely holds. Apr 28, 2015 at 16:26
• Let's say I choose n to be bigger than $2n+1$. Then for almost every n, it's bigger than this $l$, and we have a contradiction? – Alan Apr 28, 2015 at 16:40
• Minor point: consider using a letter other than $l$ for a variable, looks too much like $1$. – user153918 Apr 28, 2015 at 17:53

$$l\geq 2n+1$$ This doesn't contradict a known fact. It just states a boundary for $l$, given your assumption is correct. In a proof by contradiction, you need to arrive at results contradicting a general result for all $n$. Also $\sqrt{n^2+l}<n+1$ isn't true for all $n$, given an $l$. In fact, it is true only for $n>{\dfrac{l-1}{2}}$.

To prove the inequality, you can simply say that it holds for all $n>{\dfrac{l-1}{2}}$, which will also satisfy the phrase "for almost every $n$".

• Thank you. I see what is wrong, how do I make it right? – Alan Apr 28, 2015 at 16:34
• Let's say I choose n to be bigger than $2n+1$. Then for almost every n, it's bigger than this $l$, and we have a contradiction? – Alan Apr 28, 2015 at 16:41
• @Alan, the answer has been edited. Hope it is clear now. (It had a small mistake earlier) Apr 30, 2015 at 3:35
• thank you! I'll have a look. – Alan Apr 30, 2015 at 3:50
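To complement the accepted answer, a quick numerical check (a sketch, not part of the original thread) confirms that $n>(l-1)/2$ is exactly where the right-hand inequality starts to hold:

import math

def holds(n, l):
    return n < math.sqrt(n**2 + l) < n + 1

l = 7
print(holds(3, l))                                   # False: sqrt(16) = 4 = n + 1
print(all(holds(n, l) for n in range(4, 10_000)))    # True for all n > (l-1)/2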
2022-07-07 05:35:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9781809449195862, "perplexity": 198.8964203156794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683683.99/warc/CC-MAIN-20220707033101-20220707063101-00202.warc.gz"}
https://numbersandcode.com/more-simple-time-series-models-this-time-with-decision-trees
treeTS

## Introduction

In two earlier posts ([1] and [2]), we had two examples of how to build well performing time-series models with relatively lightweight approaches. This time, I'll demonstrate an idea with tree based estimators, again under the premise of keeping the model fairly simple.

In [1]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor, export_graphviz
import graphviz

## The dataset

To ease comparison with the former approaches, I stuck with the airline passengers dataset from before. Since tree based estimators can only work with stationary data, we need to remove any form of non-stationarity. This is the same problem as with the Naive-Bayes approach from [2], therefore the preprocessing is the same as for that model.

In [2]:
data = pd.read_csv("passengers.csv", header=0)

In [3]:
data.head()

Out[3]:
   Month    International airline passengers: monthly totals in thousands. Jan 49 ? Dec 60
0  1949-01  112
1  1949-02  118
2  1949-03  132
3  1949-04  129
4  1949-05  121

In [4]:
data.set_index("Month", inplace=True)

In [5]:
plt.figure(figsize=(12,6))
plt.plot(data.values)

Out[5]: [<matplotlib.lines.Line2D at 0x1a1d1d15c0>]

In [6]:
train_size = len(data) - 36
test_size = len(data) - train_size
train, test = data.iloc[:train_size], data.iloc[train_size:]
train_diffed = train.diff().dropna().values
test = test.values
t_train = np.arange(len(train_diffed)).reshape(-1,1)
t_test = np.arange(len(train_diffed), test_size+len(train_diffed)).reshape(-1,1)

In [7]:
plt.figure(figsize=(12,6))
plt.plot(train_diffed)

Out[7]: [<matplotlib.lines.Line2D at 0x1a1d3df0f0>]

In [8]:
trend_removed = train_diffed.reshape(-1) / ((t_train+1)**(1/2)).reshape(-1)
plt.figure(figsize=(12,6))
plt.plot(trend_removed)

Out[8]: [<matplotlib.lines.Line2D at 0x1a1d53f4a8>]

In [9]:
train_full = trend_removed[5:]
t_train = np.arange(len(train_full)).reshape(-1,1)
t_test = np.arange(len(train_full), test_size+len(train_full)).reshape(-1,1)

## The model

Now comes the fun part. As an interesting twist, we will stay completely in the time-domain for the depending variables and won't employ any sort of autoregressive approach - i.e. we will not regress the present realization on past realizations like

$$X_t=f(X_{t-1},X_{t-2},...,X_1)$$

but rather go with

$$X_t=f(t)$$

The challenge when trying to use a tree model to regress on the time-index is obviously the continuous increase of $t$ once we leave the training data. Imagine building a Decision Tree with data from periods $[1,50]$ and wanting to forecast periods $[51,75]$. Per inductive bias of the tree algorithm, predictions for times outside of the training period will be flat:

In [10]:
tree_model = DecisionTreeRegressor(max_depth=3, random_state=123)
tree_model.fit(t_train, train_full)
pred = tree_model.predict(t_test)
pred_mean = np.full(36, np.mean(train_full))
plt.figure(figsize=(12,6))
plt.plot(np.concatenate([train_full, pred]), label="Training data")
plt.plot(np.arange(len(train_full), 36+len(train_full)), pred, label="Out of sample forecast")
plt.plot(np.full(int(len(train_full)+36), np.mean(train_full)), label="Unconditional Mean")
plt.legend()

Out[10]: <matplotlib.legend.Legend at 0x1a1d5a5c50>

In fact, each future time-index will fall into the exact same leaf, where

$$t_{future}>path\,with\,largest\,time\,index\,among\,all\,binary\,splits$$

While we could assume that the flat line is the best possible prediction, we likely miss out on the obviously recurring patterns in the time-series.
Also, the height of the line seems to be a much worse predictor than the unconditional mean of the time-series - not a good model so far. What we want is to somehow express the periodic patterns in our time variable and use those in our tree model. A first solution would be to create new features by squashing the time-index through (co-)sine functions with different frequencies $p$:

$$g_{sin}(t)=sin(p\cdot t)$$
$$g_{cos}(t)=cos(p\cdot t)$$

This might indeed make sense and be a valid solution. However there is an even easier way that avoids having to find the right frequencies and - as a bonus - allows us to make the resulting Decision Tree interpretable (as long as its size is small enough to be human readable). The simple trick here is to create new features by using the modulo operator on $t$:

$$g_i^*(t)=t\,mod\,i,\quad i\in\mathbb{Z}^+$$

By doing so, we project time onto an integer circle that gets traversed every $i$ periods. We then have

$$g_i^*(t)\in\{0,...,i-1\}\quad\forall t$$

regardless of whether we are in the training or forecasting period. We can then create multiple features $g_i^*(t)$ by varying $i$ over some range. Let's implement the proposed procedure:

In [11]:
mod_train = np.concatenate([t_train%t for t in range(1,37)],1)
mod_test = np.concatenate([t_test%t for t in range(1,37)],1)

In [12]:
np.random.seed(123)
tree_model = DecisionTreeRegressor(max_depth=3, random_state=123)
tree_model.fit(mod_train, train_full)

Out[12]:
DecisionTreeRegressor(criterion='mse', max_depth=3, max_features=None,
                      max_leaf_nodes=None, min_impurity_decrease=0.0,
                      min_impurity_split=None, min_samples_leaf=1,
                      min_samples_split=2, min_weight_fraction_leaf=0.0,
                      presort=False, random_state=123, splitter='best')

In [13]:
pred_tree = tree_model.predict(mod_test)

In [14]:
plt.figure(figsize=(12,6))
plt.plot(np.concatenate([train_full, pred_tree]), label="Training data")
plt.plot(np.arange(len(train_full), 36+len(train_full)), pred_tree, label="Out of sample forecast")
plt.plot(pred_mean, label="Unconditional Mean")
plt.legend()

Out[14]: <matplotlib.legend.Legend at 0x1a1d728ba8>

The forecast looks reasonable - the obvious patterns from the transformed training data seem to be recognized by our model. Now we can evaluate on our actual test set (of course, we need to invert the initial transformation of our dataset first).

In [15]:
pred = (np.cumsum(pred_tree)*((t_test+1)**(1/2)).reshape(-1) + train.iloc[-1,0])
pred_mean = (np.cumsum(pred_mean)[:36]*((t_test+1)**(1/2)).reshape(-1) + train.iloc[-1,0])

In [16]:
plt.figure(figsize=(12,6))
plt.plot(test, label="Out of sample test data")
plt.plot(pred, label="Forecast")
plt.plot(pred_mean, label="Unconditional mean")
plt.legend()

Out[16]: <matplotlib.legend.Legend at 0x1a1d894a20>

In [17]:
np.sqrt(np.mean((pred - test.reshape(-1))**2))

Out[17]: 26.69749158609182

In [18]:
np.sqrt(np.mean((pred_mean - test.reshape(-1))**2))

Out[18]: 75.27880388952589

The results on the test set look fine - our model clearly outperforms the naive forecast and is close to the former Naive Bayes model. Now let's output the exact model that the Decision Tree has learnt:

In [19]:
graphviz.Source(export_graphviz(tree_model, out_file=None,
                                feature_names=["Period=%s" %(i) for i in range(1,37)]))

Out[19]:

We can see that the model learnt the rather obvious yearly pattern (Period=12), another reasonable quarterly one (Period=3), and a pattern that repeats itself every four months (Period=4).
Interestingly, the model also learnt two patterns that aren't as obvious as the other three, namely a Period=13 and a Period=25 pattern. Although these two patterns don't contribute as much to the reduction of MSE in each node, it might be interesting to use this knowledge for further modeling.

## Conclusion

In this rather short post, we looked at a reasonable way to use a Decision Tree for time-series forecasting. The proposed approach can be made quite sparse in terms of model parameters - in the simplest case we could go with a Decision Tree stump that splits the input space only once. This can be quite advantageous for time-series problems where the amount of available training data is small, in order to avoid overfitting. On the other hand, there is of course the easy interpretability of tree models that allows us to easily explain our forecasts to potential stakeholders. Obviously, we could also add an autoregressive component or external regressors here to make our model more powerful. To enhance our predictive power while keeping the model interpretable, we could switch over to the RuleFit algorithm which I explained here and also applied to a non time-series problem here.
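For readers who want to lift the modulo trick out of the notebook, here is a compact, self-contained restatement of the feature construction. The function and the toy series are my own, not from the post, but the transformation matches the one used above:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def modulo_features(t, max_period=36):
    """Project the time index onto integer circles t mod i, for i = 1..max_period."""
    t = np.asarray(t).reshape(-1, 1)
    return np.concatenate([t % i for i in range(1, max_period + 1)], axis=1)

# Toy seasonal series: a period-12 pattern plus noise.
rng = np.random.default_rng(0)
t = np.arange(120)
y = np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(len(t))

model = DecisionTreeRegressor(max_depth=3, random_state=0)
model.fit(modulo_features(t[:96]), y[:96])
forecast = model.predict(modulo_features(t[96:]))   # stays periodic out of sample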
2020-08-12 18:41:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.517076849937439, "perplexity": 1491.8241593588198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738913.60/warc/CC-MAIN-20200812171125-20200812201125-00434.warc.gz"}
http://kitchingroup.cheme.cmu.edu/blog/2014/10/30/Generating-your-bibliography-in-another-file/
## Generating your bibliography in another file

| categories: bibtex | tags: |

It has been proposal season. This particular round of proposals had a requirement to print the references in a separate file from the proposal. Usually I just build a pdf from org-mode, and then manually separate the references. That is not very fun if you have to do it several times. Here we examine a way to avoid this issue by using a new nobibliography link from org-ref with the bibentry LaTeX package.

We wrote this paper mehta-2014-ident-poten and this one xu-2014-relat.

# Bibliography

Here is the resulting pdf, with no references: separate-bib.pdf.

## 1 Getting the references in another file

Now, we need to get the reference file. We create a new file, in org-mode, mostly for the convenience of exporting that to a pdf. Here is the code that does that.

(let* ((base (file-name-sans-extension
              (file-name-nondirectory (buffer-file-name))))
       (bbl (concat base ".bbl"))
       (orgfile (concat base "-references.org"))
       (pdffile (concat base "-references.pdf")))
  (with-temp-file orgfile
    (insert (format "#+LATEX_CLASS: cmu-article
#+OPTIONS: toc:nil

#+BEGIN_LaTeX
\\input{%s}
#+END_LaTeX
" bbl)))
  (find-file orgfile)
  (org-latex-export-to-pdf)
  (org-open-file pdffile))

And, here is the reference file: separate-bib.pdf

I think this would be integrated into a noexport build section of a document that would generate the pdf and references.
2017-09-19 15:16:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6447600722312927, "perplexity": 3814.3790792381856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685850.32/warc/CC-MAIN-20170919145852-20170919165852-00618.warc.gz"}
https://calculator.academy/rain-load-calculator/
Enter the depth of water up to the secondary inlet of drainage and the extra depth of water above the secondary inlet into the calculator to determine the rain load.

The following equation is used to calculate the Rain Load.

RL = 5.2*(ds+dh)

• Where RL is the rain load (psf, pounds per square foot)
• ds is the depth of the water up to the secondary inlet of drainage (inches)
• dh is the depth of the water above the secondary inlet of drainage (inches)

To calculate the rain load, add together the depth of water up to the inlet and above the inlet, then multiply by 5.2.

## What is a Rain Load?

Definition: A rain load is a measure of the pressure acting on a roof with a certain depth of water resting in the drainage system of the roof.

## How to Calculate Rain Load?

Example Problem: The following example outlines the steps and information needed to calculate rain load.

First, determine the depth of the water up to the secondary inlet. In this case, the depth of the water here is 2 inches.

Next, determine the depth of the water above the secondary inlet. In this example, this depth is 1.4 inches.

Finally, calculate the rain load using the formula above:

RL = 5.2*(ds+dh)
RL = 5.2*(2+1.4)
RL = 17.68 psf
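The formula is simple enough to script directly; here is a minimal sketch (the function name is my own):

def rain_load_psf(ds_inches, dh_inches):
    """Rain load R = 5.2 * (ds + dh): water depths in inches, result in psf."""
    return 5.2 * (ds_inches + dh_inches)

print(rain_load_psf(2, 1.4))   # 17.68 (up to float rounding)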
2023-03-21 15:19:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5754643678665161, "perplexity": 966.1886794962547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943698.79/warc/CC-MAIN-20230321131205-20230321161205-00000.warc.gz"}
https://artofproblemsolving.com/wiki/index.php/1978_AHSME_Problems/Problem_13
1978 AHSME Problems/Problem 13

Problem 13

If $a,b,c$, and $d$ are non-zero numbers such that $c$ and $d$ are the solutions of $x^2+ax+b=0$ and $a$ and $b$ are the solutions of $x^2+cx+d=0$, then $a+b+c+d$ equals

$\textbf{(A) }0\qquad \textbf{(B) }-2\qquad \textbf{(C) }2\qquad \textbf{(D) }4\qquad \textbf{(E) }(-1+\sqrt{5})/2$

Solution

By Vieta's formulas, $c + d = -a$, $cd = b$, $a + b = -c$, and $ab = d$. From the equation $c + d = -a$, $d = -a - c$, and from the equation $a + b = -c$, $b = -a - c$, so $b = d$. Then from the equation $cd = b$, $cb = b$. Since $b$ is nonzero, we can divide both sides of the equation by $b$ to get $c = 1$. Similarly, from the equation $ab = d$, $ab = b$, so $a = 1$. Then $b = d = -a - c = -2$. Therefore, $a + b + c + d = 1 + (-2) + 1 + (-2) = \boxed{-2}$. The answer is (B).
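The solution can be double-checked mechanically; this SymPy sketch (not part of the wiki page) just verifies that the claimed values satisfy both quadratics:

import sympy as sp

x = sp.symbols('x')
a, b, c, d = 1, -2, 1, -2
print(sp.solve(x**2 + a*x + b, x))   # [-2, 1]: the roots are c and d
print(sp.solve(x**2 + c*x + d, x))   # [-2, 1]: the roots are a and b
print(a + b + c + d)                 # -2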
2021-09-24 04:23:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9752949476242065, "perplexity": 55.76863808419482}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00265.warc.gz"}
https://www.tutorialspoint.com/Breadth-First-Search-BFS-for-a-Graph
# Breadth First Search (BFS) for a Graph

The Breadth First Search (BFS) traversal is an algorithm which is used to visit all of the nodes of a given graph. In this traversal algorithm, one node is selected and then all of the adjacent nodes are visited one by one. After completing all of the adjacent vertices, it moves further to check another vertex and checks its adjacent vertices again.

To implement this algorithm, we need to use the Queue data structure. All the adjacent vertices are added into the queue; when all adjacent vertices are completed, one item is removed from the queue and we start traversing through that vertex again.

In a graph we may sometimes get cycles, so we will use an array to mark whether a node has been visited already or not.

## Input and Output

Input: The adjacency matrix of the graph.

   A B C D E F
A  0 1 1 1 0 0
B  1 0 0 1 1 0
C  1 0 0 1 0 1
D  1 1 1 0 1 1
E  0 1 0 1 0 1
F  0 0 1 1 1 0

Output: BFS Traversal: B A D E C F

## Algorithm

bfs(vertices, start)

Input − The list of vertices, and the start vertex.
Output − Traverse all of the nodes, if the graph is connected.

Begin
   define an empty queue que
   at first mark all nodes status as unvisited
   mark the status of the start vertex as visited
   add the start vertex into que
   while que is not empty, do
      delete item from que and set to u
      display the vertex u
      for all vertices i adjacent with u, do
         if vertices[i] is unvisited, then
            mark vertices[i] as temporarily visited
            add vertices[i] into que
         done
      done
      mark u as completely visited
   done
End

## Example

#include<iostream>
#include<queue>
#define NODE 6
using namespace std;

typedef struct node {
   int val;
   int state;    // status: 0 = unvisited, 1 = visited, 2 = completed
} node;

int graph[NODE][NODE] = {
   {0, 1, 1, 1, 0, 0},
   {1, 0, 0, 1, 1, 0},
   {1, 0, 0, 1, 0, 1},
   {1, 1, 1, 0, 1, 1},
   {0, 1, 0, 1, 0, 1},
   {0, 0, 1, 1, 1, 0}
};

void bfs(node *vert, node s) {
   node u;
   int i;
   queue<node> que;

   for(i = 0; i < NODE; i++) {
      vert[i].state = 0;          // mark every node as unvisited
   }

   vert[s.val].state = 1;         // mark the start node as visited
   que.push(s);                   // insert the starting node

   while(!que.empty()) {
      u = que.front();            // delete from queue and print
      que.pop();
      cout << char(u.val + 'A') << " ";

      for(i = 0; i < NODE; i++) {
         if(graph[i][u.val]) {            // i is adjacent to u
            if(vert[i].state == 0) {      // when the node is non-visited
               vert[i].state = 1;
               que.push(vert[i]);
            }
         }
      }
      u.state = 2;                // mark u as completed
   }
}

int main() {
   node vertices[NODE];
   node start;
   char s;

   for(int i = 0; i < NODE; i++) {
      vertices[i].val = i;
   }

   s = 'B';                       // starting vertex B
   start.val = s - 'A';
   cout << "BFS Traversal: ";
   bfs(vertices, start);
   cout << endl;
}

## Output

BFS Traversal: B A D E C F
2020-12-03 20:54:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2863481044769287, "perplexity": 4035.3340113375784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141732696.67/warc/CC-MAIN-20201203190021-20201203220021-00307.warc.gz"}
https://www.physicsforums.com/threads/water-tower-spring-system-diff-eq.675911/
# Water tower/spring system Diff Eq

1. Mar 3, 2013

### SithsNGiggles

1. The problem statement, all variables and given/known data

Suppose a water tower in an earthquake acts as a mass-spring system. Assume that the container on top is full and the water does not move around. The container then acts as a mass and the support acts as the spring, where the induced vibrations are horizontal. Suppose that the container with water has a mass of 10,000 kg. It takes a force of 1000 N to displace the container 1 m. For simplicity, assume no friction. When the earthquake hits the water tower is at rest.

Suppose that an earthquake induces an external force $F(t)=mA\omega^2\cos(\omega t)$. What is the natural frequency of the water tower? Find a formula for the maximal amplitude of the resulting oscillations of the water container (the maximal deviation from the rest position). The motion will be a high frequency wave modulated by a low frequency wave, so simply find the constant in front of the sines.

2. Relevant equations

3. The attempt at a solution

Here's the differential equation I set up:
$10,000x''+1,000x=mA\omega^2\cos(\omega t)$
For the natural frequency, I used the formula $\omega_0=\sqrt{\frac{k}{m}}$, which gives me $\omega_0=\sqrt{\frac{1}{10}}\text{ rad/s}=\frac{1}{2\pi}\sqrt{\frac{1}{10}}\text{ Hz}$. Is this right?
And for the second part, do I just solve this equation? I'm not sure what it means to find the "constant in front of the sines."

2. Mar 3, 2013

### HallsofIvy

What you have is that $y(t)= \cos(\sqrt{1/10}\,t)$ and $y(t)= \sin(\sqrt{1/10}\,t)$ are solutions to the associated homogeneous equation, 10000x''+ 1000x= 0. Can you find the general solution to the entire equation?

3. Mar 3, 2013

### SithsNGiggles

Yup, I've found that the general solution is
$\displaystyle x(t)=C_1\cos\left(\sqrt{\frac{1}{10}}t\right)+C_2\sin\left(\sqrt{\frac{1}{10}}t\right)+\frac{mA\omega^2}{1000-10000\omega^2}\cos(\omega t)$
We're also assuming $\omega\not=\omega_0$ for the second part. I forgot to put that in my first post. By the way, is the $m$ in the solution the same as the mass of the water tower?
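A quick symbolic check of the thread's answer (a sketch assuming SymPy; not part of the original discussion) confirms both the natural frequency and the particular solution:

import sympy as sp

t, m, A, w = sp.symbols('t m A omega', positive=True)

omega0 = sp.sqrt(sp.Rational(1000, 10000))         # sqrt(k/m) = sqrt(1/10) rad/s
xp = m*A*w**2*sp.cos(w*t) / (1000 - 10000*w**2)    # proposed particular solution

residual = 10000*sp.diff(xp, t, 2) + 1000*xp - m*A*w**2*sp.cos(w*t)
print(omega0)                  # sqrt(10)/10
print(sp.simplify(residual))   # 0: the particular solution satisfies the ODE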
2018-03-23 14:04:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.725247323513031, "perplexity": 398.0586330763369}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648226.72/warc/CC-MAIN-20180323122312-20180323142312-00552.warc.gz"}
http://openstudy.com/updates/55a676c0e4b071e6530b9ee5
## anonymous one year ago

In a circle of radius 10 cm, a sector has an area of 40pi sq. cm. What is the degree measure of the arc of the sector? a) 72° b) 144° c) 180°

1. anonymous
total area of the circle is $$\pi r^2$$ which in your case is $$\pi\times 10^2=100\pi$$

2. anonymous
the sector has area $$40\pi$$ and $$\frac{40\pi}{100\pi}=\frac{4}{10}$$ in other words the area of the sector is four tenths of the total area

3. anonymous
the entire circle has $$360^\circ$$ to find your portion, compute $\frac{4}{10}\times 360$ or $.4\times 360$

4. anonymous
It's 144. But why not multiply it by 180 though?

5. anonymous
this is a different kind of problem than the last one we are not converting from degrees to radians or anything just computing a ratio

6. anonymous
$\frac{40\pi}{100\pi}=\frac{x}{360}$

7. anonymous
Alright I understand

8. anonymous
k good more?

9. anonymous
Yeah

10. anonymous
k lets knock em out
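The ratio argument in the thread reduces to two lines of arithmetic:

sector_fraction = 40 / 100       # 40*pi out of a total area of 100*pi
print(sector_fraction * 360)     # 144.0 degrees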
2016-10-23 00:00:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8475849628448486, "perplexity": 1814.6150464184316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719079.39/warc/CC-MAIN-20161020183839-00317-ip-10-171-6-4.ec2.internal.warc.gz"}
https://gmatclub.com/forum/seven-is-equal-to-how-many-thirds-of-seven-269179.html
# Seven is equal to how many thirds of seven ?

Math Expert
Joined: 02 Sep 2009
Posts: 49251
27 Jun 2018, 21:45

Difficulty: 25% (medium). Question Stats: 62% (00:26) correct, 38% (00:32) wrong, based on 91 sessions.

Seven is equal to how many thirds of seven ?

(A) $$\frac{1}{3}$$
(B) 1
(C) 3
(D) 7
(E) 21

e-GMAT Representative
Joined: 04 Jan 2015
Posts: 1992
27 Jun 2018, 22:11

Solution

To find:
• 7 is equal to how many thirds of 7

Approach and Working: If we assume that 7 is equal to n-thirds of 7, we can rewrite the given statement in terms of the following expression:
• $$\frac{n}{3} * 7 = 7$$

Or, $$\frac{n}{3} = 1$$
Or, n = 3

Hence, the correct answer is option C.

Senior SC Moderator
Joined: 22 May 2016
Posts: 1977
28 Jun 2018, 07:40

Bunuel wrote: Seven is equal to how many thirds of seven ?

No need for arithmetic or algebra. A whole divided into three parts has ... 3 parts. There are 3 one-third "parts" in any "whole," whether the whole = 1, 7, or 999,999.

Alternatively, translate into an equation:

N = "how many" = # of "thirds of 7"
"thirds of 7" = $$\frac{1}{3}*7$$

$$7=N*\frac{1}{3}*7$$
$$21=7N$$
$$N=3$$

Target Test Prep Representative
Status: Head GMAT Instructor
Joined: 04 Mar 2011
Posts: 2835
02 Jul 2018, 09:35

Bunuel wrote: Seven is equal to how many thirds of seven ?

7/(7/3) = 7 x 3/7 = 3

Alternate Solution: Wording is most important in answering this question. The phrase "thirds of seven" simply means 1/3 of 7, which is (1/3) x 7, or 7/3. Let's let n = how many thirds of seven there are in seven. We can now express the question as:

7 = n x (7/3)

Multiplying both sides by 3/7, we obtain:

7 x (3/7) = n

3 = n
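To formalize the moderator's observation that the particular number seven is irrelevant (my one-line restatement, not a post from the thread): for any $$x \neq 0$$,

$$x = n \cdot \frac{x}{3} \;\Rightarrow\; n = 3$$

so any nonzero whole contains exactly three of its thirds.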
2018-09-19 19:05:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5891431570053101, "perplexity": 8761.45056145502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156270.42/warc/CC-MAIN-20180919180955-20180919200955-00486.warc.gz"}
https://computingkitchen.com/BijectiveHilbert.jl/stable/globalgray/
# Global Gray

This is a very concise algorithm for Hilbert curve generation. It works in n dimensions. It requires little code. It comes from a little paper [1] behind a paywall, sadly. Most algorithms for the Hilbert curve use Gray codes to generate the shape. Skilling observed that, instead of using the space key algorithm, which dives to each level deeper and rotates the Gray code, the algorithm could use a global transformation of all values with a Gray code and then do a minor fix-up afterwards to untwist it. The resulting code is much simpler than earlier efforts.

For developers, note that this algorithm relies on encoding the Hilbert index in what, to me, was a surprising order. To understand the interleaving of the Hilbert index for this algorithm, start with a 2D value where higher bits have larger subscripts, $(a_4a_3a_2a_1, b_4b_3b_2b_1)$. Skilling encodes this as $a_4b_4a_3b_3a_2b_2a_1b_1$, which looks good on paper, but it means the first element of the vector has the higher bits.

[1] Skilling, John. "Programming the Hilbert curve." AIP Conference Proceedings. Vol. 707. No. 1. American Institute of Physics, 2004.
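A minimal sketch of that bit order (my own illustration in Java, not code from the package; the class and method names are hypothetical):

```java
// Interleave two 4-bit coordinates (a, b) as a4 b4 a3 b3 a2 b2 a1 b1:
// the first coordinate supplies the higher bit at every level.
public class SkillingOrder {
    static int interleave(int a, int b) {
        int result = 0;
        for (int bit = 3; bit >= 0; bit--) {           // from a4/b4 down to a1/b1
            result = (result << 1) | ((a >> bit) & 1); // a's bit lands first...
            result = (result << 1) | ((b >> bit) & 1); // ...then b's bit below it
        }
        return result;
    }

    public static void main(String[] args) {
        // 0b1010 and 0b0110 interleave to 0b10011100 (decimal 156).
        System.out.println(Integer.toBinaryString(interleave(0b1010, 0b0110)));
    }
}
```

Note how the most significant output bit comes from the first coordinate, which is exactly the "first element has the higher bits" surprise described above.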
2022-08-08 23:23:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6610358357429504, "perplexity": 1076.1838267723522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00347.warc.gz"}
https://www.embeddedrelated.com/blogs-6/nf/Jason_Sachs/all.php
## The Dilemma of Unwritten Requirements

October 25, 2015 · 1 comment

You will probably hear the word "requirements" at least 793 times in your engineering career, mostly in the context of how important it is, in any project, to agree upon clear requirements before committing to (and hastily proceeding towards) a deadline. Some of those times you may actually follow that advice. Other times it's just talk, like how you should "wear sunscreen when spending time outdoors" and "eat a diet low in saturated fats and...

## Trust, but Verify: Examining the Output of an Embedded Compiler

September 27, 2015

I work with motor control firmware on the Microchip dsPIC33 series of microcontrollers. The vast majority of that firmware is written in C, with only a few percent in assembly. And I got to thinking recently: I programmed in C and C++ on an Intel PC from roughly 1991 to 2009. But I don't remember ever working with x86 assembly code. Not once. Not even reading it. Which seems odd. I do that all the time with embedded firmware. And I think you should too. Before I say why, here are...

## How to Read a Power MOSFET Datasheet

One of my pet peeves is when my fellow engineers misinterpret component datasheets. This happened a few times recently in separate instances, all involving power MOSFETs. So it's time for me to get on my soapbox. Listen up! I was going to post an article on how to read component datasheets in general. But MOSFETs are a good place to start, and are a little more specific. I'm not the first person to write something about how to read datasheets; here are some other good...

## Lessons Learned from Embedded Code Reviews (Including Some Surprises)

My software team recently finished a round of code reviews for some of our motor controller code. I learned a lot from the experience, most notably why you would want to have code reviews in the first place. My background is originally from the medical device industry. In the United States, software in medical devices gets a lot of scrutiny from the Food and Drug Administration, and for good reason; it's a place for complexity to hide latent bugs. (Can you say "

## Ten Little Algorithms, Part 4: Topological Sort

July 5, 2015 · 1 comment

Other articles in this series: Today we're going to take a break from my usual focus on signal processing or numerical algorithms, and focus on...

## Oh Robot My Robot

June 26, 2015

Oh Robot! My Robot! You've broken off your nose!
Your head is spinning round and round, your eye no longer glows,
Each program after program tapped your golden memory,
You used to have 12K, now there is none that I can see,
Under smoldering antennae,
Over long forgotten feet,
My sister used your last part:
The chip she tried to eat.
Oh Robot, My Robot, the remote controls—they call,
The call—for...

## Important Programming Concepts (Even on Embedded Systems) Part VI : Abstraction

Earlier articles: We have come to the last part of the Important Programming Concepts series, on abstraction. I thought I might also talk about why there isn't a Part VII, but decided it would distract from this article — so if you want to know the reason, along with what's next,

## Ten Little Algorithms, Part 3: Welford's Method (and Friends)

Other articles in this series: Last time we talked about a low-pass filter, and we saw that a one-line...
## Python Code from My Articles Now Online in IPython Notebooks

Ever since I started using IPython Notebooks to write these articles, I've been wanting to publish them in a form such that you can freely use my Python code. One of you (maredsous10) wanted this access as well. Well, I finally bit the bullet and automated a script that will extract the Python code and create standalone notebooks, that are available publicly under the Apache license on my bitbucket account: https://bitbucket.org/jason_s/embedded-blog-public This also means they...

## Ten Little Algorithms, Part 2: The Single-Pole Low-Pass Filter

Other articles in this series: I'm writing this article in a room with a bunch of other people talking, and while sometimes I wish they would just SHUT UP, it would be...

## Real-time clocks: Does anybody really know what time it is?

We recently started writing software to make use of a real-time clock IC, and found to our chagrin that the chip was missing a rather useful function, namely elapsed time in seconds since the standard epoch (January 1, 1970, midnight UTC). Let me back up a second. A real-time clock/calendar (RTC) is a micropower chip that has an oscillator on it that keeps counting time, independent of main system power. Usually this is done with a lithium battery that can power the RTC for years, so that even...

## Important Programming Concepts (Even on Embedded Systems) Part III: Volatility

October 10, 2014

vol·a·tile adjective \ˈvä-lə-təl, especially British -ˌtī(-ə)l\ : likely to change in a very sudden or extreme way : having or showing extreme or sudden changes of emotion : likely to become dangerous or out of control

Other articles in this series:

## 10 Items of Test Equipment You Should Know

When life gets rough and a circuit board is letting you down, it's time to turn to test equipment. The obvious ones are multimeters and oscilloscopes and power supplies. But you know about those already, right? Here are some you may not have heard of: Non-contact current sensors. Oscilloscope probes measure voltage. When you need to measure current, you need a different approach. Especially at high voltages, where maintaining galvanic isolation is important for safety. The usual...

## Someday We'll Find It, The Kelvin Connection

You'd think it wouldn't be too hard to measure electrical resistance accurately. And it's really not, at least according to wikiHow.com: you just follow these easy steps:

• Choose the item whose resistance you wish to measure.
• Plug the probes into the correct test sockets.
• Turn on the multimeter.
• Select the best testing range.
• Touch the multimeter probes to the item you wish to measure.
• Set the multimeter to a high voltage range after finishing the...

## Ten Little Algorithms, Part 6: Green's Theorem and Swept-Area Detection

Other articles in this series:

## How to Include MathJax Equations in SVG With Less Than 100 Lines of JavaScript!

Today's short and tangential note is an account of how I dug myself out of Documentation Despair. I've been working on some block diagrams. You know, this sort of thing, to describe feedback control systems: And I had a problem. How do I draw diagrams like this? I don't have Visio and I don't like Visio. I used to like Visio. But then it got Microsofted. I can use MATLAB and Simulink, which are great for drawing block diagrams. Normally you use them to create a...
## Linear Feedback Shift Registers for the Uninitiated, Part XVIII: Primitive Polynomial Generation

Last time we figured out how to reverse-engineer parameters of an unknown CRC computation by providing sample inputs and analyzing the corresponding outputs. One of the things we discovered was that the polynomial $x^{16} + x^{12} + x^5 + 1$ used in the 16-bit X.25 CRC is not primitive — which just means that all the nonzero elements in the corresponding quotient ring can't be generated by powers of $x$, and therefore the corresponding 16-bit LFSR with taps in bits 0, 5,...

## Linear Feedback Shift Registers for the Uninitiated, Part VII: LFSR Implementations, Idiomatic C, and Compiler Explorer

November 13, 2017

The last four articles were on algorithms used to compute with finite fields and shift registers: Today we're going to come back down to earth and show how to implement LFSR updates on a microcontroller. We'll also talk a little bit about something called "idiomatic C" and a neat online tool for experimenting with the C compiler.

## Jaywalking Around the Compiler

Our team had another code review recently. I looked at one of the files, and bolted upright in horror when I saw a function that looked sort of like this:

    void some_function(SOMEDATA_T *psomedata)
    {
        asm volatile("push CORCON");
        CORCON = 0x00E2;
        do_some_other_stuff(psomedata);
        asm volatile("pop CORCON");
    }

There is a serious bug here — do you see what it is?

## Lost Secrets of the H-Bridge, Part II: Ripple Current in the DC Link Capacitor

July 28, 2013

In my last post, I talked about ripple current in inductive loads. One of the assumptions we made was that the DC link was, in fact, a DC voltage source. In reality that's an approximation; no DC voltage source is perfect, and current flow will alter the DC link voltage. To analyze this, we need to go back and look at how much current actually is being drawn from the DC link. Below is an example. This is the same kind of graph as last time, except we added two...
2023-01-27 08:12:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32392996549606323, "perplexity": 2177.698742476822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494974.98/warc/CC-MAIN-20230127065356-20230127095356-00696.warc.gz"}
https://metacademy.org/roadmaps/rgrosse/stanford_phil151/version/1
# Stanford Phil 151: First-Order Logic

Intended for: Phil 151 students, anyone interested in logic

Phil 151, First-Order Logic, is the second term of Stanford's undergraduate logic sequence. First-order logic (FOL) refers to a logical system which includes the propositional connectives, variables, functions, relations, and quantifiers. In a sense, FOL is powerful enough to describe all of mathematics, yet its syntax and semantics can be defined precisely enough to say quite a lot about it. While the course is listed in the philosophy department, it's really more like a math course. It formally defines the syntax and semantics of FOL, and most of the class is concerned with proving things about the logical system itself. It's a required course for the symbolic systems major, and has a reputation as a weeder course because, for a lot of students, it is their first course that requires writing rigorous mathematical proofs. The class roughly follows the first two chapters of Enderton's A Mathematical Introduction to Logic. This roadmap roughly corresponds to the course as it was taught in 2005.

## Background: logical languages, proof techniques

The course assumes that students are already comfortable working with propositional logic and first-order logic, at the level of understanding what the symbols mean, being able to express statements in those languages, and being able to write formal proofs (in some formal system). It also assumes knowledge of a few concepts in set theory. Finally, it requires a certain level of comfort with several mathematical proof strategies: direct proof, proof by contradiction, and mathematical induction.
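As a concrete illustration of the sort of statement FOL can express (my example, not part of the original roadmap): over the natural numbers with the usual ordering, the sentence ∀x ∃y (x < y) combines variables, a relation, and both kinds of quantifier to say that no number is greatest.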
2022-10-07 10:28:44
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8089494109153748, "perplexity": 508.6370116243501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00354.warc.gz"}
https://complementaryslackness.wordpress.com/2009/02/01/book-review-gauge-fields-knots-and-gravity/
# Book Review: Gauge Fields, Knots and Gravity

Up for review: Gauge Fields, Knots and Gravity by John Baez and Javier P. Muniain. Published by World Scientific; ISBN 9810220340 (pbk)

In short: Buy this book immediately!

Gauge Fields, Knots and Gravity is a surprisingly small book, given the hefty title, but its goal is to provide a solid introduction to these subjects, rather than attempt a complete and detailed treatment. The authors state in the preface that they "hope that both physicists who wish to learn more differential geometry and topology, and mathematicians who wish to learn more gauge theory and general relativity, will find this book a useful place to start." Speaking as a physicist, I can report that they succeeded marvelously, and I further admit to having learned a lot of gauge theory and general relativity as well.

The book is divided into three parts, which you'd be forgiven for expecting are the three advertised topics. Gauge fields and knots are covered in part II, gravity in part III, while part I, under the heading of Electromagnetism, gives easily the best introduction to differential geometry that I have come across. By the end of the first part the reader can understand and appreciate Maxwell's equations in the simple coordinate-independent form $dF=0$ and $*d*F=J$. The second part takes up the gauge theory aspects of Maxwell's equations directly, treating fiber bundles, connections, and curvature while working up to the Yang-Mills equation, Chern-Simons theory, and the links to knot theory. Even more ambitious is part III, which (somewhat hurriedly) covers the standard mathematical apparatus of general relativity before moving on to the real goals, the ADM formalism and prospects for quantization in Ashtekar's new variables.

There are almost surely hundreds of precise textbooks on differential geometry and fiber bundles, many bringing to mind the observation by C.N. Yang that "There are only two kinds of math books. Those you cannot read beyond the first sentence, and those you cannot read beyond the first page." On the other end of the spectrum are the "[x] for physicists" books which often treat their chosen material intuitively but not precisely enough to be useful in calculating or deriving anything. The chief strength of this book is its ability to do both well, and in a non-cumbersome formalism. Concepts are explained in a clear, easy to read manner and then connected to precise definitions written in a useful formalism. Any one of these three can make for a useful book, but Baez and Muniain set a new standard by offering all three. And over 300 exercises.

Filed under science

### 17 responses to "Book Review: Gauge Fields, Knots and Gravity"

• Eric: I have finished a good number of the problems in this book, have them typed in TeX, and would be happy to share them if anyone wants them.
• Chris: I am reading the book now. Could you please send a copy of your solutions to my email: physics.purdue@gmail.com Thank you so much! Chris
• Piero: I would also like to take a look at your solutions; please send a copy to odyssey AT tiscali.it. TeX is fine.
• Mateja: Hi! Also reading this great book. Could you please send a copy of your solutions to my email: mateja.boskovic@gmail.com Mateja
• Skip Macy: I would like to have a copy of your solutions to problems in Gauge Fields, Knots and Gravity. Please send them to smacyj@gmail.com. Thanks, Skip
• Eric, I would be very grateful if you could send me a copy of the solutions to aniketsuniljoshi@gmail.com. I get stuck in many places, and have few people I could ask for help.
• Paul: Hi, I was wondering if I could have the solutions. My email is ptiede@uwaterloo.ca.
• Pablo: Hi, could you send a copy of your solutions to my email? pablo_omar1989@hotmail.com Thanks
• Matt: Hi Eric, I would very much appreciate your solutions as well. My email is mhodel@mit.edu Best, Matt

1. Jacob: I'm also reading through the book. Could I please take a look at your solutions too? Thanks, Jacob jmblock2@gatech.edu
2. Stefano: I'm also reading this wonderful book. Could you please send a copy of your solutions to my email: stefano.gragnani@fastwebnet.it Thanks Stefano
3. Josh: I've started working through the text as well. Any solutions would be appreciated. My email is mattes.josh@gmail.com. Incidentally, I agree with the reviewer. The introduction to differential geometry is fantastic.
4. Micah: I would be delighted to take a look at your solutions, m.emprechtinger@gmx.de
2018-05-23 10:50:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4528064727783203, "perplexity": 759.1099755320524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865595.47/warc/CC-MAIN-20180523102355-20180523122355-00443.warc.gz"}
http://mathhelpforum.com/geometry/71612-need-help-3-problems-graduation-please-help.html
Hi All, I'm a student of The American School, and the only thing between me and my diploma is 3 proofs. I've gotten help from several people, but we can't seem to figure out these proofs. Any help would be awesome!!

Given: In triangle ABC, (angle) B = 120 degrees
Prove: Angle A is not equal to 60 degrees
Plan: Use an indirect proof.
NOTE: Write the proof using the paragraph method.

Prove: If a diagonal of a parallelogram bisects an angle of the parallelogram, the parallelogram is a rhombus. (State your plan and give a proof.)
Given: ABCD is a parallelogram with (angle) 1 congruent with (angle) 2
To Prove: ABCD is a rhombus
Plan : Thanks to Soroban for help with this proof =)

Prove that the tangents to a circle at the endpoints of a diameter are parallel. State what is given, what is to be proved, and your plan of proof. Then write a two-column proof.

Thanks so much to anyone who can help me with one or all of these proofs. I'm so ready to be finished with High School LOL

2. Hello, proofsRkickingmybutt! Here's the second one . . .

Prove: If a diagonal of a parallelogram bisects an angle of the parallelogram, the parallelogram is a rhombus.
Given: $ABCD$ is a parallelogram with $\angle1 = \angle2$
To Prove: $ABCD$ is a rhombus.

We need to prove that two adjacent sides are equal.

Code:
      A * - - - - - - - - * B
       /2 *            1 /
      /     *           /
     /        *        /
    /           *     /
   / 3            * 4/
  D * - - - - - - - - * C

$1.\;\angle1 = \angle 2$ . . . . . . . . . . Given
$2.\;AB \parallel DC,\:AD \parallel BC$ . . def. parallelogram
$3.\;\angle1 = \angle 3,\:\angle 2 = \angle 4$ . . . alt-int. angles
$4.\;\angle1 \,=\,\angle 4$ . . . . . . . . . Transitivity
$5.\;\Delta ABC\text{ is isosceles}$ . . . def. isosceles
$6.\;\therefore\:AB = BC$ . . . . . . def. isosceles . . . $Q.E.D.$

3. Oh Thank you SOOOOO much!!! =D!!

4. Hello, proofsRkickingmybutt!

Prove that the tangents to a circle at the endpoints of a diameter are parallel. State what is given, what is to be proved, and your plan of proof. Then write a two-column proof.

Code:
              A
    P - - - - * * * - - - - Q
          *   |   *
        *     |     *
       *      |      *
      *       |       *
      *       *O      *
      *       |       *
       *      |      *
        *     |     *
          *   |   *
    R - - - - * * * - - - - S
              B

There is a Theorem that says:
. . If a line is tangent to a circle, the radius drawn to
. . the point of tangency is perpendicular to the tangent.

We have a circle with center $O$ and diameter $AB.$
Line $PQ$ is tangent to circle $O$ at $A.$
Line $RS$ is tangent to circle $O$ at $B.$

$1.\;OA \perp PQ,\:OB \perp RS$ . . . . . Theorem
$2.\;\angle OAP = 90^o,\:\angle OBS = 90^o$ . . def. perpendicular
$3.\;\angle OAP = \angle OBS$ . . . . . . . . . All right angles are equal.
$4.\;\therefore\:PQ \parallel RS$ . . . . . . . . . . . alt-int. angles

5. Originally Posted by proofsRkickingmybutt:
Given: In triangle ABC, (angle) B = 120 degrees
Prove: Angle A is not equal to 60 degrees
Plan: Use an indirect proof.

I'm guessing it's too trivial to say that if B is 120 degrees and A is 60 degrees, then C must equal 180 (the total number of degrees in a triangle) minus 180, which is 0... which is clearly nonsense... and therefore A is, at the least, not equal to 60?

Edit: Unless you're supposed to prove the sum of the angles in a triangle = 180 degrees... but that's been done to death
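Written out as the requested paragraph proof (my own completion of the sketch in the last reply, using the standard fact that a triangle's angle measures sum to $180^\circ$): Assume, to the contrary, that $\angle A = 60^\circ$. Then

$\angle C = 180^\circ - \angle B - \angle A = 180^\circ - 120^\circ - 60^\circ = 0^\circ$

which is impossible, because every angle of a triangle has positive measure. The assumption must therefore be false, so $\angle A \neq 60^\circ$.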
2015-11-29 05:13:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 32, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7154174447059631, "perplexity": 469.8068337164163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398456289.53/warc/CC-MAIN-20151124205416-00028-ip-10-71-132-137.ec2.internal.warc.gz"}
http://mathonline.wikidot.com/open-and-closed-set-criteria-for-continuity-of-functions-on
# Open and Closed Set Criteria for Continuity of Functions on Metric Spaces

Recall from the Continuity of Functions on Metric Spaces page that if $(S, d_S)$ and $(T, d_T)$ are metric spaces and $f : S \to T$ then $f$ is said to be continuous at a point $p \in S$ if for all $\epsilon > 0$ there exists a $\delta > 0$ such that if $d_S(x, p) < \delta$ then $d_T(f(x), f(p)) < \epsilon$. We also said that $f$ is continuous on all of $S$ if $f$ is continuous at each point $p \in S$.

We will now look at two very important theorems which tell us when a function $f$ is continuous on all of $S$ or not in terms of open and closed sets.

Theorem 1: Let $(S, d_S)$ and $(T, d_T)$ be metric spaces and let $f : S \to T$. Then $f$ is continuous on all of $S$ if and only if for every open set $V$ in $T$ we have that the inverse image $f^{-1}(V)$ is open in $S$.

In the proof below we will use the "ball definition" of continuity of $f$. Recall that $f : S \to T$ is continuous at a point $p \in S$ if for all $\epsilon > 0$ there exists a $\delta > 0$ such that $f(B_S(p, \delta)) \subseteq B_T(f(p), \epsilon)$.

• Proof: $\Rightarrow$ Suppose that $f : S \to T$ is continuous on all of $S$ and let $V$ be an open set in $T$. Consider the set $f^{-1}(V)$. If $f^{-1}(V) = \emptyset$ then we are done since the empty set is open in $S$. Otherwise, let $p \in f^{-1}(V)$. Then $f(p) \in V$.
• Since $V$ is open there exists an $\epsilon > 0$ such that:

(1) \begin{align} \quad B_T(f(p), \epsilon) \subseteq V \end{align}

• Since $f$ is continuous at $p$, for this given $\epsilon$ there exists a $\delta > 0$ such that:

(2) \begin{align} \quad f(B_S(p, \delta)) \subseteq B_T(f(p), \epsilon) \subseteq V \end{align}

• Therefore:

(3) \begin{align} \quad B_S(p, \delta) \subseteq f^{-1}(V) \end{align}

• This shows that $f^{-1}(V)$ is open in $S$.
• $\Leftarrow$ Now suppose that for all open sets $V$ in $T$ we have that the inverse images $f^{-1}(V)$ are open sets in $S$. To show that $f$ is continuous on all of $S$ we must show that $f$ is continuous for every $p \in S$.
• Let $p \in S$ be such that $y = f(p)$. Then for every $\epsilon > 0$, the ball centered at $y = f(p)$ with radius $\epsilon > 0$ is open in $T$, i.e., $B_T(y, \epsilon) = B_T(f(p), \epsilon)$ is open in $T$.
• Since $B_T(f(p), \epsilon)$ is open in $T$ we have that the inverse image $f^{-1}(B_T(f(p), \epsilon))$ is open in $S$. Now, since $f(p) \in B_T(f(p), \epsilon)$ we have that $p \in f^{-1}(B_T(f(p), \epsilon))$, and since $f^{-1}(B_T(f(p), \epsilon))$ is open in $S$ there exists a $\delta > 0$ such that:

(4) \begin{align} \quad B_S(p, \delta) \subseteq f^{-1}(B_T(f(p), \epsilon)) \\ \quad f(B_S(p, \delta)) \subseteq B_T(f(p), \epsilon) \end{align}

• Therefore $f$ is continuous at each $p \in S$, so $f$ is continuous on all of $S$. $\blacksquare$

Theorem 2: Let $(S, d_S)$ and $(T, d_T)$ be metric spaces and let $f : S \to T$. Then $f$ is continuous on all of $S$ if and only if for every closed set $V$ in $T$ we have that the inverse image $f^{-1}(V)$ is closed in $S$.

The only difference between Theorem 1 and Theorem 2 is the replacement of the word "open" with "closed".

• Proof: $\Rightarrow$ Suppose that $f$ is continuous on all of $S$. Then by Theorem 1, for every open set in $T$ the inverse image is open in $S$.
• Let $V$ be any closed set in $T$. Then $T \setminus V$ is open in $T$, so $f^{-1}(T \setminus V) = f^{-1}(T) \setminus f^{-1}(V) = S \setminus f^{-1}(V)$ is open in $S$. Hence $f^{-1}(V)$ is closed in $S$.
• So for every closed set $V$ in $T$ we have that $f^{-1}(V)$ is closed in $S$.
• $\Leftarrow$ Suppose that for every closed set $V$ in $T$ we have that $f^{-1}(V)$ is closed in $S$. Let $U$ be an open set in $T$. Then $T \setminus U$ is closed in $T$, and $f^{-1}(T \setminus U)$ is closed in $S$. Furthermore:

(5) \begin{align} \quad f^{-1}(T \setminus U) = f^{-1}(T) \setminus f^{-1}(U) = S \setminus f^{-1}(U) \end{align}

• So $S \setminus f^{-1}(U)$ is closed and $f^{-1}(U)$ is open. So for every open set $U$ in $T$ we have that $f^{-1}(U)$ is open in $S$, so by Theorem 1, $f$ is continuous on all of $S$. $\blacksquare$
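A quick concrete check of Theorem 1 (an example of mine, not from the original page): let $f : \mathbb{R} \to \mathbb{R}$ be given by $f(x) = x^2$ with the usual metric, and take the open set $V = (1, 4)$ in $\mathbb{R}$. Then:

\begin{align} \quad f^{-1}((1, 4)) = (-2, -1) \cup (1, 2) \end{align}

This union of open intervals is open in $\mathbb{R}$, exactly as Theorem 1 requires of the continuous function $f$.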
2020-07-05 00:32:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9973977208137512, "perplexity": 50.97967720352454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886802.13/warc/CC-MAIN-20200704232817-20200705022817-00172.warc.gz"}
http://codereview.stackexchange.com/questions/49778/a-deduplicating-iterator
# A deduplicating iterator

Implement an iterator (generic) which skips an element if it is equal to the previous element, e.g.: AAABBCCCCD produces ABCD. Below is my attempt. Please suggest improvements.

    import java.util.Iterator;

    public class DeDupIterator implements Iterator {

        E next = null;
        Iterator<E> itr;

        public DeDupIterator(Iterator<E> iter) {
            itr = iter;
            next = itr.next();
        }

        @Override
        public boolean hasNext() {
            if (itr.hasNext())
                if (next != null) {
                    return true;
                }
            return false;
        }

        @Override
        public E next() {
            E item = null;
            while (itr.hasNext()) {
                item = (E) itr.next();
                if (!item.equals(next)) {
                    E temp = next;
                    next = item;
                    return temp;
                }
            }
            next = item;
            return next;
        }

        @Override
        public void remove() {
            itr.remove();
        }
    }

Your code looks .... incomplete ... missing the <E> generic type declarations for the class? – rolfl May 15 at 0:03
@rolfl as this code contains bugs, I am deleting it from here. Feel free to answer it here. stackoverflow.com/questions/23667005/deduplicate-iterator. Thanks for your inputs. This post will be deleted in 10 :). – m0nish May 15 at 0:22
@m0nish: As there is now an upvoted answer, the question cannot be deleted by its asker. It is still on-topic for this site as bugs were not found before, and bugs reported in reviews are okay. If you find any bugs yourself and are unable to fix them, they should be posted on SO. – Jamal May 15 at 0:55

    public boolean hasNext() {
        if(itr.hasNext())
            if (next != null) {
                return true;
            }
        return false;
    }

First, there's inconsistent usage of braces for your block if statements. Second, if you are already keeping track of what is the next element to be returned by your de-dup iterator, wouldn't it be enough to just check against that?

    public boolean hasNext() {
        return next != null;
    }

A suggestion regarding the remove() implementation: the Javadoc API suggests that it can be called only once per call to next(). Since your implementation of next() is already quite different, you may want to reconsider whether your implementation can be as simple as calling remove() on the underlying iterator. In your example, is it expected to be removing only one 'C' or all 'C's when remove() is called?

-

Generic Types

You appear to have copied this from inside another class, or something, because you are missing the generic type for the iterator <E>. Also, assuming you get that right, there is no need to do the explicit cast inside the code; the following line:

    item = (E) itr.next();

should be just:

    item = itr.next();

Nulls

Your code is not very defensive when it comes to null values. If the iterator contains a null, you will have NullPointerExceptions all over the place.

Bug

If the initial iterator is empty, you will throw a NoSuchElementException when you construct your DeDupIterator. This code:

    public DeDupIterator(Iterator<E> iter) {
        itr = iter;
        next = itr.next();
    }

-

Anyone trying to implement an Iterator (or an Iterable) with unusual semantics should review the implementations in the Guava library. Notice, in particular, the use of the StatePattern in the abstract iterator. Because Iterator.next() should throw a NoSuchElementException when the iterator has been exhausted, you need to be able to remember what happened when you last tried to pre-fetch a value from the source iterator. Also note that it's common to consider that some Iterators are Unmodifiable, throwing an UnsupportedOperationException if the consumer calls remove(). Since the problem statement is unclear about what behavior is expected when remove is called, this is the approach that I would recommend in this case.

If you were using the Guava library, then it would make sense to use a Predicate to keep track of whether an element matches the previous element, and use Iterators.filter() to defer the iterator state work to the library.

EDIT: There actually is support for a look-ahead iterator in Guava called PeekingIterator. You can wrap an iterator using Iterators.peekingIterator. You can check here for the implementation, and the example for PeekingIterator from the Guava documentation actually happens to be the problem you are trying to solve.

-

Iterator.remove(), if supported, is supposed to remove the element most recently returned by .next(). However, since this iterator works by peeking ahead, calling .remove() on the underlying iterator is going to remove a future element instead of the most recently returned element. I can't think of a good way to fix this bug. Perhaps .remove() will just have to be an unsupported operation.

-
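Pulling the review feedback together, here is one way the class could look after revision. This is a sketch of mine, not code posted in the thread: the generic type is declared, the empty-source and end-of-iteration cases are handled, and remove() is left unsupported per the last answer.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

public class DedupIterator<E> implements Iterator<E> {
    private final Iterator<E> source;
    private E pending;           // next distinct value to hand out
    private boolean hasPending;  // false once the source is exhausted

    public DedupIterator(Iterator<E> source) {
        this.source = source;
        if (source.hasNext()) {  // safe even when the source is empty
            pending = source.next();
            hasPending = true;
        }
    }

    @Override
    public boolean hasNext() {
        return hasPending;
    }

    @Override
    public E next() {
        if (!hasPending) {
            throw new NoSuchElementException();
        }
        E result = pending;
        hasPending = false;
        while (source.hasNext()) {           // skip the run of duplicates
            E candidate = source.next();
            if (!candidate.equals(result)) { // assumes non-null elements
                pending = candidate;
                hasPending = true;
                break;
            }
        }
        return result;
    }

    @Override
    public void remove() {
        // Peeking ahead makes the underlying remove() target the wrong
        // element, so the safest choice is to not support it at all.
        throw new UnsupportedOperationException();
    }
}
```

On the input AAABBCCCCD this yields A, B, C, D and then throws NoSuchElementException, matching the stated contract.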
2014-11-01 03:20:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19797559082508087, "perplexity": 3215.409562921782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637903439.28/warc/CC-MAIN-20141030025823-00193-ip-10-16-133-185.ec2.internal.warc.gz"}
http://www.angrymath.com/
Monday, November 23, 2015

A Bunch of Dumb Things Journalists Say About Pi

A lovely rant by Dave Renfro, via Pat Ballew's blog, here:

Monday, November 16, 2015

Joyous Excitement

Did you know that this week is the 100th anniversary of Einstein's completion of General Relativity? Specifically, it was November 18, 1915 when Einstein drafted a paper that realized the final fix to his theories that would account for the previously unexplainable advance of the perihelion of Mercury. The next week he submitted this paper, "The field equations of gravitation", to the Prussian Academy of Sciences, which included what we now refer to simply as "Einstein's equations". Einstein later recalled of this seminal moment:

For a few days I was beside myself with joyous excitement.

And further:

... in all my life I have not laboured nearly so hard, and I have become imbued with great respect for mathematics, the subtler part of which I had in my simple-mindedness regarded as pure luxury until now.

(Quotes from "General Relativity" by J.J. O'Connor and E.F. Robertson at the School of Mathematics and Statistics, University of St. Andrews, Scotland).

Monday, November 9, 2015

Measurement Granularity

Answering a question on StackExchange, I came across some very nice little articles by the Six Sigma system people on Measurement System Analysis:

Establishing the adequacy of your measurement system using a measurement system analysis process is fundamental to measuring your own business process capability and meeting the needs of your customer (specifications).

Take, for instance, cycle time measurements: It can be measured in seconds, minutes, hours, days, months, years and so on. There is an appropriate measurement scale for every customer need/specification, and it is the job of the quality professional to select the scale that is most appropriate.

I like this because this issue comes up a lot in the mathematics of game design: What is the most convenient and efficient scale for a particular system of measurement? And what should we be considering when we mindfully choose those units at the outset?

One key example in my D&D gaming is that, at the outset, units of encumbrance (weight carried) were ludicrously set in tenths of a pound, so tracking gear carried by any character involves adding up units in the hundreds or thousands, frequently requiring a calculator to do so. As a result, D&D encumbrance is infamous for being almost entirely unusable, and frequently discarded during play. My argument is that this is almost entirely due to an incorrect choice in measurement scale for the task -- equivalent to measuring a daily schedule in seconds, when what you really need is hours. I've recommended for a long time using the flavorfully archaic scale of "stone" weight (i.e., 14-pound units; see here), although the advantage could also be achieved by taking 5- or 10-pound units as the base. Likewise, I have a tendency to defend other Imperial units of weight as being useful in this sense (see: Human scale measurements), although I might be biased just a bit for being so steeped in D&D (further example: a league is about how far one walks in an hour, etc.).

The Six Sigma articles further show a situation where the difference in two production processes is discernible at one scale of measurement, but invisible at another incorrectly-chosen scale of measurement.
See more below:

Monday, November 2, 2015

On Common Core

As people boil the oil and man the ramparts for this decade's education-reform efforts, I've gotten more questions recently about what I think regarding Common Core. Fortunately, I had a chance to look at it recently as part of CUNY's ongoing attempts to refine our algebra remediation and exam structure.

A few opening comments: One, this is purely in regards to the math side of things, and mostly just focused on the area of 6th-8th grade and high school Algebra I that my colleagues and I are largely involved in remediating (see the standards here: http://www.corestandards.org/Math/... and I would highlight the assertion that "Indeed, some of the highest priority content for college and career readiness comes from Grades 6-8.", Note on courses & transitions). Second, we must distinguish what Common Core specifies and what it does not: it does dictate things to know at the end of each grade level, but not how they are to be taught. In general:

The standards establish what students need to learn, but they do not dictate how teachers should teach. Teachers will devise their own lesson plans and curriculum, and tailor their instruction to the individual needs of the students in their classrooms. (Frequently Asked Questions: What guidance do the Common Core Standards provide to teachers?)

Specifically in regards to math:

The standards themselves do not dictate curriculum, pedagogy, or delivery of content. (Note on courses & transitions)

So this foreshadows a two-part answer:

(1) I think the standards look great. Everything that I've seen in the standards themselves looks smart, rigorous, challenging, core to the subject, and pretty much indispensable to a traditional college curriculum in calculus, statistics, computer programming, and other STEM pursuits. I encourage you to read them at the link above. It includes pretty much everything in a standard algebra sequence for the last few centuries or so. I like the balanced requirement to achieve both conceptual understanding and procedural fluency (http://www.corestandards.org/Math/Practice/). As always, my response in a lot of debates is, "you need both". And this reflects the process of presenting higher-level mathematics theorems: a careful proof, and then applications. The former guarantees correctness and understanding; the latter uses the theorem as a powerful shortcut to get work done more efficiently.

Quick example that I came across last night: "By the end of Grade 3, know from memory all products of two one-digit numbers." (http://www.corestandards.org/Math/Content/3/OA/). That's not a nonsense exercise, that's a necessary tool to later understand long division, factoring, fractions, rational versus irrational numbers, estimations, the Fundamental Theorems of Arithmetic and Algebra, etc. I was happy to spot that as a case example. (And I deeply wish that we could depend on all of our college students having that skill.)

I like what I see for sample tests. Here are some examples from the nation-wide PARCC consortium (by Pearson, of course; http://parcc.pearson.com/practice-tests/math/): I'm looking at the 7th- and 8th-grade and Algebra I tests. They all come in two parts: Part I, short questions, multiple-choice, with no calculators allowed. Part II, more sophisticated questions, short-answer (not multiple choice), with calculators allowed. I think that's great: you need both.
New York State writes their own Common Core tests instead of using PARCC, at least at the high school level (http://www.nysedregents.org/): here I'm looking mostly at Algebra I (http://www.nysedregents.org/algebraone/). Again, a nice pattern of one part multiple-choice, the other part short-answer. I wish we could do that in our system. Now, the NYS Algebra I test is all-graphing-calculator mandatory, which sets my teeth on edge a bit compared to the PARCC tests. Maybe I could live with that as long as students have confirmed mental mastery at the 7th- and 8th-grade level (not that I can confirm that they do). Even the grading rubric shown here for NYS looks fine to me (approximately half-credit for calculation, and half-credit for conceptual understanding and approach on any problem; that's pretty close to what I've evolved to do in my own classes).

In summary: Pretty great stuff as far as published standards and test questions (at least for 7th-8th grade math and Algebra I).

(2) The implementation is possibly suspect. Having established rigorous standards and examinations, these don't solve some of the endemic problems in our primary education system. Granted that "Teachers will devise their own lesson plans and curriculum, and tailor their instruction to the individual needs of the students in their classrooms." (above):

Most teachers in grades K-6, and even 7-8 in some places (note that's specifically the key grades highlighted above for "some of the highest priority content for college and career readiness") are not mathematics specialists. In fact, U.S. education school entrants are perennially the very weakest of all incoming college students in proficiency and attitude towards math (also: here). If the teachers at these levels fundamentally don't understand math themselves -- don't understand the later algebra and STEM work that it prepares them for -- then I have a really tough time seeing how they can understand the Common Core requirements, or effectively select and implement appropriate mathematical curriculum for their classrooms. Sometimes I refer to students at this level as having "anti-knowledge" -- and I find that it's much easier to instruct a student who has never heard of algebra ever (which sometimes happens for graduates of certain religious programs) than it is to deconstruct and repair the incorrect conceptual frameworks of students with many years of broken instruction.

Before I go on: The best solution to this would be to massively increase salary and benefits for all public-school teachers, and implement top-notch rigorous requirements for entry to education programs (as done in other top-performing nations). A second-best solution, which is probably more feasible in the near-term, would be to place mathematics-specialist teachers in all grades K-12.

The other key problem I see is: how are the test scores generated? We already know that in many places students take tests, and then the test scores are arbitrarily inflated or scaled by the state institutions, manipulating them to guarantee some particular high percentage is deemed "passing" (regardless of actual proficiency, for political purposes). For example, the conversion chart for NYS Algebra I Common Core raw scores to final scores for this past August is shown below (from NYS regents link above):

Now, this is a test that had a maximum total of 86 possible points scored.
If we linearly converted this to a percentage, we would just multiply any score by 100/86 ≈ 1.16; it would add 14 points at the top of the scale, about 7 points at the middle, and 0 points at the bottom. But that's not what we see here -- it's a nonlinear scaling from raw to final. The top adds 14 points, but in the middle it adds 30 or more points in the raw range from 13 to 40. The final range is 0 to 100, allowing you to think it might be a percentage, but it's not. If we consider 60% to be minimal normal passing on a test, for this test that would occur at the 52-point raw score mark; but that gets scaled to a 73 final score, which usually means a middle-C grade. Looking at the 5 performance levels (more-or-less equivalent to A through F letter grades): A performance level of "3" is achieved with a raw score of just 30, which is only 30/86 ≈ 35% of the available points on the test. A performance level of "2" is achieved with a raw score of only 20, that is, 20/86 ≈ 23% of the available points on the test. And these low levels (near random-guessing) are considered acceptable for awards of a high school diploma (www.p12.nysed.gov/assessment/reports/commoncore/tr-a1-ela.pdf, p. 19):

In summary: While the publicized standards and exam formats look fine to me, the devil is in the details. On the input end, actual curriculum and instruction are left as undefined behavior in the hands of primary-school teachers who are not specialists, rarely empowered, and frequently the very weakest of all professionals in math skills and understanding. And on the output end, grading scales can be manipulated arbitrarily to show any desired passing rate, almost entirely disconnected from the actual level of mastery demonstrated in a cohort of students. So I fear that almost any number of students can go through a system like that and not actually meet the published Common Core standards to be ready for work in college or a career.

Monday, October 26, 2015

Double Factorial Table

The double factorial is the product of a number and every second natural number less than itself. That is:

$$n!! = \prod_{k = 0}^{\lceil n/2 \rceil - 1} (n - 2k) = n(n-2)(n-4)\ldots$$

For example, $$7!! = 7 \cdot 5 \cdot 3 \cdot 1 = 105$$, while $$8!! = 8 \cdot 6 \cdot 4 \cdot 2 = 384$$. Presentation of the values for double factorials is usually split up into separate even- and odd- sequences. Instead, I wanted to see the sequence all together, as below:

Monday, October 19, 2015

Geometry Formulas in Tau

Here's a modified geometry formula sheet so all the presentations of circular shapes are in terms of tau (not pi); tack it to your wall and see if anybody spots the difference. (Original sheet here.)

Monday, October 12, 2015

On Zeration

In my post last week on hyperoperations, I didn't talk much about the operation under addition, the zeroth operation in the hierarchy, which many refer to as "zeration". There is a surprising amount of disagreement about exactly how zeration should be defined.

The standard Peano axioms defining the natural numbers stipulate a single operation called the "successor". This is commonly written S(n), which indicates the next natural number after n. Later on, addition is defined in terms of repeated successor operations, and so forth.

The traditional definition of zeration, per Goodstein, is: $$H_0(a, b) = b + 1$$.

Now when I first saw this, I was surprised and taken aback. All the other operations start with $$a$$ as a "base", and then effectively apply some simpler operation $$b$$ times, so it seems odd to start with the $$b$$ and just add one to it.
(If anything my expectation would have been to take $$a+1$$, but that doesn't satisfy the regular recursive definition of $$H_n$$ when you try to construct addition.) As it turns out, when you get to this basic level, you're doomed to lose many of the regular properties of the operations hierarchy. So there's nothing to do but start arguing about which properties to prioritize as "most fundamental" when constructing the definition.

Here are some points in favor of the standard definition $$H_0(a, b) = b + 1$$:

(1) It does satisfy the recursive formula that repeated applications are equivalent to addition ($$H_1$$).
(2) It does look passingly like counting by 1, i.e., the Peano "successor" operation.
(3) It shares the key identity that $$H_n(a, 0) = 1$$, for all $$n \ge 3$$.
(4) Since it is an elementary operation (addition, really), it can be extended from natural numbers to all real and complex numbers in a fashion which is analytic (infinitely differentiable).

But here are some points against the standard definition:

(1) It is not "really" a binary operator like the rest of the hierarchy, in that it totally ignores the first parameter $$a$$.
(2) Because it ignores $$a$$, it's not commutative like the other low-level operations $$n = 1$$ or $$2$$ (yet like them it is still associative and distributive, or as I sometimes say, collective of the next higher operation).
(3) For the same reason, it has no identity element (no way to recover the value $$a$$, unique among the entire hyperoperations hierarchy).
(4) It's the only hyperoperation which doesn't need a special base case for when $$b = 0$$.
(5) I might turn around favorable point #3 above and call it weird and unfavorable, in that it is misaligned in this way with operations $$n = 1$$ and $$2$$, and it's the only case of one of the key identities being added at a lower level instead of being lost. See how weird that looks below?

So as a result, a variety of alternative definitions have been put forward. I think my favorite is $$H_0(a, b) = \max(a, b) + 1$$. Again, this looks a lot like counting; I might possibly explain it to a young student as "count one more than the largest number you've seen before".

Points in favor:

(1) Repeated applications are again the same as addition.
(2) It is truly a binary operation.
(3) It is commutative, and thus completes the trifecta of commutativity, association, and distribution/collection being true for all operations $$n < 3$$.
(4) It does have an identity element, in $$b = 0$$.
(5) It maintains the pattern of losing more of the high-level identities, and in fact perfects the situation in that none of the five identities hold for this zeration (all "no's" in the modified table above for $$n = 0$$).

Points against:

(1) It isn't exactly the same as the unary Peano successor function.
(2) It's non-differentiable, and therefore cannot be extended to an analytic function over the fields of real or complex numbers.

There are vocal proponents of a related possible re-definition: $$H_0(a, b) = \max(a, b) + 1$$ if a ≠ b, $$a + 2$$ if a = b. The advantage here is that it matches some identities in other operations, like $$H_n(a, a) = H_{n+1}(a, 2)$$ and $$H_n(2, 2) = 4$$, but I'm less impressed by specific magic numbers like that (as compared to having commutativity and the pattern of actually losing more identities). The disadvantage is obviously that the possibility of adding 2 in the $$a+2$$ case gets us even further away from the simple Peano successor function.
And then some people want to establish commutativity so badly that they assert this: $$H_0(a, b) = \ln(e^a + e^b)$$. That does get you commutativity, but at that point we're so far away from simple counting in natural numbers that I don't even want to think about it.

Final thought: While most people interpret the standard definition of zeration, $$H_0(a, b) = b + 1$$, as "counting 1 more place from b", it makes more sense to my brain to turn that around and say that we are "counting b places from 1". That is, ignoring the $$a$$ parameter, start at the number 1 and apply the successor function repeatedly b times: $$S(S(S(...S(1))))$$, with the $$S$$ function appearing $$b$$ times. This feels more like "basic" Peano counting, it maintains the sense of $$b$$ being the number of times some simpler operation is applied, and it avoids defining zeration in terms of the higher operation of addition. And then you also need to stipulate a special base case for $$b = 0$$, like all the other hyperoperations, namely $$H_0(a, 0) = 1$$. So maybe the standard definition is the best we can do, and the closest expression of what Peano successor'ing in natural numbers (counting) really indicates. Perhaps we can't really have a "true" binary operator at level $$H_0$$, at a point when we haven't even discovered what the number "2" is yet.

P.S. Can we consider defining an operation one level even lower, perhaps $$H_{-1}(a, b) = 1$$, which ignores both parameters, just returns the natural number 1, and loses every single one of the regular properties of hyperoperations (including recursivity in the next one up)?
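To make the competing definitions concrete, here is a short Python sketch (mine, not from the post; the function names are made up) of the hyperoperation hierarchy with a pluggable zeration, so the standard and max-based definitions can be compared directly:

```python
# A minimal sketch (not from the original post) of the hyperoperation
# hierarchy H_n over the naturals, with the zeration level swappable so
# the competing definitions can be compared directly.

def zeration_standard(a, b):
    return b + 1                      # Goodstein: H_0(a, b) = b + 1

def zeration_max(a, b):
    return max(a, b) + 1              # the max-based alternative

def H(n, a, b, zeration=zeration_standard):
    """Hyperoperation H_n(a, b) by the usual recursion."""
    if n == 0:
        return zeration(a, b)
    if b == 0:                        # special base cases for each level
        return a if n == 1 else (0 if n == 2 else 1)
    return H(n - 1, a, H(n, a, b - 1, zeration), zeration)

if __name__ == "__main__":
    assert H(1, 3, 4) == 7            # addition
    assert H(2, 3, 4) == 12           # multiplication
    assert H(3, 3, 4) == 81           # exponentiation
    # repeated max-zeration also recovers addition (H_1):
    assert H(1, 5, 3, zeration_max) == 8
    # commutativity holds for max-zeration but not the standard one:
    assert zeration_max(2, 5) == zeration_max(5, 2)
    assert zeration_standard(2, 5) != zeration_standard(5, 2)
```

Swapping in zeration_max leaves the levels above unchanged, which matches the point that repeated applications of either zeration recover addition.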
2015-11-29 10:28:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5523724555969238, "perplexity": 1159.9650519774643}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398457697.46/warc/CC-MAIN-20151124205417-00177-ip-10-71-132-137.ec2.internal.warc.gz"}
https://rjlipton.wordpress.com/2012/08/17/the-right-definition/
Sometimes definitions are more important than theorems, sometimes

Atle Selberg was one of the first two postwar Fields Medalists, awarded in 1950. This partly recognized his work on an “elementary” proof of the Prime Number Theorem. There was some bitter contention with Paul Erdös over priority, but both used a new asymptotic formula that Selberg had found. The formula threw in an extra ${\log x}$ factor on one term, and things “magically” came out nice. How did that factor come to be part of the definition?

Today I want to talk about the role that definitions play in mathematics, and how proper definitions can unravel formulas, if not controversies.

While growing up in Norway, Selberg learned of the legend of Srinivasa Ramanujan in 1934 as a teenager. As he related in his speech for the 1988 Ramanujan Centenary, the Indian genius was described by a Norwegian word meaning “strange” as much as “remarkable,” and looking into Ramanujan’s formulas “made a deep impression.” Ramanujan sometimes said that the Hindu goddess Namagiri gave him the formulas—which implies to Ken and me that they were not merely computed as answers but defined as trials and starting points.

The controversy about who-proved-what-when is not what I wish to talk about today. We have touched on it before here. For two papers that go into some detail about it see Dorian Goldfeld’s paper titled “The Elementary Proof Of The Prime Number Theorem: An Historical Perspective” or Joel Spencer and Ronald Graham’s paper with a simpler title, “The Elementary Proof of the Prime Number Theorem.”

## The Players

Besides Erdös and Selberg the key players are certain arithmetical functions. A function is arithmetical if it maps $\displaystyle 1,2,3, \dots$ to the complex numbers. Note that we do not insist that the function be defined at zero, and we do allow the values of the function to be complex numbers. Many of the fundamental questions of number theory can be stated most directly as questions about the behavior of such functions. So it is not surprising that we need to define several of them.

${\bullet }$ Möbius Function. The Möbius function is denoted by ${\mu}$ and is defined as follows: ${\mu(1)=1}$, and if ${n=p_{1}^{a_{1}} \cdots p_{k}^{a_{k}}}$, then ${\mu(n)=(-1)^{k}}$ provided $\displaystyle a_{1} = a_{2} = \cdots = a_{k} = 1.$ Otherwise, ${\mu(n)=0}$.

${\bullet }$ Mangoldt Function. The Mangoldt function ${\Lambda}$ is defined as follows: If ${n = p^{m}}$ for some prime ${p}$ and some ${m \ge 1}$, then ${\Lambda(n) = \log p }$; otherwise, it is ${0}$.

${\bullet }$ Identity Function. The Identity function is simple: ${I(1) = 1}$, and otherwise, it is ${0}$.

${\bullet }$ Unit Function. The Unit function ${u}$ is just always ${1}$.

## The Operations

The key to understanding these and other functions is that they form a very rich algebraic structure, one that is not obvious at all. The key is they have a product defined on them, called the Dirichlet Product. If ${f}$ and ${g}$ are arithmetic functions, then ${h = f*g }$ is defined by

$\displaystyle h(n) = \sum_{d | n} f(d)g(\frac{n}{d}).$

Recall ${d | n}$ means that ${d}$ is a divisor of ${n}$. This product is easily seen to be both commutative and associative. The latter follows from the observation that ${f*(g*h)}$ is equal to

$\displaystyle \sum_{abc=n} f(a)g(b)h(c).$

Thus arithmetic functions form a commutative monoid under the Dirichlet product, and those with ${f(1) \neq 0}$ form an abelian group. But there is more: they also have a natural unary operation that acts as a derivative.
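Since all of these objects are elementary, the definitions can be checked by machine. The following Python sketch (mine, not from the post) implements ${\mu}$, ${\Lambda}$, ${u}$, ${I}$, the Dirichlet product, and the derivative, and numerically verifies the basic formula ${\mu * u = I}$ and the differentiated identity ${\Lambda' * u + \Lambda * (\Lambda * u) = u''}$ that drives the easy proof below:

```python
# A quick numerical check (not from the post) of the Dirichlet product
# and of two identities derived below: mu * u = I, and the differentiated
# form Lambda' * u + Lambda * (Lambda * u) = u'' used in the easy proof.
from math import log, isclose

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0              # a square divides n
            result = -result
        p += 1
    return -result if n > 1 else result

def mangoldt(n):
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0   # n was a prime power p^m
        p += 1
    return log(n) if n > 1 else 0.0            # n itself is prime (or 1)

def dirichlet(f, g):
    """The Dirichlet product h = f * g."""
    return lambda n: sum(f(d) * g(n // d) for d in divisors(n))

u = lambda n: 1                                # the Unit function
I = lambda n: 1 if n == 1 else 0               # the Identity function
deriv = lambda f: (lambda n: f(n) * log(n))    # f'(n) = f(n) log n

for n in range(1, 60):
    assert dirichlet(mobius, u)(n) == I(n)     # mu * u = I
    lhs = dirichlet(deriv(mangoldt), u)(n) \
        + dirichlet(mangoldt, dirichlet(mangoldt, u))(n)
    assert isclose(lhs, deriv(deriv(u))(n), abs_tol=1e-9)   # equals u''(n)
```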
That such a derivative exists comes perhaps as a surprise, since there are no limits of any kind lurking around. Nor are the functions polynomials, as we pointed out here. Recall that one can define a derivative operation on polynomials. Arithmetic functions are not restricted in any way, except for their domain and range—so how can we define a derivative? Here is how: if ${f}$ is an arithmetic function, then define ${f'}$ as:

$\displaystyle f'(n) = f(n)\log n, \text { for } n \ge 1.$

Note that ${I' = 0}$ and ${u'(n) = \log n}$. This derivative satisfies many of the usual properties that we would want it to, which is why we are allowed under math naming rules—just kidding—to call it a derivative.

Lemma: If ${f}$ and ${g}$ are arithmetical functions then:

1. ${(f+g)' = f' + g'}$.
2. ${(f*g)' = f'*g + f*g'}$.
3. ${(f^{-1})' = -f' * (f * f)^{-1}}$, provided ${f(1) \neq 0}$.

## The Basic Formulas

The above definitions make it possible to write many famous results in a very elegant manner. For example,

$\displaystyle \sum_{d | n} \mu(d)$

is ${1}$ for ${n=1}$ and zero otherwise. We can write this as ${\mu * u = I}$. Thus,

$\displaystyle u = \mu^{-1} \text{ and } \mu = u^{-1}.$

There is another basic identity which is:

$\displaystyle \sum_{d | n} \Lambda(d) = \log n.$

This now becomes

$\displaystyle \Lambda * u = u'.$

It is pretty neat how this succinct notation allows one to express relationships.

## The Identity

The Selberg identity. For ${n \ge 1}$,

$\displaystyle \Lambda(n)\log(n) + \sum_{d | n}\Lambda(d)\Lambda(\frac{n}{d}) = \sum_{d | n}\mu(d)\log^{2}\frac{n}{d}.$

The above is not exactly the famous identity. It can be used to get the real identity which is:

$\displaystyle \theta(x)\log(x) + \sum_{p \le x} \log(p)\theta(\frac{x}{p}) = 2x\log(x) +O(x),$

where

$\displaystyle \theta(x) = \sum_{p \le x} \log(p)$

for primes ${p}$. I am just following Tom Apostol’s famous book on number theory: check it out for the details.

## The Easy Proof

Differentiate both sides of ${\Lambda * u = u'}$ using the product rule from the lemma:

$\displaystyle \Lambda' * u + \Lambda * u' = u'',$

or since ${u' = \Lambda * u}$,

$\displaystyle \Lambda' * u + \Lambda * (\Lambda * u) = u''.$

Now multiply both sides by ${ \mu = u^{-1}}$ to obtain

$\displaystyle \Lambda' + \Lambda * \Lambda = u'' * \mu.$

But this is the famous identity.

## Open Problems

Are we missing some basic definitions in complexity theory that would shed new light on some of our open problems? Note a general mathematical idea that we do not seem to exploit very often: given a collection of objects, can one define a natural algebraic structure on the objects? Often there are hidden such structures, and finding them may unlock the key to great insights.

August 17, 2012 10:16 am
I think it would be useful to have a more precise definition of the complexity of proofs. I’m always struck that algorithm complexity is a carefully defined concept while proof complexity is a vaguer notion. I’m sure theorems relating them could be proved – about, for instance, how hard it is to prove that an algorithm has some given complexity.

2. August 17, 2012 11:40 am
Where do you think is the best hope for looking for definition gaps?

August 17, 2012 12:30 pm
Serge, I am not sure what you mean. There is a precise notion of propositional proof complexity formalized by Cook and Reckhow, and related work on hierarchies of bounded arithmetic that give uniform notions of the complexity of proofs. However, I suspect that you mean something else, namely a sort of “inverse mathematics”: What concepts are necessary to prove a theorem (or to prove it constructively)?
In this respect the best sorts of things I have seen are Papadimitriou’s comparisons of notions of the second-order principles that are used to define complexity classes like PPA, PPAD, PPP, PLS etc.

August 17, 2012 4:10 pm
OK Paul, thanks for the information: I’m going to look this up. 🙂 Indeed, I suspect that some kind of “reverse math” is needed in complexity theory. As I’ve commented elsewhere on this blog, proofs are the only tools we have at our disposal to study algorithms. In view of the Curry–Howard proof-program equivalence, wouldn’t it be fruitful to study the complexity of proofs in relation to the complexity of algorithms? Look at the PvsNP problem: of course it is well-defined and possesses an answer in terms of natural numbers. But we can’t know this answer because we’re using algorithms to study algorithms! We’re using brain processes to study computer processes! This is a vicious circle… so let’s stop thinking absolute and start thinking relative.

August 19, 2012 6:02 am

August 17, 2012 5:41 pm
In fact, what I’m looking for is something like this: 1) If an algorithm solves an NP-complete problem, then the complexity of the proof that it does solve the problem is inversely proportional to the complexity of the proof of its running-time. 2) If its running-time is polynomial, then whenever one of these two proofs has finite complexity the other one has infinite complexity. 3) If both proofs have finite complexity, then that algorithm must have some exponential term in its running-time. I’ll leave it to more competent mathematicians than me to check if all this makes sense. 🙂

August 17, 2012 5:57 pm
And in view of the above-mentioned vicious circle, such a relation could be viewed either as an axiom of math… or as a principle of the physics of processes. I’m still undecided about this point. 🙂

August 17, 2012 3:29 pm
It is arguable that the theorems and toolkits of proof theory determined ex post facto the standard definitions of proof theory. That is, our modern definitions of “proof” are largely tuned so as to accommodate the “algebrized” proof methods of Church, Turing, Gödel, etc. Similarly, a recurring theme here on Gödel’s Lost Letter is the prospects for advancing complexity theory by tuning definitions so as to facilitate proofs. Can “quantum information theory” be satisfactorily defined on a class of Kählerian Segre varieties? Can “P versus NP” be decided more easily on oracle-independent subsets of P and NP? In both cases a good answer is “Show us some theorems, and then we’ll decide whether adjustments in definitions are warranted.” Generally I prefer *not* to attempt to foresee the future, but here my heart and intuition both speak plainly: adjustments in the standard definitions of “quantum information theory” and “complexity theory” are equipping the coming generation of researchers with good prospects of major advances in proof-and-simulation technologies.

Prediction: Simulating quantum trajectories on flat state-spaces and proving complexity theorems relating to oracle-dependent classes will be viewed by future generations of researchers as the “quaint” way that research *formerly* was conducted.

August 19, 2012 5:03 pm
There was occasion to amplify the above ideas more fully on Scott Aaronson’s Shtetl Optimized (here and here) … and this provides an occasion to mention that Shtetl Optimized is enjoying a return to form that is much appreciated by every quantum researcher.

5.
August 18, 2012 11:15 pm
Are you asking whether a complexity *class* would have a hidden algebraic structure? Or something more concrete, like a program or a circuit?

August 19, 2012 2:21 pm
@ Serge: I (as Paul) also cannot see why proof complexity is a more vague concept than computational complexity. Proofs are algorithms as well (described by rules of inference). But you are right: questions like “how hard it is to prove that an algorithm has some given complexity” are apparently not studied in proof complexity, nor in computational complexity either. The focus there is on the complexity of *problems* or *claims* themselves – not on the complexity of *particular* algorithms, or on the complexity of proofs of their complexity.

August 19, 2012 3:20 pm
Thank you Stasys for explaining my point this clearly. What I had in mind is the transfer of complexity between algorithms and their proofs – after all, it’s a well-known fact that more efficient algorithms have more complex proofs. I think it’s interesting to try to quantify this phenomenon.

August 19, 2012 5:34 pm
… and I’m sorry to have called “vague” by ignorance something that was in fact very clear. I thank both Paul and you for making me learn something new. 🙂

7. August 19, 2012 5:57 pm
“Are we missing some basic definitions in complexity theory that would shed new light on some of our open problems?” There is a proposed answer in the papers at: See some excerpts from them:

(i) The profoundest questions in Complexity Theory (P vs NP vs P/poly vs RP) were solved by those plain ingenious new definitions (generalizations) stated in these papers.

(ii) Some experts, as in [15], are asserting: “– The XG-SAT is not in NP (in the author’s terms): The polynomial nk CANNOT depend on the input.” However, this assertion is false, being true only for the old traditional definition of polynomial-time DTM, since in the new definition (Def. 3.7), the polynomial CAN definitely depend on the input – as long as it does not depend on the input’s length. Think: This is just a matter of Math object definition, not of mathematical error or correctness, at all. We are not obligated to follow obsolete definitions only because they are established, unless the Science is finished (or dead). “– The essence of Mathematics is Freedom.” (Georg Cantor)

(iii) Very important: Verify that these new definitions of the classes P and NP are simply good generalizations of the old traditional ones: Any traditional P or NP problem IS too, respectively, in the new class P or NP defined above (even though the converse is in general false, since these new generalized classes are strictly larger than the traditional ones), and any superpolynomial deterministic or nondeterministic problem is NOT in the new class P or NP, respectively, which proves that these generalizations are consistent and smooth.

8. August 20, 2012 8:03 am
hi, is this a typo in the text? or a *very* different usage? “Identity Function: The Identity function is simple: ${I(1) = 1}$, and otherwise, it is ${0}$.” Surely the identity function (the usual one) is simply $f(n)=n$ for all n.

August 21, 2012 7:26 am
As I’ve suggested above: the more efficient the algorithm, the more complex its proof of correctness. But is there a way to prove this? I may be wrong, but I don’t think this phenomenon is reflected in any known theorem. Take integer factoring: why is it necessary to bring to bear more and more concepts – such as elliptic curves – in order to write a more efficient algorithm than Euclid’s?
I think the basic reason for this phenomenon lies in the thermodynamics of processes and – in my opinion – those principles of physics should be rewritten as axioms of computer science. The act of counting was taken into account by the axioms of Dedekind and Peano. The act of reasoning about collections was taken into account by Cantor and the later axioms of set theory. But the act of programming still needs its axioms. At least, a new axiom seems to be required for relating the complexity of an algorithm to that of its proof.

10. August 30, 2012 9:55 pm
Deligne in “La categorie des representations du groupe symetrique St, lorsque t n’est pas un entier naturel, Algebraic groups and homogeneous spaces, Tata Inst. Fund. Res. Stud. Math., Tata Inst. Fund. Res., Mumbai, 2007, pp. 209-273.” constructs a symmetric group $S_{t}$ category for $t$ being complex. Is there a way to change the definitions of NP-complete problems so that the cardinality of the input size of the problem is a complex number, such that when the input size is integral, the NP-complete problem reduces to its traditional version?

August 31, 2012 5:05 am
The asymptotic behavior of algorithms gives rise to a clearcut notion of problem complexity, but unfortunately this notion is generally uncomputable. Manuel Blum has given axioms for the complexity measures of computable functions – cf. Wikipedia – but why not try to enlarge the problem by defining procedure complexity and problem complexity in a non-recursive framework? To me the very notion of problem complexity is fuzzy by nature, so an adequate set of axioms should not attempt to define it too strictly.
2020-11-25 08:39:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 59, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8488151431083679, "perplexity": 619.2835188528409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141181482.18/warc/CC-MAIN-20201125071137-20201125101137-00406.warc.gz"}
https://improve-yourself.info/quiz/are-you-a-good-person/
# Are You A Good Person?

#### You find $100 on the ground. What do you do?

#### Someone accidentally left their phone at your house. What do you do?

#### If your best friend told you a very juicy secret about herself, would you be able to resist telling it to all of your mutual friends?

Are You A Good Person?

You are an angel
You're a good person. Keep it up! The world needs more people like you - ones who set a good example for those who are less moral but who want to change. One important thing: Don't get self-righteous about your behavior. Many people find that a turn-off. If you want to influence others to be good, you'll need to be accepting, not judgmental.

Need to improve...
You're OK, but you could definitely stand to improve. Watch others and learn from them. You'll start to notice which types of behavior are socially condoned and which are not. If you want to be a good person, you can do it if you try. Wanting to is half the battle.

Wow this is bad..
Dude, seriously what is wrong with you? Do you have friends? You are scaring me. Better do the quiz again for all our sake... Here: https://improve-yourself.info/quiz/are-you-a-good-person/
2021-06-13 02:11:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2438545972108841, "perplexity": 6316.8390309546385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487598213.5/warc/CC-MAIN-20210613012009-20210613042009-00160.warc.gz"}
https://ajayshahblog.blogspot.com/2013/07/who-should-start-bank-in-india.html
Thursday, July 04, 2013

Who should start a bank in India?

RBI has 26 applicants. Which should it choose? I have a column in the Economic Times today on this question.
2017-12-16 01:40:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2745788097381592, "perplexity": 1991.5043390357814}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948581033.57/warc/CC-MAIN-20171216010725-20171216032725-00573.warc.gz"}
https://www.bartleby.com/questions-and-answers/a-certain-type-of-plywood-consists-of-five-layers.-the-thicknesses-of-the-layers-are-independent-and/b29f43c8-7f2a-4fe7-bda9-9117cfd93b6b
# Question

A certain type of plywood consists of five layers. The thicknesses of the layers are independent and normally distributed with mean 5 mm and standard deviation 0.2 mm.

a) Find the mean thickness of the plywood.
b) Find the standard deviation of the thickness of the plywood.
c) Find the probability that the plywood is less than 24 mm thick.
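The page gives no worked answer; below is a hedged sketch of the standard computation (mine, not from the source), using the fact that a sum of independent normals is normal with summed means and variances. SciPy's norm.cdf is assumed to be available for the final probability:

```python
# A sketch of the standard computation (not from the source page):
# the total thickness is a sum of 5 independent N(5, 0.2^2) layers.
from math import sqrt
from scipy.stats import norm

n, mu, sigma = 5, 5.0, 0.2
mean_total = n * mu                      # a) 25 mm
sd_total = sqrt(n) * sigma               # b) 0.2 * sqrt(5) ~= 0.447 mm
p_less_24 = norm.cdf(24, loc=mean_total, scale=sd_total)   # c)
print(mean_total, sd_total, p_less_24)   # ~25, ~0.447, ~0.0127
```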
2021-07-28 17:38:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8568639159202576, "perplexity": 237.0950213316283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153739.28/warc/CC-MAIN-20210728154442-20210728184442-00618.warc.gz"}
https://manpages.org/latex2man
Latex2man(1) is a tool to translate UNIX manual pages written with LaTeX into a format understood by the UNIX man(1) command.

SYNOPSIS

latex2man [-ttransfile] [-cCSSfile] [-HMTL] [-h] [-V] [-Cname] [-achar] infile outfile

DESCRIPTION

Latex2man reads the file infile and writes outfile. The input must be a LaTeX document using the latex2man LaTeX package. Latex2man translates that document into the troff(1) format using the -man macro package.

Using the -H option, HTML code can be produced instead of troff(1). With this option you can, optionally, specify a CSSfile as an argument. CSS (Cascading Style Sheets) allows you to control the appearance of the resulting HTML page. See below for the names of CSS classes that are included in the HTML tags as attributes.

Using the -T option, TexInfo code can be produced instead of troff(1). Using the -M option, troff(1) input is produced. Using the -L option, LaTeX output can be produced instead of troff(1).

OPTIONS

-ttransfile
    Translation for user-defined LaTeX macros.
-cCSSfile
    If you use -H you can also specify a file that contains CSS style sheets. The link to the CSS file is inserted into the generated HTML output using the specified CSSfile filename.
-M
    Produce output suitable for the man(1) command (default).
-H
    Instead of producing output suitable for the man(1) command, HTML code is produced (despite the name of the command).
-T
    Instead of producing output suitable for the man(1) command, TexInfo code is produced (despite the name of the command). The generated .texi file may be processed with makeinfo(1) (to produce an .info file), which in turn may be installed using install-info(1). The Info tags @dircategory and @direntry are provided.
-L
    The LaTeX source is written to the outfile. This is useful in conjunction with the -Cname option.
-Cname
    Output the conditional text for name. If more than one name should be given, use quotes: -C'name1 name2 ...'. The following names are defined automatically: -H defines HTML, -T defines TEXI, -M defines MAN, -L defines LATEX.
-achar
    Used only in conjunction with -T. Background: TexInfo ignores all blanks before the first word on a new line. In order to produce some additional space before that word (using \SP), some character has to be printed before the additional space. By default this is a . (dot). The char specifies an alternative for that first character. Giving a blank to -a suppresses the indentation of a line. Note: only for the first \SP of a series is that char printed.
-h
    Show a help text.
-V
    Show version information.

FILES

latex2man.tex      The LaTeX file containing this Man-page.
latex2man.sty      The LaTeX package defining the environments and commands.
latex2man.cfg      The configuration file for the Latex2man LaTeX package.
latex2man.css      File containing example CSS definitions.
latex2man.trans    File containing example translations of user-defined LaTeX macros.
fancyheadings.sty  A LaTeX package used to typeset head- and foot lines.
fancyhdr.sty       A LaTeX package used to typeset head- and foot lines.
rcsinfo.sty        A LaTeX package used to extract and use RCS version control information in LaTeX documents.
latex2man.pdf      The PDF version of this document.

LaTeX COMMANDS

The LaTeX package latex2man is used to write the Man-pages with LaTeX. Since we translate into other text formats, not all LaTeX stuff can be translated.

PACKAGE OPTIONS

The latex2man package accepts the following options:

fancy    use the LaTeX package fancyheadings.
fancyhdr use the LaTeX package fancyhdr.
nofancy  neither the LaTeX package fancyheadings nor fancyhdr is used.

The default option may be specified in the file latex2man.cfg.
PACKAGE SPECIFIC ENVIRONMENTS

The following environments are provided by the package:

\begin{Name}{chapter}{name}{author}{info}{title}
    The Name environment takes five arguments: 1. the Man-page chapter, 2. the name of the Man-page, 3. the author, 4. some short information about the tool printed in the footline of the Man-page, and 5. a text which is used as title, for HTML and LaTeX (it's ignored for output of the Man-page or TeXinfo). The Name environment must be the first environment in the document. Processing starts with this environment. Any text before this is ignored (exception: the setVersion and setDate commands). (Note: all arguments of \begin{Name} must be written on one line.)

\begin{Table}[width]{columns}
    The Table environment takes two arguments: the first optional one specifies a width of the last column, the second one gives the number of columns. For example:

    \begin{Table}[2cm]{3}
    Here & am & I \\ \hline
    A 1 & A 2 & A 3 1 2 3 4 5 A 3 1 2 3 4 5 \\
    B 1 & B 2 & B 3 \\
    \end{Table}

    will be typeset as:

    Here   am    I
    A 1    A 2   A 3 1 2 3 4 5 A 3 1 2 3 4 5
    B 1    B 2   B 3

    If no optional width argument is given, all entries are typeset left justified. The width is a length measured absolutely in cm. When processing with LaTeX, a p{width} column is typeset as the last column. The translation to troff(1) commands results in a lw(width) column specification. Translating to HTML and TexInfo ignores the width parameter. \hline may be used. If the Man-page is formatted with troff(1) and tables are used, the tbl(1) preprocessor should be called, usually by giving a -t to the call of troff(1). When viewing the generated manual page using man(1), tbl(1) is called automatically.

\begin{Description}
    is the same as \begin{description}.

\begin{Description}[label]
    is similar to \begin{description}, but the item labels have at minimum the size of the (optional) word label. The difference is visible only in the DVI and PDF output, not in the troff, TexInfo or HTML output. (The original shows typeset examples of items a, ab, abc under \begin{description}, \begin{Description}, and \begin{Description}[aa]; only the label widths differ.)

ACCEPTED LaTeX ENVIRONMENTS

The following environments are accepted:

* description
* enumerate
* itemize
* verbatim
* center

They may be nested:

* Itemize and nested center: A centered line. Another centered line.
* Another item and nested enumerate: 1. a 2. b

PACKAGE SPECIFIC MACROS

The following commands are provided:

\Opt{option} Option: \Opt{-o} will be typeset as -o.
\Arg{argument} Argument: \Arg{filename} will be typeset as filename.
\OptArg{option}{argument} Option with Argument: \OptArg{-o}{filename} will be typeset as -ofilename.
\OptoArg{option}{argument} Option with optional Argument: \OptoArg{-o}{filename} will be typeset as -o[filename].
\oOpt{option} Optional option, e.g. \oOpt{-o} will be typeset as [-o].
\oArg{argument} Optional argument, e.g. \oArg{filename} will be typeset as [filename].
\oOptArg{option}{argument} Optional option with argument, e.g. \oOptArg{-o}{filename} will be typeset as [-ofilename].
\oOptoArg{option}{argument} Optional option with optional argument, e.g. \oOptoArg{-o}{filename} will be typeset as [-o[filename]].
\File{filename} used to typeset filenames, e.g. \File{filename} will be typeset as filename.
\Prog{prog} used to typeset program names, e.g. \Prog{latex2man} will be typeset as latex2man.
\Cmd{command}{chapter} used to typeset references to other commands, e.g. \Cmd{latex2man}{1} will be typeset as latex2man(1).
\Bar is typeset as |.
\Bs (BackSlash) is typeset as \.
\Tilde is typeset as a ~.
\Dots is typeset as ...
\Bullet is typeset as *.
\setVersion{..} set .. as version information.
\setVersionWord{..} set .. for the word Version: in the footline. The default is \setVersionWord{Version:}.
\Version returns the version information.
\setDate{..} sets .. as date information.
\Date returns the date information.
\Email{..} used to mark an Email address: \Email{[email protected]} is typeset as: [email protected].
\URL{..} used to mark a URL: \URL{http://www.foo.de/\Tilde vollmer} is typeset as http://www.foo.de/~vollmer.
\LatexManEnd translation stops at end-of-file or \LatexManEnd (at the beginning of a line). LaTeX ignores this command.
\Lbr, \Rbr are typeset as [ and ] (these variants are needed only sometimes, as in \item[FooBar\LBr xx \Lbr]). Usually [ ] will work.
\LBr, \RBr are typeset as { and } (these variants are needed when using { or } as arguments to macros).
\Circum is typeset as ^.
\Percent is typeset as %.
\TEXbr If processed with LaTeX, causes a linebreak (i.e. is equivalent to \\). In the output of latex2man this macro is ignored.
\TEXIbr If TexInfo output is generated, causes a linebreak (i.e. is equivalent to \\), otherwise ignored.
\MANbr If Man-Page output is generated, causes a linebreak (i.e. is equivalent to \\), otherwise ignored.
\HTMLbr If HTML output is generated, causes a linebreak (i.e. is equivalent to \\), otherwise ignored.
\medskip An empty line.
\SP Produces some extra space, works also at the beginning of lines. For example, three lines each reading "abc xx" but with different leading space can be produced; the code of the second such line looks like: \SP abc \SP\SP xx\\. Note: Due to some "problems" with TexInfo, the lines starting with \SP have a leading . (dot) in the TexInfo output, see -achar.

ACCEPTED MACROS FROM THE RCSINFO PACKAGE

\rcsInfo $Id ...$ if the LaTeX package rcsinfo is used, this command is used to extract the date of the Man-page.
\rcsInfoLongDate if the LaTeX package rcsinfo is used, this command is used to typeset the date coded in the $Id ..$ string.

ACCEPTED LaTeX MACROS

The following standard LaTeX commands are accepted:

\section{..} The section macro takes one argument: the name of the Man-page section. Each Man-page consists of several sections. Usually there are the following sections in a Man-page: Name (special handling as environment, c.f. above), Synopsis, Description, Options, Files, See Also, Diagnostics, Return Values, Bugs, Author, Version, etc. Synopsis must be the first section after the Name environment. Note: Do not use LaTeX macros in section names.
\subsection{..} and \subsubsection{..} work as well.
\emph{..} \emph{example} is typeset as example.
\textbf{..} \textbf{example} is typeset as example.
\texttt{..} \texttt{example} is typeset as example.
\underline{..} \underline{example} is typeset as an underlined example.
\date{..} uses .. as date.
\verb+..+ but only the + character is allowed as the delimiter.
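Pulling these pieces together, here is a minimal, hedged skeleton of a latex2man source file (illustrative only — the tool name, author, and option set are invented; the environments and macros are the ones documented above):

```latex
% A minimal latex2man skeleton, assembled from the environments and
% macros documented above. All concrete values here are placeholders.
\documentclass{article}
\usepackage{latex2man}

\setVersion{1.0}
\setDate{2022/05/19}

\begin{document}
\begin{Name}{1}{mytool}{Jane Doe}{My Tools}{mytool -- an example man page}
\Prog{mytool} is a one-line description of the tool.
\end{Name}

\section{Synopsis}
\Prog{mytool} \oOpt{-v} \oOptArg{-o}{outfile} \Arg{infile}

\section{Description}
\Prog{mytool} reads \Arg{infile} and writes the result; see also
\Cmd{latex2man}{1}.

\section{Options}
\begin{Description}
\item[\Opt{-v}] Show version information (\Version, \Date).
\item[\OptArg{-o}{outfile}] Write output to \Arg{outfile}.
\end{Description}
\end{document}
```

Per the rules above, the Name environment comes first (with all five arguments on one line), and Synopsis is the first section after it; running latex2man on such a file would produce troff -man output by default, or HTML/TexInfo/LaTeX with -H, -T, or -L.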
2022-05-19 05:26:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9833009243011475, "perplexity": 4004.754712340717}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662525507.54/warc/CC-MAIN-20220519042059-20220519072059-00507.warc.gz"}
http://databasefaq.com/index.php/tag/internet-explorer
## “Object doesn't support this property or method” when setting an object property in IE8

javascript,internet-explorer,cross-browser

I'm trying to make draggable elements without using jQuery. I'd like it to be compatible with IE8. The following breaks at this.handle = { with the error, "Object doesn't support this property or method." Does IE9< have some goofy hang-up when it comes to setting object properties? var Draggable =...

## jquery validation IE Object doesn't support property

jquery,internet-explorer,jquery-validate

## Can I disable IE compatibility mode only for content within a ?

internet-explorer,frameset,ie-compatibility-mode,html-frames

I've developed a Web Application that runs in my company's intranet. I had an issue with Internet Explorer's automatic compatibility mode earlier in my process, and added code to force my pages to be displayed in the newest version of IE: <meta http-equiv="X-UA-Compatible" content="IE=edge" /> This worked perfectly. Until my...

## Character rendering is different (IE/Chrome)

I'm trying to figure out why the arrow symbol renders differently in Chrome and Internet Explorer, but without any success. Surprisingly IE displays it correctly while Chrome has problems with rendering. Ignore the difference in size, it is due to zooming. Chrome IE Regarding CSS, there is only Eric...

## How to consistently crash IE 10&11? Reproducing “Internet Explorer to close and reopen the tab”

javascript,html,css,internet-explorer

This is a stupid question. Is there a way to consistently produce this error in Internet Explorer 10&11 with Javascript, CSS or HTML? A problem with this webpage caused Internet Explorer to close and reopen the tab. This is not for a malicious webpage, I need to test some plugin and...

## XMLHttpRequest: Network Error 0x80070005, Access is denied on Microsoft Edge (but not IE)

javascript,ajax,internet-explorer,cors,microsoft-edge

I have a very simple ajax request (see below). The server is using CORS and works fine in IE 10+, Chrome, Firefox and Opera. On Microsoft Edge however, it fails with XMLHttpRequest: Network Error 0x80070005, Access is denied. I have researched the posts here, here, here and here, but cannot...

## htaccess rewrite condition not working in internet explorer

apache,.htaccess,internet-explorer

We have multiple websites with the same code. The problem is this is working correctly in Firefox, Chrome etc., but it is not working in IE. My suggestion is that Internet Explorer sends a different/incorrect HTTP_HOST. But I can't figure out why. Can anybody help me in the right direction....

## SSL certificate error 403.13 in IIS 7.5

internet-explorer,iis-7,ssl-certificate,sha1,http-status-code-403

I'm getting 403.13 in IIS logs when I'm trying to access my api using the created certificate (sha1). Further, I tested the same certificate in another test environment; there it works and I get the XML from the api without any issue. The certificate pfx is installed in the certificate store and...

## Developers tool(F12) is opening in Internet Explorer when it is launched by watir-webdriver

internet-explorer,selenium,selenium-webdriver,watir-webdriver,developer-tools

I am automating a web application on Internet Explorer using Watir-webdriver and Ruby. When I run my script on my laptop [Win7 (x64) and IE11] it runs without opening Developer Tools in Internet Explorer. But when I test the same script in a virtual machine [Win8 (x64) and IE10], the Internet Explorer browser opens with...
## Why does IE show junk for my dasBlog blog?

internet-explorer,dasblog

If I look at my blog in Chrome or Firefox, it looks as I expect. However, if I try to look at it in IE (11.0.9600.17801) it asks me if I want to download W69NUE8S (or some other random file name), which looks like some binary file. http://dotnetwhatnot.pixata.co.uk/ I tried...

## History token fires twice if the URL has a special character in IE

java,internet-explorer,gwt

I have code that is working perfectly fine in Firefox and Chrome, but not very well in IE. I am using GWT 2.5.1. The issue is that I am sending a string to query via the URL. If that string contains a special character like % or ^ or...

## jQuery & CSS only load after many refreshes

jquery,css,html5,internet-explorer,internet-explorer-11

If you open this page in IE11, the CSS and jQuery don't load: http://javasmart.gooberdev.com/ If you refresh the page once, maybe even twice, it still doesn't load. But if you hit F5 a bunch of times in succession, the CSS and jQuery finally load and the page displays correctly. This...

## How to avoid wrap in IE7 when I have two ul in a navigation?

html,css,css3,internet-explorer,internet-explorer-7

In IE8 or Firefox, my code runs well: the two ul are on the same line. But in IE7 the two ul are on different lines. How can I make them stay on the same line in IE7? <DIV id='navigation'> <ul><li><span> leftspan </span></li></ul> <ul style='float:right;'> <li><span> leftspan </span></li> <li><a href="http://www.google.com" target="_blank">google</a></li> <li><a href="http://www.apple.com" target="_blank">apple</a></li>...

## Flexbox does not seem to work in IE

html,css,internet-explorer,flexbox

For a website, I am using some flexboxes. Those boxes work perfectly in all browsers, except in IE. I'm going to give you a simplified version of what I'm doing below: <div class="row-fluid vertical-align text"> <div class="col-xs-16 col-md-8 leftText"> Some textblock, which has 10 lines </div> <div class="col-xs-16 col-md-8...

## How to implement IF statement in Protractorjs Spec.js file?

javascript,internet-explorer,selenium,selenium-webdriver,protractor

I am trying to run a spec.js file for multiple browsers, i.e., using multiCapabilities in conf.js. But I want one statement of code to be executed only for IE, and I am trying to put that in an IF statement by taking the title of the browser as the condition in IF....

## Remove mysterious space that IE9 creates around

html,css,internet-explorer,internet-explorer-9

I'm trying to learn how to make crossbrowser pages and am stuck at dealing with IE9: it creates some space near the <img>, see example. Look at the rightmost image, it should appear here. If the space doesn't appear, hover the mouse over the image. Can't imagine what's wrong, this image is...

## IE 10 Flexbox height bug?

html,css,internet-explorer,internet-explorer-10,flexbox

I am unable to figure out why IE10 adds extra margins or height to the green element below. http://jsfiddle.net/q4ofmfar/ Expected style as displayed by most browsers: Incorrect style as rendered by IE10.0: http://jsfiddle.net/q4ofmfar/ HTML <div class="row"> <div id="box1"> <div> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Mauris dapibus vehicula...

## Inconsistent display on IE & Firefox

I'm using Bootstrap on my website project. I'm through with the design but the items are displayed out of place in IE & Firefox. The same page renders great in Chrome.
I used developer tools on the three browsers to make sure there was no difference in the CSS properties....

## Styling Text Elements in Internet Explorer 8

css,internet-explorer,internet-explorer-8

.title { font-family: Helvetica Neue, Helvetica, Arial, sans-serif; font-weight: 800; color: #1e1e1e; font-size: 24pt; } https://jsfiddle.net/pa3ztdwt/ The above fiddle works fine in Google Chrome, but does not work in Internet Explorer 8. It seems that the font-weight property is not rendering properly in Internet Explorer 8. Do we have any...

## Javascript - Apply() breaking IE 9

javascript,jquery,internet-explorer,internet-explorer-9

I have this object: var _intervals = { intervals: {}, _add: function (fun, interval) { var newInterval = setInterval.apply( window, [fun, interval].concat([].slice.call(arguments, 2)) ); this.intervals[ newInterval ] = true; return newInterval; }, _delete: function (id) { return clearInterval(this.intervals[id]); }, _deleteAll: function () { var all = Object.keys(this.intervals), len = all.length;...

## Font Face issues in Internet Explorer 8

css,internet-explorer,fonts,internet-explorer-8,font-face

I created a test page for IE 8 to see if I could use Google fonts. You can find the code at the end of the question. I am including every Google font I need by using a element with a list of them. Now, depending on the HREF attribute...

## How to set the IEDriverServer.exe through command line in protractor

internet-explorer,selenium,selenium-webdriver,webdriver,protractor

I would like to set the IEDriver executable path via the command line while using protractor. I am using the following command, but it is not considering the path to the IEDriver executable. cd > protractor --seleniumArgs "['-Dwebdriver.ie.driver=../selenium/IEDriverServer.exe']" conf.js I am getting the error: var template = new Error(this.message); ^ UnknownError: The...

## How to get the currentDate in Javascript for IE?

javascript,internet-explorer,date

I already did some research and found out that you need to format a date string because IE can't handle some formats. But the problem is that I don't even get a current date string to format. Date.now() or Date.time() I also tried this if-statement: if (!Date.now) { Date.now...

## IFRAME won't display 100% height in IE8, but fine in IE11 and Firefox

html,css,internet-explorer,iframe,internet-explorer-8

I have an HTML page with an iframe in a div. The iframe height should be 100% of the available window height. This displays as expected in IE11 and Firefox, but in IE8 the iframe remains at a fixed size, regardless of the window size or the iframe content. When...

## Check if code is compatible with IE 5

javascript,html,internet-explorer

I have to make a website that is compatible with IE 5+. It has to work with IE5+ because the device I'm working with uses Windows CE 5 or 6 and I cannot update IE to a more recent version. I've made JavaScript code with a couple of...

## Link with negative z-index not clickable in IE

html,css,internet-explorer,z-index

Can anyone give me a hint why the link in the gray box is not clickable in Internet Explorer (I'm using version 11)? http://jsfiddle.net/rk7n7xjj/1/ I tested it in all other browsers and it works fine. HTML <div class="gray"><a href="#bla">This link is not clickable in IE</a></div> <div class="yellow">Some placeholder text</div> CSS...

## How do I prevent jquery from reloading an animation in IE mouse hover?
jquery,internet-explorer,internet-explorer-9

I have an animation that plays on mouse over on a containing div. In IE9+, if the user hovers over an element inside that container it replays the animation. My basic setup is: <div class="book"> <img src="bookcover.jpg" /> <div class="overloay"> <p class="title">Book title</p> <a href="#">read book</a> </div> </div> When the...

## Internet Explorer Jumpy Scrolling

javascript,jquery,internet-explorer,scroll

I have this code to keep a heading element at the top of another element that scrolls. It works perfectly in Firefox and Google Chrome; however, in IE it's excruciatingly jumpy. The code itself is very simple and I can't think how to potentially improve it. In Chrome and Firefox...

## HTML/CSS menu not working on IE

html,css,internet-explorer

I'm working on a website and I was happy that all was looking good in Chrome, Firefox and Safari. Then it suddenly came to my mind, I started sweating, the heart started beating fast.. oh God, I have to test it on IE! And, of course, it doesn't work...

## OverCls is not working in IE11

javascript,css,internet-explorer,extjs

I am working with ExtJS 4.2 and need advice to solve this small bug. I have an ExtJS button in my js file as: { xtype : 'button', text : 'Add Drive', padding : '10px 10px 10px 10px', overCls : 'overDrive' } and in my CSS I have...

## Infragistics 12.2 WebDatePicker not displaying the Date in IE11

asp.net,internet-explorer,iis,infragistics

We are planning to upgrade the browser to IE 11, so I was testing whether all the web applications are compatible with IE 11. One ASP.NET 4.0 project uses Infragistics 12.2, and the WebDatePicker doesn't display the Date in IE 11 after hosting to IIS server 7.5. It was working...

## How To Write Acceptance Tests for Internet Explorer with: Selenium, PHPUnit and Mac OS X?

osx,internet-explorer,selenium,phpunit,acceptance-testing

I'm trying to write acceptance tests for a project using multiple browsers. All the tests run fine with: Firefox, Chrome and Safari. However, I don't know how to run them in Internet Explorer. I use: PHPUnit, Selenium and Mac OS X. I also use VirtualBox with Windows 8 and Windows...
2017-04-30 06:58:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17983844876289368, "perplexity": 5697.859831986364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124371.40/warc/CC-MAIN-20170423031204-00432-ip-10-145-167-34.ec2.internal.warc.gz"}
https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Harmonic_mean_p-value
# Harmonic mean p-value

The harmonic mean p-value[1][2][3] (HMP) is a statistical technique for addressing the multiple comparisons problem that controls the strong-sense family-wise error rate.[2] It improves on the power of Bonferroni correction by performing combined tests, i.e. by testing whether groups of p-values are statistically significant, like Fisher's method.[4] However, it avoids the restrictive assumption that the p-values are independent, unlike Fisher's method.[2][3] Consequently, it controls the false positive rate when tests are dependent, at the expense of less power (i.e. a higher false negative rate) when tests are independent.[2] Besides providing an alternative to approaches such as Bonferroni correction that control the stringent family-wise error rate, it also provides an alternative to the widely-used Benjamini-Hochberg procedure (BH) for controlling the less-stringent false discovery rate.[5] This is because the power of the HMP to detect significant groups of hypotheses is greater than the power of BH to detect significant individual hypotheses.[2]

There are two versions of the technique: (i) direct interpretation of the HMP as an approximate p-value and (ii) a procedure for transforming the HMP into an asymptotically exact p-value. The approach provides a multilevel test procedure in which the smallest groups of p-values that are statistically significant may be sought.

## Direct interpretation of the harmonic mean p-value

The weighted harmonic mean of p-values ${\textstyle p_{1},\dots ,p_{L}}$ is defined as

${\displaystyle {\overset {\circ }{p}}={\frac {\sum _{i=1}^{L}w_{i}}{\sum _{i=1}^{L}w_{i}/p_{i}}},}$

where ${\textstyle w_{1},\dots ,w_{L}}$ are weights that must sum to one, i.e. ${\textstyle \sum _{i=1}^{L}w_{i}=1}$. Equal weights may be chosen, in which case ${\textstyle w_{i}=1/L}$.

In general, interpreting the HMP directly as a p-value is anti-conservative, meaning that the false positive rate is higher than expected. However, as the HMP becomes smaller, under certain assumptions, the discrepancy decreases, so that direct interpretation of significance achieves a false positive rate close to that implied for sufficiently small values (e.g. ${\displaystyle {\overset {\circ }{p}}<0.05}$).[2] The HMP is never anti-conservative by more than a factor of ${\textstyle e\,\log L}$ for small ${\textstyle L}$, or ${\textstyle \log L}$ for large ${\textstyle L}$.[3] However, these bounds represent worst case scenarios under arbitrary dependence that are likely to be conservative in practice. Rather than applying these bounds, asymptotically exact p-values can be produced by transforming the HMP.

## Asymptotically exact harmonic mean p-value procedure

The generalized central limit theorem shows that an asymptotically exact p-value, ${\textstyle p_{\overset {\circ }{p}}}$, can be computed from the HMP, ${\displaystyle {\overset {\circ }{p}}}$, using the formula[2]

${\displaystyle p_{\overset {\circ }{p}}=\int _{1/{\overset {\circ }{p}}}^{\infty }f_{\textrm {Landau}}\left(x\,|\,\log L+0.874,{\frac {\pi }{2}}\right)\mathrm {d} x.}$

Subject to the assumptions of the generalized central limit theorem, this transformed p-value becomes exact as the number of tests, ${\textstyle L}$, becomes large.
The computation uses the Landau distribution, whose density function can be written

$$f_{\textrm{Landau}}(x \,|\, \mu, \sigma) = \frac{1}{\pi \sigma} \int_0^{\infty} \textrm{e}^{-t \frac{(x-\mu)}{\sigma} - \frac{2}{\pi} t \log t} \, \sin(2t) \, \textrm{d}t.$$

The test is implemented by the p.hmp command of the harmonicmeanp R package; a tutorial is available online. Equivalently, one can compare the HMP to a table of critical values (Table 1). The table illustrates that the smaller the false positive rate, and the smaller the number of tests, the closer the critical value is to the false positive rate.

Table 1. Critical values for the HMP ${\overset{\circ}{p}}$ for varying numbers of tests $L$ and false positive rates $\alpha$.[2]

| $L$ | $\alpha = 0.05$ | $\alpha = 0.01$ | $\alpha = 0.001$ |
|---|---|---|---|
| 10 | 0.040 | 0.0094 | 0.00099 |
| 100 | 0.036 | 0.0092 | 0.00099 |
| 1,000 | 0.034 | 0.0090 | 0.00099 |
| 10,000 | 0.031 | 0.0088 | 0.00098 |
| 100,000 | 0.029 | 0.0086 | 0.00098 |
| 1,000,000 | 0.027 | 0.0084 | 0.00098 |
| 10,000,000 | 0.026 | 0.0083 | 0.00098 |
| 100,000,000 | 0.024 | 0.0081 | 0.00098 |
| 1,000,000,000 | 0.023 | 0.0080 | 0.00097 |

## Multiple testing via the multilevel test procedure

If the HMP is significant at some level $\alpha$ for a group of $L$ p-values, one may search all subsets of the $L$ p-values for the smallest significant group, while maintaining the strong-sense family-wise error rate.[2] Formally, this constitutes a closed-testing procedure.[6]

When $\alpha$ is small (e.g. $\alpha < 0.05$), the following multilevel test based on direct interpretation of the HMP controls the strong-sense family-wise error rate at level approximately $\alpha$:

1. Define the HMP of any subset $\mathcal{R}$ of the $L$ p-values to be
$${\overset{\circ}{p}}_{\mathcal{R}} = \frac{\sum_{i \in \mathcal{R}} w_i}{\sum_{i \in \mathcal{R}} w_i / p_i}.$$
2. Reject the null hypothesis that none of the p-values in subset $\mathcal{R}$ are significant if ${\overset{\circ}{p}}_{\mathcal{R}} \leq \alpha \, w_{\mathcal{R}}$, where $w_{\mathcal{R}} = \sum_{i \in \mathcal{R}} w_i$. (Recall that, by definition, $\sum_{i=1}^{L} w_i = 1$.)

An asymptotically exact version of the above replaces ${\overset{\circ}{p}}_{\mathcal{R}}$ in step 2 with

$$p_{{\overset{\circ}{p}}_{\mathcal{R}}} = \max\left\{ {\overset{\circ}{p}}_{\mathcal{R}},\; w_{\mathcal{R}} \int_{w_{\mathcal{R}}/{\overset{\circ}{p}}_{\mathcal{R}}}^{\infty} f_{\textrm{Landau}}\left(x \,\middle|\, \log L + 0.874, \frac{\pi}{2}\right) \mathrm{d}x \right\},$$

where $L$ gives the number of p-values, not just those in subset $\mathcal{R}$.[7] Since direct interpretation of the HMP is faster, a two-pass procedure may be used to identify subsets of p-values that are likely to be significant using direct interpretation, subject to confirmation using the asymptotically exact formula.

## Properties of the HMP

The HMP has a range of properties that arise from the generalized central limit theorem.[2] It is:

* Robust to positive dependency between the p-values.
* Insensitive to the exact number of tests, L.
* Robust to the distribution of weights, w.
* Most influenced by the smallest p-values.

When the HMP is not significant, neither is any subset of the constituent tests.
Conversely, when the multilevel test deems a subset of p-values to be significant, the HMP for all the p-values combined is likely to be significant; this is certain when the HMP is interpreted directly.

When the goal is to assess the significance of individual p-values, so that combined tests concerning groups of p-values are of no interest, the HMP is equivalent to the Bonferroni procedure but subject to the more stringent significance threshold $\alpha_L < \alpha$ (Table 1).

The HMP assumes the individual p-values have (not necessarily independent) standard uniform distributions when their null hypotheses are true. Large numbers of underpowered tests can therefore harm the power of the HMP.

While the choice of weights is unimportant for the validity of the HMP under the null hypothesis, the weights influence the power of the procedure. Supplementary Methods §5C of [2] and an online tutorial consider the issue in more detail.

## Bayesian interpretations of the HMP

The HMP was conceived by analogy to Bayesian model averaging and can be interpreted as inversely proportional to a model-averaged Bayes factor when combining p-values from likelihood ratio tests.[1][2]

### The harmonic mean rule-of-thumb

I. J. Good reported an empirical relationship between the Bayes factor and the p-value from a likelihood ratio test.[1] For a null hypothesis $H_0$ nested in a more general alternative hypothesis $H_A$, he observed that often

$${\textrm{BF}}_i \approx \frac{1}{\gamma \, p_i}, \quad 3\tfrac{1}{3} < \gamma < 30,$$

where ${\textrm{BF}}_i$ denotes the Bayes factor in favour of $H_A$ versus $H_0$. Extrapolating, he proposed a rule of thumb in which the HMP is taken to be inversely proportional to the model-averaged Bayes factor for a collection of $L$ tests with common null hypothesis:

$${\overline{\textrm{BF}}} = \sum_{i=1}^{L} w_i \, {\textrm{BF}}_i \approx \sum_{i=1}^{L} \frac{w_i}{\gamma \, p_i} = \frac{1}{\gamma \, {\overset{\circ}{p}}}.$$

For Good, his rule-of-thumb supported an interchangeability between Bayesian and classical approaches to hypothesis testing.[8][9][10][11][12]

### Bayesian calibration of p-values

If the distributions of the p-values under the alternative hypotheses follow Beta distributions with parameters $\left(0 < \xi_i < 1,\, 1\right)$, a form considered by Sellke, Bayarri and Berger,[13] then the inverse proportionality between the model-averaged Bayes factor and the HMP can be formalized as[2][14]

$${\overline{\textrm{BF}}} = \sum_{i=1}^{L} \mu_i \, {\textrm{BF}}_i = \sum_{i=1}^{L} \mu_i \, \xi_i \, p_i^{\xi_i - 1} \approx \bar{\xi} \sum_{i=1}^{L} w_i \, p_i^{-1} = \frac{\bar{\xi}}{\overset{\circ}{p}},$$

where

* $\mu_i$ is the prior probability of alternative hypothesis $i$, such that $\sum_{i=1}^{L} \mu_i = 1$,
* $\xi_i / (1 + \xi_i)$ is the expected value of $p_i$ under alternative hypothesis $i$,
* $w_i = u_i / \bar{\xi}$ is the weight attributed to p-value $i$,
* $u_i = \left(\mu_i \, \xi_i\right)^{1/(1-\xi_i)}$ incorporates the prior model probabilities and powers into the weights, and
* $\bar{\xi} = \sum_{i=1}^{L} u_i$ normalizes the weights.
The approximation works best for well-powered tests ($\xi_i \ll 1$).

### The harmonic mean p-value as a bound on the Bayes factor

For likelihood ratio tests with exactly two degrees of freedom, Wilks' theorem implies that $p_i = 1/R_i$, where $R_i$ is the maximized likelihood ratio in favour of alternative hypothesis $i$, and therefore ${\overset{\circ}{p}} = 1/\bar{R}$, where $\bar{R}$ is the weighted mean maximized likelihood ratio, using weights $w_1, \dots, w_L$. Since $R_i$ is an upper bound on the Bayes factor, ${\textrm{BF}}_i$, then $1/{\overset{\circ}{p}}$ is an upper bound on the model-averaged Bayes factor:

$${\overline{\textrm{BF}}} \leq \frac{1}{\overset{\circ}{p}}.$$

While the equivalence holds only for two degrees of freedom, the relationship between ${\overset{\circ}{p}}$ and $\bar{R}$, and therefore ${\overline{\textrm{BF}}}$, behaves similarly for other degrees of freedom.[2]

Under the assumption that the distributions of the p-values under the alternative hypotheses follow Beta distributions with parameters $\left(1,\, \kappa_i > 1\right)$, and that the weights $w_i = \mu_i$, the HMP provides a tighter upper bound on the model-averaged Bayes factor:

$${\overline{\textrm{BF}}} \leq \frac{1}{e \, {\overset{\circ}{p}}},$$

a result that again reproduces the inverse proportionality of Good's empirical relationship.[15]

## References

1. Good, I J (1958). "Significance tests in parallel and in series". Journal of the American Statistical Association. 53 (284): 799–813. doi:10.1080/01621459.1958.10501480. JSTOR 2281953.
2. Wilson, D J (2019). "The harmonic mean p-value for combining dependent tests". Proceedings of the National Academy of Sciences USA. 116 (4): 1195–1200. doi:10.1073/pnas.1814092116. PMC 6347718. PMID 30610179.
3. Vovk, Vladimir; Wang, Ruodu (April 25, 2019). "Combining p-values via averaging" (PDF). Algorithmic Learning in a Random World.
4. Fisher, R A (1934). Statistical Methods for Research Workers (5th ed.). Edinburgh, UK: Oliver and Boyd.
5. Benjamini, Y; Hochberg, Y (1995). "Controlling the false discovery rate: A practical and powerful approach to multiple testing". Journal of the Royal Statistical Society. Series B (Methodological). 57 (1): 289–300. doi:10.1111/j.2517-6161.1995.tb02031.x. JSTOR 2346101.
6. Marcus, R; Eric, P; Gabriel, K R (1976). "On closed testing procedures with special reference to ordered analysis of variance". Biometrika. 63 (3): 655–660. doi:10.1093/biomet/63.3.655. JSTOR 2335748.
7. Wilson, Daniel J (August 17, 2019). "Updated correction to "The harmonic mean p-value for combining dependent tests"" (PDF).
8. Good, I J (1984). "C192. One tail versus two-tails, and the harmonic-mean rule of thumb". Journal of Statistical Computation and Simulation. 19 (2): 174–176. doi:10.1080/00949658408810727.
9. Good, I J (1984). "C193. Paired versus unpaired comparisons and the harmonic-mean rule of thumb". Journal of Statistical Computation and Simulation. 19 (2): 176–177. doi:10.1080/00949658408810728.
10. Good, I J (1984). "C213. A sharpening of the harmonic-mean rule of thumb for combining tests "in parallel"". Journal of Statistical Computation and Simulation. 20 (2): 173–176. doi:10.1080/00949658408810770.
11. Good, I J (1984). "C214. The harmonic-mean rule of thumb: Some classes of applications". Journal of Statistical Computation and Simulation. 20 (2): 176–179. doi:10.1080/00949658408810771.
12. Good, Irving John (2009). Good Thinking: The Foundations of Probability and Its Applications. Dover Publications. ISBN 9780486474380. OCLC 319491702.
13. Sellke, Thomas; Bayarri, M. J; Berger, James O (2001). "Calibration of p Values for Testing Precise Null Hypotheses". The American Statistician. 55 (1): 62–71. doi:10.1198/000313001300339950. ISSN 0003-1305.
14. Wilson, D J (2019). "Reply to Held: When is a harmonic mean p-value a Bayes factor?" (PDF). Proceedings of the National Academy of Sciences USA. 116 (13): 5857–5858. doi:10.1073/pnas.1902157116. PMC 6442550. PMID 30890643.
15. Held, L (2019). "On the Bayesian interpretation of the harmonic mean p-value". Proceedings of the National Academy of Sciences USA. 116 (13): 5855–5856. doi:10.1073/pnas.1900671116. PMID 30890644.
2021-12-05 15:05:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 77, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9001015424728394, "perplexity": 1472.2534209139733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363189.92/warc/CC-MAIN-20211205130619-20211205160619-00037.warc.gz"}
https://tex.stackexchange.com/questions/109196/intersecting-planes-not-shown-correctly-with-tikz
# Intersecting planes not shown correctly with TikZ

I made a plot in MATLAB of a deformed and an undeformed model. I converted the figure to TikZ using matlab2tikz and added it to my .tex file. I then noticed that the two configurations, which are intersecting, are not displayed correctly. To illustrate what I mean, I've stripped down the TikZ code to make two rectangles cross in a 3D space:

```
\begin{tikzpicture}
\begin{axis}[%
width=5cm,height=5cm,
view={-37.5}{45},
scale only axis,
xmin=-3, xmax=23,
ymin=0, ymax=20,
zmin=-5, zmax=5,
hide axis]
\addplot3 [fill=white!80!red,opacity=0.5,draw=black] table[row sep=crcr]{
20 0 0\\
20 20 0\\
0 20 0\\
0 0 0\\
};
\addplot3 [fill=white!80!blue,opacity=0.8,draw=black] table[row sep=crcr]{
15 5 -4\\
15 15 0\\
5 15 4\\
5 5 0\\
};
\end{axis}
\end{tikzpicture}
```

The result looks like this:

As can be seen, one plane lies entirely on top of the other, while in fact they intersect (i.e. about half of the blue plane lies 'underneath' the red plane). With my minimum TikZ knowledge I was hoping someone here could help me fix this problem, such that the planes indeed intersect.

* As a workaround, maybe see the "3) Fully Matlab" section to export the figure in reasonably good quality; you may need to add appropriate packages in mlf2pdf.m. – texenthusiast Apr 17 '13 at 17:12

## 2 Answers

Unfortunately, this doesn't seem possible in the current version of pgfplots. From the manual (Section 4.5.1):

pgfplots supports z buffering techniques up to a certain extent. It works pretty well for single scatter plots (z buffer=sort), mesh or surface plots (z buffer=auto) or parametric mesh and surface plots (z buffer=sort). However, it can't combine different \addplot commands, those will be drawn in the order of appearance. You may encounter the limitations sometimes. Maybe it will be improved in future versions.

* Maybe Asymptote, for it has better 3D support. – texenthusiast Apr 18 '13 at 5:04
* Thanks for the explanation. Seems like TikZ isn't the way to go for these kind of figures, too bad! – George Urvey Apr 21 '13 at 9:42

There are several other questions on this issue. As pointed out by Matthew Leingang, this is due to a limitation of pgfplots. Unlike all workarounds so far, I have found one (a hack, really) that allows drawing more than one (in fact an arbitrary number of) potentially intersecting surfaces in a single addplot3 command, with automatic z buffering, without doing anything manually.

We use addplot3 table instead of addplot3 coordinates, and we generate data externally. A single surface needs three matrices with x, y, z coordinates. For two surfaces, we can stack together [x; x], [y; y] and [z1; z2]. To make the two surfaces disconnected, we can insert a vector, say n, of NaNs of appropriate size between the stacked matrices, e.g. [x; n; x], [y; n; y] and [z1; n; z2], together with the option unbounded coords=jump. Finally, we save the three matrices as three stacked columns representing (x,y,z) triplets as pgfplots expects. This also requires specifying the number of columns of the matrices with mesh/cols.

To implement this idea, I define some macros that allow calling arbitrary python code via addplot shell, saving the data to a text file in tabular form, and then loading it for display. This requires the -shell-escape flag in pdflatex. Unfortunately, because this is a single plot, I cannot see how to specify different properties (e.g. color or opacity) for each individual surface.
Well, maybe by adding a fourth column in the data combined with the point meta option as in scatter plots, but I haven't tried that. Also, by trying more complex examples, one realizes that, although patch visibility is computed correctly, we don't really get patch intersection. So, to get the feeling of a smooth curve at the surface intersection, one needs to increase the resolution.

I do not intend to use this; I am just sharing because I found it interesting. Below, I am giving an example of two intersecting planes, but really one could compute anything with the same 'method'. It is in beamer, because this is what I was trying already.

```
\documentclass{beamer}
\usefonttheme[onlymath]{serif}
\setbeamersize{text margin left=10pt}
\setbeamersize{text margin right=10pt}
\usepackage{pgfplots}
\pgfplotsset{
  every axis/.append style={font=\scriptsize},
  plain/.style={every axis plot/.append style={mark=none},enlargelimits=false,grid=none},
  z-sort/.style={z buffer=sort,unbounded coords=jump},
}
\newcommand{\python}[1]{python -c "%
import math, sys; import numpy as np;%
#1
np.savetxt(sys.stdout, data)%
"}
\newcommand<>{\pyplot}[3][]%
{\only#4{\addplot[#1] shell[prefix=fig/data/,id=#2,] {\python{#3}};}}
\newcommand<>{\pyplott}[3][]%
{\only#4{\addplot3[z-sort,#1] shell[prefix=fig/data/,id=#2,] {\python{#3}};}}
\newcommand<>{\pyload}[3][]%
{\only#4{\addplot[#1] table[x index=0,y index=#2] {fig/data/#3.out};}}
\newcommand<>{\pyloadt}[2][]%
{\only#3{\addplot3[z-sort,#1] table {fig/data/#2.out};}}
\newcommand{\pysave}[2]{
  \begin{tikzpicture}[overlay,opacity=0]
    \begin{axis}
      \pyplot{#1}{#2}
    \end{axis}
  \end{tikzpicture}
}
\begin{document}
\begin{frame}
\pysave{surf}{
n = 11;
x = np.linspace(0,1,n); y = x;
X, Y = np.meshgrid(x,y);
Z1 = X + Y;
Z2 = 1 - X + Y;
N = np.ones([1, n]) * np.NaN;
X = np.r_[X, N, X ].reshape([-1, 1]);
Y = np.r_[Y, N, Y ].reshape([-1, 1]);
Z = np.r_[Z1, N, Z2].reshape([-1, 1]);
data = np.c_[X, Y, Z];
}
\begin{center}
\begin{tikzpicture}
\begin{axis}[plain,width=\textwidth,height=.8\textwidth]
\pyloadt[surf,mesh/cols=11]{surf};
\end{axis}
\end{tikzpicture}
\end{center}
\end{frame}
\end{document}
```

The result looks like this:

EDIT It is possible, eventually, to color each surface differently. I couldn't make point meta=explicit symbolic or point meta=explicit work, but what did work is point meta=\thisrowno{3}.
Here is the code:

```
\documentclass{beamer}
\usefonttheme[onlymath]{serif}
\setbeamersize{text margin left=10pt}
\setbeamersize{text margin right=10pt}
\usepackage{pgfplots}
\pgfplotsset{
  every axis/.append style={font=\scriptsize},
  plain/.style={every axis plot/.append style={mark=none},enlargelimits=false,grid=none},
  z-sort/.style={z buffer=sort,unbounded coords=jump},
}
\newcommand{\python}[1]{python -c "%
import math, sys; import numpy as np;%
#1
np.savetxt(sys.stdout, data)%
"}
\newcommand<>{\pyplot}[3][]%
{\only#4{\addplot[#1] shell[prefix=fig/data/,id=#2,] {\python{#3}};}}
\newcommand<>{\pyplott}[3][]%
{\only#4{\addplot3[z-sort,#1] shell[prefix=fig/data/,id=#2,] {\python{#3}};}}
\newcommand<>{\pyload}[3][]%
{\only#4{\addplot[#1] table[x index=0,y index=#2] {fig/data/#3.out};}}
\newcommand<>{\pyloadt}[2][]%
{\only#3{\addplot3[z-sort,#1] table {fig/data/#2.out};}}
\newcommand{\pysave}[2]{
  \begin{tikzpicture}[overlay,opacity=0]
    \begin{axis}
      \pyplot{#1}{#2}
    \end{axis}
  \end{tikzpicture}
}
\begin{document}
\begin{frame}
\pysave{surf}{
n = 31;
x = np.linspace(0,1,n); y = x;
X, Y = np.meshgrid(x,y);
Z1 = X + Y;
Z2 = 1 - X + Y;
Z3 = 1 - X + 1 - Y;
M1 = np.ones([n, n]);
M2 = 2 * M1;
M3 = 3 * M1;
N = np.ones([1, n]) * np.NaN;
X = np.r_[X, N, X, N, X ].reshape([-1, 1]);
Y = np.r_[Y, N, Y, N, Y ].reshape([-1, 1]);
Z = np.r_[Z1, N, Z2, N, Z3].reshape([-1, 1]);
M = np.r_[M1, N, M2, N, M3].reshape([-1, 1]);
data = np.c_[X, Y, Z, M];
}
\begin{center}
\begin{tikzpicture}
\begin{axis}[
  plain,width=\textwidth,height=.8\textwidth,
  colormap={summap}{color=(green);color=(red);color=(yellow);},
]
\pyloadt[surf,opacity=.7,mesh/cols=31,point meta=\thisrowno{3}]{surf};
\end{axis}
\end{tikzpicture}
\end{center}
\end{frame}
\end{document}
```

In this example I am showing three planes colored in green, red, yellow. A little transparency helps seeing what is going on. It would be very complex in this case to compute intersections manually with min and max as in previous workarounds. However, the missing patch intersections are now evident between the green and yellow planes, so I increased the resolution to 31x31. Further increasing to 41x41 gives "TeX capacity exceeded", which is very sad. Anyhow, here is the result:

* Try using lualatex to avoid the capacity limits. – alfC Oct 2 '17 at 2:21
2020-06-06 05:11:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8332391977310181, "perplexity": 4196.726065279349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348509972.80/warc/CC-MAIN-20200606031557-20200606061557-00026.warc.gz"}
http://dictionnaire.sensagent.leparisien.fr/Trigonometry/en-en/
# Definition - Trigonometry

trigonometry (n.) 1. the mathematics of triangles and trigonometric functions

Merriam Webster: Trigonometry, n.; pl. -tries. [Gr. trigōnon a triangle + -metry: cf. F. trigonométrie. See Trigon.]
1. That branch of mathematics which treats of the relations of the sides and angles of triangles, with the methods of deducing from certain given parts other required parts, and also of the general relations which exist between the trigonometrical functions of arcs or angles.
2. A treatise in this science.
Analytical trigonometry, that branch of trigonometry which treats of the relations and properties of the trigonometrical functions. -- Plane trigonometry, and Spherical trigonometry, those branches of trigonometry in which its principles are applied to plane triangles and spherical triangles respectively.

# Synonyms - Trigonometry

trigonometry (n.): trig

# Trigonometry

The Canadarm2 robotic manipulator on the International Space Station is operated by controlling the angles of its joints. Calculating the final position of the astronaut at the end of the arm requires repeated use of the trigonometric functions of those angles.

All of the trigonometric functions of an angle θ can be constructed geometrically in terms of a unit circle centered at O.

Trigonometry (from Greek trigōnon "triangle" + metron "measure")[1] is a branch of mathematics that studies triangles, particularly right triangles. Trigonometry deals with relationships between the sides and the angles of triangles and with the trigonometric functions, which describe those relationships, as well as describing angles in general and the motion of waves such as sound and light waves. Trigonometry is usually taught in secondary schools either as a separate course or as part of a precalculus course. It has applications in both pure mathematics and in applied mathematics, where it is essential in many branches of science and technology. A branch of trigonometry, called spherical trigonometry, studies triangles on spheres, and is important in astronomy and navigation.

## History

Pre-Hellenic societies such as the ancient Egyptians and Babylonians lacked the concept of an angle measure, but they studied the ratios of the sides of similar triangles and discovered some properties of these ratios. Ancient Greek mathematicians such as Euclid and Archimedes studied the properties of the chord of an angle and proved theorems that are equivalent to modern trigonometric formulae, although they presented them geometrically rather than algebraically. The sine function in its modern form was first defined in the Surya Siddhanta and its properties were further documented by the 5th century Indian mathematician and astronomer Aryabhata.[2] These Indian works were translated and expanded by medieval Islamic scholars. By the 10th century, Islamic mathematicians were using all six trigonometric functions, had tabulated their values, and were applying them to problems in spherical geometry. At about the same time, Chinese mathematicians developed trigonometry independently, although it was not a major field of study for them.
Knowledge of trigonometric functions and methods reached Europe via Latin translations of the works of Persian and Arabic astronomers such as Al Battani and Nasir al-Din al-Tusi.[3] One of the earliest works on trigonometry by a European mathematician is De Triangulis by the 15th century German mathematician Regiomontanus. Trigonometry was still so little known in 16th century Europe that Nicolaus Copernicus devoted two chapters of De revolutionibus orbium coelestium to explaining its basic concepts.

## Overview

In this right triangle: sin A = a/c; cos A = b/c; tan A = a/b.

If one angle of a triangle is 90 degrees and one of the other angles is known, the third is thereby fixed, because the three angles of any triangle add up to 180 degrees. The two acute angles therefore add up to 90 degrees: they are complementary angles. The shape of a right triangle is completely determined, up to similarity, by the angles. This means that once one of the other angles is known, the ratios of the various sides are always the same regardless of the overall size of the triangle. These ratios are given by the following trigonometric functions of the known angle A, where a, b and c refer to the lengths of the sides in the accompanying figure:

* The sine function (sin), defined as the ratio of the side opposite the angle to the hypotenuse: $\sin A=\frac{\textrm{opposite}}{\textrm{hypotenuse}}=\frac{a}{c}.$
* The cosine function (cos), defined as the ratio of the adjacent leg to the hypotenuse: $\cos A=\frac{\textrm{adjacent}}{\textrm{hypotenuse}}=\frac{b}{c}.$
* The tangent function (tan), defined as the ratio of the opposite leg to the adjacent leg: $\tan A=\frac{\textrm{opposite}}{\textrm{adjacent}}=\frac{a}{b}=\frac{\sin A}{\cos A}.$

The hypotenuse is the side opposite to the 90 degree angle in a right triangle; it is the longest side of the triangle, and one of the two sides adjacent to angle A. The adjacent leg is the other side that is adjacent to angle A. The opposite side is the side that is opposite to angle A. The terms perpendicular and base are sometimes used for the opposite and adjacent sides respectively. Many people find it easy to remember what sides of the right triangle are equal to sine, cosine, or tangent, by memorizing the word SOH-CAH-TOA (see below under Mnemonics).

The reciprocals of these functions are named the cosecant (csc or cosec), secant (sec) and cotangent (cot), respectively. The inverse functions are called the arcsine, arccosine, and arctangent, respectively. There are arithmetic relations between these functions, which are known as trigonometric identities.

With these functions one can answer virtually all questions about arbitrary triangles by using the law of sines and the law of cosines. These laws can be used to compute the remaining angles and sides of any triangle as soon as two sides and an angle or two angles and a side or three sides are known. These laws are useful in all branches of geometry, since every polygon may be described as a finite combination of triangles. A quick numeric check of these ratio definitions appears below.

### Extending the definitions

Graphs of the functions sin(x) and cos(x), where the angle x is measured in radians.

The above definitions apply to angles between 0 and 90 degrees (0 and π/2 radians) only. Using the unit circle, one can extend them to all positive and negative arguments (see trigonometric function). The trigonometric functions are periodic, with a period of 360 degrees or 2π radians. That means their values repeat at those intervals.
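As promised above, here is a quick numeric check of the ratio definitions (and of the periodicity just mentioned), written in Python; the 3-4-5 right triangle is my own example choice.

```python
# Verify sin, cos and tan against the side ratios of a 3-4-5 right triangle.
import math

a, b = 3.0, 4.0            # leg opposite angle A, leg adjacent to angle A
c = math.hypot(a, b)       # hypotenuse: 5.0
A = math.atan2(a, b)       # angle A in radians

print(math.sin(A), a / c)  # both 0.6  (opposite / hypotenuse)
print(math.cos(A), b / c)  # both 0.8  (adjacent / hypotenuse)
print(math.tan(A), a / b)  # both 0.75 (opposite / adjacent)

# Periodicity: values repeat every 2*pi radians (360 degrees).
print(math.sin(A), math.sin(A + 2 * math.pi))
```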
The trigonometric functions can be defined in other ways besides the geometrical definitions above, using tools from calculus and infinite series. With these definitions the trigonometric functions can be defined for complex numbers. The complex function cis is particularly useful: $\operatorname{cis} x = \cos x + i \sin x = e^{ix}.$ See Euler's and De Moivre's formulas.

### Mnemonics

A common use of mnemonics is to remember facts and relationships in trigonometry. For example, the sine, cosine, and tangent ratios in a right triangle can be remembered by representing them as strings of letters, as in SOH-CAH-TOA.

Sine = Opposite ÷ Hypotenuse
Cosine = Adjacent ÷ Hypotenuse
Tangent = Opposite ÷ Adjacent

The memorization of this mnemonic can be aided by expanding it into a phrase, such as "Some Officers Have Curly Auburn Hair Till Old Age".[4] Any memorable phrase constructed of words beginning with the letters S-O-H-C-A-H-T-O-A will serve.

### Calculating trigonometric functions

Trigonometric functions were among the earliest uses for mathematical tables. Such tables were incorporated into mathematics textbooks and students were taught to look up values and how to interpolate between the values listed to get higher accuracy. Slide rules had special scales for trigonometric functions. Today scientific calculators have buttons for calculating the main trigonometric functions (sin, cos, tan and sometimes cis) and their inverses. Most allow a choice of angle measurement methods: degrees, radians and, sometimes, grad. Most computer programming languages provide function libraries that include the trigonometric functions. The floating point unit hardware incorporated into the microprocessor chips used in most personal computers has built-in instructions for calculating trigonometric functions.

## Applications of trigonometry

There are an enormous number of uses of trigonometry and trigonometric functions. For instance, the technique of triangulation is used in astronomy to measure the distance to nearby stars, in geography to measure distances between landmarks, and in satellite navigation systems. The sine and cosine functions are fundamental to the theory of periodic functions such as those that describe sound and light waves.

Fields which make use of trigonometry or trigonometric functions include astronomy (especially, for locating the apparent positions of celestial objects, in which spherical trigonometry is essential) and hence navigation (on the oceans, in aircraft, and in space), music theory, acoustics, optics, analysis of financial markets, electronics, probability theory, statistics, biology, medical imaging (CAT scans and ultrasound), pharmacy, chemistry, number theory (and hence cryptology), seismology, meteorology, oceanography, many physical sciences, land surveying and geodesy, architecture, phonetics, economics, electrical engineering, mechanical engineering, civil engineering, computer graphics, cartography, crystallography and game development.

Marine sextants like this are used to measure the angle of the sun or stars with respect to the horizon. Using trigonometry and a marine chronometer, the position of the ship can then be determined from several such measurements.

Triangle with sides a, b, c respectively opposite angles A, B, C, as described in the identities below.

## Common formulas

Certain equations involving trigonometric functions are true for all angles and are known as trigonometric identities.
There are some identities which equate an expression to a different expression involving the same angles, and these are listed in List of trigonometric identities; there are also the triangle identities, which relate the sides and angles of a given triangle and are listed below. In the following identities, A, B and C are the angles of a triangle and a, b and c are the lengths of sides of the triangle opposite the respective angles.

### Law of sines

The law of sines (also known as the "sine rule") for an arbitrary triangle states:

$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = 2R,$

where R is the radius of the circumcircle of the triangle:

$R = \frac{abc}{\sqrt{(a+b+c)(a-b+c)(a+b-c)(b+c-a)}}.$

Another law involving sines can be used to calculate the area of a triangle. If you know two sides and the angle between the sides, the area of the triangle becomes:

$\mbox{Area} = \frac{1}{2}ab\sin C.$

### Law of cosines

The law of cosines (known as the cosine formula, or the "cos rule") is an extension of the Pythagorean theorem to arbitrary triangles:

$c^2=a^2+b^2-2ab\cos C,$

or equivalently:

$\cos C=\frac{a^2+b^2-c^2}{2ab}.$

### Law of tangents

The law of tangents:

$\frac{a-b}{a+b}=\frac{\tan\left[\tfrac{1}{2}(A-B)\right]}{\tan\left[\tfrac{1}{2}(A+B)\right]}$

## References

### Notes

1. ^ "trigonometry". Online Etymology Dictionary.
2. ^ Boyer, p. 215
3. ^ Boyer, p. 237, p. 274
4. ^

### Bibliography

* Boyer, Carl B. (1991). A History of Mathematics (2nd ed.). John Wiley & Sons, Inc. ISBN 0471543977.
* Christopher M. Linton (2004). From Eudoxus to Einstein: A History of Mathematical Astronomy. Cambridge University Press.
* Weisstein, Eric W. "Trigonometric Addition Formulas". Wolfram MathWorld.
2021-09-27 19:44:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7398821115493774, "perplexity": 3363.416608077883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058467.95/warc/CC-MAIN-20210927181724-20210927211724-00631.warc.gz"}
https://www.techwhiff.com/learn/what-is-most-likely-to-happen-to-the-price-level/246815
# What is most likely to happen to the price level and real GDP if the Fed...

###### Question:

What is most likely to happen to the price level and real GDP if the Fed targets a lower Federal Funds Rate?

Select one:
a. Price level and real GDP will both increase
b. Price level and real GDP will both decrease
c. Price level will increase, but real GDP will decrease
d. Price level will decrease, but real GDP will increase
e. Real GDP will increase, but the price level would remain the same

#### Similar Solved Questions

##### Breakeven cash inflows and risk
Breakeven cash inflows and risk - Boardman Gases and Chemicals is a supplier of highly purified gases to semiconductor manufacturers. A large chip producer has asked Boardman to build a new gas production facility close to an existing semiconductor plant. Once the new gas plant is in place, Boardman...

##### Financial statement analysis
Final Term Financial Accounting.pdf...

##### homework: no handwrite please, all question needs to be answer. reference and citations nursing peer reviews
What is a viral, bacterial and fungal infection? Explain the signs and symptoms and medications for each. Explain what lab result on the CBC would be abnormal for each infection....

##### Use trigonometric identities to solve the equation 2sin(2θ)-2cos(θ)=0 exactly for 0≤θ≤2π.
A.) What is 2sin(2θ) in terms of sin(θ) and cos(θ)? B.) After making the substitution from part 1, what is the common factor for the left side of the expression 2sin(...

##### Which examples involve only implicit opportunity costs (not explicit costs)?
Check all that apply: A firm using cash to buy Treasury bills. A real estate company using a rental apartment as its own office. A company using a spare machine for a new project. A firm withholding licensing rights from other...

##### 4. A bag contains 1 red, 3 green, and 5 yellow balls.
A sample of four balls is picked. Let G be the number of green balls in the sample. Let Y be the number of yellow balls in the sample. Find the conditional probability mass function of G given Y = 2 assuming the sample is picked with replacement....

##### A jogging track has a length of 1408 yards (yd). How long is this in miles (mi)?
First fill in the blank on the left with one of the ratios. Then write the answer. Ratios: 5280 ft / 1 mi, 1 mi / 5280 ft, 1760 yd / 1 mi, 1 mi / 1760 yd, 12 in / 1 ft. 1408 yd = ? mi...

##### A photon having 34 keV scatters from a free electron at rest. What is the maximum...
A photon having 34 keV scatters from a free electron at rest. What is the maximum energy that the electron can obtain? (Answer in keV)...

##### 2. The IBVP for the Wave Equation. Solve the following initial-boundary value problem.
(PDE) utt − 16uxx, for 0 < x < 2, t > 0; (IC) u(x, 0) = sin(nx/2) 4 sin(3m/2) 160<x<2 (ac) (x.0))_sin5x/2) ut(x, 0) - 200<x <2 4...

##### Consider a freeway segment with capacity of 8000 veh/hr, free-flow speed of 70 mph, and jam...
Consider a freeway segment with capacity of 8000 veh/hr, free-flow speed of 70 mph, and jam density of 200 veh/mi. The demand of this segment is constant at 7800 veh/hr. During the first hour of the morning peak, merging traffic entering from the on-ramp has priority thus it restricts the numbe...

##### Homework Calculator eBook: Cost of Production Report
The debits to Work in Process - Roasting Department for Morning Brew Coffee Company for August, together with information concerning production, are as follows: Work in process, August 1, 900 pounds, 30% completed, $4,950. Direct materials (900 X $4.9...
2023-01-27 21:48:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2882034480571747, "perplexity": 2511.96184770062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495012.84/warc/CC-MAIN-20230127195946-20230127225946-00871.warc.gz"}
http://www.numerical-tours.com/matlab/sparsity_6_l1_recovery/
# Performance of Sparse Recovery Using L1 Minimization

This tour explores theoretical guarantees for the performance of recovery using $$\ell^1$$ minimization.

## Installing toolboxes and setting up the path.

You need to download the following files: signal toolbox and general toolbox. You need to unzip these toolboxes in your working directory, so that you have toolbox_signal and toolbox_general in your directory.

For Scilab users: you must replace the Matlab comment '%' by its Scilab counterpart '//'.

Recommendation: You should create a text file named for instance numericaltour.sce (in Scilab) or numericaltour.m (in Matlab) to write all the Scilab/Matlab commands you want to execute. Then, simply run exec('numericaltour.sce'); (in Scilab) or numericaltour; (in Matlab) to run the commands.

Execute this line only if you are using Matlab.

getd = @(p)path(p,path); % scilab users must *not* execute this

Then you can add the toolboxes to the path.

getd('toolbox_signal/');
getd('toolbox_general/');

## Sparse $$\ell^1$$ Recovery

We consider the inverse problem of estimating an unknown signal $$x_0 \in \RR^N$$ from noisy measurements $$y=\Phi x_0 + w \in \RR^P$$ where $$\Phi \in \RR^{P \times N}$$ is a measurement matrix with $$P \leq N$$, and $$w$$ is some noise.

This tour is focused on recovery using $$\ell^1$$ minimization $x^\star \in \uargmin{x \in \RR^N} \frac{1}{2}\norm{y-\Phi x}^2 + \la \normu{x}.$

Where there is no noise, we consider the problem $$\Pp(y)$$ $x^\star \in \uargmin{\Phi x = y} \normu{x}.$

We are not concerned here about the actual way to solve this convex problem (see the other numerical tours on sparse regularization) but rather with the theoretical analysis of whether $$x^\star$$ is close to $$x_0$$.

More precisely, we consider the following three key properties:

* Noiseless identifiability: $$x_0$$ is the unique solution of $$\Pp(y)$$ for $$y=\Phi x_0$$.
* Robustness to small noise: one has $$\norm{x^\star - x_0} = O(\norm{w})$$ for $$y=\Phi x_0+w$$ if $$\norm{w}$$ is smaller than an arbitrarily small constant that depends on $$x_0$$, provided $$\la$$ is well chosen according to $$\norm{w}$$.
* Robustness to bounded noise: same as above, but $$\norm{w}$$ can be arbitrary.

Note that noise robustness implies identifiability, but the converse is not true in general.

## Coherence Criteria

The simplest criteria for identifiability are based on the coherence of the matrix $$\Phi$$ and depend only on the sparsity $$\norm{x_0}_0$$ of the original signal. This criterion is thus not very precise and gives very pessimistic bounds.

The coherence of the matrix $$\Phi = ( \phi_i )_{i=1}^N \in \RR^{P \times N}$$ with unit norm columns $$\norm{\phi_i}=1$$ is $\mu(\Phi) = \umax{i \neq j} \abs{\dotp{\phi_i}{\phi_j}}.$

Compute the correlation matrix (remove the diagonal of 1's).

remove_diag = @(C)C-diag(diag(C));
Correlation = @(Phi)remove_diag(abs(Phi'*Phi));

Compute the coherence $$\mu(\Phi)$$.

maxall = @(C)max(C(:));
mu = @(Phi)maxall(Correlation(Phi));

The condition $\normz{x_0} < \frac{1}{2}\pa{1 + \frac{1}{\mu(\Phi)}}$ implies that $$x_0$$ is identifiable, and also implies robustness to small and bounded noise. Equivalently, this condition can be written as $$\text{Coh}(\normz{x_0})<1$$ where $\text{Coh}(k) = \frac{k \mu(\Phi)}{ 1 - (k-1)\mu(\Phi) }$

Coh = @(Phi,k)(k * mu(Phi)) / ( 1 - (k-1) * mu(Phi) );

Generate a matrix with random unit columns in $$\RR^P$$.
normalize = @(Phi) Phi ./ repmat(sqrt(sum(Phi.^2)), [size(Phi,1) 1]);
PhiRand = @(P,N)normalize(randn(P,N));
Phi = PhiRand(250,1000);

Compute the coherence and the maximum possible sparsity to ensure recovery using the coherence bound.

c = mu(Phi);
fprintf('Coherence: %.2f\n', c);
fprintf('Sparsity max: %d\n', floor(1/2*(1+1/c)) );

Coherence: 0.30
Sparsity max: 2

Exercice 1: (check the solution) Display how the average coherence of a random matrix decays with the redundancy $$\eta = P/N$$ of the matrix $$\Phi$$. Can you derive an empirical law between $$P$$ and the maximal sparsity?

exo1;

## Support and Sign-based Criteria

In the following we will consider the support $\text{supp}(x_0) = \enscond{i}{x_0(i) \neq 0}$ of the vector $$x_0$$. The co-support is its complementary $$I^c$$.

supp = @(s)find(abs(s)>1e-5);
cosupp = @(s)find(abs(s)<1e-5);

Given some support $$I \subset \{0,\ldots,N-1\}$$, we will denote as $$\Phi = (\phi_m)_{m \in I} \in \RR^{N \times \abs{I}}$$ the sub-matrix extracted from $$\Phi$$ using the columns indexed by $$I$$.

J.J. Fuchs introduced a criterion $$F$$ for identifiability that depends on the sign of $$x_0$$:

J.J. Fuchs. Recovery of exact sparse representations in the presence of bounded noise. IEEE Trans. Inform. Theory, 51(10), p. 3601-3608, 2005

Under the condition that $$\Phi_I$$ has full rank, the $$F$$ measure of a sign vector $$s \in \{+1,0,-1\}^N$$ with $$\text{supp}(s)=I$$ reads $\text{F}(s) = \norm{ \Psi_I s_I }_\infty \qwhereq \Psi_I = \Phi_{I^c}^* \Phi_I^{+,*}$ where $$A^+ = (A^* A)^{-1} A^*$$ is the pseudo-inverse of a matrix $$A$$.

The condition $\text{F}(\text{sign}(x_0))<1$ implies that $$x_0$$ is identifiable, and also implies robustness to small noise. It does not however imply robustness to a bounded noise.

Compute the $$\Psi_I$$ matrix.

PsiI = @(Phi,I)Phi(:, setdiff(1:size(Phi,2),I) )' * pinv(Phi(:,I))';

Compute $$\text{F}(s)$$.

F = @(Phi,s)norm(PsiI(Phi,supp(s))*s(supp(s)), 'inf');

The Exact Recovery Criterion (ERC) of a support $$I$$ was introduced by Tropp in

J. A. Tropp. Just relax: Convex programming methods for identifying sparse signals. IEEE Trans. Inform. Theory, vol. 52, num. 3, pp. 1030-1051, Mar. 2006.

Under the condition that $$\Phi_I$$ has full rank, this condition reads $\text{ERC}(I) = \norm{\Psi_{I}}_{\infty,\infty} = \umax{j \in I^c} \norm{ \Phi_I^+ \phi_j }_1.$ where $$\norm{A}_{\infty,\infty}$$ is the $$\ell^\infty-\ell^\infty$$ operator norm of a matrix $$A$$, computed with the Matlab command norm(A,'inf').

erc = @(Phi,I)norm(PsiI(Phi,I), 'inf');

The condition $\text{ERC}(\text{supp}(x_0))<1$ implies that $$x_0$$ is identifiable, and also implies robustness to small and bounded noise.

One can prove that the ERC is the maximum of the F criterion over all signs with the given support: $\text{ERC}(I) = \umax{ s, \text{supp}(s) \subset I } \text{F}(s).$

The weak-ERC is an approximation of the ERC using only the correlation matrix: $\text{w-ERC}(I) = \frac{ \umax{j \in I^c} \sum_{i \in I} \abs{\dotp{\phi_i}{\phi_j}} }{ 1-\umax{j \in I} \sum_{i \neq j \in I} \abs{\dotp{\phi_i}{\phi_j}} }$

g = @(C,I)sum(C(:,I),2);
werc_g = @(g,I,J)max(g(J)) / (1-max(g(I)));
werc = @(Phi,I)werc_g( g(Correlation(Phi),I), I, setdiff(1:size(Phi,2),I) );

One has, if $$\text{w-ERC}(I)>0$$, for $$I = \text{supp}(s)$$, $\text{F}(s) \leq \text{ERC}(I) \leq \text{w-ERC}(I) \leq \text{Coh}(\abs{I}).$ This shows in particular that the condition $\text{w-ERC}(\text{supp}(x_0))<1$ implies identifiability and robustness to small and bounded noise.
Exercice 2: (check the solution) Show that this inequality holds on a given matrix. What can you conclude about the sharpness of these criteria?

exo2;

N=2000, P=1990, |I|=6
F(s) =0.21
ERC(I) =0.27
w-ERC(s)=0.30
Coh(|s|)=1.72

Exercice 3: (check the solution) For a given matrix $$\Phi$$ generated using PhiRand, draw as a function of the sparsity $$k$$ the probability that a random sign vector $$s$$ of sparsity $$\norm{s}_0=k$$ satisfies the conditions $$\text{F}(x_0)<1$$, $$\text{ERC}(x_0)<1$$ and $$\text{w-ERC}(x_0)<1$$.

exo3;

## Restricted Isometry Criteria

The restricted isometry constants $$\de_k^1,\de_k^2$$ of a matrix $$\Phi$$ are the smallest $$\de^1,\de^2$$ that satisfy $\forall x \in \RR^N, \quad \norm{x}_0 \leq k \qarrq (1-\de^1)\norm{x}^2 \leq \norm{\Phi x}^2 \leq (1+\de^2)\norm{x}^2.$

E. Candes shows in

E. J. Candès. The restricted isometry property and its implications for compressed sensing. Compte Rendus de l'Academie des Sciences, Paris, Serie I, 346 589-592

that if $\de_{2k} \leq \sqrt{2}-1,$ then $$\norm{x_0}_0 \leq k$$ implies identifiability as well as robustness to small and bounded noise.

The stability constants $$\la^1(A), \la^2(A)$$ of a matrix $$A = \Phi_I$$ extracted from $$\Phi$$ are the smallest $$\tilde \la_1,\tilde \la_2$$ such that $\forall \al \in \RR^{\abs{I}}, \quad (1-\tilde\la_1)\norm{\al}^2 \leq \norm{A \al}^2 \leq (1+\tilde \la_2)\norm{\al}^2.$

These constants $$\la^1(A), \la^2(A)$$ are easily computed from the largest and smallest eigenvalues of $$A^* A \in \RR^{\abs{I} \times \abs{I}}$$

minmax = @(v)deal(1-min(v),max(v)-1);
ric = @(A)minmax(eig(A'*A));

The restricted isometry constants of $$\Phi$$ are computed as the largest stability constants of extracted matrices $\de^\ell_k = \umax{ \abs{I}=k } \la^\ell( \Phi_I ).$

The eigenvalues of $$A^* A$$, for such a random extracted matrix $$A$$ of size $$(P,k)$$, are essentially contained in the interval $$[a,b]$$ where $$a=(1-\sqrt{\be})^2$$ and $$b=(1+\sqrt{\be})^2$$ with $$\beta = k/P$$. More precisely, as $$k=\be P$$ tends to infinity, the distribution of the eigenvalues tends to the Marcenko-Pastur law $$f_\be(\la) = \frac{1}{2\pi \be \la}\sqrt{ (b-\la)^+ (\la-a)^+ }.$$

Exercice 4: (check the solution) Display, for an increasing value of $$k$$, the histogram of repartition of the eigenvalues of $$A^* A$$ where $$A$$ is a Gaussian matrix of size $$(P,k)$$ and variance $$1/P$$. For this, accumulate the eigenvalues for many realizations of $$A$$.

exo4;

Exercice 5: (check the solution) Estimate numerically a lower bound on $$\de_k^1,\de_k^2$$ by Monte-Carlo sampling of sub-matrices.

exo5;

## Sparse Spikes Deconvolution

We now consider a convolution dictionary $$\Phi$$. Such a dictionary is used with sparse regularization for the deconvolution of sparse spike trains.

Second derivative of Gaussian kernel $$g$$ with a given variance $$\si^2$$.

sigma = 6;
g = @(x)(1-x.^2/sigma^2).*exp(-x.^2/(2*sigma^2));

Create a matrix $$\Phi$$ so that $$\Phi x = g \star x$$ with periodic boundary conditions.

P = 1024;
[Y,X] = meshgrid(1:P,1:P);
Phi = normalize(g(mod(X-Y+P/2, P)-P/2));

To improve the conditioning of the dictionary, we sub-sample its atoms, so that $$P = \eta N > N$$.

eta = 2;
N = P/eta;
Phi = Phi(:,1:eta:end);

Plot the correlation function associated to the filter. Can you determine the value of the coherence $$\mu(\Phi)$$?

c = Phi'*Phi;
c = abs(c(:,end/2));
clf;
h = plot(c(end/2-50:end/2+50), '.-');
set(h, 'LineWidth', 1);
axis tight;

Create sparse data $$x_0$$ with two Diracs of opposite signs with spacing $$d$$.

twosparse = @(d)circshift([1; zeros(d,1); -1; zeros(N-d-2,1)], round(N/2-d/2));

Display $$x_0$$ and $$\Phi x_0$$.
x0 = twosparse(50);
clf;
subplot(2,1,1);
h = plot(x0, 'r'); axis tight;
subplot(2,1,2);
h = plot(Phi*x0, 'b'); axis tight;
set(h, 'LineWidth', 2);

Exercice 6: (check the solution) Plot the evolution of the criteria F, ERC and Coh as a function of $$d$$. Do the same plot for other sign patterns for $$x_0$$. Do the same plot for a Dirac comb with a varying spacing $$d$$.

exo6;
2023-03-27 01:43:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9284003973007202, "perplexity": 1364.4259927808935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00283.warc.gz"}
https://www.educative.io/answers/how-to-use-the-ratiosubtract-function-in-cpp
# How to use the ratio_subtract() function in C++

Harsh Jain

In this shot, we will learn how to use the ratio_subtract() function. This template alias is used to subtract two ratios. The ratio_subtract() function is available in the <ratio> header file in C++.

### What is a ratio?

A ratio is a representation of a fraction in which the numerator and denominator are separated by a colon (:) symbol. The numerator and denominator are compile-time integer constants. Let's understand with the help of an example.

Suppose that the first ratio is 1 : 3 and the second ratio is 5 : 2. So, the result of the ratio_subtract() function on these two ratios gives -13 : 6. Let's explore how the subtraction took place.

* First, we need to find the LCM (Lowest Common Multiple) of the denominators of the two fractions, which will be the denominator of the subtraction.
* The LCM of 3 and 2 is 6.
* Now subtraction = $\frac{(1*2-5*3)}{6}$ = $\frac{-13}{6}$.

### Parameters

The ratio_subtract() template takes the following parameters:

* Ratio1: A ratio object for the subtraction.
* Ratio2: Another ratio object to subtract from the first ratio object.

### Return value

This alias gives the result of the subtraction in the simplest form. It provides two member constants:

* num: The simplified numerator of the ratio after the subtraction of the two ratios.
* den: The simplified denominator of the ratio after the subtraction of the two ratios.

### Code

Let's have a look at the code.

#include <iostream>
#include <ratio>

using namespace std;
int main() {

    typedef ratio<1, 3> ratio1;
    typedef ratio<5, 2> ratio2;

    typedef ratio_subtract< ratio1, ratio2 > diff;

    cout << "The ratio after subtraction is : ";
    cout << diff::num << "/" << diff::den;

    return 0;
}

Use the ratio_subtract() function in C++

### Explanation

* In lines 1 and 2, we import the required header files.
* In line 5, we make a main() function.
* In lines 7 and 8, we declare two ratios.
* In line 10, we perform the subtraction of the two declared ratios by using the ratio_subtract() function.
* In line 12, we display a message regarding the result.
* In line 13, we display the numerator and denominator by accessing them through the diff alias.

In this way, we can use the ratio_subtract() function to subtract two ratios and get the result in the simplest form.
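As a quick cross-check of the worked arithmetic above, here is the same subtraction done with Python's standard fractions module; this is purely illustrative, since the C++ version performs the arithmetic at compile time.

```python
# 1/3 - 5/2 = -13/6, matching diff::num and diff::den from the C++ program.
from fractions import Fraction

print(Fraction(1, 3) - Fraction(5, 2))  # -> Fraction(-13, 6)
```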
2022-08-12 03:13:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7115981578826904, "perplexity": 2049.957932399984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571538.36/warc/CC-MAIN-20220812014923-20220812044923-00113.warc.gz"}
http://www.avrocks.com/can-we-exchange-the-permutation-of-a-sponge-construction.html
# Can we exchange the permutation of a sponge construction?

Part of a sponge construction (like SHA-3 uses) is a fixed permutation $p$, which is clearly not one-way. Could we, theoretically, exchange the permutation $p$ with any other permutation? What basic characteristics should such a permutation have – or would, for example, a simple LFSR already represent a valid replacement, assuming it spans the whole bit-range ($r+c$)?

Disclaimer: I'm not a cryptographer. The security of the sponge construction relies on two parts:

• the size of the capacity,
• and the strength of the permutation used in the construction.

This permutation is expected to meet at least the following requirements:

• provide strong diffusion (in Keccak this is provided by $\rho$ and $\pi$),
• provide confusion ($\theta$ and $\chi$).

In the case of Keccak, $\theta$ is a mainly column-oriented operation, which is why $\pi$ ensures that every bit of a column is spread evenly across the slice. This prevents the creation of patterns.

$\chi$ is the main ingredient in Keccak-$f$. It is the only part that is not linear. Without it, Keccak would be extremely weak against cryptanalysis.

(Figure: propagation of a difference through $\chi$)

Lastly, Keccak-$f$ provides weak alignment (resistance to truncated differential cryptanalysis). The idea is to make sure that the differences are not constrained by a subdivision of the state (bytes for AES, or groups of 5 bits in the case of Keccak). However, due to the weak alignment of Keccak-$f$, finding the lower security bounds of the algorithm is harder.

If another permutation provides such characteristics (the NORX permutation? I'll let Richie Frame answer that part. He loves NORX), then I guess it would be another decent choice. I haven't studied LFSRs.

TL;DR: The candidate permutation must provide strong diffusion, confusion and, if possible, weak alignment.

The permutation should be as close to a random permutation as possible. This is essentially a block cipher with a fixed key. A random permutation with given width $b$ is a permutation drawn randomly and uniformly from the set of all $2^b!$ $b$-bit permutations. Unfortunately, realizing random permutations suffers from similar problems as realizing random oracles, with the most important limitation being that all practical permutations have a short, efficiently computable description, which a random permutation does not.

The Keccak paper says:

The Keccak-f permutations should have no propagation properties significantly different from that of a random permutation

The design philosophy underlying Keccak is the hermetic sponge strategy. This consists of using the sponge construction for having provable security against all generic attacks and calling a permutation (or transformation) that should not have structural properties with the exception of a compact description

The authors of Keccak have a website about the sponge construction which says:

In fact, these results show that any attack against a sponge function implies that the permutation it uses can be distinguished from a typical randomly-chosen permutation. This naturally leads to the following design strategy, which we called the hermetic sponge strategy: adopting the sponge construction and building an underlying permutation f that should not have any properties exploitable in attacks. We have called such properties structural distinguishers.
First, as an iterated permutation can be seen as a block cipher with a fixed and known key, it should be impossible to construct for the full-round versions distinguishers like the known-key distinguishers for reduced-round versions of DES and AES given in [39]. This includes differentials with high differential probability (DP), high input-output correlations, distinguishers based on integral cryptanalysis or deviations in algebraic expressions of the output in terms of the input. We call this kind of distinguishers structural, to set them apart from trivial distinguishers that are of no use in attacks such as checking that $f(a)=b$ for some known input-output couple $(a,b)$ or the observation that f has a compact description.
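To make the rate/capacity mechanics concrete, here is a toy sponge sketch in C++ (my own illustration, not from the thread; the 16-bit placeholder permutation is bijective but deliberately trivial and has none of the strength discussed above):

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Placeholder permutation over the whole b = r + c = 16-bit state.
// Each step is invertible (rotation, XOR/ADD with a constant, multiplication
// by an odd constant mod 2^16), so p really is a permutation -- but it has
// no cryptographic strength whatsoever.
static uint16_t p(uint16_t s) {
    for (int round = 0; round < 8; ++round) {
        s = static_cast<uint16_t>((s << 5) | (s >> 11)); // rotate left by 5
        s ^= 0xA5C3;                                     // constant XOR
        s = static_cast<uint16_t>(s * 0x2D41u);          // odd multiplier: bijective mod 2^16
        s = static_cast<uint16_t>(s + 0x7F4Au);          // constant addition
    }
    return s;
}

// Toy sponge with rate r = 8 bits (low byte) and capacity c = 8 bits (high byte).
// Real designs pad the message and may squeeze several output blocks; omitted here.
static uint8_t toy_sponge_hash(const std::string& msg) {
    uint16_t state = 0;
    for (unsigned char byte : msg) {   // absorb one rate-sized block at a time
        state ^= byte;                 // XOR into the rate part only
        state = p(state);              // apply the permutation to the full state
    }
    return static_cast<uint8_t>(state & 0xFF); // squeeze: read the rate part
}

int main() {
    std::printf("toy hash of \"hello\": %02x\n", toy_sponge_hash("hello"));
    return 0;
}
```

The capacity byte is never directly read or written from outside, which is exactly the property the generic sponge security argument leans on.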
2019-09-23 00:32:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8454651832580566, "perplexity": 1248.5549211141404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575844.94/warc/CC-MAIN-20190923002147-20190923024147-00017.warc.gz"}
https://demo.formulasearchengine.com/wiki/Limit_superior_and_limit_inferior
# Limit superior and limit inferior

In mathematics, the limit inferior and limit superior of a sequence can be thought of as limiting (i.e., eventual and extreme) bounds on the sequence. They can be thought of in a similar fashion for a function (see limit of a function). For a set, they are the infimum and supremum of the set's limit points, respectively. In general, when there are multiple objects around which a sequence, function, or set accumulates, the inferior and superior limits extract the smallest and largest of them; the type of object and the measure of size is context-dependent, but the notion of extreme limits is invariant.

Limit inferior is also called infimum limit, liminf, inferior limit, lower limit, or inner limit; limit superior is also known as supremum limit, limit supremum, limsup, superior limit, upper limit, or outer limit.

(Figure: An illustration of limit superior and limit inferior. The sequence xn is shown in blue. The two red curves approach the limit superior and limit inferior of xn, shown as dashed black lines. In this case, the sequence accumulates around the two limits. The superior limit is the larger of the two, and the inferior limit is the smaller of the two. The inferior and superior limits agree if and only if the sequence is convergent (i.e., when there is a single limit).)

## Definition for sequences

The limit inferior of a sequence (xn) is defined by

${\displaystyle \liminf _{n\to \infty }x_{n}:=\lim _{n\to \infty }{\Big (}\inf _{m\geq n}x_{m}{\Big )}}$

or

${\displaystyle \liminf _{n\to \infty }x_{n}:=\sup _{n\geq 0}\,\inf _{m\geq n}x_{m}=\sup\{\,\inf\{\,x_{m}:m\geq n\,\}:n\geq 0\,\}.}$

Similarly, the limit superior of (xn) is defined by

${\displaystyle \limsup _{n\to \infty }x_{n}:=\lim _{n\to \infty }{\Big (}\sup _{m\geq n}x_{m}{\Big )}}$

or

${\displaystyle \limsup _{n\to \infty }x_{n}:=\inf _{n\geq 0}\,\sup _{m\geq n}x_{m}=\inf\{\,\sup\{\,x_{m}:m\geq n\,\}:n\geq 0\,\}.}$

If the terms in the sequence are real numbers, the limit superior and limit inferior always exist, as real numbers or ±∞ (i.e., on the extended real number line). More generally, these definitions make sense in any partially ordered set, provided the suprema and infima exist, such as in a complete lattice. Whenever the ordinary limit exists, the limit inferior and limit superior are both equal to it; therefore, each can be considered a generalization of the ordinary limit which is primarily interesting in cases where the limit does not exist. Whenever lim inf xn and lim sup xn both exist, we have

${\displaystyle \liminf _{n\to \infty }x_{n}\leq \limsup _{n\to \infty }x_{n}.}$

Limits inferior/superior are related to big-O notation in that they bound a sequence only "in the limit"; the sequence may exceed the bound. However, with big-O notation the sequence can only exceed the bound in a finite prefix of the sequence, whereas the limit superior of a sequence like $e^{-n}$ may actually be less than all elements of the sequence. The only promise made is that some tail of the sequence can be bounded by the limit superior (inferior) plus (minus) an arbitrarily small positive constant.

The limit superior and limit inferior of a sequence are a special case of those of a function (see below).

## The case of sequences of real numbers

In mathematical analysis, limit superior and limit inferior are important tools for studying sequences of real numbers.
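As a concrete illustration of the definitions above (a worked example of my own, not part of the original article), take $x_n = (-1)^n\left(1+\tfrac{1}{n}\right)$:

```latex
% Even-indexed terms 1 + 1/n decrease to 1; odd-indexed terms -(1 + 1/n) increase to -1, so
\sup_{m \ge n} x_m = 1 + \frac{1}{n_e} \longrightarrow 1,
\qquad
\inf_{m \ge n} x_m = -\Bigl(1 + \frac{1}{n_o}\Bigr) \longrightarrow -1
\qquad (n \to \infty),
% where n_e (resp. n_o) is the smallest even (resp. odd) index >= n.  Hence
\limsup_{n\to\infty} x_n = 1, \qquad \liminf_{n\to\infty} x_n = -1,
% and the ordinary limit does not exist because the two values differ.
```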
Since the supremum and infimum of an unbounded set of real numbers may not exist (the reals are not a complete lattice), it is convenient to consider sequences in the affinely extended real number system: we add the positive and negative infinities to the real line to give the complete totally ordered set [−∞,∞], which is a complete lattice.

### Interpretation

Consider a sequence ${\displaystyle (x_{n})}$ consisting of real numbers. Assume that the limit superior and limit inferior are real numbers (so, not infinite).

### Properties

The relationship of limit inferior and limit superior for sequences of real numbers is as follows:

${\displaystyle \limsup _{n\to \infty }(-x_{n})=-\liminf _{n\to \infty }x_{n}}$

As mentioned earlier, it is convenient to extend ${\displaystyle \mathbb {R} }$ to [−∞,∞]. Then, (xn) in [−∞,∞] converges if and only if

${\displaystyle \liminf _{n\to \infty }x_{n}=\limsup _{n\to \infty }x_{n}}$

in which case ${\displaystyle \lim _{n\to \infty }x_{n}}$ is equal to their common value. (Note that when working just in ${\displaystyle \mathbb {R} }$, convergence to −∞ or ∞ would not be considered as convergence.) Since the limit inferior is at most the limit superior, the following implications hold:

${\displaystyle \liminf _{n\to \infty }x_{n}=\infty \;\;\Rightarrow \;\;\lim _{n\to \infty }x_{n}=\infty }$

and

${\displaystyle \limsup _{n\to \infty }x_{n}=-\infty \;\;\Rightarrow \;\;\lim _{n\to \infty }x_{n}=-\infty .}$

If ${\displaystyle I=\liminf _{n\to \infty }x_{n}}$ and ${\displaystyle S=\limsup _{n\to \infty }x_{n}}$, then the interval [I, S] need not contain any of the numbers xn, but every slight enlargement [I − ε, S + ε] (for arbitrarily small ε > 0) will contain xn for all but finitely many indices n. In fact, the interval [I, S] is the smallest closed interval with this property. We can formalize this property like this: there exist subsequences ${\displaystyle x_{k_{n}}}$ and ${\displaystyle x_{h_{n}}}$ of ${\displaystyle x_{n}}$ (where ${\displaystyle k_{n}}$ and ${\displaystyle h_{n}}$ are monotone) for which we have

${\displaystyle \liminf _{n\to \infty }x_{n}+\epsilon >x_{h_{n}}\;\;\;\;\;\;\;\;\;x_{k_{n}}>\limsup _{n\to \infty }x_{n}-\epsilon }$

On the other hand, there exists a ${\displaystyle n_{0}\in \mathbb {N} }$ so that for all ${\displaystyle n\geq n_{0}}$

${\displaystyle \liminf _{n\to \infty }x_{n}-\epsilon <x_{n}<\limsup _{n\to \infty }x_{n}+\epsilon .}$

To recapitulate: In general we have that

${\displaystyle \inf _{n}x_{n}\leq \liminf _{n\to \infty }x_{n}\leq \limsup _{n\to \infty }x_{n}\leq \sup _{n}x_{n}}$

The liminf and limsup of a sequence are respectively the smallest and greatest cluster points. For any two sequences of real numbers, the limit superior satisfies subadditivity:

${\displaystyle \limsup _{n\to \infty }(a_{n}+b_{n})\leq \limsup _{n\to \infty }(a_{n})+\limsup _{n\to \infty }(b_{n}).}$

Analogously, the limit inferior satisfies superadditivity:

${\displaystyle \liminf _{n\to \infty }(a_{n}+b_{n})\geq \liminf _{n\to \infty }(a_{n})+\liminf _{n\to \infty }(b_{n}).}$

In the particular case that one of the sequences actually converges, say ${\displaystyle a_{n}\to a}$, then the inequalities above become equalities (with ${\displaystyle \limsup _{n\to \infty }a_{n}}$ or ${\displaystyle \liminf _{n\to \infty }a_{n}}$ being replaced by ${\displaystyle a}$).

#### Examples

• As an example, consider the sequence given by xn = sin(n).
Using the fact that pi is irrational, one can show that

${\displaystyle \liminf _{n\to \infty }x_{n}=-1}$

and

${\displaystyle \limsup _{n\to \infty }x_{n}=+1.}$

(This is because the sequence {1,2,3,...} is equidistributed mod 2π, a consequence of the equidistribution theorem.)

• An example from number theory is

${\displaystyle \liminf _{n\to \infty }(p_{n+1}-p_{n}),}$

where pn is the n-th prime number. The value of this limit inferior is conjectured to be 2 – this is the twin prime conjecture – but so far it has only been proven to be less than or equal to 246.[1] The corresponding limit superior is ${\displaystyle +\infty }$, because there are arbitrary gaps between consecutive primes.

## Real-valued functions

Assume that a function is defined from a subset of the real numbers to the real numbers. As in the case for sequences, the limit inferior and limit superior are always well-defined if we allow the values +∞ and −∞; in fact, if both agree then the limit exists and is equal to their common value (again possibly including the infinities). For example, given f(x) = sin(1/x), we have lim sup_{x→0} f(x) = 1 and lim inf_{x→0} f(x) = −1. The difference between the two is a rough measure of how "wildly" the function oscillates, and in observation of this fact, it is called the oscillation of f at a. This idea of oscillation is sufficient to, for example, characterize Riemann-integrable functions as continuous except on a set of measure zero [1]. Note that points of nonzero oscillation (i.e., points at which f is "badly behaved") are discontinuities which, unless they make up a set of measure zero, are confined to a negligible set.

## Functions from metric spaces to metric spaces

There is a notion of lim sup and lim inf for functions defined on a metric space whose relationship to limits of real-valued functions mirrors that of the relation between the lim sup, lim inf, and the limit of a real sequence. Take metric spaces X and Y, a subspace E contained in X, and a function f : E → Y. The space Y should also be an ordered set, so that the notions of supremum and infimum make sense. Define, for any limit point a of E,

${\displaystyle \limsup _{x\to a}f(x)=\lim _{\varepsilon \to 0}(\sup\{f(x):x\in E\cap B(a;\varepsilon )\setminus \{a\}\})}$

and

${\displaystyle \liminf _{x\to a}f(x)=\lim _{\varepsilon \to 0}(\inf\{f(x):x\in E\cap B(a;\varepsilon )\setminus \{a\}\})}$

Note that as ε shrinks, the supremum of the function over the ball is monotone decreasing, so we have

${\displaystyle \limsup _{x\to a}f(x)=\inf _{\varepsilon >0}(\sup\{f(x):x\in E\cap B(a;\varepsilon )\setminus \{a\}\})}$

and similarly

${\displaystyle \liminf _{x\to a}f(x)=\sup _{\varepsilon >0}(\inf\{f(x):x\in E\cap B(a;\varepsilon )\setminus \{a\}\}).}$

This finally motivates the definitions for general topological spaces. Take X, Y, E and a as before, but now let X and Y both be topological spaces. In this case, we replace metric balls with neighborhoods:

${\displaystyle \limsup _{x\to a}f(x)=\inf\{\sup\{f(x):x\in E\cap U\setminus \{a\}\}:U\ \mathrm {open} ,a\in U,E\cap U\setminus \{a\}\neq \emptyset \}}$

${\displaystyle \liminf _{x\to a}f(x)=\sup\{\inf\{f(x):x\in E\cap U\setminus \{a\}\}:U\ \mathrm {open} ,a\in U,E\cap U\setminus \{a\}\neq \emptyset \}}$

(there is a way to write the formula using a lim using nets and the neighborhood filter). This version is often useful in discussions of semi-continuity which crop up in analysis quite often.
An interesting note is that this version subsumes the sequential version by considering sequences as functions from the natural numbers, as a topological subspace of the extended real line, into the space (the closure of N in [−∞,∞] is N ∪ {∞}).

## Sequences of sets

The power set ℘(X) of a set X is a complete lattice that is ordered by set inclusion, and so the supremum and infimum of any set of subsets (in terms of set inclusion) always exist. In particular, every subset Y of X is bounded above by X and below by the empty set ∅ because ∅ ⊆ Y ⊆ X. Hence, it is possible (and sometimes useful) to consider superior and inferior limits of sequences in ℘(X) (i.e., sequences of subsets of X).

There are two common ways to define the limit of sequences of sets. In both cases:

• The sequence accumulates around sets of points rather than single points themselves. That is, because each element of the sequence is itself a set, there exist accumulation sets that are somehow nearby to infinitely many elements of the sequence.
• The supremum/superior/outer limit is a set that joins these accumulation sets together. That is, it is the union of all of the accumulation sets. When ordering by set inclusion, the supremum limit is the least upper bound on the set of accumulation points because it contains each of them. Hence, it is the supremum of the limit points.
• The infimum/inferior/inner limit is a set where all of these accumulation sets meet. That is, it is the intersection of all of the accumulation sets. When ordering by set inclusion, the infimum limit is the greatest lower bound on the set of accumulation points because it is contained in each of them. Hence, it is the infimum of the limit points.
• Because ordering is by set inclusion, the outer limit will always contain the inner limit (i.e., lim inf Xn ⊆ lim sup Xn). Hence, when considering the convergence of a sequence of sets, it generally suffices to consider the convergence of the outer limit of that sequence.

The difference between the two definitions involves how the topology (i.e., how to quantify separation) is defined. In fact, the second definition is identical to the first when the discrete metric is used to induce the topology on X.

### General set convergence

In this case, a sequence of sets approaches a limiting set when the elements of each member of the sequence approach the elements of the limiting set. In particular, if {Xn} is a sequence of subsets of X, then:

• lim sup Xn, which is also called the outer limit, consists of those elements which are limits of points in Xn taken from (countably) infinitely many n. That is, x ∈ lim sup Xn if and only if there exists a sequence of points xk and a subsequence {Xnk} of {Xn} such that xk ∈ Xnk and xk → x as k → ∞.
• lim inf Xn, which is also called the inner limit, consists of those elements which are limits of points in Xn for all but finitely many n (i.e., cofinitely many n). That is, x ∈ lim inf Xn if and only if there exists a sequence of points {xk} such that xk ∈ Xk and xk → x as k → ∞.

The limit lim Xn exists if and only if lim inf Xn and lim sup Xn agree, in which case lim Xn = lim sup Xn = lim inf Xn.[2]

### Special case: discrete metric

In this case, which is frequently used in measure theory, a sequence of sets approaches a limiting set when the limiting set includes elements from each of the members of the sequence. That is, this case specializes the first case when the topology on set X is induced from the discrete metric.
For points x ∈ X and y ∈ X, the discrete metric is defined by

${\displaystyle d(x,y):={\begin{cases}0&{\text{if }}x=y,\\1&{\text{if }}x\neq y.\end{cases}}}$

So a sequence of points {xk} converges to point x ∈ X if and only if xk = x for all but finitely many k. The following definition is the result of applying this metric to the general definition above. If {Xn} is a sequence of subsets of X, then:

• lim sup Xn consists of elements of X which belong to Xn for infinitely many n (see countably infinite). That is, x ∈ lim sup Xn if and only if there exists a subsequence {Xnk} of {Xn} such that x ∈ Xnk for all k.
• lim inf Xn consists of elements of X which belong to Xn for all but finitely many n (i.e., for cofinitely many n). That is, x ∈ lim inf Xn if and only if there exists some m>0 such that x ∈ Xn for all n>m.

The limit lim Xn exists if and only if lim inf Xn and lim sup Xn agree, in which case lim Xn = lim sup Xn = lim inf Xn.[3] This definition of the inferior and superior limits is relatively strong because it requires that the elements of the extreme limits also be elements of each of the sets of the sequence.

Using the standard parlance of set theory, consider the infimum of a sequence of sets. The infimum is a greatest lower bound or meet of a set. In the case of a sequence of sets, the sequence constituents meet at a set that is somehow smaller than each constituent set. Set inclusion provides an ordering that allows set intersection to generate a greatest lower bound ∩Xn of sets in the sequence {Xn}. Similarly, the supremum, which is the least upper bound or join, of a sequence of sets is the union ∪Xn of sets in sequence {Xn}. In this context, the inner limit lim inf Xn is the largest meeting of tails of the sequence, and the outer limit lim sup Xn is the smallest joining of tails of the sequence.

• Let In be the meet of the nth tail of the sequence. That is,

${\displaystyle I_{n}:=\inf\{X_{m}:m\in \{n,n+1,n+2,\ldots \}\}=\bigcap _{m=n}^{\infty }X_{m}=X_{n}\cap X_{n+1}\cap X_{n+2}\cap \cdots .}$

Then Ik ⊆ Ik+1 ⊆ Ik+2 ⊆ ⋯ because Ik+1 is the intersection of fewer sets than Ik. In particular, the sequence {Ik} is non-decreasing. So the inner/inferior limit is the least upper bound on this sequence of meets of tails. In particular,

{\displaystyle {\begin{aligned}\liminf _{n\to \infty }X_{n}&:=\lim _{n\to \infty }\inf\{X_{m}:m\in \{n,n+1,\ldots \}\}\\&=\sup\{\inf\{X_{m}:m\in \{n,n+1,\ldots \}\}:n\in \{1,2,\dots \}\}\\&={\bigcup _{n=1}^{\infty }}\left({\bigcap _{m=n}^{\infty }}X_{m}\right).\end{aligned}}}

So the inferior limit acts like a version of the standard infimum that is unaffected by the set of elements that occur only finitely many times. That is, the infimum limit is a subset (i.e., a lower bound) for all but finitely many elements.

• Similarly, let Jn be the join of the nth tail of the sequence. That is,

${\displaystyle J_{n}:=\sup\{X_{m}:m\in \{n,n+1,n+2,\ldots \}\}=\bigcup _{m=n}^{\infty }X_{m}=X_{n}\cup X_{n+1}\cup X_{n+2}\cup \cdots .}$

Then Jk ⊇ Jk+1 ⊇ Jk+2 ⊇ ⋯ because Jk+1 is the union of fewer sets than Jk. In particular, the sequence {Jk} is non-increasing. So the outer/superior limit is the greatest lower bound on this sequence of joins of tails.
In particular,

{\displaystyle {\begin{aligned}\limsup _{n\to \infty }X_{n}&:=\lim _{n\to \infty }\sup\{X_{m}:m\in \{n,n+1,\ldots \}\}\\&=\inf\{\sup\{X_{m}:m\in \{n,n+1,\ldots \}\}:n\in \{1,2,\dots \}\}\\&={\bigcap _{n=1}^{\infty }}\left({\bigcup _{m=n}^{\infty }}X_{m}\right).\end{aligned}}}

So the superior limit acts like a version of the standard supremum that is unaffected by the set of elements that occur only finitely many times. That is, the supremum limit is a superset (i.e., an upper bound) for all but finitely many elements. The limit lim Xn exists if and only if lim sup Xn = lim inf Xn, and in that case, lim Xn = lim inf Xn = lim sup Xn. In this sense, the sequence has a limit so long as all but finitely many of its elements are equal to the limit.

### Examples

The following are several set convergence examples. They have been broken into sections with respect to the metric used to induce the topology on set X.

Using either the discrete metric or the Euclidean metric

• Consider the set X = {0,1} and the sequence of subsets:

${\displaystyle \{X_{n}\}=\{\{0\},\{1\},\{0\},\{1\},\{0\},\{1\},\dots \}.}$

The "odd" and "even" elements of this sequence form two subsequences, {{0},{0},{0},...} and {{1},{1},{1},...}, which have limit points 0 and 1, respectively, and so the outer or superior limit is the set {0,1} of these two points. However, there are no limit points that can be taken from the {Xn} sequence as a whole, and so the interior or inferior limit is the empty set {}. That is,

• lim sup Xn = {0,1}
• lim inf Xn = {}

However, for {Yn} = {{0},{0},{0},...} and {Zn} = {{1},{1},{1},...}:

• lim sup Yn = lim inf Yn = lim Yn = {0}
• lim sup Zn = lim inf Zn = lim Zn = {1}

• Consider the set X = {50, 20, -100, -25, 0, 1} and the sequence of subsets:

${\displaystyle \{X_{n}\}=\{\{50\},\{20\},\{-100\},\{-25\},\{0\},\{1\},\{0\},\{1\},\{0\},\{1\},\dots \}.}$

As in the previous two examples,

• lim sup Xn = {0,1}
• lim inf Xn = {}

That is, the four elements that do not match the pattern do not affect the lim inf and lim sup because there are only finitely many of them. In fact, these elements could be placed anywhere in the sequence (e.g., at positions 100, 150, 275, and 55000). So long as the tails of the sequence are maintained, the outer and inner limits will be unchanged. The related concepts of essential inner and outer limits, which use the essential supremum and essential infimum, provide an important modification that "squashes" countably many (rather than just finitely many) interstitial additions.

Using the Euclidean metric

• Consider the sequence of subsets:

${\displaystyle \{X_{n}\}=\{\{0\},\{1\},\{1/2\},\{1/2\},\{2/3\},\{1/3\},\{3/4\},\{1/4\},\dots \}.}$

The "odd" and "even" elements of this sequence form two subsequences, {{0},{1/2},{2/3},{3/4},...} and {{1},{1/2},{1/3},{1/4},...}, which have limit points 1 and 0, respectively, and so the outer or superior limit is the set {0,1} of these two points. However, there are no limit points that can be taken from the {Xn} sequence as a whole, and so the interior or inferior limit is the empty set {}. So, as in the previous example,

• lim sup Xn = {0,1}
• lim inf Xn = {}

However, for {Yn} = {{0},{1/2},{2/3},{3/4},...} and {Zn} = {{1},{1/2},{1/3},{1/4},...}:

• lim sup Yn = lim inf Yn = lim Yn = {1}
• lim sup Zn = lim inf Zn = lim Zn = {0}

In each of these four cases, the elements of the limiting sets are not elements of any of the sets from the original sequence.
• The Ω limit (i.e., limit set) of a solution to a dynamic system is the outer limit of solution trajectories of the system.[2]:50–51 Because trajectories become closer and closer to this limit set, the tails of these trajectories converge to the limit set.

• For example, an LTI system that is the cascade connection of several stable systems with an undamped second-order LTI system (i.e., zero damping ratio) will oscillate endlessly after being perturbed (e.g., an ideal bell after being struck). Hence, if the position and velocity of this system are plotted against each other, trajectories will approach a circle in the state space. This circle, which is the Ω limit set of the system, is the outer limit of solution trajectories of the system. The circle represents the locus of a trajectory corresponding to a pure sinusoidal tone output; that is, the system output approaches/approximates a pure tone.

## Generalized definitions

The above definitions are inadequate for many technical applications. In fact, the definitions above are specializations of the following definitions.

### Definition for a set

The limit inferior of a set X ⊆ Y is the infimum of all of the limit points of the set. That is,

${\displaystyle \liminf X:=\inf\{x\in Y:x{\text{ is a limit point of }}X\}\,}$

Similarly, the limit superior of a set X is the supremum of all of the limit points of the set. That is,

${\displaystyle \limsup X:=\sup\{x\in Y:x{\text{ is a limit point of }}X\}\,}$

Note that the set X needs to be defined as a subset of a partially ordered set Y that is also a topological space in order for these definitions to make sense. Moreover, it has to be a complete lattice so that the suprema and infima always exist. In that case every set has a limit superior and a limit inferior. Also note that the limit inferior and the limit superior of a set do not have to be elements of the set.

### Definition for filter bases

Take a topological space X and a filter base B in that space. The set of all cluster points for that filter base is given by

${\displaystyle \bigcap \{{\overline {B}}_{0}:B_{0}\in B\}}$

where ${\displaystyle {\overline {B}}_{0}}$ is the closure of ${\displaystyle B_{0}}$. This is clearly a closed set and is similar to the set of limit points of a set. Assume that X is also a partially ordered set. The limit superior of the filter base B is defined as

${\displaystyle \limsup B:=\sup \bigcap \{{\overline {B}}_{0}:B_{0}\in B\}}$

when that supremum exists. When X has a total order, is a complete lattice and has the order topology,

${\displaystyle \limsup B=\inf\{\sup B_{0}:B_{0}\in B\}}$

Similarly, the limit inferior of the filter base B is defined as

${\displaystyle \liminf B:=\inf \bigcap \{{\overline {B}}_{0}:B_{0}\in B\}}$

when that infimum exists; if X is totally ordered, is a complete lattice, and has the order topology, then

${\displaystyle \liminf B=\sup\{\inf B_{0}:B_{0}\in B\}}$

If the limit inferior and limit superior agree, then there must be exactly one cluster point and the limit of the filter base is equal to this unique cluster point.

#### Specialization for sequences and nets

Note that filter bases are generalizations of nets, which are generalizations of sequences. Therefore, these definitions give the limit inferior and limit superior of any net (and thus any sequence) as well.
For example, take topological space ${\displaystyle X}$ and the net ${\displaystyle (x_{\alpha })_{\alpha \in A}}$, where ${\displaystyle (A,{\leq })}$ is a directed set and ${\displaystyle x_{\alpha }\in X}$ for all ${\displaystyle \alpha \in A}$. The filter base ("of tails") generated by this net is ${\displaystyle B}$ defined by

${\displaystyle B:=\{\{x_{\alpha }:\alpha _{0}\leq \alpha \}:\alpha _{0}\in A\}.\,}$

Therefore, the limit inferior and limit superior of the net are equal to the limit inferior and limit superior of ${\displaystyle B}$, respectively. Similarly, for topological space ${\displaystyle X}$, take the sequence ${\displaystyle (x_{n})}$ where ${\displaystyle x_{n}\in X}$ for any ${\displaystyle n\in \mathbb {N} }$ with ${\displaystyle \mathbb {N} }$ being the set of natural numbers. The filter base ("of tails") generated by this sequence is ${\displaystyle C}$ defined by

${\displaystyle C:=\{\{x_{n}:n_{0}\leq n\}:n_{0}\in \mathbb {N} \}.\,}$

Therefore, the limit inferior and limit superior of the sequence are equal to the limit inferior and limit superior of ${\displaystyle C}$, respectively.
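To tie the discrete-metric case above back to something executable, here is a small sketch (my own illustration, not from the article; it assumes the sets live in a universe of at most 64 points encoded as bitmasks, and that the sequence is eventually periodic, so one full period determines every tail):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Under the discrete metric:
//   lim sup X_n = elements appearing in infinitely many X_n   = union of a full period
//   lim inf X_n = elements appearing in all but finitely many = intersection of a full period
struct SetLimits { uint64_t limsup; uint64_t liminf; };

static SetLimits limits(const std::vector<uint64_t>& period) {
    uint64_t u = 0, i = ~0ULL;
    for (uint64_t x : period) { u |= x; i &= x; }
    return { u, i };
}

int main() {
    // The alternating example from the text: X_n = {0},{1},{0},{1},...
    // Expected: lim sup = {0,1} (mask 0x3), lim inf = {} (mask 0x0).
    SetLimits L = limits({ 1ULL << 0, 1ULL << 1 });
    std::printf("limsup = %llx, liminf = %llx\n",
                (unsigned long long)L.limsup, (unsigned long long)L.liminf);
    return 0;
}
```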
2021-06-18 21:02:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 99, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9903048276901245, "perplexity": 2314.402466436002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487641593.43/warc/CC-MAIN-20210618200114-20210618230114-00444.warc.gz"}
https://www.physicsforums.com/threads/solution-of-unsteady-linearized-potential-flow-pde.835448/
# Solution of unsteady linearized potential flow PDE

1. Oct 1, 2015

### MarkoA

Hi, I have a problem following the solution of a linearized potential flow equation in a publication by Fung. The problem describes potential flow over an oscillating plate. A boundary layer is approximated by defining a subsonic layer over the panel and supersonic flow above the subsonic flow. From the equations of motion (1) and (2) in combination with a standing wave condition of the wall (8) and traveling waves of the perturbations (9) and (10) it seems to be easy to get the solutions (12) and (13).

https://dl.dropboxusercontent.com/u/20358584/fung1.png [Broken] https://dl.dropboxusercontent.com/u/20358584/fung2.png [Broken]

Can anybody give me a hint of how to get to this solution? The paper is the following: http://arc.aiaa.org/doi/abs/10.2514/3.1661 [Broken]

https://dl.dropboxusercontent.com/u/20358584/fung3.png [Broken]

Last edited by a moderator: May 7, 2017

2. Oct 1, 2015

### Andy Resnick

When I am confronted with "it's easy to see that..." I usually first try substituting the answer into the expression and seeing what happens - sometimes there's an oddball change of variables or trig identity involved.

3. Oct 29, 2015

### MarkoA

I don't know. This doesn't help. What could he have done? I've heard that Duhamel's principle could be an approach for solving non-homogeneous PDEs like the wave equation. Could the solution have something to do with this approach? Substituting (13) in (2) gives:

$$\left[-\frac{1}{a_{\delta}^2} \omega^2 - \frac{2M_{\delta}}{a_{\delta}} \alpha_{\nu}\omega + \beta_{\delta}^2\alpha_{\nu}^2 + \zeta_{\delta}^2\right] \cdot e^{i(\omega t + \alpha_{\nu} x)} \cdot \left[C_{\nu} \sin(\zeta_{\nu}y) + D_{\nu} \cos(\zeta_{\nu}y)\right] = 0$$

Last edited: Oct 29, 2015

4. Nov 2, 2015

### MarkoA

Oh... the substitution was absolutely wrong. I need to find the correlation between the potential and z..

5. Nov 4, 2015

### MarkoA

I made some progress to get equations (14) and (15). Not sure if I can already correlate the Fourier constant alpha with the wave number.

The equation of motion:
$$\frac{1}{a^2}\frac{\partial^2\phi}{\partial t^2} + \frac{2M}{a} \frac{\partial^2 \phi}{\partial x \partial t} + \overline{\beta}^2 \frac{\partial^2 \phi}{\partial x^2} = \frac{\partial^2 \phi}{\partial y^2} \tag{eq:01}$$

The potential must oscillate harmonically:
$$\phi = \Psi(x,y)e^{i\omega t} \tag{eq:02}$$

This yields:
$$\Big(\frac{i\omega}{a}\Big)^2\Psi + 2i\frac{\omega M}{a} \frac{\partial \Psi}{\partial x} + \overline{\beta}^2 \frac{\partial^2 \Psi}{\partial x^2} = \frac{\partial^2 \Psi}{\partial y^2} \tag{eq:03}$$

A double Fourier transformation in x and y:
$$\Psi^* = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-i(\gamma y + \alpha x)} \Psi(x,y) \, dx \, dy \tag{eq:04}$$

If this Fourier transformation is applied to all terms of (eq:03), then $\Psi^*$ cancels out and (eq:03) can be written as:
$$\frac{\omega^2}{a^2} + 2 \frac{\omega M}{a} \alpha + \overline{\beta}^2 \alpha^2 = \gamma^2$$

This is equation (14) in the publication from Fung. The same approach for Fung's equation (2) yields (15). Can I already assume that $\gamma$ is $\gamma_{\nu}$ and $\alpha$ is $\alpha_{\nu}$?
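For anyone retracing post 5, the step worth making explicit (my addition, not from the thread) is the standard rule that the double Fourier transform turns derivatives into multiplications, assuming $\Psi$ decays at infinity:

```latex
\widehat{\partial_x \Psi} = i\alpha\,\Psi^*,\qquad
\widehat{\partial_x^2 \Psi} = -\alpha^2\,\Psi^*,\qquad
\widehat{\partial_y^2 \Psi} = -\gamma^2\,\Psi^*,
% so transforming eq. (03) term by term gives
-\frac{\omega^2}{a^2}\Psi^* - \frac{2\omega M}{a}\,\alpha\,\Psi^*
 - \overline{\beta}^2\alpha^2\,\Psi^* = -\gamma^2\,\Psi^*,
% and dividing through by -\Psi^* yields exactly the dispersion relation
% \omega^2/a^2 + 2(\omega M/a)\alpha + \overline{\beta}^2\alpha^2 = \gamma^2,
% i.e. equation (14).
```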
2018-01-22 19:02:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.878605306148529, "perplexity": 2048.1595251855406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891530.91/warc/CC-MAIN-20180122173425-20180122193425-00716.warc.gz"}
https://gamedev.stackexchange.com/questions/163904/how-to-get-an-unreal-engine-4-level-to-sync-using-git-source-control
# How to get an Unreal Engine 4 Level to sync using Git Source Control?

Hello Game Development Stack Exchange, I and a couple of collaborators are working on an Unreal Engine project; we do not share a common network, which meant that using local network storage to hold the data wasn't feasible. I discovered that Unreal Engine allows Source Control through Git services, so I created a private repository on GitLab that all collaborators have access to. Inside of Unreal Engine, I created a Project with the Minimal C++ with Starter Content, edited the 'Minimal_Default' map a little bit, inserted some new assets, and placed them on the map. I made sure all my needed asset information was being checked in to Source Control, saved everything, then submitted to Source Control, named the commit, and committed locally. Then, outside of Unreal Engine, using TortoiseGit for Windows, I pushed the commit to the GitLab repository, and all the necessary files were uploaded. However, when I cloned the same repository to another machine and opened it (with the exact same engine version), the 'Minimal_Default' map was the original, not the updated one. The assets I imported are inside the Content Browser, but they are not placed around the level as they were before the commit. So, my question is: how do I get the 'Minimal_Default' level to be committed and opened when the Git repo is cloned? Am I missing something in Unreal Engine, in pushing, or something else? All help would be greatly appreciated. Sincerely, Shejan Shuza

Remember to stage the changes of the map inside your commit. Even if Sourcetree is marking your files as modified (checked-in), you still have to add your file to your commit (i.e., stage the modified file). Also, after you commit some changes, you need to push them to the remote (usually origin). A complete workflow would look like this:

• Save it
• Stage the modified files
• Commit locally
• Push to remote

Others have to fetch the origin and pull to get your changes. If this does not solve your problem, please do share a photo of the commit itself, so we can see what's going on.

• Because git is terrible at binary data, it's also possible that Unreal sets a .gitignore property to skip map data; then any "git add..." would not include your change by default. I'll have to check on that, my environment is down right now. Jan 23 '19 at 18:53

I had a similar issue. To duplicate the problem:

1. Create a clean git clone of my Unreal (v4.23) project onto a Windows 10 machine.
2. Browse to the *.uproject file, right-click, Generate Visual Studio project files.
3. Open the resulting *.sln file using Visual Studio 2019 (v16.3.3).
4. Build the project's default configuration via Build / Build Solution.

Then:

1. Run git status in your cloned repository. In my case, there were differences under the Content/StarterContent directory, specifically with the Maps/Minimal_Default.umap and some Architecture/*.uasset files I'd changed.
2. Use git's suggestion for discarding changes in the working directory, e.g. git checkout -- <file>... listing each unexpectedly modified file.
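For completeness, here is the same stage/commit/push workflow spelled out on the command line (the file path is illustrative; the Git LFS lines are optional and assume the git-lfs extension is installed, since many Unreal teams use it for binary assets like .umap/.uasset files):

```sh
git status                                  # confirm the edited map shows as modified
git add Content/Maps/Minimal_Default.umap   # stage the map file explicitly
git commit -m "Place new assets in Minimal_Default"
git push origin master                      # or whatever branch the team uses

# Collaborators pick up the change with:
git pull origin master

# Optional: track large binary assets with Git LFS
git lfs install
git lfs track "*.uasset" "*.umap"
git add .gitattributes
```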
2021-09-23 15:59:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3156980574131012, "perplexity": 5114.440719201523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00284.warc.gz"}
https://math.stackexchange.com/questions/2979990/let-g-be-a-finite-group-if-a-bab-is-it-true-that-b2-e
# Let $G$ be a finite group. If $a = bab$, is it true that $b^{2} = e$?

Let $G$ be a finite group and let $a,b \in G$. If $a = bab$, is it true that $b^{2} = e$? If not, find a counterexample. It is clear that if $a = bab$ and $b^{2} = e$ are both true, then $ab = ba$. However, there exist groups (namely non-Abelian ones) with elements such that $ab \neq ba$. However, I am having trouble finding a non-Abelian group with elements such that $a = bab$, but $ab \neq ba$. How does one solve this problem?

• I am not sure how helpful it is, but you can say $a=bab$ and then substitute $a$ to get $a=bbabb$, etc., so $a=b^nab^n$ for any $n$ – Sorfosh Nov 1 '18 at 3:56
• To find counterexamples it's always a good idea to check the quaternion group as a first guess. – yathish Nov 1 '18 at 4:33

No. For example, in $Q_8$ we have $jij=jk=i$. Another counterexample is in $D_3$ where we have $f = R_{120}fR_{120}$.
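A quick check of both counterexamples (my own verification, using the quaternion relations $ij=k$, $jk=i$ and the dihedral relation $fR=R^{-1}f$):

```latex
% Q_8: take a = i, b = j. Then
bab = jij = j(ij) = jk = i = a, \qquad \text{but } b^2 = j^2 = -1 \neq e.
% D_3: take a = f, b = R_{120}. Then
bab = R_{120} f R_{120} = R_{120} (f R_{120})
    = R_{120} R_{120}^{-1} f = f = a, \qquad \text{while } b^2 = R_{240} \neq e.
```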
2019-10-14 01:50:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8978559970855713, "perplexity": 79.84958735945406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648481.7/warc/CC-MAIN-20191014003258-20191014030258-00053.warc.gz"}
https://brilliant.org/problems/usamo-problem-2/
# USAMO Problem 2

What is the smallest integer $$n$$, greater than one, for which the root-mean-square of the first $$n$$ positive integers is an integer?

$$\mathbf{Note.}$$ The root-mean-square of $$n$$ numbers $$a_1, a_2, \cdots, a_n$$ is defined to be $\left[\frac{a_1^2 + a_2^2 + \cdots + a_n^2}n\right]^{1/2}$
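Not part of the problem statement, but the definition suggests a direct numerical check: since $1^2+\cdots+n^2 = \frac{n(n+1)(2n+1)}{6}$, the mean of the first $n$ squares is $\frac{(n+1)(2n+1)}{6}$, and one can search for the first $n>1$ making that a perfect square (a brute-force sketch of my own):

```cpp
#include <cstdint>
#include <cstdio>
#include <cmath>

int main() {
    for (uint64_t n = 2; n <= 100000; ++n) {
        uint64_t num = (n + 1) * (2 * n + 1);
        if (num % 6 != 0) continue;                 // mean of squares must be an integer
        uint64_t mean = num / 6;                    // (1^2 + ... + n^2) / n
        uint64_t r = (uint64_t)std::llround(std::sqrt((double)mean));
        if (r * r == mean) {                        // RMS is an integer iff the mean is a square
            std::printf("n = %llu, RMS = %llu\n",
                        (unsigned long long)n, (unsigned long long)r);
            break;                                  // smallest such n
        }
    }
    return 0;
}
```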
2018-01-20 09:15:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6318143010139465, "perplexity": 273.8207171152644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889542.47/warc/CC-MAIN-20180120083038-20180120103038-00220.warc.gz"}
https://notes.yvt.jp/Graphics/Linear-Time-Approximate-Spherical-Gaussian-Filtering/
Linear-Time Approximate Spherical Gaussian Filtering

Originally published on November 29, 2017

(Figure: Physically based rendering using LTASG-filtered environment maps.)

Blinn-Phong approximates a Gaussian distribution as the specular exponent increases [Lyon1993]. [Olano2010] has shown the following relation in terms of the angle $\theta$ between $n$ and $h$:

$\cos^{s}(\theta)\approx\exp\left(-\frac{s}{2}\tan^{2}\theta\right)$

This makes the spherical Gaussian blur an excellent and appropriate choice for generating environment maps. By extending the Gaussian blur's separable property, it is possible to implement it in $O(K)$ (where $K$ is a kernel radius) for reasonably small $\sigma$ values.

# Related Works

## Pre-filtered Mipmapped Radiance Environment Maps

AMD's CubeMapGen has been a popular choice to generate pre-filtered environment maps. However, it is designed for offline generation and is too slow for real-time use. In three.js, PMREMGenerator is responsible for pre-filtering environment maps. It is implemented as a fragment shader that performs Monte Carlo sampling, and when the sample count is set as low as 32 it is capable of running at 60fps on Intel HD Graphics 4000. [Colbert2007] describes a practical implementation of GPU-based importance sampling for environment map pre-filtering.

## Mapping Gloss Values to Mip Levels

I wanted to have a constant kernel radius of $K$ (= 8) pixels for every mip level. $\sigma$ should be a somewhat smaller value than $K$ in order to fit the significant part of the Gaussian distribution within the kernel radius. I chose $\sigma=K/r$ where $r=4$. Under this condition and given that the image size of the base mip level is $N$ pixels, the relationship between the specular exponent $s$ and the mip level $n$ is found as follows:

\begin{aligned} \sigma=1/\sqrt{s} &=\frac{K}{r}\cdot\frac{1}{2^{-n}N} \\ s &=\left(\frac{2^{-n}Nr}{K}\right)^{2} \\ &=0.25(2^{-n}N)^{2} \\ n &=\frac{1}{2}\log_{2}\frac{N^{2}}{4s} \end{aligned}

# Algorithm

## Separate Filtering

The basic idea of the separable Gaussian filter is decomposing an n-dimensional Gaussian filter into $n$ cascaded one-dimensional Gaussian filters as shown in the following example where $n=2$:

\begin{aligned} G(x,y) & =\frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)\\ G_{x}(x,y) & =\begin{cases} \frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right) & y=0\\ 0 & y\ne0 \end{cases}\\ G_{y}(x,y) & =\text{ditto.}\\ G & =G_{x}\circ G_{y} \end{aligned}

This decomposition allows an $n$-dimensional Gaussian filter to be implemented with the time complexity $O(K)$ instead of $O(K^{n})$. At the cost of accuracy, this idea can be extended to a wider variety of filters that locally resemble a Gaussian filter, examples of which include a spatially varying anisotropic Gaussian filter [Zheng2011].
To apply this technique, one has to find the functions $A_{1}(\vec{x}),\ldots,A_{k}(\vec{x})$, each of which defines the axis direction and the standard deviation of the corresponding one-dimensional Gaussian filter. Note that $\vec{x}$ represents a point in an $n$-manifold $\Gamma$ embedded in a Euclidean space, and $A_{i}(\vec{x})$ must be a tangent vector of $\Gamma$ at $\vec{x}$. The axis functions must fulfill the following condition in order for the resulting filter to locally resemble an $n$-dimensional Gaussian filter:

$\mathrm{rank}(A_{1}(\vec{x})\ \cdots\ A_{k}(\vec{x}))\ge n$

In addition, from a practical perspective, $A_{1}(\vec{x}),\ldots,A_{k}(\vec{x})$ must be as smooth as possible because abrupt changes in them lead to visual artifacts.

For a spherical Gaussian blur ($\Gamma=S^{2}$, $n=2$), there exists no pair $A_{1}(\vec{x}),A_{2}(\vec{x})$ that satisfies this condition on every $\vec{x}\in\Gamma$, which is obvious from the "hairy ball theorem" stating that there exists no nonvanishing continuous tangent vector field on even-dimensional $n$-spheres. Therefore, at least 3 axis functions are required to realize a spherical Gaussian blur using this technique. I propose the following axis functions ($\left\{ \vec{a_{1}},\vec{a_{2}},\vec{a_{3}}\right\}$ is an orthonormal basis of $\mathbb{R}^{3}$):

\begin{aligned} A_{1}(\vec{x}) & =\sigma(\vec{a_{1}}-\vec{x}(\vec{x}\cdot\vec{a_{1}}))\\ A_{2}(\vec{x}) & =\sigma(\vec{a_{2}}-\vec{x}(\vec{x}\cdot\vec{a_{2}}))\\ A_{3}(\vec{x}) & =\sigma(\vec{a_{3}}-\vec{x}(\vec{x}\cdot\vec{a_{3}})) \end{aligned}

Each of them represents a tangent vector along the latitude, assuming the points $\pm\vec{a_{i}}$ are the north and south poles of the sphere. If $\left\{ \vec{a_{1}},\vec{a_{2}},\vec{a_{3}}\right\}$ is substituted with the standard basis, they can be written more neatly as:

\begin{aligned} A_{1}(\vec{x}) & =\sigma(\vec{e_{x}}-x_{x}\vec{x})\\ A_{2}(\vec{x}) & =\sigma(\vec{e_{y}}-x_{y}\vec{x})\\ A_{3}(\vec{x}) & =\sigma(\vec{e_{z}}-x_{z}\vec{x}) \end{aligned}

These axis functions are unambiguously derived by combining the following conditions: the tangential condition (they follow the surface of a sphere), the uniform blur condition, and the latitudinal condition (each of them follows a set of latitudinal lines surrounding a corresponding axis).

## Implementation on Cube Map

For each one-dimensional filter ($i\in\{1,2,3\}$) and each cube face, there are two cases to handle:

1. $\pm\vec{a_{i}}$ is inside the face — In this case, the filter is implemented as a radial blur oriented toward the pole $\pm\vec{a_{i}}$.
2. $\pm\vec{a_{i}}$ is outside the face — In this case, the filter is implemented as a directional blur along the U or V direction.

We will only consider the positive Z cube face in the following discussion.
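Before the per-face math below, here is how the pieces above might look in code (a sketch under my own naming, not from the article):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Mip level for a specular exponent s and base image size N, from the
// "Mapping Gloss Values to Mip Levels" derivation: n = 0.5 * log2(N^2 / (4 s)).
static float mipLevelForExponent(float s, float N) {
    return 0.5f * std::log2(N * N / (4.0f * s));
}

// The three latitudinal axis functions A_i(x) = sigma * (a_i - x (x . a_i)),
// with {a_1, a_2, a_3} the standard basis; each result is tangent to the unit
// sphere at x (dot it with x to check: sigma * (x_i - x_i * |x|^2) = 0).
static Vec3 axisFunction(int i, Vec3 x, float sigma) {
    const float xi = (i == 0) ? x.x : (i == 1) ? x.y : x.z;
    const Vec3 a = { (i == 0) ? 1.0f : 0.0f,
                     (i == 1) ? 1.0f : 0.0f,
                     (i == 2) ? 1.0f : 0.0f };
    return { sigma * (a.x - xi * x.x),
             sigma * (a.y - xi * x.y),
             sigma * (a.z - xi * x.z) };
}
```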
Given a texture coordinate $(u,v)$, the corresponding point $\vec{x}\in S^{2}$ is found as:

$\vec{x}=\frac{1}{\sqrt{1+u^{2}+v^{2}}}\begin{pmatrix}u\\ v\\ 1 \end{pmatrix}$

In the first case where $\pm\vec{a_{i}}$ is inside the face (hence $\vec{a_{i}}=\vec{e_{z}}$):

$A_{i}(\vec{x})=\sigma\begin{pmatrix}-\frac{u}{1+u^{2}+v^{2}}\\ -\frac{v}{1+u^{2}+v^{2}}\\ 1-\frac{1}{1+u^{2}+v^{2}} \end{pmatrix}$

By projecting it on the plane $z=1$ we obtain:

$\left.\frac{d}{dt}\frac{\vec{x}+A_{i}(\vec{x})\cdot t}{\vec{e_{z}}\cdot\left(\vec{x}+A_{i}(\vec{x})\cdot t\right)}\right|_{t=0}=\begin{pmatrix}-u\sigma\sqrt{1+u^{2}+v^{2}}\\ -v\sigma\sqrt{1+u^{2}+v^{2}}\\ 0 \end{pmatrix}$

In the second case where $\pm\vec{a_{i}}$ is outside the face, assuming $\vec{a_{i}}=\vec{e_{x}}$:

$A_{i}(\vec{x})=\sigma\begin{pmatrix}1-\frac{u^{2}}{1+u^{2}+v^{2}}\\ -\frac{uv}{1+u^{2}+v^{2}}\\ -\frac{u}{1+u^{2}+v^{2}} \end{pmatrix}$

By projecting it on the plane $z=1$ we obtain:

$\left.\frac{d}{dt}\frac{\vec{x}+A_{i}(\vec{x})\cdot t}{\vec{e_{z}}\cdot\left(\vec{x}+A_{i}(\vec{x})\cdot t\right)}\right|_{t=0}=\begin{pmatrix}\sigma\sqrt{1+u^{2}+v^{2}}\\ 0\\ 0 \end{pmatrix}$

# Demonstration

hyper3d-envmapgen implements the proposed algorithm using TypeScript, Rust, and WebAssembly. It uses a CPU to execute the algorithm, making it possible to generate a PMREM without hindering the main thread or a GPU. An interactive WebGL demo is available here.

# References

[Lyon1993] Lyon, R. 1993. Phong shading reformulation for hardware rendering. Tech. Rep. 43, Apple.

[Olano2010] Olano, M., & Baker, D. (2010, February). LEAN mapping. In Proceedings of the 2010 ACM SIGGRAPH symposium on Interactive 3D Graphics and Games (pp. 181-188). ACM.

[Zheng2011] Zheng, Z., & Saito, S. (2011, August). Screen space anisotropic blurred soft shadows. In SIGGRAPH Posters (p. 75).

[Colbert2007] Colbert, M., & Krivanek, J. (2007). GPU-based importance sampling. GPU Gems, 3, 459-476.
2021-04-19 22:00:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 60, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8118577003479004, "perplexity": 1791.0890772579487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038917413.71/warc/CC-MAIN-20210419204416-20210419234416-00423.warc.gz"}
http://tiku.21cnjy.com/?mod=quest&channel=4&xd=3&catid=11352
## Beijing Normal University Press edition — test questions

• Proofreading. The passage below contains 10 language errors, with at most two in any one sentence. Each error involves the addition, deletion, or change of a single word. Addition: put a caret (∧) where the word is missing and write the missing word below it. Deletion: cross out the redundant word with a slash (\). Change: underline the wrong word and write the corrected word below it. Note: 1. Each correction is limited to one word; 2. Only 10 corrections are allowed; extra ones (from the 11th on) earn no marks.

Mobile phones are being more wide used. They're light in weight and easy to carry, offer fast and convenient service for communication. The users use them for making phone call, sending short messages and Internet-surfing. In recently years, mobile phones have become popular to middle school students. Quite few use them to keep in touch with their families and friends, what, of course, was of great convenience. However, I don't think it's good to do so. In spite of the advantages mentioned above, student users often waste a lot of time chat on the phone in their spare time. Some even cheat in exams. In addition to, mobile phone bills cost their parents lots of money.

• Written expression (20 marks). At present, many schools adopt closed-campus management, and students hold different views about it. Based on the information in the table below, write a short passage giving your own opinion.

Some students think: the school restricts students' freedom; students have too little contact with society; students' interests and hobbies cannot develop fully, and therefore...

Other students think: school is a place for acquiring knowledge; students should concentrate on their studies at school; students lack self-discipline, and away from their teachers they might...

Your view: ...

Notes: 1. About 100–120 words; 2. The opening sentence is given and is not counted in the total.

Nowadays a lot of schools keep their students in school all day long.___________________

• ______ frightened us ______ a tiger turned up suddenly in front of us. A. What; was that B. What; was C. It; that was D. It was; that

• To improve their oral English, everyone in the class is supposed to ______ actively in these discussions. A. participate B. attend C. enter D. take

• Some say everyday miracles (奇迹) are predestined (注定的)----the right time for the appointed meeting. And it can happen anywhere. In 2001, 11-year-old Kevin Stephan was a bat boy for his younger brother's Little League team in Lancaster, New York. It was an early evening in late July. Kevin was standing on the grass away from the plate, where another youngster was warming up for the next game. Swinging his bat back and forth, giving it all the power an elementary school kid could give. The boy brought the bat back hard and hit Kevin in the chest. His heart stopped. When Kevin fell to the ground, the mother of one of the players rushed out of the stands to his aid. Penny Brown hadn't planned to be there that day, but at the last minute, her shift (换班) at the hospital had been changed to see her son's performance. She was given the night off. Penny bent over the senseless boy, his face already starting to turn blue, and giving CPR, breathing into his mouth and giving chest compressions. And he revived in the end. After his recovery, he became a volunteer junior firefighter, learning some of the emergency first-aid techniques that had saved his life. He studied hard in school and was saving money for college by working as a dishwasher in a local restaurant in his spare time. Kevin, now 18, was working in the kitchen when he heard people screaming, customers in confusion, employees rushing toward a table. He hurried into the main room and saw a woman there, her face turning blue, her hands at her throat. She was choking. Quickly Kevin stepped behind her, wrapped his arms around her and clasped his hands. Then, using skills he'd first learned in Scouts. The food that was trapped in the woman's throat was freed. The color began to return to her face. "The food was stuck. I couldn't breathe," she said. She thought she was dying. "I was very frightened." Who was the woman? Penny Brown.
【Question 1】The author wrote the passage to show us that __________. A. miracles are predestined and they can happen anywhere  B. whoever helps you in trouble will get a reward one day  C. God will help those who give others a helping hand  D. miracles won’t come without any difficulty sometimes 
【Question 2】Which of the following statements is true of Kevin Stephan? A. He was hit on the face by a boy and almost lost his life  B. He was a volunteer junior firefighter, teaching the players first-aid skills  C. He worked part-time in a local restaurant to save money for college  D. He saved Penny Brown though he didn’t really know how to deal with food choking 
【Question 3】The underlined word “revived” (paragraph 3) most likely means __________. A. came back to life  B. became worse  C. failed  D. moved 
【Question 4】Why did Penny Brown change her shift and get the night off that night? A. She was invited to give the players directions  B. She volunteered to give medical services  C. She was a little worried about her son’s safety  D. She came to watch her son’s game and cheer him 

• This task asks you to correct the errors in the passage below. For each numbered line, make a judgment: if the line has no error, put a check (√) on the line to the right; if it has an error (one per line), correct it as follows. Extra word: strike the word through with a slash (\) and write it, also struck through, on the line to the right. Missing word: put a caret (∧) at the gap and write the missing word on the line to the right. Wrong word: underline it and write the correction on the line to the right. Note: do not change lines that contain no error. (The errors are deliberate.) Dear Bill, Thank you very much for your letter. I pleased to hear【1】________ about your holiday and the people you meet in Rome. It【2】________ sounded great fun and how I wish I had been with【3】________ you. Thank you also for the stamps you sent them to me【4】________ for my collection. Most of their were those I had【5】________ been expecting for long. You said by your letter that【6】________ you wish to have some photos of me. Sorry to tell you,【7】________ I have little photos good enough to send to others. Yet I【8】________ will send you a photo of your family. Please write soon【9】________ and tell me what you are getting on with your college life.【10】________ Best wishes! Yours, Jason 

• Written expression (25 points). Many literary works are currently being adapted into films. Some people choose to watch the film, while others prefer to read the original. Write a short essay entitled “Film or book, which do you prefer?” covering the following points: 1. watching the film: saves time, is entertaining, and is easy to understand; 2. reading the original: richer detail and finer language; 3. your own view, with reasons. Note: 1. about 100 words; the given title and opening do not count toward the total; 2. reference expressions: original work, or book in the original. Film or book, which do you prefer? Some of us think that it is better to see the film than to read the book in the original. 

• Written expression (30 points). Study at senior high school is demanding, so sound study methods matter a great deal. The table below shows the different study methods of two students; describe them briefly and give your own view, in 100-120 words. The opening is given. Daytime: Li Hua listens attentively in class and consults the teacher about difficult questions as often as possible; Wang Hai dozes off in class and misses many key points. Evening: Li Hua spends less time finishing his homework, goes to bed early, and keeps his energy up in class; Wang Hai spends more time finishing his homework, stays up late studying, and cannot concentrate as a result. Opening: Li Hua and Wang Hai are two students of Senior high school. Both of them work hard but they have different learning methods. 
• Proofreading (10 items, 1 point each; 10 points in all). Suppose that in English class the teacher asks students to exchange and correct one another’s compositions; correct your deskmate’s composition below. It contains 10 language errors, to be fixed by adding, deleting, or changing a single word. To add: put a caret (∧) at the omission and write the missing word beneath it. To delete: strike the extra word through with a slash (\). To change: underline the wrong word and write the correction beneath it. Note: 1. each error and its correction are limited to one word; 2. only 10 corrections are allowed; from the 11th on, no credit is given. (The errors are deliberate.) Welcome to the Polynesian Cultural Centre! You are entering a world of funs! My name is called Tera, a Polynesian name which mean “the sun”. I’m very glad you can come today and learn about some of the amazing Polynesian way. As you can see, here behind myself is one of their boats. To build a boat like this, you needed a very tall, straight tree. You first cut down the tree and remove the branch. Then you cut the tree in half so you have two long pieces of woods. You use one piece to make the boat. Remove the inside for a person to sit. Take the bark off the outside of the boat and puts oil on it so it will easily go through the sea. 

• Written expression (1 item; 20 points). Suppose you are Li Hua. On your first day in Britain you met with a terrible fog and could not find the way to your school; you stood in the fog, frozen with fear. Fortunately an Englishman called John helped you: he took your hand to lead the way and encouraged you, dispelling your fear. Now that you have settled in, write a letter expressing your gratitude to him. Note: 1. the opening of the letter is given and does not count toward the total; 2. write smoothly and cover all the points; 3. about 150 words. Dear John, I’d like to thank you for your kind help in the terrible fog on my first day in London. __________________________________________________________________________ Sincerely yours, Li Hua
2019-12-05 16:50:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20535556972026825, "perplexity": 10872.015323408586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540481281.1/warc/CC-MAIN-20191205164243-20191205192243-00405.warc.gz"}
https://frontend.spiceworks.com/topic/2329567-windows-update-build
### 9 Replies

• No, but you should use Microsoft Windows Server Update Services (WSUS) for offline systems. You pair an online and an offline system together, approve the updates on the online one, allow it to download, export from the online one, and import into the offline one; your systems then connect to the offline server, pull and install the updates, and report back to the offline WSUS server. (The export/import step can be scripted; see the sketch below.) It's so much easier than manually figuring everything out.

• Since you are talking WSUS, I'm assuming you have an AD environment that is offline. I run this from PowerShell from time to time to check the versions of my Windows 10 PCs. You should be able to do something similar, just changing the filter as desired.

PowerShell:

Get-ADComputer -filter {(operatingsystem -like "Windows 10*") -and (Enabled -eq $true)} -Properties OperatingSystemVersion | select name,OperatingSystemVersion > c:\it\w10pc20210811.txt

As I have it here, it will save the output to c:\it\w10pc20210811.txt. Here are the first few lines from my last run:

name           OperatingSystemVersion
----           ----------------------
LT40601       10.0 (19042)
LC40206       10.0 (19042)
LC40505       10.0 (19042)
LC60801       10.0 (19042)

Normally, once systems report to WSUS and it updates its catalog, it will tell you what updates you need. Are you planning to have the WSUS server itself offline as well?

• Thanks for the reply. So in a case where we sell a customer two servers used for SCADA systems, we should also have them purchase one dedicated server that would serve as a WSUS server? If that is the case, then that would just add to the quote, but if that's the only option, then it is what it is. But I do think you're right that it would be tedious work figuring out which updates to bring down for customer A, who has Server 2019, or customer B, who has 2016. It would require a lot of work from us to download and then upload to a shared file for the customer to grab and install themselves. Thoughts?

• Remember, each purchase of a Windows Server Standard license includes 2 VMs that can be STACKED. You should always build a server as a Type 1 hypervisor and virtualize on top of it unless there's a specific reason not to (I'm sure there are valid reasons not to, but I have yet to find one). If you're buying 2 physical hardware servers and licensing them with a single Windows Server Standard each, create each physical system as a Type 1 hypervisor (use whatever flavour you want - but Hyper-V is fine), virtualize your SCADA system, and have 1 extra VM for the WSUS server. To get the online one - assuming both physical servers are air-gapped - then yes, some other Windows server (perhaps another VM on an existing infrastructure, in which case only a single Standard Server license is required) or physical server would be required.

• Are these SCADA servers never going to connect to the internet? If the answer is yes, then updates are 100% optional and probably not recommended. You will have to update them if you ever connect them to the internet, however.

• So if I understand you correctly, if a customer purchases a Windows Server 2019 Standard license on a physical box, then they can install two Server 2019 virtual machines? If that is the case, summarizing what you just said: install two server boxes licensed with Server 2019, run Hyper-V on each box, run a VM on each one as a SCADA system, and then on either box add an additional VM to be a WSUS server?
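For what it's worth, the export/import hand-off from the first reply can be scripted roughly like this. This is only a sketch: it assumes a default WSUS install path and that the update binaries live in a WsusContent folder, so adjust the paths for your servers.

# On the ONLINE (upstream) WSUS server, after the approved updates have downloaded:
cd "C:\Program Files\Update Services\Tools"
.\wsusutil.exe export C:\transfer\export.xml.gz C:\transfer\export.log

# Copy export.xml.gz AND the update binaries themselves (the WsusContent
# folder, e.g. D:\WSUS\WsusContent) across the air gap.

# On the OFFLINE (downstream) WSUS server, restore WsusContent to the same
# relative location first, then import the metadata:
cd "C:\Program Files\Update Services\Tools"
.\wsusutil.exe import C:\transfer\export.xml.gz C:\transfer\import.log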
• Yes, most likely they will be air-gapped from their network. So that's why I'm trying to figure out the best approach: post patch updates to, let's say, an FTP site, so that the customer can download new patches as they come out and have their IT department install the updates. The reason is that they are concerned with cyber security, which is why their SCADA servers will most likely not be connected to the internet; so the tricky part is how we would be able to provide them with the available patches for their servers as they get released. The other question I had is how we would patch the workstations that are running Windows 10 Professional, which are also offline.

• I'd recommend the upstream WSUS server that has been suggested, but that does require something to jump that air gap. You can always just connect a patch cable, allow it to export to the downstream server, then unplug it, hehe. You can use offline media, but as also mentioned, sometimes you don't want to do any updates to a SCADA system. When it works, the last thing you want to do is apply security updates and break something. But at the same time, threats can come from inside, so having the latest security updates could be a good thing; it really depends on how tight you want your security. If you're disabling USB drives and such, and your network has zero-trust or ICE or NAC or something, you can probably get away with leaving your servers unpatched.

So if I understand you correctly, if a customer purchases a Windows Server 2019 Standard license on a physical box, then they can install two Server 2019 virtual machines? If that is the case, summarizing what you just said: install two server boxes licensed with Server 2019, run Hyper-V on each box, run a VM on each one as a SCADA system, and then on either box add an additional VM to be a WSUS server?

The host system must not run anything other than the hypervisor - meaning you get 2 OS licenses as long as they are both VMs. But yes, otherwise you got it right. You can either choose to install "Windows Hyper-V Server", which is a free download from Microsoft's Eval Center and is not an eval but the full product, free; or install Windows Server 2019 and add the Hyper-V role.
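And if you go the "Windows Server 2019 plus the Hyper-V role" route, adding the role is a one-liner from an elevated PowerShell prompt (standard cmdlet; -Restart reboots the host to finish the install):

# Enable the Hyper-V role plus the management tools, then reboot
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart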
2022-08-12 06:34:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45252782106399536, "perplexity": 2002.888134059126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00463.warc.gz"}
http://projecteuclid.org/euclid.aop/1176996606
## The Annals of Probability

### "Normal" Distribution Functions on Spheres and the Modified Bessel Functions

#### Abstract

In $R^n$, Brownian diffusion leads to the normal or Gaussian distribution. On the sphere $S^n$, diffusion does not lead to the Fisher distribution, which often plays the role of the normal distribution on $S^n$. On the circle $(S^1)$ and sphere $(S^2)$, the two are known to be numerically close. It is shown that there exists a random stopping time for the diffusion which leads to the Fisher distribution. This follows from the fact, proved here, that the modified Bessel function $I_\nu(x)$ is a completely monotone function of $\nu^2$ (for fixed $x > 0$). More generally, we study the class of distributions on $S^n$ which can be represented as mixtures of diffusions. The stopping time distribution is characterized, but not given in computable form. Also, three new distribution functions involving Bessel functions are presented.

#### Article information

Source: Ann. Probab., Volume 2, Number 4 (1974), 593-607.

Dates: First available in Project Euclid: 19 April 2007

Permanent link: http://projecteuclid.org/euclid.aop/1176996606

Digital Object Identifier: doi:10.1214/aop/1176996606

Mathematical Reviews number (MathSciNet): MR370687

Zentralblatt MATH identifier: 0305.60033
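For reference, the modified Bessel function of the first kind is given by the standard series
$$I_\nu(x)=\sum_{k=0}^{\infty}\frac{1}{k!\,\Gamma(k+\nu+1)}\Bigl(\frac{x}{2}\Bigr)^{2k+\nu},\qquad x>0,$$
and complete monotonicity in $\nu^2$ means that $g(t)=I_{\sqrt{t}}(x)$ satisfies $(-1)^n g^{(n)}(t)\ge 0$ for all $n\ge 0$ and $t>0$. By Bernstein's theorem such a $g$ is the Laplace transform of a nonnegative measure, which is presumably the mixture representation behind the randomized stopping time described in the abstract.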
2016-10-21 17:26:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8872018456459045, "perplexity": 417.9349064535791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718285.69/warc/CC-MAIN-20161020183838-00107-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.mail-archive.com/lyx-users@lists.lyx.org/msg91085.html
# Re: Changing the lyxlist (labeling environment) formatting

On 7 February 2012 22:11, Richard Heck <rgh...@comcast.net> wrote:
> You just need to redefine the lyxlist environment, however you wish. You
> can see how LyX defines it from the exported source, or just by looking at
> stdlyxlist.inc, which says:
>
> \newenvironment{lyxlist}[1]
>   {\begin{list}{}
>     {\settowidth{\labelwidth}{#1}
>      \setlength{\leftmargin}{\labelwidth}
>      \renewcommand{\makelabel}[1]{##1\hfil}}}
>   {\end{list}}

I have to admit, I'm a bit lost here. In the exported source, all the environments are generated with:

\begin{lyxlist}{00.00.0000}

There don't seem to be any width specifiers hard-coded into the definition above, so there's nothing to change for the default label width. How is that set?

There are also the item labels themselves, which are basically:

\item [{Label goes here}]

...but I don't see what part of the lyxlist definition generates that. Where do I look for that?

I'm not completely new to LaTeX and TeX, but it's been about 8 years since I've had to delve into TeX internals to do this sort of stuff, so I would appreciate some pointers on this.

Cheers, Jason
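For what it's worth, the two pieces fit together like this: the {00.00.0000} in the exported source is exactly the #1 argument of the environment, and \settowidth{\labelwidth}{#1} measures that sample string, so the sample text itself is the width specifier. Likewise, \item [{Label goes here}] is the standard optional argument of \item inside a list environment, which LaTeX typesets through \makelabel. A preamble override along these lines should take control of both; this is only a sketch, and the 3cm width and bold labels are arbitrary illustrative choices, not LyX defaults:

\renewenvironment{lyxlist}[1]
  {\begin{list}{}
    {% ignore the sample string in #1 and hard-code the label width instead
     \setlength{\labelwidth}{3cm}%
     \setlength{\leftmargin}{\labelwidth}%
     \addtolength{\leftmargin}{\labelsep}%
     % ##1 is the label text that LyX passes via \item[{...}]
     \renewcommand{\makelabel}[1]{\bfseries ##1\hfil}}}
  {\end{list}}

(This assumes LyX's own definition is already in force when the user preamble is read; if \renewenvironment complains that lyxlist is undefined, defining the override after \begin{document} would be the fallback.)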
2021-12-08 03:00:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9531821608543396, "perplexity": 5570.306977623173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363437.15/warc/CC-MAIN-20211208022710-20211208052710-00315.warc.gz"}
https://math.stackexchange.com/questions/2942042/prove-this-rank-related-problem
# Prove this rank-related problem

Suppose $$A$$ is a matrix such that $$A^2\neq 0$$ but $$A^3=0$$. Then prove that $$\operatorname{rank}(A)>\operatorname{rank}(A^2)$$ and $$\operatorname{rank}(A)\neq \operatorname{tr}(A)$$.

We know that $$\operatorname{rank}(AB)\leq\min\{\operatorname{rank}(A),\operatorname{rank}(B)\}$$, so $$\operatorname{rank}(A^2)\leq \operatorname{rank}(A)$$. How to prove the remaining part?

• You don't mention the size of $A$, but let's call it an $n\times n$ matrix. Now, what do you know about the rank and nullity of $A$? Of $A^2$? This leads to the strict inequality of ranks that you are asking for. – hardmath Oct 5 '18 at 16:24

If $$A^3=0$$, then $$A$$ is nilpotent. Since $$A$$ is nilpotent, all of its eigenvalues are $$0$$, so its trace is also $$0$$ (because the trace is equal to the sum of the eigenvalues). Now you just need to prove that the rank is strictly bigger than $$0$$. Can you take it from here?
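To round out the strict inequality, here is a sketch of one standard argument in the direction of the rank-nullity hint above. Suppose, for contradiction, that $\operatorname{rank}(A^2)=\operatorname{rank}(A)$. Since $\operatorname{im}(A^2)=A(\operatorname{im}A)$, the restriction $A|_{\operatorname{im}A}\colon\operatorname{im}A\to\operatorname{im}A^2$ is a surjection between spaces of equal dimension, hence a bijection. Because $\operatorname{im}A^2\subseteq\operatorname{im}A$, the restriction of $A$ to $\operatorname{im}A^2$ is then injective as well, so
$$\operatorname{rank}(A^3)=\dim A(\operatorname{im}A^2)=\operatorname{rank}(A^2)=\operatorname{rank}(A)>0,$$
where the final inequality holds because $A^2\neq 0$ forces $A\neq 0$. This contradicts $A^3=0$, so $\operatorname{rank}(A)>\operatorname{rank}(A^2)$. Together with $\operatorname{tr}(A)=0$ and $\operatorname{rank}(A)>0$ from the answer above, both claims follow.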
2021-05-15 10:20:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9685019254684448, "perplexity": 122.14340112273118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991801.49/warc/CC-MAIN-20210515100825-20210515130825-00587.warc.gz"}
http://jeromyanglim.blogspot.com/2009/11/memory-management-in-r-few-tips-and.html
# Jeromy Anglim's Blog: Psychology and Statistics

## Monday, November 23, 2009

### Memory Management in R: A Few Tips and Tricks

This post discusses a few strategies that I have used to manage memory in R.

Stack Overflow Tips

Stack Overflow has a thread on Memory Management Tricks. I tend to follow these suggestions:

• .ls.objects(): There's a nice function (.ls.objects()) that lists the memory usage of the objects in the workspace using the most memory. It's good for flagging memory-hogging objects that can be deleted.
• Use scripts: Hadley Wickham suggests recording all R actions as a script and rerunning the script to restore all objects and thus remove temporary objects created in the process of programming the script.
• Import and Save: Josh Reich mentions the strategy of importing data and then saving these imported objects to disk (see post for details).

Additional Tricks that I use

Develop code on a subset of data: I've recently been processing logs of key presses from an experiment on skill acquisition. There are a million records. In order to speed up the process of testing and developing my code, I extract a subset of the data for the purposes of writing the code. A lot of people use this approach in the model testing area, where models on the full dataset would take hours to run. Thus, the strategy is to build the model on a subset and then run it on the full dataset.

A tweaked version of .ls.objects: I slightly tweaked the .ls.objects() function. I find it useful to see the size of objects in terms of megabytes. Thus, when I run into the issue of using too much memory, I'll run this function and see if any of the objects using a lot of memory should be removed from the workspace (optionally saving them to disk first).

.ls.objects <- function (pos = 1, pattern, order.by = "Size", decreasing=TRUE, head = TRUE, n = 10) {
    # based on postings by Petr Pikal and David Hinds to the r-help list in 2004
    # modified by: Dirk Eddelbuettel (http://stackoverflow.com/questions/1358003/tricks-to-manage-the-available-memory-in-an-r-session)
    # I then gave it a few tweaks (show size as megabytes and use defaults that I like)
    # returns a data frame of the objects and their associated storage needs
    napply <- function(names, fn) sapply(names, function(x) fn(get(x, pos = pos)))
    names <- ls(pos = pos, pattern = pattern)
    obj.class <- napply(names, function(x) as.character(class(x))[1])
    obj.mode <- napply(names, mode)
    obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
    obj.size <- napply(names, object.size) / 10^6 # megabytes
    obj.dim <- t(napply(names, function(x) as.numeric(dim(x))[1:2]))
    vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
    obj.dim[vec, 1] <- napply(names, length)[vec]
    out <- data.frame(obj.type, obj.size, obj.dim)
    names(out) <- c("Type", "Size", "Rows", "Columns")
    out <- out[order(out[[order.by]], decreasing=decreasing), ]
    if (head)
        out <- head(out, n)
    out
}
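A quick usage sketch for the function above; the object name bigObject is a stand-in, so substitute whatever .ls.objects() flags in your own session:

# Show the five largest objects in the workspace, sizes in megabytes
.ls.objects(order.by = "Size", n = 5)

# List everything whose name matches a pattern, e.g. temporary objects
.ls.objects(pattern = "^tmp", head = FALSE)

# Once a memory hog is identified, save it to disk and drop it
save(bigObject, file = "bigObject.RData")  # bigObject is a hypothetical name
rm(bigObject)
gc()  # prompt R to release the freed memory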
2016-05-29 21:07:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1717495322227478, "perplexity": 5796.717881516604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049281978.84/warc/CC-MAIN-20160524002121-00242-ip-10-185-217-139.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2880830/finitely-generated-r-module-is-a-field-iff-r-is-a-field
# Finitely generated R-module is a field iff R is a field? [duplicate]

Suppose $$R \subseteq S$$ are integral domains and $$S$$ is finitely generated as an $$R$$-module; show that $$S$$ is a field if and only if $$R$$ is a field.

Not sure how to do this one. If $$S$$ is a field, then I was considering that, with $$s_1,\ldots,s_n$$ generating $$S$$ as an $$R$$-module, $$\exists r_1,\ldots, r_n\in R$$ s.t. $$1 = r_1s_1+\cdots+r_ns_n$$, so for any $$r\in S$$ we have $$r = rr_1s_1+\cdots+rr_ns_n$$. Maybe that is somehow useful for taking inverses of elements.

The assumption that $$S$$ is an integral domain is necessary because otherwise we could have $$S = \mathbb{Z}_p[x]/(f(x))$$ where $$f(x)$$ is not irreducible. This is still a finitely generated $$\mathbb{Z}_p$$-module, but it's not a field.

Any hints or solutions would be much appreciated. I feel like this isn't that hard and I'm missing something simple.

marked as duplicate by rschwieb (abstract-algebra) Aug 13 '18 at 20:10

• See Atiyah-Macdonald, Prop 5.1 and 5.7. – lhf Aug 13 '18 at 0:11
• This is one of the most duplicated ring-theory questions on the site that I know of. – rschwieb Aug 13 '18 at 20:12

Uncover the spoilers for solutions completing the hints below:

• Suppose $$R$$ is a field, and let $$s \in S$$ be a nonzero element. Then multiplication by $$s$$ is an $$R$$-linear endomorphism of $$S$$, which is injective since $$s$$ is nonzero and $$S$$ is a domain. Since $$S$$ is a finite-dimensional $$R$$-vector space, it follows that multiplication by $$s$$ is also surjective, and so $$1$$ is in the image of this map.

• Suppose $$S$$ is a field. The solution I have in mind for this direction is a bit trickier. Let $$r \in R$$ be nonzero, and let $$s$$ be the inverse of $$r$$ in $$S$$. As before, consider the $$R$$-linear map $$\varphi_{s} \colon S \to S$$ corresponding to multiplication by $$s$$. Since $$S$$ is a finitely generated $$R$$-module, $$\varphi_{s}$$ satisfies a monic polynomial relation with coefficients in $$R$$ by Cayley-Hamilton. That is, there exist $$r_{1}, \ldots, r_{n} \in R$$ such that multiplication by $$s^{n}+r_{1}s^{n-1} + \cdots +r_{n}$$ is the zero element of $$\mathrm{End}_{R}(S)$$. Since $$S$$ is a faithful $$R$$-module ($$R$$ is a subring of $$S$$, and so $$S$$ contains $$1$$), this implies that $$s^{n}+r_{1}s^{n-1} + \cdots +r_{n} = 0$$. Now multiply both sides by $$r^{n-1}$$ to conclude that $$s \in R$$.

Hints:

$\Rightarrow$: If $R$ is a field, let $s\in S$, and consider multiplication by $s$ in $S$. Check this is an injective $R$-linear map. What can you conclude, knowing $S$ is a finite-dimensional $R$-vector space?

$\Leftarrow$: If $S$ is a field, consider $r\in R$; you know $r^{-1}\in S$, hence it is a root of a monic polynomial in $R[X]$ (because $S$, being a finitely generated $R$-module, is integral over $R$). Deduce from this polynomial equation that $r^{-1}$ is a polynomial in $r$, hence it belongs to $R$.
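To spell out the last multiplication in the Cayley-Hamilton argument (routine algebra, using $rs=1$ term by term):
$$0 = r^{n-1}\bigl(s^{n}+r_{1}s^{n-1}+\cdots+r_{n-1}s+r_{n}\bigr) = s + r_{1} + r_{2}r + \cdots + r_{n}r^{n-1},$$
since $r^{n-1}s^{n-k} = (rs)^{n-k}r^{k-1} = r^{k-1}$ for $1 \le k \le n$ and $r^{n-1}s^{n} = (rs)^{n-1}s = s$. Hence $s = -\bigl(r_{1} + r_{2}r + \cdots + r_{n}r^{n-1}\bigr) \in R$, so the inverse of $r$ already lies in $R$ and $R$ is a field.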
2019-10-14 16:32:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 42, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8798031806945801, "perplexity": 186.5107130831903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653876.31/warc/CC-MAIN-20191014150930-20191014174430-00084.warc.gz"}
http://ncatlab.org/nlab/show/Skolem's+paradox
# nLab Skolem's paradox

## Idea

According to the Löwenheim-Skolem theorem, if a first-order theory over a countable alphabet has an infinite model, then it has a countable model. Consider the language of some form of set theory and a countable model satisfying the axiom of infinity. Cantor's diagonal argument can then be carried out internally within the model and provides internally uncountable "sets" in that countable model.

The resolution of this apparent paradox is that, while this conclusion is true internally, it is not true externally: every infinite set of that model is externally countable, hence externally there is a $1$-$1$ correspondence between any two of them, including between the model's version of an uncountable set $X$ and that of its power set $P(X)$. However, that correspondence (or its graph) is not in the model! One can enlarge the model by adding that function (and more). But this extended model will necessarily have $P(X)$ uncountable externally, and there is no longer a $1$-$1$ function from $X$ to $P(X)$ externally.
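To make the internal half of the argument concrete, here is the usual diagonal computation, stated inside the model: given any function $f\colon X\to P(X)$ that is an element of the model, the set
$$D=\{\,x\in X \mid x\notin f(x)\,\}$$
is again an element of the model by the separation axiom, and $D\neq f(x)$ for every $x\in X$ (since $x\in D \iff x\notin f(x)$), so no such $f$ is surjective; internally, $P(X)$ is strictly bigger than $X$. The external bijection escapes this argument precisely because it is not an element of the model, so separation cannot be applied along it.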
2014-07-24 06:25:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9178222417831421, "perplexity": 428.7834867030997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888210.96/warc/CC-MAIN-20140722025808-00006-ip-10-33-131-23.ec2.internal.warc.gz"}