Question # In the circuit shown, initially there is no charge on the capacitors and keys ${{S}_{1}}$ and ${{S}_{2}}$ are open. The values of the capacitors are ${{C}_{1}}=10\mu F$, ${{C}_{2}}=10\mu F$, and ${{C}_{3}}={{C}_{4}}=80\mu F$. Which statement(s) is/are correct?A.) The key ${{S}_{1}}$ is kept closed for a long time such that the capacitors are fully charged. Now the key ${{S}_{2}}$ is closed, at this time the instantaneous current across the $3\Omega$ resistor (between points P and Q) will be $0.2A$ (rounded off to the first decimal place).B.) If the key ${{S}_{1}}$ is kept closed for a long time such that the capacitors are fully charged, the voltage across ${{C}_{1}}$ will be $4V$.C.) At time $t=0$, the key ${{S}_{1}}$ is closed, the instantaneous current in the closed circuit will be $25mA$.D.) If ${{S}_{1}}$ is kept closed for a long time such that the capacitors are fully charged, the voltage difference between P and Q will be $10V$. Hint: In a steady-state circuit, the voltage across a capacitor becomes constant and the capacitor acts like an open circuit. We will use this concept and apply the equation $q=CV$ to find the charge on the capacitor plates. For finding the instantaneous current, we will use KVL in different loops. The moment we close the switch ${{S}_{1}}$, the charge on each capacitor is zero, so every capacitor can be replaced by a wire. The values of the resistors are $\text{70 }\!\!\Omega\!\!\text{ , 30 }\!\!\Omega\!\!\text{ , and 100 }\!\!\Omega\!\!\text{ }$. After closing key ${{S}_{1}}$, the driving voltage in the circuit is $5V$. The current flowing in the circuit will be $i=\dfrac{V}{70+100+30}=\dfrac{5}{200}=0.025A$. We get $i=25mA$. Now, if key ${{S}_{1}}$ is kept closed for a long time, the capacitors reach their steady state and no current flows through the capacitor branches. For a capacitor, we will use the formula $q=CV$, where $q$ is the charge on the capacitor, $C$ is the capacitance, and $V$ is the voltage drop across the capacitor. 
Applying KVL in the loop for which ${{S}_{1}}$ is closed, \begin{align} & \dfrac{q}{{{C}_{1}}}+\dfrac{q}{{{C}_{2}}}+\dfrac{q}{{{C}_{3}}}-5=0 \\ & \dfrac{q}{10}+\dfrac{q}{80}+\dfrac{q}{80}-5=0 \\ & \dfrac{10q}{80}=5 \\ & q=40\mu C \\ \end{align} Voltage across ${{C}_{1}}=\dfrac{q}{{{C}_{1}}}=\dfrac{40}{10}=4V$. We get the value of $q$ equal to $40\mu C$. Now, the moment the key ${{S}_{2}}$ is closed, the charge on the capacitors remains the same, since charge cannot change instantaneously. Applying KVL again in the two loops, with the capacitor voltages held at their steady-state values, gives a pair of simultaneous equations for the branch currents. While key ${{S}_{1}}$ alone is closed, the voltage drop in the circuit is $5V$ and the equivalent resistance is the sum of $\text{70 }\!\!\Omega\!\!\text{ , 30 }\!\!\Omega\!\!\text{ , and 100 }\!\!\Omega\!\!\text{ }$, as all three are present in series in the circuit, so $i=\dfrac{V}{R}=\dfrac{5}{200}=0.025A$, i.e. $i=25mA$. After closing key ${{S}_{2}}$, we can find the value of the current flowing in the branch PQ. The equivalent resistance in this case will be $151\Omega$, so \begin{align} & {{I}_{PQ}}=\dfrac{12}{151}\approx 0.08 \\ & {{I}_{PQ}}=80mA \\ \end{align} which rules out option A. If the key ${{S}_{1}}$ is kept closed for a long time such that the capacitors are fully charged, the voltage across ${{C}_{1}}$ will be $4V$. Also, at time $t=0$ when the key ${{S}_{1}}$ is closed, the instantaneous current in the closed circuit will be $25mA$. Hence, the correct options are B and C. Note: Students should keep in mind the concept of steady state. Steady state is reached by keeping a switch open or closed for a long period of time; there is then no further accumulation of charge or energy, and the system is in equilibrium. Apply all the equations very carefully.
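The solution's arithmetic can be replayed in a few lines (a sketch that simply mirrors the equations above; component values are taken from the problem statement, with capacitances in $\mu F$ so the charge comes out in $\mu C$):

```python
# Instantaneous current the moment S1 closes: the uncharged capacitors act
# as wires, so only the three series resistors limit the current.
R_total = 70 + 100 + 30          # ohms
V = 5                            # volts
i0 = V / R_total                 # amperes; 5/200 = 0.025 A = 25 mA

# Steady state: replay the solution's loop equation q/10 + q/80 + q/80 = 5,
# with capacitances in microfarads, so q comes out in microcoulombs.
q = V / (1 / 10 + 1 / 80 + 1 / 80)   # ≈ 40 uC
v_c1 = q / 10                        # ≈ 4 V across the 10 uF capacitor
print(i0, q, v_c1)
```

This confirms options B and C: $25\,mA$ at $t=0$ and $4V$ across ${{C}_{1}}$ in steady state.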
{}
## Superradiance and the assumption of indiscernible atom-field coupling Hi, If we have a system of N atoms confined to dimensions much smaller than the wavelength corresponding to the transition, we see superradiant decay. Now, in these cases, we always assume that the excitation is symmetrically or antisymmetrically distributed. For instance, if we have one excitation among N atoms, an example of an initial state is $$\mid\psi\rangle=\frac{1}{\sqrt{N}}\sum_{j}\mid g_{1}g_{2}..e_{j}..g_{N}\rangle$$ The claim is that the correct treatment of superradiance needs to assume that the atomic ensemble couples to the field in an indiscernible way. In other words, there is no way to know which atom emitted the photon. Why is this assumption necessary and why does it correctly describe superradiance? Why can I not say, for instance, that the kth atom is excited and all others are in the ground state? Remember, we are not doing an actual experiment, so for our purposes all the atoms are just dipoles and no dimensions have been specified for the atoms. Thank you! 
This is a bit like asking why we need to assume indistinguishability in the double-slit experiment and cannot just have a look at the two individual slits. If you have an indistinguishable situation, that also means your atoms will radiate coherently, in phase with each other. That makes a huge difference. For independently radiating atoms, you will get an intensity proportional to N (the number of atoms): you just take the field of each individual atom, square it, and sum up the intensities. For indistinguishable atoms you cannot neglect phase and you cannot assign one field to one atom. So you have to sum up all the fields (or probability amplitudes, if you want a quantum-optics treatment). This sum will be proportional to N. Now you need to square afterwards to get the intensity, which will obviously go as N^2. Thank you for replying. I see your point.
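The N vs. N² scaling described in the answer can be illustrated numerically (a toy sketch with unit-amplitude dipole fields; the values of N and the number of trials are arbitrary choices):

```python
import cmath
import random

N = 50  # number of atoms (toy value)

# Coherent (indistinguishable) case: all N unit-amplitude fields add in
# phase, so the total field is N and the intensity is |N|^2 = N^2.
coherent_intensity = abs(sum(1.0 for _ in range(N))) ** 2

# Incoherent (independent) case: each atom radiates with a random phase;
# intensities add, so the average total intensity is N.
random.seed(0)
trials = 2000
total = 0.0
for _ in range(trials):
    field = sum(cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
                for _ in range(N))
    total += abs(field) ** 2
incoherent_intensity = total / trials  # ≈ N for random phases

print(coherent_intensity, incoherent_intensity)
```

With N = 50, the coherent intensity is exactly 2500 while the phase-averaged incoherent intensity hovers near 50, which is the whole point of superradiant enhancement.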
{}
# How do you find the y intercept of y + x = -8? Jun 20, 2018 $- 8$ #### Explanation: To find the $y$-intercept of our equation, we must put it in slope-intercept form, which is given by $y = m x + b$, where $b$ is the $y$-intercept. We essentially just want $y$ on the left side. Let's subtract $x$ from both sides to get $y = - x - 8$ We see that our $b$ value is $- 8$. This is our $y$-intercept. Hope this helps!
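The rearrangement can be double-checked numerically (a trivial sketch; the function name is just for illustration):

```python
# y + x = -8 rearranged to slope-intercept form: y = -x - 8
def y(x):
    return -x - 8

print(y(0))  # the y-intercept: -8
```

Evaluating at $x = 0$ reads off $b$ directly, since $y = mx + b$ reduces to $y = b$ there.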
{}
## On Estimating $L_2^2$ Divergence We give a comprehensive theoretical characterization of a nonparametric estimator for the $L_2^2$ divergence between two continuous distributions. We first bound the rate of convergence of our estimator, showing that it is $\sqrt{n}$-consistent provided the densities are sufficiently smooth. In this smooth regime, we then show that our estimator is asymptotically normal, construct asymptotic confidence intervals, and establish a Berry-Ess\'{e}en style inequality characterizing the rate of convergence to normality. We also show that this estimator is minimax optimal.
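For intuition about the quantity being estimated, here is a naive histogram plug-in estimate of $\int (p(x)-q(x))^2\,dx$ (this is *not* the paper's estimator, which is constructed to achieve $\sqrt{n}$-consistency; the sample sizes, bin count, support range, and distributions below are arbitrary choices):

```python
import random

def l22_divergence_plugin(xs, ys, bins=30, lo=-5.0, hi=5.0):
    """Naive histogram plug-in estimate of the integral of (p - q)^2.

    Samples outside [lo, hi) are simply dropped in this sketch.
    """
    width = (hi - lo) / bins
    p = [0.0] * bins
    q = [0.0] * bins
    for x in xs:
        if lo <= x < hi:
            p[int((x - lo) / width)] += 1.0 / (len(xs) * width)
    for y in ys:
        if lo <= y < hi:
            q[int((y - lo) / width)] += 1.0 / (len(ys) * width)
    # Riemann sum of the squared density difference.
    return sum((pi - qi) ** 2 * width for pi, qi in zip(p, q))

random.seed(1)
same = [random.gauss(0, 1) for _ in range(5000)]
shifted = [random.gauss(2, 1) for _ in range(5000)]
d_same = l22_divergence_plugin(same, same)     # identical samples -> 0
d_diff = l22_divergence_plugin(same, shifted)  # separated densities -> > 0
print(d_same, d_diff)
```

Such plug-in estimates converge slowly; the point of the paper is that a carefully characterized estimator attains the parametric rate when the densities are smooth enough.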
{}
Automorphisms of Finite Groups Inder Bir Singh Passi, Mahender Singh, and Manoj Kumar Yadav Publisher: Springer Publication Date: 2019 Number of Pages: 217 Series: Springer Monographs in Mathematics Price: 109.99 ISBN: 978-981-13-2894-7 Category: Monograph [Reviewed by Michael Berg , on 10/6/2019 ] The leitmotif for this book is the observation that “the symmetries of a group G are encoded in the automorphism group $Aut(G)$ of $G$,” the focus falling on finite groups $G$.  The authors consider a number of interesting questions surrounding this theme, including what they refer to as the Ledermann-Neumann theorem and the Divisibility Problem.  We find out from the book’s Preface that the result of Ledermann and Neumann, dating back to 1956, constructs a cubic polynomial $f$ with coefficients in the natural numbers with the beautiful property that if $p$ is any prime, $h$ any natural number, and $G$ is a finite group whose order is divisible by $p^{f(h)}$, then $p^{h}$ divides the order of $G$’s automorphism group.  The way to look at this result is to ask the question: given the requirement that the order of $Aut(G)$ should be divisible by $p^{h}$, what can be said about the order of $G$?  How $p$-small can $G$ be so as to force, still, that the automorphism group of $G$ should have order divisible by $p^{h}$?  In other words, we’re looking for lower bounds on number-theoretic functions $f$ such that $p^{f(h)} \mid o(G)$ implies $p^{h} \mid o(Aut\;G)$.  Here there is a result in place to the effect that we need $f(h) \geq h-1$ (established by Hyde in 1970).  
Say the authors: “One might wonder whether this bound can be lowered further if we restrict ourselves to … finite p-groups.” Together with the foregoing considerations consider the aforementioned Divisibility Problem, phrased as follows: given any non-cyclic finite $p$-group, $G$, does the identity function $i:= i|_{N\backslash \{1,2\} }$ satisfy the desired property, namely, that if $p^{i(h)} | o(G)$ then $p^{h}|o(Aut\;G)$?  Well, this is where things do get a little sticky: in 2015 it was shown by González-Sánchez and Jaikin-Zapirain that there actually exist non-cyclic finite $p$-groups of order $> p^{2}$ with the property that this order fails to divide the order of the corresponding automorphism groups: counterexamples to the foregoing claim.   “We [the authors of the book under review] present a detailed exposition of this important development in the theory of automorphism groups.  However, it is still intriguing to know for what other classes of non-cyclic finite p-groups the problem has an affirmative solution and to construct explicit counterexamples.”  In order to further this cause the authors stipulate that “[a] non-cyclic finite $p$-group $G$ of order $> p^{2}$ is said to have the Divisibility Property if [ $o(G)| o(Aut\;G)$].”  They add, not surprisingly, that “determining all finite $p$-groups admitting [the] Divisibility Property continues to be a challenging problem.” The authors present a few other themes of a very similar flavor: the tenor of what they are up to is amply illustrated, however, by the preceding examples. The discussion of all these themes is split into three parts, respectively concerning Wells’ exact sequence, the number-theoretic function $f$ discussed above, and the foregoing Divisibility Property.  
It is proper, indeed, to start with a thorough discussion of the Wells exact sequence, seeing that this result in the cohomology of groups is concerned with the question of extending automorphisms of the kernel or image groups in a short exact sequence to the central term: clearly a very useful tool in the kind of analysis the authors propose. Thus, the Wells exact sequence is attached to any given s.e.s. of finite groups.  Obviously, the question of extending and lifting automorphisms in the setting of short exact sequences is, in itself, of independent interest. With these three parts delineated, the book’s first two chapters introduce $p$-groups and the Wells exact sequence, the next chapter deals with the Ledermann-Neumann theorem, and the final three chapters go at the Divisibility Property.  The audience for this interesting book includes group theorists and graduate students headed in this direction.  The authors propose that prerequisites for reading this work include “an advanced course in the theory of finite groups,” as well as some more exotic stuff needed in particular places in their book: for example, the book’s last chapter requires “basic analysis, topology, and Lie algebras.” Michael Berg is Professor of Mathematics at Loyola Marymount University in Los Angeles, CA.
{}
# using \ForEach to input chapters in a document I have several chapters in the folder /Chapters and I have the following LaTeX code: \newcommand\chapnames{Chap1, Chap2, Chap3, Chap4, Chap5, Chap6} \ForEach {,} {\input{./Chapters/\thislevelitem}} {\chapnames} Unfortunately, this piece of code does not work unless I change it to: \ForEach {,} {\input{./Chapters/\thislevelitem}} {Chap1, Chap2, Chap3, Chap4, Chap5, Chap6} Is there a way to make it work while using \chapnames variable as shown in the \ForEach code at the top? - What package are you using for \ForEach? –  egreg Jan 6 '13 at 19:22 This might be of interest: How to iterate through the name of files in a folder –  Werner Jan 6 '13 at 19:24 I'm using forarray package. –  Ivan Jan 6 '13 at 19:25 @Qrrbrbirlbel Thanks! That worked nicely :) –  Ivan Jan 6 '13 at 19:38 From the forarray manual, subsection 3.1.2 “The command \ForEachX”, page 4: The command \ForEachX processes the list of items in the same way as the command \ForEach. However, it expands its third argument, a token containing the actual list, before processing it. It has the following syntax: \ForEachX{<separator >}{<function>}{<list token>} Therefore, your code sample has to be written as \newcommand\chapnames{Chap1, Chap2, Chap3, Chap4, Chap5, Chap6} \ForEachX {,} {\input{./Chapters/\thislevelitem}} {\chapnames} -
{}
Mechanics Homework - 1053 - Similar Triangles Name_______________ Mr. Haynes Date_______________ Problem - If the distance between points A and D is 1.75 m, how long is the cable?
{}
# Understanding Galois theory, swapping roots I am trying to understand the rudiments of Galois theory but I have a hard time answering the following question: Suppose we have $f(x)=x^5-2$ over $\mathbb{Q}$. What is $Gal(E_f: \mathbb{Q})$? After seeing similar questions here I am more confused: Isn't it true that since $f$ is irreducible, every possible permutation of its roots can be extended linearly to an endomorphism $E_f \rightarrow E_f$ while holding $\mathbb{Q}$ constant? If that is so then shouldn't the Galois group be $S_5$? I understand that this isn't the case, but can't really understand why... • Your reasoning shows that the Galois group of the splitting field of a polynomial of degree $n$ is a subgroup of $S_n$. You don't necessarily get all permutations as elements of the Galois group. – Watson May 29 '18 at 11:31 • The Galois group is not $S_5$ because it has order $20$, not $120$. – lhf May 29 '18 at 11:32 • I think the problem is that I have understood the "swapping roots" theorem wrongly. Indeed I can find an endomorphism $\sigma$ that sends any root I want to any other root I want, but there might be constraints on $\sigma$. For example, I cannot have a $\sigma$ sending $2^{\frac{1}{5}}$ and $2^{\frac{1}{5}}\zeta_5$ to themselves but doing a cyclic permutation to the others. – Nick A. May 29 '18 at 11:33 • This is exactly the problem: Indeed, for any pair of roots you find an automorphism sending the first root to the second, but you do not have much control over what will happen to the other roots. You cannot even say what the second root will be sent to; in particular, there is in general no way to extend a transposition of roots to an automorphism, which would be sufficient to prove that the Galois group is $S_5$. – asdq May 29 '18 at 11:52 First note that the splitting field of $f$ is given by $L = \mathbb{Q}(\sqrt[5]{2}, \zeta_5)$, where $\zeta_5$ is a primitive fifth root of unity. 
Now we know that any automorphism of $L$ is uniquely determined by its action on the adjoined elements $\sqrt[5]{2}$ and $\zeta_{5}$. Additionally, we know that if $\alpha_1$ and $\alpha_2$ are roots of the same irreducible polynomial over $\mathbb{Q}$, then the identity automorphism on $\mathbb{Q}$ can be extended to an automorphism $\sigma$ of the splitting field s.t. $\sigma(\alpha_1) = \alpha_2$. In particular, in our case we can have an automorphism $\sigma$ of $L$ s.t. $\sigma(\sqrt[5]{2}) = \zeta_5^2\sqrt[5]{2}$. Now we can do the same for $\zeta_5$ and, for example, get an automorphism $\tau$ on $L$ s.t. $\tau(\zeta_5) = \zeta_5^3$ and it is the identity on $\mathbb{Q}$. However, note that this doesn't give you every possible permutation of the roots, as it's not possible to permute the first three roots while keeping the other two fixed. So to summarize: there is an element of the Galois group of $f$ (assuming $f$ is irreducible) that sends any given root to any other root of $f$, but it does not necessarily keep the other roots fixed or permute them in an arbitrary manner.
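The size of this Galois group can be made concrete with a short enumeration (a sketch rather than a computer-algebra computation: each automorphism is encoded as a pair $(a,b)$ with $\sigma(\sqrt[5]{2}) = \zeta_5^a\sqrt[5]{2}$ and $\sigma(\zeta_5) = \zeta_5^b$, which realizes the Frobenius group of order $20$):

```python
from itertools import product

# An automorphism sends 2^(1/5) -> z^a * 2^(1/5) (a mod 5) and z -> z^b
# (b mod 5, b != 0), where z is a primitive 5th root of unity.
elements = [(a, b) for a, b in product(range(5), range(1, 5))]

def compose(s, t):
    # (s ∘ t): apply t first, then s.  Applying s to z^{a_t} 2^{1/5}
    # gives z^{b_s a_t + a_s} 2^{1/5}, and z -> z^{b_s b_t}.
    a1, b1 = s
    a2, b2 = t
    return ((a1 + b1 * a2) % 5, (b1 * b2) % 5)

# The set is closed under composition, so it is a group of order 20,
# and it is non-abelian -- in particular it is not S_5 (order 120).
closed = all(compose(s, t) in elements for s, t in product(elements, elements))
abelian = all(compose(s, t) == compose(t, s)
              for s, t in product(elements, elements))
print(len(elements), closed, abelian)  # 20 True False
```

The count $20 = 5 \cdot 4$ matches $[L:\mathbb{Q}] = [\mathbb{Q}(\sqrt[5]{2}):\mathbb{Q}] \cdot [\mathbb{Q}(\zeta_5):\mathbb{Q}]$, confirming lhf's comment that the group has order $20$, not $120$.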
{}
# How to embed films with movie15 and Distiller The following minimal working example embeds a film in a PDF: \documentclass{article} \pdfoutput=0 \usepackage{graphicx} \usepackage[british]{babel} \usepackage[ps2pdf]{hyperref} \usepackage{movie15} \begin{document} Test. \begin{figure}[ht] \includemovie[playerid=AAPL_QuickTime,autoplay,controls,repeat,% text={Anything here}]{10cm}{10cm}% {/Users/christoph/Desktop/BLF/Films/Ultrafast-camera.mov} \end{figure} \end{document} does not work via the dvi → ps → Distiller route. It compiles under latex and produces the ps, but Distiller says: Distilling: minworkex.ps %%[ Error: undefinedfilename; OffendingCommand: file ]%% Stack: (r) (/Users/christoph/Desktop/BLF/Films/Ultrafast-camera.mov) {fstream1} -mark- %%[ Flushing: rest of job (to end-of-file) will be ignored ]%% %%[ Warning: PostScript error. No PDF file produced. ] %% Distill Time: 0 seconds (00:00:00) **** End of Job **** To show that nothing obvious is wrong, I add that the PDF is produced correctly if I change the filename to a relative path and use ps2pdf instead of Distiller. Any help would be greatly appreciated! I would like to produce my free downloadable textbook with Distiller in the next edition. P.S. I know that movie15 is obsolete, but for the time being it seems hard to change the code for the dozens of movies in the tex source. File reading and writing commands in PostScript are considered "unsafe" by current Ghostscript and Distiller versions and produce an error during PS-to-PDF conversion. This default behaviour can be overridden using specific command-line options when running these programs: acrodist /F ... ps2pdf -dNOSAFER ... http://www.ghostscript.com/doc/9.19/Use.htm#Other_parameters • What is the exact command? So far, I use open -a "/Applications/Adobe Acrobat XI Pro/Acrobat Distiller.app" /Users/christoph/Desktop/BLF/minworkex.ps Where does the /F go? – Motion Mountain Jun 10 '16 at 3:54 • On my OSX, the command acrodist does not work. 
There is also no such Option in the Distiller app to be clicked... The manual says to use "Apple Mac OS: AllowPSFileOps user preference" How does one do that? – Motion Mountain Jun 10 '16 at 4:00 • As I found out, on OSX, /F or -F is realized by using the "AllowPSFileOps user preference". It requires to edit the plist file. Thank you! – Motion Mountain Jun 11 '16 at 11:37
{}
# All Questions 348 views ### What is relation between electrons and photon? [closed] What is the relation between electrons and photons? Why do atoms get excited when their electrons come in contact with photons? Why do electrons go from a higher to lower energy level when emitting a ... 187 views ### What's an ideal wire? I'm not talking about an ideal wire in a circuit (a wire with infinite conductance). I'm talking about an ideal wire in the case of the magnetic field of an infinite current carrying wire. What ... 5k views ### Why electrons get excited? [closed] Why and how do electrons get excited and what happens inside an atom when electrons get excited? 127 views ### Why is the independence of orthogonal vector-quantities always implicit in books/lectures? The "theorem" that I can "just" separately deal with orthogonal quantities (like horizontal and vertical force or velocity, etc), I never found explicitly mentioned, but just implicitly in the ... 145 views ### Do batteries lose charge? Say I have a charge of +q and -q on the positive and negative terminals of a battery. If I connect wires to each terminal, but don't connect the wires (essentially creating an open circuit), the ... 31 views ### model for flexible stick I'm trying to model a flexible stick with a partial differential equation. I want one of the ends to be fixed and the other end to swing. Do you guys know of any good models I can use? Any ... 152 views ### Can geodesics in a Lorentzian manifold change their character? From a physics perspective, it's pretty easy to see why a massive particle will be restricted to timelike paths, etc. but does the math guarantee that on its own or do we have to impose it? More ... 245 views ### Heisenberg's uncertainty principle - Planck's (reduced) constant divided by two or not? 
[duplicate] The most common form of Heisenberg's uncertainty principle I've seen online is $$\Delta x \Delta p ~\geq~ \dfrac{\hbar}{2}.$$ However, I also regularly see $$\Delta x \Delta p ~\geq~ \hbar.$$ ... 274 views ### Modal analysis with aerodynamic damping I'm using modal decomposition to predict the steady state response of a beam structure to harmonic loading. The structure itself is very lightly damped, but we know from experiments that the ... 239 views ### Challenge: Answer this gedanken (PFP - Perpendicularly Fired Photon) I'm challenging anyone who can answer the following question objectively: As usual, imagine a railway station and trains which are equipped with single photon sources (one each in the platform and ... 67 views ### Proton as superposition of hadrons: $\vert p\rangle = c_0\vert p_0\rangle+c_1\vert h\rangle+\cdots$ I have a question regarding hadron fluctuations. For instance on page 85 in Feynman's "Photon-Hadron Interactions" equation 15.2 reads: $\vert \omega\rangle = \vert ...$ 65 views ### Human and Ultraviolet light As I know, ultraviolet light can be created from burning something at high temperature. So I have a question: Can a human body become a source of ultraviolet light at high temperature? If the answer ... 88 views ### Capacitor in series? Say you have two charged capacitors in series. Zoom in on one capacitor. For this specific capacitor, the charge on the two plates will be the same in magnitude, according to my textbook. My teacher ... 459 views ### Field inside a wire? This answer gives a great explanation of why the field inside a wire connected to a battery must be equal at all points: Why doesn't the electric field inside a wire in a circuit fall off with ... 818 views ### Quantum Wave Mechanics I am studying QM-I these days. Now, I just think of the wave function as just a mathematical function that defines the state of the particle at an instant and from it you can extract various ... 
187 views ### Poynting theorem and entering power I refer to the time-domain version of the Poynting theorem in electro-magnetism: $- \displaystyle \oint_S (\mathbf{E} \times \mathbf{H}) \cdot d\mathbf{S} - \int_V \mathbf{E} \cdot \mathbf{J}_i \ dV$ ... 78 views ### Characteristic x-ray in energy spectrum Context: Monte Carlo simulation of a linear accelerator photon beam. The energy spectrum for photons as calculated from the phase space files found in here has a peak somewhere near ... 78 views ### What is the place of an electromagnetic field in the electromagnetic spectrum? [closed] What place should I give an electromagnetic field (produced by a current conducting coil) in the electromagnetic spectrum? What will its wavelength and frequency be? 191 views ### String-net models on non-trivalent lattices I have just started reading about string net models. The following aspect wasn't entirely clear to me: String net models are most naturally defined on trivalent networks, that is to say networks ...
{}
Known limitations¶ The following limitations have been noticed when using OpenCOR on our supported platforms. Windows, Linux and macOS¶ • By default, OpenCOR uses the system’s language for menus, message boxes, etc., as long as it is either English or French (please contact us if you would like OpenCOR to support other languages). If the system uses another language, OpenCOR will default to English. Otherwise, if you specify English or French, then please be aware that system messages, dialogs, etc. will still be displayed using the system’s language (assuming it is not one of the languages supported by OpenCOR). • OpenCOR uses the CellML API, which is known to have the following limitations: • It will crash OpenCOR if you try to export a CellML file to a user-defined format that is described in a file that contains valid, but unknown, XML. • It may incorrectly (in)validate certain CellML files. Windows and Linux¶ • A scaled display will, on Windows 7 and Linux, result in some aspects of OpenCOR being rendered at the wrong size (e.g. icons will be smaller and scroll bars bigger). On Windows 10, OpenCOR should scale itself automatically, although it will look more or less blurry depending on your display scaling and screen resolution. In case OpenCOR does not scale itself, turn off Fix scaling for apps: or better, locate your copy of OpenCOR, right click on [OpenCOR]\bin\OpenCOR.exe, click on the Properties menu item, and have the high DPI scaling performed by the system: Windows¶ • The File Browser window plugin may, on some systems, result in OpenCOR being slow to respond at startup. This has nothing to do with OpenCOR, but most likely with a Windows shell add-on. This page may help address the issue, but if not then you might have to disable the File Browser window plugin.
{}
13. Deep Learning on Sequences¶ Deep learning on sequences is part of a broader long-term effort in machine learning on sequences. Sequences is a broad term that includes text, integer sequences, DNA, and other ordered data. Deep learning on sequences often intersects with another field called natural language processing (NLP). NLP is a much broader field than deep learning, but there is quite a bit of overlap with sequence modeling. We’ll focus on the application of deep learning on sequences and NLP to molecules and materials. NLP in chemistry would at first appear to be a rich area, especially with the large amount of historic chemistry data existing only in plain text. However, most work in this area has been on representations of molecules as text via the SMILES [Wei88] (and recently SELFIES [KHN+20]) encoding. There is nothing natural about SMILES and SELFIES though, so we should be careful to discriminate between work on natural language (like identifying names of compounds in a research article) and predicting the solubility of a compound from its SMILES. I hope there will be more NLP in the area, but publishers prevent bulk access/ML on publications. Few corpora (collections of natural language documents) exist for NLP on chemistry articles. Audience & Objectives This chapter builds on Standard Layers and Attention Layers. After completing this chapter, you should be able to • Define natural language processing and sequence modeling • Recognize and be able to encode molecules into SMILES or other string encodings • Understand RNNs and know some layer types • Construct RNNs in seq2seq or seq2vec configurations • Know the transformer architecture • Understand how design can be made easier in a latent space One advantage of working with molecules as text relative to graph neural networks (GNNs) is that existing ML frameworks have many more features for working with text, due to the strong connection between NLP and sequence modeling. 
Another reason is that it is easier to train generative models, because generating valid text is easier than generating valid graphs. You’ll thus see generative/unsupervised learning of chemical space more often done with sequence models, whereas GNNs are typically better for supervised learning tasks and can incorporate spatial features. Outside of deep learning, graphical representations are viewed as more robust than text encodings when used in methods like genetic algorithms and chemical space exploration [BFSV19]. NLP with sequence models can also be used to understand natural language descriptions of materials and molecules, which is essential for materials that are defined by more than just the molecular structure. In sequence modeling, unsupervised learning is very common. We can predict the probability of the next token (word or character) in a sequence, like guessing the next word in a sentence. This does not require labels, because we only need examples of the sequences. It is also called pre-training, because it precedes training on sequences with labels. For chemistry this could be predicting the next atom in a SMILES string. For materials, this might be predicting the next word in a synthesis procedure. These pre-trained models capture a statistical model of the language and can be fine-tuned (trained a second time) for a more specific task, like predicting if a molecule will bind to a protein. Another common task in sequence modeling (including NLP) is to convert sequences into continuous vectors. This doesn’t always involve deep learning. Models that can embed sequences into a vector space are often called seq2vec or x2vec, where x might be molecule or synthesis procedure. Finally, we often see translation tasks where we go from one sequence language to another. A sequence-to-sequence model (seq2seq) is similar to pre-training because it actually predicts probabilities for the output sequence. 13.1. 
Converting Molecules into Text¶ Before we can begin to use neural networks, we need to convert molecules into text. Simplified molecular-input line-entry system (SMILES) is a de facto standard for converting molecules into a string. SMILES enables molecular structures to be correctly saved in spreadsheets, databases, and input to models that work on sequences like text. Here’s an example SMILES string: CC(NC)CC1=CC=C(OCO2)C2=C1. SMILES was crucial to the field of cheminformatics and is widely used today beyond deep learning. Some of the first deep learning work was with SMILES strings because of the ability to apply NLP models to SMILES strings. Let us imagine SMILES as a function whose domain is molecular graphs (or some equivalent complete description of a molecule) and whose image is a string. This can be thought of as an encoder that converts a molecular graph into a string. The SMILES encoder function is not surjective – there are many strings that cannot be reached by encoding graphs. The SMILES encoder function is injective – each graph has a different SMILES string. The inverse of this function, the SMILES decoder, cannot have the domain of all strings because some strings do not decode to valid molecular graphs. This is because of the syntax rules of SMILES. Thus, we can regard the domain to be restricted to valid SMILES strings. In that case, the decoder is surjective – all graphs are reachable via a SMILES string. The decoder is not injective – multiple SMILES strings decode to the same graph. This last point, the non-injectivity of the SMILES decoder, is a problem identified in database storage and retrieval of compounds. Since multiple SMILES strings map to the same molecular graph, it can happen that multiple entries in a database are actually the same molecule. One way around this is canonicalization, which is a modification to the encoder to make a unique SMILES string. It can fail though [OBoyle12]. 
If we restrict ourselves to valid, canonical SMILES, then the SMILES decoder function is injective and surjective – bijective. The difficulty of canonicalization, and thus the perceived weakness of SMILES in creating unique strings, led (in part) to the creation of InChI strings. InChI is an alternative that is inherently canonical. InChI strings are typically longer and involve more tokens, which seems to affect their use in deep learning: InChI as a representation often performs worse than SMILES given the same amount of data. If you’ve read the previous chapters on equivariances (Input Data & Equivariances and Equivariant Neural Networks), a natural question is if SMILES is permutation invariant. That is, if you change the order of atoms in the molecular graph (which has no effect on chemistry), is the SMILES string identical? Yes, if you use the canonical SMILES. So in a supervised setting, using canonical SMILES gives an atom-ordering permutation invariant neural network because the representation will not be permuted after canonicalization. Be careful; you should not trust that SMILES you find in a dataset are canonical. 13.1.1. SELFIES¶ Recent work from Krenn et al. developed an alternative approach to SMILES called SELF-referencIng Embedded Strings (SELFIES) [KHN+20], in which every string is a valid molecule. Note that the characters in SELFIES are not all ASCII characters, so it’s not like every sentence encodes a molecule (would be cool though). SELFIES is an excellent choice for generative models because any SELFIES string automatically decodes to a valid molecule. SELFIES, as of 2021, is not directly canonicalized though and thus is not permutation invariant by itself. However, if you add canonical SMILES as an intermediate step, then SELFIES are canonical.
It seems that models which output a molecule benefit from using SELFIES instead of SMILES because the model does not need to learn how to make valid strings – all strings are already valid SELFIES [RZS20]. This benefit is clearest in generative settings; in supervised learning no difference has been observed empirically [CGR20]. Here’s a blog post giving an overview of SELFIES and its applications. 13.1.2. Demo¶ You can get a sense for SMILES and SELFIES in this demo page that uses an RNN (discussed below) to generate SMILES and SELFIES strings. 13.1.3. Stereochemistry¶ SMILES and SELFIES can treat stereoisomers, but there are a few complications. rdkit, the dominant Python package, cannot treat non-tetrahedral chiral centers with SMILES as of 2022. For example, even though SMILES according to its specification can correctly distinguish cisplatin and transplatin, the implementation of SMILES in rdkit cannot. Other examples of chirality that are present in the SMILES specification but not in implementations are planar and axial chirality. SELFIES relies on SMILES (most often the rdkit implementation) and thus is also susceptible to this problem. This is an issue for organometallic compounds. In organic chemistry though, most chirality is tetrahedral and correctly treated by rdkit. 13.1.4. What is a chemical bond?¶ More broadly, the idea of a chemical bond is a concept created by chemists [Bal11]. You cannot measure the existence of a chemical bond in the lab, and it is not some quantum mechanical operator with an observable. There are certain molecules which cannot be represented by classic single, double, triple, or aromatic bonded representations, like ferrocene or diborane. This bleeds over to text encodings of a molecule, where the bonding topology doesn’t map neatly to bond order. The specific issue this can cause is that multiple unique molecules may appear to have the same encoding (non-injective).
In situations like this, it is probably better to just work with the exact 3D coordinates, and then bond order or type is less important than the distance between atoms. 13.2. Running This Notebook¶ Click the launch badge above to open this page as an interactive Google Colab. See details below on installing packages. 13.3. Recurrent Neural Networks¶ Recurrent neural networks (RNNs) have been by far the most popular approach to working with molecular strings. RNNs have the critical property that they can take different length input sequences, making them appropriate for SMILES or SELFIES, which both have variable length. RNNs have recurrent layers that consume an input sequence element-by-element. Consider an input sequence $$\mathbf{X}$$ which is composed of a series of vectors (recall that characters or words can be represented with one-hot or embedding vectors) $$\mathbf{X} = \left[\vec{x}_0, \vec{x}_1,\ldots,\vec{x}_L\right]$$. The RNN layer function is binary: it takes as input the $$i$$th element of the input sequence and the output from the $$(i-1)$$th application of the layer function. You can write the whole unrolled computation as: (13.1)$$\vec{y} = f(\vec{x}_L, f(\vec{x}_{L-1}, \ldots f(\vec{x}_1, f(\vec{x}_0, \vec{0}))\ldots))$$ Commonly we would like to see these intermediate outputs from the layer function, $$f(\vec{x}_4, \vec{h}_3) = \vec{h}_4$$. These $$\vec{h}$$s are called the hidden state because of the connection between RNNs and Markov state models. We can unroll our picture of an RNN, where the initial hidden state is assumed to be $$\vec{0}$$ (but could be trained) and the output at the end is shown as $$\vec{y}$$. Notice there are no subscripts on $$f$$ because we use the same function and weights at each step. This re-use of weights makes the number of parameters independent of the input length, which is also necessary to make the RNN accommodate arbitrary length input sequences.
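The recursion above can be sketched as a minimal recurrent layer in numpy. The tanh nonlinearity and random weight shapes here are illustrative choices, not a GRU or LSTM; the point is that the same function $$f$$ (same weights) is applied at every step, so any sequence length works with a fixed parameter count.

```python
import numpy as np

# A minimal RNN layer: the same function f (same weights W, U, b)
# is applied at every step, consuming x_i and the previous hidden
# state h_{i-1}. Weight values and shapes are illustrative.
rng = np.random.default_rng(0)
d_in, d_hidden = 4, 8
W = rng.normal(size=(d_in, d_hidden))      # input -> hidden
U = rng.normal(size=(d_hidden, d_hidden))  # hidden -> hidden
b = np.zeros(d_hidden)

def f(x, h):
    # one step of the recurrence: h_i = tanh(x W + h U + b)
    return np.tanh(x @ W + h @ U + b)

def rnn(X):
    # unroll over an arbitrary-length sequence, starting from h = 0
    h = np.zeros(d_hidden)
    for x in X:
        h = f(x, h)
    return h  # final hidden state, usable as the output y

# the same parameters handle any sequence length
short = rng.normal(size=(3, d_in))
longer = rng.normal(size=(10, d_in))
print(rnn(short).shape, rnn(longer).shape)  # (8,) (8,)
```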
It should be noted that the length of $$\vec{y}$$ may be a function of the input length, so the $$\vec{h}_i$$ may be increasing in length at each step to enable an output $$\vec{y}$$. Some diagrams of RNNs will show this by indicating a growing output sequence as an additional output from $$f(\vec{x}_i, \vec{h}_{i-1})$$. Interestingly, the form of $$f(\vec{x}, \vec{h})$$ is quite flexible based on the discussion above. There have been hundreds of ideas for the function $$f$$, and the choice is problem dependent. The two most common are the long short-term memory (LSTM) unit and the gated recurrent unit (GRU). You can spend quite a bit of time trying to reason about these functions, understanding how gradients propagate nicely through them, and there is an analogy about how they are inspired by human memory. Ultimately, they are used because they perform well and are widely implemented, so we do not need to spend much time on these details. The main thing to know is that GRUs are simpler and faster, but LSTMs seem to be better at more difficult sequences. Note that $$\vec{h}$$ is typically 1-3 different quantities in modern implementations. Another detail is the word units. Units are like the hidden state dimension, but because the hidden state could be multiple quantities (e.g., in an LSTM) we do not call it a dimension. The RNN layer allows us to input an arbitrary length sequence and outputs a value which could depend on the length of the input sequence. You can imagine that this could be used for regression or classification: for regression, $$\hat{y}$$ would be a scalar; or you could take the output from an RNN layer into an MLP to get a class. 13.3.1. Generative RNNs¶ An interesting use case for an RNN is in unsupervised generative models, where we try to generate new examples. This means that we’re trying to learn $$P(\mathbf{X})$$ [SKTW18]. With a generative RNN, we predict the sequence one symbol at a time by conditioning on a growing sequence. This is called autoregressive generation.
(13.2)$$P(\mathbf{X}) = P(\vec{x}_L | \vec{x}_{L - 1}, \vec{x}_{L - 2}, \ldots,\vec{x}_0)\cdots P(\vec{x}_1 | \vec{x}_0)\, P(\vec{x}_0)$$ The RNN is trained to take as input a sequence and output the probability for the next character. Our network is trained to be this conditional probability: $$P(\vec{x}_i | \vec{x}_{i - 1}, \vec{x}_{i - 2}, \ldots, \vec{x}_0)$$. What about the $$P(\vec{x}_0)$$ term? Typically we just pick what the first character should be. Or, we could create an artificial “start” character that marks the beginning of a sequence (typically index 0) and always choose that. We can train the RNN to agree with $$P(\vec{x}_i | \vec{x}_{i - 1}, \vec{x}_{i - 2}, \ldots, \vec{x}_0)$$ by taking an arbitrary sequence $$\vec{x}$$, choosing a split point $$\vec{x}_i$$, and training the model to predict $$\vec{x}_i$$ from the preceding sequence elements. This is just multi-class classification. The number of classes is the number of available characters, and our model should output a probability vector across the classes. Recall that the loss for this is cross-entropy. When doing this process with SMILES, an obvious way to judge success would be whether the generated sequences are valid SMILES strings. This at first seems reasonable and was used as a benchmark for years in this topic. However, it is a low bar: we can find valid SMILES in much more efficient ways. You can download 77 million SMILES [CGR20], and you can find vendors that will give you a multi-million entry database of purchasable molecules. You can also just use SELFIES, and then even an untrained RNN will generate only valid strings, since every SELFIES string decodes to a valid molecule. A more interesting metric is to assess if your generated molecules are in the same region of chemical space as the training data [SKTW18]. I believe though that generative RNNs are relatively poor compared with other generative models in 2021. They are still strong though when composed with other architectures, like VAEs or encoder/decoders [RZS20].
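The autoregressive sampling loop itself is simple and can be sketched without a trained network. Here `next_char_probs` is a hypothetical stand-in for the RNN: it returns a probability vector over a made-up vocabulary rather than a learned conditional distribution, but the loop structure (sample, append, condition on the longer sequence, stop at the stop token) is the same.

```python
import numpy as np

# Autoregressive generation sketch: sample one token at a time,
# each time conditioning on the sequence generated so far.
# `next_char_probs` stands in for a trained RNN; the vocabulary
# and probabilities below are made up for illustration.
vocab = ["[stop]", "C", "O", "N", "(", ")", "=", "1"]

def next_char_probs(seq):
    # hypothetical "model": favor carbon, stop more often as seq grows
    p = np.ones(len(vocab))
    p[vocab.index("C")] = 4.0
    p[0] = 0.5 * len(seq)  # stop probability grows with length
    return p / p.sum()

def generate(max_len=20, seed=0):
    rng = np.random.default_rng(seed)
    seq = []  # begin from the artificial "start" state (empty sequence)
    while len(seq) < max_len:
        tok = rng.choice(vocab, p=next_char_probs(seq))
        if tok == "[stop]":
            break
        seq.append(str(tok))
    return "".join(seq)

print(generate())
```

With a real model, `next_char_probs` would be the softmax output of the RNN evaluated on the growing sequence.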
You can see a worked out example in Generative RNN in Browser. As in our Graph Neural Networks chapter, we run into issues with variable length inputs. The easiest and most compute-efficient way to treat this is to pad (and/or trim) all strings to be the same length, making it easy to batch examples. A memory-efficient way is to not batch, and either accumulate gradients as a separate step or trim your sequences into subsequences and save the RNN hidden state between them. Due to the way that NVIDIA has written RNN kernels, padding should always be done on the right (sequences all begin at index 0). The character used for padding is typically 0. Don’t forget, we will always first convert our string characters to integers corresponding to indices of our vocabulary (see Standard Layers). Thus, remember to make sure that index 0 is reserved for padding. Masking is used for two things. The first is to ensure that the padded values are not accidentally considered in training. This is framework dependent and you can read about Keras here, which is what we’ll use. The second use for masking is to do element-by-element training like the generative RNN. We train each time with a shorter mask, enabling the model to see more of the sequence. This prevents you from needing to slice up the training examples into many shorter sequences. This idea of a right-mask that prevents the model from using characters farther in the sequence is sometimes called causal masking, because we’re preventing characters from the “future” affecting the model. 13.5. RNN Solubility Example¶ Let’s revisit our solubility example from before. We’ll use a GRU to encode the SMILES string into a vector and then apply a dense layer to get a scalar value for solubility. Specifically, we’ll revisit the solubility AqSolDB [SKE19] dataset from Regression & Model Assessment. Recall it has about 10,000 unique compounds with measured solubility in water (label) and their SMILES strings.
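Before diving into the Keras implementation, the right-padding and masking scheme described above can be sketched in plain numpy. Keras handles this with `pad_sequences` and `mask_zero=True`; this toy `pad_right` is only to show what those utilities do with index 0 reserved for padding.

```python
import numpy as np

# Right-pad integer-encoded sequences with 0 so they can be batched.
# Index 0 is reserved for padding, so real vocabulary indices start at 1.
def pad_right(seqs, length):
    batch = np.zeros((len(seqs), length), dtype=int)
    for i, s in enumerate(seqs):
        batch[i, : len(s)] = s[:length]  # trim if too long
    return batch

seqs = [[5, 2, 7], [3, 1], [4, 4, 4, 4]]
x = pad_right(seqs, 4)
mask = x != 0  # True where a real token is present
print(x)
print(mask)
```

The mask lets the framework ignore the padded positions when computing the loss, so the padding affects only the batch shape, not training.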
Many of the steps below are explained in the Standard Layers chapter that introduces Keras and the principles of building a deep model. I’ve hidden the cell below which sets up our imports and shown a few rows of the dataset.

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import dmol

# soldata is loaded from the AqSolDB dataset in the hidden cell
features_start_at = list(soldata.columns).index("MolWt")
np.random.seed(0)

ID | Name | InChI | InChIKey | SMILES | Solubility | SD | Ocurrences | Group | MolWt | … | NumRotatableBonds | NumValenceElectrons | NumAromaticRings | NumSaturatedRings | NumAliphaticRings | RingCount | TPSA | LabuteASA | BalabanJ | BertzCT
0 | A-3 | N,N,N-trimethyloctadecan-1-aminium bromide | InChI=1S/C21H46N.BrH/c1-5-6-7-8-9-10-11-12-13-... | SZEMGTQCPRNXEG-UHFFFAOYSA-M | [Br-].CCCCCCCCCCCCCCCCCC[N+](C)(C)C | -3.616127 | 0.0 | 1 | G1 | 392.510 | … | 17.0 | 142.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.00 | 158.520601 | 0.000000e+00 | 210.377334
1 | A-4 | Benzo[cd]indol-2(1H)-one | InChI=1S/C11H7NO/c13-11-8-5-1-3-7-4-2-6-9(12-1... | GPYLCFQEKPUWLD-UHFFFAOYSA-N | O=C1Nc2cccc3cccc1c23 | -3.254767 | 0.0 | 1 | G1 | 169.183 | … | 0.0 | 62.0 | 2.0 | 0.0 | 1.0 | 3.0 | 29.10 | 75.183563 | 2.582996e+00 | 511.229248
2 | A-5 | 4-chlorobenzaldehyde | InChI=1S/C7H5ClO/c8-7-3-1-6(5-9)2-4-7/h1-5H | AVPYQKSLYISFPO-UHFFFAOYSA-N | Clc1ccc(C=O)cc1 | -2.177078 | 0.0 | 1 | G1 | 140.569 | … | 1.0 | 46.0 | 1.0 | 0.0 | 0.0 | 1.0 | 17.07 | 58.261134 | 3.009782e+00 | 202.661065
3 | A-8 | zinc bis[2-hydroxy-3,5-bis(1-phenylethyl)benzo... | InChI=1S/2C23H22O3.Zn/c2*1-15(17-9-5-3-6-10-17... | XTUPUYCJWKHGSW-UHFFFAOYSA-L | [Zn++].CC(c1ccccc1)c2cc(C(C)c3ccccc3)c(O)c(c2)... | -3.924409 | 0.0 | 1 | G1 | 756.226 | … | 10.0 | 264.0 | 6.0 | 0.0 | 0.0 | 6.0 | 120.72 | 323.755434 | 2.322963e-07 | 1964.648666
4 | A-9 | 4-({4-[bis(oxiran-2-ylmethyl)amino]phenyl}meth... | InChI=1S/C25H30N2O4/c1-5-20(26(10-22-14-28-22)... | FAUAZXVRLVIARB-UHFFFAOYSA-N | C1OC1CN(CC2CO2)c3ccc(Cc4ccc(cc4)N(CC5CO5)CC6CO... | -4.662065 | 0.0 | 1 | G1 | 422.525 | … | 12.0 | 164.0 | 2.0 | 4.0 | 4.0 | 6.0 | 56.60 | 183.183268 | 1.084427e+00 | 769.899934

5 rows × 26 columns

We’ll extract our labels and convert SMILES into padded characters.
We make use of a tokenizer, which is essentially a look-up table for how to go from the characters in a SMILES string to integers. To make our model run faster, I will filter out very long SMILES strings.

# filter out long smiles
smask = [len(s) <= 96 for s in soldata.SMILES]
filtered_soldata = soldata[smask]
print(f"Removed {soldata.shape[0] - sum(smask)} long SMILES strings")

# make tokenizer with 128 size vocab and
# have it examine all text in dataset
vocab_size = 128
tokenizer = tf.keras.preprocessing.text.Tokenizer(
    vocab_size, filters="", char_level=True
)
tokenizer.fit_on_texts(filtered_soldata.SMILES)

Removed 285 long SMILES strings

seqs = tokenizer.texts_to_sequences(filtered_soldata.SMILES)
# pad on the right with 0 so sequences can be batched
padded_seqs = tf.keras.preprocessing.sequence.pad_sequences(seqs, padding="post")

# Now build dataset
data = tf.data.Dataset.from_tensor_slices(
    (padded_seqs, filtered_soldata.Solubility.values)
)

# now split into val, test, train and batch
N = len(filtered_soldata)
split = int(0.1 * N)
test_data = data.take(split).batch(16)
nontest = data.skip(split)
val_data, train_data = nontest.take(split).batch(16), nontest.skip(split).shuffle(
    1000
).batch(16)

We’re now ready to build our model. We will just use an embedding, then an RNN, and some dense layers to get to a final predicted solubility.

model = tf.keras.Sequential()

# make embedding and indicate that 0 should be treated as padding mask
model.add(
    tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=16, mask_zero=True)
)
# RNN layer
model.add(tf.keras.layers.GRU(32))
# a dense hidden layer
model.add(tf.keras.layers.Dense(32, activation="relu"))
# regression, so no activation
model.add(tf.keras.layers.Dense(1))

model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, None, 16)          2048
gru (GRU)                    (None, 32)                4800
dense (Dense)                (None, 32)                1056
dense_1 (Dense)              (None, 1)                 33
=================================================================
Total params: 7,937
Trainable params: 7,937
Non-trainable params: 0
_________________________________________________________________

Now we’ll compile our model and train it. This is a regression problem, so we use mean squared error for our loss.
# compile with mean squared error loss (optimizer choice is illustrative)
model.compile(optimizer="adam", loss="mean_squared_error")
result = model.fit(train_data, validation_data=val_data, epochs=25, verbose=0)

plt.plot(result.history["loss"], label="training")
plt.plot(result.history["val_loss"], label="validation")
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.show()

As usual, we could keep training, and I encourage you to explore adding regularization or modifying the architecture. Let’s now see how the test data looks.

# evaluate on test data
yhat = []
test_y = []
for x, y in test_data:
    yhat.extend(model(x).numpy().flatten())
    test_y.extend(y.numpy().flatten())
yhat = np.array(yhat)
test_y = np.array(test_y)

# plot test data
plt.plot(test_y, test_y, ":")
plt.plot(test_y, yhat, ".")
plt.text(
    min(test_y) + 1,
    max(test_y) - 2,
    f"correlation = {np.corrcoef(test_y, yhat)[0,1]:.3f}",
)
plt.text(
    min(test_y) + 1,
    max(test_y) - 3,
    f"loss = {np.sqrt(np.mean((test_y - yhat)**2)):.3f}",
)
plt.title("Testing Data")
plt.show()

Linear regression from Regression & Model Assessment still wins, but this demonstrates the use of an RNN for this task. 13.6. Transformers¶ Transformers are now well established as the state of the art for language modeling tasks. The transformer architecture is just multi-head attention blocks repeated in multiple layers. The paper describing the architecture was quite a breakthrough. At the time, the best models used convolutions, recurrence, attention, and encoder/decoders. The paper title was “Attention is all you need” and that is basically the conclusion [VSP+17]. They found that multi-head attention (including self-attention) was what mattered, and this led to transformers. Transformers are simple and scalable because each layer is nearly the same operation. This has led to simply “scaling up the language model,” resulting in things like GPT-3, which has billions of parameters and cost millions of dollars to train. GPT-3 is also surprisingly good and versatile.
The single model is able to answer questions, describe computer code, translate languages, and infer recipe instructions for cookies. I highly recommend reading the paper; it’s quite interesting [BMR+20]. 13.6.1. Architecture¶ The transformer is fundamentally made up of layers of multi-head attention blocks as discussed in Attention Layers. You can get a detailed overview of the transformer architecture here. The overall architecture is an encoder/decoder, as seen in Variational Autoencoder. Like the variational autoencoder, the decoder portion can be discarded and only the encoder used for supervised tasks. Thus, you might pre-train the encoder/decoder with self-supervised training (strings with withheld characters) on a large dataset without labels and then use only the encoder for a regression task with a smaller labeled dataset. What exactly is going into and out of the encoder/decoder? The transformer is an example of a sequence to sequence (seq2seq) model, and the most obvious interpretation is translating between two languages, like English to French. The encoder takes in English and the decoder produces French. Or maybe SMILES to IUPAC name. However, that requires “labels” (the paired sequence). To do self-supervised pre-training, we need the input to the encoder to be a sequence missing some values and the decoder output to be the same sequence with the withheld values filled in as probabilities at each position. This is called masked self-supervised training. If you pre-train in this way, you can do two tasks with your pre-trained encoder/decoder. You can use the encoder alone as a way to embed a string into real numbers for a downstream task, like predicting a molecule’s enthalpy of formation from its SMILES string. The other way to use a model trained this way is for autoregressive generation. The input might be a few characters or a prompt [RM21] specifically crafted like a question. This is similar to the generative RNN, although it allows more flexibility.
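The data preparation for masked self-supervised training can be sketched without any model at all: withhold some characters from the input and keep the original sequence as the target. The 15% rate and the "?" mask token below are illustrative choices, not any particular paper's recipe.

```python
import random

# Masked self-supervised training data: the encoder input has some
# characters withheld (replaced by a mask token); the target is the
# original sequence. Rate and mask token are illustrative choices.
def mask_sequence(s, rate=0.15, mask_token="?", seed=0):
    rng = random.Random(seed)
    masked = [mask_token if rng.random() < rate else c for c in s]
    return "".join(masked), s  # (model input, training target)

x, y = mask_sequence("CC(NC)CC1=CC=C(OCO2)C2=C1")
print(x)  # input with some characters replaced by "?"
print(y)  # unchanged target
```

The model is then trained to fill in a probability distribution over the vocabulary at each masked position, which requires no labels beyond the sequences themselves.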
There are many details to transformers and “hand-tuned” hyperparameters. Examples in modern transformers are layer normalization (similar to batch normalization), embeddings, dropout, weight decay, learning rate decay, and position information encoding [LOG+19]. Position information is quite an interesting topic – you need to include the location of a token (character) in its embedding. Was it the first character or the last character? This is key because when you compute the attention between tokens, the relative location is probably important. Some recent promising work proposed a kind of phase/amplitude split, where the position is the phase and the embedding is the amplitude, called rotary positional encodings [SLP+21]. If you would like to see how to implement a real transformer with most of these details, take a look at this Keras tutorial. Because transformers are so tightly coupled with pre-training, there has been a great deal of effort in pre-training models. Aside from GPT-3, a general model pre-trained on an enormous corpus of billions of sequences from multiple languages, there are many language-specific pre-trained models. Hugging Face is a company and API that hosts pre-trained transformers for specific language models like Chinese language, XML, SMILES, or question-and-answer format. These can be quickly downloaded and utilized, enabling rapid use of state-of-the-art language models. 13.7. Using the Latent Space for Design¶ One of the most interesting applications of these encoder/decoder seq2seq models in chemistry is their use for optimal design of a molecule. We pre-train an encoder/decoder pair with masking. The encoder brings our molecule to a continuous representation (seq2vec). Then we can do regression in this vector space for whatever property we would like (e.g., solubility).
Then we can optimize this regressed model, finding an input vector that is a minimum or maximum, and finally convert that input vector into a molecule using the decoder. The vector space output by the encoder is called the latent space, like we saw in Variational Autoencoder. Of course, this works for RNN seq2seq models, transformers, or convolutions. 13.8. Representing Materials as Text¶ Materials are an interesting problem for deep learning because they are not defined by a single molecule. There can be information like the symmetry group or components/phases for a composite material. This creates a challenge for modeling, especially for real materials that have complexities like annealing temperature, additives, and age. From a philosophical point of view, a material is defined by how it was constructed. Practically, that means a material is defined by the text describing its synthesis [BDC+18]. This is an idea taken to its extreme in Tshitoyan et al. [TDW+19], who found success in representing thermoelectrics via the text describing their synthesis [SC16]. This work is amazing to me because they had to manually collect papers (publishers do not allow ML/bulk download of articles) and annotate the synthesis methods. Their seq2vec model is relatively old (2 years!) and yet there has not been much progress in this area. I think this is a promising direction, but challenging due to the data access limitations. For example, recent progress by Friedrich et al. [FAT+20] built a pre-trained transformer for solid oxide fuel cell materials, but their corpus was limited to open access articles (45 of them) over a seven-year period. This is one critical line of research that is limited due to copyright issues. Text can be copyrighted, not data, but maybe someday a court can be convinced that they are interchangeable. 13.9. Applications¶ As discussed above, molecular design has been one of the most popular areas for sequence models in chemistry.
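The latent-space design loop from 13.7 – encode, regress, optimize, decode – can be sketched with a toy one-dimensional latent space. Everything here is a stand-in: the quadratic `property_model` replaces a learned regressor, and a real system would have a trained encoder/decoder mapping molecules to and from the latent vector z.

```python
# Toy latent-space design: regress a property in latent space, then
# gradient-ascend to find the latent vector that maximizes it. The
# quadratic "property" and the 1D latent space are stand-ins for a
# trained regressor over a real encoder's latent space.
def property_model(z):
    # made-up regressed property with a maximum at z = 1.5
    return -(z - 1.5) ** 2

def grad(z, eps=1e-5):
    # central-difference gradient of the property model
    return (property_model(z + eps) - property_model(z - eps)) / (2 * eps)

z = 0.0  # start from some molecule's latent vector
for _ in range(200):
    z += 0.05 * grad(z)  # gradient ascent

print(round(z, 2))  # 1.5 -- a decoder would map z back to a molecule
```

The final step, decoding the optimized z back into a molecule, is what makes the continuous latent space useful: optimization happens in a smooth vector space rather than over discrete molecular graphs.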
Transformers have been found to be excellent at predicting chemical reactions. Schwaller et al. [SPZ+20] have shown how to do retrosynthetic pathway analysis with transformers. The transformers take as input just the reactants and reagents and can predict the products. The models can be calibrated to include uncertainty estimates [SLG+19] and predict synthetic yield [SVLR20]. Beyond taking molecules as input, Vaucher et al. trained a seq2seq transformer that can translate the unstructured methods section of a scientific paper into a set of structured synthetic steps [VZG+20]. Finally, Schwaller et al. [SPV+21] trained a transformer to classify reactions into organic reaction classes, leading to a fascinating map of chemical reactions. 13.10. Summary¶ • Text is a natural representation of both molecules and materials • SMILES and SELFIES are ways to convert molecules into strings • Recurrent neural networks (RNNs) are an input-length independent method of converting strings into vectors for regression or classification • RNNs can be trained in a seq2seq (encoder/decoder) setting by having them predict the next character in a sequence. This yields a model that can autoregressively generate new sequences/molecules • Withholding or masking parts of sequences for training is called self-supervised training and is a pre-training step for seq2seq models, enabling them to learn the properties of a language like English or SMILES • Transformers are currently the best seq2seq models • The latent space of seq2seq models can be used for molecular design • Materials can be represented as text, which is a complete representation for many materials 13.11. Cited References¶ Wei88 David Weininger. SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. Journal of chemical information and computer sciences, 28(1):31–36, 1988. SKE19 Murat Cihan Sorkun, Abhishek Khetan, and Süleyman Er.
AqSolDB, a curated reference set of aqueous solubility and 2D descriptors for a diverse set of compounds. Sci. Data, 6(1):143, 2019. doi:10.1038/s41597-019-0151-1. YCW20 Ziyue Yang, Maghesree Chakraborty, and Andrew D White. Predicting chemical shifts with graph neural networks. bioRxiv, 2020. KGrossGunnemann20 Johannes Klicpera, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. In International Conference on Learning Representations. 2020. VSP+17 Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, 5998–6008. 2017. KHN+20 Mario Krenn, Florian Häse, AkshatKumar Nigam, Pascal Friederich, and Alan Aspuru-Guzik. Self-referencing embedded strings (SELFIES): a 100% robust molecular string representation. Machine Learning: Science and Technology, 1(4):045024, Nov 2020. doi:10.1088/2632-2153/aba947. BFSV19 Nathan Brown, Marco Fiscato, Marwin HS Segler, and Alain C Vaucher. GuacaMol: benchmarking models for de novo molecular design. Journal of chemical information and modeling, 59(3):1096–1108, 2019. OBoyle12 Noel M O’Boyle. Towards a universal SMILES representation – a standard method to generate canonical SMILES based on the InChI. Journal of cheminformatics, 4(1):1–14, 2012. RZS20 Kohulan Rajan, Achim Zielesny, and Christoph Steinbeck. DECIMER: towards deep learning for chemical image recognition. Journal of Cheminformatics, 12(1):1–9, 2020. CGR20 Seyone Chithrananda, Gabe Grand, and Bharath Ramsundar. ChemBERTa: large-scale self-supervised pretraining for molecular property prediction. arXiv preprint arXiv:2010.09885, 2020. Bal11 Philip Ball. Beyond the bond. Nature, 469(7328):26–28, 2011. SKTW18 Marwin HS Segler, Thierry Kogej, Christian Tyrchan, and Mark P Waller.
Generating focused molecule libraries for drug discovery with recurrent neural networks. ACS central science, 4(1):120–131, 2018. GomezBWD+18 Rafael Gómez-Bombarelli, Jennifer N Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. ACS central science, 4(2):268–276, 2018. BMR+20 Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, and others. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. TDG+21 Yi Tay, Mostafa Dehghani, Jai Gupta, Dara Bahri, Vamsi Aribandi, Zhen Qin, and Donald Metzler. Are pre-trained convolutions better than pre-trained transformers? arXiv preprint arXiv:2105.03322, 2021. RM21 Laria Reynolds and Kyle McDonell. Prompt programming for large language models: beyond the few-shot paradigm. arXiv preprint arXiv:2102.07350, 2021. WWCF21 Yuyang Wang, Jianren Wang, Zhonglin Cao, and Amir Barati Farimani. MolCLR: molecular contrastive learning of representations via graph neural networks. arXiv preprint arXiv:2102.10056, 2021. LOG+19 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019. SLP+21 Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. RoFormer: enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864, 2021. BDC+18 Keith T Butler, Daniel W Davies, Hugh Cartwright, Olexandr Isayev, and Aron Walsh. Machine learning for molecular and materials science. Nature, 559(7715):547–555, 2018.
TDW+19 Vahe Tshitoyan, John Dagdelen, Leigh Weston, Alexander Dunn, Ziqin Rong, Olga Kononova, Kristin A Persson, Gerbrand Ceder, and Anubhav Jain. Unsupervised word embeddings capture latent knowledge from materials science literature. Nature, 571(7763):95–98, 2019. SC16 Matthew C Swain and Jacqueline M Cole. ChemDataExtractor: a toolkit for automated extraction of chemical information from the scientific literature. Journal of chemical information and modeling, 56(10):1894–1904, 2016. FAT+20 Annemarie Friedrich, Heike Adel, Federico Tomazic, Johannes Hingerl, Renou Benteau, Anika Maruscyk, and Lukas Lange. The SOFC-Exp corpus and neural approaches to information extraction in the materials science domain. arXiv preprint arXiv:2006.03039, 2020. MFGS18 Daniel Merk, Lukas Friedrich, Francesca Grisoni, and Gisbert Schneider. De novo design of bioactive small molecules by artificial intelligence. Molecular informatics, 37(1-2):1700153, 2018. SPZ+20 Philippe Schwaller, Riccardo Petraglia, Valerio Zullo, Vishnu H Nair, Rico Andreas Haeuselmann, Riccardo Pisoni, Costas Bekas, Anna Iuliano, and Teodoro Laino. Predicting retrosynthetic pathways using transformer-based models and a hyper-graph exploration strategy. Chemical Science, 11(12):3316–3325, 2020. SLG+19 Philippe Schwaller, Teodoro Laino, Théophile Gaudin, Peter Bolgar, Christopher A Hunter, Costas Bekas, and Alpha A Lee. Molecular transformer: a model for uncertainty-calibrated chemical reaction prediction. ACS central science, 5(9):1572–1583, 2019. SVLR20 Philippe Schwaller, Alain C Vaucher, Teodoro Laino, and Jean-Louis Reymond. Prediction of chemical reaction yields using deep learning. ChemRxiv Preprint, 2020. URL: https://doi.org/10.26434/chemrxiv.12758474.v2. VZG+20 Alain C Vaucher, Federico Zipoli, Joppe Geluykens, Vishnu H Nair, Philippe Schwaller, and Teodoro Laino. Automated extraction of chemical synthesis actions from experimental procedures. Nature communications, 11(1):1–11, 2020.
SPV+21 Philippe Schwaller, Daniel Probst, Alain C Vaucher, Vishnu H Nair, David Kreutter, Teodoro Laino, and Jean-Louis Reymond. Mapping the space of chemical reactions using attention-based neural networks. Nature Machine Intelligence, pages 1–9, 2021.
This study uses the data provided by the Leiden Ranking 2020 to support the claim that percentile-based indicators are linked by a power law function. A constant calculated from this function, ep, and the total number of papers fully characterize the percentile distribution of publications. According to this distribution, the probability that a publication from a country or institution is in the global xth percentile can be calculated from a simple equation: P = ep^(2−lg x). By taking the Leiden Ranking PPtop 10%/100 as an approximation of the ep constant, our results demonstrate that the other PPtop x% indicators can be calculated by applying this equation. Consequently, given one PPtop x% indicator, all the others are redundant. Even accepting that the total number of papers and a single PPtop x% indicator are sufficient to fully characterize the percentile distribution of papers, the results of comparisons between universities and research institutions differ depending on the percentile selected for the comparison. We discuss which Ptop x% and PPtop x% indicators are the most convenient for these comparisons to obtain reliable information that can be used in research policy. The rapid progress in the availability of data on research output and faster methods for their analysis are “leading to a quantitative understanding of the genesis of scientific discovery, creativity, and practice and developing tools and policies aimed at accelerating scientific progress” (Fortunato, Bergstrom et al., 2018, p. 1). Among all the analyses that can be done on research output, one of the most important is the efficiency analysis of the research carried out by institutions and countries; this importance is continually increasing in parallel with the increasing importance that research plays in modern economies.
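The claimed relation can be checked numerically: taking PPtop 10% as the ep constant, the formula recovers ep itself at x = 10 and gives P = 1 at x = 100 (every paper is in the top 100%). A sketch with a made-up ep value, purely for illustration:

```python
import math

# Percentile power law: probability that a publication is in the
# global top x%, given the constant ep (approximated by PPtop 10%).
def p_top(x, ep):
    return ep ** (2 - math.log10(x))

ep = 0.12  # made-up PPtop 10% value for illustration
print(p_top(10, ep))   # 0.12 -- recovers the top-10% indicator itself
print(p_top(100, ep))  # 1.0 -- every paper is in the top 100%
```

This is why, given the total number of papers and one PPtop x% indicator, the other percentile indicators carry no additional information under the power-law assumption.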
Worldwide R&D (research and development) expenditures amounted to \$1,918 billion in 2015 (National Science Board, 2018), and society needs to know the relevance of what research institutions produce with these expenditures and their efficiency in producing it. Describing this need, 28 years ago Garfield and Welljams-Dorof (1992) began a paper with the following statement: “Government policy-makers, corporate research managers, and university administrators need valid and reliable S&T indicators for a variety of purposes: for example, to measure the effectiveness of research expenditures, identify areas of strength and excellence, set priorities for strategic planning, monitor performance relative to peers and competitors, and target emerging specialties and new technologies for accelerated development.” Since then, and despite this obvious need, a method to measure the effectiveness of research expenditures has not been indisputably established.

### 1.1. Citation-Based Indicators of Research Performance

Indicators of research performance have been sought for a long time (e.g., Godin, 2003); since Francis Narin (1976) used the term evaluative bibliometrics, many indicators have been proposed, and those based on citation counts are the most reliable (De Bellis, 2009; Moed, 2005). However, the use of citation counts for scientific assessments has triggered a long-standing debate (Aksnes, Langfeldt, & Wouters, 2019). In the context of this debate, it should be strongly emphasized that citation counts correlate with the scientific relevance or impact of a scientific publication, but they do not always measure the relevance of a specific scientific publication. There are several reasons why many papers receive more or fewer citations than they deserve (MacRoberts & MacRoberts, 1989) or, more commonly, that they receive them belatedly (Garfield, 1980).
Even worse, recognition of novelty in science might be delayed and the reporting papers are ignored in short-term citation counting (Wang, Veugelers, & Stephan, 2017). In contrast, when many papers are aggregated, the numbers of papers with excessive and scant numbers of citations are canceled out. In other words: “to a certain extent, the biased are averaged out at aggregated levels” (Aksnes et al., 2019, p. 5). This canceling out cannot be assured with a low number of papers, and this precludes the use of bibliometrics for the evaluation of small numbers of papers, as in the case of individual researchers. It is worth noting that this does not prevent many papers from being correctly evaluated by bibliometric indices; the impediment for their use is that not all papers are correctly evaluated. Unfortunately, this issue is frequently ignored and bibliometric tools are used in the evaluation of researchers (e.g., Kaptay, 2020; Siudem, Zogala-Siudem et al., 2020). In contrast, at the aggregation level of institutions, citation indicators have been validated against peer review (Rodríguez-Navarro & Brito, 2020a; Traag & Waltman, 2019). As mentioned above, many indicators have been proposed for the research evaluation of institutions and countries, but those based on citation percentiles that refer to worldwide production (Bornmann, 2010; Bornmann, Leydesdorff, & Wang, 2013; McAllister, Narin, & Corrigan, 1983) have demonstrated superiority and replaced others based on averages (Opthof & Leydesdorff, 2010). Top percentile indicators have been used by the National Science Board of the USA since 2010 (National Science Board, 2010) and by the Leiden Ranking since 2011 (Waltman, Calero-Medina et al., 2012). Several studies have addressed the need for research performance indicators to be validated against peer review or other external criteria (Harnad, 2009). Many validation studies have been performed, many of them against peer review.
In an extensive study testing many indicators, including percentile indicators (HEFCE, 2015), it has been concluded that “results at output-by-author level (Supplementary Report II) [has] shown that individual metrics give significantly different outcomes from the REF peer review process, and therefore cannot provide a like-for-like replacement for REF peer review” (Wilsdon, Allen et al., 2015, p. 138). However, two further studies using the same data have proved that at the university level, which implies a higher aggregation level, top percentile indicators show good correlations with peer review (Rodríguez-Navarro & Brito, 2020a; Traag & Waltman, 2019). In summary, there is strong evidence supporting the claim that citation-based percentile indicators are excellent tools for the analysis of research outputs. The challenge is to convert these bibliometric indicators into metrics that can be used by “government policy-makers, corporate research managers, and university administrators” (Garfield & Welljams-Dorof, 1992) to calculate the efficiency of research institutions.

### 1.2. Dichotomous and United Indicators

In a specific discipline and for certain years, a top percentile indicator records the number of papers that an institution has among the set of global papers in that percentile, when they are ranked from the most cited downwards. This evaluation implies the classification of papers published by a research institution in two groups, depending on whether or not they belong to a certain set of global papers. In terms of citations, the two groups are defined depending on whether they are above or below a certain citation threshold—the issue of citation ties has been discussed previously (Schreiber, 2013; Waltman & Schreiber, 2013).
This dichotomous classification of papers (Albarrán, Herrero et al., 2017; Bornmann, 2013) leads to the important notion that “dichotomous procedures rely on the idea that only the upper part of the distribution matters” (Albarrán et al., 2017, p. 628). Consequently, in formal terms, dichotomous indicators do not consider papers that are excluded by the criterion. For example, the use of the top 1% or 10% most highly cited papers as a frame of reference (Tijssen, Visser, & van Leeuwen, 2002) implies that the 99% or 90% other papers are not counted. Thus, it seems that the numbers of such papers or of the citations that they received do not matter. To integrate all papers in the indicators, after counting the papers in percentile ranks, different weights can be assigned to each rank (higher for the ranks with higher citations), and the weighted numbers of papers are added to obtain a united indicator (Bornmann & Mutz, 2011). Leydesdorff and Bornmann (2011) called this type of percentile indicator integrated impact indicators because they take into account the size and shape of the distribution, which is very skewed. This approach has been extensively investigated and different percentile ranks and weights have been proposed (Bornmann, 2013; Bornmann, Leydesdorff, & Mutz, 2013; Bornmann, Tekles, & Leydesdorff, 2019; Leydesdorff & Bornmann, 2012; Leydesdorff, Bornmann, & Adams, 2019; Leydesdorff, Bornmann et al., 2011). It is worth noting that weighted counts of publications in ranks do not require that the ranks be based on percentiles (Vinkler, 2011). The notion of dichotomy, according to which a single top percentile indicator does not take into account the excluded papers, and that a united indicator is needed for research evaluation, would be correct if the numbers of papers in percentiles were unpredictably distributed.
But if the numbers of papers in all percentiles obey a function, the number of papers in a single top percentile could be sufficient to determine the numbers in all the other percentiles. This implies that no paper is ignored if only one percentile is used for evaluation, because the number of papers in any percentile is dependent on the function that describes the citation-based distribution of all papers. This type of function occurs frequently in natural sciences. For example, physics textbooks tell us that the pressure (equivalent to percentile) and volume (equivalent to number of papers in the percentile) of gases follow a strict law that depends on the amount of gas (equivalent to the total number of papers) and the temperature (equivalent to the efficiency of the research institution). A law of this type also exists in bibliometrics. Citations are universally distributed (Radicchi, Fortunato, & Castellano, 2008) and the numbers of papers in top percentiles obey a power law. This power law is a consequence of another basic relationship in citation analysis: the double rank function. “By ranking publications by their number of citations from highest to lowest, publications from institutions or countries have two ranking numbers: one for the internal and other for world positions; the internal ranking number can be expressed as a function of the world ranking number”; this function is a power law (Rodríguez-Navarro & Brito, 2018a, p. 31). Therefore, by knowing the total number of papers and the number of papers in a single top percentile, the number of papers in any other percentile can be easily calculated. The percentile law can be expressed in the following way:

The probability of publishing a paper in top percentile x = ep^(2−lg x)   (1)

where ep is a mathematical derivative (10^α) of the exponent (α) of the power law that the numbers of papers versus top percentiles obey (Brito & Rodríguez-Navarro, 2018; Rodríguez-Navarro & Brito, 2019).
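Eq. 1 can be sketched in a few lines of Python; the function name and the ep values below are illustrative, not taken from the source data:

```python
import math

def p_top(x, ep):
    """Probability that a publication falls in the global top x percentile,
    following Eq. 1: p = ep**(2 - lg x)."""
    return ep ** (2 - math.log10(x))

# A world-average institution (ep = 0.1) has, by construction, probability
# 0.1 of placing a paper in the top 10% (exponent 2 - lg 10 = 1).
print(p_top(10, 0.1))   # -> 0.1
# At the top 100th percentile the exponent is 0, so the probability is 1
# for any ep: all percentile curves share this common point.
print(p_top(100, 0.3))  # -> 1.0
```

Note how the common point at the 100th percentile falls out of the exponent 2 − lg x becoming zero, which is why the curves of different institutions diverge only below that point.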
For an institution with the same percentile distribution as the global production, ep is equal to 0.1 and, in practice, the highest values of ep are around 0.3.

### 1.3. Discussion About Size-Independent Indicators

The present study is largely based on Eq. 1. This equation calculates a probability, which is size independent. The usefulness of size-independent bibliometric indicators and of the application of terms such as productivity, performance, and efficiency in research evaluation has been debated (Abramo & D’Angelo, 2016a, 2016b; Glänzel, Thijs, & Debackere, 2016; Ruiz-Castillo, 2016; Waltman, van Eck et al., 2016). That discussion is outside the scope of this study. However, we think that the ideal for a research institution is size independent: that for a given total number of papers, the number of highly cited papers should be as high as possible. This conclusion emphasizes the importance of size-independent indicators for research evaluation purposes, especially the convenience of the ep constant, because it allows calculation of the probability of publishing a paper at any highly cited level. Regarding the choice between size-dependent and size-independent indicators, the differences are small: if we know the cumulative probability function given by Eq. 1, the cumulative frequency of papers in any top percentile is equal to the probability multiplied by the total number of papers. Thus, the most relevant size-dependent indicator of a research system is the total number of publications, because the number of papers in top percentiles is a function of the total number of papers and the ep constant. Given the exponential nature of Eq. 1 and the range of numerical values between which ep varies, to produce a significant number of highly cited papers, institutions with a low ep constant must publish many more papers than others that have high ep constants.
In other words, the ep constant, mathematically equivalent to PPtop 10%/100, measures the efficiency of a research system.

### 1.4. Aims of This Study

The above standpoint indicates that in the research assessment of the publications of institutions or countries, only two parameters—the total number of papers and the ep constant—are needed to characterize research performance at all citation levels. The former describes the size and the latter describes the efficiency; if both are known, the number of papers in any other top percentile can be calculated. As already described, this notion has theoretical and empirical support (Brito & Rodríguez-Navarro, 2020; Rodríguez-Navarro & Brito, 2018a, 2019), but it has not been tested against a large number of institutions. Therefore, the first aim of this study was to test it at the university level, making use of the detailed information provided by the Leiden Ranking. The second aim was to investigate which top percentile should be used to compare the research output of different institutions. It is worth noting that when comparing two institutions by their ratio of publications at different top percentiles, if their ep constants are different, the ratio will vary depending on which percentile is used for the comparison (e.g., top 10% or top 1%). Even the question of which of the two institutions is ahead and which is lagging might have opposite responses depending on the percentile used for the comparison and their total numbers of publications (see Figure 4 in Rodríguez-Navarro & Brito, 2019). For the aims of this study, we took advantage of the detailed data provided by the Leiden Ranking 2020 (https://www.leidenranking.com/; Excel file downloaded on August 21, 2020; these data have been deposited in Zenodo, DOI: 10.5281/zenodo.4603232), using in all cases fractional counting.
The Leiden Ranking includes five research fields: “Biomedical and health sciences,” “Life and earth sciences,” “Mathematical and computer sciences,” “Physical sciences and engineering,” and “Social sciences and humanities.” Previous studies in different research fields (Brito & Rodríguez-Navarro, 2020; Rodríguez-Navarro & Brito, 2018a, 2018b, 2020a, 2020b) demonstrate that the calculation of the ep constant is statistically robust in three of the Leiden Ranking fields: “Biomedical and health sciences,” “Life and earth sciences,” and “Physical sciences and engineering.” There is no information for “Mathematical and computer sciences,” and in “Social sciences and humanities” the ep constant has only been studied in economics and business (Rodríguez-Navarro & Brito, 2020a). Therefore, for the purpose of this study, any of the three aforementioned Leiden Ranking fields could be studied. The field of “Biomedical and health sciences” was not the first choice because “health sciences” might be weak in some universities. Of the other two fields, we selected “Physical sciences and engineering” over “Life and earth sciences.” Although the difference is not large, the number of universities with at least four top 1% most cited papers in the Leiden Ranking evaluation periods (4 years) was higher in “Physical sciences and engineering” than in “Life and earth sciences”; this is a comparative advantage, as shown below. Henceforth, we will keep the notation of the Leiden Ranking: P is the total number of papers and Ptop x% is the number of papers in the top x percentile; PPtop x% is the Ptop x%/P ratio multiplied by 100. The Leiden Ranking reports publications for four percentiles (50, 10, 5, and 1) and these are the data that we compared with the calculated data. For the calculation of the number of publications in these percentiles we used Eq. 1, taking the value of the ep constant as PPtop 10%/100.
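This calculation step can be sketched as follows; the PPtop 10% value of 20 is a made-up example, not a Leiden Ranking figure:

```python
import math

def pp_top_calc(x, pp_top10):
    """Calculated PPtop x% obtained from the reported PPtop 10%,
    taking ep = PPtop 10% / 100 and applying Eq. 1 (result in percent)."""
    ep = pp_top10 / 100.0
    return 100.0 * ep ** (2 - math.log10(x))

# Hypothetical university with PPtop 10% = 20, i.e., ep = 0.2
print(round(pp_top_calc(50, 20), 1))  # calculated PPtop 50% -> 61.6
print(round(pp_top_calc(5, 20), 2))   # calculated PPtop 5%  -> ~12.3
print(round(pp_top_calc(1, 20), 1))   # calculated PPtop 1%  -> 4.0
```

The same function also covers the more stringent percentiles mentioned in the text (e.g., x = 0.02), since only the exponent 2 − lg x changes.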
Because of the statistical variability of PPtop 10%, the best method for the calculation of the ep constant is to count the number of papers in 5–10 top percentiles and fit them to a power law (Rodríguez-Navarro & Brito, 2019). However, for the purposes of this study, using PPtop 10%/100 as a substitute for the ep constant is sufficiently accurate. The same calculation approach was used when we recorded more stringent percentiles, for example 0.02. Pearson and Spearman correlations were studied using the free statistics software calculators of Wessa (2017a, 2017b). Two-sided p-values are always recorded.

### 3.1. PPtop x% Indicators Are Qualitatively Redundant

The numbers of papers in the top percentiles of global publications follow a power law, before and after dividing by the total number of papers (Rodríguez-Navarro & Brito, 2019). By definition, in all universities their PPtop x% plots have a common point when the top percentile is 100, and according to Eq. 1, from this point the PPtop x% plots diverge if the universities do not have identical ep constants. Therefore, if Eq. 1 is correct the order of universities in the Leiden Ranking based on PPtop x% should be the same at any of the recorded percentiles: 1, 5, 10, and 50. In practice there will be some deviations, because the number of papers produced by universities is low and the calculation of top percentile data is affected by statistical variability. In fact, the data provided in the Leiden Ranking include the lower and upper bounds of the stability interval for each university’s PPtop x% indicator, and overlap between these bounds is frequent among universities. To avoid this problem, if we select a few universities that publish a high number of papers and that are distant in the ranking, their relative positions will be maintained at all percentiles recorded in the Leiden Ranking.
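The qualitative redundancy follows directly from Eq. 1: every PPtop x% is an increasing function of ep, so the ranking it induces is the same at every percentile. A minimal sketch with made-up ep values:

```python
import math

def pp_top(x, ep):
    """PPtop x% predicted by Eq. 1 (in percent)."""
    return 100 * ep ** (2 - math.log10(x))

# Five hypothetical universities, identified only by their ep constants
eps = [0.08, 0.12, 0.15, 0.22, 0.30]

# Rank the universities by PPtop 1% and by PPtop 50%
by_pp1 = sorted(range(len(eps)), key=lambda i: pp_top(1, eps[i]))
by_pp50 = sorted(range(len(eps)), key=lambda i: pp_top(50, eps[i]))

# For x < 100 the exponent 2 - lg x is positive, so a larger ep always
# gives a larger PPtop x%: the two rankings coincide (Spearman rho = 1).
print(by_pp1 == by_pp50)  # -> True
```

The deviations from perfect rank correlation seen in real data (Table 1) then come only from sampling noise in the counts, not from the model itself.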
Figure 1 shows that this in fact happens, but this is a small sample, which is not sufficient to demonstrate that Eq. 1 is of general application.

Figure 1. Double logarithmic plot of the four PPtop x% indicators reported in the Leiden Ranking, PPtop 50%, PPtop 10%, PPtop 5%, and PPtop 1%, for three universities that are distant in the ranking. Field of “Physical sciences and engineering,” time period 2009–2012.

Next, we selected all the universities listed in the Leiden Ranking with more than 2,000 papers in the field of “Physical sciences and engineering.” This limitation in the number of papers is intended to keep the variability of the PPtop x% data as low as possible. Then we calculated the Spearman rank correlation coefficients between the PPtop x% data of different percentiles. Table 1 shows the correlation matrix between percentiles in the first (2006–2009) and last (2015–2018) periods recorded in the Leiden Ranking (similar results are found for other periods). The correlation coefficients are high (> 0.9 with a single exception) and the p-values are very low, from 10^−33 to 10^−127. As might be expected, rank correlations are lower when the top 1% and top 50% results are compared, but are still remarkable. Additionally, Figure 2 shows the least and most dispersed scatter plots of ranks of the correlations studied (Table 1). Table 1.
Spearman rank correlation matrix between the four PPtop x% indicators reported in the Leiden Ranking for universities with more than 2,000 publications

| 2006–2009 | PPtop 1% | PPtop 5% | PPtop 10% |
| --- | --- | --- | --- |
| PPtop 5% | 0.98 | | |
| PPtop 10% | 0.97 | 0.99 | |
| PPtop 50% | 0.94 | 0.97 | 0.98 |

| 2015–2018 | PPtop 1% | PPtop 5% | PPtop 10% |
| --- | --- | --- | --- |
| PPtop 5% | 0.96 | | |
| PPtop 10% | 0.95 | 0.99 | |
| PPtop 50% | 0.89 | 0.94 | 0.96 |

Field of “Physical sciences and engineering.” Time periods 2006–2009, 71 universities, and 2015–2018, 151 universities. All 2-sided p-values are below 1 × 10^−32.

Figure 2. Examples of scatter plots of ranks of the correlations reported in Table 1, the least and most disperse plots. Left panel, PPtop 5% versus PPtop 10% in period 2015–2018, 151 universities; right panel, PPtop 1% versus PPtop 50% in period 2006–2009, 71 universities.

These results demonstrate that PPtop x% indicators are redundant, all showing the same ranking information, although their values were obviously very different.

### 3.2. PPtop x% Indicators Can Be Easily Calculated from PPtop 10%

Before addressing the issue of whether empirical PPtop x% indicators follow Eq. 1, for guiding purposes, we addressed a basic descriptive question about the distribution of universities according to these indicators. Figure 3 shows the distributions of universities based on the four indicators PPtop 50%, PPtop 10%, PPtop 5%, and PPtop 1%, for the time period 2009–2012 (in other time periods the distributions are similar). The PPtop 50% distribution resembles a normal distribution and meets normality criteria.
The other three distributions show increasing kurtosis, with a long right tail that resembles lognormal distributions. However, although the distributions are heavy tailed, they do not meet the criteria for this type of distribution.

Figure 3. Histograms of the values PPtop 50%, PPtop 10%, PPtop 5%, and PPtop 1% reported in the Leiden Ranking for the field of “Physical sciences and engineering” in the time period of 2009–2012; 1,177 universities.

Next, we tested the agreement between the PPtop x% indicators reported in the Leiden Ranking and their calculated values from Eq. 1, taking PPtop 10%/100 as the value of ep. In a first attempt we used the data of the 1,177 universities in the field of “Physical sciences and engineering” for the time period 2009–2012. Visually, the scatter plots in Figure 4 show a strong linear relationship between the two values for PPtop 50% and PPtop 5%. A linear relationship was also observed for PPtop 1%, but in this case the data had too much noise. This high variability was due to the large number of universities with a very low number of papers in Ptop 1%: The value was zero in 203 universities and 1 in 186; in fact, low values of Ptop 1% are associated with large stability intervals of PPtop 1% in the Leiden Ranking. The Pearson correlation coefficients for the calculated versus the empirical values of PPtop 50%, PPtop 5%, and PPtop 1% were high: 0.89, 0.96, and 0.78, respectively. The p-values were very small; the largest was 3.9 × 10^−242 for PPtop 1%. Figure 4.
Scatter plots of the two values of PPtop 50%, PPtop 5%, and PPtop 1%, one calculated from PPtop 10% and the other reported in the Leiden Ranking; research field of “Physical sciences and engineering” and time period 2009–2012; 1,177 universities. The lines are meant only to guide the eye.

Although these correlations were clear, the relationship between the Leiden Ranking and calculated values of Ptop 1% was uncertain because the variability could conceal possible deviations of small groups of universities. To overcome this problem the obvious possibility was to exclude from the analysis the universities with Ptop 1% values below a certain threshold. This approach, however, had to be carried out avoiding the introduction of biases, which were less likely if the threshold was low. By using the threshold of Ptop 1% ≥ 5, the total set of 1,177 universities was divided into two sets, above and below the threshold, of 474 and 703 universities. The corresponding scatter plots of the Leiden Ranking versus the calculated data of PPtop 50% and PPtop 5% (Figure 5) show high similarity in the two sets and with the scatter plot of the total set of universities (Figure 4). These results suggested that the set of 474 universities was reasonably representative of the total number of universities for the comparison of the Leiden Ranking and calculated values, at least at the PPtop 50% and PPtop 5% levels.

Figure 5. Scatter plot of the two values of PPtop 50% and PPtop 5% shown in Figure 4 divided into two sets: Ptop 1% ≥ 5 (a; 474 universities) and Ptop 1% < 5 (b; 703 universities). The lines are meant only to guide the eye.
For PPtop 1%, the set of 703 universities (Figure 6A) shows high variability and the accuracy of fitting a regression line was very low. In the other set of 474 universities (Figure 6B) the variability was lower and the scatter plot reveals that some universities with high values of PPtop 1% deviate from the general trend of the other universities. Consequently, a second-order polynomial that passes through the origin fits the data better than a straight line; a higher order polynomial or eliminating the constraint of passing through the origin did not significantly improve the fitting. This finding suggested that a small set of universities could deviate from the relationship followed by the other universities; such universities would likely have very large values of either Ptop 1% or PPtop 10%. The scatter plot in Figure 6C shows that the exclusion of 34 universities with Ptop 1% > 40 does not significantly affect the deviation from a straight regression line observed in Figure 6B. In contrast, the exclusion of 25 universities with PPtop 10% ≥ 0.20 eliminates the deviation from a straight regression line. Figure 6D shows that in this case the fittings of straight and polynomial lines overlap.

Figure 6. Scatter plot of the two values of PPtop 1% shown in Figure 4 divided into two sets: Ptop 1% ≥ 5 (A; 474 universities) and Ptop 1% < 5 (B; 703 universities). In panels C and D, the set of 474 was subdivided excluding the universities in which Ptop 1% > 40 (C) and PPtop 10% ≥ 0.20 (D). Green lines: straight linear regression. Brown lines: fitting to a second-order polynomial. In D, the green and brown lines overlap.
### 3.3. Research Efficiency and Contribution to the Progress of Knowledge

Although the total number of papers and their number in a single percentile are sufficient to define the efficiency of research institutions, in the case of quantitative comparisons it is necessary to select the percentile at which the comparison between institutions must be made. This is so because differences between institutions increase with the stringency of the percentile (Figure 1). However, we must distinguish two different cases, depending on whether we are interested in efficiency, which is size independent, or in the contribution to the progress of knowledge, which is size dependent. In the first case, if it is necessary to select a PPtop x% indicator, the selection might be simple. Considering the data reported in Figure 1 and the exponential form of Eq. 1, it is obvious that the differences increase following a known pattern, which indicates that the ratios between universities’ PPtop x% also increase or decrease following a known pattern. In these conditions, the convenient percentile cannot be established in general terms and will depend on the target that is pursued (Section 4.2). If we are interested in the contribution to the progress of knowledge, the relationships between institutions become more complex because, as previously mentioned, the pertinent indicator is the size-dependent Ptop x%. If the institutions publish similar numbers of papers the case is not different from that described above for efficiency.
For example, for Stanford University, Sorbonne University, and Kyushu University in Table 2, the differences increase when the percentile decreases, but the order of the universities does not change. In contrast, if the numbers of papers are different, even the order of the institutions could change when the stringency of the indicator increases. Table 2 shows this fact again with three universities: Shanghai Jiao Tong University, Sorbonne University, and Yale University. This is the order (from higher to lower) when using P, Ptop 50%, and Ptop 10%, but for Ptop 1%, Yale University is now first, and the other two universities keep the same order as in the other percentiles. Interestingly, at this percentile the three universities are very similar. Finally, using Ptop 0.01%, the order changes again: Yale University remains first, but now Sorbonne University is ahead of Shanghai Jiao Tong University. At this percentile, the contribution of Yale University to the progress of knowledge is almost 10 and eight times higher than those of Shanghai Jiao Tong and Sorbonne Universities, respectively. With this complex behavior, the question of which university contributes the most to scientific progress is puzzling, unless we agree about the percentile that should be used to measure scientific progress. Table 2.
Variation of Ptop x% indicators in selected universities

| 2006–2009 | P | Ptop 50% | Ptop 10% | Ptop 1% | Ptop 0.1% | Ptop 0.01% |
| --- | --- | --- | --- | --- | --- | --- |
| Stanford University | 2,825 | 2,068 | 741 | 109 | 50.97 | 13.37 |
| Sorbonne University | 2,641 | 1,518 | 321 | 31 | 4.72 | 0.57 |
| Kyushu University | 2,669 | 1,144 | 188 | 13 | 0.93 | 0.07 |

| 2009–2012 | P | Ptop 50% | Ptop 10% | Ptop 1% | Ptop 0.1% | Ptop 0.01% |
| --- | --- | --- | --- | --- | --- | --- |
| Shanghai Jiao Tong University | 4,832 | 2,379 | 437 | 37 | 3.57 | 0.32 |
| Sorbonne University | 2,559 | 1,483 | 314 | 29 | 4.73 | 0.58 |
| Yale University | 1,268 | 916 | 298 | 42 | 16.46 | 3.87 |

The values of P, Ptop 50%, Ptop 10%, and Ptop 1% were taken from the Leiden Ranking; Ptop 0.1% and Ptop 0.01% were calculated from PPtop 10% as described in the text. Field of “Physical sciences and engineering.”

### 4.1. All PPtop x% Indicators Can Be Calculated from Only One

The purpose of our study was to demonstrate that Eq. 1 is correct by using the data reported by the Leiden Ranking for a large number of universities. This implies that a single PPtop x% indicator is sufficient to calculate all PPtop x% indicators and therefore to reveal the efficiency of a research institution. The size-independent PPtop x% indicators are 100 times the probabilities described by Eq. 1, and PPtop 10% is equal to the ep constant multiplied by 100 (Rodríguez-Navarro & Brito, 2019). This constant is normally calculated by statistical fitting from several percentile counts, but it can also be calculated with a lower precision from the value of a single PPtop x%.
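The extrapolated Ptop 0.1% and Ptop 0.01% values in Table 2 can be reproduced from P and Ptop 10% alone; a sketch using the Yale University row (2009–2012):

```python
import math

def p_top_x(P, ep, x):
    """Size-dependent count Ptop x% = P * ep**(2 - lg x)."""
    return P * ep ** (2 - math.log10(x))

# Yale University, 2009-2012: P = 1,268 and Ptop 10% = 298 (Table 2),
# so ep = PPtop 10% / 100 = Ptop 10% / P.
P = 1268
ep = 298 / P

print(round(p_top_x(P, ep, 0.1), 2))   # Ptop 0.1%  -> 16.46, as in Table 2
print(round(p_top_x(P, ep, 0.01), 2))  # Ptop 0.01% -> 3.87, as in Table 2
```

For x = 0.1 and x = 0.01 the exponent 2 − lg x is 3 and 4, respectively, so these extrapolations amplify any error in the single-percentile estimate of ep; the text's recommendation to fit ep over several percentiles addresses exactly this.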
The high Spearman rank correlation coefficients found between the four Leiden Ranking PPtop x% indicators—PPtop 50%, PPtop 10%, PPtop 5%, and PPtop 1%—for universities with more than 2,000 papers (Table 1) imply that the four indicators convey the same information, as predicted by Eq. 1. The same conclusion is reached when studying the correlation between the PPtop 50%, PPtop 5%, and PPtop 1% data recorded in the Leiden Ranking and the data calculated applying Eq. 1—substituting PPtop 10%/100 for the ep constant. A clear correlation is shown by the three scatter plots for PPtop 50%, PPtop 5%, and PPtop 1% in 1,177 universities (time period 2009–2012; Figure 4). However, the scatter plot for PPtop 1% is very noisy because in many universities Ptop 1% is very low and shows a large variability, which hinders the study of deviations that seem to occur. Eliminating the universities with fewer than five papers in Ptop 1%, there remain 474 universities. Comparison of the scatter plots of the two sets, 474 and 703 universities, and the complete set of universities (Figures 5 and 6) strongly suggests that the set with 474 universities is a representative sample of the total number of universities and may be used to study possible deviations of PPtop 1%. In Figure 6, the PPtop 1% scatter plot shows higher variability than that observed for the PPtop 50% and PPtop 5% plots (Figure 5), and the best universities deviate from the trend followed by the rest of the universities. Several factors contribute to these facts. In the first place, the exponent of Eq. 1 for PPtop 1% is higher than for PPtop 50% and PPtop 5%, which increases the error of substituting PPtop 10% for the ep constant—ep should be calculated by fitting the data of several percentiles. Furthermore, the number of Ptop 1% papers is low in many universities, which implies a higher variability in the counting of the papers in this percentile than in the counts of the other two percentiles.
These general observations are not sufficient to explain the deviations that are observed in Figure 6 for the most efficient universities (panels B, C, and D); we found that by excluding the 25 universities with PPtop 10% ≥ 0.20 from the set of 474 universities, the deviation from a straight regression line disappears. This result indicates that Eq. 1 suffers slight deviations in highly competitive universities, which would not be surprising, because deviations of empirical data from a general law are common in many scientific fields. In the example of physics given in Section 1.2, the mentioned function applies to ideal gases but suffers deviations in real gases. However, for PPtop 1% the deviation is of minor importance for evaluation purposes because the number of these outstanding institutions is an insignificant portion of the total number of institutions: 25 out of 1,177. In summary, percentile indicators are dichotomous indicators only in appearance, because all of them can be calculated from the total number of papers and a mathematical constant that reveals the research efficiency of institutions and countries. The existence of slight deviations from Eq. 1 in some specific cases does not impede the use of this equation in general evaluations. ### 4.2. Which Top Percentile Should Be Used for Quantitative Evaluations? Our data demonstrate that if the purpose is to rank research institutions by the PPtop x% indicator, any percentile can be used. Conversely, for quantitative evaluations, such as comparison with research investments (de Marco, 2019), a certain percentile must be selected, because quantitative relationships between institutions change depending on the percentile (Figure 1). For example, let us imagine two research institutions, A and B, in which investments are similar, but the numbers of papers in the evaluation period are 1,000 and 500, and the PPtop 10% indicators are 14% and 20%, respectively. 
It is evident that if we are comparing the cost of a publication, institution A shows the better performance. The same occurs at the top 10% level (Ptop 10% = P · ep, with ep = PPtop 10%/100): 140 versus 100 papers. It does not occur at the top 1% level, where both institutions show the same Ptop 1%, equal to 20 (Ptop 1% = P · ep²). At a landmark level (percentile 0.02; Bornmann, Ye, & Ye, 2018) the advantage is for institution B: 0.69 papers for A versus 1.3 for B (Ptop 0.02% = P · ep^3.7). Therefore, although A produces twice as many papers as B, the cost of a landmark-level paper in A is almost twice that in B. These bibliometric calculations show the importance of answering the question posed in the title of this section. From a scientific point of view, and if we are considering a size-dependent indicator, the top 0.01 percentile, close to the landmark level, might be a reasonable answer. For the contribution to the progress of knowledge, the same percentile should apply to scientifically advanced countries and to countries that are developing a research system: because the target of scientific research is globally established, the research indicator should also be globally established. The same reasoning does not apply to size-independent indicators, because higher is not always better and high-level excellence is not always the right target. To our knowledge, many research policy makers do not address the evaluative puzzle arising from the example given above, and they choose a certain percentile without much thought.
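The A-versus-B arithmetic above can be reproduced directly from the stated formulas. A minimal sketch (illustrative, not the authors' code), using Ptop x% = P · ep^(log10(100/x)); note that log10(100/0.02) ≈ 3.7, the exponent quoted in the text:

```python
import math

def p_top(P: float, pp_top10: float, x: float) -> float:
    """Papers in the global top x%, with ep = PPtop 10% / 100 (Eq. 1)."""
    ep = pp_top10 / 100
    return P * ep ** math.log10(100 / x)

# Institutions A and B from the example: similar investment,
# P = 1,000 vs 500 papers, PPtop 10% = 14% vs 20%.
for name, P, pp10 in [("A", 1000, 14), ("B", 500, 20)]:
    print(name,
          round(p_top(P, pp10, 10)),       # top 10%:  A -> 140, B -> 100
          round(p_top(P, pp10, 1)),        # top 1%:   A -> 20 (19.6), B -> 20
          round(p_top(P, pp10, 0.02), 2))  # landmark: A -> 0.69, B -> 1.3
```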
Similarly, in many countries, especially those with a generally low level of research performance (e.g., Spain), policy makers are preoccupied with the idea of having "excellent" research institutions, and they make important investments in a very few institutions with the purpose of making them "excellent." Aside from the fact that in many of these countries research "excellence" is mismeasured by journal impact factors (Brito & Rodríguez-Navarro, 2019), the results of these efforts are anything but excellent, because the contribution of an excellent institution to the national research system will most likely be of low relevance. This would be the case if, for example, the PPtop 10% of such an institution is 15% and the average in the rest of the country's institutions is 9.0%, but the "excellent" institution publishes less than one hundredth of the total number of publications. In this case a simple calculation demonstrates that more than 90% of the top 0.01% publications are published in the underfunded institutions. Therefore, in countries with weak research systems, investing to raise the average PPtop 10% of the country, for example to 12%, would be more profitable than investing in the much desired "excellent" institutions. Another example illustrates why PPtop x% targets have to be adapted to circumstances. In Europe, in the field of technology, no universities reach PPtop 10% values (Leiden Ranking 2020, field of "Physical sciences and engineering") as high as those of some U.S. universities, such as Harvard University, Stanford University, and the Massachusetts Institute of Technology (MIT). However, at the country level, several European countries have a similar or even higher Ptop 0.01% per million inhabitants than the United States (Rodríguez-Navarro & Brito, 2018b). In these countries it might be a mistake to pursue universities with PPtop 10% values as high as those of the aforementioned U.S. universities.
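The "more than 90%" claim earlier in this section can be checked with a back-of-the-envelope calculation. The numbers below are assumptions matching the example (one "excellent" institution with PPtop 10% = 15% publishing 1% of the country's papers; the rest at 9%), together with the exponent log10(100/0.01) = 4 for the top 0.01%:

```python
# Back-of-the-envelope check of the ">90%" claim, with assumed shares:
# one "excellent" institution (PPtop 10% = 15%) publishing 1% of the
# country's papers; the remaining institutions (PPtop 10% = 9%) the other 99%.
def p_top_001(P: float, pp_top10: float) -> float:
    ep = pp_top10 / 100
    return P * ep ** 4   # exponent log10(100/0.01) = 4 for the top 0.01%

total = 100_000
excellent = p_top_001(0.01 * total, 15)
rest = p_top_001(0.99 * total, 9)
share_rest = rest / (excellent + rest)
print(round(share_rest, 3))  # ~0.93: well over 90% come from the "underfunded" rest
```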
A country's high PPtop 10% can be obtained from many types of distributions of its institutions' PPtop 10% values, and it seems that each country should pursue the highest possible Ptop 0.01% per million inhabitants rather than other targets. Making use of the data provided by the Leiden Ranking for many universities, we found further empirical evidence supporting the notion that the size-independent PPtop x% indicators are not dichotomous indicators: any PPtop x% indicator is sufficient to define the research efficiency of a research institution, and all PPtop x% indicators can be easily calculated from only one. Therefore, the information given by the Leiden Ranking and the National Science Board of the National Science Foundation, which report several PPtop x% indicators for the same institution or country, is obviously informative but actually redundant. The same holds for the Ptop x% indicators, which are size dependent and measure the contribution of research institutions and countries to the advancement of science: provided that the total number of papers is known, all Ptop x% indicators can be easily calculated from only one. Both the Ptop x% and PPtop x% indicators vary depending on the top percentile selected, which raises the question of which percentile assessments should be based on. Our results suggest that for the assessment of the contribution to scientific progress, the top 0.01 percentile appears to be the most convenient. In the case of research efficiency, any single percentile allows comparing countries and research institutions, but for statistical reasons the top 10 percentile might be the best. The distributions of universities according to PPtop x% indicators (x ≤ 10) are heavy tailed, which implies that the highest probabilities of making important discoveries accumulate in a very low proportion of all universities.
Research policy makers should study the PPtop x% indicators of their research institutions before launching research policies aimed at the scientific progress of the country.

## Acknowledgments

We thank two anonymous reviewers for their helpful suggestions on improving the original manuscript.

## Author Contributions

Alonso Rodríguez-Navarro: Conceptualization, Data curation, Formal analysis, Investigation, Supervision, Visualization, Writing—original draft, Writing—review & editing. Ricardo Brito: Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Visualization, Writing—review & editing.

## Competing Interests

The authors declare that there are no competing interests.

## Funding Information

This work was supported by the Spanish Ministerio de Economía y Competitividad, Grant Number FIS2017-83709-R.

## Data Availability

The raw data were downloaded from the Leiden Ranking; these data are available at Zenodo (DOI 10.5281/zenodo.4603232).

## References

- Abramo, G., & D'Angelo, C. A. (2016a). A farewell to the MNCS and like size-independent indicators. Journal of Informetrics, 10, 646–651.
- Abramo, G., & D'Angelo, C. A. (2016b). A farewell to the MNCS and like size-independent indicators: Rejoinder. Journal of Informetrics, 10, 679–683.
- Aksnes, D. W., Langfeldt, L., & Wouters, P. (2019). Citations, citation indicators, and research quality: An overview of basic concepts and theories. SAGE Open, January.
- Albarrán, P., Herrero, C., Ruiz-Castillo, J., & Villar, A. (2017). The Herrero-Villar approach to citation impact. Journal of Informetrics, 11, 625–640.
- Bornmann, L. (2010). Towards an ideal method of measuring research performance: Some comments to the Opthof and Leydesdorff (2010) paper. Journal of Informetrics, 4, 441–443.
- Bornmann, L. (2013). How to analyze percentile citation impact data meaningfully in bibliometrics: The statistical analysis of distributions, percentile rank classes, and top-cited papers. Journal of the American Society for Information Science and Technology, 64, 587–595.
- Bornmann, L., Leydesdorff, L., & Mutz, R. (2013). The use of percentile rank classes in the analysis of bibliometric data: Opportunities and limits. Journal of Informetrics, 7, 158–165.
- Bornmann, L., Leydesdorff, L., & Wang, J. (2013). Which percentile-based approach should be preferred for calculating normalized citation impact values? An empirical comparison of five approaches including a newly developed citation-rank approach (P100). Journal of Informetrics, 7, 933–944.
- Bornmann, L., & Mutz, R. (2011). Further steps towards an ideal method of measuring citation performance: The avoidance of citation (ratio) averages in field-normalization. Journal of Informetrics, 5, 228–230.
- Bornmann, L., Tekles, A., & Leydesdorff, L. (2019). How well does I3 perform for impact measurement compared to other bibliometric indicators? The convergent validity of several (field-normalized) indicators. Scientometrics, 119, 1187–1205.
- Bornmann, L., Ye, A., & Ye, F. (2018). Identifying landmark publications in the long run using field-normalized citation data. Journal of Documentation, 74, 278–288.
- Brito, R., & Rodríguez-Navarro, A. (2018). Research assessment by percentile-based double rank analysis. Journal of Informetrics, 12, 315–329.
- Brito, R., & Rodríguez-Navarro, A. (2019). Evaluating research and researchers by the journal impact factor: Is it better than coin flipping? Journal of Informetrics, 13, 314–324.
- Brito, R., & Rodríguez-Navarro, A. (2020). The USA dominates world research in basic medicine and biotechnology. Journal of Scientometric Research, 9, 154–162.
- De Bellis, N. (2009). Bibliometrics and Citation Analysis – From the Science Citation Index to Cybermetrics. Lanham, MD: The Scarecrow Press.
- De Marco, A. (2019). Metrics and evaluation of scientific productivity: Would it be useful to normalize the data taking in consideration the investments? Microbial Cell Factories, 18, 181.
- Fortunato, S., Bergstrom, C. T., Börner, K., Evans, J. A., Helbing, D., … Barabási, A.-L. (2018). Science of science. Science, 359, eaao0185.
- Garfield, E. (1980). Premature discovery or delayed recognition – Why? Current Contents, 21, May 26, 5–10.
- Garfield, E., & Welljams-Dorof, A. (1992). Citation data: Their use as quantitative indicators for science and technology evaluation and policy-making. Science and Public Policy, 19, 321–327.
- Glänzel, W., Thijs, B., & Debackere, K. (2016). Productivity, performance, efficiency, impact – What do we measure anyway? Some comments on the paper "A farewell to the MNCS and like size-independent indicators" by Abramo and D'Angelo. Journal of Informetrics, 10, 658–660.
- Godin, B. (2003). The emergence of S&T indicators: Why did governments supplement statistics with indicators? Research Policy, 32, 679–691.
- , S. (2009). Open access scientometrics and the UK research assessment exercise. Scientometrics, 79, 147–156.
- HEFCE. (2015). The Metric Tide: Correlation analysis of REF2014 scores and metrics (Supplementary Report II to the independent Review of the Role of Metrics in Research Assessment and Management).
- Kaptay, G. (2020). The k-index is introduced to replace the h-index to evaluate better the scientific excellence of individuals. Heliyon, 6(7), e04415.
- Leydesdorff, L., & Bornmann, L. (2011). Integrated impact indicators compared with impact factors: An alternative research design with policy implications. Journal of the American Society for Information Science and Technology, 62, 2133–2146.
- Leydesdorff, L., & Bornmann, L. (2012). Percentile ranks and the integrated impact indicator (I3). Journal of the American Society for Information Science and Technology, 63, 1901–1902.
- Leydesdorff, L., Bornmann, L., & , J. (2019). The integrated impact indicator revised (I3): A non-parametric alternative to the journal impact factor. Scientometrics, 119, 1669–1694.
- Leydesdorff, L., Bornmann, L., Mutz, R., & Opthof, T. (2011). Turning the tables on citation analysis one more time: Principles for comparing sets of documents. Journal of the American Society for Information Science and Technology, 62, 1370–1381.
- MacRoberts, M. H., & MacRoberts, B. R. (1989). Problems of citation analysis: A critical review. Journal of the American Society for Information Science and Technology, 40, 342–349.
- McAllister, P. R., Narin, F., & Corrigan, J. G. (1983). Programmatic evaluation and comparison based on standardized citation scores. IEEE Transactions on Engineering Management, EM-30(4), 205–211.
- Moed, H. F. (2005). Citation analysis in research evaluation. Berlin: Springer Verlag.
- Narin, F. (1976). Evaluative bibliometrics: The use of publication and citation analysis in the evaluation of scientific activity. Computer Horizons, Inc.
- National Science Board. (2010). Science and engineering indicators. National Science Foundation.
- National Science Board. (2018). Science and engineering indicators 2018. National Science Foundation.
- Opthof, T., & Leydesdorff, L. (2010). Caveats for the journal and field normalizations in the CWTS ("Leiden") evaluations of research performance. Journal of Informetrics, 4, 423–430.
- , F., Fortunato, S., & Castellano, C. (2008). Universality of citation distributions: Toward an objective measure of scientific impact. Proceedings of the National Academy of Sciences of the USA, 105, 17268–17272.
- Rodríguez-Navarro, A., & Brito, R. (2018a). Double rank analysis for research assessment. Journal of Informetrics, 12, 31–41.
- Rodríguez-Navarro, A., & Brito, R. (2018b). Technological research in the EU is less efficient than the world average. EU research policy risks Europeans' future. Journal of Informetrics, 12, 718–731.
- Rodríguez-Navarro, A., & Brito, R. (2019). Probability and expected frequency of breakthroughs – basis and use of a robust method of research assessment. Scientometrics, 119, 213–235.
- Rodríguez-Navarro, A., & Brito, R. (2020a). Like-for-like bibliometric substitutes for peer review: Advantages and limits of indicators calculated from the ep index. Research Evaluation, 29, 215–230.
- Rodríguez-Navarro, A., & Brito, R. (2020b). Might Europe one day again be a global scientific powerhouse? Analysis of ERC publications suggests it will not be possible without changes in research policy. Quantitative Science Studies, 1, 872–893.
- Ruiz-Castillo, J. (2016). Research output indicators are not productivity indicators. Journal of Informetrics, 10, 661–663.
- Schreiber, M. (2013). How much do different ways of calculating percentiles influence the derived performance indicators? Scientometrics, 97, 821–829.
- Siudem, G., Zogala-Siudem, B., Cena, A., & Gagolewski, M. (2020). Three dimensions of scientific impact. Proceedings of the National Academy of Sciences USA, 117, 13896–13900.
- Tijssen, R. J. W., Visser, M. S., & van Leeuwen, T. N. (2002). Benchmarking international scientific excellence: Are highly cited research papers an appropriate frame of reference? Scientometrics, 54, 381–397.
- Traag, V. A., & Waltman, L. (2019). Systematic analysis of agreement between metrics and peer review in the UK REF. Palgrave Communications, 5, 29.
- Vinkler, P. (2011). Application of the distribution of citations among publications in scientometric evaluation. Journal of the American Society for Information Science and Technology, 62, 1963–1928.
- Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E. C. M., Tijssen, R. J. W., … Wouters, P. (2012). The Leiden ranking 2011/2012: Data collection, indicators, and interpretation. Journal of the American Society for Information Science and Technology, 63, 2419–2432.
- Waltman, L., & Schreiber, M. (2013). On the calculation of percentile-based bibliometric indicators. Journal of the American Society for Information Science and Technology, 64, 372–379.
- Waltman, L., van Eck, N. J., Visser, M., & Wouters, P. (2016). The elephant in the room: The problems of quantifying productivity in evaluative scientometrics. Journal of Informetrics, 10, 671–674.
- Wang, J., Veugelers, R., & Stephan, P. (2017). Bias against novelty in science: A cautionary tale for users of bibliometric indicators. Research Policy, 46, 1416–1436.
- Wessa, P. (2017a). Pearson Correlation (v1.0.13) in Free Statistics Software (v1.2.1). Office for Research Development and Education. https://www.wessa.net/rwasp_correlation.wasp/
- Wessa, P. (2017b). Spearman Rank Correlation (v1.0.3) in Free Statistics Software (v1.2.1). Office for Research Development and Education. https://www.wessa.net/rwasp_spearman.wasp/
- Wilsdon, J., Allen, L., Belfiore, E., Campbell, P., Curry, S., … Johnson, B. (2015). The metric tide: Report of the independent review of the role of metrics in research assessment and management.

## Author notes

Handling Editor: Ludo Waltman

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
Outlook: Myomo Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating.
Dominant Strategy: Wait until speculative trend diminishes
Time series to forecast n: 17 Jan 2023 for (n+6 month)
Methodology: Modular Neural Network (Emotional Trigger/Responses Analysis)

## Abstract

Myomo Inc. Common Stock prediction model is evaluated with Modular Neural Network (Emotional Trigger/Responses Analysis) and Factor [1,2,3,4], and it is concluded that the MYO stock is predictable in the short/long term. According to price forecasts for the (n+6 month) period, the dominant strategy among neural networks is: Wait until speculative trend diminishes.

## Key Points

1. Can stock prices be predicted?
2. Dominated Move
3. Short/Long Term Stocks

## MYO Target Price Prediction Modeling Methodology

We consider the Myomo Inc. Common Stock decision process with a Modular Neural Network (Emotional Trigger/Responses Analysis), where A is the set of discrete actions of MYO stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation [1,2,3,4].

F(Factor) [5,6,7] = $\begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{an} \\ & \vdots & & \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ & \vdots & & \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ & \vdots & & \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix}$ × R(Modular Neural Network (Emotional Trigger/Responses Analysis)) × S(n) → (n+6 month) $\sum_{i=1}^{n} s_i$

n: Time series to forecast
p: Price signals of MYO stock
j: Nash equilibria (Neural Network)
k: Dominated move
a: Best response for target price

For further technical information on how our model works, we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work?

## MYO Stock Forecast (Buy or Sell) for (n+6 month)

Sample Set: Neural Network
Stock/Index: MYO Myomo Inc.
Common Stock
Time series to forecast n: 17 Jan 2023 for (n+6 month)

According to price forecasts for the (n+6 month) period, the dominant strategy among neural networks is: Wait until speculative trend diminishes.

X axis: *Likelihood% (the higher the percentage value, the more likely the event will occur)
Y axis: *Potential Impact% (the higher the percentage value, the more likely the price will deviate)
Z axis (Grey to Black): *Technical Analysis%

## IFRS Reconciliation Adjustments for Myomo Inc. Common Stock

1. However, an entity is not required to separately recognise interest revenue or impairment gains or losses for a financial asset measured at fair value through profit or loss. Consequently, when an entity reclassifies a financial asset out of the fair value through profit or loss measurement category, the effective interest rate is determined on the basis of the fair value of the asset at the reclassification date. In addition, for the purposes of applying Section 5.5 to the financial asset from the reclassification date, the date of the reclassification is treated as the date of initial recognition.

2. In accordance with the hedge effectiveness requirements, the hedge ratio of the hedging relationship must be the same as that resulting from the quantity of the hedged item that the entity actually hedges and the quantity of the hedging instrument that the entity actually uses to hedge that quantity of hedged item. Hence, if an entity hedges less than 100 per cent of the exposure on an item, such as 85 per cent, it shall designate the hedging relationship using a hedge ratio that is the same as that resulting from 85 per cent of the exposure and the quantity of the hedging instrument that the entity actually uses to hedge those 85 per cent.
Similarly, if, for example, an entity hedges an exposure using a nominal amount of 40 units of a financial instrument, it shall designate the hedging relationship using a hedge ratio that is the same as that resulting from that quantity of 40 units (i.e., the entity must not use a hedge ratio based on a higher quantity of units that it might hold in total or a lower quantity of units) and the quantity of the hedged item that it actually hedges with those 40 units.

3. In some circumstances, the renegotiation or modification of the contractual cash flows of a financial asset can lead to the derecognition of the existing financial asset in accordance with this Standard. When the modification of a financial asset results in the derecognition of the existing financial asset and the subsequent recognition of the modified financial asset, the modified asset is considered a 'new' financial asset for the purposes of this Standard.

4. Adjusting the hedge ratio allows an entity to respond to changes in the relationship between the hedging instrument and the hedged item that arise from their underlyings or risk variables. For example, a hedging relationship in which the hedging instrument and the hedged item have different but related underlyings changes in response to a change in the relationship between those two underlyings (for example, different but related reference indices, rates or prices). Hence, rebalancing allows the continuation of a hedging relationship in situations in which the relationship between the hedging instrument and the hedged item changes.

*The International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, the neural network makes adjustments to the financial statements to bring them into compliance with the IFRS.

## Conclusions

Myomo Inc.
Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Myomo Inc. Common Stock prediction model is evaluated with Modular Neural Network (Emotional Trigger/Responses Analysis) and Factor [1,2,3,4], and it is concluded that the MYO stock is predictable in the short/long term. According to price forecasts for the (n+6 month) period, the dominant strategy among neural networks is: Wait until speculative trend diminishes.

### MYO Myomo Inc. Common Stock Financial Analysis*

| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | Ba1 | Ba1 |
| Income Statement | B1 | Baa2 |
| Balance Sheet | C | Ba2 |
| Leverage Ratios | B3 | B3 |
| Cash Flow | Baa2 | B3 |
| Rates of Return and Profitability | Baa2 | Ba1 |

*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company?

### Prediction Confidence Score

Trust metric by Neural Network: 89 out of 100 with 671 signals.

## References

1. B. Derfer, N. Goodyear, K. Hung, C. Matthews, G. Paoni, K. Rollins, R. Rose, M. Seaman, and J. Wiles. Online marketing platform, August 17 2007. US Patent App. 11/893,765.
2. Chernozhukov V, Chetverikov D, Demirer M, Duflo E, Hansen C, et al. 2018a. Double/debiased machine learning for treatment and structural parameters. Econom. J. 21:C1–68.
3. J. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control, 37(3):332–341, 1992.
4. Van der Vaart AW. 2000. Asymptotic Statistics. Cambridge, UK: Cambridge Univ. Press.
5. B. Derfer, N. Goodyear, K. Hung, C. Matthews, G. Paoni, K. Rollins, R. Rose, M. Seaman, and J. Wiles. Online marketing platform, August 17 2007. US Patent App. 11/893,765.
6. L. Prashanth and M. Ghavamzadeh.
Actor-critic algorithms for risk-sensitive MDPs. In Proceedings of Advances in Neural Information Processing Systems 26, pages 252–260, 2013.
7. Mnih A, Hinton GE. 2007. Three new graphical models for statistical language modelling. In International Conference on Machine Learning, pp. 641–48. La Jolla, CA: Int. Mach. Learn. Soc.

## Frequently Asked Questions

Q: What is the prediction methodology for MYO stock?
A: MYO stock prediction methodology: We evaluate the prediction models Modular Neural Network (Emotional Trigger/Responses Analysis) and Factor.

Q: Is MYO stock a buy or sell?
A: The dominant strategy among neural networks is to Wait until speculative trend diminishes for MYO stock.

Q: Is Myomo Inc. Common Stock stock a good investment?
A: The consensus rating for Myomo Inc. Common Stock is Wait until speculative trend diminishes, and it is assigned a short-term Ba1 & long-term Ba1 estimated rating.

Q: What is the consensus rating of MYO stock?
A: The consensus rating for MYO is Wait until speculative trend diminishes.

Q: What is the prediction period for MYO stock?
A: The prediction period for MYO is (n+6 month).
# zbMATH — the first resource for mathematics

Switching LPV control designs using multiple parameter-dependent Lyapunov functions. (English) Zbl 1133.93370

Summary: We study the switching control of linear parameter-varying (LPV) systems using multiple parameter-dependent Lyapunov functions to improve performance and enhance control design flexibility. A family of LPV controllers is designed, each suitable for a specific parameter subregion.
They are switched so that the closed-loop system remains stable and its performance is optimized. Two switching logics, hysteresis switching and switching with average dwell time, are examined. The control synthesis conditions for both switching logics are formulated as matrix optimization problems, which are generally non-convex but can be convexified under some simplifying assumptions. The hysteresis switching LPV control scheme is then applied to an active magnetic bearing problem.

##### MSC:

- 93D30 Scalar and vector Lyapunov functions
- 15A39 Linear inequalities of matrices

Software: LMI toolbox

##### References:

[1] Apkarian, P.; Gahinet, P.: A convex characterization of gain-scheduled H∞ controllers. IEEE Transactions on Automatic Control 40, 853–864 (1995) · Zbl 0826.93028
[2] Becker, G. (1996). Additional results on parameter-dependent controllers for LPV systems. Proceedings of the 13th IFAC World Congress (pp. 351–356).
[3] Becker, G.; Packard, A.: Robust performance of linear parametrically varying systems using parametrically-dependent linear feedback. Systems and Control Letters 23, 205–215 (1994) · Zbl 0815.93034
[4] Boyd, S. P.; Yang, Q.: Structured and simultaneous Lyapunov functions for system stability problems. International Journal of Control 50, 2215–2240 (1989) · Zbl 0683.93057
[5] Branicky, M.: Multiple Lyapunov functions and other analysis tools for switched and hybrid systems. IEEE Transactions on Automatic Control 43(4), 475–482 (1998) · Zbl 0904.93036
[6] DeCarlo, R. A.; Branicky, M. S.; Pettersson, S.; Lennartson, B.: Perspectives and results on the stability and stabilizability of hybrid systems. Proceedings of the IEEE 88(7), 1069–1082 (2000)
[7] Gahinet, P.; Nemirovskii, A.; Laub, A. J.; Chilali, M.: LMI Control Toolbox. (1995)
[8] Hespanha, J. P.; Liberzon, D.; Morse, A. S.: Hysteresis-based switching algorithm for supervisory control of uncertain systems. Automatica 39, 263–272 (2003) · Zbl 1011.93500
[9] Hespanha, J. P., & Morse, A. S. (1999). Stability of switched systems with average dwell-time. Proceedings of the 38th IEEE Conference on Decision and Control (pp. 2655–2660).
[10] Johansson, M.; Rantzer, A.: Computation of piecewise quadratic Lyapunov functions for hybrid systems. IEEE Transactions on Automatic Control 43(4), 555–559 (1998) · Zbl 0905.93039
[11] Liberzon, D.: Switching in Systems and Control. (2003) · Zbl 1036.93001
[12] Lim, S. (1999). Analysis and control of linear parameter-varying systems. Ph.D. dissertation, Stanford University.
[13] Lim, S., & Chan, K. (2003). Stability analysis of hybrid linear parameter-varying systems. Proceedings of the 2003 American Control Conference (pp. 4822–4827).
[14] Malmborg, J., Bernhardsson, B., & Astrom, K. J. (1996). A stabilizing switching scheme for multi-controller systems. Proceedings of the IFAC World Congress.
[15] Mohamed, A. M.; Busch-Vishniac, I.: Imbalance compensation and automation balancing in magnetic bearing systems using the Q-parameterization theory. IEEE Transactions on Control Systems Technology 3(2), 202–211 (1995)
[16] Packard, A. K.: Gain scheduling via linear fractional transformations. Systems and Control Letters 22(2), 79–92 (1994) · Zbl 0792.93043
[17] Peleties, P., & DeCarlo, R. (1991). Asymptotic stability of m-switched systems using Lyapunov-like functions. Proceedings of the American Control Conference (pp. 1679–1684).
[18] Pettersson, S., & Lennartson, B. (2001). Stabilization of hybrid systems using a min-projection strategy. Proceedings of the American Control Conference (pp. 223–228).
[19] Prajna, S., & Papachristodoulou, A. (2003). Analysis of switched and hybrid systems: beyond piecewise quadratic methods. Proceedings of the American Control Conference (pp. 2779–2784).
[20] Wicks, M. A., Peleties, P., & DeCarlo, R. A. (1994). Construction of piecewise Lyapunov functions for stabilizing switched systems. Proceedings of the 33rd IEEE Conference on Decision and Control (pp. 3492–3497).
[21] Wu, F.: A generalized LPV system analysis and control synthesis framework. International Journal of Control 74(7), 745–759 (2001) · Zbl 1011.93046
[22] Wu, F.; Yang, X. H.; Packard, A.; Becker, G.: Induced L2 norm control for LPV systems with bounded parameter variation rates. International Journal of Robust and Nonlinear Control 6(9/10), 983–998 (1996) · Zbl 0863.93074
[23] Ye, H.; Michel, A. N.; Hou, L.: Stability theory for hybrid dynamical systems. IEEE Transactions on Automatic Control 43(4), 461–474 (1998) · Zbl 0905.93024
# In computation, the function f is defined by $$f\left( x \right) = \left\{ {\begin{array}{*{20}{c}} {a{x^2} + 2x - 1}&{x \le 1}\\ {b - c{x^2}}&{x > 1} \end{array}} \right.$$ where a, b, c are constants. It is given that f(x) is differentiable at x = 1 and f'(0) = f'(2). Then the value of b is 1. 0 2. 14 3. 16 4. 17 Option 1 : 0

## Differentiability MCQ Question 1 Detailed Solution

Concept: A function f(x) is continuous at x = a if, Left limit = Right limit = Function value = Real and finite. A function is said to be differentiable at x = a if, Left derivative = Right derivative = Well defined.

Calculation: Given: $$f\left( x \right) = \left\{ {\begin{array}{*{20}{c}} {a{x^2} + 2x - 1,}&{x \le 1}\\ {b - c{x^2},}&{x > 1} \end{array}} \right.$$ Given that f(x) is differentiable at x = 1: f'(1-) = f'(1+) ⇒ 2a(1) + 2 = -2c(1) ⇒ 2a + 2 = -2c ⇒ a + c = -1       ___(1) f(x) is differentiable, hence f(x) is continuous at x = 1 ⇒ f(1-) = f(1+) ⇒ a(1)² + 2(1) - 1 = b – c(1)² ⇒ a + 1 = b – c ⇒ b = a + c + 1        ___(2) From equations (1) and (2), b = -1 + 1 = 0

# If y = log sin x, then $$\frac{dy}{dx}$$ is 1. $$\frac{1}{sin~x} cos~x$$ 2. tan x 3. $$\frac{1}{sin~x}$$ 4. log cos x Option 1 : $$\frac{1}{sin~x} cos~x$$

## Differentiability MCQ Question 2 Detailed Solution

Concept: The Chain Rule of derivatives states that, if y = f(u) and u = g(x) are both differentiable functions, then: $$\frac{dy}{dx}=\frac{dy}{du}\times \frac{du}{dx}$$ $$\frac{{d\left( {\ln x} \right)}}{{dx}} = \frac{1}{x},\;for\;x > 0$$ $$\frac{{d\left( {\sin x} \right)}}{{dx}} = \cos x$$

Calculation: Given: y = log sin x. Let sin x = u ⇒ y = log u $$\frac{d}{{dx}}\left( {\log u} \right) = \frac{1}{{u}}\frac{d}{{dx}}\left( {u} \right)$$ $$= \frac{1}{{\sin x}}\left( {\cos x} \right)$$ Hence, the value of $$\frac{dy}{dx}$$ will be $$\frac{1}{sin~x} cos~x$$.
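A quick numerical cross-check of MCQ Question 1 (a sketch, not part of the original solution; the values a = −1/2 and c = −1/2 are assumptions obtained by combining a + c = −1 with the unused condition f'(0) = f'(2), which gives 2 = −4c):

```python
# Numerical cross-check for MCQ Question 1.
# From a + c = -1 and f'(0) = f'(2): f'(0) = 2, f'(2) = -4c,
# so c = -1/2, a = -1/2, and b = a + c + 1 = 0.
a, b, c = -0.5, 0.0, -0.5

def f(x):
    return a * x**2 + 2 * x - 1 if x <= 1 else b - c * x**2

h = 1e-6
left_val, right_val = f(1 - h), f(1 + h)       # continuity check at x = 1
left_der = (f(1) - f(1 - h)) / h               # one-sided finite differences
right_der = (f(1 + 2 * h) - f(1 + h)) / h

print(abs(left_val - right_val) < 1e-5)   # True: continuous at x = 1
print(abs(left_der - right_der) < 1e-3)   # True: differentiable at x = 1
```

Both one-sided derivatives come out close to 1, consistent with b = 0 being the answer.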
# Consider a function f(x, y, z) given by f(x, y, z) = (x² + y² – 2z²)(y² + z²). The partial derivative of this function with respect to x at the point x = 2, y = 1 and z = 3 is _______

## Differentiability MCQ Question 3 Detailed Solution

Concept: In partial differentiation, all variables are treated as constant except the variable of differentiation, i.e. if f(x, y, z) is a function, then its partial derivative with respect to x is calculated by keeping y and z constant.

Calculation: f(x, y, z) = (x² + y² – 2z²)(y² + z²) $$\frac{{\partial f}}{{\partial x}} = \left( {2x} \right)\left( {{y^2} + {z^2}} \right)$$ At the point x = 2, y = 1 and z = 3: $$\frac{{\partial f}}{{\partial x}} = 2\left( 2 \right)\left( {{1^2} + {3^2}} \right) = 40$$

# Consider the functions I. e⁻ˣ II. x² – sin x III. $$\sqrt {{x^3} + 1}$$ Which of the above functions is/are increasing everywhere in [0, 1]? 1. III only 2. II only 3. II and III only 4. I and III only Option 1 : III only

## Differentiability MCQ Question 4 Detailed Solution

Concept: A function f(x) is increasing in a given interval if its first-order derivative $$f'\left( x \right) \ge 0$$ holds at every point of the interval.

Calculation: Function I: $$f\left( x \right) = {e^{ - x}},\;f'\left( x \right) = - {e^{ - x}}$$ $$\because f'\left( 0 \right) = - 1 < 0$$ Therefore, the function is not increasing everywhere in [0, 1]. Function II: $$f\left( x \right) = {x^2} - \sin x,\;f'\left( x \right) = 2x - \cos x$$ $$f'\left( 0 \right) = 0 - \cos 0 = - 1 < 0$$ Since f'(x) < 0 near x = 0, this function is also not increasing everywhere in [0, 1]. Function III: $$f\left( x \right) = \sqrt {{x^3} + 1}$$ $$f'\left( x \right) = \frac{{3{x^2}}}{{2\sqrt {{x^3} + 1} }} \ge 0\;{\rm{on}}\;\left[ {0,1} \right]$$ Therefore, the function is increasing in the given interval.
Therefore $$f\left( x \right) = \sqrt {{x^3} + 1}$$ is the only function increasing everywhere in [0, 1].

# If $$v = {\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{1}{2}}},~then~~\frac{{{\partial ^2}v}}{{\partial {x^2}}} + \frac{{{\partial ^2}v}}{{\partial {y^2}}} + \frac{{{\partial ^2}v}}{{\partial {z^2}}}$$ is 1. -1/2 2. -1 3. 0 4. 1 Option 3 : 0

## Differentiability MCQ Question 5 Detailed Solution

Concept: If y = xⁿ, then $$\frac{{\partial y}}{{\partial x}} = n{x^{n - 1}},\;\frac{{{\partial ^2}y}}{{\partial {x^2}}} = n\left( {n - 1} \right){x^{n - 2}}$$

Calculation: Given: $$v = {\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{1}{2}}}$$ $$\frac{{\partial v}}{{\partial x}} = - \frac{1}{2}{\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{3}{2}}} \times 2x = - x{\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{3}{2}}}$$ $$\frac{{{\partial ^2}v}}{{\partial {x^2}}} = - x\left\{ { - \frac{3}{2}{{\left( {{x^2} + {y^2} + {z^2}} \right)}^{ - \frac{5}{2}}} \times 2x} \right\} + {\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{3}{2}}}\left( { - 1} \right)$$ $$= 3{x^2}{\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{5}{2}}} - {\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{3}{2}}}$$ Similarly, $$\frac{{{\partial ^2}v}}{{\partial {y^2}}} = 3{y^2}{\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{5}{2}}} - {\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{3}{2}}}$$ $$\frac{{{\partial ^2}v}}{{\partial {z^2}}} = 3{z^2}{\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{5}{2}}} - {\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{3}{2}}}$$ Now, $$\frac{{{\partial ^2}v}}{{\partial {x^2}}} + \frac{{{\partial ^2}v}}{{\partial {y^2}}} + \frac{{{\partial ^2}v}}{{\partial {z^2}}} = \left( {3{x^2} + 3{y^2} + 3{z^2}} \right){\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{5}{2}}} - 3{\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{3}{2}}}$$ $$= 3\left( {{x^2} + {y^2} + {z^2}} \right){\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{5}{2}}} - 3{\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{3}{2}}}$$ $$= 3{\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{3}{2}}} - 3{\left( {{x^2} + {y^2} + {z^2}} \right)^{ - \frac{3}{2}}} = 0$$

# Let r = x² + y - z and z³ - xy + yz + y³ = 1. Assume that x and y are independent variables. At (x, y, z) = (2, -1, 1), the value (correct to two decimal places) of $$\frac{{\partial {\rm{r}}}}{{\partial {\rm{x}}}}$$ is _________ .

## Differentiability MCQ Question 6 Detailed Solution

r = x² + y - z   ---(1) z³ - xy + yz + y³ = 1    ---(2) Differentiating (1) w.r.t. x: $$\frac{{\partial r}}{{\partial x}} = 2x + \frac{{\partial y}}{{\partial x}} - \frac{{\partial z}}{{\partial x}}$$ Since y is an independent variable, the derivative of y w.r.t. x is 0, so $$\frac{{\partial r}}{{\partial x}} = 2x - \frac{{\partial z}}{{\partial x}}$$      ----(3) From relation (2): z³ – xy + yz + y³ = 1. Differentiate w.r.t. x: $$3{z^2}\frac{{\partial z}}{{\partial x}} - y + y\frac{{\partial z}}{{\partial x}} = 0$$ $$\left( {3{z^2} + y} \right)\frac{{\partial z}}{{\partial x}} = y$$ $$\frac{{\partial z}}{{\partial x}} = \frac{y}{{3{z^2} + y}}$$      ----(4) Substituting (4) in (3): $$\frac{{\partial r}}{{\partial x}} = 2x - \frac{y}{{3{z^2} + y}}$$ At (2, -1, 1): $${\left( {\frac{{\partial r}}{{\partial x}}} \right)_{\left( {2,\; - 1,1} \right)}} = 2\left( 2 \right) - \frac{{ - 1}}{{3{{\left( 1 \right)}^2} + \left( { - 1} \right)}}$$ $$= 4 + \frac{1}{2} = \frac{9}{2} = 4.5$$

# If $$u = {\log _e}\left( {\frac{{{x^4} + {y^4}}}{{x + y}}} \right)$$, the value of $$x\frac{{\partial u}}{{\partial x}} + y\frac{{\partial u}}{{\partial y}}$$ is 1. 6 2. 5 3. 4 4.
3 Option 4 : 3

## Differentiability MCQ Question 7 Detailed Solution

Concept: A function f(x, y) is said to be homogeneous of degree n in x and y if it can be written in the form f(λx, λy) = λⁿ f(x, y). Euler's theorem: If f(x, y) is a homogeneous function of degree n in x and y and has continuous first and second-order partial derivatives, then $$x\frac{{\partial f}}{{\partial x}} + y\frac{{\partial f}}{{\partial y}} = nf$$ $${x^2}\frac{{{\partial ^2}f}}{{\partial {x^2}}} + 2xy\frac{{{\partial ^2}f}}{{\partial x\partial y}} + {y^2}\frac{{{\partial ^2}f}}{{\partial {y^2}}} = n\left( {n - 1} \right)f$$ If z is a homogeneous function of x & y of degree n and z = f(u), then $$x\frac{{\partial u}}{{\partial x}} + y\frac{{\partial u}}{{\partial y}} = n\frac{{f\left( u \right)}}{{f'\left( u \right)}}$$

Calculation: Given, $$u = {\log _e}\left( {\frac{{{x^4} + {y^4}}}{{x + y}}} \right)$$ $$z = \frac{{{x^4} + {y^4}}}{{x + y}}$$ z is a homogeneous function of x & y with degree 3. Now, z = eᵘ. Thus, by Euler's theorem: $$x\frac{{\partial u}}{{\partial x}} + y\frac{{\partial u}}{{\partial y}} = 3\frac{{{e^u}}}{{{e^u}}} = 3$$

# If the function $$u = \ln \left( {\frac{{{x^3} + {x^2}y - {y^3}}}{{x - y}}} \right)$$ then $$x\frac{{\partial u}}{{\partial x}} + y\frac{{\partial u}}{{\partial y}}$$ is 1. $$2{e^u}$$ 2. $${e^{2u}}$$ 3. 2 4.
1/2 Option 3 : 2

## Differentiability MCQ Question 8 Detailed Solution

Concept: A function f(x, y) is said to be homogeneous of degree n in x and y if it can be written in the form f(λx, λy) = λⁿ f(x, y). Euler's theorem: If f(x, y) is a homogeneous function of degree n in x and y and has continuous first and second-order partial derivatives, then $$x\frac{{\partial f}}{{\partial x}} + y\frac{{\partial f}}{{\partial y}} = nf$$ $${x^2}\frac{{{\partial ^2}f}}{{\partial {x^2}}} + 2xy\frac{{{\partial ^2}f}}{{\partial x\partial y}} + {y^2}\frac{{{\partial ^2}f}}{{\partial {y^2}}} = n\left( {n - 1} \right)f$$ If z is a homogeneous function of x & y of degree n and z = f(u), then $$x\frac{{\partial u}}{{\partial x}} + y\frac{{\partial u}}{{\partial y}} = n\frac{{f\left( u \right)}}{{f'\left( u \right)}}$$

Calculation: Given, $$u = \ln \left( {\frac{{{x^3} + {x^2}y - {y^3}}}{{x - y}}} \right)$$ $$z = {\frac{{{x^3} + {x^2}y - {y^3}}}{{x - y}}}$$ z is a homogeneous function of x & y with degree 2. Now, z = eᵘ. Thus, by Euler's theorem: $$x\frac{{\partial u}}{{\partial x}} + y\frac{{\partial u}}{{\partial y}} = 2\frac{{{e^u}}}{{{e^u}}} = 2$$

# If $$H = {\tan ^{ - 1}}\frac{x}{y}$$, x = u + v, y = u - v then $$\frac{{\partial H}}{{\partial v}}$$ is 1. $$\frac{u}{{{u^2} + {v^2}}}$$ 2. $$\frac{{ - v}}{{{u^2} + {v^2}}}$$ 3. $$\frac{u}{{{x^2} + {y^2}}}$$ 4.
$$\frac{{ - 2v}}{{{x^2} + {y^2}}}$$ Option 1 : $$\frac{u}{{{u^2} + {v^2}}}$$

## Differentiability MCQ Question 9 Detailed Solution

Concept: $$\frac{{\partial H}}{{\partial v}} = \frac{{\partial H}}{{\partial t}} \times \frac{{\partial t}}{{\partial v}}$$

Calculation: Given: $$H = {\tan ^{ - 1}}\frac{x}{y}$$, x = u + v, y = u - v. Substituting the values of x and y in H, we get $$H = {\tan ^{ - 1}}\left[ {\frac{{u + v}}{{u - v}}} \right]$$ Let $$t = \frac{{u + v}}{{u - v}}$$ ------------(1) $$H = {\tan ^{ - 1}}t$$ Differentiating w.r.t. t: $$\frac{{\partial H}}{{\partial t}} = \frac{1}{{1 + {t^2}}}$$ ------------(2) In equation (1), differentiating t w.r.t. v: $$\frac{{\partial t}}{{\partial v}} = \frac{{(u - v)(1) - (u + v)( - 1)}}{{{{\left( {u - v} \right)}^2}}} = \frac{{2u}}{{{{\left( {u - v} \right)}^2}}}$$ ---------(3) Multiplying (2) and (3), we get $$\frac{{\partial H}}{{\partial v}} = \frac{1}{{1 + {t^2}}}\left[ {\frac{{2u}}{{{{\left( {u - v} \right)}^2}}}} \right]$$ Substituting $$t = \frac{{u + v}}{{u - v}}$$, we get $$\frac{{\partial H}}{{\partial v}} = \frac{u}{{{u^2} + {v^2}}}$$

# Let f(x) be a polynomial and g(x) = f’(x) be its derivative. If the degree of (f(x) + f(– x)) is 10, then the degree of (g(x) – g(– x)) is _______.

## Differentiability MCQ Question 10 Detailed Solution

Given that the degree of (f(x) + f(– x)) is 10. In f(x) + f(– x) the odd-degree terms of f cancel and the even-degree terms double, so f must contain a term of degree 10; take f(x) = x¹⁰ as the representative term. Then g(x) = f’(x) = 10x⁹, and g(x) – g(– x) = 10x⁹ – (– 10x⁹) = 20x⁹. From this, the degree of (g(x) – g(– x)) is 9.

# A function f(x) is defined as $$f\left( x \right) = \left\{ {\begin{array}{*{20}{c}} {{e^x},}&{x < 1}\\ {\ln x + a{x^2}+bx,}&{x \ge 1} \end{array}} \right.$$, where x ∈ R.
Which one of the following statements is TRUE? 1. f(x) is NOT differentiable at x = 1 for any values of a and b 2. f(x) is differentiable at x = 1 for the unique value of a and b. 3. f(x) is differentiable at x = 1 for all values of a and b such that a + b = e 4. f(x) is differentiable at x = 1 for all values of a and b. Option 2 : f(x) is differentiable at x = 1 for the unique value of a and b. ## Differentiability MCQ Question 11 Detailed Solution Concept: A function is said to be differentiable at x =a if, Left derivative = Right derivative = Well defined Analysis: $$f\left( x \right) = \left\{ {\begin{array}{*{20}{c}} {{e^x},\;x < 1}\\ {\log x + a{x^2} + bx,\;x \ge 1} \end{array}} \right.$$ Taking Differentiation, $$f'\left( x \right) = \left\{ {\begin{array}{*{20}{c}} {{e^x},\;x < 1}\\ {\frac{1}{x} + 2ax + b,\;x \ge 1} \end{array}} \right.$$ f’(1) = e, x < 1 f’ (1) = 1 + 2a + b, x ≥ 1 since f(x) is differentiable at x = 1, e = 1 + 2a + b → (1) At x = 1, f(1) = e, x < 1 f(1) = a + b, x ≥ 1 since f(x) is continuous at x = 1, e = a + b → (2) From (1) and (2) ⇒ 1 + 2a + b = a + b ⇒ a = -1 ⇒ b = e + 1 f(x) is differentiable at x = 1 for the unique values of a and b. # Consider two functions: $$x\; = \;\psi ln\phi$$ and $$y\; = \;\phi ln\psi$$. Which one of the following is the correct expression for ∂ψ/∂x? 1. $$\frac{{ln\psi }}{{ln\phi ln\psi - 1}}$$ 2. $$\frac{{ln\phi }}{{ln\phi \psi - 1}}$$ 3. $$\frac{{xln\psi }}{{ln\phi \psi - 1}}$$ 4. 
$$\frac{{x\ln \phi }}{{\ln \phi \ln \psi - 1}}$$ Option 1 : $$\frac{{\ln \psi }}{{\ln \phi \ln \psi - 1}}$$

## Differentiability MCQ Question 12 Detailed Solution

$$x = \psi \ln \phi \;\& \;y = \phi \ln \psi$$ Partially differentiating both equations with respect to x: $$1 = {\psi _x}\ln \phi + \frac{\psi }{\phi } \times {\phi _x}$$ …………..(i) $$0 = {\phi _x}\ln \psi + \frac{\phi }{\psi } \times {\psi _x}$$ $$\Rightarrow \frac{\phi }{\psi } = \frac{{ - {\phi _x}\ln \psi }}{{{\psi _x}}}$$ $$\Rightarrow \frac{\phi }{{\psi \times {\phi _x}}} = \frac{{ - \ln \psi }}{{{\psi _x}}}$$ …………..(ii) Putting (ii) in (i): $$1 = {\psi _x}\ln \phi - \frac{{{\psi _x}}}{{\ln \psi }} = {\psi _x}\left[ {\frac{{\ln \phi \ln \psi - 1}}{{\ln \psi }}} \right]$$ $${\psi _x} = \frac{{\ln \psi }}{{\ln \phi \ln \psi - 1}}$$

# If $$u = log\left( {\frac{{{x^2} + {y^2}}}{{x + y}}} \right)$$, what is the value of $$x\frac{{\partial u}}{{\partial x}} + y\frac{{\partial u}}{{\partial y}}?$$ 1. 0 2. 1 3. u 4.
$${e^u}$$ Option 2 : 1

## Differentiability MCQ Question 13 Detailed Solution

Concept: A function f(x, y) is said to be homogeneous of degree n in x and y if it can be written in the form f(λx, λy) = λⁿ f(x, y). Euler's theorem: If f(x, y) is a homogeneous function of degree n in x and y and has continuous first and second-order partial derivatives, then $$x\frac{{\partial f}}{{\partial x}} + y\frac{{\partial f}}{{\partial y}} = nf$$ $${x^2}\frac{{{\partial ^2}f}}{{\partial {x^2}}} + 2xy\frac{{{\partial ^2}f}}{{\partial x\partial y}} + {y^2}\frac{{{\partial ^2}f}}{{\partial {y^2}}} = n\left( {n - 1} \right)f$$ If z is a homogeneous function of x & y of degree n and z = f(u), then $$x\frac{{\partial u}}{{\partial x}} + y\frac{{\partial u}}{{\partial y}} = n\frac{{f\left( u \right)}}{{f'\left( u \right)}}$$

Calculation: Given, $$u = log\left( {\frac{{{x^2} + {y^2}}}{{x + y}}} \right)$$ $$z = \frac{{{x^2} + {y^2}}}{{x + y}}$$ z is a homogeneous function of x & y with degree 1. Now, z = eᵘ. Thus, by Euler's theorem: $$x\frac{{\partial u}}{{\partial x}} + y\frac{{\partial u}}{{\partial y}} = \frac{{{e^u}}}{{{e^u}}} = 1$$

# Each of four particles moves along an x-axis. Their coordinates (in meters) as functions of time (in seconds) are given by 1) particle 1: x(t) = 3.5 – 2.7t³ 2) particle 2: x(t) = 3.5 + 2.7t³ 3) particle 3: x(t) = 3.5 – 2.7t² 4) particle 4: x(t) = 3.5 – 3.4t – 2.7t² Which of these particles have constant acceleration? 1. All four 2. Only (1) and (2) 3. Only (2) and (3) 4. Only (3) and (4) Option 4 : Only (3) and (4)

## Differentiability MCQ Question 14 Detailed Solution

Concept: If x = f(t) defines the position of a particle, then the first derivative represents the velocity of the particle and the second derivative represents its acceleration. If $$\frac{d^2x}{dt^2}$$ is not a function of t, the acceleration is constant.
Calculation: Given: particle 1: x(t) = 3.5 - 2.7t³, particle 2: x(t) = 3.5 + 2.7t³, particle 3: x(t) = 3.5 - 2.7t², particle 4: x(t) = 3.5 - 3.4t - 2.7t².

The second derivative for particle 1: $$\frac{{{d^2}}}{{d{t^2}}}x\left( t \right) = \frac{{{d^2}}}{{d{t^2}}}\left( {3.5 - 2.7{t^3}} \right) = \frac{d}{{dt}}\left( {0 - 2.7 \times 3\;{t^2}} \right) = - 16.2\;t$$ which is a function of time 't', hence the acceleration is not constant.

The second derivative for particle 2: $$\frac{{{d^2}}}{{d{t^2}}}x\left( t \right) = \frac{{{d^2}}}{{d{t^2}}}\left( {3.5 + 2.7{t^3}} \right) = \frac{d}{{dt}}\left( {0 + 2.7 \times 3\;{t^2}} \right) = 16.2\;t$$ which is a function of time 't', hence the acceleration is not constant.

The second derivative for particle 3: $$\frac{{{d^2}}}{{d{t^2}}}x\left( t \right) = \frac{{{d^2}}}{{d{t^2}}}\left( {3.5 - 2.7{t^2}} \right) = \frac{d}{{dt}}\left( {0 - 2.7 \times 2\;t} \right) = - 5.4$$ which is not a function of time 't', hence the acceleration is constant.

The second derivative for particle 4: $$\frac{{{d^2}}}{{d{t^2}}}x\left( t \right) = \frac{{{d^2}}}{{d{t^2}}}\left( {3.5 - 3.4t - 2.7{t^2}} \right) = \frac{d}{{dt}}\left( {0 - 3.4 - 2.7 \times 2\;t} \right) = - 5.4$$ which is not a function of time 't', hence the acceleration is constant.

Hence only particle 3 and particle 4 have constant acceleration.

# If $$\rm z = \tan^{-1} \frac y x$$ then the value of $$\rm \frac {\partial^2z}{\partial x^2} + \frac {\partial^2 z}{\partial y^2}$$ is equal to- 1. $$\rm \frac {-y}{x^2 + y^2}$$ 2. $$\rm \frac {x}{x^2 + y^2}$$ 3. $$\rm \frac {2xy}{x^2 + y^2}$$ 4.
0 Option 4 : 0

## Differentiability MCQ Question 15 Detailed Solution

Concept: Let z = f(x, y). The partial derivative of z w.r.t. x is the ordinary derivative of z w.r.t. x keeping y constant, denoted $$\dfrac {\partial z}{\partial x}$$; the partial derivative of z w.r.t. y is the ordinary derivative of z w.r.t. y keeping x constant, denoted $$\dfrac {\partial z}{\partial y}$$

Calculations: Let $$\rm z = \tan^{-1} \frac y x$$ Taking the partial derivative w.r.t. x on both sides, we get $$\rm \dfrac {\partial z}{\partial x} = \dfrac{1}{1+(\dfrac{y}{x})^2}.\dfrac{-y}{x^2}$$ $$\rm \dfrac {\partial z}{\partial x} = \dfrac{-y}{x^2+y^2}$$ Again taking the partial derivative w.r.t. x on both sides, we get $$\rm \dfrac {\partial ^2z}{\partial x^2} = \dfrac{2xy}{(x^2+y^2)^2}$$           ....(1) Now, taking the partial derivative w.r.t. y on both sides, we get $$\rm \dfrac {\partial z}{\partial y} = \dfrac{1}{1+(\dfrac{y}{x})^2}.\dfrac{1}{x}$$ $$\rm \dfrac {\partial z}{\partial y} = \dfrac{x}{x^2+y^2}$$ Again taking the partial derivative w.r.t. y on both sides, we get $$\rm \dfrac {\partial ^2z}{\partial y^2} = \dfrac{-2xy}{(x^2+y^2)^2}$$       ....(2) Adding equations (1) and (2), we get
$$\rm \dfrac {\partial^2z}{\partial x^2} + \dfrac {\partial^2 z}{\partial y^2} = \dfrac{2xy}{(x^2+y^2)^2} - \dfrac{2xy}{(x^2+y^2)^2} = 0$$ Hence, if $$\rm z = \tan^{-1} \frac y x$$ then the value of $$\rm \dfrac {\partial^2z}{\partial x^2} + \dfrac {\partial^2 z}{\partial y^2}$$ is equal to zero.

# If z = xy ln(xy), then 1. $$x\frac{{\partial z}}{{\partial x}} + y\frac{{\partial z}}{{\partial y}} = 0$$ 2. $$y\frac{{\partial z}}{{\partial x}} = x\frac{{\partial z}}{{\partial y}}$$ 3. $$x\frac{{\partial z}}{{\partial x}} = y\frac{{\partial z}}{{\partial y}}$$ 4. $$y\frac{{\partial z}}{{\partial x}} + x\frac{{\partial z}}{{\partial y}} = 0$$ Option 3 : $$x\frac{{\partial z}}{{\partial x}} = y\frac{{\partial z}}{{\partial y}}$$

## Differentiability MCQ Question 16 Detailed Solution

In a partial derivative, the other variable is held constant. Differentiating z = xy ln(xy) partially w.r.t. x: $$\frac{{\partial z}}{{\partial x}} = y\ln \left( {xy} \right) + y \Rightarrow x\frac{{\partial z}}{{\partial x}} = xy\ln \left( {xy} \right) + xy$$ Similarly, w.r.t. y: $$\frac{{\partial z}}{{\partial y}} = x\ln \left( {xy} \right) + x \Rightarrow y\frac{{\partial z}}{{\partial y}} = xy\ln \left( {xy} \right) + xy$$ Hence $$x\frac{{\partial z}}{{\partial x}} = y\frac{{\partial z}}{{\partial y}}$$

# The partial derivative of the function $$f\left( {x,\;y,\;z} \right) = {e^{1 - x\cos y}} + xz{e^{ - 1/\left( {1 + {y^2}} \right)}}$$ with respect to x at the point (1, 0, e) is 1. -1 2. 0 3. 1 4. $$\frac{1}{e}$$ Option 2 : 0

## Differentiability MCQ Question 17 Detailed Solution

Concept: When the input of a function is made up of multiple variables, a partial derivative is used to understand how the function changes as just one of the variables changes, keeping all the other variables constant.
Application: Given $$f\left( {x,\;y,\;z} \right) = {e^{1 - x\cos y}} + xz{e^{ - \frac{1}{{1 + {y^2}}}}}$$ Differentiating with respect to x and treating y and z as constants, we get: $$\frac{{\partial f}}{{\partial x}} = {e^{(1 - x\cos y)}} \cdot \frac{d}{{dx}}\left( {1 - x\cos y} \right) + z{e^{\frac{{ - 1}}{{1 + {y^2}}}}} \cdot \frac{\partial }{{\partial x}}\left( x \right)$$ $$\frac{{\partial f}}{{\partial x}} = {e^{\left( {1 - x\cos y} \right)}} \cdot \left( { - \cos y} \right) + z{e^{\frac{{ - 1}}{{{y^2} + 1}}}}$$ $$\frac{{\partial f}}{{\partial x}} = - \cos y\,{e^{(1 - x\cos y)}} + z{e^{ - \frac{1}{{{y^2} + 1}}}}$$ At the point (1, 0, e): $$\frac{{\partial f}}{{\partial x}}{|_{\left( {1,0,e} \right)}} = - \cos \left( 0 \right)\,{e^{(1 - \cos 0)}} + e \cdot {e^{ - \frac{1}{1}}} = - 1 \cdot {e^0} + e \cdot {e^{ - 1}} = - 1 + 1 = 0$$

# $$\mathop {\lim }\limits_{x \to b} \frac{{{b^x} - {x^b}}}{{{x^x} - {b^b}}} = -1,$$ value of b = ? 1. 0 2. e 3. 1 4. None Option 3 : 1

## Differentiability MCQ Question 18 Detailed Solution

Explanation: $$\mathop {\lim }\limits_{x \to b} \frac{{{b^x} - {x^b}}}{{{x^x} - {b^b}}}$$ The given limit is of the 0/0 form, so applying L'Hôpital's rule: $$\mathop {\lim }\limits_{x \to b} \frac{{{b^x}\log b - b{x^{b - 1}}}}{{{x^x}\left( {1 + \log x} \right)}} = - 1$$ $$\Rightarrow \frac{{{b^b}\log b - b \cdot {b^{b - 1}}}}{{{b^b}\left( {1 + \log b} \right)}} = \frac{{{b^b}\left( {\log b - 1} \right)}}{{{b^b}\left( {1 + \log b} \right)}} = - 1$$ ⇒ log b - 1 = -1 - log b ⇒ 2 log b = 0 ∴ b = 1

# Given the following statements about a function f: R → R, select the right option: P: If f(x) is continuous at x = x0, then it is also differentiable at x = x0. Q: If f(x) is continuous at x = x0, then it may not be differentiable at x = x0. R: If f(x) is differentiable at x = x0, then it is also continuous at x = x0. 1. P is true, Q is false, R is false 2.
P is false, Q is true, R is true 3. P is false, Q is true, R is false 4. P is true, Q is false, R is true Option 2 : P is false, Q is true, R is true ## Differentiability MCQ Question 19 Detailed Solution The following properties are true in calculus: • If a function is differentiable at any point, then it is necessarily continuous at the point. • But the converse of this statement is not true i.e. continuity is a necessary but not sufficient condition for the Existence of a finite derivative. • Differentiability implies Continuity. • Continuity does not necessarily imply differentiability. Hence, Statement P is wrong. Statement Q is right. Statement R is right. # For the two functions f (x, y) = x3 – 3xy2 and g(x,y) = 3x2y – y3. Which one of the following options is correct? 1. $$\frac {\partial f}{\partial x} = \frac {\partial g}{\partial x}$$ 2. $$\frac {\partial f}{\partial x} = - \frac {\partial g}{\partial y}$$ 3. $$\frac {\partial f}{\partial y} = - \frac {\partial g}{\partial x}$$ 4. $$\frac {\partial f}{\partial y} = \frac {\partial g}{\partial x}$$ Option 3 : $$\frac {\partial f}{\partial y} = - \frac {\partial g}{\partial x}$$ ## Differentiability MCQ Question 20 Detailed Solution Concept: Partial derivative: A partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant. Calculation: Given: f (x, y) = x3 – 3xy2 and g(x,y) = 3x2y – y3 $$\frac {\partial f}{\partial x} = \frac {\partial}{\partial x} (x^3 - 3xy^2) =$$ 3x2 - 3y2 $$\frac {\partial g}{\partial x} = \frac {\partial}{\partial x} (3x^2y - y^3) =$$ 6xy $$\frac {\partial f}{\partial y} = \frac {\partial}{\partial y} (x^3 - 3xy^2) =$$ -6xy $$\frac {\partial g}{\partial y} = \frac {\partial}{\partial y} (3x^2y \ -\ y^3 ) =$$ 3x2 - 3y2 From the above values, only option (3) is correct i.e. $$\frac {\partial f}{\partial y} = - \frac {\partial g}{\partial x}$$
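The partial derivatives in MCQ Question 20 can be cross-checked numerically with central differences (a sketch; the sample point (1.3, -0.7) is an arbitrary choice, not part of the original solution):

```python
# Numerical check of MCQ Question 20: df/dy = -dg/dx for
# f(x, y) = x^3 - 3xy^2 and g(x, y) = 3x^2*y - y^3.
f = lambda x, y: x**3 - 3 * x * y**2
g = lambda x, y: 3 * x**2 * y - y**3

x0, y0, h = 1.3, -0.7, 1e-5            # arbitrary sample point
df_dy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)   # central difference
dg_dx = (g(x0 + h, y0) - g(x0 - h, y0)) / (2 * h)

print(abs(df_dy + dg_dx) < 1e-6)   # True: df/dy == -dg/dx
```

Incidentally, f and g here are the real and imaginary parts of (x + iy)³, so the relation checked is one of the Cauchy–Riemann equations.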
# Sine

For other uses, see Sine (disambiguation). Not to be confused with sign.

Basic features — parity: odd; domain: (−∞, ∞); codomain: [−1, 1]; period: 2π. Specific values — at zero: 0; maxima: ((2k + ½)π, 1); minima: ((2k − ½)π, −1). Specific features — root: kπ; critical point: kπ + π/2; inflection point: kπ; fixed point: 0.

In mathematics, the sine is a trigonometric function of an angle. The sine of an acute angle is defined in the context of a right triangle: for the specified angle, it is the ratio of the length of the side that is opposite that angle to the length of the longest side of the triangle (the hypotenuse). More generally, the definition of sine (and other trigonometric functions) can be extended to any real value in terms of the length of a certain line segment in a unit circle. More modern definitions express the sine as an infinite series or as the solution of certain differential equations, allowing their extension to arbitrary positive and negative values and even to complex numbers. The sine function is commonly used to model periodic phenomena such as sound and light waves, the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year.

The function sine can be traced to the jyā and koṭi-jyā functions used in Gupta period Indian astronomy (Aryabhatiya, Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin.[1] The word "sine" comes from a Latin mistranslation of the Arabic jiba, which is a transliteration of the Sanskrit word for half the chord, jya-ardha.[2]

## Right-angled triangle definition

For the angle α, the sine function gives the ratio of the length of the opposite side to the length of the hypotenuse. To define the trigonometric functions for an acute angle α, start with any right triangle that contains an angle of measure α; in the accompanying figure, angle A in triangle ABC has measure α.
The three sides of the triangle are named as follows: • The opposite side is the side opposite to the angle of interest, in this case side a. • The hypotenuse is the side opposite the right angle, in this case side h. The hypotenuse is always the longest side of a right-angled triangle. • The adjacent side is the remaining side, in this case side b. It forms a side of (is adjacent to) both the angle of interest (angle A) and the right angle. Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse. (The other trigonometric functions of the angle can be defined similarly; for example, the cosine of the angle is the ratio between the adjacent side and the hypotenuse, while the tangent gives the ratio between the opposite and adjacent sides.) As stated, the value sin(α) appears to depend on the choice of right triangle containing an angle of measure α. However, this is not the case: all such triangles are similar, and so the ratio is the same for each of them. ## Relation to slope Main article: Slope The trigonometric functions can be defined in terms of the rise, run, and slope of a line segment relative to some horizontal line. • When the length of the line segment is 1, sine takes an angle and tells the rise. • Sine takes an angle and tells the rise per unit length of the line segment. • Rise is equal to sin θ multiplied by the length of the line segment. In contrast, cosine is used for telling the run from the angle; and tangent is used for telling the slope from the angle. Arctan is used for telling the angle from the slope. The line segment is the equivalent of the hypotenuse in the right-triangle, and when it has a length of 1 it is also equivalent to the radius of the unit circle. ## Relation to the unit circle Illustration of a unit circle. The radius has a length of 1. The variable t is an angle measure. 
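The right-triangle definition above can be sketched numerically (a minimal example using only the Python standard library; the 3-4-5 triangle is an arbitrary choice):

```python
import math

# Right-triangle definition: in a 3-4-5 right triangle, the angle
# opposite the side of length 3 has sine 3/5 = 0.6.
opposite, adjacent = 3.0, 4.0
hypotenuse = math.hypot(opposite, adjacent)   # 5.0

alpha = math.atan2(opposite, adjacent)        # the angle of interest
print(abs(math.sin(alpha) - opposite / hypotenuse) < 1e-12)  # True

# Similar triangles give the same ratio: scaling every side by the
# same factor leaves opposite/hypotenuse unchanged.
k = 7.3
print(abs(math.sin(alpha) - (k * opposite) / (k * hypotenuse)) < 1e-12)  # True
```

The second check illustrates why the value sin(α) does not depend on which right triangle containing α is chosen.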
In trigonometry, a unit circle is the circle of radius one centered at the origin (0, 0) in the Cartesian coordinate system. Let a line through the origin, making an angle of θ with the positive half of the x-axis, intersect the unit circle. The x- and y-coordinates of this point of intersection are equal to cos(θ) and sin(θ), respectively. The point's distance from the origin is always 1. Unlike the definitions with the right triangle or slope, the angle can be extended to the full set of real arguments by using the unit circle. This can also be achieved by requiring certain symmetries and that sine be a periodic function. Animation showing how the sine function (in red) is graphed from the y-coordinate (red dot) of a point on the unit circle (in green) at an angle of θ in radians.

## Identities

Exact identities (using radians) apply for all values of θ.

### Reciprocal

The reciprocal of sine is cosecant, i.e., the reciprocal of sin(A) is csc(A), or cosec(A). Cosecant gives the ratio of the length of the hypotenuse to the length of the opposite side: csc(A) = 1/sin(A) = hypotenuse/opposite.

### Inverse

The usual principal values of the arcsin(x) function graphed on the cartesian plane. Arcsin is the inverse of sin. The inverse function of sine is arcsine (arcsin or asin) or inverse sine (sin⁻¹). As sine is non-injective, it is not an exact inverse function but a partial inverse function. For example, sin(0) = 0, but also sin(π) = 0, sin(2π) = 0 etc. It follows that the arcsine function is multivalued: arcsin(0) = 0, but also arcsin(0) = π, arcsin(0) = 2π, etc. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x in the domain the expression arcsin(x) will evaluate only to a single value, called its principal value. Where k is some integer: sin(y) = x ⇔ y = arcsin(x) + 2πk, or y = π − arcsin(x) + 2πk. Or in one equation: sin(y) = x ⇔ y = (−1)ᵏ arcsin(x) + πk. Arcsin satisfies: sin(arcsin(x)) = x for −1 ≤ x ≤ 1, and arcsin(sin(x)) = x for −π/2 ≤ x ≤ π/2.

### Calculus

For the sine function, the derivative is: d/dx sin(x) = cos(x). The antiderivative is: ∫ sin(x) dx = −cos(x) + C, where C denotes the constant of integration.
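The derivative rule and the principal-branch behavior of arcsine can both be checked with the standard library (a sketch; the sample points 0.8 and 0.5 are arbitrary):

```python
import math

# d/dx sin(x) = cos(x), checked by a central finite difference.
h = 1e-6
x = 0.8
deriv = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
print(abs(deriv - math.cos(x)) < 1e-9)            # True

# math.asin returns only the principal value in [-pi/2, pi/2],
# even though sin(pi - y) = sin(y) gives another preimage.
y = math.asin(0.5)
print(abs(y - math.pi / 6) < 1e-12)               # True: principal value pi/6
print(abs(math.sin(math.pi - y) - 0.5) < 1e-12)   # True: pi - pi/6 also maps to 0.5
```

This is the multivaluedness described above: restricting to the principal branch is what makes `math.asin` a single-valued function.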
### Other trigonometric functions The sine and cosine functions are related in multiple ways. The two functions are out of phase by 90°: sin(x) = cos(x − 90°) for all angles x. Also, the derivative of the function sin(x) is cos(x). It is possible to express any trigonometric function in terms of any other (up to a plus or minus sign, or using the sign function). Sine in terms of the other common trigonometric functions:

- sin(θ) = ±√(1 − cos²(θ))
- sin(θ) = ±tan(θ)/√(1 + tan²(θ))
- sin(θ) = ±1/√(1 + cot²(θ))
- sin(θ) = ±√(sec²(θ) − 1)/sec(θ)
- sin(θ) = 1/csc(θ)

Note that for all equations which use plus/minus (±), the result is positive for angles in the first quadrant; in the other quadrants the correct sign can be restored with the sign function (sgn). The basic relationship between the sine and the cosine can also be expressed as the Pythagorean trigonometric identity: sin²(x) + cos²(x) = 1, where sin²(x) means (sin(x))². ## Properties relating to the quadrants The four quadrants of a Cartesian coordinate system. Over the four quadrants the behavior of the sine function is as follows.

| Quadrant | Degrees | Radians | Sign | Monotony | Convexity |
|---|---|---|---|---|---|
| 1st Quadrant | 0°–90° | 0–π/2 | positive | increasing | concave |
| 2nd Quadrant | 90°–180° | π/2–π | positive | decreasing | concave |
| 3rd Quadrant | 180°–270° | π–3π/2 | negative | decreasing | convex |
| 4th Quadrant | 270°–360° | 3π/2–2π | negative | increasing | convex |

Points between the quadrants (k is an integer). The quadrants of the unit circle and of sin x, using the Cartesian coordinate system.

| Degrees | Radians | Value | Point type |
|---|---|---|---|
| 0° | 2kπ | 0 | Root, inflection |
| 90° | 2kπ + π/2 | 1 | Maximum |
| 180° | 2kπ + π | 0 | Root, inflection |
| 270° | 2kπ + 3π/2 | −1 | Minimum |

For arguments outside those in the table, get the value using the fact that the sine function has a period of 360° (or 2π rad): sin(x + 2π) = sin(x). Or use the symmetries sin(π − x) = sin(x) and sin(−x) = −sin(x). For the complement of sine, we have sin(90° − x) = cos(x). ## Series definition The sine function (blue) is closely approximated by its Taylor polynomial of degree 7 (pink) for a full cycle centered on the origin. This animation shows how including more and more terms in the partial sum of its Taylor series gradually builds up a sine curve. Using only geometry and properties of limits, it can be shown that the derivative of sine is cosine, and that the derivative of cosine is the negative of sine.
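A quick numerical check of the Pythagorean identity, the 90° phase relation, and one of the sign-restoring conversions:

```python
import math

for deg in range(0, 360, 7):
    x = math.radians(deg)
    # Pythagorean identity
    assert math.isclose(math.sin(x) ** 2 + math.cos(x) ** 2, 1.0)
    # sine and cosine are out of phase by 90 degrees
    assert math.isclose(math.sin(x), math.cos(x - math.pi / 2), abs_tol=1e-12)
    # sine in terms of cosine, with the sign restored from the quadrant
    sign = 1.0 if math.sin(x) >= 0 else -1.0
    assert math.isclose(math.sin(x), sign * math.sqrt(1 - math.cos(x) ** 2),
                        abs_tol=1e-12)
```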
By repeated differentiation one finds that the (4n + k)-th derivative of sine at the point 0 cycles through the values 0, 1, 0, −1 for k = 0, 1, 2, 3. This gives the following Taylor series expansion at x = 0. One can then use the theory of Taylor series to show that the following identity holds for all real numbers x (where x is the angle in radians):[3] sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + ⋯ = Σ from n=0 to ∞ of (−1)ⁿ x^(2n+1)/(2n+1)!. If x were expressed in degrees then the series would contain messy factors involving powers of π/180: if x is the number of degrees, the number of radians is y = πx/180, so sin(x°) = sin(y) = (πx/180) − (πx/180)³/3! + ⋯. The series formulas for the sine and cosine are uniquely determined, up to the choice of unit for angles, by the requirements that they satisfy the Pythagorean identity sin²(x) + cos²(x) = 1 and the double-angle identity sin(2x) = 2 sin(x) cos(x). The radian is the unit that leads to the expansion with leading coefficient 1 for the sine and is determined by the additional requirement that sin(x)/x → 1 as x → 0. The coefficients for both the sine and cosine series may therefore be derived by substituting their expansions into the Pythagorean and double-angle identities, taking the leading coefficient for the sine to be 1, and matching the remaining coefficients. In general, mathematically important relationships between the sine and cosine functions and the exponential function (see, for example, Euler's formula) are substantially simplified when angles are expressed in radians, rather than in degrees, grads or other units. Therefore, in most branches of mathematics beyond practical geometry, angles are generally assumed to be expressed in radians. A similar series is Gregory's series for arctan, which is obtained by omitting the factorials in the denominator. ### Continued fraction The sine function can also be represented as a generalized continued fraction; this representation expresses the real number values, both rational and irrational, of the sine function. ## Fixed point The fixed-point iteration x_{n+1} = sin(x_n) with initial value x_0 = 2 converges to 0.
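The Taylor series translates directly into code. A small sketch comparing truncated sums against math.sin (the term count is an arbitrary choice that is more than enough for the sample inputs):

```python
import math

def sin_taylor(x: float, terms: int = 20) -> float:
    """Evaluate sin(x) from its Taylor series at 0 (x in radians)."""
    total, power, factorial = 0.0, x, 1.0
    for n in range(terms):
        total += ((-1) ** n) * power / factorial
        power *= x * x                           # x^(2n+1) -> x^(2n+3)
        factorial *= (2 * n + 2) * (2 * n + 3)   # (2n+1)!  -> (2n+3)!
    return total

for x in (0.0, 0.5, 1.0, 2.0, -3.0):
    assert math.isclose(sin_taylor(x), math.sin(x), abs_tol=1e-12)
```

For moderate |x| the series converges very quickly; for large |x| one would first reduce the argument using the 2π periodicity.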
Zero is the only real fixed point of the sine function; in other words the only intersection of the sine function and the identity function is sin(0) = 0. ## Arc length The arc length of the sine curve between 0 and x is ∫ from 0 to x of √(1 + cos²(t)) dt. This integral is an elliptic integral of the second kind. The arc length for a full period is ∫ from 0 to 2π of √(1 + cos²(t)) dt ≈ 7.6404, a constant that can be expressed in closed form using the Gamma function. The arc length of the sine curve from 0 to x is the above number divided by 2π, times x, plus a correction that varies periodically in x with period π. The Fourier series for this correction can be written in closed form using special functions, but it is perhaps more instructive to write the decimal approximations of the Fourier coefficients. The sine curve arc length from 0 to x is therefore approximately 1.216·x plus this small periodic correction. ## Law of sines Main article: Law of sines The law of sines states that for an arbitrary triangle with sides a, b, and c and angles opposite those sides A, B and C: sin(A)/a = sin(B)/b = sin(C)/c. This is equivalent to the equality of the first three expressions below: a/sin(A) = b/sin(B) = c/sin(C) = 2R, where R is the triangle's circumradius. It can be proven by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation, a technique to determine unknown distances by measuring two angles and an accessible enclosed distance. ## Special values Some common angles (θ) shown on the unit circle. The angles are given in degrees and radians, together with the corresponding intersection point on the unit circle, (cos(θ), sin(θ)). For certain integral numbers x of degrees, the value of sin(x) is particularly simple. A table of some of these values is given below.
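The fixed-point behavior is easy to observe directly. Convergence is slow, on the order of √(3/n) after n iterations, but unmistakable (the iteration count here is an arbitrary choice):

```python
import math

# Iterate x_{n+1} = sin(x_n) from x_0 = 2; the iterates creep toward 0.
x = 2.0
for _ in range(10_000):
    x = math.sin(x)

# After 10,000 iterations x is roughly sqrt(3/10000) ~ 0.017
assert 0.0 < x < 0.05
```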
The table below lists sin(x) for these angles; each row also gives the supplementary angle, which has the same sine.

| Degrees | Radians | Gradians | Turns | sin x (exact) | sin x (decimal) |
|---|---|---|---|---|---|
| 0° (180°) | 0 (π) | 0g (200g) | 0 (1/2) | 0 | 0 |
| 15° (165°) | 1/12π (11/12π) | 16⅔g (183⅓g) | 1/24 (11/24) | (√6 − √2)/4 | 0.258819045102521 |
| 30° (150°) | 1/6π (5/6π) | 33⅓g (166⅔g) | 1/12 (5/12) | 1/2 | 0.5 |
| 45° (135°) | 1/4π (3/4π) | 50g (150g) | 1/8 (3/8) | √2/2 | 0.707106781186548 |
| 60° (120°) | 1/3π (2/3π) | 66⅔g (133⅓g) | 1/6 (1/3) | √3/2 | 0.866025403784439 |
| 75° (105°) | 5/12π (7/12π) | 83⅓g (116⅔g) | 5/24 (7/24) | (√6 + √2)/4 | 0.965925826289068 |
| 90° | 1/2π | 100g | 1/4 | 1 | 1 |

90 degree increments:

| x in degrees | 0° | 90° | 180° | 270° | 360° |
|---|---|---|---|---|---|
| x in radians | 0 | π/2 | π | 3π/2 | 2π |
| x in gons | 0 | 100g | 200g | 300g | 400g |
| x in turns | 0 | 1/4 | 1/2 | 3/4 | 1 |
| sin x | 0 | 1 | 0 | −1 | 0 |

Other values not listed above can be obtained from these using the symmetry and periodicity identities. ## Relationship to complex numbers An illustration of the complex plane. The imaginary numbers are on the vertical coordinate axis. Sine is used to determine the imaginary part of a complex number given in polar coordinates (r, φ): z = r(cos(φ) + i sin(φ)), so the imaginary part is r sin(φ). Here r and φ represent the magnitude and angle of the complex number respectively, i is the imaginary unit, and z is a complex number. Although dealing with complex numbers, sine's parameter in this usage is still a real number. Sine can also take a complex number as an argument. ### Sine with a complex argument Domain coloring of sin(z) over (−π, π) on the x and y axes. Brightness indicates absolute magnitude, saturation represents complex argument. sin(z) as a vector field. The definition of the sine function for complex arguments z is sin(z) = (e^(iz) − e^(−iz))/(2i) = sinh(iz)/i, where i² = −1 and sinh is hyperbolic sine. This is an entire function. Also, for purely real x, sin(x) is real. For purely imaginary numbers, sin(iy) = i sinh(y). It is also sometimes useful to express the complex sine function in terms of the real and imaginary parts of its argument: sin(x + iy) = sin(x) cosh(y) + i cos(x) sinh(y). #### Partial fraction and product expansions of complex sine Using the partial fraction expansion technique in complex analysis, one can find that the infinite series 1/z + 2z Σ from n=1 to ∞ of (−1)ⁿ/(z² − n²) and Σ over all integers n of (−1)ⁿ/(z − n) both converge and are equal to π/sin(πz).
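These complex-argument identities can be verified numerically with Python's cmath module (the sample point is arbitrary):

```python
import cmath
import math

z = complex(1.2, -0.7)
x, y = z.real, z.imag

# Exponential definition: sin(z) = (e^{iz} - e^{-iz}) / (2i)
lhs = (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j
assert cmath.isclose(lhs, cmath.sin(z))

# Real/imaginary split: sin(x + iy) = sin(x)cosh(y) + i cos(x)sinh(y)
split = complex(math.sin(x) * math.cosh(y), math.cos(x) * math.sinh(y))
assert cmath.isclose(split, cmath.sin(z))

# Purely imaginary argument: sin(iy) = i sinh(y)
assert cmath.isclose(cmath.sin(0.3j), 1j * math.sinh(0.3))
```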
Similarly, one can find π²/sin²(πz) = Σ over all integers n of 1/(z − n)². Using the product expansion technique, one can derive sin(πz) = πz ∏ from n=1 to ∞ of (1 − z²/n²). #### Usage of complex sine sin(z) is found in the functional equation for the Gamma function, Γ(z)Γ(1 − z) = π/sin(πz), which in turn is found in the functional equation for the Riemann zeta-function, ζ(s) = 2^s π^(s−1) sin(πs/2) Γ(1 − s) ζ(1 − s). As a holomorphic function, sin(z) is a 2D solution of Laplace's equation Δu = 0: its real and imaginary parts are harmonic. It is also related to the level curves of the pendulum.[4] ### Complex graphs Plots of the real component, imaginary component, and magnitude of the complex sine function and its inverse. ## History While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The function sine (and cosine) can be traced to the jyā and koṭi-jyā functions used in Gupta period (320 to 550 CE) Indian astronomy (Aryabhatiya, Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin.[1] The first published use of the abbreviations 'sin', 'cos', and 'tan' is by the 16th century French mathematician Albert Girard; these were further promulgated by Euler (see below). The Opus palatinum de triangulis of Georg Joachim Rheticus, a student of Copernicus, was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596.
In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x.[5] Roger Cotes computed the derivative of sine in his Harmonia Mensurarum (1722).[6] Leonhard Euler's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting "Euler's formula", as well as the near-modern abbreviations sin., cos., tang., cot., sec., and cosec.[7] ### Etymology Etymologically, the word sine derives from the Sanskrit word for chord, jiva (jya being its more popular synonym). This was transliterated in Arabic as jiba جــيــب, abbreviated jb جــــب . Since Arabic is written without short vowels, "jb" was interpreted as the word jaib جــيــب, which means "bosom", when the Arabic text was translated in the 12th century into Latin by Gerard of Cremona. The translator used the Latin equivalent for "bosom", sinus (which means "bosom" or "bay" or "fold").[8][9] The English form sine was introduced in the 1590s. ## Software implementations The sine function, along with other trigonometric functions, is widely available across programming languages and platforms. In computing, it is typically abbreviated to sin. Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387. In programming languages, sin is typically either a built-in function or found within the language's standard math library. For example, the C standard library defines sine functions within math.h: sin(double), sinf(float), and sinl(long double). The parameter of each is a floating point value, specifying the angle in radians. Each function returns the same data type as it accepts. Many other trigonometric functions are also defined in math.h, such as for cosine, arc sine, and hyperbolic sine (sinh). Similarly, Python defines math.sin(x) within the built-in math module.
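As a small illustration of the interface described above (the specific values are illustrative):

```python
import math

angle = math.radians(30)              # the library routines expect radians
assert math.isclose(math.sin(angle), 0.5)

# Companions from the same module, mirroring C's math.h
assert math.isclose(math.cos(0.0), 1.0)
assert math.isclose(math.asin(0.5), angle)
assert math.isclose(math.sinh(1.0), (math.e - 1 / math.e) / 2)
```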
Complex sine functions are also available within the cmath module, e.g. cmath.sin(z). CPython's math functions call the C math library and use a double-precision floating-point format. There is no standard algorithm for calculating sine. IEEE 754-2008, the most widely used standard for floating-point computation, does not address calculating trigonometric functions such as sine.[10] Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or range of input values accepted. This can lead to different results for different algorithms, especially for special circumstances such as very large inputs, e.g. sin(10²²). A once-common programming optimization, used especially in 3D graphics, was to pre-calculate a table of sine values, for example one value per degree. This allowed results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method may offer no advantage. ## References 1. Uta C. Merzbach, Carl B. Boyer (2011), A History of Mathematics, Hoboken, N.J.: John Wiley & Sons, 3rd ed., p. 189. 2. Victor J. Katz (2008), A History of Mathematics, Boston: Addison-Wesley, 3rd ed., p. 253, sidebar 8.1. 3. See Ahlfors, pp. 43–44. 4. math.stackexchange question: why-are-the-phase-portrait-of-the-simple-plane-pendulum-and-a-domain-coloring-of ... 5. Nicolás Bourbaki (1994), Elements of the History of Mathematics, Springer. 6. See Merzbach, Boyer (2011). 7. Eli Maor (1998), Trigonometric Delights, Princeton: Princeton University Press, pp. 35–36. 8. Victor J. Katz (2008), A History of Mathematics, Boston: Addison-Wesley, 3rd ed., p. 253, sidebar 8.1. 9. Grand Challenges of Informatics, Paul Zimmermann, September 20, 2006, p. 14/31. This article is issued from Wikipedia - version of the 12/3/2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.
## On Inverse Network Problems and their Generalizations • In the context of inverse optimization, inverse versions of maximum flow and minimum cost flow problems have been investigated thoroughly. In these network flow problems there are two important problem parameters: the flow capacities of the arcs and the costs incurred by sending a unit of flow on these arcs. Capacity changes for maximum flow problems and cost changes for minimum cost flow problems have been studied under several distance measures, such as the rectilinear, Chebyshev, and Hamming distances. This thesis also deals with inverse network flow problems and their counterparts, tension problems, under the aforementioned distance measures. The major goals are to enrich inverse optimization theory by introducing new inverse network problems that have not yet been treated in the literature, and to extend the well-known combinatorial results of inverse network flows to more general classes of problems with inherent combinatorial properties, such as matroid flows on regular matroids and monotropic programming. To accomplish the first objective, the inverse maximum flow problem under the Chebyshev norm is analyzed and the capacity inverse minimum cost flow problem, in which only arc capacities are perturbed, is introduced. In this way, it is attempted to close the gap between the capacity-perturbing inverse network problems and the cost-perturbing ones. The foremost purpose of studying inverse tension problems on networks is to achieve a well-established generalization of the inverse network problems. Since tensions are duals of network flows, carrying the theoretical results of network flows over to tensions follows quite intuitively. Using this intuitive link between network flows and tensions, a generalization to matroid flows and monotropic programs is gradually built up. • German title: Inverse Netzwerkprobleme und deren Verallgemeinerung
# Expanding Binomial using Laurent Series Find a general expression for the Laurent series expansion of $f(z) = (z-\alpha)^{-n}$ when $|z|>|\alpha|$. I've tried expanding the function using the binomial theorem, and I've tried integrating with the general formula for Laurent series coefficients using Cauchy's integral formula, but I can't get the solution, which I know is supposed to be $\sum_{j=n}^{\infty} \alpha^{j-n} \frac{(j-1)!}{(n-1)!(j-n)!}z^{-j}$ - $$(z-\alpha)^{-n} = \frac{1}{z^n} \left(1-\frac{\alpha}{z}\right)^{-n}.$$ Now expand the second factor with the generalized binomial series $(1-w)^{-n} = \sum_{m=0}^{\infty}\binom{n+m-1}{m} w^m$, which converges here since $|\alpha/z| < 1$; substituting $j = n + m$ and noting $\binom{n+m-1}{m} = \frac{(j-1)!}{(n-1)!(j-n)!}$ gives the stated coefficients.
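The claimed coefficients can be sanity-checked numerically against a truncated sum (the parameters below are arbitrary test values with $|z| > |\alpha|$):

```python
import math

def laurent_partial_sum(z: complex, alpha: complex, n: int, terms: int = 60) -> complex:
    """Partial sum of sum_{j>=n} alpha^(j-n) * (j-1)!/((n-1)!(j-n)!) * z^(-j)."""
    total = 0j
    for j in range(n, n + terms):
        coeff = alpha ** (j - n) * math.comb(j - 1, j - n)  # (j-1)!/((n-1)!(j-n)!)
        total += coeff * z ** (-j)
    return total

z, alpha, n = 2 + 2j, 1.0, 3
exact = (z - alpha) ** (-n)
approx = laurent_partial_sum(z, alpha, n)
assert abs(approx - exact) < 1e-9
```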
# Reducing Validation loss for Triplet Loss Embeddings I'm trying to create a facial recognition detector using triplet loss followed by a kNN algorithm. I have roughly 10,000 input images across 3 different classes; the input size is 80x80. The model uses a ResNet50 backbone with ImageNet weights, followed by a few dense layers for the embeddings:

    base_cnn = resnet.ResNet50(
        weights="imagenet", input_shape=image_input_shape, include_top=False
    )
    for layer in base_cnn.layers:
        layer.trainable = False

    x = Flatten()(base_cnn.output)
    x = Dense(128, activation="relu",
              kernel_regularizer=tf.keras.regularizers.l2(0.0001))(x)
    x = BatchNormalization()(x)
    x = Dropout(0.5)(x)
    output = Dense(embedding_size)(x)
    embedding = Model(base_cnn.input, output, name="Embedding")

The problem is that the training loss is small, but the validation loss is barely decreasing, if at all. I assume this is due to overfitting; however, I've added everything I can think of to prevent it (dropout, regularisation, and I've played with the learning rate). I'm guessing that the dataset size is the problem, but I hoped that by using transfer learning I'd be able to get decent results with this dataset. Any suggestions on how to interpret this, and how to improve the validation loss? • How did you choose the triplets? Feb 16 at 17:43 • Train some layers even from base_cnn
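Regarding the comment on triplet choice: with only 3 classes, randomly sampled triplets quickly become trivially easy, the loss hits zero on those triplets, and the embedding stops improving while validation loss stalls. Mining harder triplets within each batch usually helps. A minimal NumPy sketch of the squared-distance triplet loss, with a demonstration that easy triplets contribute nothing (the margin value and all names here are my own, not from your code):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """max(||a - p||^2 - ||a - n||^2 + margin, 0), averaged over the batch."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 128))
p = a + 0.01 * rng.normal(size=(8, 128))   # positives right next to the anchor
n = rng.normal(size=(8, 128))              # negatives far away

assert triplet_loss(a, p, n) == 0.0        # easy triplets: zero gradient signal
assert triplet_loss(a, n, p) > 1.0         # swapped roles: large loss
```

If most of your batches look like the first case, the model has nothing left to learn from them, which would match a flat validation curve; semi-hard mining (negatives farther than the positive but within the margin) is the usual fix.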
# 4.3 Fitting linear models to data  (Page 3/14) Page 3 / 14 Given data of input and corresponding outputs from a linear function, find the best fit line using linear regression. 1. Enter the input in List 1 (L1). 2. Enter the output in List 2 (L2). 3. On a graphing utility, select Linear Regression (LinReg). ## Finding a least squares regression line Find the least squares regression line using the cricket-chirp data in [link] . 1. Enter the input (chirps) in List 1 (L1). 2. Enter the output (temperature) in List 2 (L2). See [link] . L1 44 35 20.4 33 31 35 18.5 37 26 L2 80.5 70.5 57 66 68 72 52 73.5 53 3. On a graphing utility, select Linear Regression (LinReg). Using the cricket chirp data from earlier, with technology we obtain the equation: $T\left(c\right)=30.281+1.143c$ Will there ever be a case where two different lines will serve as the best fit for the data? No. There is only one best fit line. ## Distinguishing between linear and nonlinear models As we saw above with the cricket-chirp model, some data exhibit strong linear trends, but other data, like the final exam scores plotted by age, are clearly nonlinear. Most calculators and computer software can also provide us with the correlation coefficient    , which is a measure of how closely the line fits the data. Many graphing calculators require the user to turn a ”diagnostic on” selection to find the correlation coefficient, which mathematicians label as $\text{\hspace{0.17em}}r\text{\hspace{0.17em}}$ The correlation coefficient provides an easy way to get an idea of how close to a line the data falls. We should compute the correlation coefficient only for data that follows a linear pattern or to determine the degree to which a data set is linear. If the data exhibits a nonlinear pattern, the correlation coefficient for a linear regression is meaningless. 
To get a sense for the relationship between the value of $\text{\hspace{0.17em}}r\text{\hspace{0.17em}}$ and the graph of the data, [link] shows some large data sets with their correlation coefficients. Remember, for all plots, the horizontal axis shows the input and the vertical axis shows the output. ## Correlation coefficient The correlation coefficient is a value, $\text{\hspace{0.17em}}r,$ between –1 and 1. • $r>0\text{\hspace{0.17em}}$ suggests a positive (increasing) relationship • $r<0\text{\hspace{0.17em}}$ suggests a negative (decreasing) relationship • The closer the value is to 0, the more scattered the data. • The closer the value is to 1 or –1, the less scattered the data is. ## Finding a correlation coefficient Calculate the correlation coefficient for cricket-chirp data in [link] . Because the data appear to follow a linear pattern, we can use technology to calculate $\text{\hspace{0.17em}}r\text{\hspace{0.17em}}$ Enter the inputs and corresponding outputs and select the Linear Regression. The calculator will also provide you with the correlation coefficient, $\text{\hspace{0.17em}}r=0.9509.\text{\hspace{0.17em}}$ This value is very close to 1, which suggests a strong increasing linear relationship. Note: For some calculators, the Diagnostics must be turned "on" in order to get the correlation coefficient when linear regression is performed: [2nd]>[0]>[alpha][x–1], then scroll to DIAGNOSTICSON. ## Fitting a regression line to a set of data Once we determine that a set of data is linear using the correlation coefficient, we can use the regression line to make predictions. As we learned above, a regression line is a line that is closest to the data in the scatter plot, which means that only one such line is a best fit for the data. 
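The regression line and correlation coefficient from the cricket-chirp example can be reproduced without a graphing utility; a sketch using the data from the table above:

```python
import math

chirps = [44, 35, 20.4, 33, 31, 35, 18.5, 37, 26]
temps = [80.5, 70.5, 57, 66, 68, 72, 52, 73.5, 53]

n = len(chirps)
mx = sum(chirps) / n
my = sum(temps) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(chirps, temps))
sxx = sum((x - mx) ** 2 for x in chirps)
syy = sum((y - my) ** 2 for y in temps)

slope = sxy / sxx                      # least squares slope
intercept = my - slope * mx            # least squares intercept
r = sxy / math.sqrt(sxx * syy)         # correlation coefficient

# Should reproduce T(c) = 30.281 + 1.143c with r = 0.9509
assert 1.1 < slope < 1.2
assert 30 < intercept < 31
assert 0.94 < r < 0.96
```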
what is the coefficient of -4× -1 Shedrak the operation * is x * y =x + y/ 1+(x × y) show if the operation is commutative if x × y is not equal to -1 An investment account was opened with an initial deposit of \$9,600 and earns 7.4% interest, compounded continuously. How much will the account be worth after 15 years? lim x to infinity e^1-e^-1/log(1+x) given eccentricity and a point find the equiation 12, 17, 22.... 25th term 12, 17, 22.... 25th term Akash College algebra is really hard? Absolutely, for me. My problems with math started in First grade...involving a nun Sister Anastasia, bad vision, talking & getting expelled from Catholic school. When it comes to math I just can't focus and all I can hear is our family silverware banging and clanging on the pink Formica table. Carole I'm 13 and I understand it great AJ I am 1 year old but I can do it! 1+1=2 proof very hard for me though. Atone hi Not really they are just easy concepts which can be understood if you have great basics. I am 14 I understood them easily. Vedant find the 15th term of the geometric sequince whose first is 18 and last term of 387 I know this work salma The given of f(x=x-2. then what is the value of this f(3) 5f(x+1) hmm well what is the answer Abhi If f(x) = x-2 then, f(3) when 5f(x+1) 5((3-2)+1) 5(1+1) 5(2) 10 Augustine how do they get the third part x = (32)5/4 make 5/4 into a mixed number, make that a decimal, and then multiply 32 by the decimal 5/4 turns out to be AJ how Sheref can someone help me with some logarithmic and exponential equations. 20/(×-6^2) Salomon okay, so you have 6 raised to the power of 2. what is that part of your answer I don't understand what the A with approx sign and the boxed x mean it think it's written 20/(X-6)^2 so it's 20 divided by X-6 squared Salomon I'm not sure why it wrote it the other way Salomon I got X =-6 Salomon ok. so take the square root of both sides, now you have plus or minus the square root of 20= x-6 oops. ignore that. 
so you not have an equal sign anywhere in the original equation? hmm Abhi is it a question of log Abhi 🤔. Abhi I rally confuse this number And equations too I need exactly help salma But this is not salma it's Faiza live in lousvile Ky I garbage this so I am going collage with JCTC that the of the collage thank you my friends salma Commplementary angles hello Sherica im all ears I need to learn Sherica right! what he said ⤴⤴⤴ Tamia hii Uday hi salma hi Ayuba Hello opoku hi Ali greetings from Iran Ali salut. from Algeria Bach hi Nharnhar what is a good calculator for all algebra; would a Casio fx 260 work with all algebra equations? please name the cheapest, thanks. a perfect square v²+2v+_ kkk nice
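One concrete question in the thread above, the $9,600 account at 7.4% interest compounded continuously for 15 years, is a direct application of the continuous-compounding formula A = Pe^(rt):

```python
import math

P, r, t = 9600, 0.074, 15
A = P * math.exp(r * t)   # continuous compounding: A = P * e^(r t)
# about $29,130 after 15 years
assert 29100 < A < 29160
```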
# Information Security and Cryptography Research Group ## The Round Complexity of Perfectly Secure General VSS ### Ashish Choudhury, Kaoru Kurosawa, Arpita Patra ICITS, Lecture Notes in Computer Science, Springer, vol. 6673, pp. 143-162, 2011. The round complexity of verifiable secret sharing (VSS) schemes has been studied extensively for threshold adversaries. In particular, Fitzi et al. showed an efficient $3$-round VSS for $n \geq 3t+1$ \cite{FitziVSSTCC06}, where an infinitely powerful adversary can corrupt $t$ (or fewer) parties out of $n$ parties. This paper shows that for non-threshold adversaries: 1. Two-round perfectly secure VSS is possible if and only if the underlying adversary structure satisfies the ${\mathcal Q}^4$ condition; 2. Three-round perfectly secure VSS is possible if and only if the underlying adversary structure satisfies the ${\mathcal Q}^3$ condition. Further, as a special case of our three-round protocol, we can obtain a more efficient $3$-round VSS than the VSS of Fitzi et al. for $n = 3t+1$. More precisely, the communication complexity of the reconstruction phase is reduced from ${\mathcal O}(n^3)$ to ${\mathcal O}(n^2)$. We finally point out a flaw in the reconstruction phase of the VSS of Fitzi et al., and show how to fix it. ## BibTeX Citation @inproceedings{ChKuPa11b, author = {Ashish Choudhury and Kaoru Kurosawa and Arpita Patra}, title = {The Round Complexity of Perfectly Secure General VSS}, editor = {Serge Fehr}, booktitle = {ICITS}, pages = {143--162}, series = {Lecture Notes in Computer Science}, volume = {6673}, year = {2011}, publisher = {Springer}, }
Distance to rect edge from point based on direction I have a rect that is at position (0,0) at the top left and is 2200x2200 in size. I am trying to compute the distance to the edge of the rect in a specific direction (degrees) from a point within the rect. So say I am at point (50,50) and want to move at 270 degrees until I hit the coordinate of the edge of the rect. What would be the best way to go about this? Right now I have another function I use to get a point at a given distance and direction: distance = 2200 direction = 315 toEdge = distance*math.cos(math.radians(direction)), distance*math.sin(math.radians(direction)) This just gets a point 2200 units away in the given direction. I think this is halfway there, but I can't get the math right on how to turn this into the distance to the rect edge from a given coordinate. If anyone could help me out I would be appreciative. Thanks. 1 Answer If I understood what you're asking: after you get your toEdge point, you need the intersection of a line and a rectangle to get the intersection point, then calculate the distance from the start point to that intersection point.
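To make the answer concrete, here is a sketch that skips the explicit line-rectangle intersection and takes the nearest positive ray-edge hit directly. It assumes the standard math convention (angles counterclockwise from +x, y increasing upward); for screen coordinates where y grows downward, negate the sine term:

```python
import math

def distance_to_edge(px, py, width, height, direction_deg):
    """Distance from an interior point of [0,width]x[0,height] to the
    rectangle edge along a ray at direction_deg degrees."""
    dx = math.cos(math.radians(direction_deg))
    dy = math.sin(math.radians(direction_deg))

    candidates = []
    if dx > 1e-12:
        candidates.append((width - px) / dx)    # hits the right edge
    elif dx < -1e-12:
        candidates.append(-px / dx)             # hits the left edge
    if dy > 1e-12:
        candidates.append((height - py) / dy)   # hits the top edge
    elif dy < -1e-12:
        candidates.append(-py / dy)             # hits the bottom edge

    return min(candidates)  # nearest edge crossed by the ray

# From (50, 50) heading 270 degrees (straight down here), the edge is 50 away.
assert math.isclose(distance_to_edge(50, 50, 2200, 2200, 270), 50)
```

The minimum works because, from a point inside the rectangle, the first boundary the ray reaches is always the smallest positive candidate.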
# Great to see you here! ## COVID-19 Lockdown 2.0 — an update from November 14, 2020 Austria is locked down again. Not as much as in spring, but many people have to stay at home again. Home office for the win. As I work as a construction worker I cannot work from home – and I won't have to. • I still leave the house at around 6:00 AM • I still work something around 10 hours a day • I still come home at around 6:00 PM • I still don't have much time, since I leave the house when it's dark outside and I come home when it's dark again • The rest of the time is mainly used for eating and sleeping ;-) ## Explore this website You may find useful information on this website. Use it wisely ;-) Are you interested in amateur radio? My projects page or the Amateur radio category could look interesting to you. But if you're looking for some handy notes that I wrote down to remember – the Notes category would be something for you. There's not much going on there right now, but I will fill it up when I catch some time soon. I'm just rebuilding the website because I moved back to Jekyll from Hugo. The website was last built and published on 2020-12-30 09:19:05 +0000. ## New: Today I Learned I don't work in a tech job, so there are a lot of apps and programs that I don't use that frequently, which is why I decided to write down many commands and use cases. This is probably an enhanced version of the Notes category mentioned above. ### Today I Learned categories This website uses cookies. If you don't like them, disable them in your browser or just go away. ## Recent posts These posts build a recent snapshot of what's going on on this website. ## Some of my projects You know this website's domain is oe7drt.com, which reflects my callsign. So most projects are probably ham-related. See Projects for more. ## A few last words That's it for now – I'll update things whenever possible for me. I'm quite busy in summer at work, so don't expect too much content flying in…
# Problem with the new 2017 ACM Master Article Template I'm writing a paper using the new 2017 ACM Master Article Template, and I've run into several `Undefined control sequence` messages: 1. `\orcid` — this one's not too important, I can just omit my `orcid` 2. `\@currentaffiliation ->\institution` 3. `…->\streetaddress`, `…->\city`, `…->state`, `…->postcode` 4. others that I don't need 5. `<argument> [\protect \citeauthoryear` — this one is critical Note that I get these warnings when running their example `sample-sigconf.tex`. While the 2nd and 3rd ones are a little annoying, if the control sequences are ignored, it just prints the institution, city, state, etc., which I can live with. However, the last one prevents my bibliography from printing correctly. I've looked at this similar-sounding question/answer, but that answer did not work for me (unsurprisingly, because I'm not using the `named` bibliography style). I also experimented with this approach, which seems to work, but I don't think the bibliography style it provides is ACM compliant. Has anyone else had success with this template yet? Here is the command I'm using to compile it: ```latexmk -pdf -pdflatex="pdflatex -interaction=nonstopmode" -use-make sample-sigconf.tex``` Here is my `pdflatex` version: ```▶ pdflatex --version pdfTeX 3.14159265-2.6-1.40.17 (TeX Live 2016) kpathsea version 6.2.2 Copyright 2016 Han The Thanh (pdfTeX) et al. There is NO warranty. Redistribution of this software is covered by the terms of both the pdfTeX copyright and the Lesser GNU General Public License, named COPYING, and the pdfTeX source. Primary author of pdfTeX: Han The Thanh (pdfTeX) et al. Compiled with libpng 1.6.21; using libpng 1.6.21 Compiled with zlib 1.2.8; using zlib 1.2.8``` • Here, with updated TeXLive 2016, I can compile `sample-sigconf.tex`. Try to update your TeXLive distribution. – Paul Gaborit Mar 7 '17 at 14:17 • Note: I use `Document Class: acmart 2017/03/04 v1.31`!
– Paul Gaborit Mar 7 '17 at 14:25 • @PaulGaborit, thank you! I updated my TeXLive distribution, and the problem was resolved. Make that an answer, and I'll accept it! – Ben Hocking Mar 7 '17 at 16:20 You need to update `acmart`. I introduced `\orcid` in v1.15, 2016/06/25. • I appreciate the advice, but I was already using version v1.31 (as described in the `README` file). However, asking TeXLive 2016 to update all did fix the problem, though it took a while for the update to complete. – Ben Hocking Mar 7 '17 at 17:38 • Could you run your document adding `\listfiles` before `\documentclass` and post the log? – Boris Mar 7 '17 at 17:40 • It occurs to me that perhaps the problem is me not understanding what it means to "update" `acmart`, and that perhaps by updating TeXLive 2016, I did that inadvertently. Specifically, when I downloaded what I'm calling the `acmart` files (the ones that are v1.31), while it has a acmart.dtx file, there is no .sty or .cls file in it, which seems unusual. – Ben Hocking Mar 7 '17 at 17:42 • After you download the package, run `latex` on `acmart.ins`, which produces `acmart.cls`. – Boris Mar 7 '17 at 17:52
# PMM Details It is self-evident that the price of an asset should change depending on the asset supply. When developing the PMM algorithm, the DODO Team observed two major properties of crypto markets. These properties are: 1. Most of the liquidity is concentrated around the mid-market price, i.e. the price changes non-linearly with respect to the inventory. 2. There is liquidity even if the price deviates far from the mid-market price, but it is very limited. The DODO Team therefore introduced a nonlinear equation for the price curve to make the depth distribution more consistent with the market, and more flexible as well. The equation for the price P is as follows: $P = i(1-k + k(\frac{B_0}{B})^2)$ where • $i$ is the initial "guide price" • $k$ is the "slippage factor" • $B$ denotes the current token supply • ${B_0}$ denotes the equilibrium supply (which can be interpreted as the exposure you are willing to hold) • $\frac{B_0}{B}$ is used to indicate how much the current token supply has shifted compared to the equilibrium state. Note that "equilibrium" does not mean that both tokens in a pool are worth the same. What constitutes an "equilibrium" is subjective, and anyone can set what they think is the equilibrium. Under this formula: • When $k=1$, this curve is exactly the same bonding curve as an AMM. • When $0 < k < 1$, this curve concentrates liquidity more around the $i$ price than an AMM. • When $k=0$, this curve becomes a straight line, and the price remains fixed. In token pairs, the two tokens have different names based on how they are used. They are known as base or quote tokens, abbreviated as $B$ and $Q$ (respectively). Base tokens are those tokens whose price is expressed in terms of the other (quote) token. For example, in the ETH-USDC trading pair, the price of 1 ETH (the base token) is expressed as an amount of USDC tokens (the quote token).
Despite their conceptual differences, base and quote tokens have equal status in this system, i.e. they are symmetric. So, in case there is an undersupply of quote tokens, we can replace the multiplication in the price curve equation with division, as follows: $P=i/(1-k+(\frac{Q_0}{Q})^2k)$ Therefore, the PMM price curve corresponds to the formula $P=iR$, where $R$ is determined by the following rule:
If $B<B_0$, then $R=1-k+(\frac{B_0}{B})^2k$
If $Q<Q_0$, then $R=1/(1-k+(\frac{Q_0}{Q})^2k)$
Otherwise, $R=1$.
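The piecewise rule above translates almost line-for-line into code. The following sketch is illustrative only — the function name and the sample numbers are mine, not DODO's actual implementation:

```python
def pmm_price(i, k, B, B0, Q, Q0):
    """Marginal price P = i * R from the PMM piecewise rule.

    i: guide price; k: slippage factor in [0, 1];
    B, B0: current and equilibrium base-token supplies;
    Q, Q0: current and equilibrium quote-token supplies.
    """
    if B < B0:                               # base token undersupplied
        R = 1 - k + k * (B0 / B) ** 2
    elif Q < Q0:                             # quote token undersupplied
        R = 1 / (1 - k + k * (Q0 / Q) ** 2)
    else:                                    # at equilibrium
        R = 1
    return i * R
```

The boundary behaviour matches the text: with `k = 0` the price stays pinned at `i` regardless of supply, and at equilibrium (`B = B0`, `Q = Q0`) every branch collapses to `P = i`.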
# Wind foils the plan A ballistic missile is projected from ground with speed $$100m/s$$ making $$37°$$ with horizontal. After 8 seconds of projection a strong wind starts blowing in the direction of motion imparting an acceleration of $$25m/{ s }^{ 2 }$$. The missile which was to land in enemy's camp is now heading to ally's camp. Find $$\left[ minimum\quad projection\quad speed \right]$$ of the anti-ballistic missile to destroy the ballistic missile if it takes 1 second to deploy it. Assume the acceleration imparted by the wind to both the ballistic missile and the anti-ballistic missile is the same. $$g=10m/{s}^{2}$$ $$\cos { 37° } =\frac { 4 }{ 5 }$$ This problem is originally part of set Mechanics problems by Abhishek Sharma. Try more problems here.
# 5 Card Poker probability with specific cards already dealt I'm trying to calculate probability of a 5-Card Poker hand where two cards (a single player's hand) have already been dealt. Calculating probability of receiving Two-Pair is a good example that should answer all of my questions within itself. Here is my calculation of 5 Card Poker odds of achieving Two Pair without any cards yet dealt: $$\binom{13}{2} \cdot \binom{4}{2}^2 \cdot \binom{11}{1} \cdot \binom{4}{1} = 123,552 / \binom{52}{5} = 4.7539\%$$ However, considering the probability of achieving Two Pair with a 7,2 Offsuit in the player's hand, here is my calculation: $$\binom{2}{2} \cdot \binom{3}{1}^2 \cdot \binom{11}{1} \cdot \binom{4}{1} +\left[\binom{11}{1} \cdot \binom{4}{2} \cdot \binom{2}{1} \cdot \binom{3}{1}\right] = 792/\binom{50}{3} = 4.0408\%$$ • The first 4 binomials are for the occurrence of repeats of 7 and 2 followed by any other random card of any suit, hence 2 Choose 2 for two of two specific cards and the 3 Choose 1 squared because each of the two cards would be one of the 3 remaining suits. 11 Choose 1 for any rank of the 11 remaining, and 4 Choose 1 for any suit, as they don't matter. • The 4 binomials in brackets are for the occurrence of a new pair followed by one of the previous cards (7 or 2). 11 Choose 1 for any card not 7 or 2, 4 Choose 2 because it must be two suits of that same rank. 2 Choose 1 for one of the two existing ranks (7 or 2) and any of the 3 remaining suits of this rank therefore 3 Choose 1. I add both binomial sets together to get 792 out of the 3 card flop with 50 remaining cards (52 minus 7 and 2) to get 4.0408%. I am not sure if this is correct, or if I need to subtract the Full House occurrences from this. Any feedback would be greatly appreciated. I tried testing this out by comparing the probabilities to a Mississippi Stud game (5 card poker) and getting the EV of this scenario, but the EV did not match up. • Your question is unclear IMO.
For example: "Pre-deal ($52$ cards) here is my calculation" - Calculation of what? And what does "Pre-deal" even mean? Another example: "two cards (a single player's hand)" - since when are two cards consist a single player's hand? – barak manos Oct 6 '16 at 4:03 • That is the calculation of getting two-pair in 5 card poker with ALL 52 cards still remaining, none removed. – chriskgregory Oct 6 '16 at 4:05 • Right... well, the preceding statements says something about $2$ cards which have already been dealt. Why is that no longer the case then? – barak manos Oct 6 '16 at 4:06 • for your second example it seems pretty straightforward - In 5 card poker you are dealt TWO CARDS. You must make a hand with the upcoming 3 cards (2 + 3 = 5 Card Poker) – chriskgregory Oct 6 '16 at 4:07 • I'm trying to display my method of calculation with 52 cards - then my second calculation where there are 2 cards removed, and you are using those to achieve two pair. I do not believe my second calculation is correct, therefore by displaying my method of arriving to my conclusion I figured the reader would easier be able to correct me. – chriskgregory Oct 6 '16 at 4:09 [edit: with apologies] Reinterpreting the question as: "What is the probability of forming two-pair if you already have 2 and 7 in your hand, and must draw three more cards?" • You draw another set of 2 and 7, and one other card: $\binom 3 1^2\binom{11} 1\binom 4 1$ ways • You draw either another 2 or 7, and another pair: $\binom 2 1\binom 3 1\binom{11}1\binom 4 2$ ways • Out of $\binom{50} 3$ ways to draw any 3 cards from 50. $$\dfrac{\binom 3 1^2\binom{11} 1\binom 4 1+\binom 2 1\binom 3 1\binom{11}1\binom 4 2}{\binom {50}3}$$ • I'm trying to understand but I'm having a hard time following. I am not educated and didn't even attend high school to my great regret, if you can try to elaborate a bit more I would greatly appreciate it. 
– chriskgregory Oct 6 '16 at 4:12 • Well, you could choose a king pair and a queen pair, or a pair of aces and pair of threes, or.... – Graham Kemp Oct 6 '16 at 4:12 • Ah I'm following. I thought that was what was in the []s – chriskgregory Oct 6 '16 at 4:14 • Or am I misinterpreting what 2 and 7 removed means – Graham Kemp Oct 6 '16 at 4:14 • So I have 7clubs and 2 diamonds. There are still 50 cards remaining in the deck. For my calculation of Two Pair using the 50 remaining cards, I took two scenarios and added them together - 1: 7272x 2: 7266x (66 being replaceable with any other pair). I wrote it as 11 Choose 1 by 4 Choose 1. – chriskgregory Oct 6 '16 at 4:15
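The 792 figure in the case analysis above is easy to verify by brute force: enumerate every 3-card draw from the remaining 50 cards and count those that complete exactly two pair. (This check is mine, not part of the original thread; it also settles the full-house worry, since the `[1, 2, 2]` multiplicity pattern already excludes trips and full houses.)

```python
from itertools import combinations
from collections import Counter
from math import comb

RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "T", "J", "Q", "K", "A"]
SUITS = "cdhs"
hole = [("7", "c"), ("2", "d")]  # the 7-2 offsuit hand; the exact suits don't matter
deck = [(r, s) for r in RANKS for s in SUITS if (r, s) not in hole]

def is_two_pair(cards):
    """Exactly two pair: rank multiplicities 1, 2, 2 (trips/full houses excluded)."""
    return sorted(Counter(r for r, _ in cards).values()) == [1, 2, 2]

hits = sum(is_two_pair(hole + list(draw)) for draw in combinations(deck, 3))
print(hits, comb(50, 3), hits / comb(50, 3))
```

This prints 792 out of 19600 draws, i.e. about 4.0408% — matching the hand count in the question, so no full-house correction is needed.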
1 answer # 8 10 11 12 ###### Question: 8. In unit-vector notation, A = (4.0 m)x̂ + (3.0 m)ŷ and B = (-13.0 m)x̂ + (7.0 m)ŷ. (a) What is the sum of the two vectors? (b) What are the magnitude and (c) the direction of A + B (relative to x̂)? 9. Two vectors are given by A = (4.0 m)x̂ – (3.0 m)ŷ + (1.0 m)ẑ and B = (-1.0 m)x̂ + (1.0 m)ŷ + (4.0 m)ẑ. In unit-vector notation, find (a) A + B, (b) A - B, and (c) a third vector C such that A - B + C = 0. 10. Two vectors are given by A = (4.0 m)x̂ + (3.0 m)ŷ and B = (-13.0 m)x̂ + (7.0 m)ŷ. What are the magnitude and direction of A - B? 11. A pilot decides to take his small plane for a Sunday afternoon excursion. He first flies north for 155.3 km, then makes a 90° turn to his right and flies on a straight line for 62.5 km, then makes another 90° turn to his right and flies 47.5 km on a straight line. (a) How far away from his home airport is he at this point? (b) In which direction does he need to fly from this point on to make it home in a straight line? (c) What was the farthest distance he was away from the home airport during the trip? 12. You are walking in the Lumpinee park heading southwest from your starting point, for 1.72 km. You reach a pool so you make a 90° right turn and walk another 3.12 km to a bridge. How far away are you from your starting point?
# How to fix slow covergence and highly oscillatory integrand issues? I'm trying to numerically solve an integral in a specific region and then to visualize it as follows. RegionPlot3D[ NIntegrate[1/Sqrt[r] - 1/Sqrt[l + r Sin[t]], {r, l, t} ∈ ImplicitRegion[r + l Sin[t] > 0 && l > 0 && r > 0, {r, l, t}]], {r, 0, 5}, {l, 0, 10}, {t, 0, Pi/4}] However, Mathematica complains that NIntegrate::slwcon: Numerical integration converging too slowly; suspect one of the following: singularity, value of the integration is 0, highly oscillatory integrand, or WorkingPrecision too small. I basically tried to get rid of potential singularities by taking that specific integration region into account. Yet, I have no idea about the slow convergence of highly oscillatory integrand. Edit 1: There is also another warning saying NIntegrate::ncvb: NIntegrate failed to converge to prescribed accuracy after 27 recursive bisections in l near {r,l,t} = {0.184661,1.641681647898248*10^690538901,0.184661}. NIntegrate obtained 8.935806122974667*10^22717757165+31581.2 I and 2.09657839572837915.954589770191005*^22717757166 for the integral and error estimates. How can I fix these issues? Edit 2: What I am actually looking for is the 3D plot corresponding to the following integral function F(l,\theta) where r is a fixed number (say, 10, or whatever). I am particularly struggling to get rid of singularities and divergent subsets of the variables' domains. • Your formulation is unclear to me: after integration over {r,l,t} you obtain a number so what you plot in {r,l,t}? Jul 29, 2019 at 17:38 • @user64494: I actually don't want a single number, but the parametric plot of the result of the integration according to {r,l,t}. Jul 29, 2019 at 17:40 • Don't understand. The result of NIntegrate[1/Sqrt[r] - 1/Sqrt[l + r Sin[t]], {r, l, t} ∈ ImplicitRegion[r + l Sin[t] > 0 && l > 0 && r > 0, {r, l, t}]] is a complex number (maybe, with its imaginary part equal to zero).
Jul 29, 2019 at 17:44 • (1) Why RegionPlot3D? The syntax is wrong. (2) NIntegrate::slwcon is a warning, not an error: If there are no other errors, then the integral evaluated fine. Are there other error messages? Jul 29, 2019 at 18:06 • @MichaelE2: My basic interest is in the variation of the integral result regarding different combinations of l and t (with a fixed r). Considering I am not interested in infinity, can you please show me how to plot that integral function in terms of l and t? Jul 29, 2019 at 18:24 ff[r_?NumericQ, t_?NumericQ, l_?NumericQ] := In the original code, there was a restriction, r + l Sin[t] > 0, but I'm not sure how you want to handle that. You might use Piecewise[] or multiply the integrand by Boole[r + ll Sin[tt] > 0], if appropriate.
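The question leaves the exact integral ambiguous, so here is one hedged reading: take F(l, t) to be the integral of 1/Sqrt[r] - 1/Sqrt[l + r Sin[t]] over r from 0 to some r_max, with l, t > 0 (on that domain the restriction r + l Sin[t] > 0 holds automatically). The 1/Sqrt[r] endpoint singularity is the usual trigger for NIntegrate::slwcon, and the substitution r = u² removes it exactly, since the 2u du element cancels 1/Sqrt[r]. The sketch below uses Python/NumPy purely to stay self-contained; it is not the answerer's code, and all names are mine:

```python
import numpy as np

def F(l, t, r_max=10.0, n=4001):
    """Integral of 1/sqrt(r) - 1/sqrt(l + r*sin(t)) over r in (0, r_max).

    Substituting r = u**2 turns the 1/sqrt(r) endpoint singularity into a
    smooth integrand: 2*u*du / sqrt(u**2) is just 2*du.
    """
    u = np.linspace(0.0, np.sqrt(r_max), n)
    h = 2.0 - 2.0 * u / np.sqrt(l + u**2 * np.sin(t))  # smooth on [0, sqrt(r_max)]
    return float(np.sum((h[1:] + h[:-1]) * np.diff(u)) / 2.0)  # trapezoid rule

# Tabulate F on an (l, t) grid; the grid can then be fed to ListPlot3D
# (or matplotlib's plot_surface) instead of fighting RegionPlot3D.
vals = [[F(l, t) for t in np.linspace(0.0, np.pi / 4, 30)]
        for l in np.linspace(0.1, 10.0, 30)]
```

Tabulating first and plotting second sidesteps the symbolic-region machinery entirely, which is usually the pragmatic fix when NIntegrate is only slow rather than wrong.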
# Special Aircraft Service ## Battlefield - Airborne - Tactical (BAT) => BAT Tech Help => Topic started by: proweather on January 30, 2017, 08:27:49 PM Title: Error on La Chute Maps [solved] Post by: proweather on January 30, 2017, 08:27:49 PM I get the load.ini error when I try to load either of the La Chute maps in FMB. Code: [Select] Loading map.ini defined airfields:INTERNAL ERROR: Can't open file 'MAPS/_Tex/water/PacificBlue.tga'INTERNAL ERROR: LoadTextureFromTga('MAPS/_Tex/water/PacificBlue.tga')WARNING: Error: 'WARNING: Error: 'WARNING: Error: 'WARNING: Error: 'WARNING: Error: 'WA'WARNING: TLandscape::LoadMap('WF_La_Chute_Spit973/Chute_load.ini') - errors in loadingWorld.land().LoadMap() error: java.lang.RuntimeException: Landscape 'WF_La_Chute_Spit973/Chute_load.ini' loading error[Jan 31, 2017 3:26:06 AM] -------------- END log session ------------- Title: Re: Error on La Chute Maps Post by: dpeters95 on January 30, 2017, 09:33:03 PM Just to let you know, both maps load fine for me.  I know that doesn't help much but at least you know the problem is most likely with the install. Title: Re: Error on La Chute Maps Post by: SAS~Gerax on January 31, 2017, 01:57:58 AM I get the load.ini error when I try to load either of the La Chute maps in FMB. in wich module? Title: Re: Error on La Chute Maps Post by: Uzin on January 31, 2017, 04:01:50 AM "INTERNAL ERROR: Can't open file 'MAPS/_Tex/water/PacificBlue.tga' " Isn't this the whole reason ? Missing that file ? Title: Re: Error on La Chute Maps Post by: Mono27 on January 31, 2017, 04:29:54 AM I have the same problem here with La Chute map in WAW module. Title: Re: Error on La Chute Maps Post by: Epervier on January 31, 2017, 07:30:02 AM I had the same problem in 409. Unlike the alarm it is the "PacificBlueTL.tga" file which was absent. If you want to test this ... Title: Re: Error on La Chute Maps Post by: proweather on January 31, 2017, 03:18:25 PM Thanks Epervier, but that did not help. 
Title: Re: Error on La Chute Maps Post by: Epervier on February 01, 2017, 02:11:40 AM Test this : rename "PacificBlueTL.tga" to "PacificBlue.tga" !  :D  :-X  :-| Title: Re: Error on La Chute Maps Post by: Airbourne on February 01, 2017, 02:56:14 AM This last works for me. Put in Epervier's insert and then rename PacificBlueTL.tga to PacificBlue.tga and away you go, or at least, away I go. Title: Re: Error on La Chute Maps Post by: proweather on February 01, 2017, 07:29:56 AM I renamed the file and the map loads. Thanks Epervier, problem solved. Title: Re: Error on La Chute Maps Post by: dpeters95 on February 01, 2017, 01:28:12 PM Why do these users not have the file?  I assumed one person could be an install problem but I see 3 people that saying they had a problem.  The maps load fine for me and I didn't have to add anything?  I believe them, I'm just curious... Title: Re: Error on La Chute Maps Post by: Epervier on February 01, 2017, 01:59:39 PM Have you added others stand alone Mods in your BAT ? If yes, It may be a reason ...  :-X   :-| Title: Re: Error on La Chute Maps Post by: SAS~Monty27 on February 01, 2017, 07:05:24 PM It is odd that there should be this issue for anyone.  However, Gabriel's fix is very small and easy to include as a kind of insurance file.  One of the odd things in serving a widespread universal install I guess.  Many thanks for that one. Title: Re: Error on La Chute Maps [solved] Post by: dpeters95 on February 01, 2017, 07:16:17 PM Have you added others stand alone Mods in your BAT ? If yes, It may be a reason ...  :-X   :-| Nope!  I don't use any mods in BAT.  I am also the person that tested all of these maps prior to the BAT release.  We made sure that they all worked (not just loading wise but creating and saving missions on them) before releasing BAT.  That is why I am curious why anyone can't load them. Now, I also see someone in the forum missing textures for the map Reno saying it won't load?????  It does for me... 
Title: Re: Error on La Chute Maps [solved] Post by: Mono27 on February 02, 2017, 06:32:00 AM Thanks for the help Epervier but it still doesn't work. I get a "WF_La_Chute_Spit973/Chute_load.ini" loading error. That happens when I try to load a mission from the Lancaster campaign. If I try to load the map in the FMB it just CTD. Any idea to solve this, please? Title: Re: Error on La Chute Maps [solved] Post by: Mono27 on February 02, 2017, 06:58:31 AM Nevermind. I could get it to work! :-) Title: Re: Error on La Chute Maps [solved] Post by: SAS~Gerax on February 02, 2017, 08:36:34 AM I could get it to work! :-) Will you please tell us how you did this so that other members can benefit from this?  ;) Title: Re: Error on La Chute Maps [solved] Post by: Mono27 on February 02, 2017, 10:13:57 AM Of course! I copied the file from Epervier in a wrong folder. And it should be inside the maps folder of the WAW2 module. I did a search of the "_tex" folder inside WAW2. Once I found it I copied the folder "water" (with the PacificBlueLT.tga file inside) from the downloaded zip inside that "_tex" folder and it finally works!. Now I see big white squares on the parking places of the airbases in the map. But that is another question that I don't know how to solve. Title: Re: Error on La Chute Maps [solved] Post by: dpeters95 on February 02, 2017, 08:14:06 PM Of course! I copied the file from Epervier in a wrong folder. And it should be inside the maps folder of the WAW2 module. I did a search of the "_tex" folder inside WAW2. Once I found it I copied the folder "water" (with the PacificBlueLT.tga file inside) from the downloaded zip inside that "_tex" folder and it finally works!. Now I see big white squares on the parking places of the airbases in the map. But that is another question that I don't know how to solve. Now, I'm really confused trying to figure out why you have this problem to begin with...  There is no "_TEX" folder inside the "#WAW2\MAPMODS\MAPS" folder.  
There was a "_TEX" folder in the "#WAW\MAPMODS\MAPS" folder in CUP, but not in #WAW2...  Why do you have one?  Did you install other mods?  Maybe it's me?  Except that mine works. Also, here are the parking spaces on the that airbase: (http://i1250.photobucket.com/albums/hh539/dpeters95/Parking%20Spaces.jpg) I have no problem with them either.  There is certainly something different with your install. Title: Re: Error on La Chute Maps [solved] Post by: PhantomII on February 02, 2017, 08:25:53 PM Hi Dpeters, The only folder in my BAT install that has a _tex folder is the #SAS folder. I"m just wondering if he put it in this folder. I have no trouble loading either one of the LaChute maps. Title: Re: Error on La Chute Maps [solved] Post by: dpeters95 on February 02, 2017, 09:27:01 PM Hi Dpeters, The only folder in my BAT install that has a _tex folder is the #SAS folder. I"m just wondering if he put it in this folder. I have no trouble loading either one of the LaChute maps. True, not sure why it's there but there is an empty "_TEX" folder in the #SAS folder along with a copy of the "all.ini" file.  But, he also has other problems with those maps leading me to believe he has an install problem. I believe the PacificBlue.tga water texture is an add on texture which would need to be included with the La Chute map files.  So, they could have been left out (if everyone else has the same problem), however, those parking areas appear to use one of the default IL2 parking textures as you can see from my snapshot above.  That suggests that it's an install problem. Title: Re: Error on La Chute Maps [solved] Post by: Mono27 on February 03, 2017, 04:52:07 AM Dpeters, now I am not sure...I will look what I did when I am at home. Maybe I explained it wrong. In that case I am sorry. Title: Re: Error on La Chute Maps Post by: benson on February 03, 2017, 05:06:19 AM dpeters. I see your illustration above on how the runways are fine in your install. 
I started a different thread on this problem. I too can see the exact same view as yours while looking at the map in the editor but if I spawn an aircraft there and then go ingame I see a large black square. Title: Re: Error on La Chute Maps [solved] Post by: Mono27 on February 03, 2017, 08:51:21 AM Hi. This is the path I have: WAW2/MAPMODS/Maps/_Tex/water/PacificBlueTL.tga The map works fine except for the strange squares on the parking areas. Title: Re: Error on La Chute Maps [solved] Post by: dpeters95 on February 03, 2017, 09:57:18 AM Hi. This is the path I have: WAW2/MAPMODS/Maps/_Tex/water/PacificBlueTL.tga The map works fine except for the strange squares on the parking areas. That is exactly what I am confused about.  There is no "_Tex" folder in the WAW2 folder by default.  Why did you have one?  Don't get me wrong here, I am not doubting you, I am simply trying to figure out why at least 2-3 of you have this problem and what can be done to fix it without continually adding duplicate textures into future releases.  If it is an install problem, you are probably missing more map files you just don't know it right now. I checked in game and I do not have a problem with the parking area.  I will post about that in the other topic... Title: Re: Error on La Chute Maps [solved] Post by: talhuman on February 06, 2017, 07:28:11 PM Lancaster campaign is not working. WAW2 MAPMODSfolder only has an all.ini file no tex or .tga files Title: Re: Error on La Chute Maps [solved] Post by: dpeters95 on February 06, 2017, 08:40:25 PM Lancaster campaign is not working. WAW2 MAPMODSfolder only has an all.ini file no tex or .tga files That's because all the map files are compressed into SFS files.  You won't see the individual map files in the #WAW2 folders.  Not familiar with the Lancaster campaign.  Was it written for BAT, if not, there is no guarantee it will work.  The campaign might need to be updated. 
Title: Re: Error on La Chute Maps [solved] Post by: Sharkzz on February 17, 2017, 01:00:41 AM heloo, @  Test this : rename "PacificBlueTL.tga" to "PacificBlue.tga" as was said, I too have a problem with chute map, this I could fix in cup when there were tex. folder etc, but not now, as also is obvious, there are NO tex folders etc in BAT.. so how can I rename "PacificBlueTL.tga" to "PacificBlue.tga" ? when it is all locked up in SFS file ?. this is in WAW2 FMB, ( as the Lanc campaign uses the chute map I haven't, and wont try it till I can fix it. like I did in CUP ). BAT installed on clean 4.12  etc as it should be, as it is within FMB I gather it is not just a WAW2 prob. though I haven't used other version/s in activator yet. so where and how can I rename and place the "fix" offered ? any help will be appreciated muchly. & it's always the chute map LOL ..... sharkzz Title: Re: Error on La Chute Maps [solved] Post by: Sharkzz on February 17, 2017, 10:44:17 AM G'day Sharkzz, hope this helps mate.  :-* LOL Ok, On reading through this thread, we know that there aren't texture and map folders as we had in CUP. BUT, to solve this Chute problem I tried a few things just to see what would happen. In regards to the Chute map, which quite a few of us had/have problems with, in CUP, We, the problem children  (lol), worked out that changing texture tga names worked, as well as adding files ie: land textures etc, and mixing up what chute maps we could lay our hands on, fixed the problem. SO SAYING, In WAW2 - MAPS folder I 1st placed into the WAW2 maps folder texture and "working" Chute  maps folders, renaming the water texture fixed the summer chute map, but didn't fix the winter map I still got in FMB the missing land error text you get in FMB when the map doesn't load correctly.  
So saying again, (sorry to repeat myself so much but it's 4.31 am here and lack of sleep due to pain I am only able to get 2 to 3 hours sleep a night for the last 6 months and it's taking its toll), I copied a winter Chute map folder WHOLLY into the WAW2 MAPS folder, (to fix the winter map). and NOW, I have working winter and summer Chute maps. I DID NOT HAVE TO ALTER ANY TEXT IN THE "all in" file. ! Lanc campaign loads and works great as well as being able to open in FMB BOTH Chute maps. ( phew !). So in my case, (I am NOT certain about other people's installs of BAT), JUST placing a texture folder with known working map folders, in this case Chute, has fixed the problem with no noticeable bad effects to the game. ALL is going great . I hope this helps those in need. If you need some help with a clearer explanation PLEASE ask me and I will endeavour to help you as much as I can, with my limited knowledge and intelligence (not much I'm afraid in that area lol). Cheers to ALL...... BAT is truly the next BEST step in what SAS has been doing with IL2 and I am extremely happy and enjoying BAT !. To Sharkzz,  from Sharkzz  8) Thanks m8  LOL.... ~S~ Salutae' Title: Re: Error on La Chute Maps Post by: Cheyenne on March 03, 2017, 11:59:24 PM Ok I dl the BAT and installed it on a clean copy of Il-2 4.12.1 and all the maps load fine except the La Chute map. I got the same error reading as said in the 1st post. After reading through this post I still do not understand a thing of what has been said or where to look. Please explain how I can get this map working. Please explain in a way I can understand. Thank You Title: Re: Error on La Chute Maps Post by: vpmedia on March 04, 2017, 12:03:37 AM open the maps load.ini and change the water to a stock setting, for example [WATER] Water    = water/Water.tga or Water    = water/PacificWater.tga
# sjPlot 1.3 available #rstats #sjPlot March 27, 2014 By (This article was first published on Strenge Jacke! » R, and kindly contributed to R-bloggers) I just submitted my package update (version 1.3) to CRAN. The download is already available (currently source, binaries follow). While the last two updates included new functions for table outputs (see here and here for details on these functions), the current update only provides small helper functions as new functions. The focus of this update was to improve existing functions and make their handling easier and more comfortable. ## Automatic label detection One major feature is that many functions now automatically detect variable and value labels, if possible. For instance, if you have imported a SPSS dataset (e.g. with the function sji.SPSS), value labels are automatically attached to all variables of the data frame. With the autoAttachVarLabels parameter set to TRUE, even variable labels will be attached to the data frame after importing the SPSS data. If you have factors with specified factor levels, these will automatically be used as value labels. Furthermore, you can manually attach value and variable labels using the new functions sji.setVariableLabels and sji.setValueLabels. But what exactly are the benefits of this new feature? Let me give you an example. To plot a proportional table with axis and legend labels, prior to sjPlot 1.3 you needed the following code: data(efc) efc.val <- sji.getValueLabels(efc) efc.var <- sji.getVariableLabels(efc) sjp.xtab(efc$e16sex, efc$e42dep, axisLabels.x=efc.val[['e16sex']], legendTitle=efc.var['e42dep'], legendLabels=efc.val[['e42dep']]) Since version 1.3, you only need to write: data(efc) sjp.xtab(efc$e16sex, efc$e42dep) ## Reliability check for index scores One new table output function included in this update is sjt.itemanalysis, which helps performing an item analysis on a scale or data frame if you want to develop scales or index scores.
Let’s say you have several items and you want to compute a principal component analysis in order to identify different components that can be composed to an index score. In such cases, you might want to perform reliability and item discrimination tests. This is shown in the following example, which performs a PCA on the COPE-Index scale, followed by a reliability analysis of each extracted “score”: data(efc) df <- as.data.frame(efc[,c(start:end)]) colnames(df) <- sji.getVariableLabels(efc)[c(start:end)] factor.groups <- sjt.pca(df, no.output=TRUE)$factor.index sjt.itemanalysis(df, factor.groups) The result is the following table, where two components have been extracted via the PCA, and the variables belonging to each component are treated as one “index score”: Note that you don’t need to define groups, you can also treat a data frame as one single “index”. Besides that, many functions – especially the table output functions – got new parameters to change the appearance of the output (amount of digits, including NA’s, additional information in tables etc.). Refer to the package news to get a complete overview of what was changed since the last version. The latest developer build can be found on github. Tagged: R, rstats, sjPlot
How to calculate distance to other galaxies using type 1a supernova? This is for a project in astronomy with extensive data analysis. How should I calculate the distance of galaxies using AGN and supernova Ia? I am using the data from dr17 of SDSS. • What have you already tried? Type Ia have absolute magnitudes of -19.3, so you probably could use inverse square law given you have magnitude of the Ia in said galaxy. Jan 31, 2022 at 18:19 • @fasterthanlight But who ever has that information? Jan 31, 2022 at 19:18 $$M=-21.726+2.698\cdot \Delta m$$ where $$\Delta m$$ is the observed decline of the B-magnitude 15 days after maximum. With the observed apparent magnitude $$m$$ you have then the distance modulus $$m-M$$ and thus the distance $$d$$ $$d=10^{1+(m-M)/5} \ \mathrm{Parsecs}$$
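Combining the two formulas in the answer gives a one-shot distance estimator. This is a sketch: the function name and the example decline rate are mine, not from the thread:

```python
def sn1a_distance_pc(m, delta_m15):
    """Distance in parsecs from a Type Ia's apparent peak B magnitude m
    and its observed 15-day B-band decline delta_m15, via the relation above."""
    M = -21.726 + 2.698 * delta_m15  # absolute B magnitude from the decline rate
    mu = m - M                       # distance modulus m - M
    return 10 ** (1 + mu / 5)        # d = 10^(1 + (m - M)/5) parsecs
```

For a typical decline of Δm ≈ 1.1 mag this gives M ≈ -18.76, in the same ballpark as the canonical -19.3 quoted in the comments; the apparent magnitude then fixes the distance through the modulus.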
# Revision history [back] ### How to export a plot to a jpg/png file Hi everybody, I've got some problems with saving/exporting images. This is my code: var ('x y z') #Define a 3d curve as the intersection of 2 surfaces #(but not explicit in z so we must use implicit plots) # Let's plot xmin=-3; xmax=3; ymin=-1; ymax=5; zmin=-3; zmax=3; # Axes Ax=line3d(([xmin,0,0],[xmax,0,0]), thickness=2, color='red') Ay=line3d(([0,ymin,0],[0,ymax,0]), thickness=2, color='blue') Az=line3d(([0,0,zmin],[0,0,zmax]), thickness=2, color='green') # Surfaces S1=implicit_plot3d(y==x^2+z^2,(x,xmin,xmax),(y,ymin,ymax),(z,zmin,zmax), opacity=0.4, color='yellow') S2=implicit_plot3d(z==y/2,(x,xmin,xmax),(y,ymin,ymax),(z,zmin,zmax), opacity=0.4, color='purple') show(S1+S2+Ax+Ay+Az, aspect_ratio=[1,1,1]) I just wanted to execute the code, rotate the plot right from the graphics and export the image as a png file. This should be done by right-clicking on the plot and choosing export to png from the menu... Unfortunately, the output is a blank screen with just the string "Invalid JSmol request: saveFile" Any suggestion, please? I'm working with the OS X version, and I'm a total newbie... Thanks in advance, Mauro
This is my code: var ('x y z') #Define a 3d curve as the intersection of 2 surfaces #(but not explicit in z so we must use implicit plots) # Let's plot xmin=-3; xmax=3; ymin=-1; ymax=5; zmin=-3; zmax=3; # Axes Ax=line3d(([xmin,0,0],[xmax,0,0]), thickness=2, color='red') Ay=line3d(([0,ymin,0],[0,ymax,0]), thickness=2, color='blue') Az=line3d(([0,0,zmin],[0,0,zmax]), thickness=2, color='green') # Surfaces S1=implicit_plot3d(y==x^2+z^2,(x,xmin,xmax),(y,ymin,ymax),(z,zmin,zmax), opacity=0.4, color='yellow') S2=implicit_plot3d(z==y/2,(x,xmin,xmax),(y,ymin,ymax),(z,zmin,zmax), opacity=0.4, color='purple') show(S1+S2+Ax+Ay+Az, aspect_ratio=[1,1,1]) I just wanted to execute the code, rotate the plot right from the graphics and export the image as a png file. This should by done by right-clicking on the plot and chosing export to png from the menu... Unfortunately, the output is a blank screen with just the string "Invalid JSmol request: saveFile" Any suggestion, please? I'm working with the OS X version, and I'm a total newbie... Thanks in advice, Mauro 3 retagged tmonteil 23073 ●25 ●166 ●422 http://wiki.sagemath.o... ### How to export a plot to a jpg/png file Hi everybody, I've got some problems with saving/exporting images. This is my code: var ('x y z') #Define a 3d curve as the intersection of 2 surfaces #(but not explicit in z so we must use implicit plots) # Let's plot xmin=-3; xmax=3; ymin=-1; ymax=5; zmin=-3; zmax=3; # Axes Ax=line3d(([xmin,0,0],[xmax,0,0]), thickness=2, color='red') Ay=line3d(([0,ymin,0],[0,ymax,0]), thickness=2, color='blue') Az=line3d(([0,0,zmin],[0,0,zmax]), thickness=2, color='green') # Surfaces S1=implicit_plot3d(y==x^2+z^2,(x,xmin,xmax),(y,ymin,ymax),(z,zmin,zmax), opacity=0.4, color='yellow') S2=implicit_plot3d(z==y/2,(x,xmin,xmax),(y,ymin,ymax),(z,zmin,zmax), opacity=0.4, color='purple') show(S1+S2+Ax+Ay+Az, aspect_ratio=[1,1,1]) I just wanted to execute the code, rotate the plot right from the graphics and export the image as a png file. 
This should by done by right-clicking on the plot and chosing export to png from the menu... Unfortunately, the output is a blank screen with just the string "Invalid JSmol request: saveFile" Any suggestion, please? I'm working with the OS X version, and I'm a total newbie... Thanks in advice, Mauro 4 retagged FrédéricC 3640 ●3 ●36 ●73 ### How to export a plot to a jpg/png file Hi everybody, I've got some problems with saving/exporting images. This is my code: var ('x y z') #Define a 3d curve as the intersection of 2 surfaces #(but not explicit in z so we must use implicit plots) # Let's plot xmin=-3; xmax=3; ymin=-1; ymax=5; zmin=-3; zmax=3; # Axes Ax=line3d(([xmin,0,0],[xmax,0,0]), thickness=2, color='red') Ay=line3d(([0,ymin,0],[0,ymax,0]), thickness=2, color='blue') Az=line3d(([0,0,zmin],[0,0,zmax]), thickness=2, color='green') # Surfaces S1=implicit_plot3d(y==x^2+z^2,(x,xmin,xmax),(y,ymin,ymax),(z,zmin,zmax), opacity=0.4, color='yellow') S2=implicit_plot3d(z==y/2,(x,xmin,xmax),(y,ymin,ymax),(z,zmin,zmax), opacity=0.4, color='purple') show(S1+S2+Ax+Ay+Az, aspect_ratio=[1,1,1]) I just wanted to execute the code, rotate the plot right from the graphics and export the image as a png file. This should by done by right-clicking on the plot and chosing export to png from the menu... Unfortunately, the output is a blank screen with just the string "Invalid JSmol request: saveFile" Any suggestion, please? I'm working with the OS X version, and I'm a total newbie... Thanks in advice, Mauro
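A possible way around the broken right-click export (my suggestion, not from the thread) is to skip the JSmol menu entirely and let Sage write the file itself; `Graphics3d` objects have a `.save()` method. The filenames and viewer choice below are my assumptions, and this only runs inside a Sage session:

    # Save the combined plot straight from Sage instead of via the JSmol menu.
    P = S1 + S2 + Ax + Ay + Az
    P.save('plot.png')                     # static raster image
    P.save('plot.html', viewer='threejs')  # interactive copy, in recent Sage versions

If saving to PNG complains about the renderer, showing the plot with a different viewer (e.g. ray-tracing via Tachyon) may be worth trying.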
## Precalculus (6th Edition)

(a) slope = $-\frac{1}{3}$; y-intercept: $(0, 2)$; x-intercept: $(6, 0)$
(b) $y=-\frac{1}{3}x+2$

RECALL:
(1) The slope is the ratio rise (change in y) over run (change in x).
(2) The x-intercept is the point where the graph touches/crosses the x-axis.
(3) The y-intercept is the point where the graph touches/crosses the y-axis.
(4) The slope-intercept form of a line's equation is $y=mx+b$, where $m$ = slope and $(0, b)$ is the line's y-intercept.

(a) From the point $(0, 2)$ to the point $(3, 1)$, the change in $y$ is $-1$ while the change in $x$ is $3$. Thus, the slope is $-\frac{1}{3}$. The graph crosses the y-axis at the y-intercept $(0, 2)$. With a slope of $-\frac{1}{3}$, a change of $3$ units in $x$ corresponds to a 1-unit decrease in the value of $y$. Thus, from $(3, 1)$, a 3-unit increase in $x$ gives a 1-unit decrease in $y$, leading to the x-intercept $(6, 0)$.

(b) With a slope of $-\frac{1}{3}$ and a y-intercept of $(0, 2)$, the equation of the line is $y=-\frac{1}{3}x+2$.
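A quick numeric check (mine, not part of the textbook answer) that the equation from part (b) reproduces the three points used in part (a):

```java
// Check that y = -(1/3)x + 2 passes through the y-intercept (0, 2),
// the plotted point (3, 1), and the x-intercept (6, 0).
public class LineCheck {
    static double y(double x) {
        return -x / 3.0 + 2.0;
    }

    public static void main(String[] args) {
        System.out.println(y(0)); // 2.0 -> y-intercept (0, 2)
        System.out.println(y(3)); // 1.0 -> the second plotted point (3, 1)
        System.out.println(y(6)); // 0.0 -> x-intercept (6, 0)
    }
}
```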
# Analogy for abstract classes

Abstract classes are peculiar things. Consider:

    public abstract class Shape {
        String name;
        protected Point startPoint;

        public Shape(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }

        public abstract double getArea();
    }

The classic way to explain them is to say that they are like interfaces, but can implement methods. But explaining this way seems counter to their essential nature, and forces us to devolve into a series of random-factoid comparisons between interfaces and abstract classes. (Interestingly, the JavaDoc commits this particular sin.) At no point in that discussion do we even try to get at the intuitive heart of what abstract classes really are.

On the other hand, classes can also be declared "abstract" even if everything is fully implemented and there are no abstract methods at all, which the programmer can do simply to make sure that the class cannot be instantiated directly. This lends itself to a different kind of explanation. We could say that "Abstract classes represent abstract ideas, so that means that we can't instantiate them, and they can declare methods that must be implemented by child-classes." But, of course, they can have constructor methods. This seems mysterious until you realize that they are instantiated during constructor-chaining, so saying that they can't ever be instantiated is really not true.

I can think of no explanation for abstract classes that really gets to their essential nature. Has anyone else found an elegant way to describe them?
The rules for abstract classes are the same as for other classes, with a few minor differences:

• An abstract class may have methods that do not have a body
• An abstract class has to be sub-classed, at some level, to a non-abstract class before you can instantiate an object
• It is not legal to directly instantiate an object of an abstract class

With an abstract base class that has abstract methods, or methods without bodies, all of those methods must be implemented in the subclass, at some level, before instantiation of an object is allowed.

So, imagine the school's official letterhead. At the top is the school logo, name, address, and maybe their motto. Everyone sending an official letter from the school is supposed to use the same letterhead, and they should use the same style and format as other official letters. Sometimes it's allowed to add to the top, such as when the principal gets to add his name to the letterhead, but others are not supposed to do that. To make it easier for everyone to follow the same format and style, the secretary created a "sample" letter for everyone to use. Still, some people just didn't get it right, especially the Athletics Department, who used a totally different letterhead. Finally, to end the problem, the principal made one perfect letter. Instead of real information, however, he used placeholders. It looks something like this:

    The School of Choice                      342-54-87971
    15 Our Plaza, NE                          342-54-87915 fax
    Best Town, Milan                          @school_of_choice
    "Best Students & Best Teachers" - "Better Grades"

    From: {your name here}                    {today's date}
    TO: {name of recipient}
    Subject: {subject goes here}

    Mauris ut mattis dolor. Donec mollis arcu a augue varius, eu pharetra ante gravida. Sed ut posuere mi, vitae luctus urna. Vestibulum porta, velit quis facilisis hendrerit, nibh magna faucibus urna, a pellentesque arcu nisi a tortor. Suspendisse nec maximus lorem. Maecenas convallis lorem quis neque fringilla, sit amet fermentum justo rutrum.
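The letterhead story maps almost line-for-line onto code. A hedged Java sketch (all class and method names here are my inventions, not from the answer): the template is the abstract class, the {placeholders} are abstract methods, and the school-wide web address is an overridable default:

```java
// The letterhead analogy in code: the template cannot be "printed" (instantiated)
// until a subclass fills in every placeholder.
abstract class LetterTemplate {
    // fixed letterhead shared by every letter (cannot be changed by teachers)
    final String letterhead() {
        return "The School of Choice, 15 Our Plaza, NE";
    }

    // the {placeholders}: the letter is not "legal" until these are filled in
    abstract String fromName();
    abstract String subject();

    // overridable default, like replacing the school-wide address
    String webPage() {
        return "www.school-of-choice.edu";
    }

    final String print() {
        return letterhead() + " | From: " + fromName()
                + " | Subject: " + subject() + " | " + webPage();
    }
}

class CompSciLetter extends LetterTemplate {
    @Override String fromName() { return "Ms. Ada"; }
    @Override String subject()  { return "Your grade"; }
    @Override String webPage()  { return "www.school-of-choice.edu/comp-sci/"; }
}

public class Mailroom {
    public static void main(String[] args) {
        // new LetterTemplate() would not compile: the template stays abstract
        System.out.println(new CompSciLetter().print());
    }
}
```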
    Maecenas. Vivamus elementum consequat semper. Nullam consectetur nunc quis turpis scelerisque pellentesque. Proin blandit, nulla a imperdiet vestibulum, odio dui tempor neque, sit amet vehicula magna nibh in tortor. Fusce nec sapien vitae nibh faucibus hendrerit. Nam fermentum egestas arcu in vulputate. Proin vulputate metus libero.

    www.school-of-choice.edu

This "letter" is saved on the school's network where any faculty member can access it, but students cannot. (It wouldn't be right for students to send out "official" letters, would it?) The "letter" is also made read-only. When a teacher needs to send a note to your parents (oops!) they can open Word and load this letter template. They cannot save the template because it's read-only [it is an abstract letter], but they can save a copy of it with a new name [a subclass of the letter]. Still, if they just printed this and mailed it, it would look really bad for the teacher [it is still an abstract letter]. In order to make the letter "legal" to send, all the {your name here} type things have to be replaced with real information [replace the abstract methods with real ones]. If the teacher's department has its own web page on the school's site, then it can replace the last line, www.school-of-choice.edu, with the full page address www.school-of-choice.edu/comp-sci/ [override a base class method]. Now that everything is replaced, it is "legal" to print the letter [instantiate it], and send it to your parents. (Yikes!)

• Good high school example, and pretty rich. – Ben I. Jun 10 '17 at 14:25

It all came about because the creators of Java looked at C++ and saw that it did not implement multiple inheritance safely (whereas it was actually repeated inheritance that was unsafe, but they did not realise this; and it is just C++'s implementation that makes it difficult to do repeated inheritance safely), and so decided that multiple inheritance was unsafe. They therefore decided not to have multiple inheritance in Java.
They then used abstract classes and interfaces in place of deferred classes. A deferred class is one concept, but Java has two keywords, interface and abstract, to implement them. A deferred class is one that cannot be instantiated; it may have zero or more abstract methods. In Java, when all of the methods are abstract, we make it an interface, so as to allow multiple (interface) inheritance. Therefore an analogy for an abstract class and for an interface is a deferred class. An analogy for a deferred class is just a class that is not finished, but is useful to inherit from (for reuse).

You can explain it like this: Abstract classes are theoretical/conceptual blueprints for a building. When you want to create a concrete (no pun intended) blueprint out of the conceptual one, you are likely to make some modifications here and there, but essentially you'd be bound to the conceptual blueprint. However, you did, in fact, create a blueprint using the conceptual idea, so in a way you did instantiate it when you created the child blueprints. This is the same with the classes: the abstract class isn't created, but a child is created, and because of the way polymorphism works (a class extending another class effectively makes the child into the parent's type), you are in a way "instantiating" the abstract class. This analogy gives the idea that by creating the child class, you instantiate the abstract class via constructor chaining.

I tend to begin with an example of a class which allows us to share behaviors with related child classes for which there is no clear parent that you would want to use (your Shape class is a good example of this). The example I typically use is that of a Student class and an Instructor class, with a Person abstract class that they can each inherit from. We then talk about the mechanical details of abstracts, and how they differ from interfaces.
To motivate how they relate to interfaces, and should be used differently, I tend to lean on the "is-a" nature of inheritance, rather than the behavior-oriented nature of interfaces. Talking about method implementation is no longer a good motivator, since as of Java 8, interfaces can implement default methods. This illustrates practical and technical differences, while emphasizing how proper use of abstracts and interfaces plays into good application design.

• That's an especially good point about Java 8 - I didn't know that that change had happened. It sounds like we're almost back to the multiple inheritance problem, then. – Ben I. Jun 10 '17 at 13:45

You can think of it from the bottom-up perspective. You're making a program for a fast-food restaurant. So you have a Hamburger class, a FrenchFry class, a Soda class, and so on. Eventually, you realize that there are lots of similarities in all of those classes. They all have mass, calorie counts, prices, and so on. So it makes sense to have a parent class called Food where those common items can be defined once and inherited by all of the other classes. Except, of course, one shouldn't be able to instantiate the Food class directly. Food items should belong to an identifiable subclass like Hamburger or FrenchFry. (Insert school cafeteria joke here.)

I'd try something like: As we have seen, it can be useful to move common fields or methods to a common super type, so we have to write (and debug) them only once. However, sometimes the common behavior is only a small part of an object that does not have a meaningful existence on its own. Java therefore allows us to declare abstract classes. Abstract classes are incomplete in that they do not have to implement all the methods they declare. As a consequence, they cannot be instantiated on their own. However, if we implement the methods in a subclass, we can instantiate that one, and thereby use the shared code.
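A compilable version of the Shape example from the question, plus an invented Circle subclass, makes the constructor-chaining point concrete: the abstract parent's constructor does run, via `super(...)`, even though `new Shape(...)` itself is a compile error.

```java
abstract class Shape {
    private final String name;

    protected Shape(String name) {    // executed during constructor chaining
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public abstract double getArea(); // no body: each subclass must supply one
}

class Circle extends Shape {
    private final double radius;

    Circle(double radius) {
        super("circle");              // chains into Shape's constructor
        this.radius = radius;
    }

    @Override
    public double getArea() {
        return Math.PI * radius * radius;
    }
}

public class Demo {
    public static void main(String[] args) {
        // Shape s = new Shape("?");  // would not compile: Shape is abstract
        Shape s = new Circle(2.0);    // Shape's constructor already ran here
        System.out.println(s.getName() + ": " + s.getArea());
    }
}
```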
I'd gloss over people making classes abstract only to prevent their instantiation, as I consider it a questionable practice.
## D’Alembert’s Test of Convergence of Series

Statement: A series $\sum {u_n}$ of positive terms is convergent if, from and after some fixed term, $\dfrac{u_{n+1}}{u_n} < r < 1$, where $r$ is a fixed number independent of $n$. The series is divergent if $\dfrac{u_{n+1}}{u_n} > 1$ from and after some fixed term. D’Alembert’s Test is also known as the ratio test…
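A numeric illustration with an example series of my choosing (not part of the original statement): for $u_n = x^n/n!$ the ratio is $u_{n+1}/u_n = x/(n+1)$, which for any fixed $x$ eventually stays below a fixed $r < 1$, so the test gives convergence:

```java
// Ratio test applied to u_n = x^n / n!, where u_{n+1}/u_n = x/(n+1).
public class RatioTest {
    // the ratio u_{n+1}/u_n for u_n = x^n / n!
    static double ratio(double x, int n) {
        return x / (n + 1);
    }

    public static void main(String[] args) {
        double x = 5.0;
        for (int n = 1; n <= 16; n *= 2) {
            System.out.printf("n=%2d  u_(n+1)/u_n = %.4f%n", n, ratio(x, n));
        }
        // For x = 5 the ratio is below 1 for every n >= 5 and tends to 0,
        // which is exactly the "from and after some fixed term" condition.
    }
}
```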
# Surface Area of a Molecule

Geometry Level 4

A new molecule $$\ce{A2Z}$$ has been discovered. It is made of two atoms of $$\ce{A}$$ and one atom of $$\ce{Z}$$. The surface area of a molecule can tell us loads about its chemical and physical properties, hence it is imperative that we calculate it. If the surface area of an $$\ce{A2Z}$$ molecule is $$S \text{ pm}^2$$, then what is $$\dfrac{S}{\pi}$$?

Details and Assumptions

• We can model the molecule as three overlapping spheres as shown in the animation. The white spheres are atoms of $$\ce{A}$$ and the red sphere is an atom of $$\ce{Z}$$.
• The radius of Sphere $$\ce{A}$$ is 40 pm.
• The radius of Sphere $$\ce{Z}$$ is 60 pm.
• The distance between the centers of Sphere $$\ce{A}$$ and Sphere $$\ce{Z}$$ is 80 pm.
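One way to compute $S$ is with spherical caps. Since the animation is not reproduced here, part of the geometry is assumed: the sketch below takes the two A spheres to intersect only the Z sphere, not each other, so only the A-Z overlaps are removed from the total area.

```java
// Exposed surface of two A spheres and one Z sphere, removing the buried
// spherical caps (cap area = 2*pi*r*h).
public class MoleculeArea {
    // Height of the cap cut from a sphere of radius r1 by a sphere of radius r2
    // whose center lies d away: h = r1 - (d^2 + r1^2 - r2^2) / (2d)
    static double capHeight(double r1, double r2, double d) {
        return r1 - (d * d + r1 * r1 - r2 * r2) / (2 * d);
    }

    // S / pi for two A spheres and one Z sphere
    static double surfaceOverPi(double rA, double rZ, double d) {
        double hA = capHeight(rA, rZ, d); // part of each A buried in Z
        double hZ = capHeight(rZ, rA, d); // part of Z buried in each A
        double aSpheres = 2 * (4 * rA * rA - 2 * rA * hA); // two A spheres
        double zSphere = 4 * rZ * rZ - 2 * (2 * rZ * hZ);  // one Z, two caps
        return aSpheres + zSphere;
    }

    public static void main(String[] args) {
        // radii and center distance in picometres, from the problem statement
        System.out.println(surfaceOverPi(40, 60, 80)); // 23400.0 under the assumption above
    }
}
```

Each A-Z pair buries a cap of height 12.5 pm on A and 7.5 pm on Z, giving $S/\pi = 23400$ under the no-A-A-overlap assumption; if the bond angle in the animation brings the A spheres into contact, their mutual caps would have to be subtracted as well.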
# Philosophy:Kaharingan

Kaharingan is a folk religion professed by many Dayaks in Kalimantan, Indonesia, particularly Central Kalimantan, although many have converted to Christianity or Islam. The word means life, and this belief system includes a concept of a supreme deity—although this may be the result of the need to conform to the idea of "One Supreme God" (Ketuhanan yang Maha Esa), which is the first principle of the Indonesian state ideology Pancasila. Hindu-Javanese influence can be seen in this religion; because the Indonesian government recognizes only six official religions and Kaharingan is not one of them, it is officially viewed as a form of Hinduism.[1][2]

The main festival of Kaharingan is the Tiwah festival, which lasts for thirty days and involves the sacrifice of many animals, such as buffaloes, cows, pigs and chickens, as offerings to the Supreme God.[3] The supreme God in Kaharingan is called Ranying. In addition, the religion has ritual offerings called Yadnya, places of worship called Balai Basarah or Balai Kaharingan, and a holy book called Panaturan, along with Talatah Basarah (a group of prayers) and Tawar (a guide to seeking God's help by giving rice).

## Overview

Sandung of Dayak Pesaguan people in Ketapang Regency, West Kalimantan. Note a sculpture of a dragon above it.
Sandung of Dayak Pebihingan people, side by side with two tombs. Ketapang Regency.

The term Kaharingan comes from the Old Dayak word "Haring", meaning "Life" or "Alive". This concept is expressed in the symbol of the faith depicting the Tree of Life. This Tree of Life resembles a spear that has three branches on either side, some facing up and some down. At the bottom of the symbol are two receptacles, while at the very top are a hornbill and the sun. The hornbill bird and the sun represent Ranying Mahalala, meaning God Almighty, Source of all life on earth.
The spear and its branches denote the upper world and the afterlife, while the lower part, where the receptacles are, conveys the idea of man's earthly life. Although the spiritual world and the earthly world are different, they are closely connected to one another and are inseparable, since they are interdependent. The branches, some facing up while others face down, signify that there is an eternal balance between the earthly life and the afterlife: life on earth is temporary, and human life is designed for the hereafter. Altogether the Tree of Life expresses the core of the Kaharingan faith, which is that human life must be balanced and kept in harmony between man and his fellow humans, between man and his natural environment, and between man and the Almighty. This is also the basic concept of the Balinese Hindu religion, where in Bali it is known as the Tri Hita Karana.

In practice the Ngaju Dayaks focus on the supernatural world of spirits, including ancestral spirits. For them, the secondary funeral is most important, usually held several months or even years after burial. During the second funeral rites (known as tiwah) the bones are exhumed and cleansed, then placed in a special mausoleum, called a sandung. The spirit of the deceased is then believed to watch over the village. The mausoleums are often beautifully decorated, showing scenes of the upper world. An ornate ship of the dead, made of rubber, is usually placed next to the remains, depicting the entourage that accompanies the soul to paradise.

One of the most outstanding features of the Dayak faith is their local wisdom and innate concern to preserve the forest and the natural environment. There are strict rules and directives on how to treat the rainforests, what may be done or taken from the forests, and what is taboo.
The Dayaks’ local wisdom holds that transgressing these rules will destroy the balance of the forest and the animals living in it, and so will directly or indirectly damage the communities living from the forest's bounty.

## Recognition by the Indonesian government

Among the many tribes of Dayaks in Borneo, those living in the upper reaches of the rivers in the province of Central Kalimantan are the Dayak Ngaju, the Lawangan, the Ma'anyan and the Ot Danum, known as the Barito Dayaks, named after the large Barito river. The Ngaju, who inhabit the Kahayan river basin by the present city of Palangkaraya, are involved in agricultural commerce, planting rice, cloves, coffee, palm oil, pepper and cocoa, whilst the other tribes still mostly practice subsistence farming through a slash-and-burn lifestyle.

The Dayak Ngaju were more open to technological and cultural influences from outside than most other Dayak, even during precolonial times. With the arrival of the Dutch and, in 1835, the missionary Rheinische Mission (later followed by the Basler Mission), many converted to Christianity. The missionaries founded schools and increased the literacy rate. Education stimulated a 'national awakening' among the Ngaju and Ma'anyan Dayak. Long before the Second World War, the Dayak founded nationalistic political parties. During the Indonesian battle for independence against the Dutch, the Dayak from the Kalimantan region fought under Major Tjilik Riwut, a parachutist from the Ngaju Dayak who practiced the traditional religion.

After the proclamation of independence, Jakarta decided that Islamic Banjarmasin and the mostly Dayak area west of it should be one province. The plan met resistance from the Dayak, with the Ngaju at the front, who demanded a province of their own. Under Riwut, who had risen to prominence during the revolution, the Dayak began a small guerrilla campaign. The Indonesian army limited escalation of the conflict, probably because Riwut had been a loyal soldier.
In 1957, the province of Kalimantan Tengah ("Central Kalimantan"), or 'KalTeng', was officially formed by Presidential Law. The government was led by the Ngaju, and Riwut became governor. The 'battle' had been about a province of their own, together with a revaluation of traditional Dayak culture, especially its religious part, as a reaction against the missionaries' influence. The traditional religions of the Ngaju, Ot Danum, Ma'anyan and other Dayak were named Kaharingan ('power of life').

After the Communist Party of Indonesia was declared illegal in the 1960s, the subject of 'religion' became very sensitive. The state ideology defined religion as the belief in one God and membership of an 'acknowledged' world religion with a holy book. The Dayak were seen as 'atheists', a label highly associated with the communist ideology, and were left with two options: convert to a world religion, or be pressured by local authorities to do so. With this in mind, it is fairly clear why the missions, with their schools, hospitals and minimal pressure, had much more success after the 1960s. Compared with the situation in the 17th and 18th centuries, Christianity offered more possibilities for social progress than Islam.

Over time the ban on local religions was abandoned. In 1980, Kaharingan was officially recognised as a religion, but only as a part of the Hindu Dharma, so in fact it was placed under Hinduism. In Central Kalimantan, a small minority does practice this religion.

## The carnival in the Jungle

In the religion of the Ngaju, the supernatural world is important, and in it the souls of ancestors also have their place. Like other Dayak, the Ngaju practice ritual re-burial, which usually takes place several months (sometimes several years) after the initial burial. This re-burial is very important for the soul of the deceased, so that it can reach the highest point in heaven. By practicing these rites, they protect themselves against bad supernatural powers.
The first funeral takes place just after someone has died. During this ceremony, masked dancers protect the deceased against the bad spirits. Guided by drums, the Kaharingan priests start singing, which will send the soul to heaven. On its journey in the traditional ship of souls it is accompanied by spirits. Once in heaven, which consists of several 'layers', the soul has to wait in the lowest layer until the re-burial takes place.

Picture: Sandung

During this second ritual (tiwah), the remains of the deceased are exhumed, cleaned and put in a special grave. These wood-carved and decorated graves often have the shape of a bird or water snake and are decorated with images of the hereafter. Sadly, factory-made sandung are replacing the traditional tombs.

The tiwah is a big, complex and long-lasting event. The costs vary between US$6,000 and US$12,000. It is common for several families to hold a tiwah together, so they can spread the cost of the sacrificed animals, such as a large number of water buffaloes and pigs. Once, more than 200 souls were brought to a higher level in a single ceremony. But a tiwah is also a happy event. In the open air, food stalls and shops are put up, and at some distance there is also some gambling. The tiwah is the carnival of the jungle.

Sandung made of concrete are not the only changes in the Kaharingan religion. To make the religion acceptable to the government, a council was set up, which had to control the theological and ritual activities of its roughly 330,000 adherents. None of the roughly 78 basir upu (top ritual specialists) are part of the council, and neither are the roughly three hundred Kaharingan priests. The council reflects aspects of the religion which are also known in other big religions. It also organises weekly meetings in specially built Kaharingan communal rooms, complete with speeches, prayers and psalms.
Furthermore, the council registers and coordinates all tiwah (there are two to ten every year) before asking the police for a permit.

From early times the Iban believed that fighting cocks, used by the supernatural, turned into human warriors, and the cock fight is thus closely tied with "intangible qualities of human nature, spiritual fulfillment and religious refinement"[4]

Shamanic curing, or balian, is one of the core features of Kaharingan ritual practices. Because this healing practice often addresses the loss of a soul resulting in some kind of illness, its focus is on the body. Sickness comes from offending one of the many spirits inhabiting the earth and fields, usually through a failure to sacrifice to them. The goal of the balian is to call back the wayward soul and restore the health of the community through trance, dance, and possession.
# Path integral for particle with spin and Dirac propagator

I am currently trying to compute a path integral for a fermion particle using the action provided in chapter 9 of Polyakov's "Gauge Fields and Strings", and to show that it yields the Dirac propagator in the end. I personally find this fact really fascinating, which is why I decided to look into this topic, but the derivation itself troubles me.

I start with the following action: $$S = -\frac{1}{2} \int \limits_0^1 dt\,\left[\frac{\dot{q}^2}{e} + m^2 e - i(\psi_\mu \dot{\psi}^\mu - \psi_5 \dot{\psi}_5) + \frac{i\chi}{e} \left(\psi_\mu \dot{q}^\mu + me \psi_5 \right) \right]$$ (some sign conventions are different from Polyakov's, but this doesn't really matter; here $$\psi$$ and $$\chi$$ are Grassmann-valued) and I want to calculate $$\int \frac{Dq\,D\psi\,De\,D\chi}{VolDiff} \exp (iS)$$ with boundary conditions $$q(0)=0,q(1)=x^\mu,\psi(1) = \Psi$$

This integral is supposed to give me a symbol for the evolution operator "$$e^{-iHt}$$". Integrals over $$q$$ and $$\psi$$ are easy to compute (one can do it even by discretization, as there is a "canonical" measure for both cases); the result can be written as $$\int d^Dp\, e^{-i(px)} \int \frac{De\,D\chi}{VolDiff} \cdot \\ \cdot \exp \left(-i \frac{p^2-m^2}{2}\left[\int d\tau\, e(\tau) - \frac{i}{8}\int d\tau_1\,d\tau_2\, \chi(\tau_1) \chi(\tau_2) \text{sign }(\tau_2 - \tau_1) \right] \right) \cdot \\ \cdot \exp \left(\frac{1}{2} \int d\tau\,\chi(\tau) (p_\mu \Psi^\mu + m\Psi_5) \right)$$

The troublesome parts are the $$\chi$$ and $$e$$ fields. They are in a nontrivial representation of the reparametrization group: if one changes $$\tau \to f(\tau)$$, they change according to $$e \to e(f(\tau)) (df/d\tau)^{-1}$$ and similarly for $$\chi$$.
That is a serious problem for defining a discretized measure for them that respects this; the naive approach - to do something like $$\int D\chi \to \lim \limits_{N \to \infty} \prod \limits_{i=1}^N \int d\chi_i$$ - (obviously) doesn't work (to be precise, it gives an additional factor of $$\lim \limits_{N \to \infty} (p^2-m^2)^N$$).

Another point is that diffeomorphisms are a symmetry of the action, which is why we should divide the result by the volume of the diffeomorphism group, so that the path integral won't diverge. Polyakov suggests that one can replace functional integrals modulo VolDiff by two ordinary integrals $$\int \frac{De\,D\chi}{VolDiff} \to \int \limits_0^\infty dT \int d\theta$$ while "fixing the gauge" and replacing the fields with constant factors $$e\to T,\chi \to \theta$$. The problem is that I have no idea how to show that (at least somewhat) rigorously. For the pure bosonic case, Polyakov has a pretty beautiful calculation of the Jacobian for the transformation $$De \to Df\,dT$$, where $$f$$ are diffeomorphisms, showing that it is equal to unity. Yet I don't understand if it is possible to generalize this approach here. Moreover, I have a problem with "gauge fixing" two fields at the same time to be constant: they change simultaneously under reparametrization. There is also an additional "supersymmetry" which could maybe help to do this, but I wasn't able to use it.

Summarising, I have the following questions. If someone can answer any of them, I'd be super grateful:

1. Is it possible to construct a reasonable discretized measure for $$e$$ and $$\chi$$ which respects the necessary symmetries and makes the VolDiff factor evident? (That would be ideal for me, as a discretized integral is at least somehow well-defined, unlike the "formal" approach Polyakov uses.)

or

2. Is there at least some way to "formally" justify the transformation from $$\int De\,D\chi$$ to the $$T,\theta$$ integrals?
• Maybe 19.7 from amazon.com/Integrals-Quantum-Mechanics-Statistics-Financial/dp/… will be helpful Jan 6, 2020 at 12:34
• See also references (31-39) from arxiv.org/abs/hep-th/0101036 Jan 6, 2020 at 12:59
• Most of the references are already familiar to me, yet they don't address the questions I'm interested in. I'll check out the other ones, thanks. Jan 6, 2020 at 13:58
• For sign conventions, are you following a reference? Jan 12, 2020 at 17:21
• @Qmechanic no, I wasn't really following a reference; I wanted to find a way to derive it myself from the very beginning without relying on supersymmetry, as Polyakov does Jan 12, 2020 at 19:35

Building on the answer from @ACuriousMind, I want to point out that the procedure being followed is Faddeev-Popov gauge fixing. The symmetry transformations can be solved to put $$e(t) = T$$, a constant, and $$\chi(t) = 0$$ (on the loop) or $$\chi(t) = \theta$$, constant (on the open line). $$T$$ and $$\theta$$ represent physically distinct configurations of the worldline fields (that is, they distinguish configurations not related by gauge transformations) and are known as modular parameters. The integral $$\int \mathscr{D}e(t) \mathscr{D}\chi(t)$$ is divergent because it vastly overcounts distinct configurations related by the reparameterisation and SUSY symmetries; hence you divide out by the volume of these symmetries (also formally infinite). The Faddeev-Popov procedure deals with this by gauge fixing, where the integral becomes $$\int \mathscr{D}e(t) \mathscr{D}\chi(t) \longrightarrow \int \mathscr{D}f \int \mathscr{D}g \int dT \int d\theta \, \mu(T, \theta)$$ where $$\int \mathscr{D}f = \textrm{vol}(D)$$ and $$\int \mathscr{D} g = \textrm{vol}(S)$$ give the volumes of the diffeomorphism and SUSY transformation groups that cancel the volumes in the denominator.
Here, and it is something that @ACuriousMind left out, the measure $$\mu(T, \theta)$$ is the measure on the moduli, and comes from the Faddeev-Popov determinant factor arising in the gauge fixing. In this case, the Faddeev-Popov determinant for gauge fixing $$e(t) = T$$ is equal to $$1$$ on the open line and $$1 / T$$ on the loop. For fixing $$\chi(t) = \theta$$ on the line we get a determinant factor of $$1$$, and for $$\chi(t) = 0$$ on the loop we get the same factor. In other words we have $$\int \mathscr{D}e(t) \mathscr{D}\chi(t) \,\Omega[e(t), \chi(t)] \longrightarrow \textrm{vol}(D) \textrm{vol}(S) \times \begin{cases} \int dT \int d\theta \, \Omega[T, \theta] & \textrm{Line}\\ \int \frac{dT}{T}\, \Omega[T, 0] & \textrm{Loop} \end{cases}$$ for any functional $$\Omega$$ of these fields. Good references include Appendices B and C of https://arxiv.org/abs/1410.3288, sections 1.5.1 and 1.5.2 of https://arxiv.org/abs/1512.08694, or the notes at http://www-th.bo.infn.it/people/bastianelli/2-ch6-FT2-2018.pdf (section 2.1).

• Thanks a lot for the answer! I am somewhat familiar with the Faddeev-Popov procedure. The approach I know is inserting a "Faddeev-Popov unity" in the integral (that is, an integral over the diffeomorphism group with a gauge-fixing delta function and determinant); then one proceeds with a change of variables in the functional integral, and so the gauge is fixed. Do you perhaps know of any references in which this approach is followed for this particular problem and the correct result is obtained? Because this one kinda seems more intuitive to me than what Polyakov discusses, but I wasn't able to make it work here. Jan 7, 2020 at 14:49
• Yes - the references are Appendices B and C of the paper referenced above. In this case both of the determinants that appear are relatively trivial, involving the determinant of the first or second derivative operator on periodic or antiperiodic functions.
– lux Jan 16, 2020 at 3:53 • What further details are you looking for in an answer to this question? – lux Jan 18, 2020 at 16:45 • I am looking for the most detailed answer possible; there are a lot of points I still don't feel like any of the references elaborated enough on. It would be too much text to fit in the comment, so here's a Google Drive link to a file with some of my questions: drive.google.com/open?id=1p5h9RvS0B2XNjm7kwr0wpPokm5wX_cez . I do not want to waste other people's time on my misunderstandings, but it would be really helpful if you or someone else could comment on them. Jan 19, 2020 at 8:53 • OK I'll take a look sometime soon :-) – lux Jan 20, 2020 at 0:42 The gravity gauge multiplet $$(e,\chi)$$ enjoys a local supersymmetry with infinitesimal transformations $$\delta e = -2\mathrm{i}\epsilon\chi \quad \delta \chi = \frac{\mathrm{d}\epsilon}{\mathrm{d}\tau}, \tag{1}$$ where $$\epsilon$$ is a fermionic parameter. Together with ordinary reparametrization symmetry by some rescaling $$f$$, this gives us two free gauge parameters with which to fix the two fields $$e$$ and $$\chi$$ to constant values. Since we use (and thereby eliminate from the result) the reparametrization symmetry here, we no longer need to divide out the "diffeomorphisms" corresponding to it when path integrating. What would be left to show is that the coupled system of equations for $$(f,\epsilon)$$ resulting from integrating eqs. $$(1)$$ and setting $$(e,\chi)$$ constant actually has solutions, so that this fixing is possible.
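As a sanity check of the $$dT/T$$ loop measure discussed above, note that the gauge-fixed bosonic (spinless) worldline loop integral reproduces the familiar Schwinger proper-time representation; schematically, under the conventions of the Bastianelli notes cited earlier:

```latex
\Gamma \;\propto\; -\int_0^\infty \frac{dT}{T}
  \int_{x(0)=x(T)} \mathscr{D}x\;
  e^{-\int_0^T dt\,\left(\frac{1}{4}\dot{x}^2 + m^2\right)}
\;=\; -\,V \int_0^\infty \frac{dT}{T}\,(4\pi T)^{-D/2}\,e^{-m^2 T},
```

with $$V$$ the spacetime volume; the $$1/T$$ coming from the Faddeev-Popov determinant is precisely what is needed to match this onto $$\log\det(-\partial^2 + m^2)$$ up to normalization.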
Venue: Bldg. 129, Room 310 / Zoom: 946 805 8300 We prove that groups acting properly on 2-dimensional CAT(0) complexes with a bound on the orders of cell stabilisers satisfy the Tits Alternative, that is, their finitely generated subgroups are either virtually free abelian or contain a (non-abelian) free subgroup. This is joint work with Piotr Przytycki (McGill).
# [NTG-context] typearea Peter Münster ntg-context@ntg.nl Sun, 11 Jan 2004 22:02:47 +0100 (CET) On Tue, 6 Jan 2004, Henning Hraban Ramm wrote: > On Tuesday, 06.01.04, at 08:21 (Europe/Zurich), Peter > Münster wrote: > > Ok, I'm going to try it, perhaps it'll be my first ConTeXt module ;) > > It shouldn't be a module, but an environment. All right. Here is my first try, in which you can see that I still have a problem to be solved, so I would be glad of any help: % this environment works a little bit like \usepackage[DIVcalc]{typearea} % in LaTeX as explained in detail in scrguien.pdf (KOMA-Script documentation) \startenvironment e-test \def\BCOR{3mm}% optional binding correction \setbox\scratchbox\hbox{\dorecurse{26}{\character\recurselevel}} \newdimen\PageWidth \PageWidth=\paperwidth \doifmode{BCOR}{\PageWidth=\dimexpr(\PageWidth-\BCOR)} %%%%%% Here is the problem: dividing one length by another. %%%%%% % the following is not working for 2 reasons: % * there is still the "pt" behind the numbers % * one cannot divide by a real number, only integer %\edef\Ratio{\the\numexpr(\the\paperheight / \the\PageWidth)} \edef\Ratio{1.5}% to make the rest work... \newdimen\Width \newdimen\Height \newdimen\Back \Width=\dimexpr(2.5\wd\scratchbox) \Height=\dimexpr(\Ratio\Width) \edef\Top{\the\dimexpr((\paperheight - \Height) / 3 - \headerheight)} \doifmodeelse{oneside}{% \Back=\dimexpr((\PageWidth - \Width) / 2) }{% \Back=\dimexpr((\PageWidth - \Width) / 3) \setuppagenumbering[alternative=doublesided]} \doifmode{BCOR}{\Back=\dimexpr(\Back + \BCOR)}
# Geometric Set Cover in one dimension Consider the geometric set cover problem https://en.wikipedia.org/wiki/Geometric_set_cover_problem. The Wiki article says there is a simple greedy algorithm for the one-dimensional case; what is the analysis of that? Is there a constant approximation factor possible for the one-dimensional case if each of the sets in the family contains only consecutive integers and the universe is the set of first n natural numbers? In the usual greedy algorithm for set cover, we take the set that covers the largest number of elements; is that some constant times worse than the optimal in this case? Consider $$x_1, …, x_n\in \mathbb{R}$$ and let $$I_1, …, I_k$$ be intervals, $$I_i = [a_i, b_i]$$. Suppose without loss of generality that $$x_1 < x_2 < … < x_n$$. Let $$I_i$$ be an interval such that $$x_1 \in I_i$$, with $$b_i$$ maximal among such intervals. Then there is an optimal solution containing $$I_i$$. Indeed, suppose $$I_i$$ is not in an optimal solution. Replace any interval containing $$x_1$$ (there exists at least one) with $$I_i$$. This is still a solution, because there is no point $$< x_1$$, and $$b_i$$ was chosen maximal. Reiterate the process with the points $$x_j$$ with $$b_i < x_j$$ to obtain an optimal solution. • Just to clarify, the algorithm that you gave is a greedy algorithm, i.e. take the interval that covers the leftmost uncovered point and extends furthest, and reiterate. Is this exactly the optimal solution or will it be some constant times the optimal solution? That would mean every set from the optimal solution intersects some constant number of sets in your solution. Jan 11 at 20:30 • This is the optimal solution; I gave some ideas of the proof in my answer. Jan 11 at 20:38 • Suppose we don't start from $x_1$ and rather pick the interval $I_i$ that covers the largest number of elements (i.e. of the $x_i$), take this $I_i$ into our solution, delete all those $x_i$ and reiterate, i.e. pick the next largest interval $I_j$ and so on until none of the $x_i$ are left. This is also a greedy solution; will this be optimal too? Or will it be some constant times the optimal solution that you gave? Jan 11 at 20:43 • Also, consider $x_1 = 0$, $x_2 = 2$, $x_3 = 3$, $x_4 = 5$, $x_5 = 6$, $x_6 = 8$ and $I_1 = [0, 4]$, $I_2 = [1, 7]$ and $I_3 = [4, 8]$. Can you see why your algorithm fails? Jan 11 at 22:31
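The leftmost-point greedy described in the answer is easy to state in code; a minimal sketch (the function and variable names are mine, for illustration):

```python
def cover_points(points, intervals):
    """Greedy interval point cover: repeatedly cover the leftmost
    uncovered point using the interval that reaches furthest right."""
    points = sorted(points)
    chosen = []
    i = 0
    while i < len(points):
        x = points[i]
        # among intervals containing x, take the one with maximal right end
        candidates = [(a, b) for (a, b) in intervals if a <= x <= b]
        if not candidates:
            raise ValueError(f"point {x} is covered by no interval")
        best = max(candidates, key=lambda ab: ab[1])
        chosen.append(best)
        while i < len(points) and points[i] <= best[1]:
            i += 1  # skip every point the chosen interval covers
    return chosen

# the counterexample from the comments: the leftmost greedy uses 2 intervals,
# while "largest interval first" would need 3
print(cover_points([0, 2, 3, 5, 6, 8], [(0, 4), (1, 7), (4, 8)]))
# → [(0, 4), (4, 8)]
```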
TOP results by CONfident efFECT Size. Topconfects is an R package intended for RNA-seq or microarray Differential Expression analysis and similar, where we are interested in placing confidence bounds on many effect sizes, one per gene, from few samples. Topconfects builds on the TREAT p-values offered by the limma and edgeR packages. It tries a range of fold changes, and uses this to rank genes by effect size while maintaining a given FDR. This also produces confidence bounds on the fold changes, with adjustment for multiple testing. See nest_confects for details. • A principled way to avoid using p-values as a proxy for effect size. The difference between a p-value of 1e-6 and 1e-9 has no practical meaning in terms of significance; however, tiny p-values are often used as a proxy for effect size. This is a misuse, as they might simply reflect greater quality of evidence (for example RNA-seq average read count or microarray average spot intensity). It is better to reject a broader set of hypotheses, while maintaining a sensible significance level. • No need to guess the best fold change cutoff. TREAT requires a fold change cutoff to be specified. Topconfects instead asks you to specify a False Discovery Rate appropriate to your purpose. You can then read down the resulting ranked list of genes as far as you wish. The “confect” value given in the last row that you use is the fold change cutoff required for TREAT to produce that set of genes at the given FDR. • (experimental) Rank by a more meaningful effect size. Once we stop thinking in terms of zero vs non-zero effect and start thinking about effect size, we have greater freedom to choose an effect size that is meaningful. For example, when examining the interaction of two factors, a linear model on log expression levels allows testing of odds ratios. However this will give a large effect size when, say, a proportion shifts from 0.01% to 0.1%.
We may instead be interested in the difference of proportions, to avoid looking at such small shifts. Topconfects provides effect_shift_log2 for this. ## Usage Use limma_confects or edger_confects as part of your limma or edgeR analysis. ## Install install.packages("devtools") devtools::install_github("pfh/topconfects") ## Contact Author: Paul Harrison @paulfharrison paul.harrison@monash.edu ## Future work Gene-set enrichment tests. Here also the smallest p-value does not necessarily imply the greatest interest. ## References McCarthy, D. J., and Smyth, G. K. (2009). Testing significance relative to a fold-change threshold is a TREAT. Bioinformatics 25, 765-771. doi: 10.1093/bioinformatics/btp053
# Morley's Miracle: Dijkstra's Proof I lifted this proof from the site of the E. W. Dijkstra Archives at the University of Texas at Austin. An open letter to Ross Honsberger. University of Waterloo. 30 December 1975 Dear Sir, the other day I encountered your delightful booklet "Mathematical Gems". On account of Chapter 8, I concluded that you might be interested in the following proof of Morley's Theorem "The adjacent pairs of the trisectors of a triangle always meet at the vertices of an equilateral triangle." Choose $\alpha,$ $\beta$ & $\gamma \gt 0$ such that $\alpha + \beta + \gamma = 60^{\circ}.$ Draw an equilateral triangle $XYZ$ and construct the triangles $AXY$ and $BXZ$ with the angles as indicated. Because $\angle AXB = 180^{\circ} - (\alpha +\beta ),$ it follows that, if $\angle BAX = \alpha +x,$ $\angle ABX = \beta - x.$ Using the rule of sines three times (in $\Delta AXB,$ $\Delta AXY,$ and $\Delta BXZ),$ we deduce \begin{align}\displaystyle \frac{\sin (\alpha +x)}{\sin (\beta - x)}&=\frac{BX}{AX}\\ &=\frac{XZ\cdot\sin (60^{\circ}+\gamma)/\sin (\beta )}{XY\cdot\sin (60^{\circ}+\gamma)/\sin (\alpha )}\\ &=\frac{\sin (\alpha )}{\sin (\beta )}. \end{align} Because, in the range considered, the left-hand side of this equation is a monotonically increasing function of $x$ (on account of the monotonicity of $\sin(\phi )$ in the first quadrant), we conclude $x = 0.$ Thus Morley's Theorem is proved without any additional lines. I found this proof in the early sixties, but am afraid that I did not publish it. Yours ever, Edsger W. Dijkstra
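Dijkstra's monotonicity step is easy to confirm numerically; a quick sketch with arbitrarily chosen angles (the values 23° and 17° are mine, for illustration):

```python
import math

alpha, beta = math.radians(23), math.radians(17)

def lhs(x):
    """Left-hand side sin(alpha+x)/sin(beta-x) of Dijkstra's equation."""
    return math.sin(alpha + x) / math.sin(beta - x)

# at x = 0 the equation sin(alpha+x)/sin(beta-x) = sin(alpha)/sin(beta) holds
print(lhs(0) == math.sin(alpha) / math.sin(beta))  # True

# and the left-hand side increases strictly with x (numerator grows,
# denominator shrinks, both angles staying in the first quadrant),
# so x = 0 is the only solution
xs = [math.radians(d) for d in range(-10, 11)]
vals = [lhs(x) for x in xs]
print(all(a < b for a, b in zip(vals, vals[1:])))  # True
```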
# 2   Ethernet We now turn to a deeper analysis of the ubiquitous Ethernet LAN protocol. User-level Ethernet today (2013) is usually 100 Mbps, with Gigabit Ethernet standard in server rooms and backbones, but because Ethernet speed scales in odd ways, we will start with the 10 Mbps formulation. While the 10 Mbps speed is obsolete, and while even the Ethernet collision mechanism is largely obsolete, collision management itself continues to play a significant role in wireless networks. ## 2.1   10-Mbps classic Ethernet The original Ethernet specification was the 1976 paper of Metcalfe and Boggs, [MB76]. The data rate was 10 megabits per second, and all connections were made with coaxial cable instead of today’s twisted pair. In its original form, an Ethernet was a broadcast bus, which meant that all packets were, at least at the physical level, broadcast onto the shared medium and could be seen, theoretically, by all other nodes. If two nodes transmitted at the same time, there was a collision; proper handling of collisions was an important part of the access-mediation strategy for the shared medium. Data was transmitted using Manchester encoding; see 4.1.3   Manchester. The linear bus structure could be modified with repeaters (below) into an arbitrary tree structure, though loops remain something of a problem even with today’s Ethernet. Whenever two stations transmitted at the same time, the signals would collide and interfere with one another; both transmissions would fail as a result. In order to minimize collision loss, each station implemented the following: 1. Before transmission, wait for the line to become quiet 2. While transmitting, continually monitor the line for signs that a collision has occurred; if a collision happens, then cease transmitting 3.
If a collision occurs, use a backoff-and-retransmit strategy These properties can be summarized with the CSMA/CD acronym: Carrier Sense, Multiple Access, Collision Detect. (The term “carrier sense” was used by Metcalfe and Boggs as a synonym for “signal sense”; there is no literal carrier frequency to be sensed.) It should be emphasized that collisions are a normal event in Ethernet, well-handled by the mechanisms above. Classic Ethernet came in version 1 [1980, DEC-Intel-Xerox], version 2 [1982, DIX], and IEEE 802.3. There are some minor electrical differences between these, and one rather substantial packet-format difference. In addition to these, the Berkeley Unix trailing-headers packet format was used for a while. There were three physical formats for 10 Mbps Ethernet cable: thick coax (10BASE-5), thin coax (10BASE-2), and, last to arrive, twisted pair (10BASE-T). Thick coax was the original; economics drove the successive development of the later two. The cheaper twisted-pair cabling eventually almost entirely displaced coax, at least for host connections. The original specification included support for repeaters, which were in effect signal amplifiers although they might attempt to clean up a noisy signal. Repeaters processed each bit individually and did no buffering. In the telecom world, a repeater might be called a digital regenerator. A repeater with more than two ports was commonly called a hub; hubs allowed branching and thus much more complex topologies. Bridges – later known as switches – came along a short time later. While repeaters act at the bit layer, a switch reads in and forwards an entire packet as a unit, and the destination address is likely consulted to determine to where the packet is forwarded. Originally, switches were seen as providing interconnection (“bridging”) between separate Ethernets, but later a switched Ethernet was seen as one large “virtual” Ethernet. We return to switching below in 2.4   Ethernet Switches. 
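The repeater-versus-switch distinction above can be sketched in a few lines. This is a toy model of my own: the hub floods every frame, while the switch consults the destination address, here via a simple learning table (an assumption for illustration; the book treats switch forwarding in its switching section):

```python
class Hub:
    """Multiport repeater: retransmits every frame out all other ports."""
    def __init__(self, n_ports):
        self.n = n_ports

    def forward(self, in_port, frame):
        return [(p, frame) for p in range((self.n)) if p != in_port]

class Switch:
    """Reads in whole frames and forwards by destination address."""
    def __init__(self, n_ports):
        self.n = n_ports
        self.table = {}                  # address -> port

    def forward(self, in_port, frame):
        self.table[frame["src"]] = in_port   # learn where src lives
        if frame["dst"] in self.table:
            return [(self.table[frame["dst"]], frame)]
        # unknown (or broadcast) destination: flood like a hub
        return [(p, frame) for p in range(self.n) if p != in_port]

sw = Switch(4)
sw.forward(0, {"src": "A", "dst": "B"})       # B unknown: flooded
print(sw.forward(1, {"src": "B", "dst": "A"}))  # now goes only to port 0
```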
Hubs propagate collisions; switches do not. If the signal representing a collision were to arrive at one port of a hub, it would, like any other signal, be retransmitted out all other ports. If a switch were to detect a collision on one port, no other ports would be involved; only packets received successfully are ever retransmitted out other ports. In coaxial-cable installations, one long run of coax snaked around the computer room or suite of offices; each computer connected somewhere along the cable. Thin coax allowed the use of T-connectors to attach hosts; connections were made to thick coax via taps, often literally drilled into the coax central conductor. In a standalone installation one run of coax might be the entire Ethernet; otherwise, somewhere a repeater would be attached to allow connection to somewhere else. Twisted-pair does not allow mid-cable attachment; it is only used for point-to-point links between hosts, switches and hubs. In a twisted-pair installation, each cable runs between the computer location and a central wiring closet (generally much more convenient than trying to snake coax all around the building). Originally each cable in the wiring closet plugged into a hub; nowadays the hub has likely been replaced by a switch. There is still a role for hubs today when one wants to monitor the Ethernet signal from A to B (eg for intrusion detection analysis), although some switches now also support a form of monitoring. All three cable formats could interconnect, although only through repeaters and hubs, and all used the same 10 Mbps transmission speed. While twisted-pair cable is still used by 100 Mbps Ethernet, it generally needs to be a higher-performance version known as Category 5, versus the 10 Mbps Category 3.
Here is the format of a typical Ethernet packet (DIX specification): The destination and source addresses are 48-bit quantities; the type is 16 bits, the data length is variable up to a maximum of 1500 bytes, and the final CRC checksum is 32 bits. The checksum is added by the Ethernet hardware, never by the host software. There is also a preamble, not shown: a block of 1 bits followed by a 0, in the front of the packet, for synchronization. The type field identifies the next higher protocol layer; a few common type values are 0x0800 = IP, 0x8137 = IPX, 0x0806 = ARP. The IEEE 802.3 specification replaced the type field by the length field, though this change never caught on. The two formats can be distinguished as long as the type values used are larger than the maximum Ethernet length of 1500 (or 0x05dc); the type values given in the previous paragraph all meet this condition. Each Ethernet card has a (hopefully unique) physical address in ROM; by default any packet sent to this address will be received by the board and passed up to the host system. Packets addressed to other physical addresses will be seen by the card, but ignored (by default). All Ethernet devices also agree on a broadcast address of all 1’s: a packet sent to the broadcast address will be delivered to all attached hosts. It is sometimes possible to change the physical address of a given card in software. It is almost universally possible to put a given card into promiscuous mode, meaning that all packets on the network, no matter what the destination address, are delivered to the attached host. This mode was originally intended for diagnostic purposes but became best known for the security breach it opens: it was once not unusual to find a host with network board in promiscuous mode and with a process collecting the first 100 bytes (presumably including userid and password) of every telnet connection. 
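The type-versus-length disambiguation rule above can be demonstrated with a few lines of parsing; a sketch (the sample frame bytes are made up for illustration):

```python
import struct

def parse_header(frame: bytes):
    """Parse a 14-byte Ethernet header; the third field is a DIX type
    if it exceeds 1500 (0x05dc), otherwise an IEEE 802.3 length."""
    dst, src, tl = struct.unpack("!6s6sH", frame[:14])
    kind = "type" if tl > 1500 else "length"
    return dst.hex(":"), src.hex(":"), kind, hex(tl)

# a made-up frame: broadcast destination, arbitrary source, type 0x0800 = IP
frame = bytes.fromhex("ffffffffffff" "020000deadbe" "0800") + b"payload"
print(parse_header(frame))
# → ('ff:ff:ff:ff:ff:ff', '02:00:00:de:ad:be', 'type', '0x800')
```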
### 2.1.1   Ethernet Multicast Ethernet addresses with the low-order bit of the first byte set to 1 are multicast addresses; a packet sent to a multicast address is delivered to every attached host that has chosen to subscribe to that address. If switches (below) are involved, they must normally forward multicast packets on all outbound links, exactly as they do for broadcast packets; switches have no obvious way of telling where multicast subscribers might be. To avoid this, some switches do try to engage in some form of multicast filtering, sometimes by snooping on higher-layer multicast protocols. Multicast Ethernet is seldom used by IPv4, but plays a larger role in IPv6 configuration. The second-to-lowest-order bit of the Ethernet address indicates, in the case of physical addresses, whether the address is believed to be globally unique or if it is only locally unique; this is known as the Universal/Local bit. When (global) Ethernet IDs are assigned by the manufacturer, the first three bytes serve to indicate the manufacturer. As long as the manufacturer involved is diligent in assigning the second three bytes, every manufacturer-provided Ethernet address should be globally unique. Lapses, however, are not unheard of. ### 2.1.2   The Slot Time and Collisions The diameter of an Ethernet is the maximum distance between any pair of stations. The actual total length of cable can be much greater than this, if, for example, the topology is a “star” configuration. The maximum allowed diameter, measured in bits, is limited to 232 (a sample “budget” for this is below). This makes the round-trip-time 464 bits. As each station involved in a collision discovers it, it transmits a special jam signal of up to 48 bits. These 48 jam bits bring the total above to 512 bits, or 64 bytes. The time to send these 512 bits is the slot time of an Ethernet; time intervals on Ethernet are often described in bit times but in conventional time units the slot time is 51.2 µsec. The value of the slot time determines several subsequent aspects of Ethernet. If a station has transmitted for one slot time, then no collision can occur (unless there is a hardware error) for the remainder of that packet.
This is because one slot time is enough time for any other station to have realized that the first station has started transmitting, so after that time they will wait for the first station to finish. Thus, after one slot time a station is said to have acquired the network. The slot time is also used as the basic interval for retransmission scheduling, below. Conversely, a collision can be received, in principle, at any point up until the end of the slot time. As a result, Ethernet has a minimum packet size, equal to the slot time, ie 64 bytes (or 46 bytes in the data portion). A station transmitting a packet this size is assured that if a collision were to occur, the sender would detect it (and be able to apply the retransmission algorithm, below). Smaller packets might collide and yet the sender not know it, ultimately leading to greatly reduced throughput. If we need to send less than 46 bytes of data (for example, a 40-byte TCP ACK packet), the Ethernet packet must be padded out to the minimum length. As a result, all protocols running on top of Ethernet need to provide some way to specify the actual data length, as it cannot be inferred from the received packet size. As a specific example of a collision occurring as late as possible, consider the diagram below. A and B are 5 units apart, and the bandwidth is 1 byte/unit. A begins sending “helloworld” at T=0; B starts sending just as A’s message arrives, at T=5. B has listened before transmitting, but A’s signal was not yet evident. A doesn’t discover the collision until 10 units have elapsed, which is twice the distance. Here are typical maximum values for the delay in 10 Mbps Ethernet due to various components. These are taken from the Digital-Intel-Xerox (DIX) standard of 1982, except that “point-to-point link cable” is replaced by standard cable. 
The DIX specification allows 1500m of coax with two repeaters and 1000m of point-to-point cable; the table below shows 2500m of coax and four repeaters, following the later IEEE 802.3 Ethernet specification. Some of the more obscure delays have been eliminated. Entries are one-way delay times, in bits. The maximum path may have four repeaters, and ten transceivers (simple electronic devices between the coax cable and the NI cards), each with its drop cable (two transceivers per repeater, plus one at each endpoint). Ethernet delay budget:

| item | length | delay, in bits | explanation (c = speed of light) |
|------|--------|----------------|----------------------------------|
| coax | 2500M | 110 bits | 23 meters/bit (.77c) |
| transceiver cables | 500M | 25 bits | 19.5 meters/bit (.65c) |
| transceivers | | 40 bits, max 10 units | 4 bits each |
| repeaters | | 25 bits, max 4 units | 6+ bits each (DIX 7.6.4.1) |
| encoders | | 20 bits, max 10 units | 2 bits each (for signal generation) |

The total here is 220 bits; in a full accounting it would be 232. Some of the numbers shown are a little high, but there are also signal rise time delays, sense delays, and timer delays that have been omitted. It works out fairly closely. Implicit in the delay budget table above is the “length” of a bit. The speed of propagation in copper is about 0.77×c, where c = 3×10^8 m/sec = 300 m/µsec is the speed of light in vacuum. So, in 0.1 microseconds (the time to send one bit at 10 Mbps), the signal propagates approximately 0.77×c×10^-7 = 23 meters. Ethernet packets also have a maximum packet size, of 1500 bytes. This limit is primarily for the sake of fairness, so one station cannot unduly monopolize the cable (and also so stations can reserve buffers guaranteed to hold an entire packet). At one time hardware vendors often marketed their own incompatible “extensions” to Ethernet which enlarged the maximum packet size to as much as 4KB. There is no technical reason, actually, not to do this, except compatibility. The signal loss in any single segment of cable is limited to 8.5 db, or about 14% of original strength.
Repeaters will restore the signal to its original strength. The reason for the per-segment length restriction is that Ethernet collision detection requires a strict limit on how much the remote signal can be allowed to lose strength. It is possible for a station to detect and reliably read very weak remote signals, but not at the same time that it is transmitting locally. This is exactly what must be done, though, for collision detection to work: remote signals must arrive with sufficient strength to be heard even while the receiving station is itself transmitting. The per-segment limit, then, has nothing to do with the overall length limit; the latter is set only to ensure that a sender is guaranteed of detecting a collision, even if it sends the minimum-sized packet. ### 2.1.3   Exponential Backoff Algorithm Whenever there is a collision, the exponential backoff algorithm is used to determine when each station will retry its transmission. Backoff here is called exponential because the range from which the backoff value is chosen is doubled after every successive collision involving the same packet. Here is the full Ethernet transmission algorithm, including backoff and retransmissions: 1. Listen before transmitting (“carrier detect”) 2. If line is busy, wait for sender to stop and then wait an additional 9.6 microseconds (96 bits). One consequence of this is that there is always a 96-bit gap between packets, so packets do not run together. 3. Transmit while simultaneously monitoring for collisions 4. If a collision does occur, send the jam signal, and choose a backoff time as follows: For transmission N, 1≤N≤10 (N=0 represents the original attempt), choose k randomly with 0 ≤ k < 2^N. Wait k slot times (k×51.2 µsec). Then check if the line is idle, waiting if necessary for someone else to finish, and then retry step 3. For 11≤N≤15, choose k randomly with 0 ≤ k < 1024 (= 2^10) 5. If we reach N=16 (16 transmission attempts), give up.
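The slot-time arithmetic and the backoff rule in step 4 can be sketched together (a toy model of my own; real adapters implement this in hardware):

```python
import random

BIT_TIME_US = 0.1                  # one bit at 10 Mbps
SLOT_BITS = 2 * 232 + 48           # round-trip diameter plus 48-bit jam
SLOT_US = SLOT_BITS * BIT_TIME_US  # 512 bits = 51.2 microseconds

def backoff_wait_us(n, rng=random):
    """Backoff before retransmission attempt n (n = 1 after the first
    collision): wait k slot times, 0 <= k < 2^n, range capped at 2^10."""
    if n >= 16:
        raise RuntimeError("giving up after 16 transmission attempts")
    return rng.randrange(2 ** min(n, 10)) * SLOT_US

print(SLOT_BITS, SLOT_US)               # → 512 51.2
print(round(1023 * SLOT_US / 1000, 1))  # maximum possible wait ≈ 52.4 ms
```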
If an Ethernet sender does not reach step 5, there is a very high probability that the packet was delivered successfully. Exponential backoff means that if two hosts have waited for a third to finish and transmit simultaneously, and collide, then when N=1 they have a 50% chance of recollision; when N=2 there is a 25% chance, etc. When N≥10 the maximum wait is 52 milliseconds; without this cutoff the maximum wait at N=15 would be 1.5 seconds. As indicated above in the minimum-packet-size discussion, this retransmission strategy assumes that the sender is able to detect the collision while it is still sending, so it knows that the packet must be resent. In the following diagram is an example of several stations attempting to transmit all at once, and using the above transmission/backoff algorithm to sort out who actually gets to acquire the channel. We assume we have five prospective senders A1, A2, A3, A4 and A5, all waiting for a sixth station to finish. We will assume that collision detection always takes one slot time (it will take much less for nodes closer together) and that the slot start-times for each station are synchronized; this allows us to measure time in slots. A solid arrow at the start of a slot means that sender began transmission in that slot; a red X signifies a collision. If a collision occurs, the backoff value k is shown underneath. A dashed line shows the station waiting k slots for its next attempt. At T=0 we assume the transmitting station finishes, and all the Ai transmit and collide. At T=1, then, each of the Ai has discovered the collision; each chooses a random k<2. Let us assume that A1 chooses k=1, A2 chooses k=1, A3 chooses k=0, A4 chooses k=0, and A5 chooses k=1. Those stations choosing k=0 will retransmit immediately, at T=1. This means A3 and A4 collide again, and at T=2 they now choose random k<4. We will assume A3 chooses k=3 and A4 chooses k=0; A3 will try again at T=2+3=5 while A4 will try again at T=2, that is, now.
At T=2, we now have the original A1, A2, and A5 transmitting for the second time, while A4 is trying again for the third time. They collide. Let us suppose A1 chooses k=2, A2 chooses k=1, A5 chooses k=3, and A4 chooses k=6 (A4 is choosing k<8 at random). Their scheduled transmission attempt times are now A1 at T=3+2=5, A2 at T=4, A5 at T=6, and A4 at T=9. At T=3, nobody attempts to transmit. But at T=4, A2 is the only station to transmit, and so successfully seizes the channel. By the time T=5 rolls around, A1 and A3 will check the channel, that is, listen first, and wait for A2 to finish. At T=9, A4 will check the channel again, and also begin waiting for A2 to finish. A maximum of 1024 hosts is allowed on an Ethernet. This number apparently comes from the maximum range for the backoff time as 0 ≤ k < 1024. If there are 1024 hosts simultaneously trying to send, then, once the backoff range has reached k<1024 (N=10), we have a good chance that one station will succeed in seizing the channel; that is, the minimum value of all the random k’s chosen will be unique. This backoff algorithm is not “fair”, in the sense that the longer a station has been waiting to send, the lower its priority sinks. Newly transmitting stations with N=0 need not delay at all. The Ethernet capture effect, below, illustrates this unfairness. ### 2.1.4   Capture effect The capture effect is a scenario illustrating the potential lack of fairness in the exponential backoff algorithm. The unswitched Ethernet must be fully busy, in that each of two senders always has a packet ready to transmit. Let A and B be two such busy nodes, simultaneously starting to transmit their first packets. They collide. Suppose A wins, and sends. When A is finished, B tries to transmit again. But A has a second packet, and so A tries too. A chooses a backoff k<2 (that is, between 0 and 1 inclusive), but since B is on its second attempt it must choose k<4. This means A is favored to win. Suppose it does.
After that transmission is finished, A and B try yet again: A on its first attempt for its third packet, and B on its third attempt for its first packet. Now A again chooses k<2 but B must choose k<8; this time A is much more likely to win. Each time B fails to win a given backoff, its probability of winning the next one is reduced by about 1/2. It is quite possible, and does occur in practice, for B to lose all the backoffs until it reaches the maximum of N=16 attempts; once it has lost the first three or four this is in fact quite likely. At this point B simply discards the packet and goes on to the next one with N reset to 1 and k chosen from {0,1}. The capture effect can be fixed with appropriate modification of the backoff algorithm; the Binary Logarithmic Arbitration Method (BLAM) was proposed in [MM94]. The BLAM algorithm was considered for the then-nascent 100 Mbps “Fast” Ethernet standard. But in the end a hardware strategy won out: Fast Ethernet supports “full-duplex” mode which is collision-free (see 2.2   100 Mbps (Fast) Ethernet, below). While full-duplex mode is not required for using Fast Ethernet, it was assumed that any sites concerned enough about performance to be worried about the capture effect would opt for full-duplex. ### 2.1.5   Hubs and topology Ethernet hubs (multiport repeaters) change the topology, but not the fundamental constraints. Hubs allow much more branching; typically, each station in the office now has its own link to the wiring closet. Loops are still forbidden. Before inexpensive switches were widely available, 10BASE-T (twisted pair Ethernet) used hubs heavily; with twisted pair, a device can only connect to the endpoint of the wire. Thus, typically, each host is connected directly to a hub. The maximum diameter of an Ethernet consisting of multiple segments, joined by hubs, is constrained by the round-trip-time, and the need to detect collisions before the sender has completed sending, as before.
However, twisted-pair links are required to be much shorter, about 100 meters. ### 2.1.6   Errors Packets can have bits flipped or garbled by electrical noise on the cable; estimates of the frequency with which this occurs range from 1 in 10^4 to 1 in 10^6. Bit errors are not uniformly likely; when they occur, they are likely to occur in bursts. Packets can also be lost in hubs, although this appears less likely. Packets can be lost due to collisions only if the sending host makes 16 unsuccessful transmission attempts and gives up. Ethernet packets contain a 32-bit CRC error-detecting code (see 5.4.1   Cyclical Redundancy Check: CRC) to detect bit errors. Packets can also be misaddressed by the sending host, or, most likely of all, they can arrive at the receiving host at a point when the receiver has no free buffers and thus be dropped by a higher-layer protocol. ### 2.1.7   CSMA persistence A carrier-sense/multiple-access transmission strategy is said to be nonpersistent if, when the line is busy, the sender waits a randomly selected time. A strategy is p-persistent if, after waiting for the line to clear, the sender sends with probability p≤1. Ethernet uses 1-persistence. A consequence of 1-persistence is that, if more than one station is waiting for the line to clear, then when the line does clear a collision is certain. However, Ethernet then gracefully handles the resulting collision via the usual exponential backoff. If N stations are waiting to transmit, the time required for one station to win the backoff is linear in N. When we consider the Wi-Fi collision-handling mechanisms in 3.3   Wi-Fi, we will see that collisions cannot be handled quite as cheaply: for one thing, there is no way to detect a collision in progress, so the entire packet-transmission time is wasted. In the Wi-Fi case, p-persistence is used with p<1.
An Ethernet broadcast storm was said to occur when there were too many transmission attempts, and most of the available bandwidth was tied up in collisions. A properly functioning classic Ethernet had an effective bandwidth of as much as 50-80% of the nominal 10Mbps capacity, but attempts to transmit more than this typically resulted in successfully transmitting a good deal less. ### 2.1.8   Analysis of Classic Ethernet¶ How much time does Ethernet “waste” on collisions? A paradoxical attribute of Ethernet is that raising the transmission-attempt rate on a busy segment can reduce the actual throughput. More transmission attempts can lead to longer contention intervals between packets, as senders use the transmission backoff algorithm to attempt to acquire the channel. What effective throughput can be achieved? It is convenient to refer to the time between packet transmissions as the contention interval even if there is no actual contention, even if the network is idle. Thus, a timeline for Ethernet always consists of alternating packet transmissions and contention intervals: As a first look at contention intervals, assume that there are N stations waiting to transmit at the start of the interval. It turns out that, if all follow the exponential backoff algorithm, we can expect O(N) slot times before one station successfully acquires the channel; thus, Ethernets are happiest when N is small and there are only a few stations simultaneously transmitting. However, multiple stations are not necessarily a severe problem. Often the number of slot times needed turns out to be about N/2, and slot times are short. If N=20, then N/2 is 10 slot times, or 640 bytes. However, one packet time might be 1500 bytes. If packet intervals are 1500 bytes and contention intervals are 640 bytes, this gives an overall throughput of 1500/(640+1500) = 70% of capacity. In practice, this seems to be a reasonable upper limit for the throughput of classic shared-media Ethernet.
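The 70% figure can be reproduced in a couple of lines; here is a sketch of the model just described (N/2 slot times of contention per packet, 64-byte slots):

```python
SLOT_BYTES = 64   # one 10 Mbps Ethernet slot time = 512 bits = 64 bytes

def throughput(packet_bytes, n_stations):
    """Rough throughput estimate: N waiting stations cost about
    N/2 slot times of contention per successful packet."""
    contention_bytes = (n_stations / 2) * SLOT_BYTES
    return packet_bytes / (contention_bytes + packet_bytes)

# 20 stations and 1500-byte packets: the ~70% upper limit from the text
print(round(100 * throughput(1500, 20)))   # prints 70
```

Note how the estimate improves as N shrinks or packets grow: contention is a fixed per-packet overhead measured in slot times.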
#### 2.1.8.1   The ALOHA models¶ We get very similar throughput values when we analyze the Ethernet contention interval using the ALOHA model that was a precursor to Ethernet, and assume a very large number of active senders, each transmitting at a very low rate. In the ALOHA model, stations transmit packets without listening first for a quiet line or monitoring the transmission for collisions (this models the situation of several ground stations transmitting to a satellite; the ground stations are presumed unable to see one another). To model the success rate of ALOHA, assume all the packets are the same size and let T be the time to send one (fixed-size) packet; T represents the Aloha slot time. We will find the transmission rate that optimizes throughput. The core assumption of this model is that a large number N of hosts are transmitting, each at a relatively low rate of s packets/slot. Denote by G the average number of transmission attempts per slot; we then have G = Ns. We will derive an expression for S, the average rate of successful transmissions per slot, in terms of G. If two packets overlap during transmission, both are lost. Thus, a successful transmission requires everyone else quiet for an interval of 2T: if a sender succeeds in the interval from t to t+T, then no other node can have tried to begin transmission in the interval t−T to t+T. The probability of one station transmitting during an interval of time T is G = Ns; the probability of the remaining N−1 stations all quiet for an interval of 2T is (1−s)^(2(N−1)). The probability of a successful transmission is thus S = Ns·(1−s)^(2(N−1)) = G(1−G/N)^(2(N−1)) ⟶ Ge^(−2G) as N⟶∞. The function S = Ge^(−2G) has a maximum at G=1/2, S=1/(2e). The rate G=1/2 means that, on average, a transmission is attempted every other slot; this yields the maximum successful-transmission throughput of 1/(2e).
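The claimed maximum is easy to verify numerically; here is a quick check of S = Ge^(−2G) over a grid of attempt rates G:

```python
import math

def aloha_success_rate(G):
    """S = G·e^(−2G): successful transmissions per slot, given an
    aggregate attempt rate of G attempts per slot."""
    return G * math.exp(-2 * G)

# Scan a grid of attempt rates; the peak lands at G = 1/2, S = 1/(2e)
best_G = max((g / 1000 for g in range(1, 2001)), key=aloha_success_rate)
print(best_G, round(aloha_success_rate(best_G), 5))
```

One could also confirm the maximum by calculus: dS/dG = (1−2G)e^(−2G) vanishes exactly at G = 1/2.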
In other words, at this maximum attempt rate G=1/2, we expect about 2e−1 slot times worth of contention between successful transmissions. What happens to the remaining G−S unsuccessful attempts is not addressed by this model; presumably some higher-level mechanism (eg backoff) leads to retransmissions. A given throughput S < 1/(2e) may be achieved at either of two values for G; that is, a given success rate may be due to a comparable attempt rate or else due to a very high attempt rate with a similarly high failure rate. #### 2.1.8.2   ALOHA and Ethernet¶ The relevance of the Aloha model to Ethernet is that during one Ethernet slot time there is no way to detect collisions (they haven’t reached the sender yet!) and so the Ethernet contention phase resembles ALOHA with an Aloha slot time T of 51.2 microseconds. Once an Ethernet sender succeeds, however, it continues with a full packet transmission, which is presumably many times longer than T. The average length of the contention interval, at the maximum throughput calculated above, is 2e−1 slot times (from ALOHA); recall that our model here supposed many senders sending at very low individual rates. This is the minimum contention interval; with lower loads the contention interval is longer due to greater idle times and with higher loads the contention interval is longer due to more collisions. Finally, let P be the time to send an entire packet in units of T; ie the average packet size in units of T. P is thus the length of the “packet” phase in the diagram above. The contention phase has length 2e−1, so the total time to send one packet (contention+packet time) is 2e−1+P. The useful fraction of this is, of course, P, so the effective maximum throughput is P/(2e−1+P). At 10Mbps, T=51.2 microseconds is 512 bits, or 64 bytes. For P=128 bytes = 2*64, the effective bandwidth becomes 2/(2e−1+2), or 31%. For P=512 bytes=8*64, the effective bandwidth is 8/(2e+7), or 64%.
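These percentages follow directly from P/(2e−1+P); a two-line check, with the same formula applying to any other packet size:

```python
import math

def ethernet_efficiency(packet_bytes, slot_bytes=64):
    """Maximum effective throughput P/(2e−1+P), where the packet
    size P is measured in slot times (64 bytes at 10 Mbps)."""
    P = packet_bytes / slot_bytes
    return P / (2 * math.e - 1 + P)

for size in (128, 512):
    print(size, round(100 * ethernet_efficiency(size)))   # 31% and 64%
```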
For P=1500 bytes, the model here calculates an effective bandwidth of 84%. These numbers are quite similar to our earlier values based on a small number of stations sending constantly. ## 2.2   100 Mbps (Fast) Ethernet¶ In all the analysis here of 10 Mbps Ethernet, what happens when the bandwidth is increased to 100 Mbps, as is done in the so-called Fast Ethernet standard? If the network physical diameter remains the same, then the round-trip time will be the same in microseconds but will be 10-fold larger measured in bits; this might mean a minimum packet size of 640 bytes instead of 64 bytes. (Actually, the minimum packet size might be somewhat smaller, partly because the “jam signal” doesn’t have to speed up at all, and partly because some of the numbers in the 10 Mbps delay budget above were larger than necessary, but it would still be large enough that a substantial amount of bandwidth would be consumed by padding.) The designers of Fast Ethernet felt this was impractical. However, Fast Ethernet was developed at a time (~1995) when reliable switches (below) were widely available, and “longer” networks could be formed by chaining together shorter ones with switches. So instead of increasing the minimum packet size, the decision was made to ensure collision detectability by reducing the network diameter instead. The network diameter chosen was a little over 400 meters, with reductions to account for the presence of hubs. At 2.3 meters/bit, 400 meters is 174 bits, for a round-trip of 350 bits. This 400-meter number, however, may be misleading: by far the most popular Fast Ethernet standard is 100BASE-TX which uses twisted-pair copper wire (so-called Category 5, or better), and in which any individual cable segment is limited to 100 meters. The maximum 100BASE-TX network diameter – allowing for hubs – is just over 200 meters. The 400-meter distance does apply to optical-fiber-based 100BASE-FX in half-duplex mode, but this is not common.
The 100BASE-TX network-diameter limit of 200 meters might seem small; it amounts in many cases to a single hub with multiple 100-meter cable segments radiating from it. In practice, however, such “star” configurations can easily be joined with switches. As we will see below in 2.4   Ethernet Switches, switches partition an Ethernet into separate “collision domains”; the network-diameter rules apply to each domain separately but not to the aggregated whole. In a fully switched (that is, no hubs) 100BASE-TX LAN, each collision domain is simply a single twisted-pair link, subject to the 100-meter maximum length. Fast Ethernet also introduced the concept of full-duplex Ethernet: two twisted pairs could be used, one for each direction. Full-duplex Ethernet is limited to paths not involving hubs, that is, to single station-to-station links, where a station is either a host or a switch. Because such a link has only two potential senders, and each sender has its own transmit line, full-duplex Ethernet is collision-free. Fast Ethernet uses 4B/5B encoding, covered in 4.1.4   4B/5B. Fast Ethernet 100BASE-TX does not particularly support links between buildings, due to the network-diameter limitation. However, fiber-optic point-to-point links are quite effective here, provided full-duplex is used to avoid collisions. We mentioned above that the fiber-based 100BASE-FX standard allowed a maximum half-duplex run of 400 meters, but 100BASE-FX is much more likely to use full duplex, where the maximum cable length rises to 2,000 meters. ## 2.3   Gigabit Ethernet¶ If we continue to maintain the same slot time but raise the transmission rate to 1000 Mbps, the network diameter would now be 20-40 meters. Instead of that, Gigabit Ethernet moved to a 4096-bit (512-byte) slot time, at least for the twisted-pair versions. Short frames need to be padded, but this padding is done by the hardware.
Gigabit Ethernet 1000Base-T uses so-called PAM-5 encoding, below, which supports a special pad pattern (or symbol) that cannot appear in the data. The hardware pads the frame with these special patterns, and the receiver can thus infer the unpadded length as set by the host operating system. However, the Gigabit Ethernet slot time is largely irrelevant, as full-duplex (bidirectional) operation is almost always supported. Combined with the restriction that each length of cable is a station-to-station link (that is, hubs are no longer allowed), this means that collisions simply do not occur and the network diameter is no longer a concern. There are actually multiple Gigabit Ethernet standards (as there are for Fast Ethernet). The different standards apply to different cabling situations. There are full-duplex optical-fiber formulations good for many miles (eg 1000Base-LX10), and even a version with a 25-meter maximum cable length (1000Base-CX), which would in theory make the original 512-bit slot practical. The most common gigabit Ethernet over copper wire is 1000BASE-T (sometimes incorrectly referred to as 1000BASE-TX. While there exists a TX, it requires Category 6 cable and is thus seldom used; many devices labeled TX are in fact 1000BASE-T). For 1000BASE-T, all four twisted pairs in the cable are used. Each pair transmits at 250 Mbps, and each pair is bidirectional, thus supporting full-duplex communication. Bidirectional communication on a single wire pair takes some careful echo cancellation at each end, using a circuit known as a “hybrid” that in effect allows detection of the incoming signal by filtering out the outbound signal. On any one cable pair, there are five signaling levels. These are used to transmit two-bit symbols (4.1.4   4B/5B) at a rate of 125 symbols/µsec, for a data rate of 250 bits/µsec.
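The rate arithmetic can be tallied in a few lines:

```python
PAIRS = 4               # all four twisted pairs carry data simultaneously
SYMBOL_RATE = 125       # symbols per microsecond (125 Msymbols/s), per pair
BITS_PER_SYMBOL = 2     # PAM-5 carries two data bits per symbol

per_pair_mbps = SYMBOL_RATE * BITS_PER_SYMBOL
print(per_pair_mbps, per_pair_mbps * PAIRS)   # 250 Mbps per pair, 1000 total
```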
Two-bit symbols in theory only require four signaling levels; the fifth level allows for some redundancy which is used for error detection and correction, for avoiding long runs of identical symbols, and for supporting a special pad symbol, as mentioned above. The encoding is known as 5-level pulse-amplitude modulation, or PAM-5. The target bit error rate (BER) for 1000BASE-T is 10⁻¹⁰, meaning that the packet error rate is less than 1 in 10⁶. In developing faster Ethernet speeds, economics plays at least as important a role as technology. As new speeds reach the market, the earliest adopters often must take pains to buy cards, switches and cable known to “work together”; this in effect amounts to installing a proprietary LAN. The real benefit of Ethernet, however, is arguably that it is standardized, at least eventually, and thus a site can mix and match its cards and devices. Having a given Ethernet standard support existing cable is even more important economically; the costs of replacing cable often dwarf the costs of the electronics. ## 2.4   Ethernet Switches¶ Switches join separate physical Ethernets (or Ethernets and token rings). A switch has two or more Ethernet interfaces; when a packet is received on one interface it is retransmitted on one or more other interfaces. Only valid packets are forwarded; collisions are not propagated. The term collision domain is sometimes used to describe the region of an Ethernet in between switches; a given collision propagates only within its collision domain. All the collision-detection rules, including the rules for maximum network diameter, apply only to collision domains, and not to the larger “virtual Ethernets” created by stringing collision domains together with switches. As we shall see below, a switched Ethernet offers much more resistance to eavesdropping than a non-switched (eg hub-based) Ethernet.
Like simpler unswitched Ethernets, the topology for a switched Ethernet is in principle required to be loop-free, although in practice, most switches support the spanning-tree loop-detection protocol and algorithm, below, which automatically “prunes” the network topology to make it loop-free. And while a switch does not propagate collisions, it must maintain a queue for each outbound interface in case it needs to forward a packet at a moment when the interface is busy; on occasion packets are lost when this queue overflows. Ethernet switches use datagram forwarding as described in 1.4   Datagram Forwarding. They start out with empty forwarding tables, and build them through a “learning” process. If a switch does not have an entry for a particular destination, it will fall back on broadcast: it will forward the packet out every interface other than the one on which the packet arrived. A switch learns address locations as follows: for each interface, the switch maintains a table of physical addresses that have appeared as source addresses in packets arriving via that interface. The switch thus knows that to reach these addresses, if one of them later shows up as a destination address, the packet needs to be sent only via that interface. Specifically, when a packet arrives on interface I with source address S and destination unicast address D, the switch enters ⟨S,I⟩ into its forwarding table. To actually deliver the packet, the switch also looks up D in the forwarding table. If there is an entry ⟨D,J⟩ with J≠I – that is, D is known to be reached via interface J – then the switch forwards the packet out interface J. If J=I, that is, the packet has arrived on the same interface by which the destination is reached, then the packet does not get forwarded at all; it presumably arrived at interface I only because that interface was connected to a shared Ethernet segment that also either contained D or contained another switch that would bring the packet closer to D.
If there is no entry for D, the switch must forward the packet out all interfaces J with J≠I; this represents the fallback to broadcast. As time goes on, this fallback to broadcast is needed less and less often. In the diagram above, each switch’s tables are indicated by listing near each interface the destinations known to be reachable by that interface. The entries shown are the result of the following packets: • A sends to B; all switches learn where A is • B sends to A; this packet goes directly to A; only S3, S2 and S1 learn where B is • C sends to B; S4 does not know where B is so this packet goes to S5; S2 does know where B is so the packet does not go to S1. Once all the switches have learned where all (or most of) the hosts are, packet routing becomes optimal. At this point packets are never sent on links unnecessarily; a packet from A to B only travels those links that lie along the (unique) path from A to B. (Paths must be unique because switched Ethernet networks cannot have loops, at least not active ones. If a loop existed, then a packet sent to an unknown destination would be forwarded around the loop endlessly.) Switches have an additional advantage in that traffic that does not flow where it does not need to flow is much harder to eavesdrop on. On an unswitched Ethernet, one host configured to receive all packets can eavesdrop on all traffic. Early Ethernets were notorious for allowing one unscrupulous station to capture, for instance, all passwords in use on the network. On a fully switched Ethernet, a host physically only sees the traffic actually addressed to it; other traffic remains inaccessible. Typical switches have room for a table with 10⁴ – 10⁶ entries, though maxing out at 10⁵ entries may be more common; this is usually enough to learn about all hosts in even a relatively large organization.
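The learning sequence in the example above can be reproduced with a short simulation. The topology below is an assumption consistent with the example’s description (A on S1, B on S3, C on S4, with switch links S1–S2, S2–S3, S2–S5, S5–S4), and the class and function names are invented for illustration:

```python
class Switch:
    def __init__(self, name):
        self.name = name
        self.table = {}    # learned: source address -> arrival port
        self.ports = {}    # port label -> neighboring Switch, or None for a host

def connect(a, b):
    a.ports[b.name] = b
    b.ports[a.name] = a

def attach_host(sw, host):
    sw.ports[host] = None      # host ports have no switch behind them

def deliver(sw, src, dst, arrival, seen):
    seen.append(sw.name)
    sw.table[src] = arrival            # learn where src can be reached
    if dst in sw.table:                # known: forward out one port only
        out = sw.table[dst]
        if out != arrival and sw.ports[out] is not None:
            deliver(sw.ports[out], src, dst, sw.name, seen)
    else:                              # unknown: fall back to flooding
        for port, nbr in sw.ports.items():
            if port != arrival and nbr is not None:
                deliver(nbr, src, dst, sw.name, seen)

s = {n: Switch(n) for n in ("S1", "S2", "S3", "S4", "S5")}
connect(s["S1"], s["S2"]); connect(s["S2"], s["S3"])
connect(s["S2"], s["S5"]); connect(s["S5"], s["S4"])
attach_host(s["S1"], "A"); attach_host(s["S3"], "B"); attach_host(s["S4"], "C")

def send(src, first_switch, dst):
    seen = []
    deliver(s[first_switch], src, dst, src, seen)
    return seen

t1 = send("A", "S1", "B"); print(t1)   # flood: every switch sees it, learns A
t2 = send("B", "S3", "A"); print(t2)   # direct: only S3, S2, S1 see it
t3 = send("C", "S4", "B"); print(t3)   # S4, S5 flood; S2 knows B, skips S1
```

The three traces match the bullet list above: the first packet floods everywhere, the reply follows the learned path, and the third packet floods only until it reaches a switch that already knows where B is.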
A switched Ethernet can fail when total traffic becomes excessive, but excessive total traffic would drown any network (although other network mechanisms might support higher bandwidth). The main limitations specific to switching are the requirement that the topology must be loop-free (thus disallowing duplicate paths which might otherwise provide redundancy), and that all broadcast traffic must always be forwarded everywhere. As a switched Ethernet grows, broadcast traffic comprises a larger and larger percentage of the total traffic, and the organization must at some point move to a routing architecture (eg as in 7.6   IP Subnets). One of the differences between an inexpensive Ethernet switch and a pricier one is the degree of internal parallelism it can support. If three packets arrive simultaneously on ports 1, 2 and 3, and are destined for respective ports 4, 5 and 6, can the switch actually transmit the packets simultaneously? A simple switch likely has a single CPU and a single memory bus, both of which can introduce transmission bottlenecks. For commodity five-port switches, at most two simultaneous transmissions can occur; such switches can generally handle that degree of parallelism. It becomes harder as the number of ports increases, but at some point the need to support full parallel operation can be questioned; in many settings the majority of traffic involves one or two server or router ports. If a high degree of parallelism is in fact required, there are various architectures – known as switch fabrics – that can be used; these typically involve multiple simple processor elements. ## 2.5   Spanning Tree Algorithm¶ In theory, if you form a loop with Ethernet switches, any packet with destination not already present in the forwarding tables will circulate endlessly; naive switches will actually do this. In practice, however, loops allow a form of redundancy – if one link breaks there is still 100% connectivity – and so are desirable. 
As a result, Ethernet switches have incorporated a switch-to-switch protocol to construct a subset of the switch-connections graph that has no loops and yet allows reachability of every host, known as a spanning tree. The switch-connections graph is the graph with nodes consisting of both switches and of the unswitched Ethernet segments and isolated individual hosts connected to the switches. Multi-host Ethernet segments are most often created via Ethernet hubs (repeaters). Edges in the graph represent switch-segment and switch-switch connections; each edge attaches to its switch via a particular, numbered interface. The goal is to disable redundant (cyclical) paths while remaining able to deliver to any segment. The algorithm is due to Radia Perlman, [RP85]. Once the spanning tree is built, all packets are sent only via edges in the tree, which, as a tree, has no loops. Switch ports (that is, edges) that are not part of the tree are not used at all, even if they would represent the most efficient path for that particular destination. If a given segment connects to two switches that both connect to the root node, the switch with the shorter path to the root is used, if possible; in the event of ties, the switch with the smaller ID is used. The simplest measure of path cost is the number of hops, though current implementations generally use a cost factor inversely proportional to the bandwidth (so larger bandwidth has lower cost). Some switches permit other configuration here. The process is dynamic, so if an outage occurs then the spanning tree is recomputed. If the outage should partition the network into two pieces, both pieces will build spanning trees. All switches send out regular messages on all interfaces called bridge protocol data units, or BPDUs (or “Hello” messages). These are sent to the Ethernet multicast address 01:80:c2:00:00:00, from the Ethernet physical address of the interface. 
(Note that Ethernet switches do not otherwise need a unique physical address for each interface.) The BPDUs contain • The switch ID • the ID of the node the switch believes is the root • the path cost to that root These messages are recognized by switches and are not forwarded naively. Bridges process each message, looking for • a switch with a lower ID (thus becoming the new root) • a shorter path to the existing root • an equal-length path to the existing root, but via a switch or port with a lower ID (the tie-breaker rule) When a switch sees a new root candidate, it sends BPDUs on all interfaces, indicating the distance. The switch includes the interface leading towards the root. Once this process is complete, each switch knows • its own path to the root • which of its ports any further-out switches will be using to reach the root • for each port, its directly connected neighboring switches Now the switch can “prune” some (or all!) of its interfaces. It disables all interfaces that are not enabled by the following rules: 1. It enables the port via which it reaches the root 2. It enables any of its ports that further-out switches use to reach the root 3. If a remaining port connects to a segment to which other “segment-neighbor” switches connect as well, the port is enabled if the switch has the minimum cost to the root among those segment-neighbors, or, if a tie, the smallest ID among those neighbors, or, if two ports are tied, the port with the smaller ID. 4. If a port has no directly connected switch-neighbors, it presumably connects to a host or segment, and the port is enabled. Rules 1 and 2 construct the spanning tree; if S3 reaches the root via S2, then Rule 1 makes sure S3’s port towards S2 is open, and Rule 2 makes sure S2’s corresponding port towards S3 is open. 
Rule 3 ensures that each network segment that connects to multiple switches gets a unique path to the root: if S2 and S3 are segment-neighbors each connected to segment N, then S2 enables its port to N and S3 does not (because 2<3). The primary concern here is to create a path for any host nodes on segment N; S2 and S3 will create their own paths via Rules 1 and 2. Rule 4 ensures that any “stub” segments retain connectivity; these would include all hosts directly connected to switch ports. ### 2.5.1   Example 1: Switches Only¶ We can simplify the situation somewhat if we assume that the network is fully switched: each switch port connects to another switch or to a (single-interface) host; that is, no repeater hubs (or coax segments!) are in use. In this case we can dispense with Rule 3 entirely. Any switch ports directly connected to a host can be identified because they are “silent”; the switch never receives any BPDU messages on these interfaces because hosts do not send these. All these host ports end up enabled via Rule 4. Here is our sample network, where the switch numbers (eg 5 for S5) represent their IDs; no hosts are shown and interface numbers are omitted. S1 has the lowest ID, and so becomes the root. S2 and S4 are directly connected, so they will enable the interfaces by which they reach S1 (Rule 1) while S1 will enable its interfaces by which S2 and S4 reach it (Rule 2). S3 has a unique lowest-cost route to S1, and so again by Rule 1 it will enable its interface to S2, while by Rule 2 S2 will enable its interface to S3. S5 has two choices; it hears of equal-cost paths to the root from both S2 and S4. It picks the lower-numbered neighbor S2; the interface to S4 will never be enabled. Similarly, S4 will never enable its interface to S5. Similarly, S6 has two choices; it selects S3.
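These selections can also be computed mechanically. Here is a sketch, with the edge list an assumption reconstructed from the description above (S1–S2, S1–S4, S2–S3, S2–S5, S4–S5, S3–S6, S5–S6):

```python
edges = [(1, 2), (1, 4), (2, 3), (2, 5), (4, 5), (3, 6), (5, 6)]

nbrs = {}
for a, b in edges:
    nbrs.setdefault(a, set()).add(b)
    nbrs.setdefault(b, set()).add(a)

root = min(nbrs)          # the switch with the lowest ID becomes the root

# Breadth-first distances from the root
dist = {root: 0}
frontier = [root]
while frontier:
    nxt = []
    for u in frontier:
        for v in nbrs[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                nxt.append(v)
    frontier = nxt

# Each non-root switch enables the port toward its best next hop:
# minimum distance to the root, ties broken by the lower neighbor ID
parent = {sw: min(n for n in nbrs[sw] if dist[n] == dist[sw] - 1)
          for sw in nbrs if sw != root}

tree = sorted((min(sw, p), max(sw, p)) for sw, p in parent.items())
print(tree)    # the S4–S5 and S5–S6 links are never enabled
```

With these assumed edges the computation reproduces the text: S5 picks S2 over S4, and S6 picks S3 over S5.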
After these links are enabled (strictly speaking it is interfaces that are enabled, not links, but in all cases here either both interfaces of a link will be enabled or neither), the network in effect becomes: ### 2.5.2   Example 2: Switches and Segments¶ As an example involving switches that may join via unswitched Ethernet segments, consider the following network; S1, S2 and S3, for example, are all segment-neighbors via their common segment B. As before, the switch numbers represent their IDs. The letters in the clouds represent network segments; these clouds may include multiple hosts. Note that switches have no way to detect these hosts; only (as above) other switches. Eventually, all switches discover S1 is the root (because 1 is the smallest of {1,2,3,4,5,6}). S2, S3 and S4 are one (unique) hop away; S5, S6 and S7 are two hops away. For the switches one hop from the root, Rule 1 enables S2’s port 1, S3’s port 1, and S4’s port 1. Rule 2 enables the corresponding ports on S1: ports 1, 5 and 4 respectively. Without the spanning-tree algorithm S2 could reach S1 via port 2 as well as port 1, but port 1 has a smaller number. S5 has two equal-cost paths to the root: S5⟶S4⟶S1 and S5⟶S3⟶S1. S3 is the switch with the lower ID; its port 2 is enabled and S5 port 2 is enabled. S6 and S7 reach the root through S2 and S3 respectively; we enable S6 port 1, S2 port 3, S7 port 2 and S3 port 3. The ports still disabled at this point are S1 ports 2 and 3, S2 port 2, S4 ports 2 and 3, S5 port 1, S6 port 2 and S7 port 1. Now we get to Rule 3, dealing with how segments (and thus their hosts) connect to the root. Applying Rule 3, • We do not enable S2 port 2, because the network (B) has a direct connection to the root, S1 • We do enable S4 port 3, because S4 and S5 connect that way and S4 is closer to the root. This enables connectivity of network D. We do not enable S5 port 1. • S6 and S7 are tied for the path-length to the root. But S6 has smaller ID, so it enables port 2. 
S7’s port 1 is not enabled. Finally, Rule 4 enables S4 port 2, and thus connectivity for host J. It also enables S1 port 2; network F has two connections to S1 and port 2 is the lower-numbered connection. All this port-enabling is done using only the data collected during the root-discovery phase; there is no additional negotiation. The BPDU exchanges continue, however, so as to detect any changes in the topology. If a link is disabled, it is not used even in cases where it would be more efficient to use it. That is, traffic from F to B is sent via S1, D, and S5; it never goes through S7. IP routing, on the other hand, uses the “shortest path”. To put it another way, all spanning-tree Ethernet traffic goes through the root node, or along a path to or from the root node. The traditional (IEEE 802.1D) spanning-tree protocol is relatively slow; the need to go through the tree-building phase means that after switches are first turned on no normal traffic can be forwarded for ~30 seconds. Faster, revised protocols have been proposed to reduce this problem. Another issue with the spanning-tree algorithm is that a rogue switch can announce an ID of 0, thus likely becoming the new root; this leaves that switch well-positioned to eavesdrop on a considerable fraction of the traffic. One of the goals of the Cisco “Root Guard” feature is to prevent this; another goal of this and related features is to put the spanning-tree topology under some degree of administrative control. One likely wants the root switch, for example, to be geographically at least somewhat centered. ## 2.6   Virtual LAN (VLAN)¶ What do you do when you have different people in different places who are “logically” tied together? For example, for a while the Loyola University CS department was split, due to construction, between two buildings. One approach is to continue to keep LANs local, and use IP routing between different subnets.
However, it is often convenient (printers are one reason) to configure workgroups onto a single “virtual” LAN, or VLAN. A VLAN looks like a single LAN, usually a single Ethernet LAN, in that all VLAN members will see broadcast packets sent by other members and the VLAN will ultimately be considered to be a single IP subnet (7.6   IP Subnets). Different VLANs are ultimately connected together, but likely only by passing through a single, central IP router. VLANs can be visualized and designed by using the concept of coloring. We logically assign all nodes on the same VLAN the same color, and switches forward packets accordingly. That is, if S1 connects to red machines R1 and R2 and blue machines B1 and B2, and R1 sends a broadcast packet, then it goes to R2 but not to B1 or B2. Switches must, of course, be told the color of each of their ports. In the diagram above, S1 and S3 each have both red and blue ports. The switch network S1-S4 will deliver traffic only when the source and destination ports are the same color. Red packets can be forwarded to the blue VLAN only by passing through the router R, entering R’s red port and leaving its blue port. R may apply firewall rules to restrict red–blue traffic. When the source and destination ports are on the same switch, nothing needs to be added to the packet; the switch can keep track of the color of each of its ports. However, switch-to-switch traffic must be additionally tagged to indicate the source. Consider, for example, switch S1 above sending packets to S3 which has nodes R3 (red) and B3 (blue). Traffic between S1 and S3 must be tagged with the color, so that S3 will know to what ports it may be delivered. The IEEE 802.1Q protocol is typically used for this packet-tagging; a 32-bit “color” tag is inserted into the Ethernet header after the source address and before the type field. The first 16 bits of this field are 0x8100, which becomes the new Ethernet type field and which identifies the frame as tagged.
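The tag insertion itself is straightforward; here is a sketch (the choice of VLAN number 10 standing in for the “red” color is arbitrary, for illustration):

```python
import struct

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the two 6-byte MAC addresses:
    TPID 0x8100, then 3 bits priority, 1 bit DEI, 12 bits VLAN ID."""
    assert 0 <= vlan_id < 4096
    tci = (priority << 13) | vlan_id
    return frame[:12] + struct.pack("!HH", 0x8100, tci) + frame[12:]

# A toy frame: 6-byte destination, 6-byte source, type 0x0800 (IPv4), payload
untagged = bytes(6) + bytes(range(6)) + struct.pack("!H", 0x0800) + b"payload"
tagged = tag_frame(untagged, vlan_id=10)

print(tagged[12:14].hex())                               # 8100: now tagged
print(int.from_bytes(tagged[14:16], "big") & 0x0FFF)     # VLAN ID: 10
```

The original type field (0x0800 here) slides four bytes to the right and remains intact, so an 802.1Q-aware receiver can strip the tag and recover the untagged frame exactly.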
Double-tagging is possible; this would allow an ISP to have one level of tagging and its customers to have another level. ## 2.7   Epilog¶ Ethernet dominates the LAN layer, but is not one single LAN protocol: it comes in a variety of speeds and flavors. Higher-speed Ethernet seems to be moving towards fragmenting into a range of physical-layer options for different types of cable, but all based on switches and point-to-point linking; different Ethernet types can be interconnected only with switches. Once Ethernet finally abandons physical links that are bi-directional (half-duplex links), it will be collision-free and thus will no longer need a minimum packet size. Other wired networks have largely disappeared (or have been renamed “Ethernet”). Wireless networks, however, are here to stay, and for the time being at least have inherited the original Ethernet’s collision-management concerns. ## 2.8   Exercises¶ 1. Simulate the contention period of five Ethernet stations that all attempt to transmit at T=0 (presumably when some sixth station has finished transmitting). Assume that time is measured in slot times, and that exactly one slot time is needed to detect a collision (so that if two stations transmit at T=1 and collide, and one of them chooses a backoff time k=0, then that station will transmit again at T=2). Use coin flips or some other source of randomness. 2. Suppose we have Ethernet switches S1 through S3 arranged as below. All forwarding tables are initially empty.

S1────────S2────────S3───D
│         │         │
A         B         C

(a). If A sends to B, which switches see this packet?
(b). If B then replies to A, which switches see this packet?
(c). If C then sends to B, which switches see this packet?
(d). If C then sends to D, which switches see this packet?
3. Suppose we have the Ethernet switches S1 through S4 arranged as below. All forwarding tables are empty; each switch uses the learning algorithm of 2.4   Ethernet Switches.
B │ S4 │ A───S1────────S2────────S3───C │ D Now suppose the following packet transmissions take place: • A sends to B • B sends to A • C sends to B • D sends to A For each switch, list what source nodes (eg A,B,C,D) it has seen (and thus learned about). 4. In the switched-Ethernet network below, find two packet transmissions so that, when a third transmission A⟶D occurs, the packet is delivered to B (that is, it is forwarded out all ports of S2), but is not similarly delivered to C. All forwarding tables are initially empty, and each switch uses the learning algorithm of 2.4   Ethernet Switches. B C │ │ A───S1────────S2────────S3───D Hint: Destination D must be in S3’s forwarding table, but must not be in S2’s. 5. Given the Ethernet network with learning switches below, with (disjoint) unspecified parts represented by ?, explain why it is impossible for a packet sent from A to B to be forwarded by S1 only to S2, but to be forwarded by S2 out all of S2’s other ports. ? ? | | A───S1────────S2───B 6. In the diagram of 2.4   Ethernet Switches, suppose node D is connected to S5, and, with the tables as shown below the diagram, D sends to B. (a). Which switches will see this packet, and thus learn about D? (b). Which of the switches in part (a) do not already know where B is (and will thus forward the packet out all non-arrival interfaces)? 7. Suppose two Ethernet switches are connected in a loop as follows; S1 and S2 have their interfaces 1 and 2 labeled. These switches do not use the spanning-tree algorithm. Suppose A attempts to send a packet to destination B, which is unknown. S1 will therefore forward the packet out interfaces 1 and 2. What happens then? How long will A’s packet circulate? 8. The following network is like that of 2.5.1   Example 1: Switches Only, except that the switches are numbered differently. What is the end result of the spanning-tree algorithm in this case? S1──────S4──────S6 │ │ │ │ │ │ │ │ │ S3──────S5──────S2 9. 
Suppose you want to develop a new protocol so that Ethernet switches participating in a VLAN all keep track of the VLAN “color” associated with every destination. Assume that each switch knows which of its ports (interfaces) connect to other switches and which may connect to hosts, and in the latter case knows the color assigned to that port. (a). Suggest a way by which switches might propagate this destination-color information to other switches. (b). What happens if a port formerly reserved for connection to another switch is now used for a host?
The Inequalities for Polynomials and Integration over Fractal Arcs The paper deals with the evaluation of the integral $\int_{\gamma} f \,dz$ along a fractal arc $\gamma$ in the complex plane in terms of polynomial approximations of the function~$f$. We obtain inequalities for polynomials and integrability conditions for functions from the H\"older, Besov and Slobodetskii spaces. Categories: 26B15, 28A80
# Math Help - derivative 1. ## derivative find the derivative of $y=\frac{x+1}{x-1}$ using first principles so $\frac{f(x+h)-f(x)}{h}$ $ \frac{(x+h)+1}{(x+h)-1} - \frac{x+1}{x-1}$ all over h can't get the right answer 2. Originally Posted by euclid2 find the derivative of $y=\frac{x+1}{x-1}$ using first principles so $\frac{f(x+h)-f(x)}{h}$ $ \frac{(x+h)+1}{(x+h)-1} - \frac{x+1}{x-1}$ all over h can't get the right answer $f(x+h)-f(x)=\frac{x+1+h}{x-1+h}-\frac{x+1}{x-1}$ $=\frac{(x-1)(x+1+h)-(x+1)(x-1+h)}{(x-1+h)(x-1)}$ $=\frac{\color{blue}(x-1)(x+1)\color{black}+(x-1)h\color{blue}-(x+1)(x-1)\color{black}-(x+1)h}{(x-1)(x-1)+h(x-1)}$ $=\frac{xh-h-xh-h}{(x-1)^2+h(x-1)}$ $=\frac{-2h}{(x-1)^2+h(x-1)}$ $\frac{f(x+h)-f(x)}{h}=\frac{-2}{(x-1)^2+h(x-1)}$ setting h=0, the derivative is $f'(x)=\frac{-2}{(x-1)^2}$
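The algebraic answer above can be sanity-checked numerically: for small h the difference quotient should approach $-2/(x-1)^2$. A quick Python check (the sample point x = 3 is an arbitrary choice):

```python
def f(x):
    return (x + 1) / (x - 1)

def difference_quotient(x, h):
    # (f(x+h) - f(x)) / h, exactly the expression simplified above
    return (f(x + h) - f(x)) / h

x = 3.0
exact = -2 / (x - 1) ** 2          # the limit found above: f'(3) = -0.5
approx = difference_quotient(x, 1e-6)
```

The simplified form $-2/((x-1)^2 + h(x-1))$ makes it clear why the error shrinks linearly in h.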
## Physics: Principles with Applications (7th Edition) $V = 4.1H - 0.018A - 2.7$ Since volume V is in liters, then the units of each of the three terms on the right side of the equation must be liters. The units of 4.1 must be liters/meter, then the units of 4.1H are (liters/meters)(meters) = liters. The units of 0.018 must be liters/year, then the units of 0.018A are (liters/year)(years) = liters. The units of 2.7 must be liters.
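Once the units check out, the formula can be evaluated directly. A small Python sketch (the height and age below are made-up sample values, not from the textbook):

```python
def lung_volume(height_m: float, age_years: float) -> float:
    """V = 4.1 H - 0.018 A - 2.7, with V in liters, H in meters, A in years.

    The coefficients carry units: 4.1 L/m, 0.018 L/yr, 2.7 L, so every
    term on the right-hand side comes out in liters.
    """
    return 4.1 * height_m - 0.018 * age_years - 2.7

v = lung_volume(1.8, 40)   # 4.1*1.8 - 0.018*40 - 2.7 = 3.96 L
```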
16th European Solar Physics Meeting Sep 6 – 10, 2021 Online Europe/Rome timezone Reconstruction of solar magnetic field during Cycles 15--19 based on proxies from Kodaikanal Solar Observatory Sep 6, 2021, 11:50 AM 13m Online Online Poster Session 1 - Solar Interior, Dynamo, Large-Scale Flows and the Solar Cycle Speaker Bidya Binay Karak (Indian Institute of Technology (BHU) Varanasi) Description Regular observation of the solar magnetic field is available for only about the last five cycles. Thus, to understand the origin of the variation of the solar magnetic field, it is essential to reconstruct the field for past cycles from other datasets. The long-term, uniform observations recorded over the past 100 years at the Kodaikanal Solar Observatory (KoSO) provide such an opportunity. We develop, for the first time, a method for reconstructing the solar magnetic field using KoSO's synoptic observations of the Sun's emission in the Ca II K and H$\alpha$ lines. The reconstruction method rests on two facts: the Ca II K intensity correlates well with the unsigned magnetic flux, while the sign of the flux is derived from the corresponding H$\alpha$ map, which provides the information on the dominant polarities. Based on this reconstructed magnetic map, we study the evolution of the magnetic field in Cycles 15--19. We also study bipolar magnetic regions (BMRs) and their remnant flux surges, and the causal relation between them. Time-latitude analysis of the reconstructed magnetic flux provides an overall view of magnetic field evolution. We identify the reversals of the polar field and critical surges of following and leading polarities. We find that the poleward transport of opposite polarities led to multiple changes of the dominant magnetic polarity at the poles. Furthermore, the remnant flux surges that occur between adjacent 11-year cycles reveal physical connections between them.
Primary authors Bidya Binay Karak (Indian Institute of Technology (BHU) Varanasi) Dr Alexander Mordvinov (Institute of Solar-Terrestrial Physics, Irkutsk, 664033, Russia) Prof. Dipankar Banerjee (ARIES, Nainital) Dr Elena M. Golubeva (Institute of Solar-Terrestrial Physics, Irkutsk, 664033, Russia) Subhamoy Chatterjee Elena Golubeva Dr Anna Khlystova
# Thread: [SOLVED] characterizing all ideals of a certain ring... :-\ 1. ## [SOLVED] characterizing all ideals of a certain ring... :-\ Hello, and thank you very much for reading. Hate these kinds of questions: Let p be a prime number. Let R = Z(p) be the ring defined as follows: Z(p) = {x/y : gcd(y,p)=1} (notice that it's not the ring {0,1,...,p-1}!) I need to characterize all the ideals in this ring, and all of its quotient rings... I already proved Z(p) is a ring (I needed to do so before this question). I also noticed that an element x/y is invertible if and only if x is not in pZ (meaning, if and only if gcd(x,p)=1). I know that if an ideal contains an invertible element then it is all of R, so I'm looking for ideals consisting only of elements x/y such that p divides x. However, I cannot see how to find how many ideals of this type there are, and moreover - how to show that there are no other types of ideals... :-\ I'll think of quotient rings after I find the ideals... I'm too stuck on this one... Thank you in advance. Yours truly Tomer. 2. Originally Posted by aurora Hello, and thank you very much for reading. Hate these kinds of questions: Let p be a prime number. Let R = Z(p) be the ring defined as follows: Z(p) = {x/y : gcd(y,p)=1} (notice that it's not the ring {0,1,...,p-1}!) I need to characterize all the ideals in this ring, and all of its quotient rings... I already proved Z(p) is a ring (I needed to do so before this question). I also noticed that an element x/y is invertible if and only if x is not in pZ (meaning, if and only if gcd(x,p)=1). I know that if an ideal contains an invertible element then it is all of R, so I'm looking for ideals consisting only of elements x/y such that p divides x. However, I cannot see how to find how many ideals of this type there are, and moreover - how to show that there are no other types of ideals... :-\ I'll think of quotient rings after I find the ideals... I'm too stuck on this one... Thank you in advance.
Yours truly Tomer. suppose $I \neq (0)$ is an ideal of $R.$ note that if $\frac{x}{y} \in I,$ for some $y$ coprime with $p,$ then for any $z \in \mathbb{Z}$ coprime with $p$ we have $\frac{x}{z}=\frac{y}{z} \cdot \frac{x}{y} \in I.$ let $n=\min \{x \in \mathbb{N}: \ \ \frac{x}{y} \in I, \ \text{for some} \ y \ \text{with} \ \gcd(p,y)=1 \}.$ (note that if $\frac{x}{y} \in I,$ then $\frac{-x}{y} \in I$ too. so the set we defined is not empty.) so $\frac{n}{y} \in I,$ for some $y$ coprime with $p.$ thus by the above remark $\frac{n}{z} \in I$ for any $z$ coprime with $p.$ now let $\frac{x}{z} \in I.$ we have $x=sn+r,$ where $0 \leq r < n.$ therefore: $\frac{r}{z}=\frac{x}{z}-\frac{sn}{z} \in I,$ because both $\frac{x}{z}$ and $\frac{n}{z}$ are in $I.$ but this will contradict the minimality of $n$ unless $r=0.$ so $\frac{x}{z}=\frac{sn}{z}.$ this proves that $I=\{\frac{sn}{z}: \ \ s,z \in \mathbb{Z}, \ \gcd(p,z)=1 \}=nR.$ obviously $I=R$ if and only if $\gcd(n,p)=1. \ \Box$ Note: so every ideal of $R$ is cyclic. this problem is a good introduction to "localization", which is a powerful tool in the theory of commutative rings. 3. Thank you!! Your detailed proof, and the fact that you mentioned the ideal is cyclic, motivated me to attack the problem from this direction, and I think I managed to prove something pretty elegant - that every ideal I that's not trivial (meaning, not {0} or Z(p)) is of the form (p^k) for some natural k, where (p^k) = p^k * Z(p)! So all the ideals are of the form (p^k)! Thank you thank you thank you. 4. Originally Posted by aurora Thank you!! Your detailed proof, and the fact that you mentioned the ideal is cyclic, motivated me to attack the problem from this direction, and I think I managed to prove something pretty elegant - that every ideal I that's not trivial (meaning, not {0} or Z(p)) is of the form (p^k) for some natural k, where (p^k) = p^k * Z(p)! So all the ideals are of the form (p^k)! Thank you thank you thank you. correct!
so $\mathfrak{m}=pR$ is the unique maximal ideal of $R$ and $\{{\mathfrak{m}}^k: \ k \geq 0 \}$ is the set of all nonzero ideals of $R.$ this is not just an accident! the reason is that "a local PID is a discrete valuation ring." (also a discrete valuation ring is a local PID which is not a field!) so if you replace $\mathbb{Z}$ in your problem with any principal ideal domain, you'll get the same result. 5. I'll remember that
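The structure described above — every nonzero ideal is $(p^k) = \mathfrak{m}^k$ — comes down to the fact that every nonzero element of Z(p) is a power of p times a unit, where the power is the p-adic valuation of the numerator. A small Python sketch of that valuation (the prime p = 5 is an arbitrary sample choice):

```python
from fractions import Fraction

P = 5  # the fixed prime; any prime works the same way

def valuation(q: Fraction) -> int:
    """P-adic valuation of a nonzero element x/y of Z_(P) (P must not divide y)."""
    assert q != 0 and q.denominator % P != 0
    k, x = 0, abs(q.numerator)
    while x % P == 0:
        x //= P
        k += 1
    return k

# The principal ideal generated by q is (P ** valuation(q));
# units are exactly the elements of valuation 0.
```

So two elements generate the same ideal precisely when their valuations agree, which is the chain of ideals $(1) \supset (p) \supset (p^2) \supset \cdots$ found in the thread.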
I have a small class of 2D drawing utilities, Draw2D, which is the global entry point for anything related to drawing. Currently, the class looks like this: struct Draw2D { std::shared_ptr<D2Label> CreateLabel (...); std::shared_ptr<D2Plot> CreatePlot (...); void Draw (D2Element& e); std::vector<std::weak_ptr<D2Element>> elements_; }; struct D2Element { ... }; struct D2Label : D2Element { ... }; So far so good. The key point here is that Draw2D needs to know all its elements so it can update them, and for drawing, each element has to call back to its parent Draw2D instance. I.e. you cannot move elements between instances, as all elements share resources provided by the parent. The solution I have right now is weak_ptrs which get checked on each update of the Draw2D instance; if one is stale, the object is removed (meaning the user has deleted it in the meantime.) The problem is that after the Draw2D instance dies, of course all still-existing instances are dead, too. Even though they are technically valid objects, you cannot use them safely any more. Even queries could fail if they touch the shared resources, and keeping the shared resource alive via shared_ptr does not help either, because that just postpones the problems until the drawing backend dies (in which case all resources are invalidated.) Is there some way to rewrite this code to make it clearer to the clients? I tried something like this, as I definitely want clients to be able to release resources early (i.e. before Draw2D dies.) struct D2Element { ~D2Element () { deleter_ (this); } std::function<void (D2Element*)> deleter_; }; struct Draw2D { void Release (D2Element* element) { // Remove from list of registered elements } ~Draw2D () { // delete all remaining children } D2Label* CreateLabel (...) { auto label = ...; label->deleter_ = std::bind (&Draw2D::Release, this, _1); return label; } }; This is OK, but now the clients have to wrap it into smart pointers on their own by default.
• I'm not sure if I understand the problem, but can't you overload Draw2D::Release(std::shared_ptr<D2Element> element) { Release(element.get()); } (with a cast if necessary) and then create auto label as a shared_ptr? – Felix Dombek Sep 12 '11 at 6:09 • Ok, I have worked through your code several times now and it is becoming clear that you have not disclosed a key piece of information. What is this shared resource, the drawing back end as you call it? Why is there a binding needed at all between the D2Element-derived object types and the engine (Draw2D) that draws them? – dex black Sep 13 '11 at 12:19 Why not separate the 2D element into two parts: • The interface that can be used • The shared resource • Used by the Draw2D interface • Used by the D2Elements. Then the D2Elements do not need to depend on an object that can go out of scope; they have partial ownership of the shared resource (with any common data that they and the Draw2D need). class Draw2D { // interface as before // Weak pointers as before. std::shared_ptr<SharedResource> data; }; class D2Element { //... As before. std::shared_ptr<SharedResource> sharedData; }; Even if the Draw2D's lifetime ends, the shared resource (because it is shared with all created children) will not die and thus not be destroyed.
Q # Which term of the AP : 121, 117, 113, . . ., is its first negative term? Q : 1     Which term of the AP  $\small 121,117,113,...,$ is its first negative term?  [Hint : Find $n$ for $a_n<0$ ] The given AP is $\small 121,117,113,...,$ Here $a = 121 \ and \ d = -4$. Suppose the nth term of the AP is the first negative term. Then $a_n = a+ (n-1)d$. If the nth term is negative, then $a_n < 0$ $\Rightarrow 121+(n-1)(-4) < 0$ $\Rightarrow 125<4n$ $\Rightarrow n > \frac{125}{4}=31.25$ Therefore, the first negative term is the 32nd term.
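The inequality can be double-checked by brute force: scan the terms of the AP until one goes negative. A short Python check:

```python
a, d = 121, -4   # first term and common difference of the AP

def term(n: int) -> int:
    """n-th term of the AP: a_n = a + (n - 1) d."""
    return a + (n - 1) * d

# The inequality 121 + (n - 1)(-4) < 0 gives n > 125/4 = 31.25,
# so the first negative term should be the 32nd.
first_negative = next(n for n in range(1, 200) if term(n) < 0)
```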
Did Newton argue that particles speed up when entering a more dense medium? Statement: Newton argued that particles speed up as they travel from air into a dense, transparent object, such as glass. From this source, I gather that he did argue that the light particles sped up when entering a more dense medium. However, it just doesn't make sense. I thought they seemed to slowed down. • Speed does not change.And Newton was wrong many a times. – soumyadeep Sep 18 '14 at 4:31 Like all good physicists Newton proposed a hypothesis for why light refracts when it crosses a boundary between different refractive indices. And his hypothesis makes a certain amount of sense: If the initial velocity of the light is $v$, then the velocity parallel to the boundary is: $$v_p = v\sin i$$ and the velocity normal to the boundary is: $$v_n = v\cos i$$ Newton's hypothesis for the bending of the light ray is that the component of velocity normal to the boundary increases in the denser medium (while the parallel component is unchanged). Since the parallel component is unchanged we have: $$v'_p = v_p = v\sin i = v' \sin r$$ And therefore: $$\frac{v'}{v} = \frac{\sin i}{\sin r}$$ which is exactly the reciprocal of what Snell's law predicts! Until some way was found to measure $v'$ the theory wasn't testable, but it was predictive and indeed gave the correct values for $r$ given any initial value $i$ and constant $v'/v$. Not surprising as it is Snell's law - just derived from a mistaken assumption. Until some way was found to measure $v'$ Newton's theory was as good as any other. The problem is that when the measurements were done $v'$ turned out to be less than $v$. Explaining this required treating light as a wave rather than a particle. However the measurement wasn't done until nearly two centuries after Newton's suggestion. A critic could point out that the change in the normal component of velocity, i.e. 
the acceleration of the light towards the boundary, would have to depend on angle for the theory to work, and no mechanism for this was suggested. But then Newton suggested no mechanism for his theory of gravity either, and it took nearly three centuries for Einstein to provide one and show that in this case Newton was right. "From this source, I gather that he did argue that the light particles sped up when entering a more dense medium. However, it just doesn't make sense." It does make sense. They enter the medium at a higher speed, then slow down. If Newton had said they continue to move at a higher speed within the medium, he would have been wrong. But I don't think he said so - "higher speed in a more dense medium" seems to be a mythology serving the opponents of Newton's emission theory of light. • I don't think you understand the statement. But, maybe I am misunderstanding the statement. Anyway, I'm pretty sure he was wrong. – Michael Yaworski Sep 18 '14 at 16:32 • Which statement? Newton most probably used the analogy with an object falling in water - its speed is high as it crosses the water surface but then decreases. – Pentcho Valev Sep 18 '14 at 17:54 • The statement in my original question. And I think the statement implies that once the light is in a new, more dense medium, it's faster than it was before. Why would it increase briefly, just to then decrease? That doesn't make sense to me. – Michael Yaworski Sep 18 '14 at 18:33 • "Why would it increase briefly, just to then decrease?" – Pentcho Valev Sep 18 '14 at 18:49 • "Why would it increase briefly, just to then decrease?" Because, as the light approaches the more dense medium, it is attracted by the surface molecules and accelerates. Then, within the bulk, the effect disappears. – Pentcho Valev Sep 18 '14 at 19:03
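The two predictions are easy to compare numerically. The sketch below fixes the refraction angle via Snell's law and then computes the speed in the dense medium as each theory would have it; the index 1.5 and the 40° incidence angle are made-up sample values:

```python
import math

n_glass = 1.5                            # sample refractive index of glass
i = math.radians(40.0)                   # sample angle of incidence
r = math.asin(math.sin(i) / n_glass)     # Snell's law: sin i / sin r = n

v = 1.0                                  # speed in air, normalized
# Newton's particle hypothesis: v'/v = sin i / sin r, so light is FASTER in glass.
v_newton = v * math.sin(i) / math.sin(r)
# Wave theory (confirmed by Foucault's 1850 measurement): v'/v = sin r / sin i.
v_wave = v * math.sin(r) / math.sin(i)
```

Both give the same refraction angle r for any i, which is why the dispute could only be settled by measuring the speed itself.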
# Show Reference: "Developments and Applications of Nonlinear Principal Component Analysis - a Review" Developments and Applications of Nonlinear Principal Component Analysis – a Review Principal Manifolds for Data Visualization and Dimension Reduction In Principal Manifolds for Data Visualization and Dimension Reduction, Vol. 58 (2008), pp. 1-43, doi:10.1007/978-3-540-73750-6_1 by Uwe Kruger, Junping Zhang, Lei Xie edited by Alexander N. Gorban, Balázs Kégl, Donald C. Wunsch, Andrei Y. Zinovyev @incollection{kruger-et-al-2008, abstract = {Although linear principal component analysis ({PCA}) originates from the work of Sylvester [67] and Pearson [51], the development of nonlinear counterparts has only received attention from the 1980s. Work on nonlinear {PCA}, or {NLPCA}, can be divided into the utilization of autoassociative neural networks, principal curves and manifolds, kernel approaches or the combination of these approaches. This article reviews existing algorithmic work, shows how a given data set can be examined to determine whether a conceptually more demanding {NLPCA} model is required and lists developments of {NLPCA} algorithms. Finally, the paper outlines problem areas and challenges that require future work to mature the {NLPCA} research field.}, author = {Kruger, Uwe and Zhang, Junping and Xie, Lei}, booktitle = {Principal Manifolds for Data Visualization and Dimension Reduction}, citeulike-article-id = {1647435}, doi = {10.1007/978-3-540-73750-6\_1}, editor = {Gorban, Alexander N. and K\'{e}gl, Bal\'{a}zs and Wunsch, Donald C. 
and Zinovyev, Andrei Y.}, journal = {Principal Manifolds for Data Visualization and Dimension Reduction}, keywords = {dimensionality-reduction, learning, pca, unsupervised-learning}, pages = {1--43}, posted-at = {2015-03-17 14:23:09}, priority = {2}, publisher = {Springer Berlin Heidelberg}, series = {Lecture Notes in Computational Science and Engineering}, title = {Developments and Applications of Nonlinear Principal Component Analysis – a Review}, url = {http://dx.doi.org/10.1007/978-3-540-73750-6\_1}, volume = {58}, year = {2008} }
# A man gave 2400 dollars to his three brothers named X, Y and Z. Z had twice as much as Y, and Y had 1/3 that of Z. Find the amount each had. Note by Lionel Warfought Williams 3 years, 1 month ago Sort by: Is that a mistake or is it intentional.... · 3 years, 1 month ago 0 · 3 years, 1 month ago How can Z have twice as much as Y and Y have 1/3 of Z? · 3 years, 1 month ago Hahaha, only possible when X takes away all the 2400. · 3 years, 1 month ago
Int J Performability Eng ›› 2022, Vol. 18 ›› Issue (10): 730-740. ### Health Monitoring of Turning Tool through Vibration Signals Processed using Convolutional Neural Network Architecture Revati M. Wahula,*, Archana P. Kalea, and Abhishek D. Patangeb 1. aMES College of Engineering, Pune, 411001, India; bCollege of Engineering, Pune, 411005, India • Submitted on ; Revised on ; Accepted on • Contact: *E-mail address: rmwahul@mescoepune.org Abstract: In high-precision machining, Tool Condition Monitoring (TCM) is essential for retaining the resolution and accuracy of the machined component. TCM remains challenging due to diversified operating conditions. In an era of big data analytics, deep-learning-based networks are gaining significant attention in the manufacturing industry for dealing with data gathered under these diversified conditions in heavy-noise environments. To address these problems, a Convolutional Neural Network (CNN) based deep learning approach is advocated herein for condition monitoring of a turning tool. After acquiring real-time vibration data corresponding to tool faults, the CNN architecture was designed to assign decimal probabilities to every category in a multi-class classification of tool faults, followed by hyperparameter tuning. A rigorous analysis was undertaken across different datasets gathered from diversified machining conditions. The test and validation results demonstrated that the proposed network outperforms conventional machine learning classifiers.
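As an illustration of the basic building block such a network stacks — a generic sketch, not the architecture from the paper — here is a pure-Python forward pass of one 1D convolution layer with ReLU and global max pooling over a toy vibration snippet; the signal and kernel values are invented:

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (strictly, cross-correlation, as in CNNs)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Element-wise rectified linear activation."""
    return [max(0.0, x) for x in xs]

def global_max_pool(xs):
    """Collapse the feature map to its strongest response."""
    return max(xs)

# Toy vibration snippet with a step (a crude stand-in for a fault signature)
# and a made-up edge-detecting kernel:
signal = [0.0, 0.1, 0.0, 1.0, 1.1, 1.0, 0.1, 0.0]
kernel = [-1.0, 0.0, 1.0]
feature = global_max_pool(relu(conv1d(signal, kernel)))
```

A real network would learn many such kernels, stack several layers, and feed the pooled features into a softmax over the tool-fault classes.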
266 views I have a doubt about the language. Is it asking about the sum of elements if we form the GLB set for the given lattice? ### 1 comment Is it D? I think they are asking for the number of elements in the GLB. It should be 0 @Arkaprava I mean to say summing the elements in the set This question is poorly framed The GLB set would comprise {{$\varnothing$},{$x$},{$y$}}. My question is that the elements of the GLB are sets, and we can't add sets. That is why I thought the answer should be 0. What is the explanation provided in the answer? Hasse diagram: Greatest Lower Bound of the lattice is { } or ϕ Sum of elements in GLB of lattice = 0 In the textbook explanation nothing is clear; according to me it is x+y, but it is also true that we cannot add sets. I think the question is wrongly framed.
# How to export user defined macros to LaTeX? I have defined a macro that helps me type formulas it works fine when I type \seq and a But when I export the formula typed with the macro I get $\seq{a}$, but not $a_1,\cdots,a_n$ as I want. I have tried expand user defined macros in the preferences but nothing has changed. Is it possible to achieve what I am expecting? Where do you define the macro? In the document or in the preamble? In my document. I am just experimenting with macros. Try putting it in the preamble (Document -> Part -> Show preamble). With expand user defined macros enabled, I get $v_1, \ldots, v_n$ in the document part of the LaTeX file. With expand user defined macros disabled, I get \newcommand{\seq}[1]{#1_1, \ldots, #1_n} in the LaTeX preamble and $\seq{v}$ in the document. 1 Like There seem to be some issues when we redefine some macros, even after de-selecting Expand TeXmacs macros with no LaTeX equivalents. For example, <TeXmacs|2.1> <style|<tuple|generic|french>> <\body> <\hide-preamble> <assign|dfn|<macro|body|<arg|body>''>> </hide-preamble> A <dfn|group> is a set <math|G> along with a multiplication <math|<around*|(|\<cdummy\>|)>*<around*|(|\<cdummy\>|)>\<of\>G\<times\>G\<rightarrow\>G> such that </body> <\initial> <\collection> <associate|page-medium|paper> </collection> </initial> is exported to \documentclass{article} \usepackage[french]{babel} %%%%%%%%%% Start TeXmacs macros \newcommand{\cdummy}{\cdot} \newcommand{\of}{:} \newcommand{\tmdfn}[1]{\textbf{#1}} %%%%%%%%%% End TeXmacs macros \begin{document} A {\tmdfn{group}} is a set $G$ along with a multiplication $(\cdummy) (\cdummy) \of G \times G \rightarrow G$ such that \end{document} 1 Like I tried to put the following into the preamble But when I go back to the document and type \seq I get: Note the weird bar in the second line. Would you please give a more detailed explanation of how to do this? Thank you in advance!
The problem is the fact that you typed the math tag across multiple lines; retype it without going to a new line (this is meaningful in TeXmacs), e.g. exactly how you did it in your first post. Then it should be ok. NB: It is usually time well spent if you go through the online help, especially “Help->Manual” and, for macros, “Help->Manual->Write your own style files”. Best of all, Joris’ book “The Jolly Writer” explains in detail many useful topics. 1 Like Indeed, it seems that TeXmacs does not care about the preamble when exporting this to LaTeX. What if you put the redefinition in a style file? Is it possible to copy to LaTeX with user defined macros enabled? I just want to copy a line from the .tm file; generating a whole .tex file seems kind of heavy to me. Well, I’ve never tried, but you can select the content and then do “Copy to -> LaTeX” and check what TeXmacs does. My guess is that it will not expand the macro. If you think about it, in general what you would like to achieve is not easy. If I had to do it, the easiest way is to export the whole document or to write myself the LaTeX macros I would like to have. TeXmacs is not optimized for interoperability with LaTeX. It will expand the macro if its definition is in the preamble and the option Expand user-defined macros is enabled in User preferences. However, Copy to LaTeX is very unreliable. For example, when there are two lines, it will produce a strange unicode character. By the way, there is a use for this Copy to LaTeX: writing a question or an answer on Math StackExchange, say. I don’t know whether I did something incorrectly. If I extract the style file from the Focus menu and load this style file, the macro \dfn is even undefined when input (shown as red). On the other hand, if I extract the style package, the macro works but the export to LaTeX still ignores it.
#### Chapter 10 Basic Concepts of Descriptive Vector Geometry Section 10.1 From Arrows to Vectors # 10.1.3 Vectors in the Plane and in the Space Points on the plane or in space that are defined as ordered pairs or triples with respect to a given coordinate system can be connected by line segments. Assigning a direction to these line segments (one of the end points of the segment is specified as the initial point and the other one is specified as the end point) results in arrows that point from one point to the other (see left and right figure below for the two-dimensional and the three-dimensional cases). According to these figures, an arrow provides the following information: it specifies how to get from the initial point (at the foot of the arrow) to the end point (at the tip of the arrow). For example, the arrow that connects the point $Q$ to the point $R$ in the left figure above specifies that starting from the initial point $Q$ one has to move $3$ units to the left and $2$ units upwards to get to the terminal point $R$. In more mathematical terms: starting from $Q$, shift by $-3$ in the $x$-direction and by $2$ in the $y$-direction. It is even simpler for the arrow that connects the origin to the point $P$ in the left figure above: to get from $O$ to $P$ move $2$ units in the $x$-direction and $1$ unit in the $y$-direction. Of course, these values are exactly the coordinates of the point $P$. ##### Exercise 10.1.3 For the arrows in the right-hand side figure above (three-dimensional case), specify the (signed) movements in the three coordinate directions that are required to get from the initial point to the terminal point of the corresponding arrow. Proceed in the same way as explained above for two dimensions. • For the arrow from $O$ to $P$, we have: 1. in the $x$-direction: , 2. in the $y$-direction: , 3. in the $z$-direction: . • For the arrow from $Q$ to $R$, we have: 1. in the $x$-direction: , 2. in the $y$-direction: , 3. in the $z$-direction: . 
For the arrows connecting $Q$ to $R$, movements in the corresponding coordinate directions are determined by the coordinates of the points at the foot and at the tip of the arrows. Thus, in the two-dimensional case, we have: $\begin{array}{c}\hfill R=\left(1;3\right)\\ \hfill Q=\left(4;1\right)\end{array}\right\} ⇒ \left\{\begin{array}{c}\text{in} x\text{-direction:} -3=1-4 \hfill \\ \text{in} y\text{-direction:} 2=3-1 ,\hfill \end{array}$ and in the three-dimensional case: $\begin{array}{c}\hfill R=\left(1;3;-1\right)\\ \hfill Q=\left(3;1;0\right)\end{array}\right\} ⇒ \left\{\begin{array}{c}\text{in} x\text{-direction:} -2=1-3\hfill \\ \text{in} y\text{-direction:} 2=3-1\hfill \\ \text{in} z\text{-direction:} -1=-1-0 .\hfill \end{array}$ The movements in the different coordinate directions are the differences of the coordinates of the terminal point and the initial point of the arrow. This means that all arrows connecting pairs of points with equal coordinate differences only differ from each other by a parallel translation, i.e. they retain their direction. The pairs of points $P$ and $Q$, $A$ and $B$, $O$ and $R$ indicated in the figure below are each connected by arrows that can be made to coincide by parallel translations. Here, an infinite number of pairs of points can be found that are connected by such an arrow. This idea works analogously in the three-dimensional case. Each arrow in the figure above provides the same information, namely a shift by $-3$ in the $x$-direction and $1$ in the $y$-direction. So what could be more natural than regarding each of these arrows only as a representation (a so-called representative) of a more basic object? 
This basic mathematical object is called vector, and in this case it has the two components: $-3$ ($x$-component) and $1$ ($y$-component) that are written as a so-called $2$-tuple one above the other: $\text{vector represented in the figure above} =\left(\begin{array}{c}\hfill -3\hfill \\ \hfill 1\hfill \end{array}\right) .$ The Info Box below outlines these conclusions and a few more notations and dictions concerning vectors. ##### Info 10.1.4 A two- or three-dimensional vector is a $2$- or $3$-tuple with $2$ or $3$ components called the $x$-, $y$- (and $z$-)components. In general, vectors are denoted by lowercase italic letters accented by a right arrow or by upright boldface lowercase letters. The components of a vector are often denoted by the same lowercase italic letter as the vector, with the corresponding coordinate direction as its index: $\stackrel{\to }{a}=\left(\begin{array}{c}\hfill {a}_{x}\hfill \\ \hfill {a}_{y}\hfill \end{array}\right), \stackrel{\to }{b}=\left(\begin{array}{c}\hfill {b}_{x}\hfill \\ \hfill {b}_{y}\hfill \\ \hfill {b}_{z}\hfill \end{array}\right) .$ An arrow in the plane or in space is called a representative of the vector if the arrow connects two points in the plane or in space such that the differences between the coordinates at the initial and end points of the arrow give the components of the vector. Often, a point $P$ or two points $Q$ and $R$ in the plane or in space are given and one wants to specify the vector that has the arrow from the origin $O$ to the given point $P$ or the arrow connecting $Q$ to $R$ as its representatives. The Info Box below outlines the notation and diction as well as the required vector operation: ##### Info 10.1.5 • Two-dimensional case: Let $P=\left({p}_{x};{p}_{y}\right)$, $Q=\left({q}_{x};{q}_{y}\right)$, and $R=\left({r}_{x};{r}_{y}\right)$ be points in the plane. 
Then the vector $\stackrel{\to }{QR}:=\left(\begin{array}{c}\hfill {r}_{x}-{q}_{x}\hfill \\ \hfill {r}_{y}-{q}_{y}\hfill \end{array}\right)$ is called the connecting vector from the initial point $Q$ to the terminal point $R$, and $\stackrel{\to }{P}:=\stackrel{\to }{OP}=\left(\begin{array}{c}\hfill {p}_{x}\hfill \\ \hfill {p}_{y}\hfill \end{array}\right)$ is called the position vector of the point $P$. These are exactly those vectors whose representatives include the connecting arrows of the points (see figure below). • Three-dimensional case: Let $P=\left({p}_{x};{p}_{y};{p}_{z}\right)$, $Q=\left({q}_{x};{q}_{y};{q}_{z}\right)$, and $R=\left({r}_{x};{r}_{y};{r}_{z}\right)$ be points in space. Then the vector $\stackrel{\to }{QR}:=\left(\begin{array}{c}\hfill {r}_{x}-{q}_{x}\hfill \\ \hfill {r}_{y}-{q}_{y}\hfill \\ \hfill {r}_{z}-{q}_{z}\hfill \end{array}\right)$ is called the connecting vector from the initial point $Q$ to the point $R$, and $\stackrel{\to }{P}:=\stackrel{\to }{OP}=\left(\begin{array}{c}\hfill {p}_{x}\hfill \\ \hfill {p}_{y}\hfill \\ \hfill {p}_{z}\hfill \end{array}\right)$ is called the position vector of the point $P$. These are exactly those vectors whose representatives include the connecting arrows of the points (see figure below). ##### Example 10.1.6 • Two-dimensional case: The point $P=\left(-1;-2\right)$ has the position vector $\stackrel{\to }{P}=\left(\begin{array}{c}\hfill -1\hfill \\ \hfill -2\hfill \end{array}\right) .$ The vector $\stackrel{\to }{v}=\left(\begin{array}{c}\hfill 2\hfill \\ \hfill 0\hfill \end{array}\right)$ is the connecting vector from the point $A=\left(1;1\right)$ to the point $B=\left(3;1\right)$. 
Thus, we have $\stackrel{\to }{v}=\stackrel{\to }{AB} .$ However, $\stackrel{\to }{v}\ne \stackrel{\to }{BA}$ since $\stackrel{\to }{BA}=\left(\begin{array}{c}\hfill 1-3\hfill \\ \hfill 1-1\hfill \end{array}\right)=\left(\begin{array}{c}\hfill -2\hfill \\ \hfill 0\hfill \end{array}\right)\ne \left(\begin{array}{c}\hfill 2\hfill \\ \hfill 0\hfill \end{array}\right) .$ • Three-dimensional case: Consider the two points $Q=\left(1;1;1\right)$ and $R=\left(-2;0;2\right)$. The connecting vector from $Q$ to $R$ is: $\stackrel{\to }{QR}=\left(\begin{array}{c}\hfill -2-1\hfill \\ \hfill 0-1\hfill \\ \hfill 2-1\hfill \end{array}\right)=\left(\begin{array}{c}\hfill -3\hfill \\ \hfill -1\hfill \\ \hfill 1\hfill \end{array}\right) .$ However, the connecting vector from $R$ to $Q$ is $\stackrel{\to }{RQ}=\left(\begin{array}{c}\hfill 1-\left(-2\right)\hfill \\ \hfill 1-0\hfill \\ \hfill 1-2\hfill \end{array}\right)=\left(\begin{array}{c}\hfill 3\hfill \\ \hfill 1\hfill \\ \hfill -1\hfill \end{array}\right) .$ Obviously, the vector $\left(\begin{array}{c}\hfill 3\hfill \\ \hfill 1\hfill \\ \hfill -1\hfill \end{array}\right)$ is also the position vector of the point $\left(3;1;-1\right)$. The example above reveals an interesting fact: reversing the orientation of a vector (and thus the orientation of all its representative arrows) results in a vector in which all components have the opposite sign. This vector is also called the opposite vector. This suggests that vector calculations can be carried out component-wise. This will be discussed in detail in Subsection 10.1.4. Obviously, there also exists a vector with all its components equal to $0$ in the two- and three-dimensional cases: $\left(\begin{array}{c}\hfill 0\hfill \\ \hfill 0\hfill \end{array}\right) \text{or} \left(\begin{array}{c}\hfill 0\hfill \\ \hfill 0\hfill \\ \hfill 0\hfill \end{array}\right) .$ This vector is called the (two-dimensional or three-dimensional) zero vector. 
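As a quick illustration (my own, not part of the text's formalism), the component-wise rule — coordinates of the terminal point minus those of the initial point — is easy to mirror in code:

```python
# Connecting vector from point q to point r, computed component-wise
# (works in any dimension); numbers taken from the example above.
def connect(q, r):
    """Return the connecting vector from q to r as a tuple."""
    return tuple(rc - qc for qc, rc in zip(q, r))

Q, R = (1, 1, 1), (-2, 0, 2)
print(connect(Q, R))   # -> (-3, -1, 1)
print(connect(R, Q))   # -> (3, 1, -1), the opposite vector
```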
One can imagine the zero vector as having "arrows of zero length" as its representatives, i.e. arrows that connect a point with itself. In other words, the zero vector is the position vector of the origin. ##### Exercise 10.1.7 Let the points $A=\left(-1;\frac{3}{2}\right) \text{and} B=\left(\pi ;-2\right)$ be given in the plane, and the points $P=\left(0.5;1;-1\right) \text{and} Q=\left(\frac{1}{2};-1;1\right)$ in space, as well as the (two- and three-dimensional) vectors $\stackrel{\to }{a}=\left(\begin{array}{c}\hfill \pi \hfill \\ \hfill -1\hfill \end{array}\right) \text{and} \stackrel{\to }{v}=\left(\begin{array}{c}\hfill 0\hfill \\ \hfill 3\hfill \\ \hfill -3\hfill \end{array}\right) .$ • Find the following vectors: 1. $\stackrel{\to }{AB}=$ 2. $\stackrel{\to }{BA}=$ 3. $\stackrel{\to }{PQ}=$ 4. $\stackrel{\to }{QP}=$ Vectors can be entered in the form (a;b) or (a;b;c), for example (1;0) for the vector $\left(\begin{array}{c}\hfill 1\hfill \\ \hfill 0\hfill \end{array}\right)$. The number $\pi$ can be entered as pi. • Find the points $C$ in the plane and $R$ in space such that the following statements are true: 1. $\stackrel{\to }{a}=\stackrel{\to }{CB} ⇔ C=$ 2. $\stackrel{\to }{v}=\stackrel{\to }{QR} ⇔ R=$ Points can be entered in the form (a;b) or (a;b;c) as well. • Draw at least three representatives of the vector $\stackrel{\to }{a}$. The previous introduction of the concept of a vector reveals the close relationship between vectors and points. Indeed, there is a one-to-one correspondence between points and position vectors: for every point there exists exactly one vector that is the position vector of this point. Conversely, for every vector there exists exactly one point that has this vector as its position vector. This is true in both the two-dimensional and the three-dimensional cases. This justifies the convention we will follow below: describing points by their position vectors. 
A point $P=\left(2;1\right)$, for example, is often described by its corresponding position vector $\stackrel{\to }{P}=\left(\begin{array}{c}\hfill 2\hfill \\ \hfill 1\hfill \end{array}\right)$ instead of its coordinates $\left(2;1\right)$. If geometric objects such as lines and planes are investigated (see e.g. Subsection 10.2.2), this convention also offers certain advantages in the description of these objects (in particular in the three-dimensional case). Moreover, this one-to-one correspondence between points and position vectors also justifies using the abbreviations $ℝ{}^{2}$ and $ℝ{}^{3}$ not only for the set of all points in the plane or in space but also for the set of all two-dimensional or three-dimensional vectors. These abbreviations will also be used throughout the following sections.
# Automatically growing label in Python Tkinter (just for fun)

```python
#!/usr/bin/env python3
from tkinter import *

root = Tk()

class GrowLabel(Label):
    def __init__(self, master):
        Label.__init__(self, master)
        self.counter = 12
        self.config(text=str(self.counter), fg="blue",
                    font=("verdana", self.counter, "bold"))
        self.pack()
        button = Button(self.master, text="Stop", command=self.master.destroy)
        button.pack()

    def count(self):
        self.counter += 1
        self.config(text=str(self.counter), fg="blue",
                    font=("verdana", self.counter, "bold"))
        self.after(1000, self.count)

label = GrowLabel(root)
label.count()
root.mainloop()
```

## Naming

Nouns are better than verbs for class names. GrowLabel sounds like an action; GrowingLabel would sound like a label that grows. Related to this, having a count method on a GrowingLabel object doesn't sound natural. The fact that you implement the growing effect using a counter should be an internal detail, and not revealed to users of the class. The important point is that the label can grow, not how it does it. So, renaming the method to grow() would be more natural.

## Calling super-class methods

Since you are in Python 3, it would be better to call the super constructor of GrowLabel like this:

```python
super().__init__(master)
```

If that doesn't work somehow (I don't have tkinter to try), then try this instead:

```python
super(GrowLabel, self).__init__(master)
```

See more details in this related discussion.

## Avoid wildcard imports

Don't use wildcard imports like this:

```python
from tkinter import *
```

Quoting from PEP 8:

> Wildcard imports (from <module> import *) should be avoided, as they make it unclear which names are present in the namespace, confusing both readers and many automated tools.

I guess you do this because you're using many classes from this module. 
A better way is to write:

```python
import tkinter as tk
```

and then prefix the classes you need with tk., like this:

```python
root = tk.Tk()
```

• What's the difference between super().__init__, super(GrowingLabel).__init__ and super(GrowingLabel, self).__init__? – qed Nov 13 '14 at 14:16
• super().__init__ is the same (more modern) as super(GrowingLabel, self).__init__. I didn't know this until you asked; I updated my post accordingly. I don't know what super(GrowingLabel).__init__ does. – Stop ongoing harm to Monica Nov 13 '14 at 19:21
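A minimal sketch of my own (plain classes, so it runs without tkinter) showing that the zero-argument form and the explicit two-argument form initialise the base class identically:

```python
# Both super() spellings resolve to the same base-class __init__.
class Base:
    def __init__(self):
        self.tag = "base-initialised"

class ChildA(Base):
    def __init__(self):
        super().__init__()              # Python 3 zero-argument form

class ChildB(Base):
    def __init__(self):
        super(ChildB, self).__init__()  # explicit form, also valid in Python 2

print(ChildA().tag == ChildB().tag)  # -> True
```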
# Chemical equation Chemical equation A chemical equation is the symbolic representation of a chemical reaction where the reactant entities are given on the left hand side and the product entities on the right hand side.[1] The coefficients next to the symbols and formulae of entities are the absolute values of the stoichiometric numbers. The first chemical equation was diagrammed by Jean Beguin in 1615.[2] ## Form A chemical equation consists of the chemical formulas of the reactants (the starting substances) and the chemical formula of the products (substances formed in the chemical reaction). The two are separated by an arrow symbol ($\rightarrow$, usually read as "yields") and each individual substance's chemical formula is separated from others by a plus sign. As an example, the formula for the burning of methane can be denoted: CH4 + 2 O2 $\rightarrow$ CO2 + 2 H2O This equation would be read as "CH four plus O two yields CO two and H two O." But for equations involving complex chemicals, rather than reading the letter and its subscript, the chemical formulas are read using IUPAC nomenclature. Using IUPAC nomenclature, this equation would be read as "methane plus oxygen yields carbon dioxide and water." This equation indicates that oxygen and CH4 react to form H2O and CO2. It also indicates that two oxygen molecules are required for every methane molecule and the reaction will form two water molecules and one carbon dioxide molecule for every methane and two oxygen molecules that react. The stoichiometric coefficients (the numbers in front of the chemical formulas) result from the law of conservation of mass and the law of conservation of charge (see "Balancing Chemical Equation" section below for more information). ## Common symbols Symbols are used to differentiate between different types of reactions. To denote the type of reaction[1]: • " = " symbol is used to denote a stoichiometric relation. 
• "$\rightarrow$" symbol is used to denote a net forward reaction. • "$\rightleftarrows$" symbol is used to denote a reaction in both directions. • "$\rightleftharpoons$" symbol is used to denote an equilibrium. Physical state of chemicals is also very commonly stated in parentheses after the chemical symbol, especially for ionic reactions. When stating physical state, (s) denotes a solid, (l) denotes a liquid, (g) denotes a gas and (aq) denotes an aqueous solution. If the reaction requires energy, it is indicated above the arrow. A capital Greek letter delta (Δ) is put on the reaction arrow to show that energy in the form of heat is added to the reaction. hν is used if the energy is added in the form of light. ## Balancing chemical equations The law of conservation of mass dictates that the quantity of each element does not change in a chemical reaction. Thus, each side of the chemical equation must represent the same quantity of any particular element. Similarly, the charge is conserved in a chemical reaction. Therefore, the same charge must be present on both sides of the balanced equation. One balances a chemical equation by changing the scalar number for each chemical formula. Simple chemical equations can be balanced by inspection, that is, by trial and error. Another technique involves solving a system of linear equations. Ordinarily, balanced equations are written with smallest whole-number coefficients. If there is no coefficient before a chemical formula, the coefficient 1 is understood. The method of inspection can be outlined as putting a coefficient of 1 in front of the most complex chemical formula and putting the other coefficients before everything else such that both sides of the arrows have the same number of each atom. If any fractional coefficients exist, multiply every coefficient by the smallest number required to make them whole, typically the denominator of the fractional coefficient for a reaction with a single fractional coefficient. 
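The system-of-linear-equations technique mentioned above can be sketched in Python with exact fractions. This is an illustrative implementation of my own (not a standard routine), and it assumes the reaction balances uniquely up to scale, as the methane example does:

```python
from fractions import Fraction
from math import lcm  # multi-argument lcm needs Python 3.9+

def balance(matrix):
    """Balance a reaction from a matrix of element counts.

    Each row is one element, each column one species, with product
    columns negated. The first coefficient is fixed at 1; the rest are
    found by Gaussian elimination over exact fractions, then everything
    is scaled to the smallest whole numbers. Assumes a unique balancing
    up to scale.
    """
    rows = [[Fraction(x) for x in row] for row in matrix]
    n = len(rows[0])                       # number of species
    # Move the first column to the right-hand side (its coefficient is 1).
    aug = [row[1:] + [-row[0]] for row in rows]
    piv = 0
    for col in range(n - 1):
        pivot = next((r for r in range(piv, len(aug)) if aug[r][col] != 0), None)
        if pivot is None:
            continue
        aug[piv], aug[pivot] = aug[pivot], aug[piv]
        aug[piv] = [x / aug[piv][col] for x in aug[piv]]
        for r in range(len(aug)):
            if r != piv and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[piv])]
        piv += 1
    coeffs = [Fraction(1)] + [aug[i][-1] for i in range(n - 1)]
    scale = lcm(*[c.denominator for c in coeffs])
    return [int(c * scale) for c in coeffs]

# CH4 + O2 -> CO2 + H2O; rows are C, H, O; product columns are negated.
print(balance([[1, 0, -1,  0],    # carbon
               [4, 0,  0, -2],    # hydrogen
               [0, 2, -2, -1]]))  # oxygen  -> [1, 2, 1, 2]
```

The result [1, 2, 1, 2] reproduces the balanced equation CH4 + 2 O2 → CO2 + 2 H2O derived by inspection below.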
As an example, the burning of methane would be balanced by putting a coefficient of 1 before the CH4: 1 CH4 + O2 $\rightarrow$ CO2 + H2O Since there is one carbon on each side of the arrow, the first atom (carbon) is balanced. Looking at the next atom (hydrogen), the right hand side has two atoms, while the left hand side has four. To balance the hydrogens, 2 goes in front of the H2O, which yields: 1 CH4 + O2 $\rightarrow$ CO2 + 2 H2O Inspection of the last atom to be balanced (oxygen) shows that the right hand side has four atoms, while the left hand side has two. It can be balanced by putting a 2 before O2, giving the balanced equation: CH4 + 2 O2 $\rightarrow$ CO2 + 2 H2O This equation does not have any coefficients in front of CH4 and CO2, since a coefficient of 1 is dropped. ## Ionic equations An ionic equation is a chemical equation in which electrolytes are written as dissociated ions. Ionic equations are used for single and double displacement reactions that occur in aqueous solutions. For example in the following precipitation reaction: CaCl2(aq) + 2AgNO3(aq) $\rightarrow$ Ca(NO3)2(aq) + 2AgCl(s) the full ionic equation would be: Ca2+ + 2Cl + 2Ag+ + 2NO3 $\rightarrow$ Ca2+ + 2NO3 + 2AgCl(s) and the net ionic equation would be: 2Cl(aq) + 2Ag+(aq) $\rightarrow$ 2AgCl(s) or, in reduced balanced form, Ag+ + Cl $\rightarrow$ AgCl(s) In this aqueous reaction the Ca2+ and the NO3 ions remain in solution and are not part of the reaction. They are termed spectator ions and do not participate directly in the reaction, as they exist with the same oxidation state on both the reactant and product side of the chemical equation. They are only needed for charge balance of the original reagents. In a neutralization or acid/base reaction, the net ionic equation will usually be: H+ + OH $\rightarrow$ H2O There are a few acid/base reactions that produce a precipitate in addition to the water molecule shown above. 
An example would be the reaction of barium hydroxide with phosphoric acid because the insoluble salt barium phosphate is produced in addition to water. Double displacement reactions that feature a carbonate reacting with an acid have the net ionic equation: 2 H+ + CO32− $\rightarrow$ H2O + CO2 If every ion is a "spectator ion", then there was no reaction, and the net ionic equation is null. ## References 1. ^ a b IUPAC. Compendium of Chemical Terminology, 2nd ed. ISBN 0-9678550-9-8. 2. ^ Crosland, M.P. (1959). "The use of diagrams as chemical 'equations' in the lectures of William Cullen and Joseph Black". Annals of Science 15 (2): 75–90. 
# A bidirectional formulation for Walk on Spheres Yang Qi, Dario Seyb, Benedikt Bitterli, Wojciech Jarosz Computer Graphics Forum (Proceedings of EGSR) 2022 # Abstract Numerically solving partial differential equations (PDEs) is central to many applications in computer graphics and scientific modeling. Conventional methods for solving PDEs often need to discretize the space first, making them less efficient for complex geometry. Unlike conventional methods, the walk on spheres (WoS) algorithm recently introduced to graphics is a grid-free Monte Carlo method that can provide numerical solutions of Poisson equations without discretizing space. We draw analogies between WoS and classical rendering algorithms, and find that the WoS algorithm is conceptually equivalent to forward path tracing. Inspired by similar approaches in light transport, we propose a novel WoS reformulation that operates in the reverse direction, starting at source points and estimating the Green’s function at “sensor” points. Implementations of this algorithm show improvement over classical WoS in solving Poisson equation with sparse sources. Our approach opens exciting avenues for future algorithms for PDE estimation which, analogous to light transport, connect WoS walks starting from sensors and sources and combine different strategies for robust solution algorithms in all cases. # Cite @article{ qi22bidirectional, author = "Qi, Yang and Seyb, Dario and Bitterli, Benedikt and Jarosz, Wojciech", title = "A bidirectional formulation for {Walk} on {Spheres}", journal = "Computer Graphics Forum (Proceedings of EGSR)", year = "2022", month = jul, volume = "41", number = "4", issn = "1467-8659", doi = "10.1111/cgf.14586", keywords = "Brownian motion, partial differential equations, PDEs, Monte Carlo", abstract = "Numerically solving partial differential equations (PDEs) is central to many applications in computer graphics and scientific modeling. 
Conventional methods for solving PDEs often need to discretize the space first, making them less efficient for complex geometry. Unlike conventional methods, the walk on spheres (WoS) algorithm recently introduced to graphics is a grid-free Monte Carlo method that can provide numerical solutions of Poisson equations without discretizing space. We draw analogies between WoS and classical rendering algorithms, and find that the WoS algorithm is conceptually equivalent to forward path tracing. Inspired by similar approaches in light transport, we propose a novel WoS reformulation that operates in the reverse direction, starting at source points and estimating the Green's function at ``sensor'' points. Implementations of this algorithm show improvement over classical WoS in solving Poisson equation with sparse sources. Our approach opens exciting avenues for future algorithms for PDE estimation which, analogous to light transport, connect WoS walks starting from sensors and sources and combine different strategies for robust solution algorithms in all cases." }
# How to define the scope of a subscript in an iteration?

Newbie to MMA, and I completely have no clue about it. I need to do a calculation with some iteration, shown below. $n=3,4,5,..., \begin{cases} L_n=c_n\\ L_{n-1}=c_{n-1}\\ L_{n-p+1}=\displaystyle\sum_{r=1}^{p-2} (p-r-1)\;c_{p-r-1}\;L_{n-r+1}+c_{n-p+1}\;(p=3,4,5,...,n)\\ \end{cases}$ For convenience and necessity, n equals 8. The code is edited as

```mathematica
Subscript[L, 8] = Subscript[c, 8]
Subscript[L, 7] = Subscript[c, 7]
Subscript[L, 9 - p] = Sum[(p - r - 1)*Subscript[c, p - r - 1]*Subscript[L, 9 - r] + Subscript[c, 9 - p], {r, 1, p - 2}] (* p = 3, 4, 5, 6, 7, 8 *)
```

Two problems are encountered here. Problem 1: $c_1$ to $c_8$ are considered constants with no definite values. Which code is needed to make that happen? Problem 2: As can be seen in the last part of my code, p ranges from $3$ to $8$. And again, which code is needed? After searching so much info on the Internet, no solution was found.... • Being a newbee, avoid Subscript. It is made for formatting, not calculations. For your indexed symbols, simply use c[8], L[7] etc. You can use a Do loop over p to get all the definitions done. – Marius Ladegård Meyer Jul 13 '17 at 13:56 • @MariusLadegårdMeyer thank you for your advice about Subscript, but I do need to use it for further programming, for the iteration presented here is just a small step in my work. And I will try a Do loop. Thank you again. – Robin_Lyn Jul 14 '17 at 1:37 I agree in part with Marius Ladegård Meyer concerning Subscript: never use it in the left-hand side of a definition because you cannot easily remove it afterwards. But I like using it on the right-hand side. 
So,

```mathematica
L[8] := Subscript[c, 8]
L[7] := Subscript[c, 7]
L[k_] := Sum[((9 - k) - r - 1)*Subscript[c, (9 - k) - r - 1]*L[9 - r] + Subscript[c, k], {r, 1, (9 - k) - 2}]
```

Note the use of := instead of =: delayed assignment. Also note the substitution 9 - p -> k in defining the left-hand side of the sum: read up on 'pattern recognition' to learn why you need a single argument there (like ' k ' and not a subtraction like ' 9 - p '). And last but not least, for L[k_] := read up on 'Patterns' and argument naming. You'll get the hang of it soon enough. ;-) • Many thankssssss for your help. But problem 1 is still not solved: can Subscript[c, i] (i = 1, 2, 3, ..., 8) be considered as constants, and how? And after that, is it possible to get mathematical expressions for the L[k_] you mentioned above? @Wouter – Robin_Lyn Jul 14 '17 at 1:50 • @Robin_Lyn: have you tried to use the definitions in my answer to express the values of L[k] using Table[L[k], {k, 8}]? It produces {6 Subscript[c, 1] + 5 Subscript[c, 5] Subscript[c, 7] + 6 Subscript[c, 6] Subscript[c, 8] + 4 Subscript[c, 4] (Subscript[c, 6] + Subscript[c, 1] Subscript[c, 8]) + ... – Wouter Jul 14 '17 at 10:36 • yes, I tried and it works~~~ And the calculations that follow are solved all the way down, for they are similar iterations. A huge stone in my heart is taken away. Greaaaat jooooooy! – Robin_Lyn Jul 14 '17 at 11:05
# How much work must you do to push a 13 kg block of steel across a steel table (muk = 0.6) at a... ## Question: How much work must you do to push a 13 kg block of steel across a steel table ({eq}\mu_k {/eq} = 0.6) at a steady speed of 1.0 m/s for 9.7 s? ## Work-Energy Theorem The work-energy theorem in classical physics relates the work done on/by a system to the change in kinetic energy of a system. The work-energy theorem can be stated as: {eq}\displaystyle W = F_{Net} \: \Delta x = \Delta K {/eq} Here W is the work done by/on the system, {eq}\displaystyle F_{Net} {/eq} is the net force on the system which acts over distance {eq}\displaystyle \Delta x {/eq}, and {eq}\displaystyle \Delta K {/eq} is the change in kinetic energy of the system. Using the work-energy theorem we can say that the total work done is: {eq}\displaystyle W = F_{Net} \: \Delta x = \Delta K {/eq} The kinetic energy of the block of steel is {eq}\displaystyle \frac{1}{2} (13 \: \text{kg})(1.0 \: \text{m/s})^2 {/eq} at the start and end of the block's motion since it is moving at a constant speed, so: {eq}\displaystyle \Delta K = 0 {/eq} Thus: {eq}\displaystyle W = F_{Net} \: \Delta x = 0 {/eq} So the work we do must be equal in magnitude and opposite in sign to the work done by friction, so that the block moves at a constant speed. 
The force of kinetic friction on the block is: {eq}\displaystyle F_f = -\mu_k F_N = -\mu_k mg = -(0.6)(13 \: \text{kg})(9.8 \: \text{m}/\text{s}^2) = -76.44 \: \text{N} {/eq} The force of friction is negative since it opposes the motion, and it acts over the distance the block covers in 9.7 seconds, which is: {eq}\displaystyle \Delta x = v \Delta t = (1.0 \: \text{m/s})(9.7 \: \text{s}) = 9.7 \: \text{m} {/eq} Thus the work done by the force of friction is: {eq}\displaystyle W_f = F_f \Delta x = -741.5 \: \text{J} {/eq} Thus the work we need to do to oppose this energy loss is: {eq}\displaystyle W= -W_f = 741.5 \: \text{J} {/eq} We need to do about 741.5 joules of work to push a 13 kg steel block across a steel table at a constant 1.0 m/s for 9.7 seconds.
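The same numbers fall out of a few lines of Python (just a sketch of the arithmetic above, W = μ_k m g v t):

```python
# Work against kinetic friction at constant speed, with the problem's numbers.
mu_k, m, g = 0.6, 13.0, 9.8   # friction coefficient, mass (kg), gravity (m/s^2)
v, t = 1.0, 9.7               # speed (m/s), duration (s)

friction_force = mu_k * m * g        # magnitude of the friction force, 76.44 N
distance = v * t                     # 9.7 m covered at constant speed
work = friction_force * distance
print(f"{work:.1f} J")               # -> 741.5 J
```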
# Let's try using the discriminant Algebra Level 1 $f(x)=ax^2+bx+c$ If $$b>10, 0<a<2, c=3$$, how many times does the graph $$y=f(x)$$ cross the x-axis?
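A quick numerical check of my own (not the site's posted solution): with the given constraints, $b^2 > 100$ while $4ac < 4 \cdot 2 \cdot 3 = 24$, so the discriminant $b^2 - 4ac$ is always positive and the graph crosses the x-axis twice. For example:

```python
# Sample values inside the allowed ranges b > 10, 0 < a < 2, c = 3.
a, b, c = 1.5, 10.5, 3.0
disc = b * b - 4 * a * c
print(disc, disc > 0)  # -> 92.25 True
```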
## RD Sharma Class 8 Solutions Chapter 1 Rational Numbers Ex 1.8 These Solutions are part of RD Sharma Class 8 Solutions. Here we have given RD Sharma Class 8 Solutions Chapter 1 Rational Numbers Ex 1.8 Other Exercises Question 1. Find a rational number between -3 and 1. Solution: Question 2. Find any five rational numbers less than 1. Solution: Question 3. Find four rational numbers between $$\frac { -2 }{ 9 }$$ and $$\frac { 5 }{ 9 }$$ . Solution: Question 4. Find two rational numbers between $$\frac { 1 }{ 5 }$$ and $$\frac { 1 }{ 2 }$$ . Solution: Question 5. Find ten rational numbers between $$\frac { 1 }{ 4 }$$ and $$\frac { 1 }{ 2 }$$ . Solution: Question 6. Find ten rational numbers between $$\frac { -2 }{ 5 }$$ and $$\frac { 1 }{ 2 }$$ . Solution: Question 7. Find ten rational numbers between $$\frac { 3 }{ 5 }$$ and $$\frac { 3 }{ 4 }$$ . Solution: Hope given RD Sharma Class 8 Solutions Chapter 1 Rational Numbers Ex 1.8 are helpful to complete your math homework. If you have any doubts, please comment below. Learn Insta tries to provide online math tutoring for you.
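The worked solutions are images in the original page and did not survive extraction. As a supplement (my own sketch, not RD Sharma's method), evenly spaced rationals between two given fractions can be generated with Python's exact `fractions` module:

```python
from fractions import Fraction

def rationals_between(a, b, count):
    """Return `count` evenly spaced rationals strictly between a and b."""
    step = (b - a) / (count + 1)
    return [a + step * k for k in range(1, count + 1)]

# E.g. Question 5: ten rational numbers between 1/4 and 1/2.
for q in rationals_between(Fraction(1, 4), Fraction(1, 2), 10):
    print(q)
```

The first value printed is 3/11 (that is, 12/44), and every value lies strictly between 1/4 = 11/44 and 1/2 = 22/44.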
# Crane Corporation has collected the following information after its first year of sales.... Crane Corporation has collected the following information after its first year of sales. Sales were $1,600,000 on 100,000 units; selling expenses $210,000 (40% variable and 60% fixed); direct materials $498,000; direct labor $296,600; administrative expenses $284,000 (20% variable and 80% fixed); and manufacturing overhead $378,000 (70% variable and 30% fixed). Top management has asked you to do a CVP analysis so that it can make plans for the coming year. It has projected that unit sales will increase by 10% next year. Compute (1) the contribution margin for the current year and the projected year, and (2) the fixed costs for the current year. (Assume that fixed costs will remain the same in the projected year.) (1) Contribution margin for current year $ Contribution margin for projected year $ (2) Fixed costs $ Compute the break-even point in units and sales dollars for the current year. Break-even point in units: units Break-even point in dollars: $ The company has a target net income of $212,000. What sales, in dollars, are required for the company to meet its target? Sales dollars required for target net income: $
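A sketch of the CVP arithmetic for the figures above (my own working, not a published answer; the projected-year figure assumes the selling price and variable cost per unit are unchanged, so contribution margin scales with the 10% unit growth):

```python
# Split each mixed cost into its variable and fixed portions, then
# derive contribution margin, break-even, and the target-income sales.
sales, units = 1_600_000, 100_000

variable = (0.40 * 210_000     # selling
            + 498_000          # direct materials
            + 296_600          # direct labor
            + 0.20 * 284_000   # administrative
            + 0.70 * 378_000)  # manufacturing overhead
fixed = 0.60 * 210_000 + 0.80 * 284_000 + 0.30 * 378_000

cm = sales - variable          # contribution margin
cm_per_unit = cm / units
cm_ratio = cm / sales

print(f"CM current year:   ${cm:,.0f}")         # -> $400,000
print(f"CM projected year: ${cm * 1.10:,.0f}")  # -> $440,000
print(f"Fixed costs:       ${fixed:,.0f}")      # -> $466,600
print(f"Break-even units:  {fixed / cm_per_unit:,.0f}")  # -> 116,650
print(f"Break-even sales:  ${fixed / cm_ratio:,.0f}")    # -> $1,866,400
print(f"Sales for target:  ${(fixed + 212_000) / cm_ratio:,.0f}")
# -> $2,714,400
```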
Question – Real Numbers and Their Decimal Expansions

Concept: Real Numbers and Their Decimal Expansions

Question: Express the following in the form p/q, where p and q are integers and q ≠ 0. (i) $0.\bar{6}$ (ii) $0.4\bar{7}$ (iii) $0.\overline{001}$
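The standard trick for such conversions — multiply by a power of ten to line up the repeating block, then subtract — can be packaged in a few lines of Python (a helper of my own, not part of the textbook):

```python
from fractions import Fraction

def repeating_to_fraction(non_rep, rep):
    """Convert 0.<non_rep><rep><rep>... to an exact Fraction.

    E.g. 0.4777... is repeating_to_fraction("4", "7"):
    numerator 47 - 4 = 43, denominator 9 * 10 = 90.
    """
    a, b = len(non_rep), len(rep)
    numerator = int(non_rep + rep) - (int(non_rep) if non_rep else 0)
    return Fraction(numerator, (10**b - 1) * 10**a)

print(repeating_to_fraction("", "6"))    # -> 2/3
print(repeating_to_fraction("4", "7"))   # -> 43/90
print(repeating_to_fraction("", "001"))  # -> 1/999
```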
# Symmetry and shape coexistence in 10Be Published in Bulgarian Journal of Physics, 2022 DOI: 10.55318/bgjp.2022.49.1.057 | arXiv: 2112.04056 Recommended citation: M. A. Caprio, A. E. McCoy, P. J. Fasano, and T. Dytrych, Bulg. J. Phys. 49, 057–066 (2022) (download) Within the low-lying spectrum of 10Be, multiple rotational bands are found, with strikingly different moments of inertia. A proposed interpretation has been that these bands variously represent triaxial rotation and prolate axially-deformed rotation. The bands are well-reproduced in ab initio no-core configuration interaction (NCCI) calculations. We use the calculated wave functions to elucidate the nuclear shapes underlying these bands, by examining the Elliott $\mathrm{SU}(3)$ symmetry content of these wave functions. The ab initio results support an interpretation in which the ground-state band, along with an accompanying $K=2$ side band, represent a triaxial rotor, arising from an $\mathrm{SU}(3)$ irreducible representation in the $0\hbar\omega$ space. Then, the lowest excited $K=0$ band represents a prolate rotor, arising from an $\mathrm{SU}(3)$ irreducible representation in the $2\hbar\omega$ space.
As is well-known, the Brouwer fixed point theorem states that any continuous map from the unit disk in ${\mathbb{R}^n}$ to itself has a fixed point. The standard proof uses the computation of the singular homology groups of spheres. The proof fails, and indeed this is no longer true, for more general compact spaces. However, the following result shows that there is a form of "approximate periodicity" that one can deduce using only elementary facts from general topology. Consider a homeomorphism ${T: X \rightarrow X}$ for ${X}$ a compact metric space, i.e. a discrete dynamical system. We will prove: Theorem 1 (Birkhoff Recurrence Theorem) There exists ${x \in X}$ and a sequence ${n_i \rightarrow \infty}$ with ${T^{n_i} x \rightarrow x}$ as ${i \rightarrow \infty}$. More can actually be said; I’ll return to this topic in the future. One doesn’t need ${T}$ to be a homeomorphism. Before we prove this, we need an auxiliary notion. Say that a homeomorphism ${T: X \rightarrow X}$ is minimal if for every ${x \in X}$, ${T^{\mathbb{Z}} x }$ is dense in ${X}$. I claim that ${T}$ is minimal iff there is no nonempty proper closed ${E \subset X}$ with ${TE = E}$ (such ${E}$ are called ${T}$-invariant). This is straightforward. Indeed, if ${T}$ is not minimal, we can take ${E = \overline{ T^{\mathbb{Z}} x}}$ for some ${x}$ whose orbit is not dense. If there is such a ${T}$-invariant ${E}$, then ${T^{\mathbb{Z}} e}$ for ${e \in E}$ is not dense in ${X}$. Lemma 2 Let ${T: X \rightarrow X}$ be a homeomorphism, ${X}$ compact. Then there is a nonempty closed ${T}$-invariant ${E \subset X}$ such that ${T|_E: E \rightarrow E}$ is minimal. (more…)
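A numerical illustration of my own (not part of the original post): the irrational rotation $T(x) = x + \alpha \bmod 1$ on the circle is a homeomorphism of a compact metric space, and record-setting near-returns $T^{n_i} x \to x$ show up quickly, along a sequence of times $n_i \to \infty$, exactly as the theorem guarantees:

```python
import math

# Rotation T(x) = x + alpha (mod 1) with irrational alpha; iterate a
# point x0 and record every n that sets a new closest-return record.
alpha = math.sqrt(2) - 1
x0 = 0.3

best = 1.0
record_times = []            # the sequence n_i of record-setting returns
point = x0
for n in range(1, 10001):
    point = (point + alpha) % 1.0
    d = min(abs(point - x0), 1.0 - abs(point - x0))  # distance on the circle
    if d < best:
        best = d
        record_times.append(n)

print(record_times[:6], best < 1e-3)  # -> [1, 2, 5, 12, 29, 70] True
```

The record times 1, 2, 5, 12, 29, 70, … are the continued-fraction denominators of $\sqrt{2}-1$; the theorem promises only the qualitative behaviour, with no rate, for a general compact system.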
# Lecture: Context Free Languages, CFGs, PDAs Now we come to our next notion of computation beyond the regular languages and their associated models of computation — regexps, NFAs, and DFAs: the context free languages. Our motivating example is going to be the language we’ve seen repeatedly at this point, $\{0^n1^n | n \geq 0\}$. We showed last time there was no way to make a DFA that decides this language. Again, we’ll define our set of languages in terms of some model of computation. To this end, we introduce context free grammars (CFGs). A context free grammar is like a regular expression but much more powerful. The basic model of computation is the same: we have a set of symbols and rules to expand them. What’s different about CFGs over RegExps is that RegExps have a pre-defined set of rules for their expansion, whereas part of the definition of a CFG is the set of rules for expansion of symbols. To wit, the CFG that matches our troublesome language is • $A \to 0A1$ • $A \to \epsilon$ So, for example, we can expand to get the string “00001111” by the sequence of expansions $A \to 0A1 \to 00A11 \to 000A111 \to 0000A1111 \to 00001111$. Let’s define these CFGs a bit more formally. A context free grammar is: • A finite set of variables $V$ • An alphabet $\Sigma$, where $\Sigma$ and $V$ are disjoint. These are the “terminals” of the CFG. • A finite set of rules $R$, where a “rule” is a pair of a variable and a sequence of terminals and variables. • A distinguished variable that’s the start variable The set of strings that are generated by all the expansions of the grammar is the language described by the grammar. Again, it’s a finite computable process because since there are a finite number of rules and any string we are testing against has a finite length we can simply brute force check through all the expansions of the grammar that are the length of the target string. We can do a number of other things with CFGs. 
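The brute-force idea above — expand the grammar and compare against the target string — collapses to a tiny recursive check for this particular grammar, since each use of $A \to 0A1$ must produce a 0 on the left and a 1 on the right (my own illustration, not from the lecture):

```python
def matches(s):
    """Decide membership in {0^n 1^n} by mirroring A -> 0A1 | epsilon."""
    if s == "":
        return True          # A -> epsilon
    # A -> 0A1: peel one 0 off the front and one 1 off the back.
    return len(s) >= 2 and s[0] == "0" and s[-1] == "1" and matches(s[1:-1])

print([w for w in ["", "01", "0011", "0101", "10", "0"] if matches(w)])
# -> ['', '01', '0011']
```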
For example, we could have a CFG for palindromes over an alphabet. There’s one special form for CFGs that we should note specifically, which is Chomsky Normal Form. A CFG is in Chomsky Normal Form whenever it has the following properties • Every expansion of a variable is either to exactly two variables or a single terminal, i.e. is of the form $A \to BC$ or $A \to c$ • No variable except the start variable can expand to $\epsilon$ • No variable can expand to the start variable This means that Chomsky Normal Form CFGs have a very simple inductive structure that we can take advantage of for proofs. What’s particularly useful is that, as we’ll show, every CFG has an equivalent CFG in Chomsky Normal Form that generates the same language. Now we make this construction clear in steps: • We first introduce a new fresh start variable, $S'$, and have it expand to the old start variable $S$ • The second step is to remove all rules of the form $A \to \epsilon$. This is a recursive process where we pick a variable $A$ that has an $\epsilon$ expansion, remove that rule, and modify the rest of the expansions to account for the fact that $A$ can expand to nothing. We do this by taking every rule that contains an $A$ on the right hand side, i.e. something like $X \to B \ldots A \ldots C$, and adding alongside it a rule that has the $A$ removed, i.e. $X \to B \ldots C$. If the rule is $X \to A$, then we add $X \to \epsilon$. Wait, aren’t we removing the $\epsilon$ expansions? Yes, and so we iterate this process until the only $\epsilon$ rule left, if any, belongs to the start variable. We are, essentially, propagating the use of $\epsilon$ up to the top of the derivation tree. • Next, we replace all rules of the form $A \to B$ by inlining the possible expansions of $B$, so that if we had $A \to B$ and $B \to \ldots$ then we replace $A \to B$ with $A \to \ldots$.
Note that in this step we don’t remove expansions from $B$. • Now, finally, we take care of rules where a variable expands to more than two variables, more than one terminal, or a mixture of variables and terminals. If we have an expansion such as $A \to 0B$, we replace the 0 with a new variable and a single expansion, i.e. $A \to 0B$ becomes $A \to XB$ and $X \to 0$. If we have an expansion with more than two variables, such as $A \to B C D$, then we add a new variable that expands into the sequence piecewise, i.e. the rule becomes $A \to X D$ where $X \to B C$. Note that there’s some freedom here, but no matter how you choose the steps involved you’ll get an equivalent grammar in Chomsky Normal Form. It’s probably a good time for an example, so let’s consider our grammar from above: • $A \to 0A1$ • $A \to \epsilon$ Following step 1 of the above process, we get a new start symbol that must expand to $A$, so the grammar becomes • $S \to A$ • $A \to 0A1$ • $A \to \epsilon$ Now we eliminate the $\epsilon$ expansions: • $S \to A$ • $S \to \epsilon$ • $A \to 01$ • $A \to 0A1$ You can see that everywhere there was an $A$ on the rhs, we’ve added a new rule that has the $A$ removed. Now the only place $\epsilon$ shows up is in an expansion of the start variable, which is allowed in Chomsky Normal Form. Next, we eliminate unary expansions, so now we have • $S \to 01$ • $S \to 0A1$ • $S \to \epsilon$ • $A \to 01$ • $A \to 0A1$ Yes, this step has created a lot of redundancy in the rules. Chomsky Normal Form is useful for its simple inductive structure, but the price of simplicity is that we can no longer represent things as compactly as we’d like. Finally, we put all the remaining rules in the proper form. First, we’ll clean up the terminals and then make the rest of the rules only expand to two variables.
• $S \to XY$ • $S \to XAY$ • $S \to \epsilon$ • $A \to XY$ • $A \to XAY$ • $X \to 0$ • $Y \to 1$ and after the final bit of cleanup • $S \to XY$ • $S \to ZY$ • $S \to \epsilon$ • $A \to XY$ • $A \to ZY$ • $X \to 0$ • $Y \to 1$ • $Z \to XA$ and our grammar is now in Chomsky Normal Form. Wow, umm, that’s a lot uglier and harder to read now isn’t it? Moving on! So when dealing with the regular languages, we had regular expressions which had an interpretation as DFAs/NFAs. Now if CFGs play the role of regexps for the context-free languages, then what plays the role of the NFA? Let’s think for a moment about why we couldn’t build an NFA for that pesky language $\{0^n1^n | n \geq 0\}$. We didn’t have any notion of “memory” for our NFA, there was no way to keep count of how many 0s we’d already seen so we’d know to only accept an equal number of 1s. That being said, if we had something that was an awful lot like an NFA yet had a notion of memory then maybe that would solve the problem. That’s exactly what we’re going to introduce: Pushdown automata (PDAs). We’ll get to those next time.
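As an aside, one payoff of Chomsky Normal Form is the classic CYK dynamic-programming algorithm, which decides membership in $O(n^3)$ time by filling a table of which variables derive which substrings. Here is a sketch run against the CNF grammar we just derived:

```python
# CYK membership test for the CNF grammar derived above.
# Terminal rules map a variable to a single character; binary rules to a pair.
RULES = {
    'S': [('X', 'Y'), ('Z', 'Y')],
    'A': [('X', 'Y'), ('Z', 'Y')],
    'Z': [('X', 'A')],
    'X': ['0'],
    'Y': ['1'],
}

def cyk(w, start='S'):
    if w == '':
        return True          # S -> epsilon is allowed in this grammar
    n = len(w)
    # t[i][l] = set of variables deriving w[i:i+l]
    t = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(w):
        for v, rhss in RULES.items():
            if ch in rhss:
                t[i][1].add(v)
    for l in range(2, n + 1):            # substring length
        for i in range(n - l + 1):       # substring start
            for k in range(1, l):        # split point
                for v, rhss in RULES.items():
                    for rhs in rhss:
                        if (isinstance(rhs, tuple)
                                and rhs[0] in t[i][k] and rhs[1] in t[i + k][l - k]):
                            t[i][l].add(v)
    return start in t[0][n]

print(cyk('0011'), cyk('0101'), cyk(''))  # True False True
```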
{}
# Revision history

### About finding roots of polynomials in specific domains

I have two polynomials $p(x)$ and $q(x)$ and I want to know if there are roots of the equation $\frac{p'}{p} = \frac{q'}{q}$ in the domain $(a,\infty)$, where $a = \max\{\text{roots}(p), \text{roots}(q)\}$. This is the same as asking for the roots of the polynomial $p'q - pq' = 0$ in the same domain.

• Can something in Sage help?
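In Sage itself one would use its polynomial-ring machinery; as a language-agnostic illustration of the computation being asked about, here is a plain-Python sketch. The polynomials $p = x^2 - 1$ and $q = x - 2$ (so $a = 2$) are an arbitrary example; the sketch forms $p'q - pq'$ and isolates its root beyond $a$ by bisection:

```python
# polynomials as coefficient lists, lowest degree first
def deriv(c):
    return [i * c[i] for i in range(1, len(c))]

def mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def sub(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x - y for x, y in zip(a, b)]

def evalp(c, x):
    return sum(ci * x ** i for i, ci in enumerate(c))

p = [-1, 0, 1]   # p = x^2 - 1, real roots -1, 1
q = [-2, 1]      # q = x - 2,   real root 2

# roots of p'/p = q'/q are the roots of r = p'q - p q'
r = sub(mul(deriv(p), q), mul(p, deriv(q)))   # here: x^2 - 4x + 1

a = 2.0                       # max real root of p and q
lo, hi = a, 100.0             # sign change: r(2) < 0 < r(100)
for _ in range(80):           # bisection
    mid = (lo + hi) / 2
    if evalp(r, lo) * evalp(r, mid) <= 0:
        hi = mid
    else:
        lo = mid
print(round((lo + hi) / 2, 6))  # 3.732051, i.e. 2 + sqrt(3)
```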
{}
Problem. Find the values of $x$ that satisfy each inequality and plot the solution in interval notation on the number line.
{}
Product Rule

The product rule is a formal rule for finding the derivative of a product of two or more functions. In Leibniz’s notation we can express it as $$\frac{d}{dx}(u \cdot v) = \frac{du}{dx} \cdot v + u \cdot \frac{dv}{dx}$$ or in Lagrange’s notation as $$(u \cdot v)' = u' v + u v'$$ This rule can be extended to a derivative of three or more functions.

History

Gottfried Leibniz is credited with the discovery of this rule; he demonstrated it using differentials.

Derivation

Leibniz’s argument: let $$u(x)$$ and $$v(x)$$ be two differentiable functions of x. Then the differential of $$u \cdot v$$ is given by $$d(u \cdot v) = (u + du) \cdot (v + dv) - u \cdot v = u\,dv + v\,du + du \cdot dv$$ Since the term $$du \cdot dv$$, being a product of two infinitesimals, is negligible compared to $$u\,dv + v\,du$$, Leibniz neglected it and concluded that $$d(u \cdot v) = v \cdot du + u \cdot dv$$ Dividing both sides by $$dx$$, we arrive at $$\frac{d}{dx}(u \cdot v) = \frac{du}{dx} \cdot v + u \cdot \frac{dv}{dx}$$

Proof

Let us consider two differentiable functions $$f(x)$$ and $$g(x)$$, and let $$h(x) = f(x)g(x)$$. We will prove that $$h$$ is differentiable and that $$h'(x) = f'(x)g(x) + f(x)g'(x)$$. From first principles, $$h'(x) = \lim_{\Delta x \to 0} \frac{h(x + \Delta x) - h(x)}{\Delta x} = \lim_{\Delta x \to 0} \frac{f(x + \Delta x)g(x + \Delta x) - f(x)g(x)}{\Delta x}$$ Adding and subtracting $$f(x)g(x + \Delta x)$$ in the numerator, $$= \lim_{\Delta x \to 0} \frac{f(x + \Delta x)g(x + \Delta x) - f(x)g(x + \Delta x) + f(x)g(x + \Delta x) - f(x)g(x)}{\Delta x}$$ $$= \lim_{\Delta x \to 0} \frac{[f(x + \Delta x) - f(x)] \cdot g(x + \Delta x) + f(x) \cdot [g(x + \Delta x) - g(x)]}{\Delta x}$$ $$= \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} \cdot \lim_{\Delta x \to 0} g(x + \Delta x) + f(x) \cdot \lim_{\Delta x \to 0} \frac{g(x + \Delta x) - g(x)}{\Delta x}$$ Since $$g$$, being differentiable, is continuous, $$\lim_{\Delta x \to 0} g(x + \Delta x) = g(x)$$, and so $$h'(x) = f'(x)g(x) + f(x)g'(x)$$ Hence proved.
Application of Product Rule

This rule is used mainly in calculus and is important when one has to differentiate a product of two or more functions. It keeps the calculation clean and easier to carry out.

Examples

Question 1. Differentiate the function $$(x^3 + 5)(x^2 + 1)$$.

Solution. Here $$f(x) = (x^3 + 5)$$ and $$g(x) = (x^2 + 1)$$. Using the product rule we get $$\frac{d\,(x^3 + 5)(x^2 + 1)}{dx} = \frac{d(x^3 + 5)}{dx} \cdot (x^2 + 1) + (x^3 + 5) \cdot \frac{d(x^2 + 1)}{dx}$$ = $$(3x^2)(x^2 + 1) + (x^3 + 5)(2x)$$ = $$3x^4 + 3x^2 + 2x^4 + 10x$$ = $$5x^4 + 3x^2 + 10x$$.

Question 2. Find the derivative of $$h(x)$$ if $$h(x) = f(x)g(x)$$ with $$f(x) = \sin(x)$$ and $$g(x) = \cos(x)$$.

Solution. Given $$h(x) = \sin(x)\cos(x)$$, the product rule gives $$h'(x) = \cos(x)\cos(x) + \sin(x)(-\sin(x)) = \cos^2(x) - \sin^2(x) = \cos(2x)$$.

FAQs

What is the product rule? It is the rule for differentiating a product of two functions: $$(xy)' = x'y + xy'$$

How do we use the product rule on three terms? When there are three factors, we group two of them and treat the group as a single unit, then apply the rule twice. E.g. $$(fgh)' = (fg)'h + (fg)h'$$ $$= (f'g + fg')h + fgh'$$ $$= f'gh + fg'h + fgh'$$

What is the difference between the product rule and the quotient rule? The product rule is used to find the derivative of a product of functions, while the quotient rule is used for finding the derivative of a quotient of functions.

How is the product rule different from the Leibniz rule? The general Leibniz rule extends the product rule: it gives a formula for the n-th derivative of a product.
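Both examples can be sanity-checked numerically with central differences; the sample point and step size below are arbitrary choices:

```python
import math

def d(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f, g = math.sin, math.cos
x0 = 0.7

lhs = d(lambda x: f(x) * g(x), x0)          # (f g)'(x0), computed directly
rhs = d(f, x0) * g(x0) + f(x0) * d(g, x0)   # f'g + f g' at x0
print(abs(lhs - rhs) < 1e-6)                # True
print(abs(lhs - math.cos(2 * x0)) < 1e-6)   # True, matching Question 2
```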
{}
# 7.3 Unit circle  (Page 7/11) Access these online resources for additional instruction and practice with sine and cosine functions. ## Key equations Cosine $\mathrm{cos}\,t=x$ Sine $\mathrm{sin}\,t=y$ Pythagorean Identity ${\mathrm{cos}}^{2}t+{\mathrm{sin}}^{2}t=1$ ## Key concepts • Finding the function values for the sine and cosine begins with drawing a unit circle, which is centered at the origin and has a radius of 1 unit. • Using the unit circle, the sine of an angle $t$ equals the y -value of the endpoint on the unit circle of an arc of length $t$, whereas the cosine of the angle equals the x -value of the endpoint. See [link] . • The sine and cosine values are most directly determined when the corresponding point on the unit circle falls on an axis. See [link] . • When the sine or cosine is known, we can use the Pythagorean Identity to find the other. The Pythagorean Identity is also useful for determining the sines and cosines of special angles. See [link] . • Calculators and graphing software are helpful for finding sines and cosines if the proper procedure for entering information is known. See [link] . • The domain of the sine and cosine functions is all real numbers. • The range of both the sine and cosine functions is $\left[-1,1\right].$ • The sine and cosine of an angle have the same absolute value as the sine and cosine of its reference angle. • The signs of the sine and cosine are determined from the x - and y -values in the quadrant of the original angle. • An angle’s reference angle is the size of the smallest acute angle, $t'$, formed by the terminal side of the angle $t$ and the horizontal axis. See [link] . • Reference angles can be used to find the sine and cosine of the original angle.
See [link] . • Reference angles can also be used to find the coordinates of a point on a circle. See [link] . ## Verbal Describe the unit circle. The unit circle is a circle of radius 1 centered at the origin. What do the x - and y - coordinates of the points on the unit circle represent? Discuss the difference between a coterminal angle and a reference angle. Coterminal angles are angles that share the same terminal side. A reference angle is the size of the smallest acute angle, $t'$, formed by the terminal side of the angle $t$ and the horizontal axis. Explain how the cosine of an angle in the second quadrant differs from the cosine of its reference angle in the unit circle. Explain how the sine of an angle in the second quadrant differs from the sine of its reference angle in the unit circle. The sine values are equal. ## Algebraic For the following exercises, use the given sign of the sine and cosine functions to find the quadrant in which the terminal point determined by $t$ lies. $\mathrm{sin}\left(t\right)<0$ and $\mathrm{cos}\left(t\right)<0$ $\mathrm{sin}\left(t\right)>0$ and $\mathrm{cos}\left(t\right)>0$ I $\mathrm{sin}\left(t\right)>0$ and $\mathrm{cos}\left(t\right)<0$ $\mathrm{sin}\left(t\right)<0$ and $\mathrm{cos}\left(t\right)>0$ IV For the following exercises, find the exact value of each trigonometric function.
$\mathrm{sin}\,\frac{\pi }{2}$ $\mathrm{sin}\,\frac{\pi }{3}$ $\frac{\sqrt{3}}{2}$ $\mathrm{cos}\,\frac{\pi }{2}$ $\mathrm{cos}\,\frac{\pi }{3}$ $\frac{1}{2}$ $\mathrm{sin}\,\frac{\pi }{4}$ $\mathrm{cos}\,\frac{\pi }{4}$ $\frac{\sqrt{2}}{2}$ $\mathrm{sin}\,\frac{\pi }{6}$ $\mathrm{sin}\,\pi$ 0 $\mathrm{sin}\,\frac{3\pi }{2}$ $\mathrm{cos}\,\pi$ -1 $\mathrm{cos}\,0$ $\mathrm{cos}\,\frac{\pi }{6}$ $\frac{\sqrt{3}}{2}$
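The reference-angle facts in the key concepts are easy to check numerically. The helper below is a straightforward implementation of the definition, and the test angle is an arbitrary second-quadrant choice:

```python
import math

def reference_angle(t):
    """Smallest angle between the terminal side of t and the horizontal axis."""
    t = t % (2 * math.pi)
    if t > math.pi:
        t = 2 * math.pi - t
    return min(t, math.pi - t)

t = 5 * math.pi / 6                    # an angle in the second quadrant
r = reference_angle(t)                 # pi/6
print(math.isclose(r, math.pi / 6))                            # True
print(math.isclose(abs(math.sin(t)), math.sin(r)))             # same absolute value
print(math.isclose(abs(math.cos(t)), math.cos(r)))             # same absolute value
print(math.isclose(math.sin(t) ** 2 + math.cos(t) ** 2, 1.0))  # Pythagorean Identity
```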
{}
## Tangent measure distributions of fractal measures

• Tangent measure distributions are a natural tool to describe the local geometry of arbitrary measures of any dimension. We show that for every measure on a Euclidean space and every s, at almost every point, all s-dimensional tangent measure distributions define statistically self-similar random measures. Consequently, the local geometry of general measures is not different from the local geometry of self-similar sets. We illustrate the strength of this result by showing how it can be used to improve recently proved relations between ordinary and average densities.
{}
Externally Funded Project Additive combinatorics over finite fields and applications FWF Project P 30405-N32 Runtime: 01.09.2017-31.08.2021 ###### Project Abstract Loosely speaking, additive combinatorics is the study of arithmetic structures within finite sets. It is an indication of the high level of activity in this research area that it has become the primary research interest for three Fields medalists (Terence Tao, Timothy Gowers, and Jean Bourgain), along with several more of the world's most respected and decorated mathematicians (such as Ben Green, Nets Katz and Endre Szemeredi). Additive combinatorics over finite fields is particularly interesting because of its applications to computer science, cryptography, and coding theory. It is a very old area with celebrated results such as the Cauchy-Davenport theorem: Let $A,B$ be subsets of a finite field of prime order $p$. Then we have $|A+B|\ge \min\{|A|+|B|-1,p\}$. Recent years have seen a flurry of activity in this area. One influential development was the work of Bourgain, Katz and Tao, which shows that for a subset $A$ of a finite field (with $A$ not too large) either the product set $A \cdot A=\{ab : a,b \in A\}$ or the sum set $A+A=\{a+b : a,b \in A\}$ is essentially larger than $A$. Since then this area has gained increasing interest. Among others we will study the following topics, which are problems either coming directly from additive combinatorics or dealing with applications where methods from additive combinatorics are very promising: • sum-product and related problems • character sums with convolutions and Balog-Wooley decomposition • covering sets and packing sets, rewriting schemes and error-correction • Waring's problem in finite fields and covering codes • sums of Lehmer numbers.
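The Cauchy-Davenport bound quoted above can be checked by brute force for a small prime; this is only an illustration of the statement, not of the proof techniques used in the area:

```python
import random

# Empirical check of the Cauchy-Davenport bound |A+B| >= min(|A|+|B|-1, p)
# over the field of prime order p = 11.
p = 11
random.seed(0)
for _ in range(200):
    A = set(random.sample(range(p), random.randint(1, p)))
    B = set(random.sample(range(p), random.randint(1, p)))
    sumset = {(a + b) % p for a in A for b in B}
    assert len(sumset) >= min(len(A) + len(B) - 1, p)
print("bound holds for all sampled pairs mod", p)
```

The bound is tight for arithmetic progressions with the same common difference, e.g. $A = B = \{0, 1, 2\}$ gives $|A+B| = 5 = |A| + |B| - 1$.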
We will use a collection of different methods and their combinations including • theorems from incidence geometry • character sum techniques • polynomial method • probabilistic method • linear programming • methods from algebraic geometry We expect that the results and newly developed methods of this project will provide substantial contributions to both theory and applications. ###### Publications and Preprints Peer Reviewed Journal Publication • H. Aly, A. Winterhof (2020) A note on Hall's sextic residue sequence: correlation measure of order k and related measures of pseudorandomness. IEEE Trans. Inf. Th., Bd. 66 (3), S. 1944--1947. • Makhul, M.; Roche-Newton, O.; Warren, A.; Zeeuw, F. de (2020) Constructions for the Elekes-Szabó and Elekes-Rónyai problems. Electronic Journal of Combinatorics, Bd. to appear, S. 1--10. • Mehdi Makhul, Josef Schicho, Matteo Gallet (2019) Probabilities of incidence between lines and a plane curve over finite fields. Finite Fields and Their Applications, Bd. 61 (101582), S. 22pp. • Warren, A. (2019) On products of shifts in arbitrary fields. Moscow Journal of Combinatorics and Number Theory, Bd. 8 (3), S. 247--261. • R. Hofer, A. Winterhof (2019) r-th order nonlinearity, correlation measure and least significant bit of the discrete logarithm. Cryptography and Communications, Bd. 11 (5), S. 993--997. • O. Roche-Newton, Ilya D. Shkredov (2019) If A+A is small then AAA is superquadratic. Journal of Number Theory, Bd. 201, S. 124-134. • Makhul, M. (2019) A family of four-variable expanders with quadratic growth. Mosc. J. Comb. Number Theory, Bd. 8 (2), S. 143--149. • A. Warren, A. Winterhof (2019) Conical Kakeya and Nikodym sets in finite fields. Finite Fields and Their Applications, Bd. 59, S. 185--198. • Oliver Roche-Newton, Imre Z. Ruzsa, Chun-Yen Shen, Ilya D. Shkredov (2019) On the size of the set AA+A. Journal of the London Mathematical Society, Bd. 99 (2), S. 477-494.
• Brendan Murphy, Giorgis Petridis, Oliver Roche-Newton, Misha Rudnev, Ilya D. Shkredov (2019) New results on sum-product type growth over fields. Mathematika, Bd. 65 (3), S. 588-642. • O. Roche-Newton, A. Warren (2019) Improved bounds for pencils of lines. Proceedings of the American Mathematical Society, Bd. to appear, S. 10. • O. Roche-Newton, I.E. Shparlinski, A. Winterhof (2019) Analogues of the Balog--Wooley decomposition for subsets of finite fields and character sums with convolutions. Annals of Combinatorics, Bd. 23 (1), S. 183-205. • I. Shparlinski, A. Winterhof (2019) Codes correcting restricted errors. Designs, Codes and Cryptography, Bd. 87 (4), S. 855-863. • Z. Sun, A. Winterhof (2019) On the maximum order complexity of subsequences of the Thue-Morse sequence and Rudin-Shapiro sequence along squares. International Journal of Computer Mathematics: Computer Systems Theory, Bd. 4 (1), S. 30-36. • Iosevich, A.; Roche-Newton, O.; Rudnev, M. (2018) On discrete values of bilinear forms. Mat. Sb, Bd. 209 (10), S. 71--88. • O. Roche-Newton, I. Shkredov, A. Winterhof (2018) Packing sets over finite abelian groups. Integers, Bd. 18 (Paper No. A38), S. 9 pp. • N. Anbar, A. Oduzak, V. Patel, L. Quoos, A. Somoza, A. Topuzoglu (2018, online: 2017) On the difference between permutation polynomials over finite fields. Finite Fields and Their Applications, Bd. 49, S. 132-142. • B. Hanson, O. Roche-Newton and D. Zhelezov (online: 2019) On iterated product sets with shifts. Mathematika, Bd. 65 (4), S. 831-850. Book/Monograph • K.-U. Schmidt, A. Winterhof (eds.) (2019) Combinatorics and finite fields: Difference sets, polynomials, pseudorandomness and applications. In Reihe: Radon Series on Computational and Applied Mathematics, 23; Berlin: de Gruyter. Contribution in Collection • N. Anbar, A. Ozdak, V. Patel, L. Quoos, A. Somoza, A. Topuzoğlu (2019) On the Carlitz rank of permutation polynomials: Recent developments. In: Bouw, I., Özman, E., Johnson-Leung, J., Newton, R. 
(Hrsg.), Proceedings of Women in Numbers Europe 2: Springer, S. 39--55. Workingpaper • Isik, L.; Winterhof, A. (2020) On the index of the Diffie-Hellman mapping. • Gomez, D.; Winterhof, A. (2020) A note on the cross-correlation of Costas permutations. • Swaenepoel, C.; Winterhof, A. (2020) Additive double character sums over some structured sets and applications. • Makhul, M.; Warren, A.; Winterhof, A. (2020) The spherical Kakeya problem in finite fields. • Makhul, M. (2019) On the number of perfect triangles with a fixed angle. • O. Roche-Newton, A. Warren (2019) New expander bounds from affine group energy.
{}
### Journée-séminaire de combinatoire #### (CALIN team of LIPN, Université Paris-Nord, Villetaneuse) On 12 October 2021 at 14:00 in B107 (& videoconference), Imre Bárány will speak on: Cells in the box and a hyperplane Abstract: It is well known that a line can intersect at most $2n-1$ cells of the $n \times n$ chessboard. What happens in higher dimensions: how many cells of the $d$-dimensional $[0,n]^d$ box can a hyperplane intersect? We determine this number asymptotically. We also prove the integer analogue of the following fact. If $K,L$ are convex bodies in $R^d$ and $K \subset L$, then the surface area of $K$ is smaller than that of $L$. Joint work with Peter Frankl. [Slides.pdf] [arXiv] Last modified: Wednesday 12 October 2022 Contact for this page: Cyril.Banderier at lipn.univ-paris13.fr
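The two-dimensional fact quoted in the abstract is easy to verify by brute force. The sketch below counts, for lines $y = mx + c$, the cells of the $n \times n$ board whose open interior the line meets; the board size and the sampling ranges are arbitrary choices:

```python
import random

def cells_met(m, c, n):
    """Cells of the n x n board whose interior the line y = m*x + c meets."""
    count = 0
    for i in range(n):                       # column i covers x in (i, i+1)
        y0, y1 = m * i + c, m * (i + 1) + c
        lo, hi = min(y0, y1), max(y0, y1)
        for j in range(n):                   # row j covers y in (j, j+1)
            if hi > j and lo < j + 1:        # open-interval overlap
                count += 1
    return count

n = 8
print(cells_met(1.0, 0.5, n))    # 15, meeting the bound 2n - 1
random.seed(2)
ok = all(cells_met(random.uniform(-3, 3), random.uniform(-3, 11), n) <= 2 * n - 1
         for _ in range(500))
print(ok)                        # True
```

The bound follows because the line's intersection with the open square is a single segment, and the cell it lies in changes only at the $n-1$ interior vertical and $n-1$ interior horizontal grid lines.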
{}
# Is this distributional Laplacian a measure? Let $\Omega\subset\mathbb{R}^N$ be a bounded smooth domain. Suppose that $0<k<t<1$ and $\Phi$ is a smooth function satisfying: $\Phi(x)=0$ for $x\le k$, $\Phi(x)=1$ for $x\ge t$. Take $u\in C_0^\infty(\overline{\Omega})$ with $u\ge 0$ and $u$ superharmonic ($-\Delta u\ge 0$). Note that $$\Delta (\Phi\circ u)=(\Phi''\circ u)|\nabla u|^2+(\Phi'\circ u)\Delta u,\tag{1}$$ so $$|\Delta (\Phi\circ u)|\le C(|\nabla u|^2+\chi_{u>k}|\Delta u|).\tag{2}$$ Since $u\ge 0$ and $-\Delta u\ge 0$, we have $$\chi_{u>k}|\Delta u|\le \frac{1}{k}u|\Delta u|=-\frac{1}{k}u\Delta u.\tag{3}$$ We combine Green's identity with $(3)$ to conclude that $$\int_{u>k}|\Delta u|\le \frac{-1}{k}\int _\Omega u\Delta u=\frac{1}{k}\int_\Omega |\nabla u|^2.\tag{4}$$ Therefore, $(1)$ and $(4)$ give $$\int_\Omega |\Delta (\Phi\circ u)|\le C\left(1+\frac{1}{k}\right)\int_\Omega |\nabla u|^2,$$ or equivalently $$\|\Delta(\Phi\circ u)\|_{\mathcal{M}(\Omega)}\le C\|\nabla u\|_2^2,\ \forall \ u\in C_0^\infty (\overline{\Omega}),\ u\ge 0,\ -\Delta u\ge 0.\tag{5}$$ So my question is: Can we extend $(5)$ to all functions $u\in W_0^{1,2}(\Omega)$ such that $u\ge 0$ and $u$ is superharmonic (the distributional Laplacian of $u$ is nonnegative)? Remark 1: $C_0^\infty(\overline{\Omega})$ is the space of all $C^\infty(\overline{\Omega})$ functions which vanish on the boundary. Remark 2: $C>0$ is a constant, which can change in every line and depends only on $k$, $\|\Phi'\|_\infty$ and $\|\Phi''\|_\infty$. Remark 3: In this question, I tried to solve this problem by showing that $(1)$ holds in the sense of measures also for $u\in W_0^{1,2}(\Omega)$; however, that does not seem to be true. $\textbf{Update (A Supposed Proof)}$: Assume that $u\in W_0^{1,2}(\Omega)$ satisfies $u\ge 0$, $-\Delta u\ge 0$. Let $\varphi_n$ be the standard mollifier sequence.
Extend $u$ by zero outside $\Omega$ (we still use the same notation $u$) and consider the sequence $$u_n(x)=\int_{\mathbb{R}^N} \varphi_n(x-y)u(y)dy,\ x\in \mathbb{R}^N$$ Since $u\in W_0^{1,2}(\mathbb{R}^N)$, we have that $u_n\in C_0^\infty(\overline{\Omega})$, $u_n\ge 0$ and $-\Delta u_n\ge 0$. Moreover, $u_n\to u$ in $W^{1,2}(\Omega)$. From $(5)$, we have that $$\|\Delta (\Phi\circ u_n)\|_{\mathcal{M}(\Omega)}\le C\|\nabla u_n\|_2^2,$$ therefore we can assume, without loss of generality, that $\Delta(\Phi\circ u_n)$ converges in the weak star topology to some measure $\mu\in \mathcal{M}(\Omega)$. Let $v\in W_0^{1,1}(\Omega)$ be the solution of the problem $$\left\{ \begin{array}{ccc} \Delta v =\mu&\text{in } \Omega \\ v=0 &\text{on } \partial \Omega \end{array} \right.$$ By the weak convergence, we have that $$\int_\Omega \phi(x)\Delta (\Phi\circ u_n)\to \int_\Omega \phi(x) d\mu=\int_\Omega \phi(x)\Delta v,\ \forall \phi\in C_0(\overline{\Omega}),\tag{6}$$ hence, from $(6)$ and the definition of $v$, we conclude that $$\int_\Omega \Delta \phi(x) (\Phi\circ u_n)\to\int_\Omega \Delta \phi(x) v,\ \forall \phi\in C_0^\infty(\overline{\Omega}) .$$ As $\Phi\circ u_n \to \Phi\circ u$ in $W^{1,2}(\Omega)$, we must conclude that $v=\Phi\circ u$, and thus $\Delta(\Phi\circ u)\in \mathcal{M}(\Omega)$ with $$\|\Delta (\Phi\circ u)\|_{\mathcal{M}(\Omega)}\le C\|\nabla u\|_2^2$$ Is this proof right? Can anyone check it for me, please? • Summary: you know that for smooth functions, the mass of $\Delta(\Phi\circ u)$ is controlled by $\|\nabla u\|_{L^2}$ and the mass of $\Delta u$. You want to use a sequence of smooth functions $u_n$ converging to $u$, so that the mass of $\Delta(\Phi\circ u_n)$ remains under control. Indeed, having this would be enough. Derivatives commute with distributional limits, and the distributional limit of measures with uniformly bounded TV is a measure.
I think the only gap in your argument is: the extended $u$ picks up some generalized Laplacian along $\partial \Omega$. What to do with it? – user147263 Jul 21 '14 at 0:11 • @Thisismuchhealthier. Thank you for your reply. All calculations are local, i.e., in $\Omega$, so I can't see where the generalized Laplacian along the boundary would cause trouble. Could you point out where the possible flaw is? – Tomás Jul 21 '14 at 1:52 • When you mollify the extended $u$, its behavior on the boundary influences what happens inside of $\Omega$. – user147263 Jul 21 '14 at 1:53 • @Thisismuchhealthier. I see, and now I have the problem that $u_n$ is not superharmonic. So, in some way, we will have to consider the normal derivative of $u$ along the boundary. – Tomás Jul 21 '14 at 2:23 • @Thisismuchhealthier. Let $\delta>0$ be small, $\Omega_\delta=\{x\in \Omega: d(x,\partial\Omega)<\delta\}$. Let $u_\delta(x)=u(x)$ for $x\in \Omega\setminus\Omega_\delta$ and $u_\delta(x)=u(\tau_\delta(x))d(x,\partial\Omega)/\delta$ for $x\in \Omega_\delta$, where $\tau_\delta(x)$ satisfies $d(x,\partial(\Omega\setminus\Omega_\delta))=d(x,\tau_\delta(x))$. Is it true that $u_\delta \to u$ in $W^{1,2}(\Omega)$? If so, I think I can use $u_\delta$ to approximate $u$ by smooth superharmonic functions. – Tomás Jul 21 '14 at 12:58
{}
I think a good illustration of why torsion-free sheaves on singular curves are both interesting and difficult is given by the following. Consider the $GL_n$ case of the Hitchin fibration, i.e., the map from the moduli space of vector bundles of rank $n$ with a twisted endomorphism on a smooth, projective curve to the Hitchin base space of characteristic polynomials. Then a result of Beauville, Narasimhan, and Ramanan (see this paper http://math.unice.fr/~beauvill/pubs/bnr.pdf) says that for a sufficiently nice characteristic polynomial $a$ in the Hitchin base, the stack of torsion-free coherent sheaves of rank one on the associated spectral curve is isomorphic to the Hitchin fiber associated to $a$. See, for example, the notes on the Hitchin fibration on Drinfeld's geometric Langlands page for a quick introduction to these ideas.
{}
Warning This documents an unmaintained version of NetworkX. Please upgrade to a maintained version and see the current NetworkX documentation.

# networkx.algorithms.dag.transitive_reduction¶

transitive_reduction(G)[source]

Returns the transitive reduction of a directed graph.

The transitive reduction of G = (V,E) is a graph G- = (V,E-) such that for all v,w in V there is an edge (v,w) in E- if and only if (v,w) is in E and there is no path from v to w in G with length greater than 1.

Parameters: G (NetworkX DiGraph) – A directed acyclic graph (DAG)

Returns: The transitive reduction of G

Return type: NetworkX DiGraph

Raises: NetworkXError – If G is not a directed acyclic graph (DAG). In that case the transitive reduction is not uniquely defined, and a NetworkXError exception is raised.

References

https://en.wikipedia.org/wiki/Transitive_reduction
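The definition can be illustrated without NetworkX. The following pure-Python sketch (not the library's actual implementation) keeps an edge exactly when no path of length greater than 1 joins its endpoints, assuming the input edge list describes a DAG:

```python
from collections import defaultdict

def transitive_reduction(edges):
    """Keep edge (v, w) only if no path of length > 1 joins v to w (input: a DAG)."""
    adj = defaultdict(set)
    for v, w in edges:
        adj[v].add(w)

    def longer_path(src, dst):
        # DFS from src to dst that is not allowed to use the edge (src, dst)
        stack, seen = [src], set()
        while stack:
            u = stack.pop()
            for x in adj[u]:
                if (u, x) == (src, dst):
                    continue
                if x == dst:
                    return True
                if x not in seen:
                    seen.add(x)
                    stack.append(x)
        return False

    return {(v, w) for v, w in edges if not longer_path(v, w)}

# the edge (1, 3) is implied by 1 -> 2 -> 3, so it is dropped
print(sorted(transitive_reduction([(1, 2), (2, 3), (1, 3)])))  # [(1, 2), (2, 3)]
```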
{}
# MBTiles¶ Driver short name MBTiles The MBTiles driver allows reading rasters in the MBTiles format, which is a specification for storing tiled map data in SQLite databases. Starting with GDAL 2.1, the MBTiles driver has creation and write support for MBTiles raster datasets. Starting with GDAL 2.3, the MBTiles driver has read and write support for MBTiles vector datasets. For standalone Mapbox Vector Tile files or sets of MVT files, see the MVT driver. Note: vector write support requires GDAL to be built with GEOS. GDAL/OGR must be compiled with OGR SQLite driver support, and the JPEG and PNG drivers. The SRS is always the Pseudo-Mercator (a.k.a. Google Mercator) projection. Starting with GDAL 2.3, the driver will open a dataset as RGBA. For previous versions, the driver will try to determine the number of bands by probing the content of one tile. It is possible to alter this behaviour by defining the MBTILES_BAND_COUNT configuration option (or, starting with GDAL 2.1, the BAND_COUNT open option) to the number of bands. The values supported are 1, 2, 3 or 4. A four-band (Red, Green, Blue, Alpha) dataset gives the maximum compatibility with the various encodings of tiles that can be stored. The driver will use the ‘bounds’ metadata in the metadata table and do the necessary tile clipping, if needed, to respect that extent. However, that information is optional; if omitted, the driver will use the extent of the tiles at the maximum zoom level. The user can also specify the USE_BOUNDS=NO open option to force the use of the actual extent of tiles at the maximum zoom level, or specify any of MINX/MINY/MAXX/MAXY to set a custom extent. The driver can retrieve pixel attributes encoded according to the UTFGrid specification available in some MBTiles files. They can be obtained with the gdallocationinfo utility, or with a GetMetadataItem(“Pixel_iCol_iLine”, “LocationInfo”) call on a band object.
## Driver capabilities¶

Supports CreateCopy(): This driver supports the GDALDriver::CreateCopy() operation. Supports Create(): This driver supports the GDALDriver::Create() operation. Supports Georeferencing: This driver supports georeferencing. Supports VirtualIO: This driver supports virtual I/O operations (/vsimem/, etc.)

## Opening options¶

Starting with GDAL 2.1, the following open options are available:
• Raster and vector:
• ZOOM_LEVEL=value: Integer value between 0 and the maximum filled in the tiles table. By default, the driver will select the maximum zoom level such that at least one tile at that zoom level is found in the ‘tiles’ table.
• USE_BOUNDS=YES/NO: Whether to use the ‘bounds’ metadata, when available, to determine the AOI. Defaults to YES.
• MINX=value: Minimum easting (in EPSG:3857) of the area of interest.
• MINY=value: Minimum northing (in EPSG:3857) of the area of interest.
• MAXX=value: Maximum easting (in EPSG:3857) of the area of interest.
• MAXY=value: Maximum northing (in EPSG:3857) of the area of interest.
• Raster only:
• BAND_COUNT=AUTO/1/2/3/4: Number of bands of the dataset exposed after opening. Some conversions will be done when possible and implemented, but this might fail in some cases, depending on the BAND_COUNT value and the number of bands of the tile. Defaults to AUTO.
• TILE_FORMAT=PNG/PNG8/JPEG: Format used to store tiles. See the Tile formats section. Only used in update mode. Defaults to PNG.
• QUALITY=1-100: Quality setting for JPEG compression. Only used in update mode. Defaults to 75.
• ZLEVEL=1-9: DEFLATE compression level for PNG tiles. Only used in update mode. Defaults to 6.
• DITHER=YES/NO: Whether to use Floyd-Steinberg dithering (for TILE_FORMAT=PNG8). Only used in update mode. Defaults to NO.
• Vector only (GDAL >= 2.3):
• CLIP=YES/NO: Whether to clip geometries of vector features to tile extent. Defaults to YES.
• ZOOM_LEVEL_AUTO=YES/NO: Whether to auto-select the zoom level for vector layers according to the spatial filter extent. Only for display purposes. Defaults to NO.

## Raster creation issues¶

Depending on the number of bands of the input dataset and the tile format selected, the driver will do the necessary conversions to be compatible with the tile format. When using the CreateCopy() API (such as with gdal_translate), automatic reprojection of the input dataset to EPSG:3857 (WebMercator) will be done, with selection of the appropriate zoom level. Fully transparent tiles will not be written to the database, as allowed by the format. The driver implements the Create() and IWriteBlock() methods, so that arbitrary writing of raster blocks is possible, enabling the direct use of MBTiles as the output dataset of utilities such as gdalwarp. On creation, raster blocks can be written only if the geotransformation matrix has been set with SetGeoTransform(). This is needed to determine the zoom level of the full resolution dataset based on the pixel resolution, dataset and tile dimensions. Technical/implementation note: in the general case, GDAL blocks do not exactly match a single MBTiles tile; in that case, each GDAL block will overlap four MBTiles tiles. This is easily handled on the read side, but on the creation/update side, such a configuration could cause numerous decompressions/recompressions of tiles, which might cause unnecessary quality loss when using lossy compression (JPEG). To avoid that, the driver will create a temporary database next to the main MBTiles file to store partial MBTiles tiles in a lossless (and uncompressed) way. Once a tile has received data for its four quadrants and for all the bands (or the dataset is closed or explicitly flushed with FlushCache()), those uncompressed tiles are transferred to the MBTiles file with the appropriate compression.
All of this is transparent to the user of the GDAL API/utilities.

### Tile formats¶

MBTiles can store tiles in PNG or JPEG. Support for those tile formats depends on whether the underlying drivers are available in GDAL. By default, GDAL will use PNG tiles. It is possible to select the tile format by setting the creation/open option TILE_FORMAT to one of PNG, PNG8 or JPEG. When using JPEG, the alpha channel will not be stored. PNG8 can be selected to use 8-bit PNG with a color table of up to 256 colors. On creation, an optimized color table is computed for each tile. The DITHER option can be set to YES to use the Floyd-Steinberg dithering algorithm, which spreads the quantization error onto neighbouring pixels for better rendering (note however that when zooming in, this can cause undesirable visual artifacts). Setting it to YES will generally cause less effective compression. Note that at this time, such an 8-bit PNG formulation is only used for fully opaque tiles, as the median-cut algorithm currently implemented to compute the optimal color table does not support an alpha channel (even if the PNG8 format would potentially allow a color table with transparency). So when selecting PNG8, non fully opaque tiles will be stored as 32-bit PNG.

## Vector creation issues¶

Tiles are generated with the WebMercator (EPSG:3857) projection. Several layers can be written. It is possible to decide at which zoom level ranges a given layer is written.

## Creation options¶

The following creation options are available:
• Raster and vector:
• NAME=string. Tileset name, used to set the ‘name’ metadata item. If not specified, the basename of the filename will be used.
• DESCRIPTION=string. A description of the layer, used to set the ‘description’ metadata item. If not specified, the basename of the filename will be used.
• TYPE=overlay/baselayer. The layer type, used to set the ‘type’ metadata item. Defaults to ‘overlay’.
• Raster only:
• VERSION=string.
The version of the tileset, as a plain number, used to set the ‘version’ metadata item. Defaults to ‘1.1’.
• BLOCKSIZE=integer. (GDAL >= 2.3) Block/tile size in width and height in pixels. Defaults to 256. Maximum supported is 4096.
• TILE_FORMAT=PNG/PNG8/JPEG: Format used to store tiles. See the Tile formats section. Defaults to PNG.
• QUALITY=1-100: Quality setting for JPEG compression. Defaults to 75.
• ZLEVEL=1-9: DEFLATE compression level for PNG tiles. Defaults to 6.
• DITHER=YES/NO: Whether to use Floyd-Steinberg dithering (for TILE_FORMAT=PNG8). Defaults to NO.
• ZOOM_LEVEL_STRATEGY=AUTO/LOWER/UPPER. Strategy to determine the zoom level. LOWER will select the zoom level immediately below the theoretical computed non-integral zoom level, leading to subsampling. On the contrary, UPPER will select the zoom level immediately above, leading to oversampling. Defaults to AUTO, which selects the closest zoom level.
• RESAMPLING=NEAREST/BILINEAR/CUBIC/CUBICSPLINE/LANCZOS/MODE/AVERAGE. Resampling algorithm. Defaults to BILINEAR.
• WRITE_BOUNDS=YES/NO: Whether to write the ‘bounds’ metadata item. Defaults to YES.
• Vector only (GDAL >= 2.3):
• MINZOOM=integer: Minimum zoom level at which tiles are generated. Defaults to 0.
• MAXZOOM=integer: Maximum zoom level at which tiles are generated. Defaults to 5. Maximum supported value is 22.
• CONF=string: Layer configuration as a JSON-serialized string.
• SIMPLIFICATION=float: Simplification factor for linear or polygonal geometries. The unit is the integer unit of tiles after quantification of geometry coordinates to tile coordinates. Applies to all zoom levels, unless SIMPLIFICATION_MAX_ZOOM is also defined.
• SIMPLIFICATION_MAX_ZOOM=float: Simplification factor for linear or polygonal geometries that applies only to the maximum zoom level.
• EXTENT=positive_integer. Number of units in a tile. The greater, the more accurate geometry coordinates (at the expense of tile byte size). Defaults to 4096.
• BUFFER=positive_integer.
Number of units for geometry buffering. This value corresponds to a buffer around each side of a tile into which geometries are fetched and clipped. This is used for proper rendering of geometries that spread over tile boundaries by some rendering clients. Defaults to 80 if EXTENT=4096.
• COMPRESS=YES/NO. Whether to compress tiles with the Deflate/GZip algorithm. Defaults to YES. Should be left to YES for FORMAT=MBTILES.
• TEMPORARY_DB=string. Filename with path for the temporary database used for tile generation. By default, this will be a file in the same directory as the output file/directory.
• MAX_SIZE=integer. Maximum size of a tile in bytes (after compression). Defaults to 500 000. If a tile is greater than this threshold, features will be written with reduced precision, or discarded.
• MAX_FEATURES=integer. Maximum number of features per tile. Defaults to 200 000.
• BOUNDS=min_long,min_lat,max_long,max_lat. Overrides the default value for the bounds metadata item, which is computed from the extent of the features written.
• CENTER=long,lat,zoom_level. Overrides the default value for the center metadata item, which is the center of BOUNDS at minimum zoom level.

## Layer configuration (vector)¶

The above mentioned CONF dataset creation option can be set to a string whose value is a JSON-serialized document such as the one below:

{
  "boundaries_lod0": {
    "target_name": "boundaries",
    "description": "Country boundaries",
    "minzoom": 0,
    "maxzoom": 2
  },
  "boundaries_lod1": {
    "target_name": "boundaries",
    "minzoom": 3,
    "maxzoom": 5
  }
}

boundaries_lod0 and boundaries_lod1 are the names of the OGR layers that are created in the target MVT dataset. They are mapped to the MVT target layer boundaries. It is also possible to get the same behaviour with the layer creation options below, although that is not convenient in the ogr2ogr use case.

## Layer creation options (vector)¶

• MINZOOM=integer: Minimum zoom level at which tiles are generated. Defaults to the dataset creation option MINZOOM value.
• MAXZOOM=integer: Maximum zoom level at which tiles are generated. Defaults to the dataset creation option MAXZOOM value. Maximum supported value is 22.
• NAME=string: Target layer name. Defaults to the layer name, but can be overridden so that several OGR layers map to a single target MVT layer. The typical use case is to have different OGR layers for mutually exclusive zoom level ranges.
• DESCRIPTION=string: A description of the layer.

## Overviews (raster)¶

gdaladdo / BuildOverviews() can be used to compute overviews. Only power-of-two overview factors (2, 4, 8, 16, …) are supported. If more overview levels are specified than available, the extra ones are silently ignored. Overviews can also be cleared with the -clean option of gdaladdo (or BuildOverviews() with nOverviews=0).

## Vector tiles¶

Starting with GDAL 2.3, the MBTiles driver can read MBTiles files containing vector tiles conforming to the Mapbox Vector Tile format (format=pbf). The driver requires the ‘metadata’ table to contain a name=’json’ entry that has a ‘vector_layers’ array describing layers and their schema. See metadata.json. Note: the driver will make no effort to stitch together geometries for features that overlap several tiles.
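On the raster side, the ZOOM_LEVEL_STRATEGY creation option described above picks an integral zoom level from the dataset's pixel resolution. The idea can be sketched for 256-pixel WebMercator tiles; the function name and structure here are illustrative, not GDAL API:

```python
import math

# Ground resolution (m/px) of a 256 px WebMercator tile at zoom 0:
# 2 * pi * 6378137 / 256
BASE_RESOLUTION = 156543.03392804097

def zoom_for_resolution(res, strategy="AUTO"):
    """Illustrative zoom-level pick for a dataset with pixel size `res` (m/px)."""
    exact = math.log2(BASE_RESOLUTION / res)  # theoretical non-integral zoom
    if strategy == "LOWER":
        return math.floor(exact)   # coarser tiles: subsampling
    if strategy == "UPPER":
        return math.ceil(exact)    # finer tiles: oversampling
    return round(exact)            # AUTO: closest zoom level
```

With the ~152.874 m/px pixel size of the gdalinfo example later on this page, this yields zoom level 10.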
## Examples:¶ • Accessing a remote MBTiles raster : $gdalinfo /vsicurl/http://a.tiles.mapbox.com/v3/kkaefer.iceland.mbtiles Output: Driver: MBTiles/MBTiles Files: /vsicurl/http://a.tiles.mapbox.com/v3/kkaefer.iceland.mbtiles Size is 16384, 16384 Coordinate System is: PROJCS["WGS 84 / Pseudo-Mercator", GEOGCS["WGS 84", DATUM["WGS_1984", SPHEROID["WGS 84",6378137,298.257223563, AUTHORITY["EPSG","7030"]], AUTHORITY["EPSG","6326"]], PRIMEM["Greenwich",0, AUTHORITY["EPSG","8901"]], UNIT["degree",0.0174532925199433, AUTHORITY["EPSG","9122"]], AUTHORITY["EPSG","4326"]], PROJECTION["Mercator_1SP"], PARAMETER["central_meridian",0], PARAMETER["scale_factor",1], PARAMETER["false_easting",0], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]], AXIS["X",EAST], AXIS["Y",NORTH], EXTENSION["PROJ4","+proj=merc +a=6378137 +b=6378137 +lat_ts=0.0 +lon_0=0.0 +x_0=0.0 +y_0=0 +k=1.0 +units=m +nadgrids=@null +wktext +no_defs"], AUTHORITY["EPSG","3857"]] Origin = (-3757031.250000000000000,11271093.750000000000000) Pixel Size = (152.873992919921875,-152.873992919921875) Image Structure Metadata: INTERLEAVE=PIXEL Corner Coordinates: Upper Left (-3757031.250,11271093.750) ( 33d44'59.95"W, 70d36'45.36"N) Lower Left (-3757031.250, 8766406.250) ( 33d44'59.95"W, 61d36'22.97"N) Upper Right (-1252343.750,11271093.750) ( 11d14'59.98"W, 70d36'45.36"N) Lower Right (-1252343.750, 8766406.250) ( 11d14'59.98"W, 61d36'22.97"N) Center (-2504687.500,10018750.000) ( 22d29'59.97"W, 66d30'47.68"N) Band 1 Block=256x256 Type=Byte, ColorInterp=Red Overviews: 8192x8192, 4096x4096, 2048x2048, 1024x1024, 512x512 Mask Flags: PER_DATASET ALPHA Overviews of mask band: 8192x8192, 4096x4096, 2048x2048, 1024x1024, 512x512 Band 2 Block=256x256 Type=Byte, ColorInterp=Green Overviews: 8192x8192, 4096x4096, 2048x2048, 1024x1024, 512x512 Mask Flags: PER_DATASET ALPHA Overviews of mask band: 8192x8192, 4096x4096, 2048x2048, 1024x1024, 512x512 Band 3 Block=256x256 Type=Byte, ColorInterp=Blue Overviews: 
8192x8192, 4096x4096, 2048x2048, 1024x1024, 512x512 Mask Flags: PER_DATASET ALPHA Overviews of mask band: 8192x8192, 4096x4096, 2048x2048, 1024x1024, 512x512 Band 4 Block=256x256 Type=Byte, ColorInterp=Alpha Overviews: 8192x8192, 4096x4096, 2048x2048, 1024x1024, 512x512
• Reading pixel attributes encoded according to the UTFGrid specification:

$ gdallocationinfo /vsicurl/http://a.tiles.mapbox.com/v3/mapbox.geography-class.mbtiles -wgs84 2 49 -b 1 -xml

Output:

<Report pixel="33132" line="22506">
  <BandReport band="1">
    <LocationInfo>
      <Key>74</Key>
    </LocationInfo>
    <Value>238</Value>
  </BandReport>
</Report>

• Converting a dataset to MBTiles and adding overviews:

$ gdal_translate my_dataset.tif my_dataset.mbtiles -of MBTILES
$ gdaladdo -r average my_dataset.mbtiles 2 4 8 16

• Opening a vector MBTiles:

$ ogrinfo /home/even/gdal/data/mvt/out.mbtiles
INFO: Open of '/home/even/gdal/data/mvt/out.mbtiles' using driver 'MBTiles' successful.
Metadata:
ZOOM_LEVEL=5
name=out.mbtiles
description=out.mbtiles
version=2
minzoom=0
maxzoom=5
center=16.875000,44.951199,5
bounds=-180.000000,-85.051129,180.000000,83.634101
type=overlay
format=pbf
1: ne_10m_admin_1_states_provinces_shpgeojson (Multi Polygon)

• Converting a GeoPackage to a vector tile MBTiles:

$ ogr2ogr -f MBTILES target.mbtiles source.gpkg -dsco MAXZOOM=10
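Because an MBTiles file is just a SQLite database, the ‘metadata’ table (the source of the name/format/bounds items shown in the ogrinfo output above) can be inspected with nothing but Python's standard library. This sketch builds a minimal in-memory metadata table rather than opening a real file; the inserted values are copied from the example above:

```python
import sqlite3

# A real MBTiles file would be opened with sqlite3.connect("file.mbtiles");
# an in-memory database stands in for it here.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE metadata (name TEXT, value TEXT)")
con.executemany("INSERT INTO metadata VALUES (?, ?)", [
    ("name", "out.mbtiles"),
    ("format", "pbf"),
    ("minzoom", "0"),
    ("maxzoom", "5"),
    ("bounds", "-180.000000,-85.051129,180.000000,83.634101"),
])

meta = dict(con.execute("SELECT name, value FROM metadata"))
# The driver uses 'bounds' (when present) to clip to the stated extent.
minx, miny, maxx, maxy = (float(v) for v in meta["bounds"].split(","))
```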
# Reconstructing position from depth questions

## Recommended Posts

AgentSnoop    110

Hey everyone. I know this topic has been posted a few times, and I've read them for the most part, and I've seen other webpages regarding the topic (http://mynameismjp.wordpress.com/2009/05/05/reconstructing-position-from-depth-continued/); however, I'm having some trouble with it and I want to know if I'm making progress. I am trying to set up deferred lighting with shadow mapping now, so I want to reconstruct either world position or view position from my depth buffer. I'm trying to use the frustum corners method, but I'm not sure what my result should look like (full-screen quad) when I output the view-space position as colors, or my view direction as colors. Basically, I'm seeing my terrain (what I'm rendering) with sectors of color (red, yellow, green, black, white, magenta, cyan, green), and the colors more or less move with the camera. Currently I have view direction set up so that when you output it as colors you get:
________
| 1 | 2 |
|__|___|
| 3 | 4 |
|__|___|
where 1 is green, 2 is yellow, 3 is black, and 4 is red. Once I get this to work correctly, I can move on to shadows and whatnot, but everything else seems pretty dependent on this. So, what should I expect when outputting view position, world position, view direction, etc. as colors? If you need more information or screenshots, just let me know. Thanks for any help!

##### Share on other sites

rubicondev    296

Sounds like you need to multiply your colours by 0.5 and add 0.5 to get all the colour you need. -1 to +1 is going to get clamped to 0 - 1 whatever you do. I have a complete bit of source for setting this up in my SSAO tutorial over at www.rubicondev.com if you want to check that out.

##### Share on other sites

AgentSnoop    110

Alright, that makes sense. I guess I'm not really sure what I should be looking at when I output the view space position that I reconstruct.
This is what I get: This is with shifting the colors to positive ranges. So, those four colors are screen aligned, so they'll move around on the terrain. Is this correct?

EDIT: Alright, so the picture I posted is slightly wrong, because apparently I was sending the near-clip distance as far-clip, so I wasn't getting a proper view space depth. So I now have reconstructed view space positions, but world space positions still don't exactly seem correct. [Edited by - AgentSnoop on November 10, 2009 6:43:45 PM]

##### Share on other sites

AgentSnoop    110

Ok, so this is what I'm getting for world space position: This is how it's supposed to be (I assume... I'm outputting this directly). And this is what I'm getting when I try to reconstruct it: The reconstruction is close, but the sectors move with the camera angle (and position maybe), and if I'm looking straight across, or more upwards (more prevalent when the camera is higher up), the blue, black, etc. start to come out as seen above. I'm using the frustum corners in world space. Then,

float3 frustumRayWS = IN.frustum;
float3 wPos = depth * frustumRayWS + eyepos;

I think reconstructing view space works, but then when I try to transform the position to the light view space, etc., I seem to get weird results. Even the ways that are more mathematically intensive aren't working correctly, like:

float4 hPos = float4(IN.hPos.xy, depth * IN.hPos.w, IN.hPos.w);
float4 wPos4d = mul(hPos, matViewProjInv);
float3 wPos3d = wPos4d.xyz / wPos4d.w;

EDIT: Also, apparently I can take wPos and multiply it by the view matrix to get what I believe I should get for view space position, so I'm not exactly sure if what I'm getting for world space position is correct or not. [Edited by - AgentSnoop on November 11, 2009 1:19:19 AM]

##### Share on other sites

AgentSnoop    110

Alright, I figured out the problem. So in case anyone else happens to have this problem (doubtful most people will), here's the solution.
So, I was constructing view/world space correctly; however, when making my frustum corners, I copied how I saw it elsewhere:

float farH = 2 * tanf(fov / 2.0f) * farZ;
float farW = farH * aspect;
float nearH = 2 * tanf(fov / 2.0f) * nearZ;
float nearW = nearH * aspect;
float nearX = nearW / 2.0f;
float nearY = nearH / 2.0f;
float farX = farW / 2.0f;
float farY = farH / 2.0f;

However, I was not setting up my perspective matrix like this; I was messing around with being able to set vertical and horizontal FOVs, so I needed to set my farW differently... essentially the same as farH, but with the FOV I used for the horizontal. This pretty much fixed everything up. Now I'm just trying to get the shadowing to work. I'm having a weird banding issue where it seems values are going out of bounds (negative, then extreme positive real fast) with my rotating directional light. I'm going to have to investigate it a little bit.

##### Share on other sites

MJP    19755

Quote: Original post by AgentSnoop
Now I'm just trying to get the shadowing to work. I'm having a weird banding issue where it seems values are going out of bounds (negative, then extreme positive real fast) with my rotating directional light. I'm going to have to investigate it a little bit.

Are you using DirectX? Debugging shaders in PIX works wonders for this sort of thing.

##### Share on other sites

AgentSnoop    110

Quote: Original post by MJP
Are you using DirectX? Debugging shaders in PIX works wonders for this sort of thing.

Hey, thanks for the suggestion. I played a little bit with it last night and it seems like a valuable tool.
Anyway, I had a question for you. I'm looking at what you wrote at http://mynameismjp.wordpress.com/2009/05/05/reconstructing-position-from-depth-continued/ :

void VSBoundingVolume( in float3 in_vPositionOS : POSITION,
                       out float4 out_vPositionCS : POSITION,
                       out float3 out_vPositionVS : TEXCOORD0 )
{
    out_vPositionCS = mul(in_vPositionOS, g_matWorldViewProj);

    // Pass along the view-space vertex position to the pixel shader
    out_vPositionVS = mul(in_vPositionOS, g_matWorldView);
}

float3 VSPositionFromDepth(float2 vTexCoord, float3 vPositionVS)
{
    // Calculate the frustum ray using the view-space position.
    // g_fFarClip is the distance to the camera's far clipping plane.
    // Negating the Z component is only necessary for right-handed coordinates.
    float3 vFrustumRayVS = vPositionVS.xyz * (g_fFarClip / -vPositionVS.z);
    return tex2D(DepthSampler, vTexCoord).x * vFrustumRayVS;
}

Does this only work for reconstructing view space, or can it be used for reconstructing world space as well? Also, thanks for all the indirect help with writing the articles.

##### Share on other sites

The output you're getting is similar to what I got. I am not reconstructing depth yet, but the output should not look different. You can see a bunch of videos of my stuff here. I am using OpenGL and my coordinate system is flipped compared to yours (I think), so your output looks normal.

##### Share on other sites

MJP    19755

Yeah, you can make that work for world-space. You can either simply take the view-space position that you get and transform it by the inverse of your view matrix, or you rotate (not translate) vFrustumRayVS by the inverse of the view matrix so that it's in world space.
Something like this should work (not tested):

float3 vFrustumRayVS = vPositionVS.xyz * (g_fFarClip / -vPositionVS.z);
float3 vFrustumRayWS = mul(vFrustumRayVS, (float3x3)matInvView);
return camPosWS + tex2D(DepthSampler, vTexCoord).x * vFrustumRayWS;

matInvView would be your inverse view matrix (the world matrix of your camera), and camPosWS would be the world-space position of your camera.

##### Share on other sites

AgentSnoop    110

Quote: Original post by MJP
Yeah, you can make that work for world-space. You can either simply take the view-space position that you get and transform it by the inverse of your view matrix, or you rotate (not translate) vFrustumRayVS by the inverse of the view matrix so that it's in world space. Something like this should work (not tested):

float3 vFrustumRayVS = vPositionVS.xyz * (g_fFarClip / -vPositionVS.z);
float3 vFrustumRayWS = mul(vFrustumRayVS, (float3x3)matInvView);
return camPosWS + tex2D(DepthSampler, vTexCoord).x * vFrustumRayWS;

matInvView would be your inverse view matrix (the world matrix of your camera), and camPosWS would be the world-space position of your camera.

Ah, that makes sense. Thank you very much!
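AgentSnoop's fix earlier in the thread (computing farW from a separate horizontal FOV rather than as farH * aspect) can be sanity-checked numerically. A small sketch, with illustrative function names: the two formulas agree exactly when tan(fovH/2) = aspect * tan(fovV/2), i.e. when the horizontal FOV is actually derived from the vertical one.

```python
import math

def far_plane_dims_from_aspect(fov_v, aspect, far_z):
    """Far-plane size from a vertical FOV plus an aspect ratio (the copied code)."""
    far_h = 2.0 * math.tan(fov_v / 2.0) * far_z
    return far_h * aspect, far_h          # (width, height)

def far_plane_dims_from_two_fovs(fov_h, fov_v, far_z):
    """Far-plane size when horizontal and vertical FOVs are set independently."""
    return (2.0 * math.tan(fov_h / 2.0) * far_z,
            2.0 * math.tan(fov_v / 2.0) * far_z)

# A horizontal FOV consistent with the aspect ratio makes the two methods match;
# any other horizontal FOV gives a different farW, which is what broke the rays.
fov_v, aspect, far_z = math.radians(60.0), 16.0 / 9.0, 1000.0
fov_h = 2.0 * math.atan(aspect * math.tan(fov_v / 2.0))
```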
# Thread: Uniform convergence of a sequence of functions

1. ## Uniform convergence of a sequence of functions

Hi, I was hoping someone might be able to help me out. Can the following sequence of functions be differentiated? As far as I understood, I have to check continuity and uniform convergence. I think it fails to satisfy the latter. How can I explain it?

2. Originally Posted by BMWM5 Hi, I was hoping someone might be able to help me out. Can the following sequence of functions be differentiated? As far as I understood, I have to check continuity and uniform convergence. I think it fails to satisfy the latter. How can I explain it? Actually, you need uniform convergence of the term-by-term derivative $\sum_{n=1}^\infty\frac{1}{x^2+n^2}$. This one is easy to prove (cf. normal convergence).

3. I think I got it, thanks. BTW, isn't the derivative $\frac{1}{\frac{{x}^{2}}{{n}^{2}}+{n}^{2}}$ ? (It doesn't make a difference anyway.)

4. Originally Posted by BMWM5 I think I got it, thanks. BTW, isn't the derivative $\frac{1}{\frac{{x}^{2}}{{n}^{2}}+{n}^{2}}$ ? (It doesn't make a difference anyway.) Yes, sorry. And indeed, the proof proceeds the same way.
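The "normal convergence" Laurent alludes to in post 2 is just the Weierstrass M-test applied to the derivative series; written out (a standard argument, supplied here for completeness): since $\left|\frac{1}{x^2+n^2}\right| \le \frac{1}{n^2}$ for all $x\in\mathbb{R}$, and $\sum_{n=1}^\infty \frac{1}{n^2} < \infty$, the series $\sum_{n=1}^\infty\frac{1}{x^2+n^2}$ converges uniformly on $\mathbb{R}$ by the M-test, which justifies differentiating the original series term by term.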
Voltage Multipliers (Doublers, Triplers, Quadruplers, and More)

Chapter 3 - Diodes and Rectifiers

A voltage multiplier is a specialized rectifier circuit producing an output which is theoretically an integer times the AC peak input, for example, 2, 3, or 4 times the AC peak input. Thus, it is possible to get 200 VDC from a 100 Vpeak AC source using a doubler, or 400 VDC using a quadrupler. Any load in a practical circuit will lower these voltages. We’ll first go over several types of voltage multipliers—voltage doubler (half- and full-wave), voltage tripler, and voltage quadrupler—then make some general notes about voltage multiplier safety and finish up with the Cockcroft-Walton multiplier.

Voltage Doubler

A voltage doubler application is a DC power supply capable of using either a 240 VAC or 120 VAC source. The supply uses a switch-selected full-wave bridge to produce about 300 VDC from a 240 VAC source. The 120 V position of the switch rewires the bridge as a doubler, producing about 300 VDC from the 120 VAC. In both cases, 300 VDC is produced. This is the input to a switching regulator producing lower voltages for powering, say, a personal computer.

Half-Wave Voltage Doubler

The half-wave voltage doubler in Figure below (a) is composed of two circuits: a clamper at (b) and a peak detector (half-wave rectifier) in Figure prior, which is shown in modified form in Figure below (c). C2 has been added to a peak detector (half-wave rectifier). Half-wave voltage doubler (a) is composed of (b) a clamper and (c) a half-wave rectifier.

Half-wave Voltage Doubler Operation and Circuit Analysis

Referring to Figure (b) above, C2 charges to 5 V (4.3 V considering the diode drop) on the negative half cycle of the AC input. The right end is grounded by the conducting D2. The left end is charged at the negative peak of the AC input. This is the operation of the clamper. During the positive half cycle, the half-wave rectifier comes into play at Figure (c) above.
Diode D2 is out of the circuit since it is reverse biased. C2 is now in series with the voltage source. Note the polarities of the generator and C2, series aiding. Thus, rectifier D1 sees a total of 10 V at the peak of the sinewave, 5 V from the generator and 5 V from C2. D1 conducts waveform v(1) (figure below), charging C1 to the peak of the sine wave riding on 5 V DC (figure below v(2)). Waveform v(2) is the output of the doubler, which stabilizes at 10 V (8.6 V with diode drops) after a few cycles of sine wave input.

*SPICE 03255.eps
C1 2 0 1000p
D1 1 2 diode
C2 4 1 1000p
D2 0 1 diode
V1 4 0 SIN(0 5 1k)
.model diode d
.tran 0.01m 5m
.end

Voltage doubler: v(4) input. v(1) clamper stage. v(2) half-wave rectifier stage, which is the doubler output.

Full-Wave Voltage Doubler

The full-wave voltage doubler is composed of a pair of series-stacked half-wave rectifiers. (Figure below) The corresponding netlist is in Figure below.

Full-Wave Voltage Doubler Operation and Analysis

The bottom rectifier charges C1 on the negative half cycle of the input. The top rectifier charges C2 on the positive half cycle. Each capacitor takes on a charge of 5 V (4.3 V considering the diode drop). The output at node 5 is the series total of C1 + C2, or 10 V (8.6 V with diode drops).

*SPICE 03273.eps
*R1 3 0 100k
*R2 5 3 100k
D1 0 2 diode
D2 2 5 diode
C1 3 0 1000p
C2 5 3 1000p
V1 2 3 SIN(0 5 1k)
.model diode d
.tran 0.01m 5m
.end

The full-wave voltage doubler consists of two half-wave rectifiers operating on alternating polarities. Note that the output v(5) (Figure below) reaches full value within one cycle of the input v(2) excursion. Full-wave voltage doubler: v(2) input, v(3) voltage at midpoint, v(5) voltage at output.

Deriving Full-wave Doublers from Half-wave Rectifiers

Figure below illustrates the derivation of the full-wave doubler from a pair of opposite-polarity half-wave rectifiers (a). The negative rectifier of the pair is redrawn for clarity (b). Both are combined at (c), sharing the same ground.
At (d) the negative rectifier is re-wired to share one voltage source with the positive rectifier. This yields a ±5 V (4.3 V with diode drop) power supply; though, 10 V is measurable between the two outputs. The ground reference point is moved so that +10 V is available with respect to ground. Full-wave doubler: (a) pair of doublers, (b) redrawn, (c) sharing the ground, (d) sharing the same voltage source, (e) moving the ground point.

Voltage Tripler

A voltage tripler (Figure below) is built from a combination of a doubler and a half-wave rectifier (C3, D3). The half-wave rectifier produces 5 V (4.3 V) at node 3. The doubler provides another 10 V (8.6 V) between nodes 2 and 3, for a total of 15 V (12.9 V) at the output node 2 with respect to ground. The netlist is in Figure below. Voltage tripler composed of a doubler stacked atop a single-stage rectifier. Note that V(3) in Figure below rises to 5 V (4.3 V) on the first negative half cycle. Input v(4) is shifted upward by 5 V (4.3 V) due to 5 V from the half-wave rectifier. And 5 V more at v(1) due to the clamper (C2, D2). D1 charges C1 (waveform v(2)) to the peak value of v(1).

*SPICE 03283.eps
C3 3 0 1000p
D3 0 4 diode
C1 2 3 1000p
D1 1 2 diode
C2 4 1 1000p
D2 3 1 diode
V1 4 3 SIN(0 5 1k)
.model diode d
.tran 0.01m 5m
.end

Voltage tripler: v(3) half-wave rectifier, v(4) input + 5 V, v(1) clamper, v(2) final output.

Voltage Quadrupler

A voltage quadrupler is a stacked combination of two doublers shown in Figure below. Each doubler provides 10 V (8.6 V) for a series total at node 2 with respect to ground of 20 V (17.2 V). The netlist is in Figure below. Voltage quadrupler, composed of two doublers stacked in series, with output at node 2. The waveforms of the quadrupler are shown in Figure below. Two DC outputs are available: v(3), the doubler output, and v(2), the quadrupler output.
Some of the intermediate voltages at the clampers illustrate that the input sinewave (not shown), which swings by 5 V, is successively clamped at higher levels: at v(5), v(4) and v(1). Strictly, v(4) is not a clamper output; it is simply the AC voltage source in series with v(3), the doubler output. Nonetheless, v(1) is a clamped version of v(4).

*SPICE 03441.eps
*SPICE 03286.eps
C22 4 5 1000p
C11 3 0 1000p
D11 0 5 diode
D22 5 3 diode
C1 2 3 1000p
D1 1 2 diode
C2 4 1 1000p
D2 3 1 diode
V1 4 3 SIN(0 5 1k)
.model diode d
.tran 0.01m 5m
.end

Voltage quadrupler: DC voltage available at v(3) and v(2). Intermediate waveforms: clampers v(5), v(4), v(1).

Notes on Voltage Multipliers and Line Driven Power Supplies

Some notes on voltage multipliers are in order at this point. The circuit parameters used in the examples (V = 5 V, 1 kHz, C = 1000 pF) do not provide much current, microamps. Furthermore, load resistors have been omitted. Loading reduces the voltages from those shown. If the circuits are to be driven by a kHz source at low voltage, as in the examples, the capacitors are usually 0.1 to 1.0 µF so that milliamps of current are available at the output. If the multipliers are driven from 50/60 Hz, the capacitors are a few hundred to a few thousand microfarads to provide hundreds of milliamps of output current. If driven from line voltage, pay attention to the polarity and voltage ratings of the capacitors. Finally, any direct line-driven power supply (no transformer) is dangerous to the experimenter and to line-operated test equipment. Commercial direct-driven supplies are safe because the hazardous circuitry is in an enclosure to protect the user. When breadboarding these circuits with electrolytic capacitors of any voltage, the capacitors will explode if the polarity is reversed. Such circuits should be powered up behind a safety shield.
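The no-load output levels quoted throughout this chapter follow from a simple rule: each half-wave doubler stage contributes twice the peak voltage minus two diode drops. A quick numeric check (0.7 V per silicon diode drop is assumed, matching the chapter's figures):

```python
V_PEAK = 5.0    # amplitude of the SIN(0 5 1k) source in the SPICE examples
V_DIODE = 0.7   # assumed silicon diode forward drop

# One doubler stage: the clamper loses one drop, the peak detector another.
doubler = 2 * V_PEAK - 2 * V_DIODE            # 8.6 V

# Tripler = doubler stacked on a single half-wave rectifier stage.
tripler = doubler + (V_PEAK - V_DIODE)        # 12.9 V

# Quadrupler = two doublers in series.
quadrupler = 2 * doubler                      # 17.2 V

# Ideal x8 Cockcroft-Walton estimate: four cascaded doubler stages
# (the simulation shows less, since later stages add less than earlier ones).
cockcroft_walton_x8 = 4 * doubler             # 34.4 V
```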
Cockcroft-Walton Multiplier

A voltage multiplier of cascaded half-wave doublers of arbitrary length is known as a Cockcroft-Walton multiplier, as shown in Figure below. This multiplier is used when a high voltage at low current is required. The advantage over a conventional supply is that an expensive high voltage transformer is not required, at least not as high as the output. Cockcroft-Walton x8 voltage multiplier; output at v(8). The pair of diodes and capacitors to the left of nodes 1 and 2 in Figure above constitute a half-wave doubler. Rotating the diodes by 45° counterclockwise, and the bottom capacitor by 90°, makes it look like Figure prior (a). Four of the doubler sections are cascaded to the right for a theoretical x8 multiplication factor. Node 1 has a clamper waveform (not shown), a sinewave shifted up by 1x (5 V). The other odd-numbered nodes are sinewaves clamped to successively higher voltages. Node 2, the output of the first doubler, is a 2x DC voltage v(2) in Figure below. Successive even-numbered nodes charge to successively higher voltages: v(4), v(6), v(8).

D1 7 8 diode
C1 8 6 1000p
D2 6 7 diode
C2 5 7 1000p
D3 5 6 diode
C3 4 6 1000p
D4 4 5 diode
C4 3 5 1000p
D5 3 4 diode
C5 2 4 1000p
D6 2 3 diode
D7 1 2 diode
C6 1 3 1000p
C7 2 0 1000p
C8 99 1 1000p
D8 0 1 diode
V1 99 0 SIN(0 5 1k)
.model diode d
.tran 0.01m 50m
.end

Cockcroft-Walton (x8) waveforms. Output is v(8). Without diode drops, each doubler yields 2Vin, or 10 V; considering two diode drops, (10-1.4) = 8.6 V is realistic. For a total of 4 doublers, one expects 4·8.6 = 34.4 V out of 40 V. Consulting Figure above, v(2) is about right; however, v(8) is <30 V instead of the anticipated 34.4 V. The bane of the Cockcroft-Walton multiplier is that each additional stage adds less than the previous stage. Thus, a practical limit to the number of stages exists. It is possible to overcome this limitation with a modification to the basic circuit.
[ABR] Also note the time scale of 40 ms compared with 5 ms for previous circuits. It required 40 ms for the voltages to rise to a terminal value for this circuit. The netlist in Figure above has a “.tran 0.010m 50m” command to extend the simulation time to 50 ms, though only 40 ms is plotted. The Cockcroft-Walton multiplier serves as a more efficient high voltage source for photomultiplier tubes requiring up to 2000 V. [ABR] Moreover, the tube has numerous dynodes, terminals requiring connection to the lower voltage “even numbered” nodes. The series string of multiplier taps replaces a heat generating resistive voltage divider of previous designs. An AC line operated Cockcroft-Walton multiplier provides high voltage to “ion generators” for neutralizing electrostatic charge and for air purifiers.

Voltage Multiplier Review:

• A voltage multiplier produces a DC multiple (2, 3, 4, etc.) of the AC peak input voltage.
• The most basic multiplier is a half-wave doubler.
• The full-wave doubler is a superior circuit as a doubler.
• A tripler is a half-wave doubler plus a conventional rectifier stage (peak detector).
• A quadrupler is a pair of half-wave doublers.
• A long string of half-wave doublers is known as a Cockcroft-Walton multiplier.

Published under the terms and conditions of the Design Science License
{}
# Alternative method for power assessment of small wind turbines

1. Oct 13, 2008

Hello, I am Daniel from La Plata, Argentina, and would like some help on a complex (for me) question. I am working with some friends on the development of an innovative wind turbine. We have built several prototypes with diameters of 0.5 m and 2 m. We have tested these prototypes on trucks with the assistance of people from the Fluid Dynamics Lab of the Department of Aeronautical Engineering, La Plata National University. In order to measure torque, a 0.3 m wheel was braked by means of a metal strip wound around the wheel, at the end of which different weights were hung. By means of a tachometer the rotor speeds were measured, and after many readings we got a clear picture of what kind of power our turbine was producing. The report from the lab gave a Cp of 0.53, which is very high for a two-meter rotor. However, we were not pleased with some aspects of the procedure and we firmly believe this Cp should have been a bit higher. Now, if the lab had the right device for measuring torque (i.e. a torque transducer) I would not be asking for help. The fact is, we don't have it nor can we buy one, as they are very expensive (something like U$6000). In the meantime I have been trying an alternative way to do this which seems to give good results but that is systematically rejected by the people who specialize in wind turbines.

Description of method: The rotor's moment of inertia is constant, so when it accelerates in a given wind it does so with an angular acceleration = torque/moment of inertia. This acceleration will depend essentially on wind speed. If tests are carried out in the open with non-turbulent winds it is possible to make accurate measurements by means of a digital camera. Basically we are interested to know how long it takes for the rotor to go from rest to final speed (to measure angular speed we take a video and then process it frame by frame; the run lasts about 40 seconds when wind speed is about 5 m/s).
Now knowing the moment of inertia of the rotor, it is a simple step to calculate the energy stored in it. If you have the number of joules stored and the number of seconds necessary to do the job, it is easy to calculate the average power produced by the rotor. By means of some maths it is possible to calculate peak power based on average power (this conversion needs a factor which depends on the beta angle, the angle between chord and rotation plane). I like this method mainly because it is non-invasive, it can be done as many times as one feels necessary, and because it costs nothing. My question is then: Do you see anything in this procedure that is not acceptable from a physical point of view? Is there anything else we might do to improve our methodology? Thanks in advance, Daniel

2. Oct 29, 2008

### danielhugo

Hi everybody! You may watch my turbine in action following this link: Believe me, you'll be surprised!

Last edited by a moderator: Sep 25, 2014

3. Oct 31, 2008

### abelanger

Wow very nice!! Keep it going :D!

4. Nov 2, 2008

### makethings

Nice vid. How fast is the wind and how much power are you producing?

5. Nov 2, 2008

### danielhugo

Hi, makethings! Wind speed was just below 5 m/s (average 4.5 m/s) and average power output was close to 300 W. The rotor diameter is 2 m. Starting torque at this wind speed is 12.5 Nm. In the video the rotor accelerates from 0 to 6.2 rps in just 35 s (the first five seconds are not in the vid, but it is easy to verify that the rotor increases its speed (omega) by 6.28 rad/s every five seconds); its moment of inertia is 10 kg·m². Thanks for your interest.

6. Nov 11, 2008

### Enthalpy

Hello Daniel and everybody! Found a clear picture of Daniel's turbine here: http://www.theinquirer.net/en/inquirer/news/2006/10/29/boffin-invents-simpler-way-to-see-inside-metal-objects [Broken] at the section "Wind Power galore".
Daniel and I already had a discussion elsewhere, but the site is "closed for maintenance", which might well last for a looooong time... http://www.physforum.com/index.php?showtopic=23494&st=0 where I proposed for this turbine a gearless generator that looked quite good to my eyes. Good opportunity to switch to a forum with a different style. The text is still available - though in "low-fidelity format" - here: http://lofi.forum.physorg.com/Small-Wind-Turbine-Power-Measurement_23494.html [Broken] just increase the text size in your browser to read the end of the discussion. I will come back with a simpler and cheaper variant. See you!

Last edited by a moderator: May 3, 2017

7. Nov 11, 2008

### makethings

That's cool. I cannot tell at what altitude in the video you were testing your turbine, but if you were even higher up in the earth's boundary layer you could get faster wind speeds. I am curious: how high a wind speed can your wind turbine handle? And what would the maximum power be that you can get from that? Do you have a cut-off wind speed and brake? Or have you tested it to failure in a wind tunnel?

8. Nov 11, 2008

### danielhugo

Hi Enthalpy! I'm very glad to hear from you after the sudden loss of contact because of the closing of the other site. Totally unexpected! The link to the Inquirer shows a nice picture of an earlier version of our turbine. It has 54 blades and solidity within the blade area is 100%. At this point we were researching the validity of the theory that says that three is the best number of blades, due to the 'fact' that as solidity increases lift goes down. With this turbine we measured a starting torque of 160 Nm at 10 m/s wind speed. Torque at peak power with the same wind went down to 110 Nm (peak power was 1600 W). This prototype was built with a very low budget (it cost U$120 in materials and three months' time of hard work to finish it).
Aerodynamically it is not as efficient as the model on the video, but it helped us learn a lot and led the way to the new version. Regarding your advice as to the generator, our team is trying to use it in their prototype design. They are doing all they can but we are still running on low budgets. Haven't sold a single turbine yet! Now we have some people from: http://www.aldeaschweitzer.com.ar [Broken] who would like to have a number of our turbines generating power in the province of Neuquén (not like Chubut but close). This forum gives you the possibility of sending private messages; as I am new to it, I don't know how to go about it. As soon as I figure out how to do it, I'll send you a couple of lines.

Last edited by a moderator: May 3, 2017

9. Nov 11, 2008

### danielhugo

Hi makethings, In this video we are 2 km southwest of the River Plate coastline, close to the city of Berisso. The wind is just a summer breeze of 16 km/h. The turbine is mounted on a truck and we were making ready to carry out some tests. Normal winds in this part of the Buenos Aires province are rarely higher than twenty km/h, which explains why we need a truck to generate relative winds up to 50-60 km/h. Out of curiosity we subjected the turbine to an 80 km/h wind and it ran fantastically! I was below the turbine and I can tell you, it's an incredible experience; and I was alone down there because the others in the team were afraid it would explode. We did not measure power at this speed because we would have blown up the generator. Power output at this kind of wind speed is over 10 kW! Max power for our generator is slightly higher than 600 W. We are right now preparing a new series of tests. I'll make sure someone shoots a few videos which I'll certainly post here. My regards

Last edited: Nov 11, 2008

10. Nov 11, 2008

### Enthalpy

I haven't computed the improved design today.
I hope to remove completely the iron core at the stator and let the copper wires alone - with a lower induction then. This would make the prototype easier, as it avoids any exotic material. Maybe tomorrow. The closure of the other forum was unexpected... But I heard "anticiper" before - the French code for such covert actions. No idea what happened to them, nor if it was related to that forum. The forum was under heavy trolling attack for several weeks.

11. Nov 25, 2008

### Enthalpy

12. Nov 26, 2008

### Q_Goest

Hi Daniel, Welcome to the board. You say this method has been “systematically rejected by the people who specialize in wind turbines.” Can you explain why? I’m not an expert in this field, but if I had to guess, I’d say there are 2 basic problems. The first is trying to determine the rate of change (i.e. acceleration) of the rotational velocity of the wind turbine. If you can determine that (e.g. using fast acting photography) and if you know the rotational inertia, then yes, you can calculate the torque. That should be quite simple really. But the second problem is a bit more difficult to define and defend. The other issue is how instantaneous torque might be affected by a change in air dynamics between a constant rotational turbine and an accelerating one. This is essentially saying that the turbine undergoing an ACCELERATING RPM may have a torque exerted on it that is DIFFERENT than the torque exerted on the same turbine which is rotating at a CONSTANT RPM. I don’t buy this second argument, but I won’t go into detail. You could counter this argument easily by adding lots of rotational inertia to your turbine, such that the rotational acceleration is small. You might, for example, run a belt off the turbine to a wheel that carries a ‘flywheel’ to increase the inertia. Regarding the use of torque meters, I find it surprising that a transducer can’t be purchased for only a few hundred dollars. Where have you looked?
If you wanted to make your own, that would be fairly simple. The idea you have of a belt over the pulley, to which you then attach ‘fish scales’, will do the trick nicely. You’ll need to compensate for the weight of the belt, as well as whatever braking blocks you may need, but in principle the idea of braking the turbine to determine torque is perfectly valid and easily accomplished. Having these two different techniques and showing they match should go a long way in proving the amount of torque, and thus the amount of power, your wind turbine is capable of producing.

13. Nov 28, 2008

### danielhugo

Hi Q_Goest: Thanks for your welcome! I really like the site. Your question about the people who systematically... We (TEUSA) are working with the help of a team who call themselves the Fluid Dynamics and Boundary Layer Lab (La Plata University, Argentina). Their argument is exactly what you pinpoint as your second basic problem, i.e. steady state versus accelerating rotational movement. Now, my own view is that the wind is never in a steady state, as normal turbulence implies a constant change of wind speed. In this constant change of wind speed, a rotor in a conventional installation goes through all sorts of states, where at least one of them must necessarily be equivalent to the accelerating rotor while it is being tested. This implies, as far as I can see, that steady state torque can be no different than accelerating torque, because if it were smaller the rotor would slow down instead of keeping up its speed. The solution you suggest for measuring rotational velocity is the one we are using and it's really fantastic. You can't beat the speed of light! And on top of this, you are (almost) not drawing energy from the rotor, so Heisenberg must be very worried about it. Finally, the torque-transducer problem. We checked with different companies, and the cost was always about the same - around U$5000. This is FOB.
We have to pay an additional cost (mostly taxes) of around U$3500. This is way too much. We are now trying to design one using two metal discs compressing four or six springs mounted on the edge; with a pen and some kind of paper roll we may get some approximate numbers. I want to thank you again, this time for your support and interest. I wonder whether you have seen our unique rotor. In case you haven't, here's the link where you can see it in action:
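The figures quoted in this thread can be sanity-checked against the spin-up method from the opening post. A rough sketch, using only the quoted values (I = 10 kg·m², rest to 6.2 rps in about 35 s, speed gain of roughly 6.28 rad/s every five seconds); treat the results as order-of-magnitude only:

```python
import math

# Sanity check of the spin-up method using figures quoted in this thread.
I = 10.0                      # rotor moment of inertia, kg*m^2
omega = 6.2 * 2 * math.pi     # final angular speed, rad/s (6.2 rev/s)
t = 35.0                      # spin-up time, s

energy = 0.5 * I * omega**2   # kinetic energy stored in the rotor, J (about 7.6 kJ)
p_avg = energy / t            # average power over the spin-up, W (about 217 W)

# Early-run torque from the quoted acceleration, tau = I * alpha:
tau = I * (6.28 / 5.0)        # about 12.6 N*m, close to the quoted 12.5 N*m

print(round(energy), round(p_avg), round(tau, 2))
```

Note that p_avg is an average over the whole run and depends only on the quoted inputs; the thread's figure of close to 300 W comes from a fuller analysis, including the peak-power conversion factor Daniel mentions.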
{}
# Paying attention to sigma-algebras

So as part of my new resolution to start reading the books on my shelves, I recently read through Probability with Martingales. I’d be lying if I said I fully understood all the material: It’s quite dense, and my ability to read mathematics has atrophied a lot (I’m now doing a reread of Rudin to refresh my memory). But there’s one very basic point that stuck out as genuinely interesting to me.

When introducing measure theory, it’s common to treat sigma-algebras as an annoying detail you have to suffer through in order to get to the good stuff. They’re that family of sets which, annoyingly, isn’t the whole power set. And we would have gotten away with it, if it weren’t for that pesky axiom of choice.

In Probability with Martingales this is not the treatment they are given. The sigma-algebras are a first class part of the theory: You’re not just interested in the largest sigma-algebra you can get, you care quite a lot about the structure of different families of sigma-algebras. In particular you are very interested in sub sigma-algebras.

Why? Well. If I may briefly read too much into the fact that elements of a sigma-algebra are called measurable sets… what are we measuring them with?

It turns out that there’s a pretty natural interpretation of sub-sigma-algebras in terms of measurable functions: If you have a sigma-algebra $$\mathcal{G}$$ on $$X$$ and a family of measurable functions $$\{f_\alpha : X \to Y_\alpha : \alpha \in A \}$$ then you can look at the smallest sigma-algebra $$\sigma(f_\alpha) \subseteq \mathcal{G}$$ for which all these functions are still measurable. These are essentially the measurable sets we can observe by asking questions only about these functions.
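On a finite set this generated sigma-algebra is easy to compute explicitly: the atoms are the preimage classes {f = y}, and the sigma-algebra consists of all unions of atoms. A small sketch of my own (not from the book):

```python
from itertools import combinations

# Finite-set illustration: the smallest sigma-algebra making f measurable
# is generated by the atoms {f == y}; it consists of all unions of the
# preimage classes of f.
def sigma_generated_by(X, f):
    atoms = {}
    for x in X:
        atoms.setdefault(f(x), set()).add(x)
    atoms = list(atoms.values())
    algebra = set()
    for r in range(len(atoms) + 1):
        for combo in combinations(atoms, r):
            algebra.add(frozenset().union(*combo))
    return algebra

X = range(6)
sigma = sigma_generated_by(X, lambda x: x % 2)
# Four sets: empty, the evens, the odds, and all of X
print(sorted(sorted(s) for s in sigma))
```

With f(x) = x mod 2 the only questions we can ask are about parity, so the only measurable sets are unions of the even and odd classes; a finer function would give a finer sigma-algebra.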
It turns out that every sub sigma-algebra can be realised this way, but the proof is disappointing: Given $$\mathcal{F} \subseteq \mathcal{G}$$ you just consider the identity function $$\iota: (X, \mathcal{G}) \to (X, \mathcal{F})$$, and $$\mathcal{F}$$ is the sigma-algebra generated by this function.

One interesting special case of this is sequential random processes. Suppose we have a set of random variables $$X_1, \ldots, X_n, \ldots$$ (not necessarily independent, identically distributed, or even taking values in the same set). Our underlying space then captures an entire infinite chain of random variables stretching into the future. But we are finite beings and can only actually look at what has happened so far. This gives us a nested sequence of sigma-algebras $$\mathcal{F}_1 \subseteq \ldots \subseteq \mathcal{F}_n \subseteq \ldots$$ where $$\mathcal{F}_n = \sigma(X_1, \ldots, X_n)$$ is the collection of things we can measure at time n.

One of the reasons this is interesting is that a lot of things we would naturally pose in terms of random variables can instead be posed in terms of sigma-algebras. This tends to very naturally erase any difference between single random variables and families of random variables. e.g. you can talk about independence of sigma-algebras ($$\mathcal{G}$$ and $$\mathcal{H}$$ are independent iff $$\mu(G \cap H) = \mu(G) \mu(H)$$ for all $$G \in \mathcal{G}, H \in \mathcal{H}$$), and two families of random variables are independent if and only if the generated sigma-algebras are independent.

A more abstract reason it’s interesting is that it’s quite nice to see the sigma-algebras play a front and center role, as opposed to being an annoyance we want to forget about. I think it makes the theory richer and more coherent to do it this way.

This entry was posted in Numbers are hard.
{}
Search by Topic

Resources tagged with Working systematically, similar to Strange Bank Account. There are 128 results.

Broad Topics > Using, Applying and Reasoning about Mathematics > Working systematically

First Connect Three Stage: 2, 3 and 4 Challenge Level: The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for? You Owe Me Five Farthings, Say the Bells of St Martin's Stage: 3 Challenge Level: Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring? 9 Weights Stage: 3 Challenge Level: You have been given nine weights, one of which is slightly heavier than the rest. Can you work out which weight is heavier in just two weighings of the balance? Consecutive Negative Numbers Stage: 3 Challenge Level: Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers? When Will You Pay Me? Say the Bells of Old Bailey Stage: 3 Challenge Level: Use the interactivity to play two of the bells in a pattern. How do you know when it is your turn to ring, and how do you know which bell to ring? First Connect Three for Two Stage: 2 and 3 Challenge Level: First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line. Tetrahedra Tester Stage: 3 Challenge Level: An irregular tetrahedron is composed of four different triangles. Can such a tetrahedron be constructed where the side lengths are 4, 5, 6, 7, 8 and 9 units of length? Games Related to Nim Stage: 1, 2, 3 and 4 This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning.
Magic Potting Sheds Stage: 3 Challenge Level: Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it? Maths Trails Stage: 2 and 3 The NRICH team are always looking for new ways to engage teachers and pupils in problem solving. Here we explain the thinking behind maths trails. Creating Cubes Stage: 2 and 3 Challenge Level: Arrange 9 red cubes, 9 blue cubes and 9 yellow cubes into a large 3 by 3 cube. No row or column of cubes must contain two cubes of the same colour. Twinkle Twinkle Stage: 2 and 3 Challenge Level: A game for 2 people. Take turns placing a counter on the star. You win when you have completed a line of 3 in your colour. Teddy Town Stage: 1, 2 and 3 Challenge Level: There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules? Making Maths: Double-sided Magic Square Stage: 2 and 3 Challenge Level: Make your own double-sided magic square. But can you complete both sides once you've made the pieces? Sticky Numbers Stage: 3 Challenge Level: Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number? Weights Stage: 3 Challenge Level: Different combinations of the weights available allow you to make different totals. Which totals can you make? Oranges and Lemons, Say the Bells of St Clement's Stage: 3 Challenge Level: Bellringers have a special way to write down the patterns they ring. Learn about these patterns and draw some of your own. Triangles to Tetrahedra Stage: 3 Challenge Level: Starting with four different triangles, imagine you have an unlimited number of each type. How many different tetrahedra can you make? Convince us you have found them all. 
More Magic Potting Sheds Stage: 3 Challenge Level: The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it? Summing Consecutive Numbers Stage: 3 Challenge Level: Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way? Pair Sums Stage: 3 Challenge Level: Five numbers added together in pairs produce: 0, 2, 4, 4, 6, 8, 9, 11, 13, 15 What are the five numbers? Tea Cups Stage: 2 and 3 Challenge Level: Place the 16 different combinations of cup/saucer in this 4 by 4 arrangement so that no row or column contains more than one cup or saucer of the same colour. Stage: 3 Challenge Level: How many different symmetrical shapes can you make by shading triangles or squares? Coins Stage: 3 Challenge Level: A man has 5 coins in his pocket. Given the clues, can you work out what the coins are? Stage: 3 Challenge Level: Rather than using the numbers 1-9, this sudoku uses the nine different letters used to make the words "Advent Calendar". Twin Corresponding Sudoku III Stage: 3 and 4 Challenge Level: Two sudokus in one. Challenge yourself to make the necessary connections. Wallpaper Sudoku Stage: 3 and 4 Challenge Level: A Sudoku that uses transformations as supporting clues. Ratio Sudoku 1 Stage: 3 and 4 Challenge Level: A Sudoku with clues as ratios. Counting on Letters Stage: 3 Challenge Level: The letters of the word ABACUS have been arranged in the shape of a triangle. How many different ways can you find to read the word ABACUS from this triangular pattern? LOGO Challenge - Following On Stage: 3, 4 and 5 Challenge Level: Remember that you want someone following behind you to see where you went. Can you work out how these patterns were created and recreate them?
Ratio Sudoku 3 Stage: 3 and 4 Challenge Level: A Sudoku with clues as ratios or fractions. Product Sudoku 2 Stage: 3 and 4 Challenge Level: Given the products of diagonally opposite cells - can you complete this Sudoku? Masterclass Ideas: Working Systematically Stage: 2 and 3 Challenge Level: A package contains a set of resources designed to develop students’ mathematical thinking. This package places a particular emphasis on “being systematic” and is designed to meet. . . . Twin Corresponding Sudoku Stage: 3, 4 and 5 Challenge Level: This sudoku requires you to have "double vision" - two Sudokus for the price of one. LOGO Challenge - Sequences and Pentagrams Stage: 3, 4 and 5 Challenge Level: Explore how this program produces the sequences it does. What are you controlling when you change the values of the variables? Fence It Stage: 3 Challenge Level: If you have only 40 metres of fencing available, what is the maximum area of land you can fence off? Pole Star Sudoku 2 Stage: 3 and 4 Challenge Level: This Sudoku is based on differences. Using the one clue number, can you find the solution? More on Mazes Stage: 2 and 3 There is a long tradition of creating mazes throughout history and across the world. This article gives details of mazes you can visit and those that you can tackle on paper. Isosceles Triangles Stage: 3 Challenge Level: Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw? Twin Corresponding Sudokus II Stage: 3 and 4 Challenge Level: Two sudokus in one. Challenge yourself to make the necessary connections. Stage: 3 and 4 Challenge Level: Four numbers on an intersection that need to be placed in the surrounding cells. That is all you need to know to solve this sudoku.
Colour Islands Sudoku Stage: 3 Challenge Level: An extra constraint means this Sudoku requires you to think in diagonals as well as horizontal and vertical lines and boxes of nine. Seasonal Twin Sudokus Stage: 3 and 4 Challenge Level: This pair of linked Sudokus matches letters with numbers and hides a seasonal greeting. Can you find it? Football Sum Stage: 3 Challenge Level: Find the values of the nine letters in the sum: FOOT + BALL = GAME LOGO Challenge - the Logic of LOGO Stage: 3 and 4 Challenge Level: Just four procedures were used to produce a design. How was it done? Can you be systematic and elegant so that someone can follow your logic? Intersection Sudoku 1 Stage: 3 and 4 Challenge Level: A Sudoku with a twist. Ratio Sudoku 2 Stage: 3 and 4 Challenge Level: A Sudoku with clues as ratios. LOGO Challenge - Triangles-squares-stars Stage: 3 and 4 Challenge Level: Can you recreate these designs? What are the basic units? What movement is required between each unit? Some elegant use of procedures will help - variables not essential. Intersection Sums Sudoku Stage: 2, 3 and 4 Challenge Level: A Sudoku with clues given as sums of entries.
{}
# Voltage drop from current, single-phase and three-phase current, cable length, cross-section or diameter

With this calculator you can calculate voltage drops for single-phase and three-phase AC circuits from the rated current. You can also calculate wire length, wire diameter, wire cross-sectional area, voltage or current.

## Voltage drop calculator from current for single-phase and three-phase circuits - information

Voltage drop: the reduction of voltage in a circuit, that is, the difference in electric potential between two points of the circuit through which an electric current flows. In the energy sector the term can also mean:
- reduction of the electric voltage between the beginning and the end of the supply line,
- voltage reduction below the rated voltage for a given power network.

The relative voltage drop is the ratio of the voltage drop to the rated voltage. The acceptable voltage drop at rated load on the transmission line from the transformer to the electricity consumer must be less than 5% of the rated voltage. Electric energy receivers, to ensure their correct operation, should be supplied with voltage close to the rated voltage. This sometimes requires the use of cables with a cross-section greater than required by the current carrying capacity alone. The permissible voltage drop in non-industrial electrical installations in receiving circuits, from the meter to any receiver, according to N-SEP-E-002, should not exceed 3%, and from the meter to a connector 0.5%, with power transmitted up to 100 kVA, and 1% at a power greater than 100 kVA and less than 250 kVA. For circuits made with cables, multi-core or single-core conductors with a conductor cross-section not exceeding 50 mm² Cu (copper) and 70 mm² Al (aluminum), the reactances of these conductors are ignored.
Assuming the above, the voltage drops are calculated from the relations:

for single-phase circuits: $$\Delta U_\%=\frac{200 \cdot I_n \cdot L \cdot \cos \phi}{\sigma \cdot S \cdot U_{n}}$$

for three-phase circuits: $$\Delta U_\%=\frac{\sqrt{3} \cdot 100 \cdot I_n \cdot L \cdot \cos \phi}{\sigma \cdot S \cdot U_{n}}$$

where: ΔU% – voltage drop [%], L – wire length [m], In – rated current [A], Un – rated voltage [V], S – cross-sectional area of the line conductors [mm²], d – wire diameter [mm], σ – conductivity of the conductor [m/Ωmm²], cosφ – phase shift factor.

Given the diameter of a conductor, the cross-sectional area can be calculated using the formula: $$S = \frac{\pi \cdot d^2}{4}$$ where: S – conductor cross-sectional area, d – conductor diameter.

Conductivity (specific conductivity, specific electrical conductivity) is a physical quantity that characterizes the electrical conductivity of a material.

After transformations, for single-phase circuits:

Wire diameter: $$d = \sqrt {\frac{800 \cdot I_n \cdot L \cdot \cos \phi}{\sigma \cdot \Delta U_\% \cdot U_{n} \cdot \pi}}$$

Rated current: $$I_n = \frac{\sigma \cdot \Delta U_\% \cdot U_{n} \cdot S}{200 \cdot L \cdot \cos \phi}$$

Wire length: $$L = \frac{\sigma \cdot \Delta U_\% \cdot U_{n} \cdot S}{200 \cdot I_n \cdot \cos \phi}$$

Rated voltage: $$U_{n}=\frac{200 \cdot I_n \cdot L \cdot \cos \phi}{\sigma \cdot \Delta U_\% \cdot S}$$

Cross-sectional area: $$S=\frac{200 \cdot I_n \cdot L \cdot \cos \phi}{\sigma \cdot \Delta U_\% \cdot U_{n}}$$

After transformations, for three-phase circuits:

Wire diameter: $$d = \sqrt {\frac{\sqrt{3} \cdot 400 \cdot I_n \cdot L \cdot \cos \phi}{\sigma \cdot \Delta U_\% \cdot U_{n} \cdot \pi}}$$

Rated current: $$I_n = \frac{\sigma \cdot \Delta U_\% \cdot U_{n} \cdot S}{\sqrt{3} \cdot 100 \cdot L \cdot \cos \phi}$$

Wire length: $$L = \frac{\sigma \cdot \Delta U_\% \cdot U_{n} \cdot S}{\sqrt{3} \cdot 100 \cdot I_n \cdot \cos \phi}$$

Rated voltage: $$U_{n}=\frac{\sqrt{3} \cdot 100 \cdot I_n \cdot L \cdot \cos \phi}{\sigma \cdot \Delta U_\% \cdot S}$$

Cross-sectional area: $$S=\frac{\sqrt{3} \cdot 100 \cdot I_n \cdot L \cdot \cos \phi}{\sigma \cdot \Delta U_\% \cdot U_{n}}$$

## Users of this calculator also used

### Resistance, length and diameter of wire calculator

With this calculator, you can calculate the resistance of a cable, knowing the material it is made of. You can also calculate the length of the conductor and the conductor diameter or the conductor cross-section area.
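The drop formulas above are straightforward to implement. A sketch using the same symbols; the example numbers (16 A, 30 m, 2.5 mm², 230 V) are my own illustration, and σ = 56 m/(Ω·mm²) is the usual value for copper:

```python
import math

# Relative voltage drop dU% per the formulas above. Example numbers are
# illustrative; sigma = 56 m/(ohm*mm^2) is the standard value for copper.
def voltage_drop_percent(i_n, length, s, u_n, cos_phi=1.0, sigma=56.0,
                         three_phase=False):
    """dU% for a single-phase (k = 200) or three-phase (k = sqrt(3)*100) circuit."""
    k = math.sqrt(3) * 100 if three_phase else 200
    return k * i_n * length * cos_phi / (sigma * s * u_n)

drop = voltage_drop_percent(16, 30, 2.5, 230)
print(round(drop, 2))   # about 2.98 %, just inside the 3% limit quoted above
```

Note that for the same load, the three-phase coefficient √3·100 ≈ 173 is smaller than the single-phase 200, which is one reason three-phase distribution tolerates longer runs.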
{}
# Impact Assessment of Atmospheric I-131 Releases by a Systems Analysis Method

• Yook, Chong-Chul (Nuclear Engineering Dept., Hanyang University) ;
• Lee, Jong-Il (Nuclear Engineering Dept., Hanyang University) ;
• Ha, Chung-Woo (Health Physics Dept., Korea Advanced Energy Research Institute)
• Published : 1988.06.20

#### Abstract

The annual individual and collective doses to the thyroids of four age-dependent groups due to the intake of I-131 released from the Younggwang nuclear power plant NU-1 & 2, Korea, are estimated using the model presented in ICRP 29. Sensitivity and robustness of the model are analyzed. In the case of 0.12% fuel defect during normal operation, the collective dose is found to be $3.05{\times}10^{-3}$ man-thyroid-Sv, which is higher than the value calculated by the GASPAR code, $2.3{\times}10^{-3}$ man-thyroid-Sv. The maximal individual annual doses resulting from an acute release are higher than those calculated under the assumption of continuous release by a factor of $1.4{\sim}1.7$. The most important pathway for the infant is milk; in contrast, that for the child, teen and adult is ingestion of crops. The model used in the calculation appears to be influenced by variables such as the robustness index. The weighted committed dose equivalent obtained by the ICRP 29 model is slightly higher than that calculated by the three-compartment model.
{}
## Hypothesis testing: sign test

Hi, If you wished to perform hypothesis testing on the medians of two sets of samples of size, say, 10-20, testing $H_0: m_1=m_2$ vs $H_1: m_1 < m_2$, how would you go about doing this? Could I set up a table with two columns for the values of each sample, then their differences, and use the sign test? This is how you'd do it for testing the value of one median, but I was not sure about comparing two. thanks
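For paired samples, the table-of-differences idea in the question works directly: discard zero differences, count the positive signs, and compare the count against Binomial(n, 1/2). A sketch of my own (the data here are made up for illustration, and it assumes the two samples are paired):

```python
from math import comb

# Paired sign test sketch: under H0 (equal medians) each nonzero difference
# is positive with probability 1/2, so the count of positive signs is
# Binomial(n, 1/2).
def sign_test_p(sample1, sample2):
    diffs = [b - a for a, b in zip(sample1, sample2) if b != a]
    n = len(diffs)
    k = sum(d > 0 for d in diffs)   # positive signs favour m2 > m1
    # one-sided p-value: P(X >= k) for X ~ Binomial(n, 1/2)
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

x = [5.1, 4.8, 6.0, 5.5, 5.9, 4.7, 5.2, 5.8, 6.1, 5.0]
y = [5.6, 5.2, 6.3, 5.4, 6.4, 5.1, 5.7, 6.2, 6.5, 5.6]
print(sign_test_p(x, y))   # small p-value supports m2 > m1
```

For these made-up data, 9 of 10 differences are positive, giving p = 11/1024, about 0.011. Note this applies only to paired samples; for two independent samples a rank test such as Mann-Whitney is the usual choice.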
{}
Do I have a problem? Basic proportion problem.

1. Aug 11, 2012

Life-Like

Hello, this is an odd question, but when I compared how I solved it to how the book solved it I got a bit worried. It was a very basic problem; I've been going back and touching up on my basics before I start pre-calc and trig in college. The problem was:

"After completing 7/10 of his math homework assignment, Josh has 15 more questions complete. What is the total number of questions on his assignment?"

The answer was 50. I got that by

$\frac{7}{10}$=$\frac{(x-15)}{x}$

followed by cross multiplying and dividing. However, the book does it simply by doing the subtraction 10-7 = 3, then setting it up as

$\frac{3}{10}$=$\frac{15}{x}$

then cross multiplying and dividing. I know someone will say "It works, so you can do what you are doing." But is there a possibility this kind of over-complication (haha, not hardly complicated!) can hurt me later on? The way I performed it seemed much more intuitive. I have a high-school record of up to AP calc and AP chem, and I'm worried that if I pursue engineering in college I'll have a hard time solving simple things because I'm used to complexity.

2. Aug 11, 2012

azizlwl

(3/10)x=15
x=50

3. Aug 11, 2012

HallsofIvy
Staff Emeritus

Do you mean "15 more questions to complete"? What you wrote could be interpreted as meaning he has completed 15 questions. If he has already completed 7/10 of his assignment, he still has 3/10 left. If he has 15 questions more to complete, letting "x" be the total number of problems, (3/10)x = 15, so x = (10/3)(15) = 150/3 = 50. Okay, that is, they are calculating 1 - 7/10 = 10/10 - 7/10 = (10-7)/10 = 3/10, as I did. As long as you have linear problems, you can set them up as "proportions" and get the correct answer.

4. Aug 11, 2012

Life-Like

Thanks guys

Last edited: Aug 11, 2012
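Both setups are the same linear equation rearranged; a quick check with exact fractions (illustrative only) confirms they agree:

```python
from fractions import Fraction

# OP's setup: 7/10 = (x - 15)/x  ->  7x = 10x - 150  ->  3x = 150
x_op = Fraction(10 * 15, 10 - 7)

# Book's setup: 3/10 = 15/x  ->  x = 15 / (3/10)
x_book = Fraction(15) / Fraction(3, 10)

# both give 50
```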
# Implicit Differentiation - Different Approaches

#### cargar
##### New member

Given is the function $F(x,y,z) = x^2+y^3-z$. Determine the Jacobian matrix $Dz$ at $P=(1,1,2)$ using implicit differentiation.

My idea is to calculate $\partial z/\partial x$ at $P(1,1,2)$ and $\partial z/\partial y$ at $P(1,1,2)$ and then just write the result in matrix form. So,

$F(x,y,z)=x^2+y^3-z=0$

$\partial z/\partial x=-\frac{\partial F/\partial x}{\partial F/\partial z}=-\frac{2x}{-1}=2x$, so $\partial z/\partial x$ at $P(1,1,2)$ is $2$.

$\partial z/\partial y=-\frac{\partial F/\partial y}{\partial F/\partial z}=-\frac{3y^2}{-1}=3y^2$, so $\partial z/\partial y$ at $P(1,1,2)$ is $3$.

$Dz(1,1,2) = (\partial z/\partial x(1,1,2)\ \ \partial z/\partial y(1,1,2))$, a $1\times 2$ matrix:

$Dz(1,1,2) = (2\ \ 3)$

I checked the result using explicit differentiation and obtained the same. But in the book that I use I saw another approach. Namely, this formula was given as a hint:

$D_x f(x^0)=-[D_y F(x^0,y^0)]^{-1}\,D_x F(x^0,y^0)$

I don't understand how this formula can be used in order to calculate $Dz$. Any help is appreciated.

#### HallsofIvy
##### Elite Member

Frankly, I don't understand the notation of the hint. But what you did I would not consider "implicit differentiation". Given $$\displaystyle F(x,y,z)= x^3+ y^2- z= 0$$, assuming that x and y are independent variables and that z is a function of x and y, then $$\displaystyle 3x^2- \frac{\partial z}{\partial x}= 0$$, so $$\displaystyle \frac{\partial z}{\partial x}= 3x^2$$. In fact, you could just write $$\displaystyle z= x^3+ y^2$$ and use "regular" partial differentiation.

#### cargar
##### New member

"But what you did I would not consider 'implicit differentiation'."

The above-mentioned approach I found in the book Calculus (James Stewart).

"Given $$\displaystyle F(x,y,z)= x^3+ y^2- z= 0$$, assuming that ..."

The original function was $$\displaystyle F(x,y,z)= x^2+ y^3- z= 0$$ and not $$\displaystyle F(x,y,z)= x^3+ y^2- z= 0$$.

"In fact, you could just write $$\displaystyle z= x^3+ y^2$$ and use 'regular' partial differentiation."

But that would be explicit differentiation, wouldn't it? It seems that both approaches are indeed equivalent.
Since $Dz$ is requested,

$D_x f(x^0)=-[D_y F(x^0,y^0)]^{-1}\,D_x F(x^0,y^0)$

becomes

$D_{xy} f(x^0,y^0)=-[D_z F(x^0,y^0,z^0)]^{-1}\,D_{xy} F(x^0,y^0,z^0)$

It follows that:

$-[D_z F(x^0,y^0,z^0)]^{-1}=-[-1]^{-1}$

$D_{xy} F(x,y,z)=(2x\ \ 3y^2)$, so $D_{xy} F(1,1,2)=(2\ \ 3)$

So, $Dz = -[-1]^{-1}\,(2\ \ 3) = (2\ \ 3)$

Last edited:
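As a sanity check of this result, the partial derivatives can also be approximated numerically (a hedged sketch, treating $z = x^2 + y^3$ as the explicit solution of $F=0$; the step size is an arbitrary choice):

```python
def z(x, y):
    # explicit solution of F(x, y, z) = x**2 + y**3 - z = 0
    return x**2 + y**3

h = 1e-6
x0, y0 = 1.0, 1.0
dz_dx = (z(x0 + h, y0) - z(x0 - h, y0)) / (2 * h)   # central difference, ≈ 2x = 2 at P
dz_dy = (z(x0, y0 + h) - z(x0, y0 - h)) / (2 * h)   # central difference, ≈ 3y**2 = 3 at P
# Dz(1,1,2) ≈ (2  3), matching both derivations above
```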
# Reasoning about Knowledge and Strategies: Epistemic Strategy Logic

## Abstract

In this paper we introduce Epistemic Strategy Logic (ESL), an extension of Strategy Logic with modal operators for individual knowledge. This enhanced framework allows us to represent explicitly, and to reason about, the knowledge agents have of their own and other agents' strategies. We provide a semantics for ESL in terms of epistemic concurrent game models, and consider the corresponding model checking problem. We show that the complexity of model checking ESL is not worse than that of (non-epistemic) Strategy Logic.

## 1 Introduction

Formal languages to represent and reason about strategies and coalitions are a thriving area of research in Artificial Intelligence and multi-agent systems [5, 9, 20]. Recently, a wealth of multi-modal logics have appeared which make it possible to formalise complex strategic abilities and behaviours of individual agents and groups [3, 6]. In parallel to these developments, in knowledge representation there is a well-established tradition of extending logics for reactive systems with epistemic operators to reason about the knowledge agents have of a system's evolution. These investigations began in the '80s with contributions on combinations of linear- and branching-time temporal logics with multi-agent epistemic languages [10, 11, 7]. Along this line of research, [12] introduced alternating-time temporal epistemic logic (ATEL), an extension of ATL with modalities for individual knowledge. The various flavours of logics of time and knowledge have been successfully applied to the specification of distributed and multi-agent systems in domains as diverse as security protocols, UAVs, web services, and e-commerce, as well as to verification by model checking [8, 17].
In this paper we take inspiration from the works above and pursue this line of research further by introducing Epistemic Strategy Logic, an extension of Strategy Logic (SL) [6, 18] that allows agents to reason about their strategic abilities. The extension proposed here is naive in the sense that it suffers from many of the shortcomings of its relative ATEL [13]. Nonetheless, we reckon that it constitutes an excellent starting point for analysing the interaction of knowledge and strategic abilities in a language, such as SL, that explicitly allows for quantification over strategies.

Related Work. This paper builds on previous contributions on Strategy Logic. SL was introduced in [6] for two-player concurrent game structures (CGS). In [18] the semantics was extended to a multi-player setting. Also, [18] introduced bind operators for strategies in the syntax. In the present contribution we consider multi-agent CGS in line with [18]. However, we adopt an agent-based perspective and consider agents with possibly different actions and protocols [7]. Also, our language does not include bind operators, to avoid the formal machinery associated with them. We leave such an extension for future, more comprehensive work. Finally, the model checking results in Section 4 are inspired by and use techniques from [18]. Even though, to our knowledge, no epistemic extension of SL has been proposed yet, the interaction between knowledge and strategic reasoning has been studied extensively, especially in the context of alternating-time temporal logic. An extension of ATL with knowledge operators, called ATEL, was put forward in [12], and imperfect information variants of this logic were soon considered in [15], which introduces alternating-time temporal observational logic (ATOL) and ATEL-R*, as well as uniform strategies.
Notice that [15] also analyses the distinction between de re and de dicto knowledge of strategies; this distinction will also be considered later on in the context of Epistemic Strategy Logic. Further, [14] enriches ATL with a constructive notion of knowledge. As regards (non-epistemic) ATL, more elaborate notions of strategy have been considered: in [2] commitment in strategies has been analysed, while [16] introduced a notion of "feasible" strategy. In future work it might be worth exploring to what extent the theoretical results available for the various flavours of ATEL transfer to ESL.

Scheme of the paper. In Section 2 we introduce the epistemic concurrent game models (ECGM), which are used in Section 3 to provide a semantics for Epistemic Strategy Logic (ESL). In Section 4 we consider the model checking problem for this setting and state the corresponding complexity results. Finally, in Section 5 we discuss the results and point to future research. For reasons of space, all proofs are omitted. An extended version of this paper with complete proofs is available [4].

## 2 Epistemic Concurrent Game Models

In this section we present the epistemic concurrent game models (ECGM), an extension of concurrent game structures [3, 12], starting with the notion of agent.

###### Definition 1 (Agent)

An agent is a tuple such that (i) is the set of local states; (ii) is the finite set of actions; and (iii) is the protocol function.

Intuitively, each agent is situated in some local state, representing her local information, and performs the actions in according to the protocol function [7]. Differently from [18], we assume that agents may have different actions and protocols. To formally describe the interactions between agents, we introduce their synchronous composition. Given a set of atomic propositions and a set of agents, we define the set of global states (resp. the set of joint actions) as the corresponding cartesian product.
In what follows we denote the th component of a tuple as or, equivalently, as . ###### Definition 2 (Ecgm) Given a set of agents , an epistemic concurrent game model is a tuple such that (i) is the initial global state; (ii) is the global transition function, where is defined iff for every ; and (iii) is the interpretation function for atomic propositions in . The transition function describes the evolution of the ECGM from the initial state . We now introduce some notation that will be used in the rest of the paper. The transition relation on global states is defined as iff there exists s.t. . A run from a state , or -run, is an infinite sequence , where . For , with , we define and . A state is reachable from if there exists an -run s.t.  for some . We define as the set of states reachable from the initial state . Further, let be a placeholder for arbitrary individual actions. Given a subset of agents, an -action is an -tuple s.t. (i) for , and (ii) for . Then, is the set of all -actions and for every is the set of all -actions enabled at . A joint action extends an -action , or , iff for all . The outcome of action at state is the set of all states s.t. there exists a joint action and . Finally, two global states and are indistinguishable for agent , or , iff [7]. ## 3 Epistemic Strategy Logic We now introduce Epistemic Strategy Logic as a specification language for ECGM. Hereafter we consider a set of strategy variables , for every agent . ###### Definition 3 (Esl) For , and , the ESL formulas are defined in BNF as follows: ϕ ::= p∣¬ϕ∣ϕ→ϕ∣Xϕ∣ϕUϕ∣Kiϕ∣∃xiϕ The language ESL is an extension of the Strategy Logic in [6] to a multi-agent setting, including an epistemic operator for each . Alternatively, ESL can be seen as the epistemic extension of the Strategy Logic in [18], minus the bind operator. We do not consider bind operators in ESL for ease of presentation. The ESL formula is read as “agent has some strategy to achieve ”. 
The interpretation of LTL operators and is standard. The epistemic formula intuitively means that “agent knows ”. The other propositional connectives and LTL operators, as well as the strategy operator , can be defined as standard. Also, notice that we can introduce the nested-goal fragment ESL[NG], the boolean-goal fragment ESL[BG], and the one-goal fragment ESL[1G] in analogy to SL [18]. Further, the free variables of an ESL formula are inductively defined as follows: A sentence is a formula with , and the set of bound variables is defined as . To provide a semantics to ESL formulas in terms of ECGM, we introduce the notion of strategy. ###### Definition 4 (Strategy) Let be an ordinal s.t.  and a set of agents. A -recall -strategy is a function s.t.  for every , where for and is the last element of . Hence, a -recall -strategy returns an enabled -action for every sequence of states of length at most . Notice that for , can be seen as a function from to s.t.  for . In what follows we write for . Then, for , is equal to , where for every , is defined as the set of actions s.t.  if , otherwise. Therefore, a group strategy is the composition of its members’ strategies. Further, the outcome of strategy at state , or , is the set of all -runs s.t.  for all and . Depending on we can define positional strategies, strategies with perfect recall, etc. [9]. However, these different choices do not affect the following results, so we assume that is fixed and omit it. Moreover, by Def. 4 it is apparent that agents have perfect information, as their strategies are determined by global states [5]; we leave contexts of imperfect information for future research. Now let be an assignment that maps each agent to an -strategy . For , we denote as , that is, the -strategy s.t. for every , iff for every . Since , we simply write . Also, denotes the assignment s.t. (i) for all agents different from , , and (ii) . 
###### Definition 5 (Semantics of ESL) We define whether an ECGM  satisfies a formula at state according to assignment , or , as follows (clauses for propositional connectives are straightforward and thus omitted): iff iff for , iff for there is s.t.  and implies iff for all , implies iff there exists an -strategy s.t. An ESL formula is satisfied at state , or , if for all assignments ; is true in , or , if . The satisfaction of formulas is independent from bound variables, that is, implies that iff . In particular, the satisfaction of sentences is independent from assignments. We can now state the model checking problem for ESL. ###### Definition 6 (Model Checking Problem) Given an ECGM  and an ESL formula , determine whether there exists an assignment s.t. . Notice that, if is an enumeration of , then the model checking problem amounts to check whether , where is a sentence. Hereafter we illustrate the formal machinery introduced thus far with a toy example. Example. We introduce a turn-based ECGM with two agents, and . First, secretly chooses between 0 and 1. Then, at the successive stage, also chooses between 0 and 1. The game is won by agent if the values provided by the two agents coincide, otherwise wins. We formally describe this toy game starting with agents and . Specifically, is the tuple , where (i) ; (ii) ; and (iii) and . Further, agent is defined as the tuple , where ; ; , and . The intuitive meaning of local states, actions and protocol functions is clear. Also, we consider the set of atomic propositions, which intuitively express that agent (resp. ) has won the game. We now introduce the ECGM , corresponding to our toy game, as the tuple , where (i) ; (ii) the transition function is given as follows for : and (iii) , . Notice that we suppose that our toy game, represented in Fig. 1, is non-terminating. Now, we check whether the following ESL specifications hold in the ECGM . 
$Q \models \forall x_A\, X\, K_B\, \exists y_B\, X\, win_B$ (1)

$Q \not\models \forall x_A\, X\, \exists y_B\, K_B\, X\, win_B$ (2)

$Q \models \forall x_A\, X\, K_B\, K_A\, \exists y_B\, X\, win_A$ (3)

$Q \models \forall x_A\, X\, K_B\, \exists y_B\, K_A\, X\, win_A$ (4)

Intuitively, (1) expresses the fact that at the beginning of the game, independently of agent A's move, at the next step agent B knows that there exists a move by which she can enforce her victory. That is, if agent A chose 0 (resp. 1), then B can choose 1 (resp. 0). However, B only knows that there exists such a move; she is not able to point it out. In fact, (2) does not hold, as B does not know which specific move A chose, so she is not capable of distinguishing the resulting states. Moreover, by (3) B knows that A knows that there exists a move by which B can let A win. Also, by (4) this move is known to A, as it is the B-move matching A's move. Indeed, in ESL it is possible to express the difference between de re and de dicto knowledge of strategies. One of the first contributions to tackle this issue formally is [15]. Formula (1) expresses agent B's de dicto knowledge of a strategy, while (2) asserts de re knowledge of the same strategy. Similarly, in (3) agent A has de dicto knowledge of a strategy, while (4) states that agent A knows the same strategy de re. The de re/de dicto distinction is of the utmost importance as, as shown above, having de dicto knowledge of a strategy does not guarantee that an agent is actually capable of performing the associated sequence of actions. Ideally, in order to have an effective strategy, agents must know it de re.

## 4 Model Checking ESL

In this section we consider the complexity of the model checking problem for ESL. In Sections 4.1 and 4.2 we provide the lower and upper bounds respectively. For reasons of space, we do not provide full proofs, but only give the most important partial results. We refer to [4] for detailed definitions and complete proofs. For an ESL formula we define as the maximum number of alternations of quantifiers and in . Then, ESL[-alt] is the set of ESL formulas with equal to or less than .
### 4.1 Lower Bound

In this section we prove that model checking ESL formulas is non-elementary-hard. Specifically, we show that for ESL formulas with maximum alternation the model checking problem is -EXPSPACE-hard. The proof strategy is similar to [18], namely, we reduce the satisfiability problem for quantified propositional temporal logic (QPTL) to ESL model checking. However, the reduction applied is different, as ESL does not contain the bind operator used in [18]. We first state that the satisfiability problem for QPTL sentences built on a finite set of atomic propositions can be reduced to model checking ESL sentences on an ECGM of fixed size on , albeit exponential.

###### Lemma 1 (QPTL Reduction)

Let be a finite set of atomic propositions. There exists an ECGM on s.t. for every QPTL[-alt] sentence on , there exists an ESL[-alt] sentence s.t. is satisfiable iff . By this result and the fact that the satisfiability problem for QPTL[-alt] is -EXPSPACE-hard [18], we can derive the lower bound for model checking ESL[-alt].

###### Theorem 2 (Hardness)

The model checking problem for ESL[-alt] is -EXPSPACE-hard. In particular, it follows that ESL model checking is non-elementary-hard.

### 4.2 Upper Bound

In this section we extend to Epistemic Strategy Logic the model checking procedure for SL in [18], which is based on alternating tree automata (ATA) [19]. We state the following result, which extends Lemma 5.6 in [18].

###### Lemma 3

Let be an ECGM and an ESL formula. Then, there exists an alternating tree automaton s.t. for every state and assignment , we have that iff the assignment-state encoding belongs to the language . The following result corresponds to Theorem 5.4 in [18].

###### Theorem 4 (ATA Direction Projection)

Let be the ATA in Lemma 3, and a distinguished state. Then, there exists a non-deterministic ATA s.t. for all -labelled -trees , we have that iff , where is the -labelled -tree s.t. .
Then, by using Lemma 3 and Theorem 4 we can state the following result.

###### Theorem 5

Let be an ECGM, a state in , an assignment, and an ESL formula. The non-deterministic ATA in Theorem 4 is such that iff . We can finally state the following extension to Theorem 5.8 in [18], which follows from the fact that the non-emptiness problem for alternating tree automata is non-elementary in the size of the formula.

###### Theorem 6 (Completeness)

The model checking problem for ESL is PTIME-complete w.r.t. the size of the model and NON-ELEMENTARYTIME w.r.t. the size of the formula. We remark that Theorem 6 can be used to show that the model checking problem for the nested-goal fragment ESL[NG] is PTIME-complete w.r.t. the size of the model and ()-EXPTIME w.r.t. the maximum alternation of a formula. We conclude that the complexity of model checking ESL is not worse than the corresponding problem for the Strategy Logic in [18].

## 5 Conclusions

In this paper we introduced Epistemic Strategy Logic, an extension of Strategy Logic [18] with modalities for individual knowledge. We provided this specification language with a semantics in terms of epistemic concurrent game models (ECGM), and analysed the corresponding model checking problem. A number of developments for the proposed framework are possible. Firstly, the model checking problem for the nested-goal, boolean-goal, and one-goal fragments of SL has lower complexity. It is likely that similar results hold also for the corresponding fragments of ESL. Secondly, we can extend ESL with modalities for group knowledge, such as common and distributed knowledge. Thirdly, we can consider various assumptions on ECGM, for instance perfect recall, no learning, and synchronicity. The latter two extensions, while enhancing the expressive power of the logic, are also likely to increase the complexity of the model checking and satisfiability problems.

### References

1.
Thomas Agotnes, Valentin Goranko & Wojciech Jamroga (2007): Alternating-time Temporal Logics with Irrevocable Strategies. In: Proceedings of the 11th Conference on Theoretical Aspects of Rationality and Knowledge, TARK ’07, ACM, New York, NY, USA, pp. 15–24, doi:http://dx.doi.org/10.1145/1324249.1324256. 2. Rajeev Alur, Thomas A. Henzinger & Orna Kupferman (2002): Alternating-time temporal logic. J. ACM 49(5), pp. 672–713, doi:http://dx.doi.org/10.1145/585265.585270. 3. Francesco Belardinelli (2014): Reasoning about Knowledge and Strategies: Epistemic Strategy Logic. Technical Report, Université d’Evry, Laboratoire IBISC. 4. Nils Bulling, Jurgen Dix & Wojciech Jamroga (2010): Model Checking Logics of Strategic Ability: Complexity*. In Mehdi Dastani, Koen V. Hindriks & John-Jules Charles Meyer, editors: Specification and Verification of Multi-agent Systems, Springer US, pp. 125–159, doi:http://dx.doi.org/10.1007/978-1-4419-6984-2. 5. Krishnendu Chatterjee, Thomas A. Henzinger & Nir Piterman (2010): Strategy logic. Inf. Comput. 208(6), pp. 677–693, doi:http://dx.doi.org/10.1016/j.ic.2009.07.004. 6. Ronald Fagin, Joseph Y. Halpern, Yoram Moses & Moshe Y. Vardi (1995): Reasoning About Knowledge. The MIT Press. 7. Peter Gammie & Ron van der Meyden (2004): MCK: Model Checking the Logic of Knowledge. In Rajeev Alur & Doron Peled, editors: CAV, Lecture Notes in Computer Science 3114, Springer, pp. 479–483, doi:http://dx.doi.org/10.1007/978-3-540-27813-9_41. 8. Valentin Goranko & Wojciech Jamroga (2004): Comparing Semantics of Logics for Multi-Agent Systems. Synthese 139(2), pp. 241–280, doi:http://dx.doi.org/10.1023/B:SYNT.0000024915.66183.d1. 9. Joseph Y. Halpern & Moshe Y. Vardi (1986): The Complexity of Reasoning about Knowledge and Time: Extended Abstract. In Juris Hartmanis, editor: STOC, ACM, pp. 304–315, doi:http://dx.doi.org/10.1145/12130.12161. 10. Joseph Y. Halpern & Moshe Y. Vardi (1989): The Complexity of Reasoning about Knowledge and Time. I. Lower Bounds. J. 
Comput. Syst. Sci. 38(1), pp. 195–237, doi:http://dx.doi.org/10.1016/0022-0000(89)90039-1.
11. Wiebe van der Hoek & Michael Wooldridge (2003): Cooperation, Knowledge, and Time: Alternating-time Temporal Epistemic Logic and its Applications. Studia Logica 75(1), pp. 125–157, doi:http://dx.doi.org/10.1023/A:1026185103185.
12. Wojciech Jamroga (2004): Some Remarks on Alternating Temporal Epistemic Logic. In: Proceedings of Formal Approaches to Multi-Agent Systems (FAMAS 2003), pp. 133–140.
13. Wojciech Jamroga & Thomas Ågotnes (2007): Constructive knowledge: what agents can achieve under imperfect information. Journal of Applied Non-Classical Logics 17(4), pp. 423–475, doi:http://dx.doi.org/10.3166/jancl.17.423-475.
14. Wojciech Jamroga & Wiebe van der Hoek (2004): Agents that Know How to Play. Fundam. Inform. 63(2-3), pp. 185–219.
15. Geert Jonker (2003): Feasible strategies in Alternating-time Temporal Epistemic Logic. Master's thesis, University of Utrecht.
16. Alessio Lomuscio, Hongyang Qu & Franco Raimondi (2009): MCMAS: A Model Checker for the Verification of Multi-Agent Systems. In A. Bouajjani & O. Maler, editors: CAV, Lecture Notes in Computer Science 5643, Springer, pp. 682–688, doi:http://dx.doi.org/10.1007/978-3-642-02658-4_55.
17. Fabio Mogavero, Aniello Murano, Giuseppe Perelli & Moshe Y. Vardi (2011): Reasoning About Strategies. CoRR abs/1112.6275. Available at http://arxiv.org/abs/1112.6275.
18. David E. Muller & Paul E. Schupp (1987): Alternating Automata on Infinite Trees. Theor. Comput. Sci. 54, pp. 267–276, doi:http://dx.doi.org/10.1016/0304-3975(87)90133-2.
19. Marc Pauly (2002): A Modal Logic for Coalitional Power in Games. J. Log. Comput. 12(1), pp. 149–166, doi:http://dx.doi.org/10.1093/logcom/12.1.149.
Zbl 1206.93035

Dong, Hongli; Wang, Zidong; Gao, Huijun

Observer-based $H_\infty$ control for systems with repeated scalar nonlinearities and multiple packet losses. (English) [J] Int. J. Robust Nonlinear Control 20, No. 12, 1363-1378 (2010). ISSN 1049-8923

Summary: This paper is concerned with the $H_\infty$ control problem for a class of systems with repeated scalar nonlinearities and multiple missing measurements. The nonlinear system is described by a discrete-time state equation involving a repeated scalar nonlinearity, which typically appears in recurrent neural networks. The measurement missing phenomenon is assumed to occur, simultaneously, in the communication channels from the sensor to the controller and from the controller to the actuator, where the missing probability for each sensor/actuator is governed by an individual random variable satisfying a certain probabilistic distribution in the interval $[0,1]$. Attention is focused on the analysis and design of an observer-based feedback controller such that the closed-loop control system is stochastically stable and preserves a guaranteed $H_\infty$ performance. Sufficient conditions are obtained for the existence of admissible controllers. It is shown that the controller design problem under consideration is solvable if certain Linear Matrix Inequalities (LMIs) are feasible. Three examples are provided to illustrate the effectiveness of the developed theoretical results.
MSC 2000:
*93B36 $H^\infty$-control
93E15 Stochastic stability
93C55 Discrete-time control systems

Keywords: observer-based $H_\infty$ control; repeated scalar nonlinearity; stochastic stability; multiple missing measurements; linear matrix inequalities
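The missing-measurement model in the summary can be illustrated with a tiny simulation (a hedged sketch, restricted to the Bernoulli special case; the per-sensor probabilities, names, and outputs are made up, and the paper's general model allows distributions over all of $[0,1]$):

```python
import random

random.seed(0)

def received(y, probs):
    """Each sensor i delivers its reading with its own probability p_i
    (a Bernoulli variable gamma_i); a lost packet is read as 0."""
    return [yi if random.random() < pi else 0.0 for yi, pi in zip(y, probs)]

probs = [0.9, 0.6]          # per-sensor arrival probabilities (illustrative)
hits = [0.0, 0.0]
steps = 10_000
for _ in range(steps):
    out = received([1.0, 1.0], probs)
    for i, v in enumerate(out):
        hits[i] += v
rates = [h / steps for h in hits]   # empirical arrival rates, close to probs
```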
# TBA

FKTW02 - Frontiers in analysis of kinetic equations

We consider solutions of the (repulsive) Vlasov-Poisson system which are small smooth perturbations of a Dirac mass (i.e. $\mu=\delta+f\,dx\,dv$ with $f$ small, localized and smooth). We show that these solutions are global and decay in time at the optimal rate, and moreover that they undergo modified scattering. The proof is based on an exact integration of the linearized equation through the use of asymptotic action-angle coordinates. This is joint work with Klaus Widmayer (EPFL) and Jiaqi Yang (ICERM).

This talk is part of the Isaac Newton Institute Seminar Series.
DETERMINING IF A PRODUCT IS POSITIVE OR NEGATIVE

• PRACTICE (online exercises and printable worksheets)
• This page gives an in-a-nutshell discussion of the concepts. Want more details, more exercises? Read the full text!

Recall the following properties of multiplication of signed numbers:

(positive)(positive) = positive
(positive)(negative) = negative
(negative)(negative) = positive

When more than two numbers are being multiplied, just count the number of minus signs: any even number of minus signs collapses to a plus sign; any odd number of minus signs collapses to a minus sign.

EXAMPLES:

The product $\,(+)(-)(-)(+)(-)\,$ is negative $\,(-)\,$. In this example, there are $\,3\,$ minus signs; $\,3\,$ is an odd number.

The product $\,(-)(-)(+)(-)(-)\,$ is positive $\,(+)\,$. In this example, there are $\,4\,$ minus signs; $\,4\,$ is an even number.

Master the ideas from this section by practicing the exercise at the bottom of this page. When you're done practicing, move on to: Multiplying and Dividing Fractions
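The counting rule translates directly into code; a minimal sketch (the helper name is made up):

```python
def product_sign(signs):
    """Sign of a product, given its factors' signs as a string like '+--+-'.
    An odd count of '-' gives a negative product; an even count, positive."""
    return '-' if signs.count('-') % 2 else '+'

product_sign('+--+-')   # '-' : three minus signs, odd
product_sign('--+--')   # '+' : four minus signs, even
```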
# Synopsis: Dark Transmissions

An unusual excess of radio emission from outside the Milky Way may come from dark matter.

Like some offshore tax shelter, the cosmic matter portfolio is mostly invisible. Theories abound on what this dark matter might be, but the experimental goal now is to find hard evidence of the nature and properties of these particles that can govern the dynamics of galaxies but remain out of sight. Researchers have looked for telltale signs such as gamma rays from dark matter annihilation and perturbations of the cosmic microwave background; is the dark matter made up of WIMPs (weakly interacting massive particles) or primordial black holes or something even more exotic? Search the archives of Physics, and you will be spoiled for choice (see 1 December 2011 Synopsis and 8 December 2011 Synopsis). Writing in Physical Review Letters, Nicolao Fornengo of the University of Turin, Italy, and colleagues suggest that dark matter might be identified in radio waves emitted from beyond our galaxy. Their idea is based on data collected by the Absolute Radiometer for Cosmology, Astrophysics and Diffuse Emission (ARCADE 2) instrument. The ARCADE 2 collaboration found an unusual excess in the isotropic radio flux in the frequency window from 3 to 90 gigahertz, once they accounted for known extragalactic sources. Fornengo et al. propose that, because the above-normal flux is difficult to explain in standard astrophysical scenarios, the emission might result from synchrotron radiation produced by secondary particles generated by the decay or annihilation of WIMPs. If it can be confirmed that this is the case, it may lead to the detection of important nongravitational signatures of dark matter, and the forensic bank examiners of astrophysics might have a shot at balancing the cosmic ledger books.
– David Voss
# Rods, clocks and free fall (metric and connections)

## Main Question or Discussion Point

In classical GR the metric tensor $g_{\mu\nu}$ determines the lengths of rods and the ticking of clocks, while the connection $\Gamma^{\alpha}_{\mu\nu}$ determines the geodesic equation (the free-fall motion of a particle). Furthermore, in GR the Levi-Civita connection is uniquely determined by the metric tensor as
$$\Gamma^{\alpha}_{\mu\nu}=\frac{1}{2}g^{\alpha\delta}(g_{\mu\delta,\nu}+g_{\nu\delta,\mu}-g_{\mu\nu,\delta})$$
In certain theories of gravity and quantum gravity, the connection and the metric tensor are taken as independent quantities. It appears to me that when this happens, the free-fall geodesic equation, describing the motion of a particle on a manifold, becomes independent of the structure of spacetime encoded in the metric tensor (rods and clocks). This makes very little intuitive sense, to me at least (maybe because the picture painted by classical GR is too strongly imprinted on my mind). So, how is making the metric and connection independent physically motivated? Or is it just a mathematical curiosity?

Last edited:

**Demystifier**

Classically, the momentum
$$p(t)=mv(t)=m\frac{dx(t)}{dt}$$
is uniquely determined by $x(t)$. But in quantum mechanics, $x$ and $p$ are independent; in particular, if you know the former precisely, then you cannot know the latter. And that is very non-intuitive if the classical picture of $x$ is painted in your mind. I think this is quite analogous to the quantum part of the problem that bothers you.

How about the classical part, where concepts should be more intuitive? Can position and momentum be treated as independent quantities in classical mechanics? Yes: in the Hamiltonian formalism they are treated as independent quantities before the equations of motion are solved. In the Hamiltonian formalism, the equation above is not a definition, but a solution of one of the equations of motion.
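Demystifier's Hamiltonian point can be made concrete with a standard textbook example (my own sketch, not from the thread; the specific Hamiltonian is an illustrative assumption):

```latex
% Toy example: H(x,p) = p^2/(2m) + V(x), with x and p treated as
% independent variables. Hamilton's equations read
\dot{x} = \frac{\partial H}{\partial p} = \frac{p}{m},
\qquad
\dot{p} = -\frac{\partial H}{\partial x} = -V'(x).
% The first equation *derives* p = m\dot{x} as an equation of motion
% rather than assuming it as a definition; substituting it into the
% second recovers Newton's law m\ddot{x} = -V'(x).
```

Only after solving the first equation does $p$ collapse onto $m\dot{x}$, just as the Palatini variation collapses the connection onto the Levi-Civita one.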
So in classical physics it is just a mathematical trick. As you may guess, the independent treatment of metric and connection in classical gravity is something very similar.

Last edited:

Thanks for your reply. My literature review of the question got me to the same conclusion.

**haushofer**

> In certain theories of gravity and Quantum gravity, the connection and the metric tensor are taken as independent quantities.

Again, which theories are you referring to? The Palatini formalism is a mathematical reformulation of GR. Normally, one has the Riemann tensor depending on the connection and its derivatives, and these connections depend on the metric and its derivatives. So the Einstein equations are second-order differential equations for the metric. But from the theory of differential equations, we know that a second-order diff. eqn. can be recast as a set of two first-order diff. eqns. (if you're not convinced, try Newton's second law for the position of a particle, for instance!). This trick also works for the Einstein equations: we can recast this second-order diff. eqn. for the metric as two first (!) order diff. eqns. This recasting is the Palatini formulation. Basically, in this formalism you take the Einstein-Hilbert action and treat the metric and connection a priori as independent fields. You then vary the action with respect to the metric and to the connection, giving you two first-order differential equations. The equation of motion for the connection then gives you the usual relation between metric and connection. See e.g. Samtleben's notes on Supergravity, page 9 onward.

So, to make the example of the Newtonian point particle a bit more explicit: say we have Newton's second law
$$m\ddot{x}=F$$
This is a second-order diff. eqn. for $x$. But we can recast it as
$$\dot{x}=v , \qquad \dot{v}=\frac{F}{m}$$
using that $v=\dot{x}$ is the velocity of the particle. Now we have turned a second-order diff. eqn. for $x$ into two first-order eqns. for $x$ and $v$.
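The two first-order equations can be integrated numerically exactly as written, one update per field. A minimal sketch (my own, not from the thread; the harmonic force $F=-kx$ and the integrator choice are illustrative assumptions):

```python
# Sketch: Newton's second law m*x'' = F recast as the first-order system
#   x' = v,   v' = F/m,
# integrated with a symplectic Euler step for the illustrative choice
# F = -k*x (harmonic oscillator).
import math

def integrate(x0, v0, m=1.0, k=1.0, dt=1e-4, steps=200_000):
    """Return (x, v) at time t = steps*dt for x' = v, v' = -k*x/m."""
    x, v = x0, v0
    for _ in range(steps):
        v += (-k * x / m) * dt  # first-order equation for v
        x += v * dt             # first-order equation for x
    return x, v

# With m = k = 1 and (x0, v0) = (1, 0), the exact solution is
# x(t) = cos(t), v(t) = -sin(t); here total time t = 20.
x20, v20 = integrate(1.0, 0.0)
```

Each update touches one variable only, mirroring how the Palatini variation yields one first-order equation per independent field.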
Mathematically, one can often use techniques from linear algebra to solve this system for a given $F$, and physically one can draw conclusions directly about the corresponding phase space. Likewise, the Palatini formalism can sometimes make calculations involving variations of the action easier, as Samtleben also explains on page 9. I'm not sure whether it is used to draw conclusions about the corresponding phase space; I've never seen that.

I'm not aware of theories of gravity in which the connection is treated as independent of the metric in all of the dynamics, i.e. as having its own degrees of freedom in phase space. That's why I'm asking you: which theories do you refer to?

**Demystifier**

The main reason for using a first-order formalism is the fact that $R$ contains second time derivatives of the metric, while the usual Lagrangian formalism assumes that the Lagrangian depends only on the canonical positions and their first time derivatives. One has to eliminate the second time derivatives in the Lagrangian, and the first-order formalism is one way to do it.

To see how that works, instead of gravity let us study a simple toy model with similar properties. Consider a single degree of freedom $x(t)$ described by the Lagrangian
$$L=-\frac{\dot{x}^2}{2}-x\ddot{x} \;\;\;\;\; (1)$$
a) One way to eliminate the second derivative $\ddot{x}$ is to use the identity
$$x\ddot{x}=\frac{d}{dt}(x\dot{x})-\dot{x}^2$$
so the Lagrangian is
$$L=\frac{\dot{x}^2}{2}+{\rm total \; derivative} \;\;\;\;\; (1')$$
where the total-derivative term can be ignored because it does not contribute to the variation of the action $\int dt\, L$. Hence the physics is determined by the first term in (1'), which gives the equation of motion
$$\ddot{x}=0 \;\;\;\;\; (2)$$
b) Another way to eliminate the second derivative $\ddot{x}$ is the first-order formalism.
One first introduces the velocity $v=\dot{x}$ to write (1) as
$$L=-\frac{v^2}{2}-x\dot{v} \;\;\;\;\; (3)$$
and then treats $x$ and $v$ in (3) as independent quantities. Hence there are two equations of motion, one for $x$ and another for $v$. The equation for $x$ is
$$\frac{d}{dt}\frac{\partial L}{\partial\dot{x}}=\frac{\partial L}{\partial x}$$
which gives
$$0=\dot{v} \;\;\;\;\; (4)$$
The equation for $v$ is
$$\frac{d}{dt}\frac{\partial L}{\partial\dot{v}}=\frac{\partial L}{\partial v}$$
which gives
$$\dot{x}=v \;\;\;\;\; (5)$$
Clearly, Eqs. (4) and (5) are equivalent to Eq. (2), showing that the two formalisms a) and b) are equivalent. Note, however, that (5) is not a definition but an equation of motion derived from the Lagrangian (3).

Last edited:

> Again, which theories are you referring to? The Palatini formalism is a mathematical reformulation of GR. [...] That's why I'm asking you: which theories do you refer to?

I have already gone through the math of the process. The question is not about how the different formulations do the math. The question is whether there is any physical importance in treating them separately, or whether it is just a mathematical curiosity. All I get from the papers is that it is just a generalization; neither in any paper nor in your reply do I see a physical motivation for doing it.

**martinbn**

> All I get from the papers is that it is just a generalization; neither in any paper nor in your reply do I see a physical motivation for doing it.

Which papers?

**stevendaryl**
Staff Emeritus

In the Einstein-Cartan theory, which is a generalization of GR, the connection is not determined by the metric. But in the formulation described in Wikipedia, the independent variables are not the metric tensor and the connection, but the metric tensor and the torsion (which is the antisymmetric part of the connection coefficients: $T^k_{ij} = \Gamma^k_{ij} - \Gamma^k_{ji}$).

https://en.wikipedia.org/wiki/Einstein–Cartan_theory

**haushofer**

> I have already gone through the math of the process. The question is not about how the different formulations do the math. [...]

I'm not aware of any physical motivation, but if you refuse to be more specific, it's hard to see what kind of motivation you want in the first place. E.g., it's still not clear to me whether you're referring to the standard Palatini formalism or to a dynamical modification of GR. Maybe this Insight helps: https://www.physicsforums.com/insights/general-relativity-gauge-theory/